CRLF Injection and a bad premise

A friend of ours, Phil, sent Duane and me a link to an article about web attacks (Phil does this a lot).  He commented that he hadn’t heard of CRLF Injection before, and while I had heard of it, I realized that I wasn’t comfortable explaining it on the spot with examples, so I read the link.

While I think the writeup is good and it refreshed my knowledge of the topic (as esoteric as it is, given how often we still find SQL Injection), I was struck by one badly worded comment in the text.  Namely the section that says, “The best way to defend against CRLF attacks it to filter extensively any input that a user can give. One should “remove everything but the known good data” and filter meta characters from the user input. This will ensure that only what should be entered in the field will be submitted to the server”.  The premise is well intended, but did you see the flaw?  Why would you merely remove the bad parts of a submission and keep processing the rest?  OK, maybe there are innocent times when a user will insert something that doesn’t belong.  But if you are doing the filter thing and you find something bad, overtly bad, then you shouldn’t remove it; you should end the user’s session and redirect them to an error page (or some other circle of hell).

If a criminal came to your house and tried to open a window only to find it locked, would you then allow them to keep trying?  If you can determine that the input was actually harmful (the opposite of good data), then you should think hard about dumping the user and not processing their request any further.
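The reject-instead-of-sanitize idea can be sketched in a few lines.  This is just an illustration of the principle, not code from the article; the names (`ValidationError`, `require_clean`, `handle_field`) are mine:

```python
# Sketch of "reject, don't sanitize": if a single-line field contains CR/LF,
# treat it as an attack and abort, instead of quietly stripping the bytes.

class ValidationError(Exception):
    """Raised when input is overtly hostile; caller should end the session."""

def require_clean(value: str) -> str:
    # A carriage return or line feed in a single-line field is never
    # innocent; it is the raw material of a CRLF injection attempt.
    if "\r" in value or "\n" in value:
        raise ValidationError("CRLF characters in input")
    return value

def handle_field(value: str) -> str:
    try:
        return require_clean(value)
    except ValidationError:
        # Do not clean and continue: kill the session and
        # send the user to an error page.
        return "session terminated"
```

The point is that the sanitized-and-continue path never exists: either the input is clean, or the request dies.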

If you make your applications work more like the way the real world works then they are more likely to survive in the real world.

</rant> ;)

ASP.Net Application Pool Gotcha

Sharing a web server between development teams is always fun (not).  We had a problem surface today (or resurface): one developer creates a web application on IIS that uses .Net 1.1 (not an uncommon occurrence), and another developer creates a web application on that same server that uses .Net 2.0 (something becoming more common every day).  Odds are that the developers, and even sometimes the network engineer or web master, will allow the defaults to lull them into the false sense that it was an easy and straightforward task.

The problem is that they both allowed the “Default Application Pool” to remain selected and now the second of these sites to load will crash IIS.

You can’t have two different versions of the .Net runtime loaded into the same process, and an Application Pool often (though not always) means the same process.
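The fix is to give each runtime its own pool.  On IIS 6 you would do this through the IIS Manager UI; as a rough sketch, on IIS 7 and later the same idea looks like this from the command line (pool names and site paths here are placeholders, not anything from our server):

```shell
rem Create one pool per runtime version, then point each app at its pool.
%windir%\system32\inetsrv\appcmd add apppool /name:Net11Pool /managedRuntimeVersion:v1.1
%windir%\system32\inetsrv\appcmd add apppool /name:Net20Pool /managedRuntimeVersion:v2.0
%windir%\system32\inetsrv\appcmd set app "Default Web Site/LegacyApp" /applicationPool:Net11Pool
%windir%\system32\inetsrv\appcmd set app "Default Web Site/NewApp" /applicationPool:Net20Pool
```

Either way, the moment the two runtimes live in separate worker processes the conflict goes away.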

Scott Forsyth has an article about this very issue that will help describe the error that occurs when you have this problem (the “Server Application Unavailable” error).

If you haven’t seen this yet, then you will.

Membership Provider Source Code

Scott Guthrie pointed me at a link to the source code for the ASP.Net 2.0 providers including the Membership and Role Management providers.  While I think the Profiles, Web Parts and Site Navigation providers are important and cool, I expect to do much more with the Membership provider.  Expect to see some customizations in presentations I give in the future.

I think this is a great step and am not surprised to see Scott doing something this cool.

Check it out!

File System Permissions on copy or move

I was recently asked by a very technical and very sharp friend of mine about the semantics of permissions on copy.

I figured if he needed some guidance on how this works then there must be a ton of other developers who could use a refresher so here goes:

There are a lot of reasons that a developer or QA engineer must use copy or move to get their applications running for test or even for production.  The problem is that the same old process that worked so many times before can mask a misconception or two, which then surfaces as a “bug” when the moons do not align to make the old process function as expected.  Case in point: you want to deploy a web application, which notoriously has particular permissions requirements.  If copy has always worked in the past, but on the new server you are getting strange permissions, then you might be forgetting some of the rules.

The first thing to take into account is whether this is a move within the same volume (nothing fancy), a move across volumes (maybe obscured by DFS), or just a plain old copy (often the case).

A move within the same volume preserves the permissions.  A move across volumes is actually a copy and a delete combined, which means you just get the permissions of the target folder.  That is by design, and it is also the behavior of a copy unless you use something like scopy, which preserves permissions.

If a copy in the past has preserved permissions and you didn’t use scopy (very handy by the way) then either there is a setting in Windows that I am unaware of (please enlighten me) or you got lucky in the past and the target folder permissions were what you expected.
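For what it’s worth, the built-in Windows copy tools can preserve ACLs if you ask them to; these are the modern equivalents of scopy (paths below are placeholders):

```shell
rem xcopy: /O copies ownership and ACL information, /X adds audit settings,
rem /E includes subdirectories (even empty ones), /I assumes the target is a folder.
xcopy C:\src\webapp D:\deploy\webapp /E /O /X /I

rem robocopy: /SEC is shorthand for /COPY:DATS
rem (Data, Attributes, Timestamps, Security).
robocopy C:\src\webapp D:\deploy\webapp /E /SEC
```

A plain drag-and-drop copy uses neither of these switches, which is why the target folder’s permissions win.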

Usually file permissions, and especially the semantics of permissions on copy vs. move, are the domain of network types.  In many cases it helps a lot to be a mongrel from both worlds.

Community Credit

Like the Code Camps, another good idea is coming out of the Microsoft Developer Evangelists.  This time it is a web site with an interesting concept.  If you visit the site you will see it in action and also be able to see the people who are working hard to build their technical communities.  Think of it as an ongoing public resume where people who contribute to the community get credit for their efforts.

I think this is a good step in building a nice feedback system so the dev community keeps going.

Check it out.