Category Archives: Development

Sony writes a RootKit

Mark Russinovich is a brilliant guy and likely not so popular with the people at Sony these days. Mark was testing out some rootkit detection and removal software and discovered that, in their exuberance to implement Digital Rights Management, Sony has created a very ham-handed solution that behaves more like a rootkit than some of the very worst actual rootkits out on the Internet.

Read Mark’s blog post, which details his discovery, or go to the theregister.co.uk article that summarizes it. Good reading about bad code!

Issues with generating accounts and passwords

A friend of mine has a system that will require him to generate a large number of usernames and passwords for his users, and he wants to use usernames that make sense to those users. That is a common request, but he is concerned that a savvy user could deduce the usernames of others based on their own. This is a real possibility (or likelihood) if you use any of the standard methods, such as employee number (just guess sequential numbers) or combinations of first and last name.

My response is as follows:

It is, as always, a tradeoff…

If you use a determinable username then the password must be that much more secure. Ultimately we accept that usernames are often guessable (in most systems), but just because that is a normally accepted risk, it does not follow that it is OK. Password guessing is a numbers game. If we go to the simplest case of a single-character password using a standard character set (alpha upper case + alpha lower case + digits = 26 + 26 + 10 = 62 possible characters), then there are only 62 guesses needed to get in once the username is known. As we add more characters to the minimum password length, we approach numbers where brute force attacks will take a long time, provided the password is not in a dictionary (my dictionary for such attacks has over 5 million words and well-worn passwords). At 6 characters you are at 56,800,235,584 (over 56 billion) possible combinations, assuming the simple character set I mention above. On average, an attacker trying every possible combination will stumble on the correct one after working through about half of them, but even setting that aside, we have to decide whether we think a user can hit the site 56 billion times in a reasonable span of time to guess the password. Drive the minimum password length to 8 characters and we are at a healthy 218,340,105,584,896 (over 218 trillion) combinations, which is where I like to be.

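As a quick sanity check on those figures, here is a small C# sketch (illustrative only, not part of the original exchange) that multiplies out the keyspace for that 62-character set at a few password lengths:

```csharp
using System;

class KeyspaceMath
{
    static void Main()
    {
        const long charsetSize = 62;                   // a-z + A-Z + 0-9
        int[] lengths = new int[] { 1, 6, 8 };

        foreach (int length in lengths)
        {
            long combinations = 1;
            for (int i = 0; i < length; i++)
                combinations *= charsetSize;           // 62 ^ length, kept exact in a long

            Console.WriteLine("{0} characters: {1:N0} possible passwords",
                              length, combinations);
        }
        // Output:
        // 1 characters: 62
        // 6 characters: 56,800,235,584
        // 8 characters: 218,340,105,584,896
    }
}
```
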
This is very secure given one critical assumption: that the overhead of making a web request to test each guess means you can’t hope to achieve millions of guesses per second, or even per minute. If this assumption falls, then my conclusion below for a web-based system is out the window. Windows hashes of 8-character passwords fall very quickly, even with larger character sets, because I can crack them locally, leveraging the full power of my processor without being bound by network latency (which is huge in comparison to local throughput).

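To put that assumption into numbers, here is a rough sketch using guess rates I am assuming purely for illustration (around 10 guesses per second against a web login slowed by request overhead, versus a billion per second against a captured hash cracked locally); the exact rates will vary, but the gap is the point:

```csharp
using System;

class GuessRateMath
{
    static void Main()
    {
        const long keyspace = 218340105584896L;        // 62 ^ 8, from the figures above

        // Illustrative guess rates (assumptions, not measurements):
        // ~10/sec through a web login, ~1 billion/sec against a locally held hash.
        long[] guessesPerSecond = new long[] { 10, 1000000000 };

        foreach (long rate in guessesPerSecond)
        {
            double days = (double)keyspace / rate / 86400.0;
            Console.WriteLine("{0:N0} guesses/sec: about {1:N1} days to try every password",
                              rate, days);
        }
        // Roughly 252 million days (on the order of 690,000 years) at 10 guesses
        // per second, but only about 2.5 days at a billion guesses per second.
    }
}
```
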
Bottom line: if you are comfortable with 8-character passwords that are complex enough (not findable in any competent hacking dictionary), then you could publish the usernames on your home page and it wouldn’t matter (but I wouldn’t, because I am paranoid).

One final analogy to wrap up: take a combination lock with the typical 4 numbers on tumblers (a locker lock or suitcase lock). There are 10,000 combinations, from 0000 to 9999. If someone could deftly try one per second, then in under 3 hours it would be open, without exception. But if they could only try once per hour (due to surveillance or some other factor), then it would take over a year. Complexity comes from the size of the available character set raised to the power of the password length. Vulnerability is measured as the number of potential passwords divided by the speed at which they can be tried. I prefer adding techniques that detect and deter brute force attacks, but that is a topic for another day.

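For what it is worth, the same arithmetic applied to the lock analogy (again just an illustration, not part of the original response):

```csharp
using System;

class LockAnalogy
{
    static void Main()
    {
        const int combinations = 10000;                        // 0000 through 9999

        double hoursAtOnePerSecond = combinations / 3600.0;    // one deft try per second
        double daysAtOnePerHour = combinations / 24.0;          // one try per hour

        Console.WriteLine("One try per second: about {0:N1} hours to try them all",
                          hoursAtOnePerSecond);
        Console.WriteLine("One try per hour:   about {0:N0} days (over a year)",
                          daysAtOnePerHour);
        // About 2.8 hours versus roughly 417 days -- same lock, very different exposure.
    }
}
```
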
Your Name in Lights…

Thom Robbins of MS is introducing a really cool competition called the “Launch 2005 Screencast Contest”. The concept is that you get a free 30-day copy of Camtasia and record one or more demos with audio. The entries will be screened, and the winners in the major launch cities will win some useful stuff.

Thom breaks it all down on his blog here.

I did one of these during the break at the last Code Camp and it was actually pretty cool.  My demo is up on Channel 9 and I am definitely going to be doing some more (though if I know Thom, I am not allowed in the contest).

TechEd Hong Kong Wrap-up

I have been out of it for about a week due to travel to present 7 sessions at TechEd Hong Kong, but now I am back. It was a great event and, as usual, was characterized by very high-energy keynotes!

The highlight for Bruce Backa and me in our presentations was our last session on Server Control Development for ASP.Net 2.0. The demo of a control that leverages AJAX-style updating of its content really stirred the audience and opened some eyes. I have been asked to provide the source code for that particular demo (for session WEB428), so here it is: WEB428Done.zip (51.22 KB)

I have to thank everyone who got us to go over there (for our fourth time!) and Andres Sanabria from Microsoft for the slides and the framework for this particular demo.

Trading CPU for Money

The deeper I dig into each generation of tools (VS 2005 at the moment), the more I see the tradeoff of CPU for developer time. It used to be that programmers would go to extremes to maximize the performance of their code, and the tools were written in much the same way. Over the years this trend has reversed, and it has really accelerated the other way. When you hit Enter in VS 2005, it does a background compile, which allows it to catch typos and other errors in much the same way that Word does. This is great if you want to be productive, but I often hear lamentations that performance is being tossed aside. When I look at modern CPU power, I have to admit that I think it is high time we made reasonable tradeoffs. I see more and more servers whose CPU usage barely reaches double-digit percentages as the march of Moore’s law overtakes our consumption of the resulting CPU cycles.

I am not advocating wasting resources, but if I have to write a small application for a simple task, then I am all for getting it done in half the time in exchange for it using more memory, or even running 5% slower than it would if written with older tools. The truth is that as applications evolve and add features, they almost always run slower in most circumstances than they used to, but only if you stick with the same hardware.

For myself, I say keep the productivity gains coming in the tools, and as long as it doesn’t get capricious, I won’t complain.