A friend of mine has a system that will require him to generate a large number of usernames and passwords for his users, and he wants to use usernames that make sense to the users. That is a common request, but he is concerned that a savvy user could deduce the usernames of others based on their own. This is a real possibility (or likelihood) if you use any of the standard methods such as employee number (just guess sequential numbers) or combinations of first and last name.
My response is as follows:
It is as always a tradeoff…
If you use a determinable username then the password must be that much more secure. Ultimately we accept that usernames are often guessable (in most systems), but just because that is a normally accepted risk it does not follow that it is OK.

Password guessing is a numbers game. If we take the simplest case of a single-character password using a standard character set (alpha upper case + alpha lower case + digits = 26 + 26 + 10 = 62 possible characters) then there are only 62 guesses needed to get in once the username is known. As we add more characters to the minimum password length we approach numbers where brute force attacks will take a long time, provided the password is not in a dictionary (my dictionary for such attacks has over 5 million words and well-worn passwords). At 6 characters you are at 56,800,235,584 (over 56 billion) possible combinations, assuming the simple character set I mention above. On average an attacker trying every possible combination will stumble on the correct one about halfway through, but setting that aside, we have to decide if we think a user can hit the site 56 billion times in a reasonable span of time to guess the password. Drive minimum password length to 8 characters and we are at a healthy 218,340,105,584,896 (over 218 trillion), which is where I like to be.
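The arithmetic above is just the character set size raised to the password length. A quick sketch to reproduce the numbers:

```python
# Character set: 26 upper + 26 lower + 10 digits = 62 possible characters.
CHARSET = 26 + 26 + 10

# Possible passwords = charset size raised to the password length.
for length in (1, 6, 8):
    total = CHARSET ** length
    print(f"{length}-character passwords: {total:,} combinations")
```

Running this prints 62 for length 1, 56,800,235,584 for length 6, and 218,340,105,584,896 for length 8, matching the figures above.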
This is very secure given one critical assumption: that making a web request to test each guess adds enough overhead that you can't hope to achieve millions of guesses per second, or even per minute. If this assumption falls, then my conclusion below for a web-based system is out the window. Windows hashes of 8-character passwords fall very quickly even with larger character sets, because I can crack them locally, leveraging the full power of my processor and not bound by network latency (which is huge in comparison to local throughput).
Bottom line is that if you are comfortable with 8 character passwords that are complex enough (not findable in any competent hacking dictionary) then you can publish the user names on your home page and it won’t matter (but I wouldn’t because I am paranoid).
One final analogy to wrap up: consider a combination lock with the typical 4 numbers on tumblers (a locker lock or suitcase lock). There are 10,000 combinations, from 0000 to 9999. If someone could deftly try one per second, then in under 3 hours it would be open without exception. But if they could only try once per hour (due to surveillance or some other factor), then it would take well over a year. Complexity is the size of the character set raised to the power of the password length. Vulnerability is measured in potential passwords divided by the speed at which they can be tried. I prefer adding techniques that detect and deter brute force attacks, but that is a topic for another day.
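The lock analogy works out numerically like this (a small sketch of the "potential passwords divided by guess rate" idea):

```python
# A 4-digit tumbler lock: 10**4 = 10,000 combinations (0000-9999).
combinations = 10 ** 4

def time_to_exhaust(rate_per_second):
    """Seconds needed to try every combination at the given guess rate."""
    return combinations / rate_per_second

# One guess per second: 10,000 seconds, i.e. under 3 hours.
print(f"1 guess/sec:  {time_to_exhaust(1) / 3600:.1f} hours")

# One guess per hour: 10,000 hours, i.e. well over a year.
print(f"1 guess/hour: {time_to_exhaust(1 / 3600) / 86400:.0f} days")
```

Same lock, same 10,000 combinations; only the guess rate changed, and the exposure went from hours to over a year.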
Thom Robbins of MS is introducing a really cool competition called the “Launch 2005 Screencast Contest”. The concept is that you get a free 30 day copy of Camtasia and record one or more demos with audio. The entries will be screened and the winners in the major launch cities will win some useful stuff.
Thom breaks it all down on his blog here.
I did one of these during the break at the last Code Camp and it was actually pretty cool. My demo is up on Channel 9 and I am definitely going to be doing some more (though if I know Thom, I am not allowed in the contest).
In the media and likely on your network! I am surprised (pleasantly) to see so much attention being paid to a lurking menace. Jon Box recently posted about it on his blog and called for a few of us to comment (which I did).
The fact of the matter is that rootkits are like the devil: their greatest trick is convincing the world that they aren't there. They don't show up in Task Manager or on service lists. That is the whole point.
Luckily, as I said, the media is getting in on the story: in addition to Jon, eWeek has posted a pretty good article on the topic. When you read it you should ask yourself two questions: first, how do I check for these things and get rid of them if they are found; and second, how do I see what is actually stored on my network? It turns out that a common theme is the hacked server becoming a file server for media files. I recommend solid auditing and solid storage reporting as the primary ways of getting a handle on this. For the reporting side we use Storage M&A by NTP Software. It has the added benefit of helping protect you by keeping forbidden file types (e.g. *.vbs and even *.exe) from being written to your drives. Exception-based policies allow you to be flexible when needed, but you can't fix it if you don't know it is broken.
My nephew, John Hynds, also happens to be a security consultant (big surprise), and he pointed me at what we think is a perfect example of a Cross Site Scripting (XSS) exploit, recently carried out against MySpace.com.
We find that most people have trouble understanding Cross Site Scripting as an exploit as opposed to more transparent attacks like brute force or even SQL Injection.
One key takeaway from this is that while you are welcome to try to detect when a user inputs malicious data, that is a war of escalation. Instead you should concentrate on only allowing valid data; it is much easier to screen for and less likely to fail the way MySpace.com did in this example.
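The "only allow valid data" approach can be sketched in a few lines. This is a minimal illustration, not MySpace's actual fix; the username rule (3 to 20 alphanumeric characters) is a hypothetical policy chosen for the example:

```python
import re

# Allowlist validation: define exactly what valid input looks like and
# reject everything else, instead of trying to enumerate every attack.
# The 3-20 alphanumeric rule below is a hypothetical example policy.
VALID_USERNAME = re.compile(r"^[A-Za-z0-9]{3,20}$")

def is_valid_username(value: str) -> bool:
    """Return True only if the input matches the allowlist pattern."""
    return bool(VALID_USERNAME.match(value))

print(is_valid_username("jsmith42"))                       # True
print(is_valid_username("<script>alert('xss')</script>"))  # False
```

Notice there is no blacklist of dangerous strings to maintain: script tags, event handlers, and whatever attackers invent next all fail the same simple test.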
I have been out of it for about a week due to travel to present 7 sessions at TechEd Hong Kong, but now I am back. It was a great event and as usual was characterized by very high energy keynotes!
The highlight for Bruce Backa and me in our presentations was our last session on Server Control Development for ASP.Net 2.0. The demo of a control that leverages AJAX-style updating of its content really energized the audience and opened some eyes. I have been asked to provide the source code for that particular demo (from session WEB428), so here it is: WEB428Done.zip (51.22 KB)
I have to thank everyone who got us to go over there (for our fourth time!) and to Andres Sanabria from Microsoft for the slides and the framework for this particular demo.
The vulnerability scanner called Nessus will no longer be available under a GPL license starting with the next version (version 3.0).
The announcement pointed to the fact that the community has done very little to help the product evolve, while many competitors have exploited the loophole of providing hardware appliances to cut out the makers of Nessus.