Monday, May 23, 2011
I am happy to announce that very soon I will be providing a monthly article in the SD Times on Microsoft technology.
With this regular writing task to spur me on, I expect (and hope) to be doing a lot more blogging as well...
Tuesday, February 15, 2011
If you looked into playing with Azure in the past but did not jump in, it is time to take another look. Microsoft has added options over the last year that really remove the objections to trying it out. If you have an MSDN subscription, you pretty much get a free playground in Azure that is going to waste if you don't use it. If you don't have one, there is still the Introductory Special, which runs through the end of March and gives you access to the basics of the service at no cost.
To look it over, go to the Windows Azure Offers page at Microsoft.com and get going. You might not have a project that fits the Azure model right now, but you will. I am working on a new product for DTS that will have an Azure component, and while it is still off on the horizon, the time to jump in is before you are behind.
Thursday, December 16, 2010
Michele Bustamante and I have started recording the first episodes of our new security-focused podcast, LockDown. While the website is up, it has placeholder content describing Carl Franklin of .Net Rocks fame as our first guest (that was the original plan). However, as usual, Carl was flying around the globe when we started, so we all agreed to save him for later.
If you are interested, watch the podcast URL or my blog (here) for the first show when it releases.
Tuesday, December 01, 2009
Over the last year I have gotten an education on PHP and MySQL web sites to go along with my existing expertise with ASP.Net and SQL Server.
It turns out that I purchased a web site a little over a year ago that supports gamers who play World of Warcraft (a game I have played for years). The site gets about 100,000 unique users a month and just shy of a million page views. It was written in PHP against a MySQL backend and is just not driving enough revenue yet to justify porting it to ASP.Net and SQL Server (though, as you will read here, the balance of pain is shifting that equation). It also turns out that we end up rebooting the system pretty damn often, which was a problem with IIS back in the old days but not one I have seen in recent versions.
We have thrown more hardware at the system and brought in professional help, but it seems that at these levels of use the system runs down and needs a kick, and sometimes intensive care.
My point here is that it has been an education for me to validate what I suspected: there is no magic in the non-MS stack. It can hang in there in some regards, but for really heavy loads MS seems to have it beat on stability. I am working on an ASP.Net and SQL Server site now that handles similar traffic, and it just doesn't suffer the same issues.
I plan to dig deeper into the tech here, if for no other reason than to figure out what it takes to port the site to ASP.Net with SQL Server.
Sunday, November 22, 2009
I just got back from the Microsoft PDC in LA and have been thinking about what I saw there.
It turns out that I have come to a couple of conclusions that I will surely post more about in the future, but for now here is the overview.
First, there were several Windows Azure announcements that have swayed me from skeptic to seeing a real chance for Azure to be a contender. Chief among my concerns was that I just didn't see companies doing a big rewrite just to leverage a cloud solution. Now it is much easier to port an existing application to Azure, and there is the option to customize the hosted image. I also saw a demo that no one else seems to have noticed (or I was imagining things). I could have sworn I saw SQL data hosted behind the company firewall opened up for consumption by an Azure-hosted application. I plan to watch that keynote again to make sure I know what I am talking about, so consider this a disclaimer.
Second, I am now confident that Microsoft will not abandon either WPF or SilverLight developers, since there were already announcements that both will be able to run with the same assemblies. A small step, but coupled with the fact that VS2010 is built with WPF, I think both technologies are valid for development (I was worried about the future of WPF until recently).
There was of course more, but those will have to wait for other posts.
Thursday, November 12, 2009
As I work to build commercial software products, I am regularly forced to remember that "bug" is a relative term. That sounds like a weaselly way to explain away a fault in your software, but it really does turn out to be true, especially when you have been on the ISV side of the conversation.
Back in August, Steven Sinofsky posted a very insider view of how the Windows 7 team triaged bug reports on the Windows 7 Engineering blog. Microsoft products enjoy (as a mixed blessing) more previewing eyes and shared opinions than almost anyone else's. The bottom line you have to understand to put these things in perspective is that the creator of the software is on the hook for supporting, maintaining, justifying and profiting from their product. While the customer is always right about what they want, they aren't always right about how my product should work.
Case in point: I have worked with and for ISVs for more than a decade now, and I have seen time and again a potential or current customer insisting that a feature must be added or a functionality changed. Not always, but often, when the ISV has caved and added a feature they did not feel would add value, the negative feedback drowned out the voices that were asking for it.
In commercial software development you sometimes have to follow the advice of the song lyric: "If you can't please everyone, then you've got to please yourself".
Ultimately, if your product fails, you can't blame a customer or even a group of them for demanding things that took you off mission. Each customer complaint or feature request is a gift (as the book title goes), but it is not always one you should embrace. This also goes for resellers, sales staff, developers and everyone else who is not on the blame line for the market's acceptance of the product. That responsibility falls on the product owner, who is often the business owner and visionary or, in cases like Microsoft, a senior manager or executive.
If everyone remembered this we would probably have better software overall...
Wednesday, November 11, 2009
A friend of mine pointed out that Dolly Parton is now leveraging SilverLight and IE8 Web Slices on the site for her new album.
I think this is an interesting signpost that SilverLight is rapidly approaching widespread acceptance.
Check out the web slice at the Add-on Gallery.
Friday, July 17, 2009
Microsoft has always done well with version 3, or so the well-worn saying goes. And so I have high expectations for SilverLight 3, which has just been released. Being more involved with security, business processes and enterprise system development, I have not delved as deeply into SilverLight 1 and 2 as I had hoped. With this new release I feel I have no choice, and I suspect that if you are reading this then neither do you. Rich Internet Applications really are the best of both worlds, given their low deployment hurdles (the gift that browser-based apps bestowed on us) combined with a rich, client-processor-driven user experience.
I had thought I would have years, or at least a year, more to wait for the third version, but Microsoft has been driven to outstrip the competition. I hope the competition tries to keep up, since I like this pace very much.
If you are just getting started, check out the "How Do I" videos and read Scott Guthrie's blog regularly.
I believe this new release of SilverLight has the makings of the next dev revolution. If I am right, it will have as big an impact as the release of Visual Basic 3.0...
Wednesday, April 22, 2009
My favorite interviewers, Carl Franklin and Richard Campbell, invited me to appear again on .Net Rocks recently. We talked at length about the circumstances we often see that cause technical projects in particular to fail.
Initial feedback has been quite positive so if you happen to listen to it I hope you like it as well.
This particular episode can be found here.
Thursday, April 02, 2009
I noticed an article on Wired about robots stealing jobs and got to thinking about outsourcing, this down economy and all the conversations I have had (calm and otherwise) about jobs moving offshore.
Ultimately I don't see any reasonable way to stop jobs from following a well-established lifecycle that ends in automation. Take any task currently done by a robot and you can probably look far enough into the past to find a point when it was cutting-edge technology and either a skilled technician or a fine artisan performed the function for premium pay (Dot Com boom HTML programmers, for our purposes). As time goes on, the task becomes well understood, well documented and even taught in schools around the world, and since it is still highly paid (though that has eroded by now), it attracts a lot of people who want that job.

Then the task moves toward commodity, and the formerly highly paid technicians and artisans face exactly two courses of action. They either move on to the new cutting-edge thing or they moan about the erosion of their value in the marketplace (blaming the marketplace, of course, and never themselves). Then it gets worse for the latter group, since eventually (and eventually comes quickly in the 21st century, we have found) the commodity task is recognized as cheaper to do offshore. For high tech, India and Egypt are hot, along with many other locales (I just have most of my experience with offshore teams in those countries). The formerly high-end task is drone work now and can be done by a bright student on any continent, so the work flows to where it can be done most inexpensively. This is the point of maximum complaint by those who remember making $100 an hour for doing this task. They then stop paying attention just in time for the task to be automated by a program, system or abstraction layer, so that no one will ever pay for it to be done by hand again. At this point you can probably hear people in the offshore tech districts complaining.

This is progress. It is painful, but it is also inexorable; you cannot stop it, and you shouldn't try to slow it down.
Instead you should be like the other group of highly skilled technicians and artisans and find the next big thing and constantly hone your skills. This is absolutely doable in our high tech field.
I know this post will come off as callous to some, and I am sorry if I am too blunt, but especially in times like these we have to stop looking back wistfully at the past and grab our books and browsers and dig in to invent and shape the next revolution. I personally think that energy, and the technology that helps with conservation, is the next big thing, but there is still lots of room elsewhere. If you view the lifecycle of a job as a good thing, you see that it has freed us from farming our own food and making our own clothes and has enabled so many of the things that are best in our civilization. Embrace it or be marginalized.
Finally my apologies to those stock boys out there who have had their hopes and dreams shattered by R2D2.
Wednesday, April 01, 2009
In my business we deal with companies that are by their very nature risk averse, so I only play with the newest tech on our internal projects, the occasional customer emergency and in my free time. Even so, I have watched Microsoft's Azure pretty closely, and while I am confident that eventually we will take cloud computing for granted as we do dynamic web technologies now, I am also pretty sure that we still don't know exactly what form the real impact will take. Without clear SLAs and pricing, I just can't gauge how reasonable it will be for a customer of size X with an application of type Y to opt for Azure or any other cloud computing platform. That belief also drives me to think that the Open Cloud Manifesto is at best irrelevant and at worst a major impediment to getting where we want to go. If we don't know what the best end state will be, because we have yet to really evolve the technology in the real world, then how can any group of people hope to lay out the rules of the road? There isn't a road built yet, after all.
It has been proposed that guidance is needed to ensure that solutions are "open". I can only assume this means they want code deployed on vendor A's platform to be movable to vendor B's platform unchanged (the classic desire to avoid gambling on vendor lock-in). While that is not specifically stated, I just don't see any other interpretation that makes sense.
Time will tell, but I suspect we are several years of market testing and evolution from a point where we can even begin to have this conversation intelligently.
To read more on this topic I will point you to blog posts by Chris Auld and Michele Bustamante. I must say that I agree with them for the most part.
Thursday, September 18, 2008
I have worked on many software development projects, both commercial and line of business, and every single time I talk about optimization with a developer they jump to the same conclusion: they think I mean speed of execution. I grant that the majority of the time that is what people mean by optimization, but it is not correct 100% of the time. Often I care more about the maintainability of an application, especially if I know it is destined (or doomed) to morph quite a bit over the next year or so. In these cases it is often an application that will be used by employees, and many of the standard assumptions do not apply.
Take our intranet, for instance. It is only used by employees and our closest contractors. We use it for tracking customers and projects, for forecasting sales and even for timesheets. I don't care if it is 5% slower; I want it to be adaptable, since we are an agile company. I don't mess with the code every week or even every quarter, but it is written in such a way that I, or any other developer on staff, can go in and very quickly add a field or other features. We didn't sacrifice security (that would be unacceptable), but we did forgo the multi-tier architecture and stored procedures in favor of parameterized queries. This is a sin in many circles, but if the application's backend is single use (serving only one application), there is much less advantage to all the abstraction. I am sure the arguments will rain down on me now, but I see the same drive for complexity without purpose (without real advantage, I mean) in the Java world, where code portability is everything and yet almost no one ever avails themselves of that costly feature.
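For anyone who equates skipping stored procedures with inviting SQL injection, a parameterized query keeps user input out of the SQL text entirely. A minimal sketch against SQL Server; the table and column names are made up for illustration:

```csharp
using System.Data.SqlClient;

static class Timesheets
{
    // Parameterized query: the values travel as typed parameters and are
    // never concatenated into the SQL string, so they cannot change the
    // shape of the query the way injected text in a dynamic string could.
    public static void AddEntry(string connectionString, int employeeId, decimal hours)
    {
        const string sql =
            "INSERT INTO TimesheetEntries (EmployeeId, Hours, EnteredOn) " +
            "VALUES (@employeeId, @hours, GETDATE())";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@employeeId", employeeId);
            command.Parameters.AddWithValue("@hours", hours);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```

You give up the extra indirection of a stored procedure layer, but you keep the security property that matters.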
The next time someone asks you to optimize something ask them if they mean for performance or maintainability and let the funny stares begin...
Wednesday, March 12, 2008
As I said a couple of days ago, I am speaking again in Cairo in a few weeks at the EDC. I have settled on the topics that I am presenting. While these are still subject to change, it looks like:
- A session on AJAX
- A session on Commercial Software Dev (vs. Business development)
- A session on Indexing Optimization in SQL Server
I am really looking forward to seeing all my friends, and again want to thank Waleed Abdelwahab for pushing me to revive this blog.
See you all soon!
Tuesday, March 11, 2008
Every few years I find that there are pieces of the platform (sometimes big ones) that I have not played with or encountered on a customer project, and it tends to freak me out a bit. We have now arrived at that point in the cycle yet again! Expression, SilverLight, WPF and the like are all technologies you will likely never see me present on, but in the aftermath of MIX 08 and the whole WideOpen Web movement, I just have to dive in deeper and see what the implications are for the parts of the technology that I do use daily.
I think this is a key survival trait for me, and I encourage everyone to reach down into that free time (you are still sleeping, right?) and get a grip. The good news is that great blogs and podcasts are making this much easier than ten years ago. I promise to report what I find here and might even ask a non-rhetorical question or two ;)
Monday, March 10, 2008
I have confirmed the final dates for the Egypt Developers Conference, which is held every year in Cairo. This year it is in mid-April, and again I will be speaking. I really look forward to this event, and for a short time I was afraid the dates would move to a week when I couldn't attend, but I now know that is not the case.
This week I have to solidify which sessions I will present and am thinking about doing a session on commercial software development (as opposed to business software development) on the new Software Architects track.
Last year I made the mistake of re-presenting sessions from previous years at the request of some very well-intentioned people who were running the show, but I will not make that same mistake again.
See you in Cairo!
Friday, May 25, 2007
Someone in my office just forwarded me a link to a video of Scott Guthrie talking about ASP.Net. Not very unexpected, but the video turns out to be set inside Halo thanks to the crew at Red vs. Blue, and it is fabulous.
I don't know what site it was originally hosted on, but if you like Halo, or ASP.Net, or Scott, or anything remotely cool and/or entertaining, check it out!
Red vs. Blue themed ASP.Net ad featuring Scott Guthrie
Monday, May 21, 2007
StrangeLoop has finally announced their AppScaler device! Richard Campbell told me about his involvement in StrangeLoop a while ago, and I have been dying to tell people about it, but until now it has been confidential.
Basically, the AppScaler takes a web farm's major headaches and lifts them into the load balancer, out of the way of your developers. It really is a cool strategy because it gives sites real performance gains over hosting session state on a state server or in a database, along with a whole host of other performance-enhancing and bandwidth-saving features.
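For context, these are the standard out-of-process options a device like this is measured against; in ASP.Net you pick one in web.config. A sketch, with a hypothetical state server name:

```xml
<!-- web.config: classic out-of-process session state in ASP.Net.
     mode="StateServer" uses the ASP.Net State Service on a dedicated box;
     mode="SQLServer" would persist session in a database instead. Either
     way, every request pays a network round trip for its session data,
     which is the overhead an appliance in the load balancer tries to beat. -->
<configuration>
  <system.web>
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateserver01:42424"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```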
Check out the recent article at NetWorkWorld.com
Tuesday, December 19, 2006
My fellow Microsoft Regional Director, Jonathan Goodyear, recently wrote a very full and detailed description of what the Microsoft Regional Director program really is; that should help anyone who still thinks I am an MS employee.
I hope this helps clarify things a bit, though I do expect to still have to go through this once a week for those who don't read this blog ;)
Wednesday, November 29, 2006
My good friend, Eileen Rumwell, has started blogging. Her blog is something I plan to keep watching, especially since in the short time it has been up she has already thrown out some great insights. The really cool thing is that, having come from a marketing background, Eileen has been thrust among developers for quite a few years now. Working at Microsoft she has great insight, and maybe more importantly, she has insight into how we developers outside MS work and think about our role.
Eileen's latest post starts off talking about her dogs and quickly points out that developers seem to think that security is not their problem. I have seen this attitude quite a bit, but typically I get to beat it out of those who exhibit it, since I am often cleaning up after a problem or onsite to address one.
Ignorance and apathy are both alive and well in the development community. It isn't the people who are motivated and willing to drag themselves to user group meetings who are the problem; it is those who are likely too lazy to even read a blog about their chosen profession, let alone one about something tangential to it. If we hold our breath long enough, the world will evolve and security will be baked into everything that matters, but that is still a long way off if a majority of those building the future think this whole security thing is a fad. Let's vote them off the island.
Thursday, November 23, 2006
Microsoft has just released their new Anti-XSS library, which helps developers do the right thing more often and with less effort than before.
If you are interested in this (and trust me, you are), your first stop is the tutorial to see how it is done. As you will see, it isn't stupid simple, but it is an improvement.
Once you get comfortable, go to the official page, download the library and make it part of all your web projects.
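As a taste of what the library gives you, the core idea is encoding untrusted output for the specific context it will land in. A minimal sketch assuming the Microsoft.Security.Application namespace from the library; check the tutorial for the exact API surface of the version you download:

```csharp
using Microsoft.Security.Application;

static class SafeOutput
{
    // The Anti-XSS library encodes from a whitelist of known-safe
    // characters, rather than the blacklist approach of Server.HtmlEncode,
    // and it offers a different encoder per output context.

    // For HTML body content:
    public static string ForHtml(string untrusted)
    {
        return AntiXss.HtmlEncode(untrusted);
    }

    // For an HTML attribute value:
    public static string ForAttribute(string untrusted)
    {
        return AntiXss.HtmlAttributeEncode(untrusted);
    }

    // For inclusion in a JavaScript string literal:
    public static string ForScript(string untrusted)
    {
        return AntiXss.JavaScriptEncode(untrusted);
    }
}
```

Picking the encoder that matches the output context is the part developers most often get wrong, and it is exactly what the library makes explicit.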
Friday, October 20, 2006
Code Camp 6 is tomorrow at the MS office in Waltham, and this is the first one I am going to miss since the original, world-premiere Code Camp.
With Thom Robbins moving on to Redmond and the rush of business that everyone seems to be seeing, this 6th edition didn't come together nearly as early as previous editions.
I apologize for not making it; it is slimmed down to a single day this time and I have a conflict tomorrow, so I won't be there.
I expect we will do a better job for Code Camp 7 and provide much more advance warning, and I will do my best to defend the date ;)
Wednesday, September 06, 2006
I have been casting about for .Net best practices and came across Adam Cogan's lists of how to do pretty much everything. The funny thing is that I have known Adam for years and was aware that he had compiled quite a lot of information on his site, but until I started to dig through it I hadn't realized just how much is there.
If you are trying to codify your company's "how we do it here", then make sure you check out Adam's site.
Friday, July 14, 2006
I normally don't post twice in one day, but this blog post by Rob Caron was VERY helpful in understanding VS2005 licensing and the relationship between the products. I expect it will help a lot of people grasp it, since I get asked this question fairly often in my roaming.
Thanks Rob and Enjoy!
I was just thinking about one of the bugs listed in the latest hotfix from MS and realized that while .aspx and config files are not at risk, since they are mapped to the ASP.Net handler, a SQL Express database stored in App_Data probably is.
We don't typically use SQL Express, but my bet is that this is the greatest risk factor for this bug. Thoughts?
Thursday, May 25, 2006
If you are into threat modeling (and you should be) then you should check out the latest version of the product formerly code named "Torpedo". I think this is the first product to make real strides (bad pun intended) toward making threat modeling more approachable for the average developer.
Get it at:
Monday, May 08, 2006
At Code Camp 5 in Waltham this past Sunday I was delivering my session entitled "All you need to know about Membership" when I learned that I didn't know everything I needed to know about membership.
Someone asked whether the scripts that aspnet_regsql.exe uses to create the membership tables are available. My answer was that I hadn't seen them, so I assumed they were baked into the exe. WRONG! Our good buddy and fellow Code Camp presenter, Dan Krhla, pointed out that in the same directory where you find aspnet_regsql.exe (namely C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727) you also find the scripts the tool uses, including InstallMembership.sql. There are a bunch of them, and you have to install them in order (InstallCommon.sql first, etc.). They offer some good insights, and I have already spent a bit of time on them myself.
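If you want to run the scripts by hand against your own database, something like the following works. This is a sketch: the server and database names are placeholders, and your framework path may differ.

```shell
rem Run the membership scripts in dependency order against a database of
rem your choosing (server, instance and database names are placeholders).
set FW=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727
osql -S localhost\SQLEXPRESS -E -d aspnetdb -i "%FW%\InstallCommon.sql"
osql -S localhost\SQLEXPRESS -E -d aspnetdb -i "%FW%\InstallMembership.sql"
rem Add InstallRoles.sql, InstallProfile.sql, etc. as your app needs them.
```

Running them by hand also gives you a database you can actually read through, which is half the educational value.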
Thanks again Dan and I am happy that the question came up so I could learn something too. This is why I really love the Code Camp.
Wednesday, May 03, 2006
MS has committed, at some level, to supporting VB6 on Vista. An article from February has some details, but we now know that if you have a VB6 application you cannot live without, you will probably be OK for years to come.
This is both good news and bad news. While I feel the pain of people who depend on these legacy tools for their products to work, I can't help wincing when I see this, because old tools support old techniques and technologies that are often just not up to the task of building secure applications. Everything from cryptography to SQL injection has evolved, as have the tools to combat them.
If you are using or depending on VB6, then congratulations, but my advice (from a seasoned VB developer) is to get off of it unless you can really and truly convince yourself that it poses no security weaknesses in your use of it. Eventually you will have to jump.
Thursday, April 20, 2006
Sharing a web server between development teams is always fun (not). We had a problem surface (or resurface) today: one developer creates a web application on IIS that uses .Net 1.1 (not an uncommon occurrence), and another developer creates a web application on the same server that uses .Net 2.0 (something becoming more common every day). Odds are the developers, and sometimes even the network engineer or web master, will let the defaults lull them into the false sense that it was an easy and straightforward task.
The problem is that they both allowed the "Default Application Pool" to remain selected and now the second of these sites to load will crash IIS.
You can't have two different versions of .Net loaded into the same process, and an Application Pool often (though not always) means the same process.
Scott Forsyth has an article about this very issue that will help describe the error that occurs when you have this problem (the "Server Application Unavailable" error).
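The fix is to give each runtime its own application pool. On IIS 6 you can script it; a sketch using the stock admin scripts, where the site number, application name and pool name are all placeholders for your own setup:

```shell
rem Create a dedicated pool for the .Net 2.0 application and assign the
rem app to it (run from C:\Inetpub\AdminScripts on an IIS 6 box).
cscript adsutil.vbs CREATE W3SVC/AppPools/DotNet20Pool IIsApplicationPool
cscript adsutil.vbs SET W3SVC/1/ROOT/SecondApp/AppPoolId "DotNet20Pool"

rem Then register the 2.0 scriptmaps on just that application path:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -s W3SVC/1/ROOT/SecondApp
```

With each version isolated in its own worker process, the second site to load no longer takes down the first.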
If you haven't seen this yet, then you will.
Friday, April 14, 2006
Scott Guthrie pointed me at a link to the source code for the ASP.Net 2.0 providers including the Membership and Role Management providers. While I think the Profiles, Web Parts and Site Navigation providers are important and cool, I expect to do much more with the Membership provider. Expect to see some customizations in presentations I give in the future.
I think this is a great step and am not surprised to see Scott doing something this cool.
Check it out!
Wednesday, April 12, 2006
I was recently asked by a very technical and very sharp friend of mine about the semantics of permissions on copy.
I figured if he needed some guidance on how this works then there must be a ton of other developers who could use a refresher so here goes:
There are a lot of reasons a developer or QA engineer must use copy or move to get their applications running for test or even for production. The problem is that the same old processes that worked so many times before can mask a misconception or two, which surfaces as "bugs" when the moons do not align to make the old process function as expected. Case in point: you want to deploy a web application, which has notoriously particular permissions requirements. If copy has always worked in the past, but on the new server you are getting strange permissions, then you might be forgetting some of the rules.
The first thing to take into account is whether this is a move within the same volume (nothing fancy), a move across volumes (maybe obscured by DFS) or just a plain old copy (often the case).
A move within a volume means the permissions should be preserved. A move across volumes is actually a copy plus a delete, which means you just get the permissions of the target folder; that is by design, and it is also the behavior of a copy unless you use something like scopy, which preserves permissions.
If a copy in the past has preserved permissions and you didn't use scopy (very handy, by the way), then either there is a setting in Windows I am unaware of (please enlighten me) or you got lucky and the target folder permissions were what you expected.
Usually file permissions, and especially the semantics of permissions on copy vs. move, are the domain of network types. In many cases it helps a lot to be a mongrel from both worlds.
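If you do need a copy that carries the source ACLs along (the scopy behavior), the stock tools can do it too. A sketch with placeholder paths:

```shell
rem Copies normally inherit the target folder's ACLs. To carry the source
rem permissions along, use a tool that copies security explicitly.

rem xcopy: /O copies ownership and ACL information, /X adds audit settings.
xcopy C:\Apps\MyWebApp \\webserver01\wwwroot\MyWebApp /E /O /X

rem robocopy (from the Resource Kit in this era): /SEC is shorthand for
rem /COPY:DATS, i.e. data, attributes, timestamps and security.
robocopy C:\Apps\MyWebApp \\webserver01\wwwroot\MyWebApp /E /SEC
```

Either way, you are opting in to preserving security rather than hoping the target folder happens to have the right inheritance.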
Wednesday, April 05, 2006
Like the Code Camps, another good idea is coming out of the Microsoft Developer Evangelists. This time it is a web site with an interesting concept. If you go to http://www.community-credit.com/DevCommunity.aspx you will see it in action and also be able to see the people who are working hard to build their technical communities. Think of it as an ongoing public resume where people who contribute to the community get credit for their efforts.
I think this is a good step in building a nice feedback system so the dev community keeps going.
Check it out.
Sunday, March 05, 2006
Ted Neward just launched his new site at http://www.tedneward.com. Check it out; Ted is one of the most interesting and intelligent people I know. If you ever need to cross the .Net platform with Java, he is the guy to take a lesson from.
Monday, February 27, 2006
If you are at all into security, or even if you just think technology is cool, then you have to watch the latest episode of The Code Room. In this episode you will see our own Duane Laflotte, our resident top security expert, as part of the team of evildoers who hack a casino in Vegas.
I think it is really well done and makes some good fundamental points about security in a very entertaining way.
Friday, February 24, 2006
When I was in Cairo for the MDC a few weeks ago, I gave several talks that touched on the new membership controls in ASP.Net 2.0. One question that came up repeatedly was how far you can stretch the provider before you have to write a custom membership provider. The answer turns out to be: not very far. The provided membership providers are very good and very extensive, but they are also fairly rigid in their implementations.
I think I have the three criteria that will tell you when to bite the bullet and write your own membership provider:
- If you need a schema that differs (in any way) from the one provided. Running aspnet_regsql.exe creates a database, and if you need to edit that schema then you cannot live without a custom provider. The exception is adding tables for your own use, but bear in mind that the provider will just ignore your additions.
- If you need to store the data somewhere that is not supported. Even if you keep the same schema the default providers expect, you cannot point them at a proprietary database and expect them to just work. The XML provider is the most common example (though not very real world), but you could think of many scenarios, including SQL Server 7.0, where a custom provider would be in order.
- If you need or want to insert some abstraction between the provider and the data. Stefan Schackow of Microsoft gave a great session at PDC 2005 in which he demonstrated creating a provider for the situation where your web servers are not in direct contact with the database server. To solve that problem, he wrote a provider that took a web service endpoint as its connection string.
So as you can see, you are quite likely to find yourself writing your own provider. The good news is that it really isn't that hard once you have done it once or twice ;)
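Once written (a custom provider is just a class derived from System.Web.Security.MembershipProvider with its abstract members overridden), you wire it up in web.config. A sketch; the provider type, assembly and connection string names here are hypothetical:

```xml
<!-- web.config: swap the default SqlMembershipProvider for your own.
     The type and connectionStringName values are placeholders for your
     own class, assembly and connection string. -->
<system.web>
  <membership defaultProvider="CustomProvider">
    <providers>
      <clear />
      <add name="CustomProvider"
           type="MyCompany.Web.CustomMembershipProvider, MyCompany.Web"
           connectionStringName="MembersDb" />
    </providers>
  </membership>
</system.web>
```

The login controls never know the difference, which is the whole point of the provider model.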
Monday, February 20, 2006
Dominick Baier of DevelopMentor wrote on Saturday about a pretty dramatic change in the way ClickOnce security is configured by default in the RTM version of .Net 2.0.
This is a must-read if you plan to use ClickOnce and haven't already revamped the default security settings. If you don't like the ramifications of not being able to disable ClickOnce, then rather than avoiding .Net 2.0 altogether you might consider the lesser step of just removing the .application mapping from your systems.
I am hopeful that Microsoft will come up with a fix in a service pack to .Net 2.0, as they did with the original .Net 1.1, that addresses this default.
Tuesday, January 17, 2006
A number of the Microsoft Regional Directors and I have been posting back and forth all day about the C# vs. VB.Net issue, but not in the way that bone of contention usually plays out.
Rocky Lhotka not only started the thread, but he also was the first to bring it into public space with his post and quote from one of my analogies.
The point I would like MS to get is that while C# and VB.Net are very useful and powerful each in their own right, they work at cross purposes. The biggest problem is that in spite of vastly different syntax and heritages there is so little difference between them that you can't walk into a development shop and say "you will save 30% of your time coding in C# because your projects are mostly X type". That is the type of distinctive purpose I want and that is what my customers by and large want. A decision point, a clear guideline, a stated objective.
At the moment I think both languages are trying to do the same thing but for people with different prejudices vaguely based on their backgrounds. While choice is generally good it can be bad when it leads to paralysis. Jim Allchin once said that having brought NT to market, if he had it to do over again he would remove every single user configurable setting as they were the source of all the heartache.
The analogy that comes to mind for this situation (and the one borrowed by Rocky) is that as a commander in combat I know when to use a tank (plodding and durable lethality) and I know when to use an A-10 (fast, maneuverable and vulnerable lethality), but if you make tanks fly and add a few feet of armor to an A-10 then you get the same muddy water we have between C# and VB.Net. Those that know me will forgive the military analogy ;)
Wednesday, January 11, 2006
Carl Franklin has done it again by teaming up with Scott Hanselman
to bring us the podcast called HanselMinutes. HanselMinutes is a deep technology podcast
that I find very compelling as well as informative. The combination of personalities (both of whom I am very happy to know well) is just easy to listen to. I learned more than I expected in just the first show.
I often listen to Carl's other shows including .Net Rocks, but I will be adding this to my Outlook schedule as new editions come out.
Monday, December 19, 2005
About a month ago I signed up for a newsletter called FastTips by Microsoft. My old friend Thom Robbins had a big hand in creating this and actually has done many of the demos I have watched (very well, I might add). I often get asked where people should go to get up to speed faster and I can't think of a better way to push learning than a push-based technology like this newsletter. You can subscribe to FastTips here.
You can't know everything, but if you don't work at it you will soon find that you don't know enough.
Thursday, November 03, 2005
Mark Russinovich is a brilliant guy and likely not so popular with the people at Sony these days. Mark was testing out some rootkit detection and removal software and discovered that, in their exuberance to implement Digital Rights Management, Sony had created a very ham-handed solution that behaves more like a rootkit than some of the very worst actual rootkits out on the Internet.
Read Mark's Blog which details his discovery or go to theregister.co.uk article that summarizes it. Good reading about bad code!
Monday, October 31, 2005
A friend of mine has a system that will require them to generate a large number of usernames and passwords for their users, and they want to use usernames that make sense to the users. That is a common request, but he is concerned that a savvy user could deduce the usernames of others based on their own. This is a real possibility (or likelihood) if you use any of the standard methods such as employee number (just guess sequential numbers) or combinations of first and last name.
My response is as follows:
It is as always a tradeoff...
If you use a determinable username then the password must be that much more secure. Ultimately we accept that usernames are often guessable (in most systems), but just because that is a normally accepted risk it does not follow that it is OK. Password guessing is a numbers game. In the simplest case of a single character password using a standard character set (upper case alpha + lower case alpha + digits = 26 + 26 + 10 = 62 possible characters), only 62 guesses are needed to get in once the username is known. As we add characters to the minimum password length we approach numbers where brute force attacks will take a long time, provided the password is not in a dictionary (my dictionary for such attacks has over 5 million words and well-worn passwords). At 6 characters you are at 56,800,235,584 (over 56 billion) possible combinations assuming the simple character set mentioned above. On average an attacker will actually stumble on the correct password after trying about half of the possible combinations, but keeping that refinement out of the discussion, we have to decide whether we think a user can hit the site 56 billion times in a reasonable span of time to guess the password. Drive the minimum password length to 8 characters and we are at a healthy 218,340,105,584,896 (over 218 trillion), which is where I like to be.
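The arithmetic above is just the character set size raised to the power of the password length. A quick Python check of the numbers quoted:

```python
# Number of possible passwords for a given character set and length.
def password_combinations(charset_size: int, length: int) -> int:
    return charset_size ** length

# Upper case + lower case + digits = 26 + 26 + 10 = 62 characters.
charset = 26 + 26 + 10

print(password_combinations(charset, 1))  # 62
print(password_combinations(charset, 6))  # 56800235584 (over 56 billion)
print(password_combinations(charset, 8))  # 218340105584896 (over 218 trillion)
```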
This is very secure given one critical assumption: that making a web request for each guess adds enough overhead that you can't hope to achieve millions of guesses per second, or even per minute. If this assumption falls then my conclusion below for a web based system is out the window. Windows hashes of 8 character passwords fall very quickly, even with larger character sets, because I can crack them locally, leveraging the full power of my processor and unconstrained by network latency (which is huge in comparison to local throughput).
Bottom line is that if you are comfortable with 8 character passwords that are complex enough (not findable in any competent hacking dictionary) then you can publish the user names on your home page and it won't matter (but I wouldn't because I am paranoid).
One final analogy to wrap up: imagine a combination lock with the typical 4 numbers on tumblers (a locker or suitcase lock). There are 10,000 combinations from 0000 to 9999. If someone could deftly try one per second then in under 3 hours it would be open without exception. But if they could only try once per hour (due to surveillance or some other factor) then it would take well over a year. Complexity is the size of the available character set raised to the power of the password length. Vulnerability is the number of potential passwords divided by the speed at which they can be tried. I prefer adding techniques that detect and deter brute force attacks, but that is a topic for another day.
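The lock analogy reduces to one division: search space over guess rate. A small Python check of the figures above:

```python
# Hours needed to try every combination at a given guess rate.
def hours_to_exhaust(combinations: int, guesses_per_second: float) -> float:
    return combinations / guesses_per_second / 3600

lock = 10 ** 4  # 4 tumblers, 0000 through 9999

print(hours_to_exhaust(lock, 1.0))                  # ~2.8 hours at one guess per second
print(hours_to_exhaust(lock, 1 / 3600) / 24 / 365)  # ~1.1 years at one guess per hour
```

The same function applied to the 218 trillion combination password space shows why the web-request-overhead assumption in the paragraph above matters so much: the only variable a defender controls after setting the password policy is the guess rate.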
Tuesday, October 25, 2005
Thom Robbins of MS is introducing a really cool competition called the "Launch 2005 Screencast Contest". The concept is that you get a free 30 day copy of Camtasia and record one or more demos with audio. The entries will be screened and the winners in the major launch cities will win some useful stuff.
Thom breaks it all down on his blog here.
I did one of these during the break at the last Code Camp and it was actually pretty cool. My demo is up on Channel 9 and I am definitely going to be doing some more (though if I know Thom, I am not allowed in the contest).
Thursday, October 13, 2005
I have been out of it for about a week due to travel to present 7 sessions at TechEd Hong Kong, but now I am back. It was a great event and as usual was characterized by very high energy keynotes!
The highlight for Bruce Backa and me in our presentations was our last session on Server Control Development for ASP.Net 2.0. The demo of a control that leverages AJAX-style updating of the content really energized the audience and opened some eyes. I have been asked to provide the source code to that particular demo (for session WEB428) so here it is: WEB428Done.zip (51.22 KB)
I have to thank everyone who got us to go over there (for our fourth time!) and to Andres Sanabria from Microsoft for the slides and the framework for this particular demo.
Tuesday, August 30, 2005
The deeper I dig into each generation of tools (VS 2005 at the moment) the more I see the trade-off of CPU for developer time. It used to be that the programmer would go to extremes to maximize the performance of their code and that the tools were written in much the same way. Over the years this trend has reversed and has really accelerated the other way. When you hit enter in VS 2005 it is doing a background compile, which allows it to catch typos and other errors in much the same way that Word does. This is great if you want to be productive, but I often hear lamentations that performance is being tossed. When I look at modern CPU power, I have to admit that I think it is high time we made reasonable trade-offs. I see more and more servers that barely reach double-digit CPU usage as the march of Moore's Law overtakes our consumption of the resulting CPU cycles.
I am not advocating wasting resources, but if I have to write a small application for a simple task then I am all for getting it done in half the time, in exchange for it using more memory or even running 5% slower than it would if written with older tools. The truth is that as applications evolve and add features they almost always run slower than they used to, at least if you stick with the same hardware.
For myself I say keep the productivity gains coming in the tools, and as long as it doesn't get capricious, I won't complain.