A conversation that comes up often concerns what rights a Windows Administrator (domain or local) has to folders and files. The common assumption is that being an Administrator is a backstage pass, and while that is mostly true, the details are more complex. Windows did not survive in the server space by oversimplifying security, even though the defaults are quite open. In most cases the Administrator will have rights to all files and folders, but that is not an innate right. It is a default circumstance that is very subject to change, especially in environments that have been around for a number of years.
The first thing to understand is that no user has inalienable rights to any file or folder. If an Administrator account, or a group of which the account is a member, is granted no rights at all or is explicitly denied rights to a file or folder, then the result will be Access Denied for as long as that state persists. A single deny will override membership in a dozen groups with full control, or even directly assigned full control. For mere mortal users that is game over; there is no way for them to change the situation without help. But here is where the Administrator has a superpower: the ability to take ownership of any file or folder. This sounds like a weak superpower, but it is in fact very powerful, because once you own a file or folder you can assign any permissions you like. The deny can be removed, or full permissions granted, to banish the Access Denied message. The root of this power is the “Take ownership of files or other objects” user right in Local Security Policy, which is granted to Administrators by default. Removing this right will allow permissions at the folder or file level to take precedence, but it also removes the failsafe.
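On a reasonably current Windows box the recovery workflow looks roughly like this. This is a sketch from an elevated prompt; C:\Locked is a made-up folder that is currently returning Access Denied, SomeGroup is a hypothetical group with a deny entry, and on pre-Vista systems you would use cacls instead of icacls:

```bat
rem Step 1: use the Administrator superpower and take ownership of the
rem tree, recursively, answering Yes to any prompts. This works even
rem when the ACL grants Administrators nothing at all.
takeown /F C:\Locked /R /D Y

rem Step 2: now that we own the tree, grant ourselves Full Control.
icacls C:\Locked /grant Administrators:F /T

rem Step 3 (if needed): strip an explicit Deny entry for a group.
icacls C:\Locked /remove:d SomeGroup /T
```

Note that this leaves tracks: ownership has visibly changed, which is exactly the behavior the next section discusses.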
This mechanism has been around since Windows NT, but it has changed over the versions. In the early days an Admin could only take ownership for themselves; they could not assign ownership to another user without logging in as that user. That made it hard for an Admin to take ownership, change permissions, read or edit something they should not be touching, and then change the permissions back and reassign ownership to the original party. This changed several versions ago: Administrators can now assign ownership, presumably because the benefit of making ownership assignable was judged to outweigh the security of making that scenario difficult.
Over time permissions get changed, often with the intent that the changes are temporary, but seldom does anyone find time to reverse these “temporary” changes. Sometimes blocking inheritance is part of the change, and sometimes experiments become permanent. This all means that sometimes, even when you are logged in as an Administrator, you will see Access Denied. The key to overcoming this is understanding the way that being an Admin lets you access all files and folders. It is not as cut and dried as most people expect or would hope, but that is why it is secure.
Paul Randal has compiled a document of all his blog posts on SQL Server myths that I think is a must-read if you consider SQL Server part of your core competence. It is probably not very interesting to pure devs, but I would still suggest you take a scan of it so you can avoid making assumptions that are either out of date or just plain wrong.
Find the link to the PDF here: http://www.sqlskills.com/blogs/paul/CommonSQLServerMyths.pdf
I was recently asked how to cost effectively do backup and Disaster Recovery (DR) for a 50 or so person organization.
Here is what I have found to be a pretty good way to go that won’t break the bank.
For an organization this size I use Backup Assist (http://www.backupassist.com). It leverages Windows Backup and has agents for Exchange and SQL.
I then break things into three categories and treat each slightly differently.
Category 1: the things you call critical, such as active email, source code, CRM, financial data, etc.
This stuff gets backed up daily and, depending on my level of paranoia (how screwed we are if we lose X days), I copy it offsite. That can be an alternate office or, if none exists (your scenario), either a hosted server at a datacenter somewhere (max out the disk and bandwidth and go minimal on everything else, which is much less than your $750 per month) or a server connected via VPN to the company principal’s house (the poor man’s hosted server).
Category 2: the things that change often but just aren’t level 1, such as home directories, business shares and other data.
Data in this category gets weekly backups and usually gets posted monthly to a large USB drive which gets rotated with its twin monthly. The drive with the current data is brought offsite for storage (again maybe to the company principal’s house or maybe a safe deposit box). When the new drive is delivered the old one comes back to be used for the following month’s backup.
Category 3: the unchanging files, like images, email archives and so on.
You can either burn these to optical media (make multiple copies, with one going to the company principal’s house and another to the safe deposit box if you have one) or lump them onto the USB drive shuffle.
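As a trivial illustration of the twin-drive shuffle, here is a sketch (the drive labels and the even/odd convention are invented for the example) that decides which of the two USB twins takes the current month’s backup:

```shell
#!/bin/sh
# Pick which of the two USB twins ("A" or "B") receives this month's
# backup: even months go to twin A, odd months to twin B.
month=$(date +%m)   # e.g. "07"
month=${month#0}    # strip a leading zero so it is not read as octal
if [ $((month % 2)) -eq 0 ]; then
    twin="A"
else
    twin="B"
fi
echo "This month's backup goes to USB twin $twin"
```

The point is simply that the rotation should be mechanical; whichever twin is not in service is the one sitting offsite.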
Hope this helps those who might be looking for this kind of insight.
StrangeLoop has finally announced their AppScaler device!
Richard Campbell told me about his involvement in StrangeLoop a while ago and I have been dying to tell people about it, but until now it has been confidential.
Basically the AppScaler takes a web farm’s major headaches and lifts them into the load balancer, out of the way of your developers. It really is a cool strategy because it gives sites real performance gains over hosting Session State on a state server or in a database, along with a whole host of other performance-enhancing and bandwidth-saving features.
Check out the recent article at NetworkWorld.com about it.
The topic of the AT command and the command prompt came up on an internal list I am on with Microsoft, the gist of which was, “How do I securely turn this junk off?”
The answer is that the command prompt, especially when coupled with the Task Scheduler, is to some degree a security hole that is closable, but not trivially. You can patch it using things like this: http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/regentry/93465.mspx?mfr=true
And if you really want to wipe out the user’s options, you should reset the Task Scheduler service to use a low- or no-privilege account and disable it (I am paranoid, but I have my reasons). The problem is that most people who come up against this feel they shouldn’t have to do it, but the reality is that you do.
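On XP/2003 that lockdown looks roughly like the following sketch (run as an administrator; Schedule is the service short name for the Task Scheduler, and LocalService is the low-privilege built-in account I would point it at):

```bat
rem Point the Task Scheduler service at a low-privilege account
rem (LocalService takes no password), then disable and stop it.
sc config Schedule obj= "NT AUTHORITY\LocalService" password= ""
sc config Schedule start= disabled
net stop Schedule
```

With the service disabled, AT has nothing to talk to, which is the point.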
For a scary look at why simply taking the RUN command off the Start menu is not enough try the following:
Open up “Help and Support” from the Start menu and search for “command”.
Select the entry that describes how to “Test a TCP/IP configuration using the ping command”
You will see that there is a link that will open up a command prompt (it doesn’t run as System, but it runs).
That is the XP version.
The Windows 2003 Server one takes more searching, but it is there.
The issue is not that the functionality exists; we all want functionality. The problem is that when it is hard (or impossible) to shut something off effectively, it is maddening and often leaves people dismayed.
Time for an analogy:
I have doors on my house that I leave unlocked all the time. The dogs and other things in the house keep it secure (if you know me then you know what I mean), but if I wanted to secure those doors and found that I could lock them, yet the manufacturer had put the hinges on the outside where an intruder could manipulate them, then I would be unhappy. Most security outrage and dismay comes from features that just didn’t take security into consideration for the times when you don’t want the user to do anything except what they are told they can do.
This will always be an arms race. If one of our professional security gurus such as Duane Laflotte wants to get in and has physical access to a workstation or server, then he can get in. But there is a point where I will say, yes, I accept that there are some things I can’t defend against. If you use a tank to blow in my front door, I won’t moan to the manufacturer about the door not being tank-proof; that is what the mines are for.
Is Vista the solution to all security problems? I doubt it. I expect that there will be improvement based on features I already know are in the most recent builds, but I won’t judge the security of Vista until after it ships (and won’t pay all that much attention to it until then either) since the devil is in the details and the truth is in the final bits. Submarines either leak or they don’t. The OS will be judged in much the same way in regards to security.
Ultimately information is power. Nowhere is that more true than in the realm of security. I suggest that you learn all you can and I will do what I can to help.
There are varying opinions on almost everything, but Compliance is one of those topics, like economics, where everyone seems to have a different opinion.
I was reading an article by one of the Systems Engineers at Network Appliance entitled “Six Tips for Archive and Compliance Planning,” and while I agree with most of the points Mike Riley makes, I had to think a bit about his words on encryption.
He isn’t saying not to use encryption; on the contrary, he says encryption is a must. But the advice is sound: be careful what you do and consider the ramifications. With compliance systems, search and rapid retrieval are often key, and those are some of the most plausible arguments against specific applications of encryption.
As always, look before you leap. I guarantee that if you think about where you should be using encryption, you are already ahead of most.
I was recently asked by a very technical and very sharp friend of mine about the semantics of permissions on copy.
I figured if he needed some guidance on how this works then there must be a ton of other developers who could use a refresher so here goes:
There are a lot of reasons that a developer or QA engineer must use copy or move to get applications running for test or even for production. The problem is that the same old process that worked so many times before can mask a misconception or two, which then surfaces as a “bug” when the moons do not align to make the old process function as expected. Case in point: you want to deploy a web application, which has notoriously particular permission requirements. If copy has always worked in the past, but on the new server you are getting strange permissions, then you might be forgetting some of the rules.
The first thing to determine is whether this is a move within the same volume (nothing fancy), a move across volumes (maybe obscured by DFS), or just a plain old copy (often the case).
A move within a volume preserves the permissions. A move across volumes is actually a copy and a delete combined, which means the file just gets the permissions of the target folder; this is by design, and it is also the behavior of a copy, unless you use something like scopy, which preserves permissions.
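The same two rules have a POSIX-flavored analogue that is easy to demo (this is an illustration of the idea, not NTFS itself): a move keeps the file’s permissions, while a copy that creates a brand-new file picks up default permissions instead.

```shell
#!/bin/sh
umask 022
demo=$(mktemp -d)
touch "$demo/app.config"
chmod 600 "$demo/app.config"          # tight permissions on the source

# "Move within a volume": the permissions travel with the file.
mv "$demo/app.config" "$demo/moved.config"
stat -c '%a' "$demo/moved.config"     # -> 600

# "Plain copy" into a new file: the new file gets default permissions
# (666 minus the umask = 644 here), not the source's 600 -- the moral
# equivalent of a Windows copy inheriting the target folder's ACL.
# Tools like scopy on Windows (or cp -p here) preserve them instead.
cat "$demo/moved.config" > "$demo/copied.config"
stat -c '%a' "$demo/copied.config"    # -> 644
```

Different permission model, same lesson: a copy manufactures a new security descriptor; a move carries the old one along.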
If a copy in the past preserved permissions and you didn’t use scopy (very handy, by the way), then either there is a setting in Windows that I am unaware of (please enlighten me) or you got lucky and the target folder’s permissions were what you expected.
Usually file permissions, and especially the semantics of permissions on copy vs. move, are the domain of network types. In many cases it helps a lot to be a mongrel from both worlds.
As the title of this site states, it is a real battle to keep up with the technology and an even bigger challenge to have a life along with that effort. On a fairly regular basis I realize this when a standard feature of a widely available tool or technology is virtually unknown and therefore unused. I am pretty sure that queries in Active Directory fall into this category.
In Active Directory Users and Computers you can create custom queries through the MMC that help you track down security problems that are very labor-intensive to find manually. In the Common Queries dialog you can even check a box to search for non-expiring passwords or disabled accounts. Disabled accounts aren’t very interesting, since the UI already gives you that list when browsing AD, but accounts set to bypass the password expiration rules are a perfect way for an outgoing administrator to create and preserve a backdoor.
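If you prefer the command line, the same hunt can be done with dsquery (part of the Windows Server 2003 admin tools). A sketch: the matching-rule OID below does an LDAP bitwise AND against userAccountControl, where 65536 is the DONT_EXPIRE_PASSWORD flag:

```bat
rem Accounts whose passwords never expire:
dsquery * domainroot -filter "(&(objectCategory=person)(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=65536))" -limit 0

rem Disabled accounts (the built-in shortcut):
dsquery user -disabled
```

Either way, the point is that the directory will answer questions like this directly; you just have to ask.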
Check it out, who knows what else you might find in there!
Windows Server 2003 Service Pack 1 has a new capability that you might want to look into called Quarantine VPN.
With this technique you can validate that all clients that connect to your VPN meet specific requirements before they actually get access to network resources. Microsoft has been doing this on their network for quite a while now and they have finally given everyone else that uses their products the same capability.
For details on how to implement it and a more in depth overview on Quarantine VPN read this Technet article.
The concept of Least Privilege is applied to developers and software testers all the time to advocate that the application be developed and tested using the lowest privileged account possible to get the job done. For our purposes (network administration), I am referring to using administrative accounts for administration only and regular user accounts for everything else including word processing, research (aka web browsing) or the ever popular solitaire!
This is about using the proper tool for the job. If you wanted to trim some leaves from a tree, you would be thought a bit odd if you decided to use a chainsaw, especially if the same job could be done easily with a pair of scissors. Why is this something almost everyone recognizes as inappropriate? Because the potential for damage is huge! There are certainly people out there who can perform the task with the excessive firepower and not lose a limb, but why take the risk?

As an administrator, hitting the delete key by accident and inadvertently accepting the confirmation becomes a major problem, because the odds of you having the rights to carry out the delete are much higher than if you were logged in as a normal user. When you delete a directory on a network share, you can’t just go to the recycle bin on your client machine to undo the damage. Administrators even have the ability to change the permissions at the root of a system volume, which will usually render the operating system unusable (requiring a restore or rebuild). Why take these unnecessary risks when they could cost days of downtime? The most common justification is that keeping track of two logins is inconvenient. Now that network operating systems have tools like the Windows “Run As”, this is a hollow excuse.
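For example, logged in as your normal account, any admin task is one runas away (CORP\jdoe-admin is a hypothetical admin account; substitute your own):

```bat
rem Launch Active Directory Users and Computers under the admin account;
rem everything else on the desktop keeps running as the normal user.
runas /user:CORP\jdoe-admin "mmc dsa.msc"
```

You get prompted for the admin password, that one window runs elevated, and your solitaire game stays safely unprivileged.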
See, developers and network professionals aren’t that different after all!