Security Watch
Revisiting the 10 Immutable Laws of Security, Part 1

Jesper M. Johansson

Back in 2000, Scott Culp published an essay called "10 Immutable Laws of Security." It is one of the best essays on security I've ever read. The information that he presented remains fundamental to all work in information security, and I recommend that you check it out (and even print it out) if you haven't already done so. You can find it at microsoft.com/technet/archive/community/columns/security/essays/10imlaws.mspx.

The essay was received with varying responses. Some people commented sarcastically that it was a way for Microsoft to avoid fixing what were perceived as many serious problems. Others considered it some of the most foundational writing on security and have plagiarized it at will due to its sheer importance. Some of my favorite responses, though, are those in which people have been inspired to create their own lists, such as the one that is available at edgeblog.net/2006/10-new-immutable-laws-of-it-security.

In the eight years since this essay was published, a lot has happened in the field of security. Virtually every major worm has been released in that time. We have entered a state of information warfare (with organized crime, political entities, and so on). A variety of new words and phrases have become part of the common vocabulary, including phishing, pharming, botnet, spyware, and cross-site request forgery. We have some of the most sophisticated rootkits ever created running on Windows. We have new OS releases that are dedicated, to a large extent, to security, and other operating systems where security is still largely ignored.

Social engineering has evolved into a major threat. Data breaches, such as when one major retailer exposed 94 million credit card numbers, have become familiar news stories (and yet people continue to shop at such stores). The United States and United Kingdom governments, collectively, have managed to misplace private information on a significant percentage of the inhabitants of the western world (and yet people still file private information with those governments). And a huge amount of security theater has entered our lives—and our airports.

I think it is time to take another look at the laws. Given all the changes we have seen come about in the first part of this century, can we still claim that these laws are immutable? If they are—if they have survived the past 8 years—it is probably safe to say they will survive the next 10.

In this three-part series, I will take a critical look at each of the 10 laws. This month I cover laws 1 through 3. In next month's installment, I will look at laws 4 through 7. Then in the final installment, I will examine laws 8 through 10 and offer some additional food for thought and comments that seem reasonable in light of what has happened since the laws were originally written in 2000.

Law 1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore.

This law states, effectively, that any software you execute on the computer can control that computer. When the immutable laws first came out, the current operating systems from Microsoft were Windows 98, Windows Me, and Windows NT 4.0. Today we have Windows Vista and Windows Server 2008.

On Windows 98 and Windows Me, any software you executed had full control over the computer. Windows NT 4.0 had a very solid underlying security model, but if you ran it as an Administrator, you effectively downgraded its isolation model to that of Windows 98 and Windows Me. It was possible to run Windows NT 4.0 as a non-administrator, but it was quite painful and very few organizations did so (you could probably count the total number of organizations that did this without taking off your shoes).

Say you did run Windows NT 4.0 as a non-administrator. Did Law 1 even hold true when it was written? The answer is yes. First, Windows NT 4.0 had a large number of significant holes. For instance, there were permissions that could have been tighter, particularly on kernel objects and in the registry. There were also many types of attacks that hadn't been discovered yet but that experts expected would surface. For example, in 1999 people hadn't realized that having processes running as elevated users on the interactive desktop could compromise the computer. It was not until 2002, when Chris Paget published his white paper on Shatter attacks, "Exploiting the Win32 API," that this became mainstream knowledge (seclists.org/bugtraq/2002/Aug/0102.html).
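
To make the underlying problem concrete, here is a minimal sketch in C (the window title is a hypothetical placeholder, not a real target). Before Windows Vista introduced User Interface Privilege Isolation, window messages were delivered based solely on the window handle, with no check on who sent them, so any process on the interactive desktop could send messages to a window owned by a process running as LocalSystem. Shatter attacks abused exactly this by sending messages that carried pointers and callback addresses rather than something harmless:

#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    // Find a window on the interactive desktop by its title. The title
    // below is purely hypothetical; imagine it belongs to a service UI
    // running as LocalSystem.
    HWND hwnd = FindWindowW(NULL, L"Hypothetical Elevated Service Window");
    if (hwnd == NULL) {
        wprintf(L"Window not found\n");
        return 1;
    }

    // Delivery is based purely on the window handle. Pre-Vista, the
    // receiving process never learned, or checked, who the sender was.
    // A real Shatter attack would send messages carrying pointers and
    // callback addresses instead of a benign WM_CLOSE.
    SendMessageW(hwnd, WM_CLOSE, 0, 0);
    return 0;
}

Vista's integrity mechanism now blocks most messages sent from a lower-integrity process to a higher-integrity window, which is one reason this particular route has largely been closed off.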

Had Microsoft foreseen the Shatter attacks when Law 1 was drafted? No, not really. Microsoft had simply recognized a fundamental fact: there are very few true security boundaries that can keep an application executing on a computer from taking over that computer.

Windows Vista and Windows Server 2008 are two generations removed from Windows NT 4.0. Have they invalidated Law 1? Do any other operating systems invalidate it? It depends. There are certainly more solid security boundaries in the new OSs, and there were some experimental operating systems back in 2000 that had proper security boundaries. However, there are still only a few of those boundaries. For example, Code Access Security in the Microsoft .NET Framework is a security boundary. It is designed specifically to keep code executing within the sandbox from affecting the underlying operating system.

Iframes in Internet Explorer provide another security boundary, but one that governs only access between parts of content on a Web page, not access to the OS itself. Protected Mode in Internet Explorer, shown in Figure 1, is an OS-level security boundary. Its purpose is to prevent code that executes within the browser from affecting the underlying operating system without user action. There are a few other boundaries, such as standard user accounts, which are supposed to prevent one user account from affecting the underlying OS or any other user.


Figure 1 Internet Explorer 7 uses a security boundary called Protected Mode
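
To make that boundary a little more concrete, here is a minimal sketch in C, assuming Windows Vista or later, that asks the OS for the integrity level of the current process. Protected Mode Internet Explorer runs at Low integrity, a normal user process runs at Medium, and an elevated process runs at High:

#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
        return 1;

    // TokenIntegrityLevel is available on Windows Vista and later.
    BYTE buffer[64];
    DWORD len = 0;
    if (!GetTokenInformation(token, TokenIntegrityLevel, buffer,
                             sizeof(buffer), &len)) {
        CloseHandle(token);
        return 1;
    }

    // The integrity level is the last subauthority of the label SID:
    // 0x1000 = Low (Protected Mode), 0x2000 = Medium, 0x3000 = High.
    TOKEN_MANDATORY_LABEL *label = (TOKEN_MANDATORY_LABEL *)buffer;
    PSID sid = label->Label.Sid;
    DWORD rid = *GetSidSubAuthority(sid,
                     (DWORD)(*GetSidSubAuthorityCount(sid) - 1));

    wprintf(L"Integrity level: 0x%04lx\n", rid);
    CloseHandle(token);
    return 0;
}

A process running at Low integrity cannot write to objects labeled Medium or higher by default, which is the mechanism Protected Mode relies on.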

It is extremely important to understand what the term "security boundary" means. It does not mean there is an inviolable wall that is guaranteed to provide impermeable and indefinite isolation. What the term really means is that the software vendor, Microsoft for example, is responsible for fixing any violations of that boundary by means of a security patch. Software will always have bugs, and we will undoubtedly continue to discover more violations, which the software vendors will continue to patch. Over time, they should improve their software to prevent those vulnerabilities in the first place. That may be considered confirmation that Law 1 still holds.

There is one more crucial piece I need to consider, however. You may have previously noticed the phrase "without user action." Law 1 isn't really about shortcomings or vulnerabilities in software. It is really about vulnerabilities in people! The key phrase is "persuade you." If a bad guy can persuade you to run his program, he can probably persuade you to do so in a context that, by design, gives the program elevated privileges.

Even if you do not have administrative privileges, that may not matter. You, as a standard user, still have access to lots of juicy information: your bank files, love letters, pictures, videos, and confidential company data. All of this data is potentially interesting to an attacker, and all of it can be read without any elevated privileges. In terms of the information you manage on your computer, executing a malicious program hands everything over to the attacker. Therefore, if you define "your computer" as "the data you manage on your computer," you can ignore any discussion about privilege and simply conclude that Law 1 holds.
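
As a minimal sketch of that point, here is a short C program that does nothing a standard user isn't already allowed to do: it simply enumerates the user's Documents folder. A malicious program running in the same user context could read, and quietly send elsewhere, every file it lists, without ever needing elevation:

#include <windows.h>
#include <shlobj.h>
#include <stdio.h>

int wmain(void)
{
    // Locate the current user's Documents folder. No administrative
    // privileges are required; the standard user's token already
    // grants read access to all of the user's own data.
    WCHAR docs[MAX_PATH];
    if (FAILED(SHGetFolderPathW(NULL, CSIDL_PERSONAL, NULL, 0, docs)))
        return 1;

    WCHAR pattern[MAX_PATH];
    swprintf_s(pattern, MAX_PATH, L"%s\\*", docs);

    // Enumerate everything in the folder. A malicious program could
    // just as easily open and copy the contents of each file.
    WIN32_FIND_DATAW fd;
    HANDLE find = FindFirstFileW(pattern, &fd);
    if (find == INVALID_HANDLE_VALUE)
        return 1;

    do {
        wprintf(L"%s\n", fd.cFileName);
    } while (FindNextFileW(find, &fd));

    FindClose(find);
    return 0;
}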

Even without splitting hairs over the definition of your computer, Law 1 still seems to have withstood the test of time. The point of Law 1 is to establish the fact that you, as the operator of a computer, must take responsibility for the software that you run on that computer. If you install a malicious driver or an evil video codec, you have handed off complete control of that computer to a criminal!

While software vendors can do a lot to prevent accidental compromise, which is really what the security boundaries are all about, intentional execution of malicious software will generally trump all those protective measures. This is why user education is critical in addition to ensuring that users do not have permission to perform administrative tasks. Therefore, it is safe to say that Law 1 holds true today, but possibly with a slight modification to the definition of your computer.

Law 2: If a bad guy can alter the OS on your computer, it's not your computer anymore.

On the surface, this law seems pretty straightforward. Obviously, if the bad guy can alter the operating system on your computer, you can't trust the computer anymore. But what is meant by OS has changed as computing needs have evolved. Many years ago I wrote a definition of the term OS for The Blackwell Encyclopedia of Management (managementencyclopedia.com). The definition said something about an OS managing access to input and output devices, access to hardware, and such things. Now, I never got a copy of the encyclopedia and my original submission has been lost to history, but I'm certain I didn't mention that an OS includes solitaire, a tablet input system, and video transcoders.

As computing has grown increasingly complex, OSs have grown to support many more capabilities. Further complicating the issue, OEMs often bundle their own assortment of additional software, which typically ranges from somewhat useful to outright harmful, and some of which duplicates functionality already built into the core operating system.

For example, a default installation of Windows Server 2008 Enterprise Edition has a disk footprint of more than 5 gigabytes. Windows Vista Ultimate Edition includes more than 58,000 files, comprising more than 10 gigabytes. And the OS is made up of more than just files. There are many thousands of configuration settings, and there are services (daemons, in UNIX parlance).

Any and all of these things are part of the OS. It is a collective term that includes all of the files, all of the configuration settings, and all of the services, as well as all of the runtime objects—semaphores, named pipes, RPC endpoints—created by the files and configuration settings. Even such highly abstract constructs as the system time and certain types of data, such as event log contents, should be considered part of the OS.

Given how the OS has grown and evolved, does modifying any one of those files really render the computer untrustworthy? The direct answer is no. For example, Windows Vista comes equipped with edlin.exe, the old line editor from MS-DOS. I can't say for sure, but I would bet a grande, triple-shot mocha that edlin.exe has only been invoked twice across all installed copies of Windows Vista since the operating system was released. Both times were about three minutes ago, when I was trying to remember the syntax. If someone modifies edlin.exe, or some other file that nobody ever uses, does it really mean that it is no longer your computer?

Edlin.exe is indisputably part of the operating system, but if nobody ever executes the file, how is a modification to it going to result in your computer being compromised? The answer, of course, is that it won't. Modifying a part of the operating system that is never used will not compromise your computer. And there are many parts of the OS that are never used.

The indirect answer, however, is yes. You cannot simply look at whether anyone executes a file to see whether its modification may result in a compromise of your computer. The problem is more subtle than that. Take a look at the access control list (ACL) for edlin.exe, shown in Figure 2.


Figure 2 The ACL for edlin.exe is very restrictive

The ACL for edlin.exe is very restrictive. Only the TrustedInstaller service has rights to modify that executable. This is very important: it means that there is an indirect effect of a bad guy modifying that file on your computer. It is not the act of modifying edlin.exe itself that means it is no longer your computer. Rather, it is the fact that the malicious user has the ability to modify edlin.exe that is key here. If the bad guy can modify that file, he can modify any file, which means you can no longer trust anything on the computer.
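
If you want to verify that for yourself, here is a minimal sketch in C, using the standard Win32 ACL APIs with error handling trimmed for space, that dumps the access-allowed entries on edlin.exe so you can see exactly which principals are granted write access:

#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

int wmain(void)
{
    PACL dacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;

    // Read the DACL on edlin.exe. Any user can read the ACL; the point
    // of the article is that only TrustedInstaller can change the file.
    DWORD err = GetNamedSecurityInfoW(
        L"C:\\Windows\\System32\\edlin.exe", SE_FILE_OBJECT,
        DACL_SECURITY_INFORMATION, NULL, NULL, &dacl, NULL, &sd);
    if (err != ERROR_SUCCESS) {
        wprintf(L"GetNamedSecurityInfo failed: %lu\n", err);
        return 1;
    }

    // Walk the access control entries and print who is granted what.
    for (DWORD i = 0; i < dacl->AceCount; i++) {
        ACCESS_ALLOWED_ACE *ace = NULL;
        if (!GetAce(dacl, i, (LPVOID *)&ace) ||
            ace->Header.AceType != ACCESS_ALLOWED_ACE_TYPE)
            continue;

        WCHAR name[256], domain[256];
        DWORD nameLen = 256, domainLen = 256;
        SID_NAME_USE use;
        if (LookupAccountSidW(NULL, (PSID)&ace->SidStart, name, &nameLen,
                              domain, &domainLen, &use)) {
            // Any entry whose mask includes write or delete rights
            // identifies a principal that could replace the file.
            wprintf(L"%s\\%s  access mask: 0x%08lX\n", domain, name, ace->Mask);
        }
    }

    LocalFree(sd);
    return 0;
}

The same GetNamedSecurityInfo call works for registry keys (SE_REGISTRY_KEY) and services (SE_SERVICE), which is worth keeping in mind as you read the next point.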

The OS will protect itself. Services are protected from unauthorized modification. Configuration settings are protected from unauthorized modification. Files on disk are protected against unauthorized modification. Even the semaphores and RPC endpoints used by the operating system are protected against unauthorized modification. If an attacker can modify any one of those protected objects, he can modify all of them and, quite possibly, already has.

This is a critical point. With several of the immutable laws, it is not the act of doing something that means your computer is compromised. What matters is that someone has the ability to do it. This is a point that must not be overlooked. In every aspect of computer security, you must always remember that capability is often far more important than any action actually taken.

If a computer is wide open to the Internet and goes unpatched for months, is it still trustworthy? No. That computer must be considered compromised. You just can't trust anything on a system that could have been compromised. (I said this same thing five years ago in my article "Help! I Got Hacked. Now What Do I Do?" available at technet.microsoft.com/library/cc512587.) If you are dealing with a skilled adversary, a compromised system may not even show any signs of having been compromised. The system may look perfectly normal.

Without question, Law 2 still holds. If an attacker has the ability to modify any protected object on your computer, it is not your computer any longer. Just remember, it is the ability to modify those objects that matters, not whether the computer has actually been attacked.

Law 3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.

This law was critical in 2000. A large number of people didn't fully understand what you could do with physical access to a system. In fact, even some government agencies that ought to have known a whole lot better failed to grasp this fundamental point. At that time, security guidance recommended setting the Allow shut down without logon option to disabled. That causes the Shut down… button on the logon screen to be grayed out. The theory behind this was that in order to shut down the computer, the user must first log on so there would be an audit record of who shut down the system.

This is a case study in flawed thinking. To have access to the Shut down… button on the logon screen, you actually have to sit down at the console. And if you are sitting at the console and you really want to turn off the computer, you can usually use that big round button on the front of the computer—or even the power cord. System off. No audit trail.

Windows 2000 included a security setting called Allow undock without logon, and this option is still available in Windows Vista, as shown in Figure 3. The principle was the same: in order to undock a laptop from its docking station, you must first log on to the system.


Figure 3 Why wouldn’t you steal both the computer and the dock?

The actual security value of this setting is extremely dubious. I think the theory was that if just anyone could walk up to the laptop and undock it, someone could easily steal the computer. Now, not that I would ever steal a laptop, but if I were going to do so, this preventative measure wouldn't deter me. I'd probably just grab the laptop and the docking station together as a single package. Heck, I'd just steal the network cable and the power cord as well. Talk about a meaningless security tweak!

The point about physical access really hit home when Petter Nordahl-Hagen created his Offline NT Password & Registry Editor. His creation was simply a Linux boot disk with an experimental NTFS file system driver that permitted read and write access to an NTFS volume. The software on the boot disk would mount the registry on the local machine and write a new password for the Administrator account into the SAM (Security Account Manager) hive. All you needed was physical access to the system and one or two minutes.

Tools like this one are exactly why Law 3 was written in the first place. In fact, Nordahl-Hagen's tool was used in many demos. Unfortunately, the point never quite made it through to the majority of audiences. I personally used the tool in some demos, but I stopped after I tired of being asked, "How do we make sure none of our users know about tools like that?" and "What is Microsoft doing to fix this problem?" An alarmingly large portion of the IT industry just didn't want to accept or understand that physical access would trump all else.

In that environment, Law 3 was a very important statement. Yet critics assailed it without mercy. It was derided as an attempt by Microsoft to avoid having to fix any problem that could, even in the remotest way, be tied to physical access. Law 3 was actually used in several cases to dismiss vulnerability reports, including the Offline NT Password & Registry Editor hack. The fact remains, however, that there is really only one way to block attackers who have physical access to the system: ensure they can't get at anything of value with that access.

That's where the potential chink in the armor of Law 3 lies. Since the laws were written, full disk encryption technology has become a viable solution. With full hard disk encryption, more correctly termed full-volume encryption, an entire volume (what is known as a partition in other operating systems) can be encrypted. So if the entire boot volume (in other words, the volume with the OS on it) is encrypted, does Law 3 still hold?

The answer is a firm probably. First, the decryption keys have to be stored somewhere. The simplest place to put the keys, and the default option in BitLocker, is in a Trusted Platform Module (TPM) chip in the computer. With the keys stored there, the computer can boot unattended. Once the computer is booted, a resourceful and well-funded attacker with permanent physical control over the computer can attack it in any number of ways. Because the computer can now be connected to an arbitrary network, there may be network-related ways to attack the system.

The attacker may, for example, read or write memory by means of a direct memory access (DMA) device, such as one attached to a FireWire (IEEE 1394) port. Once the computer is running, all bets are off when it comes to an attacker with physical access to that computer.

If the keys are not stored on the computer itself, the attack then hinges on whether the attacker can obtain or guess the keys. If a PIN code is used to boot the computer, the attacker can probably guess the code with relatively little effort. If the keys are stored on or derived from a separate hardware device, such as a USB flash drive or a one-time password fob, the attacker must have access to that separate device. There are certainly ways to obtain those keys, or to make life very uncomfortable for the person who has access to them, though the effort required is likely quite a bit higher.

You could also interpret Law 3 in a slightly different way: "If the attacker has physical access to your computer, it is likely the computer has been stolen and, therefore, it is unlikely you will get it back." From that perspective, it really is not your computer anymore. And from that perspective, it may not matter to the attacker whether he can access the data on your computer or not. However, that is not really what the spirit of Law 3 is about. The spirit of the law is that the attacker gains access to the data on your computer, not just to the computer itself.

All things considered, Law 3 does still apply. It is true that certain technologies available today go a long way toward stopping many attackers with physical access, greatly reducing the number of attackers able to get at the data on a computer that employs such a safeguard. That said, the capabilities of the attacker always define how much the attacker can actually achieve, and while new technologies address many of the 10 immutable laws, they do so only to an extent. Physical access still offers ways into a system, though those ways are more complex than they once were.

So far the immutable laws of security are proving very resilient, both to technological advances and to time. Of the first three, Law 3 is on the shakiest ground, yet even it still holds in some cases. It is also the law with the most readily available and robust mitigations. I'll be back in the next two issues of TechNet Magazine to continue this discussion and determine whether Laws 4 through 10 are still immutable.

Jesper M. Johansson is a Software Architect working on security software and a contributing editor to TechNet Magazine. He holds a Ph.D. in Management Information Systems, has more than 20 years of experience in security, and is a Microsoft Most Valuable Professional (MVP) in Enterprise Security. His latest book is the Windows Server 2008 Security Resource Kit.