The other day, there was a news story about hackers who took over several U.S. government web sites. Most people, I would imagine, reacted in one of two ways: they thought that the hackers were very good, or that the "government" was really incompetent. And the story undoubtedly confirmed a widespread fear: that cyberspace is a very bad neighborhood where lawlessness abounds and no one is safe.
Are any of these beliefs accurate? Are all of them accurate? As is generally the case, none are completely correct, but all have an element of truth.
We can answer the first question easily enough. Yes, some hackers are excellent. They're capable of taking a new application or protocol and finding holes that completely eluded the designers. But such hackers are a small minority. Most are copycats, so-called "script kiddies", who run canned programs that exploit known flaws. As we shall see, this distinction matters; a large part of running a secure system rests on it.
But if typical hackers aren't that good, is the "government" --- more accurately, the system administrators who run the various machines affected --- that bad? Again, this will become clear later.
The last point, the notion that the Internet is inherently very dangerous, is the most interesting. If you hear this from a technical person, the same idea will be expressed somewhat differently: that the protocols used on the Internet are flawed, and that if the designers had paid proper attention to security, we wouldn't have any problems today. The corollary is that with a bit of willpower, we could deploy newer and better protocols, and fix the problem. As the following quote from a popular Web page indicates, many businesses appear to support the idea:
The context, of course, was an explanation of why it's safe to shop on this particular Web site. The page goes on to cite the large numbers of people who have shopped there without any problems. The implication is that cryptography is both necessary and sufficient to resolve all security concerns.
That cryptography is needed is almost beyond argument. Given the existence of so-called "password sniffers" --- eavesdropping programs that pick up passwords in transit --- there is little doubt that analogous programs would have been developed to steal credit card numbers. In fact, it's a simpler problem; passwords are generally sent a character at a time, whereas credit card numbers are easily recognized and are likely to be contained in a single large packet. But to claim that cryptography is sufficient is an exaggeration.
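To see why card numbers are so easy to pick out of captured traffic, consider the shape they take: a run of 13 to 16 digits that satisfies the Luhn checksum, usually arriving in a single form submission. The sketch below is purely illustrative (the function names and sample text are mine, not anything from the story above); it shows how little logic such a recognizer needs.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def card_like(text: str):
    """Yield substrings that have the shape of a credit card number."""
    for match in re.finditer(r"\b\d{13,16}\b", text):
        if luhn_ok(match.group()):
            yield match.group()

# A password typed one keystroke per packet never presents such a pattern;
# a card number submitted in a single form post does.
print(list(card_like("order total 42, card 4111111111111111")))
```

A password, by contrast, offers no such structure to match against, which is why stealing one takes the extra work of reassembling keystrokes.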
Ignoring the flaws in typical Web cryptography --- those are inherent in the nature of the human interactions when using the Web; it's doubtful that any other designers could have done better --- the real threat comes from the company's very success: it has many credit card numbers stored on its site. Anyone who successfully penetrates it can steal them by the millions. In other words, the use of encryption and authentication --- which is about all that better network protocols could do for us --- protects the data in transit, while leaving it unprotected on the destination host. Only if some other form of cryptography could prevent every way of breaking into that host could we say that better network protocols would do the trick. From a security perspective, then, the Internet protocols are designed about as well as they could be; any other network of comparable power would have very similar security issues.
But if the Internet is a dangerous place, and the problem isn't with its design, what is the problem? What is wrong with our network security? That's a trick question, though; the right question is "what is wrong with our hosts?" While there are problems attributable to the network itself --- and these are the problems most easily fixed by cryptography --- what we generally see is the use of the Internet to exploit host security problems. The distinction can be seen most easily by asking this question of any security hole: if the Internet did not exist, could a local user exploit that hole to gain privileges? In most cases, the answer is "yes". In other words, the Internet has provided the access, rather than being an inherent part of the problem.
This realization --- that host security is the real issue --- is the key to achieving security on the Internet. All the cryptography in the world won't protect a machine that is insecure. The trick, then, is knowing how to protect hosts --- and it's not trivial.
One solution, of course, is to limit what a host can do. If a program isn't available, it isn't a security risk. But deciding what programs should and shouldn't run on a given computer is a delicate task; one has to balance functionality against security. Indeed, given the intertwined nature of services on a modern operating system, many programs you need won't run without some others you regard as more dubious. Thorough knowledge is necessary when making such tradeoffs.
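One illustrative way to start such an assessment is simply to see what is reachable on the host. The sketch below is mine and rests on assumptions (only local TCP ports below 1024 are probed, and the "expected" list is hypothetical); it reports what answers on each port, not which program is behind it, but that is often enough to prompt the right questions.

```python
import socket

# Hypothetical policy: the only services we expect on this host.
EXPECTED = {22: "ssh", 443: "https"}

def listening(port: int, host: str = "127.0.0.1", timeout: float = 0.2) -> bool:
    """Return True if something accepts TCP connections on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for port in range(1, 1024):
    if listening(port):
        note = EXPECTED.get(port, "unexpected; is this service really needed?")
        print(f"port {port}: {note}")
```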
The purpose of a firewall --- the primary Internet security device in use today --- is now clear. A firewall shields risky services from hostile outsiders, while permitting their use by presumably-trustworthy insiders. In other words, it eases the problem by limiting access, and hence changing the risk/benefit equation. Firewalls, then, are not about network security; rather, they are communications interrupters. They limit access to risky host services.
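The point is easiest to see in miniature. The toy packet filter below is a sketch of mine, not a description of any real product, and the port list is only an example; notice that it does nothing to secure the risky services themselves, it merely decides whether a given conversation with them may take place.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str          # "inside" or "outside" in this toy model
    dst_port: int

PUBLIC_PORTS = {25, 80, 443}     # services we deliberately expose (illustrative)

def allow(pkt: Packet) -> bool:
    """Interrupt communication; do not pretend to secure the service."""
    if pkt.src == "inside":
        return True              # presumably-trustworthy insiders
    if pkt.dst_port in PUBLIC_PORTS:
        return True              # deliberately exposed services
    return False                 # risky services (NFS, portmapper, ...) stay hidden

print(allow(Packet("outside", 2049)))   # False: NFS shielded from outsiders
print(allow(Packet("inside", 2049)))    # True: insiders may still use it
```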
From this, we see the fundamental limitation of firewalls: since they don't provide security per se, anyone who can bypass them --- an insider, or an outsider who has found some way around or through the firewall --- can still exploit residual problems on nominally-protected machines. Firewalls are quite valuable, but they are not panaceas, and they must be properly placed to be useful.
One more point must be raised before we can answer the questions posed at the beginning: what is the nature of these security holes? It turns out that virtually all of them are bugs, either in code or in system configuration. If we could eliminate bugs, we could eliminate almost all security problems, and most of the rest could be fixed by cryptography.
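To make that concrete, here is a made-up fragment of the kind of coding bug involved; it is not taken from any real server, and the names are hypothetical. The only flaw in the first version is a missing check, yet it hands out any file on the system; the second version adds the one check that closes the hole.

```python
import os

DOCROOT = "/var/www/docs"   # hypothetical document root

def serve_buggy(requested: str) -> bytes:
    # BUG: a request like "../../../../etc/passwd" walks out of DOCROOT.
    with open(os.path.join(DOCROOT, requested), "rb") as f:
        return f.read()

def serve_fixed(requested: str) -> bytes:
    full = os.path.realpath(os.path.join(DOCROOT, requested))
    if not full.startswith(DOCROOT + os.sep):   # refuse paths outside the root
        raise PermissionError(requested)
    with open(full, "rb") as f:
        return f.read()
```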
Of course, we can't prevent bugs, especially in code supplied to us by others. But we can apply patches as they are developed. Most successful attacks, it turns out, rely on known holes, holes for which patches and work-arounds already exist. Certainly, that is the case for virtually all of the attacks launched by the script kiddies.
Beyond that, system configuration can make a big difference. This is especially true for containing penetrations. That is, suppose that an intruder has somehow gained access to a system. Can they be stopped at that point, before further damage is done? That generally depends on whether or not they can obtain root privileges, which in turn is critically dependent on local system configuration.
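One small, concrete piece of that configuration is whether the system offers an intruder an easy path to root, for example through a stray setuid-root program or a world-writable file that a privileged job later trusts. The sketch below is illustrative only (Unix-specific, and the directory to scan is an assumption); real audits use established tools, but the idea is the same.

```python
import os
import stat

def suspicious(path: str) -> list[str]:
    """Flag setuid-root and world-writable regular files under path."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                st = os.lstat(full)
            except OSError:
                continue
            if not stat.S_ISREG(st.st_mode):
                continue
            if st.st_mode & stat.S_ISUID and st.st_uid == 0:
                findings.append(f"setuid root: {full}")
            if st.st_mode & stat.S_IWOTH:
                findings.append(f"world-writable: {full}")
    return findings

for finding in suspicious("/usr/local"):
    print(finding)
```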
Host security, then, rests on four legs: bug-free code and application design, up-to-date system patches, good configuration, and the proper balance between functionality and security. Three of these four are the responsibility of the system administrator. You cannot have a secure system without good system administration.
Good system administration is not easy, however. Apart from internal pressures --- there is a perpetual tension between security on the one hand and functionality and ease of use on the other --- reliable information is hard to come by. Too many references are too vague and general, or try to cover too many different platforms. But the devil is in the details, and a routine vendor-supplied upgrade can overwrite carefully-tuned security mechanisms.
Were the system administrators to blame for the break-ins we described earlier? We don't know. But if they were, it was not because they were bad at their jobs. Rather, it takes great system administration to keep a machine secure, and even good system administration is hard.