The Cybersecurity Act of 2009
Four senators (Rockefeller, Bayh, Nelson, and Snowe) have recently introduced S.773, the Cybersecurity Act of 2009. While there are some good parts to the bill, many of the substantive provisions are poorly thought out at best. The bill attempts to solve non-problems, and it assumes that research results can be commanded into being by an act of Congress. Beyond that, there are parts of the bill whose purpose is mysterious, or whose content bears no relation to its title.
Let’s start with the good stuff. Section 2 summarizes the threat. If anything, it understates it. Section 3 calls for the establishment of an advisory committee to the president on cybersecurity issues. Perhaps that’s Just Another Committee; on the other hand, it reports to the president and "shall advise the President on matters relating to the national cybersecurity program and strategy". That’s good — but whether or not the president (any president!) actually listens to and understands its recommendations is another matter entirely.
Section 10 ("Promoting Cybersecurity Awareness") and Section 13 ("Cybersecurity Competition and Challenge") are innocuous, though I’m not convinced they’ll do much good. (I suspect that folks reading this blog already realize this, but I’ll state it explicitly anyway: the odds on anyone, whether in a "challenge" or not, finding a magic solution to the computer security problems are exactly 0. Most of the problems we have are due to buggy code, and there’s no single cause or solution to that. In fact, I seriously doubt if there is any true solution; buggy code is the oldest unsolved problem in computer science, and I expect it to remain that way.)
As an academic, I am, of course, in favor of more research dollars (Section 11). Once again, I’ll state the obvious: I would hope to benefit if that provision is enacted.
Section 21, on "International Norms and Cybersecurity Deterrance [sic] Measures", is problematic. I don’t think that lack of international norms or cooperation — to the extent that this provision might actually accomplish something — is much of a problem. But what does the bill mean by "deterrence"? There is no substance in that section. The proper role of government in dealing with cyber threats from abroad is indeed worthy of discussion; I’ve written about this elsewhere. This bill is silent on it, except for the title of this section.
I’m intrigued by Section 15, on risk management. The proper role of liability and insurance in cybersecurity has long been a topic of discussion; I would very much like to see a full-blown study of the question (probably by the National Academies), but I don’t think that can be done in one year.
I don’t know why the bill allots three years (Section 9) to implement DNSSEC; NIST already has that project well underway for the .gov zone. It would be good if the root zone and .com were signed, but I don’t think that that’s NIST’s responsibility. Calling for a review of IANA’s contracts to run the root zone of the DNS is just plain wrong; while IANA does administer the root zone, it does so under ICANN’s direction. ICANN is an international organization; a legislative attempt to wrest control of the root for the United States would not be well-received.
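For readers curious about what "signed" means in practice, here is a minimal sketch, using the third-party dnspython package (my choice of tool, not anything the bill specifies), that simply asks whether a zone publishes DNSKEY records at all:

    # Rough illustration only: checks whether a zone publishes DNSKEY records.
    # Requires dnspython (pip install dnspython); older versions use
    # resolver.query() instead of resolver.resolve().
    import dns.flags
    import dns.resolver

    def appears_signed(zone: str) -> bool:
        resolver = dns.resolver.Resolver()
        # Set the DO bit so the resolver asks for DNSSEC records.
        resolver.use_edns(0, dns.flags.DO, 4096)
        try:
            answer = resolver.resolve(zone, "DNSKEY")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return False
        return len(answer) > 0

    for zone in (".", "gov.", "com."):
        print(zone, "has DNSKEY records" if appears_signed(zone) else "has no DNSKEY records")

Publishing keys is the easy part, of course; the work that a three-year deadline glosses over is key management, re-signing, and maintaining the chain of trust from the root on down.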
So much for the sections I like. The bad parts of this bill, I fear, outweigh the good parts.
Section 17 has a good title — "Authentication and Civil Liberties Report" — but it worries me. It calls for a study on the feasibility of "an identity management and authentication program … for government and critical infrastructure information systems and networks." Such a system is a bad idea.
The idea seems to have come from the "Securing Cyberspace for the 44th Presidency" report, about which I’ve written earlier. True, this bill calls for "appropriate civil liberties and privacy protections", but a centralized authentication system is likely to lead to serious security risks. As a National Academies study noted, "A centralized password system, a public key system, or a biometric system would be much more likely to pose security and privacy hazards than would decentralized versions of any of these." (Disclaimer: I was part of the committee that wrote that report. Naturally, I’m not representing the Academy in this posting.) The 44th Presidency report wanted to ensure that certain actions were strongly tied to authorized individuals, but this approach simply won’t accomplish that goal. I say that for many reasons; for now, I’ll mention just one: consider the effect of a tailored virus that infected the computer of someone who is supposed to control critical infrastructure systems. That virus could do anything it wanted, with the proper person’s credentials.
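To make that concrete, here is a minimal sketch; the certificate paths and the URL are entirely hypothetical. Any code running in the authorized operator’s login session, a tailored virus included, can present exactly the credentials that legitimate control software would:

    # Illustrative only: the paths and URL below are made up.
    # The point is that the credentials are available to any process
    # running as the operator, malicious or not.
    import pathlib
    import ssl
    import urllib.request

    home = pathlib.Path.home()
    ctx = ssl.create_default_context()
    # Load the operator's client certificate, exactly as the real software would.
    ctx.load_cert_chain(certfile=str(home / "pki" / "operator.crt"),
                        keyfile=str(home / "pki" / "operator.key"))

    # To the server, this request is authenticated as the proper person;
    # nothing in the protocol distinguishes it from a deliberate action.
    with urllib.request.urlopen("https://controls.example.net/status",
                                context=ctx) as resp:
        print(resp.status)

However elaborate the identity management scheme, the authentication here succeeds because the credentials are genuine, even though the human behind them never acted.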
Sections 6 and 18 are seriously flawed. For one thing, they assume that there is some easily-distinguished set of crucial networks. I doubt that there is. The 1999 National Academies report Trust in Cyberspace (again, I was on the committee; again, I’m speaking only for myself) stated:

    The study committee believes that implementing a single MEII ["Minimum Essential Information Infrastructure"] for the nation would be misguided and infeasible. An independent study conducted by RAND (Anderson et al., 1998) also arrives at this conclusion. One problem is the incompatibilities that inevitably would be introduced as nonhardened parts of NISs are upgraded to exploit new technologies. NISs ["Networked Information Systems"] constantly evolve to exploit new technology, and an MEII that did not evolve in concert would rapidly become useless.

Looking more narrowly, we come to the same conclusion. Suppose that we only wanted to protect the water, power, and communications systems, and hence their networks, while other networks were under attack. How would spare parts be ordered, if the vendors’ factory networks weren’t functioning? Where would fuel come from, if trucking and shipping company networks were not protected? Could these companies even communicate with their employees, given how many rely on commercial ISPs for telecommuting? For that matter, these companies themselves rely on commercial ISPs to link their various locations. The ability of the President to "declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network" would be of dubious utility. (The political and social wisdom of granting such power is itself an interesting question; for today at least, I’ll concentrate on the technical issues.)

The report continues:

    A second problem with a single national MEII is that "minimum" and "essential" depend on context and application (see Box 5.1), so one size cannot fit all. For example, water and power are essential services. Losing either in a city for a day is troublesome, but losing it for a week is unacceptable, as is having either out for even a day for an entire state. A hospital has different minimum information needs for normal operation (e.g., patient health records, billing and insurance records) than it does during a civil disaster. Finally, the trustworthiness dimensions that should be preserved by an MEII depend on the customer: local law enforcement agents may not require secrecy in communications when handling a civil disaster but would in day-to-day crime fighting.
Section 6 has other questionable provisions. 6(a)(1) calls for research in cybersecurity metrics. Research is a fine thing and security metrics are an active research area. Why should asking NIST to focus on this result in new answers? I’ve asserted that the most interesting question — how secure is a given piece of software — is not answerable, even in principle. Known weaknesses (see 6(a)(2) and 6(a)(3)) aren’t very interesting; if a site hasn’t fixed them, it’s generally because of overriding concerns, such as budget, backwards compatibility, or the sheer difficulty of updating a large-scale production system without breaking the applications you’re trying to run.
6(a)(4) is just strange. Yes, configuration management is difficult and security-relevant. That doesn’t mean that a standard configuration language would solve such problems. Why, for example, should the proper security settings for a web browser bear any relationship whatsoever to the settings of a laptop’s built-in firewall? Neither bears any particular relationship to permission settings for a database server, let alone permissions within the database itself. It’s not just comparing apples to oranges; it’s comparing apples to magnetic alloys of neodymium or some such. There might be a small benefit to having one parser, but the real problem is the policies being configured, rather than the language. Perhaps the goal is to make it easy to swap out one box and get a different model that does the same thing, but that simply won’t work; the new box will have different concepts, and hence different secure configuration requirements.
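To see how little a common syntax buys, write the three kinds of settings in one notation; the names below are invented for illustration, not taken from any real product:

    # Three "security configurations" expressed in one notation (Python
    # dictionaries standing in for a hypothetical standard configuration
    # language). The keys are invented; no real product is being quoted.
    browser = {
        "block_third_party_cookies": True,
        "javascript": "ask",
        "remember_passwords": False,
    }
    host_firewall = {
        "default_inbound": "deny",
        "allow": [{"proto": "tcp", "port": 22, "from": "10.0.0.0/8"}],
    }
    database = {
        "grants": [{"role": "analyst", "table": "payroll", "privileges": ["SELECT"]}],
    }
    # One parser reads all three, but they share no concepts at all,
    # and none of the hard policy decisions get any easier.
    print(set(browser) & set(host_firewall) & set(database))   # prints set()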
The same can be said for 6(a)(5), on standard software configurations. (That and some other sections apply to "grantees", among others. Does that mean that NIST will have to set standards for NetBSD, to accommodate people like me? Or does it mean that I can’t run NetBSD, despite the threat posed by software monocultures?)
A vulnerability specification language (6(a)(6)) isn’t a bad idea, though I note that such a thing is inherently OS-dependent. Two things are necessary, though, to make it useful: sufficient knowledge of what components are implicated, and sufficient knowledge of what the vulnerability is, to permit realistic assessment of the actual risk to a given site. I’ll give a concrete example: the system I’m typing this on has a version of Ghostscript with a buffer overflow when processing PDF documents. Yes, that sounds serious — except that I never use Ghostscript to read external PDFs; I use a variety of other programs. To me, then, there is no risk.
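As a toy sketch of what a useful record would have to carry, here is the Ghostscript example above in machine-readable form; the field names, version numbers, and matching rule are mine, not drawn from any existing standard:

    # Invented format, for illustration only.
    vulnerability = {
        "id": "EXAMPLE-2009-0001",
        "component": "ghostscript",
        "affected_versions": {"8.63", "8.64"},
        "trigger": "untrusted pdf input",   # the condition under which the flaw is reachable
        "impact": "buffer overflow",
    }

    this_host = {
        "installed": {"ghostscript": "8.64"},
        # How this machine actually uses the component: the knowledge that turns
        # "a vulnerability exists" into "this site is (or is not) at risk".
        "usage": {"ghostscript": ["postscript from local print jobs"]},
    }

    def at_risk(vuln, host):
        version = host["installed"].get(vuln["component"])
        if version not in vuln["affected_versions"]:
            return False                    # the vulnerable code isn't even present
        return vuln["trigger"] in host["usage"].get(vuln["component"], [])

    print(at_risk(vulnerability, this_host))  # False: installed, but the PDF path is never used

The version check alone says "vulnerable"; only the usage information turns that into a realistic statement about the actual risk.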
Section 6(a)(7) sounds great — national compliance standards for all software — but it’s doomed. We’ve been down that road before, ranging from the Orange Book to the Common Criteria. All of these projects tried to establish standards and evaluation criteria for trusted software systems. The problem is that building and testing such systems, and going through external evaluations, are slow and expensive processes. Far fewer systems were evaluated than should have been, because purchasers wanted to buy cheap commercial hardware and software. The result was an endless set of waivers. Is the government willing to pay premium prices for all of its systems? Let me rephrase the question: will each and every government agency be willing to spend its own budget dollars on such systems, and will Congress appropriate enough money? Allow me to express serious doubt. "C2 by ’92" (an attempt by DoD to enforce minimal levels of security via use of C2-level systems by 1992) never went anywhere; I don’t think this one will succeed, either. There are many further reasons for skepticism — who will pay for private-sector deployments; what security model is appropriate (the Orange Book was geared to the military classification model, which is simply wrong for most civilian use); whether the flaws are in the OS at all; and more — and we can’t just legislate useful, usable standards into being. Legislation may be appropriate when we know the goal (we don’t), or when we have good reason to believe we’ll know it and can reach it in not very many years. Neither is the case here.
I could go on and on. Section 7, for example, calls for licensing of cybersecurity professionals. What is that supposed to do? The big flaws lie not in the ways we configure our firewalls and crypto boxes; rather, they’re in the software we choose to run, and in management that doesn’t listen to (or doesn’t understand) security warnings. Are the authors of the legislation concerned about sabotage by security folks who aren’t trustworthy? I’d start by worrying about supply chain vulnerabilities in hardware and software.
It’s fair to ask what I would recommend instead. Suppose I were drafting a bill or an executive order. Suppose (heaven help us all) President Obama appointed me as his National Cybersecurity Advisor. What would I suggest? A full answer would call for a much longer post than this; indeed, it would probably take at least a full-fledged technical paper and perhaps a book. The short answer is that just as there is no royal road to geometry, there is no presidential or Congressional road to cybersecurity. You have to do it step by step, system by system. Things we can do today — more cryptography, following industry best practices, and so on — are the low-hanging fruit; while we should do more of these, such things are demonstrably insufficient. A more drastic move is to accept that there are some things we just can’t do safely at any reasonable cost: the complexity will get us. We need to be more humble in our designs. At this point, to a first approximation all computer systems are interconnected; we cannot realistically hope to limit the spread of certain attacks. We can make progress if and only if we accept that as the starting point, ask "what then?", and build our systems accordingly.
The Open Source Quality Challenge
I realized this morning that I had to upgrade Firefox — again. It seems that 3.0.9 had a security problem. It’s less than a week since the last time I had to upgrade: 3.0.8 had security problems, too. In fact, in the year or so that Firefox 3 has been out, there have been about 50 official security advisories, 30 rated critical or high severity. What’s going on?
We have known for a long time that most security holes are simply a particular form of bug. The corollary, of course, is that reducing bugs in general is a good way to reduce the incidence of security problems. Is Firefox too buggy, and hence too insecure?
We’ve also known for a long time that good, structured development processes do work. They may be expensive in the short run, but they do pay off. This is the challenge for the open source movement: can it impose such discipline?
Microsoft committed publicly to security improvements several years ago. From where I sit, the effort has been working. Windows is neither bug-free nor security hole-free, and probably never will be; that said, it’s a lot better today than it would have been had Bill Gates not gotten religion about security. I’ve heard a number of theories about why that happened, but those aren’t important; what counts is the end result.
I’ve also heard the claim that Firefox has had fewer days of vulnerability. That sounds great, until you realize that one way to achieve that is by shipping patches quickly, without adequate testing. Consider this security advisory:
    One of the security fixes in Firefox 3.0.9 introduced a regression that caused some users to experience frequent crashes. Users of the HTML Validator add-on were particularly affected, but other users also experienced this crash in some situations. In analyzing this crash we discovered that it was due to memory corruption similar to cases that have been identified as security vulnerabilities in the past.

Was 3.0.9 released too quickly, necessitating the very rapid release of 3.0.10?
It has been said that "given enough eyeballs, all bugs are shallow". That may be true. What we need now is a way for the many eyeballs to prevent the bugs in the first place. It won’t be easy. Submitting to discipline is difficult for many, and fun for very few. Programmers like to write code, not requirements, design documents, test scripts, and the like. I fear, though, that there are no other choices. Just as there is no royal road to geometry, I fear there is no royal road to correct software.
I’m a fan of open source software. Indeed, not only am I writing this on a machine running an open source operating system, NetBSD; I’m a NetBSD developer (albeit not a very active one). However, if the open source movement is to fulfill its promise, it needs to solve its buggy code problem. We have several decades of experience that teach us there are no magic solutions or tools that will solve that problem. We’re going to have to do it the hard way.