Flame On!
Here we go again; another instance of really sophisticated spyware has been reported, a system that is "so complex and sophisticated that it’s probably an advanced cyber-weapon unleashed by a wealthy country to wage a protracted espionage campaign on Iran". I won’t get into the debate about whether or not it’s really more impressive than Stuxnet, whether or not it’s groundbreaking, or whether or not Israel launched it; let it suffice to say that there are dissenting views. I’m more interested in the implications.
The first take-away is that this is the third major piece of government-sponsored malware that has been found, after Stuxnet and Duqu. All three were out there for quite some time before they were noticed. If there are three, there’s no reason to think there aren’t more. Like any other covert action, the most successful cyberattacks are never found, and hence never receive any publicity. (There’s an important reason for this: while defense against the generic concept of cyberattack is hard, defending against a known piece of malware is relatively straightforward; this is what antivirus companies do for a living. They’re not perfect, but by and large their systems work well enough.)
The second important point is that these three were found by commercial antivirus firms. This is perhaps not surprising, since all three apparently targeted countries that aren’t at the top of anyone’s list of highest-tech places. Government-grade malware targeting major powers — the U.S., Russia, China, Israel, Japan, much of Western Europe, etc. — would be much more likely to be analyzed by an intelligence agency; unlike commercial firms, intelligence agencies rarely publish their analyses. In other words, we don’t know how many other pieces of militarized malware have already been found, let alone how many others haven’t been detected yet. We do know that the US, Russia, and China regularly charge that others have been attacking their computers. (There’s been a lot of publicity about the attack against RSA, but almost no technical details have been released, unlike Stuxnet or Flame.)
Third, and most important: in cyberattacks, there are no accepted rules. (Some issues are discussed in a new New York Times article.) The world knows, more or less, what is acceptable behavior in the physical world: what constitutes an act of war, what is spying, what you can do about these, etc. Do the same rules apply in cyberspace? One crucial difference is the difficulty of attribution: it’s very hard to tell who launched a particular effort. That in turn means that deterrence doesn’t work very well.
It may be that these changes are for the better; according to that NY Times article, Stuxnet was seen as less risky than a conventional military operation. But we don’t know that, we don’t know the rules, and we don’t know how long it will take for a new world consensus to develop. We also have to face the fact that cyberweapons are a lot easier to develop than, say, nuclear bombs or ICBMs. While al Qaeda is not going to develop cyberweapons of the grade of Stuxnet or Flame any time soon—it’s not as easy to do as some scare stories would have you believe—it is far from clear that the defenses of, say, a water plant are as good as those of the Natanz centrifuge plant.
There needs to be a national and international debate on this topic. No one is going to supply details of their operations or capabilities, but the simple fact that they exist isn’t and shouldn’t be a secret. Basic US nuclear doctrine has never been concealed; why should this be different?
Restricting Anti-Virus Won't Work
In a blog post, Stewart Baker proposed restricting access to sophisticated anti-virus software as a way to limit the development of sophisticated malware. It won’t work, for many different and independent reasons. To understand why, though, it’s necessary to understand how AV programs work.
The most important technology used today is the "signature" — a set of patterns of bytes — of each virus. Every commercial AV program on the market operates on a subscription model; the reason for the annual payment is that the programs download a fresh set of signatures from the vendor on a regular basis. If you pay your annual renewal fee, you’ll automatically have very up-to-date detection capabilities. It has nothing to do with being a sophisticated defender or attacker.
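To make the signature model concrete, here is a minimal sketch of what pattern-based scanning amounts to; the patterns and names are invented for illustration, and real engines use far richer pattern languages and vastly larger databases:

    # Toy illustration of signature-based scanning; the "signatures" are made up.
    SIGNATURES = {
        "toy-virus-a": b"\xde\xad\xbe\xef\x13\x37",
        "toy-virus-b": b"DROP TABLE backups;",
    }

    def scan_file(path):
        """Return the names of any signatures found in the file at `path`."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    # The weekly "update" is mostly a refresh of SIGNATURES, not new scanning code.

The point is that the scanning engine itself changes slowly; it is the signature database that has to stay fresh.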
Yes, there are new versions of the programs themselves. These are developed for many reasons beyond simply increasing vendor profits. They may be designed for new versions of the operating system (e.g., Windows Vista versus Windows XP), have a better user interface, be more efficient, etc. There may also be some new functionality to cope with new concealment techniques by the virus writers, who haven’t stood still. To give one simple example, if detection is based on looking for certain patterns of bytes, the virus might try to evade detection by replacing byte patterns with equivalent ones. Suppose that the standard antivirus test file (a harmless file that every AV product is supposed to flag, and which simply prints a short message when run) was trying that. A simple variant might be to change the case of the letters printed. An AV program could try to cope by having more patterns, but that doesn’t work very well. In the test program, the message printed contains 30 letters, which means that there are 2^30 — 1,073,741,824 — variations. You don’t want to list all of them; instead, you have a more sophisticated pattern definition which can say things like "this string of bytes is composed of letters; ignore case when matching it against the suspect file". But that in turn means that the AV program has to have the newer pattern-matcher; at some point, that can’t be done in a weekly update, so you have to get newer code. To that extent, the suggestion almost makes sense, save for two problems: first, the overwhelming majority of folks with the newest versions are simply folks who’ve just purchased a new computer; second, updates that can’t be handled by today’s pattern-matching engines are comparatively rare. The real essence of "updated" is the weekly download of a new signature database, but any responsible computer owner or system administrator has that enabled; certainly, the software is shipped that way by the vendors.
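As a rough illustration of the variation problem (using an invented 30-letter string, not the real test file): listing every upper-/lower-case variant as a separate signature is hopeless, while a slightly smarter pattern definition handles them all at once.

    import re

    # An invented 30-letter message standing in for the test file's output.
    message = b"THISPROGRAMISONLYATESTMESSAGES"
    assert len(message) == 30

    # One signature per case variant would mean 2**30 entries...
    print(2 ** 30)          # 1073741824

    # ...whereas a richer pattern definition simply ignores case when matching.
    pattern = re.compile(re.escape(message), re.IGNORECASE)
    print(bool(pattern.search(b"...ThisProgramIsOnlyATestMessages...")))  # True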
The reliance on patterns, though, explains one reason why things like Stuxnet and Flame weren’t detected: they were rare enough that the vendors either didn’t have samples, or didn’t have enough information to include them in the signature database. Note carefully that a single instance of malware isn’t good enough: because of the variation problem, the vendors have to analyze the viruses enough to understand how they change themselves as they spread. This may require multiple samples since of course the virus writers try to make their code hard to analyze.
You might say that instead of everyone downloading new signatures constantly, the programs should simply upload suspect files to some central server. Again, there are problems. First, that would create a tremendous bottleneck; you’d need many, really large servers. Second, most companies don’t want internal documents sent to outside companies for scanning, but such documents can and have been infected. (I’ve seen many.) IBM has even banned Siri internally because they don’t want possibly proprietary information to be sent to Apple. Third, client machines have limited bandwidth, too (technologies like DSL and cable modems are designed for asymmetric speeds, with much more capacity downstream than upstream); they can’t send out everything they’re trying to work with. Fourth, although the primary defense is the AV program checking the file when it’s first imported, the weekly scan of an entire disk will pick up viruses that are matched by newly-installed signatures. The first machines that had Stuxnet, for example, weren’t protected by antivirus software at the time; now that its signatures are widely distributed, though, its presence can be detected after the fact. Fifth, you want to have virus checking even when you’re running without net access, perhaps when you’re visiting a client site but you’re not on their network. Sixth — I’ll stop here; that model just doesn’t work.
There’s a political reason why restricting AV vendors won’t work, too: it’s a multinational industry, and there are vendors that simply won’t listen to the US, NATO, etc. The New York Times ran an article that did more than speculate on possible links between Kaspersky Lab and Russian interests: "But the company has been noticeably silent on viruses perpetrated in its own backyard, where Russian-speaking criminal syndicates controlled a third of the estimated $12 billion global cybercrime market last year, according to the Russian security firm Group-IB."
On a technical level, the rise of ever-smaller and cheaper chips and devices has led to decentralization, a move away from large, shared computers and towards smaller ones. Decades ago, the ratio of users to computers — timesharing mainframes — was much greater than one; now, it’s less than one, as people use smart phones and tablets to supplement their laptops and desktops. Trying to move back towards centralization of security-checking is probably not going to work unless a countervailing technological trend, thin clients and cloud-based everything, gains more traction than has happened thus far.
There’s another technological wrinkle that suggests that restricting state-of-the-art antivirus might be counterproductive. Some AV programs use "anomaly detection" — looking for software that somehow isn’t "normal" — and upload unusual files to the vendor for correlation with behavior on other computers. (Yes, I know I said that companies won’t like that. I doubt they know; this is fairly new technology, and not nearly as mature as signature matching.) I wonder if this is one way that Kaspersky and others got their old samples of Flame:
"When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010."
If so, barring suspect sites from advanced AV technology would deny vendors early looks at some rare malware. (I won’t even go into the question of whether or not random computers can be authenticated adequately: they can’t.)
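As a rough sketch of the anomaly-detection idea (this is my own toy heuristic, not how Kaspersky or anyone else actually does it): flag files whose byte distribution looks unusual, for example unusually high entropy, which packed or encrypted executables often exhibit, as candidates for closer analysis or upload.

    import math
    from collections import Counter

    def byte_entropy(data):
        """Shannon entropy of the byte distribution, in bits per byte (0..8)."""
        if not data:
            return 0.0
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

    def looks_anomalous(path, threshold=7.5):
        """Crude heuristic: very high entropy often means packed or encrypted content."""
        with open(path, "rb") as f:
            data = f.read()
        return byte_entropy(data) > threshold

    # A real product would correlate many such signals across many machines
    # before deciding a file is worth uploading for analysis.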
I’ve oversimplified the technology and left out some very important details (and some more of my arguments); the overall thrust, though, remains unchanged: trying to limit AV won’t work. However…
Kaspersky wrote "As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected." There are two ways to do that. First, you — or the bad guys — could buy lots of different AV programs. That’s not entirely unreasonable; no one vendor has a perfect signature database; there are many viruses caught by some reputable products but missed by other, equally reputable ones. The other way, though, is to use one of the many free multiple scanner sites. There may be some opportunity for leverage there.
Password Leaks
The technical press is full of reports about the leak of a hashed password file from LinkedIn. Worse yet, we hear, the hashes weren’t salted. The situation is probably both better and worse than it would appear; in any event, it’s more complicated.
Let’s look at the issue of "salting" first. Salting a password file is a technique that dates back to a classic 1979 paper by Morris and Thompson. Without going into the technical details, it generally helps to protect a large, compromised hashed password file against guessing attacks. It’s often less help against a targeted attack, one where the bad guy wants your password. Furthermore, there are situations where conventional salting simply isn’t possible, notably in authentication protocols where both sides need to know a shared secret — typically, either the password itself or a hashed version of it — and there’s no way for one side to send a userid and the other to reply with the salt before authentication. Neither seems to be the case here, but beware of blanket statements that "passwords should always be salted".
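A minimal sketch of the difference, assuming nothing about LinkedIn’s actual scheme (SHA-256 is used here purely for illustration; a real design would prefer a deliberately slow function such as bcrypt or scrypt): without a salt, identical passwords hash identically, so one precomputed table of hashes of common passwords cracks every matching account at once; with a per-user salt, each guess has to be redone for each account.

    import hashlib, os

    def hash_unsalted(password):
        return hashlib.sha256(password.encode()).hexdigest()

    def hash_salted(password, salt=None):
        salt = os.urandom(16) if salt is None else salt
        return salt, hashlib.sha256(salt + password.encode()).hexdigest()

    # Unsalted: one precomputed dictionary of hash(common password) covers every user.
    print(hash_unsalted("letmein") == hash_unsalted("letmein"))   # True

    # Salted: the same password yields different stored records for different users,
    # so a guessing attack must redo its work per account.
    _, d1 = hash_salted("letmein")
    _, d2 = hash_salted("letmein")
    print(d1 == d2)                                               # False (salts differ)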
A second common theme in the uproar is "pick strong passwords". A strong password isn’t a bad idea per se; however, it’s not humanly possible to pick dozens (at least) of strong passwords and never write them down anywhere. More importantly, it is very far from clear that password-guessing attacks are the real problem, as compared with keystroke loggers, phishing sites, and server compromise. Florêncio et al. argue very convincingly that these other threats are far more important today. In fact, in this particular incident server compromise is a very real worry. Was a server compromised, and hence able to transmit all plaintext passwords as they were entered? That depends on both the LinkedIn architecture and the extent of the compromise. LinkedIn assuredly knows the former, though outsiders don’t; the latter may be a lot harder for anyone to ascertain. I can imagine many possible architectures and failure modes; some would imply risk of plaintext capture, while others would not. I can even come up with architectures where the password file could have been compromised without the username list being exposed. It would be unconventional to do things that way, but it would work.
Speaking about common designs and threat models, though, the odds are high that user names were compromised, too, and that accounts with weak passwords are therefore at risk from a guessing attack. We do not know if there was a deeper compromise that would expose strong passwords; if that happened, the accounts that are most at risk are those that have been active — ones you’ve logged in to — "recently", i.e., since the penetration. Less active accounts are at risk only from guessing. LinkedIn says they’ve reset a lot of passwords, but password reset and recovery schemes tend to be very weak. That implies you should go through that process very soon, and change your password from whatever it is they’ve set it to.
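To see why it is the weak passwords that are exposed, here is a toy dictionary attack against a set of unsalted hashes; the hashes and word list are invented, but the leaked LinkedIn file was reportedly unsalted SHA-1, so real attacks work the same way, just with enormous word lists and hardware.

    import hashlib

    def sha1_hex(s):
        return hashlib.sha1(s.encode()).hexdigest()

    # A tiny stand-in for a leaked, unsalted hash file (no usernames needed).
    leaked_hashes = {sha1_hex(p) for p in ("password1", "linkedin", "qwerty123")}

    # The attacker hashes candidate passwords and checks for membership; weak
    # passwords fall out immediately, strong ones never appear in any word list.
    wordlist = ["123456", "password1", "letmein", "qwerty123", "correct horse battery staple"]
    print([w for w in wordlist if sha1_hex(w) in leaked_hashes])  # ['password1', 'qwerty123']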
There’s another ironic point here. Once you log in to a web site, it typically notes that fact in a cookie which will serve as the authenticator for future visits to that site. Using cookies in that way has often been criticized as opening people up to all sorts of attacks, including cross-site scripting and session hijacking. But if you do have a valid login cookie and hence don’t have to reenter your password, you’re safer when visiting a compromised site.
There’s one more point: if you reuse passwords across different sites (and most people do, given the impossibility of following conventional advice), you’re at risk on very many other sites. In fact, password reuse is a far bigger problem than weak passwords.
Fixing Holes
According to press reports, DHS is going to require federal computer contractors to scan for holes and start patching them within 72 hours. Is this feasible?
It’s certainly a useful goal. It’s also extremely likely that it will take some important sites or applications off the air on occasion — patches are sometimes buggy (this is just the latest instance I’ve noticed), or they break a (typically non-guaranteed or even accidental) feature that some critical software depends on. Just look at the continued usage rate for Internet Explorer 6 — there are very valid reasons why it hasn’t been abandoned, despite its serious deficits of functionality, standards compatibility, and security: internal corporate web sites were built to support it rather than anything else.
In other words, deciding to adopt this policy is equivalent to saying "protecting confidentiality and integrity is more important than availability". That’s a perfectly valid tradeoff, and very often the right one, but it is a tradeoff, and the policy should recognize it explicitly. I imagine that there will be a waiver process (and the headline says "begin fixing" holes), but the story doesn’t say — and of course, if there are too many waivers the policy is meaningless.
One more point: sometimes, hardware upgrades are required. For example, Windows XP support ends in 2014; fixing security bugs found past that point will require switching to something more modern. Most older computers can’t support Windows Vista or Windows 7 — will the agencies have enough budget to do that?
Oh yes: this problem of long-delayed patch installation isn’t peculiar to the government. After all, the private sector is at least as far behind when it comes to, say, getting rid of IE 6. Again, there are reasons for such things to take a while, but that doesn’t mean they should be allowed to drag on indefinitely.