Cybercrime and "Remote Search"
According to news reports, part of the EU’s cybercrime strategy is "remote search" of suspects’ computers. I’m not 100% certain what that means, but likely guesses are alarming.
The most obvious interpretation is also the most alarming: that some police officer will have the right and the ability to peruse people’s computers from his or her desktop. How, precisely, is this to be done? Will Microsoft and Apple — and Ubuntu and Red Hat and all the BSDs and everyone else who ships systems — have to build back doors into all operating systems? The risks of something like that are mind-boggling; they’re far greater than the dangers of the cryptographic key escrow schemes proposed — and mostly discarded — a decade ago. Even assuming that the access mechanisms can be made adequately secure (itself a dubious assumption), who will control the private keys needed? Police departments? In what countries? Will all European computers be accessible to, say, Chinese and Russian police forces? Or perhaps Chinese and Russian computers will need to be accessible to Europol. Cybercrime is, of course, international, and no one region has a monopoly on either virtue or vice.
Instead of back doors, perhaps law enforcement will exploit the many security holes that already exist in most systems. Will running a secure system then be seen as obstruction of justice? (Will all security researchers and practitioners suddenly be seen as accomplices to crime?) What about firewalls and home NAT boxes? Will you need a police permit to run one? Or will these need to be hacked as well? German police have tried this sort of hacking, but were blocked by a court order. There have also been reports of similar FBI efforts.
Possibly, a hybrid strategy will be used: physical entry will be necessary to plant some device or software (as in the Scarfo case). This is less risky in an electronic sense, but of course carries risks to the agents involved. Note that any of the three strategies discussed here is likely to be detectable by the target.
For purely electronic variants, the question of jurisdiction is also important. How can an EU police officer know that a target computer is located within the EU? Suppose that it’s located in the U.S. — would warrants be needed from both jurisdictions? Suppose the officer was wrong about the location and only obtained an EU warrant — would the evidence be admissible in court? (For reasons too complex to go into here, Dell and YouTube frequently think my web connections are coming from Japan.) What if the suspect was taking deliberate evasive measures?
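IP geolocation, the obvious tool for answering that question, is itself unreliable. As a rough sketch of what such a check looks like, here is a lookup using the geoip2 Python library against a MaxMind database; the file path and sample address are placeholders, and the answer is only the database’s guess, as my Japan example shows.

    import geoip2.database
    import geoip2.errors

    # Placeholder path to a MaxMind GeoLite2 country database, which
    # must be downloaded separately
    with geoip2.database.Reader("/var/db/GeoLite2-Country.mmdb") as reader:
        try:
            # The result is a guess, not ground truth: VPNs, proxies, and
            # stale address-registry data all produce wrong answers
            record = reader.country("198.51.100.7")  # placeholder address
            print(record.country.iso_code)           # e.g. "JP"
        except geoip2.errors.AddressNotFoundError:
            print("database has no opinion about this address")

A warrant process that hinges on an answer like that is resting on very soft ground.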
This is a complex topic with many ramifications. A lot more public discussion is necessary before anything like this is put into effect.
The Report on "Securing Cyberspace for the 44th Presidency"
A report "Securing Cyberspace for the 44th Presidency" has just been released. While I don’t agree with everything it says (and in fact I strongly disagree with some parts of it), I regard it as required reading for anyone interested in cybersecurity and public policy
The analysis of the threat environment is, in my opinion, superb; I don’t think I’ve seen it explicated better. Briefly, the US is facing threats at all levels, from individual cybercriminals to actions perpetrated by nation-states. The report pulls no punches (p. 11):
America’s failure to protect cyberspace is one of the most urgent national security problems facing the new administration that will take office in January 2009. It is, like Ultra and Enigma, a battle fought mainly in the shadows. It is a battle we are losing.
The discussion of the organizational and bureaucratic problems hampering the government’s responses strikes me as equally trenchant, though given that my own organizational skills are meager at best and I have little experience navigating the maze of the Federal bureaucracy, I can’t be sure of that… (Aside: although there were some very notable technologists on the committee, it seems to me to have been dominated by political and management types. A strong presence of policy people on this committee was, of course, necessary, but perhaps there should have been more balance.)
The report noted that the US lacks any coherent strategy or military doctrine for response. To be sure, the government does seem to have some offensive cyberspace capability, but this is largely classified. (The report cites NSPD-16 from 2002 (p. 23), but as best I can determine it is itself classified.) As noted (p. 26), the "deterrent effect of an unknown doctrine is extremely limited". (I was pleased that I wasn’t the only one who thought of Dr. Strangelove when reading that sentence; the report itself has a footnote that makes the same point.)
The report is, perhaps, too gentle in condemning the market-oriented approach to cybersecurity of the last few years. That may reflect political realities; that said, when the authors write (p. 50):
In pursuing the laudable goal of avoiding overregulation, the strategy essentially abandoned cyber defense to ad hoc market forces. We believe it is time to change this. In no other area of national security do we depend on private, voluntary efforts. Companies have little incentive to spend on national defense as they bear all of the cost but do not reap all of the return. National defense is a public good. We should not expect companies, which must earn a profit to survive, to supply this public good in adequate amounts.

they were too polite. How could anyone have ever conceived that it would work? The field wasn’t "essentially abandoned" to market forces; rather, the government appears to have engaged in an excess of ideology over reality and completely abdicated its responsibilities; it pretended that the problem simply didn’t exist.
I was rather surprised that there was no mention of a liability-based approach to security. That is, computer owners would be liable for attacks emanating from their machines; they in turn would have recourse (including class action suits) against suppliers. While there are many difficulties and disadvantages to such an approach, it should at least be explored.
The most important technical point in this report, in my opinion, is its realization that one cannot achieve cybersecurity solely by protecting individual components: "there is no way to determine what happens when NIAP-reviewed products are all combined into a composite IT system" (p. 58). Quite right, and too little appreciated; security is a systems property. The report also notes that "security is, in fact, part of the entire design-and-build process".
The discussion of using Federal market powers to "remedy the lack of demand for secure protocols" is too terse, perhaps by intent. As I read that section (p. 58), it is calling for BGP and DNS security. These are indeed important, and were called out by name in the 2002 National Strategy to Secure Cyberspace. However, I fear that simply saying that the Federal government should only buy Internet services from ISPs that support these will do too little. DNSSEC to protect .gov and .mil does not require ISP involvement; in fact, the process is already underway within the government itself. Secured BGP is another matter; that can only be done by ISPs. However, another recent Federal cybersecurity initiative — the Trusted Internet Connection program — has ironically reduced the potential for impact, by limiting the government to a very small number of links to ISPs. Furthermore, given how many vital government dealings are with the consumer and private sectors, and given that secured BGP doesn’t work very well without widespread adoption, US cybersecurity really needs mass adoption. This is a clear case where regulation is necessary; furthermore, it must be done in conjunction with other governments.
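To make the DNS half concrete, here is a sketch of the core DNSSEC check using the dnspython library. The zone name is a placeholder, and note that this fragment only verifies that the zone’s DNSKEY RRset is signed by the key the zone itself serves; a real validator must chain through DS records up to a configured trust anchor rather than trusting the zone’s own key.

    import dns.dnssec
    import dns.flags
    import dns.name
    import dns.rdataclass
    import dns.rdatatype
    import dns.resolver

    zone = dns.name.from_text("example.gov.")  # placeholder zone name

    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)   # request DNSSEC records

    resp = resolver.resolve(zone, "DNSKEY").response
    dnskey = resp.find_rrset(resp.answer, zone, dns.rdataclass.IN,
                             dns.rdatatype.DNSKEY)
    rrsig = resp.find_rrset(resp.answer, zone, dns.rdataclass.IN,
                            dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

    # Raises dns.dnssec.ValidationFailure on a bad or forged signature.
    # This checks only the self-signature on the DNSKEY RRset; full
    # validation walks the DS chain from the root down to this zone.
    dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
    print("DNSKEY RRset signature verifies")

The point is that this machinery lives in resolvers and zone operators, not in the ISP’s access link, which is why purchasing mandates on ISPs do so little for the DNS side.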
The scariest part of the report is the call for mandatory strong authentication for "critical cyber infrastructures (ICT [information and communications technology], energy, finance, government services)" (p. 61). I’m not sure I know what that means. It is perhaps reasonable to demand that employees in these sectors use strong authentication; indeed, many of us have advocated abolishing passwords for many years. But does this mean that I have to use "strong government-issued credentials" to sign up with an ISP or with an e-mail provider? Must I use these to do online banking? The report does call for FTC regulations barring businesses from requiring such credentials for "all online activities by requiring businesses to adopt a risk-based approach to credentialing". But what does that do? What if a business decides that risks are high? Is it then allowed to require strong authentication?
For that matter, the report is quite unclear on just what the goals are for strong authentication. It notes that "intrusion into DOD networks fell by more than 50 percent when it implemented Common Access Card" [sic] (p. 62). But why was that? Because people couldn’t share passwords? Because there were no longer guessable passwords? Because keystroke loggers had nothing to capture? Because there was accountability, rather than deniability, for certain actions? There is no guidance here. The benefits of "in-person proofing" are lauded because it "greatly reduces the possibility that a criminal can masquerade as someone else simply by knowing some private details". Quite true — but is the goal accountability or prevention of electronic identity theft? (It’s also worth noting that there are a variety of attacks — some already seen in the wild against online banking — that completely evade the nominal protections of strong authentication schemes. I don’t have space to discuss them here, but at a minimum you need secure operating systems (of which we have none), proper cryptography (possible but hard), automatic bilateral authentication with continuity-checking (relatively easy, but done all too rarely), and a well-trained user population (probably impossible) to deflect such attacks. That is not to say there are no benefits to strong authentication, but one should be cautious about looking for a panacea or rushing too quickly to cast blame on apparently-guilty parties without a lot more investigation.)
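To make one of those possible answers concrete: with a challenge-response token, nothing reusable ever crosses the keyboard or the wire, so a keystroke logger captures nothing of value. A minimal sketch using only the Python standard library; this illustrates the idea, not any particular deployed protocol.

    import hashlib
    import hmac
    import os

    def issue_challenge() -> bytes:
        # A fresh random challenge, used exactly once
        return os.urandom(16)

    def respond(secret_key: bytes, challenge: bytes) -> bytes:
        # The response is valid only for this challenge; capturing it,
        # unlike capturing a password, gains an eavesdropper nothing
        return hmac.new(secret_key, challenge, hashlib.sha256).digest()

    def verify(secret_key: bytes, challenge: bytes, response: bytes) -> bool:
        return hmac.compare_digest(respond(secret_key, challenge), response)

    # Demo: token and server share a key; in real hardware tokens the
    # key never leaves the device
    key = os.urandom(32)
    challenge = issue_challenge()
    assert verify(key, challenge, respond(key, challenge))

Note that this defeats only credential capture; the in-the-wild attacks mentioned above simply wait until after the user has authenticated and then hijack the session.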
There are cryptographic technologies that permit multiple unlinkable credentials to be derived from a single master credential. This would allow for strong authentication and protect privacy (the latter an explicit goal of the report), but would perhaps do little for accountability. Should such technologies be adopted? Without more rationale, it’s impossible to say what the committee thinks. That said, the general thrust seems to be that centralized, strong credentials are what is needed. That directly contradicts a National Academies study (disclaimer: I was on that committee) that called for multiple, separate, unlinkable credentials, since they are better for both security and privacy.
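A toy sketch of the unlinkability property alone, in Python; real schemes use blind signatures or zero-knowledge proofs so the relying party can also verify that a derived credential is legitimate, which this fragment does not attempt.

    import hashlib
    import hmac

    def derive_credential(master_secret: bytes, relying_party: str) -> bytes:
        # Each relying party sees a stable, party-specific identifier;
        # two parties cannot correlate their values without the master
        # secret
        return hmac.new(master_secret, relying_party.encode(),
                        hashlib.sha256).digest()

    master = b"placeholder master credential"
    bank_id = derive_credential(master, "bank.example")
    isp_id = derive_credential(master, "isp.example")
    assert bank_id != isp_id  # unlinkable across sites

Note how this gains privacy against the relying parties, while accountability then rests entirely on whoever holds the master secret; that is precisely the trade-off the report never analyzes.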
This report calls for protecting privacy. It offers no guidance on how to do that; it instead advocates policies that will compromise privacy. And instead of describing "the spread of European-style data privacy rules that restrict commercial uses of data pertaining to individuals" (p. 68) as a legitimate concern, it should have endorsed such rules. There are only two ways to protect privacy in the large: technical and legal. If the technical incentives are going to push one way, i.e., towards a single authenticator and identity, the legal requirements must push the other. It is not enough to say that "government must be careful not to inhibit or preclude anonymous transactions in cases where privacy is paramount" (p. 64), when technologies such as third-party cookies can be used to track people. This can include the government; indeed, www.change.gov itself uses such cookies.
It is worth stressing that government violations of privacy are not the only issue. The government, at least, is accountable. The private sector is not, but dossiers compiled by marketeers are at least as offensive. Sometimes, in fact, government agencies buy data from the private sector, an activity that has been described as "an end run around the Privacy Act".
Will social networking sites require this sort of ID, in the wake of the Lori Drew case and the push to protect children online? If so, what will that do to social privacy? What will it do to, say, the rate of stalking and in-person harassment?
Make no mistake about it, this "voluntary" authentication credential is a digitally-enabled national identity card. Perhaps such a card is a good idea, perhaps not; that said, there are many questions that need to be asked and answered before we adopt one.
There’s another scary idea in the report: it suggests that the U.S. might need rules for "remote online execution of a data warrant" (p. 68). As I noted the other day, that is a thoroughly bad idea that can only hurt cybersecurity. More precisely, having rules for such a thing is a good idea (if for no other reason than because insecure computers will be with us for many years to come), but wanting an ongoing capability to actually use such things in practice is very, very dangerous.
This brings up the report’s biggest omission: there is no mention whatsoever of the buggy software problem. Quite simply, most security problems are due to buggy code. The hundreds of millions of "botted" computers around the world are not infected because the attacker stole a password for them; rather, there was some sort of flaw in their mailers, browsers, web servers, social networking software, operating systems, or what have you. Ignoring this when talking about cybersecurity is ignoring the 800 — nay, 8000 — pound gorilla in the room.
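For concreteness, here is a sketch of one of the mundane bug classes behind those infections, in Python; the function names are mine.

    import subprocess

    def ping_unsafe(host: str) -> None:
        # BUG: attacker-controlled input reaches the shell, so a "host"
        # of "8.8.8.8; rm -rf ~" runs two commands instead of one
        subprocess.run("ping -c 1 " + host, shell=True)

    def ping_safe(host: str) -> None:
        # An argument vector is never shell-interpreted; the hostname
        # stays plain data no matter what it contains
        subprocess.run(["ping", "-c", "1", host])

No amount of strong authentication fixes code like the first function; it is wrong regardless of who is logged in.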
The buggy software issue is also the problem with the discussion of acquisitions and regulation (p. 55). There are certainly some things that regulations can mandate, such as default secure configurations. Given how long the technical security community has called for such things, it is shameful that vendors still haven’t listened. But what else should be done to ensure that "providers of IT products and systems are accountable and … certify that they have adhered to security and configuration guidelines"? Will we end up with more meaningless checklists demanding anti-virus software on machines that shouldn’t need it?
Of course, I can’t propose better wording. Quite simply, we don’t know what makes a system secure unless it’s been designed for security from the start. It is quite clear to me that today’s systems are not secure and cannot be made secure. The report should have acknowledged this, and added it to the call for more research (p. 74).
There’s another dynamic that any new government network security organization needs to address: the tendency within government itself to procure insecure systems. The usual priorities are basic functionality, cost, and speed of deployment; security isn’t on the radar. Unless Federal programs — and Federal program managers — are evaluated on the inherent security of their projects (and of course by that I do not mean the usual checklists), the effort will not succeed. The report should have acknowledged this explicitly: more security, from the ground up, will almost certainly cost more time and money. It will require more custom-built products; fewer COTS products will pass muster. To be sure, I think it will save money in the long run, but when budgets are tight will it be security that gets short shrift?
On balance, I think the report is an excellent first step. That said, some of the specific points are at best questionable and probably wrong. We need public debate — a lot of it.
Update: I’ve changed the URL to the report; it seems to have been moved.
Another Cluster of Cable Cuts
News reports say that there has been another cluster of undersea cable cuts affecting the Middle East and South Asia. A cluster early this year gave rise to many conspiracy theories, though that incident was ultimately linked to a ship dragging its anchor.
Naturally, the conspiracy theories are afoot again. If you want to participate, note that the cuts are causing traffic to be re-routed through the United States and elsewhere.
The real explanation seems more benign. The BBC story notes that "some seismic activity was reported near Malta shortly before the cut was detected". A quick web search shows a magnitude 5.3 quake near Sicily. (A search shows only one other recent quake in the area, in Greece on December 13.) From the timing, the Sicilian quake didn’t cut the cables directly; it could, however, have left conditions ripe for an underwater landslide.
Companies, Courts, and Computer Security
The newswires and assorted technical blogs are abuzz with word that several researchers (Alex Sotirov, Marc Stevens, Jake Appelbaum, Arjen Lenstra, Benne de Weger, and David Molnar) have exploited the collision weakness in MD5 to create fake CA certificates that are accepted by all major browsers. I won’t go into the details here; if you’re interested in the attack, see Ed Felten’s post for a clear explanation. For now, let it suffice to say that I think the threat is serious.
What’s really interesting, from a broader perspective, is the entire process surrounding MD5’s weakness. We’ve known for a long time that MD5 is weak; Dobbertin found some problems in it in 1996. More seriously, Wang et al. published collisions in 2004, with full details a year later. But people reacted slowly — too slowly.
Verisign, in particular, appears to have been caught short. One of the CAs they operate still uses MD5. They said:
The RapidSSL certificates are currently using the MD5 hash function today. And the reason for that is because when you’re dealing with widespread technology and [public key infrastructure] technology, you have phase-in and phase-out processes that can take significant periods of time to implement.

But we’re talking about more than four years! Granted, it might take a year to plan a change-over, and another year to implement it. That time is long gone. Furthermore, the obvious change, from MD5 to SHA-1, is not that challenging; all browsers already support both algorithms. (Changing to a stronger algorithm, such as SHA-256, is much harder. Note that even SHA-1 is considered threatened; NIST is running a competition to select a replacement, but that process will take years to finish.)
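Checking your own exposure is trivial, which makes the multi-year lag harder to excuse. Here is a sketch using the pyca/cryptography Python library to inspect which hash algorithm a CA used to sign a certificate; the filename is a placeholder.

    from cryptography import x509

    # Load a PEM-encoded certificate (placeholder filename)
    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # "md5" means the certificate is exposed to exactly this collision
    # attack; "sha1" is also living on borrowed time
    print(cert.signature_hash_algorithm.name)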
The really scary thing, though, is that we might never have learned of this new attack:
Molnar says that the team pre-briefed browser makers, including Microsoft and the Mozilla Foundation, on their exploit. But the researchers put them under NDA, for fear that if word got out about their efforts, legal pressure would be brought to bear to suppress their planned talk in Berlin. Molnar says Microsoft warned Verisign that the company should stop using MD5.

Legal pressure? Sotirov and company are not "hackers"; they’re respected researchers. But the legal climate is such that they feared an injunction. Nor are such fears ill-founded; others have had such trouble. Verisign isn’t happy: "We’re a little frustrated at Verisign that we seem to be the only people not briefed on this". But given that the researchers couldn’t know how Verisign would react, in today’s climate they felt they had to be cautious.
This is a dangerous trend. If good guys are afraid to find flaws in fielded systems, that effort will be left to the bad guys. Remember that for academics, publication is the only way they’re really "paid". We need a legal structure in place to protect security researchers. To paraphrase an old saying, security flaws don’t crack systems, bad guys do.