The RSA SecurID Problem
As has been widely reported, RSA suffered a serious security breach aimed at its SecurID product. The SecurID is a major product in its space ("token authentication", rather than the commonly reported "two factor authentication"; see below); a security problem with it would be a major issue indeed. But what, precisely, is the problem RSA experienced? They haven’t said, which of course has led to a lot of speculation. There are many possible scenarios here; some are serious and some are not. I’m going to lay out a few possibilities.
Note carefully: I am not saying that RSA made any of the bad decisions I’m outlining here. I hope they didn’t. I outline them solely to provide a list of questions people can ask their vendors. Again, I am not accusing RSA of having done anything wrong.
Authentication is conventionally divided into three types: something you know (e.g., passwords), something you are (biometrics), and something you have, such as some form of token. Two-factor authentication means using two of these three, to avoid problems from things like stolen tokens or worse. The normal way to use a SecurID these days is to combine its output with a PIN, i.e., a short password.
Fundamentally, a SecurID is a display, a clock T, a secret key K, and a keyed cryptographic hash function H, all in a tamper-resistant package. Every 30-60 seconds, the token calculates H(K, T) and displays the result on the screen. When you log in, you supply your userid, the displayed value, and a PIN. The system consults one database to map your userid to the serial number of your token; it consults another to find the secret key for your token. It then does the same H(K, T) calculation to make sure the results match what you sent; it also checks your PIN. If everything is ok, the login is successful. (Note: I’ve oversimplified things; in particular, I’ve left out some details that are quite essential to making the product successful in the real world. But none of those items are important to what I’m going to discuss here.)
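The flow above can be sketched in a few lines of Python. This is an illustrative sketch only: the actual SecurID algorithm is proprietary, so H is stood in for here by HMAC-SHA256, the 60-second interval is assumed, and all the names and databases (token_code, verify, USER_DB, KEY_DB, PIN_DB) are invented for this example.

```python
# Sketch of the token scheme described above; H is assumed to be HMAC-SHA256.
import hashlib
import hmac
import time

INTERVAL = 60  # assumed code lifetime in seconds

def token_code(key: bytes, now: float) -> str:
    """Compute H(K, T): hash the current time step under the token's key."""
    t = int(now // INTERVAL).to_bytes(8, "big")
    digest = hmac.new(key, t, hashlib.sha256).digest()
    # Truncate to a 6-digit display value, as a hardware token would.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# Server-side state: userid -> token serial, serial -> secret key, userid -> PIN.
USER_DB = {"alice": "000123"}
KEY_DB = {"000123": b"per-token secret key"}
PIN_DB = {"alice": "4921"}

def verify(userid: str, code: str, pin: str, now: float) -> bool:
    """Recompute H(K, T) on the server and check it, along with the PIN."""
    serial = USER_DB.get(userid)
    if serial is None:
        return False
    key = KEY_DB[serial]
    return hmac.compare_digest(code, token_code(key, now)) and pin == PIN_DB[userid]

now = time.time()
shown = token_code(KEY_DB["000123"], now)
print(verify("alice", shown, "4921", now))  # True
```

Note the two lookups the text describes: userid to serial number, then serial number to key. Everything rests on KEY_DB staying secret; that is the point developed below.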
The two most crucial security items here are the strength of the hash function and protection of the key database. Let’s take them one at a time.
If an enemy who observes H(K, T) can learn K, the whole scheme falls apart. (People shouldn’t even send that value except over an encrypted connection, but we know that it happens.) That said, at least older (alleged) versions of H have been cryptanalyzed. (I do not know if that is the current algorithm; the paper speaks of a 64-bit key, but assorted web pages and Wikipedia speak of a 128-bit key.) The easiest way to build a new, secure algorithm would be to use something like HMAC. Assorted web pages mention an AES-based solution; that’s another good choice.
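The HMAC option is attractive partly because it is a one-liner in most libraries. A minimal sketch, with invented key and time-step values: the naive alternative of hashing K concatenated with the message is a structurally fragile construction for Merkle-Damgård hashes such as SHA-256 (length extension being the best-known weakness), which is exactly what HMAC's nested design avoids.

```python
# Contrast a naive keyed hash with the HMAC construction the text recommends.
import hashlib
import hmac

K = b"token secret"                # invented key
T = (1234567).to_bytes(8, "big")   # invented time step

naive = hashlib.sha256(K + T).hexdigest()           # fragile: plain hash of K || T
keyed = hmac.new(K, T, hashlib.sha256).hexdigest()  # HMAC(K, T): the robust choice

print(len(keyed))  # 64 hex characters (a 256-bit output)
```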
Is the risk that the attackers have now learned H? If that’s
a problem, H was a bad choice to start with. In 1883,
Kerckhoffs
wrote
"Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient
tomber entre les mains de l’ennemi." (Roughly, "the system must not
require secrecy, and it must be able to fall into the enemy’s hands
without inconvenience.")
But that principle has been known for a long time, so it doesn’t seem like
the most likely choice. The original 1985 version of H is, as noted,
too weak, but that was a reflection of the algorithms and technology of
the time. Any newer algorithm would certainly be stronger; I am
not seriously worried about this possibility.
The second big risk is compromise of the confidentiality of the key
database. If RSA stored copies of customer databases (see below),
these could be at risk. That would be nothing short of a disaster
for any affected company. (Note, of course, that every customer has
to have its own database; if that version is poorly protected, it’s
game over for that customer.)
However — where does K come from, and how does it get into both
the token and the key database? There are many possibilities, and they
interact deeply with the manufacturing process: one has to ensure that
knowledge of the key somehow stays with the device and its serial
number. It is tempting to calculate K = G(K', serial_number), where K' is
a closely-held RSA secret and G is some other hash function. The risk
here is discovery of K’; any new devices would be at risk of compromise
if the attacker knew the serial numbers associated with some target
customer.
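That derivation scheme is easy to sketch, which is part of its appeal to a manufacturer. In this hedged illustration G is taken to be HMAC-SHA256, and K_PRIME and the serial numbers are invented values; the point is that anyone who learns K' can rederive every token's key offline.

```python
# Sketch of the derived-key scheme: K = G(K', serial_number).
import hashlib
import hmac

K_PRIME = b"closely-held factory master secret"  # the single point of failure

def derive_token_key(serial_number: str) -> bytes:
    """G(K', serial): deterministic, so no per-token key list is needed at the factory."""
    return hmac.new(K_PRIME, serial_number.encode(), hashlib.sha256).digest()

# Convenient for manufacturing: keys for any serial number on demand.
k1 = derive_token_key("000123")
k2 = derive_token_key("000124")
assert k1 != k2

# The flip side: an attacker holding K_PRIME who knows a target customer's
# serial numbers can recompute exactly the same keys.
assert derive_token_key("000123") == k1
```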
This is the really big question: how are the K values generated? Is there
some master secret with RSA that allows recalculation of them? If so,
any devices manufactured between when the compromise occurred and when
K’ was changed are much less secure than they should be.
It would be great if there were a new K' every day, and perhaps there is;
again, though, if the machine holding it was hacked, there’s a problem.
Note, of course, that no matter how K is generated,
even by a true-random number generator,
that value somehow
has to be kept with its token. That information could also be a target.
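The point about truly random keys is worth making concrete: randomness removes the master secret, but not the record. A brief sketch, with invented names (provision_token, factory_records) and the 128-bit key size the text mentions assumed:

```python
# Even truly random keys must be filed against their serial numbers at
# manufacture time; the resulting record is itself a target.
import secrets

def provision_token(serial_number: str, key_db: dict) -> bytes:
    """Generate a fresh random key and record it under the token's serial."""
    key = secrets.token_bytes(16)  # assumed 128-bit key, per the sizes cited above
    key_db[serial_number] = key
    return key

factory_records = {}  # this mapping must ship, somehow, to the customer's server
provision_token("000123", factory_records)
provision_token("000124", factory_records)
assert factory_records["000123"] != factory_records["000124"]
```

There is no way to shortcut this: with no derivation function, the serial-to-key mapping has to travel from the factory to the customer's authentication server, and every copy along that path matters.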
The last obvious issue is the availability of the key database. Customers
care about this; if they lose their database, none of their users can
log in. Careful sites keep good backups, but as we all know not
everyone is that careful. Does RSA (or any other token vendor)
offer a managed service, where they
hold backup copies? It would be tempting for all concerned, but it
also poses a serious confidentiality risk. Such databases should be
stored offline and encrypted, with really strong physical and
procedural protections. It’s hard to imagine that a company as
sophisticated as RSA doesn’t understand that.
However, there’s a scarier possibility: is there a database of K’ values?
That’s a lot harder to keep offline, because you’ll always need it for
some customer. There’s a difficult tradeoff here of confidentiality
and availability.
We can also look at less obvious possibilities. Perhaps what was
compromised is a database of customer authentication data. That is,
suppose you want RSA to send you your backed-up keys. How do you prove
you’re you? A SecurID token? Why not — but that means there’s
a key database.
I suspect that the real risk is none of these. They’re too obvious
to any defender; I strongly suspect that these issues have been handled properly.
RSA is a sophisticated company that does understand security.
Instead, I’m worried about
the source code to any of the myriad back-end
administrative products necessary to use SecurID. Are there flaws?
That’s a lot easier to believe than (simple) flaws in an AES-based
hash function. There’s a lot of code needed for maintaining databases,
adding and deleting users, making backups, synchronizing master and
secondary copies of databases, and more.
An attacker who could penetrate these administrative systems doesn’t
have to worry about key generation or cryptanalysis; they could simply
steal existing keys or insert new ones of their own. To quote myself,
"you don’t go through strong security, you go around it".
The crypto may be strong, but what about the software?