Facebook, Privacy, and Cryptography
There has long been pressure from governments to provide back doors in encryption systems. Of course, if the endpoints are insecure it doesn’t matter much if transmission is encrypted; indeed, a few years ago, some colleagues and I even suggested lawful hacking as an alternative. Crucially, we said that this should be done by taking advantage of existing security holes rather than by creating new ones.
Facebook may have taken part of this to heart. A Forbes columnist has written that Facebook is incorporating content analysis into its mobile client and that
The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as true wiretapping service.

(It’s not even clear that this claim is accurate, but for this analysis let’s assume that it is.)
Now, it’s not unreasonable for Facebook to move some analysis to the client; indeed, a few months ago I speculated that they might. But that’s a very different issue than using their clients for government access.
As I and others have often noted, security is a systems property. That is, you don’t achieve security just by hardening this or encrypting that or putting a firewall in front of some other thing. Rather, security emerges from a system where the individual elements are secure and they’re combined properly and there are no gaps and everything is used properly and—well, you get the picture. Let’s apply that lens here: if the Facebook mobile client has wiretapping ability, how might that fail?
First, of course, that code might itself be buggy. To give one example, suppose that the wiretap code tried to set up an encrypted connection back to Facebook. It turns out, though, that certificate-checking in that sort of code is very hard to get right:
We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon’s EC2 Java library and all cloud clients based on it; Amazon’s and PayPal’s merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack.

The code would work correctly, right up until someone launched an active attack on the connections.
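To make that failure mode concrete, here is a minimal sketch of the difference, written with Python’s standard-library ssl module; the upload host is hypothetical, and none of this is Facebook’s actual code. The “broken” variant is the shortcut the researchers kept finding: it completes a TLS handshake with anyone, so an interception proxy sees the plaintext.

```python
# Minimal sketch: the host name is hypothetical, and this uses Python's
# standard-library ssl module, not any code Facebook actually ships.
import socket
import ssl

HOST = "collector.example.com"  # hypothetical upload endpoint
PORT = 443


def broken_context() -> ssl.SSLContext:
    """The shortcut found over and over in the study: the handshake
    succeeds against *any* server, including a man-in-the-middle."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False        # skips hostname matching
    ctx.verify_mode = ssl.CERT_NONE   # skips chain validation entirely
    return ctx


def proper_context() -> ssl.SSLContext:
    """The default context loads the system trust store and verifies
    both the certificate chain and the hostname."""
    return ssl.create_default_context()


if __name__ == "__main__":
    with socket.create_connection((HOST, PORT)) as sock:
        with proper_context().wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated", tls.version(), "with a verified peer")
```

The point is not that the correct call is hard to type; it is that code like the broken variant passes every functional test, so the bug ships and sits there until an active attacker shows up.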
Alternatively, someone could try to hack Facebook. Facebook is a very sophisticated corporation and probably has very good internal security controls—but are they proof against a major attack by a foreign intelligence agency? An attack that is aided by pressure on some country’s expatriates who now work for Facebook?
Beyond that, of course, there are all of the procedural and diplomatic issues: how would Facebook authenticate requests, what about requests from oppressive governments, etc.?
In other words, although this scheme would (probably) not suffer from the fragility of cryptographic protocols, it would open up other avenues for attack.
As I noted above, we endorsed using existing holes, not creating new ones. Yes, it’s more expensive, but that isn’t necessarily a bad thing. As Justice Sotomayor noted in her concurrence in United States v. Jones, “limited police resources and community hostility” are major checks on police misbehavior. A cheap, surreptitious means of breaking security is exactly the wrong thing to do.
The claim about Facebook’s plans may be wrong. I certainly hope so.
Update: Will Cathcart, the VP in charge of WhatsApp, has categorically denied the allegation:
To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.

I hope no one suggests that other companies try this, either—the reasons why it would be bad if Facebook did it are at least as applicable to anyone else, especially to companies with less engineering talent (and that’s most of the world).