The Untrusted Path
It happened again the other day: I received a pop-up telling me to update Flash. In the previous few days, I’d received prompts to update Office and Adobe Acrobat; I’ve also seen recent update prompts for Firefox, Mac OS X, and assorted minor pieces of software.
In some sense, this is all to the good. Most security problems are due to bugs for which patches already exist; anything that gets people to patch their software will help. But — are the patches legitimate? That is, are they real patches, or are they malware pretending to be patches?
I’ve been worrying about this for quite some time. I’m a sophisticated user, of course, and I hesitate to say how many decades I’ve been doing systems administration. My response to a pop-up saying "there are patches available for package XYZZY" — which doesn’t happen to be running at the time — is to cancel the request and manually check for updates to that package. Most people can’t or won’t do that. Password prompts and requests to go ahead are meaningless if they’re not linked — clearly and visibly linked — to the actual changes to be made. Otherwise, some piece of nasty software can generate the exact same prompts, or possibly the genuine system prompt for permission. This has now happened.
The traditional approach to this problem is the trusted path, some unspoofable mechanism that ensures the user is talking to a system component. Traditionally, the trusted path is entered when the user hits the Secure Attention Key (SAK). (Remember how Windows XP would insist that you hit Ctrl-Alt-Delete to log in? Now you know why. Newer versions of Windows use the same SAK sequence to get to things like the task manager and the password change dialog.) This works well for protecting passwords, by ensuring that they go only to the operating system, but it doesn’t help much if a piece of malware can invoke the standard system update mechanism. The real system will ask for your password, but it’s granting privileges to malware.
For many people, of course, the right thing to do is to enable auto-update: if a patch is fixing a serious security hole, you almost certainly want it installed as soon as possible. Note the word "almost", though; if you’re relying on a system, you probably want to test the update, or at least wait to hear about other people’s opinions before you install it. Even sophisticated companies can make mistakes in their patches.
To my knowledge, no system has a good solution to the secure-update problem. Pieces of the solution exist in various places, but they’ve never been put together properly.
- Passwords or other authorization to do certain things should never be entered by the user unless and until the user has hit the Secure Attention Key. This is necessary for all such interactions, not just at login time. (Note that this is a challenging matter, especially in a networked environment. By definition, the SAK invokes the trusted path; how do you do remote administration, when the SAK will go to your own OS? For that matter, how can you send a SAK, an unspoofable signal, over a network? Even without networks, the SAK is used to establish a trusted path to several different system components; ensuring that the user reaches the right one, the system’s update manager pointed at whatever software wants to be updated, rather than some generic dialog, is itself a hard problem.)
- The user must be given a very clear statement about what will be changed. We don’t get good indications today, a problem I’ve complained about in the past.
- The privileges granted to the update process must be such that only the indicated application can be touched. This might be achievable if every application package had its own userID; if the update process ran as that user, it wouldn’t be able to tinker with any other pieces of the system. (This is similar to the way that Android sandboxes its apps; see the sketch after this list.)
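To make that last point concrete, here is a minimal sketch, in Python, of what running an updater under a package’s own account might look like. The PACKAGE_ACCOUNTS table, the account names, and apply_update are all hypothetical; a real packaging system would be far more elaborate. The point is only that the installer runs with the package’s UID and GID, so files owned by other packages, or by the operating system itself, are out of its reach.

```python
import pwd
import subprocess

# Hypothetical mapping from package name to its dedicated account; a real
# system would record this in its package database at install time.
PACKAGE_ACCOUNTS = {"xyzzy": "pkg-xyzzy"}

def apply_update(package: str, installer: str) -> int:
    """Run an installer with only the privileges of the package's own account.

    The caller must be privileged enough to switch users (i.e., root); the
    installer itself runs as the package's UID/GID, so files belonging to
    other packages or to the OS cannot be modified by it.
    """
    entry = pwd.getpwnam(PACKAGE_ACCOUNTS[package])
    result = subprocess.run(
        [installer],
        user=entry.pw_uid,   # drop to the package's dedicated user...
        group=entry.pw_gid,  # ...and group before running the installer
        check=False,
    )
    return result.returncode
```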
If this property exists, it may be possible to simplify things a bit. If an update package is digitally signed with a key known to the current version (or, more precisely, registered to the operating system when the current version was installed), it most likely came from the same source and should be accorded the same trust. No password entry is needed, though it’s still necessary to seek user consent.
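Here is a minimal sketch of that check, assuming the operating system stored the vendor’s public key (say, an Ed25519 key in PEM form) in a per-package directory when the package was first installed. The key location and the function name are illustrative, not any existing system’s API.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

# Illustrative location for keys registered with the OS at install time.
REGISTERED_KEYS = Path("/var/lib/pkg-keys")

def update_is_from_same_source(package: str, update: bytes, signature: bytes) -> bool:
    """Accept an update only if it verifies under the key recorded at install time."""
    pem = (REGISTERED_KEYS / f"{package}.pub").read_bytes()
    public_key = load_pem_public_key(pem)  # assumed to be an Ed25519 key
    try:
        public_key.verify(signature, update)  # raises if the signature is bad
        return True
    except InvalidSignature:
        return False
```

Even when this check passes, the user should still be asked to consent, and the installer should still run with the restricted privileges sketched above.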
We’re far from this ideal today. Until our systems address it, we’ll be victimized by increasingly clever pieces of malware, and software updates will be done via an untrusted path.
The Sins of the Flash
Recent news stories (based on research by Stanford student Feross Aboukhadijeh) state that an Adobe bug made it possible for remote sites to turn on a viewer’s camera and microphone. That sounds bad enough, but that’s not the really disturbing part. Consider this text from the Register article:
Adobe said on Thursday it was planning to fix the vulnerability, which stems from flaws in the Flash Player Settings Manager. The panel, which is used to designate which sites may access feeds from an enduser’s camera and mic, is delivered in the SWF format used by Flash.

That’s right: code on a remote computer somewhere decides whether or not random web sites can spy on you. If someone changes that code, accidentally or deliberately, your own computer has just been turned into a bug, without any need for them to attack your machine. …
Because the settings manager is hosted on Adobe servers, engineers were able to close the hole without updating enduser software, company spokeswoman Wiebke Lips said.
From a technical perspective, it’s simply wrong for a design to outsource a critical access control decision to a third party. My computer should decide what sites can turn on my camera and microphone, not one of Adobe’s servers.
The policy side is even worse. What if the FBI wanted to bug you? Could they get a court order compelling Adobe to make an access control decision that would turn on your microphone? I don’t know of any legal rulings on this point directly, but there are some analogs. In The Company v. U.S., 349 F.3d 1132 (Nov. 2003), the 9th Circuit considered a case with certain similarities. Some cars are equipped with built-in cell phones intended for remote assistance; OnStar is the best-known such system, though in this case analysis of court records suggests that ATX Technologies was involved. Briefly, the FBI got a court order requiring "The Company" to turn on the mike in a suspect’s car. The Court of Appeals quashed that order, but only because, given the way that particular system was designed, turning it into a bug disabled its other functionality. That, the Court felt, conflicted with the wording of the wiretap statute, which required a "minimum of interference" with the service. If the service had been designed differently, the order would have stood. By analogy, if a Flash-tap doesn’t interfere with a user’s ability to have normal Flash-based voice and video interactions with a web site, such a court order would be legal.
No wonder the NSA’s Mac OS X Security Configuration guide says to disable the camera and microphone functions, by physically removing the devices if necessary.
Correction re "Sins of the Flash"
A few days ago, I posted a criticism of Adobe for a design that, I thought, was seriously incorrect. I made a crucial error: the access control decision is (correctly) made locally; what is done remotely is the user interface to the settings panel. The bug that Adobe fixed was a way for a remote site to hide the view of the UI panel, thus tempting you to click on what you thought were innocuous controls but that in fact changed your privacy settings. (The annoying thing is that as I posted it, I had a niggling feeling that I had gotten something wrong, but I didn’t follow up. Sigh.)
This is a much better (though hardly good) design. It still leaves open a vulnerability: at least in theory, the bug could be reintroduced by court order, to aid in tricking a user into changing his or her own settings. In other words, a crucial part of the security and privacy process is still outsourced. The argument has been made that back when Adobe designed the interface, the choice wasn’t as obviously wrong. I don’t think I agree, since there was plenty of criticism of any form of active content going back to the very early days of the web, but I won’t belabor the point.
There’s one aspect I’m still unclear about. There is obviously some way to build a Flash file that tells the local plug-in, working in conjunction with Adobe, to change local privacy settings. Is it possible for a malicious party to generate a Flash file that talks to their site, rather than to Adobe’s, when changing those settings? I hope (and think) not; if it is possible, the danger is obvious. Unless the interaction with Adobe is digitally signed, though, a malicious site could send a booby-trapped Flash file while mounting a routing or DNS cache-contamination attack, and thereby impersonate Adobe. This isn’t a trivial attack, but routing attacks and DNS attacks have been known for a very long time; until we get BGPSEC (and probably OSPFSEC) and DNSSEC widely deployed, that risk will remain. I do note that when I invoke the current remote-UI settings manager, I’m doing so over a connection that is at least initially HTTP, not HTTPS; I don’t know whether a second, secure connection is set up afterwards.
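For what it’s worth, fetching the settings UI over HTTPS with certificate verification, rather than over plain HTTP, would defeat the straightforward version of that impersonation: a spoofed DNS answer or a hijacked route doesn’t get the attacker a valid certificate for the expected name. A small illustration in Python, with a placeholder host and path rather than Adobe’s actual settings-manager URL:

```python
import socket
import ssl

# Placeholder host; the real settings-manager location is Adobe's to publish.
HOST = "settings.example.com"

def fetch_settings_page(path: str = "/settings/") -> bytes:
    """Fetch the settings UI over TLS, verifying the server's certificate.

    Over plain HTTP, anyone who can poison DNS or hijack a route can serve a
    look-alike page; certificate verification ties the connection to the name
    we actually asked for.
    """
    context = ssl.create_default_context()  # verifies the chain and the hostname
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            request = f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
            tls.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = tls.recv(4096)
                if not data:
                    break
                chunks.append(data)
    return b"".join(chunks)
```

This protects only the transport, of course; it says nothing about what the delivered SWF then does.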
To its credit, Adobe has realized that there are a number of problems with the whole model of a Flash-based settings mechanism; if nothing else, it was hard for most people to find. Accordingly, they’ve recently released versions of Flash that use the platform’s normal preference-setting interfaces (the Control Panel on Windows, System Preferences on the Mac, and the corresponding native mechanisms on Linux) to change the values. That’s an excellent step forward. Now they need to disable the remote variant when it is contacted by a new Flash plug-in, and simply return a pointer to the local one…