Voluntary Reporting of Cybersecurity Incidents
One of the problems with trying to secure systems is the lack of knowledge in the community about what has or hasn’t worked. I’m on record as calling for an analog to the National Transportation Safety Board: a government agency that investigates major outages and publishes the results.
In the current, deregulatory political climate, though, that isn’t going to happen. But how about a voluntary system? That’s worked well in aviation—could it work for computer security? In a new draft paper with Adam Shostack, Andrew Manley, Jonathan Bair, Blake Reid, and Pierre De Vries, we argue that it can.
While there’s a lot of detail in the paper, there are two points I want to mention here. First, the aviation system is supposed to guarantee anonymity. That’s easier in aviation, where, say, there are many planes landing at O’Hare on a given day, than in the computer realm. For that reason (among others), we’re focusing on "near misses"—it’s less revelatory to say "we found an intruder trying to use the Struts hole" than to say "someone got in via Struts and personal data for 145 million people was taken".
From a policy perspective, there’s another important aspect. The web page for the Aviation Safety Reporting System (ASRS) is headlined "Confidential. Voluntary. Non-Punitive" (emphasis in the original). Corporate general counsels need assurance that they won’t be exposing their organizations to more liability by making such disclosures. That in turn requires buy-in from regulators. (It’s also another reason for focusing on near misses: you avoid the liability question if the attack was fended off.)
All this is discussed in the full preprint, at LawArxiv or SSRN.