Computer Processing and the Law
For some reason, I started thinking about a couple of old court cases. Among other points, both relied on the notion that a computer doing something with data is somehow different from a person doing the same thing. Is that true? Should it be?
The first case was Warshak v. United States, 490 F.3d 455 (6th Cir. 2007), which I wrote about some time ago. The court noted:
The fact that a computer scans millions of e-mails for signs of pornography or a virus does not invade an individual’s content-based privacy interest in the e-mails and has little bearing on his expectation of privacy in the content.

Somehow, to this court, there was some sort of difference between a computer looking for nasty things and a person doing the same thing.
That decision was vacated for legal reasons (532 F.3d 521 (6th Cir. 2008)), so it sets no precedent, but the argument is similar to the one made about the FBI’s Carnivore wiretap system: that having a computer filter the data did not mean that the discarded data was "searched" without a warrant. Is this a reasonable position?
There are several distinct issues here. One, of course, is accuracy: is the filtering correct? Carnivore once got its filter so wrong that an FBI agent discarded email intercepts about Osama bin Laden because email from "non-covered targets" was picked up as well. This issue of what to do about erroneous intercepts concerns the FBI, too, for legal, political, and PR reasons. Still, the FBI’s underlying claim is unchanged: as long as its filters properly discard unauthorized data, no unlawful search has taken place. But is that true?
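To make the accuracy problem concrete, here is a minimal sketch of warrant-scoped filtering, written in Python. The target addresses, the message format, and the matching rule are all invented for illustration; Carnivore operated on raw network traffic and was far more elaborate. The sketch only shows where a filtering decision, and hence a filtering error, lives.

    # Hypothetical warrant-scoped filtering. The target list, message
    # format, and matching rule are invented; this is not a description
    # of how Carnivore actually worked.
    COVERED_TARGETS = {"suspect@example.org"}   # addresses named in the order

    def covered(message):
        """True if the intercept falls within the scope of the order."""
        addresses = {message["from"], *message["to"]}
        return bool(addresses & COVERED_TARGETS)

    def filter_intercepts(messages):
        """Split intercepts into those kept and those discarded."""
        kept, discarded = [], []
        for m in messages:
            (kept if covered(m) else discarded).append(m)
        return kept, discarded

    if __name__ == "__main__":
        traffic = [
            {"from": "suspect@example.org", "to": ["a@example.com"], "body": "..."},
            {"from": "bystander@example.net", "to": ["b@example.com"], "body": "..."},
        ]
        kept, discarded = filter_intercepts(traffic)
        print(len(kept), "kept,", len(discarded), "discarded")

Two things stand out. Every message, covered or not, passes through covered() before the program decides its fate, and that inspection is precisely the step the government says is not a search. And if covered() is wrong, whether too broad or too narrow, the error is silent; that is what happened in the bin Laden incident, where over-collection led an agent to throw out everything, lawful intercepts included.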
In Smith v. Maryland, 442 U.S. 735 (1979), the U.S. Supreme Court ruled that individuals had no legitimate expectation of privacy in data voluntarily given to a third party, in this case dialed phone numbers given to the phone company. After all, people know that the phone company collects that data and does things with it. But, the court wrote:
The switching equipment that processed those numbers is merely the modern counterpart of the operator who, in an earlier day, personally completed calls for the subscriber. Petitioner concedes that, if he had placed his calls through an operator, he could claim no legitimate expectation of privacy. We are not inclined to hold that a different constitutional result is required because the telephone company has decided to automate.
Let’s turn that around. In Smith, automating what a human operator once did made no constitutional difference; does the converse hold? Suppose the government decided to use people rather than software to do the filtering when the warrant does not allow inspection of content. Would that be permissible? The central question is whether mechanization of the process makes a constitutional difference. As software gets "smarter", that question becomes more and more important.
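To see what is at stake, consider a pen-register-style filter that is permitted to record only addressing information. The sketch below is again hypothetical (an invented message format; a real pen register works on signaling data, not parsed email), but it illustrates the awkward fact that the program must take in the whole message, content and all, in order to emit only the non-content fields.

    # Hypothetical pen-register-style extraction: only addressing
    # information may be retained, yet the full message, body included,
    # flows through the program that produces the record.
    def pen_register_record(message):
        """Return only the non-content (addressing) fields."""
        return {"from": message["from"], "to": list(message["to"])}

    def process(messages):
        # The bodies are never stored, but they were handled here;
        # whether that handling is itself a "search" is the question.
        return [pen_register_record(m) for m in messages]

    if __name__ == "__main__":
        traffic = [{"from": "caller@example.org",
                    "to": ["callee@example.com"],
                    "body": "content the warrant does not cover"}]
        print(process(traffic))

Replace process() with a person and the same records come out, but the person could not help seeing the bodies along the way. Whether the machine’s handling of them is constitutionally different is exactly the question above.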