Facebook's Initiative Against "Revenge Porn"
There’s been a bit of a furor recently over what Facebook calls its "Non-Consensual Intimate Image Pilot". Is it a good idea? Does it cause more harm than good?
There’s no doubt whatsoever that "revenge porn"—intimate images uploaded against the will of the subject, by an unhappy former partner—is a serious problem. Uploading such images to Facebook is often worse than posting them elsewhere on the web, because they’re more likely to be seen by friends and family of the victim, multiplying the embarrassment. I thus applaud Facebook for trying to do something about the problem. However, I have some concerns and questions about the design as described thus far. This is a pilot; I hope there will be more information and perhaps some changes going forward.
My evaluation criterion is very simple: will this scheme help more than harm? I’m not asking how effective the scheme is; any improvement is better than none. I’m not asking if Facebook is doing this because they really care, or because of external pressure, or because they fear people leaving their platforms if the problem isn’t addressed. Those are internal questions; Facebook as a corporation is more competent to evaluate those issues than I am.
There are two obvious limitations that I’m very specifically not commenting on: first, that Facebook is only protecting images posted on one of their platforms, rather than scouring the web; second, that the victim has to have a copy of the images in question. Handling those two cases as well would be nice—but they’re not doing it, and I will not comment here on why or why not, or on whether they should.
I should also note that I have a great deal of respect for Facebook’s technical prowess. It is somewhere between quite possible and very probable that they’ve already considered and rejected some of my suggestions, simply because they don’t work well enough. More transparency on these aspects would be welcome, if only to dispel people’s doubts.
The process, as described, involves the following steps. My comments on each step are indented and in italics.
- A person concerned about some images fills out a form on an (Australian) government web site.

  *It is unclear to me why the original notification has to go to a government office. There may be legal reasons for this; some commentators have noted that if an underage person submits an explicit picture of themselves, it could technically be treated as transmission of child pornography.*
- That person then sends themselves the image via Messenger.

  *Why submit the image via a message to oneself? An explicit button in Facebook’s apps or on its website would seem simpler and less subject to error. (Have you ever sent an email or text message to the wrong person by mistake? I have.)*
- The government office notifies Facebook of the submission.
- A Facebook employee reviews the image and "hashes" it.

  *The human review issue has drawn the most comments. Is such viewing itself exploitative?*

  *Human intervention is, I fear, almost certainly necessary. The obvious reason is to prevent someone from submitting other images, e.g., images of their ex’s current partner. But what are the criteria Facebook is using? Does the image have to be verifiably of the submitter? How will Facebook determine that? Face-matching, whether automated or human, is decidedly imperfect. Furthermore, there can certainly be sensitive, intimate images that do not contain someone’s face—think of a distinctive tattoo in a private place.*

  *It may be possible and desirable to split the vetting process into two steps: confirmation of identity and confirmation of subject matter. The trick is facial identification. Recognizing the presence of a face is off-the-shelf technology at this point; many cameras do it. (I have a camera that can be configured to take a picture only when the subject smiles!) So: isolate the faces, black out the rest of the image, and use that for identity verification. A second person could then verify the subject matter, with the faces obscured; a rough sketch of the idea appears below.*

  *Yes, this process is imperfect. Match failures, or a request by the submitter, could result in human review of the entire image.*
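
  *As a concrete illustration of the face-isolation idea, here is a minimal sketch using OpenCV’s stock Haar-cascade face detector. The function name and detector parameters are my own assumptions for illustration, not anything Facebook has described.*

  ```python
  # Sketch of the face-isolation step suggested above: detect faces,
  # then black out everything else before identity verification.
  # Uses OpenCV's bundled Haar cascade; all parameters are illustrative.
  import cv2
  import numpy as np

  def isolate_faces(image_path: str, out_path: str) -> int:
      """Black out all but the detected faces; return the number found."""
      img = cv2.imread(image_path)
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      detector = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

      mask = np.zeros(img.shape[:2], dtype=np.uint8)
      for (x, y, w, h) in faces:
          mask[y:y + h, x:x + w] = 255  # keep only the face rectangles

      redacted = cv2.bitwise_and(img, img, mask=mask)
      cv2.imwrite(out_path, redacted)
      return len(faces)
  ```

  *If no face is found (return value 0), this two-step scheme would fall back to full human review, as noted above.*
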
  *On a separate note, some people may want the ability to specify the gender of the reviewer. (Yes, I know that gender is non-binary and otherwise complicated.)*
- The hash, not the image, is stored. All images posted to any Facebook platform are matched against the database of hashed images; if there’s a match, the image cannot be posted.

  *It is good that Facebook does not want to store images. They’re excellent at security, but probably not perfect, and such a database would be a very attractive target for some people.*

  *The "hash", by the way, is not a standard cryptographic hash, which would only work for exact matches. Facebook is using something resilient against cropping, rescaling, etc.*
- The original complainant is notified of the completion of this via "the secure email they provided to the eSafety Commissioner’s office" and advised to delete the submitted image from Messenger. After that, Facebook will delete it from its servers.

  *"Secure email"? What’s that? For most people, there is no such thing. Furthermore, former partners often know or can guess an email account password. I wonder if the submission form urges people to create a new account for just this reason.*
The part that concerns me the most is the image submission process. I’m extremely concerned about new phishing scams. How will people react to email messages touting the "new, one-step, image submission site", one that handles all social networks and not just Facebook? The two-step process here—a web site plus an unusual action on Facebook—would seem to exacerbate this risk; people could be lured to a fake website for either step. The experience with the US government-mandated portal for free annual credit reports doesn’t reassure me; there are numerous scam versions of the real site. A single-button submission portal would, I suspect, be better. Does Facebook have evidence to the contrary? What do they plan to do about this problem?
There has been criticism of the need for an upload process. Some have suggested doing the hashing on the submitter’s device. Facebook has responded that if the hashing algorithm were public, people would figure out ways around it. I’m not entirely convinced; after all, it’s been a principle of cryptographic design since 1883 that "There must be no need to keep the system secret, and it must be able to fall into enemy hands without inconvenience."
However… It may very well be that Facebook’s hash algorithm does not meet Kerckhoffs’s principle, as it is known, but that they don’t know how to do better. Fair enough—but at some point, it’s not unlikely that the algorithm will leak, or that people will use trial-and-error to find something that will get through. However, under my evaluation criterion—is this initiative better than nothing?—Facebook has taken the right approach. If the algorithm leaks or if people work around it, we’re no worse off than we are today. In the meantime, keeping it secret delays that, and if Facebook is indeed capable of protecting the images for the short time they’re on their servers (and they probably are) there is no serious incremental risk.
Another suggestion is to delay the human verification step, to do it if and only if there’s a hash match. While there’s a certain attractiveness to the notion, I’m not convinced that it would work well. For one thing, it would require near-realtime review, to avoid delays in handling a hash match. I also wonder how many submitted images won’t be matched—I suspect that most people will be very reluctant to share their own intimate images unless they’re pretty sure that someone is going to abuse them by uploading such pictures. By definition, these are very personal, sensitive pictures, and people will not want to submit them to Facebook in the absence of some very real threat.
My overall verdict is guarded approval. Answers to a few questions would help:
- What is the point of the clumsy, multi-step submission process?
- What measures are taken to protect privacy against the human reviewers?
- What are the criteria used by the human reviewers? Does the submitted image have to be verifiably of the submitter? How is this verified if not enough of a face is showing?
- How does Facebook plan to prevent phishing attacks against this scheme?
But I’m glad that someone is finally trying to do something about this problem!
Update: I’m informed that the pilot is restricted to people over 18, thus obviating any concerns about transmission of child pornography.