The chief executive of the Internet Watch Foundation, which maintains a list of sites allegedly containing child pornography that most U.K. Internet providers then block, recently declared that the organization had erred in blocking the Virgin Killer album cover art on Wikipedia. But Peter Robbins also called it "one mistake in twelve years" and said that "[t]here are a lot of very credible people on our board, and we want to give assurance that there is independent oversight on what we do." The issue of "oversight" raises a question that I don't think received enough attention during the Wikipedia block controversy: Why doesn't the IWF notify domain owners or hosting companies when it blocks their content?
If an image is a borderline case, such as the album cover that was hosted on Wikipedia, then the IWF could notify the hosting company that it had determined the image could be illegal under U.K. law. If the host — in this case, Wikipedia — disagreed, they could provide arguments to the contrary and possibly change the IWF's mind, which is in fact what happened once Wikipedia users found out about the block. On the other hand, if the image is very obviously illegal, then a notification to the hosting company might persuade them to take the image down. In that case, any argument against notifying hosting companies has to be weighed against the obvious good that would be done by removing the image from that location on the Web.
I sent this question to the IWF, which must get this inquiry a lot, since they replied with a form letter that stated in part:
Contacting international hosts of such content directly may undermine a police investigation, is contrary to our remit and is contrary to INHOPE best practice.
Well, saying that it is contrary to their remit or to INHOPE best practices just raises the question of why it is contrary to those "best practices." I replied to ask that question but didn't receive a response, and INHOPE did not respond when I sent them the same question. So consider the only substantive reason given in the IWF's response: that notifying the host "may undermine a police investigation." This could hypothetically be true in some cases — if police are preparing to move in on a suspected child pornographer, and he finds out that his ISP has removed content from his account after a notification from the IWF, he might realize that he's about to be caught and delete any incriminating pictures from his hard drive.
But this reason makes no sense in the case of images such as the cover art on Wikipedia, where the content has been generally available in the host country for a long time, and the original content producers are publicly known and wouldn't be able to run for cover even if the local government did declare the image illegal. It also would not apply in a wide range of other situations where the creator of the content is known and admits to creating it. Consider the case of Dr. Marcus Phillips, who was convicted of producing child pornography after he was hired by the parents of two girls, ages 10 and 12, to take semi-nude photos of the girls (with the parents present) that could be digitally manipulated and superimposed to produce "fairy art." Suppose Dr. Phillips had posted his photos in a portfolio online. In cases where the person posting the content admits that they took the photos themselves, and the subjects of the photos are identifiable people with a connection to the photographer, there are two possibilities. Either (1) the images are such that the police and the courts will ultimately determine that they are child pornography. In that case, you might as well notify the user and their hosting company that the images are being blocked by the IWF, because even if this "tips off" the guilty user, they won't be able to destroy the evidence by erasing their hard drive; the mere existence of the image is enough to incriminate them. Or (2) the police and courts decide that the images do not constitute child pornography, in which case they should not have been blocked at all. In either situation, there's no rationale for the IWF to block the content without notifying the content owners. So why wouldn't the IWF notify the hosts in such cases — when the content creator is generally known, admits to creating the content, and simply doesn't believe that it constitutes child pornography?
The elephant in the room is the obvious motivation that the IWF has for not notifying people when it blocks their sites: The IWF may be over-blocking such content, and doesn't want irate parties to complain when they find out that the IWF has mis-categorized their content as "child pornography." If several people came forward to say that the IWF had blocked, for example, their photographs of nudist children (which are not illegal), then it might undermine support for the IWF blacklisting system and for their mission in general. So by not telling people that their URLs are blocked, they minimize the number of people who find out and complain.
Perhaps the IWF does not over-block a lot of content, but the point is that we don't know. When Peter Robbins says the Wikipedia over-block amounted to "one mistake in twelve years," and adds, "Nobody in the years that we have been operating had any real reason to complain," there is no way of knowing if those statements are true, because any other mistakes made by the IWF are unlikely to have been brought to light, for two reasons. First, if a site or an image is blocked, most users are not going to realize what happened, since to them it simply appears that the remote server is not responding. Second, even if a user realizes that an image is blocked and the user knows that the image does not constitute child pornography, they may still be embarrassed to come forward and complain that they were visiting, say, a site full of nudist child photos or a porn site featuring adult models pretending to be mid-teen Japanese schoolgirls, and their favorite picture was blocked. The Wikipedia incident was probably a once-in-a-decade perfect storm of factors that led to the IWF having to retract a decision:
- Wikipedia was popular enough that people quickly noticed the blocked content.
- Wikipedia had the halo of legitimacy accorded to a popular research site; nobody had to feel dirty for admitting that they had been browsing it.
- The image in question had been commercially available for a long time, and nobody had been arrested for selling or possessing it.
- The image had a credible claim to artistic merit. Strictly speaking, "artistic merit" is not a defense against child pornography charges, but there is no unambiguous definition that can be used to determine if a given picture constitutes child pornography, and in a borderline case, a judge would probably be influenced by the fact that the photo was used as cover art for a "serious" album, and not seized from a darkroom in some creep's basement.
That last factor brings up a final irony: the IWF, in labeling the Virgin Killer cover art "child pornography," may have simply been applying an objective standard that many people would not have disagreed with, if it hadn't been for the fact that the image was used as cover art for a rock album. Suppose you read a news article about a man who was arrested for possession of child pornography, and you happened to see a sample of the images (never mind how) that he was arrested for. And suppose the Virgin Killer album cover photo had been mixed in with those images. Would it have jumped out at you as an obvious case of over-reaching by the police? Would you speak out publicly, saying that even if the guy should be prosecuted for the other images, he shouldn't be prosecuted for that one? (Again, ignoring the issue of how you happened to be looking at the photos in the first place, and assuming you couldn't get in trouble for that!) I doubt that I would have the nerve. So by defending Wikipedia for hosting the same image, I'm guilty of a double standard. But would the IWF have agreed to un-block the image if it hadn't been the cover art of an album, but instead had just been a grainy photo stuck in a sub-directory of someone's home page that the owner never intended to be made public? If not, then the IWF is guilty of a double standard too.
So not only do we not know how many mistakes are on the IWF's blacklist, it may be hard even to agree on an objective definition of a "mistake." But at least in cases where the content creator has already identified themselves — such as a public image on Wikipedia, or an image in a photographer's online portfolio — the IWF should notify people when it blocks their content. That would at least bring to light the cases where the content creator disagrees with the IWF's determination that their content constitutes child pornography. In some cases, such as the Wikipedia controversy, people would side with the content providers. In other cases, they wouldn't. But there's no reason to assume, as the IWF does when saying that Wikipedia represented "one mistake in twelve years," that in 100% of such cases, the courts and the police would side with the IWF's judgment.