Facebook's Complaint Process Is Arbitrary — But So Is Campaigning

Bennett Haselton writes "After initial abuse reports failed to shut down some anti-women and pro-rape pages on Facebook, a wider lobbying campaign succeeded in prompting a Facebook policy change. This has been alternately hailed as a vindication of the campaigners' cause and derided as proof that Facebook can be cowed by humorless feminists. In reality, the campaign most likely succeeded through a largely arbitrary and random process that required a lot of luck, just as the initial abuse reports failed because luck wasn't on their side. Neither result should be taken to reflect on the merits of the campaigners' actual points." Read on for the rest of Bennett's thoughts.
On May 28th, Facebook released a statement acknowledging that it had not responded effectively to complaints against pages containing "gender-based hate speech" (e.g. "Slapping hookers in the face with a shoe" and several much worse examples glorifying rape or violence). The decision came at the end of the "#fbrape" campaign by feminist groups to pressure advertisers whose ads had been appearing on the most offensive pages; major advertisers like Nissan announced that they were withdrawing advertisements from Facebook until they could be assured their ads would not appear on the pages in question (most of which were ultimately shut down by Facebook).

I've written before about the arbitrariness of Facebook's abuse-report process, and in particular how it can be abused by convening a "flash mob" of users to file abuse reports about pieces of content that they want removed, even when that content doesn't violate Facebook's terms. The solution I proposed, briefly, was for Facebook to sign up, say, 100,000 volunteers (or even paid users) to review abuse reports; when an abuse report is received, it would be evaluated by a random subset of 100 of those volunteers, who would vote on whether the report is legitimate. The decision whether to remove the content could be based on what percentage of those 100 users vote that it violates the terms of service. The nice property of this system is that it can't be manipulated by conscripting a "flash mob" of users to file complaints all at once — no matter how many mobsters you have filing abuse reports, if your complaints don't have merit, they won't pass the random-sample review (unless you manage to control a significant proportion of the 100,000-user pool from which the 100 reviewers are randomly selected, but that would be a very tall order).

This also means that no abuse complaint would be ignored because too few people submitted it — any abusive content that was reported would trigger the 100-user review. (Or, if you thought cranks would waste too much of the reviewers' time by filing phony abuse reports, you could trigger the 100-user review only after, say, 3 people had complained about a given page. Or you could start ignoring complaints from users after they had filed a certain number of complaints that were all rejected by the 100-user review process.) Readers suggested various improvements to the algorithm and pointed out potential problems, but I think the basic idea is still sound.
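To make the mechanics concrete, here is a minimal Python sketch of the random-sample review described above. The pool size, sample size, and crank-filtering rules are the illustrative numbers from this article; the removal threshold, class, and method names are hypothetical, and a real system would collect votes asynchronously rather than through this synchronous stand-in.

```python
import random

# Illustrative parameters; the first two are from the article, the rest are assumptions.
POOL_SIZE = 100_000      # volunteers signed up to review abuse reports
SAMPLE_SIZE = 100        # random reviewers drawn per report
REMOVE_THRESHOLD = 0.5   # fraction of "violates" votes needed to remove
MIN_COMPLAINTS = 3       # ignore a page until this many people complain
MAX_REJECTED = 10        # stop listening to users whose complaints keep failing

class RandomSampleReview:
    def __init__(self, reviewer_pool):
        self.pool = list(reviewer_pool)   # the 100,000-volunteer pool
        self.complaints = {}              # page_id -> set of complainer ids
        self.rejected_counts = {}         # complainer id -> failed complaints

    def file_complaint(self, page_id, complainer_id, get_vote):
        """Record a complaint; run the 100-user review once enough arrive.

        get_vote(reviewer, page_id) -> bool is a placeholder for however a
        real system would ask a reviewer whether the page violates the terms.
        Returns True/False once a review has run, or None before that.
        """
        if self.rejected_counts.get(complainer_id, 0) >= MAX_REJECTED:
            return None                   # serial crank: ignore silently
        complainers = self.complaints.setdefault(page_id, set())
        complainers.add(complainer_id)
        if len(complainers) < MIN_COMPLAINTS:
            return None                   # not enough complaints yet
        # A flash mob can't influence this step: the sample is drawn from
        # the whole pool, not from the complainers.
        sample = random.sample(self.pool, SAMPLE_SIZE)
        yes_votes = sum(get_vote(reviewer, page_id) for reviewer in sample)
        removed = yes_votes / SAMPLE_SIZE >= REMOVE_THRESHOLD
        if not removed:
            for user in complainers:
                self.rejected_counts[user] = self.rejected_counts.get(user, 0) + 1
        return removed
```

The key property is visible in the sampling step: complaint volume only gates whether a review happens; it never influences who votes in it.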

Some of the abusive pages cited by the #fbrape campaigners are truly graphic and offensive, certainly in violation of Facebook's "community standards" against "hate speech." If they had been reviewed by a 100-user random sample, they probably would have been removed. As it is, the complaints probably landed in the lap of some $1-an-hour grunt worker who ignored them (Facebook's opacity regarding its review process gives us little more information than that). If the complainers had been luckier, perhaps the abuse reports might have been noticed by someone more proactive.

So even if the #fbrape campaigners didn't put it in these terms, their gripe was essentially that the Facebook complaint review process leads to arbitrary outcomes, and the complaints didn't gain traction because luck wasn't on their side.

But what about the #fbrape campaign itself, to bring the pages to Facebook's attention through media action, after the initial abuse reports were ignored?

This is probably an example of what could be called the "Salganik Effect." Matthew Salganik is a Princeton University professor who in 2006 conducted a study examining how certain songs become popular in simulated worlds in which users could rate songs and recommend them to their friends. In his simulation, he divided users into eight artificial "worlds," such that the users in a given world could only see the ratings and recommendations from other users within that world. Then each world was seeded with the same set of songs to see which songs grew in popularity. His team found that the set of songs which became "successful" varied wildly between worlds — such that within any given world, although the very worst songs never became popular, the set of songs that did become popular was essentially a random selection from among those that were merely "good enough."
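The dynamics are easy to reproduce in a few lines. Here is a toy Python simulation (the parameters and the quality-plus-social-proof weighting are my own illustrative choices, not the study's actual design) showing how identical song pools end up with different winners in isolated worlds:

```python
import random

# A toy re-run of a Salganik-style experiment: identical songs, several
# isolated "worlds," and downloads that feed back into future choices.
NUM_WORLDS = 8
NUM_SONGS = 50
NUM_LISTENERS = 5_000

random.seed(0)
# Each song's intrinsic "quality" is the same across every world.
quality = [random.random() for _ in range(NUM_SONGS)]

for world in range(NUM_WORLDS):
    downloads = [0] * NUM_SONGS
    for _ in range(NUM_LISTENERS):
        # A listener's choice is weighted by quality plus social proof:
        # songs already popular in *this* world attract more listeners.
        weights = [quality[s] + downloads[s] for s in range(NUM_SONGS)]
        choice = random.choices(range(NUM_SONGS), weights=weights)[0]
        downloads[choice] += 1
    top = sorted(range(NUM_SONGS), key=downloads.__getitem__, reverse=True)[:3]
    print(f"world {world}: top songs {top}")
```

Run it and each world prints a different top-three list: the feedback loop amplifies whichever "good enough" songs happen to pick up early downloads, while the lowest-quality songs rarely win anywhere.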

Online movements gain traction through a similar process: users 'like' a page or recommend it to friends, and recommendations radiate out from the popular elite according to Malcolm Gladwell's "Law of the Few." This suggests that the success of a campaign like #fbrape could have been the result of an arbitrary process dominated by luck, just like the success of a song in one of Salganik's artificial worlds. We can never know for sure, since we can't divide real-life Facebook users into multiple artificial worlds, or re-run history to see how often the outcome would be different. But you should read Everything Is Obvious* (Once You Know The Answer), a book written by Duncan J. Watts, one of the co-authors of Salganik's study. The book argues that many outcomes that seem like foregone conclusions in hindsight, such as the success of a product, Twitter meme, company, idea, or person, are really the result of an arbitrary process that is impossible to predict, much less control. If you liked Freakonomics or Thinking, Fast and Slow, you should add Everything Is Obvious to your reading list right away.

In the case of the #fbrape campaign, the strong form of the conclusion would be to say that the success of the campaign was probably the outcome of a random process. But everyone should at least agree with the weak form of the conclusion, which is that the successes and failures of online campaigns could be the result of a random process — and that it's a mistake to say that the success of a campaign is definitely determined by the merits of the campaign's ideas or by the efforts of the campaign organizers. If we can't prove how much luck has to do with it, we have to acknowledge that it could be quite a lot.

That doesn't mean the #fbrape campaign didn't have merit. Like the songs in Salganik's artificial worlds that were "good enough" to succeed if given the chance, the #fbrape campaign organizers did have a point. But we shouldn't take the phenomenal success of the campaign to mean that they had that much more of a valid point than many other campaigns which fizzled out due to bad luck. (Thus I think that articles like this one by Sandy Garossino, even if they're right about the problem of pro-rape content, are missing the point insofar as they imply that the movement's success was due to the hard work of the "smartest feminists on the planet." It's a bit early to declare that "On May 27th, women won the Internet.")

The initial complaints failed because of an arbitrary process, and then the #fbrape campaign succeeded because of an equally arbitrary process. The next such awareness campaign, even if it has merit, might not have luck on its side.

The arbitrariness in both of these processes can be fixed. For the first process — abuse reports submitted to Facebook — the fix is easy: have each complaint reviewed by a random subset of volunteers or employees who are signed up to review such content, as described above. This makes the outcome dependent on the attributes of the content itself (as it should be), rather than on luck or the size of the mob that wants something removed.

The arbitrariness in the second process — the process by which memes "catch fire" and spill over into mainstream media and broader awareness — is a taller order, but I think it can be fixed by essentially the same algorithm. What would be required is for a site that has the power to make new memes through its sheer dominance, like Google+ or Reddit, to implement the random-sample-voting algorithm for memes and calls-to-action. Any user could submit an argument (very broadly, any type of exhortation that "we" should do "something"), and these arguments would be reviewed by a random sample of, say, 20 other users on the site. The arguments that received the highest percentage of "Yes" votes would be promoted on the front page. (This is the algorithm I was pushing in a previous article, Censorship By Glut.)
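A sketch of the promotion side, under the same caveats (hypothetical names, a synchronous stand-in for real vote collection), might look like this. Only the 20-reviewer sample size comes from the article; the front-page count is an illustrative assumption.

```python
import random

SAMPLE_SIZE = 20   # random reviewers per submitted argument (from the article)
FRONT_PAGE = 10    # how many top-rated submissions to promote (assumption)

def rank_submissions(submissions, user_pool, get_vote):
    """Score each submission by the fraction of a random 20-user sample
    that votes "Yes," then return the best-scoring ones.

    get_vote(user, submission) -> bool is a placeholder for a real
    vote-collection mechanism.
    """
    scored = []
    for submission in submissions:
        # The sample comes from the whole user pool, so neither
        # friend-rallying nor sockpuppets can influence the score.
        sample = random.sample(user_pool, SAMPLE_SIZE)
        approval = sum(get_vote(u, submission) for u in sample) / SAMPLE_SIZE
        scored.append((approval, submission))
    scored.sort(reverse=True, key=lambda pair: pair[0])
    return [submission for _, submission in scored[:FRONT_PAGE]]
```

Ranking depends only on the sampled approval rate, which is exactly what removes the friend-to-friend feedback loop from the promotion process.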

The system sounds deceptively simple, but note what's missing: you can't manipulate the voting by rallying your friends to vote for your idea (or by creating multiple "sockpuppet" accounts to vote up your own post). You don't even have the accidental Salganik effects, where friend-to-friend recommendations create a chaotic feedback loop in which certain ideas race ahead of others due to random factors that have nothing to do with the ideas' merits. You've taken the arbitrariness out of the process, so that the fate of an idea is a function only of the attributes of the idea itself, which determine the percentage of randomly sampled users who vote it up. (This is not quite the same as rewarding the ideas with the most "merit" — rather, it rewards the ideas that the population being sampled perceives to have the most merit — but at least the outcome is not random, and the system cannot be gamed.)

Meanwhile, I hope that Facebook won't err too far on the side of abolishing sexist humor where the humor is in proportion to the offensiveness. In Women, Action & the Media's list of examples of "gender-based hate speech," they included a Facebook page titled "Hope you have pet insurance because I'm about to destroy your pussy," which I would optimistically like to think refers to enthusiastic sex and not rape. (The humor really derives from the fact that the words appear next to a physically unattractive man, a group that feminists never seem to get riled up about defending.) And what about jokes about anti-male violence, which were left out of WAM's examples? A friend of mine likes posting things on his Facebook like "I was trying to remember the name of Rihanna's ex, and then it hit me," which I thought was funny, but which some WAM supporters probably would have reported as "abusive content." I wonder how many of those same people would have filed a report if he'd said, "I was about to say the name of Lorena Bobbitt's ex, but I got cut off."