Censorship By Glut

Frequent Slashdot contributor Bennett Haselton writes "A 2006 paper by Matthew Salganik, Peter Dodds and Duncan Watts, about the patterns that users follow in choosing and recommending songs to each other on a music download site, may be the key to understanding the most effective form of 'censorship' that still exists in mostly-free countries like the US. It also explains why your great ideas haven't made you famous, while lower-wattage bulbs always seem to find a platform to spout off their ideas (and you can keep your smart remarks to yourself)." Read on for the rest of Bennett's take on why the effects of peer ratings on a music download site go a long way toward explaining how good ideas can effectively be "censored" even in a country with no formal political censorship.


In a country where you're free to say almost anything in the political arena, I think the only real censorship of good ideas is what you could call "censorship by glut". If you had a brilliant, absolutely airtight argument that we should do something -- indict President Bush (or Barack Obama), or send foreign investment to Chechnya, or let kids vote -- but you weren't an established writer or well-known blogger, how much of a chance do you think your argument would have against the glut of Web rants and other pieces of writing out there? Especially if your argument required people to read it and think about it for at least an hour? Perhaps your situation could be compared to that of a brilliantly talented band submitting a song for Matthew Salganik's experiment.

What Salganik and his co-authors did was recruit users through advertisements on Bolt.com (skewing toward a teen demographic) to sign up for a free music download site. Users would be able to listen to full-length songs and then decide whether or not to download the song for free. Some users were randomly divided into eight artificial "worlds" in which, while a user was listening to a song, they could see the number of times that the song had been downloaded by other users in the same world -- but only by other users within their own world, not counting the downloads by users in other worlds. The test was to see whether certain songs could become popular in some worlds while languishing in others, despite the fact that all groups consisted of randomly assigned populations that all had equal access to the same songs. The experiment also attempted to measure the "merit" of individual songs by assigning some users to an "independent" group, where they could listen to songs and choose whether to download them, but without seeing the number of times the song had been downloaded by anyone else; the merit of the song was defined as the number of times that users in the independent group decided to download the song after listening to it. Experimenters looked at whether the merit of the song had any effect on the popularity levels it achieved in the eight other "worlds".

The authors summed it up: "In general, the 'best' songs never do very badly, and the 'worst' songs never do extremely well, but almost any other result is possible." They also noted that in the "social influence" worlds where users could see each other's downloads, increasing download numbers had a snowball effect that widened the difference between the successful songs and the unsuccessful: "We found that all eight social influence worlds exhibit greater inequality -- meaning popular songs are more popular and unpopular songs are less popular -- than the world in which individuals make decisions independently." Figures 3(A) and 3(C) in the paper show that the relationship between a song's merit and its success in any given world -- while not completely random -- is tenuous. And if you're a talented musician and you want to get really depressed about your prospects of hitting the big time, Figures 3(B) and 3(D) show the relationship between a song's measured merit and its actual number of sales in the real world. (Although those graphs may cheer you up if you're a struggling musician who hasn't made it big yet -- maybe it's not you, it's just the roll of the dice.)
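The snowball effect is easy to reproduce in a toy model. The sketch below is not the authors' actual design -- every parameter (48 songs, 2,000 listeners, the cumulative-advantage weighting) is invented for illustration. Each simulated listener downloads one song, chosen in proportion to the song's hidden merit and, in the social-influence condition, also in proportion to its prior download count; a Gini coefficient then measures how unequal the outcomes are.

```python
import random

def run_world(merits, n_users, social, seed):
    """Simulate one 'world': each user downloads one song, chosen with
    probability proportional to merit and, in social worlds, to a
    cumulative-advantage factor based on prior downloads."""
    rng = random.Random(seed)
    downloads = [0] * len(merits)
    for _ in range(n_users):
        if social:
            weights = [m * (1 + d) for m, d in zip(merits, downloads)]
        else:
            weights = list(merits)
        song = rng.choices(range(len(merits)), weights=weights)[0]
        downloads[song] += 1
    return downloads

def gini(xs):
    """Gini coefficient of a list of counts: 0 = equal, 1 = one song takes all."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

rng = random.Random(0)
merits = [rng.uniform(0.1, 1.0) for _ in range(48)]          # hidden "quality"
independent = run_world(merits, 2000, social=False, seed=1)  # download counts hidden
influenced = run_world(merits, 2000, social=True, seed=1)    # download counts visible
# The social-influence world ends up with markedly higher inequality.
```

With the rich-get-richer weighting, which songs snowball depends heavily on the random early downloads, mirroring the paper's finding that social influence raises inequality while lowering predictability.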

As Richard Thaler and Cass Sunstein put it in their all-around fascinating book Nudge, where I first read about the Salganik study:

In many domains people are tempted to think, after the fact, that an outcome was entirely predictable, and that the success of a musician, an actor, an author, or a politician was inevitable in light of his or her skills and characteristics. Beware of that temptation. Small interventions and even coincidences, at a key stage, can produce large variations in the outcome. Today's hot singer is probably indistinguishable from dozens and even hundreds of equally talented performers whose names you've never heard. We can go further. Most of today's governors are hard to distinguish from dozens or even hundreds of politicians whose candidacies badly fizzled.

Is the blogosphere, or the "marketplace of ideas" in general, any different? If a random sample of bloggers were rated based on some independent measure of merit -- for example, independent ratings from a random sampling of blog readers who were looking at the bloggers' writing samples for the first time, analogous to users in Salganik's "independent" world -- and those ratings were then correlated with the bloggers' traffic or some other measure of success, it's not hard to imagine that the results would be similar to those of the 8-worlds experiment: the best often rise to the top, the very worst rarely do, but success in the vast middle would be close to random. In fact, while music listeners would have no logical reason to like a song just because others did, users in the blogosphere and other public forums would have several rational reasons to cluster around writers who are already popular: (1) errors are more likely to have been spotted and pointed out by someone else; (2) as an extension of that, others are more likely to have provided comments and other value-added content; (3) if you are the first person to spot an error, it's more important to point it out and stop the misinformation from spreading on a popular blog than on a minor blog that nobody has ever heard of. So the "snowball effect" of popularity in the blogosphere would be even more pronounced.

Then why do so many people believe in what Thaler and Sunstein call the "inevitability" of success based on merit, in domains like music, politics, and writing? I think it's because the belief is what scientists call unfalsifiable -- if the "best" acts are assumed to be the ones that end up on top of the pile, then the marketplace has always sorted the "best" content to the top, by definition. Since the definition is circular, the premise could never be disproved by any amount of counter-evidence -- even if an act that used to be popular suddenly drops off the radar, that could be seen as "proof" that they lost whatever magic touch they used to have, not as evidence of the arbitrariness of the market! The only disproof would be an artificial experiment like Salganik's, showing that once you get beyond a certain threshold of quality, commercial success has little relationship to independently measured merit -- but such experiments, which in Salganik's case required the cooperation of over 14,000 users, don't come along very often. And as long as most people don't realize how arbitrary the existing marketplaces are, there isn't enough demand to justify building a system that could work better -- indeed, to even justify asking whether a system could be designed that would work better.

And that, I think, is how "censorship by glut" really works. It's not just the sheer amount of written content that censors small voices -- if you happen to know about a particular writer that you consider a fount of wisdom, then the existence of a billion other Web pages won't stop you from reading that writer's content. And it's not as if there aren't plenty of people who realize that success can be highly arbitrary. The problem is that as long as most people assume that the existing marketplace of ideas does a good job of sorting the best content to the top, then they'll be more inclined to stay with the most popular news sites and blogs, and even the minority who know that it's largely a lottery, will have no effective way of finding the best content among everything else, so they'll end up sticking with the most popular sites as well. Worse, as a secondary effect, most people with something useful to contribute won't even bother, if they don't already have a large built-in audience. I know plenty of people who could write insightful essays about social and technological issues, essays that would give most readers a new perspective such that they would definitely say afterwards: "That was worth my time to read it." But it wouldn't be worth it to the writers, because they know that their content isn't going to get magically sorted into its deserved place in the hierarchy.

(My own favorite blog that nobody's ever heard of is Seth Finkelstein's InfoThought, which is usually logical and insightful and is only about 25% of the time about how "nobody ever reads this blog, so what's the point". His Guardian columns are also good and usually don't have that subtext, perhaps because it's considered impolite to use a newspaper's column-inches (column-centimeters?) to complain that you have no voice.)

So can this problem be avoided, or are inequality and arbitrariness just a permanent part of the marketplace for content and ideas? You could create an artificial world that would sort user-submitted content according to some other algorithm -- and even if it didn't give good writers the fame that they theoretically deserved in the larger world, it might still provide them with enough of an audience within the artificial universe to make it worth their time to keep writing. One option would be to use Salganik's "independence" world model, where users would read content without being able to see the ratings that other people had given to it, or even recommendations from similarly-minded friends within the system. The trouble is that without any information about what other readers liked -- without any starting point to sort good content from bad -- it may not be worth the reader's time to read through all the dreck to find the occasional buried treasure. I believe about as strongly as a person can believe that the existing marketplace for content is far from meritocratic -- for example, that there are probably thousands of songs on iTunes that I've never heard of but would nonetheless love -- but even I don't spend time listening to the 30-second clips of random songs on iTunes, because it takes too long to find the stuff I would like.

But I submit there is a solution -- a variant of an argument that I've suggested for stopping cheating on Digg, or building Wikia search into a meritocratic search engine, or helping the best writers rise to the top on Google Knol. The solution is sorting based on ratings from a random sample of users. The remainder of this speculation will be very theoretical, and will at times seem like a Rube-Goldberg approach to what should be a simple problem. But at each juncture, the complications to the algorithm are motivated by an argument that anything simpler would not work. At many points along the way, it will be tempting to throw up one's hands and say, "Why go to all this trouble, the existing system works well enough." But this statement is hard to quantify with any actual evidence -- unless you're just using the circular definition above, that whatever rises to the top is automatically the "best".

For music listeners, the gist of the algorithm is: when an artist submits a new song in, for example, the alt-rock category, the song is distributed to a random sample of 20 users who have indicated an interest in that genre. If the average rating from those users is high enough, the song gets recommended to all of the site's users who are interested in alt-rock. If the average rating is not high enough, then the artist receives a notification, perhaps with a list of comments from the listeners suggesting what to improve. As long as the initial random sample of users is large enough that the average rating is indicative of what the rest of the site's alt-rock fans would think, the good content will get to be enjoyed by all of the site's alt-rock customers, while the bad content will fizzle after wasting the time of only 20 people. If it turns out that the randomly selected users are typically too lazy to rate the songs that are submitted to them, you could even make artists submit $10 to have their songs rated by the focus group, and pay each of the 20 raters $0.50 for their trouble. Artists can't withhold payment as revenge for a bad rating, so the average ratings should still be proportional to the song's actual quality.
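As a concrete sketch, the gatekeeping step might look like the following Python. The sample size, threshold, and rating pools are all invented for illustration; this is not a description of any existing site.

```python
import random
import statistics

def screen_song(population_ratings, sample_size=20, threshold=3.5, rng=None):
    """Poll a random focus group drawn from the genre's listeners; promote
    the song site-wide only if the group's average rating clears the
    threshold, otherwise reject it (and, in a real system, return the
    group's comments to the artist)."""
    rng = rng or random.Random()
    focus_group = rng.sample(population_ratings, sample_size)
    avg = statistics.mean(focus_group)
    return ("promote" if avg >= threshold else "reject"), avg

# Hypothetical pools of 1-5 ratings the whole listener base would give:
strong_song = [5] * 80 + [4] * 20   # nearly everyone loves it
weak_song = [2] * 80 + [1] * 20     # nearly no one does

verdict, avg = screen_song(strong_song, rng=random.Random(0))
# every rating in strong_song is 4 or 5, so any sample averages at least 4
```

The only statistical requirement is the one stated above: the sample must be large enough that its average tracks the full population's average, so the 20-person verdict predicts what the whole fan base would think.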

At this point, you might object that this system suffers from the same unfalsifiable, circular reasoning as the belief that the marketplace rewards the "best" content, if the best content is the content that wins in the marketplace. If I define the "best" content to be the content that gets the highest average score in a random focus group, then of course this algorithm sorts the best content to the top, because that's how "best" was defined! But this system does actually have a non-trivial property: If you implement the system in multiple separate "worlds" (similar to those that Salganik created), then provided your focus groups are large enough to provide representative random samples, the same content should rise to the top in each of the worlds, unlike the results in Salganik's experiment.

This actually wouldn't be the case if the initial focus groups were not big enough -- then random variations in a few voters' opinions could cause many songs to succeed in one world and fail in another. So it's a non-trivial property that is not automatically true, and would not be true if you made an error in designing the system, like making the focus groups too small. But the larger the size of the random sample, the smaller the variance in the expected value of the average of their ratings, and the greater agreement you would expect between the results from different worlds.
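That relationship between sample size and cross-world agreement is just the shrinking standard error of the mean; a quick simulation (using an invented pool of 1-5 ratings for a single song) makes it visible:

```python
import random
import statistics

rng = random.Random(1)
pool = [rng.randint(1, 5) for _ in range(10_000)]  # one song's would-be ratings

def world_average(sample_size):
    """The average rating the song gets from one world's random focus group."""
    return statistics.mean(rng.sample(pool, sample_size))

def disagreement(sample_size, n_worlds=200):
    """How much the song's average rating varies across independent worlds."""
    return statistics.pstdev(world_average(sample_size) for _ in range(n_worlds))

small_groups = disagreement(5)    # worlds using 5-person focus groups
large_groups = disagreement(100)  # worlds using 100-person focus groups
# large_groups comes out several times smaller: the standard error of the
# mean falls roughly as 1 / sqrt(sample_size)
```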

As Salganik pointed out to me, this system does under-reward songs that might require repeated listenings over time to gain an appreciation of their qualities. But even this, strictly speaking, can be accommodated in exchange for cash -- I'll pay 20 users $2 each if they listen to my song once today, once in three days, and once again a week after that (the site could stream the song to them to provide at least some assurance that the users weren't cheating). This assumes some things, such as that repeated exposure has the same growing-on-you effect even if the exposure is forced -- but in the real world, songs often grow on you from repeated listenings that are "forced" anyway, if they're played in the doctor's office or on the radio when you don't bother to change the channel. And this might be more complicated than necessary -- often when a song grows on you, it at least interests you enough the first time you hear it that you'd give it a positive rating on the first listen, which is all that the site requires for the song's success.

However, if you try to adapt this trick to a meritocracy for written content, you run into different problems. With a song, if you poll a random sample of users, the odds are very small that anyone being polled will have a vested interest in the success of the song, like one of the band members or one of the song's producers (assuming the population of users is large enough, and the song's producers have not been able to create a huge number of "sockpuppet" accounts to manipulate the voting). So you can assume the ratings will be free of any prior bias. But with a political post, for example, if you write a pro-Bush or anti-Bush essay, it's quite likely that among a random sample of users, there will be people who are biased to vote up (or vote down) any post that has anything good to say about the President. The essays voted to the top may not be the best-written ones, but simply the ones that pander to the most popularly held opinions.

But if the "best" essays are not the ones that receive the highest percentage of positive votes, even when polling a random sample of independent users -- which I was advocating as the gold standard for measuring merit -- then how do you define what makes the "best" essays, anyway? There are many possible answers, but I suggest: a necessary condition for being among the "best" essays would be to convince the most people of something that they didn't believe before, without resorting to tricks such as blatantly fabricating statistics or attributing made-up quotes. This is not a sufficient condition for merit -- maybe the point of view that you're convincing people of is still wrong -- but I submit that if you're not at least changing some people's minds, then there's no point. An essay that changes a lot of people's minds in a random focus group is usually worth reading, if only to see why it has that effect.

Unfortunately, this doesn't suggest a better way to poll users about the merit of an essay, because if you ask users, "Were you a Bush supporter before reading this essay?" and "Were you a Bush supporter afterwards?", Bush supporters are eventually going to figure out that the way to give the essay a high score on the mind-changing scale, would be to (falsely) say that they were not a Bush supporter before reading the essay, but they were one afterwards. So you'd still end up rewarding the essays that reinforce pre-existing opinions instead of the ones that change people's minds.

From here the counter-measures and counter-counter-measures get increasingly complicated. For each category of essays that a user wants to rate, such as Bush opinion pieces, you could require new users to enter their current opinion: either pro-Bush or anti-Bush. Then if they were asked to rate a pro-Bush essay, they would only be able to vote that the essay "changed their minds" by switching their registered opinion from "anti-Bush" to "pro-Bush". But Bush supporters could sign up initially as anti-Bush, just in the hopes of being part of a random focus group so they could cast their mind-changing vote for a Bush essay by changing their registration to "pro-Bush"! However, each user would only be able to do that once -- or do you allow users, after they've switched from anti-Bush to pro-Bush, to "reload" by spontaneously switching back to anti-Bush for no reason at all, so they're all set to cast a mind-changing vote for the next pro-Bush essay? Or would they only be allowed to switch back to anti-Bush, by casting a mind-changing vote as part of a random focus group for an anti-Bush essay -- thus giving a boost to an anti-Bush screed, as part of the price they pay for the next vote they cast for a pro-Bush piece? Then users could still game the system, by switching to "anti-Bush" when casting a vote for a very poorly written anti-Bush essay that they don't think anybody else will vote for anyway, and then switching back to "pro-Bush" only for the good essays that have a shot, hoping that their votes will coalesce around the decently-written pro-Bush essays and push them to the front page...
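The registration mechanics in that scheme are easier to see as code. This sketch (the class and method names are my own invention) enforces the one rule all the counter-measures above build on: a mind-changing vote counts only if it flips the rater's registered opinion, so each flip has to be spent.

```python
class RegisteredRater:
    """A rater registers a current opinion on a topic ('pro' or 'anti').
    A mind-changing vote for an essay counts only if the essay argues the
    side the rater is NOT currently registered on, and casting the vote
    flips the registration, so flips cannot be stockpiled."""

    def __init__(self, opinion):
        assert opinion in ("pro", "anti")
        self.opinion = opinion

    def cast_mind_changed_vote(self, essay_side):
        if essay_side == self.opinion:
            return False  # can't claim conversion to a view already held
        self.opinion = essay_side  # the vote spends the flip
        return True

rater = RegisteredRater("anti")
assert rater.cast_mind_changed_vote("pro")      # counts; now registered "pro"
assert not rater.cast_mind_changed_vote("pro")  # a second "pro" vote is wasted
```

Note that this closes off repeat voting but not the residual exploit the paragraph ends on: a partisan can still register on the opposite side once, purely to save the flip for the essay of their choice.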

Am I over-thinking this? I submit this is an area where there's been too much under-thinking. Haven't we all been tempted to believe that the marketplace of ideas -- not to mention bands, blog posts, and business ventures -- efficiently sorts content to the place in the hierarchy of rewards that it deserves, without having any real evidence for this, except the circular definition of "quality" as being proportional to success? And the more people believe this, the more that marginalized voices will effectively be censored, even when they have something brilliant to contribute. We should at least think about ways that we could do better. Or else, prove logically that it can't be done (a logical proof can only approximate the real world, but it could show that such a pure meritocracy would be very improbable, or wouldn't work well). However, I think the ideas above make it seem unlikely that a meritocracy is logically impossible. Maybe they're a step in the right direction. Maybe someone else's ideas would be better. The important thing is that a meritocratic algorithm be judged by something other than a circular definition, which simply decrees by fiat that the winning content is the best.

Comments:
  • Basically, you do what you think others want you to do. This... this is not news.

    However, it's good to see it being properly analyzed. I'll need at least an hour to think about this.
  • by hey! ( 33014 ) on Monday December 01, 2008 @02:40PM (#25948651) Homepage Journal

    Yes. But we don't have our own network that feeds us back our viewpoint all day long. We scarcely have any print media left for that matter. It's not a profitable viewpoint.

    I suppose MSNBC might be a counter example of somebody trying to grab a distinct market segment off of Fox, and there is some legitimacy to that. But after all they took Olbermann and Matthews off their live event anchoring because they'd be perceived as biased. I think that was a good decision, but it is not something Fox would ever do.

    And that's one fundamental difference between a liberal and a conservative. A liberal values, at least in principle, contrary viewpoints. Naturally, we don't live our principles any more consistently than conservatives do, but those principles are, in fact, different when it comes to the value of expanding one's world view.

  • by Relic of the Future ( 118669 ) <dales@digi[ ]freaks.org ['tal' in gap]> on Monday December 01, 2008 @03:12PM (#25949263)
    The solution the author presents is not entirely unlike an idea I've had on my own, but applied to a completely different realm: moderation of internet forums. Many people have noticed that a site tends to coalesce toward a particular "group think" as it goes along (Slashdot hates copyright; every political blog is either left- or right-leaning; etc.)

    My idea goes in two stages: in stage one, a new user can only indicate whether they agree or disagree with a comment. Once the system can, by comparison with other users, determine with some certainty what a user will agree with, they then can instead indicate how well-written (or compelling or convincing) those comments are. The trick is users are not shown low-rated dissenting opinions, only the most highly-rated; and when a reply is made, again, it will only make it back to the dissenting camp if it is highly-rated.

    The idea is to weed-out the flame-warriors and troll-feeders, to cut through the glut, and get the really interesting ideas in front of people, which is how it's similar to this.
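A toy version of that two-stage switch might look like this (the vote counts and the consistency threshold are invented, and measuring "predictability" as the one-sidedness of agreement on a single topic axis is a deliberate simplification of the commenter's idea):

```python
def vote_mode(agreements, min_votes=20, min_consistency=0.8):
    """Stage 1: the user only casts agree/disagree votes. Once enough votes
    are in and the user's leaning on this topic axis is predictable, the
    user graduates to stage 2 and rates how well-written comments are.
    `agreements` is the user's history of True/False agree votes."""
    if len(agreements) < min_votes:
        return "calibrating"
    leaning = sum(agreements) / len(agreements)
    consistency = max(leaning, 1 - leaning)  # how one-sided the history is
    return "quality-voting" if consistency >= min_consistency else "calibrating"

# a user with 25 consistent agree votes graduates; a 50/50 voter does not
```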

  • Rate the raters (Score:4, Interesting)

    by Squiffy ( 242681 ) on Monday December 01, 2008 @03:16PM (#25949355) Homepage

    In answer to the question about how to rate blog essays, I suggest that we need to rate the raters. How do we do that? I think a system can be built into the threads of discussion in response to an essay. If people rate your comment highly, it increases your standing as a rater, and your ratings figure more strongly into the rating metric. But people can't just rate. They must also supply, in the form of a comment, their reasoning behind the rating, which opens their comment and rating to responding comments and ratings, and so on. If people read and understand the terms of comment submission so they know that the point of the site is to rate the quality of reasoning, not the flavor of ideology, the system should correct itself.

    Then again, this system assumes that people will behave rationally, which is dubious, as any economist or divorce lawyer will tell you.
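A minimal sketch of that weighting (the numbers and the update rule are invented for illustration): each rating is weighted by its rater's standing, and a rater's standing drifts toward how well their own explanatory comments are received.

```python
def weighted_score(ratings_and_standings):
    """Combine (rating, rater_standing) pairs into one display score."""
    total_weight = sum(w for _, w in ratings_and_standings)
    return sum(r * w for r, w in ratings_and_standings) / total_weight

def update_standing(standing, comment_score, rate=0.1):
    """Nudge a rater's standing toward the score (in [0, 1]) that their
    accompanying comment -- the required reasoning -- itself received."""
    return (1 - rate) * standing + rate * comment_score

# A high-standing rater (weight 3.0) giving 5 stars outweighs a
# low-standing rater (weight 1.0) giving 1 star:
score = weighted_score([(5, 3.0), (1, 1.0)])  # (15 + 1) / 4 = 4.0
```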

  • Rating algorithms (Score:4, Interesting)

    by bendodge ( 998616 ) <bendodge AT bsgprogrammers DOT com> on Monday December 01, 2008 @03:21PM (#25949443) Homepage Journal

    While I admit I haven't spent nearly as much time thinking or writing about this as Haselton has (he seems to have a great deal of time), I do like this paragraph particularly:

    And that, I think, is how "censorship by glut" really works. It's not just the sheer amount of written content that censors small voices -- if you happen to know about a particular writer that you consider a fount of wisdom, then the existence of a billion other Web pages won't stop you from reading that writer's content. And it's not as if there aren't plenty of people who realize that success can be highly arbitrary. The problem is that as long as most people assume that the existing marketplace of ideas does a good job of sorting the best content to the top, then they'll be more inclined to stay with the most popular news sites and blogs, and even the minority who know that it's largely a lottery, will have no effective way of finding the best content among everything else, so they'll end up sticking with the most popular sites as well. Worse, as a secondary effect, most people with something useful to contribute won't even bother, if they don't already have a large built-in audience. I know plenty of people who could write insightful essays about social and technological issues, essays that would give most readers a new perspective such that they would definitely say afterwards: "That was worth my time to read it." But it wouldn't be worth it to the writers, because they know that their content isn't going to get magically sorted into its deserved place in the hierarchy.

    I agree that there seems to be a lot of mob mentality and snowballing in Internet writing, but I think there are some external factors that are left out of his analysis. I think that the large chunk of people who 'can't be bothered' to contribute don't contribute because they have a personally successful life. I know it's gross stereotyping, but it seems as though the bulk of people who spend their time spouting ideals (Communists, OSS giants, pop stars, Obama) have done little to none of what society considers real work. These people have far more free time than personally ambitious, hardworking people who pursue personal success instead of a career in changing the world. Thus, these people who have too much time on their hands distort the written contents of the net. (Please keep in mind that this is a draft of a 5-minute theory, so it's sure to have some holes.)

    As far as remedies go, I think rating algorithms need to be much more sophisticated. For example, 5-star scales could calculate the rating based on the mean of the mode star and its two neighbors' frequencies.

    For simplicity, let's assume one person clicked 1, two people clicked 2, three people clicked 3, etc. This method would discard stars 1-3 and calculate a display rating of 4.5, instead of a simple mean of 3.6. By totally discarding far-out ratings, we might be able to keep ratings from all gravitating to the middle. This is another 5-minute theory, and I'm not a math whiz, so I'm sure there's a better/simpler way to implement a deviation scheme like this, but it's a thought.

    Hmm, for added fun, try taking ALL ratings in a database and adjusting them all on a curve! But that's liable to guzzle server resources...
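Read literally as "keep only the most-clicked star and its adjacent stars" (clamped to the 1-5 range, which is my assumption), the commenter's scheme reproduces the worked example above:

```python
from collections import Counter

def mode_neighborhood_rating(votes):
    """Display rating computed from only the modal star and its immediate
    neighbors within 1-5; far-out votes are discarded. Ties for the mode
    are broken arbitrarily in this sketch."""
    counts = Counter(votes)
    mode = max(counts, key=counts.get)
    keep = [s for s in (mode - 1, mode, mode + 1) if 1 <= s <= 5]
    total = sum(counts[s] for s in keep)
    return sum(s * counts[s] for s in keep) / total

votes = [1] * 1 + [2] * 2 + [3] * 3 + [4] * 4 + [5] * 5
# mode is 5, so only stars 4 and 5 survive: (4*4 + 5*5) / 9 = 41/9, about 4.56,
# versus a simple mean of 55/15, about 3.67 -- close to the 4.5 and 3.6 above
```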

  • Re:All Religions... (Score:3, Interesting)

    by oldspewey ( 1303305 ) on Monday December 01, 2008 @03:38PM (#25949753)

    Many famous scientists and inventors in history have had to overcome preconceived ideas and battle against the existing paradigm of their time. The same is true now.

    One good example (among many) is Alfred Wegener's [wikipedia.org] attempt to advance his theory of continental drift.

  • by tobiah ( 308208 ) on Monday December 01, 2008 @03:40PM (#25949789)

    "Censorship" (for lack of a better word) is occurring not due to the inane mob network effect of the masses, but is the fault of the ranking algorithm.
    Come up with a better algorithm and merit will be more accurately and "fairly" distributed. Of course, there are a lot of related stories out there, something like the Netflix competition [slashdot.org] may produce a better algorithm, although it may end up being too damn complicated.
    I see this more as a math/engineering story; you can complain about the behavior of mobs, or you can fix it with math.

  • by Garwulf ( 708651 ) on Monday December 01, 2008 @03:40PM (#25949793) Homepage

    That's not exactly uncommon, and it happens in just about every field where somebody can have an opinion. On here, the place where it tends to stand out for me is in the copyright debate - but then again, I've been a professional writer for around ten years, and a small press publisher now for two - I know most copyright issues like the back of my hand as an insider.

    There are people who don't like having their preconceptions challenged, even when there's ample evidence against them. One of my favorite moments on here was a discussion with this one fellow who was absolutely convinced that an ISBN number was a copyright, or close enough to it to be the same thing. He was still maintaining it after a deluge of information to the contrary, including Wikipedia articles, links to copyright offices, and links to the authorities that issue ISBNs.

    My theory is that once somebody becomes set in an ideology, evidence to the contrary challenges them on too deep a level. If it's an ideology that can actually cause harm to somebody else, proof that they're wrong also carries with it the burden of guilt for causing harm, which makes it even harder for them to come to grips with it.

    That's my theory, anyway, for what it's worth.

  • by Tekfactory ( 937086 ) on Monday December 01, 2008 @03:55PM (#25950097) Homepage

    Excuse me if this seems to ramble.

    I once read an article about rating systems that were resistant to gaming, unlike eBay's rating system. The system in question weighted positively the things you rated highly and negatively those that you didn't. Over time it tended to only show you things that were rated highly by people who rated things similarly to you.

    This leads to clustering of people with similar viewpoints, but lessens the effect of sockpuppets, trolls and griefers. They would have to be rated positively by enough people consistently in the same cluster to game the system.

    I wasn't looking at this for something like eBay, but rather an MMO. I also wonder sometimes about a Firefox plugin, but I digress.

    I'd like to further refine this system based on my experience with Amazon's recommendations. Some friends and I have noticed that if we buy a very new or niche publication, we will get weird and uneven recommendations off that purchase until enough people buy the book to smooth out the recommendations.

    Unfortunately Amazon only has "I own this" and "not interested" as responses. It doesn't have enough dimensions, and doesn't factor in reviews at all. I buy one video in a series, and I get recommendations for that series and for other, similar series. When I say I am not interested in one of that other series' episodes (say, season 1), I still get recommendations for the rest of the series. I would like to be able to 'deny all' but I can't. If I wanted to tell it not to recommend horror movies, I can't.

    Likewise if I saw something online I didn't want to read, and I consistently didn't like, I'd filter it, not one blog post, but the author.

    Ok, so here I am, ideally, rating things and filtering things, until at last, as the parent writes, I'm in my own "world": suddenly CNN, the BBC, and Joe Blogger have an equal voice, because they're all narrowcasting straight into my own little insular 'bubble' on the internet.

    To which I'd like to add we need more control, and more dimensions on this filtering thing if it's really supposed to work. I'd love to mod up stories 'Thought-Provoking', or 'I want my 5 minutes back'.

    I think there is an answer out there, and I think it has something to do with self-organizing systems.

  • by shaper ( 88544 ) on Monday December 01, 2008 @04:04PM (#25950277) Homepage

    Please don't invoke the fire-in-a-crowded-theater argument. It is most often inappropriate to do so. Shouting "Fire!" isn't really even speech. It is just the raising of an alarm, similar to pulling a fire alarm. Just because it is spoken does not raise it to the level of "speech" in the context of freedom of speech. Most people don't know or understand the origins of this argument and thus over-use it in inappropriate places. See this excellent analysis by Alan Dershowitz: http://www.theatlantic.com/issues/89jan/dershowitz.htm [theatlantic.com]

    The more appropriate response is to talk about "time and place" restrictions and mitigation of "clear and present danger" caused by certain types of "speech", such as incitement to riot. But even then, one has to be very careful that the motivation is avoiding imminent, needless harm and not just suppressing speech we don't like. I personally tend to think that most hate speech laws lean more toward the latter than the former.

  • by randyest ( 589159 ) on Monday December 01, 2008 @04:14PM (#25950413) Homepage

    While I agree hate speech laws are a stupid idea and a dangerous slippery slope...I'm less and less convinced that it wouldn't be worth it to forever silence people [with whom I don't like and/or agree]

    That is dangerous and, to be blunt, stupid and short-sighted thinking.

    The leftists these days what unified groupthink. The right wingers want everyone to be an individual so long as they are all identical individuals.

    What? That's also stupid. You have absolutely nothing to substantiate that ridiculous assertion (not to mention your first sentence is nonsensical.)

    The rest of your post is equally ridiculous and without base or merit. I consider myself a liberal so please shut up and stop making the rest of us look bad.

  • Intriguing... (Score:3, Interesting)

    by crmarvin42 ( 652893 ) on Monday December 01, 2008 @04:23PM (#25950583)
    I wonder how many /. readers think of themselves as members of the "Merit" group instead of the "Social" group because they (mistakenly?) believe that they aren't affected by hit counters, since they don't consciously pay attention to them.

    Taking it one step further, I wonder how many of the group above use that as personal validation that their opinions are "Correct" and everyone else's are "Wrong".
  • by chrono325 ( 796121 ) <chrono325@gma i l . com> on Monday December 01, 2008 @04:53PM (#25951041)

    Wow, that was an awesome article!

    I have been thinking about [haxney.org] the same problem for a little while, and have some comments on some of the stuff he proposed.

    It doesn't surprise me at all that Salganik's experiment showed that popular songs would become more popular, almost regardless of their quality. This seems to be a hard-wired human character trait, to conform to popular opinion, even if it contradicts direct observation (see the Asch conformity experiments [wikipedia.org] or the No soap, radio [wikipedia.org] jokes). I think that any "solution" to this problem is likely to be fighting a losing battle against peer pressure, since people will likely try to subvert the "find the merit" process in order to figure out what the group thinks.

    On the topic of solving the problem of pre-existing bias, I say, "why bother?" He proposes a number of increasingly complex (though well-thought-out) solutions to the "Pro-Bush, Anti-Bush" article problem, all of which, I think, would be doomed to failure. This seems to be a cat-and-mouse game of "outsmart the rater" which I think the creator of the system would lose. If you try to make a smarter process of preventing people from injecting their bias into an article, they will just figure out a way around it. Rather than trying to outsmart the rater, simply include their bias in the rating of the article.

    If you saw an article with a rating that said "95% of Bush supporters liked this article," it would tell you something about it, as would "90% of Kerry supporters also liked this article." You could rely on self-reporting or fancy statistical extraction of preferences to figure out who is a "Bush supporter" and who is a "Kerry supporter." That way, you wouldn't have to trick people into anything.
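    As a toy illustration of that kind of per-group rating display (hypothetical code, with self-reported group labels simply taken at face value rather than statistically inferred):

    ```python
    from collections import defaultdict

    def group_approval(votes):
        """votes: list of (group, liked) pairs, where `group` is a
        self-reported label and `liked` is a bool. Returns a dict
        mapping each group to the percentage who liked the article."""
        totals = defaultdict(int)
        likes = defaultdict(int)
        for group, liked in votes:
            totals[group] += 1
            if liked:
                likes[group] += 1
        return {g: round(100 * likes[g] / totals[g]) for g in totals}

    def summarize(approval):
        """Render the per-group percentages as a display string."""
        return "; ".join(f"{pct}% of {g} liked this article"
                         for g, pct in sorted(approval.items()))
    ```

    The display problem I mention below (keeping the list of who-said-what short) would show up here as soon as the number of groups grows past a handful.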

    Additionally, who says something is an important part of what is being said. If you see an article about how 9/11 was a conspiracy by a well-known Truther, that is a very different piece of information than if President Bush says it. Likewise, a heavy metal fanatic liking a heavy metal song sends a different message than someone who thinks Vivaldi is just a young upstart and a passing fad liking the same song.

    There could be problems with this (like how to keep the display of who said what short enough to be comprehensible), but it could be a step in the right direction, or at least something interesting to think about.

  • by smellsofbikes ( 890263 ) on Monday December 01, 2008 @05:06PM (#25951231) Journal

    I hadn't ever thought about it that way: if the world pretty much revolves around you, you're not really self-centered.

    However:
    >It seems the news stations cater to what people want to watch, instead of what's important.

    To the news broadcasters, what people want to watch *is* what's important.
    The problem is that that's self-amplifying. If a lot of people wanted to watch educational, world-centric news, they'd provide that, and because it's available, more people would start to watch it. And, indeed, that's exactly what Bennett Haselton is writing about: people watch what everyone else is watching, and then even more people do, and what everyone else is watching is the story about Annette's cat. And, because reality is a collective hunch, that *is* what's important, as much as we might like to argue the point.

  • by fugue ( 4373 ) on Monday December 01, 2008 @08:44PM (#25953723) Homepage

    There is a strong correlation between those in scientific fields and those with certain political persuasions. There is also a strong tendency for science to weed out people who seek out information solely to validate already-existing views, rather than being open to absorbing a variety of pieces of information and reaching the best-supported conclusion.

    If you take a group of people who can be shown to be better than average at incorporating new information and re-evaluating preconceived notions, and demonstrate what their opinions are generally (in this example, perhaps where they are in a political spectrum), then you have a piece of evidence that wherever they are on that spectrum is based on reason rather than on confirmation bias [wikipedia.org].

    The exact same thing can truthfully be said of those on the left of the political spectrum.

    People with average confirmation bias show up on "both" sides, but people with less confirmation bias tend to show up mostly on one side. I find this most interesting.

    It might also be worth mentioning that confirmation bias is obviously stronger than average among certain (large!) subsets of religious people, and that they tend to be (coincidence?) on the other side of the political spectrum in the USA. Finding out which side that is may be another interesting commentary on the logical consistency and credibility of that end of the spectrum...
