
Supreme Court Allows Reddit Mods To Anonymously Defend Section 230 (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Over the past few days, dozens of tech companies have filed briefs in support of Google in a Supreme Court case that tests online platforms' liability for recommending content. Obvious stakeholders like Meta and Twitter, alongside popular platforms like Craigslist, Etsy, Wikipedia, Roblox, and Tripadvisor, urged the court to uphold Section 230 immunity in the case or risk muddying the paths users rely on to connect with each other and discover information online. Out of all these briefs, however, Reddit's was perhaps the most persuasive (PDF). The platform argued on behalf of everyday Internet users, who it claims could be buried in "frivolous" lawsuits for frequenting Reddit if Section 230 is weakened by the court. Unlike other companies that hire content moderators, Reddit says the content it displays is "primarily driven by humans -- not by centralized algorithms." Because of this, Reddit's brief paints a picture of trolls suing not major social media companies, but individuals who get no compensation for their work recommending content in communities. That legal threat extends both to volunteer content moderators, Reddit argued, and to more casual users who collect Reddit "karma" by upvoting and downvoting posts to help surface the most engaging content in their communities.

"Section 230 of the Communications Decency Act famously protects Internet platforms from liability, yet what's missing from the discussion is that it crucially protects Internet users -- everyday people -- when they participate in moderation like removing unwanted content from their communities, or users upvoting and downvoting posts," a Reddit spokesperson told Ars. Reddit argues in the brief that such frivolous lawsuits have been lobbed against Reddit users and the company in the past, and Section 230 protections historically have consistently allowed Reddit users to "quickly and inexpensively" avoid litigation. [...]

The Supreme Court will have to weigh whether Reddit's arguments are valid. To help make its case defending Section 230 immunity protections for recommending content, Reddit received special permission from the Supreme Court to include anonymous comments from Reddit mods in its brief. This, Reddit's spokesperson notes, is "a significant departure from normal Supreme Court procedure." The Electronic Frontier Foundation, a nonprofit defending online privacy, championed the court's decision to allow moderators to contribute comments anonymously.
"We're happy the Supreme Court recognized the First Amendment rights of Reddit moderators to speak to the court about their concerns," EFF's senior staff attorney, Sophia Cope, told Ars. "It is quite understandable why those individuals may be hesitant to identify themselves should they be subject to liability in the future for moderating others' speech on Reddit."

"Reddit users that interact with third-party content -- including 'hosting' content on a sub-Reddit that they manage, or moderating that content -- could definitely be open to legal exposure if the Court carves out "recommending' from Section 230's protections, or otherwise narrows Section 230's reach," Cope told Ars.
  • of why we should keep section 230. Mods on Reddit ban people for producing facts that go against their narrative while letting trolls and bullies direct a discussion. You can't have a discussion online anymore. Social media is just noise. Even in forums of like minded people there are trolls.

    • If reddit mods didn't exist, subreddits wouldn't exist, because having them would become meaningless; they would get hijacked. Do you think reddit started out wanting to have mods? Try making a reddit clone without mods and see what happens. You're free to do so.

      • I think his point was that reddit moderation isn't remotely an example of moderation done right or a good application for Section 230 powers. In fact, if Section 230 only ever applied to reddit, I'd say fuck 'em; nuke the law and reddit from orbit in one fell swoop.

        Subreddits aren't moderated. They are kingdoms ruled by childish tyrants who throw temper tantrums banning things and people which should not be banned.

    • Mods on Reddit ban people for producing facts that go against their narrative while letting trolls and bullies direct a discussion. You can't have a discussion online anymore.

      Reddit is not the whole internet. There are lots of other places you can go. You are mad because you can't troll anymore.

      Even in forums of like minded people there are trolls.

      Yeah, but they banned you as soon as they caught on.

      • by MeNeXT ( 200840 )

        First I wasn't banned from Reddit. I just got tired of opinions disappearing. These weren't trolls like your comment.

        Did you add anything to this discussion? No? So why didn't you take your own advice and post it somewhere else?

        Your comments are an example of what I'm talking about. Nothing to say but you still need to be heard.

    • by AmiMoJo ( 196126 )

      And how would getting rid of Section 230 improve that? It's claimed that it would, hence the need to defend it.

      Reddit is a great example of a site that couldn't exist without S230. Masses of user content, reactively moderated by other users for the most part. If Reddit were liable for any of it, Reddit would have to employ people to do the moderation, and spend a huge amount of money defending against frivolous lawsuits from trolls who think they were banned for their political views. The site would be gone in a we

      • by MeNeXT ( 200840 )

        Nothing in my comment indicated getting rid of Section 230.

        Reddit is a prime example of what is wrong with Section 230. There need to be responsibility and consequences for statements that cause harm. We have freedom of speech, and yet we can't scream "FIRE" in a crowded cinema without suffering the consequences of our actions. The problem I see is that we have people shouting "FIRE" with no consequences for the harm they are causing, while we arrest the people who are asking "where?"

        Reddit is not just ban

        • by AmiMoJo ( 196126 )

          You keep saying "Reddit is" but I think you mean "Reddit volunteer moderators are".

          In most cases there are other sub Reddits for dissenting views, if they get banned. Maybe you can cite an example where that's not the case.

        • You say people need to be accountable for what they say as though that isn't already the case. Section 230 just makes it clear they're not accountable for what other people say.

  • by XopherMV ( 575514 ) on Friday January 20, 2023 @05:33PM (#63226492) Journal
    Jesus, we're counting on sympathy for Reddit moderators? I can't think of a less sympathetic group. Might as well kiss Section 230 goodbye.

    Reddit moderators are the most pedantic, over-empowered tyrants in the digital landscape. They punish users on a whim based on whatever custom nonsense they have for their particular subreddit. The punishments damn near always start with an outright ban from the subreddit. The appeals process is a complete joke. Effectively, if you don't obsequiously apologize to the moderator for your transgression, then your account is banned for life. You can get banned from Reddit itself if you decide to create a second account and access the subreddit from which your other account is banned.

    The effect has turned Reddit into a large echo chamber. Plan on your account being banned if you happen to write anything controversial. That's especially true if the majority of young, inexperienced, and largely naive users disagree with your statements.

    And, the fact is that Reddit moderators are largely unnecessary. Reddit users can largely police their own content through the use of up-voting and down-voting. If a statement is downvoted enough, it effectively disappears from view. Posts need to get upvoted in the first place simply to be seen. The site doesn't need tyrannical moderators imposing arbitrary decisions based on arbitrary rules.
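
    For what it's worth, the vote-driven visibility described above boils down to something like the following rough JavaScript sketch. The threshold, field names, and sample data are invented for illustration only; this is not Reddit's actual code or API.

        // Hide heavily downvoted comments and surface the highest-scored ones.
        // "hideBelow" is a made-up threshold, not a real Reddit setting.
        function visibleComments(comments, hideBelow = -5) {
          return comments
            .map(c => ({ ...c, score: c.upvotes - c.downvotes }))
            .filter(c => c.score > hideBelow)     // buried posts drop out of view
            .sort((a, b) => b.score - a.score);   // higher-scored posts surface first
        }

        // Example: the heavily downvoted comment simply never shows up.
        const thread = [
          { id: 1, text: "insightful take", upvotes: 12, downvotes: 1 },
          { id: 2, text: "obvious spam", upvotes: 0, downvotes: 9 },
        ];
        console.log(visibleComments(thread)); // only comment 1 remains visible
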
    • by timeOday ( 582209 ) on Friday January 20, 2023 @05:52PM (#63226530)
      I agree with everything you said, but I still don't think reddit should be prohibited by the government from operating as it does.
      • by mjwx ( 966435 )

        I agree with everything you said, but I still don't think reddit should be prohibited by the government from operating as it does.

        This.

        The thing about laws (and freedom in general) is that they apply most to the scummiest of people. They set a base level, an absolute limit of freedom and liberty, because quite simply the treatment the worst of us receive is the treatment the rest of us can expect. As it was once put, "the problem with defending freedom is that you spend most of your time defending scoundrels".

        The thing about Section 230 is that for Reddit, they aren't really pushing anything outside their own site. If you go to Reddit, you re

    • by Ksevio ( 865461 )

      I've never had to deal with any of that, so maybe it's just you.

      Even if they want to ban all the liberals from their Trump forum, they should be able to do that

    • by alvinrod ( 889928 ) on Friday January 20, 2023 @06:23PM (#63226580)

      I can't think of a less sympathetic group.

      Wikipedia editors.

    • by ToasterMonkey ( 467067 ) on Friday January 20, 2023 @06:25PM (#63226582) Homepage

      This is how internet chat rooms have ALWAYS worked. It's like whining about the mods in an IRC channel. Go to another room. This entitled "I can say whatever I want in someone else's room" crap is getting so old.

      • This entitled "I can say whatever I want in someone else's room" crap is getting so old.

        Same mentality as their participation trophies [pbs.org]. It's bad for us, but it's great for them!

      • I'm sick of unelected tyrants compelled to police the speech of others. Stop controlling others who simply disagree with you. Let people speak.

        And no, the default action of chat room moderators has not been to immediately kick and ban anyone with whom they disagree (presuming Reddit is anything like an IRC channel, which it's not).
    • 100% agree. Reddit moderators are the biggest joke on the internet. They're bullies. They're arrogant. They do not care about right or wrong. They only care that they are the tyrant and they have power over you.

    • Just to say this was precisely my own (brief) experience of Reddit and their pathetic mods.
    • Well section 230 clearly needs to go because a website banned you.

    • Comment removed based on user account deletion
    • Jesus, we're counting on sympathy for Reddit moderators? I can't think of a less sympathetic group.

      I've never had a single problem with a single Reddit moderator. The problem is you.

      Reddit moderators are the most pedantic, over-empowered tyrants in the digital landscape.

      Never tried to update a popular article on Wikipedia, huh?

      You can get banned from Reddit itself if you decide to create a second account and access the subreddit from which your other account is banned.

      Good. That's exactly what should happen.

      The effect has turned Reddit into a large echo chamber.

      Oh, that's the problem. You treated Reddit like it was one site, and you expected to be treated the same all over the site. There was no reason for you to do that, it didn't make any sense at all. You confused yourself.

    • Reddit moderators are the most pedantic, over-empowered tyrants in the digital landscape. They punish users on a whim based on whatever custom nonsense they have for their particular subreddit. The punishments damn near always start with an outright ban from the subreddit. The appeals process is a complete joke. Effectively, if you don't obsequiously apologize to the moderator for your transgression, then your account is banned for life. You can get banned from Reddit itself if you decide to create a second account and access the subreddit from which your other account is banned.

      I feel like there's a story there...

      And, the fact is that Reddit moderators are largely unnecessary. Reddit users can largely police their own content through the use of up-voting and down-voting. If a statement is downvoted enough, it effectively disappears from view. Posts need to get upvoted in the first place simply to be seen. The site doesn't need tyrannical moderators imposing arbitrary decisions based on arbitrary rules.

      It's lovely how "defenders of free speech" want the courts to effectively ban certain types of online forum.

      I wonder how many have clued into the fact that virtually all successful online communities are fairly liberal when it comes to banning accounts. Could it be that to have an effective community you actually do need to aggressively moderate?

      • by XopherMV ( 575514 ) on Friday January 20, 2023 @09:28PM (#63226932) Journal
        Slashdot's moderation system doesn't allow banning. And, Slashdot's community has been active and successful for 25 years. Its moderation system enables limited adjustment of message visibility, similar to Reddit's up and down votes. That results in Slashdot doing a better job of promoting free speech than Reddit. The comparison highlights the problem with Reddit's moderation, which is the moderators themselves.
        • by narcc ( 412956 )

          Slashdot's moderation system doesn't allow banning

          Yes, it does. It's not a power that comes with your mod points, but it is absolutely there.

          Slashdot's community has been active and successful for 25 years.

          There's what, like 100 regular posters still here? Not counting the mod point farming alts, of course.

          Its moderation system enables limited adjustment of message visibility, similar to Reddit's up and down votes.

          Reddit's system allows anyone to up or down an unlimited number of posts. Slashdot's system allows a not-at-all random group of users a limited number of mod points periodically. These two things are nothing alike.

          That results in Slashdot doing a better job of promoting free speech than Reddit.

          What do you base that on? The fact that our Nazi ascii art stays up forever? The fact that marking o

          • Slashdot's moderation system doesn't allow banning

            Yes, it does. It's not a power that comes with your mod points, but it is absolutely there.

            No. It allows the banning of IPs, which are not user accounts. Go to a different IP and you can log in. Further, the bans of those IPs are for denial of service attacks or for posting comments that break Slashdot's rendering. It's not for disagreeing with some tyrannical moderator.

            Slashdot's community has been active and successful for 25 years.

            There's what, like 100 regular posters still here? Not counting the mod point farming alts, of course.

            And yet, here we are having this conversation on a website that is 25 years old. Many, many other websites have come and long gone. Slashdot doesn't have the traffic it once did. But, it's still here and people are still using it.

            Its moderation system enables limited adjustment of message visibility, similar to Reddit's up and down votes.

            Reddit's system allows anyone to up or down an unlimited number of posts. Slashdot's system allows a not-at-all random group of users a limited number of mod points periodically. These two things are nothing alike.

            Random

            • by narcc ( 412956 )

              No. It allows the banning of IPs, which are not user accounts.

              They have banned and removed user accounts in the past. Don't for a moment think that's not an option just because it's not explicitly listed in the FAQ.

              And yet, here we are having this conversation on a website that is 25 years old. Many, many other websites have come and long gone. Slashdot doesn't have the traffic it once did. But, it's still here and people are still using it.

              That's hardly a success. There are countless individual subs on Reddit that get more traffic than Slashdot. Gee, I wonder why...

              Nazi ascii art isn't prevalent on Slashdot. I've never seen it in the long, long time I've been visiting here.

              That just tells me you're either barely active, a lying scumbag, or both. I saw some just a few days ago. It's not all over every article like it was a few years ago, but it's still common as dirt.

              People are largely fine with the Slashdot moderation system.

              What?! Are you new to the sit

            • by narcc ( 412956 )

              Now you're just making shit up to be a contrary asshole. Nazi ascii art isn't prevalent on Slashdot.

              I just stumbled on some that was posted earlier today:
              Now you're just making shit up to be a contrary asshole. Nazi ascii art isn't prevalent on Slashdot. [slashdot.org]

              I don't make shit up, asshole.

        • by skam240 ( 789197 )

          Slashdot's forum user numbers have been dropping for years and are a quarter or less of what they used to be, so I wouldn't hold Slashdot up in that context anymore. For all we know, stricter moderation might have helped maintain user numbers.

          • Slashdot's been around while other websites have come and gone. And yet, it still has people using its pages for discussion, like us. Yes, it's not as popular as it once was. People here are still active. And the website is still up and running after 25 years, which I'd call a success.
            • by skam240 ( 789197 )

              25 years is absolutely a good run, especially on the internet. I just wonder how much longer this site will be around, though; it seems that with every month that goes by, the participation rate in the forums drops a little more. That's gotta mean lower readership and page views as well.

        • Slashdot's moderation system doesn't allow banning. And, Slashdot's community has been active and successful for 25 years. Its moderation system enables limited adjustment of message visibility, similar to Reddit's up and down votes. That results in Slashdot doing a better job of promoting free speech than Reddit. The comparison highlights the problem with Reddit's moderation, which is the moderators themselves.

          No but they do ban [slashdot.org]:

          Do you ban people from Slashdot?

          Occasionally we ban IPs from which we see abuse, or disallow accounts from specific actions (such as posting or submitting stories) in response to spam, persistent flamebait, etc. If this happens unfairly to you, please read How do I get an IP Unbanned. These bans are relatively rare, but necessary when specific users or IPs disrupt service for others.

          Now bans are probably less common, but that's because /. still has a fairly technical focus compared to ot

          • Slashdot can ban IPs, which are not user accounts. Go to a different IP and you can still log in.

            And, Slashdot has a very narrow list of reasons for banning those IPs. These reasons include:
            * Your IP has been used to perform a denial of service attack (or the accidental equivalent) against Slashdot.
            * Your IP was used to post comments that break Slashdot's rendering.
            * You're using a proxy server used by someone who did one of those things.

            In short, it's limited to bad actors attacking the website
            • Slashdot can ban IPs, which are not user accounts. Go to a different IP and you can still log in.

              And, Slashdot has a very narrow list of reasons for banning those IPs. These reasons include:

              * Your IP has been used to perform a denial of service attack (or the accidental equivalent) against Slashdot.

              * Your IP was used to post comments that break Slashdot's rendering.

              * You're using a proxy server used by someone who did one of those things.

              In short, it's limited to bad actors attacking the website itself. None of the reasons include banning people for their message.

              or disallow accounts from specific actions (such as posting or submitting stories) in response to spam, persistent flamebait, etc

  • by backslashdot ( 95548 ) on Friday January 20, 2023 @05:42PM (#63226506)

    If any of you motherfuckers downmod me, I will sue your asses. Fuck you!!!

    I am glad over the last few years the right-wing dropped their principles and now support net-neutrality, "public utilities", interference in school curriculums, and even forced affirmative action (for white people, but hey it's a start.) Who knows, soon they might even support allowing the koran to be recited during school assemblies.

  • by DarkRookie2 ( 5551422 ) on Friday January 20, 2023 @05:47PM (#63226512)
    Everyone who wants 230 banned is butthurt that they got kicked out of a site because the owners didn't like them.
    That is it. I have yet to see any argument or data that says otherwise.
    Protect the kids? Fuck that! If you really really wanted to protect your children, you would disable the web entirely.
    • by ljw1004 ( 764174 ) on Friday January 20, 2023 @06:58PM (#63226642)

      Everyone who wants 230 banned is butthurt that they got kicked out of a site because the owners didn't like them. That is it. I have yet to see any argument or data that says otherwise.

      I've not been kicked out or banned from any social media sites, and yet don't agree with 230 in its current form. I don't know how else to answer you.

      The reason I don't agree with section 230 is because I think too much societal harm has come from recommendation algorithms. I got that impression from a few places... (1) a New York Times article about radicalization of Jan 6th folks via social media recommendations, (2) the plaintiffs' example in the current court case about terrorist radicalization, (3) a literature survey book I read called "are filter bubbles real?", (4) a study from Germany a few years ago which showed that decreases in violence against immigrants were correlated with social media outages, (5) honestly just watching how my own 9yo child clicks on the "recommended" links on youtube, at least until I wrote a javascript filter which removes recommendations from youtube.
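
      (For what it's worth, a filter like the one mentioned above can be as small as a userscript that hides the recommendation containers. The sketch below is not the poster's actual script; the CSS selectors are assumptions about YouTube's markup and will likely need updating as the site changes.)

        // ==UserScript==
        // @name   Hide YouTube recommendations (sketch)
        // @match  https://www.youtube.com/*
        // ==/UserScript==
        // Injects CSS that hides recommendation containers. Selectors are assumed.
        (function () {
          const selectors = [
            '#related',                                  // watch-page sidebar (assumed)
            'ytd-watch-next-secondary-results-renderer', // "up next" list (assumed)
            'ytd-rich-grid-renderer',                    // home-page feed (assumed)
          ];
          const style = document.createElement('style');
          style.textContent = selectors
            .map(s => s + ' { display: none !important; }')
            .join('\n');
          document.documentElement.appendChild(style);
        })();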

      Put it this way. Suppose the protections granted by section 230 have allowed an internet company to host ten billion user-submitted comments without legal liability, and it made a billion dollars in profit from this, and the company proudly says that it has suppressed 1 million pieces of offensive or harmful content that hurts or divides society, but it doesn't mention that there were an additional 99 million pieces of such content that it didn't suppress. In that case, section 230 has given the company a billion dollars and, in exchange, society has received 99 million pieces of harmful content that wouldn't otherwise have been placed there. I think reality is probably close to this, and I think it's a bad deal for society.

      (Some people will deny that such a thing as "content which is harmful to society" even exists, a position I don't know how to argue against but which seems ludicrous to me. Others will feel that any kind of restriction on free speech leads to worse outcomes than any other kind of restriction on society, and they'll often argue this with faulty analogies to 1930s Germany while ignoring the many healthy societies today that have stronger restrictions on free speech than the USA.)

      I also think that western liberal democracies are uniquely vulnerable to state-level attacks on the integrity of their societies, a kind of asymmetric warfare, because western liberal democracies tend to allow vastly more speech (including content that was put out by adversary nations specifically to weaken them) whereas other regimes like Russia and China clamp down a lot more. I suspect this happened in the case of Brexit, and was probably a significant factor in the Brexit win, but it's hard to find evidence for or against this suspicion so everyone should be skeptical of my views here.

      I think the only way to restore health and trust in society is through complete transparency about how filtering and ranking is done. I think that section 230 protections should apply only to recommendation/filtering systems that are 100% transparent about how they work, and that protections should be removed from recommendation/filtering systems that are secret. I think that doing this will help heal US society, and will improve how well western liberal democracies can defend themselves against autocratic states.

      (Is it possible to craft legislation that would define "transparency"? Yes I think it'd be possible. I wrote an initial draft at https://slashdot.org/comments.... [slashdot.org] )

      • by sfcat ( 872532 )

        The reason I don't agree with section 230 is because I think too much societal harm has come from recommendation algorithms. I got that impression from a few places... (1) a New York Times article about radicalization of Jan 6th folks via social media recommendations, (2) the plaintiffs' example in the current court case about terrorist radicalization, (3) a literature survey book I read called "are filter bubbles real?", (4) a study from Germany a few years ago which showed that decreases in violence against immigrants were correlated with social media outages, (5) honestly just watching how my own 9yo child clicks on the "recommended" links on youtube, at least until I wrote a javascript filter which removes recommendations from youtube.

        By that logic we should ban CNN, MSNBC and FoxNews. Not sure I disagree but in terms of the type of harm you are discussing, there is no real difference between a FB feed and MSNBC (or Fox if you prefer). The amount of misinformation is pretty constant between those two sources.

        I think the only way to restore health and trust in society is through complete transparency about how filtering and ranking is done.

        Absurd. This will just give people a better ability to game the algorithm. Users of social media sites won't look at an algorithm any more than they look at the click through license and for the same reason (they don't have the e

        • by narcc ( 412956 )

          Then the problem is recommendation algorithms, not section 230.

          To use a car analogy, the solution to the problem of drunk driving isn't to ban roads. Ugh...

        • by ljw1004 ( 764174 )

          ["I think the only way to restore health and trust in society is through complete transparency about how filtering and ranking is done."]
          Absurd. This will just give people a better ability to game the algorithm.

          I think society has already lost the war on gamed algorithms. That's why everything is dominated by click-bait and useless search results. Just as OSS invites eyeballs to remove bugs, and just as "security through obscurity" doesn't work as well as "security through transparency and many eyeballs", I think the solution to gamed algorithms is more eyeballs and more openness. I think there do exist solutions to gaming, but we're not even approaching them because the current "hidden and secret" paradigm isn't

        • The reason I don't agree with section 230 is because I think too much societal harm has come from recommendation algorithms. I got that impression from a few places... (1) a New York Times article about radicalization of Jan 6th folks via social media recommendations, (2) the plaintiffs' example in the current court case about terrorist radicalization, (3) a literature survey book I read called "are filter bubbles real?", (4) a study from Germany a few years ago which showed that decreases in violence agains

      • The reason I don't agree with section 230 is because I think too much societal harm has come from recommendation algorithms.

        That's a lie. If that were what you cared about then you could promote a law making the algorithms public or prohibiting doing certain things with them, without screwing with Section 230, which will also destroy sites which do not use any such thing and simply display newer content at the top.

        • by ljw1004 ( 764174 )

          If that were what you cared about then you could promote a law making the algorithms public or prohibiting doing certain things with them, without screwing with Section 230, which will also destroy sites which do not use any such thing and simply display newer content at the top.

          I don't know if you read what I wrote, but I did specifically spell out in my worked examples that sites which "don't use any such thing and simply display newer content at the top" would retain their existing Section 230 protections unchanged. I think we're in solid agreement.

    • If you really really wanted to protect your children, you would disable the web entirely.

      Heh... I grew up with the Internet when it was the wild west, people said anything they wanted, and pop-ups and pop-unders were jammed full of porn.

      My generation turned out just fine. I miss those days (except for the pop-unders, of course).

  • Nationalization (Score:2, Insightful)

    This completely reeks of the US Gov wanting to nationalize social media, so that whoever is in charge can control the whole narrative.
    The right started being butthurt by left leaning companies practicing their rights. Then the left saw what they were doing and jumped on the bandwagon.

    This will not improve discourse. This will not protect children. All this will do is hand more power to the Big Tech companies that they are trying to control. They are the only tech companies big enough
    • by skam240 ( 789197 )

      Nationalize social media? That's pretty far out there in terms of likelihood.

    • The right started being butthurt by left leaning companies practicing their rights. Then the left saw what they were doing and jumped on the bandwagon.

      The move to destroy Section 230 is overwhelmingly Republican.

      This will go badly, blow up in everyone's face, and we will all be literally owned by Apple, Google, or Microsoft.

      All known to be participants, willing or otherwise, in unconstitutional citizen spying programs. You are already 0wned by one of them

    • The right started being butthurt by left leaning companies practicing their rights.

      No, the right started being butthurt by right-leaning companies practising their rights by not being right-leaning enough. All the evidence now suggests that the allegedly left-leaning companies were in fact much more lenient on right-leaning material that broke the rules than on left-leaning material.

  • What we are seeing here are political arguments being played out as justification for the decision, and not merely an interpretation of the law. The fact that we've arrived at this situation on this and many other issues is a sign of how dysfunctional the US political system is; because it can't make good decisions in Congress, the courts get to make all the hard calls and implement significant changes. This is NOT GOOD, boys and girls!

  • It is really getting bad here - so many ignorant, off-topic comments, and useless repetitive banging on. I think Slashdot should have moderators to filter out all this crap.
    • It is really getting bad here - so many ignorant, off-topic comments, and useless repetitive banging on. I think Slashdot should have moderators to filter out all this crap.

      You do realize that your comment has zero value to this discussion, and would be filtered out, right? So while on one hand it proves your point, on the other hand it also proves that you're a hypocrite. It's pure trolling.

      • um, I was being sarcastic. My real point was that (at the time of my post) most of the comments had "zero value to this discussion," as they were about how bad reddit moderators are rather than about the Supreme Court case or the reddit moderators' amicus brief itself - I see now things have improved somewhat. It was boring and tedious to read.
  • by Budenny ( 888916 ) on Saturday January 21, 2023 @04:19AM (#63227178)

    Ars will find itself in an interesting position, depending how the case goes.

    It is a classic case of a publication with strong editorial views on current controversial topics: gender, race, climate, all aspects of Trump. Ars is sure that trans women are just women, that there is a climate crisis and that wind and solar are viable contributing solutions to it, that Trump is a fascist, and that Jan 6 was an attempted coup. But you will get only a partial view of the editorial team's position on these matters from the lead postings or stories.

    The lead postings or stories are by an editorial team clearly identified in the byline and usually moderate in tone - the tone and the views are not much different from what you will find in the NY Times.

    There are then reader comments attached to the stories. And here is where the 230 issue arises. Anyone who observes the comment pages over a period of years will have noticed that not only do contrary opinions get moderated down into invisibility, but anyone taking a stance which differs from the editorial view has their account cancelled. It's also noticeable that some of the more doctrinaire and vociferous regular posters appear suddenly and launch into full flow.

    This is where the difference between Ars and the NYT becomes apparent. The NYT also carries comments on stories. But the NYT being a newspaper is responsible for what it publishes, so its reader comments are pretty much treated as letters to the editor. The Ars comments are protected by 230.

    The effect of the combination of account cancellation by the editorial team and of moderation into invisibility is in practice to ensure that the comments section pretty much mirrors the point of view of the editorial team. Ars editorial would reply, so what, you don't like it, this is a private company, go elsewhere.

    They would also argue (as would many of the more vociferous commenters) that 230 is connected to the First Amendment. The First bans compelled speech. If Ars or any other site is obliged to carry comments and permit commenters with whom it disagrees, this (the argument goes) is compelled speech.

    So you will find people on Ars arguing a curious, confused mixture of things. One, that 230 is already implicit in the Constitution. Two, that repealing it would breach the First and so be unconstitutional. Or sometimes that it would make no difference, being already in the Constitution. Three, that it would be a Very Bad Thing, and would lead Ars to abolish all its forums and all reader comments immediately. Obviously not all these things can be correct at once.

    You also find Ars editorial arguing that the comment sections are unimportant to its business model, but that 230 in its present form is very important.

    The interesting position that Ars would find itself in comes from exactly the thing that has made some people find 230 increasingly problematic. The issue is that it's possible, by using the right policies and incentives, to manage user-contributed material in forums in such a way that the forums mirror the editorial stance, because all dissenting points of view and commenters have been either made invisible or banished.

    What is it then that justifies the very different liabilities that the NYT and Ars assume for reader postings? Why are the Ars forums not considered editorial content? They may not be written and bylined by the editorial team, but they are certainly managed in such a way that only approved points of view are permitted.

    We can also see the interesting position that Ars will find itself in if we compare with Slashdot. I don't think Slashdot will be materially affected by the abolition of 230, because it doesn't seem to manage the postings in the same way. Notably, it does not take them down, the moderation is entirely user driven, and the filter system gives readers fine grained control of what they are exposed to. Another site that can be compared is the climate skeptical site WUWT. It seems recently to have

    • Mod parent up. This is a concise and nicely balanced summary of the key issues and players.

      "What is it then that justifies the very different liabilities that the NYT and Ars assume for reader postings? Why are the Ars forums not considered editorial content? They may not be written and bylined by the editorial team, but they are certainly managed in such a way that only approved points of view are permitted."

      I agree, the disparate legal treatment of the same class of content written by the same class of pe

  • The Supreme Court is spared the sight of morbidly obese furries on rainbow-adorned mobility scooters.
