
Has Section 230 'Outlived Its Usefulness'? (thehill.com) 278

In an op-ed for The Wall Street Journal, Representatives Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) made their case for why Section 230 of the 1996 Communications Decency Act has "outlived its usefulness." The provision protects online platforms from liability for user-generated content, allowing them to moderate content without being treated as publishers.

"Unfortunately, Section 230 is now poisoning the healthy online ecosystem it once fostered. Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children. Congress's failure to revisit this law is irresponsible and untenable," the lawmakers wrote. The Hill reports: Rodgers and Pallone argued that rolling back the protections on Big Tech companies would hold them accountable for the material posted on their platforms. "These blanket protections have resulted in tech firms operating without transparency or accountability for how they manage their platforms. This means that a social-media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from posts selling drugs, illegal weapons or other illicit content," they wrote.

The lawmakers said they were unveiling legislation (PDF) to sunset Section 230. It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms." "Our bill gives Big Tech a choice: Work with Congress to ensure the internet is a safe, healthy place for good, or lose Section 230 protections entirely," the lawmakers wrote.

  • by zendarva ( 8340223 ) on Wednesday May 15, 2024 @08:04AM (#64473533)

    C'mon, this is just silly.

    • by Enigma2175 ( 179646 ) on Wednesday May 15, 2024 @08:58AM (#64473705) Homepage Journal

      Exactly. Nope, it hasn't outlived its usefulness, question answered. Their justification is stupid too:

      "This means that a social-media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from posts selling drugs, illegal weapons or other illicit content," they wrote.

      If the post is to facilitate illegal activity, then bust the user for that illegal activity. If someone is selling drugs on Facebook, then arrest them for that. Why does the medium matter? If a drug dealer is copying flyers to post around his neighborhood, are you going to indict Kinko's? The problem is that it's legal to TALK about drugs or guns, and what these assholes really want is to regulate the speech itself, instead of actually policing the illegal activity.

      • I agree. This change to Section 230 is not well thought out.
        • There isn't a change yet to say anything about. They are asking the companies to work with them to create one. It sounds like they are taking the approach that by somehow promoting or recommending certain posts in search of "engagement" the companies might lose some protection.

          I don't really expect it to be great, but I'm really tired of recommended content that I haven't specifically looked for, so the end result might be better for me, but possibly worse for other reasons.

          • by Calydor ( 739835 ) on Wednesday May 15, 2024 @11:53AM (#64474423)

            I think we all know what will happen if the platforms aren't protected from liability for the stuff they show: Draconian AI-fueled censorship.

            The guy who just wants to talk to people about feeling suicidal because he misses his brother who died from an overdose of heroin will get banned with no recourse, no ability to ever explain himself to an actual human being (and who knows what THAT will do to his suicidal mood), while the guy cleverly telling his friends that he's got some Fresh Time Share Demos to split with them will fly entirely under the radar because the AI can't tell that apart from ACTUAL time shares.

      • Sites and forums get busted for piracy and DMCA violations all the time. Silk Road couldn't sell drugs and host prostitution. The Section 230 argument doesn't work to protect anyone involved in criminal activity. But libel claims can't be so broad that they sweep in Xitter and every ISP, which is exactly what 230 is supposed to ensure. But you still can't knowingly allow it on your platform, so try not to record shady behavior in internal company emails that can be found during discovery.

    • It's not that silly, but the issue isn't that it's outlived its usefulness so much as that today's megacorp-run, world-changing social media mega-sites have outgrown legislation made for '90s hobby sites and web startups. Perhaps Section 230 should be restricted to non-commercial websites and those run by companies below a certain revenue amount. But to simply repeal 230 is a half-baked knee-jerk idea.

      • Take your sentence, and replace "230" with "the first amendment" and it reads *exactly* the same.

        • Now that's just silly. The first amendment was made for '90s hobby sites and web startups? Why would the first amendment be involved in the issue of liability for user-generated content on the web anyway?

  • by xevioso ( 598654 ) on Wednesday May 15, 2024 @08:09AM (#64473543)

    Every few years people bring this up again, utterly ignoring the predicted response of the tech companies.

    Look, you dummies. It's very simple. If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability for those users to produce content. Really it's that simple.

    They do not have the resources to put human beings in front of machines to handle every complaint, monitor every message and handle every instance of something that might run afoul of a law somewhere. They can do it in small batches when required by law for specific instances, but a *blanket removal* of protection will mean these companies will shut down user content of any type rather than deal with liability.

    And that is *bad* for the internet, not good. Christ in a chicken basket, why does this keep coming up?

    • by virtig01 ( 414328 ) on Wednesday May 15, 2024 @08:12AM (#64473545)

      Christ in a chicken basket, why does this keep coming up?

      ...because it's an election year?

      • by Budenny ( 888916 ) on Wednesday May 15, 2024 @08:40AM (#64473645)

        "...because it's an election year?"

        No. It's because people of both parties have the feeling that something has gone wrong with it. This is Biden:

        https://www.cnbc.com/2020/01/1... [cnbc.com]

        And the right wing objections are too well known to need referencing.

        The problem is, social media operators in effect run forums with editorial control, but escape all responsibility for their content. They do this by moderation policies and by algorithmic editorial interventions.

        Now, the NYT does that too. But it does that because it has liability. The question people at various points on the political spectrum have trouble with is this: why should Facebook have no liability for what its users publish, when the NY Times has total liability for its readers' letters pages?

        The original basis of Section 230 was that CompuServe wasn't a publisher and should be granted the right to censor offensive content without becoming one. However, social media operators arguably have become publishers, but in doing so have become one particular section of the media without the responsibilities and liabilities that the others all have.

        If you want to see this in action, go to Ars Technica and read the comments on some controversial topic: is Trump fascist, are there 74 different genders, is there a climate crisis, do renewables work adequately.

        You'll find that a mixture of moderation and expulsion of dissidents ends up making the discussion comments uniformly reflect the Ars editorial position. But this doesn't count in law as editorial control and publication, though it achieves the same end.

        You can make the same point about right-wing social media sites, I think, though I do not frequent them so cannot cite clear examples.

        There is a real issue here, and it's not about saving or destroying the Internet. If there is a topic Biden and Trump both agree on, as they seem to, that's a clue that there may be a real problem.

        • You need to read up on 230. You are exactly 180 degrees wrong on its intent.

          • How so?

            • Re: (Score:3, Informative)

              by zendarva ( 8340223 )

              Opinionated Moderation was specifically the goal. What we have, was the goal.

              • Opinionated Moderation was specifically the goal. What we have, was the goal.

                If that were the case, then...that needs to be changed.

                It should read more like the OP in this thread was describing.

                • by zendarva ( 8340223 ) on Wednesday May 15, 2024 @10:16AM (#64474033)

                  Nah. It should be exactly how it is, and how it was intended. I want you to try to picture running an internet chat room when it's possible to be prosecuted for user-generated content. Just seriously stop and think about that for a moment; draw out all the results.

                • Without opinionated moderation, how do you run a site that is dedicated to comment on a specific topic?

                  I build a forum to discuss only some obscure topic I'm passionate about. I build a community around it, suddenly some incels find it and turn my forum into a place to bitch about women and how pathetic their lives are. Can I censor them or is that wrong?

        • by dirk ( 87083 ) <dirk@one.net> on Wednesday May 15, 2024 @09:20AM (#64473805) Homepage

          The question people at various points on the political spectrum have trouble with is this: why should Facebook have no liability for what its users publish, when the NY Times has total liability for its readers' letters pages?

          There is a really simple answer for this. The NYT picks the letters it publishes. It is not an open forum where anyone can post. They look at what comes in and choose what gets put on there. Therefore, they are responsible for what they choose to put on there. Facebook does not choose what users post, it is an open forum. This really isn't that hard.

          • It's really hard for people determined not to understand it.

          • Facebook does not choose what users post,

            No..the algorithms DO.

            And FB is in charge of the algorithms...hence they have their thumb on the scales, determining what gets seen or promoted and what doesn't....

            • Chronological order is an algorithm too, along with *any* method you use to present things. So... yeah, your argument is literally nonsense.
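For what it's worth, here is a toy sketch of that point (the Post fields and the engagement weighting are hypothetical, purely for illustration, not any platform's real schema): a "chronological feed" and an "engagement-ranked feed" are the same sort operation with different key functions.

```python
# Toy illustration: "chronological" and "engagement-ranked" feeds are both
# just sort algorithms over the same posts; only the key function differs.
# The Post fields and the engagement weighting are made-up assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since epoch
    likes: int
    shares: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)
```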

            • So you want government intervention on social media algorithms?

              Twitter even admitted that right wing views are amplified and travel faster https://cdn.cms-twdigitalasset... [cms-twdigitalassets.com]

              • So you want government intervention on social media algorithms?

                We already do...the govt is giving 230 protection where it should not.

                It needs to go back and be set to where you get 230 if you, as a company, do not use algorithms to push or suppress content, do not push content to people, but let them select.

                Basically to get the corporate "thumb" off the scales of information flow...

                Otherwise, they can still do it...they just don't get 230 protections, since they are now publishers and editorializing...

        • Re: (Score:2, Insightful)

          by drinkypoo ( 153816 )

          There is a real issue here, and its not about saving or destroying the Internet. If there is a topic Biden and Trump both agree on, as they seem to, that's a clue that there may be a real problem.

          It's a clue that they're both fascists. Biden is just the kinder, gentler fascist [jacobin.com]. He was on the wrong side of basically every issue before he became VP, now he's only on the wrong side of every issue that corporations have a position on.

          Remember, the DNC fights tooth and nail against progressive candidates [newrepublic.com]. They would not support Biden if he did not do exactly what corporate interests want. That's why he's broken all of his progressive promises. For example, he did massively fund more police without any meani

        • And my mod points expired two weeks ago. Score +1.

        • I would rather give the NYT immunity from slander (instead holding the individual reporter responsible), than hold Facebook responsible for its neonazi content.

          Giving people one throat to choke to sue stories they don't like out of existence is the very opposite of what "freedom of speech" was supposed to be doing for us. Catch criminals, sue criminals, jail criminals, but don't set fire to the street they walk on.

        • >>You want to see this in action, go to Ars Technica...

          Yes, and it's why I left it. I disagreed, once, with the GroupThink, and defended a user who had shared an opinion that went against the grain (IMO his point was valid) and got a week-long ban because of it. So, after being a 20-year contributor (and having had several articles published on their front page), I left for good. You wanna moderate, like Slashdot does? Cool. You want to impose your will on everyone so they all fall in line with your POV?

        • Why should Facebook have no liability for what its users publish, when the NY Times has total liability for its readers' letters pages?

          Section 230 has fuck all to do with physical media. If Facebook made a paper newsletter that it mailed out, it would be just as liable, just like the NYT isn't liable for its website's comment section.

    • by JBMcB ( 73720 )

      Christ in a chicken basket, why does this keep coming up?

      Because it costs money for the government to do it, and they want someone else to pay to enforce laws.

      You could make the exact same argument they are making about phone service and mail, and people would freak out as they don't want anyone reading all of their mail or listening to their phone calls.

    • Re: (Score:2, Insightful)

      by dfghjk ( 711126 )

      "If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability for those users to produce content."

      Yes, and? You are assuming that is not the intention.

      The content we are talking about is problematic content, not content in general, and if content cannot generally be generated without toxic consequences, then good riddance. At this point, your result sounds really good.

      • Yeah, and we will lose all the *good* content, like this entire site, which is user generated content. Great plan.

      • Not on smaller platforms. What will happen is troll farms will flood whatever smaller website you're thinking of with illegal content, and those sites will be forced to shut down or face massive lawsuits and/or criminal prosecution. Those farms are real. The head of the Arizona GOP used to run one. He paid high school and college kids to log in to social media networks and post all sorts of nasty things. And that's just the domestic ones we know about; Iran and Russia have massive troll farms of their own

        It'll
        • You phrased it a lot better than I could. If companies could be held liable for posted content, then I'm sure people will be doing their best to slap stuff on there in order to get the company into court and play the lawsuit lottery.

          Section 230 makes the Internet as an interactive medium possible. Otherwise, it would just be one-to-many sites like cable TV channels with zero chance of discussion or interaction.

          Of course, someone can come up with a P2P system for discussion, similar to USENET, but that wou

          • And everyone using the p2p network would be legally liable for everything on it, since they're serving it. Without 230, p2p should be called "I'm stupid, please arrest me"

      • by ctilsie242 ( 4841247 ) on Wednesday May 15, 2024 @10:00AM (#64473971)

        The original CDA, when it was passed, was extremely hostile. You think the DMCA was bad? The CDA was a lot worse. It pretty much made writing any adult content a Federal crime... anything past an elementary school level. It even held ISPs accountable for packets going through their networks. It got modified by stuff like Section 230, but most of this law was struck down the day it went into effect, and SCOTUS killed it.

        What happens if places are responsible for user content? One of two things:

        1: They just drop all forums and lock everything. No comments, no postings, just read-only blogs. Social media groups and such will be made read-only, and unofficial sites will be hunted down for trademark violations. Social media, even top-tier platforms like Slashdot, would shutter, or have to move overseas to a less restrictive country. At best, we would be back to Prodigy-like forums and paying by the hour for someone to actively moderate and curate all messages posted. Mastodon and decentralized social media would go extinct, or, if really lucky, be remade as .onion sites, but even then, with the law making regular speech something that can be shut down at anyone's whim, those would be a stopgap at best.

        At best for social media, we would end up with something decentralized, using its own encrypted network. Of course, this means we will wind up playing cat and mouse, similar to how sites deal with DMCA notices, and there will be whole AI based businesses just looking for any social media node to pop up, because it is lucrative to sue them into the ground.

        At best, we will have to rely on GPG encrypted messages and shady email lists, pastebin sites, or sites in foreign countries that likely will get shut down by SOPA/PIPA like laws.

        2: There will only be paid sites, like CompuServe, which will charge by the post because they will have an active mod team checking the legality of any and all posts before they go up. Any griping about anything will never make it online.

        Either way, things are really bad, and we are worse off than if the Internet never existed, because BBSes did have some forum content. Yes, it sucks that current social media sites can do what they want, but the alternative is a lot worse. At least now, if you really don't like existing social networks, you can start a Mastodon server and go from there, maybe even serve it as a .onion site for that deep web goodness.

      • We saw this before. In the 70s and 80s, businesses didn't care if kids played on their property. Then a few "problematic" parents sued. Now, kids have nowhere to play for fear of being sued.

        It will be the same with tech companies. They will just shut everything down as opposed to risking some dumb troll getting them hauled into court for a ton of consequences. Even if a site has an excellent signal to noise ratio, all it takes is one screenshotted post and the torts can start.

      • by CAIMLAS ( 41445 ) on Wednesday May 15, 2024 @10:46AM (#64474167)

        Yeah, no.

        The intended outcome here is that only approved goodthink gets "published". Anyone who posts things which are contrary to what's acceptable by the establishment gets the axe.

        Conspiracies? Political fraud? Fringe theories about history? You can't talk about that, you're harshing the vibe of the establishment NPCs and the thought police will be at your door shortly... just like is happening in Britain and a number of other "first world" countries now.

        That isn't the world anyone wants to live in. The powers that be can change in a heartbeat when there is contentious politics like this, but the thoughtcrime laws won't: that hammer will be wielded against an entirely different, previously supporting populace. This is how genocide is gleefully embraced.

        It isn't about child porn, or other similar things which cause actual harm. If it were about those things, they'd have addressed the problem already: they're already highly illegal. Instead, these platform operators seem to actually protect such "communities", and don't act on reports. Instead, they police speech and warn people for using mean words.

        You've either got freedom of speech or you don't. You can't say you support free speech, but not hate speech, and still support free speech.

    • > It's very simple. If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability

      Why do you think Big Tech, after /evaluating/, would recommend taking on legal liability for user content?

      I mean, that would force decentralized social media eventually, but I can't see Big Tech's evaluation being that the loss of their business would be the best course.

      > It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms."

      • If Group A can be held responsible for content created by Group B, then decentralized social media should be called "Please arrest me, I'm stupid"

    • There is a way perhaps to do both: provide the liability shield up to a certain size of company. This would limit the size of social media companies and effectively limit the scope for abuse by limiting the size of the audience. It would still allow for full freedom of expression, albeit with the volume of the megaphone turned down, so instead of reaching the entire globe you'd be limited to a special focus group or region.

      Breaking up the social media giants like this and forcing the online communities to
    • > If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability for those users to produce content.

      Okay, what's the downside?

      =Smidge=

      • Slashdot will close down? Imagine trying to run an IRC node under the idea that you can be held responsible for what users post.

      • An Internet of just spam, ads and online stores, with nothing else. Not something anyone really wants.

    • If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability for those users to produce content.

      How about this...

      You get to keep 230 protections, and take out the algorithms pushing content to users, and aside from speech that is illegal (CP, etc.)...you don't censor things.

      If you use an algo to promote some content over others, and editorialize on what is allowed to be published...then you lose 230 and are treated like a publisher.

      Simple, no?

      • by Ksevio ( 865461 )

        No, that would eliminate any sort of specialized forum if it wasn't allowed to moderate any content aside from what was illegal. They'd get overrun with spam and off-topic discussion.

        • No, that would eliminate any sort of specialized forum if it wasn't allowed to moderate any content aside from what was illegal. They'd get overrun with spam and off-topic discussion.

          I think this is all talking about "open forums"....if it is a specialized forum...membership is required and can be revoked.

          As long as it is not algorithm run....this way we keep the provider from putting a thumb on the scales of any content or conversation, hence no editorial work.

          The members of each forum could have the pow

          • Hey, you do know that "Chronological order" is an algorithm right? Any method of presenting *things* is going to require an algorithm. Always.

    • Look, you dummies. It's very simple. If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability for those users to produce content. Really it's that simple.

      And that is *bad* for the internet, not good. Christ in a chicken basket, why does this keep coming up?

      You do know that laws aren't binary right?

      Right now many of those companies are getting the best of both worlds. They're able to enjoy protections from legal ramifications that a normal publisher would have to put up with, but they also editorialize their content to foster a certain viewpoint. That inability to monitor content is the exact justification cited for the special legal protections they're receiving.

      I think a fair revision would be: if you want Section 230 protection, then content filtering and

      • by Holi ( 250190 )

        So you get a bunch of people who drive the conversation to something the majority of your users find offensive but not illegal. Now you lose the majority of your users and are driven out of business.

        If you think that is unlikely, I point you to the Gamestop stock.

    • From summary: ....The lawmakers said they were unveiling legislation (PDF) to sunset Section 230. It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms."

      This indicates a replacement law. The issue I do have with the specific wording in the quotes is the phrase "encouraging them to be good stewards of their platforms". I am pret

      • >> This indicates a replacement law

        That's exactly what is being proposed. Section 230 is clearly obsolete, and it is too ambiguous to be appropriate for the present circumstances.

        "Meaningful changes will ensure these companies are no longer able to hide behind a broadly interpreted law while they manipulate and profit from Americans’ free-speech protections. Updating Section 230 will empower parents, children and others who have been exploited by criminals, drug dealers and others on social-medi

        • Because it obviously would and there's case law to show it, since that case law is what generated 230. And yet, you have people claiming it *wouldn't* when there's actual evidence that it would.

    • You just gave the best argument for abolishing 230: Get rid of SoMe. It is trash. It isn't in practice about citizen involvement and free speech. It is just trash, spreading misinformation and giving a lot of people psychological problems. And with AI on top we get bots trained to multiply the garbage.
    • Facebook, Xitter, Instagram, and the rest make money from the content their users provide, by selling advertising along with (or displacing) content the users want. If the tech companies shut down the ability for users to provide content, they also shut down their own ability to sell advertising alongside that user-provided content.

      Right now, the tech companies' incentives are on the side of letting all the content in; more content, more advertising revenue (at least until advertisers are turned off by the

    • Christ in a chicken basket, why does this keep coming up?

      Because we as a society recognize that Corporate Social Media is causing harm to our society, and we seek redress for our grievances.

      And there is a very simple way to address Section 230 so as not to throw the baby out with the bathwater. See, the cause of the harm Corporate Social Media brings to society boils down to this: they algorithmically curate their content to maximize their profit. They selectively pick and choose exactly what they want t

    • by ikegami ( 793066 )

      If you hold tech companies legally responsible for the content their users produce, they will *shut down* the ability for those users to produce content. Really it's that simple.

      Only the big rich companies will be able to host user-generated content. If you have a poor opinion of a tech giant, just imagine if you removed the possibility of competing with it... Note that Section 230 also protects you, the individual. Ever forwarded an email that might have had copyrighted content?

  • by ugen ( 93902 ) on Wednesday May 15, 2024 @08:38AM (#64473627)

    Has the 1st amendment outlived its usefulness? Same here.

  • If nothing else, Section 230 makes it easier to identify the enemy.

    It's bad enough that you want to hold someone accountable for having an opinion you disagree with;
    without Section 230 you'd try to hold them accountable for other people's opinions too.
    (Even with 230 you still try, but it makes it obvious that you're in the wrong.)

  • Add the following to S230:

    1. No platform may lawfully advertise itself as practicing free speech, defending free speech or supporting free speech as a political principle if it shadowbans, limits exposure or removes content/user accounts for 1A-protected behavior except for pornography (only because open platforms that allow porn submissions invariably get flooded with CP so the poor moderators need some automated help here)

    2. All content moderator decisions on for-profit platforms, especially where there i

  • by Tablizer ( 95088 ) on Wednesday May 15, 2024 @10:29AM (#64474083) Journal

    I believe a content item should change category to "curated content" when it reaches a threshold of readers (such as distinct IP addresses per day*). If your algorithms promote dangerous or defamatory content and you don't bother to check such high-rankers, then you should be held accountable.

    * An imperfect metric, but good enough for the stated purpose.
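A minimal sketch of how that "curated content" threshold could look, assuming a hypothetical record_view() hook and an arbitrary cutoff of 10,000 distinct reader IPs per day (both made up for illustration; a real platform would track this very differently):

```python
# Hypothetical sketch of the "curated content" threshold idea above.
# The threshold, field names, and in-memory tracking are illustrative
# assumptions, not any platform's real API.
from collections import defaultdict
from datetime import date

CURATED_THRESHOLD = 10_000  # distinct reader IPs per day; arbitrary example value

# post_id -> day -> set of distinct reader IPs seen that day
_readers: dict[str, dict[date, set[str]]] = defaultdict(lambda: defaultdict(set))

def record_view(post_id: str, reader_ip: str, day: date | None = None) -> str:
    """Record one view and return the post's category for that day."""
    day = day or date.today()
    seen = _readers[post_id][day]
    seen.add(reader_ip)
    # Once enough distinct readers have seen it, the item counts as curated
    # and the platform takes on responsibility for having promoted it.
    return "curated" if len(seen) >= CURATED_THRESHOLD else "user-generated"
```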
