The Courts | Social Networks | Technology

Supreme Court Sidesteps Challenge To Internet Companies' Broad Protections From Lawsuits (apnews.com) 48

The Supreme Court on Thursday sidestepped a case against Google that might have allowed more lawsuits against social media companies. From a report: The justices' decision returns to a lower court the case of the family of an American college student who was killed in an Islamic State terrorist attack in Paris. The family wants to sue Google over YouTube videos they said helped attract IS recruits and radicalize them. Google claims immunity from the lawsuit under a 1996 law that generally shields social media companies from liability for content posted by others. Lower courts agreed with Google. The justices had agreed to consider whether the legal shield is too broad, but in arguments in February, several sounded reluctant to weigh in now. In an unsigned opinion Thursday, the court wrote that it was declining to address the law at issue.
This discussion has been archived. No new comments can be posted.

  • by rsilvergun ( 571051 ) on Thursday May 18, 2023 @11:31AM (#63532629)
    they also use something reasonable and tragic like this.

    For those that don't know, this is an attack on Section 230 of the Communications Decency Act (CDA), which granted the owners of software-based communication platforms immunity from liability for anything their users might say, even if they moderate the content.

    This was necessary because the Internet was fundamentally different from newspapers & TV, requiring new laws to protect speech online. Online platforms in particular cannot exist as a medium for the free exchange of ideas without Section 230. Without it, sites like /. would have to shut down under either a deluge of lawsuits or a flood of trolls and bots.

    One thing to be careful of: there is a substantial number of extremists (typically on the right wing, along with a handful of authoritarian left-wingers called "Nazbols") who argue that eliminating S230 would improve free speech, because common-carrier rules would take over and let companies continue to operate as long as they didn't moderate anything.

    Typically these are people advocating for various forms of violence in one way or another (either directly, or indirectly by inciting forms of bigotry to the point where violence is an inevitable byproduct).

    The goal here is to be able to overwhelm online platforms with their trolls & bots. As explained here [upworthy.com].

    Don't be fooled. This would of course immediately kill any and all online spaces. At best, some of the major ones like Twitter & Facebook could get exemptions that protected them, killing all small competitors and turning the Internet into cable TV.
    • by Ichijo ( 607641 )

      [Immunity] was necessary because the Internet was fundamentally different from newspapers & TV, requiring new laws to protect speech online. Online platforms in particular cannot exist as a medium for the free exchange of ideas without Section 230. Without it, sites like /. would have to shut down under either a deluge of lawsuits or a flood of trolls and bots.

      If you think about it, social media is just a bunch of personal ads. Newspapers used to moderate personal ads and charge for said moderation. What has changed?

      • Because you PAID for every personal ad you posted, and that payment, combined with the significantly lower volume and the expected delay in publishing, meant that it was feasible for the newspaper to review all those postings. Social media can't be done the same way.

        That said, IMHO Section 230 has indeed granted immunity a little too broadly. With so much communication being done online, it basically puts 3 or 4 private companies in control of public discourse.

        It's pretty obvious that the internet as we know it can

        • by Ichijo ( 607641 )

          Because you PAID for every personal ad you posted, and that payment, combined with the significantly lower volume and the expected delay in publishing, meant that it was feasible for the newspaper to review all those postings. Social media can't be done the same way.

          Again, why not?

          • by suutar ( 1860506 )

            Facebook apparently gets about half a million comments per minute [bernardmarr.com]. It's not practical to hire enough people to vet all of those.

            • by Ichijo ( 607641 )

              Facebook apparently gets about half a million comments per minute. It's not practical to hire enough people to vet all of those.

              Do you really think they would still get that many comments if people paid to post them like personal ads in a newspaper?

              • by suutar ( 1860506 )

                Oh, I see. No, social media in general would just die.

                • by Ichijo ( 607641 )

                  Why? People pay $9.00 to post up to 25 words in a small local newspaper. [rocket-courier.com] Social media should be much cheaper with no printing or distribution costs.

                  • by suutar ( 1860506 )

                    Would you post regularly, given that you'd have to hand over payment information, and the price would have to be at least enough to cover the transaction fees, so like 50 cents?
                    Would enough of your friends post regularly for you to bother to look at the app?
                    Would enough people answer "yes" to those to maintain advertiser revenue?
                    If not, they go out of business.

                    • by Ichijo ( 607641 )

                      Would enough of your friends post regularly for you to bother to look at the app?

                      If my friends had to pay [wikipedia.org] 50 cents per post, Facebook might actually be worth my time again.

                    • There are some interesting ideas there.

                  • People pay newspapers because thousands of people still read them. People pay Facebook to promote their posts to thousands of people. I'm obviously not going to pay to share my non-commercial post with my 20 Facebook friends, and neither are any of my friends; and I'm not going to stay on the site if I can't see what my friends are posting, and neither are they. At that point, advertisers would no longer be interested in paying to promote their posts to the tiny remaining audience that consists only of other advertisers.

                    • by Ichijo ( 607641 )

                      "If you don't pay for the product, you are the product."

                      Do you enjoy being the product, Gavagai80?

            • by dgatwood ( 11270 )

              Facebook apparently gets about half a million comments per minute [bernardmarr.com]. It's not practical to hire enough people to vet all of those.

              Assuming three 8-hour shifts, and assuming that it takes a whole minute to review each comment (on average), you would need 1.5 million people to do that. Assuming adequate language proficiency, you could maybe do this in a relatively low-wage country like India for $6.35 per day, which would cost only about $3.7 billion per year. That's only about 3% of Facebook's annual revenue. So ignoring the payroll nightmare, it isn't *entirely* infeasible, at least in theory. :-)

              Now I'm not saying it makes *sense*
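              As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch in Python. Every figure in it (the comment volume, the one-minute review time, the $6.35 daily wage, and the rough $117B annual revenue) is an assumption carried over from the comments above or a round approximation, not real Facebook data:

                  # Back-of-the-envelope moderation staffing math.
                  # All inputs are assumptions from the discussion above, not real figures.
                  comments_per_minute = 500_000         # claimed comment volume
                  review_minutes_per_comment = 1        # assumed average review time
                  shifts_per_day = 3                    # three 8-hour shifts for 24h coverage

                  reviewers_on_duty = comments_per_minute * review_minutes_per_comment
                  total_staff = reviewers_on_duty * shifts_per_day      # 1,500,000 people

                  daily_wage_usd = 6.35                 # assumed low-wage daily rate
                  annual_cost_usd = total_staff * daily_wage_usd * 365  # ~$3.5B/year

                  annual_revenue_usd = 117e9            # rough annual revenue assumption
                  print(f"{total_staff:,} moderators, ${annual_cost_usd / 1e9:.1f}B/year, "
                        f"{annual_cost_usd / annual_revenue_usd:.1%} of revenue")

              Run as written, this prints "1,500,000 moderators, $3.5B/year, 3.0% of revenue", in the same ballpark as the figures in the comment above.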

  • Here we go again.

    if youtube.com posts videos calling for violence and showing you how to commit said violence, let's say pipe bomb building videos, AND IS MAKING MONEY FROM IT, it's ok because "free speech".

    youtube is not only allowing videos that are subtly and not-so-subtly calling for violence, but they are also MAKING MONEY FROM IT. The level of fucked-up CT (conspiracy theory) related videos I see on youtube, without even having searched for anything, is amazing. youtube is a cesspool.

    Not only that but they are directing

    • I disagree. I don't want to be blocked from watching things because somebody (youtube) is afraid somebody (a jury) will be convinced that somebody (a terrorist) was influenced by it and will levy a huge fine. And trying to turn it into a financial argument (criminal to civil) is just a weaselly strategy to restrict freedom by lowering the threshold for restricting and punishing people.
      • by cats-paw ( 34890 )

        sorry, you actually should be blocked from watching videos about making pipe bombs.

        The point being it doesn't have to be that overt.

        there's a big difference between you finding the CT videos that you want to see and youtube funneling large numbers of related videos directly to you and making money from it.

        I use that word, responsibility, again, and it DOES mean what I think it means.

        if this puts a dent in their business model because they have to stop doing that or start losing lawsuits, tough shit.

        • by suutar ( 1860506 )

          sorry, you actually should be blocked from watching videos about making pipe bombs.

          Why?

    • by cats-paw ( 34890 )

      Correction: when I've done searches, I've seen a sidebar full of _unrelated_ CT and pseudoscientific wackiness.

    • by Anubis IV ( 1279820 ) on Thursday May 18, 2023 @02:00PM (#63533065)

      Here we go again.

      This case had nothing to do with protecting offensive content, which seems to be the bee you have in your bonnet.

      This case hinged on whether Section 230 protections from the CDA extend to recommendation engines, i.e. whether by providing recommendations the platform could be held liable for them in some way. Simple as that. It had nothing to do with the actual nature of the content, which is what you're talking about.

      It's fine if you don't like that content (I don't either). It's fine as well if you think the laws should be changed to make it illegal to host that sort of content (depending on the details, I might agree). But you're ranting about a case that didn't even address the rights you're talking about, so no, they didn't say Google is "free of any responsibility" for offensive content "because 'free speech'". To the contrary, they didn't even talk about that topic at all.

    • Here we go again.

      if youtube.com posts videos calling for violence and showing you how to commit said violence, let's say pipe bomb building videos, AND IS MAKING MONEY FROM IT, it's ok because "free speech".

      Yes, it is okay, because Youtube isn't making money SPECIFICALLY from videos which "call for violence" as you say. Youtube makes money from ALL videos. They don't have a preference for violent content, and that's why they are shielded from liability. It's like saying that the phone company makes money from long-distance calls in which one party incites another to commit a crime. (Someone calling a hitman to put a contract on their wife, for instance.) This is true, but the phone company makes money on ALL calls, not specifically on the criminal ones.

      • Actually, if they have chosen to demonetize the video in question, they are keeping money from content they don't see the need to pay the creator for. That means they have judged the content harmful in some way and categorized it as such, which means they could easily exclude ads from those videos, but they choose not to.

        • If YouTube took the ads off violent content, you could just as easily complain that they were promoting violent content by making it more watchable: removing the commercials that viewers have to pay to get rid of on other, less offensive videos. More people would finish watching the video, and more would recommend it to others, because there would be no advertising to impair their experience.

    • by Tyr07 ( 8900565 )

      Maybe you shouldn't be making videos on pipe bombs and getting ad revenue from it or whatever. Here we go again, personal responsibility? No way! Blame google!

      Maybe it's the fault of the person showing how to make something dangerous, or MAYBE it's the person who does something dangerous with that knowledge who is accountable.

      What's next? Sue walmart because they sell steak knives and someone figured out you can use one for something other than cutting steak, some sort of violence?
      I'm going to use the most hated word of the generation these days, two in fact. Personal, Responsibility. I'll combine it with the forbidden A word. Accountability. Goes with Accountability for your actions.

      • Maybe you shouldn't be making videos on pipe bombs and getting ad revenue from it or whatever. Here we go again, personal responsibility? No way! Blame google!

        Maybe it's the fault of the person showing how to make something dangerous, or MAYBE it's the person who does something dangerous with that knowledge who is accountable.

        What's next? Sue walmart because they sell steak knives and someone figured out you can use one for something other than cutting steak, some sort of violence?
        I'm going to use the most hated word of the generation these days, two in fact. Personal, Responsibility.
        I'll combine it with the forbidden A word.
        Accountability.
        Goes with Accountability for your actions.

        I don't think you understand how accountability works.

        No one is saying that the video makers or terrorists aren't accountable for their actions because of Google. The question is whether Google is also accountable for not doing enough to stop terrorists from using their tools to recruit.

        Now, I don't know the videos in question. But if Google was aware of videos that were showing people how to make pipe bombs for the purpose of committing terrorist acts and did nothing to stop it, then I think they should have some accountability as well.

        • by Tyr07 ( 8900565 )

          I don't think you understand how accountability works.

          I'm not saying this is you but a lot of people say that while trying to shift personal responsibility from themselves to another entity.

          Everyone always tries to push toward "well, you are sort of to blame too" so that "everyone" is kind of at fault, and in some regard that's always true, if only for existing, I suppose. I have a disdain for that, as it's often a tool people use to avoid "really" taking accountability for what happened.

          The question is whether Google is also accountable for not doing enough to stop terrorists from using their tools to recruit.

          So in this regard, from what I understand, google is doing enough, as flagged content gets removed.

    • by suutar ( 1860506 )

      Do you really think AI is ready to reliably detect violence in videos? Because you're not going to find enough people to vet that stuff.

  • Liability (Score:5, Insightful)

    by Anonymous Coward on Thursday May 18, 2023 @12:15PM (#63532739)

    Once upon a time, lawsuits were based on liability: someone doing something that damaged you.
    Nowadays, we just sue whoever has deep pockets.
    Google didn't kill the student, but the family can't get money from the people who did.
    The First Amendment prohibits the government from broad censorship, but some people are trying to get the government to force companies to do it for them.
    I'm so grateful this attempt failed.

  • Comment removed based on user account deletion
