Appeals Court Questions TikTok's Section 230 Shield for Algorithm (reuters.com)

A U.S. appeals court has revived a lawsuit against TikTok over a child's death, potentially limiting tech companies' legal shield under Section 230. The 3rd U.S. Circuit Court of Appeals ruled that the law does not protect TikTok from claims that its algorithm recommended a deadly "blackout challenge" to a 10-year-old girl.

Judge Patty Shwartz wrote that Section 230 only immunizes third-party content, not recommendations made by TikTok's own algorithm. The decision marks a departure from previous rulings, citing a recent Supreme Court opinion that platform algorithms reflect "editorial judgments." This interpretation could significantly impact how courts apply Section 230 to social media companies' content curation practices.
This discussion has been archived. No new comments can be posted.
  • I agree (Score:2, Insightful)

    Judge Patty Shwartz wrote that Section 230 only immunizes third-party content, not recommendations made by TikTok's own algorithm

    100% agree. Once you start exerting editorial control (i.e. censorship, which is almost always political) over content passing through your service, you're no longer some type of "neutral pipe" and you shouldn't be entitled to any of these protections.
    • Re: (Score:1, Informative)

      Once you start exerting editorial control (i.e. censorship, which is almost always political) over content passing through your service, you're no longer some type of "neutral pipe" and you shouldn't be entitled to any of these protections.

      Then Twitter is no longer entitled to protections since Musk routinely censors content. Particularly comments poking fun of him or ideas he doesn't agree with.
      • Censorship is probably still allowed.

        The issue is that 'recommendations' of problematic content is not protected, which I've been arguing for years.

        So in that regard, ex-Twitter can still be liable for pushing problematic posts on its users, as can Facebook, YouTube, etc.

        • Re: I agree (Score:5, Interesting)

          by smooth wombat ( 796938 ) on Thursday August 29, 2024 @01:09PM (#64746614) Journal
          And here is Twitter censoring right now [techcrunch.com].

          X, the Elon Musk-owned platform formerly known as Twitter, is marking some links to news organization NPR's website as "unsafe" when users click through to read the latest story about an altercation between a Trump campaign staffer and an Arlington National Cemetery employee. The warning being displayed is typically applied to malicious links, like those containing malware, and other types of misleading content or spam. However, in this case, the web page being blocked is an NPR news report, raising questions about whether or not Musk's X is actively trying to stop the news story from spreading.

          On Thursday X users began to notice that a link to the NPR story about the Arlington Cemetery event, when clicked, would display the following message: "Warning: this link may be unsafe" followed by the URL of the webpage in question, https://npr.org/2024/08/29/nx-... [npr.org].

          Instead of being taken to the website, the warning encourages them to go "back to the previous page" by clicking the big blue button. To read the news story, users would have to click on the small text below that reads, "Ignore this warning and continue."

          • Asking "do you really want to visit this website" is not the same as "censorship." It's not censorship at all; it's something else with a name of its own: "editorializing."
        • Censorship is probably still allowed.

          The issue is that 'recommendations' of problematic content is not protected, which I've been arguing for years.

          Gee, if only someone hadn't censored helpful content instead of promoting problematic content...

        • by mysidia ( 191772 )

          Censorship is probably still allowed.

          Censorship is specifically protected by the 230 shield. A good-faith effort to delete obscene and objectionable material does not create liability when it fails.

          But when social media websites display banners suggesting an item is Recommended, Featured, or Promoted, the website itself is editorializing.

          Also, search engines should be able to be held liable as the publisher for original text their ChatGPT- or Claude-based machinations throw at people in any manner wh

      • by taustin ( 171655 )

        And? A lot of people would agree.

      • Editorial control is prioritizing or featuring content above others. Censorship is moderation, which is separately allowed under Section 230 without losing protection.

        • Uh, prioritization might as well be censorship... that's like telling someone they have free speech, but only if they go to a back room somewhere where nobody can hear them.

          • Free speech isn't the same as free audience. It's hosted in the same place just not being announced as much.

            Nobody wants to drink from a firehose. Nobody has time for that.

    • These aren't recommendations; this is TikTok keeping track of what's popular for a given demographic and saying "This is what people like you are watching".

      This isn't like a newsroom editor picking stories to appeal to viewers either. It's something entirely new that's only possible because websites have so much more demographic information than they used to.

      It's the difference between a newsroom putting spin on stories and a large-scale demographic tool that says "this demo is going to watch X"
      • Re:I disagree (Score:4, Interesting)

        by Zak3056 ( 69287 ) on Thursday August 29, 2024 @01:20PM (#64746640) Journal

        These aren't recommendations; this is TikTok keeping track of what's popular for a given demographic and saying "This is what people like you are watching".

        I'm going to go out on a limb and say that the average ten year old girl isn't especially interested in getting blackout drunk but, hey, you do you.

        This isn't like a newsroom editor picking stories to appeal to viewers either. It's something entirely new that's only possible because websites have so much more demographic information than they used to.

        "The computer did it" should not shield you from liability when you are the party in control of said computer.

        With all of the above said, I would agree with you that this is counter to the first amendment, though the extraterritoriality of ByteDance makes things a bit of a grey area.

        • It wasn't about drinking, it was about choking yourself.

          • by Zak3056 ( 69287 )

            It wasn't about drinking, it was about choking yourself.

            Then I'll go out on another limb and suggest that the average ten year old girl isn't especially interested in choking herself into unconsciousness.

        • The minimum age for Tik Tok is 13 years old.
        • Maybe stop letting your kid go unattended into cesspools? No, can't do that. Parents can't parent. Don't have time. Nanny state government to the rescue, right? Fuck that noise. Ban kids from social media and let adults be responsible for themselves and their children.

          All this think of the children shit is a great way to take rights away from adults and it's all authoritarian bullshit.

      • That kind of algorithm leads to a positive feedback loop, where a butterfly in China can cause a tornado in Oklahoma.
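        A toy sketch of that loop (hypothetical numbers, not any platform's actual ranker): if the feed always recommends the current most-viewed video, a one-view head start decides everything downstream.

          # Toy model of a popularity feedback loop.
          views = {"A": 101, "B": 100}            # "A" starts one view ahead
          for _ in range(1000):
              leader = max(views, key=views.get)  # rank purely by popularity
              views[leader] += 1                  # each recommendation becomes a view
          print(views)  # {'A': 1101, 'B': 100}: the tiny initial edge took the whole feed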

      • Tik tok isn't making any value judgements on the content one way or another, they're just presenting what you're most likely to watch.

        Not true, because they demote and filter nudity and suicidal content. They already do this, and this slipped through the cracks. They should most certainly be held liable.

        A recommendation algorithm is still exactly what it says, recommendations. If I recommend you go see a movie and it sucks, I'm at fault. A similar concept applies.

        The simple solution is not to recommend age-inappropriate content to minors, or not to recommend to them at all. That includes guns, self-harm, choking yourself, nudity, sexual content, etc.
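        A minimal sketch of that filter, assuming hypothetical category labels (a real platform would derive them from classifiers and human review):

          # Hypothetical restricted-category labels.
          RESTRICTED = {"self-harm", "choking", "nudity", "sexual", "weapons"}

          def recommendable(video_tags, user_age):
              # Drop age-inappropriate items before the ranker ever sees them.
              return user_age >= 18 or not (set(video_tags) & RESTRICTED)

          candidates = [
              {"id": 1, "tags": ["cooking"]},
              {"id": 2, "tags": ["choking", "challenge"]},
          ]
          feed = [v for v in candidates if recommendable(v["tags"], user_age=10)]
          print([v["id"] for v in feed])  # [1]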

        • Yeah, but you aren't LIABLE for damages. Sure, it is your fault that I went to see the movie, but I can't sue you for remuneration just because you recommended it and I didn't like it.
          • It depends on the circumstances. I don't know if it has ever happened in that exact context, but if you knowingly recommended a movie that you knew would exacerbate someone's traumas, I bet you have room to sue for emotional harm. You would likely have to insist they go see the movie (with fear of termination of friendship or something), but I would say it is possible in some courts. It's all about the intent.

            In these cases there was no malicious intent, supposedly, but companies
    • Except in this case, they are being sued for FAILING to censor. I hope the courts can figure out that failing to censor is related to making a recommendation and not just platforming.

      • No. Failing to censor is *specifically* protected by section 230.

        The logic being applied is that by recommending the content they were engaging in first-party speech, for which they are responsible.

    • by hey! ( 33014 )

      While I can sympathize with the feeling that people who exercise editorial control don't deserve legal protections for site content, in *this* case that feeling gets the facts exactly backward. TikTok needs to exercise editorial control *in order to* have legal protections.

      We're talking about the Communications Decency Act, which prohibits distribution of child pornography in general and obscene material specifically to minors. Section 230 of that law is a safe harbor provision for sites that host third p

      • There are two types of editorial control here.

        1) Censoring illegal content imperfectly. Even if they miss some, they are protected under Section 230.

        2) Featuring content by picking some content to push ahead of other content. This is not a filter. This is curation. Curation has already been protected as free speech - speech by the platform. But their speech is not protected under Section 230 because they are not a user. They are the first party entity.

    • Judge Patty Shwartz wrote that Section 230 only immunizes third-party content, not recommendations made by TikTok's own algorithm

      100% agree. Once you start exerting editorial control (i.e. censorship, which is almost always political) over content passing through your service, you're no longer some type of "neutral pipe" and you shouldn't be entitled to any of these protections.

      Your opinion is entirely WRONG with regard to the facts of the law and the judge's reasoning.

      Section 230 was created to encourage content censorship (this was confirmed by its authors). It protects from any legal liability for censoring, even if something is censored that should be legally protected speech (it says this directly in the law). It also protects against any liability for something that was not censored, as long as it was authored by a third party.

      There is no requirement for neutrality. None.

      Se

    • I've seen absolutely zero coherent explanation of a principled reading of the law that supports revoking the liability shield if they show content based on factors a, b, and c instead of d, e, and f, where one group is date uploaded or search match, and the other is general popularity among people who watched the videos you watched. Can you order one? This is nullifying 230 based purely on "save the children" reasoning.
      • They are not liable for the user's content. They are liable for their own, which is their recommendation and curation of the content. That speech is not directly part of the content - it's the arrangement of it. Section 230 only covers users' content, not first-party content.

    • Once you start exerting editorial control (i.e. censorship, which is almost always political) over content passing through your service, you're no longer some type of "neutral pipe" and you shouldn't be entitled to any of these protections.

      And you're just flat out wrong [eff.org].

      That said, there is a difference between exercising editorial control and actively recommending content.

  • From a legal perspective, we're already attempting to ban TikTok in the US. I really don't see what a US civil court can add to that.
    • A U.S. appeals court has revived a lawsuit against TikTok by the mother of a 10-year-old girl who died after taking part in a viral "blackout challenge" in which users of the social media platform were dared to choke themselves until they passed out.

      The government actions against TikTok do not shield it against lawsuits from private citizens.

      • What are they going to do, send a bill to their home office in China?
        • Basically yes, but that's neither here nor there yet. First they have to win the case; then the judge will enter a judgment on compensation; then it can be worked out how that can actually be collected. That last step can get stymied, but often this is more about the judgment itself than the compensation.

          Also, as I understand it, ByteDance is the Chinese parent corporation, but TikTok in North America is actually a US-based subsidiary, so that part of the company can be more easily leveraged ag

    • So they have to follow our laws or leave. So yeah, it has huge implications.

      Worse, I argued elsewhere this ruling is bad because predictive algorithms that present content you're likely to watch, based on known demographic information gleaned from past searches & views, are not the same as editorializing. Which is the entire purpose of S230.

      This ruling guts S230. It'll be applied to the entire internet if it's allowed to stand and eventually used to undermine free speech. Any site where you can speak your mind freely will shut down or they'll lock posts down so much or hide those posts based on pressure from wealthy people who can sue or fund lawsuits.
      • This ruling guts S230. It'll be applied to the entire internet if it's allowed to stand and eventually used to undermine free speech. Any site where you can speak your mind freely will shut down or they'll lock posts down so much or hide those posts based on pressure from wealthy people who can sue or fund lawsuits.

        Good, gut 230 and rewrite it, or amend it.

        If you don't have algorithms promoting things... then fine, you are protected and people can say what they want.

        This way, people get out and actively s

  • by u19925 ( 613350 ) on Thursday August 29, 2024 @12:31PM (#64746488)

    --> Supreme Court opinion that platform algorithms reflect "editorial judgments."

    IANAL, but algorithms based on technical metrics should be exempted, while algorithms based on content can be considered editorial judgments. Thus if TikTok or any other recommendation engine is simply recommending based on likes, dislikes, comments, viewership, geolocation, or language preference, those are not editorial judgments. But if it is based on content like "challenge", "Trump", "election fraud", "illegal immigration", etc., then it is clearly an editorial judgment. There have been several cases where insiders have been accused of promoting/demoting specific items based on what they contained, and Section 230 should not apply there. But if a company is just using mathematical tools, then it should be exempted.
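    A rough sketch of the line being drawn here (hypothetical weights, not any real platform's ranker): the first scorer touches only engagement numbers, while the second adds a content-keyword term, which under this argument would cross into editorial judgment.

      def metric_score(v):
          # "Technical" ranking: engagement statistics only, blind to content.
          return 2.0 * v["likes"] - 1.0 * v["dislikes"] + 0.1 * v["views"]

      # Hypothetical content weights: promoting/demoting specific topics.
      CONTENT_WEIGHTS = {"challenge": 5.0, "election": -3.0}

      def content_score(v):
          # Content-aware ranking: bonuses keyed to what the video is about.
          bonus = sum(w for kw, w in CONTENT_WEIGHTS.items()
                      if kw in v["title"].lower())
          return metric_score(v) + bonus

      video = {"title": "Blackout challenge", "likes": 50, "dislikes": 5, "views": 900}
      print(metric_score(video), content_score(video))  # 185.0 vs 190.0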

    • by oneiros27 ( 46144 ) on Thursday August 29, 2024 @12:41PM (#64746522) Homepage

      Facebook has admitted that their algorithms are designed to drive 'stickiness' (addictiveness) of the site. And it's been shown that controversial subjects and misinformation drive 'engagement' (people reacting to the content), so even an algorithm based on what looks to be just simple math is actually known to be biased against truth, and the company is aware of it (and has fired whistleblowers for reporting on it).

      • by Rujiel ( 1632063 )

        Recommendations are secondary to censorship, though, and FB has been doing that for 15+ years, targeting Iranian, Venezuelan and Palestinian accounts. Now you can get a business account banned for using the word Zionism. Now we have Zuckerberg regretting that they overdid it on censoring things around covid.
        So it isn't just a stickiness algorithm, it is intentional moderation:
        https://www.msn.com/en-us/news... [msn.com]

    • by TheStatsMan ( 1763322 ) on Thursday August 29, 2024 @12:44PM (#64746540)

      As someone who writes these algorithms, I completely disagree. They perform the exact same function as an editor deciding what gets put on the front page.

      >based on likes, dislikes, comments, viewership

      That's an editorial decision, folks.

      • by u19925 ( 613350 )

        There is nothing wrong if you make your decisions on technical grounds. But if you make them based on content, then the responsibility for that content may fall on you. The National Enquirer decided to hide the Karen McDougal story after paying $150k. Obviously, this was not a technical decision but a content decision. Math decisions can be made in one cultural/political environment and applied in another. Content-based decisions cannot. What is censored in Pakistan is not the same as what is censored in the US or China.

        • >If you say "put the top 10 viewed articles on the front page", you are not responsible for what those articles contain.

          Hard disagree. It's a decision based on popularity with your audience. You don't have to promote anything. If you promote based on popularity, that's an editorial decision.

      • If I put the editor of Mother Jones Magazine in charge of Fox News, what would that Fox News look like vs today?

        Editors do a lot more than calculate what to display based on likes & dislikes. They make personal decisions about the content they want to display. That's the difference.

        Tik Tok only cares about engagement; it has no particular agenda. That's why S230 gets invoked and should protect them.

        But folks don't like Tik Tok. So they're an easy target. But when they're done with Tik Tok, you're next.
        • What you're describing is two different editorial decisions. Both are editorial decisions, but the rationale for each is different.

          >it has no particular agenda

          That is a completely untrue inference of the editorial power of promoting content based on engagement.

          • The guy who's in charge of figuring out the timing of when to run a segment isn't an editor. He doesn't have any say in anything except when shit runs. And he bases those decisions on hard numbers, not on editorial control.

            I get it people on the right wing hate section 230 because it lets us people on the left wing go all around the internet talking about shit you don't like to hear about.

            But you know what when they are done silencing me they're going to do the same to you. You're not part of the in g
            • I get it people on the right wing hate section 230 because it lets us people on the left wing go all around the internet talking about shit you don't like to hear about.

              It's not that at all.

              It is that it is NOT fairly balanced.

              The right just wants to NOT be censored or shadow banned for what they say....they want to be as free to say anything they want just like the left is currently able to do.

            • It's your opinion that if you make objectionable content, and legally YouTube is not allowed to promote it, say, to minors, that constitutes "silencing you?" I don't think I have the same definitions.

        • If I put the editor of Mother Jones Magazine in charge of Fox News, what would that Fox News look like vs today?

          Somewhat less angry, weirdly enough.

      • If it's based solely on user-generated content (accumulated likes, dislikes, comments, viewership are generated by the users) then the editorializing would be protected under Section 230 itself. Most of the algorithms for the largest social media sites go much further in depth.

        If they try to sway the algorithm to specifically drive certain types of engagement or to attract a particular audience for advertisers, then it is no longer fully shielded. There's a lot of grey where selectively choosing specific

      • As someone who writes these algorithms, I completely disagree. They perform the exact same function as an editor deciding what gets put on the front page.

        >based on likes, dislikes, comments, viewership

        That's an editorial decision, folks.

        It's also irrelevant.

        Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

        That is the "editorial judgement" from the Supreme Court the judge referenced. The SC is skeptical that Florida and Texas can put limits on what can be moderated, because it can violate the platform's free speech.

        How we connect the dots from there to "you're now liable for third-party content because you chose what order to show it in" is beyond me. It's ridiculous. The SC didn't just make every single action a platform takes an editorial judgement, or I'd like to hear how. Sorry /

        • It's a weird world. I would be interested to understand more what "free speech" a platform has, it seems like exactly the kind of grey area the courts should be exploring further. Because, what you've written here does not make a lot of sense to me.

          >"you're now liable for third-party content because you chose what order to show it in" is beyond me.

          Well what if that content is objectionable, but got a lot of views? It doesn't even have to get a lot of views, it just has to trigger the engagement p

      • by hawk ( 1151 )

        and "deciding" is the key word there.

        If the algorithm reflects decisions or preferences of the carrier, then that takes it outside of the intent of the common carrier statute.

        And if there's *any* judgment going on, it takes it out of the conceptual range of what the authors were providing protection for. Phone calls, classified ads, letters, and the like were the technologies. Email fits that description (well, generally; some of the stuff Google has been accused of would take it outside, if true).

    • by Rinnon ( 1474161 )

      IANAL, but algorithms based on technical metrics should be exempted, while algorithms based on content can be considered editorial judgments. Thus if TikTok or any other recommendation engine is simply recommending based on likes, dislikes, comments, viewership, geolocation, or language preference, those are not editorial judgments. But if it is based on content like "challenge", "Trump", "election fraud", "illegal immigration", etc., then it is clearly an editorial judgment. There have been several cases where insiders have been accused of promoting/demoting specific items based on what they contained, and Section 230 should not apply there. But if a company is just using mathematical tools, then it should be exempted.

      Why should using mathematical tools be exempted? If we were talking about a newspaper, for example, we wouldn't care how a given article was selected for inclusion in the paper, be it political, mathematical, bribery, etc.; we'd say the onus is on the editorial staff that let it in. Social media algorithms are far more complex, to be sure, but why does that complexity allow a company to throw their hands up and say "We're not responsible, this is what the computer spit out"? Well, okay, you're the one

    • by znrt ( 2424692 )

      But if a company is just using mathematical tools, then it should be exempted.

      i don't see why? the point here is preventing the company from recommending dangerous shit to minors. i don't think the nature or process of that recommendation makes a difference when it comes to responsibility.

      that algorithm is specifically tuned by the provider to select content optimized for online engagement. that is active, targeted moderation and implies that the provider is no longer a neutral channel, thus responsible for the shit they show and to whom.

      tik tok is clearly targeted for political

      • that algorithm is specifically tuned by the provider to select content optimized for online engagement. that is active, targeted moderation and implies that the provider is no longer a neutral channel, thus responsible for the shit they show and to whom.

        But this is not the same as promoting just based on popularity, because the accumulation of stats like that is really just more user-generated content covered under Section 230. A user makes a click, the video rises in the ranks.

        Much like illegal content, I think it's fair that a covered entity can filter clicks that are detected as coming from a click bot as long as they aren't basing the click filtering on the content of what is clicked.

    • by ljw1004 ( 764174 ) on Thursday August 29, 2024 @01:17PM (#64746634)

      But if a company is just using mathematical tools, then it should be exempted.

      Can you get close to a precise definition of what you're talking about? I kind of don't think you can.

      A recommendation algorithm currently is "stick every piece of metadata about posts into a machine-learning model, train it against the metric of engagements, and then recommend whatever post you think will get the highest engagement". That's pretty mathematical! :)

      I can see it'd be reasonable to want to exempt the algorithm "show posts from friends in chronological order".

      But for the life of me, I haven't yet been able to come up with any good definitions to separate the two. The closest I can imagine is a society-approved list of which algorithms are exempt, and anything that doesn't follow one of these algorithms counts as editorial.
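      To make the contrast concrete, here are both feeds side by side as a sketch (toy data; predicted_engagement stands in for any learned model): each is just a few lines of math, which is why a "mathematical tools" exemption is so hard to pin down.

        from datetime import datetime

        def chronological_feed(posts, friends):
            # The candidate "exempt" algorithm: friends' posts, newest first.
            return sorted((p for p in posts if p["author"] in friends),
                          key=lambda p: p["time"], reverse=True)

        def predicted_engagement(post):
            # Stand-in for a scorer trained on post metadata against engagement.
            return 0.7 * post["past_clicks"] + 0.3 * post["shares"]

        def engagement_feed(posts):
            # The contested algorithm: rank by predicted engagement.
            return sorted(posts, key=predicted_engagement, reverse=True)

        posts = [
            {"author": "alice", "time": datetime(2024, 8, 29, 12), "past_clicks": 3, "shares": 0},
            {"author": "spam", "time": datetime(2024, 8, 29, 9), "past_clicks": 90, "shares": 40},
        ]
        print([p["author"] for p in chronological_feed(posts, {"alice"})])  # ['alice']
        print([p["author"] for p in engagement_feed(posts)])                # ['spam', 'alice']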

      • I can see it'd be reasonable to want to exempt the algorithm "show posts from friends in chronological order".

        A sufficiently simple algorithm (inputs plus internal complexity, or the number of bits in a database query) would necessarily have little to no editorial bias. It might be stupid and imperfect, but we could just say that if it uses fewer than so many bits to make a decision, it's fair game.

      • I opted in to the content from my friends. I didn't opt in to the other crap that Facebook shows me. That's what Facebook wants to show me, not what I chose to see.
    • by DarkOx ( 621550 )

      I would argue that unless the user is in total control of the algorithm, the act of selecting criteria and weights IS editorializing. Farming the task out to a computer, rather than a staff member in the editorial office, to pick stories should not give you a magic exception. You are no longer just hosting third-party content; you are synthesizing a new publication from multiple sources according to your judgements, and that should fall outside CDA-230. Which is not to say there still are not a lot of 1A prot

      • unless the user is in total control of the algorithm

        That is the internet I was dreaming about: where the browser would allow the users to program web pages to display the information they want.

    • Even that can easily be manipulated. Don't want Jews to have a say? Use geolocation to eliminate voices that have 2nd order ties to someone in Israel.

      The solution is to have the user specify the criteria for recommendation, and to use only those criteria. Therefore, no editorial judgment.
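      A minimal sketch of that proposal, with hypothetical criteria names: the platform exposes a fixed menu of ranking signals and applies only the ones the user picked, so no platform-side judgment enters the ordering.

        # Hypothetical menu of user-selectable ranking signals.
        CRITERIA = {
            "newest": lambda v: v["timestamp"],
            "most_viewed": lambda v: v["views"],
            "most_liked": lambda v: v["likes"],
        }

        def user_ranked_feed(videos, chosen):
            # Sort by exactly the criteria the user chose; the platform adds nothing.
            keys = [CRITERIA[name] for name in chosen]
            return sorted(videos, key=lambda v: tuple(k(v) for k in keys), reverse=True)

        videos = [
            {"id": 1, "timestamp": 100, "views": 9000, "likes": 12},
            {"id": 2, "timestamp": 200, "views": 300, "likes": 50},
        ]
        print([v["id"] for v in user_ranked_feed(videos, ["newest"])])       # [2, 1]
        print([v["id"] for v in user_ranked_feed(videos, ["most_viewed"])])  # [1, 2]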

    • ... using mathematical tools ...

      This is like the bully saying "Stop hitting yourself": Under your rules, Microsoft "Tay" is allowed because a (statistical) algorithm forces it to choose racist and white-supremacy talking-points. Because the antisemitic data of "Tay" was selected by an algorithm, its words are legal.

      ... insiders have been accused ...

      How does the court prove the algorithm was overridden? If the owner loads antisemitic data into a word-engine, how much responsibility does he have when an algorithm selects racist and xenophobic phrases?

    • Your approach is similar to the approach used by the courts for protests. Cities can limit protests based on time or location (no loud protesting at midnight), but they cannot limit protests based on the content of the protest.
      • Your approach is similar to the approach used by the courts for protests. Cities can limit protests based on time or location (no loud protesting at midnight), but they cannot limit protests based on the content of the protest.

        1. The city cannot tell the protest organizers who is or isn't allowed at their event. (1st Amendment)
        2. The protest organizers are not responsible for what the protestors do. (230(c)(1))
        3. The protest organizers are expressly allowed to remove indecent protestors. (230(c)(2))

        Your privilege from 1 doesn't make everything you do outside of 3 automatically waive 2. That seems to be where this judge is going.

    • --> Supreme Court opinion that platform algorithms reflect "editorial judgments."

      IANAL, but algorithms based on technical metrics should be exempted, while algorithms based on content can be considered editorial judgments. Thus if TikTok or any other recommendation engine is simply recommending based on likes, dislikes, comments, viewership, geolocation, or language preference, those are not editorial judgments. But if it is based on content like "challenge", "Trump", "election fraud", "illegal immigration", etc., then it is clearly an editorial judgment. There have been several cases where insiders have been accused of promoting/demoting specific items based on what they contained, and Section 230 should not apply there. But if a company is just using mathematical tools, then it should be exempted.

      I don't think you get it, the editorial judgements thing is some bullshit this judge made up on the spot.

      Censoring, and limiting your liability for third-party speech, is all that 230 is about; it is literally the whole point of the Communications Decency Act. It allows you to host someone else's garbage online and the law will look at it as their garbage, not yours. The second thing it does is EXPLICITLY allow you to censor their garbage, whether or not it is constitutionally protected, in good faith. This was w

    • by jonwil ( 467024 )

      My view is that if the algorithm for YouTube (as an example) is recommending content based on other content you watched (e.g. "you watched a video about old computers, here is another video about old computers that we think you might like" or "you watched a video about guns, here is another video about guns we think you might like"), that is fine, so long as the algorithm is only deciding whether content is similar or not.

      But if a human is directing the algorithm to promote or suppress certain content (specific topics, channel

  • What kind of parent is this? The parent is culpable in the child's death for not monitoring what the child viewed.

    • Exactly. Such a law will never pass because 99% of people allow their 10 year old to do whatever, so they can't relate to the idea of restricting.

  • The only thing that I hate more than tiktok is the fact that users use tiktok in the way that they do. Of course, this ignores the fact that parents are allowing their children to use devices that connect them to the internet where things have turned very sour for young minds. But I can't really give a shit about a 10yo kid that chokes themselves to death following the sick and twisted social norms in place. Good for her. She got there. She achieved the tiktok award of fame because here we are talking
