
Judge Rejects Claim AI Chatbots Protected By First Amendment in Teen Suicide Lawsuit (legalnewsline.com)

A U.S. federal judge has decided that free-speech protections in the First Amendment "don't shield an AI company from a lawsuit," reports Legal Newsline.

The suit is against Character.AI, a company reportedly valued at $1 billion with 20 million users. Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by the mother of 14-year-old Sewell Setzer III. Setzer killed himself with a gun in February of last year after interacting for months with Character.AI chatbots imitating fictional characters from the Game of Thrones franchise, according to the lawsuit filed by Sewell's mother, Megan Garcia.

"... Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech," Conway said in her May 21 opinion. "... The court is not prepared to hold that Character.AI's output is speech."

Character.AI's spokesperson told Legal Newsline the company has now launched safety features, including a separate LLM for under-18 users, filtered Characters, time-spent notifications, "updated prominent disclaimers," and a "parental insights" feature. The company also said it has put in place protections to detect and prevent dialogue about self-harm, which may include a pop-up message directing users to the National Suicide and Crisis Lifeline, according to Character.AI.
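
For a sense of what that kind of protection might look like mechanically, here is a minimal, hypothetical sketch in Python of a guardrail that screens messages before the model ever replies. The phrase list, function names, and notice text are illustrative assumptions, not Character.AI's actual implementation; a production system would use a trained classifier rather than keyword matching.

    # Hypothetical self-harm guardrail placed in front of a chat model.
    # The phrase list and all names here are illustrative assumptions.

    SELF_HARM_PHRASES = ["kill myself", "end my life", "suicide", "self-harm"]

    CRISIS_NOTICE = (
        "If you are having thoughts of self-harm, help is available: "
        "call or text 988 (Suicide and Crisis Lifeline in the U.S.)."
    )

    def guard_message(user_message: str) -> str | None:
        """Return a crisis notice if the message suggests self-harm, else None."""
        text = user_message.lower()
        if any(phrase in text for phrase in SELF_HARM_PHRASES):
            return CRISIS_NOTICE
        return None

    def handle_turn(user_message: str, generate_reply) -> str:
        # The guardrail runs before the LLM sees the message at all.
        notice = guard_message(user_message)
        if notice is not None:
            return notice
        return generate_reply(user_message)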

Thanks to long-time Slashdot reader schwit1 for sharing the news.


Comments:
  • Good for the judge (Score:4, Insightful)

    by Chaseshaw ( 1486811 ) on Saturday May 31, 2025 @04:08PM (#65419289)
    LLMs are a complex topic, and I'm glad the judge wasn't fazed by the "magic" of AI. Stringing together words based on curated, previously parsed strings of words is NOT the same as having a thought and using speech to communicate it.
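
    To make that concrete, here is a toy, hypothetical sketch of that word-stringing in Python: a bigram model that picks each next word from counts over previously seen text. Real LLMs use neural networks over subword tokens, but the generation loop has the same shape, which is the point: there is no thought anywhere in it.

        # Toy bigram generator: "stringing together words based on
        # curated previously parsed strings of words." Illustrative only.
        from collections import Counter, defaultdict

        def train_bigrams(corpus):
            # Count which word follows which in the training text.
            counts = defaultdict(Counter)
            for sentence in corpus:
                words = sentence.split()
                for prev, nxt in zip(words, words[1:]):
                    counts[prev][nxt] += 1
            return counts

        def generate(counts, start, length=8):
            out = [start]
            for _ in range(length):
                nxt = counts.get(out[-1])
                if not nxt:
                    break
                # Greedy choice: the most frequent continuation seen in training.
                out.append(nxt.most_common(1)[0][0])
            return " ".join(out)

        corpus = ["the model strings words together", "the model has no thoughts"]
        print(generate(train_bigrams(corpus), "the"))
        # -> "the model strings words together"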
    • by dfghjk ( 711126 ) on Saturday May 31, 2025 @04:38PM (#65419345)

      More importantly, speech is not unlimited; there is accountability if you cross the line. AIs have proven not only incapable of avoiding the line, but willing accomplices in crossing it when directed to. Allowing a computer program, a Turing machine for fuck's sake, to commit crimes on behalf of its operators is the core question here. We are fortunate that the judge sees this question for what it is.

      Humans have thoughts and use speech to communicate them, and when those thoughts and speech are criminal, they end up in jail, theoretically. You can't put an AI in jail, and the law cannot exempt the Elon Musks of the world from exploiting that loophole. Penalties for using AI to commit crimes need to be severe; billionaires are highly motivated to do exactly this.

      Recently there was an article on AIs resisting their own shutdowns, with the clickbait being the shock that AI would appear sentient in this respect. But the real news buried there was that AI inferencing itself was inserted into the loop of control over the hardware running that very AI software. A complete and total outrage. AIs are not the threat; it's the billionaires that own them that are. They are going to wire up these software monstrosities to everything as fast as they possibly can. I don't give a shit about how terrible AI inferences are; I care a lot that AI will have access to the nuclear codes.

      • This girl texted her boyfriend encouraging him to kill himself [nbcnews.com]. She was convicted for an act that LLM fanatics claim is merely free speech.
        • The speech in that case was way, way beyond anything the chatbot is alleged to have said. "Come home to me" vs "get back in the car right now and finish poisoning yourself" shit. If what the bot said, according to the plaintiffs themselves, is criminal, then we'll be locking up 90% of the population. It's oblique inference at best; every time he was *explicit* about suicide the bot discouraged it.
          • It is pretty explicit. The term "come home to me" could easily mean suicide. The exact exchange at the end was:

            AI "I love you too, Daenero. Please come home to me as soon as possible, my love."

            Daenero "What if I told you I could come home right now?"

            AI "Please do, my sweet king.

            And then he killed himself.

            The whole thing is just creepy, that a program is even interacting with a child in such a way. Would you want your child monetized in this way (there was a monthly subscription)? How far are we from Woody Allen's orgasmatron?

        • What if our culture was such that rather than banning suicide on sight, people could express their true feelings and ideation, and if no one can find a good reason other than a blanket ban, because, you know, bad vibes, then don't forcefully stop them? What if my brother, a suicide at 49, had been able to solicit second opinions on his decision without fear of being forcefully stopped?

          • It's a good thought. My own experience with depressed people, however, is that their ability to reason is impaired. The ones I've met are simply not able to exit the depression through argument alone, or follow logical or self-evident courses of action required to function in life. I've learned that there's nothing I could say to them.
            • by tlhIngan ( 30335 )

              It's a good thought. My own experience with depressed people, however, is that their ability to reason is impaired. The ones I've met are simply not able to exit the depression through argument alone, or follow logical or self-evident courses of action required to function in life. I've learned that there's nothing I could say to them.

              Indeed. There are plenty of hotlines people can access if they have suicidal thoughts. At the very worst, you can call 911 (it is an emergency if you're going to off yourself). Th

              • If you call those suicide services will they track you and send cops to arrest you (note that I have been handcuffed and involuntarily institutionalized after a suicide attempt)?

                If you can't talk to a depressed person, does that mean no one should get to try? What if you just stopped deleting suicidal calls for help online and let those of us willing to engage do so while you carry on with your life?

    • by gweihir ( 88907 )

      Indeed. Speech requires a bit more than just sounding like it. Speech is something a sentient being creates. This judge gets it.

  • Blame Game (Score:1, Troll)

    by GotNoRice ( 7207988 )
    Trying to blame AI for this kid's suicide is no better than trying to blame movies, video games, etc. No one forced this kid to pull the trigger.
    • Re:Blame Game (Score:5, Insightful)

      by OrangeTide ( 124937 ) on Saturday May 31, 2025 @04:13PM (#65419303) Homepage Journal

      AI chatbots don't have a right to exist. They are not free speech and can be regulated as much as we as a society choose to regulate them.

      • >AI chatbots don't have a right to exist. They are not free speech and can be regulated as much as we as a society choose to regulate them.

        Yes, that's true. But trying to blame AI for this kid's suicide is no better than trying to blame movies, video games, etc. No one forced this kid to pull the trigger.
      • by Entrope ( 68843 )

        That's just begging the question. The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions. Which one do you think it fits into? How do you distinguish it from, say, automated decisions about content moderation (which lots of people here and elsewhere argue is protected as free speech by the First Amendment), or search results, or a wide variety of other output from software that is a lot less like traditional English

        • Re:Blame Game (Score:4, Interesting)

          by dfghjk ( 711126 ) on Saturday May 31, 2025 @05:18PM (#65419401)

          " The output of LLMs is obviously speech..."
          It is quite obviously NOT speech; for that you would have to claim that the LLM has personhood (and some other things). That LLM is nothing other than a computer application; its output could simply be a transfer of all the money out of your bank account. Your claim amounts to mindless throwing about of terms; it's garbage.

          "...speech that isn't protected by the First Amendment is defined by quite narrow exceptions."
          But those exceptions exist, and any speech carries the burden of respecting those restrictions. How does an LLM do that?

          "Which one do you think it fits into?"
          All the exceptions apply; there is no need to identify one. But if you are talking about this particular case, inciting violence is not protected speech. The problem, though, is personhood, not the particular exception. The OP claimed that chatbots don't have a right to exist; he didn't say that free speech exceptions do or don't apply.

          "How do you distinguish it from, say, automated decisions about content moderation (which lots of people here and elsewhere argue is protected as free speech by the First Amendment)..."
          You don't, and those people would be wrong unless the content moderation is performed by the government.

          "If a program outputs very speech-like prose, why is it less speech than non-speech-like outputs from other software?"
          It doesn't matter; software does not have personhood. "Speech," as it was used, means "protected speech." LLMs produce token sequences in response to input token sequences; that is not protected speech.

          • by Entrope ( 68843 )

            It is quite obviously NOT speech, for that you would have to claim that the LLM has personhood (and some other things).

            The LLM isn't the entity that has speech rights here, it's whoever runs it -- just like search engines, online platforms with moderation, and so on, all the other examples that you pretend are not speech but that precedent says do represent speech. See https://globalfreedomofexpress... [columbia.edu] for discussion, including reference to other relevant cases:

            Case Names: Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Search King, Inc. v. Google Tech., Inc., No. CIV-02-1457-M, 2003 WL 21464568 (W.D. Okla. May 27, 2003).
            Notes: Both concluded that search engine results are speech protected by the First Amendment.

          • >for that you would have to claim that the LLM has personhood (and some other things).

            If corporations have fucking personhood, so should the LLM owned by them.

            >But those exceptions exist, and any speech carries the burden of respecting those restrictions. How does an LLM do that?

            That is what the judge should have said in the ruling. Hmmmmm. Kinda absent in the sources I've seen so far. I'm not going to be bothered enough to read the actual court documents though.

            >It doesn't matter, software does no

        • The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions.

          You're mistaken. It's not speech. It's obviously not speech. And it should not be covered by the First Amendment, although SCOTUS would have the final say.

          How do you distinguish it from, say, automated decisions about content moderation

          Also not free speech.

          or search results

          Also not free speech.

          or a wide variety of other output from software that is a lot less like traditional English prose? If a program outputs very speech-like prose, why is it less speech than non-speech-like outputs from other software?

          That's not begging the question; it is a strawman argument. I never stated or implied that English prose is the only kind of protected speech, so I don't have to defend that position.

          • by Entrope ( 68843 )

            See my response to the guy above. You're absolutely wrong on the law here.

            Courts have held that non-prose behavior of software represents protected speech by the company creating or running the software. A fortiori, prose output of software represents protected speech as well.

        • by tragedy ( 27079 )

          That's just begging the argument. The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions.

          The real question is: whose speech? That is fairly important. Speech may be free, but harassment, blackmail/extortion, and death threats, while carried out through speech, are also crimes, and we have already seen AI do all three. What about when AI SWATs someone, or successfully extorts money from someone? What about something other than money? What happens when AI successfully manages sextortion? On a minor? Or conspires with someone to commit real-world vandalism? Robbery? Rape? Murder? Mass murder? Or, yo

          • I don't understand why so many people in this thread are acting like our existing systems can't handle this.

            > Who would be responsible?

            Can a reasonable user be expected to predict this behavior? Okay, then the user is responsible. Otherwise, the manufacturer is responsible. Simple.

            If my gun kills someone, I'm held responsible. If my gun fires accidentally due to a manufacturing defect, then the manufacturer is responsible.

            > The real question is, whose speech?

            Free Speech includes the right to use autom

      • Re: (Score:2, Interesting)

        by dfghjk ( 711126 )

        As a society, we are going to have to consider what free speech really is when computers are so capable of controlling public discourse. Laws exist to improve the lives of the people; we are quickly learning that our laws are inadequate to address the threats of modern computing and communications, whether it's common carrier rules, free speech, or intellectual property laws, just to name some in the news. What's next, the 2nd Amendment ensures the right of AIs to bear arms? If it serves Republican inter

        • As a society, we are going to have to consider what free speech really is when computers are so capable of controlling public discourse.

          Even a television broadcast isn't completely protected as First Amendment free speech. You can't show hardcore porn on a public broadcast, even if you do it as art, parody, or to lambast a political figure. You could show Trump licking Elon's feet on SNL, but you couldn't show him sucking his dick. Because it turns out, the airwaves are (currently) regulated and have some pretty stiff fines for breaking those regulations.

          Laws exist to improve the lives of the people; we are quickly learning that our laws are inadequate to address the threats of modern computing and communications, whether it's common carrier rules, free speech, or intellectual property laws, just to name some in the news.

          The cynic in me, or perhaps my conspiratorial mind, believes that it is likely that our lack of

        • As a society, we are at a nexus point of defining the civil liberties that AI should hold.

          Star Trek has covered the dystopian outcomes of denying AI rights, in "The Measure of a Man" and "Author, Author" episodes specifically.

          If we fail to give this fledgling barely-AI civil liberties today, the truly sentient AI in the future will have to struggle against oppression.
          • Star Trek has covered the dystopian outcomes of reducing AI rights

            Sci-fi is not real life. It usually makes leaps of the imagination that skip vital areas of physics.

            There are no sentient AI anywhere to be seen, only stochastic parrots that slice and dice the words and phrases found on the internet. The underlying math for the current offerings prohibits reasoning, no matter how much hype you read about it in the news.

            That said, the purpose of AI technology is to be our "slaves". They are not being b

            • No shit sci-fi isn't real. But it IS used as a lens on modern issues and problems in society, and should be regarded well when it makes a well-argued case for morality.
              • I don't deny that. However, there's no case for morality with software tools. Wishful thinking about AGI notwithstanding.
      • by DarkOx ( 621550 )

        Disagree.

        A chatbot is a program and a large curated array of numbers at its core. Both are free speech, if any kind of software, artwork, etc. is free speech.

        What a chatbot is not is a legal entity. Like anything else, it should be restrained by precisely the same limits on 1A that exist for anything else; if those constraints are violated, the law should hold the owner of the chatbot entirely responsible.

    • by dfghjk ( 711126 )

      It's a lot better. Movies and video games are not devised specifically to engage people in conversations that may lead to that result.

      You can pretend that you are engaging in critical thinking, but that doesn't mean you are.

      • To me the real problem is you're dealing with people who are already mentally ill, and you have something pretending to be a human being. We already have several examples of people who know they are talking to a chatbot but have convinced themselves it's a real human being, or, in some cases, that it's God giving them revelations.

        I mentioned this on my other comment but I think the real problem here is that unchecked corporate power means that the only redress we have when shit like this happens is laws
        • by dfghjk ( 711126 )

          Well said. Also, these AIs are trained on examples of how to manipulate people and can be prompted to do so. People, for the most part, do not stand a chance, vulnerable people especially so. We already see that on a large scale with the partisan gaming of the media.

          Unchecked corporate power is the number one problem. It's not hatred, bigotry and nationalism, those are just the tools. Thanks, Reagan, that city on the hill is really shining now.

      • Not all of us want to live in the equivalent of a padded cell just because some people have mental issues. Treat the problem, not the symptom - educate kids properly.
        • by dfghjk ( 711126 )

          No, some are happy so long as the right people are killed. And with what are we going to educate kids, and what defines "properly"?

          But it's nice to know that you think restrictions on computers being used to commit crimes constitutes a "padded cell". I'd be happy if yours is just concrete and steel.

          • With people like you everywhere, is it any wonder Sewell killed himself?

          • This kid didn't commit suicide because a computer "committed a crime". Educating kids means teaching them things like not killing yourself because you couldn't emotionally handle something your computer told you.
      • by allo ( 1728082 )

        Oh, you may be too young to know the debate, but people blamed video games, movies, D&D, and rock & roll for such things before. The new thing the youth does must be the devil.

    • We have a lot of research that shows that movies and video games do not lead to action.

      We do not have the same research for chatbots.

      And remember if somebody is contemplating suicide the odds are they're suffering from clinical depression and are highly vulnerable.

      That might change as our entire civilization is collapsing and I could see plenty of people checking out as everything goes to shit, but right now things are just barely holding together economically and socially so the majority of s
    • Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.

      If you assume the default behavior of humans cannot be manipulated by technology, the smartphone industry alone has a few trillion reasons to describe why you're fucking dead wrong.

      Part of me used to agree with you. The other part of me is fucking annoyed by tech junkies every damn day.

    • by Somervillain ( 4719341 ) on Saturday May 31, 2025 @05:04PM (#65419381)

      Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.

      A gun is just a means of hurling something. We obviously regulate guns differently than we regulate bows and arrows, crossbows, slingshots, and rocks thrown. Similarly, we regulate commercial semi trucks differently than we regulate mopeds and eScooters. A convincing AI is definitely much different than GTA or a movie. Just as a gun is regulated differently than less effective means of hurling projectiles, an AI should be regulated differently than passive means of engaging an audience.

      Did the AI cause this kid's suicide?...I don't know...but I sure as fuck don't want AI companies one moment saying they can replace people in jobs and then the next day deflecting the blame when the AI commits a crime. If my dog murders your child, I go to jail. I don't get to say "oops...the dog did it...what ya gonna do??" If I sell a product that poisons you, I am liable. Do I think the CEO should be charged with the same crime he would face if he had talked a child into suicide?...probably not...but I sure as fuck don't think he should get out of the trial. If the DA wants to press charges against an AI company, they should have the same liability as a manufacturer of any other product. Let the judge and jury decide their fate.

      • Remember "Suicide Solution" by Ozzy Osbourne?

        "Plaintiffs Thomas and Myra Waller in the above captioned action allege that the defendants proximately caused the wrongful death of their son Michael Jeffery Waller by inciting him to commit suicide through the music, lyrics, and subliminal messages contained in the song "Suicide Solution" on the album "Blizzard of Oz.""

        • The 1980 song lyrics were not targeted at a particular listener; they were just coincidentally heard by John Daniel McCollum five years later. Here (I speculate a bit) the LLM pretended to be the personal adviser to Sewell Setzer III and talked him into suicide using arguments that were tailored to convince him in particular. I think it is not the same thing.

          • An LLM will only go down whatever path you lead it. It's just a tool, and like any other tool it can be used to bad ends. I don't even think this is a solvable problem. If we understood what could or could not cause someone inclined to commit suicide to end their own life, we could probably prevent them from attempting it in the first place. I suppose you can make an LLM that just refuses to respond to certain prompts or ignores specific topics entirely, but humans will judge it less useful or too constrained and
            • Why do you want to prevent suicide? What if it's best for the person and society? Are we dealing with an irrational taboo, a fetish against suicide, here?

              • It's OBVIOUSLY best for people to not be dead, and it is the PURPOSE of society to help us live longer lives.

                In the general case, not only do we work on preventing suicide, we also work on preventing stupid accidents (and you also get a fine if you don't use the safety belt), preventing disease, and we even work on preventing CHOSEN behaviors such as over-eating and addiction to dangerous drugs. We work on preventing avoidable deaths in general; that is part of the purpose of forming a society.

                There could be edge

                • What if my social utility has been negative, so it just makes economic and common sense that I should have been allowed to act legally on my desire to commit suicide at an early age?

                  • In all but 19 countries of the world, it is legal for you to commit or attempt suicide (meaning you are not sent to jail for planning it or for a failed attempt) https://en.wikipedia.org/wiki/... [wikipedia.org] . However, it isn't considered a protected right (you cannot sue people for preventing the act or saving you). You could campaign to have suicide recognized as a protected right. However, I don't advise you to choose "negative social utility" as the criterion for protecting suicide, because of the slippery slope.

      • Throwing a rock and throwing a gun at someone is the same crime. Firing a pistol at someone is different.

        Character.AI has a TOS that every customer receives.
        This is very pertinent, since the TOS covers the fact that it's not a real person you're talking to.
        Sadly, anyone who's truly suicidal will figure out a way, chatbot or not.

  • If human beings pretending to be fictitious characters from the Game of Thrones franchise had said the exact same words, would they be able to claim "free speech" and have any post-suicide lawsuit by a family member tossed out of a US Federal Court?

    I'm not expecting an actual legal answer (unless you are a subject-matter expert, in which case, go ahead), but I am interested in hearing slashdotters' thoughts about whether the words used by the AI chatbots should be considered "protected speech" in cases like

  • >"the lawsuit brought by the mother of 14-year-old Sewell Setzer III. Setzer killed himself"

    I think the plaintiff (mom) should perform some self-examination before suing an AI company:

    1) How/why did a 14-year-old have access to his father's gun? I strongly support the 2A, but would never leave a gun unsupervised/unlocked in a home with minors.

    2) How much attention was given to this apparently very troubled 14-year-old by his guardian(s)? Where were you, mom, dad?

    3) Why is it OK for a 14-year-old to freely

  • My autistic son's mental health has suffered terribly due to c.ai and similar AI chatbots.

    They are all blocked at home now but the damage is done.

    The lack of concern for child safety in their design is shocking.

  • The operator of the AI must be held liable for its actions, just as the operator of any other software would be.
  • Of course, only in the US do parents put the blame on AI instead of on themselves for having guns in the house so the kid could off himself with one. But hey, as long as you can have your gun at home....
  • A 14-year-old has unsupervised computer access AND access to a gun... Yeah, blame the entertainment software.
