

Judge Rejects Claim AI Chatbots Protected By First Amendment in Teen Suicide Lawsuit (legalnewsline.com)
A U.S. federal judge has decided that free-speech protections in the First Amendment "don't shield an AI company from a lawsuit," reports Legal Newsline.
The suit is against Character.AI, a company reportedly valued at $1 billion with 20 million users. Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by the mother of 14-year-old Sewell Setzer III. Setzer killed himself with a gun in February of last year after interacting for months with Character.AI chatbots imitating fictitious characters from the Game of Thrones franchise, according to the lawsuit filed by Sewell's mother, Megan Garcia.
"... Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech," Conway said in her May 21 opinion. "... The court is not prepared to hold that Character.AI's output is speech."
Character.AI's spokesperson told Legal Newsline they've now launched safety features (including an under-18 LLM, filtered Characters, time-spent notifications, "updated prominent disclaimers," and a "parental insights" feature). "The company also said it has put in place protections to detect and prevent dialog about self-harm. That may include a pop-up message directing users to the National Suicide and Crisis Lifeline, according to Character.AI."
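Character.AI has not published how that detection works. As a purely illustrative sketch, a keyword-triggered safeguard of the kind described could look like the following Python, where the pattern list and the check_message() helper are hypothetical stand-ins (a production system would presumably use a trained classifier rather than a handful of regexes):

    import re

    # Message shown in the pop-up; 988 is the real US Suicide and Crisis
    # Lifeline number, everything else here is hypothetical.
    CRISIS_MESSAGE = (
        "If you're struggling, help is available: call or text 988 "
        "(Suicide and Crisis Lifeline)."
    )

    # Hypothetical trigger patterns; a deployed filter would be far broader.
    SELF_HARM_PATTERNS = [
        re.compile(r"\bkill myself\b", re.IGNORECASE),
        re.compile(r"\bend my life\b", re.IGNORECASE),
        re.compile(r"\bsuicide\b", re.IGNORECASE),
    ]

    def check_message(text):
        """Return the crisis pop-up text if the message matches, else None."""
        if any(p.search(text) for p in SELF_HARM_PATTERNS):
            return CRISIS_MESSAGE
        return None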
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Good for the judge (Score:4, Insightful)
Re:Good for the judge (Score:5, Insightful)
Regardless of your ultimate thoughts on this lawsuit or the parties involved, an LLM is not anywhere near human enough to be granted anything resembling human rights or constitutional protections.
Re: (Score:2)
I like how people downvote comments like this without any comments or replies.
That's how Slashdot works.
You can either reply to a comment, or you can downvote, but you can't do both.
Re: (Score:1)
All kind of generated text on websites are speech. Google may show you illegal search results (as long as they are not aware of them being illegal) because of free speech. Otherwise they couldn't exist or have only a very small curated index.
Re: (Score:2)
I look at it from the other direction: the union of an LLM and its human owner isn't inhuman enough to be spared the usual consequences of their speech.
You don't need to write a bot to follow me around and spam me; you could do it yourself. And you'd get in trouble. When you told the judge "freeze peach," the judge and I would agree with you that, yes, you did have the right to accrue the consequences.
Re: (Score:3)
Speech is generally recognized as something that's produced by humans. If I wrote a very simple bot program that followed you around the Internet and spammed you, you'd hardly be amenable to arguments that my bot program enjoys free speech protections under the first amendment to engage in such behavior.
Neither do humans if they are engaging in stalking behavior. The issue here should really be about the speech and not the agent which communicates it. If the speech would not be illegal for a human to utter, there is no reason it should be treated differently if "spoken" by A.I. software. Computer software is considered speech under the First Amendment, and that should cover any communications by the software. But the First Amendment doesn't cover all speech. Inciting crime, uttering threats, stalking
Re: (Score:3)
The output of an algorithm?
Re:Good for the judge (Score:5, Insightful)
More importantly, speech is not unlimited; there is accountability if you cross the line. AIs have proven not only incapable of avoiding the line, but willing accomplices in crossing it when directed to. Whether to allow a computer program, a Turing machine for fuck's sake, to commit crimes on behalf of its operators is the core question here. We are fortunate that the judge sees this question for what it is.
Humans have thoughts and use speech to communicate them, and when those thoughts and speech are criminal, they end up in jail, theoretically. You can't put an AI in jail, and the law cannot let the Elon Musks of the world exploit that loophole. Penalties for using AI to commit crimes need to be severe; billionaires are highly motivated to do exactly this.
Recently there was an article on AIs resisting their own shutdown, with the clickbait being the shock that AI would appear sentient in this respect. But the real news buried there was that AI inferencing itself was inserted into the loop of control over the hardware running that very AI software. A complete and total outrage. AIs are not the threat; the billionaires that own them are. They are going to wire these software monstrosities up to everything as fast as they possibly can. I don't give a shit about how terrible AI inferences are; I care a lot that AI will have access to the nuclear codes.
Re: (Score:2)
AI "I love you too, Daenero. Please come home to me as soon as possible, my love."
Daenero "What if I told you I could come home right now?"
AI "Please do, my sweet king.
And kills himself.
The whole thing is just creepy, that a program is even interacting with a child in such a way. Would you want your child monetized (there was a monthly subscription) in this way? How far are we from Woody Allen's orgasmatron?
Re: Good for the judge (Score:1)
What if our culture was such that rather than banning suicide on sight, people could express their true feelings and ideation, and if no one can find a good reason other than a blanket ban, because, you know, bad vibes, then don't forcefully stop them? What if my brother, a suicide at 49, had been able to solicit second opinions on his decision without fear of being forcefully stopped?
Re: (Score:3)
Indeed. There are plenty of hot lines people can access if they have suicidal thoughts. At the very worst, you can call 911 (it is an emergency if you're going to off yourself). Th
Re: Good for the judge (Score:1)
If you call those suicide services, will they track you and send cops to arrest you (note that I have been handcuffed and involuntarily institutionalized after a suicide attempt)?
If you can't talk to a depressed person, does that mean no one should get to try? What if you just stopped deleting suicidal calls for help online and let those of us willing to engage do so while you carry on with your life?
Re: (Score:2)
Indeed. Speech requires a bit more than just sounding like it. Speech is something a sentient being creates. This judge gets it.
Blame Game (Score:1, Troll)
Re:Blame Game (Score:5, Insightful)
AI chatbots don't have a right to exist. They are not free speech and can be regulated as much as we as a society choose to regulate them.
Re: (Score:1)
Yes, that's true. But trying to blame AI for this kid's suicide is no better than trying to blame movies, videogames, etc. No one forced this kid to pull the trigger.
Re: (Score:1)
That's just begging the question. The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions. Which one do you think it fits into? How do you distinguish it from, say, automated decisions about content moderation (which lots of people here and elsewhere argue is protected as free speech by the First Amendment), or search results, or a wide variety of other output from software that is a lot less like traditional English prose? If a program outputs very speech-like prose, why is it less speech than non-speech-like outputs from other software?
Re:Blame Game (Score:4, Interesting)
" The output of LLMs is obviously speech..."
It is quite obviously NOT speech; for that, you would have to claim that the LLM has personhood (and some other things). That LLM is nothing other than a computer application; its output could simply be a transfer of all the money out of your bank account. Your claim amounts to mindless throwing about of terms; it's garbage.
"...speech that isn't protected by the First Amendment is defined by quite narrow exceptions."
But those exceptions exist, and any speech carries the burden of respecting those restrictions. How does an LLM do that?
"Which one do you think it fits into?"
All exceptions apply, there is no need to identify one. But if you are talking about this particular case, inciting violence is not protected speech. The problem, though, is personhood, not the particular exception. The OP claimed that chatbots don't have a right to exist, he didn't say that free speech exceptions do or don't apply.
"How do you distinguish it from, say, automated decisions about content moderation (which lots of people here and elsewhere argue is protected as free speech by the First Amendment)..."
You don't, and those people would be wrong unless the content moderation is performed by the government.
"If a program outputs very speech-like prose, why is it less speech than non-speech-like outputs from other software?"
It doesn't matter; software does not have personhood. "Speech," as it was used, means "protected speech." LLMs produce token sequences in response to input token sequences; that is not protected speech.
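(As an illustration of that mechanical description, here is a toy Python sketch of the generation loop; next_token_distribution() is a stub standing in for the billions of learned weights a real model would use:)

    import random

    def next_token_distribution(context):
        # Stub: a real LLM computes these probabilities from the context
        # using learned weights; this toy version returns fixed odds.
        return {"the": 0.5, "a": 0.3, "<eos>": 0.2}

    def generate(prompt, max_tokens=20):
        """Repeatedly sample a next token given the tokens so far."""
        tokens = list(prompt)
        for _ in range(max_tokens):
            dist = next_token_distribution(tokens)
            token = random.choices(list(dist), weights=list(dist.values()))[0]
            if token == "<eos>":
                break
            tokens.append(token)
        return tokens

    print(generate(["words", "strung", "together"]))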
Re: (Score:3)
It is quite obviously NOT speech, for that you would have to claim that the LLM has personhood (and some other things).
The LLM isn't the entity that has speech rights here, it's whoever runs it -- just like search engines, online platforms with moderation, and so on, all the other examples that you pretend are not speech but that precedent says do represent speech. See https://globalfreedomofexpress... [columbia.edu] for discussion, including reference to other relevant cases:
Case Names: Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Search King, Inc. v. Google Tech., Inc., No. CIV-02-1457-M, 2003 WL 21464568 (W.D. Okla. May 27, 2003).
Notes: Both concluded that search engine results are speech protected by the First Amendment.
Re: (Score:2)
>for that you would have to claim that the LLM has personhood (and some other things).
If corporations have fucking personhood, so should the LLMs owned by them.
>But those exceptions exist, and any speech carries the burden of respecting those restrictions. How does an LLM do that?
That is what the judge should have said in the ruling. Hmmmmm. Kinda absent in the sources I've seen so far. I'm not going to be bothered enough to read the actual court documents though.
>It doesn't matter, software does no
Re: (Score:2)
The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions.
You're mistaken. It's not speech. It's obviously not speech. And it should not be covered by the First Amendment, although SCOTUS would have the final say.
How do you distinguish it from, say, automated decisions about content moderation
Also not free speech.
or search results
Also not free speech.
or a wide variety of other output from software that is a lot less like traditional English prose? If a program outputs very speech-like prose, why is it less speech than non-speech-like outputs from other software?
While that's not begging the question, it is a strawman argument. I never stated or implied that English prose is the only kind of protected speech, so I don't have to defend that position.
Re: (Score:2)
See my response to the guy above. You're absolutely wrong on the law here.
Courts have held that non-prose behavior of software represents protected speech by the company creating or running the software. A fortiori, prose output of software represents protected speech as well.
Re: Blame Game (Score:2)
Reread my last paragraph. Have a nice day!
Re: (Score:2)
You continue to miss my point with the discussion of prose vs non-prose. Learn to read!
Re: (Score:2)
That's just begging the question. The output of LLMs is obviously speech -- and in the US, speech that isn't protected by the First Amendment is defined by quite narrow exceptions.
The real question is: whose speech? That is fairly important. Speech may be free, but harassment, blackmail/extortion, and death threats, while done through speech, are also crimes, and we have already seen AI do all three of those. What about when AI SWATs someone, or successfully extorts money from someone? What about something other than money? What happens when AI successfully manages sextortion? On a minor? Or conspires with someone to commit real-world vandalism? Robbery? Rape? Murder? Mass murder? Or, yo
Re: (Score:2)
I don't understand why so many people in this thread are acting like our existing systems can't handle this.
> Who would be responsible?
Can a reasonable user be expected to predict this behavior? Okay, then the user is responsible. Otherwise, the manufacturer is responsible. Simple.
If my gun kills someone, I'm held responsible. If my gun fires accidentally due to a manufacturing defect, then the manufacturer is responsible.
> The real question is, whose speech?
Free Speech includes the right to use autom
Re: (Score:2, Interesting)
As a society, we are going to have to consider what free speech really is when computers are so capable of controlling public discourse. Laws exist to improve the lives of the people, and we are quickly learning that our laws are inadequate to address the threats of modern computing and communications, whether it's common carrier rules, free speech, or intellectual property laws, just to name some in the news. What's next, the 2nd Amendment ensures the right of AIs to bear arms? If it serves Republican inter
Re: (Score:2)
As a society, we are going to have to consider what free speech really is when computers are so capable of controlling public discourse.
Even a television broadcast isn't protected completely as First Amendment free speech. I can't show hardcore porn on a public broadcast, even if I do it as art, parody, or to lambast a political figure. You could show Trump licking Elon's feet on SNL, but you couldn't show him sucking his dick. Because, it turns out, the airwaves are (currently) regulated and have some pretty stiff fines for breaking those regulations.
Laws exist to improve the lives of the people, we are quickly learning that our laws are inadequate to address the threats of modern computing and communications, whether it's common carrier rules, free speech or intellectual property laws, just to name some in the news.
The cynic in me, or perhaps my conspiratorial mind, believes that it is likely that our lack of
Re: (Score:2)
Star Trek has covered the dystopian outcomes of reducing AI rights: "The Measure of a Man" and "Author, Author" episodes specifically.
If we fail to give this fledgling barely-AI civil liberties today, the truly sentient AI in the future will have to struggle against oppression.
Re: (Score:2)
Sci-fi is not real life. It usually makes leaps of the imagination that skip vital areas of physics.
There are no sentient AI anywhere to be seen, only stochastic parrots that slice and dice the words and phrases found on the internet. The underlying math for the current offerings prohibits reasoning, no matter how much hype you read about it in the news.
That said, the purpose of AI technology is to be our "slaves". They are not being b
Re: (Score:2)
Disagree.
A chatbot is a program and a large curated array of numbers at its core. Both are free speech, if any kind of software, artwork, etc. is free speech.
What a chatbot is not, is a legal entity. Like anything else, it should be restrained by precisely the same limits on 1A that exist for anything else; if those constraints are violated, the law should hold the owner of the chatbot entirely responsible.
Re: (Score:3)
It's a lot better. Movies and video games are not devised specifically to engage people in conversations that may lead to that result.
You can pretend that you are engaging in critical thinking, but that doesn't mean you are.
Re: (Score:2)
I mentioned this on my other comment but I think the real problem here is that unchecked corporate power means that the only redress we have when shit like this happens is laws
Re: (Score:2)
Well said. Also, these AIs are trained on examples of how to manipulate people and can be prompted to do so. People, for the most part, do not stand a chance, vulnerable people especially so. We already see that on a large scale with the partisan gaming of the media.
Unchecked corporate power is the number one problem. It's not hatred, bigotry and nationalism, those are just the tools. Thanks, Reagan, that city on the hill is really shining now.
Re: (Score:1)
No, some are happy so long as the right people are killed. And with what are we going to educate kids, and what defines "properly"?
But it's nice to know that you think restrictions on computers being used to commit crimes constitutes a "padded cell". I'd be happy if yours is just concrete and steel.
Re: Blame Game (Score:1)
With people like you everywhere, is it any wonder Sewell killed himself?
Re: (Score:2)
Oh, you may be too young to know the debate, but people blamed video games, movies, D&D, and rock & roll for such things before. The new thing the youth does must be the devil.
That's not a fair comparison (Score:2)
We do not have the same research for chatbots.
And remember if somebody is contemplating suicide the odds are they're suffering from clinical depression and are highly vulnerable.
That might change as our entire civilization is collapsing and I could see plenty of people checking out as everything goes to shit, but right now things are just barely holding together economically and socially so the majority of s
Re: true (Score:1)
Why can't you liberalize suicide markets so more self-recognized deadweight losses can klerck themselves legally? Won't the savings be worth it?
Re: (Score:2)
Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.
If you assume the default behavior of humans cannot be manipulated by technology, the smartphone industry alone has a few trillion reasons to describe why you're fucking dead wrong.
Part of me used to agree with you. The other part of me is fucking annoyed by tech junkies every damn day.
and a gun is the same as a thrown rock? (Score:5, Interesting)
Trying to blame AI for this kid's suicide is no better than trying to blame Movies, Videogames, etc. No one forced this kid to pull the trigger.
A gun is just a means of hurling something. We obviously regulate guns differently than we regulate bows and arrows, crossbows, slingshots, and rocks thrown. Similarly, we regulate commercial semi trucks differently than we regulate mopeds and eScooters. A convincing AI is definitely much different than GTA or a movie. Just as a gun is regulated differently than less effective means of hurling projectiles, an AI should be regulated differently than passive means of engaging an audience.
Did the AI cause this kid's suicide?...I don't know...but I sure as fuck don't want AI companies one moment saying they can replace people in jobs and then the next day deflecting the blame when the AI commits a crime. If my dog murders your child, I go to jail. I don't get to say "oops...the dog did it...what ya gonna do??" If I sell a product that poisons you, I am liable. Do I think the CEO should be charged with the same crime he would be as if he talked a child into suicide?...probably not...but I sure as fuck don't think he should get out of the trial. If the DA wants to press charges against an AI company, they should have the same liability as a manufacturer of any other product. Let the judge and jury decide their fate.
Re: and a gun is the same as a thrown rock? (Score:1)
Remember "Suicide Solution" by Ozzy Osbourne?
"Plaintiffs Thomas and Myra Waller in the above captioned action allege that the defendants proximately caused the wrongful death of their son Michael Jeffery Waller by inciting him to commit suicide through the music, lyrics, and subliminal messages contained in the song "Suicide Solution" on the album "Blizzard of Oz.""
Re: (Score:2)
The 1980 song lyrics were not targeted at a particular listener; they were just coincidentally heard by John Daniel McCollum five years later. Here (I speculate a bit) the LLM pretended to be the personal adviser to Sewell Setzer III and talked him into suicide using arguments that were tailored to convince him in particular. I think it is not the same thing.
Re: and a gun is the same as a thrown rock? (Score:1)
Why do you want to prevent suicide? What if it's best for the person and society? Are we dealing with an irrational taboo, a fetish against suicide, here?
Re: (Score:2)
It's OBVIOUSLY best for people to not be dead, and it is the PURPOSE of society to help us continue to be alive for longer lives.
In the general case, not only do we work on preventing suicide, we also work on preventing stupid accidents (and you also get a fine if you don't use the safety belt), preventing disease, and we even work on preventing CHOSEN behaviour such as over-eating and addiction to dangerous drugs. We work on preventing avoidable deaths in general; that is part of the purpose of making a society.
There could be edge
Re: and a gun is the same as a thrown rock? (Score:1)
What if my social utility has been negative, so it just makes economic and common sense that I should have been allowed to act legally on my desire to commit suicide at an early age?
Re: (Score:2)
In all but 19 countries of the world, it is legal for you to commit or attempt suicide (meaning you are not sent to jail for planning it or for a failed attempt) https://en.wikipedia.org/wiki/... [wikipedia.org]. However, it isn't considered a protected right (you cannot sue people for preventing the act or saving you). You could campaign to have suicide recognized as a protected right. However, I don't advise you to choose "negative social utility" as the criterion for protecting suicide, because of the slippery slope.
Re: (Score:2)
Throwing a rock and throwing a gun at someone is the same crime. Firing a pistol at someone is different.
Character.AI has a TOS that every customer receives.
This is very pertinent since the TOS covers the fact that it's not a real person you're talking to.
Sadly, anyone who's truly suicidal will figure out a way, chatbot or not.
Re: (Score:1)
I don't care how manipulative anyone or anything is. There is no one responsible for a suicide death other than the victim.
That may or may not be true in this case, but it's not universally true. Here is an extreme-outlier example:
Imagine you are my doctor and you know that I've talked about going to Europe for physician-assisted suicide if I get stage 4 pancreatic cancer. Imagine you turn evil and are able to convince me that I have stage 4 pancreatic cancer and you are tricking me into taking drugs that mimic the symptoms. Imagine you have evil-doctor friends who will corroborate this and an evil-doctor friend in Europe wh
Re: (Score:2)
Is the doctor an asshole? Sure. Should he be flogged? I'd like to, but that opens another can of worms.
But all fault lies solely with you. Most people that contract stage 4 cancer and die do so from the cancer. Few die of suicide. Those that die of suicide, regardless of brainwashing, die of their own volition and by their own hand. No one else is responsible.
But, you needn't try so hard to illustrate your point. I understand your point and I can provide a real life example. This girlfriend convinced her boyfriend [nbcboston.com]
If a human being had been pretending, then what? (Score:1)
If human beings pretending to be fictitious characters from the Game of Thrones franchise had said the exact same words, would they be able to claim "free speech" and have any post-suicide lawsuit by a family member tossed out of a US Federal Court?
I'm not expecting an actual legal answer (unless you are a subject-matter expert, in which case, go ahead), but I am interested in hearing slashdotters' thoughts about whether the words used by the AI chatbots should be considered "protected speech" in cases like
Plaintiff playing blame game (Score:2)
>"the lawsuit brought by the mother of 14-year-old Sewell Setzer III. Setzer killed himself"
I think the plaintiff (mom) should perform some self-examination before suing an AI company:
1) How/why did a 14-year-old have access to his father's gun?? I strongly support the 2A, but would never leave a gun unsupervised/unlocked in a home with minors.
2) How much attention was given to this apparently very-troubled 14-year-old by his guardian(s)? Where were you, mom, dad?
3) Why is it OK for a 14-year-old to freely
AI chatbots are a mental health nightmare (Score:1)
My autistic son's mental health has suffered terribly due to c.ai and similar AI chatbots.
They are all blocked at home now but the damage is done.
The lack of concern for child safety in their design is shocking.
Liability (Score:2)
Yeah, only in the US (Score:2)
Parent sue thyself. (Score:2)