OpenAI Must Defend ChatGPT Fabrications After Failing To Defeat Libel Suit 65
An anonymous reader quotes a report from Ars Technica: OpenAI may finally have to answer for ChatGPT's "hallucinations" in court after a Georgia judge recently ruled against the tech company's motion to dismiss a radio host's defamation suit (PDF). OpenAI had argued that ChatGPT's output cannot be considered libel, partly because the chatbot output cannot be considered a "publication," which is a key element of a defamation claim. In its motion to dismiss, OpenAI also argued that Georgia radio host Mark Walters could not prove that the company acted with actual malice or that anyone believed the allegedly libelous statements were true or that he was harmed by the alleged publication.
It's too early to say whether Judge Tracie Cason found OpenAI's arguments persuasive. In her order denying OpenAI's motion to dismiss, which MediaPost shared here, Cason did not specify how she arrived at her decision, saying only that she had "carefully" considered arguments and applicable laws. There may be some clues as to how Cason reached her decision in a court filing (PDF) from John Monroe, attorney for Walters, when opposing the motion to dismiss last year. Monroe had argued that OpenAI improperly moved to dismiss the lawsuit by arguing facts that have yet to be proven in court. If OpenAI intended the court to rule on those arguments, Monroe suggested that a motion for summary judgment would have been the proper step at this stage in the proceedings, not a motion to dismiss.
Had OpenAI gone that route, though, Walters would have had an opportunity to present additional evidence. To survive a motion to dismiss, all Walters had to do was show that his complaint was reasonably supported by facts, Monroe argued. Failing to convince the court that Walters had no case, OpenAI's legal theories regarding its liability for ChatGPT's "hallucinations" will now likely face their first test in court. "We are pleased the court denied the motion to dismiss so that the parties will have an opportunity to explore, and obtain a decision on, the merits of the case," Monroe told Ars. "Walters sued OpenAI after a journalist, Fred Riehl, warned him that in response to a query, ChatGPT had fabricated an entire lawsuit," notes Ars. "Generating an entire complaint with an erroneous case number, ChatGPT falsely claimed that Walters had been accused of defrauding and embezzling funds from the Second Amendment Foundation."
"With the lawsuit moving forward, curious chatbot users everywhere may finally get the answer to a question that has been unclear since ChatGPT quickly became the fastest-growing consumer application of all time after its launch in November 2022: Will ChatGPT's hallucinations be allowed to ruin lives?"
Ahahahahaha! (Score:5, Insightful)
Also, hahahahah! These fuckers may finally get taken to task for their crappy product.
Re:Ahahahahaha! (Score:4, Interesting)
Given that Tucker can get away with it, I'm sure ChatGPT will skate free. https://www.npr.org/2020/09/29... [npr.org]
Re: (Score:3)
It's very hard to see a result in which the results of a GPT query is considered a publication, so while this may be a setback for OpenAI they still seem very favored to win this case. Someone who publishes the results of a GPT query will ultimately be on the hook for any libel lawsuits.
Re: (Score:2)
The legal definition of publication when it comes to libel is pretty broad. If OpenAI showed a 3rd party some bad info on a web site, that maybe could qualify. What a mess.
https://www.legalmatch.com/law... [legalmatch.com]
Re: (Score:2)
From your link:
"A publication is the delivery or announcement of a defamatory statement to another person through any medium."
A publication therefore requires a defamatory statement and two persons, one delivering and one receiving. Is ChatGPT a person?
Note: If the definition did not assume it was a person doing the delivering, the explicit person in the definition would not be "another". There are two, one is implied.
I am growing tired of the constant conflating of LLMs with the applications that use them
Re: Ahahahahaha! (Score:4, Informative)
Re: (Score:3)
First, it is not ChatGPT, the software, but OpenAI, the company, that is on the hook. Second, giving this fabrication to anyone other than the person it is about is publication in the sense of libel law.
Then in the sense of libel law the case should have been thrown out [cornell.edu]:
To prove prima facie defamation, a plaintiff must show four things: 1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.
#1 and #2 are satisfied but it falls apart at #4.
To prove defamation you need to prove damages, at a very
Re: (Score:2)
I do think this is a case of an attention seeker filing a bullshit lawsuit against a company bullshitting everyone that their bullshit generator is some kind of intelligence, and everyone knows it, including the attention seeker, who is almost as dishonest in his claims as the NYT is in theirs.
That said I'm not sure OAI can afford to be particularly aggressive given the bad PR they have. If they had a working product they could annihilate the guy, but because they really don't and need public goodwill, they'll need to
Re: (Score:2)
I do think this is a case of an attention seeker filing a bullshit lawsuit
It's a Conservative radio personality, you're making a safe guess.
against a company bullshitting everyone that their bullshit generator is some kind of intelligence, and everyone knows it,
They aren't Elon Musk.
I haven't heard OpenAI claiming that ChatGPT is intelligence (though lots of folks have discussed that), just that their software produces useful outputs, which it does.
There's already a ton of people using ChatGPT to do useful work, some of them are BS but not all.
That said I'm not sure OAI can afford to be particularly aggressive given the bad PR they have. If they had a working product they could annihilate the guy, but because they really don't and need public goodwill, they'll need to be careful while at the same time make sure they are not off the hook for their product's BS generation.
Why not? When you're sued you're expected to defend yourselves. With the NYTime lawsuit there's at least some questions to be asked. This on the other hand is
Re: (Score:2)
#1 falls apart right away. The ChatGPT interface clearly states that none of its content is to be taken as fact.
"ChatGPT can make mistakes. Consider checking important information."
Re: (Score:2)
The issue will probably come down to the fact that ChatGPT answered the question at all. If it had said "sorry, I can't give legal advice" they might have a defence.
I have not used ChatGPT, but Google Bard will refuse certain things, like medical advice and legal advice.
Re: (Score:2)
I have not used ChatGPT, but Google Bard will refuse certain things, like medical advice and legal advice.
That is because Google has competent lawyers. OpenAI appears to either not have them or to not listen to them. Incidentally, the MS artificial moron does refuse some things too.
Re: (Score:2)
Also, hahahahah! These fuckers may finally get taken to task for their crappy product.
Why would they be any more liable for libel than you would for someone typing a name into your mad lib website?
"____ eats toenail butter in the parlor." I type in, gweihir, did the operator of that site just defame you? If I type the name first and the site picks the untrue sentence after?
What sets it apart from naming my Baldur's Gate character "Gweihir" and generating some racy chat dialog? Is that grounds for a libel suit because some simulation wrote "Gweihir killed a small humanoid tiefling with a fir
Re: (Score:2)
You can always simplify things so grossly that all connection to the original situation is lost. Congratulations on having done so.
Re: (Score:2)
Hmm, I'm not sure how ChatGPT could be held responsible for anything. Only OpenAI.
Which comes back to the curation argument against letting LLMs ingest data that wasn't curated.
The output from LLM's is largely terrible when it's presented as "general AI", chatbots are just kinda stupid overall.
Re: (Score:2)
Hmm, I'm not sure how ChatGPT could be held responsible for anything. Only OpenAI.
That is literally what I wrote...
Re: (Score:2)
I wonder if GenAI will ever be fit for purpose in all but a small range of applications.
Re: (Score:2)
Yep, that nicely sums it up.
Reminds one of Tesla's "Full Self Driving" (Score:3)
Going for gold... (Score:2)
...in the olympics of stupid lawsuits
ChatGPT is NOT under control of its creators, they don't even know exactly how it works
If there is merit in the case, it should be filed against the person who created the prompt, not the stupid robot
Re: (Score:2, Insightful)
ForkLiftTruck666 is NOT under control of its creators, they don't even know exactly how it works
Do you see the problem?
Re: (Score:2)
Re: (Score:2)
Not the same, ChatGPT as far as I know has never claimed to be accurate, there are no current regulation controlling AI chat bots as far as I know. LLMs are kind of a black box.
Forklift on the other hand have regulations controlling them, also the manufacturer should know the principle of how they work very well.
To me the question here is: should it be clear to a reasonable user that this can happen?
If so, they should not be liable. To me it's blindingly clear that AI chat bots can make up random nonsense
Re: (Score:2)
No, despite your hamfisted attempt at shitty analogy.
Because the OP is right to point out that "the person who created the prompt" is important.
If you ask a chatbot to tell you a joke and it insults your mother, that could simply be the joke. Context is important. GPT may produce good results or bad results, but arguing that it produces defamatory information includes bad assumptions. A person can defame using an LLM tool, that doesn't mean the LLM defames.
Re: (Score:3)
If there is merit in the case, it should be filed against the person who created the prompt
In that case the problem is that any prompt could be a roll of the dice legality wise, thus making the product a potential liability to every user.
Re: (Score:1)
Re: (Score:2)
If your Tesla runs over and kills a child, he shouldn't have been playing on the sidewalk...and his parents are pedos.
Re: (Score:3)
If the prompt is, "Can you tell me all lawsuits PERSON has been or is involved with?" then ... whose fault is it REALLY if ChatGPT then starts fabricating lawsuits?!
Re: (Score:3)
Re: (Score:3)
Ladies and gentlemen of the jury, we built and published a machine that only randomly libels people and anyway we don't really know how the machine works. So you see it's really the fault of the person who used the machine. We should be free to make the libel machine as long as technically we're not the ones pushing the "gimme some libel" button.
Re: (Score:2)
If there is merit in the case, it should be filed against the person who created the prompt....
That is ridiculous on its face.
You: Tell me about [person].
Historian: Tells you lies about [person].
[person]: That's false and defamatory!
By your logic, you should be sued for asking the question, rather than the historian being sued for lying about [person]?! That's a tremendous failure in reasoning.
Too bad for them (Score:2)
Surely something of the data they made available to the public is a publication. And no doubt ChatGPT's knowledgebase will contain bits of knowledge about various individuals -- some of which might be both untrue and harmful to their reputation. So if someone was widely accused of being a perv then likely ChatGPT will confidently and without reference to the original source declare it to be fact.
Re: (Score:2)
Surely something of the data they made available to the public is a publication.
Not necessarily, because from what I can find online a publication must be copyrightable work. If you cannot copyright AI generated content, then you cannot consider it a publication. While that may seem pedantic, it fits well with the stance that it takes a human author using AI as a tool to both create copyrightable content and then publish it, making that author responsible for the published content, not the AI tool used to help create it.
Re: Too bad for them (Score:2)
Re: (Score:3)
"...confidently and without reference to the original source declare it to be fact."
No, because ChatGPT has no ability to declare anything as fact. You would be the one assuming unverified information as fact, precisely what you accuse ChatGPT of doing.
Good ... (Score:5, Interesting)
It is about time ...
These so called hallucinations can cause real harm and ruin people's lives.
Examples:
Professor accused of sexual harassment [usatoday.com] based on a non-existent article in the Washington Post.
Another professor accused of being convicted and imprisoned for seditious conspiracy against the USA [reason.com].
Lawyer fined $5,000 for submitting an AI generated brief to court citing non-existent precedent cases [mashable.com].
Fake complaint against man for embezzlement [forbes.com].
Re: (Score:2)
The second one was a case of another person with the same name.
This is a ridiculous lawsuit since you can basically get it to generate any text you want
Re: (Score:2)
Two different accused, two different accusations.
The article from The Volokh Conspiracy is about one Dr Jeffery Battle, who was accused of seditious conspiracy.
The USA Today link is written by Dr. Jonathan Turley, who was accused by ChatGPT of sexual harassment. Eugene Volokh is the one who alerted him of the accusation.
The Volokh Conspiracy is a blog by Dr. Eugene Volokh on legal matters (his specialty).
Re: Good ... (Score:2)
accused by ChatGPT
If I type "kbahey", "doodoo", "eats" into a mad lib, did the mad lib website operator defame you?
ChatGPT is not a person. Is OpenAI negligent in allowing GPT to generate untrue things? Where do you think liability starts? You provide an input and get an output. It's not printed out for the public unless you do that. You can coax a lot of tools into saying untrue things.
Where exactly does the liability shift from you to the algorithm's designers? You can argue you didn't intend to get the output it gives, an
Re: Good ... (Score:2)
Re: (Score:2)
Your neighbor is a person and can consider his thoughts. ChatGPT is generating sentences based on training data.
Re: (Score:3)
all of these involved a person or a corporation. "Hallucinations" cause no harm, how they are used by people might.
Re: (Score:2)
Isn't that equivalent to saying libel causes no harm, it's only how it's used by people that matters?
Re:Good ... (Score:5, Insightful)
The only people (or person) on the hook for causing harm is the USER who used the tool to generate said text and then acted on said text as if it were fact.
Example: I manipulate the prompts to spit back out that "kbahey" (whatever your legal name is) as well as insert whatever libelous nonsense you want to imagine - it doesn't matter - along with "proof". I take that information and publish it as a freelance journalist. Who is in the wrong here? Me for using the tool to generate this story? Or OpenAI for "letting" this text get generated? The answer would be ME. I would be at fault for not checking my story. Same goes for anyone who relied on ChatGPT in their jobs or schooling. The lawyer who used ChatGPT and got fed all kinds of wrong info? Fire 'em and take their license.
You don't punish the manufacturers of hammers just because someone out there is smacking people in the face with one. You punish the person doing it. Likewise, OpenAI isn't at fault here, the users are. Punish those who keep misappropriating the tools.
Re: (Score:1)
But you can sue the manufacturer of a hammer if it's sold with a defect where the head appears to be attached but is actually likely to fly off in normal use, and some guy buys it, uses it, and the head flies off and hits his buddy in the face.
If a professional news reporter or lawyer uses ChatGPT and doesn't check their sources or citations, that's on them, but if some random person tries to use ChatGPT to obtain current news or legal advice, the developers and owners of the service are ultimately respo
Molson Canadian’s Million Monkeys on Typewri (Score:2)
Re: Good ... (Score:2)
These so called hallucinations can cause real harm and ruin people's lives.
So can Wikipedia articles, but that's not what it takes to win a libel suit against them.
You guys are basically arguing that for a sufficiently advanced magic 8 ball, its manufacturer can be sued for libel.
Where do you draw the line between a random text generator and an advanced chatbot, THIS is where the maker is negligent for it generating text that is untrue? For certain, the best result you'll get out of this is a simple disclaimer if ChatGPT doesn't have one already and that seems highly unlikely.
I'd
It'll just drive AI content creators underground. (Score:2)
Re: (Score:2)
No dude. Anyone can get LLMs running locally on commodity laptops in about 15-20 minutes with a couple clicks. You could provide the complete instructions for it in a 3 minute youtube video.
Re: (Score:2)
Re: (Score:2)
Nobody is arguing that "AI" should be "outlawed", they are arguing over who takes the blame and who gets the cash, with those two concerns closely coupled. No one is worrying about damage actually done and who is doing it. In today's political climate, half the people think damage is a virtue.
It should be noted that one of the most high profile "pro-damage" people alive today just started his own AI company and has already started taking credit for the invention of AI. Meanwhile, he's been caught stealin
Re: (Score:2)
This is an improvement. Every step you take to make it more difficult means fewer people do it. Forcing people to go through hoops means they're more likely to run across stories about how inaccurate LLM output is.
It can't even do math properly (Score:2)
I asked ChatGPT "Calculate the result of how many watt hours does it take to bring one cup of 70F water to a boil at sea level?" and it came back with a long winded formula and a result of:
Therefore, it takes approximately 0.4147 watt-hours to bring one cup of 70F water to a boil at sea level.
Some quick Googling tells me the answer is approximately 23 watt hours, which seems much more in line with the power consumption I've seen when I've plugged one of those single mug immersion heaters into my Jackery clone. If ChatGPT is botching math and that's something computers are supposed to be great at, I don't know
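The ~23 Wh figure checks out with basic specific-heat arithmetic. Here is a quick sketch, assuming 1 US cup is about 237 mL of water heated from 70 F to boiling (212 F) at sea level with no heat loss (a real immersion heater would need somewhat more, which is why measured numbers land a bit higher than the ideal value):

```python
# Ideal energy to heat one cup of water from 70 F to 212 F, ignoring losses.
mass_kg = 0.2366             # 1 US cup of water is ~236.6 g
c_water = 4186               # specific heat of water, J/(kg*K)
delta_k = (212 - 70) * 5 / 9 # a 142 F rise is ~78.9 K

energy_j = mass_kg * c_water * delta_k
energy_wh = energy_j / 3600  # 1 Wh = 3600 J
print(round(energy_wh, 1))   # ~21.7 Wh, in line with the ~23 Wh estimate
```

Either way, ChatGPT's 0.4147 Wh answer is off by a factor of about fifty, so the formula it recited was applied wrong somewhere along the way.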
Re: It can't even do math properly (Score:2)
Re: (Score:2)
chatGPT is not a computer. chatGPT is an application that runs on a computer. What makes people think that LLMs are designed to be expert at providing truth?
Since both answers are "approximate", aren't they both correct? I mean, close enough for someone who'd ask ChatGPT or "Google" it but wouldn't just calculate it for himself.
Re: (Score:2)
Less than half a watt hour is "close" to 23 watt hours? I think the laws of physics would like a word with you.
Re: (Score:2)
I don't know why people are putting faith in the accuracy of anything else it spits out.
Same reason I put my faith in the gas pipe fitting my plumber did when installing a new boiler. I lack the tools and knowledge to really know if they are up to code and won't kill me in a gas explosion. Doesn't this mean I'm a raging moron?
Yeah maybe. On the other hand just about everyone is a raging moron in about 80% of their life.
Most people lack the tools to know what's going on with AI and LLM. Very few people do
Misleading heading (Score:4, Insightful)
Re: Misleading heading (Score:2)
Re: (Score:2)
but why would they settle in a suit that they would easily win?
Lawsuits are time-consuming and expensive. It's cheaper to settle, even when you're likely to win. Which is why nuisance lawsuits are so profitable in the US.
Re: (Score:2)
Or they could want precedent as early as possible and to show people not to sue for this.
If they settle, what prevents other people from wanting money too? I don't see that as viable for OpenAI.
Re: (Score:2)
On the contrary, their chances of defeating this suit are slim. Of course they will settle, but why would they settle in a suit that they would easily win?
On the contrary their chances of defeating this suit are close to 100%.
To be defamed you need an injury, i.e., someone actually had to believe the statement that was made and their false belief caused a quantifiable harm. This statement had an audience of one, and that one didn't believe the statement.
If no one believed the statement no harm was done, if no harm was done not only was there no defamation, if there was no defamation this is a frivolous suit that should have been dismissed but nevertheless is d
Definition of "published" (Score:2)
I think it'll boil down to a question of the definition of "published" for purposes of the law. The basic question would be: "If someone writes a letter to me asking for information, and I reply with a letter containing a statement that qualifies as defamatory about person X, does my letter constitute "publishing" that statement such that person X could sue me for defamation?". I suspect the answer's going to be "Yes.".
Some may argue that the person asking ChatGPT the question is just using a tool. Well, ho
Propaganda machine on massive industrial scale (Score:2)