AI | The Courts

OpenAI Must Defend ChatGPT Fabrications After Failing To Defeat Libel Suit

An anonymous reader quotes a report from Ars Technica: OpenAI may finally have to answer for ChatGPT's "hallucinations" in court after a Georgia judge recently ruled against the tech company's motion to dismiss a radio host's defamation suit (PDF). OpenAI had argued that ChatGPT's output cannot be considered libel, partly because the chatbot output cannot be considered a "publication," which is a key element of a defamation claim. In its motion to dismiss, OpenAI also argued that Georgia radio host Mark Walters could not prove that the company acted with actual malice or that anyone believed the allegedly libelous statements were true or that he was harmed by the alleged publication.

It's too early to say whether Judge Tracie Cason found OpenAI's arguments persuasive. In her order denying OpenAI's motion to dismiss, which MediaPost shared here, Cason did not specify how she arrived at her decision, saying only that she had "carefully" considered arguments and applicable laws. There may be some clues as to how Cason reached her decision in a court filing (PDF) from John Monroe, attorney for Walters, when opposing the motion to dismiss last year. Monroe had argued that OpenAI improperly moved to dismiss the lawsuit by arguing facts that have yet to be proven in court. If OpenAI intended the court to rule on those arguments, Monroe suggested that a motion for summary judgment would have been the proper step at this stage in the proceedings, not a motion to dismiss.

Had OpenAI gone that route, though, Walters would have had an opportunity to present additional evidence. To survive a motion to dismiss, all Walters had to do was show that his complaint was reasonably supported by facts, Monroe argued. Failing to convince the court that Walters had no case, OpenAI's legal theories regarding its liability for ChatGPT's "hallucinations" will now likely face their first test in court. "We are pleased the court denied the motion to dismiss so that the parties will have an opportunity to explore, and obtain a decision on, the merits of the case," Monroe told Ars.
"Walters sued OpenAI after a journalist, Fred Riehl, warned him that in response to a query, ChatGPT had fabricated an entire lawsuit," notes Ars. "Generating an entire complaint with an erroneous case number, ChatGPT falsely claimed that Walters had been accused of defrauding and embezzling funds from the Second Amendment Foundation."

"With the lawsuit moving forward, curious chatbot users everywhere may finally get the answer to a question that has been unclear since ChatGPT quickly became the fastest-growing consumer application of all time after its launch in November 2022: Will ChatGPT's hallucinations be allowed to ruin lives?"

Comments:
  • Ahahahahaha! (Score:5, Insightful)

    by gweihir ( 88907 ) on Wednesday January 17, 2024 @06:04PM (#64168453)

    Also, hahahahah! These fuckers may finally get taken to task for their crappy product.

    • Re:Ahahahahaha! (Score:4, Interesting)

      by mesterha ( 110796 ) <chris@mesterharm.gmail@com> on Wednesday January 17, 2024 @06:33PM (#64168501) Homepage

      Given that Tucker can get away with it, I'm sure ChatGPT will skate free. https://www.npr.org/2020/09/29... [npr.org]

    • by ranton ( 36917 )

      It's very hard to see an outcome in which the results of a GPT query are considered a publication, so while this may be a setback for OpenAI, they still seem heavily favored to win this case. Someone who publishes the results of a GPT query will ultimately be on the hook for any libel lawsuits.

      • The legal definition of publication when it comes to libel is pretty broad. If OpenAI showed a 3rd party some bad info on a web site, that maybe could qualify. What a mess.
        https://www.legalmatch.com/law... [legalmatch.com]

        • by dfghjk ( 711126 )

          From your link:
          "A publication is the delivery or announcement of a defamatory statement to another person through any medium."

          A publication therefore requires a defamatory statement and two persons, one delivering and one receiving. Is ChatGPT a person?

          Note: If the definition did not assume it was a person doing the delivering, the explicit person in the definition would not be "another". There are two, one is implied.

          I am growing tired of the constant conflating of LLMs with the applications that use the

      • Re: Ahahahahaha! (Score:4, Informative)

        by gnasher719 ( 869701 ) on Wednesday January 17, 2024 @08:02PM (#64168711)
        First, it is not ChatGPT, the software, but OpenAI, the company, that is on the hook. Second, giving this fabrication to anyone other than the person it is about is publication in the sense of libel law.
        • First, it is not ChatGPT, the software, but OpenAI, the company, that is on the hook. Second, giving this fabrication to anyone other than the person it is about is publication in the sense of libel law.

          Then in the sense of libel law the case should have been thrown out [cornell.edu]:

          To prove prima facie defamation, a plaintiff must show four things: 1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

          #1 and #2 are satisfied but it falls apart at #4.

          To prove defamation you need to prove damages, at a very

          • I do think this is a case of an attention seeker filing a bullshit lawsuit against a company bullshitting everyone that their bullshit generator is some kind of intelligence, and everyone knows it, including the attention seeker, who is almost as dishonest in his claims as the NYT is in theirs.

            That said I'm not sure OAI can afford to be particularly aggressive given the bad PR they have. If they had a working product they could annihilate the guy, but because they really don't and need public goodwill, they'll need to

            • I do think this is a case of an attention seeker filing a bullshit lawsuit

              It's a Conservative radio personality; you're making a safe guess.

              against a company bullshitting everyone that their bullshit generator is some kind of intelligence, and everyone knows it,

              They aren't Elon Musk.

              I haven't heard OpenAI claiming that ChatGPT is intelligent (though lots of folks have discussed that), just that their software produces useful outputs, which it does.

              There are already a ton of people using ChatGPT to do useful work; some of it is BS, but not all.

              That said I'm not sure OAI can afford to be particularly aggressive given the bad PR they have. If they had a working product they could annihilate the guy, but because they really don't and need public goodwill, they'll need to be careful while at the same time making sure they are not off the hook for their product's BS generation.

              Why not? When you're sued you're expected to defend yourself. With the NYTimes lawsuit there are at least some questions to be asked. This on the other hand is

          • by rea1l1 ( 903073 )

            #1 falls apart right away. The ChatGPT interface clearly states that none of its content is to be taken as fact.

            "ChatGPT can make mistakes. Consider checking important information."

        • by AmiMoJo ( 196126 )

          The issue will probably come down to the fact that ChatGPT answered the question at all. If it had said "sorry, I can't give legal advice" they might have a defence.

          I have not used ChatGPT, but Google Bard will refuse certain things, like medical advice and legal advice.

          • by gweihir ( 88907 )

            I have not used ChatGPT, but Google Bard will refuse certain things, like medical advice and legal advice.

            That is because Google has competent lawyers. OpenAI appears to either not have them or to not listen to them. Incidentally, the MS artificial moron does refuse some things too.

    • Also, hahahahah! These fuckers may finally get taken to task for their crappy product.

      Why would they be any more liable for libel than you would be for someone typing a name into your mad lib website?
      "____ eats toenail butter in the parlor." If I type in "gweihir", did the operator of that site just defame you? What if I type the name first and the site picks the untrue sentence after?
      What sets it apart from naming my Baldur's Gate character "Gweihir" and generating some racy chat dialog? Is that grounds for a libel suit because some simulation wrote "Gweihir killed a small humanoid tiefling with a fir
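      To make the analogy concrete, here is a minimal sketch of the kind of mad-lib site described above; the template and names are invented for illustration, not taken from any real site:

          # Mad-lib sketch: the site supplies an untrue template sentence,
          # the visitor supplies the name. Who "published" the result?
          TEMPLATE = "{name} eats toenail butter in the parlor."

          def madlib(name: str) -> str:
              # The operator wrote the sentence; the user chose the target.
              return TEMPLATE.format(name=name)

          print(madlib("gweihir"))  # gweihir eats toenail butter in the parlor.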

      • by gweihir ( 88907 )

        You can always simplify things so grossly that all connection to the original situation is lost. Congratulations on having done so.

    • by Kisai ( 213879 )

      Hmm, I'm not sure how ChatGPT could be held responsible for anything. Only OpenAI.

      Which comes back to the curation argument against letting LLMs ingest data that wasn't curated.

      The output from LLMs is largely terrible when it's presented as "general AI"; chatbots are just kinda stupid overall.

      • by gweihir ( 88907 )

        Hmm, I'm not sure how ChatGPT could be held responsible for anything. Only OpenAI.

        That is literally what I wrote...

    • Sounds to me like an analogy of someone being physically injured by a poorly designed or misconfigured piece of machinery. Someone's (arguably) been hurt because of the lack of duty of care by the machine's manufacturer/operator. There's probably a better way to say this in legalese.

      I wonder if GenAI will ever be fit for purpose in all but a small range of applications.
  • by ffkom ( 3519199 ) on Wednesday January 17, 2024 @06:10PM (#64168473)
    Sure OpenAI can claim that ChatGPT is just some tool that was never guaranteed to "only tell the truth", and it does not publish anything on its own. Just like Tesla's "Full Self Driving Autopilot" is claimed to be only some tool that the driver(!) must never rely on. In both cases, though, the marketing and the actual use made of the tools clearly indicate a totally irresponsible level of reliance.
  • ...in the olympics of stupid lawsuits
    ChatGPT is NOT under control of its creators, they don't even know exactly how it works
    If there is merit in the case, it should be filed against the person who created the prompt, not the stupid robot

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      OK, let's change one word in your argument and check if it still holds:

      ForkLiftTruck666 is NOT under control of its creators, they don't even know exactly how it works

      Do you see the problem?

      • Sounds like a good plot for a disaster movie.
      • Not the same. ChatGPT, as far as I know, has never claimed to be accurate, and there are no current regulations controlling AI chatbots as far as I know. LLMs are kind of a black box.

        Forklifts, on the other hand, have regulations controlling them; also, the manufacturer should know the principles of how they work very well.

        To me the question here is: should it be clear to a reasonable user that this can happen?

        If so, they should not be liable. To me I think it's blindingly clear that AI chat bots can make up random nonse

      • by dfghjk ( 711126 )

        No, despite your hamfisted attempt at a shitty analogy.

        Because the OP is right to point out that "the person who created the prompt" is important.

        If you ask a chatbot to tell you a joke and it insults your mother, that could simply be the joke. Context is important. GPT may produce good results or bad results, but arguing that it produces defamatory information includes bad assumptions. A person can defame using an LLM tool, that doesn't mean the LLM defames.

    • If there is merit in the case, it should be filed against the person who created the prompt

      In that case the problem is that any prompt could be a roll of the dice, legality-wise, thus making the product a potential liability to every user.

    • If your dangerous monkeys escape from your control and kill the next-door neighbor, then YOU go to prison. Case resolved years ago. If your pet pit viper escapes from your control and bites/kills the next-door kid, then YOU go to prison. Case resolved years ago. If your pet robot escapes control and crunches the bones of the lady next door, then ... that's JapeChat for you.
      • by dfghjk ( 711126 )

        If your Tesla runs over and kills a child, he shouldn't have been playing on the sidewalk...and his parents are pedos.

    • by Calydor ( 739835 )

      If the prompt is, "Can you tell me all lawsuits PERSON has been or is involved with?" then ... whose fault is it REALLY if ChatGPT then starts fabricating lawsuits?!

    • "ChatGPT is NOT under control of its creators, they don't even know exactly how it works" So what you are saying is, the clock is ticking down to zero on Humanity.
    • Ladies and gentlemen of the jury, we built and published a machine that only randomly libels people and anyway we don't really know how the machine works. So you see it's really the fault of the person who used the machine. We should be free to make the libel machine as long as technically we're not the ones pushing the "gimme some libel" button.

    • If there is merit in the case, it should be filed against the person who created the prompt....

      That is ridiculous on its face.

      You: Tell me about [person].
      Historian: Tells you lies about [person].
      [person]: That's false and defamatory!

      By your logic, you should be sued for asking the question, rather than the historian being sued for lying about [person]?! That's a tremendous failure in reasoning.

  • Surely something of the data they made available to the public is a publication. And no doubt ChatGPT's knowledge base will contain bits of knowledge about various individuals -- some of which might be both untrue and harmful to their reputation. So if someone was widely accused of being a perv, then likely ChatGPT will confidently and without reference to the original source declare it to be fact.

    • by ranton ( 36917 )

      Surely something of the data they made available to the public is a publication.

      Not necessarily, because from what I can find online a publication must be a copyrightable work. If you cannot copyright AI-generated content, then you cannot consider it a publication. While that may seem pedantic, it fits well with the stance that it takes a human author using AI as a tool to both create copyrightable content and then publish it, making that author responsible for the published content, not the AI tool used to help create it.

      • You are confused. In copyright law, publication means one thing. In libel law, it means something else. Sending a libellous statement to a single person other than the one who is libelled is a publication.
    • by dfghjk ( 711126 )

      "...confidently and without reference to the origial source declare it to be fact."

      No, because ChatGPT has no ability to declare anything as fact. You would be the one assuming unverified information as fact, precisely what you accuse ChatGPT of doing.

  • Good ... (Score:5, Interesting)

    by kbahey ( 102895 ) on Wednesday January 17, 2024 @06:46PM (#64168549) Homepage

    It is about time ...

    These so-called hallucinations can cause real harm and ruin people's lives.

    Examples:

    Professor accused of sexual harassment [usatoday.com] based on a non-existent article in the Washington Post.

    Another professor accused of being convicted and imprisoned for seditious conspiracy against the USA [reason.com].

    Lawyer fined $5,000 for submitting an AI-generated brief to court quoting non-existent precedent cases [mashable.com].

    Fake complaint against man for embezzlement [forbes.com].

    • by Ksevio ( 865461 )

      The second one was a case of another person with the same name.

      This is a ridiculous lawsuit since you can basically get it to generate any text you want

      • by kbahey ( 102895 )

        Two different accused, two different accusations.

        The article from The Volokh Conspiracy is about one Dr Jeffery Battle, who was accused of seditious conspiracy.

        The USA Today link is written by Dr. Jonathan Turley, who was accused by ChatGPT of sexual harassment. Eugene Volokh is the one who alerted him of the accusation.

        The Volokh Conspiracy is a blog by Dr. Eugene Volokh on legal matters (his specialty).

        • accused by ChatGPT

          If I type "kbahey", "doodoo", "eats" into a mad lib, did the mad lib website operator defame you?

          ChatGPT is not a person. Is OpenAI negligent in allowing GPT to generate untrue things? Where do you think liability starts? You provide an input and get an output. It's not printed out for the public unless you do that. You can coax a lot of tools into saying untrue things.

          Where exactly does the liability shift from you to the algorithm's designers? You can argue you didn't intend to get the output it gives, an

      • My neighbour hates everyone. If you tell him "I don't like X" he will tell you "did you know X was in jail for five years for a robbery?" My neighbour is as stupid as ChatGPT, and he has just committed libel.
        • by Ksevio ( 865461 )

          Your neighbor is a person and can consider his thoughts. ChatGPT is generating sentences based on training data.

    • by dfghjk ( 711126 )

      All of these involved a person or a corporation. "Hallucinations" cause no harm; how they are used by people might.

    • Re:Good ... (Score:5, Insightful)

      by Bahbus ( 1180627 ) on Wednesday January 17, 2024 @10:26PM (#64168937) Homepage

      The only people (or person) on the hook for causing harm is the USER who used the tool to generate said text and then acted on said text as if it were fact.

      Example: I manipulate the prompts to spit back out that "kbahey" (whatever your legal name is) did whatever libelous nonsense you want to imagine - it doesn't matter - along with "proof". I take that information and publish it as a freelance journalist. Who is in the wrong here? Me for using the tool to generate this story? Or OpenAI for "letting" this text get generated? The answer would be ME. I would be at fault for not checking my story. Same goes for anyone who relied on ChatGPT in their jobs or schooling. The lawyer who used ChatGPT and got fed all kinds of wrong info? Fire 'em and take their license.

      You don't punish the manufacturers of hammers just because someone out there is smacking people in the face with one. You punish the person doing it. Likewise, OpenAI isn't at fault here, the users are. Punish those who keep misappropriating the tools.

      • But you can sue the manufacturer of a hammer if it's sold with a defect where the head appears to be attached but is actually likely to fly off in normal use, and some guy buys it, uses it, and the head flies off and hits his buddy in the face.

        If a professional news reporter or lawyer uses ChatGPT and doesn't check their sources or citations, that's on them, but if some random person tries to use ChatGPT to obtain current news or legal advice, the developers and owners of the service are ultimately respo

    • Can the Molson Monkeys hacking away in a Quebec church eventually produce an entire lawsuit? Theoretically yes, but it would still be random text and who are you gonna sue? Meantime, I’ll drink to that - Cheers!
    • These so called hallucinations can cause real harm and ruin people's lives.

      So can Wikipedia articles, but that's not what it takes to win a libel suit against them.

      You guys are basically arguing that for a sufficiently advanced magic 8 ball, its manufacturer can be sued for libel.

      Where do you draw the line between a random text generator and an advanced chatbot and say THIS is where the maker is negligent for it generating text that is untrue? For certain, the best result you'll get out of this is a simple disclaimer, if ChatGPT doesn't have one already, and that seems highly unlikely.

      I'd

  • Right now you'd need some fair technical skill to set up your own LLM or similar and front-end it. However, the hardware (vector processing units on GPU-style PCIe cards) required is getting smaller, cheaper, and more effective. But if we make the content generated by AI something folks get litigious over (and those folks have a winning streak), then I imagine that drives access to the tech to only be available to folks operating these systems for their own purposes in a very secretive manner. There
    • by CAIMLAS ( 41445 )

      No dude. Anyone can get LLMs running locally on commodity laptops in about 15-20 minutes with a couple of clicks. You could provide the complete instructions for it in a 3-minute YouTube video.
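      For context, a minimal sketch of what "running an LLM locally" can look like, using the Hugging Face transformers library and the small distilgpt2 model purely as illustrative stand-ins (the comment above doesn't name any specific tooling):

          # Local text generation: downloads a small model once, then runs
          # entirely on this machine (CPU is fine at this size).
          # Requires: pip install transformers torch
          from transformers import pipeline

          generator = pipeline("text-generation", model="distilgpt2")

          result = generator("Large language models are", max_new_tokens=40, do_sample=True)

          # Sampled text, not verified fact -- nothing here checks truthfulness,
          # which is the point of the whole thread.
          print(result[0]["generated_text"])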

      • Okay, sure, but my point is more that "if AI is outlawed, only outlaws will have AI," not that it's terribly difficult to get operational.
        • by dfghjk ( 711126 )

          Nobody is arguing that "AI" should be "outlawed", they are arguing over who takes the blame and who gets the cash, with those two concerns closely coupled. No one is worrying about damage actually done and who is doing it. In today's political climate, half the people think damage is a virtue.

          It should be noted that one of the most high profile "pro-damage" people alive today just started his own AI company and has already started taking credit for the invention of AI. Meanwhile, he's been caught stealin

    • This is an improvement. Every step you take to make it more difficult means fewer people do it. Forcing people to go through hoops means they're more likely to run across stories about how inaccurate LLM output is.

  • I asked ChatGPT "Calculate the result of how many watt hours does it take to bring one cup of 70F water to a boil at sea level?" and it came back with a long winded formula and a result of:

    Therefore, it takes approximately 0.4147 watt-hours to bring one cup of 70F water to a boil at sea level.

    Some quick Googling tells me the answer is approximately 23 watt hours, which seems much more in line with the power consumption I've seen when I've plugged one of those single mug immersion heaters into my Jackery clone. If ChatGPT is botching math and that's something computers are supposed to be great at, I don't know

    • One kilowatt for 3.6 seconds is one watt hour. With 0.4 watt hours your 1000 watt kettle will run for 1.44 seconds. Yes, it can't do math properly. It can't even do it improperly.
    • by dfghjk ( 711126 )

      chatGPT is not a computer. chatGPT is an application that runs on a computer. What makes people think that LLMs are designed to be expert at providing truth?

      Since both answers are "approximate", aren't they both correct? I mean, close enough for someone who'd ask ChatGPT or "Google" it but wouldn't just calculate it for himself.

      • Less than half a watt hour is "close" to 23 watt hours? I think the laws of physics would like a word with you.
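        For anyone who wants to check the figure, the ~23 Wh estimate is easy to verify from first principles. A quick sketch, assuming one US cup (about 236.6 g) of water and ignoring heater losses:

            # Energy to heat one cup of water from 70°F to boiling at sea level.
            mass_g = 236.6                     # one US cup of water, in grams (assumed)
            c_water = 4.186                    # specific heat of water, J/(g*K)
            t_start_c = (70.0 - 32.0) * 5 / 9  # 70°F is about 21.1°C
            t_boil_c = 100.0                   # boiling point at sea level

            energy_j = mass_g * c_water * (t_boil_c - t_start_c)
            energy_wh = energy_j / 3600.0      # 1 Wh = 3600 J

            print(f"{energy_wh:.1f} Wh")       # ~21.7 Wh: near 23 Wh, nowhere near 0.41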

    • I don't know why people are putting faith in the accuracy of anything else it spits out.

      Same reason I put my faith in the gas pipe fitting my plumber did when installing a new boiler. I lack the tools and knowledge to really know if they are up to code and won't kill me in a gas explosion. Doesn't this mean I'm a raging moron?

      Yeah maybe. On the other hand just about everyone is a raging moron in about 80% of their life.

      Most people lack the tools to know what's going on with AI and LLM. Very few people do

  • Misleading heading (Score:4, Insightful)

    by mkwan ( 2589113 ) on Wednesday January 17, 2024 @07:36PM (#64168669)
    OpenAI failed to get the suit dismissed. They haven't failed to defeat the suit. In fact they'll probably defeat the suit easily - although they'll more likely settle.
    • On the contrary, their chances of defeating this suit are slim. Of course they will settle, but why would they settle in a suit that they would easily win?
      • by mkwan ( 2589113 )

        but why would they settle in a suit that they would easily win?

        Lawsuits are time-consuming and expensive. It's cheaper to settle, even when you're likely to win. Which is why nuisance lawsuits are so profitable in the US.

        • by jvkjvk ( 102057 )

          Or they could want precedent as early as possible and to show people not to sue for this.

          If they settle, what prevents other people from wanting money too? I don't see that as viable for OpenAI.

      • On the contrary, their chances of defeating this suit are slim. Of course they will settle, but why would they settle in a suit that they would easily win?

        On the contrary, their chances of defeating this suit are close to 100%.

        To be defamed you need an injury, i.e., someone actually had to believe the statement that was made, and their false belief caused a quantifiable harm. This statement had an audience of one, and that one didn't believe the statement.

        If no one believed the statement, no harm was done; if no harm was done, there was no defamation; and if there was no defamation, this is a frivolous suit that should have been dismissed but nevertheless is d

  • I think it'll boil down to a question of the definition of "published" for purposes of the law. The basic question would be: "If someone writes a letter to me asking for information, and I reply with a letter containing a statement that qualifies as defamatory about person X, does my letter constitute "publishing" that statement such that person X could sue me for defamation?". I suspect the answer's going to be "Yes.".

    Some may argue that the person asking ChatGPT the question is just using a tool. Well, ho

  • If ChatGPT is allowed to make up whatever BS without any controls, who is to stop it from flooding the internet with made-up stuff to manipulate people in elections, or for other nefarious purposes? We worry about propaganda by foreign state actors; an LLM is like the propaganda machine of an ultra-well-funded nation-state, but on a personal budget. You can use it to flood the internet with targeted BS faster than any previous propaganda machine.
