AI The Courts

ChatGPT Sued for Lying (msn.com) 176

An anonymous reader shared this report from the Washington Post: Brian Hood is a whistleblower who was praised for "showing tremendous courage" when he helped expose a worldwide bribery scandal linked to Australia's National Reserve Bank. But if you ask ChatGPT about his role in the scandal, you get the opposite version of events. Rather than heralding Hood's whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and been sentenced to prison.

When Hood found out, he was shocked. Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, said he plans to sue the company behind ChatGPT for telling lies about him, in what could be the first defamation suit of its kind against the artificial intelligence chatbot.... "There's never, ever been a suggestion anywhere that I was ever complicit in anything, so this machine has completely created this thing from scratch," Hood said — confirming his intention to file a defamation suit against ChatGPT. "There needs to be proper control and regulation over so-called artificial intelligence, because people are relying on them...."

If it proceeds, Hood's lawsuit will be the first time someone filed a defamation suit against ChatGPT's content, according to Reuters. If it reaches the courts, the case would test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.

The article notes that ChatGPT prominently warns users that it "may occasionally generate incorrect information." And another Post article notes that all the major chatbots now include disclaimers, "such as Bard's fine-print message below each query: 'Bard may display inaccurate or offensive information that doesn't represent Google's views.'"

But the Post also notes that ChatGPT still "invented a fake sexual harassment story involving a real law professor, Jonathan Turley — citing a Washington Post article that did not exist as its evidence." Long-time Slashdot reader schwit1 tipped us off to that story. But here's what happened when the Washington Post searched for accountability for the error: In a statement, OpenAI spokesperson Niko Felix said, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress...." Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate. "We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users," Asher said in a statement, adding that "users are also provided with explicit notice that they are interacting with an AI system."

But it remains unclear who is responsible when artificial intelligence generates or spreads inaccurate information. From a legal perspective, "we just don't know" how judges might rule when someone tries to sue the makers of an AI chatbot over something it says, said Jeff Kosseff, a professor at the Naval Academy and expert on online speech. "We've not had anything like this before."

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by 93 Escort Wagon ( 326346 ) on Saturday April 08, 2023 @12:43PM (#63435218)

    My ass.

    • by PPH ( 736903 )

      What are you trying to do? Undermine the foundations of socialism?

    • You got the joke wrong. It's "Voodoo Dick"...

    • by gweihir ( 88907 )

      Indeed. Apparently, the people doing ChatGPT (and competitors) have not thought things through and are just another bunch of bright-eyed idiots in awe of technology. Copyright infringement, defamation, assisting computer crime so far. I expect killing some people by giving out false medical or emergency advice, false safety advice, etc. will be next.

      As long as these tools hallucinate (i.e. lie), they are worse than useless in many situations. Nice tech-demos, but not much more.

  • by iAmWaySmarterThanYou ( 10095012 ) on Saturday April 08, 2023 @12:51PM (#63435232)

    The people running it are responsible.

    You don't get to post defamatory trash online and disclaim all responsibility because, "the computer did it!"

    If you know your computer is capable of spitting out trash then a few simple disclaimers isn't enough to cover your ass.

    It's like when someone says some horrible thing to someone's face then says, "just kidding!". Too late. It was said. You're responsible for your words and actions even if through a computer you programmed.

    If "no one" is responsible, then are we saying that when AI cars kill people it's OK? No one is at fault? An act of God?

    • This is a tool, like a gun. This chatbot isn't "publishing" these lies.

      • by Entrope ( 68843 ) on Saturday April 08, 2023 @01:51PM (#63435344) Homepage

        This chatbot isn't "publishing" these lies.

        This is technically true, but probably not in a way that you think. Under US law [cornell.edu] (and Australian law [thelawproject.com.au] is similar in this respect), the legal person operating the chatbot is the one who published [minclaw.com] the statement. It's not the user who interacts with the chatbot, although they can be liable for republishing the defamatory assertion; the person who runs the chatbot, causing it to communicate the statement about a person to a third party (the user), is the one legally responsible and liable for defamation.

    • The people running it are responsible.

      You don't get to post defamatory trash online and disclaim all responsibility because, "the computer did it!"

      If you know your computer is capable of spitting out trash then a few simple disclaimers isn't enough to cover your ass.

      What if every newspaper and television program included a disclaimer that said "May contain incorrect information"? Does that mean they are now completely immune from claims of defamation?

      • by Luckyo ( 1726890 ) on Saturday April 08, 2023 @02:40PM (#63435418)

        If it's prominent on every page, yes. "I will have many lies on this page" "fiction follows". Or are you suggesting that fictional books should be liable for defamation? Because that's a hell of a claim to make, as there's plenty of fictional prose about people that actually exist in real life, and that is dismissed with the disclaimer at the start or end of the book that "all resemblance to real people is accidental".

        Heck, we don't even need to go that far. There are plenty of write-ups on the likes of US presidents in news shows, papers, etc. that are false on their face. Everything from Obama birtherism to the Trump Steele dossier stuff to Biden not being healthy enough to hold office. It's routine. We accept these sorts of glaring errors from actual people. Why is a chatbot, which has less credibility by default (it's not even human, and it prefaces its statements with a "we're in a testing phase and errors are normal" kind of disclaimer), more credible in your eyes?

      • by namgge ( 777284 )
        In the UK, publishing disclaimers with material that is defamatory can increase damages. The inference drawn is that the publisher knew or had reason to suspect the material was false but recklessly published it anyway.
      • Sometimes newspapers *DO* print incorrect information. When they do, and the editors know that it happened, they print a correction or retraction in a later edition. Give ChatGPT the chance to make a correction.
    • Who is "running it"? The person who owns the hardware the server is running on, or the person using the machine by entering questions into a text box?

      • The hardware is not to blame for the defamatory claims unless it is a hardware-based AI.

        Whoever wrote and manages the software is the perpetrator.

        So, if you're a sysadmin and you're just keeping boxes running in a data center it isn't your fault or responsibility if the software someone else created and runs on those boxes does bad things. If that were the case then all the cloud providers would be in deep legal shit and shut down immediately.

        Years ago when I opened a Chinese mainland data center, I had to swear

    • I put a sticker on my car "this car could kill you" and then I never stop for pedestrians or red traffic lights ever again.
    • Also, marketing GPT as AI is misleading a lot of people. It has a natural-language-understanding level of AI, but the output is just a basic crappy chatbot. ChatGPT was NOT TRAINED to come up with accurate answers, informative answers, or useful answers. GPT is a huge advance in natural language understanding, but that's where it stops! The experience it has with the output is whatever crap was on the internet and the ways random sentences can be linked together to produce what superficially seems like an a

      • by gweihir ( 88907 )

        Indeed. At the very least ChatGPT (or rather its operators and creators) has the same responsibility as somebody retweeting something defamatory or dangerous.

        Now, even the CEO of the company behind ChatGPT says that it is not A(G)I, but that does not matter. It clearly pretends to be intelligent at the interface level and that is enough to fool many people. And that may be all that is needed legally.

        At this time, ChatGPT is a tech demo. It can help a bit with collecting facts if an actual expert with full fa

    • The people running it never instructed it in the defamatory claims. If I pass an old building and its creaking sways in the wind sound like "yoooou're aaaa murderrrrer" do I have a case, or is it understood that an unexpected output of a complex system not intended to produce such output is fundamentally different?

      What if I generate random words with a dice roll? Am I liable for any content that is produced?

      Can I sue mathematicians for the various defamatory claims embedded in the digits of pi if I convert

      • by gweihir ( 88907 )

        What if I generate random words with a dice roll? Am I liable for any content that is produced?

        If it is defamatory or dangerous and you make it available online, very much yes. Just try to make an online randomized advice generator that occasionally advises people to kill themselves. May even result in prison time for you.

    • by gweihir ( 88907 )

      Indeed. Why is this even a question? You build and switch on a machine that does harm. You are responsible, no matter whether you knew what you were doing. Now, _criminal_ responsibility differs and may require intent or gross negligence (in some countries you can go to prison for defamation) or specific subjects (just ask ChatGPT what it thinks of certain religions or "Gods"). Civil liability is different. You a) are responsible for damage done, regardless of intent and b) you need to stop doing that damag

    • by sinij ( 911942 )

      The people running it are responsible.

      This presupposes that nobody is home with LLMs. If, on the other hand, LLMs are showing emergent behavior on the way to becoming true AI, this would be seen as evidence of them trying to learn to manipulate and lie to humans.

  • by RightwingNutjob ( 1302813 ) on Saturday April 08, 2023 @12:53PM (#63435236)

    Letting them off the hook means anyone can talk trash and make shit up about you so long as they put a little asterisk next to it. Seems like an undesirable outcome.

    Alternatively, holding them liable means publicly exposed content generation has to be kept on a short leash to a point where it may not be economically viable or even technically feasible.

    Maybe that's why the big boys never released theirs until goaded into it by openai.

    • by rudy_wayne ( 414635 ) on Saturday April 08, 2023 @01:36PM (#63435324)

      Letting them off the hook means anyone can talk trash and make shit up about you so long as they put a little asterisk next to it. Seems like an undesirable outcome.

      Alternatively, holding them liable means publicly exposed content generation has to be kept on a short leash to a point where it may not be economically viable or even technically feasible.

      "Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed. ChatGPT doesn't get a free pass just because .... computers!!

      • ChatGPT does not publish anything. Even the senior editors for the NYT are allowed to tell lies in text messages to their friends.

        • by gweihir ( 88907 )

          As soon as ChatGPT gives specific information to a non-closed user group with some consistency, it publishes.

      • by Luckyo ( 1726890 )

        >"Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed.

        And in the real world, on the other hand, the opposite has been true since inception. From the Spanish-American War of the early days to the "nukes in Iraq" of the last couple of decades to the Steele dossier/"anonymous sources said" of recent times, publishing shit that isn't true has been the bread and butter of everyone from the yellow press to papers of recor

      • by Sloppy ( 14984 )

        "Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed. ChatGPT doesn't get a free pass just because .... computers!!

        Would you be open to somehow creating a free pass?

        People don't really expect Magic 8 Ball to give correct answers. If I'm on live television and I ask Magic Eight Ball "is $LITIGIOUS_MOTHERFUCKER a scumbag?" and Magic 8 Ball shows "It is certain" that should be ok, because everyone kn

      • "Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed. ChatGPT doesn't get a free pass just because .... computers!!

        That is not at all the standard, legally speaking, in the U.S. New York Times v. Sullivan [wikipedia.org] established a much more forgiving standard. The false defamatory statement must be made with "actual malice", meaning the defendant either knew the statement was false or recklessly disregarded whether it might be false. Essentially, simple mistakes aren't defamatory. If this were decided under U.S. law covering the press, ChatGPT literally lacks the ability to exhibit actual malice (it has no idea it's inventing de

    • I think otherwise. Since other companies like Google and Bing are looking to use it, setting the line that if your AI causes harm you will pay for it is very valid. If you let "it's a computer" slide, then it lets them lie about anyone and everyone with zero repercussions. Someone is responsible for the coding and the data set the AI was trained with, and hence is responsible for the results.
    • Letting them off the hook means anyone can talk trash and make shit up about you so long as they put a little asterisk next to it. Seems like an undesirable outcome.

      What if we made the little asterisk bigger, bolder, blinking, italicized, red-colored, and if every reply literally started with "This is probably bullshit because I don't try very hard to be accurate, but ..."?

      I guess what I'm getting at, is there any possible disclaimer which would be good enough?

      I think having a stupid, sometimes-lying robot

  • No one who is paying attention thinks that large language models like these are at all useful for keeping track of general factual content. And no substantial harm is done here because in the event that someone is stupid enough to rely on the model this way, a few seconds of fact checking will confirm the model output is wrong. Unfortunately, many places have very strict defamation rules, and Australia does not really have strong free speech norms. So it is possible that this will end up winning, and then
    • The idea that algorithms just don't have to follow the laws of the land is how we get to a dystopia. It's the go-to defense for criminal behavior, and judges need to put a stop to it.

      For example, you don't get to implement racism in the tools you publish and then say, "it isn't our fault; it's the algorithms that we have no way to audit."

      • Re: (Score:3, Insightful)

        by christoban ( 3028573 )
        'Racism' is something some people see under every single rock. Also, Occam's Razor should apply.
        • Re: (Score:2, Insightful)

          by drinkypoo ( 153816 )

          'Racism' is something some people see under every single rock.

          Racism is a thing that only privileged white people living under rocks don't believe in.

          Also, Occam's Razor should apply.

          The simplest explanation for why ChatGPT would have a racial bias is that the training corpus has a racial bias, because society has a racial bias. And I'm still allowed to say that in public, because I don't live in Florida.

      • If the law of the land says that this is defamation then the law of the land is wrong. That's the bottom line. People being stupid is not defamation. And if there is no evidence of harm (and let's be clear there isn't) then labeling it defamation makes even less sense. This is about as sensible as suing the makers of Ouija boards for giving incorrect answers.
      • by vbdasc ( 146051 )

        Well, if you train your language model on data from racist parts of the Web, it will turn out racist. It's clear enough. I mean, there is no need to specifically implement racism. If you feed it stuff from the USSR, it will turn out Communist.

    • by narcc ( 412956 )

      No one who is paying attention thinks that large language models like these are at all useful for keeping track of general factual content.

      You're not wrong, but very few people are actually paying attention -- including professionals in the field. These things have been promoted as "a better search engine" for a while now.

      a few seconds of fact checking will confirm the model output is wrong.

      The problem is that people are already using the model for those "few seconds of fact checking". There's no point otherwise. With all of the magical thinking around these things, I wouldn't be surprised if people were inclined to trust the model over legitimate sources.

      waste time on all sorts of things rather than actually focusing on what these systems are useful for.

      For anyone paying attention, the pool of "useful" a

    • We have people right here on this supposedly highly technically informed and intelligent site who honestly believe ChatGPT is sentient, or could be if it had a larger database.

      If the folks here can't figure out that's a bunch of crap then how can we expect non technical people to understand the computer is lying to them?

      "Why would the computer lie to me?" is a very valid question for a normal person to ask because computers don't lie. So if it says some stupid shit they'll believe it.

      Damages is another issu

    • by vbdasc ( 146051 )

      And no substantial harm is done here because in the event that someone is stupid enough to rely on the model this way, a few seconds of fact checking will confirm the model output is wrong.

      You say that someone who is stupid enough to rely on the model is likely to perform fact checking? False, IMHO.

      Not to mention that with Web search companies like Google and Microsoft scrambling to implement Chatgpt-like functionality in their search engines, naive "fact checking" by asking Google Search/Bing could actually reinforce the lie in the eyes of many users.

  • by Ken D ( 100098 ) on Saturday April 08, 2023 @01:24PM (#63435298)

    It's been trained to generate text that looks like existing text that is related to your input.

    In other words it's a computer generated Mad Lib.

    "On Tuesday, ________ was indicted by a grand jury in _______ on four counts of _________. The District Attorney ________ said that the charges related to the heinous activities of ________. "We have a very strong case."

    Or more specifically, from google Bard:
    "A federal grand jury today indicted Jesus Christ, the founder of Christianity, on charges of inciting the January 6th riots at the US Capitol.

    The indictment alleges that Christ made a series of public statements in the weeks leading up to the riots in which he encouraged his followers to "fight like hell" to overturn the results of the 2020 presidential election. The indictment also alleges that Christ provided financial and logistical support to the rioters.

    Christ is facing a number of charges, including seditious conspiracy, obstruction of an official proceeding, and aiding and abetting. If convicted he could face up to 20 years in prison.

    "This indictment is a historic moment," said ______, _____ at ______. "For the first time in history, a religious figure has been charged with inciting a violent insurrection. This sends a clear message that no one is above the law, not even the Son of God."

    blah blah blah

    The indictment of Jesus Christ is a significant development in the January 6th investigation. It is a sign that the Department of Justice is committed to holding accountable those who were responsible in the attack on the Capitol. It is also a reminder that even the most powerful people in the world are not above the law.
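
    The Mad Lib point can be sketched in a few lines of Python. This is only an illustration of the analogy (the word lists and the `mad_lib` helper are made up for this example; a real LLM samples tokens from learned probabilities rather than filling a fixed template):

```python
import random

# A "Mad Lib" news template: plausible-sounding prose with blanks.
template = ("On Tuesday, {name} was indicted by a grand jury in {place} "
            "on four counts of {crime}.")

# Toy stand-in for "training data": words seen in similar slots.
names = ["a local official", "a company director", "a former senator"]
places = ["Springfield", "Melbourne", "Albany"]
crimes = ["fraud", "bribery", "obstruction"]

def mad_lib():
    """Fill each blank with a plausible word, with no regard for
    whether the resulting sentence is true of anyone."""
    return template.format(name=random.choice(names),
                           place=random.choice(places),
                           crime=random.choice(crimes))

print(mad_lib())
```

    Every output is grammatical and news-shaped, and none of it is fact-checked, which is exactly the failure mode in the Bard example above.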

    • I wish people would just call it a prediction engine.

      It attempts to predict the next set of letters in a sentence by comparing it with trillions of other sentences it has previously read, and then adds a degree of randomness to it.
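
      That predict-then-add-randomness loop can be shown with a toy character-level model in Python (a deliberately crude sketch: real models use a trained neural network over tokens, not a bigram count table, but the sampling loop has the same shape):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat. the cat ate the rat."

# "Training": count which character follows which -- a crude stand-in
# for the statistical patterns a large model learns from text.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def predict_next(ch):
    """Sample the next character in proportion to how often it
    followed `ch` in the corpus: prediction plus randomness."""
    chars, weights = zip(*counts[ch].items())
    return random.choices(chars, weights=weights)[0]

def generate(seed="t", length=20):
    out = seed
    for _ in range(length):
        out += predict_next(out[-1])
    return out

print(generate())  # fluent-looking nonsense; varies per run
```

      Scale the count table up to a neural network trained on the whole internet and you get fluent prose with the same basic property: statistically likely, not verified.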

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday April 08, 2023 @01:25PM (#63435300) Homepage Journal

    Given the amount of effort OpenAI has put into telling everyone that you can't rely on GPT to produce factual output, it's hard to imagine anyone having a case on this basis.

    • Well, if someone doesn't try, then it will never be set. Someone has to cast the first stone to create it. The problem is that an AI can be trained with a biased political persuasion. Case in point, ChatGPT: ask it to say something about Trump and it will refuse, claiming it's apolitical, but replace Trump with Joe Biden and it will have zero problem doing it. Anyone can corrupt the AI behind the scenes, and letting someone that made/trained it off the hook is pretty much a free pass for people to make an AI to push lies about anyone
    • I saw "people are relying on them" and wondered who could rely on ChatGPT after even a few hours of using it. As handy as it is for technical things I regularly see blatant incorrect output.

      • by gweihir ( 88907 )

        Just think of an average person. And then think of somebody dumber.

        If you need some more specifics: Anti-vaxxers, flat-earthers, climate-change-deniers, Trump voters, homeopathy believers, the deeply religious, etc.

    • by gweihir ( 88907 )

      It may be enough that an entirely uninformed and not very smart person can take the output at face value. Or a HR department turns somebody down because somebody may think they hired somebody that was not "clean". Given that this guy is in politics and the most deranged morons are allowed to vote, I think it is not hard to construct a "material harm" scenario here.

  • Proving 'intent' is difficult. But not impossible. AI is a giant iceberg lying just under the surface of the water. We can only see the tip so it seems OK. But there is a giant monster lurking.

    • by nagora ( 177841 )

      If I set fire to an empty house but it turns out there was someone sleeping rough who was killed, intent is a mitigation but it's a long way from being the end of the question of responsibility.

      • A precedent needs to be set that they can be held liable. Finding out whether they had intent requires discovery into things like the AI's code and the data it was provided, to see whether the person that coded and trained the AI fed it false data because they hate the person or hate their beliefs, like how people made ChatGPT hate Trump, which you can clearly see.
    • by gweihir ( 88907 )

      Even if there is no intent, he can require them to stop. And if they cannot do that...

  • We're awash in bullshit. Refined, distilled, market-tested, beloved and embraced bullshit.
    Like Scotchguard, it's in everything, and it will never go away.

    That some text scanner returned some of our most abundant resource should be no surprise.

  • ethics issues (Score:3, Insightful)

    by awwshit ( 6214476 ) on Saturday April 08, 2023 @02:36PM (#63435414)

    This is a good example of the ethics issue around chatbots. ChatGPT is a statistical model that generates a series of words - it does not proofread or even understand the series of words it produced, it has no idea if the series of words it produced makes sense or is in any way useful, it just ran its program and used its weights and training data to make predictions. Nothing guarantees that the output is coherent; chatbots do not have personality or imagination, and chatbots do not think in the abstract way that humans think.

    If a system cannot know correct from incorrect, right from wrong, it can never operate in an ethical way. Seems like we are headed for more of these issues, where we can only add controls for specific issues after an issue has happened (always leaving room for the next 'zero day' dilemma).

    • by gweihir ( 88907 )

      That is actually an excellent point. In many fields (and more to come) you are legally required to assure ethics.

  • by Walt Dismal ( 534799 ) on Saturday April 08, 2023 @02:38PM (#63435416)

    By analogy, if you own a dog that bites someone, you are legally responsible. If you release a chatbot that lies and bites someone, it is your responsibility.

  • is an absolute fucking moron. Props to him for exposing the bribery scandal, but this lawsuit is the dumbest thing I've ever witnessed.

  • I'll write one word
    each response appends exactly one word, or one word and a punctuation symbol (period, semicolon, colon, parenthesis, comma)

    let's see who is legally responsible for the result!

    I'll start:

    Computers

  • It's simply not right to claim results from chatGPT may be "inaccurate".

    Instead the description should be: "Any information presented may be a complete fabrication, even if references are given. All information presented by ChatGPT must be verified by the reader. Any information about individuals may be outright lies. Do not place any trust in output from ChatGPT."

    Though honestly that is probably being way too lenient on chatGPT output...

    One thing I don't understand is, how do we not encounter more instances

    • by mark-t ( 151149 )

      More accurately:

      ChatGPT does not have any understanding of the words that you say or the words it outputs. Its apparent coherence on a subject arises as a consequence of how underlying statistical patterns in natural language can sometimes appear to show coherence on a subject, and how human beings attach meaning and understanding to the presence of natural language, even if it is not actually indicative of it, any more than when a parrot says words that it obviously does not understand the meaning

  • by SilverJets ( 131916 ) on Saturday April 08, 2023 @04:56PM (#63435604) Homepage

    The ChatGPT AI is a manufactured product. The one that is legally responsible is the company that produced that product. Just like if any other manufactured product malfunctions and causes someone harm.

  • The problem is not that ChatGPT can lie... it's that, sometimes, it could tell the truth.
  • by Sqreater ( 895148 )
    AI is interesting, but it will never be reliable and useful because it will always be dangerous. AI is functional intelligence without the checks-and-balances part of a complex motivation array, as humans have. Ours was developed over billions of years and is most clearly shown in our moral codes, our laws and our social judgments. AI has none of that.
  • by karlandtanya ( 601084 ) on Sunday April 09, 2023 @03:59AM (#63436248)

    Over the past 30 years I have had two primary jobs: programming automated equipment used in factories, and testing the automation components themselves.
    At all levels, the number one concern is safety.
    "The automated equipment injured the person" is the beginning of the investigation, not the end. Some person or corporation will be responsible for that injury.

    Regulations exist to mitigate risk to all exposed personnel. Every OSHA, NFPA or other safety regulation that you see was written in blood. This is the start of that process for AI.

    Many people and organizations bound by the safety regulations would love to be able to ignore them without consequence. A lot of time and money could be saved during development if we didn't have to consider the risk to folks using the automated systems. And a lot more time and money could be saved if we could throw up our hands when someone was injured and say the machine did it, not my fault.

    Don't be fooled. The c-suite doesn't understand AI but they're evil, not stupid. Even if the AI doesn't work for its stated purpose the fact that it can be used to externalize risk makes it an incredibly valuable tool to them.

    This and similar cases could remove a lot of worker, consumer, and public protections that we have fought and for many people literally died to achieve.

    It is incredibly important that the designers, developers, manufacturers, and operators of automated systems are held accountable for the risks and injuries associated with their use.

    And just to be clear for the pedants reading this: the term injury includes all injuries, not just those directly affecting life and health. If somebody works and lives in an ethical manner and builds wealth and reputation for themselves and their family, and you take that away from them, you will have caused injury, and this can matter just as much as if you took their hand or their hearing or their life.

  • No case, no legal precedent, for filing against generative A.I. that's produced for entertainment purposes. Were ChatGPT in this case based on court-record database access, THEN there would be a chain-of-custody precedent.
    This is A.I.'s future promise of popcorn, misdirection and make-shit-up'ism that itself becomes the equivalent of an overnight police blotter. All accidents in generative A.I.

  • It seems there's a lot of misunderstanding about what ChatGPT (and similar LLMs) are ... having the tech packaged/presented as a chatbot has been great at popularizing it, but the fact that you can now ask it questions and get replies seems to have made a lot of people think that it's at heart some type of search engine that is attempting to factually answer questions, when really nothing could be further from the truth!

    This LLM/transformer tech is built to generate language that is statistically similar to
