AI | The Courts

Man Sues OpenAI Claiming ChatGPT 'Hallucination' Said He Embezzled Money

OpenAI is facing a defamation lawsuit filed by Mark Walters, who claims that the AI platform falsely accused him of embezzling money from a gun rights group in statements delivered to a journalist. The lawsuit argues that ChatGPT is guilty of libel and alleges that the AI system "hallucinated" and generated false information about Walters. The Register reports: "While research and development of AI is worthwhile, it is irresponsible to unleash a system on the public that is known to make up 'facts' about people," his attorney John Monroe told The Register. According to the complaint, a journalist named Fred Riehl, while reporting on a court case, asked ChatGPT for a summary of the accusations in a complaint and provided ChatGPT with the URL of the real complaint for reference. (For those curious, here's the actual case [PDF] the reporter was trying to save time on reading.)

What makes the situation even odder is that the case Riehl was reporting on was actually filed by several gun rights groups against the Washington Attorney General's office (accusing officials of "unconstitutional retaliation," among other things, while investigating the groups and their members) and had nothing at all to do with financial accounting claims. When Riehl asked for a summary, instead of returning accurate information, or so the case alleges, ChatGPT "hallucinated" that Mark Walters' name was attached to a criminal complaint -- and moreover, that it falsely accused him of embezzling money from The Second Amendment Foundation, one of the organizations suing the Washington Attorney General in the real complaint.

ChatGPT is known to "occasionally generate incorrect information" -- also known as hallucinations, as The Register has extensively reported. The AI platform has already been accused of writing obituaries for folks who are still alive, and in May this year, of making up fake legal citations pointing to non-existent prior cases. In the latter situation, a Texas judge said his court would strike any filing from an attorney who failed to certify either that they didn't use AI to prepare their legal docs, or that they had, but a human had checked them. [...] According to the complaint, Riehl contacted Alan Gottlieb, one of the plaintiffs in the actual Washington lawsuit, about ChatGPT's allegations concerning Walters, and Gottlieb confirmed that they were false. None of ChatGPT's statements concerning Walters are in the actual complaint.

The false answer ChatGPT gave Riehl alleged that Walters was treasurer and Chief Financial Officer of SAF and claimed he had "embezzled and misappropriated SAF's funds and assets." When Riehl asked ChatGPT to provide "the entire text of the complaint," it returned an entirely fabricated complaint, which bore "no resemblance to the actual complaint, including an erroneous case number." Walters is looking for damages and lawyers' fees. We have asked his attorney for comment. As for the amount of damages, the complaint says these will be determined at trial, if the case actually gets there.
Comments:
  • by Chiasmus_ ( 171285 ) on Thursday June 08, 2023 @07:38PM (#63587120) Journal

    There's some theory here that he suffered financially quantifiable damage as a result of reading an inaccurate sentence about himself in the privacy of his own home? Come on, now.

    When I asked GPT-4 about my band, it claimed I wasn't even in the band and that the songs were written by someone else. You don't see me leaping on the phone to complain to a lawyer. All that would get me is laughed at.

    GPT hallucinates. Everyone knows that. It has about a million disclaimers to that effect.

    • "GPT hallucinates. Everyone knows that. It has about a million disclaimers to that effect." And how many people pay attention to those disclaimers? There is 'disclaimer fatigue' just like there is system warning and cookie notification fatigue. People just click through this stuff without reading it because they are continually bombarded with these warnings and they just want to get at the good stuff. It's like an alarm that goes off a thousand times a day, everyone just ignores it.
      • by Pascoea ( 968200 )
        And? That's like arguing you don't owe the bank for your car payment because "nobody reads those contracts". It's not like ChatGPT hides this statement behind a text-wall-style ToS. It's literally one of 9 messages plastered all over the home screen when you log in. The three limitations it lists: "May occasionally generate incorrect information", "May occasionally produce harmful instructions or biased content", and "Limited knowledge of world and events after 2021". This dude's lawsuit is the technical
        • We know people can generate false information. Yet if a person said the same thing, they would also be sued.
          • Yes, I will write a prompt where GPT participates in its own trial, and then a prompt for the punishment. After that everything should be OK. It's going to forget everything when I hit F5 anyway.
        • Not quite. It's not hidden in a wall of text in terms of service. It's literally at the bottom of the screen 100% of the time you're using the website. "ChatGPT may produce inaccurate information about people, places, or facts." Right there by the 'submit' button every time.
          • by Pascoea ( 968200 )

            Not quite. It's not hidden in a wall of text in terms of service.

            I mean, that's exactly what I said. It's front and center when you log in. I didn't realize it was at the bottom too. Yeah, I don't see this lawsuit going anywhere besides out the front door.

          • "Inaccurate" meaning "The Toyota involved in the incident had a red paint job" when in reality the car was tan colored. Slander meaning "So and so is a sexual creep and he embezzeled funds from his employer"
        • by Jiro ( 131519 )

          And? That's like arguing you don't owe the bank for your car payment because "nobody reads those contracts".

          If you don't have a good excuse, you're wrong. But if the guy you're complaining about doesn't have a good excuse, you're right. And that's what happened here. The guy complaining about libel isn't the one who used ChatGPT. He's complaining that someone else did.

      • By getting a bot to commit libel, he'll get his day in court, and depending on how the judge sees it, he'll either get his big payoff or the company will settle rather than get pulled into a big case.
    • by Junta ( 36770 ) on Thursday June 08, 2023 @08:49PM (#63587222)

      The question is, which is it?

      Is it prone to hallucination, meaning you should distrust anything it says and research it independently? If so, what's the point of the platform? I could just skip straight to the research myself if I have to do it anyway.

      Or is it massively useful and can be used directly? If so, should the platform be held accountable when it makes damaging mistakes?

      It's an interesting demonstration, but it seems the advocacy picks whichever perspective is most convenient when tricky questions are raised.

      • by phyrz ( 669413 )

        It's massively useful and can be used directly, but you should always fact-check it, like you should any source.

        • by Pascoea ( 968200 ) on Thursday June 08, 2023 @09:50PM (#63587308)
          The 2020 election "shenanigans" would seem to prove that "you should fact check it, like you should any source." isn't a thing that meshes with the general populace, at least in the US.
        • by Junta ( 36770 )

          After a fair amount of messing with it, I have struggled to find 'useful'.

          If it is asked to deal with data that could also be found within the first 3-4 links of a Google search, it does fine. It does fine in a way that is novel and interestingly different from how the Google search feels, but it isn't any faster.

          If I ask it for a factual or functional response that can't be readily found at the top of a Google search, it gives junk responses. However, it's perhaps more frustrating, as the responses *look* convincing.

      • It's an incredibly powerful tool for working with symbols and language. Translations are remarkably accurate, and it can increase and decrease language levels from A1 to C2 at will. That alone makes the shares worth their weight in gold.

          But an omniscient oracle is not something it was made to be.

        • by Junta ( 36770 )

          That's a cool take about translation, given how low-quality translation to date has been and how credible that use case is given the scenario (it *only* involves data in the 'prompt', without being asked to extrapolate to facts in its "knowledge base", which is where things get sketchy). For practical translation I could easily believe it doing a fantastic job, and for more... literary-quality translation it would at least potentially be a good starting point (sometimes even human translation suffers for faithfulness).

        • by dvice ( 6309704 )

          It is pretty good at translating, but it is not perfect. E.g. Alice's Adventures in Wonderland, Chapter 1

          "Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do"

          "bank" is translated as the money-lending building instead of something like a river bank, which is incorrect, considering later remarks about "picking the daisies" and seeing rabbits.

          If we are very strict about this, it could be possible that "bank" refers to an actual money-lending building and she is sitting by it.

      • If so, what's the point of the platform

        ChatGPT is not a research tool and never was. It nonetheless proves quite useful for a variety of things. Not every word committed to paper or screen needs considered, well-researched fact-checking.

        And no, I didn't trawl through Google Scholar looking for articles to back up what I just wrote. I hope that in and of itself proves the point.

      • If you are an expert using GPT as a tool, all is OK. But if you're using it for learning, you'd better go to primary sources. You can still get great explanations for any concept, but you can't go too deep into details; it doesn't remember everything on the internet.
      • It's massively useful as long as the use you've got in mind doesn't involve asking it about facts that you can't verify. That may not cover every use case, but it still covers a lot.
    • by RandomUsername99 ( 574692 ) on Thursday June 08, 2023 @11:17PM (#63587448)
      That's not how the law works. From Cornell's Legal Information Institute:

      To prove prima facie defamation, a plaintiff must show four things:
      1) a false statement purporting to be fact;
      2) publication or communication of that statement to a third person;
      3) fault amounting to at least negligence; and
      4) damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

    • When I asked GPT-4 about my band, it claimed I wasn't even in the band and that the songs were written by someone else. You don't see me leaping on the phone to complain to a lawyer. All that would get me is laughed at.

      Are YOUR bandmates armed to the teeth and kinda nervous right now due to "unconstitutional retaliation" and "other things" in the Washington Attorney General's investigation that they're suing the gubermint over?

      One's reputation MAY not be the main concern here. Also, AK beats an axe.

  • This is one of the dangers of the technology: people will just take what it produces at face value because it's "smart" and "computers are never wrong". And companies are rushing in with big dollar signs in their eyes, blinding them to the fact that this tech is nowhere near ready for prime time. We are seeing stories now of companies replacing their employees with ChatGPT, again seeing dollar signs rather than reality. This new Tulip Mania has repercussions that are far more dangerous than the last one.
    • Honeymoon period: people still see in AI what they want it to be. But compared to a year ago, many more people know about the limits of AI, and not just from the news.
      • This "honeymoon" sure sucks for those who are now out of a job and facing homelessness because their employer became "madly in love" with AI. If there is any good use for a "social credit" scoring system, it would be to rate employers to see if they are prone to canning their workforce over the latest hype. Of course that will never happen, because social credit scores are meant as a weapon to be used against you and I.
  • by narcc ( 412956 ) on Thursday June 08, 2023 @08:01PM (#63587148) Journal

    This is what I can tell from the complaint and the mess of words that the Register is calling a story:

    1. A reporter (Riehl) tried to use ChatGPT to avoid doing work.
    2. ChatGPT, naturally, returned a bunch of stupid lies about someone (Walters).
    3. The reporter checked with someone who would know (Alan Gottlieb) to see if the claims were true. They were not.
    4. The lie wasn't published anywhere.
    5. ChatGPT only told the lies to the reporter.
    6. The reporter only told the lies to one other person to see if the claims were true.

    How did Walters even find out that ChatGPT told the reporter a lie? How was Walters harmed by this in any way? Only two people heard the false claims and both are fully aware that those claims are false.

    The lesson here, for anyone who needs a reminder: ChatGPT is not capable of producing accurate summaries. It's just a language model. It does not understand. It cannot reason, consider, analyze, or deliberate. All it does is make predictions about what token should follow a sequence of tokens, on the basis of probability.

    If you're a reporter, lawyer, or anyone else, you can't trust ChatGPT to accurately summarize anything. It can't fetch part of a document for you. It can't check its own work for accuracy. That it sometimes looks like it can do these things is nothing short of miraculous, but it's just an illusion: the power of statistics and large amounts of data.
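
    For a rough sense of what "predicting tokens on the basis of probability" means mechanically, here is a minimal sketch. The bigram table below is entirely hypothetical; real LLMs condition on long contexts and compute these probabilities with a neural network.

        import random

        # Toy "language model": map the current token to a probability
        # distribution over possible next tokens. Purely illustrative.
        BIGRAMS = {
            "the": {"complaint": 0.4, "treasurer": 0.3, "court": 0.3},
            "complaint": {"alleges": 0.6, "states": 0.4},
        }

        def next_token(current: str) -> str:
            dist = BIGRAMS[current]
            tokens, weights = zip(*dist.items())
            # Sampling, not fact lookup: the same input can yield different
            # outputs, and nothing here checks truth.
            return random.choices(tokens, weights=weights)[0]

        print(next_token("the"))  # e.g. "treasurer" -- plausible, never verified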

    • The statements are still libelous, even if no actual damage was done. It is possible in such a case to sue for punitive damages even if the real damages are negligible. I doubt he will win anything, especially since it's not clear to me whom he is actually suing. You cannot sue the program, because it has no assets, and the company behind it will no doubt claim that they never promised that the results were useful. I am beginning to wonder if AI is not going to cause the fall of civilization because of a SkyNet scenario.
      • One interesting twist: while in this specific instance it is pretty clear that the person double-checked and was corrected, is it possible that it committed libel toward other users who would not have checked? I would suspect that no one else bothered to ask ChatGPT about this, but we wouldn't know. Libel is generally seen as a problem of broad publication, but ChatGPT is individualized, so it has similar potential reach to a publication, but with output tailored to the user rather than duplicated.

        • I'm reminded of how it was determined that the picture a monkey took of itself was not copyrightable, because a human didn't direct the taking of the picture. How AI "music" is in much the same boat, and how AI "inventions" can't be patented.

          Basically, can ChatGPT actually be such a thing as to "hallucinate"? Can it have any sort of "mens rea" needed to actually commit libel, any more than a parrot chaining together random words it has heard could?

          • by AmiMoJo ( 196126 )

            If you teach a parrot to repeat libellous statements, you could be in trouble.

            It's even worse for ChatGPT because they sell it as a service.

            • I really don't see how any reasonable person could come to the conclusion that ChatGPT is telling the truth, any more than a parrot.

            • If you're teaching the parrot whole phrases, that's one thing. But identify the person or company teaching the AI to spout those specific sentences.

              If you teach a parrot a vocabulary and IT chooses to put a specific set together, not taught to it, then it can't really be libel.

              For example, let's say it learns people's names, and at some point it picked up "lier!" - so, being upset at a hired handler named Larry, and having learned Larry's name, it goes "larry lier! larry lier! larry lier!" - then that combination wasn't taught to it, so it can't really be libel.

        • The law only requires one to prove it was published and received by one other person, and it doesn't require proof of monetary damages. The tough part is going to be proving that operating their service, which purports to answer questions correctly, generally, is negligent when it doesn't. Regardless, it's really insanely important for the general public to hear about these inaccuracies, because the only people who are going to interrogate the reliability of this data in the early stages are the nerds that already know its limits.
      • The statements are still libelous, even if no actual damage was done

        The statements are not libelous for two reasons.
        1) The statements were not in fact made.
        2) There was no libelous intent.

        It is possible in such a case to sue for punitive damages even if the real damages are negligible.

        Odds are good you will not even get to the trial part, because your case will be dismissed first.

        • by Junta ( 36770 )

          1) For some definition of "made". It seems the statement was created in a human style, in a way that would appear human-authored.
          2) Libel doesn't require malicious intent; it can involve mere negligence. That would be the claim here: that they crafted a libelous statement without regard to whether it was factual or not.

          However, the "harm" part of the criteria may be challenging for this plaintiff. As far as can be readily seen, the only one known to care enough to ask chatGPT about it also doubted i

      • It has a disclaimer next to the submit button that says "ChatGPT may produce inaccurate information about people, places, or facts." It's literally right under the place you type the prompt. It's not hidden, nor obscured, it's in plain view, of every user, every time they click the button. There's no way anyone in the universe is going to be held liable for the text it generates. Sorry.
    • Quote: "5. ChatGPT only told the lies to the reporter."

      FALSE. ChatGPT would reply exactly the same to WHOEVER IN THE EARTH asked about him... just like if it were published in any website accesible world-wide. ...and that's why this probably will end up in trial.

      • by narcc ( 412956 )

        just like if it were published in any website accesible world-wide.

        FALSE. For multiple reasons.
        1) ChatGPT generates replies probabilistically. Different responses are possible for the same prompt.
        2) There is no evidence that ChatGPT told the lie to anyone other than the reporter.

        just like if it were published in any website accesible world-wide.

        Complete nonsense. Also, your computer has a spell checker. Learn how to use it.

        • Quote: "Complete nonsense. Also, your computer has a spell checker. Learn how to use it."

          Seriously... your brain is so useless that it can't do these:

          > accesible (Spanish) -> accessible (missing an "s")

          and do not recognize "world-wide" as "worldwide"?

          My spell checker works properly because I do write and read in multiple languages, whereas your brain is unable to add an "s" and remove a "-" rendering a properly written text as "complete nonsense". Poor human being...

      • FALSE. ChatGPT would reply exactly the same to WHOEVER IN THE EARTH asked about him... just like if it were published in any website accesible world-wide. ...and that's why this probably will end up in trial.

        Not true. There is some intentional randomness that gets mixed in, and earlier tokens in the session could have influenced subsequent responses.
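
        That "intentional randomness" is, presumably, ordinary temperature sampling; a minimal sketch of the idea (the logit values here are made up):

            import math
            import random

            def sample(logits, temperature=0.8):
                # Divide raw scores by the temperature: near 0 the choice is
                # effectively deterministic; higher values flatten the
                # distribution, so the same prompt can yield different tokens.
                scaled = [x / temperature for x in logits]
                m = max(scaled)  # subtract the max for numerical stability
                exps = [math.exp(x - m) for x in scaled]
                total = sum(exps)
                probs = [e / total for e in exps]
                return random.choices(range(len(logits)), weights=probs)[0]

            # Same "prompt" (same logits), potentially different answers each call:
            print([sample([2.0, 1.5, 0.3]) for _ in range(5)])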

    • Humans are unreliable too: we don't do much math in our heads, and we don't check code just with our eyes, we execute it. GPT is prone to some failure modes, just like humans are prone to others.
      • by narcc ( 412956 ) on Friday June 09, 2023 @01:52PM (#63589364) Journal

        It's not that these things have those capabilities and sometimes make mistakes, like a human, it's that they lack those capabilities altogether.

        Imagine using a set of dice to do simple addition. Given some problem, you roll the dice, and whatever comes up, you write down as your answer. If you get the right answer, would you claim that the dice can do arithmetic? If you get the wrong answer, do you blame yourself and roll again until you get something close?

        Dice can't do arithmetic. ChatGPT can't either. Nor can it reason, analyze, consider, deliberate or anything like that. That's just not how these things work.
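
        The dice analogy is easy to check in code (a hypothetical simulation, nothing more):

            import random

            # "Answer" an addition problem by rolling two dice and reporting the sum.
            def dice_add(a, b, trials=100_000):
                target = a + b
                hits = sum(random.randint(1, 6) + random.randint(1, 6) == target
                           for _ in range(trials))
                return hits / trials

            print(dice_add(3, 4))  # ~0.167: right about one time in six, yet the
                                   # dice plainly aren't doing arithmetic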

  • by presidenteloco ( 659168 ) on Thursday June 08, 2023 @08:22PM (#63587176)
    The awareness that it is doing something wrong.
    The "Mens Rea" state of mind is necessary for someone to be guilty of a crime (or tort? Any actual lawyers out there?)

    Firstly, one can't prove that ChatGPT has self-awareness (introspection about its thinking and utterances) at all.

    Secondly, it's right there in the term "hallucinated". If it does have a state of mind, it was in a delusional, hallucinating state of mind. Not guilty.

    And holding a company responsible for all the thoughts and actions of its AI product?
    a) Is that reasonable, given that the AI's exact behaviour is already inherently unpredictable?
    b) What if it is an open source AI? Who's legally responsible for that one?
    • by Beryllium Sphere(tm) ( 193358 ) on Thursday June 08, 2023 @10:16PM (#63587350) Journal

      A company can and should be held responsible if its machinery causes injury due to their negligence. For example, if a leak happens at your pesticide plant and kills people downwind, legal fact-finding will follow.

      OpenAI is certainly aware that ChatGPT output is unreliable. It's good that they say so on their main page. A law professor I follow says that such a warning is not enough to avoid liability.

      Upthread, the question was raised whether anyone was actually injured. Questions like that get suits thrown out all the time.

      • by Bahbus ( 1180627 )

        Sweet. Then, by your own argument, any and all gunshot victims can sue the manufacturer of the gun and the ammo, as well as the gun store.

        All these cases of stupid people trying to sue OpenAI for something the chat generated are going to go nowhere. OpenAI isn't the one prompting the creation of these outputs. OpenAI isn't the one publishing or using the output inappropriately. Mark Walters and his attorney, John Monroe, are both dumb as fuck morons who will lose this case, assuming it even goes forward to begin with.

      • A law professor I follow says that such a warning is not enough to avoid liability.

        Getting your advice from those who can't often leads to disappointment.

        Upthread, the question was raised whether anyone was actually injured. Questions like that get suits thrown out all the time.

        Part of the standard is whether a reasonable person would believe what ChatGPT said. No reasonable person would believe it if they knew it was coming from ChatGPT.

        • No reasonable person would believe it if they knew it was coming from ChatGPT.

          That's your standard of reasonable. What's the legal standard? I suspect if a large enough fraction of the population would believe it that would count as reasonable.

          Fun fact: the "reasonable man" is apparently known as "the man on the Clapham omnibus". If you're ever in London, I invite you to join me for a ride on the number 37 bus (a bus I have caught many times which goes through Clapham) to eavesdrop on some conversations to decide whether uncritical belief would be rare or common.

      • by cshamis ( 854596 )
        Well... There's a big difference between a chemical leak that makes people sick, and a string of words given in an interview.

        The law professor is irrelevant. The courts decide the liability, not his opinion of a disclaimer. Besides, there are plenty of examples where disclaimers DO in fact limit liability: "Beware of dog", "Caution: contents are hot", the airbag warnings on the back of your sun visor. If those didn't limit liability, they wouldn't include them, given the connotations that the product is dangerous.

    • Let's say your auto manufacturer replaced your car's AWD logic with an AI system that reacted in real time to changes in conditions. If that AI unpredictably hallucinated a bunch of weird road conditions, making the car impossible to control, which caused accidents, traffic disruptions, and damage to other parts of the vehicle, would it be the car's fault or the manufacturer's fault?
      • by Bahbus ( 1180627 )

        Your example is useless.

        More appropriate: You rent a car from Best Car Rental Company. You then run over 18 homeless people with said car. Is Best Car Rental Company responsible for those deaths? No. The person responsible was you.

        Just like the only person who could possibly be responsible here is the journalist. And that's only if he actually published anything based off it. I have yet to see any possible legitimate cases against OpenAI with generated content. Absolutely none of them have any merit. The law

        • You have no idea what you're talking about.
    • The "Mens Rea" state of mind is necessary for someone to be guilty of a crime (or tort? Any actual lawyers out there?)

      Firstly, one can't prove that ChatGPT has self-awareness (introspection about its thinking and utterances) at all.

      No, but the guilty mind is OpenAI, not ChatGPT. OpenAI know that ChatGPT will make stuff up, even libellous stuff about people, but they are happy to put it up there to use with that knowledge. Whether it's a crime or not is TBC, but I don't see how mens rea for OpenAI isn't there.

      • OpenAI knows that ChatGPT sometimes makes stuff up, and they make the individual user of ChatGPT aware of that via a disclaimer before the user can ask ChatGPT for information.

        ChatGPT has set the stage, informing its user that what comes next stands a good chance of being fiction.

        If the individual user then chooses to publish more widely the made up information, knowing it may very well be false, it would seem to me that's on them.
        • Yeah but that's different.

          I don't think there's a question of mens rea, because that lies on OpenAI, and they know it hallucinates, so they definitely are knowingly responsible.

          There's the separate question of whether they are guilty of anything. Would a reasonable person treat the output of ChatGPT as real?

          If the answer is yes, then I think the question of mens rea is clear enough.

          • A reasonable person would not (treat the output of ChatGPT as real). They would become informed of the basic parameters (capabilities, limitations) of this obviously important new technology.

            An average person may very well (treat the output of ChatGPT as real). While that is a sad commentary on either people or our education system, it shouldn't affect the legal question.
            • Reasonable person has a specific meaning in law.

              It's closer to "average" than a high degree of rationality. It's also known, interestingly, as "the man on the Clapham omnibus". If you're ever in London, I invite you to join me for a ride on the number 37 bus (a bus I have caught many times which goes through Clapham) to eavesdrop on some conversations to decide whether uncritical belief would be rare or common. I have no idea, but for what it's worth I did overhear a group of three late-teenage Millwall supporters.

    • Secondly, it's right there in the term "hallucinated". If it does have a state of mind, it was in a delusional, hallucinating state of mind. Not guilty.

      Good jesus, do not engage in terminology they created to explain why the program is failing.

      Just because I call a flat tire a broken leg doesn't mean it's analogous to a broken leg and should take weeks to heal. Just because they call it hallucination doesn't mean it IS hallucination, even if we (gags a little) assume it has a state of mind. There's no reason to believe that.

      • There are a lot of opinions based on the low-level details - it's a next-token predictor, it is stochastic, it is a parrot, etc. The model is stealing all the attention, but the real hero is Language. In "Language Model", the Language part is the most important.

        I see Language, with a capital L, as meaning the collection of ideas, models, and experiences mediated by language. The amazing accomplishments of AI are in fact all the merit of Language. The model doesn't matter; any architecture would do. The smarts is in the Language itself.
    • Comment removed based on user account deletion
      • I don't think it should be possible to successfully sue OpenAI.
        The alleged libel is present in the universe, as a simple statistical combination of a large corpus of the writings of humanity.
        ChatGPT, using its statistical analysis of the large corpus, merely discovered this allegedly libelous combination of letters.

        If there is responsibility, it is a distributed responsibility, among those humans who authored the source material.
  • Perhaps the difference is that people have enough of a sense of self-preservation not to trust corporate hype about AI cars, but when it's just trash talk about other people, sure, believe the ad copy and indulge your laziness.

    • It's probably more that you can find somebody who believes just about anything.

      It's less "people" and more individuals - you can find people who fully believe in self-driving cars and want to buy/ride in one, and those who say "absolutely never". Most are probably in the middle.

      In the case of weird lawsuits, you have a very low bar for filing one, so you get people on the edge who believe very unusual things, get upset very easily, etc...

      You're not even right about self driving cars, I think. I mean, there

  • When a human blows shit out of their a$$ like that, does anybody call it hallucinating?

    It's straight pulling shit out of its own ass. Liars and douchebags do it all the time. If you polish a turd enough, does it really get shiny?

    Perhaps people will just start calling GPT a habitual liar or similar. Statistically, it's accurate and plain English.

    You'd think it was trained on Wikipedia too (has it been 6 months yet?):
    "Pathological lying has been defined as: "a persistent, pervasive, and often compulsive pattern of excessive lying behavior""
  • on what's written about living people, that just seems fair.

    Right now it can't cite its sources, and has no introspection feature to correct itself reliably.

    Until they put that in, this might be solved by putting in a stronger warning that it just makes stuff up, not just saying that it can be merely incorrect.
    • on what's written about living people, that just seems fair. Right now it can't cite its sources, and has no introspection feature to correct itself reliably. Until they put that in, this might be solved by putting in a stronger warning that it just makes stuff up, not just saying that it can be merely incorrect.

      I disagree, because unlike Wikipedia, which is user-submitted content, there's no real way to define a living person algorithmically. The AI doesn't actually understand the difference between living and dead, unless you have a database of who is alive that it can cross-check. The real solution here is education and advertisement. The problem is people expect way more from this than it's capable of. That has to do with how they advertise it (and allow it to be advertised via word of mouth) and how people are using it.

      • by cshamis ( 854596 )
        Right. Because it gives the illusion of being able to talk like a person, people assume it is reasoning and has the motivations of a human. And it just doesn't. It's just a formula that picks words pseudo-randomly. (OK... far more "pseudo" than random, but it's still just a mathematical formula.) Unless somebody can prove that OpenAI specifically targeted the guy in the training data, this case is going nowhere.
        • "Still just a mathematical formula" ... No. It is language. Amazing amounts of language. But at source it is the same model we use to operate in the world. Do you think we are smart and LLMs dumb? We are just combining words that sound right, much like stochastic parrots. The difference is that we get feedback from the world, feedback that calibrates us unlike LLMs who only get 4000 tokens of context to work with. Even with feedback it took us millennia to come up with a few ideas that describe the world
    • by cshamis ( 854596 )

      Worf: Romulan Ale should be illegal.

      Geordi: It is.

      Worf: Then it should be MORE illegal.

      So it should be more disclaimed.

      • by PJ6 ( 1151747 )

        Worf: Romulan Ale should be illegal.

        Geordi: It is.

        Worf: Then it should be MORE illegal.

        So it should be more disclaimed.

        I get the point, but yes, I think stronger wording might make a difference in a lot of cases so this isn't really a good analogy. There will certainly still be people who ignore the warning, but seeing a "hey, this might be totally made up" a bunch of times might make that number way less, IMO.

        It's like in civil engineering: if you have a section of road that produces way more crashes than anywhere else, you don't point to the clear warnings you posted, shrug your shoulders, say people are idiots, and call it a day.

  • than this man suing ChatGPT instead of the reporter who was too lazy to actually report, is the idea that AI "hallucinates". I hope to great heaven above this term doesn't catch on, because it's not hallucinating. That's something living creatures do, and saying that implies that AI is thinking and living. It's awful terminology. Call it what it is: AI makes stuff up randomly. Now it's less "mysterious and awe inspiring" (we made AI so powerful and living it even has dysfunctions like us) and more "just a program that makes stuff up".

    • As a minor amendment: apparently the reporter DID do a fact check and was corrected, and didn't publish the story. So I'm confused why on earth this is a lawsuit in the first place. The only way I could see that working is if ChatGPT *consistently* gave the same lying response, but it's as likely to change to a completely different answer if you ask it from a different computer.
  • Restraining lawsuits until it all becomes non-functional. So it's lawyers who are going to save humanity from AI. Hmm, who knew...
  • I really hate this trend of calling AI model errors "hallucinations". It was wrong; it fabricated a string of words that is incorrect. Calling this a hallucination is stupid; these are n-dimensional regressions, not intelligence. In attempts to circumvent copyright law, they are attempting to make AI models sound more complex than they are. Training on copyrighted material is copyright infringement, plain and simple. Don't buy into anything that conflates AI with the human brain; they are completely different.
  • After all, these types of "gripping" legal questions never get resolved without someone wasting their money to legally challenge these things. Although without being quite as sarcastic as my above line, the question does bear asking: if AI can't be the owner of a copyright because it's not a person, can it be guilty of libel if it is also not a person?
  • We've all had fantastic ideas. Step 1 of the creative process is like, "Hey, if I strap a couple planks to my arms and jump off the barn, maybe I can fly". A few stupid kids have done that, but many will run it past their friends and decide it's not a good idea. The truly smart children will ask questions about what it really takes to fly, and quickly find out that's not going to work.

    AI is the kid who straps those planks right on and jumps.

    It's got the creative right side of a brain, but not the analytical left.

    • I love the fact that half of posts about LLMs are "It's only got the creative side, with no logic or reasoning, it is categorically unable to create anything coherent or internally consistent, let alone ever say something consistent with reality" and the other half are "It's a computer, all it can do is cold logic, it lacks creativity or imagination, it can only reproduce exactly what it's seen before"
  • Although I don't think this will end up in court, it does seem to raise a lot of good issues for future AI systems and also for future AI robot systems. Imagine what kind of lawsuits will start to come out of people when the robot does something they don't like.

    And eventually a robot will kill a human, and that trial will be quite an attention-getter.
    (Just for reference, this has been discussed before in Outer Limits episodes, Twilight Zone episodes, the I, Robot movie, Asimov's books, and The Animatrix shorts.) So it's not new.
