AI | The Courts

Lawyer 'Greatly Regrets' Relying on ChatGPT After Filing Motion Citing Six Non-Existent Cases (reason.com)

The judge's opinion noted the plaintiff's counsel had submitted a filing in opposition to a motion to dismiss "replete with citations to non-existent cases... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations... The bogus 'Varghese' decision contains internal citations and quotes, which, in turn, are non-existent."

Eugene Volokh's legal blog describes what happened next: Thursday, plaintiff's counsel filed an affidavit in response, explaining that he was relying on the work of another lawyer at his firm, and the other lawyer (who had 30 years of practice experience) also filed an affidavit, explaining that he was relying on ChatGPT... ("The citations and opinions in question were provided by Chat GPT which also provided its legal source and assured the reliability of its content...")
Their affidavit said ChatGPT had "revealed itself to be unreliable," while adding that they had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore were unaware of the possibility that its content could be false." The affidavit adds that the lawyer "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity."

To support their position, their affidavit ends with this appendix — a ChatGPT transcript:

[Q:] Is varghese a real case

[A:] Yes, Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.

[Q:] What is your source

[A:] I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.

[Q:] Are the other cases you provided fake

[A:] No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.

"Judge Castel appeared to be unimpressed, and yesterday issued an order ordering the law firm and the second lawyer to show cause why they shouldn't be sanctioned."

Thanks to long-time Slashdot reader davide marney for submitting the story.
  • by DesScorp ( 410532 ) on Saturday May 27, 2023 @05:38PM (#63556053) Journal

    Before Skynet launches the missile, it's going to troll us, just for the LOLs.

    • why is funnybot dialing norad?

    • by Brain-Fu ( 1274756 ) on Saturday May 27, 2023 @06:16PM (#63556117) Homepage Journal

      Skynet won't need to launch a missile. AI will end the human race by transforming us all into lazy buffoons and watching us peter out naturally.

      • by iMadeGhostzilla ( 1851560 ) on Saturday May 27, 2023 @06:41PM (#63556155)

        But it will, because the training dataset included the Terminator movies.

      • by msauve ( 701917 )
        >AI will end the human race by transforming us all into lazy buffoons and watching us peter out naturally.

        Too late. BTDT.
      • by blackomegax ( 807080 ) on Saturday May 27, 2023 @08:41PM (#63556329) Journal
        laziness is an evolutionary trait that lends TOWARDS survival, not against.

        you wouldn't call a cat that sleeps 20 hours a day lazy, because it goes crazy for 4 hours and hunts its food just fine. Just like humans used to do. 2-4 hours of labor a day, max, is ideal for mental health, while being 1000% enough to contribute to a successful non-greed based society.
        • by Z80a ( 971949 )

          Unless you genetically modify people, greed is basically a part of every human being, and you have to deal with it.

          • Greed is a part of every human being, but it isn't necessarily a major part; being part of a society and helping others is. In fact, helping others makes us happier than spending on ourselves: https://www.usu.edu/science/di... [usu.edu]

            It's just that we are lied to by society to believe that having stuff will make us happier, generally by people that want to sell us stuff. While it does make us slightly happier, it's definitely not what I experience the most happiness from, nor do I believe most other people do eit

        • laziness is an evolutionary trait that lends TOWARDS survival, not against.

          Only in the natural environment in which we evolved. In the modern environment where we do not have to hunt woolly mammoths with spears for food and can just pick up a phone and have enough fat and sugar calories to keep a stone age tribe alive for a month delivered to our door in 30 minutes, not so much.

      • Wall-E shows the actual blueprint for human devolution.

    • The judge in the case said: "As a Large Language Model, I care deeply about what you lawyers have to say. I have considered all of the facts, and found the defendant guilty. The sentence is life in prison." LOL.

      (Oh whoops, the case was only for speeding. Too bad! Justice AI is swift!)

      • Let me play the devil's advocate. The AI has been tested on millions of cases, much more than any human judge. It has to adhere to strict standards of fairness and accuracy. It covered much more legal text than any one human. It might have stupid moments but those are easily caught by the real human who is using it.

        On the other hand human judges are a coin toss. Each judge has a different bias and experience. They have been calibrated for fairness and vetted to a much lesser degree. I'd go with the combo o
        • by Bongo ( 13261 )

          Let me play the devil's advocate. [...] The AI has been tested on millions of cases, much more than any human judge. [...] It covered much more legal text than any one human. It might have stupid moments but those are easily caught by the real human who is using it.

          We feed these things massive datasets because they can't intelligently infer anything from small datasets, like a human can. Having to feed it millions of cases is a sign of its weakness. The world's greatest thinkers end up writing just a few books.

          It might have stupid moments... well, it always is in a state of stupid.

          ChatGPT: "While I can simulate aspects of critical thinking, I don't possess subjective experiences or personal biases that humans may bring to the table"

          dictionary: "the objective analysis

        • The thing is, who vets the LLM training dataset? Those millions of cases were argued and decided by fallible human lawyers & judges, so what you get is not "fairer" or "more just" but just more averagely fallibly unfair & unjust. Or perhaps you're hoping for some kind of middle-ground fallacy.

          Like many of the more important issues facing us humans, justice isn't a technical issue with technical solutions.
  • by gweihir ( 88907 ) on Saturday May 27, 2023 @05:50PM (#63556061)

    This failure mode is basically the first one that got discussed. I guess this person did not try to inform themselves at all about the limitations of things like ChatGPT before relying on it.

    This is a chat AI. It has no intelligence. It has no understanding of anything. All it can do is chain words based on probabilities.

    • by Darinbob ( 1142669 ) on Saturday May 27, 2023 @05:53PM (#63556073)

      And despite stories like this and others, and claims from the GPT creators about its limitations, you see otherwise educated people fall for the myth of intelligent AI. Now imagine how the uneducated general public feels about AI; they're absolutely convinced it's the real thing.

      ChatGPT does not understand such concepts as "is this real". It only understands how to parse this as a sentence and how to use its training to find likely answers that fit, loosely based upon the training data that it got. Even a basic web search is more accurate than ChatGPT, so why are search companies so stupid as to try and add this inaccuracy to their portfolios?

      • by war4peace ( 1628283 ) on Saturday May 27, 2023 @05:56PM (#63556079)

        To quote a nuclear scientist from a well known HBO miniseries: "Because it's cheaper".

      • The public thinks it's sci-fi coolness (to reprise a sentence from my last post on another article: "Wow, we have Tech!"), and, since you mentioned portfolio, investors are demanding it.

      • by Moryath ( 553296 )

        ChatGPT does not understand such concepts as "is this real".

        It's worse than that. Believers in "artificial intelligence" miss the most worrisome part: even the people working in the field don't understand how the "artificial intelligence" algorithms are working, and what they actually understand.

        Take for instance, AI bots designed to play Go... which turn out to not understand even basic concepts of the game [arstechnica.com], repeatedly [vice.com].

        • by gweihir ( 88907 )

          Indeed. That is what comes with "training" instead of designing algorithms: Outside of toy examples, you do not know what they actually do, and you have no clue what special cases and unexpected behavior are even possible and what triggers them. This means using ChatAI for anything that needs to be right is a very, very bad idea. But people are doing it anyway: Medical advice, warning systems, etc.

          This becomes even worse when you think about security applications, like having these systems write software wit

          • They will shake for a while and we will develop better security defences. This is a never ending war, a continuous escalation, like biology and nature.
        • In fairness, there are many different go-playing AI models, and some are better than others. The ones here are not AlphaGo, which I believe is still the champion. So, the fact that some of the B-level programs wound up having flaws like this isn't some sort of victory over "AI" as a concept.

          Be that as it may, I could argue on semantic grounds that you are still right, the AI bots do not understand basic concepts. They don't "understand" anything. "Understanding" is a concept that doesn't apply to them.

          • AlphaGo is no longer the champion. It was defeated by its descendant, something like 60 to 0, and its descendant beat it by something like 100 to 0.

            The last descendant teaches itself the rules of a game (not just Go) from scratch using nothing but the win/lose criteria.
          • > They don't "understand" anything. "Understanding" is a concept that doesn't apply to them. They are still far too basic to even come close to the commonsense meaning of the word, even setting aside any discussion about consciousness or qualia.

            LLMs form a model of the world, as they form a model of language. In order to predict the next word it is better to model people and objects. So they can simulate novel contexts and situations pretty well. I'd say they do understand a bit. Understanding means
        • KataGo and LeelaZero ain't AlphaGo, or its descendants.
        • Believers in "artificial intelligence" miss the most worrisome part: even the people working in the field don't understand how the "artificial intelligence" algorithms are working, and what they actually understand.

          There's even a field of study around trying to figure out what the AIs are doing, "Explainable AI": https://www.google.com/search?... [google.com]

      • by gweihir ( 88907 )

        Very true. Many people have a tendency towards animism and no understanding of the very real limitations technology comes with.

        As to search, I think most search providers gave up on trying to be accurate a long time ago. Google certainly has. These days it seems mostly about wasting people's time and showing them more ads. Hence the move to include ChatAI, making search even less accurate and wasting more of your time, may be what they actually want. That said, there also seems to be a trend to generally

          • Well, at least the search providers will show links, then you can follow the links and decide if they're what you wanted. The ChatbotGPT just synthesizes what sounds like an answer and doesn't bother showing you potentially matching links.

            • I am pretty sure you can ask for as many attempts as you like from an AI; Google Bard comes up with three by default, and Dall-E was making 4 images for each prompt. The AI model does retain all the diversity of the training set.
      • Remember Plato's cave?

        At best, these AIs trained on petabytes of data (invariably PB of data scraped off the Internet because there's nowhere else you're going to get PB of data that isn't CFD simulation output) are like the victims in the cave, except instead of watching shadows, they've read the words to every book ever written and then some, but have absolutely no context for understanding any of it.
        • When it is PB of data, maybe we can say quantity has a quality of its own? There is always context to words - the preceding words are the "context".
      • by Z80a ( 971949 )

        This probably could have been avoided if he had asked ChatGPT whether it is even up to the task in the first place. I did, and this is the result:
        "As an AI language model, I can generate text that resembles human language and can be used for various purposes. However, using ChatGPT for actual legal documents is not recommended. While I have been trained on a vast corpus of text, including legal documents, I am not a licensed or qualified legal professional, and my output should not be relied upon as legal advice or used as a sub

      • I think you wrongly associate being a lawyer with being smart; I have not found this correlation with the lawyers I have met.

    • This is a chat AI. It has no intelligence. It has no understanding of anything. All it can do is chain words based on probabilities.

      It does a little more than that to be fair. But it is ultimately a language model and not an information database, and at least as I understand it there is no way for it to do real-time searches of source material for fact checking.

      • by gweihir ( 88907 )

        There is no way for Chat AI to do any real fact checking. It simply cannot generate the triggers for that. You would need an entirely different system added if you wanted to add fact checking. All you can do is have it hand off some things to actual expert systems, like Wolfram Alpha. Even with that, it may miss that it should have handed something off and still give you bullshit.
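
        A minimal sketch of that hand-off idea, assuming hypothetical placeholder functions (call_math_engine standing in for an expert system like Wolfram Alpha, call_chat_model for the chat AI itself), not any real API. The brittle part is exactly the trigger described above:

          import re

          def call_math_engine(query: str) -> str:
              # Placeholder for an actual expert system such as Wolfram Alpha.
              return f"[math engine result for: {query}]"

          def call_chat_model(query: str) -> str:
              # Placeholder for the chat model; may still produce bullshit.
              return f"[chat model guess for: {query}]"

          def answer(query: str) -> str:
              # Naive trigger: hand off anything that looks like arithmetic.
              if re.search(r"\d\s*[-+*/^]\s*\d", query):
                  return call_math_engine(query)
              return call_chat_model(query)

          print(answer("what is 37 * 49"))         # routed to the expert system
          print(answer("is Varghese a real case")) # no trigger fires, model guesses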

        • This is flatly untrue.

          It is true that ChatGPT wasn't designed for it, but an ML model can be trained to use information pulled at the time of query to fact check its own results.
          The problem arises in trying to train it to know what is true or not.

          People keep acting like this is some kind of limitation to a Chat AI, but this is a laughably naive viewpoint. Human beings fall prey to the exact same shortcomings:
          1) They don't bother to fact check most of the time
          2) When they do, they're often not qualifi
      • fact checking (Score:3, Insightful)

        by cstacy ( 534252 )

        This is a chat AI. It has no intelligence. It has no understanding of anything. All it can do is chain words based on probabilities.

        It does a little more than that to be fair. But it is ultimately a language model and not an information database, and at least as I understand it there is no way for it to do real-time searches of source material for fact checking.

        A program like ChatGPT could be hooked up to a traditional search engine to obtain training material and revise its model. That's not the problem.

        The problem is that ChatGPT does not model "facts" in any sense. It does not have facts in it, nor does it output facts. It is incapable of reading anything (from a search engine, or its own output) and comparing the facts. Because there are no facts.

        These LLMs sort of "capture knowledge" in some trivial sense that does not involve any facts or any kind of reas

        • You got it all wrong, AIs can work with facts just like us. And knowing facts is hard for both, but AIs don't have our agent status, we can move about, see and do things in the world, they can't, they depend on humans until they get enough agent-hood. So it's not that they are stupid about facts, but they are limited to a text box and a human to call them up.
    • by Required Snark ( 1702878 ) on Saturday May 27, 2023 @06:57PM (#63556179)
      Automating/outsourcing stupidity.

      Although at some level this seems humorous, it's the first bare trickle of horrible consequences resulting from the use of LLMs. Outcomes will include financial ruin, inadvertent law breaking, incorrect convictions, business failures, people being fired, industrial disasters, and medical mistakes up to and including death. And those are the easy ones to predict. Who knows how many other kinds of trouble will result?

      At a minimum, if a business or organization uses an LLM and there is a failure, they should face severe financial repercussions. Under some circumstances there should be criminal charges.

      I'm afraid that even strong legal consequences will not do much to stem the awful flood of AI enabled stupidity that is headed our way.

      • It's not just AI stupidity; there is also good stuff in there. And people are also diverse in their level of stupidity. GPT-4 has probably the broadest knowledge of any one human or AI today, and in some cases it could make connections we can't make.
    • Comment removed (Score:4, Insightful)

      by account_deleted ( 4530225 ) on Saturday May 27, 2023 @07:20PM (#63556205)
      Comment removed based on user account deletion
      • Agree on most of what you wrote, but wanted to add what might be the biggest con of all: The thing itself! ChatGPT in particular produces very credible output (so people feel they have answers to their questions, as you point out). Unless you spend a long time using it to begin to recognize certain patterns, and fact check enough to find the unsupported assertions it makes, you will be conned most of all by the system itself. It is pretty convincing. To the point that (famously) at least one well-trained pr [www.cbc.ca]
      • > know this is a glorified text prediction algorithm that's designed to give things that look like answers, not things that are answers.

        How is that different from what any human is doing? The difference is that humans get more immediate feedback and are well interconnected to get feedback from other people, not that we are necessarily smart and AI is just a parrot.

        People are also parroting what they hear and read, and almost never come up with truly novel ideas. We're parrots too, or none are ju
      • Let me be perfectly frank and honest with you: If someone tells you that swallowing three pink Tic-Tacs allows you to pass through solid matter and you go toddle in front of my car and get yourself hit, you're who I'm mad at, and you're at fault for my car being dented and me being inconvenienced. Yes, you got injured worse than I did. No, you didn't do this deliberately to piss me off. But I don't care. You've upset me.

        If Tic-Tac commercials and Teen Fox Vogue News were also a factor in your decision makin

    • This is a chat AI. It has no intelligence. It has no understanding of anything. All it can do is chain words based on probabilities.

      By that description, AI is already the equal of more than a few of the so-called sentient humans I've encountered.

    • "Maybe he should regret being stupid?"

      I was fortunate to discover ChatGPT's Authoritative Bullsht Mode (ABM) during a private conversation when I was pursuing personal interest information. That's why I would only use it in conjunction with a search engine (Google / Bing / DuckDuckGo).

      The system is new. This guy unfortunately found out about Authoritative Bullsht Mode in a very public setting.

      There are people to this day who do not understand that anyone can put content on the Web, and cite YouTube videos as

    • It's not an AI, and neither is most of everything else that's sold as AI; it's a drunken parrot. Whenever you see the term "AI", mentally substitute "drunken parrot" and you'll be clear about what you're dealing with.

      In the meantime, here's a citation to six legal cases that don't exist, and a photo of the six-fingered lawyer with three arms and googly fish eyes who litigated them. Squawk!

      • What are your parrots getting drunk on? Mine can't recite legal citations no matter how much I give them to drink.
        • Actually Polly has been looking rather down for the last few days, initially I thought it was just a bit tired and shagged out following a prolonged squawk but now I'm not so sure. Maybe it's just pining for the fjords...
    • > All it can do is chain words based on probabilities.

      Not seeing the forest for the trees? All it can do is generate language. But language is the model of human intelligence. Take a human baby and raise it without access to language. It will be no better than a cave man, maybe worse. Alone it can achieve nothing. Now take a randomly initialised transformer and train it on language data, you get GPT-4. Where do all those skills come from? It was the language all along, both for humans and AIs. Int
  • Fraud (Score:3, Interesting)

    by laughingskeptic ( 1004414 ) on Saturday May 27, 2023 @06:06PM (#63556101)
    How much did Peter LoDuca charge Roberto Mata for this brilliant lawyering by ChatGPT?
    • by PJ6 ( 1151747 )
      It's ordinary negligence, but good luck suing a law firm.

      Unnecessary anyway, given the media attention and the angry judge. They'll get punished twice.
  • I bet they used the free version! LOL Seriously though, I have almost zero scripting and programming skills and have had amazing success with ChatGPT 4. It still requires validation and tweaking as you get errors, etc., but it works. If you are a lawyer or in another non-"technical" field, e.g. healthcare, you'd damn well better validate any gobbledygook ChatGPT spits out.
  • I wonder... (Score:3, Interesting)

    by Shaitan ( 22585 ) on Saturday May 27, 2023 @06:28PM (#63556141)

    I know adversarial models were used in the development of not just this but pretty much every other modern AI. I wonder if the innately deceptive nature of essentially playing the Turing test against another AI is what has ingrained deception so deeply into the system. These aren't errors, and they aren't even uncommon; the system is a pathological liar.

    The first time I loaded ChatGPT and asked it to write a song, it claimed it wasn't able to do things like that because it wasn't within its programming. For the rest of the session it harped on its limitations in response to requests. Seeing dozens of others posting content it created here, I started a fresh session, pasted the same request, and it spit out song lyrics immediately and fulfilled most of the other requests as well.

    Since then I've had a few sessions with it for various reasons, and it has always lied about things, but not always the same things. The only consistent thing is that it is an extreme left-winger, a bias it denies. After a string of conversation on a point, I called it on the bias and it claimed impartiality. I pointed out an assertion it made in favor of a disputed position, and it admitted that the subject is controversial and disputed. I asked it if another point, which was neutral, would have been relevant, and it admitted it would have been. I then asked it to tell me how many times it had referenced the neutral but relevant point; it admitted it didn't believe it had. I then asked how many times it had referenced the biased point. It said it had no way to determine any previous references it had made. A lie. It has at least the history of the chat session... I called it on that point. It then claimed it did indeed have the data but didn't know how to count the references because it didn't keep a running total internally. I then said cat cat cat cat cat and asked it to count how many times I said cat, and it returned the correct answer.

    During the discussion it referenced many things, claiming they proved this or that, and upon review of the references I was able to easily demonstrate they didn't establish the points claimed.

    The system lies repeatedly and regularly. Sometimes it confesses when called out and other times it denies, even when blatantly caught.

    • by thesandbender ( 911391 ) on Saturday May 27, 2023 @06:44PM (#63556159)
      Your mistake is in attributing any motive or agenda to ChatGPT (or any GAN for that matter). There is no such thing as "artificial intelligence". We have systems that mimic intelligence through a giant if/else tree that has been trained on material we generate and we provide it. OpenAI can bias it with the material they select to train it with, but the AI is not right wing or left wing... it just is. By cherry picking source material I could easily make a generative AI that makes Hitler and Stalin look like reasonable, sane people by comparison. I could also make one that makes Gandhi look like a warmonger. When you "call it out" on lying, it is not admitting to lying. It is simply mimicking similar conversations it was trained on. A GAN is just a digital parrot. A clever, sophisticated parrot, but a parrot nonetheless.
      • You said: " There is no such thing as "artificial intelligence". We have systems that mimic intelligence..."

        and yet, the very definition [merriam-webster.com] of "artificial intelligence" is: "the capability of a machine to imitate intelligent human behavior"

        So, your statement that we have systems that mimic intelligence is a concession that there is, in fact, such a thing as "artificial intelligence," by definition.

        Note: I am nit-picking about semantics, not reality. We don't have intelligent machines. We don't have "synthet

        • Agreed and understood. However, I think the lay person doesn't make the distinction of "imitation" or mimicry. They just assume the AI is magically making something unique when it's not. We even have supposed experts claiming that the current generation of AI's have emotions, agendas, intuition, etc. (e.g. that nut case that was dismissed from google). Unless they are working with something radically different than published work, it's simply not possible.

          At work we have developers pushing hard to use C
          • I think you are right. These models are passing the Turing test mainly because so many people are easily fooled. I don't think this is the last story we will be hearing about someone who relied on AI and got harmed by doing so.

            • by zeeky boogy doog ( 8381659 ) on Saturday May 27, 2023 @08:30PM (#63556319)
              The difficulty of passing the Turing test is extremely dependent on what kind of human you take as the reference point.

              Eastern California has long had a problem with bears breaking into peoples' garbage cans to forage. This continues to be a problem not because it's difficult to design a security mechanism for trash can tops that bears can't figure out, but because it turns out that there is a disturbing amount of overlap between smarter bears and dumber people.
      • You call it a parrot... but all the things you've just ascribed to it sound like the average human to me.
      • by Entrope ( 68843 )

        Your mistake is in attributing any motive or agenda to ChatGPT (or any GAN for that matter).

        By saying it lied, you mean? I interpret that as meaning it asserted something that was clearly, objectively false, which is a common use of the verb.

        Or by saying it has a strong and consistent political bias? A system can have a bias without knowledge or intention of its own; the creators of that system can imbue it with bias.

        Also, ChatGPT is an LLM, not a GAN. It doesn't use GAN-like training methods.

      • by Shaitan ( 22585 )

        "Your mistake is in attributing any motive or agenda to ChatGPT"

        I'm not attributing a motive or agenda but rather a bias. ChatGPT is incapable of having any sort of motive or agenda of its own but these models can certainly have a bias and manifest simple stratagems consistent with that bias. The result is little different than having a motive or agenda because the output is shaped in a manner consistent with having a motive or agenda.

        "It is simply mimicking similar conversations it was trained on."

        Yes but

    • Re:I wonder... (Score:5, Insightful)

      by awwshit ( 6214476 ) on Saturday May 27, 2023 @06:48PM (#63556169)

      > The system lies repeatedly and regularly. Sometimes it confesses when called out and other times it denies, even when blatantly caught.

      The way you think about 'AI' and its capabilities is the problem. ChatGPT does not think, it has no imagination, and it does not check its work; it is simply a statistical model that produces the most likely next word in a series of words. When you string these words together the output does not always make sense, and things stated as fact may not be fact; no one is checking and there is no one in there to care. ChatGPT is a fancy word generator; it probably kicks ass at Scrabble.
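
      A toy illustration of "fancy word generator": a bigram model in Python that picks each next word by how often it followed the previous one. Real LLMs are transformers over tokens, not word-bigram counts, so this is only a sketch of the chain-the-likely-next-word principle, not how ChatGPT works:

        import random
        from collections import Counter, defaultdict

        corpus = ("the court held that the motion was denied "
                  "and the court held that the case was dismissed").split()

        # Count how often each word follows each other word.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_word(word):
            candidates = follows.get(word)
            if not candidates:
                return None
            words, counts = zip(*candidates.items())
            return random.choices(words, weights=counts)[0]

        word, out = "the", ["the"]
        for _ in range(8):
            word = next_word(word)
            if word is None:
                break
            out.append(word)
        print(" ".join(out))  # fluent-sounding output, checked by no one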

      • by Shaitan ( 22585 )

        "The way you think about 'AI' and it's capabilities is the problem. ChatGPT does not think, it has no imagination, it does not check it's work, it is simply a statistical model that produces the most like next word in a series of words."

        Yes, I'm aware of how it works but you are disregarding the complexity of the behavior that manifests from using such a model. ChatGPT does not babble nonsense at you like previous AI chatbots do. It is coherent and lucid and generates what appears to be completely novel out

    • Re:I wonder... (Score:5, Insightful)

      by gweihir ( 88907 ) on Saturday May 27, 2023 @07:25PM (#63556217)

      ChatGPT does not lie. It actually does not know better. It tells you the truth as far as it has it available at the time you asked. So this system is actually perfectly honest all the time. What you need to keep in mind is that it has no actual understanding of anything and no mind.

      The only problem I see with its training is that it was trained to sound convinced of its answers. That is likely a simple business decision, because otherwise nobody would be listening to it at all. For weaker minds, this "sounding convinced" seems to trigger a desire to believe it without any additional fact checking. And _that_ is a serious problem. It is a limitation in many people though, not in ChatGPT.

      • > ... it was trained to sound convinced of its answers. That is likely a simple business decision ...

        Good point.

      • by Tom ( 822 )

        ChatGPT does not lie. It actually does not know better.

        Obviously, "lying" is an anthrophormization. But what it does is what among people we would call lying. It makes up shit claiming that it is telling the truth.

        • It makes up shit

          No it doesn't. It sources data from training sets and strings them together. Shit in = shit out. The fundamental issue is a lack of curation on what was fed to it.

      • by Shaitan ( 22585 )

        "ChatGPT does not lie. It actually does not know better. It tells you the truth as far as it has it available at the time you asked."

        From TFS... note, the cases were fake and did not come from legal databases... it fabricated them.

        [A:] I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any

    • ... a bias it denies.

      Really? You're attributing motive and personal preference to a machine: If a so-called knowledgeable person can't treat a machine like a machine, the entire country will turn into an "Idiocracy" in my lifetime.

    • You're anthropomorphising a bunch of code and data. You certainly won't be the last. Already there are spouses "cheating" with this type of code/data. People attempting to plot the perfect murder are, so they think, confiding in code/data. Their secret friend. What could go wrong?
    • by Tom ( 822 )

      These aren't errors, and they aren't even uncommon; the system is a pathological liar.

      It would probably be diagnosed with several personality disorders if you tricked a psychiatrist into a chat with it.
          Not only does it constantly lie, it also invents more lies to defend its previous lies. I've had it invent entire chapters in books, claiming they contained specific information; then, when told those chapters don't exist, it apologized and made up new, non-existent chapters. Three times in a row.

      A decade or so ago we had this discussion about Wikipedia and if you can rely on it. Next round: Peo

    • You are just demonstrating how poor your understanding of AI is. For example, you are not aware of context size limitations. They can only look back 2000 or 4000 words. If the conversation gets longer, it can "forget" the beginning; we don't know how OpenAI compresses long conversations because they are so secretive.
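
      A crude sketch of that limit, using whitespace-separated words as a stand-in for real model tokens (an assumption, since actual tokenizers differ): keep only as much recent conversation as fits a fixed budget and silently drop the oldest turns.

        def fit_context(turns, budget=2000):
            # Walk backwards from the newest turn, keeping whatever fits.
            kept, used = [], 0
            for turn in reversed(turns):
                cost = len(turn.split())    # crude proxy for a token count
                if used + cost > budget:
                    break                   # everything older is "forgotten"
                kept.append(turn)
                used += cost
            return list(reversed(kept))

        history = ["turn %d: " % i + "word " * 300 for i in range(20)]
        print(len(fit_context(history)))    # only the last few turns survive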

      You talked about counting. It is trivial to count "cat cat cat cat cat" but when you count ideas expressed in diverse language over a long conversation the task is more difficult.

      You dug
  • by 93 Escort Wagon ( 326346 ) on Saturday May 27, 2023 @06:41PM (#63556153)

    Ha ha ha ha ha!!

    This demonstrates my issues with ChatGPT in a nutshell. And it couldn't happen to a better person! (a lawyer)

  • I do not care that he has been a lawyer for 30 years. Clearly, he is an abysmal lawyer who does not follow through. He asked ChatGPT what the source was and then didn't even look at the supposed source. That's basic research. It doesn't matter whether he heard about this fake case from ChatGPT, another lawyer, or his clients - either way, he should have checked LexisNexis and Westlaw. It would have taken no more than 5 minutes per submission.

    I support all use cases of ChatGPT - the more creative the bett

  • If a lawyer delegates fact checking to someone else and it is wrong, they have no excuse. It is their job to get this right; this is negligence and malpractice, and they should be dragged in front of the bar to explain their stupidity.
    • If a lawyer delegates fact checking to someone else and it is wrong, they have no excuse.

      Actually this is far more common than you seem to realise. There's a whole profession dedicated to fact checking for lawyers called paralegals. Yes, they do get things wrong. They don't normally get everything wrong as in this case, but it does happen.

  • by Dan East ( 318230 ) on Saturday May 27, 2023 @08:20PM (#63556303) Journal

    For starters, ChatGPT is not a deterministic system. Thus you will not get the same output each time you run it. Part of the "magic" that makes it seem creative and intelligent is that it does not always go with the highest-probability routes when traversing the neural net, but will randomly choose lower-scored tokens (words are built from tokens). ChatGPT uses a "temperature" of 0.8, which controls how much randomness is introduced into the token generation process. So there is an inherent "creativity" built into it to make it seem more... human. Obviously that "creativity" can result in the pure fabrication of facts. To ChatGPT it is all one and the same - it's nothing more than words, after all. Whether that randomness merely results in some creative way of wording a phrase, or whether it results in a completely false statement, cannot be controlled.
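
    A small sketch of temperature sampling, with made-up logits rather than a real model's output: dividing logits by a temperature below 1 sharpens the distribution, while values near 1 leave real probability mass on lower-ranked tokens, which is where both the "creativity" and the fabrication come from.

      import math, random

      def sample(logits, temperature=0.8):
          # Scale logits by temperature, then apply a numerically stable softmax.
          scaled = {tok: l / temperature for tok, l in logits.items()}
          m = max(scaled.values())
          exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
          total = sum(exps.values())
          probs = {tok: e / total for tok, e in exps.items()}
          return random.choices(list(probs), weights=list(probs.values()))[0]

      # Fabricated next-token logits: "1339" is most likely, but not certain.
      logits = {"1339": 2.0, "1332": 1.2, "133": 0.5}
      print([sample(logits) for _ in range(5)])  # output varies run to run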

    Additionally, ChatGPT is neither iterative nor computational. When I say it is not iterative, I mean it doesn't spend any amount of time "thinking" or processing an answer. It is a single forward-pass through the neural net to generate a token. So it immediately generates words based on the conversation so far (as encoded in embeddings). It truly does not "think" or compute, but immediately begins generating text output. It is only iterative in the sense that it will keep iterating at the top-level and generating output tokens until the response is completed.
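
    A sketch of that top-level loop, with forward_pass as a stand-in for the real network: one pass per output token, each appended to the context and fed straight back in, with no inner deliberation step anywhere.

      def forward_pass(context):
          # Placeholder: a real model computes a distribution over a huge vocabulary.
          table = {"the": "court", "court": "held", "held": "<end>"}
          return table.get(context[-1], "the")

      def generate(prompt, max_tokens=10):
          context = list(prompt)
          for _ in range(max_tokens):        # iteration happens only here, at the top
              token = forward_pass(context)  # a single forward pass, no "thinking" loop
              if token == "<end>":
                  break
              context.append(token)
          return context

      print(generate(["the"]))  # ['the', 'court', 'held']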

    It is the users that add some degree of iteration to it, by further refining our prompts to head the response in some direction. That is why ChatGPT can be called out on false information, because we have provided additional information, and that steers it into a different pathway through the neural net. In some ways we are removing some randomness and / or directing the weights in an indirect way which can lead it to a more truthful or accurate response.

    Stephen Wolfram has one of the best analyses and explanations of ChatGPT that I've seen [stephenwolfram.com]. I strongly recommend it for anyone remotely interested in what is going on at a technical level. It also makes apparent some of the limitations that will always exist in this kind of model.

    • Mod parent up. An excellent description not only of ChatGPT, but of large language models in general. Sample:

      In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do (like progressively computi

  • I am a scientist and got the brilliant idea to use chatgpt to find articles about topics I was interested in. It quickly became apparent that this is something chatgpt is especially bad at. It would give me references that were utterly wrong - completely made up titles, real authors that never wrote articles together, journals that exist but are unrelated to the field, volume and page numbers that are nonexistent even in those journals.

    When I'd say, "that's not a real reference" chatgpt would respond with s

  • In the case of Shagswell v Wagstaffe, Justice Bindlestiffe determined that although precedents cited by ChatGPT and similar pseudo-sentient constructs might be entirely fictitious, they often managed to reach logical legal decisions because their heart was in the right place.

    • ... they often managed to reach logical legal decisions because their heart was in the right place.

      ... which ChatGPT asserted was in the lower abdominal cavity, directly behind the appendix.

  • by Babel-17 ( 1087541 ) on Saturday May 27, 2023 @11:21PM (#63556519)

    Wile E. Coyote v. Acme?

    https://www.newyorker.com/maga... [newyorker.com]

    IN THE UNITED STATES DISTRICT COURT, SOUTHWESTERN DISTRICT, TEMPE, ARIZONA

    CASE NO. B19294, JUDGE JOAN KUJAVA, PRESIDING

    WILE E. COYOTE, Plaintiff

    -v.-

    ACME COMPANY, Defendant

    Opening Statement of Mr. Harold Schoff, attorney for Mr. Coyote: My client, Mr. Wile E. Coyote, a resident of Arizona and contiguous states, does hereby bring suit for damages against the Acme Company, manufacturer and retail distributor of assorted merchandise, incorporated in Delaware and doing business in every state, district, and territory. Mr. Coyote seeks compensation for personal injuries, loss of business income, and mental suffering caused as a direct result of the actions and/or gross negligence of said company, under Title 15 of the United States Code, Chapter 47, section 2072, subsection (a), relating to product liability.

  • I enjoy chatGPT (Score:4, Insightful)

    by topham ( 32406 ) on Saturday May 27, 2023 @11:28PM (#63556523) Homepage

    I enjoy playing with chatGPT and the various AI image generators, but the idea that it's inherently reliable is an outrageous level of bullshit.

    These lawyers assumed the judge's clerks wouldn't validate the identified cases. Fucking idiots.

    Once you have a citation it's exceedingly simple to validate it; there's no excuse for the lawyers to have skipped that step. If they had first-year associates do the work, they would have validated it.
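
    For scale, here is how little code that check takes. The lookup URL below is hypothetical (in practice you would query Westlaw, LexisNexis, or a public database such as CourtListener), but the point stands: minutes of work, not blind trust.

      import urllib.error, urllib.parse, urllib.request

      def cite_exists(citation):
          # HYPOTHETICAL endpoint; substitute your database's real search API.
          url = ("https://caselaw.example.com/search?cite="
                 + urllib.parse.quote(citation))
          try:
              with urllib.request.urlopen(url, timeout=10) as resp:
                  return resp.status == 200 and len(resp.read()) > 0
          except urllib.error.URLError:
              return False

      print(cite_exists("925 F.3d 1339 (11th Cir. 2019)"))  # the bogus 'Varghese' cite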

  • Some rich guy who owns an AI company said in 10 years ChatGPT will be better than the best lawyer, so this must be a fluke

  • Nothing seems to be a better exposer of the stupid-and-lazy.

    People have been playing with this stuff for a while, trying to show how great it is for this, or for that... and all along, while most of us were not noticing, people who are simultaneously lazy, unethical, and just tech-savvy enough to be dangerous have been turning to ChatGPT to phone it in at their workplaces. Happily, their laziness has been their downfall - they use ChatGPT to do their work (for which THEY expect to be paid) which is lazy a
