The Courts | AI

Lawyer Cited 6 Fake Cases Made Up By ChatGPT; Judge Calls It 'Unprecedented' (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: A lawyer is in trouble after admitting he used ChatGPT to help write court filings that cited six nonexistent cases invented by the artificial intelligence tool. Lawyer Steven Schwartz of the firm Levidow, Levidow, & Oberman "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity," Schwartz wrote in an affidavit (PDF) on May 24 regarding the bogus citations previously submitted in US District Court for the Southern District of New York.

Schwartz wrote that "the use of generative artificial intelligence has evolved within law firms" and that he "consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed." The "citations and opinions in question were provided by ChatGPT which also provided its legal source and assured the reliability of its content," he wrote. Schwartz admitted that he "relied on the legal opinions provided to him by a source that has revealed itself to be unreliable," and stated that it is his fault for not confirming the sources provided by ChatGPT. Schwartz didn't previously consider the possibility that an artificial intelligence tool like ChatGPT could provide false information, even though AI chatbot mistakes have been extensively reported by non-artificial intelligence such as the human journalists employed by reputable news organizations. The lawyer's affidavit said he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."

Federal Judge Kevin Castel is considering punishments for Schwartz and his associates. In an order on Friday, Castel scheduled a June 8 hearing at which Schwartz, fellow attorney Peter LoDuca, and the law firm must show cause why they should not be sanctioned. "The Court is presented with an unprecedented circumstance," Castel wrote in a previous order on May 4. "A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations." [...] In the same Friday order, Castel said that Schwartz may be sanctioned for "the citation of non-existent cases to the Court," "the submission to the Court of copies of non-existent judicial opinions," and "the use of a false and fraudulent notarization." Schwartz may also be referred to an attorney grievance committee for additional punishment.
Castel wrote that LoDuca may be sanctioned "for the use of a false and fraudulent notarization in his affidavit filed on April 25, 2023." The law firm could be sanctioned for "the citation of non-existent cases to the Court," "the submission to the Court of copies of non-existent judicial opinions annexed to the Affidavit filed on April 25, 2023," and "the use of a false and fraudulent notarization in the affidavit filed on April 25, 2023."
This discussion has been archived. No new comments can be posted.

  • But this story is OLD now.

  • by Baron_Yam ( 643147 ) on Tuesday May 30, 2023 @05:03PM (#63562397)

    The guy should be banned from practising law as he's proven he's criminally negligent in his duty to properly represent clients.

    • by ffkom ( 3519199 )
      I rather expect standards will be lowered to allow usage of AI in practicing law. At least that is what seems to happen in most other professions where AI has popped up recently.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Tuesday May 30, 2023 @05:22PM (#63562441)
      Comment removed based on user account deletion
      • by gweihir ( 88907 )

        Some tech people here understand the limitations quite well. A lot of others pray to yet another surrogate God found in technology.

        The thing about replacing jobs is that a lot of low-level white-collar jobs can be done by a well-spoken moron with some minimal factual knowledge. And that seems to be quite within the range of a slightly augmented "Chat AI". And since these are low-level white-collar jobs, i.e. not producing anything, there will not be any new jobs created by additional demand. Any job that requires a

      • by hey! ( 33014 )

        AI obviously isn't going to do all the programming, but it sure is going to change the nature of programming as a job.

      • Re:At the very least (Score:5, Interesting)

        by Junta ( 36770 ) on Tuesday May 30, 2023 @07:16PM (#63562687)

        I'm a little less worried about whether AI can do all the programming so much as I'm worried that business folk will *think* it can. That means layoffs for my colleagues or myself, a huge burden on the remainder, and dealing with management "fixing" it by hiring "prompt engineers" and insisting those prompt engineers contribute an obvious amount to the projects.

        Business leaders don't have the best track record of making the best decisions in the face of massive amounts of marketing hype telling them the "right" way to go.

        • That's why I exercise my prompts on various AI solutions.
          If management decides to fire me, they will come back begging within a couple weeks tops, opening up the opportunity for me to charge them external consultant fees.

        • They'll find out one way or the other. And if it can't do the job in a particular case, they'll fix it and try to get the bot to do it better next time. None of the outcomes results in going back to hiring more programmers and ditching the AI.
        • Where I work an executive decision was made to ban most AI tools because they represent a data exfiltration security risk. We only allow certain ones and strictly govern their use, and we only allow them to be used at all because they help people be more productive. Nobody, at any level, is under any illusions of them replacing any jobs.

      • It's enough to look at machine-based language translation solutions.
        How many years since they popped up? A decade? More? They still can't translate properly from one language to another.
        On the other hand, many companies switched from human translation to automated translation for their websites, with hilarious (and occasionally dangerous) results. So... jobs were lost, or, at least, not gained.

        I, for one, started to branch out years ago. And I mean really branch out. Examples include but are not limited to

        • AltaVista's Babelfish launched about 25 years ago, so quite a bit more than 10 years ago. Google Translate launched 17 years ago.

      • "AI will do all the programming" - Slashdotters panic, "They're taking our jerbs!"

        "AI wrote bogus thing that someone relied on who wasn't technically skilled and didn't know the limitations" - Slashdotters point and laugh at the "idiot" who doesn't understand "AI" isn't capable of doing what they claim.

        If tech people don't understand the limitations, how are non-tech people supposed to understand it? And why are you all calling this person an idiot when you're panicking every time the idea that your own job will be replaced comes up?

        Both can be true. AI will replace programmers AND a particular instance of AI spews nonsense with regularity. The thing to be mindful of is the trendline of technical capability, not getting caught up in what today's systems can and can't do.

        Nobody knows what will actually happen with this technology. It was never expected in advance that something with the capabilities of GPT-4 could arise from such a small ANN, and most tech d00ds are not at all confused about the fallibility of present-day chat bots.

    • Is it criminally negligent? I think disbarring him is a bit too much. I suspect that, like many people, he thought using AI would help with this sort of thing, and I can't blame him for that. The critical mistake was not double-checking the result. Given how this has made the news, I think the bad PR is punishment enough. He won't be making that mistake again, and this served as a warning to others that (currently) ChatGPT is not helpful for finding case precedents.

      ChatGPT turns out to be especially unhel

      • >I think the bad PR is punishment enough.

        So that people who aren't aware of his past can end up with a lazy lawyer who takes shortcuts and does sub-standard work?

        This is what disbarment should be for: so that potential customers know they're getting someone who has met the minimum standard for the industry, because anyone who doesn't loses their license.

        • by JoeRobe ( 207552 )

          I see what you're saying, but the question is whether this is sufficiently bad that the guy should lose his career over it, because that's what disbarment would mean. Should he go flip burgers because he made the mistake of trusting ChatGPT for legal precedent references? To me, at least, this doesn't rise to that level. But then again, I'm not a legal scholar or able to disbar anyone, so what the hell do I know?

          If this was the tenth time he's done this sort of thing - if they went back and found a hist

      • by rpnx ( 8338853 )
        ChatGPT is plenty good for finding case precedents, just not verifying they exist.
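
The verification step this thread keeps circling back to is mechanical and cheap compared to a sanctions hearing. A minimal sketch in Python, assuming a hypothetical case_exists() lookup standing in for a query against a real legal database (no actual Westlaw or LexisNexis API is implied):

    # Illustrative only: case_exists() is a hypothetical stand-in for a
    # query against an authoritative legal database.
    KNOWN_CASES = {
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
    }

    def case_exists(citation: str) -> bool:
        # In practice this would query Westlaw, LexisNexis, or the court's
        # own records, never the chatbot that produced the citation.
        return citation in KNOWN_CASES

    def vet_citations(citations: list[str]) -> list[str]:
        # A model's own assurance that a case is real counts for nothing.
        unverified = [c for c in citations if not case_exists(c)]
        if unverified:
            raise ValueError(f"Do not file; unverifiable citations: {unverified}")
        return citations

Run against the six citations in this story, a check like this would have raised the error before anything was filed.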
  • by klipclop ( 6724090 ) on Tuesday May 30, 2023 @05:11PM (#63562423)
    Not realizing that the chatbot already covered this topic in a past article.
  • I don't imagine it's rare to catch lawyers being negligent with their sources. In fact, it's probably a healthy jolt to legal skepticism that he was caught.

    On the other hand, the creation of an entire "ecosystem" of fraudulent jurisprudence has the smell of totalitarian history-revision, so the response should be harsh and proactive.
    • by gweihir ( 88907 ) on Tuesday May 30, 2023 @05:44PM (#63562505)

      Usually you have paralegals doing the fact-checking and the finding of the rulings. They are a lot cheaper than lawyers, but they still need to be paid. This guy went cheap without understanding what he was doing. An obvious case of legal malpractice.

    • I don't imagine it's rare to catch lawyers being negligent with their sources.

      Actually, it's incredibly rare.

      Because the core of just about any legal argument is, look at what these other cases decided in similar circumstances.

      And, you know that whatever judge is looking at this is going to look at every case (well really his clerks or other legal staff) you cite VERY closely, which is easy to do as every lawyer on Earth has access to Westlaw.

      You are simply never, ever going to get away with citing a fake ca

      • Hence my concern about an "ecosystem of lies." If anyone is able to get by with abusing this tool in a legal setting, sooner or later one of them might be a judge and allow it, which passes the buck to appeals courts...and the poison spreads from there. But for the moment, simple diligence would seem to be enough protection.
  • Based on what I read about the lawyer's interactions with ChatGPT, it would appear to qualify as a pathological liar. That should make passing the Turing Test a snap. Maybe we should run it for president. It would fit right in.
    • It isn't a pathological liar, just as it is not intelligent. What is correct to say is that its responses can resemble what a pathological liar would say.
      • and for the same reason. It is just words strung together in a way that is statistically likely to have some meaning.
      • So, just like a president!
      • So, it is simulating a liar. It apparently not only made up the fake information, but then assured the lawyer that it was in fact real and could be found in various legal databases. That is pretty sophisticated. I expect that politicians will be using it to create their campaign materials in the next election. The ability to make up convincing "facts" to support your position would be ideal. For some reason, no one ever seems to check whether the claims are true. See George Santos for an example.
    • An AI is just a statistical approach to common social engineering hacks, so it's not really lying so much as answering the questions of its owners rather than the people interacting with it. The ones who built and deployed it are the liars.
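
To make the "statistically likely words" point from this thread concrete, here is a toy bigram sampler in Python. The corpus is invented and real LLMs are vastly larger neural models, but the failure mode is the same in spirit: fluent-looking output with no notion of truth anywhere in the loop.

    import random
    from collections import defaultdict

    # Toy corpus: the sampler only learns which word tends to follow which.
    corpus = ("the court held that the motion was denied and "
              "the court found that the case was dismissed").split()

    # Count bigram successors.
    successors = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev].append(nxt)

    # Generate plausible-sounding text purely from likelihood, not fact.
    word, output = "the", ["the"]
    for _ in range(10):
        word = random.choice(successors.get(word, corpus))
        output.append(word)
    print(" ".join(output))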
  • by gnasher719 ( 869701 ) on Tuesday May 30, 2023 @05:31PM (#63562463)
    This case here needs the harshest possible punishment. When case law is cited, the judge must be able to assume that such citations are not fraudulent. So when they are, the punishment should be harsh enough that nobody will ever try it again.
  • by Mononymous ( 6156676 ) on Tuesday May 30, 2023 @05:33PM (#63562467)

    I know judges like to issue cute rulings, but they could spare us the dad jokes.

  • by aldousd666 ( 640240 ) on Tuesday May 30, 2023 @06:04PM (#63562539) Journal
    If you're enough of an idiot to cite ChatGPT's output and use it as your own original work, the invisible hand will claim you one way or another. Either when someone finds you liable for malpractice/holds you in contempt, or you know, people will just start to think you're too stupid to hire as a lawyer.
  • by Nkwe ( 604125 ) on Tuesday May 30, 2023 @06:31PM (#63562593)
    If a law firm used a human attorney, legal clerk, or whatever to do the research and that person came up with bogus citations, and that bad info was submitted to the court, what penalty should the court assign to the lawyer(s) presenting the information in court? That is the sanction that should be handed out. From the court's point of view, why should it be relevant if the source of the bad info was ChatGPT or any other person, entity, or thing that doesn't have a law degree / bar certification?
  • The big, big problem with current chatbot technology is that it lacks any mechanism to distinguish truth from fiction. It relies on predictive extrapolation to form output, but that is insufficient by itself without adequate analysis of meaning. The LLM approach is a flawed concept: it merely searches for vectors close to a given vector, and that is not enough because it inherently looks at form, not meaning. I look at the race between Google, OpenAI, Microsoft, Facebook, and see that they are all
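
The "vectors close to a given vector" description above is roughly how embedding similarity search works. A minimal sketch with invented toy vectors (real systems use learned embeddings with hundreds or thousands of dimensions), showing that closeness in vector space measures resemblance of form, not whether something exists:

    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    # Made-up embeddings: a fabricated case can sit right next to a real one.
    embeddings = {
        "real case": [0.90, 0.10, 0.30],
        "fake but plausible case": [0.88, 0.12, 0.31],
        "grocery list": [0.10, 0.90, 0.20],
    }
    query = [0.89, 0.11, 0.30]
    for name, vec in embeddings.items():
        print(f"{name}: {cosine_similarity(query, vec):.4f}")
    # The fake case scores almost as high as the real one.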
  • by FudRucker ( 866063 ) on Tuesday May 30, 2023 @07:58PM (#63562769)
    To check if the cases were real, too late for him now
