The Courts AI

Two Lawyers Fined For Submitting Fake Court Citations From ChatGPT

An anonymous reader quotes a report from The Guardian: A US judge has fined two lawyers and a law firm $5,000 after fake citations generated by ChatGPT were submitted in a court filing. A district judge in Manhattan ordered Steven Schwartz, Peter LoDuca and their law firm Levidow, Levidow & Oberman to pay the fine after fictitious legal research was used in an aviation injury claim. Schwartz had admitted that ChatGPT, a chatbot that churns out plausible text responses to human prompts, invented six cases he referred to in a legal brief in a case against the Colombian airline Avianca.

The judge P Kevin Castel said in a written opinion there was nothing "inherently improper" about using artificial intelligence for assisting in legal work, but lawyers had to ensure their filings were accurate. "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance," Castel wrote. "But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." The judge said the lawyers and their firm "abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question."
Levidow, Levidow & Oberman said in a statement on Thursday that its lawyers "respectfully" disagreed with the court that they had acted in bad faith. "We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth," it said.

Comments Filter:
  • by david.emery ( 127135 ) on Friday June 23, 2023 @03:48PM (#63627310)

    Levidow, Levidow & Oberman said in a statement on Thursday that its lawyers “respectfully” disagreed with the court that they had acted in bad faith. “We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” it said.

    Considering the absolute obligation on lawyers to check their citations, this firm and the lawyers who filed the original brief should be considered for disbarment. And statements like this that fail to show any understanding or contrition for how badly they f'd up should be part of the justification for disbarment.

    Shouldn't ChatGPT come with an explicit disclaimer that nothing produced by ChatGPT can be considered as accurate?

    The only good that could come from this is if this law firm decides to advocate for legal liability for software. Even then, under just about any liability regime I can imagine, submitting an unchecked brief would still be an act of malpractice. (After all, lawyers are supposed to read and understand the citations they use, to make sure they're relevant.)

• In a related development, Mr Wile E Coyote announced the retention of Levidow, Levidow & Oberman for claims against the Acme Corporation, including false advertising, negligence, and breach of contract.

    • by SuperKendall ( 25149 ) on Friday June 23, 2023 @04:01PM (#63627360)

      Considering the absolute obligation on lawyers to check their citations, this firm and the lawyers who filed the original brief should be considered for disbarment.

My wife is a lawyer, and she pointed out that not only should they have been checking whether the cases exist, they should also have been reading through them to make sure the arguments made (which were probably generated by ChatGPT as well) are supported by the cases being cited... so they failed to an even greater depth, because obviously they didn't even read through the citations that did exist to make sure they supported the arguments!

• they failed to an even greater depth because obviously they didn't even read through citations that did exist to make sure they supported the arguments

        You're looking at it entirely the wrong way.

        What they did was bet on the judge being even lazier than they were and they lost.

        • If their damage is limited to $5000, they won.
        • Lol. Exactly. Also, surely there is an AI that will fact check your AI generated content.
• That's incorrect. They were betting on THE DEFENDANT being lazier than them. It's not the judge's job to spell check. It was the airline that replied with "what the fuck is this?", and then it was the judge's job to say, "What the fuck did you do?"
    • Considering the absolute obligation on lawyers to check their citations, this firm and the lawyers who filed the original brief should be considered for disbarment.

      Maybe they used ChatGPT to do the citation check?

      • by Dwedit ( 232252 )

        Actually, they did exactly that.

• Also, when pressed, ChatGPT made up the citations, which the lawyer did not check against current online law databases or offline sources like the physical law book the citation claimed to have originated from.
    • by MobyDisk ( 75490 )

      The only good that could come from this is if this law firm decides to advocate for legal liability for software.

      Oh that's JUST what we need - lawyers who not only don't have to do real research, but aren't culpable for their laziness!

    • AI makes stuff up. If you're too stupid to see that, you're too stupid to hold down a job, IMO.
      • by HiThere ( 15173 )

To be fair, this happened a while ago, when it wasn't so well known that chatbots fantasize. But they still should have checked the citations.

• I'm usually one for giving the benefit of the doubt, but it's pretty goddamned hard to construe any situation where they thought a chatbot would be knowledgeable about outside events. That pig-farming rodeo-clown who asked the chatbot if his students had used it was at least acting under the assumption that it would know whether someone had used it. This situation? Explain to me how a lawyer could feasibly think a tool like a chatbot would have knowledge of confidential statements.
        • Especially since it is trivially easy to locate a case with citations. It would have taken them seconds to use Westlaw or LexisNexis to find out the cases were not real.
• The law firm is lucky to have only been fined $5,000 and not sanctioned further or by other means. The court could have imposed more restrictive punishments, such as requiring every future filing from these lawyers to be checked by an independent lawyer outside the firm, paid for by the firm.
• That's the minimum retainer fee most of the competent lawyers in my area charge individuals before they'll let you schedule an appointment, and it doesn't include anything toward billable hours, so it's just a door fee.

  • ... the fines are very, very real.

  • Lawyers have been out-bullshat - and even better, taken for a ride - by a stupid machine! Serves them right.

  • ... but it's just BS.
    • by HiThere ( 15173 )

      It's not BS, because when you BS you know you're fabricating stuff. Some meanings of hallucination are about right, but I prefer fantasize.

• "Fantasize" sounds too intentional. "Hallucination" implies pathology. BS can mean you do not care, and I doubt LLMs care. So, it's really BS.
  • $ 5,000 to a lawyer is just pocket change. The corrupt court wanted a painless fine for their corrupt lawyer buddies.
    • by HBI ( 10338492 )

      Throw a couple more zeroes on that and the lawyers would be tempted to appeal.

      The judge is metering his approach to avoid that.

• So they would appeal. What are they going to argue, that they didn't actually do what they admit to doing? The most they could really hope for is to argue that $500,000 is unreasonable and they should pay less, and they would still likely end up paying far more than $5,000. Kind of like how companies are fined a billion dollars and end up paying a few million.

        Starting out with a fine of $5,000 is still just a slap on the wrist. That sets a bad precedent.

        • by HBI ( 10338492 )

          No judge is particularly convinced that their decisions won't be reversed on appeal. Even something that seems as obvious as this.

          He may also be constrained a bit by statute in terms of sanctions.

  • by devloop ( 983641 ) on Friday June 23, 2023 @05:01PM (#63627500)
    The SCOTUS ruled 17 to 1 in favor of Levidow, Levidow & Oberman having the constitutionally protected right to use AI created case law.
    Chief Justice Jayden Guevara-Smith wrote the majority decision, emphasizing that "fictitious AI generated court cases are protected under the 38th amendment."
    US v Schwartz, LoDuca et al, 2013 SCOTUS 206.
  • Seriously, just $5000? Need to set some examples to nip it in the bud.
• ...and ChatGPT diagnosed a fictitious illness in a patient... would the doctor just be fined $5,000, or would they be fired?

    I mean, if the judge hadn't caught the LIES made in an accusation, an innocent person might have lost years (or even their life) inside a prison...

  • Defending attorney was cross examining a coroner.
    The attorney asked, "Before you signed the death certificate had you taken the man's pulse?"
    The coroner said, "No."
    The attorney then asked, "Did you listen for a heart beat?"
    "No."
    "Did you check for breathing?"
    "No."
    "So when you signed the death certificate you had not taken any steps to make sure the man was dead, had you?"
    The coroner, now tired of the brow beating said, "Well, let me put it this way. The man's brain was sitting in a jar on my desk, but for al

  • Some background (Score:4, Insightful)

    by UnknowingFool ( 672806 ) on Friday June 23, 2023 @11:06PM (#63628112)

    Legal Eagle [youtube.com] covered this and Lawful Masses [youtube.com] read the sanctions transcript on camera.

    The lawsuit:
A man named Roberto Mata claims that on an August 2019 Avianca flight 670 from San Salvador to New York City, a flight attendant struck him with a serving cart, leaving him with severe and lasting injuries. Mata hired the law firm of Levidow, Levidow & Oberman (LLO), which filed suit against Avianca in February 2022 in New York state court.

    Complications:
Avianca is a Colombian airline, and it petitioned the state court to move the case to federal court under the Montreal Convention [iata.org] treaty, as this was an international flight and Mata was a resident of New York. Adding to the complications, Avianca had filed for bankruptcy in May 2020. Originally, Steven Schwartz was the lead attorney for LLO; however, he was licensed to practice law only at the New York state level, not the federal level. While he could work on the case, court filings had to be made by an attorney who was licensed. That is where Peter LoDuca, who was licensed, came into the case.

    Avianca's Motion to Dismiss:
In February 2023, Avianca filed a motion to dismiss citing two defenses that should have spelled doom for the suit. First, since Avianca filed for bankruptcy in May 2020, Mata missed his window to file as a creditor with the bankruptcy court. Second, Article 35 of the Montreal Convention specifically limits passengers to 2 years from the date they arrived at their destination to file a claim; Mata should have filed no later than August 2021.

    The infamous ChatGPT response:
Peter LoDuca then filed an opposition response for LLO that cited fictional cases which, on the surface, said that New York's 3-year limitation supersedes the Montreal Convention's 2-year limitation and overcame the bankruptcy restrictions. Avianca's law firm, Condon & Forsyth, which specializes in aviation law, responded that it either could not find the cited cases or found that they did not stand for what LLO said they did. While locating a case that supports a legal position might be difficult, it is trivially easy to look up a case by its citation.

    LLO has some explaining to do:
In April 2023, the judge issued a show cause order to LoDuca to produce the cases he cited. After receiving a week's delay due to a vacation, LoDuca responded that he could not locate at least one of the cases at all. LLO was able to locate some of the cases only online (from ChatGPT), but not in official sources like LexisNexis, Westlaw, or a law book. The cases that were submitted had glaring problems: wrong formatting, changing fonts, weird wording (not legal wording), and a suspicious length of only 5 pages each. Avianca responded with a letter to the court saying the authenticity of the cited cases was "questionable".

    Sanctions hearing (what had happened was . . .):
The judge also could not find the cases, and he personally called the 11th Circuit, where the fake cases supposedly originated, to verify that they did not exist. The judge set a sanctions hearing for June on why LoDuca should not be sanctioned. When questioned by the court, LoDuca admitted that Schwartz wrote the filings even though LoDuca signed them. While this is not unusual, lawyers are responsible for everything they sign and attest to the court. Under questioning, Schwartz first admitted he had used ChatGPT to do some legal research, before admitting it was all of his research in this case. Ironically, Schwartz testified that he asked ChatGPT to verify that the cases were real, and the AI responded that they were. When gathering the cases ordered by the court, neither Schwartz nor LoDuca checked other sources; they instead asked ChatGPT for them, and the AI simply generated the cases.

    Many, many sanctionable transgressions:
    The sanctions for each lawyer and the law fi
