Two Lawyers Fined For Submitting Fake Court Citations From ChatGPT
An anonymous reader quotes a report from The Guardian: A US judge has fined two lawyers and a law firm $5,000 after fake citations generated by ChatGPT were submitted in a court filing. A district judge in Manhattan ordered Steven Schwartz, Peter LoDuca and their law firm Levidow, Levidow & Oberman to pay the fine after fictitious legal research was used in an aviation injury claim. Schwartz had admitted that ChatGPT, a chatbot that churns out plausible text responses to human prompts, invented six cases he referred to in a legal brief in a case against the Colombian airline Avianca.
The judge P Kevin Castel said in a written opinion there was nothing "inherently improper" about using artificial intelligence for assisting in legal work, but lawyers had to ensure their filings were accurate. "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance," Castel wrote. "But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." The judge said the lawyers and their firm "abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question." Levidow, Levidow & Oberman said in a statement on Thursday that its lawyers "respectfully" disagreed with the court that they had acted in bad faith. "We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth," it said.
"grossly negligent in -good faith-?" (Score:5, Insightful)
Levidow, Levidow & Oberman said in a statement on Thursday that its lawyers “respectfully” disagreed with the court that they had acted in bad faith. “We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” it said.
Considering the absolute obligation on lawyers to check their citations, this firm and the lawyers who filed the original brief should be considered for disbarment. And statements like this that fail to show any understanding or contrition for how badly they f'd up should be part of the justification for disbarment.
Shouldn't ChatGPT come with an explicit disclaimer that nothing produced by ChatGPT can be considered as accurate?
The only good that could come from this is if this law firm decides to advocate for legal liability for software. Even then, under just about any liability regime I can imagine, submitting an unchecked brief would still be an act of malpractice. (After all, lawyers are supposed to read and understand the citations they use, to make sure they're relevant.)
Re: (Score:2)
In a related development, Mr Wily E Coyote announced the retention of Levidow, Levidow & Oberman for claims against the Acme Corporation, including false advertising, negligence, and breach of contract.
Goes even further than existing... (Score:5, Insightful)
Considering the absolute obligation on lawyers to check their citations, this firm and the lawyers who filed the original brief should be considered for disbarment.
My wife is a lawyer, and she pointed out that not only should they have checked that the cases exist, they should also have read through them to make sure the arguments made (which were probably generated by ChatGPT as well) are supported by the cases being cited... so they failed to an even greater depth because obviously they didn't even read through the citations that did exist to make sure they supported the arguments!
Re: (Score:2)
they failed to an even greater depth because obviously they didn't even read through the citations that did exist to make sure they supported the arguments
You're looking at it entirely the wrong way.
What they did was bet on the judge being even lazier than they were and they lost.
Re: (Score:2)
Re: (Score:2)
Well, the only true losers are the clients. But that's nothing new.
Re: (Score:2)
These were unforced errors on the part of the lawyers, on the level of accidentally submitting papers about the wrong case and then, when called on it by the court, doubling down.
Re: Goes even further than existing... (Score:1)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Considering the absolute obligation on lawyers to check their citations, this firm and the lawyers who filed the original brief should be considered for disbarment.
Maybe they used ChatGPT to do the citation check?
Re: (Score:2)
Actually, they did exactly that.
Re: (Score:2)
Re: (Score:2)
The only good that could come from this is if this law firm decides to advocate for legal liability for software.
Oh that's JUST what we need - lawyers who not only don't have to do real research, but aren't culpable for their laziness!
Re: (Score:1)
Re: (Score:2)
To be fair, this did happen a while ago, when it wasn't so well known that chatbots fantasize. But they still should have checked the citations.
Re: "grossly negligent in -good faith-?" (Score:1)
Re: (Score:2)
Re: (Score:2)
A whole $5000! (Score:1)
That's the minimum retainer fee most of the competent lawyers in my area want to charge individuals before they'll even let you schedule an appointment, and it doesn't include anything toward billable hours, so it's just a door fee.
Just to be clear about this ... (Score:1)
... the fines are very, very real.
Re: (Score:2)
No they're not. $5,000 is pocket money for your typical shyster.
Whodathunkit eh? (Score:2)
Lawyers have been out-bullshat - and even better, taken for a ride - by a stupid machine! Serves them right.
they call it hallucination... (Score:2)
Re: (Score:2)
It's not BS, because when you BS you know you're fabricating stuff. Some meanings of hallucination are about right, but I prefer fantasize.
Re: (Score:2)
$5,000.00? The fine is 2 zeros short! (Score:2)
Re: (Score:2)
Throw a couple more zeroes on that and the lawyers would be tempted to appeal.
The judge is metering his approach to avoid that.
Re: (Score:2)
So they would appeal. What are they going to argue, that they didn't actually do what they admit to doing? The most they could really hope for is to argue that $500,000 is unreasonable and they should pay less, and they would still likely end up paying far more than $5,000. Kind of like how companies are fined a billion dollars and end up paying a few million.
Starting out with a fine of $5,000 is still just a slap on the wrist. That sets a bad precedent.
Re: (Score:2)
No judge is particularly convinced that their decisions won't be reversed on appeal. Even something that seems as obvious as this.
He may also be constrained a bit by statute in terms of sanctions.
This goes against legal precedent (Score:5, Funny)
Chief Justice Jayden Guevara-Smith wrote the majority decision, emphasizing that "fictitious AI generated court cases are protected under the 38th amendment."
US v Schwartz, LoDuca et al, 2013 SCOTUS 206.
Disbar (Score:1)
So... if this were in a hospital... (Score:2)
...and ChatGPT diagnosed a fictitious illness in a patient... would the doctor just be fined $5,000, or would they be fired?
I mean, if the judge hadn't caught the LIES made by the accusing side, an innocent person might have lost years (or even their life) in prison...
Lawyers? (Score:2)
Defending attorney was cross examining a coroner.
The attorney asked, "Before you signed the death certificate had you taken the man's pulse?"
The coroner said, "No."
The attorney then asked, "Did you listen for a heart beat?"
"No."
"Did you check for breathing?"
"No."
"So when you signed the death certificate you had not taken any steps to make sure the man was dead, had you?"
The coroner, now tired of the browbeating, said, "Well, let me put it this way. The man's brain was sitting in a jar on my desk, but for all I know he could be out there practicing law somewhere."
Some background (Score:4, Insightful)
Legal Eagle [youtube.com] covered this and Lawful Masses [youtube.com] read the sanctions transcript on camera.
The lawsuit:
A man named Roberto Mata claims that on an August 2019 Avianca flight 670 from San Salvador to New York City, a flight attendant struck him with a serving cart, leaving him with severe and lasting injuries. Mata hired the law firm of Levidow, Levidow & Oberman (LLO), which filed suit against Avianca in February 2022 in New York state court.
Complications:
Avianca is a Colombian airline and petitioned the state court to move the case to federal court under the Montreal Convention [iata.org] treaty, as this was an international flight and Mata was a resident of New York. Adding to the complications, Avianca had filed for bankruptcy in May 2020. Originally Steven Schwartz was the lead attorney for LLO; however, he was licensed to practice only in New York state courts, not in federal court. While he could work on the case, court filings had to be made by an attorney who was licensed there. That is where Peter LoDuca, who was licensed, came into the case.
Avianca's Motion to Dismiss:
In February 2023, Avianca filed a motion to dismiss citing two defenses that should have spelled doom for the suit. First, since Avianca filed for bankruptcy in May 2020, Mata missed his window to file as a creditor with the bankruptcy court. Second, Article 35 of the Montreal Convention specifically limits passengers to two years from the date they arrived at their destination to file a claim. Mata should have filed no later than August 2021.
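The Article 35 math above can be checked with simple date arithmetic. A minimal sketch: the exact arrival and filing days are assumptions for illustration, since the summary gives only "August 2019" and "February 2022".

```python
from datetime import date

# Montreal Convention Art. 35: the claim must be brought within two
# years of arrival at the destination. The specific days below are
# assumed; only the months are given in the summary.
arrival = date(2019, 8, 27)
deadline = arrival.replace(year=arrival.year + 2)  # two years later
filed = date(2022, 2, 1)

print(f"deadline: {deadline}, filed late: {filed > deadline}")
# deadline: 2021-08-27, filed late: True
```

However the August 2019 arrival day falls, any filing in February 2022 is months past the two-year window, which is why this defense alone should have ended the suit.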
The infamous ChatGPT response:
Peter LoDuca, for LLO, then filed an opposition response that cited fictional cases which, on the surface, said that New York's three-year limitation supersedes the Montreal Convention's two-year limitation and overcame the bankruptcy bar. Avianca's law firm, Condon & Forsyth, which specializes in aviation law, responded that it either could not find the cited cases or found that they did not say what LLO claimed they said. While locating a case that supports a legal position might be difficult, it is trivially easy to verify that a cited case actually exists.
LLO has some explaining to do:
In April 2023, the judge issued a show-cause order directing LoDuca to produce the cases he cited. After receiving a week's delay due to a vacation, LoDuca responded that he could not locate at least one of the cases at all. LLO was able to locate some of the cases only online (from ChatGPT), not in official sources like LexisNexis, Westlaw, or a law book. The cases that were submitted had glaring problems: wrong formatting, changing fonts, weird wording (not legal wording), and a suspiciously short length of only five pages. Avianca responded with a letter to the court stating that the authenticity of the cited cases was "questionable".
Sanctions hearing (what had happened was . . .):
The judge also could not find the cases and personally called the 11th Circuit, where the fake cases supposedly originated, to verify that they did not exist. The judge set a sanctions hearing for June to determine why LoDuca should not be sanctioned. When questioned by the court, LoDuca admitted that Schwartz wrote the filings even though LoDuca signed them. While this is not unusual, lawyers are responsible for everything they sign and attest to the court. Under questioning, Schwartz first admitted he had used ChatGPT to do some of the legal research, before admitting it was all of his research in this case. Ironically, Schwartz testified that he had asked ChatGPT to verify that the cases were real, and the AI responded that they were. When gathering the cases ordered by the court, neither Schwartz nor LoDuca checked other sources; instead they asked ChatGPT for them, and the AI simply generated the cases.
Many, many sanctionable transgressions:
The sanctions for each lawyer and the law firm