

Judges Are Fed Up With Lawyers Using AI That Hallucinates Court Cases (404media.co)
An anonymous reader quotes a report from 404 Media: After a group of attorneys were caught using AI to cite cases that didn't actually exist in court documents last month, another lawyer was told to pay $15,000 for his own AI hallucinations that showed up in several briefs. Attorney Rafael Ramirez, who represented a company called HoosierVac in an ongoing case where the Mid Central Operating Engineers Health and Welfare Fund claims the company is failing to allow the union a full audit of its books and records, filed a brief in October 2024 that cited a case the judge wasn't able to locate. Ramirez "acknowledge[d] that the referenced citation was in error," withdrew the citation, and "apologized to the court and opposing counsel for the confusion," according to Judge Mark Dinsmore, U.S. Magistrate Judge for the Southern District of Indiana. But that wasn't the end of it. An "exhaustive review" of Ramirez's other filings in the case showed that he'd included made-up cases in two other briefs, too. [...]
In January, as part of a separate case against a hoverboard manufacturer and Walmart seeking damages for an allegedly faulty lithium battery, attorneys filed court documents that cited a series of cases that don't exist. In February, U.S. District Judge Kelly demanded they explain why they shouldn't be sanctioned for referencing eight non-existent cases. The attorneys contritely admitted to using AI to generate the cases without catching the errors, and called it a "cautionary tale" for the rest of the legal world. Last week, Judge Rankin issued sanctions on those attorneys, according to new records, including revoking one attorney's pro hac vice admission (a legal term meaning a lawyer can temporarily practice in a jurisdiction where they're not licensed) and removing him from the case; the three other attorneys on the case were fined between $1,000 and $3,000 each. The judge in the Ramirez case said that he "does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden." In fact, he noted that he's a vocal advocate for the use of technology in the legal profession.
"Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution," he wrote. "It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution."
Q: How do you save a lawyer from drowning? (Score:5, Funny)
A: Shoot him before he hits the water.
This joke is not fabricated using AI.
Re:Q: How do you save a lawyer from drowning? (Score:4, Insightful)
Like in anything else, a minority of lawyers are pretty good and know what they are doing. All those stupid lawyers would have to do is check whether the cases really exist and review them. Just treat the AI output as hints that have to be verified. Same goes for any kind of information, really; it doesn't have to come from AI.
Re: (Score:1)
Re:Q: How do you save a lawyer from drowning? (Score:5, Insightful)
That doesn't save any work though.
Yes, it does. Verifying a citation is way less work than doing the research to find relevant citations.
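The "verify, don't trust" workflow described above can be sketched mechanically: pull every citation out of a draft and emit a checklist for a human to confirm against a real reporter. Here is a minimal, hypothetical illustration in Python; the regex covers only the common "volume reporter page" form (e.g. "410 U.S. 113") and is not any tool the lawyers in the story actually used:

```python
import re

# Hypothetical, minimal citation extractor. Matches the common
# "volume reporter page" pattern for a few federal reporters;
# a real tool would cover far more reporter abbreviations.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+"                              # volume number
    r"(U\.S\.|S\. ?Ct\.|F\. ?Supp\.(?: ?[23]d)?|"  # reporter abbreviation
    r"F\.(?:2d|3d|4th)?)\s+"
    r"(\d{1,4})\b"                                 # starting page
)

def extract_citations(text):
    """Return the distinct citation strings found in a draft brief."""
    return sorted({m.group(0) for m in CITATION_RE.finditer(text)})

draft = ("As held in Roe v. Wade, 410 U.S. 113 (1973), and again in "
         "Smith v. Jones, 123 F.3d 456 (7th Cir. 1997), ...")
for cite in extract_citations(draft):
    print("VERIFY:", cite)  # a human checks each one against the reporter
```

The point is that the machine only produces a checklist; a person still has to look each citation up, which is far cheaper than doing the original research.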
Re: Q: How do you save a lawyer from drowning? (Score:2)
It shouldn't be any more persuasive to some court somewhere than an AI hoovering up the internet and positing the same.
Re: (Score:2)
Re: (Score:2)
Why would you hire, trust or vote for someone you couldn't trust at all?
Pollsters have asked voters this question. The most common response is that they felt the alternative was even worse.
Re: (Score:1)
The most common response is that they felt the alternative was even worse.
I am absolutely convinced that this is the only reason that anyone votes in any election in the United States.
Re: (Score:2)
Actually good candidates are weeded out much sooner. I'm not exactly sure of the entire reason for this. Part of it is that they have a lot to lose if they fail to be elected and so they never run. But most of it seems to be that complex problems require complex solutions and nobody wants to hear that so they vote for someone who gives them simple answers.
the application of actual intelligence (Score:4, Insightful)
There you have it, the courts have ruled that AI is not actually intelligent. But you already knew that.
Re: (Score:1)
Re: (Score:2)
More like they've said that lawyers using AI tools poorly are not intelligent.
Re: (Score:2)
The courts have ruled that the lawyers are not intelligent. The LLM is a tool and they didn't know how to use it properly. If you keep bashing a screw with a hammer, you can usually get it into wood. But it still won't do the job it was supposed to.
A measly fine? (Score:1)
Re: (Score:2)
Re: (Score:2)
If we react too hastily, then suddenly we'll have no lawyers left. And then... And then... OK, we can go ahead.
"Shepardizing" (Score:3)
This lawyer was failing in multiple ways. He clearly did not Shepardize his citations to ensure that they had not been overturned.
Re: (Score:3)
They certainly would have found no record of these cases being overturned anywhere in there.
Headline is a lie as usual (Score:4, Insightful)
If you read the decision linked above, the headline is the opposite of the ruling. The judge clearly states that using AI is not just fine, but probably the future of the profession.
What the judge sanctions the lawyers for is not fact-checking the AI's output for hallucinations. I.e., an AI that occasionally hallucinates is fine, and using it to draft documents is fine. Not checking the references it produces to see whether they're accurate is not fine.
Re: (Score:2)
Bizarre edit considering that definitionally there's no need for such a sanction.
Re: (Score:3, Insightful)
If you read the decision linked above, the headline is the opposite of the ruling. The judge clearly states that using AI is not just fine, but probably the future of the profession.
What the judge sanctions the lawyers for is not fact-checking the AI's output for hallucinations. I.e., an AI that occasionally hallucinates is fine, and using it to draft documents is fine. Not checking the references it produces to see whether they're accurate is not fine.
- This particular set of lawyers got greedy. Rather than underpaying paralegals to do their work for them, they tried to substitute AI to do the same work. With disastrous results.
- The fines seem minuscule overall. If I can replace 10 paralegals per year and at worst only pay fines which amount to the cost of 1 paralegal... what's the incentive not to do this?*
- *In full fairness, this is asking the question only for morally abhorrent/reprehensible attorneys. Many good ones out there who would neve
Re: (Score:2)
No, they didn't get greedy. They got lazy. When it's your name on the filing, you are responsible for what is in it. So you should check what's in it. That's what every lawyer will tell you.
Re: (Score:2)
No, they did it to save time and effort. While those can be converted to money, they are not the same thing.
Re: (Score:2)
I could see pitfalls with this, too. Sure, the AI may reference a genuine case that actually exists - rather than just a hallucination - but who's to say if the AI's interpretation of the cited case law is actually correct. One would expect the AI summary to be correct based on the mass of training data available, but that's hardly a guarantee.
As an exam
Re: (Score:2)
The expectation is that you read the documents you file, and you agree with things stated in the documents.
Unironically, that is what legal advice given by lawyers is. It naturally follows that if a lawyer delegates the formulation of legal arguments to someone or something else, that lawyer should at minimum verify that the arguments are sound and the citations are correct.
This is literally what lawyers are paid for. And notably what they were fined for in this case. Because the fine didn't come from not checking th
Judges and lawyers both are terrified of AI (Score:3, Interesting)
AI has the potential to bust that wide open. An AI could find case law in seconds that you would normally pay tens of thousands of dollars for a lawyer to know about and leverage to win your case. They could find all the little tricks that you would pay a lawyer big bucks for and do it instantaneously. And they could apply complex rules that we normally need a judge with a ton of training to do.
I don't think there is anything, even programming, that is more ripe to be eliminated by AI. And lawyers know it and they're damn well going to make sure they protect themselves.
Re: (Score:2)
An AI can clearly make up case law that doesn't exist in seconds.
Basing your legal defence on made up case law is pretty risky.
So right now all the effort (Score:2)
That said I think lawyers, unlike programmers, are smart enough to see when their jobs are at stake and aren't going to let it happen.
Re: (Score:2)
Maybe they'll improve but unless you suck as a developer you'll be better off with normal Google searches an
Re: (Score:2)
In the old days they could start businesses and compete and make new products but now with a complete lack of antitrust law enforcement and the banks having all the cap
Re: (Score:1)
That's literally the dumbest take on anything I've ever read. Are you sure you're not an AI? AIs are pretty worthless except for garbage tasks. They hallucinate 30% of the time. You need to feed answers back to them hundreds of times to get anything accurate, and then the models sometimes blow up. Yeah, they're useful for drafting time sheets and mundane BS, but they aren't going to replace good coders anytime soon.
Re: (Score:1)
Re: (Score:2)
That said I think lawyers, unlike programmers, are smart enough to see when their jobs are at stake and aren't going to let it happen.
How are they going to stop it from happening? They don't control the technology (that's what the programmers do, remember?), and anybody is allowed to represent themself in court without an attorney. So the lawyers have already lost that battle.
But back to reality: until somebody can create an AI that demonstrably, provably, does not hallucinate case citations, anybody relying on AI output is taking a huge risk. Even if AI is proven to never hallucinate citations, somebody (hopefully a lawyer!) will stil
Re: (Score:2)
An AI can clearly make up case law that doesn't exist in seconds.
Basing your legal defence on made up case law is pretty risky.
A paralegal could do the same. Ultimately, the lawyer in charge is responsible for everything presented to the court/judge ...
Re: (Score:2)
The parent comment was implying you don't need a lawyer if you have a fancy AI. Or a judge.
I wouldn't want an AI judge making up laws and precedent in cases.
Re: (Score:3)
I wouldn't want an AI judge making up laws and precedent in cases.
Indeed. Besides, we have SCOTUS for that. :-)
Never were, never will be. (Score:2)
case against a hoverboard manufacturer
Sideways skateboards are not "hoverboards".
Re: (Score:2)
"In other words:" (Score:2)
"In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution."
Or, in layman's terms: "Stop being a dumbass and quit fucking up at your high-paying job, Mr. Attorney."
Er ... (Score:1)
... couldn't lawyers make up stuff manually before? I mean perish the thought, but I think it was in fact possible.
Re: (Score:3)
Re: (Score:1)
Lazy, greedy assholes (Score:2)
Re: Lazy, greedy assholes (Score:2)
Zealous representation is actually a higher ethical priority than candor in every US state I'm aware of.
Judges can't deal with current tech. Fine them!!! (Score:2)
Meaning they have no automated and effective way to know whether a lawyer is lying about a citation. Meaning they're not doing their own jobs effectively, and rather than point that out... they're going to penalize someone for a mistake that they DID catch.
Unrealistic (Score:2)
In an ideal world the judge would know all the case law pertaining to every case and so be able to rule on them without difficulty. In practice it is unreasonable to expect a judge to know everything, so they are dependent on lawyers to quote case law accurately to establish their case. In theory the judge's clerks could be expected to confirm the validity of the quoted precedents, but in practice that isn't going to happen. So the attorney for the other side should be encouraged to do the checking, and the
Re: (Score:1)
Jail time? A little harsh? Nah - spot on (Score:2)
Great suggestion! Just a week the first time - but doubling on every subsequent offence(!)
Judges Are Fed Up? (Score:3)
Needs to be disbarred and maybe charged (Score:5, Interesting)
I can understand the early mistakes before people were aware of the ways in which AI can make mistakes, but it's been long enough that there is no longer any excuse. A lawyer who uses AI without verifying the results is endangering his clients and committing perjury.
The same applies to any industry where the results can have a major impact on people's lives. I don't want to see engineers designing structures based on AI-supplied calculations.
Re: (Score:2)
Re: (Score:2)
It has been quite a while for those who are paying attention to the AI field. For everyone else, it's only been "a couple years" and most people are not prepared for tech that goes from ELIZA to Skynet in less than a decade. The more change accelerates, the less humans can stay in the loop on all of them.
Surely AI can check its own hallucinations? (Score:1)
Re: (Score:1)
Re: (Score:2)
The kind of confirmation you seek is a job for tools like Deep Research, not another LLM of the same type as the one that hallucinated. Such tools have only become available to the public in the last month or so.
Re: (Score:2)
And Australia, and Canada, and the UK, and Singapore, and New Zealand, and South Africa, and India, and Pakistan, and a bunch more. It's not "US centric", it's common law - where that comes from should be obvious from that list...
...allow the union a full audit of its books... (Score:2)
Forget about the lawyers and the AI. Why on earth would any company allow a union to do a full audit of its books and records? If the business allowed that to be negotiated into a union contract, then they deserve to go out of business.