

Judge Slams Lawyers For 'Bogus AI-Generated Research'
A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." From a report: In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should out-source research and writing" to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.
"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Wilner writes. "That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."
Yup (Score:4, Interesting)
Most of the time, it seems to just make shit up, as this example proves.
I wonder if Altman et al would be willing to place their freedom on the line, in such a case?
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.
What do you think happens when the AI references itself? Truth will become quite malleable. Already groups are poisoning AI -- so I for one will be quite skeptical about AI ascending over all other fields.
But let's say that problem is overcome -- what would be the rationale for any human getting an education when you just speak into the computer and AI does it all for you?
Re: (Score:2)
You're probably right. but I still hope that AI ends up on the same list as 3D Blu-ray and Livestrong bracelets.
Re: (Score:2)
What I've been saying all along is that the biggest problem with the technology isn't going to be the technology per se. It's going to be the people who use it being lazy, credulous, and ignorant of the technology's limitations.
The bottom line is that, as it stands, LLMs aren't any good for what these bozos are using them for: saving labor creating a brief. You still have to do the legal research and feed it the relevant cases, instructing it not to cite any other cases, then check its characterization of that c
Re: (Score:2)
In a few cases, I've seen decent AI generated texts.
Most of the time, it seems to just make shit up, as this example proves.
I wonder if Altman et al would be willing to place their freedom on the line, in such a case?
Altman et al are the new John Moses Browning. They might be credited for “inventing” AI, but they won’t be blamed for what happens next.
The human mind invented the double-edged sword. We should probably remember we’re teaching AI to get that irony even if we don’t.
AI is the new priesthoo ... (Score:5, Insightful)
Re: AI is the new priesthoo ... (Score:5, Funny)
Thanks. That's my new favourite AI dig now.
Plug & PrAI
Re: (Score:2)
Plug & PrAI
That is a good one.
On the plus side ... (Score:2)
Re: (Score:2)
You probably just meant that as a joke, but that's how it works out in the wild.
Re: (Score:2)
Never really understood why anyone would think anyone else would "pass on" savings to them.
Why would they?
If you're willing to pay me $350/hr, why would I charge you less, regardless of how much I lower my costs?
Why create savings if I am just going to "pass it along" ?
Re: (Score:2)
This model has been adopted in many areas, Dentistry for one.
In the last few years I've noticed TONS of effing dentist shops around town popping up.
You hire 4 dental hygienists at bottom dollar, make sure you have a lot of treatment rooms, and book in people like crazy, and the entry level people do the work, and the dentist walks around from room to room to inspect the work. Cha-ching.
So the dentist is still the one doing the drilling and root canal
Re: (Score:2)
You pass on the savings so that you're ahead, competitively, against your competitors.
There are a hundred asterisks required for this logic to actually hold, since here in real life, capitalism isn't as pure as some would pretend that it is, but it does basically hold in most situations.
If you coordinate with your competitors to NOT lower prices, that's price-fixing, and against the law.
Not surprising (Score:2)
I've developed software for lawyers. I can confirm that at least some of them are really dumb and lazy.
Re: Not surprising (Score:1)
I don't get it. Law firms should be (Score:3)
Re: (Score:3)
Any litigator worth the name will be checking every single reference his opponent cites at this point, even if the judge doesn't. You can't ask for an easier win.
In an adversarial court system, this is a self correcting problem.
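The cross-check that litigators (and clerks) do by hand can be sketched in code. This is a hypothetical toy, assuming a small local allowlist of real citations stands in for a genuine legal database like Westlaw or CourtListener, which is what a real checker would query:

```python
import re

# Hypothetical mini-database of citations known to be real. In practice
# you would query a legal research service, not a hard-coded set.
KNOWN_CASES = {
    "347 U.S. 483",   # Brown v. Board of Education
    "384 U.S. 436",   # Miranda v. Arizona
    "410 U.S. 113",   # Roe v. Wade
}

# Match U.S. Reports citations of the form "<volume> U.S. <page>"
CITATION_RE = re.compile(r"\b\d{1,3} U\.S\. \d{1,4}\b")

def flag_suspect_citations(brief_text: str) -> list[str]:
    """Return every citation in the brief that is absent from the database."""
    return [c for c in CITATION_RE.findall(brief_text)
            if c not in KNOWN_CASES]

brief = "Plaintiff relies on 347 U.S. 483 and 999 U.S. 999."
print(flag_suspect_citations(brief))  # → ['999 U.S. 999']
```

Anything the lookup can't find goes on the list of citations to verify by hand -- which is exactly the easy win described above.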
I am legitimately terrified (Score:1)
That's going to put millions of lawyers out of work and while that sounds great at first those guys aren't going to just go quietly into that good night.
It's like we're going to have tens of thousands of people trained to kill with no jobs. When we do that with the military we go out of our way to make sure ex-military have jobs but we're obviously not going to do that with lawyers.
So they're going to start looki
Re: (Score:2)
I'll leave it to 93escortwagon to insert appropriate Futurama quote.
Re: (Score:3)
Re: (Score:2)
^^^ This.
The whole terminology of "hallucinations" is misleading. It suggests some sort of anomaly, a departure from normal functioning. But it's not. It's just LLMs behaving exactly the way they are designed, and not happening to give the right answer.
We have to remember that LLMs, as *designed*, are bullshit generators : https://link.springer.com/arti... [springer.com]
Just like college student papers, sometimes that bullshit is correct. If the students are really good at bullshitting, it's often correct. But that doesn't chang
Good enough is always good enough (Score:4, Insightful)
I would point out that outsourcing and layoffs were coming and that we needed to prepare for it and position ourselves so that we weren't as likely to be on the chopping block, and the old folks would always say that we were utterly irreplaceable. I would then watch as they were forced into retirement at the age of 55 with no job prospects whatsoever and replaced by teenagers in Malaysia.
The survivors inevitably just said that if you could be replaced by a teenager in Malaysia you deserved it. Cope. Pure cope.
The teenagers didn't actually replace you; what actually happened was that the people left still working had to pick up an extra 20 hours a week on top of their 40, doing the work that was technically supposed to be done by the teenagers in Malaysia. But the company got the work done anyway and got to pocket an extra 20 hours of labor from whoever was left, because they hold all the cards.
The systems that we grew up with are fundamentally breaking down, but nobody wants to acknowledge that, because we grew up with them and we don't want them changed.
It's a phenomenon known as 4 to 14.
Basically, anything you want to put in a person's brain and keep there forever, you can, if you put it in their brain between the ages of 4 and 14. At that age human beings are capable of learning but they are not capable of critically evaluating the information they learn, so anything you put in their brain sticks and stays and is almost impossible to get out, no matter how wrong it is.
And it means that if there are drastic social changes, human beings are incapable of adapting or responding to them. The species as a whole may or may not survive -- honestly I'm convinced we're not going to, we are putting lunatics in charge of nuclear weapons -- but even if the species does survive, the individual gets fucked.
And since I'm triggering the part of the brain responsible for the 4 to 14 phenomenon, I am guaranteed to get modded way the fuck down, because otherwise, well, you'd have to acknowledge the changes, and that's extremely mentally taxing and painful.
I can tell you that I have done it and it is not pleasant, and frankly it hasn't even helped me individually. But that's mostly because we are all so completely fucked.
The human brain, which evolved to chase down buffalo, is not equipped for the modern world.
Re: (Score:2)
I don't think you understand that in AI, "hallucination" is just a euphemism for mistake. In order for an AI to stop hallucinating, it would have to stop making mistakes, ever, and that's not very likely to happen. Maybe you should take some time to find out what AI really is and how it works, so that you can stop putting your foot in your mouth.
Now... (Score:3)
Re: (Score:3)
Re: (Score:2)
... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.
Then I suppose it won’t be long before we’re looking at other elements of our legal system that should be automated.
Like lawyers. Not like they’re leaning on their education or experience anymore.
And if judges are having a hard time judging what’s real or not, then perhaps they’re next.
Re: (Score:2)
And if judges are having a hard time judging what’s real or not, then perhaps they’re next.
Judges currently expect that lawyers use fact-based research tools to find precedents and other related case law from *real records*. There is no slack in the system for judges to have to independently verify that everything in a filing is factual vs being an AI hallucination.
This is a cascading cluster. Precedents will be set based on AI hallucinations. Precedents probably already HAVE been set based on AI hallucinations. The next round of research will dig up the *real* judgements containing *fake* data a
Going to get worse, until (Score:3)
Re: (Score:2)
Exactly what I was thinking. Give them a fine a little higher than their coffee budget.
Re: (Score:3)
A more likely escalation is suspended or revoked licenses, or even criminal prosecution.
And that's as it should be.
Re: (Score:2)
Non-criminal sanctions should definitely go as far as disbarment, though.
Re: (Score:2)
Criminal is pretty unlikely, unless someone believes there was an intent to mislead, and can prove it.
I agree, but it's possible if this continues long enough. Lawyers have gone to prison for misconduct in the past.
How long (Score:2)
How long before the AI generates and posts all the supporting "decisions" it cited to bolster its 'case'? It's a trivial step, really. All it would need is write access to a few places.
So, it'll try to hack into whatever it 'needs' to in order to gain access and it'll succeed at least some of the time. AI generated pollution of the internet at large (as if it wasn't already filled with AI slop).
Maybe it'll generate fake personalities, complete with backstories, history, etc to 'back up' its fiction. Who kno
MAGA / Conservatives should be happy (Score:2)
"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Wilner writes.
The judge did his own research. :-)
Fundamentally Similar to Fake Quotes (Score:5, Interesting)
I have some recent experience with trying to use multiple chatbots to find quotations on particular topics.
It seemed a promising approach -- knowledge of everything that has ever been published (more or less) and semantic matching, not just text matching. And I got a list of good to great quotes right off the bat.
Only problem: none of them were real (though they were falsely attributed to real people). So I asked only for quotes that had sources, and I got a list of good quotes with sources.
Only problem, none of the sources were real either. I could never get any of them to stop just making up quotes.
It may not seem that looking up quotes is the same as fabricating legal decisions, but it is -- especially to the LLM. It is all just tokens to the LLM and a fake legal ruling and citation is no different from a fake quote and reference.
This is not even the first time (Score:2)
This has happened before, I think it was some time last year but it may even have been in 2023.
Those crappy lawyers should have known that AI results need to be checked at the very least, that is what paralegals do. I suppose a second possibility was that the paralegal was the one who "delegated the research" but in that case, they are toast.
Re: (Score:3)
Citing legal precedents goes back to long before AI, or even the internet. I recall reading about a case involving maritime law, in which one attorney cited G. Gordon Liddy's autobiography because Liddy once owned a boat, and the other cited a case that not only didn't exist, but was supposed to be in a volume of case law that didn't exist. (The judge warned both attorneys to not run with scissors, and stop filing briefs written with crayons.)
There's nothing new about incompetent, stupid attorneys making sh
Re: This is not even the first time (Score:1)
June of '23: https://apnews.com/article/art... [apnews.com]
Plot twist: the judge doesn’t exist (Score:4, Funny)
Well okay, maybe not — but it would be awesome.
Re: (Score:2)
Parent post seems AI generated to me, I suspect the poster does not exist.
Re: (Score:2)
Okay fair cop. I’ll pay that. That was actually pretty funny.
However, my knowledge is only updated as of November 2024. Let me know if you have any other questions. Is there anything else you’d like to know?
Work smarter not harder (Score:2)
Fundamentally, I have no problem using AI to do research as long as you verify the results. I regularly use AI to research things like programming questions or to check somebody else's claims about something. I look at it as a far more efficient google search. Instead of wading through a lot of search results that are often very dated, I get something pithy. But I test the results. If the AI gives you a result that should in theory be correct but isn't because the programming language doesn't have the
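The "test the results" step above can be as lightweight as a couple of assertions. A minimal sketch, where `ai_suggested_chunk` is a hypothetical helper an AI proposed and the assertions are cases whose answers we already know:

```python
# Treat any AI-suggested snippet as a hypothesis, not an answer:
# run it against known inputs before relying on it anywhere.

def ai_suggested_chunk(seq, size):
    """Hypothetical AI-suggested helper: split a sequence into fixed-size chunks."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Verify against cases we can check by hand.
assert ai_suggested_chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert ai_suggested_chunk([], 3) == []
assert ai_suggested_chunk("abcd", 2) == ["ab", "cd"]
print("suggestion passed its checks")
```

If the suggestion references an API or language feature that doesn't actually exist, this is where it fails -- cheaply, before it ends up in anything that matters.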
Generative "AI" is an oxymoron. (Score:2)
Generative "AI" is an oxymoron. There is no there there - it is just delusions or possible delusions all the way down. That makes it good for entertainment and in the hands of people who are diligent and clearly smarter than the AI, but for most serious applications AI is somewhere between dangerous and dangerously useless.
Not a fine, disbarred and maybe felony charges (Score:2)
There have been enough cases in the media that any lawyer should know that AI generated statements can be false. In addition the EULAs for AI almost certainly say that they cannot be used in this way, and if attorneys aren't reading the EULAs then what is the point?
This sort of mistake can cost someone their life's savings or send them to prison. The AI did not make this statement, the attorneys did, and so they made false claims in court. At an absolute minimum they should be disbarred, and possibly charge
It's not that they used AI (Score:2)
It's that they filed falsified documents with the court.
It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.
Re: (Score:2)
It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.
This part is true.
It's that they filed falsified documents with the court.
This part is not. You have submitted a falsified post to slashdot. Turn in your mod points.
Falsification requires an intent to mislead. People who say dumb shit that they don't know is wrong are not falsifying anything. They're incompetent, and will be treated as such by the Judge, whether they themselves were incompetent, or whatever research tools they used were.
If the submitter was not a lawyer (Score:1)