Judge Slams Lawyers For 'Bogus AI-Generated Research'

A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." From a report: In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should out-source research and writing" to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.

"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Wilner writes. "That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."

Comments:
  • Yup (Score:4, Interesting)

    by Ol Olsoc ( 1175323 ) on Wednesday May 14, 2025 @01:53PM (#65376553)
    In a few cases, I've seen decent AI generated texts.

    Most of the time, it seems to just make shit up, as this example proves.

    I wonder if Altman et al would be willing to place their freedom on the line, in such a case?

    • It boarded the USS Make Shit Up [youtu.be] immediately.
    • AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.
      • AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.

        What do you think of the AI when it references itself? Truth will become quite malleable. Already groups are poisoning AI - so I for one will be quite skeptical about AI ascending over all other fields.

        But let's say that problem is overcome - what would be the rationale for any human getting an education when you just speak into the computer and AI does it all for you?

      • You're probably right, but I still hope that AI ends up on the same list as 3D Blu-ray and Livestrong bracelets.

      • by hey! ( 33014 )

        What I've been saying all along is that the biggest problem with the technology isn't going to be the technology per se. It's going to be the people who use it being lazy, credulous, and ignorant of the technology's limitations.

        The bottom line is that as it stands LLM isn't any good for what these bozos are using it for: saving labor creating a brief. You still have to do the legal research and feed it the relevant cases, instructing it not to cite any other cases, then check its characterization of that c

    • In a few cases, I've seen decent AI generated texts.

      Most of the time, it seems to just make shit up, as this example proves.

      I wonder if Altman et al would be willing to place their freedom on the line, in such a case?

      Altman et al are the new John Moses Browning. They might be credited with “inventing” AI, but they won’t be blamed for what happens next.

      The human mind invented the double-edged sword. We should probably remember we’re teaching AI to get that irony even if we don’t.

  • by drnb ( 2434720 ) on Wednesday May 14, 2025 @01:53PM (#65376555)
    AI is the new priesthood we go to for answers, and place too much trust in. Very typical behavior for humans, another iteration on the appeal to authority fallacy.
  • On the plus side, the attorney's billable hours were rather low.
    • That's not how it works. Not how lawyers work. They get a low-paid lackey, a paralegal, or whatever they call the junior junior people, to do the work at, say, $25/hour, then they bill you $375/hour. Now they'd pocket the $25 and have an effective billing rate of $400/hour. You don't think they'd pass the savings along to the customer, I mean chump, do you?

      You probably just meant that as a joke, but that's how it works out in the wild.
      • Never really understood why anyone would think anyone else would "pass on" savings to them.

        Why would they?

        If you're willing to pay me $350/hr, why would I charge you less, regardless of how much I lower my costs?

        Why create savings if I am just going to "pass it along" ?

        • Oh, exactly... remember we are talking about lawyers here.
          This model has been adopted in many areas, Dentistry for one.
          In the last few years I've noticed TONS of effing dentist shops around town popping up.
          You hire 4 dental hygienists at bottom dollar, make sure you have a lot of treatment rooms, and book in people like crazy, and the entry level people do the work, and the dentist walks around from room to room to inspect the work. Cha-ching.

          So the dentist is still the one doing the drilling and root canal
        • Capitalism 101.

          You pass on the savings so that you stay ahead, competitively, of those you compete with.

          There are a hundred asterisks required for this logic to actually hold, since here in real life, capitalism isn't as pure as some would pretend that it is, but it does basically hold in most situations.

          If you coordinate with your competitors to NOT lower prices, that's price-fixing, and against the law.
  • I've developed software for lawyers. I can confirm that at least some of them are really dumb and lazy.

  • by hdyoung ( 5182939 ) on Wednesday May 14, 2025 @02:02PM (#65376583)
    scared sh*tless of this. Let's say a judge gets fooled by hallucinated crap submitted by lawyers and issues some sort of wrong/flawed judicial order that results in a death, or some sort of massive financial or reputational damage. People would probably be on the hook for real prison time or 10^(insert many zeros) dollars of liability, and I'd be surprised if any insurance policy would cover it.
    • by taustin ( 171655 )

      Any litigator worth the name will be checking every single reference his opponent cites at this point, even if the judge doesn't. You can't ask for an easier win.

      In an adversarial court system, this is a self correcting problem.

  • Of AI-powered lawyers. Eventually the tech will work and it will stop hallucinating.

    That's going to put millions of lawyers out of work and while that sounds great at first those guys aren't going to just go quietly into that good night.

    It's like we're going to have tens of thousands of people trained to kill with no jobs. When we do that with the military we go out of our way to make sure ex-military have jobs but we're obviously not going to do that with lawyers.

    So they're going to start looki
    • tell that to the AI judge.

      I'll leave it to 93escortwagon to insert appropriate Futurama quote.
    • Maybe it'll start working without errors aka "hallucinations." The problem is that the technology is fundamentally disconnected from reality. It reads the writing that we've bothered to put online. But, AI has no means of independently verifying that writing. It has no independent means of verifying the truth or falsehood of a section of text. I don't see a good means for it to get that data without first changing the way in which it interacts with the real world.
      • by rknop ( 240417 )

        ^^^ This.

        The whole terminology of "hallucinations" is misleading. It suggests some sort of anomaly, a departure from normal functioning. But it's not. It's just LLMs behaving exactly the way they are designed, and not happening to give the right answer.

        We have to remember that LLMs, as *designed*, are bullshit generators: https://link.springer.com/arti... [springer.com]

        Just like college student papers, sometimes that bullshit is correct. If the students are really good at bullshitting, it's often correct. But that doesn't chang

      • by rsilvergun ( 571051 ) on Wednesday May 14, 2025 @03:44PM (#65376855)
        This is something old people always have a really fucking hard time with. Especially anyone who, thanks to survivor bias, has never faced a lot of layoffs.

        I would point out that outsourcing and layoffs were coming and that we needed to prepare for it and position ourselves so that we weren't as likely to be on the chopping block, and the old folks would always say that we were utterly irreplaceable. I would then watch as they were forced into retirement at the age of 55 with no job prospects whatsoever and replaced by teenagers in Malaysia.

        The survivors inevitably just said that if you could be replaced by a teenager in Malaysia you deserved it. Cope. Pure cope.

        The teenagers didn't actually replace you. What actually happened was that the people left still working just had to pick up an extra 20 hours a week on top of their 40, doing the work that was technically supposed to be done by the teenagers in Malaysia. But the company got the work done anyway, and got to pocket an extra 20 hours of labor from whoever was left, because they hold all the cards.

        The systems that we grew up with are fundamentally breaking down but nobody wants to acknowledge that because we grew up with them and we don't want them changed because that's what we grew up with.

        It's a phenomenon known as 4 to 14.

        Basically, anything you want to put in a person's brain and keep there forever, you can, if you put it in their brain between the ages of 4 and 14. At that age human beings are capable of learning, but they are not capable of critically evaluating the information they learn, so anything you put in their brain sticks and stays and is almost impossible to get out, no matter how wrong it is.

        And it means that if there are drastic social changes, human beings are incapable of adapting or responding to them. The species as a whole may or may not survive (honestly I'm convinced we're not going to, since we are putting lunatics in charge of nuclear weapons), but even if the species does survive, the individual gets fucked.

        And since I'm triggering that part of the brain responsible for the 4-to-14 phenomenon, I am guaranteed to get modded way the fuck down, because otherwise, well, you'd have to acknowledge the changes, and that's extremely mentally taxing and painful.

        I can tell you that I have done it and it is not pleasant and frankly it hasn't even helped me individually. But that's mostly because we are all so completely fucked.

        The human brain, which evolved to chase down buffalo, is not equipped for the modern world.
    • Eventually the tech will work and it will stop hallucinating.

      I don't think you understand that in AI, "hallucination" is just a euphemism for mistake. In order for an AI to stop hallucinating, it would have to stop making mistakes, ever, and that's not very likely to happen. Maybe you should take some time to find out what AI really is and how it works so that you can stop putting your foot in your mouth.
  • by larwe ( 858929 ) on Wednesday May 14, 2025 @02:11PM (#65376611)
    ... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.
    • I'm sure you're correct. If only all legal decisions were easily accessible by the public so that this could be investigated.
    • ... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.

      Then I suppose it won’t be long before we’re looking at other elements of our legal system that should be automated.

      Like lawyers. Not like they’re leaning on their education or experience anymore.

      And if judges are having a hard time judging what’s real or not, then perhaps they’re next.

      • by larwe ( 858929 )

        And if judges are having a hard time judging what’s real or not, then perhaps they’re next.

        Judges currently expect that lawyers use fact-based research tools to find precedents and other related case law from *real records*. There is no slack in the system for judges to have to independently verify that everything in a filing is factual vs being an AI hallucination.

        This is a cascading cluster. Precedents will be set based on AI hallucinations. Precedents probably already HAVE been set based on AI hallucinations. The next round of research will dig up the *real* judgements containing *fake* data a

  • by oldgraybeard ( 2939809 ) on Wednesday May 14, 2025 @02:39PM (#65376669)
    Add two zeros to the sanctions against the law firms ("imposed $31,000 in sanctions against the law firms"), and refer the lawyers who did not even review things for legal action/disbarment.
    • Exactly what I was thinking. Give them a fine a little higher than their coffee budget.

    • by taustin ( 171655 )

      A more likely escalation is suspended or revoked licenses, or even criminal prosecution.

      And that's as it should be.

      • Criminal is pretty unlikely, unless someone believes there was an intent to mislead, and can prove it.
        Non-criminal sanctions should definitely go as far as disbarment, though.
        • by taustin ( 171655 )

          Criminal is pretty unlikely, unless someone believes there was an intent to mislead, and can prove it.

          I agree, but it's possible if this continues long enough. Lawyers have gone to prison for misconduct in the past.

  • How long before the AI generates and posts all the supporting "decisions" it cited to bolster its 'case'? It's a trivial step, really. All it would need is write access to a few places.

    So, it'll try to hack into whatever it 'needs' to in order to gain access and it'll succeed at least some of the time. AI generated pollution of the internet at large (as if it wasn't already filled with AI slop).

    Maybe it'll generate fake personalities, complete with backstories, history, etc to 'back up' its fiction. Who kno

  • "I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Wilner writes.

    The judge did his own research. :-)

  • by careysub ( 976506 ) on Wednesday May 14, 2025 @03:14PM (#65376785)

    I have some recent experience with trying to use multiple chatbots to find quotations on particular topics.

    It seemed a promising approach -- knowledge of everything that has ever been published (more or less) and semantic matching, not just text matching. And I got a list of good to great quotes right off the bat.

    Only problem, none of them were real (though they were falsely attributed to people). So I asked only for quotes that had sources, and I got a list of good quotes with sources.

    Only problem, none of the sources were real either. I could never get any of them to stop just making up quotes.

    It may not seem that looking up quotes is the same as fabricating legal decisions, but it is -- especially to the LLM. It is all just tokens to the LLM and a fake legal ruling and citation is no different from a fake quote and reference.

  • This has happened before, I think it was some time last year but it may even have been in 2023.
    Those crappy lawyers should have known that AI results need to be checked at the very least, that is what paralegals do. I suppose a second possibility was that the paralegal was the one who "delegated the research" but in that case, they are toast.

    • by taustin ( 171655 )

      Citing legal precedents goes back to long before AI, or even the internet. I recall reading about a case involving maritime law, in which one attorney cited G. Gordon Liddy's autobiography because Liddy once owned a boat, and the other cited a case that not only didn't exist, but was supposed to be in a volume of case law that didn't exist. (The judge warned both attorneys to not run with scissors, and stop filing briefs written with crayons.)

      There's nothing new about incompetent, stupid attorneys making sh

  • by VaccinesCauseAdults ( 7114361 ) on Wednesday May 14, 2025 @04:18PM (#65376921)
    The story is AI generated and the judge doesn’t exist.

    Well okay, maybe not, but it would be awesome.

    • by Ecuador ( 740021 )

      Parent post seems AI generated to me, I suspect the poster does not exist.

      • Okay fair cop. I’ll pay that. That was actually pretty funny.

        However, my knowledge is only updated as of November 2024. Let me know if you have any other questions. Is there anything else you’d like to know?

  • Fundamentally, I have no problem using AI to do research as long as you verify the results. I regularly use AI to research things like programming questions or to check somebody else's claims about something. I look at it as a far more efficient google search. Instead of wading through a lot of search results that are often very dated, I get something pithy. But I test the results. If the AI gives you a result that should in theory be correct but isn't because the programming language doesn't have the

  • Generative "AI" is an oxymoron. There is no there there - it is just delusions, or plausible delusions, all the way down. That makes it good for entertainment and in the hands of people who are diligent and clearly smarter than the AI, but for most serious applications AI is somewhere between dangerous and dangerously useless.

  • There have been enough cases in the media that any lawyer should know that AI generated statements can be false. In addition the EULAs for AI almost certainly say that they cannot be used in this way, and if attorneys aren't reading the EULAs then what is the point?

    This sort of mistake can cost someone their life's savings or send them to prison. The AI did not make this statement, the attorneys did, and so they made false claims in court. At an absolute minimum they should be disbarred, and possibly charge

  • It's that they filed falsified documents with the court.

    It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.

    • It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.

      This part is true.

      It's that they filed falsified documents with the court.

      This part is not. You have submitted a falsified post to slashdot. Turn in your mod points.

      Falsification requires an intent to mislead. People who say dumb shit that they don't know is wrong are not falsifying anything. They're incompetent, and will be treated as such by the Judge, whether they themselves were incompetent, or whatever research tools they used were.
