
Judge Slams Lawyers For 'Bogus AI-Generated Research'

A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." From a report: In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should out-source research and writing" to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.

"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Milner writes. "That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."

Comments Filter:
  • Yup (Score:5, Interesting)

    by Ol Olsoc ( 1175323 ) on Wednesday May 14, 2025 @01:53PM (#65376553)
    In a few cases, I've seen decent AI generated texts.

    Most of the time, it seems to just make shit up, as this example proves.

    I wonder if Altman et al would be willing to place their freedom on the line, in such a case?

    • It boarded the USS Make Shit Up [youtu.be] immediately.
    • AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.
      • AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.

        What do you think of the AI when it references itself? Truth will become quite malleable. Already groups are poisoning AI - so I for one will be quite skeptical about AI ascending over all other fields.

        But let's say that problem is overcome - what would be the rationale for any human getting an education when you just speak into the computer and AI does it all for you?

        • Well someone has to maintain the datacenters that power the AI, so every year it will select a lucky few to be "educated" before being sent to toil in the bowels of the North Virginia hypercomplex, never seeing the light of day again...

        • I do not know, but I am pretty sure it will change who we are.
          • I do not know, but I am pretty sure it will change who we are.

            Oh goodness yes. If I were to guess, if people have issues with social media at present, they'll be driven off the deep end with where AI might take us.

            And it is already happening https://c3.unu.edu/blog/the-ri... [unu.edu]

            And examples: https://www.sciencealert.com/a... [sciencealert.com]

            My best solution is to write off AI from other than research, lest it become something that Josef Goebbels would wish he had at his disposal. And it is already there, just waiting for refinement.

            • I often recommend it to my students. Had a kid today that had trouble writing formal mails to teachers. "Yo bro" is not a way we want to be addressed. Sat down with him, I pasted his mail in chatgpt, asked if it was appropriate. It explained it was too informal. It gave two options to make it more formal. One very serious, one a compromise, that kept some of the original wit. He was happy with the compromise. Chatgpt's compromise was a nice outcome. Without it, I would have given him a brainless lecture abo
      • You're probably right. but I still hope that AI ends up on the same list as 3D Blu-ray and Livestrong bracelets.

      • Re:Yup (Score:5, Insightful)

        by hey! ( 33014 ) on Wednesday May 14, 2025 @05:39PM (#65377099) Homepage Journal

        What I've been saying all along is that the biggest problem with the technology isn't going to be the technology per se. It's going to be the people who use it being lazy, credulous, and ignorant of the technology's limitations.

        The bottom line is that, as it stands, an LLM isn't any good for what these bozos are using it for: saving labor creating a brief. You still have to do the legal research and feed it the relevant cases, instruct it not to cite any other cases, then check its characterization of that case law for correctness. In other words, you still have to do all the hard work, so it's hardly worth using if all you are interested in is getting an acceptable brief quickly.

        But if you *have* done all that work, it's quite safe to use AI to improve your brief, for example tightening up your prose. You can use it to brainstorm arguments. You can use it to check your brief for obvious counter-arguments you missed. There's absolutely nothing wrong with lawyers *who know what they're doing* using AI to improve their work. It just can't *do* their work for them.
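        The discipline described above (do the research yourself, then let the model only polish) can be backed by a mechanical safety net: keep a list of citations you have personally verified, and refuse any model draft that cites anything else. This is only a sketch; the citation pattern and the case names below are made-up placeholders, not a real legal database or citation grammar.

```python
import re

# Citations the attorney has personally verified (made-up examples).
VETTED = {
    "Smith v. Jones, 123 F.3d 456",
    "Doe v. Roe, 789 U.S. 12",
}

# Rough pattern for reporter-style citations (illustrative, not exhaustive).
CITE_RE = re.compile(r"[A-Z][\w.]* v\. [A-Z][\w.]*, \d+ [A-Z][\w.]* \d+")

def unvetted_citations(draft: str) -> set[str]:
    """Return any citation in the model's draft that is not on the vetted list."""
    return set(CITE_RE.findall(draft)) - VETTED

draft = ("As held in Smith v. Jones, 123 F.3d 456, and confirmed in "
         "Fake v. Case, 999 F.9d 1, the motion must be denied.")
print(unvetted_citations(draft))  # only the unvetted citation is flagged
```

        The point is that the check is dumb string matching against work a human already did; nothing about it trusts the model.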

        • The hype about AI is that it is "better than human". That is its only economic reason for existing. That's why investors put so much money into these companies.

          Now you come around and say things like "AI has limitations and it's the fault of lazy credulous ignorant people that things don't work".

          Do you not see the contradiction here? You're directly contradicting the market. What makes you think you are smarter than the market?

          • by hey! ( 33014 )

            That's why I'm not investing in AI stocks. I don't believe the pitch the companies are making to investors. That doesn't mean that LLMs aren't a tremendous technological achievement that could be very useful.

            Whether it's a net good for mankind, I'm skeptical. But as long as it exists, use it cautiously and wisely.

        • Exactly. Sad part is that we probably will need an Ai to filter out all the garbage Ai. Instead of one nuclear power plant, we now need two.
    • In a few cases, I've seen decent AI generated texts.

      Most of the time, it seems to just make shit up, as this example proves.

      I wonder if Altman et al would be willing to place their freedom on the line, in such a case?

      Altman et al are the new John Moses Browning. They might be credited with “inventing” AI, but they won’t be blamed for what happens next.

      The human mind invented the double-edged sword. We should probably remember we’re teaching AI to get that irony even if we don’t.

  • by drnb ( 2434720 ) on Wednesday May 14, 2025 @01:53PM (#65376555)
    AI is the new priesthood we go to for answers, and place too much trust in. Very typical behavior for humans, another iteration on the appeal to authority fallacy.
  • On the plus side, the attorney's billable hours were rather low.
    • That's not how it works. Not how lawyers work. They get a low paid lackey, paralegal, or whatever they call the junior junior people to do the work, at say 25/hour then they bill you for 375/hour. Now they would pocket the 25 and have an effective billing rate of 400/hour. You don't think they'd pass the savings along to the customer, I mean chump, do you ?

      You probably just meant that as a joke, but that's how it works out in the wild.
      • Re: (Score:3, Insightful)

        Never really understood why anyone would think anyone else would "pass on" savings to them.

        Why would they?

        If you're willing to pay me $350/hr, why would I charge you less, regardless of how much I lower my costs?

        Why create savings if I am just going to "pass it along" ?

        • Oh, exactly... remember we are talking about lawyers here.
          This model has been adopted in many areas, Dentistry for one.
          In the last few years I've noticed TONS of effing dentist shops around town popping up.
          You hire 4 dental hygienists at bottom dollar, make sure you have a lot of treatment rooms, and book in people like crazy, and the entry level people do the work, and the dentist walks around from room to room to inspect the work. Cha-ching.

          So the dentist is still the one doing the drilling and root canal
        • Capitalism 101.

          You pass on the savings so that you stay ahead, competitively, of your rivals.

          There are a hundred asterisks required for this logic to actually hold, since here in real life, capitalism isn't as pure as some would pretend that it is, but it does basically hold in most situations.

          If you coordinate with your competitors to NOT lower prices, that's price-fixing, and against the law.
          • Capitalism 101.
            You pass on the savings so that you're ahead, competitively, against those you compete against.

            That is only true if you compete on price. Lots of businesses don't work that way.

            It is uncommon to comparison shop for lawyers based on price, for example.

            Hell, even good auto shops are booked way out. They don't have to compete on price. You don't want to pay? Fine. Plenty of other people do, which is why it can take a month to get an appointment.

            • That is only true if you compete on price. Lots of business don't work that way.

              All businesses compete on price.

              It is uncommon to comparison shop for lawyers based on price, for example.

              That's just silly.
              People don't have infinite resources.

              Hell, even good auto shops are booked way out. They don't have to compete on price. You don't want to pay? Fine. Plenty of other people do, which is why it can take a month to get an appointment.

              And they still compete on price.

              If you don't, then you're literally not in the same market- by definition- but they do.

              Competing on price does not mean that your price goes as low as possible- it means that if you raise your price, some percentage of your customers will move to your competitor. If they do not do that, you have a monopoly.

              • Sure, buddy.

                • There's a difference between focusing on price, and price-competition-insensitivity.

                  Apple, for example, does not focus on price (what you'd call "not competing on price")
                  They "compete on value" (or perception thereof)

                  But ultimately, if Apple raises the prices of their goods, is there a point where customers go elsewhere?
                  If no, then Apple is in a market alone.

                  This is the precise logic that doomed Microsoft, who argued that Apple was in their market.
                  All businesses in the US are measured on price comp
        • This happens all the time. In pretty much any culture that has haggling, you can get the savings passed along.

    • by hawk ( 1151 )

      >On the plus side, the attorney's billable hours were rather low.

      not according to the billing statement prepared by the AI . . .

  • I've developed software for lawyers. I can confirm that at least some of them are really dumb and lazy.

  • by hdyoung ( 5182939 ) on Wednesday May 14, 2025 @02:02PM (#65376583)
    scared sh*tless of this. Let's say a judge gets fooled by hallucinated crap submitted by lawyers and puts in some sort of wrong/flawed judicial order that results in a death, or some sort of massive financial or reputational damage. People would probably be on the hook for real prison time or 10^(insert many zeros) dollars of liability, and I'd be surprised if any insurance policy would cover it.
    • by taustin ( 171655 ) on Wednesday May 14, 2025 @04:20PM (#65376923) Homepage Journal

      Any litigator worth the name will be checking every single reference his opponent cites at this point, even if the judge doesn't. You can't ask for an easier win.

      In an adversarial court system, this is a self correcting problem.

      • by reanjr ( 588767 ) on Wednesday May 14, 2025 @07:55PM (#65377429) Homepage

        "In an adversarial court system, this is a self correcting problem."

        Not for poor people.

        • by taustin ( 171655 )

          Or people who assume they'll lose, and don't even try. You know, pathetic losers.

        • by mjwx ( 966435 )

          "In an adversarial court system, this is a self correcting problem."

          Not for poor people.

          The problem is, even if they lose the lawyer still gets paid. Hence good lawyers won't take on a poor client.

          Loser pays is the equaliser. In most civilised countries, the loser is on the hook for the winner's legal fees, so a high-priced lawyer will take on a case pro bono (that means he likes U2) knowing that if the case is solid enough he can get his exorbitant fee from the loser... possibly twice if the loser sues him claiming his fees are unreasonable.

          It also means people with money can't use the l

        • Poor people don't matter... or have you not witnessed society since the 1980s? Once you cannot work enough to pay your bills, you are left to sit at a park and starve until you are arrested for trespassing and taken 100 miles outside of town and left by the local sheriff without shoes or a coat in the winter.

          What? You didn't know being poor is like being in a meat grinder? Maybe that is why it continues.

    • Lawyers can't go to jail for incompetence, even if somebody gets killed.

      I don't know where the fuck people get this idea that if somebody dies, people go to jail. That's only true in very specific, narrowly defined circumstances.

      And liability is mostly an insurance issue. Insurance companies don't care about theory, they care about actuarial accounting; how many liability cases do they already have involving this, and how much do they cost? That's an important input to the actuarial formulas. So far, it has

    • Let's say a judge gets fooled by hallucinated crap submitted by lawyers and puts in some sort of wrong/flawed judicial order that results in a death, or some sort of massive financial or reputational damage.

      It's actually worse. The next generation of hallucinating LLMs will be trained on *this* noise, because it is now official case record, and it will rapidly devolve into a full-blown dramedy of errors and fake references, to the point that the legal system ceases to function.

      At which point it would become our civic duty to euthanize lawyers on sight.

      So, I guess, not necessarily all bad?

  • Of ai-powered lawyers. Eventually the tech will work and it will stop hallucinating.

    That's going to put millions of lawyers out of work, and while that sounds great at first, those guys aren't going to just go quietly into that good night.

    It's like we're going to have tens of thousands of people trained to kill with no jobs. When we do that with the military we go out of our way to make sure ex-military have jobs but we're obviously not going to do that with lawyers.

    So they're going to start looki
    • tell that to the AI judge.

      I'll leave it to 93escortwagon to insert appropriate Futurama quote.
    • by XopherMV ( 575514 ) on Wednesday May 14, 2025 @02:35PM (#65376657) Journal
      Maybe it'll start working without errors aka "hallucinations." The problem is that the technology is fundamentally disconnected from reality. It reads the writing that we've bothered to put online. But, AI has no means of independently verifying that writing. It has no independent means of verifying the truth or falsehood of a section of text. I don't see a good means for it to get that data without first changing the way in which it interacts with the real world.
      • by rknop ( 240417 )

        ^^^ This.

        The whole terminology of "hallucinations" is misleading. It suggests some sort of anomaly, a departure from normal functioning. But it's not. It's just LLMs behaving exactly the way they are designed, and not happening to give the right answer.

        We have to remember that LLMs, as *designed*, are bullshit generators : https://link.springer.com/arti... [springer.com]

        Just like college student papers, sometimes that bullshit is correct. If the students are really good at bullshitting, it's often correct. But that doesn't chang

      • by rsilvergun ( 571051 ) on Wednesday May 14, 2025 @03:44PM (#65376855)
        This is something old people always have a really fucking hard problem with. Especially anyone who, thanks to survivor bias, has never faced a lot of layoffs.

        I would point out that outsourcing and layoffs were coming and that we needed to prepare for it and position ourselves so that we weren't as likely to be on the chopping block, and the old folks would always say that we were utterly irreplaceable. I would then watch as they were forced into retirement at the age of 55 with no job prospects whatsoever and replaced by teenagers in Malaysia.

        The survivors inevitably just said that if you could be replaced by a teenager in Malaysia you deserved it. Cope. Pure cope.

        The teenagers didn't actually replace you. What actually happened was that the people left still working just had to pick up an extra 20 hours a week on top of their 40, doing the work that was technically supposed to be done by the teenagers in Malaysia. But the company got the work done anyway and got to pocket an extra 20 hours of labor from whoever was left, because they hold all the cards.

        The systems that we grew up with are fundamentally breaking down but nobody wants to acknowledge that because we grew up with them and we don't want them changed because that's what we grew up with.

        It's a phenomenon known as 4 to 14.

        Basically anything you want to put in a person's brain and keep there forever you can with these if you put it in their brain between the ages of 4 to 14. At that age human beings are capable of learning but they are not capable of critically evaluating the information they learn so anything you put in their brain sticks and stays and is almost impossible to get out no matter how wrong it is.

        And it means that if there are drastic social changes, human beings are incapable of adapting or responding to them. The species as a whole may or may not survive, honestly I'm convinced we're not going to, we are putting lunatics in charge of nuclear weapons, but even if the species does survive the individual gets fucked.

        And since I'm triggering that part of the brain responsible for the 4 to 14 phenomenon, I am guaranteed to get modded way the fuck down, because otherwise, well, you'd have to acknowledge the changes, and that's extremely mentally taxing and painful.

        I can tell you that I have done it and it is not pleasant and frankly it hasn't even helped me individually. But that's mostly because we are all so completely fucked.

        The human brain which evolved to chase down buffalo is not equipped for the modern world
        • The human brain which evolved to chase down buffalo is not equipped for the modern world

          The human brain absolutely is capable of being equipped for the modern world. It is a conscious choice to indoctrinate rather than teach.

    • Eventually the tech will work and it will stop hallucinating.

      I don't think you understand that in AI, hallucination is just a euphemism for mistake. In order for an ai to stop hallucinating it would have to stop making mistakes, ever, and that's not very likely to happen. Maybe you should take some time to find out what AI really is and how it works so that you can stop putting your foot in your mouth.
  • by larwe ( 858929 ) on Wednesday May 14, 2025 @02:11PM (#65376611)
    ... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.
    • I'm sure you're correct. If only all legal decisions were easily accessible by the public so that this could be investigated.
    • ... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.

      Then I suppose it won’t be long before we’re looking at other elements of our legal system that should be automated.

      Like lawyers. Not like they’re leaning on their education or experience anymore.

      And if judges are having a hard time judging what’s real or not, then perhaps they’re next.

      • by larwe ( 858929 )

        And if judges are having a hard time judging what’s real or not, then perhaps they’re next.

        Judges currently expect that lawyers use fact-based research tools to find precedents and other related case law from *real records*. There is no slack in the system for judges to have to independently verify that everything in a filing is factual vs being an AI hallucination.

        This is a cascading cluster. Precedents will be set based on AI hallucinations. Precedents probably already HAVE been set based on AI hallucinations. The next round of research will dig up the *real* judgements containing *fake* data a

      • by hawk ( 1151 )

        the lawyers on the other side should be looking up these surprising "cases" they didn't find.

        At that point, they file a motion to set aside.

        And there is *no* time limit under Rule 60(d)(3) for "fraud upon the court."

        Once this is filed, and the judge scraped from the ceiling, there will be a hearing so fast as to make heads spin!

        hawk, esq.

  • by oldgraybeard ( 2939809 ) on Wednesday May 14, 2025 @02:39PM (#65376669)
    You add two zeros to the sanctions against the law firms ("imposed $31,000 in sanctions against the law firms"), and you refer the lawyers who did not even review things for legal action/disbarment.
    • Exactly what I was thinking. Give them a fine a little higher than their coffee budget.

    • by taustin ( 171655 )

      A more likely escalation is suspended or revoked licenses, or even criminal prosecution.

      And that's as it should be.

      • Criminal is pretty unlikely, unless someone believes there was an intent to mislead, and can prove it.
        Non-criminal sanctions should definitely go as far as disbarment, though.
        • by taustin ( 171655 )

          Criminal is pretty unlikely, unless someone believes there was an intent to mislead, and can prove it.

          I agree, but it's possible if this continues long enough. Lawyers have gone to prison for misconduct in the past.

        • Intent to mislead is not a crime, it's a lawyer's job.

          They're supposed to throw everything at the wall.

          If there is no good argument to help their client, they're supposed to argue the most plausible bullshit they can think of.

          There is a good reason why witnesses swear to the court that what they say is truthful, but lawyers rarely are asked to; pretty much only after they're already in trouble for going too far and are in a "show cause" hearing. And usually not even then.

          • Incorrect on all counts.

            A lawyer's job is to produce plausible explanations for the facts, which lead to reasonable doubt.
            Misleading or dishonest arguments will result in sanction by the Court.

            You sound like an edgy 12th grader who thinks they know some shit. How cool for someone who must be over the age of 50.
  • by JustAnotherOldGuy ( 4145623 ) on Wednesday May 14, 2025 @02:44PM (#65376689) Journal

    How long before the AI generates and posts all the supporting "decisions" it cited to bolster its 'case'? It's a trivial step, really. All it would need is write access to a few places.

    So, it'll try to hack into whatever it 'needs' to in order to gain access and it'll succeed at least some of the time. AI generated pollution of the internet at large (as if it wasn't already filled with AI slop).

    Maybe it'll generate fake personalities, complete with backstories, history, etc to 'back up' its fiction. Who knows?

    10 years from now, how deep will you have to dig to see if all the information you got (from ANY source) is legit, or just AI slop?

    As I've said before, pretty soon you won't be able to trust anything except what's been printed on paper before ~2020.

  • "I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Milner writes.

    The judge did his own research. :-)

    • by Anonymous Coward
      This is formulated like a joke, but everyone should be happy with the judge's process and this outcome.
  • by careysub ( 976506 ) on Wednesday May 14, 2025 @03:14PM (#65376785)

    I have some recent experience with trying to use multiple chatbots to find quotations on particular topics.

    It seemed a promising approach -- knowledge of everything that has ever been published (more or less) and semantic matching, not just text matching. And I got a list of good to great quotes right off the bat.

    Only problem: none of them were real (though they were falsely attributed to real people). So I asked only for quotes that had sources, and I got a list of good quotes with sources.

    Only problem: none of the sources were real either. I could never get any of them to stop just making up quotes.

    It may not seem that looking up quotes is the same as fabricating legal decisions, but it is -- especially to the LLM. It is all just tokens to the LLM and a fake legal ruling and citation is no different from a fake quote and reference.

    • by Ed Avis ( 5917 )
      Did you try asking one chatbot to check the quotations given by the other chatbot? If you ask the AI to find something then it will do its best to please you. But if you ask it "is this quotation real" or "is there any evidence for X" then at least some of the time it can perform the useful service of saying "no, can't find it".
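      A cheaper mechanical baseline than asking a second chatbot is to check the exact wording against trusted full text before accepting a quotation at all. A minimal sketch, assuming you have such a corpus; the single one-line entry below is a stand-in for a real source database (case law, book scans, etc.):

```python
# Trusted full texts; the one entry here is an illustrative stand-in
# for a real corpus of verified source material.
TRUSTED_SOURCES = {
    "Orwell, Nineteen Eighty-Four":
        "It was a bright cold day in April, and the clocks were striking thirteen.",
}

def quote_is_verified(quote: str) -> bool:
    """Accept a quotation only if it appears verbatim in a trusted source."""
    return any(quote in text for text in TRUSTED_SOURCES.values())

print(quote_is_verified("the clocks were striking thirteen"))  # True
print(quote_is_verified("a stirring line the model invented"))  # False
```

      Unlike a second chatbot, exact substring matching cannot itself hallucinate; it can only miss paraphrased quotes.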
    • by hawk ( 1151 )

      no, even moreso than that.

      they don't *distinguish* between quotations and the rest of the text; it's just another sentence to the AI, just like past and future tenses, plurals, and so forth.

  • This has happened before; I think it was some time last year, but it may even have been in 2023.
    Those crappy lawyers should have known that AI results need to be checked at the very least; that is what paralegals do. I suppose a second possibility is that the paralegal was the one who "delegated the research," but in that case, they are toast.

    • by taustin ( 171655 )

      Citing legal precedents goes back to long before AI, or even the internet. I recall reading about a case involving maritime law, in which one attorney cited G. Gordon Liddy's autobiography because Liddy once owned a boat, and the other cited a case that not only didn't exist, but was supposed to be in a volume of case law that didn't exist. (The judge warned both attorneys to not run with scissors, and stop filing briefs written with crayons.)

      There's nothing new about incompetent, stupid attorneys making sh

    • This is the third one I've read about this year.

  • by VaccinesCauseAdults ( 7114361 ) on Wednesday May 14, 2025 @04:18PM (#65376921)
    The story is AI generated and the judge doesn’t exist.

    Well okay, maybe not, but it would be awesome.

    • by Ecuador ( 740021 )

      Parent post seems AI generated to me, I suspect the poster does not exist.

      • Okay fair cop. I’ll pay that. That was actually pretty funny.

        However, my knowledge is only updated as of November 2024. Let me know if you have any other questions. Is there anything else you’d like to know?

  • Fundamentally, I have no problem using AI to do research as long as you verify the results. I regularly use AI to research things like programming questions or to check somebody else's claims about something. I look at it as a far more efficient google search. Instead of wading through a lot of search results that are often very dated, I get something pithy. But I test the results. If the AI gives you a result that should in theory be correct but isn't because the programming language doesn't have the

  • Generative "AI" is an oxymoron. There is no there there - it is just delusions or possible delusions all the way down. That makes it good for entertainment and in the hands of people who are diligent and clearly smarter than the AI, but for most serious applications AI is somewhere between dangerous and dangerously useless.

  • by joe_frisch ( 1366229 ) on Wednesday May 14, 2025 @05:01PM (#65377003)

    There have been enough cases in the media that any lawyer should know that AI generated statements can be false. In addition the EULAs for AI almost certainly say that they cannot be used in this way, and if attorneys aren't reading the EULAs then what is the point?

    This sort of mistake can lose someone their life's savings, or send them to prison. The AI did not make this statement, the attorneys did, and so they made false claims in court. At an absolute minimum they should be disbarred, and possibly charged with making false statements under oath.

    This sort of behavior cannot be tolerated. A world where people can pass all responsibility off to an AI leads to all sorts of bad places.

    • In addition the EULAs for AI almost certainly say that they cannot be used in this way

      Same with Microsoft and medical equipment... and yet here we are.

  • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Wednesday May 14, 2025 @05:35PM (#65377075) Homepage

    It's that they filed falsified documents with the court.

    It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.

    • It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.

      This part is true.

      It's that they filed falsified documents with the court.

      This part is not. You have submitted a falsified post to slashdot. Turn in your mod points.

      Falsification requires an intent to mislead. People who say dumb shit that they don't know is wrong are not falsifying anything. They're incompetent, and will be treated as such by the Judge, whether they themselves were incompetent, or whatever research tools they used were.

    • Submitting briefs with false citations would probably fall under "lying to the court" charges. While the submitter will not face the same types of sanctions, they would still face some sort of punishment, civil or criminal.
  • by Slashythenkilly ( 7027842 ) on Wednesday May 14, 2025 @08:52PM (#65377517)
    How many hours were billed to clients for the push of a button? It's all fine and good to use software, but you still need fact checkers and a paralegal/attorney to retool any arguments; otherwise anyone could play armchair lawyer and not do anyone any good. Good for the judge to throw the book at them.
  • I am surprised how lenient this is. For the faking of legal citations and precedents I would have expected immediate disbarment. I cannot think of anything more disgraceful for a legal professional.

  • by misnohmer ( 1636461 ) on Thursday May 15, 2025 @05:09AM (#65378013)
    I don't see why it matters whether or not the attorney used AI or made up the citations, verdicts, etc. If you are a professional, you are free to ask AI, or anyone else, to do your work, but at the end of the day it is your name on the filings, which means you take full responsibility for the content. The same penalty should apply whether an attorney purposefully made up citations (arguing in bad faith?) or just copied AI-provided hallucinations (ignorance). Last I checked, ignorance is not a mitigating factor for violating laws. Making a distinction between attorney-made-up vs. AI-made-up citations is a slippery slope we really should not allow - every malicious attorney making up case history could claim "the AI gave me those," even if they in fact manufactured said case history.

    AI is a tool, no different than a hammer. You cannot go easy on a murderer by blaming the hammer they used to kill someone.
