AI | The Courts

Anthropic's Lawyer Forced To Apologize After Claude Hallucinated Legal Citation (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors that were caused by Claude's hallucinations. Anthropic apologized for the error and called it "an honest citation mistake and not a fabrication of authority."

Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness -- one of the company's employees, Olivia Chen -- of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to these allegations. Last week, a California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." The judge imposed $31,000 in sanctions against the law firms and said "no reasonably competent attorney should out-source research and writing" to AI.


Comments Filter:
  • by NicknamesAreStupid ( 1040118 ) on Thursday May 15, 2025 @06:11PM (#65379553)
    These examples are becoming so common that they barely register anymore. For those who write, AI is perhaps the greatest temptation since the concept of plagiarism. It may sometimes be banal or insipid or just flat-out wrong, but it can deliver like a fire hose. And there seems to be no stopping it. I would be willing to bet that any legislation drafted to regulate AI content was partly or wholly created by AI.
    • Yep, I sit on a board or two, and recently it isn't the millennials or younger who ignore the problem being discussed because they're too busy hammering ChatGPT with requests for something to say; it's people in their 40s and 50s, who should know better, having grown up without being attached to a fondle-slab all the time.

      The market for neuralink-like shit is now obvious though.

      If these can do all that chat-gpteeing and regurgitation without petting or reading a glass panel all the time, the

  • by khb ( 266593 ) on Thursday May 15, 2025 @06:17PM (#65379561)

    There have been several such cases in the news over the last two years. I guess the penalties haven't been painful enough to get lawyers' attention. Higher $$$, and perhaps professional sanctions, might do it.

  • by gurps_npc ( 621217 ) on Thursday May 15, 2025 @06:17PM (#65379567) Homepage

    If I lie to a judge, they can (theoretically; almost no one actually gets charged) charge me with perjury and send me to jail.

    The fact that they claim the lie was produced by a machine they used is not relevant. If you submit false information, you should go to jail. It is YOUR job to make sure the information is true, not the judge's.

    Using AI is not a get-out-of-jail-free card.

      If I lie to a judge, they can (theoretically; almost no one actually gets charged) charge me with perjury and send me to jail.

      The fact that they claim the lie was produced by a machine they used is not relevant. If you submit false information, you should go to jail. It is YOUR job to make sure the information is true, not the judge's.

      Using AI is not a get-out-of-jail-free card.

      Is the specific problem the use of AI for legal papers, or rather incompetent editing and proofreading? That is, if a human had written something incorrect, or correct but disadvantageous, or something that should have been said but was said in an offensive way, would we blame the writer or the editor?

      I encourage my kids to use gen AI for their homework, but as a first step to get started, followed by thinking about how to improve the result, as well as proofreading it. If used correctly as an aid rather

      • by Rinnon ( 1474161 ) on Thursday May 15, 2025 @07:17PM (#65379697)

        Is the specific problem the use of AI for legal papers, or rather incompetent editing and proofreading? That is, if a human had written something incorrect, or correct but disadvantageous, or something that should have been said but was said in an offensive way, would we blame the writer or the editor?

        In the context of lawyers and their tools/paralegals/assistants, the only person responsible to the courts is the lawyer. "My paralegal screwed up" is not, and has never been, an acceptable excuse. A lawyer can have their paralegal write an entire document if they want, and they can submit it without checking it if they have that level of trust, but if they get called out, it is the lawyer who is responsible for their own work and the work of their staff.

      • by sjames ( 1099 )

        It's several things. Nobody likes typos, misspellings, or awkward phrasing, but those things happen. The briefs that have gotten lawyers in hot water included entirely fictional court cases and case law. That's not missing something; that's rubber-stamping something the lawyer has a legal duty to verify for accuracy.

        • And given how common it has become, disbarment is the answer. It is draconian, but given that the judge must now check the veracity of every single submission from these clown lawyers, it is appropriate. I'd let typos slide, but as you say, these are fictional cases, and I doubt the AI makes spelling errors when it hallucinates.
  • Perhaps I'm misunderstanding the problem, but shouldn't verifying the existence and consistency of a citation be a completely trivial search problem? Actually verifying relevance would be another matter, but simply determining whether a given law says what you say it does, or whether a given decision exists, is basically just string matching (see the sketch below).
    • by Jason Earl ( 1894 ) on Thursday May 15, 2025 @07:25PM (#65379713) Homepage Journal

      Yes, verifying the citations is trivially easy, which is how these people get caught. You will notice that the lawyers in question say that it was an honest citation mistake and not "fabrication of authority," which is a legal term for a crime that carries jail time and fines. The problem with that defense is that the article they cited doesn't actually exist. They say it has an inaccurate title and inaccurate authors, but I suspect that is legal speak for "AI made up the article."

      Now, if an article exists that happens to say approximately the same thing and just has a different title and authors, then it is possible that the lawyers in question might be able to pretend that they really did just goof up the title and authors. If not, then what they did actually fits the definition of fabrication of authority, at which point I think they should throw the book at the fools.

      The reality is that our current legal system relies heavily on lawyers not pulling these kinds of stunts. The system is adversarial, for sure, but it is generally assumed that opposing counsel isn't making things up out of whole cloth. That's why fabrication of authority carries such high penalties. No one has time to check every citation; the assumption is that the person writing the brief is citing correctly and not misrepresenting what is actually said. The fact that these particular lawyers took it a step further and included a citation that doesn't even exist is absolutely ridiculous.
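
      A minimal sketch of what that existence check could look like, assuming a local index of known citations; the KNOWN_CASES entries and helper names below are hypothetical stand-ins for a real reporter index such as CourtListener or Westlaw:

      ```python
      # Sketch: citation existence checking as plain string matching.
      # KNOWN_CASES is a hypothetical stand-in for a real citation index.
      KNOWN_CASES = {
          # normalized citation -> case name (illustrative entries)
          "staples v. united states, 511 u.s. 600 (1994)": "Staples v. United States",
          "feist publications, inc. v. rural telephone service co., 499 u.s. 340 (1991)": "Feist v. Rural",
      }

      def normalize(citation: str) -> str:
          """Lowercase and collapse whitespace so formatting noise
          doesn't cause false mismatches."""
          return " ".join(citation.lower().split())

      def citation_exists(citation: str) -> bool:
          """True only if the citation appears verbatim in the index.
          This checks existence, not relevance."""
          return normalize(citation) in KNOWN_CASES

      for cite in [
          "Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991)",
          "Smith v. Imaginary Holdings, 123 F.4th 456 (9th Cir. 2024)",  # fabricated example
      ]:
          print(f"{cite}: {'found' if citation_exists(cite) else 'NOT FOUND'}")
      ```

      As the comments note, relevance is the hard part; bare existence is the easy check that still catches fully fabricated citations.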

  • by eepok ( 545733 ) on Thursday May 15, 2025 @06:55PM (#65379633) Homepage

    I work a LOT with law and use ChatGPT when I'm trying to find something in the law that uses very common terms/phrases. It's often better than searching my state's site or any of the poorly maintained registers of federal law.

    However, I never, ever, ever, ever, ever, EVER trust what it says is true because I know that ChatGPT's #1 job above all else is to sound like a confident human. I will ask, "Hey ChatGPT... in what part of federal law is BLAH BLAH discussed?"

    ChatGPT will respond, "Over here!" and it may even provide a link.

    When I check the link, it might be good. It might be good, but out of context. It might be good, but not federal law. It might be blatantly incorrect.

    Recently I set a default prompt in ChatGPT to reduce the hallucinations, and I've been telling everyone to do the same (a rough code equivalent follows the prompt below):

    "Be brief and use bullet points wherever possible. Use statistics where possible. Cite all data sources, legal code, and literature. Do not guess or estimate. If asked a question for which you do not have a 100% certain answer, say so in a bold-titled section. Correctness is more important than confidence."

  • I'm curious about AI hallucinations. Are they like a myth, based on a kernel of truth? Did the model extrapolate wrongly from that kernel?

    Did it misunderstand a joke or sarcasm, like someone referring to a case as Dipsh*t vs. DumpA**? Maybe with less obvious names.

    Or did it infer a desired answer from the way the question was asked, or from the user's history or profile, and just tell the user what they wanted to hear?
      by Lehk228 ( 705449 )
      generative AI is fundamentally just predicting the most likely next token (word-ish) given the tokens before it; it's autocorrect on crystal meth (see the toy sketch below).

      it has no comprehension of anything it says or does, it just generates valid text based on the analysis of immense amounts of previously harvested text.

      hallucinated citations happen because the AI does know what a valid citation should look like and where to expect one, so it puts one where you would expect to find it, and it makes up one that looks like the ones it has seen before so th
    • by abulafia ( 7826 )
      "Hallucination" is a deceptive term, because it implies there is some sort of baseline-connection to reality in the first place.

      LLMs do not know, or think, or perceive. They just generate tokens. Because the tokens are based on the statistical analysis of massive amounts of text written by humans, the generated sequences tend to make sense to humans. But there's nothing in the generator to "know" that a given answer is physically impossible, or nonsensical, or a sequence of tokens that humans will interpr
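
      A toy sketch of that next-token loop: a bigram model built from a three-sentence invented corpus. Real models condition on long contexts with neural networks, but the generation loop has the same shape, and nothing in it checks whether the output is true:

      ```python
      # Toy "most likely next token" generator: a bigram model over a tiny
      # invented corpus. Counts which token follows which, then greedily
      # emits the most common successor at each step.
      from collections import Counter, defaultdict

      corpus = (
          "the court held that the statute applies . "
          "the court found that the citation was fabricated . "
          "the statute applies to the defendant ."
      ).split()

      follows = defaultdict(Counter)  # token -> Counter of following tokens
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def generate(token: str, length: int = 8) -> str:
          out = [token]
          for _ in range(length):
              options = follows.get(out[-1])
              if not options:
                  break
              out.append(options.most_common(1)[0][0])  # greedy next token
          return " ".join(out)

      # Prints "the court held that the court held that the": fluent-looking
      # text assembled purely from co-occurrence statistics. Truth never
      # enters the loop.
      print(generate("the"))
      ```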

  • by zawarski ( 1381571 ) on Thursday May 15, 2025 @07:03PM (#65379659)
    An "honest" mistake?
    • I'm also wondering how it can not be a "fabrication of authority". Fabricating authority is what AI does.

      That the lawyer passed along this fabricated authority - when he knew or should have known what it was - shouldn't be any shield from responsibility.

      If an assistant or paralegal handed them some made-up sound-good stuff, presumably that employee would be fired.

    • An "honest" mistake?

      Honestly, most people are lazy and stupid.
