
Lawyer Caught Using AI While Explaining to Court Why He Used AI (404media.co)

An anonymous reader shares a report: An attorney in a New York Supreme Court commercial case got caught using AI in his filings, and then got caught using AI again in the brief where he had to explain why he used AI, according to court documents filed earlier this month.

New York Supreme Court Judge Joel Cohen wrote in a decision granting the plaintiff's attorneys' request for sanctions that the defendant's counsel, Michael Fourte's law offices, not only submitted AI-hallucinated citations and quotations in the summary judgment brief that led to the filing of the plaintiff's motion for sanctions, but also included "multiple new AI-hallucinated citations and quotations" in the process of opposing the motion.

"In other words," the judge wrote, "counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI."

The case itself centers on a dispute between family members over a defaulted loan. The details of the case involve a fairly run-of-the-mill domestic money beef, but the bigger story has become Fourte's office allegedly using AI that generated fake citations, and then inserting nonexistent citations into the opposition brief.

  • No one will know if that is sarcasm or not because irony is dead.
  • Disbar the fucker (Score:5, Insightful)

    by gweihir ( 88907 ) on Tuesday October 14, 2025 @04:18PM (#65725000)

    I am sure he still charges the usual exorbitant rates even when he fails to supervise the AI. Ordinarily, that is called fraud. I am sure the legal mafia has made themselves some laws around that.

  • Ballsy move (Score:5, Insightful)

    by alvinrod ( 889928 ) on Tuesday October 14, 2025 @04:27PM (#65725012)
    Well, that's certainly a ballsy move. I think he ought to foot the bill for his client to receive competent counsel, because I don't see how the court could allow this attorney to continue with the case (or his ability to practice law) after this. His client has no reason to trust that his lawyer is capable of representing his interests going forward.

    This guy has done the improbable and somehow managed to lower my opinion of lawyers. It was already a low bar, but this man is the new limbo champion. The only saving grace here is that this is a civil matter and no one will be going to jail for this man's failings. Perhaps he should, though, if for no other reason than to dissuade anyone else from acting as foolishly.
    • by Anonymous Coward

      Come on, I see great promise in him. He should send his candidacy for a job at the White House. Or maybe run for Congress; Republicans love people like him.
      Wait, one question though: did he admit to using AI? If he did, then forget I said anything. Remember kids, take it from Trump, never admit to any wrongdoing. Accuse your accuser of what you did. So if you get caught using fake cases in court, deny they are fake, accuse them of having deleted the cases just to put him in jail, and that they used fake c

    • You give him too much credit. It's really just an epically stupid move. "Ballsy" implies that the guy *knew* he was taking a risk.

      • You give him too much credit.

        I totally agree.

        It's really just an epically stupid move.

        I totally agree again.

        "Ballsy" implies that the guy *knew* he was taking a risk.

        He knew. He just didn't care, as judges have stupidly been VERY light-handed about attorney AI fraud. If judges did the correct thing, and heavily sanctioned attorneys who openly defraud courts via LLMs, then the problem would resolve itself rather quickly.

  • by oldgraybeard ( 2939809 ) on Tuesday October 14, 2025 @04:38PM (#65725042)
    Firm-destroying and career-ending; this will go on and on.
  • by Mirnotoriety ( 10462951 ) on Tuesday October 14, 2025 @05:15PM (#65725120)
    AI is fundamentally a product of probabilistic generation, shaped by user input and a vast Large Language Model (LLM) trained on the web. It often reflects inaccuracies, biases, propaganda, and falsehoods, lacking any true understanding of the material. At its core, AI serves as a probabilistic mirror of human communication.
    • by HiThere ( 15173 )

      It's a combination of training data and rewards. Chatbots are trained to never admit that they don't know, and to always be willing to be convinced that the person talking to them is correct. This makes them more popular, and enhances engagement, but at the cost of accuracy.

      • by Sique ( 173459 )

        Chatbots are trained to never admit that they don't know, and to always be willing to be convinced that the person talking to them is correct.

        No, that's exactly not what chatbots are doing. Chatbots have no concept of right and wrong. Chatbots compute, given the frequency of words already in the conversation and their probabilistic neighborhood to elements in their body of data, which words are most probable to come next. And if there is not enough data fitting their current state, they add words at random, because no possible next word presents them with a high probability.

        • Chatbots compute, given the frequency of words already in the conversation and their probabilistic neighborhood to elements in their body of data, which words are most probable to come next. And if there is not enough data fitting their current state, they add words at random, because no possible next word presents them with a high probability.

          The old n-gram models were kind of like this. Modern systems use a neural model and have the ability to generalize. In other words, they actually do learn shit: solving simple ciphers, base64 decoding, language translation, in-context learning, etc.

          There is some basic statistics with random sampling at the very end, when a token is selected after the model runs, yet that is a footnote that is in no way representative of the internal processing and communication that takes place during execution.
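          For what it's worth, the frequency-based picture described upthread is easy to demonstrate: here's a toy bigram sampler in Python that picks the next word in proportion to how often it followed the previous one in a tiny corpus (the corpus, function names, and random fallback are invented for the example; this is the old n-gram idea, not how modern neural models work):

```python
import random
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def next_word(model, prev, rng=None):
    """Sample the next word in proportion to observed frequency."""
    rng = rng or random.Random()
    candidates = model.get(prev)
    if not candidates:
        # No data for this context: guess a random known word,
        # loosely analogous to the low-confidence guessing described above.
        return rng.choice(list(model))
    words_, counts = zip(*candidates.items())
    return rng.choices(words_, weights=counts, k=1)[0]

corpus = ("the court found the brief cited the fake case "
          "and the court sanctioned counsel")
model = build_bigram_model(corpus)
# "the" is followed by "court" twice, "brief" once, "fake" once,
# so "court" is the most likely continuation.
print(next_word(model, "the"))
```

          A transformer replaces the raw bigram counts with a learned, generalizing model of context, but the final step — sampling one token from a probability distribution — is the footnote the parent describes.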

      • It's a combination of training data and rewards. Chatbots are trained to never admit that they don't know

        They are not trained to do this. They have no meta-cognition and don't know what they know in the first place.

  • by algaeman ( 600564 ) on Tuesday October 14, 2025 @05:46PM (#65725204)
    The judge's brother Ethan suggested the attorney should be forced into a woodchipper.
  • by kbahey ( 102895 ) on Tuesday October 14, 2025 @05:47PM (#65725208) Homepage

    Here is a non-paywalled article that even has a summary at the top:

    NY judge sanctions lawyer for fake AI citations [nydailyrecord.com].

  • Judge should have the lawyer arrested for contempt and move to have his license to practice revoked.
  • to tell the judge why the stupid human used AI.
    As a matter of fact, you don't need the stupid lawyer at all!

  • The Trump Department of Justice is looking for men of his caliber and outstanding integrity. Remember children, no one except Trump and his cronies are above the law.

  • by A nonymous Coward ( 7548 ) on Tuesday October 14, 2025 @08:09PM (#65725490)

    https://reason.com/volokh/2025... [reason.com]

    It's a legal blog with a legal perspective.

    • "And, your Honor, when I say I told the staff, I take full responsibility. It's my staff, so I told myself to get rid of it and I did not get rid of it"

      Don't you hate it when you tell yourself to do something, and you don't listen? Ugh. If you can't depend on yourself, on whom can you depend?

  • Sloppy McSlopperson, Attorney-At-Slop

  • Looks like we have a "vibe lawyer" about as useful as a "vibe programmer".
  • Maybe the AI is using the lawyer to explain what it did?
