The Courts / AI

US Judge Orders Lawyers To Sign AI Pledge, Warning Chatbots 'Make Stuff Up' (reuters.com)

An anonymous reader quotes a report from Reuters: A federal judge in Texas is now requiring lawyers in cases before him to certify that they did not use artificial intelligence to draft their filings without a human checking their accuracy. U.S. District Judge Brantley Starr of the Northern District of Texas issued the requirement on Tuesday, in what appears to be a first for the federal courts. In an interview Wednesday, Starr said that he created the requirement to warn lawyers that AI tools can create fake cases and that he may sanction them if they rely on AI-generated information without verifying it themselves. "We're at least putting lawyers on notice, who might not otherwise be on notice, that they can't just trust those databases. They've got to actually verify it themselves through a traditional database," Starr said.

In the notice about the requirement on his Dallas court's website, Starr said generative AI tools like ChatGPT are "incredibly powerful" and can be used in the law in other ways, but they should not be used for legal briefing. "These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up -- even quotes and citations," the statement said. The judge also said that while attorneys swear an oath to uphold the law and represent their clients, the AI platforms do not. "Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle," the notice said.

Starr said on Wednesday that he began drafting the mandate while attending a panel on artificial intelligence at a conference hosted by the 5th Circuit U.S. Court of Appeals, where the panelists demonstrated how the platforms made up bogus cases. The judge said he considered banning the use of AI in his courtroom altogether, but he decided not to do so after conversations with Eugene Volokh, a law professor at the UCLA School of Law, and others. Volokh said Wednesday that lawyers who use other databases for legal research might assume they can also rely on AI platforms. "This is a way of reminding lawyers they can't assume that," Volokh said.
Starr issued the requirement days after another judge threatened to sanction a lawyer for using ChatGPT to help write court filings that cited six nonexistent cases.
Comments Filter:
  • by narcc ( 412956 ) on Friday June 02, 2023 @06:17PM (#63571825) Journal

    Oh, look at that. It's not actually better than a real lawyer. Who could have guessed?

    Expect more stories like this as people realize that chatbots are nothing at all like the science fiction they've imagined.

  • by Charlotte ( 16886 ) on Friday June 02, 2023 @06:20PM (#63571833)

    I like this guy, he's even got the details right.

  • by OrangeTide ( 124937 ) on Friday June 02, 2023 @06:29PM (#63571847) Homepage Journal

    Once you step outside their area of expertise, these language models will confidently spin you a fantasy and stand behind it, defending it to the death. Trying to get assembler code out of one is like talking to a drunk toddler. It will lie to your face and output even more garbage to defend the original garbage. Whatever game it is playing, it thinks it is winning.

    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Friday June 02, 2023 @07:03PM (#63571911)

      Once you step outside their area of expertise, these language models will confidently spin you a fantasy and stand behind it, defending it to the death. Trying to get assembler code out of one is like talking to a drunk toddler. It will lie to your face and output even more garbage to defend the original garbage. Whatever game it is playing, it thinks it is winning.

      I think the turning point will be when lazy people use ChatGPT and others to get commands and scripts to perform a task, and the tool spits out something that wipes their data. So you ask ChatGPT for a script to do X, and it does X, but it also inserts a line that formats your OS drive at the same time.

      Or you ask it for commands on git and it deletes everything on the server.

      Then I think the great reckoning will occur.

  • Wow (Score:5, Insightful)

    by 93 Escort Wagon ( 326346 ) on Friday June 02, 2023 @06:36PM (#63571873)

    This may be the first time I've seen the phrase "Texas Judge" associated with something so eminently reasonable.

  • Roget's Thesaurus is declared illegal and banned from all libraries.
    If you cannot detect bullshit, maybe you're not doing the job that you're being paid to do.
    Just saying.
  • by khchung ( 462899 ) on Friday June 02, 2023 @09:41PM (#63572171) Journal

    Would it be any different if a human cited 6 nonexistent cases? Wouldn't the lawyer who submitted such a filing be just as liable if his human assistant wrote it instead of some AI? Or would the judge be more lenient if the lawyer's cousin wrote that? Or if the lawyer's cousin used ChatGPT to write it and "checked it" before passing it to the lawyer?

    The lawyer is already responsible for what was filed, regardless of it being written by whatever tool, be it human or AI. Why is an extra pledge necessary?

    • by Midnight Thunder ( 17205 ) on Friday June 02, 2023 @09:51PM (#63572185) Homepage Journal

      The extra pledge is necessary, because there has been at least one case where a lawyer cited cases that didn't exist, because they trusted the output of the AI without question.

      This is more of a reminder that what an AI spits out might be actual bullshit. Trusting it, without question, will likely make a mockery of the legal system and put the client at risk.

      Lawyers can get lazy and don't necessarily understand or realise the limits of the tools they are using. Heck, this goes for many programmers too. This is why we need this declaration.

      • by khchung ( 462899 )

        The extra pledge is necessary, because there has been at least one case where a lawyer cited cases that didn't exist, because they trusted the output of the AI without question.

        So if there is at least one case where a lawyer cited cases that didn't exist because they trusted their cousin's output without question, lawyers would need to sign another pledge to check what their cousins wrote before filing? Do American lawyers sign pledges every week? What's the pledge of the week before this?

        • by narcc ( 412956 )

          Was his post really that hard to understand? Here are the important bits, edited for clarity:

          Lawyers don't necessarily understand or realize the limits of the tools they are using. This is more of a reminder that what an AI spits out might be actual bullshit.

    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Friday June 02, 2023 @11:16PM (#63572303)

      Would it be any different if a human cited 6 nonexistent cases? Wouldn't the lawyer who submitted such a filing be just as liable if his human assistant wrote it instead of some AI? Or would the judge be more lenient if the lawyer's cousin wrote that? Or if the lawyer's cousin used ChatGPT to write it and "checked it" before passing it to the lawyer?

      Lawyers aren't so bold as to make up citations. Why? Because you can be absolutely sure that the other side is going to scrutinize each citation, looking it up and reading it in order to formulate a response.

      ChatGPT not only produced fake citations, it produced citations that were obviously invalid just by looking at them. Citations literally tell you where the relevant piece of information is to be found -- which courthouse, which book, which page, which paragraph, etc. You don't cite cases; you cite a specific decision in the case by pointing out precisely where that decision is.

      So it's completely pointless to make up a citation, because you can bet the other side will check your references: they will try to address your points by seeing whether they can invalidate your citation (e.g., perhaps it was narrowly construed so as not to apply to your case).

      It's not like a research paper where everyone glosses over the citations and assumes they're correct. Here, to form a proper legal response, the other side's lawyers will try to discredit your citations as much as possible, which means looking them up.

      • Obviously ChatGPT isn't a lawyer. But GPT-4 over the API could serve as the basis for a Lawyer Bot, when properly coupled with a controlling program that can recognize citations, fetch the actual case law, read it, determine whether the bot was actually correct, and then tell the bot to fix its output if it is not. I do not think this is outside the bounds of current tech. Even the Twitter 'AI entrepreneurs' could cook that one up.
        • by narcc ( 412956 )

          I do not think this is outside the bounds of current tech.

          It is. Specifically the "read it, determine if the bot actually was correct" part. Things like 'understanding' and 'analysis' are far beyond the capabilities of modern LLMs.
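The controlling program proposed two comments up is easy to sketch as a loop; the hard part, as the reply above argues, is the "does the case actually support the claim" step. A hypothetical sketch, where every helper (extract_citations, fetch_case, supports_claim, revise) is a stand-in callable rather than anything that exists today:

```python
def verify_and_fix(draft, extract_citations, fetch_case, supports_claim, revise,
                   max_rounds=3):
    """Loop a draft brief through citation verification until it checks out.

    All four helpers are hypothetical stand-ins:
      extract_citations(draft)               -> list of citation strings
      fetch_case(cite)                       -> case text, or None if it doesn't resolve
      supports_claim(draft, cite, case_text) -> does the case say what the draft claims?
      revise(draft, bad_cites)               -> ask the model to repair flagged citations
    """
    for _ in range(max_rounds):
        bad = []
        for cite in extract_citations(draft):
            case_text = fetch_case(cite)
            if case_text is None or not supports_claim(draft, cite, case_text):
                bad.append(cite)
        if not bad:
            return draft, True   # every citation resolved and checked out
        draft = revise(draft, bad)
    return draft, False          # gave up after max_rounds
```

With stub callables this runs as a plain Python loop; the open question is whether any current model can implement supports_claim reliably, which is exactly the objection above.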

  • Seems unnecessary. (Score:5, Insightful)

    by Petersko ( 564140 ) on Saturday June 03, 2023 @12:04AM (#63572341)

    Just punish them directly for lying in their filings, to the maximum extent permitted. Don't bother asking where it originated; just hold them 100% responsible for the content.

    • The problem with your proposal is that in the meanwhile someone has received ineffective counsel so now there is additional overhead of appeals on that basis, and that's assuming that situation remains remediable.

      • You can't prevent ineffective counsel, and it's not the job of the court to do so. You can punish errant counsel, however.

  • The regular databases don't take oaths either. But I get his point. Hallucinations happen for a different reason. It wouldn't be hard to add a layer on top of a 'legalbot' that actually fetches the cited case law and reads it to verify that it actually says what the LLM said it did... so while ChatGPT isn't yet a lawyer, lawyer bots are just a coordinating program away.
    • by narcc ( 412956 )

      It wouldn't be hard to add a layer on top of a 'legalbot' that actually fetches the cited case law, and reads it to verify that it actually says what the LLM said it did.

      If you think such a thing is easy, you should build it immediately. Fame and fortune await.

      The "verify" part is not something you'll be able to do reliably with a modern LLM. The best you can realistically hope to accomplish is detecting fake citations, but you don't really need (or want) AI for that.
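That fake-citation detection really doesn't need AI: a format check plus a lookup in a case-law index catches citations that don't resolve. A minimal sketch, where the regex covers only one common "volume reporter page" shape and KNOWN_CASES is an illustrative stand-in for a real legal database:

```python
import re

# One common citation shape: "<volume> <reporter> <first page>", e.g. "410 U.S. 113".
# Real citation grammar (Bluebook) is far richer; this pattern is illustrative only.
CITATION_RE = re.compile(r"\b(\d{1,4}) (U\.S\.|F\.2d|F\.3d) (\d{1,4})\b")

# Stand-in for a lookup against a real case-law index.
KNOWN_CASES = {("410", "U.S.", "113"), ("347", "U.S.", "483")}

def flag_bogus_citations(brief_text):
    """Return every citation in the brief that fails to resolve in the index."""
    return [m.group(0) for m in CITATION_RE.finditer(brief_text)
            if (m.group(1), m.group(2), m.group(3)) not in KNOWN_CASES]
```

A deterministic check like this flags unresolvable citations; it says nothing about whether a real citation actually supports the argument, which is the part that would need genuine reading.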

  • by aldousd666 ( 640240 ) on Saturday June 03, 2023 @01:04AM (#63572399) Journal
    I have to say... this judge seems to have an unexpectedly sober assessment of AI, its current state and capabilities, as well as a basic understanding of the promise -- and the risks -- that go with it. Bravo to the judge for being well-informed!
