AI / The Courts

Judge Uses ChatGPT To Make Court Decision (vice.com) 59

An anonymous reader quotes a report from Motherboard: A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator -- or at least, the first time we know about it. Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document (PDF) dated January 30, 2023.

"The arguments for this decision will be determined in line with the use of artificial intelligence (AI)," Garcia wrote in the decision, which was translated from Spanish. "Accordingly, we entered parts of the legal questions posed in these proceedings." "The purpose of including these AI-produced texts is in no way to replace the judge's decision," he added. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."

The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included "Is an autistic minor exonerated from paying fees for their therapies?" and "Has the jurisprudence of the constitutional court made favorable decisions in similar cases?" Garcia included the chatbot's full responses in the decision, apparently marking the first time a judge has admitted to doing so. The judge also included his own insights into applicable legal precedents, and said the AI was used to "extend the arguments of the adopted decision." After detailing the exchanges with the AI, the judge then adopted its responses and his own legal arguments as grounds for his decision.


Comments Filter:
  • This just seems like a recipe for an appeal. Any time the court saved here is going to be lost ten times over dealing with the appeals.
  • Doesn't ChatGPT just get its training off the internet? Isn't it a bit concerning that a judge would basically use the internet to make such a decision?
    • by Roger W Moore ( 538166 ) on Friday February 03, 2023 @07:41PM (#63263897) Journal

      Doesn't ChatGPT just get its training off the internet?

      Well, one way to look at it is that if the judge thinks that using ChatGPT for decisions is a good idea, then why would you think that his training and judgement are any better than ChatGPT's?

      • by NFN_NLN ( 633283 )

        "It isn't about the outcome of the court case, it's about the friends we made along the way." - That judge, probably

    • by Nkwe ( 604125 ) on Friday February 03, 2023 @08:47PM (#63263999)

      Doesn't ChatGPT just get its training off the internet? Isn't it a bit concerning that a judge would basically use the internet to make such a decision?

      The role of the court is to interpret the law and determine how the letter of the law applies to real situations that may not be clear in how the law is written. When the court makes a judgement, part of the decision is made by what the law is as written, part of it is made by precedent, and part of it is made by how the court feels the law applies to what society believes in general - how "normal" people would interpret the law.

      Assuming that ChatGPT is trained off the Internet, and assuming that the information on the Internet is as a whole a reflection of societal beliefs, it doesn't feel weird at all to me that a court could use a tool such as ChatGPT to support the portion of the decision based on how the common person would interpret the law.

      There are of course risks that the court would solely rely on AI and skip over the legal training and precedent parts related to a decision. AI can spit out meaningless garbage that sounds good. However, separating meaningless garbage from fact is really what judges do when they listen to testimony. Assuming that a judge is good at this, using AI to support and supplement a decision may be okay.

      • ChatGPT never provides its sources, and it also hallucinates facts. ChatGPT is like a liar with a lot of knowledge. I'm not sure it is a good tool for making life-changing decisions right now. An AI trained for the legal system of a specific country and tuned/tested by experts, maybe, but not this impressive toy.
      • ChatGPT is not a reliable source of information. Literally everything it says has to be verified.
      • It'd be one thing if ChatGPT was trained on the law, given all jurisprudence to digest (with the ability to restrict everything to a particular country) and able to be directed to lend significant weight to that over any and all other Intarwebbynetz content. It's quite another to include so much of the garbage out there.

        Side note: four days ago, someone posted over at that alien site that he'd asked ChatGPT to write a message in the style of a bookFace MLM hun trying to recruit, with emoji. The result
      • "Assuming that ChatGPT is trained off the Internet, and assuming that the information on the Internet is at whole a reflection of societal beliefs,"

        In other news, LawChat, our new robot overlord, has decreed that the planet is dying; thus the inheritance goes to the 'meek'. This follows last week's ruling that 'meek' is defined as the "Nazi/waifu/incel/trolls" who so graciously contributed to LawChat's language training from the internet.

    • Doesn't ChatGPT just get its training off the internet? Isn't it a bit concerning that a judge would basically use the internet to make such a decision?

      Conviction by Wikipedia -- great. /s

      Although, to be fair, it looks like the judge is just using ChatGPT to help research and write up his decisions, not make them -- though the last part of the quote below is a little disconcerting: "information provided by AI."

      "The purpose of including these AI-produced texts is in no way to replace the judge's decision," he added. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."

    • by k0t0n ( 7251482 )
      Another way to look at it is that the bot sources data from all the books, magazines, white papers, articles and what not out there. This well of knowledge is very deep.
    • by Briareos ( 21163 )

      Well, then it obviously was a software problem that can't be blamed on the judge... </sarcasm>

    • Doesn't ChatGPT just get its training off the internet?

      No. This is a common misperception.

      ChatGPT does not have internet access. It only has information in its training dataset.

      The bot was trained with a plethora of data, from books and articles to conversations. ChatGPT knows nothing about the world post-2021, as its data has not been updated since then.

  • by OrangeTide ( 124937 ) on Friday February 03, 2023 @07:48PM (#63263915) Homepage Journal

    If you're not willing to write your own papers for science, engineering, or law, then how the hell can you demand that anyone will bother to read it? We're turning paperwork into a meaningless ritual, when it is intended to communicate and record information.

    The singularity is coming, and it's looking more and more like the plot to Idiocracy than I care to admit. This is not the right timeline anymore.

    • If you're not willing to write your own papers for science, engineering, or law, then how the hell can you demand that anyone will bother to read it?

      This is a very good and central question. The reason it is a question at all is that too many people have failed to realize that text generated by AI software gives the illusion of original content, but never any actual original content.

      ChatGPT is basically the ultimate in automated plagiarism. It has its uses, but this is not one of them.

      • I don't think ChatGPT responses generally include whole copies of sentences or phrases exactly. If you rearrange the words or substitute some, is it still plagiarism?

        Seems more like a really good plagiarism obfuscator. Certainly it only works within its training text and rules.

        • But is there any reason to think anything ChatGPT outputs could be considered original in the human sense?
          • I think so. Think about the definition of haiku.
            haiku, unrhymed poetic form consisting of 17 syllables arranged in three lines of 5, 7, and 5 syllables respectively

            I think ChatGPT knows enough vocabulary to fill those requirements. Here is what it came up with for 'haiku about slashdot'.

            News and tech unite,
            Slashdot, a community strong,
            Sharing with all eyes.

            Seems original enough to me, not great though.

            • Second try is better:

              News and tech mix well
              Slashdot, a website so great
              Read and stay informed.

              • Both of those are made from the first 11 words of the first hit when you enter Slashdot into Google. Hardly what I would call original.
                • News flash, there is actually no such thing as original thought in humans. We just re-arrange the same words into new patterns.
                  • Case in point, my sentence came from my mind, without googling or researching.

                    But if you take my own words and google them, you'll find *ancient* ideas reflecting the same thing in the same or similar words.

                    Mark Twain: 'There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope.'
                    • I think the Mark Twain quote nails it.

                      Also, ChatGPT knows words and the patterns in which they are arranged but easily creates nonsense.

                  • E=mc^2

                  • If that were true, then it would be illogical to create a patent or copyright for anything. It can be proven that ChatGPT doesn't have an original thought; all one would have to do is add a trace at every step of the decision from source to destination. The sources for ChatGPT are very close to the final product. For example, if you google 'Slashdot' the first hit has "Slashdot: News for nerds, stuff that matters. Timely news source for technology related news with a heavy slant towards Linux and Open
                    • > If that were true, then it would be illogical to create a patent or copyright for anything

                      You are so close to understanding, yet not quite getting it.
                  • I guess this all comes down to bounds of creativity. ChatGPT can never imagine anything that is not on the internet, so its creativity is bound to anything that is on the internet. Whereas I imagined something in my mind right off the bat that I can't find with an internet search. I imagined a house with its walls and roof made of light. I searched for it on Google and I get a lot of design ideas for houses with light, and some lighthouses, but no houses made entirely of light. I would hypothesize tha
                    • GPT can imagine original ideas if you ask it to. You may have to ask it in a specific way, but it will. Inspired by the TNG episode “Elementary, Dear Data”, I asked it for an original Sherlock Holmes story, and it produced one "in the Holmesian style".
                    • But that's not imagining at all. That's taking a Sherlock Holmes story and maybe another mystery and madlibbing it together. What would happen if you told it to write about something that doesn't exist in reality or fiction?
                    • But that's basically what every human writer or artist has done through all human history, is simply remix pre-existing ideas. GPT can just as well produce an original story if you ask it to.
                    • I didn't say "an original story". I said "write about something that doesn't exist in reality or fiction". Yes, I know it could randomly choose among a collection of stories and madlib those together too.
                    • > I didn't say "an original story". I said "write about something that doesn't exist in reality or fiction"

                      Those are the same thing, you pedantic twit
                    • Those are absolutely not the same thing. An original story can be about a person, a car, a duck, a time traveling space ship; things you can find on the internet. In those cases it won't be making up anything, just finding things on the internet and making variations on them. I want to know if it can actually create something that is not found on the internet anywhere, which would be "something that doesn't exist in reality or fiction".
      • The majority of 'original content' consists of pre-existing thoughts and ideas put together in a novel way.

    • We're turning paperwork into a meaningless ritual

      This has been true for many years already. Those terms of service that you agreed to in order to use that web site, EULAs, home purchase contracts, employment agreements, and countless other forms of paperwork have been required, and also ignored, for a long time. If ChatGPT is used to generate more of this kind of paperwork, it will certainly accelerate this trend, and make it even more meaningless than it already is.

    • We're turning paperwork into a meaningless ritual, when it is intended to communicate and record information

      About those TPS reports...

    • by drolli ( 522659 )

      If you write an engineering report for a building which collapses and kills people, then I wish you good luck in court if it becomes apparent that your report is the ChatGPT answer to the question: "Is a concrete foundation stable?"

  • The horror!
  • Fire the judge; he obviously has no use anymore and is a burden to the local taxpayers.

  • The text being machine-generated isn't, in and of itself, what bothers me. Rather, it's what that reveals about the judge's attitude and rigor.

    If having to formally express his judgement, for the record, once per case is just too much work for him, he's not performing his job as a judge. Formulating, condensing, and expressing an argument (in the philosophical sense of the word) is how you come to a thorough understanding of it. To avoid that is to avoid refining your argument. It is to avoid true thought a

  • by quantaman ( 517394 ) on Saturday February 04, 2023 @12:59AM (#63264307)

    "The arguments for this decision will be determined in line with the use of artificial intelligence (AI)," Garcia wrote in the decision, which was translated from Spanish. "Accordingly, we entered parts of the legal questions posed in these proceedings." "The purpose of including these AI-produced texts is in no way to replace the judge's decision," he added. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."

    The issue I see with this is that the AI text ends up becoming a first draft, and first drafts often have a very strong influence on the final draft.

    Of course, counsel for the two parties will also present the judge with drafts, but with those drafts it's clear to the judge that they have a strong bias. In this case I fear the judge will view ChatGPT as a neutral source.

  • Kinda sounds like he used it to help write the written form of the decision, not to make it.
  • Here we go again. I see eyes in the dark! Does this mean that I should go back to screwing flatheads with my fingernail? It's not the one true tool? Why should the smart people here at Slashdot believe that a tool that collates data now overrides a person? Oh, they want that to be true. That's why. It fits a personal narrative of impish loser talk and desperation.

    What if the device is wrong!? Then we override it, or ignore it outright. Hearing out a tool that can quickly show relationships in data is pro
  • What kind of Mickey Mouse legal system do they have down there? Although if the lawyers were replaced with ChatGPT - would anyone care or even notice?
    • by ebvwfbw ( 864834 )

      What kind of Mickey Mouse legal system do they have down there? Although if the lawyers were replaced with ChatGPT - would anyone care or even notice?

      Who knows, maybe it would do a better job.
      I think I might use it to confirm or help a decision. Maybe it would keep the judge from making a mistake.

  • Sounds a lot like using Wikipedia as a source on a research paper. Might be a useful tool to find related decisions on cases and present potentially relevant arguments, but any hard references still have to be fact-checked by an aide because it isn't always accurate. As long as it's used as a first research step I don't see the harm, unless people get lazy and don't check the results, or use it as the only research source.
