BC Lawyer Reprimanded For Citing Fake Cases Invented By ChatGPT

A B.C. lawyer has been ordered to pay costs for opposing counsel for the time they took to discover that two cases she cited as precedent were created by ChatGPT. CBC News reports: The cases would have provided compelling precedent for a divorced dad to take his children to China -- had they been real. But instead of savouring courtroom victory, the Vancouver lawyer for a millionaire embroiled in an acrimonious split has been told to personally compensate her client's ex-wife's lawyers for the time it took them to learn the cases she hoped to cite were conjured up by ChatGPT. In a decision released Monday, a B.C. Supreme Court judge reprimanded lawyer Chong Ke for including two AI "hallucinations" in an application filed last December. The cases never made it into Ke's arguments; they were withdrawn once she learned they were non-existent.

Justice David Masuhara said he didn't think the lawyer intended to deceive the court -- but he was troubled all the same. "As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers," Masuhara wrote in a "final comment" appended to his ruling. "Competence in the selection and use of any technology tools, including those powered by AI, is critical."
  • Oh, Canada (Score:2, Troll)

    by Wrexs0ul ( 515885 )

    We try so hard to America that our lawyers are graduating from the Costco school of not reading so good.

    Brought to you by Carl's Jr.

    • We try so hard to America that our lawyers are graduating from the Costco school of not reading so good.

      Brought to you by Carl's Jr.

      You mean Cracker Jack, that famous old kid's treat, is still in the law school diploma game?

    • That's the one where the Starbucks does the "Full Body" Latte? I think I've been there.
  • by OpenSourced ( 323149 ) on Thursday February 29, 2024 @06:02PM (#64280292) Journal

    I have used ChatGPT just once. I wanted to find a story that I had read a long time ago about a boat race somewhere in Britain. Nice story. I had tried Goodreads, but no luck. So I gave the same description to ChatGPT, and the Skynet-to-be answered without a hint of doubt that the story in question was "The Two Rowers", by none other than Alistair MacLean. I wondered how such a well-known author had remained obscure to the Goodreads members. Cue the loss of an hour of my life until I learned that Mr. MacLean had never written such a story; the concoction was invented wholesale by the too-eager-to-please AI.

    I think that the main mistake is calling these things AI; they are not intelligent at all. They are more like search on steroids. Steroids that can make them hallucinate, apparently.

    • Realize these things are constantly evolving and improving at a rapid rate.

      Let's try your example again with the current version. Reply with your vague recollections of the story, and I'll submit it to GPT-4. (I guarantee I won't already know the answer, and I won't fudge it.)

      • Not really; they're just feeding it more data points so it appears to be improving - like subsequent calculus integrations getting "closer". We're just starting to test the legality of stealing others' intellectual property and feeding it to these datasets for profit. https://www.nytimes.com/2024/0... [nytimes.com] This could go very badly for OpenAI.
    • by Chozabu ( 974192 )
      ChatGPT (without extensions) won't search the internet; it's an LLM that gives you a direct response "from memory". Good chance of getting something right if it's fairly common knowledge, bad chance otherwise.

      If you want to look up something more factual, try perplexity.ai or phind.com - the way they work is along the lines of taking what you say, reworking it (perhaps with several variations), doing some internet searches, then answering your question with the results of the searches as context. A rough sketch of that loop is below.
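      A minimal sketch of that search-then-answer loop, assuming nothing about any particular product's internals - `search_web` and `ask_llm` are hypothetical placeholder stubs, not perplexity.ai's or phind.com's actual API:

```python
# Hypothetical sketch of "search, then answer with the results as context".
# search_web and ask_llm are placeholder stubs, not a real product's API.

def search_web(query: str) -> list[str]:
    # A real implementation would call a search engine API and return
    # page snippets; this stub just echoes the query.
    return [f"(snippet for: {query})"]

def ask_llm(prompt: str) -> str:
    # A real implementation would call an LLM API; this stub echoes.
    return f"(answer based on: {prompt[:60]}...)"

def answer_with_search(question: str) -> str:
    # 1. Rework the question into a few search variations.
    variations = [question, f"{question} summary", f"{question} origin"]
    # 2. Run the searches and collect the snippets.
    snippets = [s for v in variations for s in search_web(v)]
    # 3. Answer the original question with the snippets as context,
    #    so the reply is grounded in retrieved text, not just "memory".
    context = "\n".join(snippets)
    return ask_llm(f"Context:\n{context}\n\nQuestion: {question}")

print(answer_with_search("short story about a boat race in Britain"))
```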
      • The problem here is that the "from memory" response was originally scraped from someone else's content, which OpenAI may or may not have had the rights to use. This is just beginning to be played out in court. If the courts find that the AI has been trained on copyrighted material, including pictures, then OpenAI will be forced to pay up or delete the underlying data. This will result in either huge price increases to use OpenAI, or it will kneecap the project. Either way, we'll be done talking about AI.
        • Often AI takes words or phrases from people, but mashes them up in its own way. If it did get an idea from somewhere, it's not easy to prove, because it's not directly copied. It's kind of how humans work: what original thoughts do we have? They're all in our own words from memory, but they came from somewhere, sometime, when we learned them from some other source. Music has this problem often. An artist will produce a work, then another artist will sue, saying they were copied - when maybe they didn't even intend to.
      • That is not correct. ChatGPT 4 will definitely do a web search if necessary, or if you tell it to.

        That said, in the last few days it will often reply that it was not allowed to access the web page. The cards are still very much up in the air as to who will pay whom and how much, to make this all work.

        And it's only reasonable, because until it started getting blocked, ChatGPT was becoming my main interface to the internet - it filters out all the annoyances - that is, everything that makes it worthwhile.

    • You are correct; they are not intelligent in any way. In its most basic description of function, it's just math-based pattern recognition. Allowing it to "self learn" - you know, like real living people do from birth till death - results in data poisoning. Why? Because when you average averages, you get garbage.
      The problem here is that greedy corporate whores are being sold on the idea that they can slash staff and replace them with "AI" - and they're buying it.
      • by MobyDisk ( 75490 )

        I think everything you just said applies to human beings too.

        • I asked how many parameters an LLM would need to equal a human brain - an apples-to-oranges question - but perplexity.ai responded:

          To estimate the number of parameters needed for a large language model (LLM) to be equivalent to a human brain, we can consider the number of synapses in the human brain and assume that each synaptic connection is roughly equivalent to one parameter in a deep neural network.

          The human brain has approximately 86 billion neurons, and each neuron has roughly 7000 synapses. Therefore, the human brain has about 86 billion * 7000 = 602 trillion synapses. If we assume that each synaptic connection is roughly equivalent to one parameter in a deep neural network, then the human brain has about 602 trillion parameters.

          However, this is a simplification, as the human brain is not a transformer architecture and has a complex network of neurons and synapses with various types of connections. The actual number of parameters needed for an LLM to match the processing power of a human brain would require a more detailed understanding of the brain's neural architecture and the specific connections between neurons.

          Additionally, the quality of the training data, the training process, and the architecture of the model also play important roles in the performance of an LLM.
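          For what it's worth, the arithmetic in that quoted estimate checks out - a quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope check of the numbers quoted above.
neurons = 86e9               # ~86 billion neurons
synapses_per_neuron = 7_000  # ~7000 synapses each
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.3g}")  # 6.02e+14, i.e. about 602 trillion
```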

    • Current models are "stochastic parrots" and nothing more, and people should be aware of this. They do not "know" anything; these are just probabilistic generators trained on the vastness of the Internet. Most of the general stuff they do get right, but these models do not operate in right/wrong, true/false, or real/unreal categories - though the creators do keep improving them.

      Can they be called AI? It all depends on how AI is defined; considering the leap in their capabilities, I'd say it's justified.

      • by MobyDisk ( 75490 )

        Interesting opinion article about Stochastic parrots [quantamagazine.org]

        • It's an interesting article - I still need time to go through the details (i.e. the references). However, from what I know, none of the existing models have the so-called "narrator" that is essential to consider them as "understanding" what they are processing - they are only active while processing requests, with no background "thoughts". Additionally (disclaimer: I might be wrong), from experience lots of such studies look into positive-result cases only - e.g. the recent issue with the Google AI.

      • by HiThere ( 15173 )

        They're a necessary PART of a real AI. We already have other parts, but integrating the pieces is difficult, and we may not yet have all of the pieces. My guess is that we are 1.5 breakthroughs away from a real AI, but it could be 2.5 or even more. (The ".5" is figuring out how to integrate the pieces together.)

    • Except that they're not "search". They were trained on data but when you ask a question they are not necessarily searching the internet for answers.

    • by Bongo ( 13261 )

      I think they call it "hallucinate" to give the impression it's intelligent.

      • by HiThere ( 15173 )

        Not really. "Hallucinate" is probably the most reasonable word in non-technical English...or at least one of a very few reasonable choices that hasn't been used already to mean something else. And it's not that bad a description when you look at the actual process.

    • by MobyDisk ( 75490 )

      I notice that most of the claims that AI is not useful come from people who start by saying "I tried it one time for something trivial and gave up."

  • by Roger W Moore ( 538166 ) on Thursday February 29, 2024 @06:39PM (#64280380) Journal

    Justice David Masuhara said he didn't think the lawyer intended to deceive the court

    Probably not, and paying the costs incurred is just basic reparation, but there is still the question of the lawyer's gross incompetence, which surely should be forwarded to the BC Law Society for investigation.

    • by geekmux ( 1040042 ) on Thursday February 29, 2024 @08:02PM (#64280576)

      When you consider that a citizen had to fund an actual defense that was forced to investigate and disprove a completely false narrative in order to prove innocence, I would say SEVERE punishment is required at this point.

      Ask yourself what is going to happen in the next 99 cases, where a defendant cannot afford to fund such an investigation and innocent people end up being punished. Why in the hell should we not hold a lawyer to a HIGHER standard here, when they are the ones found guilty of deception in the worst way?

      That’s not mere “incompetence”, since the lawyer didn’t slip and fall on ChatGPT. Disbar that fucking criminal, who doesn’t deserve to practice law.

      • by G00F ( 241765 )

        I say the fine and whatnot belongs on the judge for not checking the referenced cases.

        • I say the fine and whatnot belongs on the judge for not checking the referenced cases.

          I disagree. It's the lawyer's responsibility not to lie in court. Using ChatGPT is fine, but the lawyer knows where she used it and can easily search to verify the claims. This is a situation where LLMs are ideal, since the output is relatively easy to verify. A rough sketch of such a check follows below.
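          A minimal, purely illustrative sketch of that verification step: `KNOWN_CASES` is a stand-in for a real case-law database query (e.g. CanLII), and both citation strings are made-up examples, not real decisions.

```python
# Illustrative sketch: vet LLM-suggested citations against a trusted
# source before filing. KNOWN_CASES is a stand-in for a real database
# lookup; the citation strings are hypothetical examples.

KNOWN_CASES = {
    "Smith v. Jones, 2001 BCSC 123",  # pretend this one is real
}

def case_exists(citation: str) -> bool:
    # A real check would query a legal database such as CanLII.
    return citation in KNOWN_CASES

def vet_citations(citations: list[str]) -> list[str]:
    # Return the citations that could NOT be confirmed; a human must
    # research these before they go anywhere near a court filing.
    return [c for c in citations if not case_exists(c)]

print(vet_citations([
    "Smith v. Jones, 2001 BCSC 123",
    "Wong v. Wei, 2019 BCSC 999",  # an unverified "hallucination"
]))
# -> ['Wong v. Wei, 2019 BCSC 999']
```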

          • This is a situation where LLMs are ideal, since the output is relatively easy to verify.

            If LLMs are barely more than glorified search engines right now, then explain to me exactly how one cannot feed an LLM pure bullshit for someone else to “verify” said bullshit.

            Sorry, but neither ChatGPT nor any other LLM is ready for legal prime time. And probably never should be. Why the hell would I want to entrust my literal freedom to some lazy lawyer pointing at an LLM-enabled internet and saying “look it up!” or “An LLM told me, so it must be true”? That’s quite dangerous.

            • I think you're missing the point. It's like NP-complete problems: it's very hard to come up with the right solution, but easy to verify one. The hard work is finding the precedents that prove your case; it's easy to verify that those decisions exist. It might still be hard to show that they are relevant, but it's still a big help to find them.

              I guess you're arguing that an LLM can't come up with non-obvious relevant cases. However, I thought the point of this was to complain about the LLM hallucinating the cases.

  • by larryjoe ( 135075 ) on Thursday February 29, 2024 @08:17PM (#64280602)

    Would the lawyer have trusted without verification something that a friend, a law school professor, or a judge told her? Why would a result from ChatGPT be treated any differently? I use ChatGPT all the time, and I trust it as much as something I hear from a human being. ChatGPT is just a resource. If that resource is trusted without verification, the problem lies entirely with the human user and not ChatGPT.

    There is a parallel to Wikipedia when it first appeared. Some people claimed that printed encyclopedias were more reliable until studies showed accuracy to be roughly equal. There is currently a similar distrust of ChatGPT, but it really should be considered to be yet another resource.

    • You raise a fair point. That said, I’m not about to give a lying lawyer the benefit of the doubt here until I see their exact query.

      Sadly, this is one of those cases where I believe the lawyer was purposely intending to deceive, rather than ChatGPT just being ignorantly stupid.

  • by ukoda ( 537183 ) on Thursday February 29, 2024 @09:15PM (#64280700) Homepage
    With the push to add AI to everything, there is a risk of searching for facts using the 'wrong' tool and being unaware that AI answers have been included.

    Of course, that wouldn't let a lawyer off the hook, as they should be checking that any precedent they quote really exists and is actually relevant to their case.

    I can see this biting a whole lot of other professions too, until the proper usage of AIs is understood.
  • Lawyers by now should know not to do this shit.

    They should be disbarred when they do. Do the work. Sure, you can use AI, but you can't create lies to feed the court system.

  • Justice David Masuhara said he didn't think the lawyer intended to deceive the court

    Of course, your honor, she probably didn't. However, she clearly intended not to invest the time and effort needed to do her job properly.

  • by jbssm ( 961115 ) on Friday March 01, 2024 @04:14AM (#64281254)
    Anything coming from China was already a running joke in science during the last decade, due to the very low quality and made-up results that usually accompanied their papers. Nowadays, with the AI-generated papers they are spewing out, we just completely disregard anything where the main author's name sounds Chinese.
