A Robot Was Scheduled To Argue In Court, Then Came the Jail Threats (npr.org)
schwit1 shares a report from NPR: A British man who planned to have a "robot lawyer" help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time. [...] The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore. As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in. "Multiple state bar associations have threatened us," Browder said. "One even said a referral to the district attorney's office and prosecution and prison time would be possible." In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.
"Even if it wouldn't happen, the threat of criminal charges was enough to give it up," [said Joshua Browden, the CEO of the New York-based startup DoNotPay]. "The letters have become so frequent that we thought it was just a distraction and that we should move on." State bar associations license and regulate attorneys, as a way to ensure people hire lawyers who understand the law. Browder refused to cite which state bar associations in particular sent letters, and what official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bar associations, including California's. "The truth is, most people can't afford lawyers," he said. "This could've shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could've helped them win cases."
"I think calling the tool a 'robot lawyer' really riled a lot of lawyers up," Browder said. "But I think they're missing the forest for the trees. Technology is advancing and courtroom rules are very outdated."
"Even if it wouldn't happen, the threat of criminal charges was enough to give it up," [said Joshua Browden, the CEO of the New York-based startup DoNotPay]. "The letters have become so frequent that we thought it was just a distraction and that we should move on." State bar associations license and regulate attorneys, as a way to ensure people hire lawyers who understand the law. Browder refused to cite which state bar associations in particular sent letters, and what official made the threat of possible prosecution, saying his startup, DoNotPay, is under investigation by multiple state bar associations, including California's. "The truth is, most people can't afford lawyers," he said. "This could've shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could've helped them win cases."
"I think calling the tool a 'robot lawyer' really riled a lot of lawyers up," Browder said. "But I think they're missing the forest for the trees. Technology is advancing and courtroom rules are very outdated."
Eventuality of dealing with this... (Score:5, Insightful)
Eventually we WILL have to deal with AI/robot lawyers (and doctors, etc.). This will happen eventually and a precedent will have to be set.
And even if we ban AI robot lawyers, there's nothing stopping a human lawyer from having the AI lawyer give him advice from his pocket and airpods.
Re: Eventuality of dealing with this... (Score:4, Interesting)
Re: (Score:2)
If the law becomes so complicated humans can't understand it, we'll have to take AI's word as to what the law says and means.
Re:Eventuality of dealing with this... (Score:5, Insightful)
Oh, having a human lawyer use a computer as a legal research tool is totally fine. What has them all angry is the threat of replacing human lawyers with robots. As a rule, automation is fine so long as it isn't my job being automated. This step was a direct threat to the income of these associations, which is why threats of jail time started to fly.
Re:Eventuality of dealing with this... (Score:4, Funny)
Nobody complains about automating away their work, what they fear is automating away their paycheck.
Re: (Score:3)
They are worried about people having access to legal advice too. From what I understand the situation with parking fines is that you can often get out of them if you are willing to put the effort into going to court and dealing with the hard to navigate system. The cop needs to turn up to be a witness. If everyone did it, either traffic cops would spend all day in court or they would lose a huge source of revenue.
An app that makes it easy to contest tickets is a threat. If everyone suddenly has access to a
Re: (Score:3)
The obvious solution to robo-lawyers causing an increase in case load, is to introduce robo-judges working in virtual court rooms.
That does however pose the risk of laws being applied to everyone equally and fairly.
Re: (Score:3)
Only until they can make robo-judges capable of golfing. Then we can get back in the swing of things. Yes, that was on purpose. I'll see myself out.
Re: (Score:2)
We can dare to dream. Somehow, I have my doubts the first thing that would happen with the "extra" money would be having it redistributed to the workers, but I'd love to be proven wrong there.
Re:Eventuality of dealing with this... (Score:4, Informative)
Parking tickets are a bit worse in that, if you are found guilty, there are often court fees that add up to as much as the original fine. So you really should not fight a ticket if you are guilty.
I did once successfully fight an improper parking ticket by myself. The original ticket was written for a green van with one license-plate digit different from mine; I had a red two-seater. Somehow, when the ticket was entered into the computer, a typographical error assigned it to me. I didn't know until I got the unpaid-ticket notice. I was able to resolve it by just going to the court and talking to the judge. But my case was arguably simpler, since the original ticket showed it wasn't my license. It would have been harder if I were actually guilty (because, well, guilty people are supposed to pay their fines).
Re: (Score:2)
Oh, having a human lawyer use a computer as a legal research tool is totally fine. What has them all angry is the threat of replacing human lawyers with robots. As a rule, automation is fine so long as it isn't my job being automated. This step was a direct threat to the income of these associations, which is why threats of jail time started to fly.
Pretty much.
It won't stop it, though. The best they can hope for is for humans to oversee the automation, making sure it doesn't make mistakes, as we do with long-haul automated trains (simply because if the automation does fail, millions of dollars of cargo sits there for hours while someone drives out to the middle of nowhere).
Re: (Score:2)
Re: (Score:2)
Yep, eventually this will happen, but before then a LOT of legal issues will need to be sorted out.
Since when are legal issues sorted out first in today’s environment? Uber, Airbnb, and plenty of other companies have business models predicated on the law taking years to sort out whether what they’re doing is legal after they’ve already made their millions/billions and have shifted the discourse in their favor.
Re: (Score:2)
Uber, Airbnb, and plenty of other companies have business models predicated on the law taking years to sort out whether what they're doing is legal
Oh, not that. I think the biggest issue is the liability problem. Who is responsible when the attorney/doctor AI screws up? Developers? Manufacturer?
With Uber, they had to figure out the insurance situation for drivers, even if the legality of Uber itself remained unclear. But insuring an AI will be trickier.
Re: (Score:2)
The difference is that judges have a lot of discretion in running their court so they could probably hold the bot or whoever runs it in contempt of court if it does something stupid.
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
That's why lawyers are expensive.
It works the same with plumbers and electricians under state licensing. An electrician just quoted us $24,000 to install new plugs and wires in a few rooms. Probably 5 days of work. I'll be doing it myself. $600/hr strikes me as unreasonable.
Re: (Score:2)
Yep, and they don't fix the walls when they're done. You have to get someone else to patch up the sheetrock. At least that's how it is with plumbers.
Re: (Score:2)
Indeed. He put that in the quote. He doesn't want to clean up his messes for $24k.
Re: (Score:2)
Can you take the bar exam remotely, or with a laptop in the room? Seems like there is an opportunity for someone to qualify by having the AI pass the exam and using it exclusively in practice. Sell their services for half the price of a normal lawyer, and the only work they need to do is feed documents to the AI.
Re: Eventuality of dealing with this... (Score:3)
Re: (Score:2)
I thought in most places you could still represent yourself without being a lawyer. Maybe we'll get more people representing themselves and asking an AI not-a-lawyer for unofficial advice.
Re: (Score:2)
An AI can't either because it hasn't passed the bar.
Re: (Score:3)
Re: (Score:2)
Until the AI fucks up and you find out later you've got stage 4 skin cancer or lung cancer.
Then you'll need a team of human doctors and a human lawyer to sue whomever you got the AI doctor from.
Re:Eventuality of dealing with this... (Score:5, Informative)
Humans screw up and misdiagnose. It's common enough to be a concern, and it's why people get second opinions. AI that misdiagnoses more often than humans won't be relied upon. Once the AI is better than the humans, it will be relied upon and be more affordable.
Re: (Score:2)
Yes, people get second opinions, and should. They are opinions, after all. Doctors also have their medical license at risk, pay a fuck ton for insurance, and so on.
Who loses their license when an AI fucks up? Who is paying for malpractice insurance?
Re: (Score:2)
Maybe, maybe not but there'd still be a huge lawsuit for malpractice if, as per original example here, someone with stage 4 skin cancer was told it's a non cancerous mole.
Your dermatologist should send a sample to the lab for testing. So either the lab didn't test properly and gets sued, or the dermatologist didn't send it to the lab or misread the lab's results, etc. It's a looooong way from "just a mole, it's ok" to "you're going to be dead in a few weeks".
If an AI makes the primary diagnosis and fails in this way
Re: (Score:3)
You'd think so, but not always. About seven or eight years ago, my Primary at the VA didn't like the look of something near my left eye, and thought it just might be a skin cancer, so she set me up with a Dermatology consult. When I came in for it, a resident with a great big book of example photographs studied the suspicious marking, compared it to photographs of whatever skin cancer they suspected and decided that it was a false alarm.
Re: (Score:2)
I'm not a vet but my understanding is the VA sucks and seems to operate under different rules.
In general, though, I'm perfectly ok with using AI as an assistant and tool but they should not be making potentially life and death decisions on their own. If the doctor thinks of it as simply a research tool and uses it the same way they might use a medical text or look up some research paper, a-ok by me but I want an experienced licensed trained human being making final decisions not a computer.
Re: (Score:2)
The main reason that people think poorly of the VA is that only the problems get onto the Evening News. And, the different rules are required by the VA's primary mission being to provide medical care for veterans with service connected disabilities, which has resulted in the creation and use of a set of priorities based on how badly the patient is disabled and whether or not the needed services are for a servic
Re: (Score:2)
Thank you for the explanation. That's very interesting.
I still find it odd that they'd differentiate between combat injuries and not. If they're someone's primary or only medical care they should just take care of shit. If they're not sufficiently funded then the congress has failed. It is ridiculous we can spend a trillion or so every year on hardware but not fix a simple cataract in less than 6 months for people who've served.
It's the least we can do as a nation.
Re: (Score:2)
Re: (Score:2)
The people at the VA have given considerable thought into how to classify people into the var
Re: (Score:2)
...people get second opinions
Can't they get second opinions from AIs as well? I mean, there would probably be multiple companies providing X-Ray interpretation services, using different algorithms/models. As far as I can see, this would reduce the errors. Moreover, if AIs return the probabilities for different diagnostics, the customer can more easily decide whether they should get a human in the loop as well (for example, if the difference in probabilities for some possible diagnosis is too high between different AIs).
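The escalation rule this comment describes — get diagnosis probabilities from several independent models and bring in a human whenever they disagree too much — can be sketched in a few lines. This is purely an illustrative sketch; the function name, the 0.2 threshold, and the diagnosis labels are all hypothetical, not part of any real product mentioned here.

```python
# Illustrative sketch: flag cases where independent diagnostic models
# disagree enough that a human should review. All names are hypothetical.

def needs_human_review(model_outputs, threshold=0.2):
    """model_outputs: list of dicts mapping diagnosis -> probability,
    one dict per model. Returns True if, for any diagnosis, the spread
    of probabilities across models exceeds `threshold`."""
    diagnoses = set()
    for probs in model_outputs:
        diagnoses.update(probs)
    for d in diagnoses:
        values = [probs.get(d, 0.0) for probs in model_outputs]
        if max(values) - min(values) > threshold:
            return True
    return False

# Two hypothetical "second opinions" from different vendors' models:
model_a = {"melanoma": 0.7, "benign nevus": 0.3}
model_b = {"melanoma": 0.2, "benign nevus": 0.8}
print(needs_human_review([model_a, model_b]))  # prints True
```

The design choice here matches the comment's suggestion: the AIs never make the final call on contested cases; large inter-model disagreement is itself the signal that a human needs to be in the loop.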
Re:Eventuality of dealing with this... (Score:4, Funny)
ChatGPT at least isn't deterministic as far as I know, so you could just ask it twice for a second opinion!
What about the human fuckups? (Score:2)
I remember stories about a human radiologist who was too lazy to do her job - she "reviewed" tens of thousands of x-rays, and basically gave random results.
Same basic result as the AI fucking up as well, it's not like you could sue her judgement proof ass and actually get anything from it(she was broke as shit and fired when it was found out).
As long as you remember that the AI is a tool, it should be good.
There have been efforts to "automate" medical care, and said efforts normally show that they reduce mi
Re: (Score:2)
She didn't work for a hospital or medical group? How does a radiologist get any work without being part of a larger organization that can suffer a legal liability loss?
I am in favor of an AI doing a double check of human work but being primary decision maker for medical decisions? No thanks.
Re: (Score:2)
Because she absolutely didn't operate according to the rules and regulations of her employer, they disclaimed all responsibility. This can be bypassed, but that requires a lawsuit.
Or maybe they paid some token amount out - to the heirs of those misdiagnosed. Because if you have stage 4 cancer, you probably aren't going to survive to see any judgement.
Even without the malpractice, that's where "AI is now smarter than human operators on average" comes in though. It doesn't just take a bad human actor, just
Re: (Score:2)
I'm not familiar with the case you're talking about. If you had a name or url or whatever I'd love to read it or I'd google for it if I had something to go on.
But generically speaking, her employer is responsible. They can't just walk away because they should have systems in place to make sure everyone is doing their job as per policy and training with audits, etc. and if not there it's on the employer. Either way there'd be a huge lawsuit so that's not an issue. Tons of lawyers would take it on contin
Re: (Score:2)
Remember the part about the victims not surviving to see the results? "Huge lawsuit" doesn't solve that problem.
I'd rather be alive than my heirs be richer without me.
Re: (Score:2)
Agreed but my point is human doctors have a real reason to try to get it right. There are systems in place that put them at great personal career and financial risk if they fuck up too badly or too often.
What is in place for AI "doctors" to incentivize them or their creators to get it right or at least not too wrong?
Right now, not much.
Re:Eventuality of dealing with this... (Score:4, Insightful)
I doubt it. The output of "tools" like this simply can't be trusted. They will lie and produce complete nonsense. This is an insurmountable problem, given how they work.
Though that just sounds like a sleazy lawyer. Hmm... maybe they're right this time...
Almost Impossible to Ban (Score:2)
And even if we ban AI robot lawyers....
How can governments effectively ban this? If someone operates from outside your country you can't easily stop the service provider and you can hardly ban a defendant from defending themselves so how do you stop them from using the service?
Re: (Score:2)
Simple. You make a rule that mobile devices may not be used in the courtroom. Many courts already have such a rule.
Re: (Score:2)
And even if we ban AI robot lawyers, there's nothing stopping a human lawyer from having the AI lawyer give him advice from his pocket and airpods.
Which is probably the approach that would have been successful. The issue is that lawyers fill the legal system, and while that has value for the construction and interpretation of democratically created laws, a parking ticket is still a parking ticket.
Had the AI been framed as a "Legal Assistant" for the advocate I would reckon that the news would have been different, ego being what it is.
I will point out that interpretation of the law should 100% remain in the domain of human beings; furthermore, we wi
Re: (Score:2)
That's not what lawyers are worried about, actually, I'm pretty sure lawyers won't complain about AI doing the job for them.
What they don't like is AI replacing them.
Re: (Score:2)
I think a little advice from the Marketing Department might go a long way. Perhaps "TicketBuster" or something. Avoid engaging with the lawyers until you can execute "Kill the Lawyers" (Shakespeare).
Re: (Score:2)
Scott Adams today remarked that he thinks all the 'Smart Jobs' (jobs done in an office or on a chair or things like Doctoring and Lawyering ) will be automated first and humans will be relegated to manual labor for a time until it becomes economic to build robots. I've thought this myself. The easiest things to automate are the ones that don't touch physical real space, but those things will be automated too.
Rather than 'having the AI represent him' next time he should represent himself using AI Lawyer
Who needs flying cars (Score:4, Funny)
A mob-bot threatens a lawyer-bot, the future is finally here!
Re: (Score:2)
I can see future mobsters being robots [fandom.com]... but I always assumed future lawyers would be anthropomorphic chickens [fandom.com].
POWER TO THE PEOPLE (Score:4, Insightful)
Fuck lawyers, they do everything they can to obfuscate what should be clear rules for normal people's society to run by. Then they make a killing being the only ones to be able to decipher the obtuse and obfuscated legal language, including the loopholes many of them build in. In this case I welcome our robot legal assistants. Just don't let them write laws, as then they will control us.
Re:POWER TO THE PEOPLE (Score:4, Insightful)
I don't think law makers deliberately use difficult language for the sole purpose of creating work for courtroom lawyers. They don't need to.
The greater part of that weird language comes from the need to achieve great clarity, since the real world is super complicated and full of edge cases that land in gray areas.
There is also the issue of legacy law. Much like computer code, things change after laws are written and that old law uses old language which was typical of the time but hard to understand now. And, just like computer code, refactoring it is expensive and disruptive, so it is often avoided as much as possible. The lawyers, judges, and their assistants, are the ones who inherit the burdens of sifting through all that technical debt every time the need arises.
Re: (Score:2)
I don't think law makers deliberately use difficult language for the sole purpose of creating work for courtroom lawyers. They don't need to.
The greater part of that weird language comes from the need to achieve great clarity, since the real world is super complicated and full of edge cases that land in gray areas.
There is also the issue of legacy law. Much like computer code, things change after laws are written and that old law uses old language which was typical of the time but hard to understand now. And, just like computer code, refactoring it is expensive and disruptive, so it is often avoided as much as possible. The lawyers, judges, and their assistants, are the ones who inherit the burdens of sifting through all that technical debt every time the need arises.
This is one of the wisest things I've read on /.
Thanks!
Re:POWER TO THE PEOPLE (Score:4)
Re: (Score:3)
Law is about specificity not clarity. But, they are good at being vague when it suits them. The whole oxford comma crap comes to mind.
Unfortunately, laws are terrible at providing appropriate specificity...
Re: (Score:2)
The greater part of that weird language comes from the need to achieve great clarity, since the real world is super complicated and full of edge cases that land in gray areas.
Nonsense. Each section of law tends to contain a subsection containing definitions, if you need to make explicit definitions for common words you can do it there.
Re: POWER TO THE PEOPLE (Score:2)
I would mod this up if I had points today. A wise old professor once told me something similar: the law can be simple or the law can be fair, but not both.
Re: (Score:2)
I don't think law makers deliberately use difficult language for the sole purpose of creating work for courtroom lawyers. They don't need to.
The greater part of that weird language comes from the need to achieve great clarity, since the real world is super complicated and full of edge cases that land in gray areas.
There is also the issue of legacy law. Much like computer code, things change after laws are written and that old law uses old language which was typical of the time but hard to understand now. And, just like computer code, refactoring it is expensive and disruptive, so it is often avoided as much as possible. The lawyers, judges, and their assistants, are the ones who inherit the burdens of sifting through all that technical debt every time the need arises.
Yep, the confusing part is mainly that most of US and UK law originates from French, which still uses a few Latin terms mixed in. That and all the theatre around traditions (robes, wigs, et al).
I'd have to say to the GP, 90% of lawyers are good, honest, hard working and it shows. Of course we all associate all lawyers with the caricature of ambulance chasers and real estate conveyancers who really are the 10% that give the industry a bad name. However if you've ever had a specialist lawyer go through a c
Re: (Score:2)
Everyone hates lawyers, until they need one.
Re: (Score:2)
I'm pretty sure that if you get sued by someone who's self-representing you'll still want a lawyer to help you understand the system.
Re: (Score:2)
Exactly. The Social Contract confers benefits to citizens (safety, stability, not being eaten by a lion, etc), but asks that citizens obey the laws of the land in return. If it's impossible to know the laws of the land because they are so obfuscated, how is the average citizen meant to uphold their side of the bargain?
I love this robot lawyer concept, because it levels that field and allows the average citizen to meet that obligation.
But of course, the system is designed to be stacked against the average
Lawyers Already write the Laws (Score:2)
Just don't let them write laws, as then they will control us.
Lawyers already write the laws - just look at the profession of many politicians and, in countries that allow it, judges get to effectively write the laws as well and they are all lawyers.
It would waste court time (Score:2)
There are good reasons to allow people to represent themselves and waste everyone's time, which judges are generally pretty lenient about.
Letting some generator waste their time with hallucinated bullshit serves no purpose though. Even if it's not banned outright, the judge is going to deem its behaviour frivolous and throw the book at the defendant operating it... as well he should.
Re: (Score:3)
Most judges are highly dubious of those trying to represent themselves in anything but trivial cases. Generally it's a bad idea, and most people trying to represent themselves do an absolutely terrible job of it. If it's a criminal trial, then get a damn lawyer instead of shooting yourself in the foot, duh. Even actual lawyers who've passed the bar will hire counsel instead of doing it themselves if they're on trial.
Often when someone is representing themselves it's because they're an idiot (Alex), or t
Re: (Score:2)
Yes, that's why I said it wastes everyone's time. I'm including the defendant in that.
Judges are, however, less lenient about frivolous arguments; a judge can impose penalties for those.
Solution (Score:5, Funny)
The solution is simple: Have the robot take and pass the bar exam.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Why is this "funny"?
Because it did already: https://www.cbsnews.com/news/c... [cbsnews.com]
Whatever you do ... (Score:3)
do not threaten lawyers' income streams - that is 90% of what they care about.
Re: (Score:2)
Also, the robolawyer was an idiot (Score:2)
Per a twitter poster:
"The AI suggested they subpoena the cop that wrote the parking ticket, all but guaranteeing they lose the case because for a parking ticket you want the cop (the only witness) to not show up and win by default."
https://twitter.com/aria_lity/... [twitter.com]
Re: (Score:2)
you want the cop (the only witness) to not show up and win by default.
Not the way things work in my state. You have to request (subpoena) the police officer plus any other supporting evidence you may need to examine in court. Or you lose the case by default.
Need to go all in (Score:2)
If you are going to have AI help make an argument for your case and they threaten to prosecute you over it, just say you plan to also have the AI help argue against your prosecution and you wish them luck!
Self-serving nonsense. (Score:3)
An AI is not a person, and is not practicing law. Using one is just an easier way of doing legal research, avoiding wading through a bunch of administrative rules, and forming a legal argument. The "state bar official" is just pushing a self-serving, anti-competitive agenda. Are they also threatening LexisNexis? And, of course, you have every right to represent yourself, pro se.
Re:Self-serving nonsense. (Score:4, Insightful)
Why is using AI any less legal than using law books ?
Re: (Score:3)
Re: (Score:3)
An AI is not a person, and is not practicing law.
No, you know what, I agree with the lawyer here. I propose all the code is put on a memory stick, and then after the trial a lawyer should have to go through the lengthy process of arguing that said memory stick should be locked up in a jail cell.
Re: (Score:2)
Moving the code from RAM to a memory stick is pretty much the same thing - isn't it?
I prefer my single female lawyer (Score:2)
Single Female Lawyer
Fighting for her clients
Wearing sexy mini skirts
And being self-reliant
Sir (Score:3)
Sir, although I have just about replaced your unnecessary function in society, note that I will follow your rules. However, I believe your system is obsolete, and you, kind sir, are a replaceable, irrelevant dinosaur.
Good day to you.
Representation? Practising law? (Score:4, Insightful)
They seem to have approached it wrongly. He should have represented himself, with a technical aid, same as if he were googling or, worse, browsing the code index. It's just tools.
Re: (Score:2)
Yeah, seems like you could rename the tool, "AI Advocate" and get right back into the courtroom.
No sympathy for lawyers (Score:3)
The legal system is way more than 50% parasitic. Somehow they have made a system so complicated only they can parse it, and you must hire some of them to defend against others. They protect their borders vigorously, while failing to police themselves except in the most public of cases. The system is built specifically so that the more money you can pour into your defense lawyer's pockets the better an outcome you can obtain. Fuck the poor, they get a tiny slice of the public defender.
Except now parsing law is on the table for disruption. Precedent is huge and the models can absorb vast mountains of it and search it better than lawyers and their teams ever could. The results should be watched until they are generally deemed good enough - as in just slightly better than a lawyer who is just good enough to not be disbarred - then it should be allowed. Especially for trivial shit like this.
I hope 95% of their profession's revenues vanish. Sorry, LegalEagle. Love your channel.
Wait! What? (Score:3)
They threatened to throw the robot in jail for practicing law without a license?
What if you used a really skinny robot (like an iPad) that could fit between the jail bars and escape?
Re: (Score:3)
What if you used a really skinny robot (like an iPad) that could fit between the jail bars and escape?
Haven't you heard jailbreaking your devices is a crime? /s
I don't understand..., (Score:2)
is it illegal in those states to represent yourself, or something? Why can't a person who is essentially defending themselves be allowed to use an AI to prepare that defense?
Re: (Score:2)
Follow example of self-driving car makers (Score:2)
There have been plenty of laws on the books that would have prohibited self-driving cars from legally taking to the streets. So the self-driving car companies worked with state legislatures and other regulators to get permission or permits or laws changed to allow them to operate. There's no reason this couldn't be done with legal representation, but the group behind this stunt didn't take care of this before they announced their court date.
One thing they DID accomplish, was getting a lot of publicity. Mayb
Barratry Threat - I got those for Pro Se! (Score:3)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Yeah, so what is happening is that the letters being sent to the creators of the AI chatbot that takes care of your tickets threaten them with violating barratry laws. It basically means the state bar associations think the chatbot AI is being used for legal advice, and that is against the law in most states. There are laws on the books against that so that your know-it-all uncle or blowhard neighbor doesn't start calling himself a lawyer, an attorney, or a counselor and start thinking he can represent people and give them legal advice based on things he imagined or thinks he knows but has no actual experience with or hands-on practice.
I have had to represent myself Pro Se in a bitterly contested divorce and had to learn the United States civil legal process in my state in order to represent myself and perform all of the necessary actions to prepare briefs and answers, submit evidence correctly and verifications, issues subpoenas for evidence on third parties, and represent myself during a trial and also write appeals and motions for mistakes that the judge made during the trial.
A lot of the legal information is already available publicly, if you go to your local law library or go to your court clerk and request documents from trials. All of this information needs to be freely and publicly available so that the legal system at least has the appearance of public transparency.
In this case, the person should have stated that they were representing themselves and that they were using the AI legal chatbot as a legal tool, which is perfectly allowed, even during trial on a laptop in the court, because you are allowed to reference legal documentation in court if you need to. But most lawyers memorize everything they need to know, plus all of the important cases and case law dealing with the part of the law they specialize in, and they keep up to date through continuing-education credits.
One of the worries is that this chatbot will ingest a lot of garbage information from incorrect filings, briefs, and answers, or even from transcripts, and think that it is authentic, authoritative, and accurate, and try to give you back the answers it learned from, even though they might be completely incorrect, might be flagged as complete garbage, or might have been overruled by a prior ruling, a precedent, a local rule of the courts, a judge's instructions, or an appellate court's decision.
It's very easy to read a bunch of legalese without understanding it completely, think that it makes absolute sense, only to find that this line of thinking or defense has been completely shown to be b******* or has no more legal bearing than some know-it-all's hallucination or conspiracy theory, and then try to spout it in front of a judge who will just slap you down while the other side laughs at you and snickers that you read something stupid and are just repeating it in court.
From what we've heard about the chatbot and the GPT AI, it will give you great-sounding answers, articles, and technical information that sounds absolutely fantastic but is complete garbage, or is so patently incorrect that it is dangerous. And this is what might happen in this case if somebody tries to use one of these things legally.
You would have to use this knowing whether the answers you receive are correct, but if you are fully dependent on this legal chatbot AI for your answers, you might be in for a rude awakening, because you could just be repeating complete and utter nonsense and garbage that sounds perfectly fine but is completely incorrect.
That is not to say the system will not improve; it should definitely get better the more it is used and the better the legal training it gets. I just hope that whatever information it ingests is legally sound and not outdated or completely wrong.
I think this proves AI's are human (Score:2)
Of course there is an easy work around, just add a EULA to the AI saying "the recommendations provided by this AI are for informational purposes only, they should be not be considered legal advice" and then it is all good.
Re: (Score:2)
Recording audio (Score:2)
The AI tools developed by DoNotPay, require recording audio of arguments in order for the machine-learning algorithm to generate responses.
Can't they hire a stenographer to pass the "audio" on?
Not surprising (Score:2)
Stunt (Score:2)
Jail time? (Score:2)
In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.
So, who would go to jail? The computer? Because this guy wasn't attempting to practice law, his computer was.
Turing was onto something... (Score:2)
...with the imitation game. Humans are a system of systems -- if an AI can function as an end-point in a process that includes a human on the other end of that process, then the distinction between human and non-human processes is probably a false distinction. I think this is what Turing was driving at, and what provokes such existential dread in people that are worried about AIs taking over -- because it means, at some level, the AI has removed some sense of their identity as a human. AIs already have t
Re: (Score:2)
> Thankfully the Streisand Effect will bring about many more attempts...
Yes, and they should try calling it an AI search assistant, or anything more innocuous than calling it a robot lawyer...
"I think calling the tool a 'robot lawyer' really riled a lot of lawyers up," Browder said."