Man Sues OpenAI Claiming ChatGPT 'Hallucination' Said He Embezzled Money
OpenAI is facing a defamation lawsuit filed by Mark Walters, who claims that the AI platform falsely accused him of embezzling money from a gun rights group in statements delivered to a journalist. The lawsuit argues that ChatGPT is guilty of libel and alleges that the AI system "hallucinated" and generated false information about Walters. The Register reports: "While research and development of AI is worthwhile, it is irresponsible to unleash a system on the public that is known to make up 'facts' about people," his attorney John Monroe told The Register. According to the complaint, a journalist named Fred Riehl, while he was reporting on a court case, asked ChatGPT for a summary of accusations in a complaint, and provided ChatGPT with the URL of the real complaint for reference. (For those curious, here's the actual case [PDF] the reporter was trying to save time reading.)
What makes the situation even odder is that the case Riehl was reporting on was actually filed by several gun rights groups against Washington's Attorney General's office (accusing officials of "unconstitutional retaliation", among other things, while investigating the groups and their members) and had nothing at all to do with financial accounting claims. When Riehl asked for a summary, instead of returning accurate information, or so the case alleges, ChatGPT "hallucinated" that Mark Walters' name was attached to a criminal complaint -- and moreover falsely accused him of embezzling money from The Second Amendment Foundation, one of the organizations suing the Washington Attorney General in the real complaint.
ChatGPT is known to "occasionally generate incorrect information" -- also known as hallucinations, as The Register has extensively reported. The AI platform has already been accused of writing obituaries for folks who are still alive, and in May this year, of making up fake legal citations pointing to non-existent prior cases. In the latter situation, a Texas judge said his court would strike any filing from an attorney who failed to certify either that they didn't use AI to prepare their legal docs, or that they had, but a human had checked them. [...] According to the complaint, Riehl contacted Alan Gottlieb, one of the plaintiffs in the actual Washington lawsuit, about ChatGPT's allegations concerning Walters, and Gottlieb confirmed that they were false. None of ChatGPT's statements concerning Walters are in the actual complaint.
The false answer ChatGPT gave Riehl alleged that Walters was treasurer and Chief Financial Officer of SAF and claimed he had "embezzled and misappropriated SAF's funds and assets." When Riehl asked ChatGPT to provide "the entire text of the complaint," it returned an entirely fabricated complaint, which bore "no resemblance to the actual complaint, including an erroneous case number." Walters is looking for damages and lawyers' fees. We have asked his attorney for comment. As for the amount of damages, the complaint says these will be determined at trial, if the case actually gets there.
Seems absolutely impossible to prove damages. (Score:4, Interesting)
There's some theory here that he suffered financially quantifiable damage as a result of reading an inaccurate sentence about himself in the privacy of his own home? Come on, now.
When I asked GPT-4 about my band, it claimed I wasn't even in the band and that the songs were written by someone else. You don't see me leaping on the phone to complain to a lawyer. All that would get me is laughed at.
GPT hallucinates. Everyone knows that. It has about a million disclaimers to that effect.
Re: (Score:2)
Re: (Score:1)
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Not quite. It's not hidden in a wall of text in terms of service.
I mean, that's exactly what I said. It's front and center when you log in. I didn't realize it was at the bottom too. Yeah, I don't see this lawsuit going anywhere besides out the front door.
Re: (Score:1)
Re: (Score:2)
And? That's like arguing you don't owe the bank for your car payment because "nobody reads those contracts".
If you don't have a good excuse, you're wrong. But if the guy you're complaining about doesn't have a good excuse, you're right. And that's what happened here. The guy complaining about libel isn't the one who used ChatGPT. He's complaining that someone else did.
I'll wager he knows what he's doing. (Score:1)
Re: (Score:1)
Re: Seems absolutely impossible to prove damages. (Score:4, Insightful)
The question is, which is it?
Is it prone to hallucination, such that you should distrust anything it says and independently research? If so, what's the point of the platform? I could just skip straight to the research myself if I have to do it anyway.
Is it massively useful and can be directly used? If so, should the platform be held accountable when it makes damaging mistakes?
It's an interesting demonstration, but it seems the advocacy picks whichever perspective is most convenient when tricky questions are raised.
Re: (Score:2)
It's massively useful and can be directly used, but you should always fact-check it, like you should any source.
Re: Seems absolutely impossible to prove damages. (Score:5, Informative)
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
After a fair amount of messing with it, I have struggled to find 'useful'.
If it is asked to deal with data that could also be found within the first 3-4 links in a google search, it does fine. It does fine in a way that is novel and interestingly different than how the google search feels, but it isn't any faster.
If I ask it for a factual or functional response that can't be readily found at the top of a Google search, it gives junk responses. However, it's perhaps more frustrating as the responses *look* convincing
Re: Seems absolutely impossible to prove damages. (Score:2)
It's an incredibly powerful tool for working with symbols and language. Translations are incredibly accurate, and it can increase and decrease language levels from A1 to C2 at will. That alone makes the shares worth their weight in gold.
But an omnipotent oracle is not something it was made to be.
Re: (Score:2)
That's a cool take about translation, given how low-quality machine translation has been to date and how credible that use case is in this scenario (it *only* involves data in the 'prompt', without being asked to extrapolate to facts in its "knowledge base", which is where things get sketchy). For practical translation I could easily believe it doing a fantastic job, and for more literary-quality translation it would at least potentially be a good starting point (sometimes even human translation suffers for fai
Re: (Score:2)
It is pretty good at translating, but it is not perfect. E.g. Alice's Adventures in Wonderland, Chapter 1
"Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do"
"bank" gets translated as the money-loaning building, instead of something like a riverbank, which is incorrect, considering later remarks about "picking the daisies" and seeing rabbits.
If we are very strict about this, it could be possible that "bank" refers to an actual money-loaning building, and she is sitting
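A rough sketch of why this failure happens (using a hypothetical two-sense mini-lexicon, not any real translation system):

```python
# Hypothetical mini-lexicon: "bank" has two Spanish senses.
senses = {"bank": {"finance": "banco", "river": "orilla"}}

def naive_translate(word):
    # A naive system always picks the most common sense.
    return senses[word]["finance"]

def context_translate(word, context):
    # A context-aware system checks surrounding words for clues.
    river_clues = ("river", "daisies", "rabbit", "sitting", "sister")
    if any(clue in context.lower() for clue in river_clues):
        return senses[word]["river"]
    return senses[word]["finance"]

sentence = "sitting by her sister on the bank"
print(naive_translate("bank"))              # banco (wrong sense here)
print(context_translate("bank", sentence))  # orilla (riverbank)
```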
Re: (Score:2)
If so, what's the point of the platform
ChatGPT is not a research tool and never was. It nonetheless proves quite useful for a variety of things. Not every word committed to paper or screen needs considered, well-researched fact-checking.
And no, I didn't trawl through Google Scholar looking for articles to back up what I just wrote. I hope that in and of itself proves the point.
Re: (Score:2)
Re: (Score:2)
Re: Seems absolutely impossible to prove damages. (Score:5, Informative)
Re: Seems absolutely impossible to prove damages. (Score:2)
Re: (Score:2, Troll)
Note that it requires proving harm to their reputation, not their finances.
Note that you failed to read the first two words of the relevant sentence.
Re: Seems absolutely impossible to prove damages. (Score:2)
Re: Seems absolutely impossible to prove damages. (Score:2)
Re: (Score:2)
I think 4) may be a challenge, as the only incident in evidence was one where the persons proven to be privy to the misinformation seemed to treat it as untrustworthy by default. The incident specifically doesn't seem to have harmed the person, finances or not. There may be unknown parties that received similar misinformation and took it as fact, but that's not really known. Which is the tricky thing in a lot of these "AI might have harmed me" cases, as generally the specific incidents they point to did not
Yeah, but... (Score:2)
When I asked GPT-4 about my band, it claimed I wasn't even in the band and that the songs were written by someone else. You don't see me leaping on the phone to complain to a lawyer. All that would get me is laughed at.
Are YOUR bandmates armed to the teeth and kinda nervous right now due to "unconstitutional retaliation" and "other things" in the Washington Attorney General's investigation that they're suing the gubermint over?
One's reputation MAY not be the main concern here. Also, AK beats an axe.
Can't take at face value (Score:2)
Re: (Score:2)
Re: (Score:1)
So... how was anyone harmed? (Score:5, Insightful)
This is what I can tell from the complaint and the mess of words that The Register is calling a story:
1. A reporter (Riehl) tried to use ChatGPT to avoid doing work.
2. ChatGPT, naturally, returned a bunch of stupid lies about someone (Walters).
3. The reporter checked with someone who would know (Alan Gottlieb) to see if the claims were true. They were not.
4. The lie wasn't published anywhere.
5. ChatGPT only told the lies to the reporter.
6. The reporter only told the lies to one other person to see if the claims were true.
How did Walters even find out that ChatGPT told the reporter a lie? How was Walters harmed by this in any way? Only two people heard the false claims and both are fully aware that those claims are false.
The lesson here, for anyone who needs a reminder: ChatGPT is not capable of producing accurate summaries. It's just a language model. It does not understand. It cannot reason, consider, analyze, or deliberate. All it does is make predictions about what token should follow a sequence of tokens on the basis of probability.
If you're a reporter, lawyer, or anyone else, you can't trust ChatGPT to accurately summarize anything. It can't fetch part of a document for you. It can't check its own work for accuracy. That it sometimes looks like it can do these things is nothing short of miraculous, but it's just an illusion: the power of statistics and large amounts of data.
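To make the token-prediction point concrete, here's a toy sketch (a made-up bigram model over a few words; real LLMs are neural networks trained on huge corpora, but the core operation is the same):

```python
import random

# Toy bigram "language model": record which word follows which in a tiny
# corpus, then generate text by predicting likely next words.
corpus = "the complaint was filed the complaint was dismissed the case was filed".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # No understanding, no fact-checking: just pick a statistically
        # plausible next word.
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the case was filed the complaint was"
```

The output looks fluent, but nothing in the process checks whether it's true.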
Re: (Score:3)
Re: So... how was anyone harmed? (Score:3)
One interesting twist: while in this specific instance it is pretty clear that the person double-checked and was corrected, is it possible that it committed libel toward other users who would not have checked? I would suspect that no one else bothered to ask ChatGPT about this, but we wouldn't know. Libel is generally seen as being a problem through broad publication, but ChatGPT is individualized, so it has similar potential reach to a publication, but with output tailored to the user rather than duplicated
Can it actually be libel? (Score:3)
I'm reminded of how it was determined that the picture a monkey took of itself was not copyrightable, because a human didn't direct the taking of the picture; how AI "music" is in much the same boat; how AI "inventions" can't be patented.
Basically, can ChatGPT actually be the sort of thing that can "hallucinate"? Can it have any sort of "mens rea" in order to actually commit libel, any more than a parrot chaining together random words it has heard could?
Re: (Score:3)
If you teach a parrot to repeat libellous statements, you could be in trouble.
It's even worse for ChatGPT because they sell it as a service.
Re: Can it actually be libel? (Score:2)
I really don't see how any reasonable person could come to the conclusion that ChatGPT is telling the truth, any more than a parrot.
Re: (Score:2)
If you're teaching the parrot whole phrases, that's one thing. But try to identify the person or company that taught the AI to spout those specific sentences.
If you teach a parrot a vocabulary and IT chooses to put a specific set together, not taught to it, then it can't really be libel.
For example, let's say it learns people's names, and at some point it picked up "liar!" -- so it gets upset at a hired handler named Larry, and having learned Larry's name, it goes "Larry liar! Larry liar! Larry liar!" -- th
Re: So... how was anyone harmed? (Score:2)
Re: (Score:1)
The statements are still libelous, even if no actual damage was done
The statements are not libelous for two reasons.
1) The statements were not in fact made.
2) There was no libelous intent.
It is possible in such a case to sue for punitive damages even if the real damages are negligible.
Odds are good you will not even get to the trial part, because your case will be dismissed first.
Re: (Score:2)
1) For some definition of "made". The statement was generated in human style, in a way that would appear human-authored.
2) Libel doesn't require malicious intent; it can involve mere negligence. That would be the claim here: that they crafted a libelous statement without regard to whether it was factual or not.
However, the "harm" part of the criteria may be challenging for this plaintiff. As far as can readily be seen, the only one known to care enough to ask ChatGPT about it also doubted it
Re: (Score:2)
Re: (Score:2)
Quote: "5. ChatGPT only told the lies to the reporter."
FALSE. ChatGPT would reply exactly the same to WHOEVER IN THE EARTH asked about him... just like if it were published in any website accesible world-wide. ...and that's why this probably will end up in trial.
Re: (Score:2)
just like if it were published in any website accesible world-wide.
FALSE. For multiple reasons.
1) ChatGPT generates replies probabilistically. Different responses are possible for the same prompt.
2) There is no evidence that ChatGPT told the lie to anyone other than the reporter.
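A toy sketch of point 1, for the curious (made-up tokens and logits, not OpenAI's actual implementation):

```python
import math
import random

# Models emit a probability distribution over the next token, then *sample*
# from it -- which is why identical prompts can yield different replies.
logits = {"guilty": 2.0, "innocent": 1.5, "treasurer": 1.0}

def sample(logits, temperature=1.0):
    # Softmax with temperature, then draw one token at random.
    total = sum(math.exp(l / temperature) for l in logits.values())
    r = random.random()
    cum = 0.0
    for tok, l in logits.items():
        cum += math.exp(l / temperature) / total
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

# Same "prompt" (same logits), different answers across runs:
print([sample(logits, temperature=0.8) for _ in range(5)])
```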
just like if it were published in any website accesible world-wide.
Complete nonsense. Also, your computer has a spell checker. Learn how to use it.
Re: (Score:1)
Quote: "Complete nonsense. Also, your computer has a spell checker. Learn how to use it."
Seriously... your brain is so useless that it can't do these:
> accesible (Spanish) -> accessible (missing an "s")
and do not recognize "world-wide" as "worldwide"?
My spell checker works properly because I do write and read in multiple languages, whereas your brain is unable to add an "s" and remove a "-" rendering a properly written text as "complete nonsense". Poor human being...
Re: (Score:2)
FALSE. ChatGPT would reply exactly the same to WHOEVER IN THE EARTH asked about him... just like if it were published in any website accesible world-wide. ...and that's why this probably will end up in trial.
Not true. There is some intentional randomness that gets mixed in with the weights, and earlier tokens in the session could have influenced subsequent responses.
Re: (Score:2)
Re:So... how was anyone harmed? (Score:4, Informative)
It's not that these things have those capabilities and sometimes make mistakes, like a human, it's that they lack those capabilities altogether.
Imagine using a set of dice to do simple addition. Given some problem, you roll the dice and write down whatever comes up as your answer. If you get the right answer, would you claim that the dice can do arithmetic? If you get the wrong answer, do you blame yourself and roll again until you get something close?
Dice can't do arithmetic. ChatGPT can't either. Nor can it reason, analyze, consider, deliberate or anything like that. That's just not how these things work.
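The analogy is easy to make concrete (a toy simulation, nothing more):

```python
import random

# The dice-as-calculator analogy: "answer" 3 + 4 by rolling two dice and
# ignoring the question entirely.
def dice_adder(a, b):
    return random.randint(1, 6) + random.randint(1, 6)

trials = 10_000
correct = sum(dice_adder(3, 4) == 7 for _ in range(trials))
print(f"{correct / trials:.1%} right by pure luck")  # ~16.7%: 6 of 36 rolls sum to 7
```

The dice are "right" one time in six, and no one would say they can add.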
ChatGPT lacks "Mens Rea" (Score:3)
The "Mens Rea" state of mind is necessary for someone to be guilty of a crime (or tort? Any actual lawyers out there?)
Firstly, one can't prove that ChatGPT has self-awareness (introspection about its thinking and utterances) at all.
Secondly, it's right there in the term "hallucinated". If it does have a state of mind, it was in a delusional, hallucinating state of mind. Not guilty.
And holding a company responsible for all the thoughts and actions of its AI product?
a) Is that reasonable, given that an AI's exact behaviour is inherently unpredictable to begin with?
b) What if it is an open source AI? Who's legally responsible for that one?
Re:ChatGPT lacks "Mens Rea" (Score:5, Insightful)
A company can and should be held responsible if its machinery causes injury due to their negligence. For example, if a leak happens at your pesticide plant and kills people downwind, legal fact-finding will follow.
OpenAI is certainly aware that ChatGPT output is unreliable. It's good that they say so on their main page. A law professor I follow says that such a warning is not enough to avoid liability.
Upthread, the question was raised whether anyone was actually injured. Questions like that get suits thrown out all the time.
Re: (Score:1)
ChatGPT can't publish anything, it lacks this ability. It only responds to one-on-one inquiries.
To anyone in the public who cares to ask...
Re: (Score:2)
Sweet. Then, by your own argument, any and all gunshot victims can sue the manufacturer of the gun and the ammo, as well as the gun store.
All these cases of stupid people trying to sue OpenAI for something the chat generated are going to go nowhere. OpenAI isn't the one prompting the creation of these outputs. OpenAI isn't the one publishing or using the output inappropriately. Mark Walters and his attorney, John Monroe, are both dumb as fuck morons who will lose this case assuming it even goes forward to begin with.
Re: (Score:2)
A law professor I follow says that such a warning is not enough to avoid liability.
Getting your advice from those who can't often leads to disappointment.
Upthread, the question was raised whether anyone was actually injured. Questions like that get suits thrown out all the time.
Part of the standard is whether a reasonable person would believe what ChatGPT said. No reasonable person would believe it if they knew it was coming from ChatGPT.
Re: (Score:3)
No reasonable person would believe it if they knew it was coming from ChatGPT.
That's your standard of reasonable. What's the legal standard? I suspect if a large enough fraction of the population would believe it that would count as reasonable.
Fun fact: the "reasonable man" is apparently known as "the man on the Clapham omnibus". If you're ever in London, I invite you to join me for a ride on the number 37 bus (a bus I have caught many times which goes through Clapham) to eavesdrop on some conversations and decide whether uncritical belief would be rare or common.
Re: (Score:1)
The law professor is irrelevant. The courts decide the liability, not his opinion of a disclaimer. Besides, there are plenty of examples where disclaimers DO in fact limit liability: "beware of dog", "caution: contents are hot", the airbag warnings on the back of your sun visor. If those didn't limit liability, they wouldn't include them because of the connotations that the product
Re: ChatGPT lacks "Mens Rea" (Score:2)
Re: (Score:2)
Your example is useless.
More appropriate: you rent a car from Best Car Rental Company. You then run over 18 homeless people with said car. Is Best Car Rental Company responsible for those deaths? No. The person responsible was you.
Just like the only person who could possibly be responsible here is the journalist. And that's only if he actually published anything based off it. I have yet to see any possible legitimate cases against OpenAI with generated content. Absolutely none of them have any merit. The law
Re: (Score:2)
You'd be fine if I published a website where anyone could go to get a list of known paedophiles, and it just happens to tell everyone Bahbus is a paedophile?
If you published a site containing false information, I'd go after the publisher - you. ChatGPT and OpenAI do not publish anything.
What if Facebook did it and you never got another job interview ever again?
If a Facebook employee published the false information, I would go after the employee first.
ChatGPT doesn't publish anything. It is not capable of doing so. It doesn't make posts on Facebook or Twitter. These are all human actions. And humans are responsible for their own actions.
I don't blame the hammer for being used to bash in a person's skull, I blame the person wielding the hammer.
Re: ChatGPT lacks "Mens Rea" (Score:2)
Re: (Score:2)
Says the person who makes an unequivocal and nonsensical comparison.
Re: ChatGPT lacks "Mens Rea" (Score:2)
Re: (Score:2)
The "Mens Rea" state of mind is necessary for someone to be guilty of a crime (or tort? Any actual lawyers out there?)
Firstly, one can't prove that ChatGPT has self-awareness (introspection about its thinking and utterances) at all.
No, but the guilty mind is OpenAI's, not ChatGPT's. OpenAI know that ChatGPT will make stuff up, even libellous stuff about people, but they are happy to put it up there to use with that knowledge. Whether it's a crime or not is TBC, but I don't see how mens rea for OpenAI isn't there.
Re: (Score:2)
ChatGPT has set the stage, informing its user that what comes next stands a good chance of being fiction.
If the individual user then chooses to publish more widely the made up information, knowing it may very well be false, it would seem to me that's on them.
Re: (Score:2)
Yeah but that's different.
I don't think there's a question of mens rea, because that lies on OpenAI: they know it hallucinates, so they definitely are knowingly responsible.
There's the separate question of whether they are guilty of anything. Would a reasonable person treat the output of ChatGPT as real?
If the answer is yes, then I think the question of mens rea is clear enough.
Re: (Score:2)
An average person may very well treat the output of ChatGPT as real. While that is a sad commentary on either people or our education system, it shouldn't affect the legal question.
Re: (Score:2)
Reasonable person has a specific meaning in law.
It's closer to "average" than a high degree of rationality. It's also known, interestingly, as "the man on the Clapham omnibus". If you're ever in London, I invite you to join me for a ride on the number 37 bus (a bus I have caught many times which goes through Clapham) to eavesdrop on some conversations and decide whether uncritical belief would be rare or common. I have no idea, but for what it's worth I did overhear a group of three late-teenage Millwall supporters
Re: (Score:2)
Good jesus, do not engage in terminology they created to explain why the program is failing.
Just because I call a flat tire a broken leg doesn't mean it's analogous to a broken leg and should take weeks to heal. Just because they call it hallucination doesn't mean it IS hallucination. Even if we (gags a little) assume it has a state of mind. There's no reason
Re: (Score:2)
I see Language, with a capital L, meaning the collection of ideas, models, and experiences mediated by language. The amazing accomplishments of AI are in fact all the merit of Language. The model doesn't matter; any architecture would do. The smarts is
Re: (Score:2)
Re: (Score:2)
The alleged libel is present in the universe, as a simple statistical combination of a large corpus of the writings of humanity.
ChatGPT, using its statistical analysis of that large corpus, merely discovered this allegedly libelous combination of letters.
If there is responsibility, it is a distributed responsibility, among those humans who authored the source material.
Re: (Score:2)
Now do Full Self Driving (Score:2)
Perhaps the difference is that people have enough of a sense of self preservation to not trust corporate hype about AI cars, but when it's just trash talk about other people, sure, believe the ad copy and indulge your laziness.
Re: (Score:2)
It's probably more that you can find somebody who believes just about anything.
It's less "people" and more individuals - you can find people who fully believe in self-driving cars and want to buy/ride in one, and those who say "absolutely never". Most are probably in the middle.
In the case of weird lawsuits, you have a very low bar for filing one, so you get people on the edge who believe very unusual things, get upset very easily, etc...
You're not even right about self driving cars, I think. I mean, there
It's called Digital Diarrhea or Habitually Lying (Score:1, Interesting)
It's straight pulling shit out of its own ass. Liars and douchebags do it all the time. If you polish a turd enough, does it really get shiny?
Perhaps people will just start calling GPT a habitual liar or similar. Statistically it's accurate, and it's plain English.
You'd think it was trained on wikipedia too (has it been 6 months yet?):
"Pathological lying has been defined as: "a persistent, pervasive, and often compulsive
Re: It's called Digital Diarrhea or Habitually Lyin (Score:1)
ChatGPT cannot lie, by the very definition of the word "lie".
Re: (Score:2)
Re: (Score:2)
A lie is always intended because its task is to deceive. ChatGPT confabulates because it doesn't actually know anything and is not capable of wanting anything.
Re: (Score:2)
Like Wikipedia, there should be some controls (Score:2)
Right now it can't cite its sources, and has no introspection feature to correct itself reliably.
Until they put that in, this might be solved by putting in a stronger warning that it just makes stuff up, not just saying that it can be merely incorrect.
Re: (Score:2)
on what's written about living people, that just seems fair. Right now it can't cite its sources, and has no introspection feature to correct itself reliably. Until they put that in, this might be solved by putting in a stronger warning that it just makes stuff up, not just saying that it can be merely incorrect.
I disagree. Unlike Wikipedia, which is user-submitted content, there's no real way to define a living person algorithmically. The AI doesn't actually understand the difference between living and dead, unless you have a database of who is alive that it can cross-check. The real solution here is education and advertisement. The problem is people expect way more from this than it's capable of. That has to do with how they advertise it (and allow it to be advertised via word of mouth) and how people are
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Worf: Romulan Ale should be illegal.
Geordi: It is.
Worf: Then it should be MORE illegal.
So it should be more disclaimed.
Re: (Score:2)
Worf: Romulan Ale should be illegal.
Geordi: It is.
Worf: Then it should be MORE illegal.
So it should be more disclaimed.
I get the point, but yes, I think stronger wording might make a difference in a lot of cases, so this isn't really a good analogy. There will certainly still be people who ignore the warning, but seeing a "hey, this might be totally made up" a bunch of times might make that number way smaller, IMO.
It's like in civil engineering: if you have a section of road that produces way more crashes than anywhere else, you don't point to the clear warnings you posted, shrug your shoulders, say people are idiots and call
The only thing dumber... (Score:2)
than this man suing ChatGPT instead of the reporter who was too lazy to actually report, is the idea that AI "hallucinates". I hope to great heaven above this term doesn't catch on, because it's not hallucinating. That's something living creatures do, and saying that implies that AI is thinking and living. It's awful terminology. Call it what it is: AI makes stuff up randomly. Now it's less "mysterious and awe inspiring" (we made AI so powerful and living it even has dysfunctions like us) and more "just
Re: (Score:2)
(Don't) Kill all the lawyers (Score:2)
Hallucination is a weasel word (Score:2)
Glad someone's willing to bring this question up (Score:2)
AI seems like half a brain in this regard (Score:2)
We've all had fantastic ideas. Step 1 of the creative process is like, "Hey, if I strap a couple planks to my arms and jump off the barn, maybe I can fly". A few stupid kids have done that, but many will run it past their friends and decide it's not a good idea. The truly smart children will ask questions about what it really takes to fly, and quickly find out that's not going to work.
AI is the kid who straps those planks right on and jumps.
It's got the creative right side of a brain, but not the analytical
Re: (Score:2)
The Future of AI and Law (Score:2)
Although I don't think this will end up in court, it does seem to raise a lot of good issues for future AI systems and also for future AI robot systems. Imagine what kind of lawsuits will start to come out of people when a robot does something they don't like.
And eventually a robot will kill a human, and that trial will be quite an attention-getter.
(Just for reference, this has been discussed before in Outer Limits episodes, Twilight Zone episodes, the iRobot movie, Asimov books, The Animatrix shorts.) So not new