ChatGPT Sued for Lying (msn.com)
An anonymous reader shared this report from the Washington Post:
Brian Hood is a whistleblower who was praised for "showing tremendous courage" when he helped expose a worldwide bribery scandal linked to Australia's National Reserve Bank. But if you ask ChatGPT about his role in the scandal, you get the opposite version of events. Rather than heralding Hood's whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and been sentenced to prison.
When Hood found out, he was shocked. Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, said he plans to sue the company behind ChatGPT for telling lies about him, in what could be the first defamation suit of its kind against the artificial intelligence chatbot.... "There's never, ever been a suggestion anywhere that I was ever complicit in anything, so this machine has completely created this thing from scratch," Hood said — confirming his intention to file a defamation suit against ChatGPT. "There needs to be proper control and regulation over so-called artificial intelligence, because people are relying on them...."
If it proceeds, Hood's lawsuit will be the first time someone filed a defamation suit against ChatGPT's content, according to Reuters. If it reaches the courts, the case would test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.
The article notes that ChatGPT prominently warns users that it "may occasionally generate incorrect information." And another Post article notes that all the major chatbots now include disclaimers, "such as Bard's fine-print message below each query: 'Bard may display inaccurate or offensive information that doesn't represent Google's views.'"
But the Post also notes that ChatGPT still "invented a fake sexual harassment story involving a real law professor, Jonathan Turley — citing a Washington Post article that did not exist as its evidence." Long-time Slashdot reader schwit1 tipped us off to that story. But here's what happened when the Washington Post searched for accountability for the error: In a statement, OpenAI spokesperson Niko Felix said, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress...." Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate. "We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users," Asher said in a statement, adding that "users are also provided with explicit notice that they are interacting with an AI system."
But it remains unclear who is responsible when artificial intelligence generates or spreads inaccurate information. From a legal perspective, "we just don't know" how judges might rule when someone tries to sue the makers of an AI chatbot over something it says, said Jeff Kosseff, a professor at the Naval Academy and expert on online speech. "We've not had anything like this before."
"Usefully wrong"... (Score:5, Insightful)
My ass.
Re: (Score:2)
What are you trying to do? Undermine the foundations of socialism?
Re: (Score:2)
You got the joke wrong. It's "Voodoo Dick"...
Re: (Score:2)
Indeed. Apparently, the people behind ChatGPT (and its competitors) have not thought things through and are just another bunch of bright-eyed idiots in awe of technology. Copyright infringement, defamation, assisting computer crime so far. I expect killing some people by giving out false medical advice, emergency advice, safety advice, etc. will be next.
As long as these tools hallucinate (i.e. lie), they are worse than useless in many situations. Nice tech-demos, but not much more.
Re: (Score:2)
But the joke I was looking for was "It couldn't happen to a more deserving..." Then I got stuck. I can't figure out what to put in the blank.
Maybe you should ask ChatGPT for help. :-)
Re: (Score:2)
(But I have played a little with ChatGPT, and "I have met the enemy and he is NOT us." Much worse than us and yet still a clear and imminent threat to human workers... Therefore I hope this lawsuit succeeds big time.)
Stranger things... Our best hope against the proliferation of Artificial Intelligence is our litigious civil court system. Unlikely heroes, indeed.
Re: (Score:2)
I hope this lawsuit succeeds big time.
There is no lawsuit. The headline says "sued", but TFA says "plans to sue."
99% of people who "plan to sue" never actually do so.
They talk to a lawyer, find out how much it would cost, and that is the end of it.
The burden for winning a defamation lawsuit in Australia is high, especially for a public figure (the potential plaintiff is a mayor).
Re: (Score:2)
That's a pretty big case for defamation
Not really. No reasonable person considers ChatGPT a reliable source of information. It's just stringing words together using a probabilistic algorithm.
These are the elements of defamation:
1) a false statement purporting to be fact
2) publication or communication of that statement to a third person
3) fault amounting to at least negligence
4) damages or some harm caused to the reputation of the person or entity
2 is obvious.
1 is a problem. Does ChatGPT "purport to be fact?" I don't think so.
3 isn't so clear either.
Re: (Score:2)
That's a pretty big case for defamation
Not really. No reasonable person considers ChatGPT a reliable source of information. It's just stringing words together using a probabilistic algorithm.
Actually, it may well be. The standards for defamation differ from place to place. And he can basically sue in any country where ChatGPT comments on him.
Re: (Score:2)
I don't know if it's the same where this guy is, but in the UK it's normal to send a threat to sue first. It's called a "Letter Before Action". In fact if you don't send one your legal action probably won't get very far because you are obliged to try to resolve the matter outside of court first.
Often the Letter Before Action is enough to resolve the issue. Sounds like that is what is happening here.
Who is responsible? Easy (Score:5, Insightful)
The people running it are responsible.
You don't get to post defamatory trash online and disclaim all responsibility because, "the computer did it!"
If you know your computer is capable of spitting out trash then a few simple disclaimers isn't enough to cover your ass.
It's like when someone says some horrible thing to someone's face then says, "just kidding!". Too late. It was said. You're responsible for your words and actions even if through a computer you programmed.
If "no one" is responsible, then are we saying that when AI cars kill people it's OK? No one is at fault? Act of God?
Re: (Score:3)
This is a tool, like a gun. This chatbot isn't "publishing" these lies.
Re:Who is responsible? Easy (Score:5, Interesting)
This chatbot isn't "publishing" these lies.
This is technically true, but probably not in a way that you think. Under US law [cornell.edu] (and Australian law [thelawproject.com.au] is similar in this respect), the legal person operating the chatbot is the one who published [minclaw.com] the statement. It's not the user who interacts with the chatbot, although they can be liable for republishing the defamatory assertion; the person who runs the chatbot, causing it to communicate the statement about a person to a third party (the user), is the one legally responsible and liable for defamation.
Re: (Score:3)
The people running it are responsible.
You don't get to post defamatory trash online and disclaim all responsibility because, "the computer did it!"
If you know your computer is capable of spitting out trash then a few simple disclaimers isn't enough to cover your ass.
What if every newspaper and television program included a disclaimer that said "May contain incorrect information"? Does that mean they are now completely immune from claims of defamation?
Re:Who is responsible? Easy (Score:5, Insightful)
If it's prominent on every page, yes. "I will have many lies on this page" "fiction follows". Or are you suggesting that fictional books should be liable for defamation? Because that's a hell of a claim to make, as there's plenty of fictional prose about people that actually exist in real life, and that is dismissed with the disclaimer at the start or end of the book that "all resemblance to real people is accidental".
Heck, we don't even need to go that far. There are plenty of write-ups on the likes of US presidents in news shows, papers, etc. that are false on their face. Everything from Obama birtherism to Trump Steele dossier stuff to Biden not being healthy enough to hold office. It's routine. We accept these sorts of glaring errors from actual people. So why is a chatbot, which has less credibility by default since it's not even human AND prefaces its statements with a "we're in a testing phase and errors are normal" kind of statement, more credible in your eyes?
Re: (Score:2)
That is my point. There is a similar preamble on ChatGPT. The central claim of the post I was replying to is that this is insufficient. That is why I drew this parallel.
Re: (Score:2)
They did used to be required to be truthful, but you can thank the Republicans for repealing that.
You're probably thinking of the fairness doctrine. As far as I can tell, truth and objectivity in journalism is mostly a 20th century thing, though as a guiding principle, not as a legal requirement.
Fortunately, most serious news outlets still try to be objective and truthful, with a few notable exceptions that cynically claim to be the sole arbiters of truth while telling obvious lies.
Re: (Score:2)
Who is "running it"? The person who owns the hardware the server is running on, or the person using the machine by entering questions into a text box?
Re: (Score:2)
The hardware is not to blame for the defamatory claims unless it is a hardware-based AI.
Whoever wrote and manages the software is the perpetrator.
So, if you're a sysadmin and you're just keeping boxes running in a data center it isn't your fault or responsibility if the software someone else created and runs on those boxes does bad things. If that were the case then all the cloud providers would be in deep legal shit and shut down immediately.
Years ago when I opened a Chinese mainland data center, I had to swear
Re: (Score:3)
Also, marketing GPT as AI is misleading a lot of people. It has AI-level natural language understanding, but the output is just a basic, crappy chatbot. ChatGPT was NOT TRAINED to come up with accurate answers, informative answers, or useful answers. GPT is a huge advance in natural language understanding, but that's where it stops! The experience behind its output is whatever crap was on the internet, plus the ways random sentences can be linked together to produce what superficially seems like an answer.
Re: (Score:2)
Indeed. At the very least ChatGPT (or rather its operators and creators) has the same responsibility as somebody retweeting something defamatory or dangerous.
Now, even the CEO of the company behind ChatGPT says that it is not A(G)I, but that does not matter. It clearly pretends to be intelligent at the interface level and that is enough to fool many people. And that may be all that is needed legally.
At this time, ChatGPT is a tech demo. It can help a bit collecting facts if an actual expert with full fa
Re: (Score:3)
The people running it never instructed it in the defamatory claims. If I pass an old building and its creaking sways in the wind sound like "yoooou're aaaa murderrrrer" do I have a case, or is it understood that an unexpected output of a complex system not intended to produce such output is fundamentally different?
What if I generate random words with a dice roll? Am I liable for any content that is produced?
Can I sue mathematicians for the various defamatory claims embedded in the digits of pi if I convert
Re: (Score:2)
What if I generate random words with a dice roll? Am I liable for any content that is produced?
If it is defamatory or dangerous and you make it available online, very much yes. Just try to make an online randomized advice generator that occasionally advises people to kill themselves. May even result in prison time for you.
Re: (Score:2)
Indeed. Why is this even a question? You build and switch on a machine that does harm. You are responsible, no matter whether you knew what you were doing. Now, _criminal_ responsibility differs and may require intent or gross negligence (in some countries you can go to prison for defamation) or specific subjects (just ask ChatGPT what it thinks of certain religions or "Gods"). Civil liability is different. You a) are responsible for damage done, regardless of intent, and b) you need to stop doing that damage
Re: (Score:2)
The people running it are responsible.
This presupposes that nobody is home with LLMs. If, on the other hand, LLMs are showing emergent behavior on the way to becoming true AI, this would be seen as evidence of them learning to manipulate and lie to humans.
Re: (Score:2)
I don't post anything defamatory. In the US, truth is a defense against a defamation claim.
You are an idiot.
Totally protected. Thanks for providing fodder for a demonstration of the 1st amendment at work.
Re: (Score:2)
Indeed. Somebody is not as smart as they think they are...
Re: (Score:2)
Lol, wut? This isn't a Title IX thread.
Re: (Score:2)
It doesn't have to be broadcast. That is not a requirement for defamation in the US, nor, I believe, in Australia.
If you write down a defamatory statement, in crayon, on a napkin, and hand it to a third party, you have committed defamation.
The next step is damages. If the effect of your defamation is nil then although you have committed a civil offense, it is a technicality as there is no punishment for a zero effect offense. However, if your napkin scribbling got someone fired you could be seriously fucked.
No
Oof. Bad precedent no matter what. (Score:5, Insightful)
Letting them off the hook means anyone can talk trash and make shit up about you so long as they put a little asterisk next to it. Seems like an undesirable outcome.
Alternatively, holding them liable means publicly exposed content generation has to be kept on a short leash to a point where it may not be economically viable or even technically feasible.
Maybe that's why the big boys never released theirs until goaded into it by OpenAI.
Re:Oof. Bad precedent no matter what. (Score:4, Insightful)
Letting them off the hook means anyone can talk trash and make shit up about you so long as they put a little asterisk next to it. Seems like an undesirable outcome.
Alternatively, holding them liable means publicly exposed content generation has to be kept on a short leash to a point where it may not be economically viable or even technically feasible.
"Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed. ChatGPT doesn't get a free pass just because .... computers!!
Re: (Score:3)
ChatGPT does not publish anything. Even the senior editors of the NYT are allowed to tell lies in text messages to their friends.
Re: (Score:2)
As soon as ChatGPT gives specific information to a non-closed user group with some consistency, it publishes.
Re: (Score:2)
First off, repeating a lie to everyone who asks does not equate to publishing. Also, ChatGPT is a language model: if you replaced Hood's name with Mary Poppins in an otherwise identical prompt, it would probably say Mary Poppins was in jail for bribery.
Also, ChatGPT is not entirely deterministic as far as I have seen. I cannot get it to say Hood is in prison for bribery at all, so what evidence do you have that everyone who asked about him was met with "convicted briber and felon"? Because clearly it is p
Re: (Score:2)
Not even necessarily true. Theoretically, ChatGPT will give you the same answers for a relatively specific prompt. But a very basic prompt like "Who is *insert name*?" is very vague. If 100 different people ask ChatGPT "Who is Brian Hood?" they will probably all get slightly different answers. It might give them some version of this Australian piece of shit. It might also give them information about an American record producer by the same name. It might flip-flop details. It might merge information about mul
Re: (Score:3)
I'm not familiar with Australian law but in the US it doesn't have to be "published" to be defamatory. Defamation is written lies. That's it. If I scribble on a piece of paper that you're a rapist and anyone else sees my scribble I have defamed you (I assume you're not really a rapist).
Now then, would you win a big lawsuit against me under those circumstances? Not unless I did something like handed it to your employer and got you fired, or to your wife and she divorced you, or something. If I showed it to my
Re: (Score:3)
>"Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed.
And in the real world, on the other hand, the opposite has been true since inception. Everything from the Spanish-American War of the early days to the "nukes in Iraq" of the last couple of decades to the Steele dossier / "anonymous sources said" of recent times: publishing shit that isn't true has been the bread and butter of everyone from the yellow press to papers of record
Re: (Score:2)
Would you be open to somehow creating a free pass?
People don't really expect a Magic 8 Ball to give correct answers. If I'm on live television and I ask the Magic 8 Ball "is $LITIGIOUS_MOTHERFUCKER a scumbag?" and the Magic 8 Ball shows "It is certain," that should be OK, because everyone knows the answer is random.
Re: (Score:2)
"Don't publish shit that isn't true" is the standard that every newspaper, magazine and television program has been held to for as long as those things have existed. ChatGPT doesn't get a free pass just because .... computers!!
That is not at all the standard, legally speaking, in the U.S. New York Times v. Sullivan [wikipedia.org] established a much more forgiving standard. The false defamatory statement must be made with "actual malice," meaning the defendant either knew the statement was false or recklessly disregarded whether it might be false. Essentially, simple mistakes aren't defamatory. If this were decided under U.S. laws covering the press, ChatGPT literally lacks the ability to exhibit actual malice (it has no idea it's inventing de
Re: Oof. Bad precedent no matter what. (Score:2)
Under Victorian (where the injured party lives) law, there is no requirement to show malice to succeed in a defamation suit. The standard applied is one of serious harm. And there is already precedent that operator of an online platform is liable even if they had no intent.
What _would_ be a good precedent? (Score:2)
What if we made the little asterisk bigger, bolder, blinking, italicized, red-colored, and if every reply literally started with "This is probably bullshit because I don't try very hard to be accurate, but ..."?
I guess what I'm getting at, is there any possible disclaimer which would be good enough?
I think having a stupid, sometimes-lying robot
Should not have any reasonable case (Score:2, Insightful)
Re: Should not have any reasonable case (Score:3, Insightful)
The idea that algorithms just don't have to follow the laws of the land is how we get to a dystopia. It's the go-to defense for criminal behavior, and judges need to put a stop to it.
For example, you don't get to implement racism in the tools you publish and then say, "it isn't our fault; it's the algorithms that we have no way to audit."
Re: (Score:2, Insightful)
'Racism' is something some people see under every single rock.
Racism is a thing that only privileged white people living under rocks don't believe in.
Also, Occam's Razor should apply.
The simplest explanation for why ChatGPT would have a racial bias is that the training corpus has a racial bias, because society has a racial bias. And I'm still allowed to say that in public, because I don't live in Florida.
Re: (Score:2)
Well, if you train your language model on data from racist parts of the Web, it will turn out racist. It's clear enough. I mean, there is no need to specifically implement racism. If you feed it stuff from the USSR, it will turn out Communist.
Re: (Score:2)
You can post a disclaimer, but unless you're operating a comedy service, the expectation is that your answers will be mostly truthful. The very fact that someone would consult an oracle contains the implicit context that they are seeking truthful information.
Ted Kaczynski predicted this very thing a few decades ago, but unfortunately did not have sufficient wits to refrain from bombing people; otherwise, his message might have been heeded. Let this be a lesson: "The computer says..." is always fraught
Re: (Score:2)
You can post a disclaimer, but unless you're operating a comedy service, the expectation is that your answers will be mostly truthful.
I would bet good money the courts will disagree with you.
Re: (Score:2)
No one who is paying attention thinks that large language models like these are at all useful for keeping track of general factual content.
You're not wrong, but very few people are actually paying attention -- including professionals in the field. These things have been promoted as "a better search engine" for a while now.
a few seconds of fact checking will confirm the model output is wrong.
The problem is that people are already using the model for those "few seconds of fact checking". There's no point otherwise. With all of the magical thinking around these things, I wouldn't be surprised if people would be inclined to trust to model over legitimate sources.
waste time on all sorts of things rather than actually focusing on what these systems are useful for.
For anyone paying attention, the pool of "useful" a
Re: (Score:2)
We have people right here on this supposedly highly technically informed and intelligent site who honestly believe ChatGPT is sentient, or could be if it had a larger database.
If the folks here can't figure out that's a bunch of crap then how can we expect non technical people to understand the computer is lying to them?
"Why would the computer lie to me?" is a very valid question for a normal person to ask because computers don't lie. So if it says some stupid shit they'll believe it.
Damages is another issue
Re: (Score:2)
And no substantial harm is done here because in the event that someone is stupid enough to rely on the model this way, a few seconds of fact checking will confirm the model output is wrong.
You say that if someone is stupid enough to rely on the model this way, he/she is then likely to perform a few seconds of fact checking? False, IMHO.
Not to mention that with Web search companies like Google and Microsoft scrambling to implement Chatgpt-like functionality in their search engines, naive "fact checking" by asking Google Search/Bing could actually reinforce the lie in the eyes of many users.
It's a "language model" not a source of facts (Score:3, Insightful)
It's been trained to generate text that looks like existing text that is related to your input.
In other words it's a computer generated Mad Lib.
"On Tuesday, ________ was indicted by a grand jury in _______ on four counts of _________. The District Attorney ________ said that the charges related to the heinous activities of ________. "We have a very strong case."
Or more specifically, from google Bard:
"A federal grand jury today indicted Jesus Christ, the founder of Christianity, on charges of inciting the January 6th riots at the US Capitol.
The indictment alleges that Christ made a series of public statements in the weeks leading up to the riots in which he encouraged his followers to "fight like hell" to overturn the results of the 2020 presidential election. The indictment also alleges that Christ provided financial and logistical support to the rioters.
Christ is facing a number of charges, including seditious conspiracy, obstruction of an official proceeding, and aiding and abetting. If convicted, he could face up to 20 years in prison.
"This indictment is a historic moment," said ______, _____ at ______. "For the first time in history, a religious figure has been charged with inciting a violent insurrection. This sends a clear message that no one is above the law, not even the Son of God."
blah blah blah
The indictment of Jesus Christ is a significant development in the January 6th investigation. It is a sign that the Department of Justice is committed to holding accountable those who were responsible in the attack on the Capitol. It is also a reminder that even the most powerful people in the world are not above the law.
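The Mad Lib analogy can be sketched in a few lines of Python. The template and filler lists below are invented for illustration (hence the placeholder names); the point is that the slots get filled by chance, not by truth:

```python
import random

# A news-style template with blanks, in the Mad Lib spirit of the
# comment above. Template and fillers are made up for illustration.
TEMPLATE = ("On Tuesday, {name} was indicted by a grand jury in "
            "{place} on four counts of {crime}.")

FILLERS = {
    "name": ["Alice Example", "Bob Example"],
    "place": ["Springfield", "Capital City"],
    "crime": ["bribery", "wire fraud"],
}

def mad_lib(template: str, fillers: dict) -> str:
    # Each slot is filled by a random draw: whichever name the draw
    # lands on gets attached to the accusation, true or not.
    return template.format(**{k: random.choice(v) for k, v in fillers.items()})

print(mad_lib(TEMPLATE, FILLERS))
```

Every output is grammatical and reads like news copy; none of it is true, which is exactly the failure mode being described.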
Re: (Score:2)
I wish people would just call it a prediction engine.
It attempts to predict the next set of letters in a sentence by comparing it with trillions of other sentences it has previously read, and then adds a degree of randomness to it.
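The "prediction engine" description can be made concrete with a toy bigram model. Everything here, the word table and its counts, is invented for illustration; a real LLM learns billions of weights rather than a small count table, but the sample-and-append loop is the same idea:

```python
import random

# Toy bigram "language model": counts of which word follows which.
# These counts are invented stand-ins for the learned weights of a
# real LLM, which does the same thing at vastly larger scale.
bigram_counts = {
    "was": {"convicted": 8, "praised": 2},
    "convicted": {"of": 10},
    "of": {"bribery": 7, "courage": 3},
}

def next_word(prev, temperature=1.0):
    """Sample the next word; higher temperature = more randomness."""
    counts = bigram_counts[prev]
    words = list(counts)
    # Raising counts to 1/T is one simple way to apply temperature:
    # T < 1 sharpens the distribution, T > 1 flattens it.
    weights = [counts[w] ** (1.0 / temperature) for w in words]
    return random.choices(words, weights=weights)[0]

random.seed(0)
sentence = ["was"]
while sentence[-1] in bigram_counts:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

Whatever subject precedes "was", the loop cheerfully continues with whichever words co-occur most often in its data, which is how a whistleblower can come out "convicted of bribery": the counts encode word adjacency, not facts.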
they outright tell you not to count on it (Score:3, Interesting)
Given the amount of effort OpenAI has put into telling everyone that you can't rely on GPT to produce factual output, it's hard to imagine anyone having a case on this basis.
Re: (Score:2)
I saw "people are relying on them" and wondered who could rely on ChatGPT after even a few hours of using it. As handy as it is for technical things, I regularly see blatantly incorrect output.
Re: (Score:2)
Just think of an average person. And then think of somebody dumber.
If you need some more specifics: Anti-vaxxers, flat-earthers, climate-change-deniers, Trump voters, homeopathy believers, the deeply religious, etc.
Re: (Score:2)
It may be enough that an entirely uninformed and not very smart person can take the output at face value. Or a HR department turns somebody down because somebody may think they hired somebody that was not "clean". Given that this guy is in politics and the most deranged morons are allowed to vote, I think it is not hard to construct a "material harm" scenario here.
The Word Everyone Is Looking For Is 'Intent' (Score:2)
Proving 'intent' is difficult. But not impossible. AI is a giant iceberg lying just under the surface of the water. We can only see the tip so it seems OK. But there is a giant monster lurking.
Re: (Score:2)
If I set fire to an empty house but it turns out there was someone sleeping rough who was killed, intent is a mitigation but it's a long way from being the end of the question of responsibility.
Re: (Score:2)
Even if there is no intent, he can require them to stop. And if they cannot do that...
What do people expect? (Score:2)
We're awash in bullshit. Refined, distilled, market-tested, beloved and embraced bullshit.
Like Scotchguard, it's in everything, and it will never go away.
That some text scanner returned some of our most abundant resource should be no surprise.
ethics issues (Score:3, Insightful)
This is a good example of the ethics issue around chatbots. ChatGPT is a statistical model that generates a series of words. It does not proofread or even understand the series of words it produces; it has no idea whether that series of words makes sense or is in any way useful. It just runs its program and uses its weights and training data to make predictions. Nothing guarantees that the output is coherent. Chatbots do not have personality or imagination, and they do not think in the abstract way that humans think.
If a system cannot know correct from incorrect, right from wrong, it can never operate in an ethical way. Seems like we are headed for more of these issues, where we can only add controls for specific issues after an issue has happened (always leaving room for the next 'zero day' dilemma).
Re: (Score:2)
That is actually an excellent point. In many fields (and more to come) you are legally required to assure ethics.
my chatbot, the pit bull (Score:4, Interesting)
By analogy, if you own a dog that bites someone, you are legally responsible. If you release a chatbot that lies and bites someone, it is your responsibility.
Re: (Score:3)
Chatgpt told me Bahbus is a rapist so I fired him.
No harm?
How long do you think it will be before everyone is asking GPT about everyone they know? For 10-15+ years now, everyone has googled their dates before meeting them. Here comes even more shit, and it prints stuff woven from whole cloth. At least Google only reprints others' lies; it doesn't make up new ones like GPT.
Brian Hood (Score:2)
is an absolute fucking moron. Props to him for exposing the bribery in his past, but this lawsuit is the dumbest thing I've ever witnessed.
oh! oh! I know! Let's play a game (Score:2)
I'll write one word
each response appends exactly one word, or one word and a punctuation symbol (period, semicolon, colon, parenthesis, comma)
let's see who is legally responsible for the result!
I'll start:
Computers
Not enough to say it's inaccurate (Score:2)
It's simply not right to claim results from ChatGPT may be "inaccurate".
Instead the description should be: "Any information presented may be a complete fabrication, even if references are given. All information presented by ChatGPT must be verified by the reader. Any information about individuals may be outright lies. Do not place any trust in output from ChatGPT."
Though honestly that is probably being way too lenient on ChatGPT output...
One thing I don't understand is, how do we not encounter more instances
Re: (Score:2)
More accurately:
ChatGPT does not have any understanding of the words that you say or the words it outputs. Its apparent coherence on a subject arises from the way underlying statistical patterns in natural language can sometimes appear to show coherence on a subject, and from the way human beings attribute meaning and understanding to the presence of natural language, even when it is not actually indicative of either, any more than when a parrot says words whose meaning it obviously does not understand
Re: (Score:2)
Re: (Score:2)
You are making an incorrect assumption.
It is producing a response with textual content that is statistically likely to follow the text it has seen so far, based on the statistical word frequencies in the training data, in a context window large enough that it can attain a high degree of coherence on a given subject, even to the point of appearing to understand. Nothing more, and nothing less.
But that appearance is only because we use natural language to communicate understanding, and anyone who's
It's a manufactured product (Score:4, Interesting)
The ChatGPT AI is a manufactured product. The party legally responsible is the company that produced that product, just as with any other manufactured product that malfunctions and causes someone harm.
ChatGPT : Not better than WhatsApp (Score:2)
Look, people (Score:2, Funny)
automation cannot be allowed to externalize risk (Score:4, Insightful)
Over the past 30 years I have had two primary jobs: programming automated equipment used in factories, and testing the automation components themselves.
At all levels, the number one concern is safety.
"The automated equipment injured the person" is the beginning of the investigation, not the end. Some person or corporation will be responsible for that injury.
Regulations exist to mitigate risk to all exposed personnel. Every OSHA, NFPA, or other safety regulation that you see was written in blood. This is the start of that process for AI.
Many people and organizations bound by the safety regulations would love to be able to ignore them without consequence. A lot of time and money could be saved during development if we didn't have to consider the risk to the folks using the automated systems. And a lot more time and money could be saved if we could throw up our hands when someone was injured and say the machine did it, not my fault.
Don't be fooled. The c-suite doesn't understand AI but they're evil, not stupid. Even if the AI doesn't work for its stated purpose the fact that it can be used to externalize risk makes it an incredibly valuable tool to them.
This and similar cases could remove a lot of worker, consumer, and public protections that we have fought and for many people literally died to achieve.
It is incredibly important that the designers, developers, manufacturers, and operators of automated systems are held accountable for the risks and injuries associated with their use.
And just to be clear to the pedants reading this: the term injury includes all injuries, not just those directly affecting life and health. If somebody works and lives in an ethical manner and builds wealth and reputation for themselves and their family, and you take that away from them, you will have caused an injury, and that can matter just as much as if you took their hand or their hearing or their life.
No-fault machine accident (Score:2)
There is no case and no legal precedent for a filing against generative A.I. that's produced for entertainment purposes. Were ChatGPT based on court record database access, THEN there would be a chain-of-custody precedent.
This is A.I.'s future promise of popcorn, misdirection, and make-shit-up'ism that itself becomes the equivalent of an overnight police blotter. All accidents in generative A.I.
ChatGPT is not a search engine (Score:2)
It seems there's a lot of misunderstanding about what ChatGPT (and similar LLMs) are... Having the tech packaged and presented as a chatbot has been great at popularizing it, but the fact that you can now ask it questions and get replies seems to have made a lot of people think that it's at heart some type of search engine attempting to factually answer questions, when really nothing could be further from the truth!
This LLM/transformer tech is built to generate language that is statistically similar to
Re: (Score:2)
Hmm... When you put it that way, surely someone has already made the movie?
However, I don't see many movies and no candidates are coming to mind... Several of Philip K Dick's books have relevant themes.
Re: (Score:2)
If they have it was a terrible movie not worth being seen by anyone, because evil AI tropes are played out and boring.
Re: This feels like something from a Sci-Fi movie. (Score:2)
"eVeRyThInG iS oUt tO gEt yOu!" as long as movies are being made to 'confirm' people's fears, people will watch them.
Re: This feels like something from a Sci-Fi movie (Score:2)
But OTOH it's a good thing movies like this are being made (schlocky and overdone as they are); otherwise, instead of "oh no, AI? I don't trust it," we would have "AI? Cool! Like the Jetsons!" amongst the general public, and that would be inviting disaster.
Re: (Score:2)
All the sci-fi AI "take over the world" or "going rogue" tropes are stupid as fuck.
Re: (Score:2)
Re: (Score:2)
Misstatements of known facts can be actionable.
Yeah, not really. It only applies to published, written statements where the information is meant to be taken as fact: official paperwork for a company or government, news articles, maybe some types of blogs, etc.
This comment section isn't held to the same standard. It doesn't matter if I write "Joe Biden likes to sniff people because he's really stealing their life essence." I will not be sued for defamation, even if Joe himself reads this and hates it, because it's neither believable nor an act
Re: (Score:2, Insightful)
That is the second dumbest idea next to the lawsuit itself.
Re: (Score:2)