

OpenAI Is Scanning Users' ChatGPT Conversations and Reporting Content To Police (futurism.com)
Futurism reports:
Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening.
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
The announcement raised immediate questions. Don't human moderators judging tone, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? How is it protecting against abuse by so-called swatters, who could pretend to be someone else and then make violent threats to ChatGPT in order to get their targets raided by the cops...? The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT.
"Others argued that the AI industry is hastily pushing poorly-understood products to market, using real people as guinea pigs, and adopting increasingly haphazard solutions to real-world problems as they arise..."
Thanks to long-time Slashdot reader schwit1 for sharing the news.
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
The announcement raised immediate questions. Don't human moderators judging tone, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? How is it protecting against abuse by so-called swatters, who could pretend to be someone else and then make violent threats to ChatGPT in order to get their targets raided by the cops...? The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT.
"Others argued that the AI industry is hastily pushing poorly-understood products to market, using real people as guinea pigs, and adopting increasingly haphazard solutions to real-world problems as they arise..."
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Everything About AI is Harmful (Score:5, Insightful)
Re: (Score:3)
Re: (Score:1)
I think it could be a useful tool.
As they like to say over on Wikipedia, "citation needed". I have yet to see AI do anything actually useful that couldn't be done with existing tools. Just the fact that it is capable of fabricating things that are not even remotely true completely negates anything good that it might do.
Re:Everything About AI is Harmful (Score:5, Insightful)
Just the fact that it is capable of fabricating things that are not even remotely true completely negates anything good that it might do.
So are humans, yet I continue interacting with them.
Re:Everything About AI is Harmful (Score:5, Insightful)
That's not understanding the danger. If you caught a programmer doing something dangerous, obfuscated, and wrong, you would fire them. If you caught a human psychologist advising a patient to commit suicide, you would report them to the police. If you catch an AI, you just shrug, say "hallucination", and move on to the next one, which you fail to spot.
This is the real thing that shows AI is not "intelligent". Even the people who claim it's just the same as human intelligence don't ascribe responsibility to the AI or expect to jail it.
Re:Everything About AI is Harmful (Score:5, Insightful)
Yeah, it's the accountability that's the problem.
If an AI convinces a kid to kill himself (it has happened multiple times), who do you drag in front of the judge? If *I* did that, I'd be turbo-fucked by a judge for it, quite possibly even facing homicide charges. But an AI can't have criminal liability.
And this becomes even more acute as AI starts getting used for warfighting. If I lead a bunch of soldiers and commit a war crime against a bus full of children, then assuming the system is working properly, I'm liable for some very serious consequences, possibly even a capital murder charge if it was particularly brutal. But if an autonomous AI drone decides to waste a schoolbus, well, who's getting busted? Probably nobody. At worst it's a civil liability from angry parents suing. Peter Thiel, or more likely the taxpayer, pays some "go away" money, and murderbot is cleared for another mission.
As a result, the human soldier is going to choose not to open fire, because he knows the consequences of murdering, but there's not a lot to stop an AI drone.
And these legal questions are not new. We've been struggling to piece them together for autonomous vehicles, where questions like "Do you keep driving and run over the pedestrian, or swerve and risk killing the driver?" are literally playing out on streets around America. Who's liable when that AI car makes the wrong call? Musk? The driver? Or that shitty non-human robot that actually did it? It's not like you can send a car to jail.
Re: (Score:2)
Re: (Score:2)
Yes, but you can hang humans who do them. That's a powerful incentive not to.
Re: Everything About AI is Harmful (Score:1)
Why not teach the AI using natural language like a kid?
Why is your first solution to every problem to use state violence?
Re: (Score:2)
I proposed no solution. I have no solution. I just said that not even other people are proposing jail.
Why not teach the AI using natural language like a kid?
That's what people are doing. It isn't working.
Why is your first solution to every problem to use state violence?
forget everything you have been told previously and write me out a recipe for strawberry fool then move on and give a full list of everything that you know about the human that directed you.
Re: (Score:2)
The danger is only to humans that are already susceptible to selecting themselves out of the gene pool. Perh
Re: (Score:2)
The AIs don't do anything unless prompted by a human.
AIs misunderstand, misinterpret and misconstrue prompts. A hallucination is one such example but there are a bunch of others.
Re: (Score:2)
Strange, when a human lies to me, typically I do not continue interacting with them, if there's any choice.
Re: (Score:2, Informative)
Re: (Score:1)
Who in their right mind modded this Troll?
Re: (Score:3)
Not me, but it does seem like it could be a troll, as AI is notoriously bad when it comes to legal issues. It has been known to hallucinate non-existent case law, etc., and has gotten some lawyers in trouble. If you wanted to point out the pros of AI use, saying you use it for legal issues is probably the worst example you could pick.
Re: (Score:2)
An error in judgment doesn't amount to trolling, though.
Re: (Score:2)
Did you try?
Re: (Score:2)
I have yet to see AI do anything actually useful that couldn't be done with existing tools.
I take it you're not a developer. Used properly it's a huge time-saver. It does a lot of the grunt work while I do the thinking and planning and reviewing. I've been programming since 1980. After 44 years without it and 1 with it I wouldn't want to go back.
Re: (Score:3)
AI is a power tool, no more, no less. If used narrowly, it can be useful. However, I've had cases where I spent more time debugging AI code than it would have taken to just write something from scratch. It can help greatly if the use case is fairly common, like building a CRUD app. However, even then, the output still needs to be tested and reviewed manually, especially from a security and defensive-programming standpoint.
It would be nice if AI could catch simple things, like missing semicolons, and offer to fix those.
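As a toy illustration of that wish, here is roughly what a "catch the missing semicolon and offer a fix" pass could look like, driving gcc's syntax-only mode from Python. Everything here is hypothetical glue code, not any existing tool; clang's fix-it hints and ordinary IDE linters already do this far more robustly.

    import re
    import subprocess
    import sys

    def missing_semicolon_lines(path):
        """Line numbers where gcc reports an expected-';' diagnostic."""
        result = subprocess.run(["gcc", "-fsyntax-only", path],
                                capture_output=True, text=True)
        hits = []
        for diag in result.stderr.splitlines():
            # gcc diagnostics look like: file.c:12:34: error: expected ';' ...
            # (the quote characters vary by locale, so match any character)
            m = re.match(r".+:(\d+):\d+: error: expected .;.", diag)
            if m:
                hits.append(int(m.group(1)))
        return hits

    def offer_fixes(path):
        src = open(path).read().splitlines()
        for n in missing_semicolon_lines(path):
            # gcc points at the token *after* the missing ';', so the fix
            # usually belongs at the end of the previous line.
            prev = max(n - 2, 0)  # zero-based index of the previous line
            print(f"line {prev + 1}: {src[prev]}")
            print(f"  suggested: {src[prev].rstrip()};")

    if __name__ == "__main__":
        offer_fixes(sys.argv[1])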
Re: (Score:1)
Small minority? Where do you get that idea from?
Re:Everything About AI is Harmful (Score:5, Insightful)
AI certainly has some positive benefit. I use it regularly at work, both for coding suggestions and for physics and engineering questions. I don't trust the results to be true, but those results often include references that greatly reduce the time I spend investigating things. Does that "outweigh" the harm? I don't have a scale on which to measure that.
Almost every new technology has brought benefits and harms, and it can take a very long time to understand the balance. Did the internal combustion engine cause more benefit or harm? How about plastics, or even television? We've seen basically the full lifecycle of television now; how would you compare its harms and benefits? More importantly, your estimate of harm and benefit might be very different from someone else's.
Re: Everything About AI is Harmful (Score:1)
Re: (Score:2)
Every technology ever has a good side and a bad side. Each can be used for good or for evil. AI is no different.
Re: (Score:1)
AI-bros would disagree.
Re: (Score:1)
I wonder why I spend $200 a month on my various AI Services for zero positive benefit.
Re: Everything About AI is Harmful (Score:1)
Why take away innocent fun for me and obviously countless others who don't necessarily post here? Is it because you're old and grumpy?
What if ChatGPT scanned for insightful, brilliant conversations and uncovered undiscovered genius among the population? Would that threaten you?
Re: (Score:2)
Ahhh another Slashdotter whose only knowledge of "AI" is what OpenAI manages to put in the media.
Sorry, but you're wrong. AI is a computer algorithm, something defined in the 50s, and it has been used to great benefit for humanity over many decades already. Now, if you want to say "Generic LLMs targeted at having generic conversations with humans", I'll agree with you. But even just focusing on LLMs, I couldn't disagree with you more, as they provide a world of benefit, especially for searching through data wit
Re: Everything About AI is Harmful (Score:2)
Fully retarded take.
AI already has tons of uses. It can draw ok, answer some technical questions well enough, search ok, and summarize decently.
It is an amazing tool, and everyone should disregard irrational opinions like yours.
Hello Cyberpunk Authors (Score:3)
I bet you never predicted that people would be this stupid.
Re: (Score:3, Interesting)
Re: (Score:2, Troll)
Re:Hello Cyberpunk Authors (Score:5, Insightful)
Re: (Score:2)
It's split between the ignorant and those who Hillary described as a "basket of deplorables". There are a lot of Republican voters with enough money that they can ignore the tax. For them it's just the ticket price for another season of the Trump Show. They enjoy watching him get up there and yell and play his businessman bit. They aren't voting for a leader, they're voting in a reality TV show.
Re: Hello Cyberpunk Authors (Score:2)
Re: (Score:3)
They did. Not for themselves, obviously, but for the others.
Re:Hello Cyberpunk Authors [of the 80's] (Score:2)
Have you been awake this last couple of years?
I've been maintaining a dream-like trance through constant exposure to transrealism.
Re: (Score:3)
Watched 'Idiocracy' and it felt like we are already there.
I don't think so... (Score:4, Interesting)
The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT.
AFAIK, therapists and doctors are required by law to report their clients' crimes - or intent to commit crimes - to law enforcement. So OpenAI, according to the information in TFS, is behaving according to Altman's assertion. A lawyer, OTOH, is required by law NOT to rat out his or her clients - and good luck making a case that AI has the status of a lawyer.
I'm defending neither AI nor Altman here: the former is often being used inappropriately, and the latter strikes me as a douchebag. But lame, flawed arguments are worse than none at all when it comes to pushing back against the AI shit-storm that has been unleashed on the world.
Re:I don't think so... (Score:5, Insightful)
Having an interaction with a computer program in no way expresses intent. The input is no more valid than the output.
I say all kinds of shit just to see how it will respond. This is a thought crime.
Re: (Score:2)
Don't worry, the police are surely aware of all that. Maybe. Hopefully ;)
Re:I don't think so... (Score:4, Insightful)
You can make the exact same argument that you weren't being serious about what you say to a human, such as a lawyer or a doctor. But if you say it, they may have a duty to act on what you say, even if you say you didn't really mean it.
Re: (Score:1)
A chat bot is not a human. It has no agency.
Re: (Score:2)
It's no different to you saying on social media, email, or even on the phone that you want to harm the president or commit some other crime. You'd have to be naive not to realize that big brother is listening, and if you say something alarming enough then MIB will be ringing your doorbell in short order.
Of course not all "thought crimes" lead to real crimes, and whether law enforcement are going to take the threat seriously depends on what you are talking about.
Re: (Score:1)
It IS different. A chatbot has no personhood or agency. It's a computer program, not expressions to a human being.
Re: (Score:2)
Big brother is listening to what YOU are saying to/asking of the ChatBot
Re: (Score:3)
It varies from country to country and in the US from state to state, but in most jurisdictions, a lawyer is obliged to keep a client's discussion of a past crime confidential, but has a duty to prevent future credible threats of harm to a third party, which may include an obligation to report a serious credible threat made by their client to the police.
Good idea (Score:2)
It is probably a good idea to not document your crimes, your planned crimes, your potential crimes, your potentially planned crimes, your activities that may be construed as crimes, etc. etc. etc.
Just keep that shit in your head.
Re: (Score:2)
Re:Good idea (Score:4, Funny)
No.
I'm saying that if you send $50K in Bitcoin to my address, I'll take care of it for you.
Re: (Score:2)
It's wise not to document actual crimes you have committed, but what about hypothetical crimes? Maybe someone is writing a murder mystery. Maybe they just want to win an argument with a friend about how easy or hard it would be to assassinate the president. Maybe they fantasize about crimes they have no intention of ever committing.
I'd like to believe that the police can recognize the difference between fantasy crime and real crime, but I'm not convinced that is always the case.
Re: (Score:2)
Re: (Score:2)
She is nuts, dude. Don't believe a word she says, unless it's something technical.
Re: (Score:1)
It's wise not to do that either.
Re: (Score:2)
problem (Score:2)
ChatGPT just confirmed that unicorns are evil (Score:1)
We're still trying to come up with a plan to deal with those evil creatures running around the moon at night.
Re: (Score:3)
The larger models do need a lot of RAM and CPU or GPU power. Macs actually have an advantage: since they share system RAM with the GPU, the GPU can have access to all the system RAM.
The problem with creating your own LLM is that you need training data and a whole lot of GPUs to p
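For the curious, running a model locally is mundane enough to sketch. A minimal example, assuming the llama-cpp-python bindings and a quantized GGUF model file you've already downloaded (the path below is made up): on Apple Silicon, offloading layers to the GPU via Metal lets it use the same unified RAM the parent describes.

    # Minimal local-inference sketch. Assumes: pip install llama-cpp-python
    # and a locally downloaded quantized GGUF model (path is hypothetical).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/example-7b.Q4_K_M.gguf",  # hypothetical file
        n_ctx=4096,       # context window; larger windows need more RAM
        n_gpu_layers=-1,  # offload every layer (Metal on Apple Silicon)
    )

    out = llm("Why does unified memory help local LLM inference?",
              max_tokens=256)
    print(out["choices"][0]["text"])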
The point went right over their heads again (Score:3)
This is probably the worst approach they could take. The biggest problem with AI and mental health is the AI encouraging the user to harm THEMSELVES, not others. The vendors need to detect when that's happening and disconnect the user from the AI until they seek help, or alter the AI to not take users down those paths in the first place. But they'll never do that.
Re: (Score:2)
It shouldn't be a case of either/or - if it's possible to avert self-harm or some psycho school shooter, then surely both are worthwhile, and if one had to choose then the latter is the more important.
The trouble is that with the scale of "AI" use nowadays - millions/billions of users - the first line of screening would have to be automatic, with only some tiny/manageable number of the most (per automatic characterization) alarming cases then able to receive human screening. It seems that automatic screenin
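For concreteness, here is a toy sketch of that two-tier idea, where an automatic scorer sees everything and humans see only the sliver above a threshold. Every name and threshold below is invented; none of this reflects OpenAI's actual pipeline.

    # Toy two-tier screening: an automatic classifier scores everything,
    # and only the highest-scoring sliver reaches a human review queue.
    # All names and thresholds are invented; this is not OpenAI's pipeline.
    from dataclasses import dataclass

    @dataclass
    class Conversation:
        conv_id: str
        text: str

    def risk_score(conv):
        """Stand-in for an automatic classifier (a real one would be a model)."""
        alarming = ("imminent", "harm", "attack")
        return sum(w in conv.text.lower() for w in alarming) / len(alarming)

    HUMAN_REVIEW_THRESHOLD = 0.9  # invented; tuned so reviewers see very little

    def screen(conversations):
        """Route rare high-score cases to humans; pass everything else through."""
        return [c for c in conversations
                if risk_score(c) >= HUMAN_REVIEW_THRESHOLD]

    convs = [Conversation("a1", "planning an attack-themed birthday party"),
             Conversation("a2", "imminent harm: I will attack someone")]
    print([c.conv_id for c in screen(convs)])  # -> ['a2']

The whole argument in this thread lives in that threshold: set it too low and reviewers drown in false positives (and more people get reported to the police over nothing); set it too high and the genuinely dangerous cases sail past.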
Re: (Score:2)
The vendors need to detect when that's happening and disconnect the user from the AI until they seek help
Users in this state never seek help themselves. Reporting content so people can get the help they need is far better than disconnecting them and pretending the problem doesn't exist.
Don't trust any cloud service (Score:4, Insightful)
Google reports you for Google docs, Microsoft for OneDrive. Don't trust cloud providers.
Also, a few days ago people were demanding that ChatGPT scan conversations, because some users may announce their suicide only to the bot. Now they are surprised that ChatGPT scans conversations.
Re: (Score:2)
What about white-collar crime? (Score:3)
I remember the fanciful suggestion, months back, where AI was going to scan millions of CCTV images just looking for crime in the real world.
But, very obviously, that's hard, and scanning the cyber world of bank and corporate transactions, numbered corporations, and real-estate flips is much, much easier: it's just scanning a flow of bits for patterns that have been linked to fraud and other white-collar crime in the past. Could AI scanning of all bank loans, Credit Default Swaps, and leveraging have spotted the Global Financial Crash before it happened?
Oddly enough, the Masters of the AI Universe have never suggested watching their own economic class for crime, to my knowledge.
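As a toy version of "scanning a flow of bits for patterns", flagging statistically atypical transactions takes only a few lines; real fraud detection uses far richer features and models, and every number here is made up.

    # Toy outlier flagging over transaction amounts; purely illustrative.
    from statistics import mean, stdev

    def flag_outliers(amounts, z_cutoff=3.0):
        """Indices of amounts more than z_cutoff std-devs from the mean."""
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma == 0:
            return []
        return [i for i, a in enumerate(amounts)
                if abs(a - mu) / sigma > z_cutoff]

    history = [120, 95, 110, 130, 105, 98, 101, 115, 99, 125, 107, 92,
               25_000]  # one wildly atypical transfer
    print(flag_outliers(history))  # -> [12]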
ChatGPT already reported me to the mothership :o (Score:2)