

Judge Denies Creating 'Mass Surveillance Program' Harming All ChatGPT Users (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to "indefinitely" retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user's request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.
The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT "from time to time," occasionally sending OpenAI "highly sensitive personal and commercial information in the course of using the service." In his filing, Hunt alleged that Wang's preservation order created a "nationwide mass surveillance program" affecting and potentially harming "all ChatGPT users," who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs "inherently reveal, and often explicitly restate, the input questions or topics input."
Hunt claimed that he only learned that ChatGPT was retaining this information -- despite policies specifying they would not -- by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court's decision and proposed a motion to vacate the order that said Wang's "order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users." [...] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker's concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.
Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI's fight to block the order this week fail. But that's likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data -- even if it's never shared with news organizations -- could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users' privacy if other concerns -- like "financial costs of the case, desire for a quick resolution, and avoiding reputational damage" -- are deemed more important, his filing said.
Here we go... (Score:5, Interesting)
Well- that halted my use of all AI at this point.
I've given it proprietary information about myself, my coding, a lot of things. But that was under the assumption these chats were private.
Now a court gets to look at responses to my input without a warrant. It's far reaching and universal.
I'm off AI.... unless this settles in a way that respects users.
Aside from technical use, I've used AI with drafts of a book I'm writing. Guess I'll go back to using a human editor.
This order is bullshit.
Re:Here we go... (Score:5, Insightful)
lol
Re: Here we go... (Score:2)
Re: (Score:2)
Look- I don't care if they use the conversations for training.
What I don't want is a mass subpoena exposing answers I received. That's called privacy.
I can't imagine how anyone thinks that is funny. My problem is with the court- not the AI.
Oh and by the way- I'm not laughing at you- your lack of empathy tells me everything I need to know about you. Laughing at you would be cruel.
Re: (Score:3)
Is that *really* what you're saying? I almost can't believe this.
Re: (Score:2)
60... asshole.
Re: Here we go... (Score:3)
Old man yells at cloud. Literally
Re: (Score:2)
Just because he's yelling at clouds, doesn't mean he ain't right.
Re: Here we go... (Score:5, Insightful)
"What I don't want is a mass subpoena exposing answers I received. That's called privacy."
What you should have realized was that every interaction was being retained, and this meant it was out of your control.
"I can't imagine how anyone thinks that is funny. My problem is with the court- not the AI."
Your problem is that you're using AI with your brain turned off. Practically every slashdotter would laugh at your lack of realization that data, once out of your hands, is out of your control. There is no cloud, you are not Buddha, your data just went to someone else's server and it was always subject to subpoena and this is true of everything you ever did online. Nobody is throwing away data they have about you except as required by law, and they probably sold it first.
Re: (Score:2)
I wouldn't laugh at them. It's 2025. We're no longer allowed to laugh at the mentally handicapped.
I kid. I kid!
There's no reason for me to really trust AI. In fact, I did a vanity search of my username and was surprised how accurate it was. I guess that's on me for reusing my username across many sites over the years. Then again, I'm not the only one who uses this combination of letters as their username. So, I guess the AI vanity check would be wrong for the rest of the people who use my username.
But, it g
Re: Here we go... (Score:3)
Did you really think ANY unencrypted cloud service was private? I feel like we've all learned this lesson over & over & over...
Re:Here we go... (Score:5, Insightful)
If you had an assumption your chats were private, you were wrong way before the court got involved.
I don't even assume singularly-focused messaging clients are private anymore, but if it's an AI company I'm submitting all my information to, there's no question whether they're remixing and reselling my data. That is all the company exists to do, and they don't need a warrant either.
Re: (Score:3)
They have the means, motive, and opportunity to use the data you submit for their profit. So, they will.
Any promises they make in policy statements are automatically untrustworthy. At literally any time the government can swoop in and demand this sort of thing, with gag orders to prevent you from ever finding out. Not to mention the possibility of hackers sneaking in and stealing the data, or disgruntled employees leaking it. Or incompetent employees mishandling it. And of course there is the corporati
Re:Here we go... (Score:4, Insightful)
Now a court gets to look at responses to my input without a warrant. It's far reaching and universal.
I am not a lawyer, but as a layman I don't see how a court order for documents is functionally any different than a warrant. That's what discovery is, no?
If you share any sensitive information with ANY entity, you should assume that at some point a judge or law enforcement officer might be able to look at that information should the entity become the target of a criminal investigation or a litigant.
Aside from technical use, I've used AI with drafts of a book I'm writing. Guess I'll go back to using a human editor.
That's probably the better choice regardless of this court case.
Re: (Score:2)
I've used AI with drafts of a book I'm writing
I presume the book is not of the dystopian science fiction genre then.
Personally, I don't even let any LLM come near my writing. They're only advanced statistical machines and that goes against my sense of creativity.
Re: (Score:2)
Yeah, we're putting a stop to it at work. We can't meet our legal compliance requirements if the company is doing this.
Re: (Score:2)
But that was under the assumption these chats were private.
Is that a satirical comment meant to trigger people? Why on Earth would you assume what you assumed? We've been warning people for DECADES that:
1) Any of your data put on someone else's server (aka, "the Cloud") is no longer your data. You must assume it is being scanned, read, and used for the service provider's revenue, because it is.
2) If a service is free, then the service is not the product. You are, as is your data.
And for the last few years (at least):
1) Anything posted into an AI service is being do
Our Legal System at Work (Score:2)
Re: (Score:2)
While what you said is true, I'm not sure it really applies in this case. In this case the rich and powerful are being forced to bend over due to a court order.
I might have a different perspective if this were a peer to peer chat app and the logs are private conversations between people (and especially if it were a peer to peer encryption solution and the judge stupidly said, "you have to intercept that!"). But just because people pretend they're having a conversation with a chat bot doesn't make it so. It'
Im very confused (Score:5, Insightful)
Now that we've got that out of the way - what's the point of this judge's order? We can have a legit debate as to whether OpenAI should be required to disclose user data if subject to a court order. But the data is most definitely there.
Re:Im very confused (Score:5, Insightful)
The point of the order is so they can be held in contempt if they want to claim (again) that they're not saving the data.
Re: (Score:2)
The point of the order is to preserve evidence for Discovery. It's standard practice in every court case ever.
Re:Im very confused (Score:4, Interesting)
The point is this:
OpenAI does bad things. It collects and uses information that it has no right to. When people knock on the door and say "tell me what you've got that doesn't belong to you", OpenAI answers "I don't have your stuff, and you can't come in and check".
So now they go to a judge and the judge says "OpenAI, don't throw anything away, we're going to start looking through your stuff to get to the bottom of this"
Meanwhile, the peanut gallery goes "judge overreach! Leave OpenAI aloooooone!"
HTH.
Re: (Score:1)
The very epitome of a strawman argument, nicely done.
Re: (Score:2)
That's different than disclosing to their customer base that they keep everything.
Or the incredible existential/financial risk of OpenAI being forced to disclose any part of everything to any government in civil or even criminal court. OpenAI is not trying to protect consumers here; they are trying to protect their business model and their business. But with a proper judicial system, their busin
Bad argument (Score:2)
What is wrong with people? (Score:5, Informative)
Our use of content. We may use Content to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Services safe. [openai.com]
OpenAI, and all AI companies, openly scrape all information they can get their hands on regardless of ownership, copyright, or any sort of legality. Their crawlers abuse other companies' websites [techcrunch.com]. Many of their executives openly claim they don't bother seeking copyright [forbes.com]. Why would they protect an individual's right to privacy, particularly if it gets in the way of making a buck? I'm stunned that anyone would trust these companies in such a way.
THE INTERNET NEVER FORGETS! (Score:1)
ChatGPT users (Score:4, Informative)
You're using privacy-invading software made by privacy-invading Big Data companies using data stolen from millions of people. What did you expect?
If you don't want mass surveillance, avoid Big Data software.
Guys...take a deep breath. (Score:2)
To quote Predator 2, "Take a deep breath; loosen your sphincters."
Yes, I've had interactions with it that I'd REALLY rather no one ever sees. I took the gamble, and I firmly believe my life is better for it.
Anyway. No one wants all our inner secrets blasted open so the NY Times can run them on the front page, *especially* the NY Times. Can you imagine what would happen if the Times, an organization of about 6000 people in a dying industry running out of money, disclosed every thought of the, I dunno, hund
It's *not* secret? (Score:2)
Like the old saying goes... (Score:2)
Email, chat, post, and websurf like it will be used against you... whether read aloud in court, used to deny insurance, loans, and jobs, or fed to AI to make the case that you're likely to commit crimes and need to be pre-emptively locked up.