OpenAI's ChatGPT Breaches Privacy Rules, Says Italian Watchdog (reuters.com)
An anonymous reader quotes a report from Reuters: Italy's data protection authority has told OpenAI that its artificial intelligence chatbot application ChatGPT breaches data protection rules, the watchdog said on Monday, as it presses ahead with an investigation started last year. The authority, known as Garante, is one of the European Union's most proactive in assessing AI platform compliance with the bloc's data privacy regime. Last year, it banned ChatGPT over alleged breaches of European Union (EU) privacy rules. The service was reactivated after OpenAI addressed issues concerning, amongst other things, the right of users to decline to consent to the use of personal data to train algorithms. At the time, the regulator said it would continue its investigations. It has since concluded that elements indicate one or more potential data privacy violations, it said in a statement without providing further detail. The Garante on Monday said Microsoft-backed OpenAI has 30 days to present defense arguments, adding that its investigation would take into account work done by a European task force comprising national privacy watchdogs.
Who could have seen this coming? (Score:5, Insightful)
The service was reactivated after OpenAI addressed issues concerning, amongst other things, the right of users to decline to consent to the use of personal data to train algorithms. At the time, the regulator said it would continue its investigations. It has since concluded that elements indicate one or more potential data privacy violations, it said in a statement without providing further detail.
That's what happens when we let the weasel guard the hen-house.
Re: (Score:1)
What's "what happens"? "...it said in a statement without providing further detail." I know it's urgent for you to express your prejudices, that doesn't make it worth reading.
Re: (Score:1)
-Damien "rsilvergun" Lee
Re: Who could have seen this coming? (Score:2)
What is the hen-house in that analogy? You provide personal data to OpenAI, they use it to train the models, the consent checkbox was potentially not sufficient according to regulators. It's their house. You walked into the weasel-house if you want to look at it that way.
What is even the issue? Is this some gotcha, like their consent button says "anonymous" but the prompt text box let me type in my real name anyway, kind of BS?
Re: (Score:2)
The specific issue is that they use personal data without obtaining consent first. Their "opt out after you discover what ChatGPT did" approach is inadequate to comply with GDPR, and it also doesn't really work, because I don't think they can selectively remove training data from a model that's already been trained.
Re: (Score:3)
Yep. And there is more: people can withdraw consent under certain conditions, and underage people can do so at any later time, with no reason needed and no limitations. Of course, if they did not ask for consent in the first place, no withdrawal is needed and they have to delete the data anyway. In all cases the data has to be deleted irretrievably; for an LLM, that means scrapping the model. Oh, and if they bought PII from somewhere, they have 30 days to inform people that they have it, even if the purchase itself was legal.