EU Votes To Ban AI In Biometric Surveillance, Require Disclosure From AI Systems
European Union officials have voted in favor of stricter regulations on artificial intelligence, including a ban on AI use in biometric surveillance and a requirement for AI systems like OpenAI's ChatGPT to disclose when content is generated by AI. Ars Technica reports: On Wednesday, European Union officials voted to implement stricter proposed regulations concerning AI, according to Reuters. The updated draft of the "AI Act" law includes a ban on the use of AI in biometric surveillance and requires systems like OpenAI's ChatGPT to reveal when content has been generated by AI. While the draft is still non-binding, it gives a strong indication of how EU regulators are thinking about AI. The new changes to the European Commission's proposed law -- which have not yet been finalized -- intend to shield EU citizens from potential threats linked to machine learning technology.
The new draft of the AI Act includes a provision that would ban companies from scraping biometric data (such as user photos) from social media for facial recognition training purposes. News of firms like Clearview AI using this practice to create facial recognition systems drew severe criticism from privacy advocates in 2020. However, Reuters reports that this rule might be a source of contention with some EU countries who oppose a blanket ban on AI in biometric surveillance. The new EU draft also imposes disclosure and transparency measures on generative AI. Image synthesis services like Midjourney would be required to disclose AI-generated content to help people identify synthesized images. The bill would also require that generative AI companies provide summaries of copyrighted material scraped and utilized in the training of each system. While the publishing industry backs this proposal, according to The New York Times, tech developers argue against its technical feasibility.
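The "summaries of copyrighted material" requirement is easier to picture with a toy sketch. The manifest format and function below are purely hypothetical (nothing in the draft prescribes any particular format); they only show the kind of per-source bookkeeping a generative-AI pipeline would need in order to produce such a summary:

```python
# Hypothetical sketch: recording provenance for each training item so a
# per-system summary of copyrighted sources (as the draft would require)
# can be produced. All field names here are illustrative, not from any
# real system or from the AI Act itself.
import json
from collections import Counter

def summarize_sources(manifest):
    """Count training items per declared license."""
    by_license = Counter(item.get("license", "unknown") for item in manifest)
    return dict(by_license)

manifest = [
    {"url": "https://example.org/a.jpg", "license": "CC-BY-4.0"},
    {"url": "https://example.org/b.jpg", "license": "all-rights-reserved"},
    {"url": "https://example.org/c.jpg", "license": "CC-BY-4.0"},
]
print(json.dumps(summarize_sources(manifest), indent=2))
```

The developers' feasibility objection is essentially that no such manifest exists for models already trained on web-scale scrapes, where per-item license data was never collected.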
Additionally, creators of generative AI systems would be required to implement safeguards to prevent the generation of illegal content, and companies working on "high-risk applications" must assess their potential impact on fundamental rights and the environment. The current draft of the EU law designates AI systems that could influence voters and elections as "high-risk." It also classifies systems used by social media platforms with over 45 million users under the same category, thus encompassing platforms like Meta and Twitter. [...] Experts say that after considerable debate over the new rules among EU member nations, a final version of the AI Act isn't expected until later this year.
Artificial Dumbness (Score:5, Interesting)
So, we can just rebrand the tech as "Artificial Dumbness" and use it anyway?
AD is a better fit anyway.
Re: (Score:2)
So Artificial Dumbness it is then.
Re: (Score:2)
AI gives the answers the user demands of it.
In other words, AI would be a straight-A student in our education system.
Re:Artificial Dumbness (Score:4, Informative)
You can read the act here: https://artificialintelligence... [artificial...enceact.eu]
You will see that it's not written like a typical law, with specific legal definitions and rules. That's because the way it works in the EU is that individual member states have to craft their own laws, and then the EU monitors them to check that the laws work the way they were intended and achieve the intended goals.
As such it lays out what the goals are, and what terms success will be judged on.
Needless to say, such an obvious loophole as simply not calling it AI would be considered a failure, and the country that implemented it that way would need to fix the problem.
Re: (Score:2)
They do define what they mean by AI:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
I think this is pretty dumb, but they do at least define it.
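How broad that definition is can be seen with a toy example: a few lines of textbook Bayesian estimation, a beta-binomial update for a coin's bias, would arguably fall under clause (c). This is only an illustrative sketch, not anything the act itself contains:

```python
# Toy Bayesian estimation: infer a coin's bias from observed flips.
# Under clause (c) of the draft's definition ("Statistical approaches,
# Bayesian estimation ..."), even this trivial routine would arguably
# qualify as an "AI system". Illustrative only.

def posterior_mean(heads, tails, prior_a=1.0, prior_b=1.0):
    """Posterior mean of the coin's bias under a Beta(prior_a, prior_b) prior."""
    a = prior_a + heads   # prior pseudo-count plus observed heads
    b = prior_b + tails   # prior pseudo-count plus observed tails
    return a / (a + b)

# With a uniform prior and 7 heads out of 10 flips:
print(posterior_mean(7, 3))  # 8/12 = 0.666...
```

Whether regulators would actually treat such trivial statistics as in-scope is a separate question; the point is only that the letter of clause (c) does not obviously exclude it.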
The horses left the barn. (Score:2)
Preventing AI systems from being trained on publicly available data is pointless. Every picture on every website and every publicly accessible social network has been, or will be, analyzed for training purposes; in any case, you cannot stop AI from watching TV shows and other mass media. As for the EU "protecting" itself from AI: go ahead, prevent legal public AI products (didn't France just put 130 million USD into a startup with no business plan?). People will use AI no matter what the EU decides.
It will stifle the field in the EU (Score:1, Troll)
Suppose I have a robot that I take with me to see an exhibition, which helps it learn to paint.
But when it paints something, it has to disclose its copyrighted sources. Does that apply to every picture it ever used to learn?
That example shows why the proposal is wrong on so many levels. When a human learns, he is not a slave to the copyright owners of the material he used to teach himself. Machine learning should likewise not be chained to copyright holders (and thus forced to use only free materials to avoid paying hefty royalties).
Re: (Score:3)
I believe the reference was to an amendment recently proposed for the AI Act: see https://www.reuters.com/techno... [reuters.com] or https://www.theguardian.com/te... [theguardian.com] .
The latter link in particular claims the AI Act, as passed, includes the disclosure requirements mentioned by the GP comment, although I cannot find the corresponding language in the text of the bill linked above.
Technology vs. how it's applied (Score:2)
This could go down much like how humans build on each others' creations:
Say an art lover visits countless museums, reads countless books on various art styles, does some work of their own, and creates a unique artwork. We call that "inspiration". This is fine.
Now suppose the new work has an uncanny resemblance to a specific existing work: basically a 1:1 copy with some tweaks. We call this "stealing" or similar terms. For some purposes this is fine (for example if you 'stole' a basic concept, for parody, research etc), for other purposes (eg. selling the work for profit) it is not. In case of conflict, a court of law will decide, using intent & other 'yardsticks' as guideline.
Re: (Score:1)
Now suppose the new work has an uncanny resemblance to a specific existing work. Basically a 1:1 copy with some tweaks. We call this "stealing" or similar terms. For some purposes this is fine (for example if you 'stole' a basic concept, for parody, research etc), for other purposes (eg. selling the work for profit) it is not. In case of conflict, a court of law will decide, using intent & other 'yardsticks' as guideline.
You can't steal an idea. That's why the standard for copyright violation is recognizable [literally copied] elements. A reimagining of the same artwork has to be so similar as to be nearly indistinguishable before it's in violation, basically an obviously deliberate copy.
Re: (Score:2)
> You can't steal an idea.
*** start nit picking
The GP wrote "We call this "stealing" or similar terms."
Which seems to me like a reasonable statement. Even if he had only written "We call this "stealing" ", I'm going to read the quotes around stealing as invoking some looser sense of the word, and on that sense it follows that one can 'steal' an idea.
end nit picking ***
Re: (Score:2)
The GP wrote "We call this "stealing" or similar terms."
Which seems to me like a reasonable statement.
It might be anywhere but here, where people are expected to know the difference between copyright infringement and theft. This is something we settled long before you discovered Slashdot.
Re: (Score:2)
> people are expected to know the difference between copyright infringement and theft
I'm not disagreeing. I think it's pretty clear he knows that when he wrote -- we call this "stealing" or similar terms -- and similarly when he wrote -- if you 'stole' a basic concept.
And I like it when someone plays with language the way he did. He didn't use some accepted or legal terminology but his comment doesn't need that to make sense. Hence my nit picking too.
developers argue against its technical feasibility (Score:3)
Re: (Score:2)
Infringing copyrights is indeed cheaper and easier than complying with the law.
Are you sure? Given the bizarre fines per song that have been the result of download cases, some judge should "set an example" and charge the same fines per byte / instance of training data if it is illegally harvested.
Re: (Score:2)
If you're just an average Joe infringing on the music copyright mafia, then you're pretty much screwed.
Here where I live in Germany, we have the GEMA mafia. They have an effective monopoly on the music market in Germany: if you want your music to be on the radio, you have to become a member of that pyramid scheme, where most of the money goes to the higher-ups an
Re: (Score:2)
Re: (Score:1)
Red Flag (Score:4, Funny)
Re: (Score:2)
It is because you cannot walk in front of an AI.
Re: Red Flag (Score:2)
Re: (Score:2)
It is strange that the EU doesn’t require a man with a red flag to walk in front of the AI.
Those evil Europeans, passing laws meant to *gasp* benefit their citizens. When will they learn.
Something the United States should be leading in (Score:3)
Re: (Score:2)
I guess Europe has fewer billionaires and less powerful corporations to stop things like this.
Well, yes, this may be a factor. But it may also be that we don't cheer (or criticise) people just for being rich. They are just people, after all.
Re: (Score:2)
More legislation to respect and protect the average person passed by the E.U.. Something the U.S. *should* be leading at.
I hope the US does lead in quality legislation regarding AI in society, but this EU legislation is a dumpster fire. I don't want the US to lead in that.
Why? (Score:2)
Or are they just worried a computer will point out that the EU is rife with fatal flaws, and people will listen to it instead of the crackpots in charge?