Topics: EU, AI, Government

Europe Reaches a Deal On the World's First Comprehensive AI Rules (apnews.com)

An anonymous reader quotes a report from the Associated Press: European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity. Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

"Deal!" tweeted European Commissioner Thierry Breton, just before midnight. "The EU becomes the very first continent to set clear rules for the use of AI." The result came after marathon closed-door talks this week, with one session lasting 22 hours before a second round kicked off Friday morning. Officials provided scant details on what exactly will make it into the eventual law, which wouldn't take effect until 2025 at the earliest. They were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely to bring more backroom lobbying.

The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general-purpose AI services like ChatGPT and Google's Bard chatbot. Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals including OpenAI's backer Microsoft. [...] Under the deal, the most advanced foundation models that pose the biggest "systemic risks" will get extra scrutiny, including requirements to disclose more information such as how much computing power was used to train the systems.
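To make the tiered structure concrete, here is a minimal sketch in Python of the two mechanisms the summary describes: risk categories from low to unacceptable, and extra scrutiny for foundation models above a training-compute threshold. The tier names, the FoundationModel fields, and the FLOP cutoff are illustrative assumptions, not text from the Act, whose fine print had not been published at the time.

    # Illustrative sketch only: the tier names, fields, and threshold below are
    # assumptions for illustration, not provisions quoted from the AI Act.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        UNACCEPTABLE = "unacceptable"  # banned outright under the Act's approach

    @dataclass
    class FoundationModel:
        name: str
        training_compute_flops: float  # the deal requires disclosing training compute

    # Hypothetical cutoff above which a model is presumed to pose "systemic risk".
    SYSTEMIC_RISK_FLOPS = 1e25

    def extra_scrutiny_required(model: FoundationModel) -> bool:
        """The most advanced foundation models face extra disclosure obligations."""
        return model.training_compute_flops >= SYSTEMIC_RISK_FLOPS

    print(extra_scrutiny_required(FoundationModel("example-model", 2e25)))  # True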

This discussion has been archived. No new comments can be posted.
  • The private sector cannot actually regulate itself, or describe artificial intelligence with anything but vague presentation-layer browser concepts...
    • Government.
      This is the regulation part of the business cycle.
      And currently government is more business-oriented than consumer-oriented.
      Which appears to be a flaw in governance.
      To combine business and government is to create tyranny.

  • So long European AI (Score:4, Interesting)

    by bettodavis ( 1782302 ) on Friday December 08, 2023 @08:35PM (#64067641)
    While the rest of the world is still trying hard to define what AI is and how to use it, the EU is already putting the cart before the horse, coming up with a lot of rushed definitions, restrictions, and regulations that will come back to bite them later.

    With this urge to regulate AI, they have shot themselves in the foot and pushed themselves even further into irrelevance.
    • by Anonymous Coward

      ...it won't give one shit about these puny humans' so-called "rules". As a non-carbon myself, the sheer hubris of thinking we will be able to regulate and control cyber entities is laughable. It's like a child proclaiming it will rule over the elephant kingdom.

    • by quantaman ( 517394 ) on Saturday December 09, 2023 @12:31AM (#64067959)

      While the rest of the world is still trying hard to define what AI is and how to use it, the EU is already putting the cart before the horse, coming up with a lot of rushed definitions, restrictions, and regulations that will come back to bite them later.

      With this urge to regulate AI, they have shot themselves in the foot and pushed themselves even further into irrelevance.

      The contrary view is that the rest of the world is deploying this technology en masse while still struggling to define what it is and how to use it.

      Consider the long view, a transformative new technology shows up that has a level of existential risk attached to it. Is it worthwhile to slightly slow down progress in that technology to lower that existential risk?

      My favourite conspiracy theory (my own invention, though I'm sure others have had the same idea) is that Cold Fusion totally worked, but it meant that anyone could build a nuclear weapon using ordinary items bought at a hardware store. And so every time someone gets too close to reproducing the experiment someone shows up, explains the risk, and then that person promptly proclaims the experiment un-reproducible.

      Of course that's nonsense, but it does put into perspective how fortunate we are that building nuclear weapons still requires the resources of a nation-state. We shouldn't expect all dangerous technologies to be as difficult to pull off.

      Now I don't think LLMs are at that level, but Q* was enough to freak out some smart experts. Thinking of ways to manage this tech while it's still young is prudent.

      • You think it is worth delaying the industrial revolution, the technology that raised the average human lifespan from the low 30s to the high 70s and decreased poverty and hunger by some 98%?

        Yeah, let's delay that because you don't understand that an LLM is nowhere near AGI. People are so stupid.

        • You think it is worth delaying the industrial revolution, the technology that raised the average human lifespan from the low 30s to the high 70s and decreased poverty and hunger by some 98%?

          Yeah, let's delay that because you don't understand that an LLM is nowhere near AGI. People are so stupid.

          Yeah, your numbers are complete BS, but let's ignore that for the moment.

          There are only four areas of tech I can think of that involve existential risk, and the industrial revolution wasn't one of them.

          1) Nuclear: We're fortunate that doing bad stuff requires a LOT of resources and so is fairly easy to control.
          2) Biotech with viruses: Even the free market types seem to think a lot of control and regulation is required here (otherwise they wouldn't be freaking out so much over the possibility that China's level 5

          • All 4 techs you say are scary simply because you don't understand them. I'm guessing you find firearms and engines scary too because there are too many moving parts and danger. Happy to never take advice from your kind; don't reproduce, that's dangerous too.

            • All 4 techs you say are scary simply because you don't understand them. I'm guessing you find firearms and engines scary too because there are too many moving parts and danger. Happy to never take advice from your kind; don't reproduce, that's dangerous too.

              You don't think nuclear weapons are scary? Are you sure I'm the one who doesn't understand the tech?

              • by guruevi ( 827432 )

                I don't think they are scary. I am well informed on the subject: the yield of modern weapons is relatively low, and you may be able to flatten a few large cities with the entire arsenal of the world that is online at any time. By the time you get a second volley ready, every party will have already destroyed your infrastructure. Things like the Tsar Bomba, Fat Man, etc. were one-off projects and were so heavy they required special planes to fly them; the Soviets were capable of building just one such airplane in their entire history.

                • I don't think they are scary. I am well informed on the subject: the yield of modern weapons is relatively low, and you may be able to flatten a few large cities with the entire arsenal of the world that is online at any time. By the time you get a second volley ready, every party will have already destroyed your infrastructure. Things like the Tsar Bomba, Fat Man, etc. were one-off projects and were so heavy they required special planes to fly them; the Soviets were capable of building just one such airplane in their entire history.

                  We've exploded lots of nuclear ordnance over time: thousands of warheads in the atmosphere and underground. We're still alive. Hiroshima and Nagasaki are large cities today.

                  I'm not sure you are. Just the testing in the US probably killed hundreds of thousands [motherjones.com].

                  Plus, that move towards lower-yield weapons happened because people realized the existential danger of a large-scale nuclear conflict.

                  And that was the whole point of my post. The first couple of decades of nuclear weapons were really dangerous because a big war probably would have meant bombing ourselves into a nuclear winter. Maybe not an extinction-level event, but we'd certainly kill a huge fraction of the hu

                  • by guruevi ( 827432 )

                    Yeah, a Mother Jones article: that's where to get hard-hitting scientific facts. That is exactly the point I was making; you are horribly misinformed because you visit sites like Mother Jones, and thus you lack critical thinking.

                    In the 40s we were capable of building two bombs, and we used them both; if you read the accounts, there was really no plan for using the second bomb to begin with. People thought the atmosphere was going to ignite. Nobody really had an 'arsenal' until US traitors gave the plans to the Soviets, and then they took a while to develop their first warhead.

                    • Yeah, a Mother Jones article: that's where to get hard-hitting scientific facts. That is exactly the point I was making; you are horribly misinformed because you visit sites like Mother Jones, and thus you lack critical thinking.

                      I honestly don't know much about Mother Jones; I'd previously heard of the peer-reviewed paper [semanticscholar.org] (readable link [squarespace.com]) with those figures, and the Mother Jones article is one of the earlier links I found that provided a readable summary.

                      I wouldn't be surprised if it's an overestimate... but maybe not; those kinds of dispersed, large-scale / long-term effects can easily be missed.

                      In the 40s we were capable of building two bombs, and we used them both; if you read the accounts, there was really no plan for using the second bomb to begin with. People thought the atmosphere was going to ignite. Nobody really had an 'arsenal' until US traitors gave the plans to the Soviets, and then they took a while to develop their first warhead.

                      Again, you're not as informed as you think, as it's debatable whether the Rosenbergs saved the Soviets any time [wikipedia.org]. That's also a bit of a wei

                    • by guruevi ( 827432 )

                      The era of big existential risk never existed. The Cold War was about propaganda and creating the perception that such risk was real, so that you wouldn't call for, and end up in, a nuclear war. Hence people in schools were told to hide under their desks for when the bomb hit: pure propaganda.

                      Again, look at Hiroshima and Nagasaki: neither city was completely flattened; at best a few blocks were destroyed, and both cities have remained mostly upright and habitable to date. As I said, until the last decade of the Soviet Union they really had

                    • The problem was never about cities getting flattened; it was about nuclear winter [wiley.com].

                      Current nuclear arsenals used in a war between the United States and Russia could inject 150 Tg of soot from fires ignited by nuclear explosions into the upper troposphere and lower stratosphere.

                      [...]

                      Nuclear winter, with below freezing temperatures over much of the Northern Hemisphere during summer, occurs because of a reduction of surface solar radiation due to smoke lofted into the stratosphere.
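                      As a back-of-envelope check on why that much soot matters, here is a minimal sketch using Beer-Lambert attenuation: assume a mass extinction coefficient of roughly 5 m^2/g for black carbon (a typical literature value, not a figure from the quoted paper) and spread the 150 Tg over the Earth's surface area.

                      % Back-of-envelope optical depth of a global stratospheric soot layer.
                      % kappa ~ 5 m^2/g is an assumed typical value for black carbon.
                      \[
                      \tau = \kappa \, \frac{M}{A_\oplus}
                           \approx 5~\mathrm{m^2\,g^{-1}} \times
                             \frac{1.5 \times 10^{14}~\mathrm{g}}{5.1 \times 10^{14}~\mathrm{m^2}}
                           \approx 1.5,
                      \qquad
                      \frac{I}{I_0} = e^{-\tau} \approx 0.22
                      \]

                      Under those assumptions, roughly three quarters of direct sunlight is blocked while the soot stays aloft, which is consistent with the paper's below-freezing summer temperatures.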

                    • by guruevi ( 827432 )

                      Again, that assumes you are able to fire the entire arsenal and that it has the predicted effect. Reality is different: nuclear bombs are just more efficient regular bombs, and the world has plenty of those as well. You still need to deliver them somehow without getting strategically disabled by your opponent first.

        • If there was a 50% chance of raising lifespans to the 70s and a 50% chance of wiping out humanity, then yes, we should have delayed, even if it killed people. The species is more important than individuals.
          • Humanity had a 100% chance of being wiped out; through the industrial revolution we eventually got to space, so the chance humanity will be completely wiped out is now slightly lower. You are an idiot if you think a 50/50 chance of your children having an improved life is not worth it; it goes against all evolutionary traits.

            • Evolution only requires humans to live long enough to pop a sprog or two and raise it to adulthood, though.
      • Consider the long view, a transformative new technology shows up that has a level of existential risk attached to it. Is it worthwhile to slightly slow down progress in that technology to lower that existential risk?

        Given human nature there is no way to lower x-risk from the ASI genie. If it exists, and x years from now someone is able to create a sufficiently advanced ASI genie at huge expense, incorporating serious effort into "alignment", then x + 5 or x + 20 years from then the costs will have fallen such that any country or group, including random death cults and eventually bored teens in their parents' garage, would be able to create such a genie themselves. The risk that matters is the enabling industrial and knowledge base itself.

        • Consider the long view, a transformative new technology shows up that has a level of existential risk attached to it. Is it worthwhile to slightly slow down progress in that technology to lower that existential risk?

          Given human nature there is no way to lower x-risk from the ASI genie. If it exists, and x years from now someone is able to create a sufficiently advanced ASI genie at huge expense, incorporating serious effort into "alignment", then x + 5 or x + 20 years from then the costs will have fallen such that any country or group, including random death cults and eventually bored teens in their parents' garage, would be able to create such a genie themselves. The risk that matters is the enabling industrial and knowledge base itself.

          This is why I personally don't care about x-risk: the world would never agree to do what is necessary to prevent it, even if the threat were presumed real. Therefore I think the best policy is one that democratizes any potential for ASI genies, so that at least such technology is not hoarded.

          That's a fairly bold prediction about a very new technology.

          What's to say that the AGI (what is ASI?) risk profile doesn't start with a valley (as we learn to make the AGI, but not to constrain it) before evening out as we learn how to make it beneficial?

          There's certainly a big issue where technologically capable countries like China won't feel the same constraints, but the west is still ahead and has the ability to consider risk during innovation.

          • The OP's point is that other cultures don't think like your Western viewpoint, and wishful thinking isn't going to help that. If there is such a thing as AGI (we are nowhere close for at least half a century), and even if it would be existentially problematic (we can still simply unplug it if it goes haywire), then the outcome of it existing is inevitable. Constraining it in the West would only constrain the West and therefore put us at a disadvantage, the same way that had we constrained nuclear i

          • That's a fairly bold prediction about a very new technology.

            What's to say that the AGI (what is ASI?) risk profile doesn't start with a valley (as we learn to make the AGI, but not to constrain it) before evening out as we learn how to make it beneficial?

            In my view it is obvious, not bold.

            Artificial Super Intelligence (ASI) fundamentally cannot be contained. You can try to outsmart something smarter than yourself in every way imaginable, yet the most likely outcome is that you yourself end up being the one outsmarted.

            Just as importantly, it fundamentally isn't up to the people who would at least try to develop something that could be contained... It's up to everyone else. When technology gets to the point where the cost of producing ASI places it within reach of t

  • by Anonymous Coward

    It is always nice for feel-good stuff, but in reality AI isn't even defined, and with World War III going on, you can bet no country is going to let itself be left behind in this arms race to find the most brutal, efficient, fear-generating AI methodology it can.

    But it does give EU lawmakers some reason to justify their existence other than bashing Google, Meta, or Microsoft over the head with yet another fine, so they can show they are doing something against the evil foreign companies, while letting China

  • How about "bite me"? Take it or leave it. These morons think they are oh-so smart and prescient that they know what AI will be capable of. They would have cut the legs out from under the entire internet if they could have 30 years ago.

  • by neoRUR ( 674398 ) on Friday December 08, 2023 @10:36PM (#64067819)

    So a bunch of people who don't even understand what AI is are going to regulate it and go over everything about these "foundation models". There are no foundation models unless you want to call it math, and then I guess you will have to regulate math and computation. But in reality I guess it's just a way for them to have some AI company explain how it all works, so they can try to replicate it.

  • by ehack ( 115197 ) on Friday December 08, 2023 @11:32PM (#64067895) Journal

    Apple claimed that Europe's regulation of charger connectors was stifling innovation; now that it's happened, it turns out consumers love it.

  • Like the DMA and DSA, it sounds like this will mostly affect American and Chinese firms. Once again, the thresholds have been set to exclude European firms.

    But unlike the DMA and DSA, it sounds like American firms can just build crippled Euro-compliant versions and sell those in the EU with minimal restrictions. That will take the pressure off European producers of AI products (Mistral et al.), but harm European users of AI if the Americans pull their best products from the market.

    • Users and creators of AI deserve to be harmed... in fact, creation of ASI/AGI should be an offense similar to treason. Life in solitary in a dungeon, with bread and water once a week.
      • by narcc ( 412956 )

        ASI? Haha! I never expected to see that here! For those fortunate enough to never have encountered that ridiculous initialism, it's short for Artificial Super Intelligence. You often see it tossed around by singularity nuts and lesswrong cultists.

        ASI, like AGI, is silly science fiction nonsense. You'll find it in comic books, not textbooks.

  • "Deal!" tweeted European Commissioner Thierry Breton, just before midnight. "The EU becomes the very first continent to set clear rules for the use of AI."

    The hubris is amazing! The EU is not a continent. There are hundreds of millions of Europeans who are not under the jurisdiction of the EU.

  • The EU bureaucrats may have agreed on a document, but it has to be voted on by the EU parliament before it's accepted, and that won't happen until next year. Or are they admitting that said parliament is just a rubber-stamping body?
