Government AI

California AI Policy Report Warns of 'Irreversible Harms'

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost of inaction at this moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace and a lead writer of the report.

Instead, the approach centers on enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Mariano-Florentino Cuellar, who co-led the report. The report emphasizes that this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."

California AI Policy Report Warns of 'Irreversible Harms'

  • by thesjaakspoiler ( 4782965 ) on Wednesday June 18, 2025 @12:10AM (#65457291)

    I wonder if I can sue OpenAI for that.

  • Ya know what? It's typically not Democrat states like CA which get called out for being overly conservative - and this absolutely is, verging on neophobic.

    This is along the lines of conservatives wanting to ban the Internet because it allows people to communicate (this was a thing for a hot minute in the 90s).

    Do that many people have such a problem with temporal permanency that they're not able to realize how rapidly things have changed over the past 5, 10, 25, 50 years? You can't hold back progress by bann

    • by AmiMoJo ( 196126 )

      Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized, and appropriate, reasonable protections were put in place.

      AI will be the same. The US won't regulate it enough and it will exploit everyone, and bizarrely, just as with personal data privacy, the citizens will just accept it as if there is nothing they can do about it. Meanwhile Europe will protect people.

      • Lovely job painting these unproductive chuckledinks as heroes.

        Complete BS of course, but great job nonetheless. Europe *will* craft law aimed at monetising technologies by means of fines. *THAT* is what Europe does.

        When you can't innovate, legislate to make it illegal then coin it in court.

        "Protect people"! Hilarious.

        • Lovely job painting these unproductive chuckledinks as heroes.

          Complete BS of course, but great job nonetheless. Europe *will* craft law aimed at monetising technologies by means of fines. *THAT* is what Europe does.

          When you can't innovate, legislate to make it illegal then coin it in court.

          "Protect people"! Hilarious.

          You are exactly correct. The EU uses other people's technology as a way to make money. They do not innovate, and apparently are not capable of it.

          Oh.... Apple makes money. Let's find a way to take some of it!

          So then along comes their protection: force all Apple phones and pads to have USB-C connectors, just like Android. As if, in a sea of USB variants, the connectors on Android phones are the final version, never to be improved.

          With outlandishly lame excuses for that action.

          I mean, what happened? Europe use

        Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized, and appropriate, reasonable protections were put in place.

        Thanks to the GDPR, the world is full of websites constantly wasting our time by brandishing misleading modal prompts spewing meaningless drivel about cookies. The most absurd aspect is that the prompts themselves are often served from third parties, further increasing the vectors for tracking.

        Random example:
        https://www.euronews.com/ [euronews.com]

        I rejected the prompt, yet it is still connecting to Bombora, Adobe Tag Manager, DoubleVerify, DoubleClick, Opti Digital, Privacy Center. So much for protecting me.
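
        For what it's worth, a check like this is easy to script. Below is a rough Python sketch of one way to list every third-party host a page actually contacts; it assumes the third-party selenium-wire package and a local Chrome/chromedriver, which is my own choice of tooling, not anything the parent described using:

          # Sketch: load a page and print every third-party hostname the
          # browser contacts. Assumes `pip install selenium-wire` plus a
          # local Chrome and chromedriver (assumptions, not from the post).
          import time
          from urllib.parse import urlparse

          from seleniumwire import webdriver  # selenium wrapper that records requests

          SITE = "https://www.euronews.com/"
          first_party = urlparse(SITE).hostname

          driver = webdriver.Chrome()
          driver.get(SITE)
          time.sleep(10)  # give trackers time to fire after the page loads

          # Every distinct hostname requested, minus the site's own.
          hosts = sorted({
              urlparse(r.url).hostname
              for r in driver.requests
              if urlparse(r.url).hostname not in (None, first_party)
          })
          for host in hosts:
              print(host)
          driver.quit()

        Run it once letting the prompt sit, then again after manually clicking reject during the sleep, and diff the two lists; any tracker present in both is ignoring the refusal.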

        • by AmiMoJo ( 196126 )

          GDPR isn't without its flaws. Some websites did the right thing and just disabled analytics so they don't need the consent prompts. It could do with updating to make prompting illegal. Arguably it already is, but the regulators don't seem to be enforcing it.

          Get an add-on like I Still Don't Care About Cookies, and 99% of them will go away.

          What are these amazing personal data processing AI services we are missing out on?

          Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized, and appropriate, reasonable protections were put in place.

          Thanks to the GDPR, the world is full of websites constantly wasting our time by brandishing misleading modal prompts spewing meaningless drivel about cookies.

          Yes, I don't have that on my websites - I geoblock Europe. Yes, they can get in if they use a VPN, but that's their problem then.

          The most absurd aspect is that the prompts themselves are often served from third parties, further increasing the vectors for tracking.

          Random example: https://www.euronews.com/ [euronews.com]

          I rejected the prompt, yet it is still connecting to Bombora, Adobe Tag Manager, DoubleVerify, DoubleClick, Opti Digital, Privacy Center. So much for protecting me.

          Your false sense of security is worse than nothing; with nothing, at least people are not being misled.

          If European citizens are so concerned about their data, it seems maybe their government, in its zeal to protect them, should be requiring the use of script blockers instead of the brain-dead cookie bullshit. We always hear talk from them about privacy, but looking at what they do allow, they have accomplished nothing for all their sound and fury. I once went to a news si
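
          Since the post doesn't say how the geoblocking is done, here is a minimal sketch of one way to do it, assuming a Python/Flask site and MaxMind's geoip2 package with a downloaded GeoLite2-Country database (all of which are my assumptions for illustration):

            # Sketch: refuse EU visitors before serving any page.
            # Assumes Flask, `pip install geoip2`, and a local
            # GeoLite2-Country.mmdb file (assumptions, not from the post).
            import geoip2.database
            import geoip2.errors
            from flask import Flask, abort, request

            app = Flask(__name__)
            reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

            # Abbreviated EU country-code list, for illustration only.
            BLOCKED = {"AT", "BE", "DE", "DK", "ES", "FI", "FR",
                       "IE", "IT", "NL", "PL", "PT", "SE"}

            @app.before_request
            def drop_eu_visitors():
                ip = request.remote_addr  # behind a proxy, resolve the real client IP instead
                try:
                    code = reader.country(ip).country.iso_code
                except geoip2.errors.AddressNotFoundError:
                    return  # unknown address: let it through
                if code in BLOCKED:
                    abort(403)

            @app.route("/")
            def index():
                return "No cookie banner needed here."

          As the post concedes, a VPN gets around a database lookup like this; the poster's position is that that becomes the visitor's problem.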

        Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized, and appropriate, reasonable protections were put in place.

        AI will be the same. The US won't regulate it enough and it will exploit everyone, and bizarrely, just as with personal data privacy, the citizens will just accept it as if there is nothing they can do about it.

        Can you point on this doll where America hurt you?

        Meanwhile Europe will protect people.

        Look, unless you were never taught European history, you really need to drop your constant idea that Europe is this bastion of kindness, protecting its people, making certain that all its citizens are happy and healthy, provided by governance that really cares.

        This isn't to say 'Murrica is flawless; we aren't. Not even close. But we admit it.

        Meanwhile there is a contingent here who believe that Europe is a

  • by NotEmmanuelGoldstein ( 6423622 ) on Wednesday June 18, 2025 @02:37AM (#65457431)

    ... the fearful approach provides ...

    IANAL, but it's easy to see the consequences of the GOP pro-AI bill. It's wonderful that only the US federal government can demand drones contain a password, firewall, anti-virus, and kill-switch. Only the federal government can make it illegal for AI to dox police officers, undress school-girls, or short-sell all the blue-chip stocks. Sure, the states might be able to throw someone in prison for training the AI to commit a crime, or for running the software. But the results of the software are untouchable, thanks to federal law on AI.

    • by schwit1 ( 797399 )

      Good or bad, it is what it is. This idea by California is like trying to hold back the tide; it's wishful thinking.

      Since the US and its states do not have regulatory authority over all AIs, all they can do is hamper the US AI companies, which is exactly what the CCP is lobbying and hoping for.

    • Yup. Sure, AI could cause trouble. Monkeys could also fly out of my a$$.

      It's not enough to say something could happen so we need to regulate. We need to assess how likely the bad consequences are. Given how much coaching Cursor needs to rewrite some code for me, I don't think it's staging a worldwide robot revolution any time soon. TBH, I think it's much more likely some code monkey will use Claude or Copilot to create some faulty document which another drone reads and follows, causing problems.

  • AI is the death of the search engine. It allows SEO leeches to build websites far too easily, sites that appear informative and match all of Google's requirements to be voted up.

    This is a particular problem because the SEO websites built by AI are often wrong. At least with a handmade SEO website like Wirecutter, the information is mediocre but real. With the new AI-SEO, reality has nothing to do with it.
  • This report leverages broad evidence including empirical research, historical analysis, and modeling and simulations to provide a framework for policymaking on the frontier of AI development.

    If true, this would be a first; every AI doom/policy paper I've ever seen consists entirely of evidence-free conjecturbation. To date, all I've seen is people saying some bad thing "could" happen without objectively supporting the statement, and almost always without even trying to provide any kind of useful characterization of what the vague "could be" assertions are even supposed to represent.

    Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms.

    What else is new? Everyone always invokes this same tired meaningless "could" rhetoric.

    Evidence-based policymaking incorporates not only observed harms but also prediction and analysis grounded in technical methods and historical experience, leveraging case comparisons, modeling, simulations, and adversarial testing.

    LOL perhaps they will perform

  • I first heard this comparison back when IDEs were young (kudos to Larry Masinter, at Xerox PARC at the time).

    Amplifiers don't really know or care what they are amplifying.
    If you tell them to create good, bad, immoral, or dangerous code, they'll try to comply.
    Laws against bad uses of LLMs just make them illegal - they don't make them impossible.

    Mediocre programmers with IDE/LLM support will create reams of mediocre code, at best.

  • California’s new Report on Frontier AI Policy (June 17, 2025) is a rare thing: a clear, technically literate framework for AI risk without the usual hysteria. No doom-laden speculation, no calls to ban technology, and—refreshingly—no breathless invocation of terrorism. That word—“terrorism”—does not appear once in the report. I checked.

    Instead, this policy leans heavily on the kind of safeguards engineers can actually implement: pre-deployment evaluations, red teami

  • Just like project requirements, policies should specify "whats" and not "hows". What outcomes are desirable, and what are not. Also like project requirements, the views of all stakeholders should be reflected in the policies.

    Regulatory frameworks that specify the "hows" are more likely to result in meaningless compliance as game-playing organizations seek to maximize returns under the rules. Regulatory frameworks created mainly using input from major players (because they are "experienced") are more lik

  • That is the billion-dollar question.

  • by fjo3 ( 1399739 )
    Everything Newsom has been doing for the last year, and will continue to do, is about setting up his run for president. It's safe to ignore him, unless you live in CA like I do.
