
California AI Policy Report Warns of 'Irreversible Harms'

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost of inaction at this current moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and a lead writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Cuellar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."
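
As a purely illustrative aside (the report does not prescribe any schema, and every field name below is a hypothetical example), an incident reporting system of the kind described might capture records along these lines:

    # Hypothetical sketch of an AI incident record; nothing here comes from
    # the report itself -- field names and values are invented for illustration.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class IncidentReport:
        model_id: str                  # which system was involved
        observed_at: datetime          # when the behavior was seen
        severity: str                  # e.g. "low", "moderate", "severe"
        description: str               # what happened, in plain language
        deployment_context: str        # internal test, pilot, or public release
        reported_internally: bool      # whistleblower protections matter most here
        mitigations: list[str] = field(default_factory=list)

    # Example usage with made-up values:
    example = IncidentReport(
        model_id="example-model-v1",
        observed_at=datetime(2025, 6, 17),
        severity="moderate",
        description="Model produced output it was trained to refuse.",
        deployment_context="internal red-team test",
        reported_internally=True,
        mitigations=["rolled back checkpoint", "filed disclosure"],
    )

The substance of the proposal is the reporting channel and the legal protection around such records, not any particular format.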
Comments Filter:
  • by thesjaakspoiler ( 4782965 ) on Wednesday June 18, 2025 @12:10AM (#65457291)

    wonder if I can sue OpenAI for that.

    • Try it. If you fail, you can write a book about it.
      • "Why Experts Worry Weâ(TM)re 2 Years From An âoeAI Black Deathâ"
        https://www.youtube.com/watch?... [youtube.com]

        • lol. The danger isn't AI designing new viruses, it's America refusing to mandate vaccines and quarantine infected people for anything. America was the most important incubation and infection vector in the world during Covid by far. The UK was second.
          • What do you think of the idea that SARS-CoV-2 was perhaps engineered in the USA as a DARPA-funded self-spreading vaccine intended for bats to prevent zoonotic outbreaks in China but then accidentally leaked out when being tested by a partner in Wuhan who had a colony of the bats the vaccine was intended for? More details on the possible "who what when how and why" of all that:
            https://dailysceptic.org/2024/... [dailysceptic.org]

            If true, it provides an example of how dangerous this sort of gain-of-function bio-engineering of vi

            • I'm just not convinced that AI designer viruses qualify as a major impending threat to humanity. To be more precise: There are plenty of natural biological/virus risks that dwarf this threat already.

              The first one that comes to mind is the natural adaptation of microbes to antibiotics in hospitals. No AI required. Just natural selection in interaction-rich environments. There are a lot more uncontrolled experiments in hospitals with sick people than lab experiments; statistically, it's not close.

              Another is

              • Good points on complex systems and adaptations. Thanks! I hope you are right overall on that in practice -- especially about AI-co-designed biowarfare agents not being a huge issue versus Eric Schmidt's point on the "Offense Dominant" nature of such.

                And advanced computing can otherwise indeed do a lot of good in health care. For example:
                "Taking the bite out of Lyme disease: New studies offer insight into disease's treatment, lingering symptoms"
                https://news.northwestern.edu/... [northwestern.edu]
                "Northwestern scientists identi

    • by gweihir ( 88907 )

      Naa, just ask the LLM to pretend to be a girl and butter up your ego! I hear some LLMs will even confirm their users' delusions that they are a god.

  • Ya know what? It's typically not Democrat states like CA which get called out for being overly conservative - and this absolutely is, verging on neophobic.

    This is along the lines of conservatives wanting to ban the Internet because it allows people to communicate (this was a thing for a hot minute in the 90s).

    Do that many people have such a problem with temporal permanency that they're not able to realize how rapidly things have changed over the past 5, 10, 25, 50 years? You can't hold back progress by bann

    • by AmiMoJo ( 196126 )

      Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized and appropriate, reasonable protections put in place.

      AI will be the same. The US won't regulate it enough and it will exploit everyone, and bizarrely like with personal data privacy the citizens will just accept it as if there is nothing they can do about it. Meanwhile Europe will protect people.

      • Lovely job painting these unproductive chuckledinks as heroes.

        Completely BS of course, but great job nonetheless. Europe *will* craft law aimed at monetising technologies by means of fines. *THAT* is what Europe does.

        When you can't innovate, legislate to make it illegal then coin it in court.

        "Protect people"! Hilarious.

        • Lovely job painting these unproductive chuckledinks as heroes.

          Completely BS of course, but great job nonetheless. Europe *will* craft law aimed at monetising technologies by means of fines. *THAT* is what Europe does.

          When you can't innovate, legislate to make it illegal then coin it in court.

          "Protect people"! Hilarious.

          You are exactly correct. The EU uses others' technology as a way to make money. They do not innovate, and apparently are not capable of innovation.

          Oh.... Apple makes money. Let's find a way to take some of it!

          So then along comes their protection, forcing all Apple phones and pads to have USB-C connectors, just like Android. As if, in a sea of USB variants, the ones on Android phones are the final version, never to be improved.

          With outlandishly lame excuses for that action.

          I mean, what happened? Europe use

      • Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized and appropriate, reasonable protections put in place.

        Thanks to the GDPR the world is full of websites constantly wasting our time by brandishing misleading modal prompts, served by third parties, spewing meaningless drivel about cookies. The most absurd aspect is that rendering the prompts is often itself handled by third parties, further increasing vectors for tracking.

        Random example:
        https://www.euronews.com/ [euronews.com]

        I rejected the prompt yet it is still connecting to Bombora, Adobe Tag Manager, DoubleVerify, DoubleClick, Opti Digital, Privacy Center. So much for

        • by AmiMoJo ( 196126 )

          GDPR isn't without its flaws. Some websites did the right thing and just disabled analytics so they don't need the requests. It could do with updating to make prompting illegal. Arguably it already is, but the regulators don't seem to be enforcing it.

          Get an add-on like I Still Don't Care About Cookies, and 99% of them will go away.

          What are these amazing personal data processing AI services we are missing out on?

        • Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized and appropriate, reasonable protections put in place.

          Thanks to the GDPR the world is full of websites constantly wasting our time by brandishing misleading modal prompts, served by third parties, spewing meaningless drivel about cookies.

          Yes, I don't have that on my websites - I geoblock Europe. Yes, they can get in if they use a VPN, but that's their problem then.

          The most absurd aspect is that rendering the prompts is often itself handled by third parties, further increasing vectors for tracking.

          Random example: https://www.euronews.com/ [euronews.com]

          I rejected the prompt yet it is still connecting to Bombora, Adobe Tag Manager, DoubleVerify, DoubleClick, Opti Digital, Privacy Center. So much for protecting me. Your false sense of security is worse than nothing, because with nothing at least people are not being misled.

          If European citizens are so concerned about their data, it seems maybe their government - in its zeal to protect them - should be requiring the use of script blockers instead of the brain-dead cookie bullshit. We always hear talk from them about privacy, but taking a look at what they do allow, they have accomplished nothing for all their sound and fury. I once went to a news si

      • Europe got ahead of data protection in the 80s, and it proved to be a huge boon for us, leading to GDPR. The danger was recognized and appropriate, reasonable protections put in place.

        AI will be the same. The US won't regulate it enough and it will exploit everyone, and bizarrely like with personal data privacy the citizens will just accept it as if there is nothing they can do about it.

        Can you point on this doll where America hurt you?

        Meanwhile Europe will protect people.

        Look, unless you were never taught European history, you really need to drop your constant idea that Europe is this bastion of kindness, protecting its people, making certain that all its citizens are happy and healthy, provided by governance that really cares.

        This isn't to say 'Murrica is flawless, we aren't. Not even close. But we admit it.

        Meanwhile there is a contingent here who believe that Europe is a

  • by NotEmmanuelGoldstein ( 6423622 ) on Wednesday June 18, 2025 @02:37AM (#65457431)

    ... the fearful approach provides ...

    IANAL but it's easy to see the consequences of the GOP pro-AI bill. It's wonderful: only the US federal government can demand drones contain a password, firewall, anti-virus and kill-switch. Only the federal government can make it illegal for AI to dox police officers, undress school-girls, or short-sell all the blue-chip stocks. Sure, the states might be able to throw someone in prison for training the AI to commit a crime, or for running the software. But the results of the software are untouchable, thanks to federal law on AI.

    • by schwit1 ( 797399 )

      Good or bad, it is what it is. This idea by California is like trying to hold back the tide; it's wishful thinking.

      Since the US and its states do not have regulatory authority over all AIs, all they can do is hamper the US AI companies, which the CCP is lobbying for and hoping for.

    • The alternative is that 50 states have 50 different sets of AI rules. Considering the "Internet" is "Interstate," I think it makes more sense for the Federal Government to set those rules across all States. I would, and do, make the same argument against the Texas/Florida/Utah Social Media and Adult Website laws that recently took effect. I'm for reasonable government regulation of most things, but they should be the same across the country.

    • Yup. Sure, AI could cause trouble. Monkeys could also fly out of my a$$.

      It's not enough to say something could happen so we need to regulate. We need to assess how likely the bad consequences are. Given how much coaching Cursor needs to re-write some code for me, I don't think it's staging a worldwide robot revolution any time soon. TBH, I think it's much more likely some code monkey will use Claude or Copilot to create some faulty document which another drone reads and follows and causes problems.

  • AI is the death of the search engine. It makes it far too easy for SEO leeches to build websites that appear informative and match all of Google's requirements to be voted up.

    This is in particular a problem because the SEO websites built by AI are often wrong. At least with a handmade SEO website like Wirecutter, the information is mediocre, but real. With the new AI-SEO, reality has nothing to do with it.
    • by allo ( 1728082 )

      And the rise of AI search engines. In a few years only power users will use a list of website links. Everyone else will let the search engine generate an answer page and click the source links when needed.

  • This report leverages broad evidence including empirical research, historical analysis, and modeling and simulations to provide a framework for policymaking on the frontier of AI development.

    If true this would be a first; every AI doom / policy paper I've ever seen consists entirely of evidence-free conjecturbation. To date all I've seen is people saying some bad thing "could" happen, without objectively supporting the statement and almost always without even trying to provide any kind of useful characterization of what those vague "could be" assertions are even supposed to represent.

    Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms.

    What else is new? Everyone always invokes this same tired meaningless "could" rhetoric.

    Evidence-based policymaking incorporates not only observed harms but also prediction and analysis grounded in technical methods and historical experience, leveraging case comparisons, modeling, simulations, and adversarial testing.

    LOL perhaps they will perform

  • I don't know what nuclear-grade uranium means but it must be bad if it is included in a list along with "biological threats", which sounds bad in context but then I'm not a biologist.

    A bit of searching the internet will give some definitions of the varied grades of uranium, but I doubt "nuclear grade" will be found among them. As I recall, the distinction between "low enriched" and "high enriched" is more a matter of international law than anything definitive in physics, but it's a convenient dividing lin

    • I don't know what nuclear-grade uranium means but it must be bad if it is included in a list along with "biological threats",

      It reminds me of "military grade" guns in scare quotes. Military grade means built by the low-cost (or most politically connected) vendor using inconsistent and ambiguous specs carved in stone 20 years ago. It does not necessarily mean the deadliest weapon available.

    • by allo ( 1728082 )

      Uranium is an atom with a nucleus, hence nuclear grade uranium. It also has a few electrons, so be careful not to get electrocuted when handling it.

  • I first heard this comparison back when IDEs were young (kudos to Larry Masinter, at Xerox PARC at the time).

    Amplifiers don't really know or care what they are amplifying.
    If you tell them to create good, bad, immoral, or dangerous code, they'll try to comply.
    Laws against bad uses of LLMs just make them illegal - they don't make them impossible.

    Mediocre programmers with IDE/LLM support will create reams of mediocre code, at best.

  • California’s new Report on Frontier AI Policy (June 17, 2025) is a rare thing: a clear, technically literate framework for AI risk without the usual hysteria. No doom-laden speculation, no calls to ban technology, and—refreshingly—no breathless invocation of terrorism. That word—“terrorism”—does not appear once in the report. I checked.

    Instead, this policy leans heavily on the kind of safeguards engineers can actually implement: pre-deployment evaluations, red teaming, transparency on training data, and risk thresholds tied to compute. They’re not trying to stop progress—they’re trying to make sure someone hits the brakes before a fine-tuned model starts confidently generating CRISPR exploits or accidentally-on-purpose reverse-engineers a US or PRC cyberweapon.

    Yes, it focuses on high-compute models (10^25 FLOP and up), which you could argue is a crude proxy for capability. But the point is to establish a regulatory floor, not to kneecap the field. There’s no attempt here to ban local LLMs, outlaw open weights, or panic over generative art. In fact, copyright isn’t even the main frame—training data provenance is discussed, but not weaponized.
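
    To put that floor in perspective, here's a rough back-of-the-envelope sketch using the common C ≈ 6·N·D training-compute approximation -- the parameter and token counts below are made up for illustration, not taken from the report:

      # Rough estimate of training compute against a 1e25 FLOP threshold,
      # using the common approximation C ~ 6 * N * D (N = parameters,
      # D = training tokens). Numbers below are illustrative, not real models.
      THRESHOLD_FLOP = 1e25

      def estimated_training_flop(params: float, tokens: float) -> float:
          return 6.0 * params * tokens

      def covered_by_threshold(params: float, tokens: float) -> bool:
          return estimated_training_flop(params, tokens) >= THRESHOLD_FLOP

      # Hypothetical run: 400B parameters on 10T tokens -> ~2.4e25 FLOP.
      print(covered_by_threshold(4e11, 1e13))   # True
      # A 7B-parameter model on 2T tokens -> ~8.4e22 FLOP, well under the floor.
      print(covered_by_threshold(7e9, 2e12))    # False

    Which is the point: a floor like that catches only the very largest training runs while leaving smaller models and open-weight work untouched.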

    What’s most striking is what’s absent. This is not another case of “but criminals!” moral panic—the same kind that nearly sank the early internet. I remember when some congressional staffer stumbled across a USENET post with a zipped up copy of the Anarchist Cookbook, and lost their damn mind. This isn’t politicians circa 1996 trying to smother a transformative technology because they don’t understand it. It’s an honest effort to thread the needle: prevent catastrophic misuse without killing the tools.

  • by anegg ( 1390659 ) on Wednesday June 18, 2025 @12:36PM (#65458603)

    Just like project requirements, policies should specify "whats" and not "hows". What outcomes are desirable, and what are not. Also like project requirements, the views of all stakeholders should be reflected in the policies.

    Regulatory frameworks that specify the "hows" are more likely to result in meaningless compliance as game-playing organizations seek to maximize returns under the rules. Regulatory frameworks created mainly using input from major players (because they are "experienced") are more likely to align with how those major players want to do business than what concerns really need to be addressed.

    One major concern I see with "AI" is the potential for harmful behavior that is excused because "the AI did it". Fortunately, there have been some legal rulings where that defense didn't hold water. A policy that makes it clear that organizations can't escape claims of harm just because a computational system judged to be using "AI" is involved would clarify that the organization is responsible for what the organization does, whether through its people or its systems.

    Another major concern I see with "AI" is the creation of dramatically unequal juxtapositions of people/human effort against human-like effort that is really computationally driven in situations where expectations are based on human-effort versus human-effort. An "AI" LLM, for example, can spout vast quantities of human-like output (some percentage of which is bullshit) which can overwhelm the abilities of a real human to understand and respond to in real time. Behavioral norms that are based on real humans interacting with real humans will be upset by real humans interacting with computational systems unless it is made clear that those norms cannot be upheld in those circumstances.

    I'm sure that a group of people could identify more potential harms than just these two. I've cited them here as examples and not an enumeration of all concerns.

    If someone is going to really develop a policy framework or even policies, then a substantial amount of original thinking based on first principles and identification of the "whats" of actual harms needs to be undertaken. Telling an organization clearly that "if your AI kills someone (or produces outcomes of lesser but still significant harm) you will be held responsible" is much better than telling that organization "you must reduce risk by using red teams to evaluate systems before putting them into production".

    • You raise some valid-sounding points, but your throughline is clear: you’re less interested in regulating AI responsibly than in resisting its integration altogether. This isn’t a critique of how we regulate—it's a veiled argument for not trusting AI at all. That’s a position worth debating, but let’s not pretend it’s neutral.

      Just like project requirements, policies should specify "whats" and not "hows".

      That’s a nice slogan, but it falls apart the moment you're dealing with high-risk technology. When the potential harms include synthetic bioag

  • by Tony Isaac ( 1301187 ) on Wednesday June 18, 2025 @01:48PM (#65458875) Homepage

    That is the billion-dollar question.

    • by allo ( 1728082 )

      If you ask Meta: No nipples.

    • That is the billion-dollar question.

      You know, trolls often open their posts by begging the question, like you just did. Since you didn’t immediately follow it with a bunch of strawman assertions and lame-ass whataboutisms, I’m going to give you the benefit of the doubt and try to address it in good faith.

      So...“what safeguards are proper?” This question was asked and answered in the policy paper. The commission defines proper safeguards as concrete, enforceable measures grounded in engineering reality, and they clearl

      • Yes indeed, the study authors laid out *their* recommendations for guardrails. Are those the "proper" ones? Would they work? Would they do harm?

        - Mandatory risk assessments. What risks are in scope? It's not possible to assess risks without defining scope. And defining scope by definition leaves out other potential risks, some of which could be worse than those that were in scope. Who decides whether the assessment is satisfactory, and what is the standard that determines the criteria for success? How is "c

  • by fjo3 ( 1399739 )
    Everything Newsom has been doing for the last year, and will continue to do, is about setting up his run for president. It's safe to ignore him, unless you live in CA like I do.
  • "Irreversible harm." Well, at least that's better than "existential threat."

"Well, if you can't believe what you read in a comic book, what *can* you believe?!" -- Bullwinkle J. Moose

Working...