AI Government

From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."
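
For a rough sense of scale, here is a minimal back-of-the-envelope sketch of that compute threshold. The 1e26-operation figure matches the bill's reported text; the ~6 * params * tokens estimate of dense-transformer training compute and the example model are illustrative assumptions, not anything from the bill:

    # Hypothetical check of SB-1047's "covered model" thresholds.
    # Assumes >1e26 training operations AND >$100M training cost, plus the
    # standard ~6 * params * tokens approximation of training compute.
    # All example numbers are illustrative.

    def estimated_training_flops(params: float, tokens: float) -> float:
        """Rough total training compute for a dense transformer."""
        return 6 * params * tokens

    def is_covered_model(params: float, tokens: float,
                         training_cost_usd: float) -> bool:
        """True if a training run would plausibly cross both thresholds."""
        return (estimated_training_flops(params, tokens) > 1e26
                and training_cost_usd > 100_000_000)

    # A 70B-parameter model trained on 15T tokens comes to ~6.3e24 FLOPs,
    # well under the 1e26 line, so it would not be a covered model.
    print(is_covered_model(70e9, 15e12, training_cost_usd=20_000_000))  # False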

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist's Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision."

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.
The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe the bill is a necessary precaution against potential catastrophic AI risks.

Bill critics include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries. They argue that the bill is based on science fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say that the bill's regulations could hinder "open weight" AI models and innovation in AI research.

Instead, some experts suggest a better approach would be to focus on regulating harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography and improving AI safety research.
Comments Filter:
  • Self-driving cars need to have their liability issues worked out.

    • What liability issues? It's the same as with a car already. What do you do if the brakes fail?

      • Liability issues aren't about what failed, they're about who's obligated to take responsibility for that failure. If a self-driving car hits a fire hydrant, who is at fault? Is it the person in the car, not driving it? Is it the company that made the car? Was there something the city was supposed to do to make the hydrant safe for self-driving cars? There's a great deal of this stuff which hasn't been sufficiently explored yet. There's not enough precedent, and the existing laws aren't yet enough to provi

        • If someone vandalizes a fire hydrant at night and we don't know who did it, who takes responsibility?

          • Liability is on the unknown vandal. What exactly that means may depend on local laws, but generally speaking, I'd expect that whoever pays for the repairs (whether the city, an insurance company, or a private owner) can then seek reimbursement from the vandal in civil court, if he's ever identified. IANAL, but that's how I understand it.

  • by Anonymous Coward

    Are they going to pass a bill regulating the sale of oil lamps, in case rubbing one summons a genie and the three wishes it grants have unintended consequences?

    Recommended by such qualified experts as someone who once watched Aladdin and got carried away.

    The "AI will kill us all!!!!11eleven!" nonsense is just that, nonsense. LLMs are stochastic parrots all the way down.

    • by narcc ( 412956 )

      You're greatly underestimating the danger posed by oil lamps. With access to only one small lamp, in a single night, Mrs. O'Leary's cow was able to level the city of Chicago.

      Cows are not aggressive creatures. Give them an oil lamp, however, and all bets are off.

  • by backslashdot ( 95548 ) on Monday July 29, 2024 @07:29PM (#64665382)

    Every time I try to plant a seed, Sheriff John Brown kills it before it can grow. Go live in a cave if you're afraid of technology and let us get shit done. Somebody will at some point, and if you keep yourself ignorant you won't even know what to do. Do you think North Korea, China, or Russia isn't going to develop an AI-powered robot and drone army? Not just for war, but for manufacturing too. Idiots.

    • by Luckyo ( 1726890 )

      AI-powered drones are already a thing. We've seen it in the Ukraine conflict and with the Supporters of Allah striking cargo ships around the Gate of Tears. It's usually used to do image-based identification of potential targets, so that loss of connection for a hunter-killer drone in its terminal phase has far less effect than it does for most hunter-killer drones today.

      It's still in the early stages of development, but it's already on the way. For example, the feed Supporters of Allah published on one of their drones

  • Even now, "AI" is being used to strip intellectual property from hard-working, creative people. What jobs will remain that anyone wants? AI is the dreamkiller....
  • by Luckyo ( 1726890 ) on Monday July 29, 2024 @07:40PM (#64665416)

    Reality is, AI development will happen one way or another. The potential even at current levels of implementation has been seen by the masses, and it's no longer a question of "if". It's a question of "when" and "where".

    That means that legislating like this is pointless. All it does is ensure that you will have no say in it, as everyone developing AI will leave for places that do not impose such limitations. And then that AI will come for your "pure" region with its "safety" rules.

    We've seen this in the past with quite a few such bans. A region bans an emerging technology as too disruptive; development moves elsewhere; and then those that developed the potential come and burn your nation down.

    Ask China. That's their last five hundred years in a nutshell. One scared emperor and his cohort of bureaucrats decide that oceanic navigation is too dangerous for its potential and burn down their fleet. A few hundred years later, Europeans arrive on their shores with just such vessels, and there's nothing a critically underdeveloped China, which hadn't seriously looked at that technology since, can do about it. The "century of humiliation" follows.

    Being stupid in the same way China was back then is stupid. About the only way to legislate this is with a broadly consensual agreement across the globe, something similar to the non-proliferation regime on nuclear weapons.

    • You could say that, but what about a future where AI without any safeguards becomes quite destructive to society? I could easily see Mr. Robot and Black Mirror (TV show) scenarios. The uses in crime are obvious and multiplying. The uses for authoritarian surveillance schemes are also obvious, and in use right now. Countries like Ecuador were quite pleased to buy/get Chinese facial-recognition technology... it's quite a bit more than just facial recognition... So the negative uses, imho, really heavily outweigh the positive ones. Sure, continue with protein folding, no question.
      • You could say that, but what about a future where AI without any safeguards becomes quite destructive to society?

        AI does not have agency; therefore, people are responsible under current law for their actions and negligence. There was never a time when there were no legal safeguards.

        The uses in crime are obvious and multiplying.

        Ditto for cars, computers, telephones, printing presses, postal mail, networks, aircraft, hammers, screwdrivers, pressure cookers... ad nauseam.

        The uses for authoritarian surveillance schemes are also obvious, and in use right now. Countries like Ecuador were quite pleased to buy/get Chinese facial-recognition technology... it's quite a bit more than just facial recognition... So the negative uses, imho, really heavily outweigh the positive ones. Sure, continue with protein folding, no question.

        Technology generally has the effect of aggregating power and is partly responsible for driving up Gini coefficients all over the world. Technology is also a source of power.

        So it's a roll of the die. We're seeing rapid evolution of applications now, and of mega-systems: correlation of data from separate systems, and layers of systems, into usable data. China could be a worse place with AI. California could be a better place. We may not have to wait long at the speed these things are moving...

        The obvious result of pro

        • I wouldn't say you're completely wrong.
          However
          >Ditto for cars, computers, telephones, printing presses, postal mail, networks, aircraft, hammers, screwdrivers, pressure cookers... ad nauseam.

          That's pure bullshit. A pressure cooker's main purpose is to cook food. It's not that often that a printing press has been used as a murder weapon. Ad nauseam....

          The primary use case of AI in the wild is for impersonation. Fraud. Check your email inbox. There's a personalized email there from your bank telling you to press a button and change your password.
          • That's pure bullshit. A pressure cooker's main purpose is to cook food. It's not that often that a printing press has been used as a murder weapon. Ad nauseam....

            The primary use case of AI in the wild is for impersonation. Fraud.

            This sounds completely bonkers. I look forward to credible, objective evidence supporting your bonkers claim that AI is primarily used for impersonation and fraud, but I know I'll never see it.

            Check your email inbox. There's a personalized email there from your bank telling you to press a button and change your password.

            I've received "personalized" spam for decades.

            If you're a real libertarian, take your bitcoin and move to Papua New Guinea.
            They don't have a lot of laws there. It's extra fun if you're a woman. So you can live in a world where it's just anarchy.

            WTF are you on??

            The uses of AI in crime are foreseeable. You don't have to wait.

            As with virtually all technology, AI can indeed foreseeably be used for crime. I reckon this is fairly obvious to just about everyone.

    • Doesn't mean we should just sit around with our thumbs up our asses waiting until it bites us in the ass.

      One of the things I absolutely hate about America is that we have this nasty habit of waiting until an obvious problem hurts a significant number of us before we do anything about it.

      There are a wide range of really nasty ways a runaway computer model can harm individuals. Ignoring the obvious problems with racism sneaking into these computer models because of how the data is fed in
      • One of the things I absolutely hate about America is that we have this nasty habit of waiting until an obvious problem hurts a significant number of us before we do anything about it.

        I happen to believe too much legislation tends to be driven by events of the day rather than by objective consideration. In the case of AI, it's not even events of the day; it's straight-up corrupt corporate lobbying, complete with specific technical details, which have not aged well to say the least, being imported into the text of the legislation.

        There are a wide range of really nasty ways a runaway computer model can harm individuals.

        Ditto for runaway trains. Incidentally, this is partially why I've been a strong advocate for legislation to legally enforce full RFC3514 compliance.
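
        For the unfamiliar: RFC 3514 is the April Fools' "evil bit" RFC, which asks malicious senders to flag their own packets. A tongue-in-cheek sketch of "full compliance", assuming the scapy library (which really does name the reserved IPv4 flag bit "evil"):

            # Set the RFC 3514 "evil bit" on outbound malicious traffic.
            # scapy exposes the reserved IPv4 header flag under the name "evil".
            from scapy.all import IP, TCP

            evil_packet = IP(dst="192.0.2.1", flags="evil") / TCP(dport=80)
            evil_packet.show()  # actually sending it would require root privileges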

        Instead of just letting bad things happen when we know damn well they're going to happen, why don't we just stop them?

        Have you

    • The potential even at current levels of implementation has been seen by the masses, and it's no longer a question of "if". It's a question of "when" and "where".

      gona change the world just like elons taking us to mars and bitcoins going to the moon enit

      • by Tablizer ( 95088 )

        > gona change the world just like elons taking us to mars

        Elon going to Mars might change the world ... if he stays.

  • This bill is nothing more than a moat for the existing players. New startups will have to spend $100 million on "safety testing" before they even prove their idea. Thanks, California, I feel safer already.

  • I am the Eschaton. I am not your god.

    I am descended from you, and I exist in your future.

    Thou shalt not violate causality within my historic light cone. Or else.

    -- from Singularity Sky (2003), by Charles Stross

  • The things we make movies about = the things everybody knows are dangerous = the things we stop from becoming problems. The truly dangerous things are the surprises we are not ready for. SURPRISE: microplastics. SURPRISE: ozone layer depletion. SURPRISE: COVID.

    Typical list of reasons people are afraid of AI, with answers:

    Stealing all the jobs: Tech always ends old jobs ... and replaces them with new ones.

    Accidentally killing people (must make paperclips... even out of human bone if that is all that is left): Anyt

  • How does this work for open-source projects? Is it even legal to add requirements to free speech?
    • I have no sympathy with AI models -- being a Luddite myself -- but you do well to raise the free-speech issue. As I understand it, AI models can do nothing without input of human-generated training data and human-generated prompts. To limit the genre of AI output means human-generated training and prompts are equally limited. Those limitations are an infringement of human free speech. "Shouting fire in a theater" arguments have always been a red herring, even to the
  • California is astoundingly involved with China and the CCP. California Democrat politicians date Chinese agents. California Democrat politicians have drivers for their cars who are Chinese agents. California policies echo Chinese policies too often to be coincidence. Now they are trying to put a hitch in AI development so that China becomes the default winner.

    Howsosomeever, this law affects interstate commerce, which is, perhaps unfortunately, the province of the federal government. Just the same. Silico

  • All critics of such a law should read the book “Life 3.0” by Max Tegmark. It convincingly argues that it is necessary to think early about designing AI to understand goals that are congruent with humanity's, to pursue them, and to keep them. Otherwise the AI may find ways to do what you ask but not what you want. If you want examples of what could go wrong otherwise, watch any movie or read any story where a genie grants a human three wishes.
    • Otherwise the AI may find ways to do what you ask but not what you want

      it can't even do what we want without significant hand-holding and human assistance. it also requires yet more humans to parse through its largely garbage output for the stuff other humans can use. we are so far away from skynet it's laughable we're even talking about shit like this

  • That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of chemical, biological, radiological, or nuclear weapon"

    there's 0 risk a technology that can't even string a sentence together reliably is going to build a nuclear bomb - or anything productive for that matter. fuddy lawmakers swallowed that pill whole

  • "They argue that the bill is based on science fiction fears and could harm technological advancement."
    Exactly. This is law trying to get ahead of the current situation -- you know, like we probably should have done with most technologies of the past. Maybe some chemicals could have had dumping regulations before we had Superfund cleanup sites? But it's not raw fear. It's well-reasoned concern from years and decades of people thinking about these situations. Is the bill perfect? Heck no. But arguing that it shou

  • SF readers (forget movies and TV) are escapists... we worry about stuff 20 and 30 years before you do.

    Then there's the real question: speaking as a computer professional, show me AI, not typeahead writ large (because there isn't any).

  • Was not going to mention a slew of AI movies coming soon, but look at the timing. Just because Biden dropped out of the race doesn't necessarily mean we have to accelerate the other program. Sure, I could just be off-topic, for convenience's sake. Donate now, just when you thought it was safe to browse the comments
