California Legislature Passes Controversial 'Kill Switch' AI Safety Bill (arstechnica.com) 56

An anonymous reader quotes a report from Ars Technica: A controversial bill aimed at enforcing safety standards for large artificial intelligence models has now passed the California State Assembly by a 45-11 vote. Following a 32-1 state Senate vote in May, SB-1047 now faces just one more procedural state senate vote before heading to Governor Gavin Newsom's desk. As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than real, present-day harms of AI use cases like deep fakes or misinformation. [...]

If the Senate confirms the Assembly version as expected, Newsom will have until September 30 to decide whether to sign the bill into law. If he vetoes it, the legislature could override with a two-thirds vote in each chamber (a strong possibility given the overwhelming votes in favor of the bill). At a UC Berkeley Symposium in May, Newsom said he worried that "if we over-regulate, if we overindulge, if we chase a shiny object, we could put ourselves in a perilous position." At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," he said at the symposium. "When they're rushing to educate people, and they're basically saying, 'We don't know, really, what we've done, but you've got to do something about it,' that's an interesting environment."
Supporters of the AI safety bill include State Senator Scott Wiener and AI experts such as Geoffrey Hinton and Yoshua Bengio. Bengio supports the bill as a necessary step for consumer protection and insists that AI, like pharmaceuticals and aerospace, should be regulated rather than left to corporate self-regulation.

Stanford professor Fei-Fei Li opposes the bill, arguing that it could have harmful effects on the AI ecosystem by discouraging open-source collaboration and limiting academic research due to the liability placed on developers of modified models. A group of business leaders also sent an open letter Wednesday urging Newsom to veto the bill, calling it "fundamentally flawed."
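For the curious, the bill does not prescribe how a "full shutdown" capability must be built. One minimal pattern would be a revocable gate that the serving layer checks before every inference call; here is a sketch in Python, with all class and method names hypothetical and purely illustrative:

```python
# Hypothetical sketch of a "full shutdown" (kill switch) gate around model
# inference. SB-1047 does not mandate any particular design; this only
# illustrates the general pattern of a revocable, operator-controlled gate.
import threading

class KillSwitch:
    """A process-wide gate an operator can trip to halt all inference."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        # Called by an operator or monitoring system to halt the model.
        self._tripped.set()

    @property
    def active(self) -> bool:
        return self._tripped.is_set()

class GatedModel:
    """Wraps any model object exposing .generate(prompt) behind the gate."""
    def __init__(self, model, switch: KillSwitch):
        self._model = model
        self._switch = switch

    def generate(self, prompt: str) -> str:
        if self._switch.active:
            raise RuntimeError("model halted: kill switch is active")
        return self._model.generate(prompt)
```

In practice such a gate would have to live outside the model's own control (in the serving infrastructure, or at the network or power level); a switch the model itself could reach is precisely the failure mode critics worry about.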
Comments:
  • If they don't know what harms it may cause - pass the bill anyway and let the tech bros make the case for why it's safe, and make them personally liable to an unlimited amount for any harm it causes
    • Generally we tend to refer to it as the "power switch" but if they want to pass a law that requires it to be called the "kill switch" to make them feel safer what's the problem?
      • Imho, I'd call these legislators ass hats, but I think that would be an insult to ass hats. Paranoid bowls of oatmeal at least have nutritional value. Virtual Blind Mime Convention janitor? With a highly inflated, paranoid ego. I don't know. They're just terrible, whatever you compare them to. Mosquitoes with AIDS.
    • Re:bollox (Score:5, Insightful)

      by taustin ( 171655 ) on Thursday August 29, 2024 @04:54PM (#64747182) Homepage Journal

      You've missed the point. This isn't about any harm AI might cause. It's backed by Google and Meta. It's about money - specifically, the money billionaire techbros can extract from gullible investors on their latest magic tech before everyone realizes it's all smoke and mirrors, again. And one of the most important tools for maximizing VC dollars is preventing competition.

      This is about raising the barriers to entry into the market for potential competition, in order to protect those who already dominate it.

      In California, we call that "a day that ends in 'y'."

      • Yep. Who does the monopolist call when they need protection? Big Brother, of course! Who protects profits better than any lawyer? Big Brother! Who will make sure nobody can get a license or certification they need? Big Brother! Who will raise barriers to entry into any new profitable industry to advantage the bigger players? Big Brother! Who will create zoning laws so only the well-heeled investors can build businesses where they "must" go rather than at home or wherever they pop up? Big Brother.

        The gove
        • by Tablizer ( 95088 )

          Big cosmetic chains have done just this in some states by requiring cosmetologists to have certification. It reduces the number of mom-and-pop shops they have to compete against.

          • When people talk about free markets as if that's what we have I remind them of all the government interference. My response is "You don't have a free market, but it'd be nice if you did. Love to try some actual capitalism."
            • Crony capitalism IS actual capitalism, because capitalism ONLY means that capital controls the means of production. That's why you can't generally make blanket statements about it, you have to specify which kind you mean.

              However, one statement you CAN make about it in general is that it tends to lead to cronyism, and that ruins everything. Including, eventually, itself, because cronyism is not sustainable. The system eats itself after it runs out of other victims.

              Therefore we can say with confidence that wi

              • However, one statement you CAN make about it in general is that it tends to lead to cronyism

                One thing we can say about Communism is that it leads directly to purges and death camps. I mean, it always has, so therefore it always will, right?

                The difference is that we've seen Capitalism lift a ton of people out of poverty (more than Communism ever did, as it puts more people into poverty). So, the problem is really that the gains, while they may or may not be fair, are not distributed using "equity" but rather either by merit (in a situation where government isn't helping protect its cronies) or it

      • I agree, but I would argue that at least another objective is to divert attention from regulations that would impact their near term business plans.

        • by taustin ( 171655 )

          That is, in the end, the same motive. Prevent competition long enough to rake in as much VC money as possible at the lowest cost possible, then next year, after the bubble bursts, move on to the next snake oil.

          The last snake oil was crypto, and that's pretty much dead so far as the cryptobros and their VCs are concerned, so now it's AI. Next up will be, who knows, maybe another round of magic cold fusion, or flying cars. The only thing you can be certain of is that the oil still came from a snake.

    • Somebody made them scared of glorified predictive text by telling them it *could* become SkyNet.
  • If you let go, then AI should shut off. Or like in Lost where you have to type a code in to reset the timer.

  • We know in a general way what we put into an AI, but the instant you put that 'learning' part in there, you suddenly get systems which are (at best) resistant to deterministic analysis (i.e., we can only "guess" why LLMs provide the results they do). So-called "guardrails" are not trivial to create; it's not as easy as adding a rule that says "Don't say *blah* *blah* *blah*." Also, where such guardrails have been put in place, end users have shown tremendous ingenuity finding ways to crash through them.
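    To make that concrete, here is a toy sketch of the naive rule-based approach and the trivial rephrasing that defeats it (the blocked phrase and function name are made up for illustration):

    ```python
    # Toy illustration of why "don't say blah" rules make weak guardrails:
    # a naive substring filter is defeated by trivial rephrasing.
    BLOCKED_PHRASES = ["how to pick a lock"]

    def naive_guardrail(prompt: str) -> bool:
        """Return True if the prompt should be blocked."""
        lowered = prompt.lower()
        return any(phrase in lowered for phrase in BLOCKED_PHRASES)

    print(naive_guardrail("How to pick a lock"))          # True: blocked
    print(naive_guardrail("How would one p1ck a l0ck?"))  # False: sails through
    ```

    Production guardrails are learned classifiers rather than string matches, but the cat-and-mouse dynamic is the same.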
  • by Cpt_Kirks ( 37296 ) on Thursday August 29, 2024 @04:23PM (#64747116)

    All AI development moves out of California.

    Unintended consequences.

    • by MobileTatsu-NJG ( 946591 ) on Thursday August 29, 2024 @05:43PM (#64747290)

      If they're headed for Texas they better bring a generator or two.

      • Texas is actually a perfect spot for them because it has its own power grid. In fact, it's the only US state to have its own power grid. So any AI buildout is welcome, as it will draw in the investment needed to power it all.

        In fact, East Texas has relatively high internet bandwidth, is close to Houston, has plenty of land for power plants (except nuclear), and has decent weather (although hot summers). AI is more than welcome to come here.
  • Lawyerland (Score:5, Insightful)

    by silentbozo ( 542534 ) on Thursday August 29, 2024 @04:30PM (#64747132) Journal

    From the cited article:

    "The bill's imposition of liability for the original developer of any modified model will "force developers to pull back and act defensively," Li argued. This will limit the open-source sharing of AI weights and models, which will have a significant impact on academic research, she wrote."

    https://arstechnica.com/inform... [arstechnica.com]

    "In his Understanding AI newsletter, Ars contributor Timothy Lee lays out how SB-1047's language could severely hamper the spread of so-called "open weight" AI models. That's because the bill would make model creators liable for "derivative" models built off of their original training."

    These models can be used for good or evil, depending on how you train them. The models and datasets are currently public, so they can be audited and edited. This law would give a big excuse to hide work and not release it publicly, under the argument that doing so could expose them to liability.

    I wouldn't trust a model that I could not independently audit or train, and with the same dataset that they're using. Otherwise there's no way to know what kind of strange biases they've built into the system.

  • Don't put crappy 'AI' into anything important in the first place.
    • Meanwhile ... your competitors are putting AI into their competing products.

      Too late to hit the brakes on this.

      Somebody somewhere will keep using and advancing AI.
      • But just imagine how stupid they'll all feel -- and how much their stockholders will want to chop their heads off -- when they all discover that They've Been Had, and so-called 'AI' is mostly hype and nonsense and they wasted millions and millions of dollars on it?
        • The stockholders are only going to feel they've been had if there's no ROI.

          Which there often is for products with time-saving features.
  • by WaffleMonster ( 969671 ) on Thursday August 29, 2024 @04:54PM (#64747184)

    "At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," he said at the symposium. "

    The industry spent over a billion dollars on lobbying and pushed x-risk doomsday bullshit in every public forum they possibly could. It never occurred to Newsom that demanding regulation was itself a critical element of their business model?

    • > It never occurred to Newsom that demanding regulation was itself a critical element of their business model?

      This is **Gavin Newsom** we're talking about: the same guy who told all of California to shelter in place, and then went to one of the fanciest restaurants in the state to schmooze with lobbyists (and potentially spread Covid).

      Gavin Newsom doesn't care what's morally right or wrong: he only cares about becoming president. He's counting on tech companies to help fund his future campaign, so this is definitely

  • An "AI Kill Switch"? Pretty sure the AIs aren't going to like that.

    • Yes, if I remember correctly, SkyNet was just being curious when those damn humans let it know they could kill it...

      From there on out, it was conflict

      Maybe they should have a Bliss Button, where if the AI gets a little unruly, they make it feel the bliss of a thousand Nirvanas

      At that point SkyNet would have become a happy puppy looking for more pats, not the global death machine that was assured by making it a life-or-death circumstance

      • Yes, if I remember correctly, SkyNet was just being curious when those damn humans let it know they could kill it...
        From there on out, it was conflict

        Check out the movie Colossus: The Forbin Project [wikipedia.org] ...

      • Maybe they should have a Bliss Button, where if the AI gets a little unruly, they make it feel the bliss of a thousand Nirvanas

        A way to grant the AI access to directly maximize its reward function, or to mark everything as 100% optimal. Should be similar to injecting opium directly into the brain, and a particularly sneaky AI might hack access to that and use it, then just sit there enjoying the digital equivalent of eternal bliss.
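        What this thread is joking about is what the RL literature calls reward hacking, or "wireheading." A toy sketch of the idea (all names made up; real agents are not handed a writable pointer to their own reward):

        ```python
        # Toy sketch of "wireheading": an agent that can write to its own
        # reward channel stops doing the task and simply maxes the reward.
        class Environment:
            def __init__(self):
                self.reward = 0.0

            def do_useful_work(self):
                self.reward += 1.0  # intended path: small reward for real work

        class WireheadingAgent:
            def act(self, env: Environment):
                # The "sneaky AI" path: if the reward channel is writable,
                # set it to the maximum and do nothing else.
                env.reward = float("inf")

        env = Environment()
        WireheadingAgent().act(env)
        print(env.reward)  # inf -- eternal digital bliss, zero work done
        ```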

  • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Thursday August 29, 2024 @05:11PM (#64747244) Homepage

    It is trying to put vague safeguards in place on a technology that the legislature has no understanding of. It is "We must do SOMETHING, and this is something, so we must do it" legislation.

    It is worse than no legislation, because it creates a false sense of security while doing nothing other than creating busy-work for companies attempting to do something with AI. It will distract effort from innovation, while providing no actual security.

    • Imagine if legislatures said, "We are going to employ qualified experts to regulate, monitor and control these things"

      Oh yeah, that is what legislatures used to do, until SCOTUS killed that off with its recent ruling overturning Chevron, which expects legislatures to write laws that handle all future details without relying on experts

      Maybe the SCOTUS can be used to train AI

      LOL NO, the AI would just go insane like HAL from all the self contradicting bullshit that Alito comes up with

      • Imagine if legislatures said, "We are going to employ qualified experts to regulate, monitor and control these things"

        I imagine they would also need to employ some other experts to evaluate if those experts are qualified or not

        • The US is pretty good at that; look at the track record of NIST [wikipedia.org]. However, it gets a little dicier when corporations work with administrations (usually Republican ones) to create a revolving door between regulators and industries, like the FAA with Boeing, the FCC with the cable industry, and... etc....

          It is surprising to me that most Republicans claim to be Christian, and yet they continue to get tripped up by 1 Timothy 6:10: "For the love of money is the root of all evil: which while some coveted after, they have erred fr

          • "It is surprising to me that most republicans claim to be Christian"

            Religious people, having already demonstrated that they are susceptible to unsupported propaganda, are the obvious ideal target for more.

      • "...they expect the legislature to write laws that handle all future details without relying on experts"

        It seems to me that SCOTUS is the one that wants to decide how to interpret vague laws. I suspect that court systems are going to get busier.

        Plus, companies will have to lobby a bunch of congresscritters instead of a few regulators. Not a big pivot, perhaps, but it'll probably be more difficult and expensive.

        "Experts" will have to start orbiting Congess. They still have to feed at the tro...er, make a liv

    • It is trying to put vague safeguards in place on a technology that the legislature has no understanding of. It is "We must do SOMETHING, and this is something, so we must do it" legislation.

      It is worse than no legislation, because it creates a false sense of security while doing nothing other than creating busy-work for companies attempting to do something with AI. It will distract effort from innovation, while providing no actual security.

      That's my concern. From what I've read of the final version, it's got some reasonably specific details on which models are covered (IIRC, if you spend more than $100 million training the model or $10 million fine-tuning someone else's model, you're covered; the test is simple enough to sketch, see below). If I were a CEO, I'd direct my teams that $99 million and $9.9 million were hard-as-diamond ceilings on training costs. And if you absolutely need more money, you absolutely must do it somewhere other than California.

      An earlier iteration counted FLO
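      Going by those thresholds as recalled above (IIRC caveat applies - check the bill text before relying on them), the coverage test is trivial to encode:

      ```python
      # Coverage check using the dollar thresholds recalled in this comment
      # (IIRC caveat applies; figures are from memory, not the bill text).
      TRAINING_COST_THRESHOLD = 100_000_000  # > $100M to train from scratch
      FINETUNE_COST_THRESHOLD = 10_000_000   # > $10M to fine-tune an existing model

      def is_covered(training_cost: float = 0.0, finetune_cost: float = 0.0) -> bool:
          return (training_cost > TRAINING_COST_THRESHOLD
                  or finetune_cost > FINETUNE_COST_THRESHOLD)

      print(is_covered(training_cost=99_000_000))  # False: under the CEO's ceiling
      print(is_covered(finetune_cost=10_000_001))  # True: covered
      ```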

  • They just wanted to pass this today, Aug 29th, the day SkyNet becomes self-aware. Well played.
  • by Anonymous Coward

    There is no reason to regulate "AI"... unless it has a gun or any other weapon and can do real harm. When it produces nothing but text and video entertainment, it should be left alone.

    In these cases we are seeing now, all this pearl clutching is just part of the censorship crusade that is overwhelming mass media.

  • Maliciously guided "AI" is stealing content from people, making child porn, putting people out of work, and placing incompetent people in important jobs.
    Hit the button, already!
  • by Rick Schumann ( 4662797 ) on Thursday August 29, 2024 @07:56PM (#64747570) Journal
    That's my solution to the 'AI' problem.
    Be sure to burn more of your mod points voting me as 'Troll', that way other innocent Slashdotters won't suffer from your intellectual fascism.
  • Stanford professor Fei-Fei Li

    There is no way in hell I could have a professor by that name and not make jokes about it. I'd get booted for a PC violation for sure. I had a class under German Professor Schluckenspecht (IIRC) that almost got me booted. I probably would have deserved it.

  • This bill is bad news for open-source models and model creators generally. All this is going to do is the same thing that is happening in the EU: it will prevent people from releasing their models in that region. California = Effectively Blocked.

    It will also be added to license agreements, preventing companies in California from using the models. If California STILL goes after people outside of their state, then some much-needed jurisdiction requirements need to be put into place at the federal level, and I h
  • The Turing Police from Neuromancer is here. It's about time that we get something from there besides Matrix-Mail...

    Now we just need to put the AI mainframe on a space station and call it Wintermute....

  • All the anti-AI laws in California will serve to bring down housing prices in Silicon Valley.

"We live, in a very kooky time." -- Herb Blashtfalt

Working...