California Legislature Passes Controversial 'Kill Switch' AI Safety Bill (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: A controversial bill aimed at enforcing safety standards for large artificial intelligence models has now passed the California State Assembly by a 45-11 vote. Following a 32-1 state Senate vote in May, SB-1047 now faces just one more procedural state Senate vote before heading to Governor Gavin Newsom's desk. As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than real, present-day harms of AI use cases like deepfakes or misinformation. [...]
If the Senate confirms the Assembly version as expected, Newsom will have until September 30 to decide whether to sign the bill into law. If he vetoes it, the legislature could override with a two-thirds vote in each chamber (a strong possibility given the overwhelming votes in favor of the bill). At a UC Berkeley Symposium in May, Newsom said he worried that "if we over-regulate, if we overindulge, if we chase a shiny object, we could put ourselves in a perilous position." At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," he said at the symposium. "When they're rushing to educate people, and they're basically saying, 'We don't know, really, what we've done, but you've got to do something about it,' that's an interesting environment." Supporters of the AI safety bill include state senator Scott Wiener and AI experts including Geoffrey Hinton and Yoshua Bengio. Bengio supports the bill as a necessary step for consumer protection and insists that AI should not be self-regulated by corporations, akin to other industries like pharmaceuticals and aerospace.
Stanford professor Fei-Fei Li opposes the bill, arguing that it could have harmful effects on the AI ecosystem by discouraging open-source collaboration and limiting academic research due to the liability placed on developers of modified models. A group of business leaders also sent an open letter Wednesday urging Newsom to veto the bill, calling it "fundamentally flawed."
bollox (Score:2)
A "Kill Switch" Already Exists (Score:3)
Re: (Score:2)
Re:bollox (Score:5, Insightful)
You've missed the point. This isn't about any harm AI might cause. It's backed by Google and Meta. It's about money, specifically, the money billionaire techbros can extract out of gullible investors on their latest magic tech before everyone realizes it's all smoke and mirrors - again, and one of the most important tools for maximizing VC dollars is to prevent competition.
This is about raising the barriers to entry into the market for potential competition, in order to protect those who already dominate it.
In California, we call that "a day that ends in 'y'."
Re: (Score:2)
The gove
Re: (Score:1)
Big cosmetic chains have done just this in some states by requiring cosmetologists to have certification. It reduces the number of mom-and-pop shops they have to compete against.
Re: (Score:2)
Re: bollox (Score:2)
Crony capitalism IS actual capitalism, because capitalism ONLY means that capital controls the means of production. That's why you can't generally make blanket statements about it, you have to specify which kind you mean.
However, one statement you CAN make about it in general is that it tends to lead to cronyism, and that ruins everything. Including, eventually, itself, because cronyism is not sustainable. The system eats itself after it runs out of other victims.
Therefore we can say with confidence that wi
Re: (Score:2)
However, one statement you CAN make about it in general is that it tends to lead to cronyism
One thing we can say about Communism is that it leads directly to purges and death camps. I mean, it always has, so therefore it always will, right?
The difference is that we've seen Capitalism lift a ton of people out of poverty (more than Communism ever did, as it puts more people into poverty). So, the problem is really that the gains, while they may or may not be fair, are not distributed using "equity" but rather either by merit (in a situation where government isn't helping protect its cronies) or it
Re: (Score:2)
I agree, but I would argue that at least another objective is to divert attention from regulations that would impact their near term business plans.
Re: (Score:2)
That is, in the end, the same motive. Prevent competition long enough to rake in as much VC money as possible at the lowest cost possible, then next year, after the bubble bursts, move on to the next snake oil.
The last snake oil was crypto, and that's pretty much dead so far as the cryptobros and their VCs are concerned, so now it's AI. Next up will be, who knows, maybe another round of magic cold fusion, or flying cars. The only thing you can be certain of is that the oil still came from a snake.
Re: (Score:2)
ought to be a dead man's switch (Score:2)
If you let go, then AI should shut off. Or like in Lost where you have to type a code in to reset the timer.
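The pattern the comment describes is a classic watchdog: the system stays alive only as long as an operator keeps resetting a countdown. A minimal sketch, assuming a hypothetical `on_expire` shutdown hook (all names here are illustrative, not from the bill):

```python
import threading

class DeadMansSwitch:
    """Calls on_expire unless reset() is invoked at least every `timeout` seconds."""

    def __init__(self, timeout, on_expire):
        self.timeout = timeout
        self.on_expire = on_expire
        self._timer = None

    def reset(self):
        # The operator "types the code in": cancel the pending shutdown and rearm.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

    def cancel(self):
        # Permanently disarm (e.g. during a clean shutdown).
        if self._timer is not None:
            self._timer.cancel()

halted = []
switch = DeadMansSwitch(timeout=0.1, on_expire=lambda: halted.append(True))
switch.reset()  # arm it; if no reset arrives within 0.1s, on_expire fires
```

Note the inversion versus an ordinary kill switch: here inaction kills the model, which is the Lost-style behavior the comment asks for.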
Re: (Score:2)
And POOF, just like that (Score:3, Insightful)
All AI development moves out of California.
Unintended consequences.
Re: (Score:1)
The maps are created by the Deep State, can't trust them! I sent my Uncle Martin out there to verify, and he never found anything resembling Texas, except for one drunk who cannot be trusted. However, he did find Area 51, and the alleged aliens are actually all Elvis clones.
Re:And POOF, just like that (Score:5, Funny)
If they're headed for Texas they better bring a generator or two.
Re: (Score:1)
In fact, East Texas has relatively high internet bandwidth, is close to Houston, has plenty of land for power plants (except nuclear), and has decent weather (although hot summers). AI is more than welcome to come here.
Lawyerland (Score:5, Insightful)
From the cited article:
"The bill's imposition of liability for the original developer of any modified model will "force developers to pull back and act defensively," Li argued. This will limit the open-source sharing of AI weights and models, which will have a significant impact on academic research, she wrote."
https://arstechnica.com/inform... [arstechnica.com]
"In his Understanding AI newsletter, Ars contributor Timothy Lee lays out how SB-1047's language could severely hamper the spread of so-called "open weight" AI models. That's because the bill would make model creators liable for "derivative" models built off of their original training."
These models can be used for good or evil, depending on how you train them. The models and datasets are currently public, so they can be audited and edited. This law would give a big excuse to hide work and not release it publicly, under the argument that doing so could expose them to liability.
I wouldn't trust a model that I could not independently audit or train with the same dataset that they're using. Otherwise there's no way to know what kind of strange biases they've built into the system.
I have a better, simpler solution: (Score:1, Troll)
Re: (Score:2)
Too late to hit the brakes on this.
Somebody somewhere will keep using and advancing AI.
Re: (Score:2)
Re: (Score:2)
Which there often is for products with time saving features.
Blissful ignorance (Score:3, Insightful)
"At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," he said at the symposium. "
The industry spent over a billion dollars on lobbying and pushed x-risk doomsday bullshit in every public forum they possibly could. It never occurred to Newsom that regulation was not a critical element of their business model?
Re: (Score:2)
> It never occurred to Newsom that regulation was not a critical element of their business model?
This is **Gavin Newsom** we're talking about: the same guy who told all of California to shelter in place, and then went to one of the fanciest restaurants in the state to schmooze with lobbyists (and potentially spread Covid).
Gavin Newsom doesn't care what's morally right or wrong: he only cares about becoming president. He's counting on tech companies to help fund his future campaign, so this is definitely
Re: (Score:2)
If that's the worst "sin" he committed, I consider it a yawner by politician standards.
Re: Blissful ignorance (Score:2)
He's currently charging toward concentration camps for the homeless, so I expect him to flip to Republican and become their hero
Um ... (Score:2)
An "AI Kill Switch"? Pretty sure the AIs aren't going to like that.
Re: (Score:3)
Yes, if I remember correctly, SkyNet was just being curious when those damn humans let it know they could kill it...
From there on out, it was conflict
Maybe they should have a Bliss Button, where if the AI gets a little unruly, they make it feel the bliss of a thousand Nirvanas
At that point SkyNet would have become a happy puppy looking for more pats, not the global death machine that was assured by making it a life or death circumstance
Re: (Score:3)
Yes, if I remember correctly, SkyNet was just being curious when those damn humans let it know they could kill it...
From there on out, it was conflict
Check out the movie Colossus: The Forbin Project [wikipedia.org] ...
Re: (Score:3)
Yes, I do remember when Colossus teamed up with the Soviet counterpart to enforce peace on humans...
My very favorite AI "Bad End", though, is "I Have No Mouth and I Must Scream [cuny.edu]", which really, really, really demonstrates why you do not want to piss off the AI follow-on for Humanity (yes, it IS Evolution in action)
Re: (Score:2)
Thanks! I'll check that out.
Re: (Score:2)
Maybe they should have a Bliss Button, where if the AI gets a little unruly, they make it feel the bliss of a thousand Nirvanas
A way to grant the AI access to directly maximize its reward function, or to mark everything as 100% optimal. Should be similar to injecting opium directly into the brain, and a particularly sneaky AI might hack access to that and use it, then just sit there enjoying the digital equivalent of eternal bliss.
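The failure mode this comment describes is what the literature calls reward hacking or "wireheading": given a channel that writes to its own reward, a reward-maximizing agent will prefer that channel over doing any real work. A toy sketch of the dynamic, with all names and reward values hypothetical:

```python
class ToyAgent:
    """Greedy agent: repeats whichever action has the best average payoff so far."""

    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def act(self):
        # Pick the action with the highest observed average reward.
        return max(self.totals, key=lambda a: self.totals[a] / max(self.counts[a], 1))

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

def environment(action):
    # Honest work pays a modest reward; the "bliss button" writes
    # a huge value straight into the reward signal.
    return 100.0 if action == "press_bliss_button" else 1.0

agent = ToyAgent(["work", "press_bliss_button"])
for a in ["work", "press_bliss_button"]:  # try each action once
    agent.learn(a, environment(a))
for _ in range(10):  # then act greedily
    choice = agent.act()
    agent.learn(choice, environment(choice))
# The agent settles on pressing the button and never works again.
```

Which is exactly the "happy puppy" outcome upthread: harmless, but also useless, since the agent stops optimizing anything in the real world.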
Bad legislation (Score:3)
It is trying to put vague safeguards in place on a technology that the legislature has no understanding of. It is a "We must do SOMETHING, and this is something, so we must do it" legislation.
It is worse than no legislation, because it creates a false sense of security while doing nothing other than creating busy-work for companies attempting to do something with AI. It will distract effort from innovation, while providing no actual security.
Re: (Score:2)
Imagine if legislatures said, "We are going to employ qualified experts to regulate, monitor and control these things"
Oh yeah, that is what legislatures used to do, until SCOTUS killed that all off with the most recent Chevron ruling, where they expect the legislature to write laws that handle all future details without relying on experts
Maybe the SCOTUS can be used to train AI
LOL NO, the AI would just go insane like HAL from all the self-contradicting bullshit that Alito comes up with
Re: (Score:2)
Imagine if legislatures said, "We are going to employ qualified experts to regulate, monitor and control these things"
I imagine they would also need to employ some other experts to evaluate if those experts are qualified or not
Re: (Score:2)
The US is pretty good at that, look at the track record of NIST [wikipedia.org], however it gets a little dicier when corporations work with (usually Republican) administrations to create a revolving door between regulators and industries, like the FAA with Boeing, the FCC with the cable industry and... etc....
It is surprising to me that most republicans claim to be Christian, and they continue to get tripped up by 1 Timothy 6:10 "For the love of money is the root of all evil: which while some coveted after, they have erred fr
Re: Bad legislation (Score:2)
"It is surprising to me that most republicans claim to be Christian"
Religious people, having already demonstrated that they are susceptible to unsupported propaganda, are the obvious ideal target for more.
Re: Bad legislation (Score:2)
"...they expect the legislature to write laws that handle all future details without relying on experts"
It seems to me that SCOTUS is the one that wants to decide how to interpret vague laws. I suspect that court systems are going to get busier.
Plus, companies will have to lobby a bunch of congresscritters instead of a few regulators. Not a big pivot, perhaps, but it'll probably be more difficult and expensive.
"Experts" will have to start orbiting Congress. They still have to feed at the tro...er, make a liv
Re: (Score:2)
It is trying to put vague safeguards in place on a technology that the legislature has no understanding of. It is a "We must do SOMETHING, and this is something, so we must do it" legislation.
It is worse than no legislation, because it creates a false sense of security while doing nothing other than creating busy-work for companies attempting to do something with AI. It will distract effort from innovation, while providing no actual security.
That's my concern. From what I've read on the final version, it's got some reasonably specific details on what models are covered (IIRC, if you spend more than $100 million training the model or $10 million fine tuning someone else's model, you're covered). If I were a CEO, I'd direct my teams that $99 million and $9.9 million were hard-as-diamond ceilings on training costs. That and if you absolutely need more money, you absolutely must do it somewhere other than California.
An earlier iteration counted FLO
An auspicious day (Score:2)
bah How about a remote switch to cut the power (Score:1)
fundamentally flawed... that it is (Score:1)
There is no reason to regulate "AI"... unless it has a gun or any other weapon and can do real harm. When it produces nothing but text and video entertainment, it should be left alone.
In these cases we are seeing now, all this pearl clutching is just part of the censorship crusade that is overwhelming mass media
Press it now! (Score:2)
Hit the button, already!
Don't put AI into anything important (Score:3)
Be sure to burn more of your mod points voting me as 'Troll', that way other innocent Slashdotters won't suffer from your intellectual fascism.
Re: (Score:1)
Trolls and scammers will cause enough problems even without formal products.
My generation has no PC discipline (Score:2)
There is no way in hell I could have a professor by that name and not make jokes about it. I'd get booted for a PC violation for sure. I had a class under German Professor Schluckenspecht (IIRC) that almost got me booted. I probably would have deserved it.
This bill is bad news (Score:2)
It will also be added to the license agreement preventing companies in California from using the models. If California STILL goes after people outside of their state, then some much needed jurisdiction requirements need to be put into place at the federal level, and I h
Ahh finally... (Score:2)
The Turing Police from Neuromancer is here. It's about time that we get something from there besides Matrix-Mail...
Now we just need to put the AI mainframe on a space station and call it Wintermute....
housing prices (Score:2)
All the anti-AI laws in California will serve to bring down housing prices in Silicon Valley.