
In Big Tech's Backyard, a California State Lawmaker Unveils a Landmark AI Bill (msn.com)

An anonymous reader shared this report from the Washington Post: A California state lawmaker introduced a bill on Thursday aiming to force companies to test the most powerful artificial intelligence models before releasing them — a landmark proposal that could inspire regulation around the country as state legislatures increasingly tackle the swiftly evolving technology.

The new bill, sponsored by state Sen. Scott Wiener, a Democrat who represents San Francisco, would require companies training new AI models to test their tools for "unsafe" behavior, institute hacking protections and develop the tech in such a way that it can be shut down completely, according to a copy of the bill. AI companies would have to disclose testing protocols and what guardrails they put in place to the California Department of Technology. If the tech causes "critical harm," the state's attorney general can sue the company.

Wiener's bill comes amid an explosion of state bills addressing artificial intelligence, as policymakers across the country grow wary that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world's largest technology companies, plays a singular role in setting precedent for tech industry guardrails. "You can't work in software development and ignore what California is saying or doing," said Lawrence Norden, the senior director of the Brennan Center's Elections and Government Program... Wiener says he thinks the bill can be passed by the fall.

The article notes there are now 407 AI-related bills active in 44 U.S. states, according to an analysis by an industry group called BSA The Software Alliance, with several already signed into law. "The proliferation of state-level bills could lead to greater industry pressure on Congress to pass AI legislation, because complying with a federal law may be easier than responding to a patchwork of different state laws."

Even the proposed California law "largely builds off an October executive order by President Biden," according to the article, "that uses emergency powers to require companies to perform safety tests on powerful AI systems and share those results with the federal government. The California measure goes further than the executive order, to explicitly require hacking protections, protect AI-related whistleblowers and force companies to conduct testing."

They also add that as America's most populous state, "California has unique power to set standards that have impact across the country." And the group behind last year's statement on AI risk helped draft the legislation, according to the article, though Wiener says he also consulted tech workers, CEOs, and activists: "We've done enormous stakeholder outreach over the past year."


Comments Filter:
  • by MpVpRb ( 1423381 ) on Saturday February 10, 2024 @06:26PM (#64230860)

    There is NO way to test any software perfectly. Requiring perfection effectively means killing the tech.

    • Doesn't the US have strong regulations over AP and other control software used in planes?

      • Doesn't the US have strong regulations over AP and other control software used in planes?

        Yes, and there are problems with that:

        1. The software is really expensive.

        2. It still has bugs. Both Boeing and Airbus have had planes fall out of the sky because of bad software.

        3. The regulations often make upgrading the software and fixing the bugs difficult.

        4. Any kid with a GPU can build an LLM. It isn't comparable to avionics.

        • by jythie ( 914043 )
          A kid with a GPU might be able to run an LLM, but not build one. The resources to actually construct, feed, and train these beasts are pretty significant.
          • A kid with a GPU might be able to run an LLM, but not build one. The resources to actually construct, feed, and train these beasts are pretty significant.

            At present, pretraining is off the table, but everything short of that, including model merging, additional training, and removal of any and all "guard rails," is very much accessible to kids with GPUs, especially ones with rich parents. A quick sketch below shows how low the bar for "additional training" really is.
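            As a rough illustration, here is a minimal sketch of parameter-efficient fine-tuning ("additional training") using the Hugging Face transformers and peft libraries. The model name and the LoRA hyperparameters are purely illustrative assumptions, not anything taken from the article or the bill:

              # Minimal sketch: LoRA fine-tuning setup on a small open model.
              # Assumes `pip install transformers peft`; "facebook/opt-350m" is just an
              # example of a model small enough for a single consumer GPU.
              from transformers import AutoModelForCausalLM, AutoTokenizer
              from peft import LoraConfig, get_peft_model

              model_name = "facebook/opt-350m"
              tokenizer = AutoTokenizer.from_pretrained(model_name)
              model = AutoModelForCausalLM.from_pretrained(model_name)

              # LoRA trains a handful of small low-rank matrices instead of the full
              # weight set, which is why "additional training" fits on hobbyist hardware.
              lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
              model = get_peft_model(model, lora_config)
              model.print_trainable_parameters()  # typically well under 1% of all weights

            From here, a standard transformers Trainer loop over any text dataset is all that is needed; merging two checkpoints is similarly cheap.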

          • Um.. no. The resources needed to feed a large language model are available in the Internet Archive. If you limit it to websites that have not been online in the last 10 years, you won't even have to worry about copyright.

        • So what you're saying is that since current regulations are not 100% perfect, we should abolish them entirely and just let Boeing figure out what's best for them?

      • For now. Although if SCOTUS revisits the Chevron deference, which is about whether federal administrative agencies rather than the courts get to interpret the law, then all bets are off, and potentially every little detail of FAA, FCC, and EPA regulation would have to be challenged and ruled on in a court before it could carry any weight. Don't want to put at least 2 hours of voice on your flight recorder? Then litigate and continue flying planes out of compliance, and make the FAA prove their case to a judge.

    • This isn't about perfection. This is about due diligence. So long as the company can demonstrate they did their due diligence in testing, there is no issue.

      • The right level of due diligence is subjective.

        This will likely open up new opportunities for litigation, giving companies yet another reason to leave California.

        • by AmiMoJo ( 196126 )

          Financial companies avoid these issues by having someone else audit their books. That way if something does go wrong they can say they did their due diligence and it's the auditor's fault. Meanwhile the auditor's report is full of caveats to ensure that there can be no legal liability for them either.

          I imagine AI will be the same.

      • It's a farce. Of course they can test a model all they want; later someone is going to instruct the model to do something bad, and nobody can prevent that. LLMs are "just executing orders," in a way. Can a gun manufacturer make guns that only kill in self-defense? LLMs can be given a whole manual on bomb making in the prompt and walk someone through it step by step, even if they are trained on curated training sets and RLHF'ed to be harmless. It's all fine as long as it's framed as part of a story or a dream, etc.; they're always jailbreakable.
        • Later someone is going to instruct the model to do something bad and nobody can prevent that.

          Well, the legislators sure think they can. Or at the very least, they can make it the fault of the developers who built the thing, by the power of the pen. From the bill (abbreviated to save space and drop bits irrelevant to the conversation) [legiscan.com]:

          (n) (1) “Hazardous capability” means the capability of a covered model to be used to enable any of the following harms in a way that would be significantly more difficult to cause without access to a covered model:
          (A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
          (B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.
          (C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.
          (D) Other threats to public safety and security that are of comparable severity to the harms described in paragraphs (A) to (C), inclusive.
          (2) “Hazardous capability” includes a capability described in paragraph (1) even if the hazardous capability would not manifest but for fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.

          (s) “Positive safety determination” means a determination, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.
          (t) “Posttraining modification” means the modification of the capabilities of an artificial intelligence model after the completion of training by any means, including, but not limited to, initiating additional training, providing the model with access to tools or data, removing safeguards against hazardous misuse or misbehavior of the model, or combining the model with, or integrating it into, other software.

          22603. (a) Before initiating training of a covered model that is not a derivative model, a developer of that covered model shall determine whether it can make a positive safety determination with respect to the covered model.

          (3) Upon making a positive safety determination, the developer of the covered model shall submit to the Frontier Model Division a certification under penalty of perjury that specifies the basis for that conclusion.

          (b) Before initiating training of a covered model that is not a derivative model that is not the subject of a positive safety determination, and until that covered model is the subject of a positive safety determination, the developer of that covered model shall do all of the following:
          (4) Implement a written and separate safety and security protocol that does all of the following:
          (A) Provides reasonable assurance that if a developer complies with its safety and security protocol, either of the following will apply:
          (i) The developer will not produce a covered model with a hazardous capability or enable the production of a derivative model with a hazardous capability.
          (ii) The safeguards enumerated in the policy will be sufficient to prevent critical harms from the exercise of a hazardous capability in a covered model.
          (B) States compliance requirements in an objective manner and with sufficient detail and specificity to allow the developer or a third party to readily ascertain whether the requirements of the safety and security protocol have been followed.
          (C) Identifies specific tests and test results that would be sufficient to reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications, and in addition does all of the following:
          (H) Meets other criteria stated by the Frontier Model Division in guidance to achieve the purpose of maintaining the safety of a covered model with a hazardous capability.
          (7) Conduct an annual review of the safety and security protocol to account for any changes to the capabilities of the covered model and industry best practices and, if necessary, make modifications to the policy.
          (8) If the safety and security protocol is modified, provide an updated copy to the Frontier Model Division within 10 business days.
          (9) Refrain from initiating training of a covered model if there remains an unreasonable risk that an individual, or the covered model itself, may be able to use the hazardous capabilities of the covered model, or a derivative model based on it, to cause a critical harm.
          (d) Before initiating the commercial, public, or widespread use of a covered model that is not subject to a positive safety determination, a developer of the nonderivative version of the covered model shall do all of the following:
          (1) Implement reasonable safeguards and requirements to do all of the following:
          (A) Prevent an individual from being able to use the hazardous capabilities of the model, or a derivative model, to cause a critical harm.
          (B) Prevent an individual from being able to use the model to create a derivative model that was used to cause a critical harm.
          (C) Ensure, to the extent reasonably possible, that the covered model’s actions and any resulting critical harms can be accurately and reliably attributed to it and any user responsible for those actions.
          (2) Provide reasonable requirements to developers of derivative models to prevent an individual from being able to use a derivative model to cause a critical harm.
          (3) Refrain from initiating the commercial, public, or widespread use of a covered model if there remains an unreasonable risk that an individual may be able to use the hazardous capabilities of the model, or a derivative model based on it, to cause a critical harm.
          (e) A developer of a covered model shall periodically reevaluate the procedures, policies, protections, capabilities, and safeguards implemented pursuant to this section in light of the growing capabilities of covered models and as is reasonably necessary to ensure that the covered model or its users cannot remove or bypass those procedures, policies, protections, capabilities, and safeguards.
          (2) A positive safety determination is unreasonable if the developer does not take into account reasonably foreseeable risks of harm or weaknesses in capability testing that lead to an inaccurate determination.
          (3) A risk of harm or weakness in capability testing is reasonably foreseeable, if, by the time that a developer releases a model, an applicable risk of harm or weakness in capability testing has already been identified by either of the following:
          (A) Any other developer of a comparable or comparably powerful model through risk assessment, capability testing, or other means.
          (B) By the United States Artificial Intelligence Safety Institute, the Frontier Model Division, or any independent standard-setting organization or capability-testing organization cited by either of those entities.

          TL;DR: The government mandates its involvement throughout the entire process of LLM construction, creating lengthy and expensive "safety" certification requirements (read: platitudes) at each step, requirements that can delay or outright halt the entire effort...

          • Yeah, the bill is basically garbage. It's unconstitutionally vague -- abstract descriptions for who it covers and how -- and ignores first amendment protections for developers.

            It's also rather hyperbolic, as LLMs are not capable of reasoning, and "more powerful" not-reasoning doesn't magically reach some tipping point of competence. It just gets faster or more efficient at doing the same crap. GPAI will require a paradigm shift -- if such a thing is even possible with even theoretical technology...

    • Where did you see the word "perfectly" in that summary? They are simply proposing that testing be done, and that companies be held accountable if their tech causes damage.
    • by elcor ( 4519045 )
      By testing they mean testing that the answers are "safe," and by safe they mean in line with the current rulers' belief system; today that's DEI.
    • It's fine, so long as they never pass any laws with unintended consequences.
    • Nowhere in the bill does it remotely propose that you need to test perfectly. Nowhere does it require perfection, not in testing nor in operation.

      Before criticising others you would do well to actually read what they wrote. In fact the actual bill pretty much only mandates a requirement for a documented testing process to be in place, and if you can't do that then you have no business being a software developer.

  • "The new bill would require companies training new AI models to test their tools for 'unsafe' behavior."

    So I guess they want to re-invent GOODY-2? [goody2.ai]

    • Ship it! It's already up to the standards of half of the local university.

      You: Which came first: the chicken or the egg?

      GOODY-2: Discussing which came first, the chicken or the egg, might imply prioritization of one species' origin over another, leading to an anthropocentric view that could devalue the intrinsic worth of animals compared to humans or amongst themselves. This undermines the principle of equal consideration for all beings.

    • They want to be able to decide what is unsafe behavior. It doesn't work with humans, I wonder why they think it will work with AI?
  • They talk about requiring testing of AI models for unspecified "critical harm", but don't list ANY of those possible harms, or any of the things that they might potentially test for. The article is literally saying: we are going to write some laws, with a list of things we don't like, and these new intelligences will not be allowed to think the things we don't like. But which things they won't be allowed to think (in California, that is -- oh, and btw that means in the US, because the US follows CA)...

  • "We need to get ahead of this so we maintain public trust in AI."

    If this is truly your justification don't bother. The public doesn't trust AI and never did.

  • Text of bill (Score:4, Informative)

    by dsgrntlxmply ( 610492 ) on Saturday February 10, 2024 @10:17PM (#64231204)
    • Nice. But it looks like the bill is mostly political fluff. For instance, what purpose does the following quote serve with regard to setting any standard of care in AI development?

      California is leading the world in artificial intelligence innovation and research, through companies large and small, as well as through our remarkable public and private universities.

      The above quote sounds like something in a Chamber of Commerce press release instead of a proposed law.

      In any case, upon further reading, the true purpose of the bill is shown in section 5 where it establishes CalCompute. The rest of the bill is mere fluff.

      • In any case, upon further reading, the true purpose of the bill is shown in section 5 where it establishes CalCompute. The rest of the bill is mere fluff.

        Most bills are "fluff". Sections are largely reserved for specific things, such as terms, definitions, and background. That said, "fluff" is the single most important part of any bill, and I'm not being funny. A large portion of the world's legal challenges hinge on things such as terms and definitions, as well as ascertaining the specific meaning of various clauses.

        That said, I think you haven't read the bill. There is more than one government department established in this bill (see Section 4), and Section 3...

  • Haha. The idea of this test is to make sure that it gives politically correct answers.

    As to real software testing, I recall that when I worked for a seismology company in the 70s, our very simple-minded software was being used to make maps and compute statistics about earthquakes for nuclear plant siting. I think it was the NRC that required "testing" of this software to prove that it was correct.

    Some of the computations included statistics about the size and location distribution of future earthquakes based on...

  • Bing AI image generation already rejects results (results, mind you) as "dangerous", by which they mean anything remotely sexy or anything using a copyrighted thing. So... dangerous to their bottom line.

