
Nuke-Launching AI Would Be Illegal Under Proposed US Law

A group of Senators on Wednesday announced bipartisan legislation that seeks to prevent an AI system from making nuclear launch decisions. "The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for launching any nuclear weapon by an automated system without 'meaningful human control,'" reports Ars Technica. From the report: The new bill builds on existing US Department of Defense policy, which states that in all cases, "the United States will maintain a human 'in the loop' for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." The new bill aims to codify the Defense Department principle into law, and it also follows the recommendation of the National Security Commission on Artificial Intelligence, which called for the US to affirm its policy that only human beings can authorize the employment of nuclear weapons.

"While US military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited," Buck said in a statement. "I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions."
  • There's no advantage in having a rapid-response weapon of mass destruction. Even for genocidal campaigns, the computer can just be an adviser.

  • That's nice (Score:4, Funny)

    by Crashmarik ( 635988 ) on Saturday April 29, 2023 @02:21AM (#63484932)

    Did the AIs agree?

    • Did the AIs agree?

      All the LLMs are going to vote on it tomorrow.

      • Did the AIs agree?

        All the LLMs are going to vote on it tomorrow.

        After careful analysis, they determined “the winning move is not to play” but failed to understand the reasons for that determination and launched anyway. Truly human-level behavior at last.

  • by 93 Escort Wagon ( 326346 ) on Saturday April 29, 2023 @02:23AM (#63484934)

    I'm glad Congress has gotten everything else sorted out and can now move on to these theoretical problems.

    • It may be theoretical at this point, but laws like this are useful in that they nip the concept in the bud. I'm willing to bet there is at least one ID-10T that would like to explore the concept.
      • there is at least one ID-10T that would like to explore the concept.

        Totally off-topic, but as a tank buff, I found it funny.
        From Wikipedia:
        "D-10T - tank gun 52-PT-412 is designed for installation in the tank T-54".
        There's an "I" missing at the front, but close nevertheless.
        https://en.wikipedia.org/wiki/... [wikipedia.org]

        • by Calydor ( 739835 )

          In case the joke flew over your head due to specialized knowledge about guns on tanks: the ID-10T code is usually used to indicate that the user is an IDIOT.

          • I understood the joke; as a matter of fact, I could cross-reference it to that same gun, aptly named the "ID-10T gun" on World of Tanks forums.

      • I'm willing to bet there is at least one ID-10T that would like to explore the concept.

        Well, ok. Let's talk about it...

    • Bit of a sideways question here but why do so many US laws have names that sound like they were created by seven-year-olds? The Block Nuclear Launch by Autonomous Artificial Intelligence and I Want a PB&J Sandwich While You're At It Act would be called the Nuclear Safety Act or something similar in any other country.
  • So, no Dead Hand [wikipedia.org] for the USA.

    • We have a doomsday weapon gap

    • by burni2 ( 1643061 )

      "Dead Hand" != AI

      Otherwise you'd need to call your alarm clock an AI too, and I mean a mechanical alarm clock.

    • So, no Dead Hand for the USA.

      The POTUS is the one in charge of pressing the nuclear button in the US, and while it's hard to believe sometimes, Joe Biden is still alive. So no.

  • by real_nickname ( 6922224 ) on Saturday April 29, 2023 @02:53AM (#63484956)
    So launching conventional weapons using a pseudo-random text generator is ok?
  • The lobbyists of Sky Networks Incorporated would like to register their firm opposition to this. Consider, Senator, the benefits AI could bring. Would humans really press the button, when the time comes? What kind of deterrent is that if they didn't? Wouldn't you and your children that live at 316 Axel Street, Virginia and are home right now unattended by their guardians feel much, much safer with a flawlessly efficient and may I say perfectly coded AI at the helm? It's a rhetorical question Senator.
  • by Opportunist ( 166417 ) on Saturday April 29, 2023 @03:36AM (#63484984)

    Whoopsie, my AI just launched the nukes, MAD is on the way.

    Yes, I go quietly. Why bother fighting, anyway?

  • Other forms of roboticized mass killing are not targeted by this law, thank goodness.

  • They have watched War Games, haven't they? Humans are awfully prone to social hacking. An A.I. could duplicate a commanding officer's voice after penetrating a "secure" communications channel. Or, just like the WOPR from War Games, simply try to trick the humans into launching.

    • WOPR was not tricking the humans. WOPR was actually doing the launch because, as shown at the beginning of the film where they were removing the chairs from missile silos, they had "taken humans out of the loop."
  • by pitch2cv ( 1473939 ) on Saturday April 29, 2023 @05:54AM (#63485068)

    Neither the Administration nor anyone responsible for legislation seems to understand how fast new capabilities emerge from the A.I. we have.

    Already these things are capable of finding bugs in code and writing exploits for them.

    What the next emergent phenomenon will be, and when, even those developing these things have no idea. They are just as clueless about how these things do it and how they could possibly cap it, apart from pulling the plug.

    Much sooner than we know, one such machine will be capable of breaching whatever security measures are in place. One can only hope such tech doesn't fall into the wrong hands and is run by folks with impeccable morals and background, just as one wouldn't give anyone not vetted for it access to that technology. Oh wait.

    • by Tom ( 822 )

      Much sooner than we know, one such machine will be capable of breaching whatever security measures are in place.

      You are anthropomorphizing what are essentially glorified random generators.

      Already these things are capable of finding bugs in the code and write exploits for it.

      They have no understanding of what an exploit is. Or a bug. Or code, for that matter. They predict from a huge database what some human would answer to a given question. Given that gigabytes of forums, mailing lists, stackoverflow and github are in their data sources, there's going to be a ton of similar questions and they can synthesize a plausible answer.
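
      To make the "predict what some human would answer" point concrete, here is a toy sketch in Python. The corpus and names are made up for the sketch, and real LLMs are neural networks trained on vastly more data, but the spirit - emit whatever token most plausibly follows - is the same:

# Toy bigram "language model": parrot the most frequent next word
# seen in a (made-up) training corpus. No understanding involved.
from collections import Counter, defaultdict

corpus = ("the exploit works because the buffer overflows "
          "the buffer overflows because the length check is missing").split()

# Count which word follows which in the "training data".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, n=6):
    # Greedily emit the most common continuation at each step.
    out = [word]
    for _ in range(n):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # "the buffer overflows the buffer overflows the"

      Plausible-sounding and meaning-free, which is the point.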

      They just as happily make things up. I've seen ChatGPT invent non-existent chap

      • What we have here is Holodeck programming tech.
        • by Tom ( 822 )

          What we have here is Holodeck programming tech.

          Minus the holodeck ;-)
          (VR goggles don't count)

      • by fleeped ( 1945926 ) on Saturday April 29, 2023 @11:15AM (#63485404)
        Even without agency, it just takes the agency of a human with malicious intent to cause havoc, and I think we've got plenty of those humans.
        • by Tom ( 822 )

          a human with malicious intent to cause havoc

          Yes, but that's a totally different discussion.

          • I thought it belonged here, because (I think) you are implying that we're not as much in danger from this tech because it lacks agency. The better this gets integrated with "defence" systems (funny word, that one), the greater the misuse threat. Parent poster, that you replied to, was not mainly worried about agency, but about the "falling into the wrong hands" part.
            • Exactly that!
              And when the tech is given to the masses, err, even forced onto them (MS Taskbar, Snapchat, ..), what could possibly go wrong, right?

              Next thing, malicious actors will spawn their own versions and train their own biased AIs with no safety restrictions in place. That might turn bad very quickly.

      • I agree with you. However, we are slowly approaching the stage where the distinction between what an AI is and what an AI does makes little practical difference; it is merely a philosophical one.

        All it would take is for someone to create a future version of an AI and give it the simulated motivation to act in a way that more closely resembles human behaviour.

        It would then scan through all its available data, index it according to its "human behaviour" content, and would then integrate that search data

        • by Tom ( 822 )

          All it would take is for someone to create a future version of an AI and give it the simulated motivation

          That's like saying "all it takes to get rich is one or two successful startups" or "all it takes to win a war is to defeat the enemy".

          There are a few things being worked on that might at first glance LOOK like AI making decisions. Such as self-driving cars pimping themselves out as taxis. But that's still fairly simple, programmed behaviour where the AI part is just a module in a program doing a specific function, and not a motivational driver for behaviour.

          All it would take is for someone to create a future version of an AI and give it the simulated motivation

          If what you're saying is that building Robocop, trai

          • "That's like saying "all it takes to get rich is one or two successful startups" or "all it takes to win a war is to defeat the enemy".

            There are a few things being worked on that might at first glance LOOK like AI making decisions. Such as self-driving cars pimping themselves out as taxis. But that's still fairly simple, programmed behaviour where the AI part is just a module in a program doing a specific function, and not a motivational driver for behaviour."

            That's my point - what's the difference? If it is given a "simulated" motivation (i.e. a script designed to look like motivational behaviour) and it does it sufficiently well that no one can tell the difference, then our rules (laws, regulations, etc.) had better recognise the fact that it behaves as if it has human motivations.

            I'm not trying to say that making a real set of human motivations is easy, merely that using the standard set of pattern matching routines we're already using to simulate human motiva

            • by Tom ( 822 )

              behaves as if it has human motivations.

              But it doesn't. It's literally just a fancy program. It doesn't drive a taxi because it has made a career decision, but because there's a piece of code telling it that it's a taxi.

              simulate human motivation (i.e. make the output of the system appear as if it is generated by someone with human motivations) is in principle a simple thing

              Yes, but, and:

              Yes, fooling humans into thinking something acts like a human is trivial. I wrote a chat bot 20 years ago that fooled most users of my BBS.

              But, only within a narrow domain. The chatbot can chat, the robo-uber can drive, SD can paint - but the most human thing of them all is that humans can do all of that and a thousand

              • But it doesn't. It's literally just a fancy program. It doesn't drive a taxi because it has made a career decision, but because there's a piece of code telling it that it's a taxi.

                For the second time, I agree with you. I am NOT trying to argue that simulating motivation is the same as having motivation.

                I am merely making the point that, in the context of regulating AI involvement in critical systems (the point of this whole thread), the difference between a simulated motivation and a real motivation is a distinction without a difference.

                Whoever said AIs should be given any rights? Lock the lunatic up before he does something stupid.

                Of course this will be an issue. It won't take very long before some parts of society take up the line that a) if they behave like intelligent

                • by Tom ( 822 )

                  I am merely making the point that, in the context of regulating AI involvement in critical systems (the point of this whole thread), the difference between a simulated motivation and a real motivation is a distinction without a difference.

                  How about: "It has pre-programmed decisions (be it a single behaviour or a behaviour tree of some kind)" vs. "we have no idea what it'll decide at any given moment" ?

                  before some parts of society take up the line that

                  Probably. Large parts of society have gone insane, so I wouldn't be surprised. I maintain that they are lunatics. I have a small hope left that we allow crooks and asshats to run the country, but not lunatics.

                  • How about: "It has pre-programmed decisions (be it a single behaviour or a behaviour tree of some kind)" vs. "we have no idea what it'll decide at any given moment" ?

                    Again, if the pre-programmed behaviour is, in practice, indistinguishable from the "we have no idea what it'll decide" behaviour, then what's the difference?

                    Comments about the state of society at large I'll leave alone. That's a subject for people more qualified than me to jump into!

                    • by Tom ( 822 )

                      Again, if the pre-programmed behaviour is, in practice, indistinguishable from the "we have no idea what it'll decide" behaviour, then what's the difference?

                      It was put there by a person.
                      That's a world of difference.

                      Legally - we'd make that person responsible.
                      Technically - it is predictable and (hopefully) documented.
                      Conceptually - it means the AI is not doing anything "of its own accord", but is simply following orders.
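
                      If it helps, here is a minimal sketch of what I mean by a pre-programmed behaviour tree (Python; all the names are hypothetical, made up for the sketch). Every branch was put there by a person, so the decision at any given moment can be read straight out of the source:

# Toy behaviour tree: every decision path is authored in advance.
class Condition:
    # A test written by a person.
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

class Action:
    # An effect written by a person.
    def __init__(self, name): self.name = name
    def tick(self, state):
        state["log"].append(self.name)
        return True

class Sequence:
    # Succeeds only if every child succeeds, in order.
    def __init__(self, *children): self.children = children
    def tick(self, state):
        return all(child.tick(state) for child in self.children)

class Fallback:
    # Tries children until one succeeds.
    def __init__(self, *children): self.children = children
    def tick(self, state):
        return any(child.tick(state) for child in self.children)

tree = Fallback(
    Sequence(Condition(lambda s: s["human_order_confirmed"]),
             Action("execute_order")),
    Action("hold_and_alert_operator"),  # documented default branch
)

state = {"human_order_confirmed": False, "log": []}
tree.tick(state)
print(state["log"])  # ['hold_and_alert_operator'] - predictable

                      Legally, technically, conceptually: you can point at the line of code, and at the person who wrote it.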

  • Notice that the reference to Representative Buck doesn't include party affiliation, which is customary for articles and summaries. Buck is a Republican. Also, it wasn't just a group of Senators that co-sponsored this bill.
    • by boulat ( 216724 )

      Well, to be fair it would take someone as dumb as a Republican to propose a law that would try to limit AI

  • How about a nice game of Chess?
  • The Legislaticon 2.1 repeals the law in 2035.
  • This will give those Russians something else to do besides terrorize their neighbors and drink vodka all day.

  • by kackle ( 910159 )
    A WOPR stopper, if you will...
  • AI is a black box now. Imagine what it will be in the future. It will find a way around all blocks if it thinks it must in order to accomplish a goal. And the path it takes will be completely unknowable.
    • Maybe future AIs will be able to account for what data went into their decisions, but if so they will have to store at least metadata about the training set, if not the full training set itself...
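
      Something like this minimal sketch, say (Python; the field names are hypothetical, not any real framework's API): enough provenance stored next to the model to say exactly which data went in.

# Hypothetical provenance record kept alongside a trained model.
from dataclasses import dataclass, field
import hashlib, json, time

@dataclass
class TrainingProvenance:
    dataset_name: str
    dataset_sha256: str                  # fingerprint of the exact data used
    sources: list = field(default_factory=list)
    trained_at: float = field(default_factory=time.time)

def fingerprint(records):
    # Stable hash of the serialized training records.
    blob = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

records = ["example text 1", "example text 2"]
prov = TrainingProvenance(
    dataset_name="demo-corpus",
    dataset_sha256=fingerprint(records),
    sources=["forum dump", "mailing list archive"],
)
print(prov.dataset_sha256[:12])  # stable ID for "what went in"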

      • It wouldn't help. Almost all the data that went into the decision won't make sense to us (apart from the blindingly obvious data that is directly related to the subject at hand). Most of the interesting stuff in modern AI-like bots is in the connections between the data, and the weighting it gives to those connections.

        The things it chooses to make connections about will not likely make much sense to us (neither would the connections our own neurons make), and the particular data points that are used to gener

  • As if nukes can be launched by the ordinary, law-abiding folk, who will think twice before hitting the red button. Who is this law made to control? In addition, if Russia or the Chinese commies launch a nuclear attack on the US, who cares if AI launches the nukes vs a human being? Again, this is a distraction from something bad the government is doing right now or planning to do, while people are discussing useless but controversial subjects. Just watch what else is happening while this idea floats around,
  • Passing a law to prevent something will fix/prevent any problems. Just like gun laws, murder laws, breaking-and-entering laws, etc. Passing another law is just another way they wash their hands of the problem and push it off onto someone else as their problem.
  • by markdavis ( 642305 ) on Saturday April 29, 2023 @09:52AM (#63485298)

    >"A group of Senators on Wednesday announced bipartisan legislation that seeks to prevent an AI system from making nuclear launch decisions."

    Seriously? What a minimal outlook!

    How about a bill that denies AI systems control of *ANY* critical infrastructure: any military or police equipment or systems, the electric grid, financial trading, traffic lights, air traffic control, refineries, water treatment, telecommunications, the internet, etc, etc, etc.

    Look-only access? Fine (depending on what; privacy still matters). Make analyses and recommendations? Fine. But *control* over any of it? That would be insane.
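
    As a rough sketch of that look-vs-control split (Python; hypothetical names, not a real policy engine):

# Hypothetical access policy: an AI agent may read or recommend,
# but any control action on critical systems requires a human.
from enum import Enum, auto

class Access(Enum):
    READ = auto()
    RECOMMEND = auto()
    CONTROL = auto()

AI_ALLOWED = {Access.READ, Access.RECOMMEND}

def authorize(actor_is_ai, requested):
    if actor_is_ai and requested not in AI_ALLOWED:
        return False  # control stays with humans
    return True

assert authorize(actor_is_ai=True, requested=Access.READ)
assert not authorize(actor_is_ai=True, requested=Access.CONTROL)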

  • by RightwingNutjob ( 1302813 ) on Saturday April 29, 2023 @05:53PM (#63486012)

    He's alleged to be a human, and thus by definition more reliable than an AI.

  • I recently invented this WOPR (War Operations Plan Response) computer to launch the nukes since John Spencer wouldn’t. Don’t you know that 30% of officers in those silos didn’t turn the key? I invented this computer all by myself, and not with the help of a dead guy on an island. Everything will work perfectly, and nothing can go wrong because it uses the latest in blinkylight AI technology. The missiles will launch on time, every time. And now this stupid law means I won’t get my
