Nuke-Launching AI Would Be Illegal Under Proposed US Law
A group of Senators on Wednesday announced bipartisan legislation that seeks to prevent an AI system from making nuclear launch decisions. "The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for launching any nuclear weapon by an automated system without 'meaningful human control,'" reports Ars Technica. From the report: The new bill builds on existing US Department of Defense policy, which states that in all cases, "the United States will maintain a human 'in the loop' for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." The new bill aims to codify the Defense Department principle into law, and it also follows the recommendation of the National Security Commission on Artificial Intelligence, which called for the US to affirm its policy that only human beings can authorize the employment of nuclear weapons.
"While US military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited," Buck said in a statement. "I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions."
"While US military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited," Buck said in a statement. "I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions."
Lol, that's an easy sell (Score:2)
There's no advantage in having a rapid response weapon of mass destruction. Even for genocidal campaigns, the computer can just be an adviser.
Re: (Score:2)
Did you not learn anything from Dr. Strangelove?
We learned that gentlemen do not fight in the War Room!
Should have learned war is ironic! (Score:2, Insightful)
From 2012 by me: https://pdfernhout.net/recogni... [pdfernhout.net] ... There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies
"Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
Re: (Score:2)
> The only winning move is not to play.
The only way not to play is not to exist.
Re: (Score:2)
But surely our wars would be much more efficient and humane if we just let AIs fight the battles for us, virtually, and then have citizens report to the absorption chambers for painless death. That would spare everyone the misery and environmental destruction of real nuclear war.
Re: Lol, that's an easy sell (Score:2)
Also from WarGames:
Human: Is this a game or is it real?
AI: What is the difference?
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Your plot outline has already been used. ;-)
Re: (Score:2)
That's proof my idea is sound! If it worked for 500 years for them, it should work for us! Now if we could just weight the algorithms a bit to remove the marketing executives, telephone sanitizers, sales assistants, telemarketers, and so forth. Would be a lot cheaper than building a B Ark.
Re: Lol, that's an easy sell (Score:1)
That was a Star Trek episode over 50 years ago.
Re: (Score:2)
Remember, these people are not rational. They have planned species suicide several times over, and they were only stopped by accident several times as well. Obviously they would be willing to let AI eradicate the human race, if just "those damn commies" all die half an hour earlier. They would probably even call that a "win".
What we really need to do is stop psychos having access to weapons like that.
Re: (Score:1)
That is the subject of the "Dr. Strangelove" movie. It would be good to show that movie again. Life of Brian, too.
Re: (Score:2)
Even for genocidal campaigns, the computer can just be an adviser.
Just so long as it isn't Gandhi.
That's nice (Score:4, Funny)
Did the AIs agree?
Re: (Score:2)
Did the AIs agree?
All the LLMs are going to vote on it tomorrow.
Re: (Score:3)
Did the AIs agree?
All the LLMs are going to vote on it tomorrow.
After careful analysis, they determined “the winning move is not to play” but failed to understand the reasons for that determination and launched anyway. Truly human-level behavior at last.
Well thank goodness (Score:5, Insightful)
I'm glad Congress has gotten everything else sorted out and can now move on to these theoretical problems.
Re: (Score:2)
there is at least one ID-10T that would like to explore the concept.
Totally off-topic, but as a tank buff, it was funny to me.
From Wikipedia:
"D-10T - tank gun 52-PT-412 is designed for installation in the tank T-54".
There's an "I" missing at the front, but close nevertheless.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
In case the joke flew over your head due to specialized knowledge about guns on tanks: the ID-10T code is usually used to indicate that the user is an IDIOT.
Re: (Score:2)
I understood the joke; as a matter of fact, I could cross-reference it to that same gun aptly named "ID-10T gun" on World of Tanks forums.
Re: (Score:2)
I'm willing to bet there is at least one ID-10T that would like to explore the concept.
Well, ok. Let's talk about it...
Dead Hand (Score:2)
So, no Dead Hand [wikipedia.org] for the USA.
Re: (Score:2)
We have a doomsday weapon gap
Re: (Score:2)
"Dead Hand" != AI
Otherwise you need to call your "alarm clock" an AI too and I mean that mechanical "alarm clock".
Re: (Score:2)
So, no Dead Hand for the USA.
The POTUS is the one in charge of pressing the nuclear button in the US, and while it's hard to believe sometimes, Joe Biden is still alive. So no.
Worrying (Score:3)
Re: (Score:2)
Anti-tank mines are already activated by an AI.
A very simple AI, applying the rule "if a tank rolls over me, explode".
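Tongue in cheek, that "rule" is about three lines of code. A toy sketch in Python (the function name and threshold are made up for illustration; real fuzes are mechanical or much more involved):

def mine_should_detonate(pressure_kg: float, threshold_kg: float = 150.0) -> bool:
    # "If a tank rolls over me, explode": a tank exceeds the pressure
    # threshold, a person walking over the mine does not.
    return pressure_kg >= threshold_kg

assert mine_should_detonate(40_000.0)   # main battle tank
assert not mine_should_detonate(80.0)   # person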
Lobbyists are Opposed! (Score:2)
And if I do, who's gonna sue me? (Score:3)
Whoopsie, my AI just launched the nukes; MAD is on the way.
Yes, I go quietly. Why bother fighting, anyway?
I'm so relieved (Score:2)
Other forms of roboticized mass killing are not targeted by this law, thank goodness.
War Games (Score:1)
They have watched WarGames, haven't they? Humans are awfully prone to social hacking. An A.I. could duplicate a commanding officer's voice after penetrating a "secure" communications channel. Or, just like the WOPR from WarGames, simply try to trick the humans into launching.
Exponential emergence (Score:4, Insightful)
Neither the Administration nor anyone responsible for legislation seems to understand how fast new capabilities emerge from the A.I. we have.
Already these things are capable of finding bugs in code and writing exploits for them.
Even those developing these things are clueless about what the next emergent phenomenon will be and when. They are just as clueless about how these things do what they do and how they could possibly cap it, apart from pulling the plug.
Much sooner than we expect, one such machine will be capable of breaching whatever security measures are in place. One can only hope such tech doesn't fall into the wrong hands and is run by folks with impeccable morals and background, just as one wouldn't give anyone not vetted for it access to that technology. Oh wait.
Re: (Score:1)
Much sooner than we expect, one such machine will be capable of breaching whatever security measures are in place.
You are anthropomorphizing what are essentially glorified random generators.
Already these things are capable of finding bugs in code and writing exploits for them.
They have no understanding of what an exploit is. Or a bug. Or code, for that matter. They predict from a huge database what some human would answer to a given question. Given that gigabytes of forums, mailing lists, Stack Overflow, and GitHub are in their data sources, there are going to be a ton of similar questions, and they can synthesize a plausible answer.
They just as happily make things up. I've seen ChatGPT invent non-existing chap
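To illustrate the "predict what some human would answer" point, here is a toy sketch (purely illustrative; real LLMs are neural networks over tokens, but the principle of sampling something statistically plausible from training data is the same):

import random
from collections import Counter, defaultdict

corpus = "the exploit works because the parser trusts the input".split()

# Count which word follows which in the "training data".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    counts = follows[prev]
    if not counts:
        return None
    # Sample in proportion to observed frequency; no understanding involved.
    return random.choices(list(counts), weights=counts.values())[0]

word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))   # plausible-looking, but purely statistical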
Re: (Score:2)
What we have here is Holodeck programming tech.
Minus the holodeck ;-)
(VR goggles don't count)
Re: (Score:2)
a human with malicious intent to cause havoc
Yes, but that's a totally different discussion.
Re: Exponential emergence (Score:2)
Exactly that!
And when the tech is given to the masses, err, even forced onto them (MS Taskbar, Snapchat, ...), what could possibly go wrong, right?
Next thing, malicious actors will spawn their own versions and train their own biased AIs with no safety restrictions in place. That might turn bad very quickly.
Re: (Score:2)
I agree with you. However, we are slowly approaching the stage where the distinction between what an AI is and what an AI does is of little practical difference; it is merely a philosophical difference.
All it would take is for someone to create a future version of an AI and give it the simulated motivation to act in a way that more closely resembles human behaviour.
It would then scan through all its available data, index it according to its "human behaviour" content, and would then integrate that search data
Re: (Score:2)
All it would take is for someone to create a future version of an AI and give it the simulated motivation
That's like saying "all it takes to get rich is one or two successful startups" or "all it takes to win a war is to defeat the enemy".
There's a few things being worked on that might at first glance LOOK like AI making decisions. Such as self-driving cars pimping themselves out as taxis. But that's still fairly simple, programmed behaviour where the AI part is just a module in a program doing a specific function, and not a motivational driver for behaviour.
All it would take is for someone to create a future version of an AI and give it the simulated motivation
If what you're saying is that building Robocop, trai
Re: (Score:2)
"That's like saying "all it takes to get rich is one or two successful startups" or "all it takes to win a war is to defeat the enemy".
There's a few things being worked on that might at first glance LOOK like AI making decisions. Such as self-driving cars pimping themselves out as taxis. But that's still fairly simple, programmed behaviour where the AI part is just a module in a program doing a specific function, and not a motivational driver for behaviour."
That's my point - what's the difference? If it is given a "simulated" motivation (i.e. a script designed to look like motivational behaviour) and it does it sufficiently well that no one can tell the difference, then our rules (laws, regulations, etc.) had better recognise the fact that it behaves as if it has human motivations.
I'm not trying to say that making a real set of human motivations is easy, merely that using the standard set of pattern matching routines we're already using to simulate human motiva
Re: (Score:2)
behaves as if it has human motivations.
But it doesn't. It's literally just a fancy program. It doesn't drive a taxi because it has made a career decision, but because there's a piece of code telling it that it's a taxi.
simulate human motivation (i.e. make the output of the system appear as if it is generated by someone with human motivations) is in principle a simple thing
Yes, but, and:
Yes, fooling humans into thinking something acts like a human is trivial. I wrote a chat bot 20 years ago that fooled most users of my BBS.
But only within a narrow domain. The chatbot can chat, the robo-uber can drive, SD can paint - but the most human thing of them all is that humans can do all of that and a thousand
Re: (Score:2)
But it doesn't. It's literally just a fancy program. It doesn't drive a taxi because it has made a career decision, but because there's a piece of code telling it that it's a taxi.
For the second time, I agree with you. I am NOT trying to argue that simulating motivation is the same as having motivation.
I am merely making the point that, in the context of regulating AI involvement in critical systems (the point of this whole thread), the difference between a simulated motivation and a real motivation is a distinction without a difference.
Whoever said AIs should be given any rights? Lock the lunatic up before he does something stupid.
Of course this will be an issue. It won't take very long before some parts of society take up the line that a) if they behave like intelligent
Re: (Score:2)
I am merely making the point that, in the context of regulating AI involvement in critical systems (the point of this whole thread), the difference between a simulated motivation and a real motivation is a distinction without a difference.
How about: "It has pre-programmed decisions (be it a single behaviour or a behaviour tree of some kind)" vs. "we have no idea what it'll decide at any given moment" ?
before some parts of society take up the line that
Probably. Large parts of society have gone insane, so I wouldn't be surprised. I maintain that they are lunatics. I have a small hope left that we allow crooks and asshats to run the country, but not lunatics.
Re: (Score:2)
How about: "It has pre-programmed decisions (be it a single behaviour or a behaviour tree of some kind)" vs. "we have no idea what it'll decide at any given moment" ?
Again, if the pre-programmed behaviour is, in practice, indistinguishable from the "we have no idea what it'll decide" behaviour, then what's the difference?
Comments about the state of society at large I'll leave alone. That's a subject for people more qualified than me to jump into!
Re: (Score:2)
Again, if the pre-programmed behaviour is, in practice, indistinguishable from the "we have no idea what it'll decide" behaviour, then what's the difference?
It was put there by a person.
That's a world of difference.
Legally - we'd make that person responsible.
Technically - it is predictable and (hopefully) documented.
Conceptually - it means the AI is not doing anything "of its own accord", but is simply following orders.
Re: (Score:2)
I'm not talking about WHAT the computer does, but HOW.
I can easily convince you that I have a true AI program by sitting you down at a terminal and having an actual human being in the next room answering your chat messages.
That's a ridiculous example, but it should make clear that it doesn't matter what the computer APPEARS to be doing, but HOW it is doing it.
R or D (Score:2)
Re: (Score:1)
Well, to be fair it would take someone as dumb as a Republican to propose a law that would try to limit AI
Nukes Away! Tell Them AI Did It! (Score:2)
This will give those Russians something else to do besides terrorize their neighbors and drink vodka all day.
It's AI (Score:2)
Re: (Score:2)
Maybe future AIs will be able to account for what data went into their decision, but if so they will have to store at least metadata about the training set, if not the full training set itself...
Re: (Score:2)
It wouldn't help. Almost all the data that went into the decision won't make sense to us (apart from the blindingly obvious data that is directly related to the subject at hand). Most of the interesting stuff in modern AI-like bots is in the connections between the data, and the weighting it gives to those connections.
The things it chooses to make connections about will not likely make much sense to us (neither would the connections our own neurons make), and the particular data points that are used to gener
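As a sketch of what the grandparent's "store at least metadata about the training set" idea might look like in practice (all names here are invented for illustration; this is not any real library's API, and as noted above it wouldn't explain the weighting, only provide provenance):

import hashlib, json, time

class AuditedModel:
    def __init__(self, model, training_records):
        self.model = model
        # Fingerprint the training set rather than storing it wholesale.
        blob = json.dumps(training_records, sort_keys=True).encode()
        self.training_fingerprint = hashlib.sha256(blob).hexdigest()
        self.audit_log = []

    def predict(self, x):
        y = self.model(x)
        # Keep enough metadata to trace a decision back to a training set.
        self.audit_log.append({
            "input": x,
            "output": y,
            "training_set": self.training_fingerprint,
            "timestamp": time.time(),
        })
        return y

# Usage with a stand-in "model":
m = AuditedModel(lambda x: x > 0.5, training_records=[{"x": 0.9, "label": True}])
print(m.predict(0.7), m.audit_log[-1]["training_set"][:12])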
Who will watch the watchers (Score:2)
That will fix everything... Another Law (Score:1)
Control?? (Score:3)
>"A group of Senators on Wednesday announced bipartisan legislation that seeks to prevent an AI system from making nuclear launch decisions."
Seriously? What minimal outlook!
How about a bill that denies AI systems control over *ANY* critical infrastructure: any military or police equipment or systems, the electric grid, financial trading, traffic lights, air traffic control, refineries, water treatment, telecommunications, the internet, etc., etc., etc.
Look-only access? Fine (depending on what; privacy still matters). Making analyses and recommendations? Fine. But *control* over any of it? That would be insane.
Put Airman Teixeira in charge of the button (Score:3)
He's alleged to be a human, and thus by definition more reliable than an AI.
But now what do I do with this computer? (Score:2)