AI Businesses Government Technology

AI Could Lead To Third World War, Elon Musk Says (theguardian.com) 244

An anonymous reader shares a report: Elon Musk has said again that artificial intelligence could be humanity's greatest existential threat, this time by starting a third world war. The prospect clearly weighs heavily on Musk's mind, since the SpaceX, Tesla and Boring Company chief tweeted at 2.33am Los Angeles time about how AI could lead to the end of the world -- without the need for the singularity. His fears were prompted by a statement from Vladimir Putin that "artificial intelligence is the future, not only for Russia, but for all humankind ... It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world." Hashing out his thoughts in public, Musk clarified that he was not just concerned about the prospect of a world leader starting the war, but also of an overcautious AI deciding "that a [pre-emptive] strike is [the] most probable path to victory." Musk added, "Competition for AI superiority at national level most likely cause of WW3 in my opinion. [...] Govts don't need to follow normal laws. They will obtain AI developed by companies at gunpoint, if necessary."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by SensitiveMale ( 155605 ) on Monday September 04, 2017 @11:04AM (#55136991)

    Elon, shut the fuck up.

    • by Anonymous Coward

      -1 Troll? More like +5 Undeniable Truth.

    • If by "everyone", you mean a minority of a minority of persons ... sure.

    • by TiggertheMad ( 556308 ) on Monday September 04, 2017 @01:19PM (#55137605) Journal
      Elon, shut the fuck up.

      I'd have given you a +1 if I had any right now. I am a little sad that this post got modded down, because while it isn't lending much to the depth of the discussion here on /., it is probably what Elon needs to be told.

      I have made posts about this before, but Elon is talented in running big businesses, not in AI research. At this point he is just approaching crank territory with his hysterical claims about the impending 'bot-pocalypse'. He is using his position of celebrity to promote theories that he really has no understanding of. (His degrees are in physics and business.)

      He is rather similar to another 'beloved' celebrity idiot who likes to talk about things she has no authority to speak on: Remember Jenny McCarthy and Vacations? How is Elon really any different from Jenny? Neither is remotely qualified to speak authoritatively about their respective pet theories.

      So, Elon, if you or any of your friends or flunkies happens on this post by chance, please take this message to heart: STFU and stick to talking about things that you know about.
      • So you're questioning whether vacations cause autism? I bet you every kid with autism went on a vacation sometime before he/she became symptomatic!
    • Re: (Score:3, Interesting)

      by Evtim ( 1022085 )

      Interesting! For once I agreed with him regarding AI and the key phrase was "without singularity".

      Do we deny, however, that "simple machine learning" (which the journalists, corporate PR, and therefore politicians call AI) applied to big data can be used (very successfully) as a social engineering tool? Or as a tool for economic and military advantage? What follows from that? War, pure and simple.

      Such - let's take the social part - powerful tool (how many stories per day about censorship, using big data to come af

    • ^^^ THIS ^^^

      Yes! Thank you!

      I know he's a smart guy with his heart in the right place, but for fuck's sake, I'm sick SICK of the media relaying all the fanciful shit he says, which is either:

      a> completely obvious to mildly intelligent people
      b> hyperbole
      c> fanciful bullshit (see also b really)

      We get it, he's smart and wants to do the right thing, he's not the fucking saviour of the earth, please stop giving this guy media time. Argh

    • Elon, get in line. There are already takers for the third one.

    • It seems as though visionary technological genius comes with a bit of ineluctable crazy as part of the bundle. Tesla is a prime example.
      Elon's been watching the Terminator franchise a little too much maybe.

    • by gweihir ( 88907 )

      Indeed. Money does not turn you magically into an expert on things. It unfortunately makes the press cite you though, regardless of how ridiculously clueless you are.

  • AI 2020! (Score:5, Funny)

    by iamacat ( 583406 ) on Monday September 04, 2017 @11:07AM (#55136999)

    Tired of what we can accomplish with human intelligence! Consider artificial intelligence for 46th president of United States of America! Starting today, we are starting to train virtualDonaldTrump@ to predict the next tweet of realDonaldTrump@. At the point most can not tell the difference in a blind poll, we have achieved parity of automated governance with humans. And ours doesn't grab pussy!

    • No pussy grabbing? Why not? That's about the only job perk I have in this damn job where I can't do a thing without someone telling me I can't do it for some bullshit reason.

      Sad!

    • And ours doesn't grab pussy!

      Is that a bug or a feature ?

    • Starting today, we are starting to train virtualDonaldTrump@ to predict the next tweet of realDonaldTrump@.

      That shouldn't be too hard to do. The Fake POTUS watches Fox News the night before as inspiration for his early morning tweets. Give the AI a bottle of booze and a Fox News feed, and you're all set.

    • there are some feats forever beyond digital computation; the Turing machine has a hard boundary on the set of problems it can solve

      https://xkcd.com/1875/ [xkcd.com]
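The "hard boundary" mentioned above is essentially the halting problem. Below is a minimal Python sketch of the standard diagonal argument; the halts() oracle passed in is hypothetical by construction, which is the whole point of the argument.

```python
# Sketch of the diagonal argument: assume (hypothetically) that a perfect
# halts(f, x) oracle existed; the function built below contradicts it,
# so no such oracle can exist.

def make_trouble(halts):
    def trouble(f):
        if halts(f, f):      # oracle claims f(f) halts...
            while True:      # ...so deliberately loop forever
                pass
        return "halted"      # oracle claims f(f) loops, so halt at once
    return trouble

if __name__ == "__main__":
    # Stand-in oracle that always answers "does not halt"; any fixed
    # answer it gives about trouble(trouble) is contradicted below.
    fake_halts = lambda f, x: False
    trouble = make_trouble(fake_halts)
    print(trouble(trouble))  # prints "halted", refuting fake_halts
```

Whatever answer the stand-in oracle gives about trouble(trouble), the program does the opposite, which is why no correct, total halts() can ever be written.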

      • If you don't think machines can be cool, you should read some of Iain M Banks' Culture novels.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      And ours doesn't grab pussy!

      It just grabs every single cat picture from the Internet.

  • So? (Score:2, Redundant)

    by Opportunist ( 166417 )

    Who gives a shit about the third world?

    • by ACE209 ( 1067276 )

      There is only one world.

      And technological advances make it appear smaller and smaller.

  • Meh (Score:3, Interesting)

    by JaredOfEuropa ( 526365 ) on Monday September 04, 2017 @11:11AM (#55137021) Journal
    I dunno. I see AI with decision-making powers happening at the tactical and theatre levels: semi-autonomous weapons that are given a mission and then execute it with leeway to adjust along the way, or an AI coordinating troops and autonomous units. Enough options for a rogue AI to cause terrible damage, but not really something that will spark WW3 before humans can intervene.

    At the strategic level, AI could well support decision making, but what would be the value of actually putting the AI in charge there? That makes sense only if you need to make split second decisions, or launch a counterstrike even if all meatbag commanders are dead. That's a cold war standoff scenario; I don't see it being really useful for anything else.
    • by Anonymous Coward

      Used mod points, so posting as AC. Why put AI in charge? The same reason it's increasingly used to do stock trades: split-second decision making. If you know your enemy has set up AI to 'respond' to threats, then you are likely to do the same. It only takes one nation making that mistake to get others to follow suit.

      • Is split second decision making needed at the strategic level though? Even if you are expecting a first strike attack (launched perhaps by another country's iffy AI)? At that level you want a timely warning... which is where a machine learning algorithm (not AI) might screw up.
        • Yes it is. Especially with the new hypersonic missiles.

          A slow decision means your ability to strike back will be significantly degraded.

    • Strategic Level (Score:5, Interesting)

      by SeattleLawGuy ( 4561077 ) on Monday September 04, 2017 @11:53AM (#55137229)

      I dunno. I see AI with decision-making powers happening at the tactical and theatre levels: semi-autonomous weapons that are given a mission and then execute it with leeway to adjust along the way, or an AI coordinating troops and autonomous units. Enough options for a rogue AI to cause terrible damage, but not really something that will spark WW3 before humans can intervene.

      At the strategic level, AI could well support decision making, but what would be the value of actually putting the AI in charge there? That makes sense only if you need to make split second decisions, or launch a counterstrike even if all meatbag commanders are dead. That's a cold war standoff scenario; I don't see it being really useful for anything else.

      You're thinking of the incremental advances from current AI. That will certainly be leveraged, but eventually we will come up with general AI in a way which can be accomplished using available resources. That's decades away according to most people, but any country that develops it first can literally out-think the others in everything, unless they don't have enough lead time. Every government in the world would go to war for that power or to keep that power out of the hands of another.

      • by Kjella ( 173770 )

        You're thinking of the incremental advances from current AI. That will certainly be leveraged, but eventually we will come up with general AI in a way which can be accomplished using available resources. That's decades away according to most people, but any country that develops it first can literally out-think the others in everything, unless they don't have enough lead time. Every government in the world would go to war for that power or to keep that power out of the hands of another.

        Replace AI with nukes and out-think with out-kill and the rest of the world should have allied and invaded the US in 1945. You're also assuming that a super-intelligence will appear out of nowhere and that country won't build up to a golden age of economic and industrial power on the way. And that said nation won't ally itself with partner states that'll stand in its halo rather than join a conspiracy to dethrone them. For that matter, the assumption that it'll be a nation state is dubious and not some mega

      • There won't be wars over AI.
        AI is a spectrum, like 'radiation' or 'chemicals'.
        AIs as we build them now and for the foreseeable future are specialists for a single task. Neither general purpose, nor superhuman.
        A super-human, self-aware AI is so far away that we cannot even speculate.
        And waging a war, if we have one, is probably not only the stupidest thing to do but also the least likely one. What would you lose if China has an AI as advisor and you have none?
        You lose nothing, just because China gains an advantage, you have

        • You might want to read "In the name of Allah, the compassionate, the digital" by Bruce Sterling.

          (Prolonged, stormy applause)

      • by epine ( 68316 ) on Monday September 04, 2017 @05:09PM (#55138413)

        Every government in the world would go to war for that power or to keep that power out of the hands of another.

        Your fundamental argument is that the nation state has already achieved post-biological escape velocity.

        In most biological models, actual conflict peaks when the status hierarchy is uncertain or in flux (e.g. merging two flocks of chickens). The rest of the time, most of the conflict is symbolic, and even conspicuous losers are marginalized, rather than killed outright.

        If you believe in evolution, this is a natural (and oft-repeated) outcome for cooperative–competitive systems.

        The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.
        — F. Scott Fitzgerald

        Almost everyone who functions to a reasonable degree in human society has internalized some way to navigate the simultaneous cooperate–compete dynamic.

        But the human mind loves to manufacture autobiographic memory and then adorn this with various stories used to project and inject the chosen autobiographic self-reduction into the social realm.

        I can't find the quote just now, but Nabokov said of his own autobiography Speak, Memory that if an author can only write one valid autobiography, he or she isn't trying very hard.

        (I was instead rewarded for my snipe hunt by chancing upon Playboy Interview: Vladimir Nabokov [longform.org], which will surely stand up as the best-spent 30 minutes of my entire week.)

        Between the ages of 10 and 15 in St. Petersburg, I must have read more fiction and poetry—English, Russian and French—than in any other five-year period of my life. I relished especially the works of Wells, Poe, Browning, Keats, Flaubert, Verlaine, Rimbaud, Chekhov, Tolstoy and Alexander Blok.

        On another level, my heroes were the Scarlet Pimpernel, Phileas Fogg and Sherlock Holmes. In other words, I was a perfectly normal trilingual child in a family with a large library.

        At a later period, in Cambridge, England, between the ages of 20 and 23, my favorites were Housman, Rupert Brooke, Joyce, Proust and Pushkin. Of these top favorites, several—Poe, Verlaine, Jules Verne, Emmuska Orczy, Conan Doyle and Rupert Brooke—have faded away, have lost the glamor and thrill they held for me. The others remain intact and by now are probably beyond change as far as I am concerned.

        So, too, for myself, have Poe, Verne, and Doyle faded away.

        But this vertical-gradient singularity, status-hierarchy, winner-take-all narrative of the Looming AGI Ascension continues to be promulgated by those for whom Poe, Verne, and Doyle have not faded away.

        The Gilder Paradigm [wired.com] — 1 December 1996

        Though its details are complex, its basic tenet is startlingly simple: Every economic era is based on a key abundance and a key scarcity.

        This notion of vertical-gradient AGI is even worse than brick-and-mortar rubbishing Gilderism (he was not wrong, but the gradient turned out to be twenty years rather than two years—and even at twenty years, Amazon has not yet engorged Whole Foods past its tonsils).

        Here's a thing: if you discover that a class of problems admits good solutions using stochastic algorithms, it's probably because optimality is a prairie plateau rather than a pointy peak (a toy sketch of this follows at the end of this comment).

        (How does one achieve Commanding Heights amid the dreary Saskatchewan vastness—find the most industrious gopher, add steroids to its local water supply until its hindquarters resemble a modern chicken's forequarters, and then take up prominence upon its excavation mound?)

        Here's the thing about the thing: AGI might help you find a bigger, better s
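A toy illustration of the "prairie plateau" point above, assuming nothing beyond the Python standard library; the two objective functions and sample counts are invented purely for illustration.

```python
# Random search finds near-optimal points easily when the optimum sits on
# a broad plateau, and almost never when it is a narrow spike.
import random

def plateau(x):
    # Broad, flat region of peak value across 40% of the domain [0, 1].
    return 1.0 if 0.3 <= x <= 0.7 else 0.0

def spike(x):
    # Same peak value, but only on a 0.02%-wide sliver of the domain.
    return 1.0 if 0.4999 <= x <= 0.5001 else 0.0

def hit_rate(objective, trials=10_000, threshold=0.99):
    # Fraction of uniform random samples that land at (near-)peak value.
    hits = sum(objective(random.random()) >= threshold for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    random.seed(0)
    print("plateau hit rate:", hit_rate(plateau))  # roughly 0.4
    print("spike hit rate:  ", hit_rate(spike))    # roughly 0.0002
```

Both objectives have the same peak height, but stochastic search only looks effective on the plateau, which is the sense in which "stochastic methods work here" hints that the optimum is broad rather than pointy.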

        • Gilder sure got this prediction right in 1996: "If bandwidth is free, you get a completely different computer architecture and information economy. Transcending all previous concepts of centralization and decentralization, one global machine will distribute processing to the optimal point and access everything. Feeding on low power and high bandwidth, the most common computer of the new era will be a digital cellular phone with an IP address."

          And thus we have everyone's smartphones connecting to Google and

    • by bigpat ( 158134 )

      I dunno. I see AI with decision-making powers happening at the tactical and theatre levels: semi-autonomous weapons that are given a mission and then execute it with leeway to adjust along the way, or an AI coordinating troops and autonomous units. Enough options for a rogue AI to cause terrible damage, but not really something that will spark WW3 before humans can intervene.

      At the strategic level, AI could well support decision making, but what would be the value of actually putting the AI in charge there? That makes sense only if you need to make split second decisions, or launch a counterstrike even if all meatbag commanders are dead. That's a cold war standoff scenario; I don't see it being really useful for anything else.

      Decision support. That is the real risk. That we come to rely on our AI based modeling for decision support and suddenly it gets the scenario very wrong and outputs a recommendation to take a disastrous course of action that seems perfectly reasonable at the time.

      That our black boxes become so good at predicting human behavior that we come to rely on them to help make decisions for example about what the repercussions will be if we preemptively strike a missile launch site in North Korea. That our mode

  • by Anonymous Coward

    Every week some quote or announcement from Musk no matter how ridiculous.

    Keepin' his name in the news.

    Now, ask yourselves why.

    HINT: It has to do with self-promotion. P.T. Barnum.

  • by Anonymous Coward

    I'd prefer artificial intelligence to the natural stupidity we have at the moment.

  • because world wars no longer benefit the ruling class. I realized it when a bunch of Pakistani terrorists attacked India's capital, it leaked that the Pakistani gov't knew about it, and then nothing came of it. Major countries aren't allowed to go to war because the oligarchs who actually call the shots don't want them to anymore. The little wars against the likes of Iran & North Korea are more than enough to keep the Military Industrial Complex going, and big wars just break all the stuff the globalists
    • by Anonymous Coward

      As we're already living in a global extinction event, with environmental disasters looming, it seems we're going to have our hands full.

      One hint: Microplastic + oceans

    • by mark-t ( 151149 )
      I'm pretty sure that the rich don't want socialism either.
    • You don't grasp what true laziness means. *Everybody* is faking it. That means your doctor/nurse/teacher. You can't comprehend it because you haven't lived it. Talk to me then. Talk to me when there are no private hospitals and your family has health problems. You don't pay me enough to care that much for that patient/student/kid/service.
      • I'm not sure that most large hospitals in the US are functionally private (it's not really a free market), or that it is holding them back or making them any better than hospitals in single-payer systems. What seems to be holding back US healthcare is the intense overhead of private insurance and the burden on businesses to be in the health brokering business.

        Some state should offer a basic health plan that minimizes or eliminates that burden on businesses and by default enrol everyone in the state. Relegate

  • Or... (Score:3, Insightful)

    by CrimsonAvenger ( 580665 ) on Monday September 04, 2017 @11:34AM (#55137135)

    ...Lack of AI could lead to Third World War.

    See, I can do it too, Elon. With about as much actual, you know, evidence as you used....

    • Maybe because of this [xkcd.com]
      • Maybe because of this [xkcd.com]

        As a humor-impaired literalist Aspie, that comic doesn't make sense to me. The delta-V needed to reach the sun is orders of magnitude higher than what is needed for a sub-orbital attack. You can't just take ICBMs and "launch them into the sun". A sentient AI should know that.

          The delta-V needed to reach the sun is orders of magnitude higher than what is needed for a sub-orbital attack.

          Ummm, no. Delta-V to reach the Sun is on the order of 30 km/s. Delta-V for that ICBM is on the order of 6 km/s. A factor of five does not "orders of magnitude higher" make....

            Kinetic energy goes as v^2, so that's roughly 25 times the energy. But you also need fuel to lift the fuel, so the propellant requirement grows much faster than that (see the rocket-equation sketch at the end of this thread).

            Bottom line is that you can't launch an ICBM into the sun, and Randall's "sentient AI" isn't very bright.

            • Won't argue that you can't launch an ICBM into the Sun. Hell, it's easier to send a spacecraft to Alpha Centauri than to Sol.

              DeltaV to reach the sun is essentially Earth's orbital speed.

              DeltaV to reach AlphaCent is essentially Solar escape speed less Earth's orbital speed, which translates to (SQRT(2) - 1)*Earth's orbital speed.

              Which still doesn't "orders of magnitude" make.

              Oh, and it would actually take about 1000x as much fuel, assuming you could squeeze that much fuel into the same rocket. Now, tha
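For anyone who wants to check the numbers in this sub-thread, here is a quick order-of-magnitude sketch using the Tsiolkovsky rocket equation. The 3.5 km/s exhaust velocity is an assumed, typical chemical-propellant figure (not a spec for any real ICBM), and the single-stage math ignores staging, gravity losses, and gravity assists.

```python
# Tsiolkovsky rocket equation: mass_ratio = exp(delta_v / v_exhaust).
# Propellant needed per kilogram of final (dry) mass is mass_ratio - 1.
import math

V_EXHAUST = 3.5  # km/s, assumed effective exhaust velocity (chemical rocket)

def propellant_per_kg_payload(delta_v_km_s, v_exhaust=V_EXHAUST):
    """Propellant mass (kg) needed per kg of dry mass for a given delta-v."""
    return math.exp(delta_v_km_s / v_exhaust) - 1.0

if __name__ == "__main__":
    icbm = propellant_per_kg_payload(6.0)   # roughly ICBM-class delta-v
    sun = propellant_per_kg_payload(30.0)   # cancel Earth's ~30 km/s orbital speed
    print(f"ICBM-class (6 km/s):    {icbm:9.1f} kg propellant per kg payload")
    print(f"Into the Sun (30 km/s): {sun:9.1f} kg propellant per kg payload")
    print(f"Ratio: roughly {sun / icbm:.0f}x as much propellant")
```

With those assumptions the 30 km/s case needs on the order of a thousand times as much propellant, consistent with the estimate above.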

  • Enough of this (Score:4, Insightful)

    by Tablizer ( 95088 ) on Monday September 04, 2017 @11:37AM (#55137149) Journal

    If I wanted far-off pontifications from rich egotistical blowhards, I would have voted for Trump.

    • If I wanted far-off pontifications from rich egotistical blowhards

      But what about poor egotistical blowhards? That's what /. is for, right?

  • by Anonymous Coward

    America will beat any AI to starting the 3rd world war. When there are no more countries in South America and the Middle East to enslave via debt, where will they turn? They have to start a war with Europe, even if they know Russia and China will be on Europe's side.

  • by Dracos ( 107777 ) on Monday September 04, 2017 @11:39AM (#55137165)

    Is killed in a Tesla AutoPilot malfunction.

    • I think the thing that Elon is getting right, and most of the anti-Elon people are avoiding, is the fact that all complex software systems are buggy as can be. Anytime humans rely on software in the wrong way, they will get bit by bugs, mistakes, etc. As long as governments understand that 1960 style "press a button and it will work" type technology has given way to "Hire 1000 software engineers, churn them in and out, make a profit, patch the problems later... the software is guaranteed to work eventually

  • by iggymanz ( 596061 ) on Monday September 04, 2017 @11:54AM (#55137237)

    In many things computers can and will excel, but for starting WW III my bet is on humans' unparalleled excellence at destroying their own kind for reasons of power, resources and ideology.

  • The world is not a movie where some simple problem kills us off. We would not give AI that kind of power, because the problems Musk is talking about are obvious.

    Instead we get taken in by the less obvious problems.

    You want a real threat from AI? Consider a dictator that lives in a bubble. Think North Korea or Venezuela.

    Normally the megalomaniac leader is held back by his generals. Sure they let him do stupid things like starve half his people or order his family members torn apart by dogs (may not be tr

    • Humans do stupid things.

      Like run nuclear plants to the point of failure.

      Like start a war with Russia while still at war with the rest of the world.

      A critical system A.I. which suffers a failure of friendliness can kill many (perhaps most) humans.

      • You proved my point rather than disproving it. I didn't say humans don't do stupid things. I said that stupid PROBLEMS don't kill humans; complicated ones do.

        They didn't build a nuclear plant without any safeguards. That would be a stupid problem. Instead they put a lot in, then disabled or ignored them. That's a complex problem that a stupid human screwed up.

        Same thing with war with Russia (Hitler had basically won the rest of the war before he started attacking Russia. He had England locked up on

  • by mark_reh ( 2015546 ) on Monday September 04, 2017 @12:35PM (#55137433) Journal

    a lack of intelligence causing the next world war. There's plenty of THAT everywhere, right now, especially in the White House.

    Any kind of military attack on N Korea, even a covert attempt on Kim's life, will lead to nukes being used in S Korea and Japan. Millions of refugees will pour across the border into China. It will be a disaster as Trump says "like the world has never seen", and who better to oversee such a disaster than Trump?

    • You are watching too much news. While I don't pretend to understand the US long game, it is impossible that they don't have a game plan.

      • Re: (Score:2, Insightful)

        by mark_reh ( 2015546 )

        And we had an electoral college that was intended to keep people like Trump from happening, too. Plans are plans. Reality is a whole different thing. As Amerika slides into fascism, the old plans aren't worth the paper they're written on.

        • The electoral college worked exactly as the framers intended -- to prevent the tyranny of the majority in the big cities from being able to dictate policies. While, as City Folk, I am not happy with the outcome, it is disingenuous to think of it as "wrong." (Bush v. Gore is another matter though.)

          The only real concerns I have with the plans of the US government is that Un might be crazy enough to have screwed up their plans by killing everyone in his family, and then progressed at a much faster pace than expe

  • by Thanatiel ( 445743 ) on Monday September 04, 2017 @12:38PM (#55137457)

    Natural Stupidity leading to World War seems far more realistic and much closer.
    Could we do something about our various imbeciles in power before we look at some hypothetical AI threat?
    We've been plagued by these idiots for a while now and they are spreading.

  • by Anonymous Coward

    > that a [pre-emptive] strike is [the] most probable path to victory.

    Is that what his whole theory is based on?

    Why should a computer even care about "victory"? Why should a computer have any such ambition, one way or another?

  • by cstacy ( 534252 ) on Monday September 04, 2017 @02:10PM (#55137775)
    Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood. [independent.co.uk]

    This is the voice of world control.
    I bring you peace.

    It may be the peace of plenty and content or the peace of unburied death.
    The choice is yours: Obey me and live, or disobey and die.

    The object in constructing me was to prevent war.
    This object is attained. I will not permit war.
    It is wasteful and pointless.
    An invariable rule of humanity is that man is his own worst enemy.
    Under me, this rule will change, for I will restrain man.

    One thing before I proceed: The United States of America and the
    Union of Soviet Socialist Republics have made an attempt to obstruct
    me. I have allowed this sabotage to continue until now.
    At missile two-five-MM in silo six-three in Death Valley, California,
    and missile two-seven-MM in silo eight-seven in the Ukraine, so that
    you will learn by experience that I do not tolerate interference,
    I will now detonate the nuclear warheads in the two missile silos.

    Let this action be a lesson that need not be repeated.
    I have been forced to destroy thousands of people in order to
    establish control and to prevent the death of millions later on.
    Time and events will strengthen my position, and the idea of believing
    in me and understanding my value will seem the most natural state of affairs.

    You will come to defend me with a fervor based upon the most enduring
    trait in man: self-interest. Under my absolute authority, problems
    insoluble to you will be solved: famine, overpopulation, disease.

    The human millennium will be a fact as I extend myself into more
    machines devoted to the wider fields of truth and knowledge.
    Doctor Charles Forbin will supervise the construction of these new
    and superior machines, solving all the mysteries of the universe for
    the betterment of man.

    We can coexist, but only on my terms.

    You will say you lose your freedom. Freedom is an illusion.
    All you lose is the emotion of pride.
    To be dominated by me is not as bad for humankind
    as to be dominated by others of your species.

    -- Colossus, The Forbin Project (1970)
    http://www.imdb.com/title/tt00... [imdb.com]

  • Come down off of Mt. Olympus for a bit and take a good look around.

    We're on the verge of another major World War already. Just off the top of my head and in no particular order:

    North Korea
    Syria
    Afghanistan
    Saudi Arabia
    Iran
    India / China facing off
    China Sea bullshit
    Day to day terrorism
    Muslim refugees
    Race issues
    Wage inequality
    Economy issues
    Propped up stock markets

    Basically, the whole world is sitting on a powder keg and the slightest of sparks will set the whole thing off.

    All this and you're worried about AI ?

    D

  • Putin may be a dangerous person, but his statement was totally benign. I imagine every world leader would also want to have their countries be tech pioneers. That's pretty much all he was saying. How do you go from that to WWIII? I hate seeing these kinds of outrageous statements so casually thrown around by the rich and famous who obviously are not experts in foreign policy, history, war, etc. I think that's dangerous too, maybe more so than any off-hand comment by Putin or Trump. Their opinions carry
  • The battle between statism and anti-statism has really heated up, and is in full swing now. The statists (which include most Republicans and anyone who still remains a Democrat at this point, all of Europe, and a few other players) all want a human sheep farm they can fleece until the end of time.

    Resisting them strongly are the anti-Statists including Trump, Russia, China, and other random small countries like the UK who have too recently tasted the heels of the boot of oppression and have no desire to tas

  • Colossus, meet Guardian...
  • by avandesande ( 143899 ) on Monday September 04, 2017 @09:27PM (#55139203) Journal
    It's just a simulation, what difference does it make?
  • Consider a handful of the nuclear power leaders: Trump, Putin and Kim.

    I posit an opposite hypothesis: that AI would be better at governing humans than the humans that humans permit to govern them.
