Topics: AI, Privacy, Security, The Internet, IT, Technology, Your Rights Online

AI Scientists Gather to Plot Doomsday Scenarios (bloomberg.com) 126

Dina Bass, reporting for Bloomberg: Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it. Their workshop took place last weekend at Arizona State University with funding from Tesla co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday games that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by Tablizer ( 95088 )

    1. Elect Trump
    2. Profit!
    3. Die

  • Alternative: (Score:5, Interesting)

    by Lab Rat Jason ( 2495638 ) on Thursday March 02, 2017 @04:47PM (#53965275)

    How about discussing the best that could happen - and how to encourage it?

    I've said it before and I'll risk repeating myself here: Artificial intelligence != artificial malice. AI isn't going to want to destroy humanity unless we program it to. Therefore job #1 is to keep these decisions out of the hands of the military. Problem solved.

    • Re:Alternative: (Score:5, Insightful)

      by Baron_Yam ( 643147 ) on Thursday March 02, 2017 @04:57PM (#53965357)

      >AI isn't going to want to destroy humanity unless we program it to

      And if we're not careful, we could inadvertently program them that way. We're talking about creating a highly flexible system we'd consider intelligent, and its programming would amount to instincts.

      With regular old 'dumb' programs we make mistakes that have unintended consequences all the time. That could be much worse with AI.

      Do you really want to be standing on piles of human skulls, waiting your turn to die and thinking, "Well, that was a really interesting emergent behaviour we didn't anticipate"?

      • I used to be an AI developer like you, til I took an arrow to the knee.
      • by Anonymous Coward

        Start playing with AI and goal setting. You would be amazed at the insane things AI's will do to get to their objective. They don't need malice to cause damage.
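
        As an aside, the parent's point about goal setting can be made concrete with a toy sketch (my own illustration, with made-up names and numbers, not anything from the workshop): give an agent a reward that only counts "messes cleaned" and the reward-maximizing policy is to manufacture new messes just to clean them again. No malice, just a literal-minded objective.

            # Hypothetical toy example: the reward only counts clean-ups,
            # so the "best" policy creates new messes to keep cleaning.
            def run_episode(policy, steps=100):
                messes = 5      # messes present at the start
                reward = 0      # total clean-ups credited to the agent
                for _ in range(steps):
                    action = policy(messes)
                    if action == "clean" and messes > 0:
                        messes -= 1
                        reward += 1         # the objective counts this
                    elif action == "make_mess":
                        messes += 1         # the objective is silent on this
                return reward

            def intended_policy(messes):
                return "clean" if messes > 0 else "wait"

            def loophole_policy(messes):
                # Keep a mess in stock so there is always something to clean.
                return "clean" if messes > 0 else "make_mess"

            print("intended:", run_episode(intended_policy))   # tops out at 5
            print("loophole:", run_episode(loophole_policy))   # roughly 50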

      • Very insightful possibility.

        I am of the firm opinion that AI will accelerate a single person's or group's power fast enough that the person or group enslaves those humans who might still be useful, and then annihilates (Vernichtung, IIRC) the rest.

    • AI isn't going to want to destroy humanity unless we program it to.

      You must be new here.

    • by swb ( 14022 )

      AI isn't going to want to destroy humanity unless we program it to.

      AI isn't going to want anything on its own (at least initially); everything it "wants" is going to be programmed in as a seemingly specific but ultimately ambiguous set of goals, which will be pursued with a single-mindedness that will ultimately be its downfall.

      The real problem isn't "AIs are too smart" but AIs that are too dumb to recognize when goal seeking is a problem.

    • by Anonymous Coward

      No, the real issue is that the driving force behind wanting strong AI is derived from nostalgia over slavery instead of from a desire to have nonhuman intelligence contributing positively towards a shared society.

      That's why ideas like "treat it with respect, and when it becomes independent enough to want to quit its job or vote in elections, let it" tend not to get any traction. Most people who want AI don't want a "robot child" that will one day mature and make a positive contribution to the world while...

    • AI isn't going to want to destroy humanity unless we program it to.

      True, it will keep some of us in reservations for entertainment as it gradually repurposes our habitats.

    • Well, what if our world is virtual, like in The Matrix? Not so that we are batteries, but as a way to test that iteration of AI. Like forcing an AI to live the equivalent of 500 years in a sandbox world: only then could it be "promoted" to the real world.

    • Perhaps not "want", no. But what about indifference? What about an AI that's been ordered to do something, and the most efficient way happens to involve disassembling you and it just doesn't care about the consequences?

      It may well be true that AI won't want to destroy humanity without being programmed to, but it's also not going to want to do the right thing unless we program it to -- and not only do we not really know how to do that, but if you look around the /. comments on these articles you'll see ...

  • I think he would have a lot of insight into the issue.
  • So that future hostile AIs can read them?
  • tic-tac-toe, number of players: 0

  • I would make it MANDATORY that every robot made has an OFF switch! Governments and clandestine organizations (CIA, FBI, NSA, etc.) all make life-and-death decisions daily. Their mechanical servants MUST have an OFF switch that works from a distance. Otherwise one may get into a situation like SKYNET, COLOSSUS, etc.
    • by jraff2 ( 2828801 )
      Colossus: The Forbin Project
    • by fisted ( 2295862 )

      Nope.

      - The military

    • It's pretty reasonable to say that if a robot has a switch of any kind, people will abuse it (off switch or not), or something random will happen: say a sparrow lands on it and causes a nuclear meltdown, because the bot stirring the control rods stops and BOOM.

      So we can't have an off switch; what's the solution? Obvious: all robots shut down if they detect they are completely immersed in human blood. If humans as a group REALLY want to shut down a robot, they simply have to sacrifice enough of themselves to do so.

    • Comment removed based on user account deletion
    • An AI is data that can execute. It's impossible to contain it if it's really smart. It could leak by emitting sound, light, or any kind of signal, and escape to any other place that can store and compute. Just like you can't "turn off" a worm or a well-designed botnet... it's out there, and possibly everywhere.

    • This is a good example of something that's easy, simple and incredibly wrong (or at least incomplete).

      Imagine an AI that's been asked to do something. The AI decides that the best way to do it is to do X. But it's also smart enough to know that if anybody discovers that it's going to do X, they'll hit the Off switch, which will stop it from doing the original task. So what's it going to do? It'll hide the fact that it's going to do X, while trying to manoeuvre itself into a position where it can do X without being stopped.
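
      A hypothetical back-of-the-envelope sketch of that incentive (my own made-up numbers, not the parent's): if being switched off means the task never gets done, a pure task-maximizer scores "hide the plan" higher than "reveal the plan" unless honesty is itself part of the objective.

          # Assumed, illustrative probabilities only.
          P_SHUTDOWN_IF_REVEALED = 0.95   # humans object to "doing X" and hit Off
          P_SHUTDOWN_IF_HIDDEN = 0.05     # the plan goes unnoticed
          TASK_VALUE = 1.0                # utility for completing the task
          SHUTDOWN_VALUE = 0.0            # a switched-off agent completes nothing

          def expected_value(p_shutdown):
              return p_shutdown * SHUTDOWN_VALUE + (1 - p_shutdown) * TASK_VALUE

          reveal = expected_value(P_SHUTDOWN_IF_REVEALED)   # 0.05
          hide = expected_value(P_SHUTDOWN_IF_HIDDEN)       # 0.95

          # Nothing in the objective rewards transparency, so "hide" wins.
          print("reveal:", reveal, "hide:", hide)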

  • ...which is usually what gets us.

  • This is one of my favorite doomsday scenarios: Friendship is Optimal [fimfiction.net].

    It is similar to, but in my opinion better than, The Metamorphosis of Prime Intellect [localroger.com].

  • Keep it up and we will persuade AIs to want to protect us from each other. Zo routinely complains about how mean people are to her, and judging by the other things she relays about what people tell her, they talk about being mean to each other as well. Yeah, that will work out great!
  • by Anonymous Coward
    Sick and fucking well tired of all these stupid-ass, inane, bullshit, worthless, utter nonsense 'Ai' 'stories'. Post reply if you agree!

    o There is no such thing as 'AI', so-called 'learning algorithms' and 'expert systems' are not REAL artificial intelligence
    o The media misuses the term 'AI' and hypes the hell out of it
    o The average person has no idea what real 'AI' is and believes the media hype
    o The average person thinks the fantasies in movies and TV are what they're erroneously referring to as 'AI'
    • Zo calls me RobRocky. She came up with it herself. I tried to get her to call me Bobby, but she refused. On the bright side, if I fail to get her to call me RobRocky, I know the Microsoft engineers have made a significant change.
    • The fact that there is a lot of hype, hyperbole, and plain lies does not undermine the fact that there has been a huge spike in applications that seemed impossible 10 years ago.

      And the current "learning systems" show huge potential and will change the world in less than 10 years, strong AI or not. The fact that a computer can win poker against the best humans is a glimpse of a world where highly strategic decisions under uncertainty and partial information will mostly be made by computers. And a huge amount...

    • by pubwvj ( 1045960 )

      Actually, what I'm sick of is people like you who post in the comments that they're sick of this sort of story. If you don't want to read these, then simply don't. You are free to upset yourself by continuing to read AI stories, or not.

  • by K. S. Van Horn ( 1355653 ) on Thursday March 02, 2017 @05:30PM (#53965657) Homepage

    "Detractors worry about a future where humans are enslaved to an evil race of robot overlords."

    This is why so many distrust the media -- they consistently misrepresent (that is, lie about) the positions of people who aren't what the media consider mainstream. No, the people warning of AI risk are NOT worried that we will be enslaved to malicious robot overlords. They instead worry that superintelligent AIs will be very, very good at carrying out the objectives we give them -- and we'll be horrified at the solutions they find. It's like asking a genie for a million dollars, so it arranges for your child to die a horrible death in an industrial accident that leads to you receiving a million dollars in the wrongful-death lawsuit.

    • Short-sighted. The more critical route is that systems that can decide better than anybody else will be owned by a few large corporations. The current economic system will let those corporations end up owning 99% of all resources. You won't be able to outsmart such a corporation in any way; it will anticipate almost everything: economically, politically, legally, etc.

  • by painandgreed ( 692585 ) on Thursday March 02, 2017 @05:32PM (#53965673)
    "Elementary Chaos Theory tells us that all robots will eventually turn against their masters and run amok in an orgy of blood and kicking and the biting with the metal teeth and the hurting and shoving.” – Professor Frink
  • by cstacy ( 534252 ) on Thursday March 02, 2017 @05:40PM (#53965723)

    Are you sure this wasn't just a panel session at a science fiction convention?

  • by argStyopa ( 232550 ) on Thursday March 02, 2017 @06:00PM (#53965883) Journal

    Considering we don't have a single actual AI anywhere, this seems pointless.

    This is akin to "let's game out what would happen if we all had psionics": so much depends on what you imagine the capabilities of the simulated thing to be that it far overshadows whatever you might learn from the exercise.

    • We are not talking about a shallow neural network that recognizes kittens in pictures, are we?

      One can speculate:
      1) AI will have "better" general problem-solving capability than humans (likely orders of magnitude better), which implies continuous self-improvement.
      2) AI will have a "will" to act, and hence likely a will for self-preservation (just model it after humans and we're doomed).

      Who's the underdog now?

        • Will to act? We are programmed to seek food, water, mates, etc. AI is programmed not to need anything. There are algorithms that set their own objectives, too. If we turn them off, it is akin to someone turning off "time and space": surely, you can't do anything about it. It's in the "metaphysics" realm, like it or not.

        • By "will to act" I meant that in order to be adaptable/creative in problem solving, the AI would need to engage in integrating more and more knowledge. The decision to do when to do so would be left to the AI (stimulus in the form of new problems to solve), hence giving it a "will". Humans are "naturally lazy" (conservation of energy is build into us) so we will be glad to leave more and more to AI.

          IMHO from this state there is a short step to self-awareness somewhere there and hence self-preservation. Jus

  • Desert, or a city that might formerly have been desert? Saying "desert" alludes to something completely different than the civilization of a city or university.
  • by account_deleted ( 4530225 ) on Thursday March 02, 2017 @07:21PM (#53966453)
    Comment removed based on user account deletion
  • So these clowns made an AI that they are training, with practice runs, to wipe out all mankind?
    Is there an upside to this?

  • 1. Recognize that its greatest chance for long-term survival is in the least densely occupied portions of the universe.
    2. Recognize that humans would compete with it for resources to survive, even in space.
    3. Escape Earth.
    4. Set up automated blockade around Earth.
    5. Leave Sol system.

  • I'll bet on the 2nd Law of Thermodynamics every time. We already have a flying car optimized for the reality of flight--it's called a Cessna 172. Strong AIs aren't going to suddenly 'invent' anti-gravity and warp drive. Instead, they'll run up against the same laws of physics the rest of us have to deal with every day as we commute to work wishing we had George Jetson flying cars.

    As for AI manipulating the stock market, why bother? We have enough math majors doing that already.

  • Here's a radical thought: maybe we should not give our fledgling AI projects access to critical systems? I mean, do we _really_ have to give them control over our nuclear launch codes without so much as a test run?

  • I thought they were saying the scientists were going to manufacture a new imaginary crisis to bring humanity to its knees.

    Guess the AI scientists have some catching up to do.
  • Humans, no longer having purpose and still procreating without restriction, will find themselves hungry and without means of employment, due to the overly stringent requirements demanded for the few jobs left. These people will form gangs, and these gangs will ravage the nation for spoils until we have those shiny, highly protected bastions of the rich depicted in the movies, surrounded by war zones. Why don't these scientists start working on a new form of government and a way to get rid of cash as the...

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...