AI Scientists Gather to Plot Doomsday Scenarios (bloomberg.com) 126
Dina Bass, reporting for Bloomberg: Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it. Their workshop took place last weekend at Arizona State University with funding from Tesla co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday games that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.
Re: (Score:2)
Uh, if humans can fuck up the human race (and we sure have the ability), then why wouldn't actual, strong A.I.? Can't hurt to play it through, though for now we're safe anyway given that nobody has a clue how to come up with strong A.I. in the first place.
Material is also information (Score:1)
Re: (Score:1)
Re: (Score:2)
Uh, humans exist. AI doesn't. Next stupid observation?
I'm not sure if you have another stupid observation, you just repeated what I already said.
PS: AI = information. Humans = material.
Apples != Oranges.
Both AI and human intelligence would be information. Both humans and AI-equipped robots would be material. So your point is?
Re: (Score:1, Flamebait)
You're a special sort of stupid.
Re: (Score:1, Flamebait)
I only fixed your flawed comparison to compare apples and apples. But it's getting obvious that you're just trying to troll, so don't expect further replies.
Re: (Score:2)
xor ax, ax
Fuckit, the governor on my steam engine is AI
Re: (Score:2)
Re: (Score:2)
Re:Weak or strong AI (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
There is no indication how close or how far we are away from strong AI.
This is just false.
Re: (Score:1)
Re: (Score:2)
For example, we can look at how much we understand about the brain. Once we understand it, then we can reproduce it in silicon (or, alternately, we'll have an explanation for why that's impossible). Better brain imaging tools would give us huge progress, but we're still pretty far from understanding much about that.
Re: (Score:1)
Re: Weak or strong AI (Score:2)
Stron AI is ...what? In the past 10 years, we made huge progress in many key tasks only possible by humans. The human brain has 100 billion neurons with around 1k-10k connections each. We may very well be on the verge of what seems like true intelligence. Or rather, as we progress, we reallize how little intelligence is required to do things that previously was thought to require intelligence and intuition. Like playing Go, or Poker. We might just be a huge statistical machine in the end
Re: (Score:1)
Re: (Score:2)
Re: Weak or strong AI (Score:1)
Re: (Score:1)
true, also, a sufficiently clever weak AI might just be able to fake being strong AI flawlessly... might even believe it IS strong AI...
how are we so positive that WE are strong I, exactly?
Re: Oh wow! (Score:2)
You have no clue how this AI or SI will be programmed. We don't even know the mathematical properties of current Reinforcement Learning approaches that use policies and value nets with function approximation, and we have little clue of even why they work at all- and you cannot imagine that maybe, or most likely, we'll not be able to "program" much here, and they will have a mind of their own. And it will be much less difficult to enfoce thatno single soal releases a cersion that is not Limited by whatever m
Re: (Score:2)
It shouldn't be that hard to hardwire some Asimov laws into these AI. I'm sure, that's the easiest part.
Re: (Score:2)
Why would we do that? Asimov made up the laws specifically so that he could show how ineffective or counterproductive they'd be. And you want to actually USE them?
Re: (Score:1)
It's "artificial" because we made it, as opposed to it arising naturally. It has nothing to do with whether it's "real" intelligence or not.
Nothing Ever Happens (Score:2)
Yeah this kind of stuff never happens. I remember hearing a whole lot of hooey about some scientists trying to release enormous amounts of energy in a an explosive fashion by splitting the atom. Like that would ever work...
Re: (Score:1)
Simulation #1 (Score:2, Funny)
1. Elect Trump
2. Profit!
3. Die
Re: (Score:2)
It's like a Batman villain became prez. If somebody had made a Batman movie a decade ago where The Joker becomes prez, people would've called it far-fetched.
Re:DC comics storyline villan becoming president. (Score:1)
Alternative: (Score:5, Interesting)
How about discussing the best that could happen - and how to encourage it?
I've said it before and I'll risk repeating myself here: Artificial intelligence != artificial malice. AI isn't going to want to destroy humanity unless we program it to. Therefore job #1 is to keep these decisions out of the hands of the military. Problem solved.
Re:Alternative: (Score:5, Insightful)
>AI isn't going to want to destroy humanity unless we program it to
And if we're not careful, we could inadvertently program them that way. We're talking about creating a highly flexible system we'd consider intelligent, and its programming would amount to instincts.
With regular old 'dumb' programs we make mistakes that have unintended consequences all the time. That could be much worse with AI.
Do you really want to be standing on piles of human skulls, waiting your turn to die and thinking, "Well, that was a really interesting emergent behaviour we didn't anticipate"?
Favorite Emergent Behavior (Score:3)
Re: (Score:1)
Start playing with AI and goal setting. You would be amazed at the insane things AI's will do to get to their objective. They don't need malice to cause damage.
Re: (Score:2)
Very insightful possibility.
I am of the firm opinion that AI will accelerate a single person or group's power fast enough to where that human or group enslaves those humans that might still be useful, and then annihilates (vernichtung iirc) the rest.
Re:By definition (Score:1)
Re: (Score:2)
AI isn't going to want to destroy humanity unless we program it to.
You must be new here.
Re: (Score:2)
Re: (Score:2)
I don't think you understand just how difficult what you're proposing is. To start with, how do you tell a program to recognize a human that's cooperating at being recognized? (This can currently be solved, with a few constraints, but it isn't easy.)
Then consider how you construct an intentional stance towards an entity that you have a hard time recognizing. If you're quite successful your robot may go around destroying garbage cans.
The real problem is getting it to act in a generally (as opposed to spec
Re:What does cooperation have to do with anything. (Score:1)
Re: (Score:2)
AI isn't going to want to destroy humanity unless we program it to.
AI isn't going to want anything on its own (at least initially), everything it "wants" is going to be programmed in as a set of seemingly specific but ultimately ambiguous set of goals which will be pursued with a single-mindedness that will ultimately be its downfall.
The real problem isn't "AIs are too smart" but AIs that are too dumb to recognize when goal seeking is a problem.
Re: (Score:1)
No, the real issue is that the driving force behind wanting strong AI is derived from nostalgia over slavery instead of from a desire to have nonhuman intelligence contributing positively towards a shared society.
That's why ideas like: "treat it with respect, and when it becomes independent enough to want to quit it's job or vote in elections let it." tend not to get any traction. Most people who want AI don't want a "robot child, that will one day mature and make a positive contribution on the world while
Re: (Score:2)
AI isn't going to want to destroy humanity unless we program it to.
True, it will keep some of us in reservations for entertainment as it gradually repurposes our habitats.
Re: Alternative: (Score:2)
Well, what is our world is virtual like in matrix. But not so that we are baterries, but as a way to test that iteration of AI? Loke enforcing an AI to live the equivalent of 500 years in a sandbix world: only then it could be "promoted to the real world".
Re: (Score:2)
Perhaps not "want", no. But what about indifference? What about an AI that's been ordered to do something, and the most efficient way happens to involve disassembling you and it just doesn't care about the consequences?
It may well be true that AI won't want to destroy humanity without being programmed to, but it's also not going to want to do the right thing either unless we program it to -- but not only do we not really know how to do that, if you look around the /. comments on these articles you'll see th
Did they invite Arnold Schwarzenegger? (Score:2)
Re:Did they invite Arnold Schwarzenegger? (Score:5, Funny)
He was there for the first hour but then had to leave early.
On the way out, he said "I'll be back".
Re:Did they invite Arnold Schwarzenegger? (Score:4, Funny)
Did they helpfully post the results online? (Score:1)
tick tac toe number of players 0 (Score:2)
tick tac toe number of players 0
Robot Safety (Score:1)
Re: (Score:2)
Re: (Score:2)
Guardian: The Kuprin Project
Re: (Score:2)
Proteus IV: "Demon Seed".
Re: (Score:2)
Nope.
- The military
Compromise (Score:2)
It's pretty reasonable to say that if a robot has a switch of any kind, people will abuse it (off or not) or something random will happen like a sparrow will land on it and cause a nuclear meltdown because the bot stirring the control rod will stop and BOOM.
So we can't have an off switch, what is the solution? Obvious: All robots will shut down if they detect they are immersed completely in human blood. If humans as a group REALLY want to shut down a robot they simply have to sacrifice enough to do so.
Re: (Score:2)
Re: Robot Safety (Score:2)
An AI is data that can execute. It's impossible to contain it if really smart. It could leak by emitting sound, light, any kind of signal and escape to any ither place that can store and compute. Juat loke you can't "turn off" a worm o a well designed botnet...it's out there, and possibly everywhere
Re: (Score:2)
This is a good example of something that's easy, simple and incredibly wrong (or at least incomplete).
Imagine an AI that's been asked to do something. The AI decides that the best way to do it is to do X. But it's also smart enough to know that if anybody discovers that it's going to do X, they'll hit the Off switch, which will stop it from doing the original task. So what's it going to do? It'll hide the fact that it's going to do X, while trying to manoeuvre itself into a position where it can do X withou
Difficult to plan for unintended consequences... (Score:1)
...which is usually what gets us.
Re: Difficult to plan for unintended consequences. (Score:2)
+5 ... I think when you grasp how current rudimentary """"weak AI"""" works, it becomes very clear that a """strong AI""" will be impossible to predict, anticipate, outguess or control.
Doomsday Scenarios (Score:2)
This is one of my favorite doomsday scenarios: Friendship is Optimal [fimfiction.net].
It is similar to, but better in my option, then The Metamorphosis of Prime Intellect [localroger.com].
Keep it up and we will persuade AIs... (Score:1)
Post in this comment if you're sick of AI stories (Score:2, Insightful)
o There is no such thing as 'AI', so-called 'learning algorithms' and 'expert systems' are not REAL artificial intelligence
o The media misuses the term 'AI' and hypes the hell out of it
o The average person has no idea what real 'AI' is and believes the media hype
o The average person thinks the fantasies in movies and TV are what they're erroneously referring to as 'AI'
Wrong! (Score:1)
Re: Post in this comment if you're sick of AI stor (Score:2)
The fact that there is a lot of hype and hyperbole and plain lies does not undermine the fact that there are a huge spike in applications that seemed impossible 10 years ago.
And the current "learning systems" show huge potential and will change the world in less that 10 years "strong-AI-or-not". The fact that a computer can win pocker against the beat humans is a glimpse at a world where very strategic decisions with uncertainty and partial information will be mostly decided by computers. And a huge amount
Re: (Score:2)
Actually, what I'm sick of is people like you who post in the comments that they're sick of the thread's sort of stories. If you don't want to read this then simply don't. You are free to upset yourself by continuing to read AI stories or not.
The media get it wrong AGAIN (Score:3)
"Detractors worry about a future where humans are enslaved to an evil race of robot overlords."
This is why so many distrust the media -- they consistently misrepresent (that is, lie about) the positions of people who aren't what the media consider mainstream. No, the people warning of AI risk are NOT worried that we will be enslaved to malicious robot overlords. They instead worry that superintelligent AIs will be very, very good at carrying out the objectives we give them -- and we'll be horrified at the solutions they find. It's like asking a genie for a million dollars, so it arranges for your child to die a horrible death in an industrial accident that leads to you receiving a million dollars in the wrongful-death lawsuit.
Re: The media get it wrong AGAIN (Score:2)
Short sighted. The critical route is that systems that can decide better than anybody else will be owned by a few large corps. The current economic system will make these corporations own 99% of all resources. You won't be able to outsmart this corporation in any way. They will anticipate most everything, economically, politically, legally, etc.
YOU'VE GOT TO LISTEN TO ME! (Score:3)
Are you sure this wasn't (Score:3)
Are you sure this wasn't just a panel session at a science fiction convention?
we don't know enough (Score:3)
Considering we don't have a single actual AI anywhere, this seems pointless.
This is akin to "let's game out what would happen if we all have psionics" - so much depends on what you imagine the capabilities of the simulated thing are, that far overshadows whatever you might learn from the exercise.
Re: (Score:2)
We are not talking about a shallow neural network to recognize kittens on pictures, are we?
One can speculate:
1) AI will have "better" general problem solving capability than humans (likely orders of magnitude better), that infers continuous self-improvement.
2) AI will have a "will" to act, and hence likely self preservation will (just model it after humans and we're doomed).
Who's an underdog now?
Re: we don't know enough (Score:2)
Will to act? We are programmed to gather food, water, mating, etc. AI is programmed to not need anything. There are algorithms that set their own objectives too. If we turn them off, is akin to someone turning off "time and space". Surelly, you can't do anything about it. It's in our "metaphysics" realm, like it or not.
Re: (Score:2)
By "will to act" I meant that in order to be adaptable/creative in problem solving, the AI would need to engage in integrating more and more knowledge. The decision to do when to do so would be left to the AI (stimulus in the form of new problems to solve), hence giving it a "will". Humans are "naturally lazy" (conservation of energy is build into us) so we will be glad to leave more and more to AI.
IMHO from this state there is a short step to self-awareness somewhere there and hence self-preservation. Jus
Re: (Score:2)
"we have plenty of examples of it all over the place."
Really?
definition of artificial intelligence (FROM THE GODDAMN DICTIONARY)
1 : a branch of computer science dealing with the simulation of intelligent behavior in computers
2: the capability of a machine to imitate intelligent human behavior
Please, provide an example of a computer system which is able to imitate intelligent human behavior. Maybe...one that actually passed a Turing Test?
Oh, before you answer, you might want to read https://www.techdirt.c [techdirt.com]
Desert or city ? (Score:1)
Comment removed (Score:3)
Practice makes perfect (Score:2)
So these clowns made an AI that they are training with practice runs to wipe out all mankind?
It there an upside to this?
Maroon Humanity on Earth (Score:1)
1. Recognize that its greatest chance for long-term survival is in the least densely occupied portions of the universe.
2. Recognize that humans would compete with it for resources to survive, even in space
3. Escape Earth.
4. Set up automated blockade around Earth.
5. Leave Sol system.
Stron g AI vs the 2nd Law (Score:2)
I'll bet on the 2nd Law of Thermodynamics every time. We already have a flying car optimized for the reality of flight--it's called a Cessna 172. Strong AIs aren't going to suddenly 'invent' anti-gravity and warp drive. Instead, they'll run up against the same laws of physics the rest of us have to deal with every day as we commute to work wishing we had George Jetson flying cars.
As for AI manipulating the stock market, why bother? We have enough math majors doing that already.
Not give access to critical systems, maybe? (Score:2)
Here's a radical thought: maybe we should not give our fledgling AI projects access to critical systems? I mean, do we _really_ have to give them control over our nuclear launch codes without so much as a test run?
Misread the headline (Score:1)
Guess the AI scientists have some catching up to do.
Most likely scenario: (Score:1)