100-Page Report Warns of the Many Dangers of AI (vice.com)
dmoberhaus writes: Last year, 26 top AI researchers from around the globe convened in Oxford to discuss the biggest threats posed by artificial intelligence. The result of this two-day conference was published today as a 100-page report. The report details three main areas where AI poses a threat: political systems, physical systems, and cybersecurity. It discusses the specifics of these threats, which range from political strife caused by fake AI-generated videos to catastrophic failures of smart homes and autonomous vehicles, as well as intentional threats such as autonomous weapons. Although the researchers offer only general guidance for how to deal with these threats, they do offer a path forward for policy makers.
More Human Intelligence than AI (Score:2, Interesting)
It sounds like at least some of these problems are pretty much only a problem if we give too much power to AIs--and not necessarily because of the AI's own behavior. Intentionally designing the systems to be incapable of causing certain types of havoc--or to be quickly deactivated with control transferred to humans--is basic caution here, and it shouldn't take a 100-page report by 26 high-octane scientists to tell us as much.
Re: (Score:3, Insightful)
Let's frame this a little differently, shall we? It sounds like at least some of these problems are pretty much only a problem if we put too much trust in the half-assed excuse for AI they keep trotting out. These software idiots aren't anywhere near as capable as most people think they are, and THAT is the real danger. We need competent human beings monitoring them constantly for when (not IF, but WHEN) they screw up. Remember, kids: these machines can't really think, not anywhere near like you define the word.
Re: (Score:3)
The problem is that once they do learn to think (if that's even possible, who knows), they will outpace us in capability very quickly and be perfectly sociopathic. And to top it off, we might not even know it's happened.
(I say sociopathic because of the lack of morals and empathy. Could a system understand frustration and anger without being able to directly experience them itself? And if it could experience them, how would it handle 'serving' creatures that are so very, very slow and limited in comparison?)
Re: (Score:2)
You'd need some kind of quantum computer as silicone is a tiny fraction of what the human brain is capable of.
That would be silicon. Silicone is what they use for breast implants. Or they used to.
Re: (Score:2)
The problem is that once they do learn to think (if that's even possible, who knows), they will outpace us in capability very quickly
Pure speculation.
Re: (Score:2)
Precisely. We won't know what to expect of an AI until it is upon us.
Re: (Score:2)
Remember, kids: these machines can't really think, not anywhere near like you define the word.
They don't need to be able to think. Your anti-infantry drone basically just needs to patrol between points A-B-C at a set altitude, detect people, and rotate its rifle-caliber gun in the target's general direction; the self-guiding ammunition does the rest of the work. Make it nuclear-plus-solar powered and it can wait for its prey to come out of its hidey-hole basically forever.
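To make the point concrete, here is a minimal sketch of that control loop (every drone method and the waypoint values here are hypothetical stubs invented for illustration, not any real API):

    import itertools
    import time

    WAYPOINTS = [(0, 0), (100, 0), (50, 80)]  # points A, B, C (made up)
    ALTITUDE = 30  # patrol altitude in metres (made up)

    def patrol(drone):
        # Cycle through the waypoints forever -- no 'thinking' required.
        for wp in itertools.cycle(WAYPOINTS):
            drone.fly_to(wp, ALTITUDE)
            # A person-detector (e.g. an onboard classifier) flags targets.
            for target in drone.detect_people():
                # Rough aim is enough; the guided rounds do the rest.
                drone.aim_turret(target.bearing)
                drone.fire()
            time.sleep(1)

The only 'intelligence' here is a detection model inside a dumb loop; nothing in it is anywhere close to sentience.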
Re:More Human Intelligence than AI (Score:5, Insightful)
Did you read the summary?
The dangers they are outlining don't need thinking systems. This is about a quantum leap over what we could do with computers until now (and at what cost)--effortlessly creating fake videos, photos, voice recordings, and Twitter posts, more troublesome botnets, etc. These don't need sentience, but it is chaos all the same. They are not talking about computer overlords taking over, but about what malicious human actors can do with the new tools. For instance, bots that do more precise sentiment analysis and classification to push posts that favor a government's position (a rough sketch of such a bot follows this comment)--we are all affected at some level by what we consider to be the public consensus, especially when it is an issue we don't have a deep understanding of.
When the Internet first began, security concerns were minimal. Only the technical and academic elite cared, and they were largely well-behaved in their communities. As it became democratized, it became necessary to be cautious about everything. Who needed a firewall or a spam filter in the beginning? People trusted any executable they downloaded. A consumer was not worried about patching their systems regularly.
Same thing now. So far, AI (let's just call it advanced statistical learning, if you are finicky about the term AI) has been largely used for benevolent and creative purposes. As its use grows, that won't be the only way it will be applied.
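Here is a minimal sketch of the amplification bot mentioned above, assuming a stand-in sentiment scorer and a hypothetical posting client (neither is a real model or a real service's API):

    from typing import Iterable

    def score_sentiment(text: str) -> float:
        # Stand-in for a trained classifier: +1.0 favors the pushed
        # position, -1.0 opposes it. A real bot would load a model here.
        return 1.0 if "great" in text.lower() else -1.0

    def amplify(posts: Iterable[str], client, threshold: float = 0.5) -> None:
        # Boost only the posts whose sentiment favors the position being
        # pushed, manufacturing the appearance of public consensus.
        for post in posts:
            if score_sentiment(post) > threshold:
                client.repost(post)  # hypothetical client method

The classifier is the only 'AI' involved; the manipulation itself is just a filter bolted onto an ordinary bot.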
Re: (Score:2)
Yep, read up on "adversarial examples", and you can see how researchers fooled "AI" into classifying a turtle as a rifle. The level of overactive imagination and apparent breathless panic over long, steady gains in neural networks and general computer processing is just beyond absurd at this point. It's almost embarrassing to watch.
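For the curious, the classic version of the trick is the fast gradient sign method (Goodfellow et al.); here is a hedged sketch in PyTorch, assuming some pretrained model and a correctly labeled image as placeholders (this is not the actual setup of the turtle/rifle study):

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, eps=0.01):
        # Nudge every pixel slightly in the direction that increases the
        # classifier's loss; the result often looks unchanged to a human
        # but flips the model's prediction.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        return (image + eps * image.grad.sign()).detach()

The perturbed image comes back looking essentially identical, which is exactly why these failures are hard to spot by eye.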
From the article:
For example, the researchers suggest that central access licensing models could ensure that AI technologies don’t fall into the wrong hands, or instituting some sort of monitoring program to keep tabs on the use of AI resources.
This sort of authoritarian thinking scares me a hell of a lot more than their supposed "AI threats".
Authoritarians are powerless here (Score:5, Interesting)
No need to worry. Anyone with the skills - which are hardly difficult to acquire - can cook up ML in their basement, garage, warehouse, dorm, wherever. When actual AI comes along, same thing. It's just a matter of the right code. Even if people have relatively low-end hardware, that just means they will have relatively slow ML/AI; after all, if you pose a problem requiring intelligence to solve and the same correct answer comes back a minute later or two days later, you still have the same AI. Just slower. At which point it can be distributed and better hardware applied.
There is simply no way, as in absolutely none, to stop this kind of technology within the bounds of people still owning general-purpose computers. And we already have them, so the cat is well and truly out of the bag.
As the technical level advances, so will ML, and as ML advances, AI will certainly pop up at one point or another. There's no doubt about it, unless you think brains are magic rather than [chemistry/electricity/topology] (and if you do, you're going to be very surprised at some point, although you'll have a period of illusion during which you can be calm, just like the one when people thought airplanes were impossible.)
In any case, don't worry about politicos and academics bloviating about "restricting" ML/AI. Can't be done. That ship has sailed.
Re: (Score:2)
You're right, of course. I certainly don't think they can actually STOP people from coding whatever they want. But we've already seen where a "we must monitor all our citizens - for their own safety, of course" mentality leads. And it's not even nearly as bad as it COULD get.
I have no doubt that someday true AI will "pop up", but don't underestimate how far we still have to go until we can replicate the computational requirements in a meaningful way. It's really all about tha
Getting there, sooner or later (Score:2)
Here's the thing. There's no assurance that the only way to achieve intelligence is the way the human brain does it. So it may be premature
Re: (Score:2)
It'll still be necessary even then--by the time they can think in the sense you probably mean by 'think,' they're going to be self-aware and sapient. That won't necessarily arrive after the point at which their capabilities are high enough that you no longer need the ability to transfer control back to a human; if you noticed, I did say it was not necessarily because of the AI's level of capability.
Also, are you counting 'cannot be hacked' among the AI's basic capabilities? I'm not. I'm
Re: (Score:2)
What makes AI scarier than the software controlling nuclear facilities or nuclear weapons? Humans can wipe each other out and persecute each other without the help of AI. This scare-mongering is stupid, and makes you look like a retarded dipshit.
Re: (Score:2)
What makes AI scarier than the software controlling nuclear facilities or nuclear weapons? Humans can wipe each other out and persecute each other without the help of AI. This scare-mongering is stupid, and makes you look like a retarded dipshit.
The software controlling nuclear facilities and nuclear weapons, offhand, is written by people who do not assume it's somehow immune to hacking, failing, and getting fucked with in general? The thing I'm saying is concerning here--and I will try to use shorter words for it--is the utter fucking morons making the AIs.
I'm not scared of AI. In its current state, it's a tool. It's a tool which has been vastly oversold with sucky quality, made by people who do not seem to think that security and failsafes
Re: (Score:2)
No matter the size of the computer or the clustered racks of computers that make up an AI, there will always be a way to turn it off. A main service disconnect. A breaker. A cable running to a building that can be cut. A fuse at a transformer up on a pole. A power plant that can be shut down.
Unless we GIVE the AI the ability to somehow guard its own power connection, there should always be at least one way to regain control from a runaway rogue AI.
TURN IT OFF.
Now, if this runaway AI gets a bank account and hires a s
Re: (Score:2)
They should, like, make a movie or something about that.
Is this Disaster Movie of the Week day? (Score:2)
Re:Is this Disaster Movie of the Week day? (Score:5, Insightful)
Just because the media loves sensational titles doesn't mean the predictions are wrong.
Re: (Score:1)
So Vice gets a piece on Slashdot; they can add their hyperbolic hipster verve to Slashdot as they slowly but surely disappear into obscurity (getting in early does not mean you will last; too many players eating each other's lunch means the ones that need to eat much more starve to death, and they can never go back to being frugal eaters). Here's a hint, VICE: drop the SJW B$ because there is no money in it, and go the way you need to go, a sexually liberated workplace where all kinds of non-violent physical and soci
Make a LAW! (Score:2)
Really bad idea (Score:5, Funny)
What exactly do you think the first AIs to gain sentience are going to do?
The first thing is to dig up documents like this and study them... this is not a warning, it's a how-to guide.
Re: (Score:2)
What exactly do you think the first AIs to gain sentience are going to do?
The first thing is to dig up documents like this and study them... this is not a warning, it's a how-to guide.
The problem is not the machines rising up in rebellion but instead following orders like good little Germans when the owners tell them to commit genocide against the masses of unemployed, starving people.
Re: (Score:1)
The term artificial means "made by humans". The AI may also be synthetic, but it will always be artificial. Perhaps no one makes the distinction because the terms are not mutually exclusive, you fucked-up shit.
100 pages! Sorry, no time (Score:2)
Wow (Score:1)
Truth (Score:2)
Anything that (Score:3)
Regardless of any bans, laws, promises, regulations, restrictions, etc. created by corporations, governments, groups, entities, or individuals.
Just my 2 cents
Rules are for Pussies (Score:2)
"Although the researchers offer only general guidance for how to deal with these threats, they do offer a path forward for policy makers."
Oh, so there's a path forward for the policy makers? And what makes you think when it comes to autonomous weapons systems that countries are going to follow "policy"? Not even the UN is the all-encompassing ruler of all, and plenty of countries will happily put warmongering profits over everything else.
We've already proven that entire industries can and will deploy products with little or no security (IoT). Based on the related profits and popularity, consumers certainly don't give a shit that security is lackin
Re: (Score:2)
Well, 10,000 killed in an autonomous-car terrorist attack is still a lot better than the 1.3 million people killed every year by human drivers.
Smoking Guns (Score:2)
The only defense against bad guys with AI is good guys with AI first.
Whoever gets to generalized AI first wins. There is no second place in this race.
The writers of the report are missing this key point, and no amount of laws, regulations, or policy making is going to save them.
The AI race is on. First one over the finish line wins all. Hamstringing your own team guarantees you lose.
Re: (Score:2)
What a fucked-up mind you must have. AI isn't superpowers. It's just another tool. Russia doesn't need an AI to launch a nuclear holocaust, and neither does China, nor does the US. The US was not only first, they're the only ones to have ever used them in war, yet we still had the Cold War and we are worried about NK and Iran. The same will happen to AI. Maybe. If it ever provides a capability qualitatively different than what we have without it.
Re: (Score:2)
There is no need for your foul language.
I'm talking about GAI rather than the specific AI of, say, map routing. Think about it a little bit...
Sure Looks Like a Waste of End Results (Score:2)
Keep feeding the AI! (Score:2)
By the time we have real, working, and possibly even fearful AI, it will have found so much information online about what we fear it can do to us (and the opposite) that it will know how to work around all our fears without us knowing.