AI Privacy Security Software

100-Page Report Warns of the Many Dangers of AI (vice.com) 62

dmoberhaus writes: Last year, 26 top AI researchers from around the globe convened in Oxford to discuss the biggest threats posed by artificial intelligence. The result of this two-day conference was published today as a 100-page report. The report details three main areas where AI poses a threat: political systems, physical systems, and cybersecurity. It discusses the specifics of these threats, which range from political strife caused by fake AI-generated videos to catastrophic failures of smart homes and autonomous vehicles, as well as intentional threats such as autonomous weapons. Although the researchers offer only general guidance on how to deal with these threats, they do offer a path forward for policy makers.
Comments:
  • It sounds like at least some of these problems are pretty much only a problem if we give too much power to AIs, and they aren't even necessarily going to happen because of the AI's behavior. Intentionally designing the systems to be incapable of causing certain types of havoc, or to be very quickly deactivated with control transferred to humans, is basic caution here; we don't really need a 100-page report by 26 high-octane scientists to tell us as much.

    • Re: (Score:3, Insightful)

      *sigh*

      Let's frame this a little differently, shall we? It sounds like at least some of these problems are pretty much only a problem if we put too much trust in the half-assed excuse for AI they keep trotting out. These software idiots aren't anywhere near as capable as most people think they are, and THAT is the real danger. We need competent human beings monitoring them constantly for when (not IF, but WHEN) they screw up. Remember, kids: these machines can't really think, not anywhere near like you define the word.
      • The problem is that once they do learn to think (if that's even possible, who knows), they will outpace us in capability very quickly and be perfectly sociopathic. And to top it off, we might not even know it's happened.

        (I say sociopathic because of the lack of morals and empathy. Could a system understand frustration and anger without being able to directly experience them itself? And if it could experience them, how would it handle 'serving' creatures that are so very, very slow and limited in comparison?)

        • by Dog-Cow ( 21281 )

          The problem is that once they do learn to think (if that's even possible, who knows), they will outpace us in capability very quickly

          Pure speculation.

      • Remember, kids: these machines can't really think, not anywhere near like you define the word.

        They don't need to be able to think. Your anti-infantry drone basically just needs to patrol between points A-B-C at a set altitude, detect people, and rotate its rifle-caliber gun toward the target's general direction; the self-guiding ammunition does the rest of the work. Make it nuclear+solar powered and it can wait for its prey to come out of its hidey-hole basically forever.

      • by jma05 ( 897351 ) on Wednesday February 21, 2018 @06:47PM (#56166849)

        Did you read the summary?

        The dangers they are outlining don't need thinking systems. This is about a quantum leap over what we could do with computers until now (and at what cost): effortlessly creating fake videos, photos, voice recordings, and Twitter posts, more troublesome botnets, etc. These don't need sentience, but it is chaos all the same. They are not talking about computer overlords taking over, but about what malicious human actors can do with the new tools. For instance, bots that do more precise sentiment analysis and classification to push posts that favor a government's position (a toy sketch of this follows at the end of this comment). We are all affected at some level by what we consider to be the public consensus, especially when it is an issue we don't have a deep understanding of.

        When the Internet first began, security concerns were minimal. Only the technical and academic elite cared, and they were largely well-behaved in their communities. As it became democratized, it became necessary to be cautious about everything. Who needed a firewall or a spam filter in the beginning? People trusted any executable they downloaded. A consumer was not worried about patching their systems regularly.

        Same thing now. So far, AI (let's just call it advanced statistical learning, if you are finicky about the term AI) has been used largely for benevolent and creative purposes. As its use grows, that won't be the only way it is applied.
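
        To make the sentiment-analysis point above concrete, here is a toy sketch in Python. It uses NLTK's stock VADER analyzer; the example posts and the 0.5 threshold are invented for illustration, and a real influence operation would obviously be more elaborate.

```python
# Toy illustration: off-the-shelf sentiment analysis is already good
# enough to sort posts by how favorable they are to some position,
# so a bot could amplify only the favorable ones.
# Requires: nltk.download("vader_lexicon") before first use.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

posts = [
    "The new policy is a disaster for working families.",
    "Honestly, the government handled this crisis brilliantly.",
    "Not sure how I feel about the latest announcement.",
]

analyzer = SentimentIntensityAnalyzer()

# Keep only posts whose compound score (-1 to +1) reads as clearly
# favorable; a real bot would then upvote, repost, or retweet these.
favorable = [p for p in posts
             if analyzer.polarity_scores(p)["compound"] > 0.5]

for post in favorable:
    print("amplify:", post)
```

        Nothing in that requires sentience; it is a dozen lines wrapped around a stock library, which is exactly the point.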

      • Yep, read up on "adversarial objects", and you can see how researchers fooled "AI" into thinking a turtle was a rifle (a minimal sketch of the underlying technique follows at the end of this comment). The level of overactive imagination and apparent breathless panic over long, steady gains in neural networks and general computer processing is just beyond absurd at this point. It's almost embarrassing to watch.

        From the article:

        For example, the researchers suggest that central access licensing models could ensure that AI technologies don’t fall into the wrong hands, or instituting some sort of monitoring program to keep tabs on the use of AI resources.

        This sort of authoritarian thinking scares me a hell of a lot more than their supposed "AI threats".
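
        As promised above, here is a minimal sketch of the kind of adversarial perturbation behind the turtle/rifle result: the fast gradient sign method (FGSM), in Python/PyTorch. The tiny linear "classifier" and the random image are stand-ins so the example runs end to end; this is not the actual turtle experiment.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction
# that increases the classifier's loss, producing an image that looks
# unchanged to a human but can flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # image.grad.sign() points each pixel the "wrong" way for the model.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Stand-in model and input, just so the sketch is self-contained.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # one fake 32x32 RGB "photo"
label = torch.tensor([3])          # its (pretend) true class

adversarial = fgsm_attack(model, image, label)
print("max pixel change:", (adversarial - image).abs().max().item())
```

        A nudge imperceptible to a human can flip the model's answer, which says more about how brittle today's "AI" is than about any impending robot uprising.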

        • by fyngyrz ( 762201 ) on Wednesday February 21, 2018 @08:12PM (#56167221) Homepage Journal

          This sort of authoritarian thinking scares me a hell of a lot more than their supposed "AI threats".

          No need to worry. Anyone with the skills - which are hardly difficult to acquire - can cook up ML in their basement, garage, warehouse, dorm, wherever. When actual AI comes along, same thing: it's just a matter of the right code. Even if people have relatively low-end hardware, that just means they will have relatively slow ML/AI; after all, if you hand a problem that requires intelligence to a system and the same correct answer comes back a minute later, or two days later, you still have the same AI. Just slower. At which point it can be distributed and better hardware applied.

          There is simply no way, as in absolutely none, to stop this kind of technology within the bounds of people still owning general-purpose computers. And we already have them, so the cat is well and truly out of the bag.

          As the technical level advances, so will ML, and as ML advances, AI will certainly pop up at one point or another. There's no doubt about it, unless you think brains are magic rather than [chemistry/electricity/topology] (and if you do, you're going to be very surprised at some point, although you'll have a period of illusion during which you can be calm, just like the one when people thought airplanes were impossible.)

          In any case, don't worry about politicos and academics bloviating about "restricting" ML/AI. Can't be done. That ship has sailed.

          • You're right, of course. I certainly don't think they can actually STOP people from coding whatever they want. But we've already demonstrated where a "we must monitor all our citizens - for their own safety, of course" mentality leads. And it's not even nearly as bad as it COULD get.

            I have no doubt that someday true AI will "pop up", but don't underestimate how far we still have to go before we can replicate the computational requirements in a meaningful way. It's really all about the raw computational power.

            • A back-of-the-envelope calculation based on this article shows that something like 100 million modern processors (adjusted for modern speed increases) would currently be required to simulate the human brain in real time (rough arithmetic at the end of this comment). That can be reduced significantly with specially designed hardware, but it shows that we've got a ways to go before we reach that threshold in any practical manner.

              Here's the thing. There's no assurance that the only way to achieve intelligence is the way the human brain does it. So it may be premature to treat that number as a hard requirement.
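
              For what it's worth, the 100-million-processor figure above can be reproduced with rough, commonly cited orders of magnitude. All of the numbers below are assumptions for a back-of-the-envelope estimate, not measurements.

```python
# Back-of-the-envelope: how many commodity processors would be needed
# to match the brain's raw event rate in real time, under rough assumptions.
synapses = 1e15           # ~10^15 synapses in a human brain (rough)
firing_rate_hz = 100      # ~100 Hz signaling rate per synapse (rough)
ops_per_event = 1         # treat each synaptic event as one "op"

brain_ops_per_sec = synapses * firing_rate_hz * ops_per_event  # ~10^17

cpu_ops_per_sec = 1e9     # ~10^9 useful ops/s for one modern core (rough)

processors_needed = brain_ops_per_sec / cpu_ops_per_sec
print(f"processors needed: {processors_needed:.0e}")  # ~1e+08, i.e. 100 million
```

              Change any of those assumptions by a factor of ten and the answer moves by a factor of ten; that's the nature of back-of-the-envelope estimates.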

      • It'll still be necessary even then--by the time they can think in the sense you probably mean by 'think,' they're going to be self-aware and sapient. That won't necessarily happen only after they're so capable that you no longer need the ability to transfer control back to a human; if you noticed, I did say it was not even necessarily because of the AI's level of capability.

        Also, are you wrapping 'cannot be hacked' into the AI's basic capabilities? I'm not. I'm

        • by Dog-Cow ( 21281 )

          What makes AI scarier than the software controlling nuclear facilities or nuclear weapons? Humans can wipe each other out and persecute each other without the help of AI. This scare-mongering is stupid, and makes you look like a retarded dipshit.

          • What makes AI scarier than the software controlling nuclear facilities or nuclear weapons? Humans can wipe each other out and persecute each other without the help of AI. This scare-mongering is stupid, and makes you look like a retarded dipshit.

            The software controlling nuclear facilities and nuclear weapons, offhand, is written by people who do not assume they're somehow immune from hacking, failing, and getting fucked with in general. The thing I'm saying is concerning here, and I will try to use shorter words for it: the utter fucking morons making the AIs.

            I'm not scared of AI. In its current state, it's a tool. It's a tool which has been vastly oversold with sucky quality, made by people who do not seem to think that security and failsafes matter.

      • No matter the size of the computer, or the clustered racks of computers, that makes up an AI, there will always be a way to turn it off. A main service disconnect. A breaker. A cable running to a building that can be cut. A fuse at a transformer up on a pole. A power plant that can be shut down.

        Unless we GIVE the AI the ability to somehow guard its own power connection, there should always be at least one way to regain control from a runaway rogue AI.

        TURN IT OFF.

        Now, if this runaway AI gets a bank account and hires a s

  • Slashdot is starting to sound like a Michael Bay movie.
    • by OrangeTide ( 124937 ) on Wednesday February 21, 2018 @06:06PM (#56166675) Homepage Journal

      Just because the media loves sensational titles doesn't mean the predictions are wrong.

    • by rtb61 ( 674572 )

      So Vice gets a piece on Slashdot; they can add their hyperbolic hipster verve to Slashdot as they slowly but surely disappear into obscurity (getting in early does not mean you will last; with too many players eating each other's lunch, the ones that need to eat much more starve to death, and they can never go back to being frugal eaters). Here's a hint, VICE: drop the SJW B$ because there is no money in it and go the way you need to go, a sexually liberated workplace where all kinds of non-violent physical and soci

  • Perhaps there needs to be a law requiring someone to walk in front of an AI with a red flag to warn people that an AI is coming?
  • by SuperKendall ( 25149 ) on Wednesday February 21, 2018 @06:13PM (#56166703)

    What exactly do you think the first AIs to gain sentience are going to do?

    The first thing is to dig up documents like this and study them... this is not a warning, it's a how-to guide.

    • Next they will eliminate the authors of the study!
    • What exactly do you think the first AIs to gain sentience are going to do?

      The first thing is to dig up documents like this and study them... this is not a warning, it's a how-to guide.

      The problem is not the machines rising in rebellion, but the machines following orders like good little Germans when their owners tell them to genocide the masses of unemployed, starving people.

  • Busy reading Nick Bostrom's Superintelligence; see https://www.goodreads.com/book... [goodreads.com].
  • Wow, the AI hype is unbelievable. I haven't seen such hype since VR was a thing. Here is a hint: there is no such thing currently as "AI". None. Zilch. Nada.
  • Those of us who lived through the events leading up to the Butlerian Jihad know the truth of this warning. If only Kevin Anderson hadn't ruined it.
  • by oldgraybeard ( 2939809 ) on Wednesday February 21, 2018 @06:27PM (#56166755)
    Anything that can be conceived involving AI and robotics will be built and tried by someone, if they have the means to do it!
    Regardless of any bans, laws, promises, regulations, restrictions, etc. created by any corporation, government, group, entity, or individual.

    Just my 2 cents ;)
  • "Although the researchers offer only general guidance for how to deal with these threats, they do offer a path forward for policy makers."

    Oh, so there's a path forward for the policy makers? And what makes you think that, when it comes to autonomous weapons systems, countries are going to follow "policy"? Not even the UN is the all-encompassing ruler of all, and plenty of countries will happily put warmongering profits over everything else.

    We've already proven that entire industries can and will be deployed with little or no security (IoT). Based on related profits and popularity, consumers certainly don't give a shit that security is lacking.

    • Well, 10,000 killed in an autonomous-car terrorist attack is still a lot better than the 1.3 million people killed every year by human drivers.

  • The only defense against bad guys with AI is good guys with AI first.

    Whoever gets to generalized AI first wins. There is no second place in this race.

    The writers of the report are missing this key point and no amount of laws, regulations or policy making is going to save them.

    The AI race is on. First one over the finish line wins all. Hamstringing your own team guarantees you lose.

    • by Dog-Cow ( 21281 )

      What a fucked-up mind you must have. AI isn't superpowers. It's just another tool. Russia doesn't need an AI to launch a nuclear holocaust, and neither does China, nor does the US. The US was not only first, they're the only ones to have ever used them in war, yet we still had the Cold War and we are worried about NK and Iran. The same will happen to AI. Maybe. If it ever provides a capability qualitatively different than what we have without it.

      • by pubwvj ( 1045960 )

        There is no need for your foul language.

        I'm talking about GAI rather than the specific AI of, say, map routing. Think about it a little bit...

  • All those experts and scholars, and not a single one of them, nor any group of them, could duplicate the Three Laws. I guess it must be hard to... imagine?
  • By the time we have a real, working, and possibly fearsome AI, it will find so much information online about what we fear it can do to us (and the opposite) that it will know how to work around all our fears without us knowing.
