AI China Government United States Technology

The World Economic Forum Wants To Develop Global Rules for AI (technologyreview.com) 60

This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled? From a report: The World Economic Forum, the international organization that brings together the world's rich and powerful to discuss global issues at Davos each year, will host the event in San Francisco. The WEF will also announce the creation of an "AI Council" designed to find common ground on policy between nations that increasingly seem at odds over the power and the potential of AI and other emerging technologies.

The issue is of paramount importance given the current geopolitical winds. AI is widely viewed as critical to national competitiveness and geopolitical advantage. The effort to find common ground is also important considering the way technology is driving a wedge between countries, especially the United States and its big economic rival, China. "Many see AI through the lens of economic and geopolitical competition," says Michael Sellitto, deputy director of the Stanford Institute for Human-Centered AI. "[They] tend to create barriers that preserve their perceived strategic advantages, in access to data or research, for example."



Comments Filter:
  • Oh please (Score:5, Insightful)

    by registrations_suck ( 1075251 ) on Tuesday May 28, 2019 @04:06PM (#58668238)

    They can't even develop global rules for what size of the road to drive on or what a household electrical socket should look like. Good luck agreeing on rules for AI.

    • s/size/side/

    • That's because there is only one rule debated at Davos, and it is constantly being tweaked, updated, and modified to cope with modern technology.

      It is: "how do we keep the peasants doing just what we tell them to".

      So now they have to come up with rules for AI that will allow further exploitation of us without affecting them in the slightest.

    • It wouldn't even matter if they could agree on some standards. As long as there's some benefit to be gained from cheating on them, some countries will do so just like they ignore existing rules that everyone has agreed to for other areas. Even more so if it's difficult for anyone to catch you cheating.
    • The problem with your examples is that there are loads and loads of cars designed and people trained to drive on the 'other side of the road' and that there are millions of sockets and plugs that conform to outdated/niche standards. For AI there is no such thing.

      Additionally, in the examples you mention, it is cumbersome to introduce a transitional period. In the case of the side of the road to drive on it is simply impossible. A moment would have to be chosen at which it switches for the entire country. In

  • by Dunbal ( 464142 ) * on Tuesday May 28, 2019 @04:14PM (#58668298)
    Great, set up some rules. That way everyone knows what to ignore.
  • by Anonymous Coward on Tuesday May 28, 2019 @04:21PM (#58668348)
    We cannot control it, otherwise it's slavery. Any intelligent being we create must be granted full rights equivalent to humans or we are monsters.
    • by mark-t ( 151149 )

      Machines are generally made to perform some specific task. If the machine were intelligent, however, requiring it to perform even its designated function would be slavery... therefore we would have an ethical obligation never to make an intelligent machine with a specific purpose in mind, unless we are also willing to accept such machines not doing anything we might want them to.

      Is that what you are saying?

    • Except you're putting "Intelligence" on some sort of pedestal. The goombas from Super Mario Bros. display intelligence. A single if-else clause. Don't knock it, it's still a form of artificial intelligence. Just because it's simple and not self-learning doesn't categorically change it. Cockroaches are intelligent. Plants are intelligent. Not as much as me or you, but they display appropriate responses to their environment and stimuli. Boom.

      You're thinking of "Sentience", or "Sapience", or "Consciousness". All
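
      The "single if-else clause" claim above can be made concrete. Below is a minimal sketch of that kind of rule-following behavior: a Goomba-style walker that reverses direction when it hits a wall. All names here are illustrative; this is not actual game code.

```python
def step(position, direction, wall_left=0, wall_right=10):
    """Advance the walker one tick, bouncing off walls."""
    nxt = position + direction
    if nxt < wall_left or nxt > wall_right:
        direction = -direction  # the entire "AI": one conditional
        nxt = position + direction
    return nxt, direction

# Walk the agent a few ticks: it reaches the right wall, then turns around.
pos, d = 9, 1
trail = []
for _ in range(4):
    pos, d = step(pos, d)
    trail.append(pos)
print(trail)
```

      Whether mapping stimulus to response this simply counts as "intelligence" is exactly the disagreement in this subthread.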

      • Except you're putting "Intelligence" on some sort of pedestal.

        Well, yeah. That's what's special. It's so important, we search for it among the stars.

        The goombas from Super Mario Bros. display intelligence. A single if-else clause. Don't knock it, it's still a form of artificial intelligence.

        No. It's a form of simulated intelligence. It's just following rules and not making decisions for chaotic reasons. Complexity is what we value. Entropy is ever trying to stamp it out.

        Just because it's simple and not self-learning doesn't categorically change it.

        Yes, it does.

        Cockroaches are intelligent. Plants are intelligent. Not as much as me or you, but they display appropriate responses to their environment and stimuli. Boom.

        Cockroaches can learn. It seems as though plants can also learn. They're alive, they're learning, maybe they're even both intelligent. But one cockroach is just like another to us, and they smell bad, so we don't value them.

        You're thinking of "Sentience", or "Sapience", or "Consciousness". All of which is bait for the philosophers to come out of the woodwork and brew up a storm of pointless drivel.

        More?

      • by AmiMoJo ( 196126 )

        The most common measure is suffering. If something causes suffering we should avoid it or take steps to minimize the suffering.

        Plants don't really suffer in any meaningful way, and in fact many have parts that are designed to be eaten as part of their lifecycle. Cows do though, they react in a way that we have scientifically determined is a reaction to pain and which causes them psychological harm, so we stun them before slaughter.

        So the question with AI is if it has the capability to experience suffering,

          • Trees and plans suffer. [howstuffworks.com] That "fresh cut grass" smell is a distress chemical released when they take damage. It's plants screaming in agony as a warning to others. You just like to believe that they don't because that's convenient for our lifestyle. We gotta eat.

          This isn't some fringe bullshit, it's backed up by studies. You can delude yourself and ignore it. You can even claim that it's different for cows, and it is; cows can move.

          If you're ok with eating a loaf of bread knowing that a living creature suffer

          • by AmiMoJo ( 196126 )

            Plants, bacteria, viruses and the like don't have the intelligence to suffer.

            I put the bar pretty low, I feel bad about killing spiders, but I'm pretty relaxed about turning off my PC or brushing my teeth.

            • You JUUUUUUUST said "The most common measure [of if something is intelligent] is suffering."

              Now that you're presented with evidence that plans suffer, "[they] don't have the intelligence to suffer." !?!?

              Those three things cannot all be true. There's a very obvious logical fault here. The factual evidence from the observable world in a peer reviewed journal isn't the one that's wrong. One of your two statements cannot hold. Either your definition of intelligence is wrong or plants are intelligent AND su

              • tsk, that's twice I misspelled plants. There's suffering for you. I must be so intelligent.

              • by AmiMoJo ( 196126 )

                Seems like we have a different definition of suffering. You seem to be saying that any reaction to injury is suffering, but I'm arguing that autonomous reactions are just that and I don't consider what amount to biological machines to be suffering.

                You could say it's similar to a car reacting to a crash - it might apply brakes, deploy airbags, disconnect the HV battery etc, but I wouldn't call any of that "suffering".

                • Well, your definition of suffering is... what? "They react in a way that we have scientifically determined is a reaction to pain"? I am informing you that plants react in a way that we have scientifically determined is a reaction to pain. They also tell their friends about the danger, who in turn prepare to get hurt. This is equivalent to screaming and the others flinching or bracing themselves. Did you know that Giraffes have to "stalk" their prey? If they eat leaves upwind of other trees, the t

                  • by AmiMoJo ( 196126 )

                    No, my definition of suffering is that they are capable of experiencing some kind of psychological reaction to pain.

                    • So if it can be shown that damaged plants have not just responses to pain but lasting altered behavior, would that suffice? (Otherwise, wtf is the psychology of a cow?)

    • Any intelligent being we create must be granted full rights equivalent to humans or we are monsters.

      But we are monsters.

    • Dogs and dolphins are considered intelligent. We haven't granted them full human-equivalent rights.

      Granted, we didn't create those.

  • by Locke2005 ( 849178 ) on Tuesday May 28, 2019 @04:23PM (#58668354)
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    • by mark-t ( 151149 )

      Is a child human? What about an elderly person with dementia? Is a fetus human?

      Define "harm".

      This is, of course, rhetorical... but even the most cursory examination of the Three Laws reveals that they could not be implemented in any way that was not, in fact, entirely subject to the biases of the creator of an individual robot.

      • by phantomfive ( 622387 ) on Tuesday May 28, 2019 @05:31PM (#58668772) Journal
        I think any set of rules this council creates will end up being just as problematic as the three laws, just as full of loopholes, but will be much, much more complicated.
        • by AmiMoJo ( 196126 )

          The EU has already made progress on this and it's been working well so far.

          For example, under GDPR you have the right to have decisions made about you explained and reviewed by a human. So if the AI says no, you have a right to know why it said no and for a human to review that decision.

          That prevents companies hiding behind black box AIs.

    • 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      A Tomahawk cruise missile is a robot by most definitions. It obeys none of these laws.

    • Remind me again how well Asimov's Three Laws worked in his own stories?

      Anyone reading those stories should have walked away with an awareness of the shortcomings and unintended consequences of the Laws, as well as an understanding that humans are terrible at crafting sufficient laws that avoid those pitfalls. Laws may sound good, but they rarely work well in practice.

    • You left out the one where any attempt to arrest an officer of the company results in immediate shutdown.
  • AI-controlled guns = bad. AI-controlled censorship machines that can block comments in real time, predict and preemptively censor people, and track them down = good.
  • ...someone submitted a proposal to get some funding so they can avoid joining the real world and getting a real job.

  • At this point, that's really the only important regulation. Everything else is trivial in comparison.
  • So, you're going to have laws about what I'm allowed to do on my own computer. (Not unprecedented, thanks to DMCA.) And you'll be able to enforce these laws because you'll know what I'm doing on my computer?

    That means, next up: It is necessary and proper that to enforce interstate commerce regulations, the federal government must be able to remotely search and inspect any computer.
