The World Economic Forum Wants To Develop Global Rules for AI (technologyreview.com) 60
This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled? From a report: The World Economic Forum, the international organization that brings together the world's rich and powerful to discuss global issues at Davos each year, will host the event in San Francisco. The WEF will also announce the creation of an "AI Council" designed to find common ground on policy between nations that increasingly seem at odds over the power and the potential of AI and other emerging technologies.
The issue is of paramount importance given the current geopolitical winds. AI is widely viewed as critical to national competitiveness and geopolitical advantage. The effort to find common ground is also important considering the way technology is driving a wedge between countries, especially the United States and its big economic rival, China. "Many see AI through the lens of economic and geopolitical competition," says Michael Sellitto, deputy director of the Stanford Institute for Human-Centered AI. "[They] tend to create barriers that preserve their perceived strategic advantages, in access to data or research, for example."
Oh please (Score:5, Insightful)
They can't even develop global rules for what size of the road to drive on or what a household electrical socket should look like. Good luck agreeing on rules for AI.
Re: (Score:2)
s/size/side/
Re: (Score:2)
That's the weirdest fetish I've read about all day. Not the most extreme, just the most weird. Not that there's anything wrong with that!
Re: (Score:3)
They'd be trying to regulate the use of math.
But not all math. Basic arithmetic will still be legal.
The main problem is linear algebra. There is no reason for most people to, say, invert a matrix, yet even some high schools teach them how. This is totally irresponsible.
We also need to limit matrix size. 640k should be enough for anyone. The limit can be built into compilers, and GPUs should be limited to 1024 cores. Nobody wants to take away your gaming rig, but people should not be using them for face recognition.
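To keep the joke concrete, here is a minimal Python/NumPy sketch of what such "responsible" matrix inversion might look like; the REGULATION_MAX_DIM cap is made up, in the spirit of the 640k remark above.

    import numpy as np

    REGULATION_MAX_DIM = 640  # hypothetical legal cap on matrix size

    def responsible_invert(m):
        # Refuse to invert anything large enough to be geopolitically destabilizing.
        if max(m.shape) > REGULATION_MAX_DIM:
            raise ValueError("matrix too large for civilian use")
        return np.linalg.inv(m)

    a = np.array([[2.0, 1.0], [1.0, 2.0]])
    print(responsible_invert(a))  # a perfectly legal 2x2 inversion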
Davos has a long history of sol
Re: AI doesn't even exist! (Score:2)
I was chuckling to myself thinking about all the times "AI" has become "not AI" already.
But this is much more to the point.
Re: (Score:2)
I know a guy who'd pass a law that would define x/0=0
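For what it's worth, a few languages and proof assistants really do make this choice (Lean's natural-number division, for example, evaluates n / 0 to 0). A minimal Python sketch of the convention, with safe_div being a made-up name:

    def safe_div(x, y):
        # Total division: return 0 instead of raising when the divisor is 0.
        return 0 if y == 0 else x / y

    print(safe_div(10, 2))  # 5.0
    print(safe_div(10, 0))  # 0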
Re: (Score:2)
That's because there is only 1 rule that is debated at Davos, and it is constantly being tweaked and updated and modified to cope with modern technology.
It is: "how do we keep the peasants doing just what we tell them to".
So now they have to come up with rules for AI that will allow further exploitation of us without affecting them in the slightest.
Re: (Score:2)
Re: (Score:2)
The problem with your examples is that there are loads and loads of cars designed and people trained to drive on the 'other side of the road' and that there are millions of sockets and plugs that conform to outdated/niche standards. For AI there is no such thing.
Additionally, in the examples you mention, it is cumbersome to introduce a transitional period. In the case of the side of the road to drive on it is simply impossible. A moment would have to be chosen at which it switches for the entire country. In
That's right (Score:3)
If it's intelligent, it has rights (Score:5, Insightful)
Re: (Score:2)
Machines are generally made to perform some specific task. If the machine were intelligent, however, requiring such a machine to perform even its designated function would be slavery... therefore we have an ethical obligation never to make a machine with a specific purpose in mind that is also intelligent, unless we are also willing to make such machines that don't do anything we might want them to.
Is that what you are saying?
Re: (Score:3)
Except you're putting "Intelligence" on some sort of a pedestal. The goombas from Super Mario Bros. display intelligence. A single if-else clause. Don't knock it, it's still a form of artificial intelligence. Just because it's simple and not self-learning doesn't categorically change it. Cockroaches are intelligent. Plants are intelligent. Not as much as me or you, but they display appropriate responses to their environment and stimuli. Boom.
You're thinking of "Sentience", or "Sapience", or "Consciousness". All
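As an aside, the "single if-else clause" being described might look something like this hypothetical sketch (goomba behavior paraphrased, not actual game code):

    def goomba_step(blocked_ahead, direction):
        # The entire "AI": walk until you bump into something, then turn around.
        if blocked_ahead:
            return -direction
        else:
            return direction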
Re: (Score:2)
Except you're putting "Intelligence" on some sort of a pedestal.
Well, yeah. That's what's special. It's so important, we search for it among the stars.
The goombas from Super Mario Bros. display intelligence. A single if-else clause. Don't knock it, it's still a form of artificial intelligence.
No. It's a form of simulated intelligence. It's just following rules and not making decisions for chaotic reasons. Complexity is what we value. Entropy is ever trying to stamp it out.
Just because it's simple and not self-learning doesn't categorically change it.
Yes, it does.
Cockroaches are intelligent. Plants are intelligent. Not as much as me or you, but they display appropriate responses to their environment and stimuli. Boom.
Cockroaches can learn. It seems as though plants can also learn. They're alive, they're learning, maybe they're even both intelligent. But one cockroach is just like another to us, and they smell bad, so we don't value them.
You're thinking of "Sentience", or "Sapience", or "Consciousness". All of which is bait for the philosophers to come out of the woodwork and brew up a storm of pointless drivel.
More?
Re: (Score:2)
The most common measure is suffering. If something causes suffering, we should avoid it or take steps to minimize the suffering.
Plants don't really suffer in any meaningful way, and in fact many have parts that are designed to be eaten as part of their life cycle. Cows do, though: they react in a way that we have scientifically determined is a reaction to pain and which causes them psychological harm, so we stun them before slaughter.
So the question with AI is if it has the capability to experience suffering,
Re: (Score:2)
Trees and plans suffer. [howstuffworks.com] That "fresh cut grass" smell is a distress chemical released when they take damage. It's plants screaming in agony as a warning to others. You'd just like to believe that they don't because that's convenient for our lifestyle. We gotta eat.
This isn't some fringe bullshit, it's backed up by studies. You can delude yourself and ignore it. You can even claim that it's different for cows, and it is; cows can move.
If you're ok with eating a loaf of bread knowing that a living creature suffer
Re: (Score:2)
Plants, bacteria, viruses and the like don't have the intelligence to suffer.
I put the bar pretty low, I feel bad about killing spiders, but I'm pretty relaxed about turning off my PC or brushing my teeth.
Re: (Score:2)
You JUUUUUUUST said "The most common measure [of if something is intelligent] is suffering."
Now that you're presented with evidence that plans suffer, "[they] don't have the intelligence to suffer." !?!?
Those three things cannot all be true. There's a very obvious logical fault here. The factual evidence from the observable world in a peer reviewed journal isn't the one that's wrong. One of your two statements cannot hold. Either your definition of intelligence is wrong or plants are intelligent AND su
Re: (Score:2)
tsk, that's twice I misspelled plants. There's suffering for you. I must be so intelligent.
Re: (Score:2)
Seems like we have a different definition of suffering. You seem to be saying that any reaction to injury is suffering, but I'm arguing that autonomous reactions are just that and I don't consider what amount to biological machines to be suffering.
You could say it's similar to a car reacting to a crash - it might apply brakes, deploy airbags, disconnect the HV battery etc, but I wouldn't call any of that "suffering".
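To make the analogy concrete, a hypothetical crash-response handler like the sketch below reacts to damage, but by this argument the reaction isn't suffering; all names are invented for illustration.

    def on_crash_detected(severity):
        # Purely reflexive responses to damage; no experience involved.
        actions = []
        if severity > 0:
            actions += ["apply brakes", "deploy airbags"]
        if severity > 5:
            actions.append("disconnect HV battery")
        return actions

    print(on_crash_detected(7))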
Re: (Score:2)
Well, your definition of suffering is... what? "they react in a way that we have scientifically determined is a reaction to pain"? I am informing you that plants react in a way that we have scientifically determined is a reaction to pain. They also tell their friends about the danger, who in turn prepare to get hurt. This is equivalent to screaming and the others flinching or bracing themselves. Did you know that giraffes have to "stalk" their prey? If they eat leaves upwind of other trees, the t
Re: (Score:2)
No, my definition of suffering is that they are capable of experiencing some kind of psychological reaction to pain.
Re: (Score:2)
So if it can be shown that plants that are damaged have not just responses to pain but lasting altered behavior, would that suffice? (Otherwise, wtf is the psychology of a cow?)
Re: (Score:2)
Any intelligent being we create must be granted full rights equivalent to humans or we are monsters.
But we are monsters.
Re: (Score:2)
Dogs and dolphins are considered intelligent. We haven't granted them full human-equivalent rights.
Granted, we didn't create those.
Hasn't that been done? (Score:4, Informative)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Re: (Score:3)
Is a child human? What about an elderly person with dementia? Is a fetus human?
Define "harm".
This is, of course, rhetorical... but even the most cursory examination of the Three Laws reveals that they could not be implemented in any way that was not, in fact, entirely subjective to the biases of the creator of an individual robot.
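A hypothetical sketch of why: any machine-checkable version of the First Law has to bottom out in predicates like is_human() and would_harm(), and whoever writes those stubs encodes their own answers to exactly the questions above. The NotImplementedError placeholders are the point; all names here are invented.

    def is_human(entity):
        # Child? Fetus? Person with dementia? Someone has to decide.
        raise NotImplementedError("the builder's biases go here")

    def would_harm(action, entity):
        # Physical injury only? Economic harm? Emotional harm?
        raise NotImplementedError("and here")

    def first_law_permits(action, affected_entities):
        return all(not (is_human(e) and would_harm(action, e))
                   for e in affected_entities)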
Re:Hasn't that been done? (Score:4, Insightful)
Re: (Score:2)
The EU has already made progress on this and it's been working well so far.
For example, under GDPR you have the right to have decisions made about you explained and reviewed by a human. So if the AI says no, you have a right to know why it said no and for a human to review that decision.
That prevents companies hiding behind black box AIs.
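As a rough illustration of what that implies for system design (not a statement of what the GDPR literally requires): an automated decision has to be stored with human-readable reasons and a path to human review. A minimal Python sketch, with all names made up:

    from dataclasses import dataclass, field

    @dataclass
    class AutomatedDecision:
        subject_id: str
        outcome: str                                  # e.g. "loan denied"
        reasons: list = field(default_factory=list)   # human-readable factors behind the outcome
        reviewer: str = ""                            # filled in once a person looks at it

        def request_review(self, reviewer, new_outcome=None):
            # A human re-examines the automated decision and may override it.
            self.reviewer = reviewer
            if new_outcome is not None:
                self.outcome = new_outcome
            return self.outcome

    decision = AutomatedDecision("user-42", "loan denied",
                                 reasons=["income below threshold", "short credit history"])
    print(decision.reasons)                       # what the subject is entitled to see
    print(decision.request_review("analyst-7"))   # and a person can review the decision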
Re: (Score:3)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A Tomahawk cruise missile is a robot by most definitions. It obeys none of these laws.
Re: (Score:2)
I am going to use this, thank you.
Re: (Score:2)
Re: (Score:2)
Remind me again how well Asimov's Three Laws worked in his own stories?
Anyone reading those stories should have walked away with an awareness of the shortcomings and unintended consequences of the Laws, as well as an understanding that humans are terrible at crafting sufficient laws that avoid those pitfalls. Laws may sound good, but they rarely work well in practice.
Re: (Score:2)
Re: (Score:2)
Sure, but people are stupid [wikipedia.org].
Starting Rules (Score:2)
In other words.. (Score:2)
...someone submitted a proposal to get some funding so they can avoid joining the real world and getting a real job.
No Kill-Bots. (Score:2)
enforcement (Score:2)
So, you're going to have laws about what I'm allowed to do on my own computer. (Not unprecedented, thanks to DMCA.) And you'll be able to enforce these laws because you'll know what I'm doing on my computer?
That means, next up: it is necessary and proper that, to enforce interstate commerce regulations, the federal government must be able to remotely search and inspect any computer.