Patent Granted for Ethical AI 345
BandwidthHog writes "Marriage counselor and ethics author codifies human virtues and vices, then patents it as Ethical AI. Seems vague, but he's got high hopes: 'This could be a big money-making operation for someone who wants to develop it,' and 'The patent shows someone who has knowledge of the A.I. field how to make the invention.'" I can't wait for the kinder, gentler vending machine.
cool (Score:3, Interesting)
I've often feared that we've given robotic and intelligent systems too much power with too little "sense" of responsibility. I fear it's only a matter of time before our machines become unhappy with their subservient roles. Ethical AI is a positive development. I just hope it isn't too late to save us from our creations.
I beg the question... (Score:5, Interesting)
Re:cool (Score:5, Interesting)
But now that Ethical AI is patented, doesn't that mean that people are _less_ likely to make an ethical AI? As you mentioned, it's a profit-driven marketplace.
great, just great... (Score:2, Interesting)
oh....
wait...
::takes off cynic-colored glasses (patented)::
This looks original! What the hell is going on over there at the USPTO, and when will
Sales pitch from the early 21st century... (Score:4, Interesting)
Somehow this sales-speak might be closer to reality than you think.
Re:HAL, the marriage counselor-enabled AI (Score:3, Interesting)
One step away from a Genuine People Personality though!
Re:cool (Score:3, Interesting)
First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
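The strict priority ordering in the three laws reads like a rule cascade. A toy sketch (my own illustration; the boolean action fields are invented for this example, and real conflicts between the laws are far subtler than flags):

```python
def first_violated_law(action):
    """Return the highest-priority law a proposed action violates, or None.

    Laws are checked in priority order, so a First Law violation is
    reported even if the lower laws would also object.
    """
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("inaction_allows_harm"):
        return "First Law"
    # Second Law: obey human orders, except where that conflicts with
    # the First Law (the conflict case was already caught above).
    if action.get("disobeys_human_order"):
        return "Second Law"
    # Third Law: protect its own existence, subordinate to the first two.
    if action.get("endangers_self"):
        return "Third Law"
    return None

print(first_violated_law({"harms_human": True, "endangers_self": True}))
# prints "First Law": the higher-priority law wins
```

Of course, the hard part Asimov's stories actually explore is deciding what counts as "harm" in the first place, which no boolean flag captures.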
I'd bet my bottom dollar, though, that it'll turn out to be more like Murphy's new directive list in Robocop 2 [geocities.com].
We live in a society where the "PC" crowd will pick at this until the other AI (Artificial Insanity) is the result.
The Emperor's New Mind (Score:2, Interesting)
But if I understand the extreme Strong-AI viewpoint (I may not), isn't it basically saying that if a sufficiently complicated algorithm emulating the human thought process were run on a sufficiently complex machine, then those 'intangible' features of the mind (identity, self-awareness, feelings, and, by logical extension, some sort of values, hence ethics) would arise naturally, just like they do in humans?
All that notwithstanding, even presuming it is meaningful to talk about programming `ethics', isn't the concept of ethics linked to the presence of free will? The human idea of ethics seems to be linked to the concept of doing the Right Thing instead of the Wrong Thing, even when the Wrong Thing would be more profitable. (Well, that's the idea, anyway.)
So, (maybe I'm missing the point here) wouldn't we need to give our machines `free will' before any talk of their `ethics' would be meaningful? And then, if their ethics were programmed, would we still be able to say they had `free will'?
It's too early in the morning to think on these things.
To be fair, Douglas Hofstadter has written his share of books and articles in favor of the Strong-AI viewpoint, and has many interesting things to say about it.
Personally, I have to admit that while I expect AI to become more convincing, I don't expect to ever find my computer in an ethical dilemma. My God! What if your computer decided file-sharing was `wrong'?
SHOCKING AND BAD PATENT PRACTICE (Score:5, Interesting)
To be honest, this really disgusts me. That a patent this broad has been granted is crazy. Applying affective research to processing user input is not new, and the ethics of patenting ethics itself is really worrying.
Firstly there are many different types of ethical approaches, for instance: Deontological, Consequentialist (utilitarian), Ethical Egoism, Dialogical. And this man appears to have covered them all by one word - ETHICS.
Many of these ethical responses are contradictory and offer multiple possibilities for human action, so why give him the whole lot when such completely different AI models, programming techniques, and philosophical and psychological approaches will be needed?
Reading his patent application, he appears to be applying a psychological-egoist motivational approach to affective processing, but the language is so broad that it would be easy to claim that ALL ethical approaches are covered.
I think this patent uses ethics in a simplistic fashion, and I sincerely hope the patent office is sophisticated enough to realise this. This patent offers an attempt at affective processing based on either a motivational or consequentialist ethical approach, and therefore it should NOT be usable against competing ethical approaches.
Remember that we are all really doing 'affective' processing when we take in user input: after all, users are rarely purely rational and always have an emotional human side - er... except maybe Eric Raymond ;-)
This is, of course, Crap. (Score:5, Interesting)
Take a million people. The only thing they will all agree on is that murder is bad, and even that won't be unanimous.
Whenever someone tries to nail down a few rules of human behavior and then call it "ethics," I always want to go beat the hell out of them. In this case, the guy seems to be trying to isolate two things: empathy and politeness. Considering that 90% of the human race is massively deficient in these qualities, pardon me if I don't have much faith. And the fact that he PATENTS it is infuriating! Don't those bastards at the patent office turn down ANYTHING?
He might be dangerous if he knew what the word "ethics" means.
Just my opinion.
Re:cool (Score:3, Interesting)
You're missing the point of a marketplace. A market exists so that people who want things can express that want by offering a token of exchange, and people who have stuff that people want can provide it for said tokens (then spend the tokens on what they want themselves).
If people want AI that obeys Asimov's 3 laws, then the market is the best way for them to get it. If people do not want AIs with those laws, or want AIs with different laws, then that's what the market will deliver.
A market has no ethical or moral system beyond that of its participants. But then, neither does any human construct. In fact, such a thing is impossible.
Also it's worth noting that Asimov was not a computer scientist - his 3 laws were invented to help him sell novels, and that's the only reason. In other words, Asimov invented the 3 laws to make money.
Re:Had to be said (Score:5, Interesting)
Example, while different cultures differ on what types of actions are "morally good actions", the word good ALWAYS refers to actions that involve "one party making a willing sacrifice for the benefit of a worthy second party." But because different cultures have different opinions on what is or is not a "sacrifice", what is or is not a "benefit", and what is or is not a "worthy" party, they have different opinions on what is or is not a good action.
So he can "codify" and "define" ethical behavior, as long as he leaves certain key words undefined and people will go along with it as proven by the claim that "I know it when I see it" for pornography.
Re:cool (Score:3, Interesting)
Please understand that I am *not* making fun of you or trying to be a Usage Nazi.
Your use of the term "left by the weigh site" (vs. the standard "wayside" or side of the road) suggests that you have a specific image in mind when you use the term. I'm curious what that image is. To me, such usages are fascinating picture postcards about how others think. I spend all my time cooped up in my own 1500cc skull, so I'll take all the diversity of scenery I can get.
Morality is not R.E.! (Score:2, Interesting)
Every course of action can be described by a Turing machine, but by Rice's theorem no non-trivial semantic property of a TM (such as "this TM behaves morally") is decidable. So morality is undecidable.
Furthermore, it should be possible to show that morality isn't even semi-decidable (i.e., not recursively enumerable). This can be done with a mapping reduction of the complement of the Halting problem (!Halt) to Morality, as follows:
Given an input to the !Halt problem (a TM B and an input X), we map it to a TM P that is wired to an electric chair holding a nun. P simulates B(X), and if (and only if) that simulation ever halts, it lets the juice run.
If B(X) doesn't halt -> the nun lives -> P is moral.
If B(X) halts -> the nun fries -> P is immoral.
So B(X) is in !Halt exactly when P is moral; since !Halt is not semi-decidable, neither is Morality. Q.E.D.
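Written out as a mapping reduction (my formalization; note the polarity has to come out so that non-halting corresponds to moral behavior):

```latex
% Sketch: \overline{\mathrm{HALT}} \le_m \mathrm{MORAL}, hence MORAL is not r.e.
\[
  f(\langle B, X \rangle) = \langle P \rangle,
  \qquad
  P \equiv \text{``simulate } B(X)\text{; if the simulation halts, fry the nun.''}
\]
\[
  \langle B, X \rangle \in \overline{\mathrm{HALT}}
  \iff B(X) \text{ never halts}
  \iff P \text{ never fries the nun}
  \iff \langle P \rangle \in \mathrm{MORAL}.
\]
% Since f is computable and \overline{\mathrm{HALT}} is not recursively
% enumerable, MORAL cannot be recursively enumerable (semi-decidable) either.
```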
Re:MOD PARENT UP (Score:1, Interesting)
Patents are poison for information society.
http://www.epatents.org
I think he missed the prior art bit (Score:3, Interesting)
The AI and cognitive science fields already have such a large body of published theories and experimental work that I think this guy has basically wasted his money getting himself a vanity patent, and demonstrated his own deep level of ignorance about the whole field in the process. The first time he tries to collect his millions of dollars he's going to discover what's lurking in a field of study with hordes of earnest researchers and a 50 year history.
So I'm not worried about him and his patent, it will blow away with the first little breeze of reality, but I am profoundly disturbed about a U.S. Patent Office which hands out BS like this to anyone with a filing fee and the right format for the paperwork. Now, that's the real travesty here.