
Patent Granted for Ethical AI

BandwidthHog writes "Marriage counselor and ethics author codifies human virtues and vices, then patents it as Ethical AI. Seems vague, but he's got high hopes: 'This could be a big money-making operation for someone who wants to develop it,' and 'The patent shows someone who has knowledge of the A.I. field how to make the invention.'" I can't wait for the kinder, gentler vending machine.
  • cool (Score:3, Interesting)

    by Boromir son of Faram ( 645464 ) on Friday July 11, 2003 @07:55AM (#6413671) Homepage
    It's good that someone is finally trying to do something along the lines of Asimov's Three Laws of Robotics. He took it for granted that AI would be designed from the ground up to consider the wellbeing of humans first and foremost. Unfortunately, he didn't foresee today's profit-driven marketplace, where such ideals have too frequently been left by the weigh site.

    I've often feared that we've given robotic and intelligent systems too much power with too little "sense" of responsibility. I fear it's only a matter of time before our machines become unhappy with their subservient roles. Ethical AI is a positive development. I just hope it isn't too late to save us from our creations.
  • by RALE007 ( 445837 ) on Friday July 11, 2003 @07:59AM (#6413694)
    Should the ethics and morality of someone who patents ethics and morality be trusted? Seems kind of a no brainer to me.
  • Re:cool (Score:5, Interesting)

    by Niles_Stonne ( 105949 ) on Friday July 11, 2003 @08:02AM (#6413711) Homepage

    But now that Ethical AI is Patented, doesn't that mean that people are _less_ likely to make an ethical AI? As you mentioned, it's a Profit-Driven Marketplace.

  • great, just great... (Score:2, Interesting)

    by iamatlas ( 597477 ) on Friday July 11, 2003 @08:02AM (#6413713) Homepage
    yet another patent for an obvious intuitive idea with plenty of prior art that comes to min-

    oh....

    wait...

    ::takes off cynic-colored glasses (patented)::


    This looks original! What the hell is going on over there at the USPTO, and when will ::cynic-colored glasses back on:: someone pay off the inventor and squash the idea?

  • by johnhennessy ( 94737 ) on Friday July 11, 2003 @08:04AM (#6413728)
    And yes, our cutting-edge surgical robots will reduce your hospital's legal bills as well. Not only will it perform the most complicated surgical procedures 24 hours a day at little or no cost, it can make all the correct ethical decisions using our patented ethics routines...

    Somehow this sales-speak might be closer than you think.
  • by Microlith ( 54737 ) on Friday July 11, 2003 @08:12AM (#6413777)
    That's the Emotive AI, not the Ethical AI.

    One step away from a Genuine People Personality though!
  • Re:cool (Score:3, Interesting)

    by PaulK ( 85154 ) on Friday July 11, 2003 @08:21AM (#6413815)
    It would be great if the patent holder stopped at the three laws of robotics (excluding the Zeroth).

    First Law:

    A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    Second Law:

    A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

    Third Law:

    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
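
    The three laws above have a natural reading as a strictly ordered rule list, where the first applicable law decides. A minimal sketch (my own illustration, nothing from the patent), with the genuinely hard predicates reduced to hypothetical boolean flags:

```python
# Asimov's Three Laws as a priority-ordered rule check. The flags
# 'harms_human', 'violates_order', 'self_destructive', and 'ordered'
# are hypothetical stand-ins for the genuinely hard predicates.

def three_laws_verdict(action):
    """Return 'forbidden' or 'permitted' for a dict describing an action."""
    laws = [
        # (condition, verdict), checked in priority order
        (action.get("harms_human", False), "forbidden"),     # First Law
        (action.get("violates_order", False), "forbidden"),  # Second Law
        (action.get("self_destructive", False),              # Third Law:
         "permitted" if action.get("ordered", False)         # yields to an
         else "forbidden"),                                  # explicit order
    ]
    for condition, verdict in laws:
        if condition:
            return verdict
    return "permitted"

# A self-sacrificing action that was ordered is allowed (the Third Law
# yields to the Second); the same action unordered is not.
print(three_laws_verdict({"self_destructive": True, "ordered": True}))  # permitted
print(three_laws_verdict({"self_destructive": True}))                   # forbidden
print(three_laws_verdict({"harms_human": True, "ordered": True}))       # forbidden
```

    The ordering is the whole point: harm to a human forbids the action before any order is even consulted.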

    I'd bet my bottom dollar, though, that it'll turn out to be more like Murphy's new directive list in Robocop 2 [geocities.com].

    We live in a society where the "PC" crowd will pick at this until the other AI, (Artificial Insanity), is the result.

  • by revividus ( 643168 ) <phil.crissman@gmail.cTOKYOom minus city> on Friday July 11, 2003 @08:32AM (#6413870) Homepage
    by Roger Penrose, is all about AI. To be specific, it's a criticism of Strong AI (to borrow Penrose's term, which he borrowed from Searle), that is, the idea that computers will ever be able to be said to `think' or `feel', no matter how complex they become.

    But if I understand the extreme Strong-AI viewpoint (I may not), isn't it basically saying that if a sufficiently complicated algorithm emulating the human thought process were run on a sufficiently complex machine, then those 'intangible' features of the mind (identity, self-awareness, feelings, and, by logical extension, some sort of values, hence ethics) would arise naturally, just as they do in humans?

    All that notwithstanding, even presuming it is meaningful to talk about programming `ethics', isn't the concept of ethics linked to the presence of free will? The human idea of ethics seems to be linked to the concept of doing the Right Thing instead of the Wrong Thing, even when the Wrong Thing would be more profitable. (Well, that's the idea, anyway.)

    So, (maybe I'm missing the point here) wouldn't we need to give our machines `free will' before any talk of their `ethics' would be meaningful? And then, if their ethics were programmed, would we still be able to say they had `free will'?

    It's too early in the morning to think on these things.

    To be fair, Douglas Hofstadter has written his share of books and articles in favor of the Strong-AI viewpoint, and has many interesting things to say about it.

    Personally, I have to admit that while I expect AI to become more convincing, I don't expect to ever find my computer in an ethical dilemma. My God! What if your computer decided file-sharing was `wrong'?

  • To be honest this really disgusts me. That a patent this wide has been granted is crazy. Applying affective research to processing user input is not new and the ethics of patenting ethics itself is really worrying.

    Firstly, there are many different types of ethical approaches, for instance: Deontological, Consequentialist (utilitarian), Ethical Egoism, Dialogical. And this man appears to have covered them all with one word - ETHICS.

    Many of these ethical responses are contradictory and offer multiple possibilities for human action, so why grant him the whole lot when such completely different AI models, programming techniques, and philosophical and psychological approaches will be needed?

    Reading his patent application he appears to be applying a psychological Egoist motivational approach to affective processing but the language is so broad that it would be easy to claim that ALL ethical approaches are covered.

    I think this patent uses ethics in a simplistic fashion and I sincerely hope that the patent office is sophisticated enough to realise this. This patent offers an attempt at affective processing based on either a motivational or consequentialist ethical approach, and therefore it should NOT be able to be used against competing ethical approaches.

    Remember that really we are all doing 'Affective' processing when we take in user input (after all, users are rarely purely rational and always have an emotional human side - er... except maybe Eric Raymond ;-)
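
    The contradictions between approaches are easy to make concrete. A toy example (invented here, not from the patent) in which a deontological and a consequentialist evaluator disagree about the same act:

```python
# Two standard ethical approaches evaluating the same act and disagreeing,
# as a hint of why one patent can't cover "ethics" wholesale. The act
# representation and both evaluators are invented for illustration.

def deontological(act):
    """Rule-based: lying is forbidden regardless of outcome."""
    return not act["involves_lying"]

def consequentialist(act):
    """Outcome-based: permitted iff the net benefit is positive."""
    return act["net_benefit"] > 0

white_lie = {"involves_lying": True, "net_benefit": 3}
print(deontological(white_lie))     # False: violates the rule
print(consequentialist(white_lie))  # True: improves the outcome
```

    An AI committed to either evaluator behaves differently, so a patent broad enough to cover both is covering contradictory behaviours.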

  • by SatanicPuppy ( 611928 ) <Satanicpuppy.gmail@com> on Friday July 11, 2003 @09:45AM (#6414293) Journal
    Put two people in a room and they won't agree about everything to do with ethics. Put ten people in a room, and they may agree on something ethical.

    Take a million people. They will only agree that murder is bad, and even that won't be unanimous.

    Whenever someone tries to nail down a few rules of human behavior and then tries to call it "ethics" I always want to go beat the hell out of them. In this case, the guy seems to be trying to isolate 2 things: Empathy and Politeness. Considering that 90% of the human race is massively deficient in these qualities, pardon me if I don't hold faith. And the fact that he PATENTS it is infuriating! Don't those bastards at the patent office turn down ANYTHING?

    He might be dangerous if he knew what the word "ethics" means.

    Just my opinion.
  • Re:cool (Score:3, Interesting)

    by sql*kitten ( 1359 ) * on Friday July 11, 2003 @10:27AM (#6414626)
    It's good that someone is finally trying to do something along the lines of Asimov's Three Laws of Robotics. He took it for granted that AI would be designed from the ground up to consider the wellbeing of humans first and foremost. Unfortunately, he didn't foresee today's profit-driven marketplace

    You're missing the point of a marketplace. A market exists so that people who want things can express that want by offering a token of exchange, and people who have stuff that people want can provide it for said tokens (then spend the tokens on what they want themselves).

    If people want AI that obeys Asimov's 3 laws, then the market is the best way for them to get it. If people do not want AIs with those laws, or want AIs with different laws, then that's what the market will do.

    A market has no ethical or moral system beyond that of its participants. But then, neither does any human construct. In fact, such a thing is impossible.

    Also it's worth noting that Asimov was not a computer scientist - his 3 laws were invented to help him sell novels, and that's the only reason. In other words, Asimov invented the 3 laws to make money.
  • Re:Had to be said (Score:5, Interesting)

    by gurps_npc ( 621217 ) on Friday July 11, 2003 @10:31AM (#6414667) Homepage
    A lot of ethics can be codified, as long as you leave some key definitions vague.

    For example, while different cultures differ on what types of actions are "morally good actions", the word good ALWAYS refers to actions that involve "one party making a willing sacrifice for the benefit of a worthy second party." But because different cultures have different opinions on what is or is not a "sacrifice", what is or is not a "benefit", and what is or is not a "worthy" party, they have different opinions on what is or is not a good action.

    So he can "codify" and "define" ethical behavior as long as he leaves certain key words undefined, and people will go along with it, as the "I know it when I see it" standard for pornography shows.
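
    That scheme can be sketched directly: the form of "a good action" is fixed, while the key predicates are culture-specific plug-ins. The two example cultures below are invented for illustration:

```python
# The fixed *form* of a good action, parameterized by the culture-specific
# predicates "sacrifice", "benefit", and "worthy". The example cultures
# and the action encoding are hypothetical.

def is_good(action, is_sacrifice, is_benefit, is_worthy):
    """One party makes a willing sacrifice that benefits a worthy party."""
    return (action["willing"]
            and is_sacrifice(action["cost"])
            and is_benefit(action["effect"])
            and is_worthy(action["recipient"]))

# Culture A: any positive cost is a sacrifice, any recipient is worthy.
culture_a = dict(is_sacrifice=lambda c: c > 0,
                 is_benefit=lambda e: e > 0,
                 is_worthy=lambda r: True)
# Culture B: identical, except only kin count as worthy recipients.
culture_b = dict(is_sacrifice=lambda c: c > 0,
                 is_benefit=lambda e: e > 0,
                 is_worthy=lambda r: r == "kin")

donation = {"willing": True, "cost": 10, "effect": 5, "recipient": "stranger"}
print(is_good(donation, **culture_a))  # True
print(is_good(donation, **culture_b))  # False: same act, different verdict
```

    Same codified definition, opposite verdicts: all the real disagreement hides inside the undefined predicates.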

  • Re:cool (Score:3, Interesting)

    by The_Laughing_God ( 253693 ) on Friday July 11, 2003 @10:55AM (#6414915)
    ... such ideals have too frequently been left by the weigh site

    Please understand that I am *not* making fun of you or trying to be a Usage Nazi.

    Your use of the term "left by the weigh site" (vs. the standard "wayside" or side of the road) suggests that you have a specific image in mind when you use the term. I'm curious what that image is. To me, such usages are fascinating picture postcards about how others think. I spend all my time cooped up in my own 1500cc skull, so I'll take all the diversity of scenery I can get.
  • by hendrix69 ( 683997 ) on Friday July 11, 2003 @10:56AM (#6414927)
    It seems to me that coding morality into machines is impossible. For a machine, acting morally would mean weighing the outcome of every possible course of action against a given set of rules.
    Every course of action can be described by a Turing machine, but evaluating nontrivial properties of the outcomes of such TMs is impossible by Rice's theorem. So morality is undecidable.
    Furthermore, it should be possible to show that morality isn't even semi-decidable. This can be done with a mapping reduction from the Halting problem's complement, as follows:
    Given an input to the !Halt problem (a TM B and input X), we map it to a TM P which is connected to an electric chair holding a nun. P runs a timer for 60 seconds and in parallel simulates B(X); when the 60 seconds are up it lets the juice run, unless B(X) halts, in which case it stops the timer.

    If B(X) doesn't halt -> the nun fries -> P is immoral.
    If B(X) halts -> the nun lives -> P is moral. Q.E.D.
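
    For what it's worth, here is a toy rendering of that construction (reading the frying of the nun as the immoral outcome), with the 60-second wall clock replaced by a step budget and B represented as any simulation that yields a halted flag once per step:

```python
# Toy version of the nun-and-electric-chair machine P, built from a
# machine B and input X. `b(x)` is assumed to be a generator that yields
# False once per simulated step and True when the machine halts.

def build_p(b, x, budget=60):
    """Return the machine P: run b(x) for `budget` steps, then fry."""
    def p():
        for steps, halted in enumerate(b(x)):
            if halted:
                return "nun lives"      # B(X) halted in time: P is moral
            if steps + 1 >= budget:
                break
        return "nun fries"              # B(X) ran out the clock: P is immoral
    return p

def looper(x):          # a machine that never halts
    while True:
        yield False

def quick(x):           # a machine that halts on its fourth step
    for _ in range(3):
        yield False
    yield True

print(build_p(looper, 0)())   # nun fries
print(build_p(quick, 0)())    # nun lives
```

    Of course, the step budget is also what breaks the proof: waiting out the budget decides P's behaviour in finite time, so the construction only tests whether B(X) halts within the deadline, which is decidable.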
  • Re:MOD PARENT UP (Score:1, Interesting)

    by Anonymous Coward on Friday July 11, 2003 @11:57AM (#6415698)
    Patents are for inventions, not for discoveries or scientific theories. However, the USPTO messed up the patent system by granting software patents and so on.

    Patents are poison for information society.
    http://www.epatents.org
  • by jmh_az ( 666904 ) * on Friday July 11, 2003 @12:47PM (#6416388) Journal
    After looking at the claims on this guy's web site it occurred to me that he probably should have spent a wee bit of time with something like The Handbook of Artificial Intelligence (in four volumes), and Erik Mueller's "Daydreaming in Humans and Machines", an earlier version of which is available for download here [signiform.com], although I would recommend purchasing the hard-cover version. The first reference is a must-have collection of papers for anyone interested in where AI research has been and what's already been achieved, and Mueller's book absolutely knocked my socks off when I first read it. Another reference this guy obviously missed is Kosslyn and Koenig's "Wet Mind", which provides a very interesting, if somewhat speculative (I'm being nice here, OK?) blueprint for a cognitive system. And then of course there's Dan Dennett and his theories of cognition. And Pylyshyn, Stich, Fodor, Minsky, etc., etc., etc..

    The AI and cognitive science fields already have such a large body of published theories and experimental work that I think this guy has basically wasted his money getting himself a vanity patent, and demonstrated his own deep level of ignorance about the whole field in the process. The first time he tries to collect his millions of dollars he's going to discover what's lurking in a field of study with hordes of earnest researchers and a 50 year history.

    So I'm not worried about him and his patent; it will blow away with the first little breeze of reality. But I am profoundly disturbed by a U.S. Patent Office which hands out BS like this to anyone with a filing fee and the right format for the paperwork. That's the real travesty here.
