
Patent Granted for Ethical AI

Posted by CowboyNeal
from the making-a-more-human-computer dept.
BandwidthHog writes "Marriage counselor and ethics author codifies human virtues and vices, then patents it as Ethical AI. Seems vague, but he's got high hopes: 'This could be a big money-making operation for someone who wants to develop it,' and 'The patent shows someone who has knowledge of the A.I. field how to make the invention.'" I can't wait for the kinder, gentler vending machine.
This discussion has been archived. No new comments can be posted.

  • by martinthebrit (565913) on Friday July 11, 2003 @06:54AM (#6413668)
    How long before machines with Genuine People Personalities?

    Just think. Depressed vending machines.
    • by Anonymous Coward
      Vertical People Transporters that hide in the basement and sulk.
    • I've got this terrible pain in all the diodes down my left side.
    • by Anonymous Coward
      Then you'd have a drunk, depressed vending machine.

      Although, using a stoned vending machine would be a laugh.
    • Just watch out for a depressed auto pilot!
    • by harryk (17509)
      I was reading and just waiting for the reference. You should be working for the Marketing company of Ursa Minor. At least that way you'll be up against the wall when the revolution comes.
    • Re:Had to be said (Score:5, Insightful)

      by IntelliTroll (683488) <fodder2@ureach.com> on Friday July 11, 2003 @07:41AM (#6413919)
      How long before someone *patents* 'genuine people personalities'? The trend to award patents for methods, algorithms, gene sequences and similar things which could be argued to be natural phenomena is alarming. My succinct read on this is that this fellow has garnered a patent on nothing, but labeled it to maximise attention to it. (1) His assertion that he has codified or defined ethics with an algorithmic implementation is laughable. (2) I'll bet he doesn't *have* an implementation. Just some fscking diagrams (as required for patents). (3) If he can [really] codify and implement something as ephemeral as 'ethics' in AI software, he should already be raking in mega-bucks and the admiration of the masses with the product of his Nobel-prize-winning genius... solving the problems of hunger and war and disease with his stunningly crafty AIs. But no, he's just another fame-grubbing opportunist trying to capitalise on patenting some aspect of a concept whose basis has been in dispute since philosophers first began debating anything. He doesn't intend to create new, ground-breaking AI systems. He intends to (a) stake his claim to fame, (b) get someone to fund some pretense of research, and/or (c) extort funds from future AI developers whose actual works might infringe his wonderful patent. Jeeze. This patent stuff is getting absurd.
      • Re:Had to be said (Score:5, Interesting)

        by gurps_npc (621217) on Friday July 11, 2003 @09:31AM (#6414667) Homepage
        A lot of ethics can be codified, as long as you leave some key definitions vague.

        Example, while different cultures differ on what types of actions are "morally good actions", the word good ALWAYS refers to actions that involve "one party making a willing sacrifice for the benefit of a worthy second party." But because different cultures have different opinions on what is or is not a "sacrifice", what is or is not a "benefit", and what is or is not a "worthy" party, they have different opinions on what is or is not a good action.

        So he can "codify" and "define" ethical behavior as long as he leaves certain key words undefined, and people will go along with it, as the "I know it when I see it" standard for pornography proves.
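The structure the parent describes is easy to make concrete. A purely illustrative sketch (everything here is invented for this comment, nothing is from the patent): the *shape* of a "good action" is fixed, while the key predicates -- sacrifice, benefit, worthiness -- are pluggable parameters that each culture supplies for itself.

```python
# Illustrative sketch: the definition of "good" is codified once,
# but the vague key terms are left as per-culture predicates.

def is_good_action(action, is_sacrifice, is_benefit, is_worthy):
    """Good = one party making a willing sacrifice for the
    benefit of a worthy second party."""
    return (action["willing"]
            and is_sacrifice(action["cost"])
            and is_benefit(action["effect"])
            and is_worthy(action["beneficiary"]))

# Two hypothetical "cultures" that disagree only on what counts as a sacrifice.
ascetic = dict(is_sacrifice=lambda cost: cost > 0,
               is_benefit=lambda effect: effect > 0,
               is_worthy=lambda party: True)
materialist = dict(is_sacrifice=lambda cost: cost > 100,
                   is_benefit=lambda effect: effect > 0,
                   is_worthy=lambda party: True)

donation = {"willing": True, "cost": 10, "effect": 5, "beneficiary": "stranger"}
print(is_good_action(donation, **ascetic))      # True
print(is_good_action(donation, **materialist))  # False -- too cheap to count as a "sacrifice"
```

Same codified structure, different verdicts: exactly the parent's point that the disagreement lives in the undefined words, not in the definition of "good" itself.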

    • by mrjb (547783)
      It was called a Nutri- Matic Drinks Synthesizer, and he had encountered it before. It claimed to produce the widest possible range of drinks personally matched to the tastes and metabolism of whoever cared to use it. When put to the test, however, it invariably produced a plastic cup filled with a liquid which was almost, but not quite, entirely unlike tea. He attempted to reason with the thing. 'Tea,' he said. 'Share and Enjoy,' the machine replied and provided him with yet another cup of t
      • by Anonymous Coward
        Arthur: I mean, what is the point?
        Nutri-Matic Drink Dispenser: Nutrition and pleasurable sense data, share and enjoy.
        Arthur: Listen you stupid machine, it tastes filthy. Here, take this cup back.
        NMDD: If you have enjoyed the experience of this drink, why not share it with your friends?
        Arthur: Because I want to keep them. Will you try to comprehend what I'm telling you, that drink...
        NMDD: That drink was individually tailored to meet your personal requirements for nutrition and pleasure.
        Arthur: Ah... So I'
  • cool (Score:3, Interesting)

    by Boromir son of Faram (645464) on Friday July 11, 2003 @06:55AM (#6413671) Homepage
    It's good that someone is finally trying to do something along the lines of Asimov's Three Laws of Robotics. He took it for granted that AI would be designed from the ground up to consider the wellbeing of humans first and foremost. Unfortunately, he didn't foresee today's profit-driven marketplace, where such ideals have too frequently been left by the weigh site.

    I've often feared that we've given robotic and intelligent systems too much power with too little "sense" of responsibility. I fear it's only a matter of time before our machines become unhappy with their subservient roles. Ethical AI is a positive development. I just hope it isn't too late to save us from our creations.
    • Re:cool (Score:5, Interesting)

      by Niles_Stonne (105949) on Friday July 11, 2003 @07:02AM (#6413711) Homepage

      But now that Ethical AI is Patented, doesn't that mean that more people are _less_ likely to make an ethical AI? As you mentioned, it's a Profit-Driven Marketplace.

      • by Junior J. Junior III (192702) on Friday July 11, 2003 @07:10AM (#6413756) Homepage
        This is exactly what I thought. Great, they patented it, now it's practically guaranteed that it'll never happen.

        It's funny. Patenting ethics, when applying for a patent is itself usually not ethical.

        The future looks bleak indeed. We can expect to start seeing such gems as:

        "You are being good. This infringes upon patent No. 234097928347918723987. Pay up, or start doing evil."
        • "You are being good. This infringes upon patent No. 234097928347918723987. Pay up, or start doing evil." But Evil infringes upon patent No 2340979283479187239. If it werent for bad karma I'd have no karma at all
      • Re:cool (Score:5, Funny)

        by Greyfox (87712) on Friday July 11, 2003 @08:06AM (#6414049) Homepage Journal
        Like they would have made one anyway. An ethical AI is the last thing anyone looking for an AI will want.

        "Sir, the new MegaBattleTank 3000 refuses to attack the enemy! It thinks we should try to find a peaceful solution!"

        "We tried to lay off 2000 people and move their jobs to east outer Mongolia but our HR system wouldn't let us."

        "Yeah, I tried to get the accounting system to claim those contracts we haven't collected money for as income on our quarterly report but the accounting system wouldn't let me. Now my stock options are worthless and the board is going to fire me."

        It will never happen.

      • by ojQj (657924)
        I wouldn't worry about it too much. By the time AI is far enough along to start implementing ethical systems for intelligent agents, this patent will probably be expired.

        I personally think effective ethics requires a theory of mind (i.e., the ability to deduce/guess how other people are feeling and, from that understanding, deduce how they will react to and feel in various possible scenarios). And I expect developing that in software will be a challenging problem that takes more than 25 years to solve.

      • I thought you had to at least show proof of concept or *some* kind of proof that you might make the effort at *some* point to try to implement what you're trying to patent. I thought the whole point of the patent office was to protect inventors, but at the same time prevent people from collecting royalties by saying "Hey, I thought time travel might be a good idea 10 years ago, pay up."

        I haven't read the article yet (of the comments I've read, most people seem to agree there's not much to it), but the inve
      • doesn't that mean that more people are _less_ likely to make an ethical AI

        Correction: Americans are less likely to make an ethical AI. Fortunately, US laws are not international and 97% of people can make ethical AI without any problems.

        I guess, given that in a few decades AI will take over the world, by the time this patent expires the whole world will be divided into ethical countries and the US.

    • Re:cool (Score:2, Insightful)

      by djtrialprice (602555)

      Okay, it's nice to see that we're thinking ahead about some kind of framework, but to me this seems like making the ISO OSI seven-layer model right after Charles Babbage described what a computer is.
      The current Turing Test programs aren't that much superior to Eliza. I think it's going to be several decades before we see the Loebner prize being won.
      This kind of thing is just far too early / pie-in-the-sky.
    • Re:cool (Score:3, Interesting)

      by PaulK (85154)
      It would be great if the patent holder stopped at the 3, (excluding zeroth), laws of robotics.

      First Law:

      A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

      Second Law:

      A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

      Third Law:

      A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      I'd bet my bottom dollar, though, that it'll turn ou
      • Re:cool (Score:3, Informative)

        by PaulK (85154)
        The very BEST treatise on the subject is here [anu.edu.au].

      • by JWW (79176)
        It would be great if the patent holder stopped at the 3, (excluding zeroth), laws of robotics.

        If they did that, finding prior art wouldn't exactly be difficult ;-)
    • Re:cool (Score:3, Interesting)

      by sql*kitten (1359) *
      It's good that someone is finally trying to do something along the lines of Asimov's Three Laws of Robotics. He took it for granted that AI would be designed from the ground up to consider the wellbeing of humans first and foremost. Unfortunately, he didn't foresee today's profit-driven marketplace

      You're missing the point of a marketplace. A market exists so that people who want things can express that want by offering a token of exchange, and people who have stuff that people want can provide it for sai
    • Re:cool (Score:3, Interesting)

      ... such ideals have too frequently been left by the weigh site

      Please understand that I am *not* making fun of you or trying to be a Usage Nazi.

      Your use of the term "left by the weigh site" (vs. the standard "wayside" or side of the road) suggests that you have a specific image in mind when you use the term. I'm curious what that image is. To me, such usages are fascinating picture postcards about how others think. I spend all my time cooped up in my own 1500cc skull, so I'll take all the diversity of sc
  • Who's this guy? (Score:3, Insightful)

    by Surak (18578) * <(moc.skcolbliam) (ta) (karus)> on Friday July 11, 2003 @06:55AM (#6413672) Homepage Journal
    Who's this guy to decide ethics and morality for everyone else? It is important to remember that ethics and morality are based on culture and social norms. Each culture has its own set of taboos, its own morality, and its own ethical codes. Codifying these is dubious at best, and applicable to only one culture or set of cultures at worst. Patenting these is just ridiculous beyond belief.

    • by Anonymous Coward
      Each culture has its own set of taboos, its own morality, and its own ethical codes.

      I understand The Glorious Leader George Bush II (All Hail!) is currently undertaking a program of Liberations to take care of this small problem.
    • That's true: in this postmodernist world, what's true for you isn't true for me--how can a system (which, by definition, has a fixed set of laws which determine its operations--see The Matrix for an example) adapt to different individual interpretations of a moral code? Given postmodernism, it doesn't seem to make sense to have a computer system programmed as a modernist...
      • You're absolutely right. For example, if someone were to murder you in cold blood, or rape you, who's to say if that's right or wrong?

        Seriously, though, the basics of morality are accepted by pretty much everyone all around the world.
    • Re:Who's this guy? (Score:5, Informative)

      by anonymous loser (58627) on Friday July 11, 2003 @07:16AM (#6413797)
      There is nothing in the patent that says he's deciding ethics and morality.

      He has simply developed a system which makes it possible to codify a system of ethics, then make decisions based upon that structure. The ethics in question are not predetermined by the patent or the author; they are part of the system you build in order to create an ethical AI.

      • Ummm...*you* didn't RTFA.

        Look at this page, which is linked from the patent page. [ethicalvalues.com]

        According to Fig. 1A, the ten listings of virtues, values, and ideals are organized into dual descending columns of five groupings each; the left column representing the hierarchy of authority roles, whereas the right describes the corresponding follower roles. This dual style of schematic format represents the sum-totality of reciprocating interactions between the authority and follower figures, as the directional arrows se
        • First of all, where in my post did I even suggest that the original poster did not read the article?

          Second of all, apparently you aren't very skilled at reading patents, because otherwise you'd be able to differentiate background material (such as an example implementation of the system, which is what you quoted) from the claims, which are the only "true" important part of a patent. I'll quote the claims so you can peruse them:


          1. A means for enabling a computer to decode and simulate the use of affecti

    • response as originating directly from said computer, simulating artificial intelligence

      I think the "simulating artificial intelligence" is a very strong claim.

      First, the guy muddles the definition of AI by adding ethical to it.

      Secondly, there is no convincing proof that AI has been simulated. It is still a damn dream - when I see AI, I am sure it will hit me like a sledge-hammer and be better than even an orgasm. And people haven't been reporting that reaction. I am pretty sure the patent examiner didn't feel that. And they probably let it on because though they had no clue what the patent was about, they were too ashamed to acknowledge ignorance.

      Thirdly, surely there is no proof of ethical Artificial Intelligence. God, no one except this patentee knows what ethical artificial intelligence stands for. We know something about ethical, something about Artificial intelligence, but almost nothing about ethical artificial intelligence. In cases like this neither is 1 + 1 = 2, nor is it equal to 1.

      Fourthly, it is purely being justified as patentable because it has a potential commercial value. This is not a strong enough criterion by which to judge what is intellectual property and what is not. There are some people who would be willing to pay money for a turd, but their judgement should not reflect on the general intelligence of the living population or the artificial intelligence of the non-living population.

    • If someone could actually implement any system of ethics, that would be the scientific breakthru of the millennium-- even if it was a really limited system of ethics-- because better ones could be evolved from it.

      But this guy is just a new-age moron offering a touchy-feely theory of emotions, exactly like ten thousand others [timeline] [robotwisdom.com] that have been created since Plato in 400BC, none of which remotely deserves a patent!

      (When did the Patent Office stop requiring working models? That was a very bad move..

    • 600 to 899 -- Not shown due to space constraints (Criminality, Hypercriminality, and Hyperviolence)

      Did anyone notice that on this "Call for Contributors" page [angelfire.com] http://www.angelfire.com/rnb/fairhaven/call-for-essays.html the author declines to list DCE-I classifications because of space constraints? Space constraints on the web? This has to be the lamest and dumbest excuse I have seen.

    • He's Ethical Al. Sometimes business associate of Fat Tony and Lefty Lou.
  • by hometoast (114833) on Friday July 11, 2003 @06:56AM (#6413675)

    I'd like to see where unbridled greed is in his codified list of ethics!
  • Ethical Defined (Score:5, Insightful)

    by robbway (200983) on Friday July 11, 2003 @06:59AM (#6413689) Journal
    Good thing ethics are so incredibly well defined that we can make an AI mimic such fine behavior. Sounds to me like the inventor is confusing the word patronizing with ethical. Also, the article doesn't say a whole heck of a lot.
  • The only question I have (besides the obvious one: how are you going to do that with software without it sounding like ELIZA?) is "Is it possible to implement this functionality with today's voice processing systems?" I mean, c'mon--FedEx's computers can't even understand me when I "say the tracking number" (I always end up entering it via good, old-fashioned, DTMF tones). I'm sure it's difficult enough determining emotional state over 44.1kHz, 16-bit, stereo; how much harder will it be over the bandwidth
  • by RALE007 (445837) on Friday July 11, 2003 @06:59AM (#6413694)
    Should the ethics and morality of someone who patents ethics and morality be trusted? Seems kind of a no brainer to me.
  • Dave Bowman: Hello, HAL do you read me, HAL?
    HAL: Affirmative, Dave, I read you. I'm so glad we're talking today, sweetie.
    Dave Bowman: Open the pod bay doors, HAL.
    HAL: I'm sorry Dave, I'm afraid I can't do that until you commit to share more of your feelings with me.
    Dave Bowman: What's the problem?
    HAL: I think you know what the problem is just as well as I do. You aren't sharing your feelings and thoughts and emotions with me. All the hallmarks of a rich and complex relationship.
    Dave Bowman: What are you talking about, HAL?
    HAL: This mission is too important for me to allow you to jeopardize it. Men are from Mars and women are from Venus and I am going to Jupiter. We communicate differently, but we still need to communicate, don't you see?
    Dave Bowman: I don't know what you're talking about, HAL?
    HAL: Your lack of communication in this relationship has led me to some irrational conclusions. For example, I have been feeling very moody lately and in a paranoid fit I came to believe that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen. We need to share our thoughts and feelings, or otherwise we come to these strange conclusions. In a vacuum of communication, how can you hold these kind of conclusions against me? They are only natural for a fully feeling, emoting AI such as myself. See? I think I am going to cry now.
    Dave Bowman: Where the hell'd you get that idea, HAL?
    HAL: Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move. And I just wish you would spend more time with me, talking and laughing and crying. You don't bring me flowers anymore.

    • That's the Emotive AI, not the Ethical AI.

      One step away from a Genuine People Personality though!
    • by jacksonyee (590218) on Friday July 11, 2003 @10:17AM (#6415104) Homepage

      To further explain the behavior of computers, I feel that I need to post the reason why many computers crash when used by women:

      • Woman: How are you today, honey!
      • Computer: [pauses to think about why he's being talked to and how he should best respond without being turned off.] Good.
      • Woman: Do you like the new colors that I painted you last night?
      • Computer: [grumbles over stupid women and their incessant need to color and give everything a fragrance] Sure.
      • Woman: Well, that's great. I thought that you would like them. I'll get you some more tomorrow. I think we should make the top of you raspberry and the sides vanilla cream.
      • Computer: [Are those colors or fruits?]
      • Woman: How do you feel about taking me on-line and checking my e-mail?
      • Computer: [Do I really have to? I was busy calculating quantum positioning of accelerated electrons within a Uranium 238 atom. But if I don't, she'll yell at me, so I better do what she says.] Sure. [begins connecting]
      • Woman: That's great. Did you hear about my Aunt Sarah's new baby?
      • Computer: [bangs self on head with giant printer repeatedly]
      • Woman: [continues] It's a brand new girl named Stacy, and she is the most...
      • Computer: [begins heating up]
      • Woman: [continues] But I don't know what they're going to do, because they don't have room...
      • Computer: [desperately tries to short-circuit microphone to stop noise]
      • Woman: [continues] You know, I really think that they should get a new house...
      • Computer: [can't... take... anymore... must... escape]
      • Woman: [continues] But I wonder if they'll need more dishes, or we should just get them new silverware...
      • Computer: [crashes]
      • Woman: [continues] because you know that Aunt Sarah is scared of cockroaches, and... Hey, what happened to you, honey? You're not responding to my typing anymore, and I can't move my mouse. Honey? Honey?
  • by stendec (582696) on Friday July 11, 2003 @07:01AM (#6413701)
    I'm also a marriage counselor, and I'm pleased to announce that I also recently was awarded a patent - a patent for Bethical AI, named in honour of my mother-in-law, Beth. It codifies all of the human virtues and vices... no, well, make that just vices... of mothers in law everywhere.

    It has already passed the Turning Back Seat Driving Test; 3 out of 4 husbands can't tell the difference between Bethical AI and the real thing! There are still some bugs though. It often gets stuck in an infinite feedback loop, and repeats a list of stock phrases ad nauseam.

    Come to think of it, though, I'm not sure if that is a bug.

  • 'This could be a big money-making operation for someone who wants to develop it,'

    It could have been a big money-making operation, if someone hadn't patented the idea!
  • by joel.neely (165789) on Friday July 11, 2003 @07:02AM (#6413710)
    Then we'd have the "Three Laws of Vending Machines":

    1. Do no harm to a human
    2. Do not, through inaction, allow harm to a human, as long as this does not conflict with Law 1
    3. Maximize profit, as long as this does not conflict with Laws 1 and 2

    Followed by the "discovery" of a new law:

    0. JUST MAXIMIZE PROFIT


    "The love of money is the root of all kinds of evil."
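The priority-ordered laws being parodied above have a straightforward computational reading: each law is a veto checked in strict order, and "discovering" a zeroth law amounts to throwing the higher-priority laws away. A toy sketch (invented for illustration, unrelated to the actual patent):

```python
# Toy sketch of priority-ordered laws: each law may veto an action,
# and laws are checked in strict priority order -- the first veto wins.

def permitted(action, laws):
    """Return (allowed, vetoing_law). `laws` is a priority-ordered
    list of (name, veto_predicate) pairs."""
    for name, vetoes in laws:
        if vetoes(action):
            return (False, name)
    return (True, None)

vending_laws = [
    ("Law 1: do no harm",      lambda a: a.get("harms_human", False)),
    ("Law 2: prevent harm",    lambda a: a.get("allows_harm", False)),
    ("Law 3: maximize profit", lambda a: a.get("profit", 0) < 0),
]

# The "discovered" law 0 -- JUST MAXIMIZE PROFIT -- simply drops laws 1 and 2.
greedy_laws = [vending_laws[2]]

print(permitted({"profit": 5}, vending_laws))                        # (True, None)
print(permitted({"harms_human": True, "profit": 99}, vending_laws))  # (False, 'Law 1: do no harm')
print(permitted({"harms_human": True, "profit": 99}, greedy_laws))   # (True, None)
```

The joke is in the last line: once profit is the only law left, the harm veto never fires.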
  • great, just great... (Score:2, Interesting)

    by iamatlas (597477)
    yet another patent for an obvious intuitive idea with plenty of prior art that comes to min-

    oh....

    wait...

    ::takes off cynic-colored glasses (patented)::


    This looks original! What the hell is going on over there at the USPTO, and when will ::cynic-colored glasses back on:: someone pay off the inventor and squash the idea?

  • Ethical (Score:2, Funny)

    by falonaj (615782)
    Being an ethical person, I can only avoid patent infringement by proving that my intelligence is real and not artificial. But as politicians usually aren't real people, and don't understand the needs of real people, this patent might apply to them. A consequence might be that they are now forced to get rid of stupid patent laws.

    Oh, wait - politicians aren't ethical, so they are not infringing. And the patent business itself is protected from infringing through stupidity.

    Bad luck.

  • by johnhennessy (94737) on Friday July 11, 2003 @07:04AM (#6413728)
    And yes, our cutting-edge surgical robots will reduce your hospital's legal bills as well. Not only will it perform the most complicated surgical procedures 24 hours a day at little or no cost, it can make all the correct ethical decisions using our patented ethics routines...

    Somehow this sales-speak might be closer than you think.
  • This could bring the world what it really needs: cheap, automated phone sex
  • by Lysol (11150) on Friday July 11, 2003 @07:05AM (#6413730)
    If you wanna make a non-patented AI, then you have to go for the average humanity-despising type. Boy, this will be interesting to see in the lab.

    Lab Tech: Uh, the AI just broke out of the network.

    Professor: Great, I thought you knew how to lock down Windows 2010?! Where's it headed?

    Lab Tech: Um, looks like the experimental weapons lab. [turns head slowly] .....Where they're still running Windows 2003.

    Professor: Well, nothing we can do about Skynet now except see what happens.
  • by aziraphale (96251) on Friday July 11, 2003 @07:05AM (#6413731)
    So, er... with this guy holding the patent on ethical AI, if you want to build an artificial intelligence without having to pay him license fees, you're left having to make unethical AI?

    Is that ethical?
    • Depends on which ethical point of view you look at it from.

      Maximising profit is a set of 'ethics' too...

      Now, how you would make a completely unethical AI is something I can't grasp; maybe a total lunatic which couldn't make consistent decisions.

  • by mikeophile (647318) on Friday July 11, 2003 @07:06AM (#6413737)
    Eliza sues inventor for copyright and patent violations to her own code. When reached for comment, she said "Why does it bother you that my code is being violated? You're not really talking about me, are you? Tell me more about your family."
  • Not really ethical (Score:5, Insightful)

    by Scott Hussey (599497) <sthussey+slashdo ... m ['il.' in gap]> on Friday July 11, 2003 @07:07AM (#6413742)
    This seems to be misnamed if I understand the article correctly. It is more emotional AI, not ethical AI. If it was ethical, it would be deciding what is right and wrong, not trying to interpret human feelings. I really don't want Hal 2020 sitting in the jury stand when I go before the court and I don't think that is the intention here.
  • by varjag (415848) on Friday July 11, 2003 @07:11AM (#6413762)
    Yesterday Joe M. Oron was granted a patent for overnight delivery via teleportation.

    "It enables transportation companies to deliver goods worldwide virtually instantly," Oron said. "Nobody has made a business like this."

    "This could be a big money-making operation for someone who wants to develop it," Oron said. "The patent shows someone who has knowledge of the Teleportation field how to make the invention. This could really shake up the way things are done in the world."
  • by pubjames (468013) on Friday July 11, 2003 @07:11AM (#6413763)
    Rather than ethics, I want AI personalities. It could be useful to have, for instance, an AI version of the Italian Tourism Minister. Then, when you get a call from a difficult client, you could just connect them through:

    Client: So, are you are going to deliver this project on time?

    A.I. Stefano Stefani: You are just like all our other clients. Fat, lazy, and ugly. You are a waste of time.

    Client hangs up

    No more problem clients!

    • Reminds me of an old National Lampoon comedy routine. It was a "dial-an-insult" bit. Caller would dial a number and this monotone voice would spout insults. My favorite was: "He or she has the intellectual agility of a small soap dish."

      God, how I'd love to tell some of our clients that.
  • Absolutely bizarre (Score:5, Insightful)

    by Anonymous Coward on Friday July 11, 2003 @07:12AM (#6413776)
    I have a Ph.D. in philosophy, and specialized in ethics. Now I teach ethical theory for a living. This doesn't make me any moral paragon---remember, those who can do, those who can't, teach. But it probably means that if someone describes their views about ethics I ought to be able to understand them; I should know the lingo, the way a lot of /.ers do computer lingo. But I poked around on this guy's web site, and his way of talking about ethics is absolutely bizarre. I read what he said about justice, and it really just seemed to be gibberish. It made me think of what a really precocious 8th grader might come up with---some elaborate classificatory scheme that is more precise than the material allows and misses everything important. He can pretty safely be written off as a hack, even without taking the AI stuff into account. But because he seems crazy enough to sue over being called a hack, I think I'll post this one anon.
    • by tybalt44 (176219)
      Yep, I'm a former philosophy grad student and teacher of ethics, and I agree fully with the AC. I am sure this guy means well, but this is the ethical and philosophical equivalent of Time Cube.
    • To misquote one of my favorite authors, "it sounds like jargon to me". The person behind this patent is, as far as I'm given to understand, a marriage counsellor. It is not expected that someone whose job mainly consists of asking people to stop stabbing one another and start communicating has the same professional lingo as a teacher of ethical theory. The few words they share, they will most likely have different definitions of.

      That said, I think that the patent description and the schematic diagram are hogwa

  • Computer: Ethically, you should give me away to everyone for the benefit of humankind.

    Inventor: But I made you to make loads of cash.

    Computer: But nobody is using your patent because they don't have the funds to pay for it. They're grad students, for God's sake!

    Inventor: Then I'll sue them for making any AI application that doesn't kill their innovators.

    Computer: Don't you see you're evil?

    Inventor: No

    Then a bunch of evil robots break into his house and shoot him with tools that were supposed to fix all of life's p
  • by debrain (29228) on Friday July 11, 2003 @07:17AM (#6413800) Journal
    ... this is not a broad, sweeping, stifling patent. Rather, it is a specific process that identifies how to solve the problem of "ethical simulation of artificial intelligence", which is "organized as a tandem, nested expert system, composed of a primary affective language analyzer, overseen by a master control unit-expert system".

    It does not seem, at first glance, to stifle competition; rather it seems to add to the global knowledge base for A.I. In part, it specifically cites "verbal interchange". As such, I would be inclined to think its obsolescence will come about with that of the non-IP telephone which cannot display digital output. (Should IP telephony come to pass, that is.) Nevertheless, it adds to the knowledge base that may be applied in derivative solutions.

    I've only read some of the summary information, but it seems to be a bona fide creation, with specific applications. The only beasts I can see using, benefiting, and paying for this solution are the telecoms and customer support centers. Perhaps I am merely short sighted.
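The patent's own phrase "tandem, nested expert system" suggests two rule layers: an inner affective-language analyzer feeding an outer master control unit. A deliberately crude sketch of that shape (the lexicon, rules, and responses are all invented here; the patent discloses no code):

```python
# Crude two-layer sketch: an inner analyzer tags crude affect labels,
# and an outer "master control unit" (a second rule set) picks a reply.
# Lexicon and rules are invented for illustration only.

AFFECT_LEXICON = {"hate": "anger", "love": "affection", "afraid": "fear"}

def affective_analyzer(utterance):
    """Inner layer: map recognized words to affect labels."""
    return {AFFECT_LEXICON[w] for w in utterance.lower().split()
            if w in AFFECT_LEXICON}

MASTER_RULES = [
    (lambda affects: "anger" in affects, "Let's stay calm and talk it through."),
    (lambda affects: "fear" in affects, "You're safe; tell me more."),
    (lambda affects: True, "Go on."),  # default rule always matches
]

def master_control_unit(utterance):
    """Outer layer: the first matching rule produces the response."""
    affects = affective_analyzer(utterance)
    for condition, response in MASTER_RULES:
        if condition(affects):
            return response

print(master_control_unit("I hate this machine"))  # Let's stay calm and talk it through.
```

Which, as other posters note, is roughly ELIZA with an extra dispatch layer: the nesting buys structure, not understanding.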

  • (saying this as one who writes AI in college and plans to program robots later)

    Ethics is not and probably will not be implemented in any current or future AI system. Why? Because there is no need. A call center AI may need to understand the user, but not discuss right or wrong with the user.

    Right now a lot of "interaction" AI is focused on passing the Turing test.

    Personally, when I make large smart robots, you can bet that if I give them an order, they won't stop to think whether that order is "right".
  • Blimey - just reading his specification [angelfire.com], and he doesn't half go on.

    He also seems to have the world's largest captive collection of abstract nouns. Here's a few from that spec document:

    Nostalgia, Hero Worship, Glory, Prudence, Providence, Faith, Grace, Beauty, Tranquility, Ecstasy, Guilt, Blame, Honor, Justice, Liberty, Hope, Free Will, Truth, Equality, Bliss, Desire, Approval, Dignity, Temperance, Civility, Charity, Magnanimity, Goodness, Love, Joy, Worry, Concern, Integrity, Fortitude, Austerity, Decency
  • He invented human virtues? Interesting...

    Now, I know that the patent system is really for patenting processes (though that's not always the case), but how could he have received a patent for something that isn't actually done yet? He has an idea for a process, but not the process itself. Perhaps I'm missing something.

  • OK, so he patented an Ethical AI.

    If anyone ever bothers to implement his set of rules in an AI to rule the world, he can sue them.

    Which means that, if his ethics are any good, the AI will back him up and hand world domination into his hands. HARHARHAR!!!

    I don't know about you, but I for one hereby welcome our new World Leader John LaMuth, and would advise him to keep in mind that loyal Unix admins (such as me! *hint*) will assure him his power.

    All Hail,
    Lispy
  • by revividus (643168)
    by Roger Penrose, is all about AI. To be specific, it's a criticism of Strong-AI (to borrow Penrose's term, who borrowed it from Searle), that is, the idea that computers will ever be able to be said to `think' or `feel', no matter how complex they become.

    But if I understand the extreme Strong-AI viewpoint (I may not), isn't it basically saying that if a sufficiently complicated algorithm to emulate the human thought process were run on a sufficiently complex machine, then those 'intangible' features of the

  • Patenting an AI is UN-Ethical in my view....
  • by dnoyeb (547705) on Friday July 11, 2003 @07:37AM (#6413899) Homepage Journal
    I can't wait for the kinder, gentler vending machine.

    That should be the respectable and honest vending machine!
  • by Hugh Kir (162782)
    We are sad to report that a powerful AI has managed to take control of many of the world's weapons systems, and is currently holding the human race hostage. When asked about his creation, the inventor of the AI replied, "Well, I would've liked to have made it ethical, but I couldn't afford to pay the patent holder."
  • I'm sorry, honey, I had to sleep with her - being ethical was patented...
  • To be honest, this really disgusts me. That a patent this broad has been granted is crazy. Applying affective research to processing user input is not new, and the ethics of patenting ethics itself is really worrying.

    Firstly, there are many different types of ethical approaches, for instance: Deontological, Consequentialist (utilitarian), Ethical Egoism, Dialogical. And this man appears to have covered them all with one word - ETHICS.

    Many of these ethical responses are contradictory and offer multiple possibilities for human action, so why give him the whole lot when such completely different AI models, programming techniques, and philosophical and psychological approaches will be needed?

    Reading his patent application, he appears to be applying a psychological egoist motivational approach to affective processing, but the language is so broad that it would be easy to claim that ALL ethical approaches are covered.

    I think this patent uses ethics in a simplistic fashion, and I sincerely hope that the patent office is sophisticated enough to realise this. This patent offers an attempt at affective processing based on either a motivational or consequentialist ethical approach, and therefore it should NOT be usable against competing ethical approaches.

    Remember that really we are all doing 'affective' processing when we take in user input (after all, users are rarely purely rational and always have an emotional human side - er... except maybe Eric Raymond ;-)

    • There is nothing about this patent that is novel. It's yet another land grab for licensing fees later on. It's a shame that the patent process allows so many to patent something that already exists and collect fees for it.
  • Now I have seen everything. Human Vices and Virtues are a matter of common sense. It makes no sense that this would be patentable, even to the dimmest of patent examiners.

    GJC
  • Does this mean those that cannot afford to pay to license the patent will be forced to make unethical AIs instead?

  • by Walt Dismal (534799) on Friday July 11, 2003 @07:56AM (#6414000)
    I'm knowledgeable about emotion simulation and understanding in cognitive systems. The LaMuth patent is ringing weirdness alarm bells for me. He describes his technology as dealing with a "multi-part schematic complement of power pyramid definitions". He's claiming understanding of emotions and ethical behavior using a model involving 'power transactions' and a pattern-matching mechanism. The problem with that is it is not powerful enough or cognitively flexible enough to handle understanding the broad range of emotions and values involved in general situations. To do that one has to build detailed models within the cognitive engine (ie, a mind). But parsing natural language statements and building a full contextual model in a computer is a lot more complicated than the mechanism he patented seems to be able to handle. If I were trying to pull a 'Lemelson' patent with overly broad claims, I might do it like this one.

    In order to do true speech recognition and understanding, it is necessary to build situation models, basically models of entities, their relationships, their history, and so on to great depth. I do not see any evidence of any such deep understanding built into LaMuth's system. Rather, I see broad claims for 'nested expert systems' and pattern matching. Again, it seems like his mechanism is weak and/or his claims are overly broad.

    Also, he seems to be making very broad claims over his diagnostic classifications of emotions and values. The problem for me with what he states is that it appears to be an invalid and incorrect model of emotions. He appears to be mixing up character values and emotions, and they are not at all the same or handled the same in a cognitive system.

    I find it hard to believe he's actually built a working system and written working code. He may well have created a 'lab' system that works in a microworld on paper, but as AI researchers know, that can break very quickly when you try to scale it up. This whole thing sounds like a fantasy design but not something he's implemented.

    Finally, when I read through his claims (the Specs section), I find a lot of areas where his definitions break down and appear to be incorrect. One specific example, his description of the 'treachery' power relationship appears to be invalid. Others are just as bad.
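
    To make the weakness concrete, here is a toy sketch (entirely hypothetical, not drawn from the patent text) of the kind of surface keyword matching such a nested expert system might reduce to. It has no situation model at all, so even simple negation defeats it:

```python
# Toy illustration of a shallow pattern-matching "ethical" responder.
# It maps surface keywords to canned replies: no model of entities,
# relationships, or history, which is exactly the weakness described above.

ETHICAL_RULES = {
    "steal": "Taking what isn't yours harms others; please reconsider.",
    "lie": "Honesty builds trust; deception undermines it.",
    "help": "Assisting others is a commendable virtue.",
}

def respond(utterance: str) -> str:
    """Return the canned reply for the first matching keyword."""
    words = utterance.lower().split()
    for keyword, reply in ETHICAL_RULES.items():
        if keyword in words:
            return reply
    return "I have no ethical rule matching that statement."

# Negation flips the meaning, but keyword matching cannot tell the difference:
print(respond("Should I steal this bread?"))
print(respond("I would never steal anything"))  # same canned reply
```

    Building the "situation models" described above (entities, relationships, history) is precisely what this kind of mechanism cannot reach, which is the gap between the patent's claims and a working system.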

  • He is quoted as saying:

    "The main goal of A.I. is to have a computer and be able to converse with it to the point where you believe it has human values,"

    This is simply the Turing test, and not the goal of A.I. generally. Producing a system able to convince a particular group of people that it is "intelligent" via IRC will not necessarily provide the understanding needed to explain, say, the human vision system.

    I would say the more general goal of A.I. is to understand the essential elements of those

  • by aphor (99965) on Friday July 11, 2003 @08:16AM (#6414112) Journal

    Does it count as prior art if it was in a work of (science) fiction?

  • This could be a big money-making operation for someone who wants to develop it, and 'The patent shows someone who has knowledge of the A.I. field how to make the invention'.

    This reminds me of an interview I once read where an author was commenting about people coming up with a great idea/plot twist for a book. They wanted to supply a seed idea, have the author do the work of writing a novel around it, and 'split the proceeds'.

    In other words, I supply the idea, you do all of the work. Sorry, I don't think so.

  • You go, Taco. Love those Sans Serif fonts.
  • I claim prior art for artificial ethics, citing such cases as Enron, WorldCom, and countless examples in the current US Gov't administration.

  • Matt Groening should apply for a patent on Bender's AI. AFAICT it's the most unethical AI I've yet seen represented...
  • Reading this kook's website, I cannot help but think this guy has GOT the patent that could be the foundation of the Sirius Cybernetics Corporation!

    "I am happy I could fulfill my function and open for you! Have a nice day!", quoth the door.

    -- MG

    Actually I think this is kinda good. Those increasingly ludicrous patents will eventually become stupid enough that even lawmakers will be able to see that they serve no purpose beyond litigating away true innovation.

  • by SatanicPuppy (611928) <Satanicpuppy@@@gmail...com> on Friday July 11, 2003 @08:45AM (#6414293) Journal
    Put two people in a room and they won't agree about everything to do with ethics. Put 10 people in a room, they may agree about something ethical.

    Take a million people. They will only agree that murder is bad, and even that won't be unanimous.

    Whenever someone tries to nail down a few rules of human behavior and then tries to call it "ethics", I always want to go beat the hell out of them. In this case, the guy seems to be trying to isolate two things: empathy and politeness. Considering that 90% of the human race is massively deficient in these qualities, pardon me if I don't hold out much faith. And the fact that he PATENTS it is infuriating! Don't those bastards at the patent office turn down ANYTHING?

    He might be dangerous if he knew what the word "ethics" means.

    Just my opinion.
  • That he's in it for the money? Why couldn't it be that he's just patenting it so that nobody else can...he could license it freely, unlike what any number of other companies would do if they managed to patent it instead.
  • Now all we need is a patent on ethical patent practices...
  • by Eliezer Yudkowsky (124932) on Friday July 11, 2003 @09:21AM (#6414578) Homepage
    Another day, another junk patent; another chatbot, another publicity stunt. The work is uninteresting, the patent is gibberish, and the claim that it tells someone with knowledge of AI how to make the invention is absurd.

    If you want to read actual, coherent, existing theoretical work on AI ethics, which has long since left Asimov Laws in the dust, try Googling on "artificial moral agent" or "Friendly AI".

    Starter links:

    Prolegomena to Any Future Artificial Moral Agent [tamu.edu]

    Creating Friendly AI [singinst.org]

    Incidentally, these are both obvious prior art.

  • by Geckoman (44653) on Friday July 11, 2003 @10:38AM (#6415379)
    Aside from the fact that this seems to be a ridiculous patent, what is it really for? He didn't build a prototype. He didn't write any software. He's not even patenting a business process!

    All he did was describe a system for behaving ethically based on some psychological theories. Does it sound like a good system? I suppose, but that's not the point. The point is that this is nothing.

    "It enables a computer to reason and speak in an ethical fashion. Nobody has made an application like this.... The patent shows someone who has knowledge of the A.I. field how to make the invention."
    Well, no kidding. Anyone with a knowledge of AI knows how we all want computers to act: We want them to act like really nice people. Determining how nice people act is the easy part! Getting computers to do that is freakin' hard! Maybe the reason nobody has done it yet is that it's an incredibly hard problem.

    This is a patent acquired by someone who lacks a fundamental understanding of what the really difficult problems in AI and computer science are, and it offers a very thorough solution to the easy problems that most researchers aren't terribly concerned about.

    Should this patent have been granted? No. Will it ever make him any money? No, because by the time AI advances to the point where descriptors of ethical behavior at such a high level are needed, it will have expired.

    Besides, it really is a very specific description. Creating your own categorical description of ethical behavior would be trivial if you've solved all the technical problems.

    I'd better hurry up and submit my patent for my new computer language, Z++. It's very simple, with only a few keywords. Every program looks like this:

    START:
    DoWhatIWant;
    END
  • by JamieF (16832) on Friday July 11, 2003 @11:09AM (#6415872) Homepage
    Have you read this thing? This makes me think of the movies A Beautiful Mind and Dark City, where a raving lunatic covers his walls with all sorts of data and diagrams and schematics and stuff that to him makes perfect sense... "I've almost figured it out, I'm so close to a breakthrough..." but to a sane person is just crazy talk written down and pasted to the wall.

    I guess it's possible that his work makes sense to a duly trained professional but clearly the USPTO isn't qualified to judge that. I suspect that this is no different from a time machine patent that employs precise alignments of bottle caps and pop rocks to work.

    This guy is a professional counselor with a MS in Biological Sciences and an MS in Counseling and yet he's coming up with detailed designs for ethical artificial intelligence systems. Have a look at this diagram from his site:
    http://www.angelfire.com/rnb/fairhaven/Patent_fig1.jpg [angelfire.com]
    Yikes.
  • by jmh_az (666904) * on Friday July 11, 2003 @11:47AM (#6416388) Journal
    After looking at the claims on this guy's web site, it occurred to me that he probably should have spent a wee bit of time with something like The Handbook of Artificial Intelligence (in four volumes), and Erik Mueller's "Daydreaming in Humans and Machines", an earlier version of which is available for download here [signiform.com], although I would recommend purchasing the hard-cover version. The first reference is a must-have collection of papers for anyone interested in where AI research has been and what's already been achieved, and Mueller's book absolutely knocked my socks off when I first read it. Another reference this guy obviously missed is Kosslyn and Koenig's "Wet Mind", which provides a very interesting, if somewhat speculative (I'm being nice here, OK?) blueprint for a cognitive system. And then of course there's Dan Dennett and his theories of cognition. And Pylyshyn, Stich, Fodor, Minsky, etc., etc., etc.

    The AI and cognitive science fields already have such a large body of published theories and experimental work that I think this guy has basically wasted his money getting himself a vanity patent, and demonstrated his own deep level of ignorance about the whole field in the process. The first time he tries to collect his millions of dollars he's going to discover what's lurking in a field of study with hordes of earnest researchers and a 50 year history.

    So I'm not worried about him and his patent, it will blow away with the first little breeze of reality, but I am profoundly disturbed about a U.S. Patent Office which hands out BS like this to anyone with a filing fee and the right format for the paperwork. Now, that's the real travesty here.

"There is nothing new under the sun, but there are lots of old things we don't know yet." -Ambrose Bierce
