
Patent Granted for Ethical AI

BandwidthHog writes "Marriage counselor and ethics author codifies human virtues and vices, then patents them as Ethical AI. Seems vague, but he's got high hopes: 'This could be a big money-making operation for someone who wants to develop it,' and 'The patent shows someone who has knowledge of the A.I. field how to make the invention.'" I can't wait for the kinder, gentler vending machine.
This discussion has been archived. No new comments can be posted.
  • Who's this guy? (Score:3, Insightful)

    by Surak ( 18578 ) * <surakNO@SPAMmailblocks.com> on Friday July 11, 2003 @07:55AM (#6413672) Homepage Journal
    Who's this guy to decide ethics and morality for everyone else? It is important to remember that ethics and morality are based on culture and social norms. Each culture has its own set of taboos, its own morality, and its own ethical codes. Codifying these is dubious at best, and applicable to only one culture or set of cultures at worst. Patenting these is just ridiculous beyond belief.

  • Ethical Defined (Score:5, Insightful)

    by robbway ( 200983 ) on Friday July 11, 2003 @07:59AM (#6413689) Journal
    Good thing ethics are so incredibly well defined that we can make an AI mimic such fine behavior. Sounds to me like the inventor is confusing the word 'patronizing' with 'ethical.' Also, the article doesn't say a whole heck of a lot.
  • Re:cool (Score:2, Insightful)

    by djtrialprice ( 602555 ) on Friday July 11, 2003 @08:03AM (#6413719)

    Okay, it's nice to see that we're thinking ahead to some kind of framework, but to me this seems like coming up with the ISO OSI seven-layer model right after Charles Babbage described what a computer is.
    The current Turing Test programs aren't that much superior to Eliza. I think it's going to be several decades before we see the Loebner Prize being won.
    This kind of thing is just far too early / pie-in-the-sky.
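    To make the Eliza comparison concrete, here is a minimal sketch of an Eliza-style responder (the rules and names are invented for illustration; nothing here comes from the patent). A handful of regex-to-template rules is roughly all that programs in this class have: no model of the speaker, no memory, no ethics.

        # Minimal Eliza-style responder: keyword spotting plus canned
        # reflections. Illustrative sketch only; the rules are made up.
        import random
        import re

        RULES = [
            (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
            (r"\bI am (.+)", ["Why do you say you are {0}?"]),
            (r"\bbecause\b", ["Is that the real reason?"]),
        ]

        def respond(utterance: str) -> str:
            """Return a canned reflection of the input; no understanding involved."""
            for pattern, templates in RULES:
                match = re.search(pattern, utterance, re.IGNORECASE)
                if match:
                    return random.choice(templates).format(*match.groups())
            return "Please go on."

        print(respond("I feel misunderstood"))  # -> e.g. "Why do you feel misunderstood?"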
  • Not really ethical (Score:5, Insightful)

    by Scott Hussey ( 599497 ) <sthussey+slashdot@NOSPAm.gmail.com> on Friday July 11, 2003 @08:07AM (#6413742)
    This seems to be misnamed if I understand the article correctly. It is more emotional AI, not ethical AI. If it were ethical, it would be deciding what is right and wrong, not trying to interpret human feelings. I really don't want Hal 2020 sitting in the jury box when I go before the court, and I don't think that is the intention here.
  • Absolutely bizarre (Score:5, Insightful)

    by Anonymous Coward on Friday July 11, 2003 @08:12AM (#6413776)
    I have a Ph.D. in philosophy, and specialized in ethics. Now I teach ethical theory for a living. This doesn't make me any moral paragon (remember: those who can, do; those who can't, teach). But it probably means that if someone describes their views about ethics I ought to be able to understand them; I should know the lingo, the way a lot of /.ers know computer lingo. But I poked around on this guy's web site, and his way of talking about ethics is absolutely bizarre. I read what he said about justice, and it really just seemed to be gibberish. It made me think of what a really precocious 8th grader might come up with: some elaborate classificatory scheme that is more precise than the material allows and misses everything important. He can pretty safely be written off as a hack, even without taking the AI stuff into account. But because he seems crazy enough to sue over being called a hack, I think I'll post this one anon.
  • "...response as originating directly from said computer, simulating artificial intelligence"

    I think the "simulating artificial intelligence" is a very strong claim.

    First, the guy muddles the definition of AI by adding ethical to it.

    Secondly, there is no convincing proof that AI has been simulated. It is still a damn dream - when I see AI, I am sure it will hit me like a sledge-hammer and be better than even an orgasm. And people haven't been reporting that reaction. I am pretty sure the patent examiner didn't feel that. And they probably let it on because though they had no clue what the patent was about, they were too ashamed to acknowledge ignorance.

    Thirdly, surely there is no proof of ethical Artificial Intelligence. God, no one except this patentee knows what ethical artificial intelligence stands for. We know something about ethical, something about Artificial intelligence, but almost nothing about ethical artificial intelligence. In cases like this neither is 1 + 1 = 2, nor is it equal to 1.

    Fourthly, it is purely being justified as patentable because it has a potential commercial value. This is not a strong enough criteria by which to judge what is intellectual property and what is not. There are some people who would be willing to pay money for turd, but their judgement should not reflect on the general intelligence of the living population or the artificial intelligence of the non-living population.

  • by tybalt44 ( 176219 ) on Friday July 11, 2003 @08:36AM (#6413888) Journal
    Yep, I'm a former philosophy grad student and teacher of ethics, and I agree fully with the AC. I am sure this guy means well, but this is the ethical and philosophical equivalent of Time Cube.
  • Re:Had to be said (Score:5, Insightful)

    by IntelliTroll ( 683488 ) <fodder2@ureach.com> on Friday July 11, 2003 @08:41AM (#6413919)
    How long before someone *patents* 'genuine people personalities'? The trend of awarding patents for methods, algorithms, gene sequences, and similar things that could be argued to be natural phenomena is alarming. My succinct read is that this fellow has garnered a patent on nothing, but labeled it to maximise the attention paid to it. (1) His assertion that he has codified or defined ethics with an algorithmic implementation is laughable. (2) I'll bet he doesn't *have* an implementation, just some fscking diagrams (as required for patents). (3) If he could [really] codify and implement something as ephemeral as 'ethics' in AI software, he would already be raking in mega-bucks and the admiration of the masses with the product of his Nobel-prize-winning genius, solving the problems of hunger and war and disease with his stunningly crafty AIs. But no, he's just another fame-grubbing opportunist trying to capitalise by patenting some aspect of a concept whose basis has been in dispute since philosophers first began debating anything. He doesn't intend to create new, ground-breaking AI systems. He intends to (a) stake his claim to fame, (b) get someone to fund some pretense of research, and/or (c) extort funds from future AI developers whose actual works might infringe his wonderful patent. Jeeze. This patent stuff is getting absurd.
  • by Walt Dismal ( 534799 ) on Friday July 11, 2003 @08:56AM (#6414000)
    I'm knowledgeable about emotion simulation and understanding in cognitive systems, and the LaMuth patent is ringing weirdness alarm bells for me. He describes his technology as dealing with a "multi-part schematic complement of power pyramid definitions". He's claiming understanding of emotions and ethical behavior using a model involving 'power transactions' and a pattern-matching mechanism. The problem is that such a mechanism is neither powerful nor cognitively flexible enough to understand the broad range of emotions and values involved in general situations. To do that, one has to build detailed models within the cognitive engine (i.e., a mind). But parsing natural-language statements and building a full contextual model in a computer is a lot more complicated than the mechanism he patented seems able to handle. If I were trying to pull a 'Lemelson' patent with overly broad claims, I might do it like this one.

    In order to do true speech recognition and understanding, it is necessary to build situation models, basically models of entities, their relationships, their history, and so on to great depth. I do not see any evidence of any such deep understanding built into LaMuth's system. Rather, I see broad claims for 'nested expert systems' and pattern matching. Again, it seems like his mechanism is weak and/or his claims are overly broad.

    Also, he seems to be making very broad claims over his diagnostic classifications of emotions and values. The problem for me with what he states is that it appears to be an invalid and incorrect model of emotions. He appears to be mixing up character values and emotions, and they are not at all the same, nor handled the same, in a cognitive system.

    I find it hard to believe he's actually built a working system and written working code. He may well have created a 'lab' system that works in a microworld on paper, but as AI researchers know, that can break very quickly when you try to scale it up. This whole thing sounds like a fantasy design but not something he's implemented.

    Finally, when I read through his claims (the Specs section), I find a lot of areas where his definitions break down and appear to be incorrect. One specific example: his description of the 'treachery' power relationship appears to be invalid. Others are just as bad.
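    To make the contrast between keyword matching and a situation model concrete, here is a toy sketch (entirely invented for illustration; it is not LaMuth's mechanism, and the names are mine). Even a crude judgement like 'treachery' needs relational history, i.e., who trusted whom and who then harmed whom, which no amount of pattern matching over a single utterance can supply.

        # Toy situation model: entities, relations, and event history.
        # Entirely illustrative; invented for this example, not LaMuth's design.
        from dataclasses import dataclass, field

        @dataclass
        class SituationModel:
            # (subject, verb, object) triples accumulated over a dialogue
            relations: list = field(default_factory=list)

            def observe(self, subj: str, verb: str, obj: str) -> None:
                """Record one event; the model keeps the full history."""
                self.relations.append((subj, verb, obj))

            def betrayed_by(self, victim: str) -> list:
                # A crude reading of 'treachery': someone the victim trusted
                # who also harmed them. Needs history, not keyword spotting.
                trusted = {o for s, v, o in self.relations
                           if s == victim and v == "trusts"}
                return [s for s, v, o in self.relations
                        if o == victim and v == "harms" and s in trusted]

        model = SituationModel()
        model.observe("Alice", "trusts", "Bob")
        model.observe("Bob", "harms", "Alice")
        print(model.betrayed_by("Alice"))  # -> ['Bob']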

  • Re:MOD PARENT UP (Score:3, Insightful)

    by EddWo ( 180780 ) <eddwo@[ ]pop.com ['hot' in gap]> on Friday July 11, 2003 @09:06AM (#6414051)
    But if you never tell anyone about it you can't make any money from it.

    The options seem to be:
    1) Keep invention a secret
    It's all very well to be able to go around thinking "I know something you don't know." But the only way to benefit from that knowledge is to produce a product or service based on it. Once it is available on the open market, someone is bound to reverse-engineer it and try to undercut you. Without the protection of a patent, you are powerless to prevent someone else from making all the big bucks from your ideas.

    2) Give information away freely
    The BSD strategy. You make money by providing your expertise and possibly gain a cult following a la Linus. As long as you don't mind a bunch of other people getting rich off your work it's fine, but you lose control over the implications of your creation.

    3) Get a patent
    For a limited time (unless you buy an extension) you get to say where and how your idea is used. You are protected if you want to develop and market your own products, or you can charge as much or as little as you like for other people to develop them. You remain in control and have legally recognised ownership of the ideas. In the end, when the patent expires, your knowledge becomes public domain.

    So is there a way to hoard intellectual property?
  • by Marc2k ( 221814 ) on Friday July 11, 2003 @09:15AM (#6414099) Homepage Journal
    I thought you had to at least show proof of concept or *some* kind of proof that you might make the effort at *some* point to try to implement what you're trying to patent. I thought the whole point of the patent office was to protect inventors, but at the same time prevent people from collecting royalties by saying "Hey, I thought time travel might be a good idea 10 years ago, pay up."

    I haven't read the article yet (of the comments I've read, most people seem to agree there's not much to it), but the inventor here seems to be saying that he's not going to do any work in the field of his patent, but if someone would like to develop it, he'd gladly accept royalties.

    Am I missing something in regards to patent law, or in regards to this guy's intentions?
  • by Eliezer Yudkowsky ( 124932 ) on Friday July 11, 2003 @10:21AM (#6414578) Homepage
    Another day, another junk patent; another chatbot, another publicity stunt. The work is uninteresting, the patent is gibberish, and the claim that it tells someone with knowledge of AI how to make the invention is absurd.

    If you want to read actual, coherent, existing theoretical work on AI ethics, work which has long since left Asimov's Laws in the dust, try Googling "artificial moral agent" or "Friendly AI".

    Starter links:

    Prolegomena to Any Future Artificial Moral Agent [tamu.edu]

    Creating Friendly AI [singinst.org]

    Incidentally, these are both obvious prior art.

  • by Poeir ( 637508 ) <poeir@geo.yahoo@com> on Friday July 11, 2003 @12:05PM (#6415825) Journal
    In order to avoid patent infringement, just avoid programming ethics into your AI.
  • by JamieF ( 16832 ) on Friday July 11, 2003 @12:09PM (#6415872) Homepage
    Have you read this thing? It makes me think of the movies A Beautiful Mind and Dark City, where a raving lunatic covers his walls with all sorts of data and diagrams and schematics that, to him, make perfect sense ("I've almost figured it out, I'm so close to a breakthrough...") but to a sane person are just crazy talk written down and pasted to the wall.

    I guess it's possible that his work makes sense to a duly trained professional but clearly the USPTO isn't qualified to judge that. I suspect that this is no different from a time machine patent that employs precise alignments of bottle caps and pop rocks to work.

    This guy is a professional counselor with an MS in Biological Sciences and an MS in Counseling, and yet he's coming up with detailed designs for ethical artificial intelligence systems. Have a look at this diagram from his site:
    http://www.angelfire.com/rnb/fairhaven/Patent_fig1.jpg [angelfire.com]
    Yikes.
