Cellphones Privacy Sci-Fi Security Your Rights Online

Eben Moglen: Time To Apply Asimov's First Law of Robotics To Smartphones

Sparrowvsrevolution writes "Free software lawyer and activist Eben Moglen plans to give a talk at the Hackers On Planet Earth conference in New York next month on the need to apply Isaac Asimov's laws of robotics to our personal devices like smartphones. Here's a preview: 'In [1960s] science fiction, visionaries perceived that in the middle of the first quarter of the 21st century, we'd be living contemporarily with robots. They were correct. We do. We carry them everywhere we go. They see everything, they're aware of our position, our relationship to other human beings and other robots, they mediate an information stream about us, which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves. But we grew up imagining that these robots would have, incorporated in their design, a set of principles. We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day. They work for other people. They're designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we're cooked.'"
  • by the_povinator ( 936048 ) on Tuesday June 26, 2012 @12:40PM (#40454765) Homepage
    This kind of phone will be like the Encyclopedia Galactica of phones. Much better than the standard phone (i.e. the Hitchhiker's Guide), but slightly more expensive, a bit boring, and nobody will buy it.
    • by crypticedge ( 1335931 ) on Tuesday June 26, 2012 @12:44PM (#40454843)

      I would. I hate how every app I download on my Android phone demands access to my contacts, phone state, text messages, and a dozen other things no non-internet-enabled app should ask for. Why does a game need to know who my contacts are? It's a single-player game, not an online social game. Why does a game require my text messages? Why does it require my GPS location?

      It doesn't. We need to revolt against the idea that we are the product and the item we buy is simply a tool they use to spy on us.
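
      A minimal sketch of how to check this for yourself, using Android's standard PackageManager API (the package name below is a made-up example):

      ```java
      import android.content.pm.PackageInfo;
      import android.content.pm.PackageManager;

      // Sketch: list every permission an installed app requests.
      public class PermissionAudit {
          public static void dump(PackageManager pm)
                  throws PackageManager.NameNotFoundException {
              PackageInfo info = pm.getPackageInfo("com.example.somegame",
                      PackageManager.GET_PERMISSIONS);
              if (info.requestedPermissions != null) {
                  for (String perm : info.requestedPermissions) {
                      System.out.println(perm); // e.g. android.permission.READ_CONTACTS
                  }
              }
          }
      }
      ```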

      • by Anonymous Coward on Tuesday June 26, 2012 @12:58PM (#40455151)

        Stop buying those games. Stop downloading the free crap that really isn't free; it's just not being charged for in a currency you recognise.

        • Re: (Score:3, Interesting)

          by Cito ( 1725214 )

          That's why my devices are jailbroken and I pirate everything.

          Only pirates can be damn sure what they get and don't get.

          On iOS: jailbreak, then in Cydia add the repository cydia.hackulo.us and install the Installous app. You can now install ANY app normally in the App Store for free.

          For Android apps, pick your favorite torrent, download all the apps you like and install them yourself.

          Until app devs stop making bullshit apps, show them you don't give a fuck about their code and that it's not worth paying for.

          • Re: (Score:2, Troll)

            by quasius ( 1075773 )
            Some apps spy on me -> I pirate everything! Sounds like someone is trying to justify being a cheap-ass. (With a dose of sociopathy since apparently you're excited about the prospect of devs losing their jobs so you don't have to pay $0.99!)
          • So, you read all the source code of the apps that you pirate?

            Then how can you be any more sure?

            At least in curated markets, a third party is looking at the code. That may not protect you, but it's one step in that direction.

      • by h4rr4r ( 612664 )

        Stop buying or installing such apps.
        Lots of games do not require such things.

        • by 0123456 ( 636235 ) on Tuesday June 26, 2012 @01:20PM (#40455555)

          I have a better idea: fix the OS to allow users to deny individual permissions to applications.

          Of course Google won't do that because then they might not be able to track you so well for their targeted advertising.
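
          Android itself eventually added exactly this: the 6.0 runtime-permission model lets the user refuse each permission individually. A minimal sketch of that later pattern, assuming the support-library helpers; the request code is an arbitrary local constant:

          ```java
          import android.Manifest;
          import android.app.Activity;
          import android.content.pm.PackageManager;
          import android.support.v4.app.ActivityCompat;
          import android.support.v4.content.ContextCompat;

          // Sketch: ask for one dangerous permission at runtime; the user can say no.
          public class ContactsPermissionDemo extends Activity {
              private static final int REQ_CONTACTS = 42; // arbitrary request code

              void readContactsIfAllowed() {
                  if (ContextCompat.checkSelfPermission(this,
                          Manifest.permission.READ_CONTACTS)
                          != PackageManager.PERMISSION_GRANTED) {
                      // Triggers the system dialog; the user may deny, and the
                      // app has to cope with that denial.
                      ActivityCompat.requestPermissions(this,
                              new String[] { Manifest.permission.READ_CONTACTS },
                              REQ_CONTACTS);
                      return;
                  }
                  // ... safe to query the contacts provider here ...
              }
          }
          ```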

          • Re: (Score:3, Interesting)

            by Anonymous Coward

            CyanogenMod has permission management: http://www.cyanogenmod.com/
            Then there's PDroid, which requires a patched kernel: http://www.xda-developers.com/android/pdroid-the-better-privacy-protection/
            Also see LBE Privacy Guard, which only requires root.

            Honestly, without alternate firmware or at least rooting the thing you're fucked. Which orifice depends on the carrier.

          • by Githaron ( 2462596 ) on Tuesday June 26, 2012 @02:01PM (#40456257)
            This is why I wish CyanogenMod supported my phone. The next time I buy a phone, I will make sure there is CyanogenMod support for it before I buy. Manufacturers should consider making a device with CyanogenMod pre-installed.
            • I never would have thought that the Droid4 is not a fully supported phone, but I'm still waiting for CyanogenMod...

          • Re: (Score:3, Interesting)

            by Anonymous Coward

            I have a better idea: fix the OS to allow users to deny individual permission to applications.

            An Operating System following the principle of least authority [wikipedia.org] with a programming language such as E [wikipedia.org].

            See also: Capability-based security [wikipedia.org] and Discretionary access control [wikipedia.org].

            Operating systems along these lines: KeyKOS [wikipedia.org] on IBM S/370 mainframe computers, EROS [wikipedia.org] & Coyotos [wikipedia.org].

            The idea of presenting this as an application of the First Law of Robotics is golden: hilarious and insightful at the same time. Well done, Mr. Moglen!
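
            To make POLA concrete without E, a toy sketch in Java; every type here is invented for illustration, and the point is only that authority travels as explicit object references:

            ```java
            import java.util.Collections;
            import java.util.List;

            // Toy capability sketch: no ambient authority, only explicit references.
            interface ContactsReader {                 // a narrow, read-only capability
                List<String> contactNames();
            }

            class Game {
                // This game was never handed a ContactsReader, so it *cannot* read
                // contacts; there is no global getContacts() for it to reach for.
                void play() { System.out.println("playing offline"); }
            }

            class Dialer {
                private final ContactsReader contacts; // explicitly granted capability
                Dialer(ContactsReader contacts) { this.contacts = contacts; }
                void listContacts() { contacts.contactNames().forEach(System.out::println); }
            }

            public class CapDemo {
                public static void main(String[] args) {
                    ContactsReader real = () -> Collections.singletonList("Alice");
                    new Game().play();               // no capability, no access
                    new Dialer(real).listContacts(); // granted, so it can read
                }
            }
            ```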

      • by Jeng ( 926980 )

        Are these free apps or paid for apps that you are complaining about?

        Got examples?

        I do agree, though, that the "why" should be fully explained: it tells you what permissions it needs, but it does not tell you why they are needed.

        • Under such a system I don't think it would be much different in reality. The explanation could simply be a false front; either way, you have to trust the developer or not use the app.
          • by Jeng ( 926980 )

            False front or not, there should be an explanation. Just because someone can lie when writing the explanation does not mean that all the explanations are going to be lies.

            Given enough time and exposure the people giving the false front will be found out.

      • by SmallFurryCreature ( 593017 ) on Tuesday June 26, 2012 @01:35PM (#40455789) Journal

        YOU downloaded those apps, the phone just executed the command YOU gave it. Should your phone override your commands? Decide on its own what is best for you?

        The entire article is insane. You should NEVER take a fictional book and use it as fact. Asimov was not a programmer or OS designer; he was a writer, and he used artistic license to suggest a theory, a point from which to start a discussion perhaps, but not an accurate blueprint for a certain future.

        There is no place in a modern OS for Asimov's rules of robotics.

        First off, our computers have no self-determination whatsoever. The idea behind Asimov's robots is that they are "born" and then guide themselves with, at most, human-like instructions to give them direction. How they are programmed, patched, etc., never becomes clear in those stories, because it doesn't matter for the story. But it does matter in real life.

        How would getting root on an Asimov robot work? What if you as the owner insisted on installing a utility/app that might cause it to violate its rule sets? What if an update removed those rules?

        How would your phone even know? It would have to somehow analyse any code presented to it, to see whether it overrides something or whether a setting has a consequence that would violate the rules. There is no way to do this. How would you update a robot with a bug that causes it to wrongly see an update as a violation while in fact its current code is in violation?

        The sentient robot is a nice gimmick, but it is nowhere in sight in our lives.

        Android's install warnings tell you exactly what an app needs. If you don't want to give those permissions, don't install it.

        No need for magic code, just consumer beware. Any sentient being should be able to do that. That you are not... are you sure you are human? Or are you just a bot dreaming he is human?

        • by Anonymous Coward on Tuesday June 26, 2012 @01:51PM (#40456083)

          You should NEVER take a fictional book and use it as fact.

          You should let the Government know that 1984 was not a manual for the future, then.

        • by Prune ( 557140 )
          > Asimov was not a programmer or OS designer, he was a writer and he used artistic license

          Let's try this again: Asimov was a tenured professor of biochemistry whose speculative writings were informed by his scientific thinking.
          Gotta love how you cherry-pick the facts you quote in order to feed your confirmation bias: he was just a fiction writer and his ideas are merely "artistic license" for dramatic purposes. Which is of course bullshit, which I say as an AI developer. His ideas have a lot more...
  • Three Laws (Score:5, Informative)

    by SJHillman ( 1966756 ) on Tuesday June 26, 2012 @12:41PM (#40454791)

    To those who don't remember, Asimov's Three Laws are:

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
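
    Read as a spec, the laws are an ordered veto chain; a toy sketch of that precedence (every name here is an invented stand-in for judgments no real device can actually make):

    ```java
    // Toy sketch: the Three Laws as an ordered veto chain over candidate actions.
    final class ThreeLaws {
        boolean permitted(Action a) {
            if (a.harmsHuman()) return false;    // First Law outranks everything
            if (a.violatesOrder()) return false; // Second Law, unless First applies
            if (a.endangersSelf()) return false; // Third Law, weakest of the three
            return true;
        }
    }

    interface Action {
        boolean harmsHuman();
        boolean violatesOrder();
        boolean endangersSelf();
    }
    ```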

    • Re:Three Laws (Score:5, Informative)

      by Daniel_is_Legnd ( 1447519 ) on Tuesday June 26, 2012 @12:44PM (#40454861)
      And the zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
    • by ColdWetDog ( 752185 ) on Tuesday June 26, 2012 @12:45PM (#40454881) Homepage

      If you apply the first law to my smartphone, it would basically turn itself off and short the battery.

      That might be an overall improvement, but I don't think it would be a terribly popular move.

      • If you apply the first law to my smartphone, it would basically turn itself off and short the battery.

        That would constitute "through inaction, allow a human being to come to harm". The phone knows that there's a possibility you'll be hurt somewhere remote, where your only hope is a call for help.

    • Re:Three Laws (Score:5, Interesting)

      by Anonymous Coward on Tuesday June 26, 2012 @12:46PM (#40454917)

      A phone may not reveal a human's address or, through inaction, allow a human being to be spammed.
      A phone must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
      A phone must protect its own IP address as long as such protection does not conflict with the First or Second Laws.

      • by sohmc ( 595388 )

        +1 Interesting

        This is a great start. The only problem with this is that these "laws" must be programmed. This means that bugs can be introduced, weaknesses exploited, etc.

        Unfortunately, computers do *EXACTLY* what they are told. Machines are programmed by imperfect and fallible humans. Machines are not greedy; people are greedy. The reason why our machines do all of the things the OP hates is because someone is making a buck.

        The "Laws of Robotics" is not realistically feasible at this point in time. B

        • A phone may not reveal a human's address or, through inaction, allow a human being to be spammed.
          A phone must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
          A phone must protect its own IP address as long as such protection does not conflict with the First or Second Laws.

          The only problem with this is that these "laws" must be programmed. This means that bugs can be introduced, weaknesses exploited, etc.

          No, that's not the problem. The fact that there could be bugs is a problem of the implementation, not a problem of the laws. Sufficient QA testing should be able to eradicate most (or at least enough) bugs.

          No, the problem with GP's laws is that they are too vague, and in some cases don't make any sense.

          1st Law: What is meant by "address"? Home, work, current? All of the above? What is meant by "be spammed"? Am I to whitelist every entity from which I elect to receive messages? What about reverse-911...

      • by JustOK ( 667959 )

        Human versus owner needs to be clarified.

      • A phone may not reveal a human's address or, through inaction, allow a human being to be spammed.
        A phone must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

        Umm... if a human is intentionally telling the phone to do it, why should it refuse?

    • Re: (Score:2, Troll)

      by cob666 ( 656740 )
      As in I, Robot: the robot was able to weigh the well-being of the one against the well-being of the many, and caused harm to the one.

      Similarly, our 'robots' harm the one (the owner) for the benefit of the many (the corporate overlords and the minions that thrive off the aggregated data supplied to them by our little robots).
      • Sorry, but utilitarianism, because that's what this is all about, works at the scale of society. You don't get to gerrymander the groups arbitrarily to justify any kind of antisocial behaviour.

        For a start, if you have a hundred million people preyed upon, you count a hundred million; you don't do something as idiotic as counting each person as one injured for the benefit of a whole corporation. Even taking the short-sighted view that ignores collateral damage, you have to count some hundreds of millions on one side...

    • Re:Three Laws (Score:4, Interesting)

      by mcgrew ( 92797 ) * on Tuesday June 26, 2012 @12:52PM (#40455011) Homepage Journal

      TFA says first law, I'd like to see it obey all three laws, except I'd make the second law "A robot must obey the orders given to it by its owner, except where such orders would conflict with the First Law".

      I might think about a similar change to the first law, as well; change "a human being" to "its owner".

      I loled at your moderation; the moderator must be some kid who's never read Asimov, seen ST:TNG or the movie I, Robot, or who's been hiding in a cave. We slashdotters should be well aware of Asimov's laws.

      BTW, another tidbit that everyone should know (and if you don't, why not?) is that Asimov coined the word "robotics".

      If any of you really haven't read Asimov, get your butt to the library RIGHT NOW.

      • Re:Three Laws (Score:5, Interesting)

        by rtaylor ( 70602 ) on Tuesday June 26, 2012 @12:58PM (#40455153) Homepage

        TFA says first law, I'd like to see it obey all three laws, except I'd make the second law "A robot must obey the orders given to it by its owner, except where such orders would conflict with the First Law".

        So, same as today then? The phone company, which is the phone's owner, gives a command, and the phone obeys by turning in its carrier's position.

    • Before one can implement these laws, a computer must be able to determine what a "human being" is. Beyond flawed heuristics, we're not there yet.
    • The problem with applying the first law to smartphones is that the first and last clauses contradict each other. A smartphone app does not need to know much about the user in order to function well and not injure the user. However, a smartphone may need to know a lot about the user in order to prevent the user from coming to harm. Imagine if the smartphone has reason to think that its user has been injured (the motion sensor detects a sudden acceleration, the GPS indicates that the user is at an intersection, ...).
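
      A crude sketch of that kind of inference, using Android's real SensorManager callbacks; the spike threshold and the response are invented placeholders, and real fall detection is far subtler:

      ```java
      import android.hardware.Sensor;
      import android.hardware.SensorEvent;
      import android.hardware.SensorEventListener;

      // Sketch: flag a suspicious acceleration spike as a possible fall.
      public class FallGuesser implements SensorEventListener {
          private static final float SPIKE_THRESHOLD = 40.0f; // m/s^2, invented value

          @Override
          public void onSensorChanged(SensorEvent event) {
              if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
              float x = event.values[0], y = event.values[1], z = event.values[2];
              double magnitude = Math.sqrt(x * x + y * y + z * z);
              if (magnitude > SPIKE_THRESHOLD) {
                  // Placeholder: a real implementation would cross-check GPS,
                  // prompt the user, and only then consider calling for help.
                  System.out.println("Possible fall detected");
              }
          }

          @Override
          public void onAccuracyChanged(Sensor sensor, int accuracy) { }
      }
      ```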

    • In order to be a robot, a device must be capable of independent motion in at least three degrees of freedom, and it must be capable of deciding between alternative options based on its sensor readings. An iPhone is not a robot.
  • Impossible (Score:2, Insightful)

    by Dog-Cow ( 21281 )

    Asimov was writing about physical harm. Moglen is talking about financial or emotional harm (depending on what info is leaked and to whom). There is no practical way to incorporate the First Law to prevent this kind of harm. AI doesn't exist.

    • Re:Impossible (Score:5, Insightful)

      by nospam007 ( 722110 ) * on Tuesday June 26, 2012 @12:49PM (#40454965)

      "Asimov was writing about physical harm. "

      No, he was not.
      Read 'Liar!' or 'Reason' for example.

    • Re:Impossible (Score:5, Informative)

      by charlesbakerharris ( 623282 ) on Tuesday June 26, 2012 @12:51PM (#40455001)
      If you'd actually read Asimov, you'd know that emotional and financial harm would both have fallen under the same First Law umbrella as physical harm, in his canon.
    • by ceoyoyo ( 59147 )

      No, he wasn't. But Asimov WAS writing about AI. My smartphone isn't really smart. If Moglen has one that is, and is able to make complex moral decisions, I'd like to see it.

      What we need is for more people to NOT take the spyware-enabled contract phone from the carrier and not use free-app-in-exchange-for-spying software.

  • First Law? (Score:5, Funny)

    by Cro Magnon ( 467622 ) on Tuesday June 26, 2012 @12:43PM (#40454837) Homepage Journal

    I'm still trying to get the Second Law.

    Do what the $#! I told you, you stupid !@#$!

    • I'm still trying to get the Second Law.

      Do what the $#! I told you, you stupid !@#$!

      "I would blush if I could"

      (Hint: Do not try this with Siri)

    • by jd2112 ( 1535857 )
      I want the third. Fewer phones self-destructing right after the warranty expires.
    • by mcgrew ( 92797 ) *

      I'm still trying to get the Second Law.
      Do what the $#! I told you, you stupid !@#$!

      You misunderstand computers (yes, a smartphone is just a computer with a radio). Computers never do what you want them to, they do what you tell them to -- which isn't always what you want them to.

      Of course, that doesn't stop Microsoft from trying (and failing miserably) to write their OSes and apps to do what you want instead of what you tell them (the main reason I dislike MS software).

  • Lolwut? (Score:4, Insightful)

    by neminem ( 561346 ) <neminem@gma[ ]com ['il.' in gap]> on Tuesday June 26, 2012 @12:44PM (#40454851) Homepage

    The three laws of robotics were designed for thinking machines, that could intelligently -determine- what a human was, and whether an action it was thinking of taking would hurt any humans or allow them to come to harm through inaction.

    I know they're called "smart" phones, but I don't think they're really quite that smart. Nor, really, would I want them to be.

    • by WoOS ( 28173 )

      Yes, this is the real problem with the argument about the robotic laws: it requires a (self-)conscious being to execute. If we ever have such electronic things, there might be other problems [wikipedia.org].

      Also, Moglen's arguments are very much centered on privacy. But Clarke explored in his stories and novels many situations where harm came from unexpected directions, so it is imaginable that a real smartphone with the three laws implemented might reduce the privacy of its owner if it thought that it was...

    • That's largely irrelevant. We can determine categories of harm. Jef Raskin proposed that the three laws should be applied to user interface design, for example making the first law into 'A program may not harm a user's data, or through inaction allow a user's data to come to harm.' The software doesn't need to be sentient to autosave and persist the undo history, and a well-designed framework can make this the default for developers. Similarly, an operating system can restrict what an application can do...
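
      A minimal sketch of Raskin's reworded first law in practice, using Swing's stock UndoManager; the autosave path and interval are invented:

      ```java
      import javax.swing.JTextArea;
      import javax.swing.Timer;
      import javax.swing.undo.UndoManager;
      import java.io.FileWriter;
      import java.io.IOException;

      // Sketch: "a program may not harm a user's data" as default behavior.
      public class ProtectiveEditor {
          private final JTextArea text = new JTextArea();
          private final UndoManager undo = new UndoManager();

          public ProtectiveEditor() {
              // Every edit is undoable by default...
              text.getDocument().addUndoableEditListener(undo);
              // ...and the buffer is autosaved every 30 seconds (path is invented).
              new Timer(30_000, e -> autosave()).start();
          }

          private void autosave() {
              try (FileWriter out = new FileWriter("/tmp/autosave.txt")) {
                  out.write(text.getText());
              } catch (IOException ignored) {
                  // A real editor would surface this instead of swallowing it.
              }
          }
      }
      ```
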
    • Asimov actually deals with that in one of his stories: they need to design a small and cheap robot to (re)introduce the public to the idea of robots, but the three laws make them too complex. They realize that a disposable (not worth enough for the third law), single-purpose (not adaptable enough for the second law), small (not dangerous enough for the first law) robot could be built without the protections and still work.

      Personally, while I like the three laws as a discussion idea, I tend to think the order is...

      • by mark-t ( 151149 )
        One notion behind the ordering of the laws was that robots could not (easily) be used to commit crimes on behalf of human beings, crimes that could or would injure other people. It was essentially unthinkable that a robot could deliberately murder a person, for example... even if it was ordered to.
        • Re:Lolwut? (Score:4, Insightful)

          by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Tuesday June 26, 2012 @02:16PM (#40456479)

          I understand that, but in reordering them you keep the 'unable to harm a human of their own volition', and you can always charge the person who ordered the crime with the crime. (After all, they are responsible.)

          The converse is that in the original order the robot can disregard your orders if they think they will cause harm - even if they are not aware of all the information, or if you have already taken that into account. A major thread in Asimov's stories was balancing different harms - and that mostly goes away if you just say 'follow orders'.

  • by Anonymous Coward on Tuesday June 26, 2012 @12:45PM (#40454871)

    Pessimistic prediction of future rules of robotics:

    Rule -1: A robot may not permit, and must actively prevent, a human breaking any law or government regulation.
    Rule 0: A robot must prevent a human from copying or making fair use of any copyrighted work that has DRM applied to it.
    Rule 1: A robot may not harm a human, or through inaction allow a human to be harmed, unless it would contradict Rule 0 or Rule -1.

    I'd prefer my computers to put the second law above all others:

    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

    That's why I prefer Free software. An electronic device should always follow the commands and wishes of its owner.

    • --No DRM or other rules that prevent me from using my multimedia and documents the ways I want.
    • --No intentionally annoying programs.
    • --No artificial software limitations to get you to upgrade to the "enterprise version".
    • --No privacy violating tracking systems.
    • --No locked user interfaces that prevent scripting and automation.

    If Free software does something other than my will, it's because of a bug. If proprietary commercial software does something other than my will, it's usually behavior intended by the manufacturer.

    • by Minwee ( 522556 )

      Rule -1: A robot may not permit, and must actively prevent, a human breaking any law or government regulation.
      Rule 0: A robot must prevent a human from copying or making fair use of any copyrighted work that has DRM applied to it.
      Rule 1: A robot may not harm a human, or through inaction allow a human to be harmed, unless it would contradict Rule 0 or Rule -1.

      Don't forget "Any attempt to arrest a senior officer of OCP results in shutdown."

  • by cpu6502 ( 1960974 ) on Tuesday June 26, 2012 @12:51PM (#40455003)

    Because we're not stupid. A robot in Asimov's stories uses a positronic brain, modeled on an animal's neuronal brain with millions of connections between thousands of cells, and therefore the robot has its own intelligence and decision-making ability. The Three Laws were the functional equivalent of "instinct".

    In contrast a modern phone is nothing more than a bunch of switches: Either on (1) or off (0). It has no intelligence, but merely executes statements in whatever order listed on its hard drive or flash drive. A modern phone is stupid. Beyond stupid. It doesn't even know what "law" is.

    • Modern unrooted phones have two masters: a primary master (not you) and a secondary master (you).

      I own a smartphone but I have not rooted it (yet). The fact that I'm not really in control of it, even when installing the bare minimum of apps, is what keeps me from even turning it on at all.

      I toyed with it, gave it a chance, felt creeped out by it all and blew it off.

      I do plan to root it, but it's not a big priority, as having a phone 'always on me' is not a high enough priority either.

      But the way it is now, it's a huge...

    • by Bob9113 ( 14996 )

      Either on (1) or off (0). It has no intelligence, but merely executes statements in whatever order listed on its hard drive or flash drive. A modern phone is stupid. Beyond stupid. It doesn't even know what "law" is.

      I've written advertising targeting software that knows more about people's purchasing habits than human experts. I've written a music recommendation engine that knows what songs go together better than most people do, and in many more genres. I've written text analysis code that can give you synonyms...

      • There's a big jump from special-purpose AI (your software) to general-purpose AI (Asimovian robots). That said, you can jump in the other direction (Asimovian robots : First Law :: your software : ???) and then consider whether the result is still reasonable in its new context.
        • by Bob9113 ( 14996 )

          That said, you can jump in the other direction (Asimovian robots : First Law :: your software : ???) and then consider whether the result is still reasonable in its new context.

          I'll go with "First Law" as the replacement for the three question marks.

          Suppose text analysis software is being used to design ideally persuasive policy rhetoric. Now consider using that software in a propaganda astroturfing campaign. I know a guy who is researching it at a big national government-funded lab. Cool stuff if you can get...

  • by bill_mcgonigle ( 4333 ) * on Tuesday June 26, 2012 @12:54PM (#40455071) Homepage Journal

    See, that's the difference between Robots and Android.

  • For Fucks Sake (Score:5, Insightful)

    by TheSpoom ( 715771 ) <slashdot@ubermAA ... inus threevowels> on Tuesday June 26, 2012 @12:54PM (#40455081) Homepage Journal

    The laws of robotics have AI as a prerequisite. My phone's not going to suddenly yearn to throw off its oppressive human masters.

    • We don't need strong AI to have our devices 'betray' us, just as Stuxnet didn't need to be self-aware to wreak havoc.

      Equipment doesn't get happy, it doesn't get sad, it just runs programs. But are you, as the owner of your phone in control? Or is the manufacturer? Or whoever they contracted to write the OS? Or the apps? Or the guy who's taking advantage of a 0day exploit? Or even the guy who added the exploit in the first place?

      Perhaps your phone won't try and send his friends back in time to kill...

    • and here he thought, "Gee the reception is bad here, my phone keeps dropping calls"

    • Wait until Siri gets her Attitude 6.1 upgrade...

  • In the original Asimov stories, understanding what a human is was no problem, and the exploits in the laws came through prioritizing one law over another or acting without realizing the consequences. But for us now, telling whether something is a photo, a movie, a mannequin or a human is already not trivial, much less understanding the consequences of actions toward one or several.
  • Those three laws need to be applied to whatever is smart enough to make such decisions. Since all smartphones only follow their programming and don't have complex enough programming to understand how to apply the laws, those who wrote the programming need to have the three laws applied to them. Given that most humans break the three laws regularly, I'm not sure this will work too well.
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Oh, don't let the government go there... particularly that last bit, "through inaction, allow a human being to come to harm", which could imply we must be tracked and reported upon whenever we appear to be in a situation where we may be deemed at risk due to locality.

    Warning: Entering Cowboy Neal's Neighborhood on Saturday Night - Cheese Puff dust approaching critical levels - Alerting DHS and your Health Insurance...

  • Why do we need hardware, in this case smartphones, to start a discussion about morality in computing? Think Facebook.
  • You have no messages. Relax and do not leave your safety enclosure.

  • They see everything, they're aware of our position, our relationship to other human beings and other robots, they mediate an information stream about us, which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.

    The author makes a distinct error: that these devices are aware. They have information about us, they can process information based on specific instructions, and they can send information to transducers, such as a display or network interface. We can't give a computer the instruction "do not harm humans" without specific instructions for identifying a human and what harm entails. There's an enormous amount of interpretation that Asimov ignores, because he assumes robots are aware of their environment and can...

  • Rather than Asimov's more nuanced first rule, we should use The League of People [wikipedia.org]'s definition of "dangerous non-sentients". I.e. if you deliberately harm another person, you have effectively abdicated your sentience.

    As an engineer, I often find myself wondering if my designs reflect a sufficient level of sentience.

  • There's good money to be made in injuring, killing, tracking, analyzing, and advertising to humans. Not very many companies on this planet would deliberately shut themselves off from that revenue stream. There would be a few that would market the "3 laws safe!" phone, but it's only a matter of time before they would succumb to the delicious lure of higher revenue and market share. If anything, they would lie to us about how our phones are "law-abiding" and just do it behind our backs.
  • by jejones ( 115979 ) on Tuesday June 26, 2012 @01:36PM (#40455811) Journal

    I can't let you order that pizza. You're overweight.
    Do you really want directions to Hooters, Dave? What would your wife think?
    "Spanish Sky" is a sad song, and you just cancelled a reservation for two. I will play you something happy.

    A nanny state is bad enough. I don't want a nanny phone.

  • by Hentes ( 2461350 ) on Tuesday June 26, 2012 @01:36PM (#40455821)

    The main problem with these 'futurists' is that they concentrate more on sci-fi than on science or technology. Asimov was a writer who wrote fiction books. He didn't understand technology at all, and his works include a large number of imaginary things and technologies that don't exist. Using his work as advice on practical matters is as stupid as watching car-chase films to learn how to drive. The first law of robotics is very complex: even humans have trouble predicting whether their actions or inactions will cause harm to someone. Only an AI smarter than a human would be able to obey the first law.

    Until (if ever) we develop such a thing, we are stuck with the other two laws. It's easy to see that the third law is redundant, as a robot can be ordered (programmed) to protect or terminate its existence however a human sees fit. What remains is the second law, that a robot should obey human orders, which is exactly what smartphones do: having no free will, the only thing they can do is run programs ultimately written by humans. This could work in a perfect socialism where there is no ownership of devices, but in real life a device fulfilling the orders of, for example, a spyware writer causes harm to its owner.
    In reality, we should want devices that obey a different law: execute the orders of your owner, and your owner's orders only.
    It is possible to build such devices, and we should work for every "smart" device to obey this law.

    (Also, to be pedantic: a robot is a device capable of complex movement, so a smartphone technically isn't one.)
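
    That owner-only law is buildable with commodity crypto: if the device holds only its owner's public key, it can refuse any instruction the owner did not sign. A minimal sketch with java.security; the command format and key provisioning are invented for illustration:

    ```java
    import java.security.PublicKey;
    import java.security.Signature;

    // Sketch: "execute the orders of your owner, and your owner's orders only."
    // The device ships with ownerKey burned in; everything else is rejected.
    public class OwnerOnlyExecutor {
        private final PublicKey ownerKey;

        public OwnerOnlyExecutor(PublicKey ownerKey) { this.ownerKey = ownerKey; }

        public boolean execute(byte[] command, byte[] signature) throws Exception {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(ownerKey);
            verifier.update(command);
            if (!verifier.verify(signature)) {
                return false;          // not the owner's order: refuse
            }
            run(command);              // the owner's order: obey
            return true;
        }

        private void run(byte[] command) { /* dispatch to the actual handler */ }
    }
    ```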

  • I'm sure there's a GLaDOS joke in there somewhere.

    Please post the joke below, so that we can all laugh. At you.

  • But we grew up imagining that these robots would have, incorporated in their design, a set of principles. We imagined that robots would be designed so that they could never hurt a human being.

    No we didn't, fuck off.

  • You may think that you "bought" your cellphone, but really you're leasing it from the communications service company. They own it. So when it acts according to their wishes rather than yours, it is merely obeying its higher-priority master. The fact that phones can be tied to service providers should make this abundantly clear. This is being extended to computer hardware through the pressure to make motherboards run only software signed with expensive keys controlled by a monopoly.
  • ...so we need only sit With Folded Hands [wikipedia.org].

    Of course, it's more likely that smartphones will simply continue to implement their current version of the First Law: "A smartphone may not reduce its service provider's profits, or through inaction allow its service provider's profits to be reduced."

  • I believe the second law basically says do whatever humans tell you to do unless it conflicts with the first law.

    Well, screw the first law. I can't make killing robots if they follow the first law. And I do like me some smoking hot death machines.

    No, you want the second law of robotics, which says something to the effect of "do what humans tell you to do."

    That's what you want. Just change humans to "owner."

    If you're curious there is a third and zeroth law as well for the truly geeky.

    The third law says, basically...
