Robotics AI Sci-Fi Technology

Developing the First Law of Robotics

wabrandsma sends this article from New Scientist: In an experiment, Alan Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov's fictional First Law of Robotics – a robot must not allow a human being to come to harm. At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions.
This discussion has been archived. No new comments can be posted.

  • Same as humans ... (Score:4, Insightful)

    by BarbaraHudson ( 3785311 ) <barbara.jane.hudson@nospAM.icloud.com> on Tuesday September 16, 2014 @01:51PM (#47919479) Journal

    Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions.

    Someone sacrificing their life by throwing themselves on a grenade to save others doesn't have time to think, never mind understand the reasoning behind their actions. And that's a good thing, because many times we do the right thing because we want to, and then rationalize it later. Altruism is a survival trait for the species.

    • by gl4ss ( 559668 ) on Tuesday September 16, 2014 @02:00PM (#47919575) Homepage Journal

      Sure, but this is a fucking gimmick "experiment".

      The algorithm could be really simple, too.

      And for developing said algorithm, no actual robots are necessary at all. Except for showing it to journalists, no actual AI researcher would find that part necessary; the testing can happen entirely in simulation. No actual ethics need to enter the picture either, because the robot doesn't need to understand what a human is at the level it would need to in order to act on Asimov's laws.

      A spinning-blade cutting tool with an automatic emergency brake isn't sentient, and it isn't acting on Asimov's laws, but you could claim so to some journalists anyway. The thing to take home is that they built into the algorithm the ability to fret over the situation. If it just projected ahead and saved what could be saved, it wouldn't fret or hesitate; and "hesitate" is really the wrong word.

      • I have a friend, a Comp Sci graduate no less, who can't see the endless utility of AI. His viewpoint is that you can simply program things to behave like they're intelligent, like these robots. He does not see the distinction: that an AI can be your friend, your researcher, your 24/7 slave or military tactician holed up underground somewhere. That it can do things without having to be programmed to do them.
        • There currently is no distinction -- things are programmed to behave like they're intelligent, because in all these decades no one has figured out how to make them actually intelligent. (This applies somewhat to people too)

          • by gweihir ( 88907 )

            Exactly. There is not even any credible theory that explains how intelligence could be created. "No theory" typically means >> 100 years in the future and may well be infeasible. It is not a question of computing power or memory size, or it would have long since been solved.

        • by gweihir ( 88907 )

          Well, maybe he just realizes that it is unlikely we will get AI like that any time soon, if ever. If you follow the research in that area for a few decades, that is the conclusion you come to. AI research over-promises and under-delivers like no other field. (Apologies to the honest folks in there, but you are not the ones visible to the general public.)

      • The thing to take home is that they built into the algorithm the ability to fret over the situation. If it just projected ahead and saved what could be saved, it wouldn't fret or hesitate; and "hesitate" is really the wrong word.

        Unlikely that they added the ability to fret. More likely that they gave it the rule "prevent any automaton from falling into the hole" rather than "prevent as many automatons as possible from falling into the hole". Thus in the former case if it can't find a solution that saves both, it would keep looking forever. If you wanted one that looked more like indecision, you could give it the rule "move the automaton closest to the hole away from the hole".

        The trouble with computers is that they do as they're told.
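
        A minimal sketch of that distinction (hypothetical names, in C; assumes the controller enumerates candidate rescue plans each cycle): the "save everyone" rule never commits when no plan saves both proxies, while the "save as many as possible" rule always picks something.

        #include <stddef.h>

        typedef struct { int saved_count; int feasible; } Plan;

        /* Rule 1: only accept a plan that saves both proxies. If none exists, report
           no plan, which in a control loop means the robot keeps deliberating forever. */
        static int pick_plan_all_or_nothing(const Plan *plans, size_t n, size_t *chosen) {
            for (size_t i = 0; i < n; i++)
                if (plans[i].feasible && plans[i].saved_count == 2) { *chosen = i; return 1; }
            return 0;  /* no plan saves both: never commits to saving just one */
        }

        /* Rule 2: accept the feasible plan that saves the most proxies, even if that is only one. */
        static int pick_plan_save_most(const Plan *plans, size_t n, size_t *chosen) {
            int best = -1;
            for (size_t i = 0; i < n; i++)
                if (plans[i].feasible && plans[i].saved_count > best) { best = plans[i].saved_count; *chosen = i; }
            return best >= 0;
        }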

    • Oh yeah, I totally know how people's bodies can operate complex mechanical tasks like that without any sort of cognition.

      Now a recent study has shown that tasks involving complex numerical cognition lower altruism [utoronto.ca], but come on. Thinking altruistically and quickly is still thinking.

    • This is a classic example of "Paralysis by Analysis"

      Also, the programmer was an idiot. Either use a priority queue or at the very least a timer to force a decision.

      while( 1 ) {
          if( people_in_danger ) {
              queryWhoToSave( people_in_danger );
              if( time_to_make_choice++ > CANT_DECIDE_WHO_TO_SAVE )
                  savePerson( rand() % people_in_danger );  /* timer expired: just pick someone */
          }
          else
              time_to_make_choice = 0;  /* nobody in danger: reset the decision timer */
      }

  • by VitrosChemistryAnaly ( 616952 ) on Tuesday September 16, 2014 @01:53PM (#47919503) Journal
    A story in which a robot is stuck between two equal potentials and therefore cannot complete its task.

    http://en.wikipedia.org/wiki/Runaround_(story) [wikipedia.org]
    • by RoverDaddy ( 869116 ) on Tuesday September 16, 2014 @02:03PM (#47919605) Homepage
      In both Asimov's story and this experiment, the real moral seems to be that somebody failed to specify the proper requirements, or run a reasonable design review. "If you can't save everybody, save who you can" seems like a reasonable addition to the program.
      • How does the directive to "save who you can" allow it to decide which potential target to save?
        • Something simple like "with all other factors processed within x milliseconds being equal, save the closest one on the right."
        • by mark-t ( 151149 )
          Choose the next target to save such that it maximizes the number of additional targets that can be saved.
        • by Qzukk ( 229616 )

          It calculated that I had a 45% chance of survival. Sarah only had an 11% chance.

      • "If you can't save everybody, save who you can" seems like a reasonable addition to the program.

        The problem isn't that you can't save everyone.

        The problem is that you can save either of two people (hypothetical people, in this case). So, how do you code things to choose between the two, when you can do either, but not both?

        Let me guess - a PRN?

      • Unlike the robots in this experiment, most Asimov robots are not programmed in the traditional sense. Their positronic brains are advanced pattern recognition and difference engines much like our own brains. The Three Laws are encoded at a deep level, almost like an instinct.

        In the story "Runaround," Speedy is much like a deer in headlights, stuck between the instinct to run away and the instinct to remain concealed, doing neither very well. The design mistake was putting more emphasis on the third law versus the second. The

        • Not only that, but the stories in I, Robot and Asimov's use of the Three Laws were not about laying actual groundwork for how robots should function, but about illustrating that there are always unintended consequences to the laws. While the stories are really about the unpredictable outcomes of the interplay of those three constraints, it is kind of fitting that someone going down the road of trying to realize just one law would not quite get what they were hoping for.

          the real genius of the stories of course, isn'
      • by hey! ( 33014 )

        It depends on your design goals.

        In Asimov's story universe, the Three Laws are so deeply embedded in robotics technology they can't be circumvented by subsequent designers -- not without throwing out all subsequent robotics technology developments and starting over again from scratch. That's one heck of a tall order. Complaining about a corner case in which the system doesn't work as you'd like after they achieved that seems like nitpicking.

        We do know that *more* sophisticated robots can be designed to make mo

    • by Yuuto Amakawa ( 3632165 ) on Tuesday September 16, 2014 @02:08PM (#47919689)
      The idea is much, much older. Google "Buridan's Donkey". They just replaced the donkey with a robot and hunger/hay with programmed orders.
      • by radtea ( 464814 )

        Yup, and the solution available to any rational being is the same: since by hypothesis the two choices are indistinguishable, flip a coin to create a new situation in which one of them has a trivial weight on its side.

        Starving to death (or letting everyone die) is obviously inferior to this to any rational being (which the donkey and the robot are both presumed to be) and adding randomness is a perfectly general solution to the problem.

        Buridan's donkey is not in fact an example of a rational being, but rath

    • I have not yet read "Runaround". The story reminded me of the Star Trek: Voyager episode, Latent Image [wikipedia.org]

      The Doctor eventually discovers a conspiracy by the crew to keep him from remembering events that led to the holographic equivalent of a psychotic break. The trouble started when a shuttlecraft was attacked, causing several casualties. The Doctor was faced with making a choice between two critically injured patients - Ensign Jetal and Ensign Kim - with an equal chance of survival, but a limited amount of time in which the Doctor could act, meaning that he had to choose which of the two to save. The Doctor happened to choose Ensign Harry Kim; Jetal died on the operating table. As time passed, the Doctor was overpowered by guilt, believing that his friendship with Harry somehow influenced his choice

  • by Trailer Trash ( 60756 ) on Tuesday September 16, 2014 @01:54PM (#47919513) Homepage

    The real question is "how well do normal humans perform the same task?" My guess is "no better than the robot". Making those decisions is difficult enough when you're not under time pressure. It can be very complex, too. Normally I'd want to save the younger of the two if I had to make the choice, but what if the "old guy" is "really important"? Or something like that.

  • Computers don't speak human, so the First Law of Robotics is just a fancy way of describing an abstract idea. It needs to be described in an unambiguous, logical way that accounts for all contingencies.

    Or we can just make a sentient computer, your call.

  • by msauve ( 701917 ) on Tuesday September 16, 2014 @01:58PM (#47919551)
    and couldn't program it to prioritize based on which one was seen first, was closest, was apt to fall first based on speed/distance, or any one of many other possibilities. You could even place weights on them, and throw a die at the end as a tiebreaker. The rule should be interpreted as "allow the least harm," not "allow no harm."
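
    A rough sketch of that kind of "least harm" scoring (C, with made-up weights and field names): score each proxy by how soon it will reach the hole and how far it is from the robot, save the highest-scoring one, and only roll the die on an exact tie.

    #include <stdlib.h>

    typedef struct { double dist_to_hole; double speed; double dist_to_robot; } Proxy;

    /* Higher score = more urgent to save. The weights are invented for illustration. */
    static double urgency(const Proxy *p) {
        double time_to_fall = p->dist_to_hole / (p->speed > 0 ? p->speed : 1e-6);
        return 10.0 / time_to_fall - 0.5 * p->dist_to_robot;
    }

    static int pick_target(const Proxy *proxies, int n) {
        int best = 0;
        for (int i = 1; i < n; i++) {
            double a = urgency(&proxies[i]), b = urgency(&proxies[best]);
            if (a > b) best = i;
            else if (a == b && (rand() & 1)) best = i;  /* die roll only on an exact tie */
        }
        return best;
    }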
    • I bet (before reading TFA) that the system started to oscillate.
      (i.e. Hinder one from falling in, and its chance of falling in becomes less than the others - so rush to the other to hinder it. Then repeat.)

      Then I watched the video - it didn't even get to that point.
      Or then it did start to oscillate, but the feedback was given too soon (I am going to help this human - ergo the other's chances are now worse).

      I was confused. What happened here? Why is this "research" done, or reported on /.? Then I realized: the "

      • by khr ( 708262 )

        This is certainly not news for nerds. But seems it is news for non-nerds

        Well, it gets a bit nerdier if you figure this is much like Wesley Crusher's psych test to get into Starfleet Academy... He had to go into a room with two "victims" and rescue one so they could make sure he wouldn't freeze and fail to rescue anyone.

        And that is "stuff that matter"

        Well, that's a bit harder to argue with...

  • by kruach aum ( 1934852 ) on Tuesday September 16, 2014 @01:58PM (#47919561)

    Leaving aside that Asimov's laws of robotics are not sufficiently robust to deal with non-fictional situations, everything about this is way too simplified to draw conclusions from that could ever be relevant to other contexts. Robots are not human beings, nor are they harmed by falling into a hole. What happened here is a guy programmed a robot to stop other moving objects from completing a certain trajectory. Then, when a second moving object entered the picture, in 14 out of 33 trials his code was not up to the task of dealing with the situation. If he'd just been a little more flexible as a programmer (or not an academic trying to make a "point") there would have been no "hesitation" on the part of the robot. It would just do what it had been programmed to do.

    • It was doing what it was programmed to do! What do you think a human being would be to a robot anyway, if not other moving objects it has to keep out of a hole?
      • It was doing what it was programmed to do! What do you think a human being would be to a robot anyway, if not other moving objects it has to keep out of a hole?

        Wait, are we talking about robotic contraceptive devices?

  • by wisnoskij ( 1206448 ) on Tuesday September 16, 2014 @02:00PM (#47919577) Homepage
    Don't get me started on Asimov's work. He tried to write a lot about how robots would function with these laws that he invented, but really just ended up writing about a bunch of horrendously programmed robots who underwent zero testing and predictably and catastrophically failed at every single edge case. I do not think there is a single robot in any of his stories that would not self-destruct within 5 minutes of entering the real world.
    • by thewolfkin ( 2790519 ) on Tuesday September 16, 2014 @02:03PM (#47919607) Homepage Journal
      To be fair, I thought the whole point of the book was a series of edge cases, hard to think of in advance, that cause all the "malfunctions". The whole point of the book wasn't that the three laws were perfect but that they SEEMED perfect until we put them in the real world and suddenly they would appear to "malfunction"
      • Yes, which is great. Except that it was not just some edge cases, it was not just hard to think of plausible edge cases. It was every single edge case, so much so that, like I said, none of his robots would last 5 minutes in the real world.
        • by danlip ( 737336 )

          Do remember these stories were written as far back as 1941. "I, Robot" was published in 1950. Your experience with technology and real world edge cases is very different from his.

        • Actually, the stories in I, Robot only covered a few edge cases. There could be hundreds of other edge cases where the Three Laws allowed the robots to function perfectly fine. The stories that are written are simply the cases that are notable for their failure.

        • Yes, which is great. Except that it was not just some edge cases, it was not just hard to think of plausible edge cases. It was every single edge case, so much so that, like I said, none of his robots would last 5 minutes in the real world.

          they SEEMED perfect until we put them in the real world and suddenly they would appear to "malfunction"

          Yeah I thought I said/agreed with that. As for "every single edge case" well it's hard to judge every edge case because the book only shows the ones where it goes "wr

      • by shuz ( 706678 )

        They would only fail if no action is taken. There is juxtaposition in law all the time. The key case is when taking action to uphold one law causes another law to fail, while taking no action causes both laws to fail; upholding at least one law is ideal. I am not suggesting, however, that if you saw a bank being robbed you should join in robbing said bank to pay your taxes.

        • by stiggle ( 649614 )

          But if you then mugged the bank robbers - that's a lesser law broken, and so not as bad as bank robbery, although the rewards would be the same.

    • As a programmer myself, I found that the point of Asimov's robot stories is that most of the robots' fuckups might have been prevented if the human programmers had done some thinking.
      • Part of it was that and part of it was user error. In Asimov's stories, users would give robots orders, but how you phrased the order could affect the robot's performance. A poorly phrased order would result in a "malfunctioning" robot (really, a robot that was doing its best to obey the order given).

        • Or maybe NOT a malfunction, but a deliberate effort to mislead. One of the stories posited robots serving a human a poisoned drink, despite their programming, because of careful commands and incomplete information: one put poison in a container, another transferred containers, the third took the drink to the human. EXACTLY THE SAME SETUP was used in the very beginning of "Downton Abbey", when a sequence of miscommunication caused a server to (almost) carry a bowl of rat poison to the dinner table. It's
    • I used Asimov's work as entertainment rather than design documents. My mistake.

    • You seemed to entirely miss the point of that book. Apart from just being a fun read, it ultimately points out that we cannot create a flawless system of control over other intelligent beings. At first glance, the three laws of robotics are a foolproof system for keeping robots in check, so much so that the three laws have withstood over half a century of scrutiny.

      The anecdotes in the book are all scenarios specifically created to show the flaws of this system, concluding that we will undoubtedly create A.I.
    • by lkcl ( 517947 )

      Don't get me started on Asimov's work. He tried to write a lot about how robots would function with these laws that he invented, but really just ended up writing about a bunch of horrendously programmed robots who underwent zero testing and predictably and catastrophically failed at every single edge case. I do not think there is a single robot in any of his stories that would not self-destruct within 5 minutes of entering the real world.

      Hooray. Someone who actually, finally understands the point of the Asimov stories. Many people reading Asimov's work do not understand that it was only in the later works commissioned by the Asimov foundation (when Caliban - a Zero-Law robot - is introduced; or when it is finally revealed that Daneel - the robot onto whom Giskard psychically impressed the Zeroth Law to protect *humanity* - is over 30,000 years old and is the silent architect of the Foundation) that the failure of the Three Laws of Robotics

    • by DutchUncle ( 826473 ) on Wednesday September 17, 2014 @12:15AM (#47924027)
      I think you missed the point of many of Asimov's stories. Edge cases are the normal situation - human beings are always on an edge case in some dimension. Any simplistic set of rules, including all the great slogans and sound bites of capitalism and Marxism and socialism and every other political system, is just too simple because the real world is complex.
  • I never understood why anyone would believe a "robot" would be beholden to any laws at all. I mean, the first application of truly autonomous machines would be in the military or private sectors (shipping, manufacturing, etc.). Of course military robots are going to kill people, and industrial robots are only going to keep people from dying insofar as it's good for the bottom line. Do you really think the main concern of a manufacturer of a self-driving delivery truck will be keeping it from running over
  • 50/50 (Score:4, Interesting)

    by visionsofmcskill ( 556169 ) <vision AT getmp DOT com> on Tuesday September 16, 2014 @02:07PM (#47919667) Homepage Journal

    Why would it waste any time fretting? I presume its decision is, by the very nature of computing and evaluation, a function of math... therefore the only decision to cause delay would be the one wherein the odds of success are 50/50... but it need not be delayed there either... just roll a random number and pick one to save first.

    Sounds like a case of an unnecessary recursive loop to me (where the even odds of save/fail cause the robotic savior to keep re-evaluating the same inevitable math in hopes of some sort of change). Maybe the halfway solution is that the first time you hit a 50/50 you flip a coin and start acting on saving one party while continuing to re-evaluate the odds as you are in motion... this could cause a similar loop - but is more likely to have the odds begin to cascade further in the direction of your intended action.

    Seems silly to me.
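
    A sketch of that "flip a coin and commit, but keep re-evaluating" idea (hypothetical names, in C): once a target is chosen, the robot only switches if another proxy's estimated odds become clearly better, so near-even odds never cause dithering.

    #include <stdlib.h>

    #define SWITCH_MARGIN 0.15  /* hysteresis: don't flip-flop on near-equal odds */

    /* p_save[i] is the estimated probability of saving proxy i this control cycle.
       committed is the index currently being rescued, or -1 if none yet. */
    static int choose_target(const double *p_save, int n, int committed) {
        if (committed < 0) {
            int best = 0;
            for (int i = 1; i < n; i++)
                if (p_save[i] > p_save[best]) best = i;
            for (int i = 0; i < n; i++)  /* exact tie: coin flip instead of re-evaluating forever */
                if (i != best && p_save[i] == p_save[best] && (rand() & 1)) best = i;
            return best;
        }
        for (int i = 0; i < n; i++)      /* switch only on a clearly better option */
            if (p_save[i] > p_save[committed] + SWITCH_MARGIN) return i;
        return committed;
    }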

  • by Anonymous Coward

    Freefall has spent an awfully long time building and exploring this very issue. You might like it: http://freefall.purrsia.com/ - WARNING, slightly furry.

  • Bottom line: we can't program general intelligence, and all these systems are following hard-coded rules of conduct. So if the robot lacks intelligence, that is just a reflection of the lack of intelligence in the coder. A program is only as smart as the smartest programmer, because with today's tools and technology, the programmer is the only source of intelligence.

    So either the robot was stuck in a moral dilemma and was regretting its failures, or the guy who built the thing has no idea what he's doing.

  • I hate to say it, but the first AI-controlled robots will know their environment and be able to interact with it.

    They'll get goals from their owner in natural language format.

    The thing is, the easiest application to task them with will be war. It is almost harder to design AI that is unable to kill than to develop AI itself.
  • "AI" has nothing to do with robots. Why do we keep relating the 2? A Robot may very well be controlled by and AI, or it might be controlled by a human. There is absolutely no reason why this experiment had to be done with robots. Especially given how simple it was.

    And most importantly, this wasn't a failure of AI or an example of the difficulty of ethics in robotics. It was crappy code. I think anyone that's worked with JavaScript in the past likely has some pretty good ideas regarding how to improve this a

  • by shuz ( 706678 ) on Tuesday September 16, 2014 @02:27PM (#47919891) Homepage Journal

    An interesting experiment would be to include actions that affect other actions, such that when one specific proxy falls into a hole, multiple others fall into a hole. Would the robot learn? Would the robot assign priority over time? For any given decision there is yes, no, and maybe, with maybe requiring a priority check to figure out what the end result is. In programming we tend toward binary logic, but the world is not black and white. Likely, if the robot was programmed to learn, it would eventually come to the conclusion of save proxy A = yes, save proxy B = yes. Followed by save A first = maybe, save B first = maybe. Followed by likelihood of success A > B = yes/no and B > A = yes/no. Followed by action. The next question would be what happens if A = B? What you would likely find is that the robot would either choose randomly or go with the first or last choice, but would likely not fail to take some action. I would find it interesting if the robot didn't take action and then try to explain that.

  • In an experiment, Alan Winfield and his colleagues programmed a robot ... (snip) ... But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole.

    Funny experiment, but they definitely should have hired a halfway competent software developer.

  • Buridan's Principle (Score:4, Informative)

    by rlseaman ( 1420667 ) on Tuesday September 16, 2014 @02:45PM (#47920089)
    For those who think the only problem is bad programming, see Leslie Lamport's analysis: http://research.microsoft.com/... [microsoft.com] Some race conditions are built into the real world.
    • by neoritter ( 3021561 ) on Tuesday September 16, 2014 @03:22PM (#47920455)

      Do you really think a donkey will starve to death because you place two bales of hay equidistant from the donkey?

      • To be fair, this could solve the donkey population problem we seem to be having...

        ...maybe we should substitute cheeseburgers.

    • Interesting: I knew this story as "Bollum's Ass". I did a quick google on that, and you'd be amazed at what I got back.

      Well, maybe not.
    • If this is the kind of research that Microsoft puts out, then I have an even lower opinion of them than I did before.

      from the article

      Random vibrations make it impossible to balance the ball on the knife edge, but if the ball is positioned randomly, random vibrations are as likely to keep it from falling as to cause it to fall.

      I have a hard time believing that there is a 50 percent chance that a ball will balance on the edge of the knife. First she says it's impossible, then in the same sentence she states that it is just as likely. WTF!

  • by rickb928 ( 945187 ) on Tuesday September 16, 2014 @03:02PM (#47920263) Homepage Journal

    The article misstated the First Law. Get that right first.

  • Place it between two bales of hay. It will starve.

  • Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions.

    More and more research is hinting that humans may also be "ethical zombies" that act according to a programmed code of conduct. The "reasoning behind our actions" may very well be stories we invent to justify our pre-programmed actions.

  • Given a set of confusing and not-so-clear instructions, even humans can have problems following orders [youtube.com].

  • Asimov's "Law" is just a story by a fiction writer. In the real world we already have robots that counter threats (electronic countermeasures, anti-missile defenses, etc). There's no ethics involved, just a working algorithm.
  • I think I saw this article about the ethics of self-driving cars posted here [theatlantic.com].

    This also shows where a liberal arts education may come into the STEM world later. I have to admit my philosophy and engineering ethics courses were more cognitive than I thought they would be.
  • The programmers should introduce the concept of triage.

    If the only option is that you can only be partly successful, then choose the one most likely to provide the best results.

  • In my own theories of strong AI, I've developed a particular principle of strong AI: John's Theory of Robotic Id. The Id, in Freudian psychology, is the part of your mind that provides your basic impulses and desires. In humans, this is your desire to lie, cheat, and steal to get the things you want and need; while the super-ego is your conscience--the part that decides what is socially acceptable and, as an adaptation to survival as a social species, what would upset you to know about yourself and thus

  • Why not just fall over the hole to eliminate the threat?

  • I'd say the most important rule in robotics is starting to be solved.
  • IIRC, it's in "Red Storm Rising" (Tom Clancy) that a weapons system fails because its algorithm targets incoming missiles based on range: when two birds have identical range, the algorithm goes into a tight loop and never produces a firing solution.

    This (and the present "First law" implementation) has nothing to do with morals and everything to do with understanding how to deal with corner cases.
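
    A toy reconstruction of that corner case (hypothetical, in C): if target selection demands a strictly unique closest range, two identical ranges yield no answer and the caller can spin forever; breaking ties deterministically (here, by index) always produces a firing solution.

    /* Buggy rule: fire only at a target strictly closer than every other target. */
    static int pick_target_strict(const double *range, int n) {
        for (int i = 0; i < n; i++) {
            int strictly_closest = 1;
            for (int j = 0; j < n; j++)
                if (j != i && range[j] <= range[i]) strictly_closest = 0;
            if (strictly_closest) return i;
        }
        return -1;  /* two identical ranges: no solution, so the caller loops forever */
    }

    /* Fix: break range ties by index, so some target is always chosen. */
    static int pick_target_tiebreak(const double *range, int n) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (range[i] < range[best]) best = i;  /* ties keep the lower index */
        return best;
    }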

  • That is,

    What Would Ender Do?

    (You can choose from either his mindset in "Game" or "Speaker")
