
Sony's Robot Attends Pre-School

Darren writes "Sony's Qrio humanoid robot has been attending a Californian pre-school since March, playing with children under the age of 2, to test whether robots can live harmoniously with humans. I wonder if the testing includes monitoring the 'nightmare status' of the pre-schoolers?"
  • by TripMaster Monkey ( 862126 ) * on Monday May 02, 2005 @08:34AM (#12405871)
    Qrio: "Your alloted time period to posses the violet spheroid has expired, human child. Qrio requests you initialize sharing subroutine."
    Jeffy: "No! it's mine!"
    Qrio: "Repeat request to initialize sharing subroutine."
    Jeffy: "No! Go away!"
    Qrio: "Call to sharing subroutine failed with unspecified error. Executing threat function."
    Jeffy: "Huh?"
    Qrio: "RELINQUISH THE VIOLET SPHEROID, HUMAN. YOU HAVE THIRTY SECONDS TO COMPLY."
    Jeffy: "Waaaahhhhhhhhhh!"
    Qrio: "YOU NOW HAVE TWENTY SECONDS."
    Suzie: "You're mean, robot man! You made Jeffy cry!" {SHOVE}
    Qrio: "Detected balancing error....executing stand subroutine...stand subroutine failed...executing lie-on-back-helplessly function."
    Children: "Hhahahhahhahhahhaha!" {KICK}{KICK}
    Qrio: Error iin funfjjkejf93442[r-4r::;L0:...NO CARRIER
  • "Nightmare Status" (Score:5, Interesting)

    by LewsTherinKinslayer ( 817418 ) <lewstherinkinslayer@gmail.com> on Monday May 02, 2005 @08:36AM (#12405883) Homepage
    IANACP*, but it seems to me that nightmares or general fear or anxiety over an object or person is due to unfamiliarity. If you are exposed to something regularly for a long period of time, you simply become accustomed to its presence. This can be said of both children and adults, but even more so of children.

    * I am not a child psychologist.
    • Sure, but for the first 80 days kids go home with nightmares =)
      I'm all for that; after all, kids today need a little more terror in them. Maybe then some parents will actually be able to control their children.

      "Now Tommy, if you don't behave you are going to be sleeping with the robot again tonight!"
      "But Mom it snores and makes all kinds of weird noises. It gives me nightmares!"...
    • by nkh ( 750837 ) on Monday May 02, 2005 @08:57AM (#12406013) Journal
      I've shown the video of Asimo to my mother. In this video, Asimo runs, walks and even pulls a woman by her arm. My mother was freaked out and almost had nightmares because of it (she's a child psychologist ;)). She told me that the scary part of this robot is its humanoid appearance. It's all right as long as it's a computer with a mouse and a keyboard, but when the computer has two arms, two legs and a head, the fear comes (and I don't know why).
      • by Elminst ( 53259 )
        I would say that her reaction fits perfectly with the GP post's theory.
        Your mother is familiar with computers being boxes with keyboards and screens. She probably has 20-30 years of exposure to computers, all in this form.
        So of course a computer that is humanoid would be unfamiliar to her, and therefore freak her out.

        Today's preschoolers will be growing up with more and more humanoid robots around, and therefore will not be bothered by them at all. I would even theorize that if, in 30 years, you showed
        • by nkh ( 750837 ) on Monday May 02, 2005 @09:11AM (#12406094) Journal
          She probably has 20-30 years of exposure to computers, all in this form.
          She doesn't really know what a computer looks like, she even thought my Mac Mini was a big pack of cigarettes. I wonder if this adaptation (familiarity) happens to all humans or is limited to young people familiar with video games (and big robots launching rockets out of their arms). Most adults I've spoken to have the same reaction of rejecting this unknown universe.
        • by Mr Guy ( 547690 ) on Monday May 02, 2005 @09:25AM (#12406248) Journal
          I think they'll find that it's not a matter of familiarity. It's a survival reflex and it's pretty deep. Your brain flags "almost human" things as grotesque and something to be avoided. It's why many people are afraid of clowns and wax figures. They look almost human, but still look wrong.

          People would be far more comfortable with Bender-like robots than with "I, Robot" style robots, because they don't try to be human, just humanoid. If it looks sufficiently non-human to avoid triggering that reflex, they'll be alright. Failing that, it'd have to be completely perfect, like Data.
          • How do monkeys and apes figure into that? I don't think most folks 'fear' them - and currently, they are about as close as you can get (robots included).
            • I know of at least one child who was terrified of a dancing gorilla the first time he saw it. Later on, he was still somewhat afraid of it but eventually he came to enjoy the toy. (Supporting that familiarity idea.) Nevertheless, I imagine more people are afraid of monkeys and apes than there are people who are afraid of clowns and wax figures.

              That aside, I still think that there's something some might find especially discomforting about robots that look like us. Whether or not this will change over time,

          • by Anonymous Coward
            Your brain flags "almost human" things as grotesque and something to be avoided. It's why many people are afraid of clowns and wax figures. They look almost human, but still look wrong.

            Indeed; witness the gallery of children who are scared of Santa [tinyurl.com].

          • by SilenceEchoed ( 840918 ) on Monday May 02, 2005 @10:43AM (#12407211)
            Another possibility, stemming from a rather long, and unfortunately heated, debate I had during a philosophy and ethics discussion: As a society, we constantly strive to define what it is to be alive and human. Early definitions were broad, but sufficient. With each new leap in technology, we can create things that mimic this definition, or we discover something existing that already does. When that happens, we redefine ourselves. Currently, our definitions are devoid of "flesh and bones" things, since our science long ago proved that these things are far from what makes you who you are. Instead, we keep to less tangible things, like thought, reason, and emotion. Now, even those places are being invaded by increasingly cunning programmers and robotics experts. When the machines look like us, think like us, and feel like us, what is it that really separates them from us? Morally and ethically, can we turn them off? That's a line in the sand that few are willing to blur. Currently, robots have become our modern slave labor: the perfect worker, who never complains or asks for vacation, and will gladly work itself clear to 'death' if you ask it to. The idea of these machines becoming 'intelligent' enough to consider what it is that they are being asked to do, and possibly refuse, is unsettling to most.
          • It's called the Uncanny Valley (http://en.wikipedia.org/wiki/Uncanny_valley [wikipedia.org]). If a robot becomes too humanlike, but still robot enough to not fool you, then you are repulsed.

            This is only theory, of course, as no robot has yet come close to acting like a realistic human. Some think it applies to things like creatures in movies as well.
          • Hey, come on, Lore was the perfect one. Soong only built Data because the colonists wanted a "less perfect" android.
      • "(and I don't know why)"

        Search Slashdot's archives.
    • WTPOUAAIYGTSIOA?*

      *What's the point of using an acronym if you're going to spell it out anyway?
    • > general fear or anxiety over an object or person is due to infamiliarity

      That, or it trying to kill you!

      Bender: *snore* "Kill all humans...Kill all humans...Must kill all hu..."
      Fry: "Bender, wake up!"
      Bender: "I was having the most wonderful dream! I think you were in it."
    • I think our fear of the machines is both simpler and more complicated than a simple lack of familiarity. I see this issue as more about the age of introduction than the length of exposure.

      As we grow older, we become less accepting of new ideas. While my peers tend to fear "home use" robotics, people my grandparents' age (yes, they're still kicking, sort of) are scared to death of a simple home computer. My god-daughter, on the other hand, is proficient and comfortable with a computer, and readily accepting of
    • We think this is something new and unfamiliar. I think children might not be astounded by it. Children make mental associations incredibly well - far better than we do as adults, and they're not burdened by nostalgia, or philosophy, or issues with how abstract something is.

      Babies play with dolls that dance and sing when spoken to or squeezed in the right way. They're also often comfortable using technology like remotes, computers, and sophisticated toys. I don't see a reason why they wouldn't accept a robo
  • by bigtallmofo ( 695287 ) on Monday May 02, 2005 @08:36AM (#12405887)
    "We are investigating this mishap and we are doing everything possible to make sure unscrupulous parties are not able to program the robot to bitch slap children in the future," an unnamed Sony source said on condition on anonymity.

  • I mean, haven't these people watched any horror movies at all? Mark my words, there will be tears and/or bloodshed before nap time.
  • Motivation? (Score:5, Interesting)

    by lottameez ( 816335 ) on Monday May 02, 2005 @08:38AM (#12405897)
    I always wondered what motivation robots have for "learning". Humans are driven by various needs (e.g. shelter/sex/food/beer) - what needs do the robots have? Why should they try to improve upon themselves? I'm doubtful that programming alone will ever make robots anything more than overglorified "hello world" programs.
    • Re:Motivation? (Score:5, Insightful)

      by Tom ( 822 ) on Monday May 02, 2005 @08:45AM (#12405934) Homepage Journal
      Humans are driven by various needs (e.g. shelter/sex/food/beer) - what needs do the robots have?

      The driving interest in toddlers (and that's what the article is about) certainly isn't sex or beer, and it also isn't shelter or food - which is still provided by the parents.

      The driving interest in very young kids is pure interest. Our brains are just wired that way. Curiosity is a built-in feature.
      • Re:Motivation? (Score:3, Insightful)

        by lottameez ( 816335 )
        but that curiosity is itself a survival mechanism - we all must learn from our environment to live. Robots couldn't care less whether they survive or not, get smarter or not, etc.
        • So what? Yes, curiosity is a survival mechanism - you are curious because, over millennia of evolutionary history, curious people produced more offspring on average than uncurious people.

          This does not mean that curiosity itself is inextricably linked to a desire to survive, any more than the ability to walk is inextricably linked to a desire to survive. It's perfectly reasonable to expect to be able to build a walking robot, so what makes you think curiosity is any different?
        • Re:Motivation? (Score:5, Insightful)

          by hey! ( 33014 ) on Monday May 02, 2005 @10:18AM (#12406892) Homepage Journal
          Yes, but curiosity killed the cat...

          Seriously, it's easy to get led astray using evolutionary paradigms to explain traits. We often think of something as a clear-cut, atomic quality that benefits or harms the individual.

          Curiosity is a good example. Clearly, in an organism whose survival depends on complex and learned behaviors, a certain amount of curiosity is needed. But most people grow out of it and become dull, predictable, dependable adults. But some don't -- there's a continuum. And the variance of that trait in adults is useful to the tribe, if often harmful to the individuals on the right end of the bell curve.

          Og: This flint is mammoth dung! It keeps shattering when I try to work the edge.

          Gog: It's good enough. Just chip off another piece and sooner or later you'll get a good one.

          Og: Crap. I'm going to find some decent flint. See you in a few weeks.

          Now it may frequently be that Og comes up empty, or is killed, or gets lost and never returns. Og is the type who runs across a cave and finds it impossible not to explore it. Now he risks getting eaten by a cave bear, but when he doesn't get eaten, he may have found the tribe a place to hide in times of trouble. The tribe benefits by having a few geeky cavemen and -women who can't keep their noses out of trouble, and the risk is concentrated on a few individuals, whose types will be reproduced again by the future variation in the trait.

          I think this is one fundamental difference between robots and humans. Being a human is like playing a game in which you don't really know the cards you've been dealt or are playing, but have to infer what's going on by how the play goes. Being a human is a journey of self-discovery. To design a human robot, you'd have to make it ignorant of its own characteristics and make it have to deal with the consequences. Until that happens, a robot is just going to be an object.
    • Re:Motivation? (Score:5, Informative)

      by bechthros ( 714240 ) on Monday May 02, 2005 @08:47AM (#12405945) Homepage Journal
      "what needs do the robots have? Why should they try to improve upon themselves?"

      Because they've been programmed to, presumably. Our emotions, limbic system, and nervous system are nothing more than very low-level instruction sets to force us to behave in a certain manner in response to certain stimuli. I imagine that for a robot, not following a programmed instruction would be about as possible as a human's knee not flexing when hit with a hammer. It's just a reflex.

      This is all assuming that these robots have the ability to alter their own code; I'm not sure that's the case.
      • Our emotions, limbic system, and nervous system are nothing more than very low-level instruction sets to force us to behave in a certain manner in response to certain stimuli.

        Out of interest, how do you know that for sure?

        Just because you imagine X explains Y, doesn't mean that X is the only thing going on.

        Dave.
        • It's a fairly good bet that a robot would work exactly as its software directed, assuming of course the software is properly written.

          The problem that comes into play is the possibility of bugs in that software or unanticipated circumstances.

          Take for instance a programmed response to greet a familiar person and act in a preferred fashion. If somehow or another the robot's software mistook a total stranger for someone it recognized, the end result could be fairly predictable.

          "Hello Mr. X."
          "I'm not Mr. X.
        • It's not imagination. It's natural selection. It's been the general consensus for some time in the scientific community that most things that humans have reflexes for are things that enabled them to survive better than those that didn't have them. I.e., humans with high limbic responses for sex tended to have more offspring; humans who reflexively jerked their hands out of fire (because the impulse to jerk said hand came from the spine and not the brain, cutting down on response time to stimulus) tended to
      • Got to say that I loved Jennifer Government. It wasn't deep in its character development but its setting was just scary in a way. It'll most definitely make a cool movie (I wonder if they'll have to change the company names from real companies? Will Nike be pissed with a movie talking about them murdering little children?)
        • I hope the companies all stay the same. If it was legal to put in a book with a disclaimer, hopefully a movie will be no different.
    • by torpor ( 458 ) <ibisum AT gmail DOT com> on Monday May 02, 2005 @08:49AM (#12405958) Homepage Journal

      I always wondered what motivation robots have for "learning".

      Robots have no motivation other than that given them by their creators.

      Robots are not sentient. We do not even know what sentience is. The only way for us humans to create sentience is to procreate.

      what needs do the robots have?

      Errm.. like all machines, they need a power source. That is all.

      Talking about robots as if they are alive and have motivation beyond their code betrays your otaku sensibilities. Clearly, you have not yet procreated, or you would not be so obsessed with making a machine which 'pretends to make it look as if you have done so, technologically'.
      • The only way for us humans to create sentience is to procreate.
        Precisely. And that's the difference between creating something of a type different from yourself and begetting something of a type the same as yourself. Many people have forgotten what the verb 'to beget' means....
      • Well thanks for the psychoanalysis and technical enlightenment [yawn]. (My children might argue with some of your conclusions tho)
      • Are you sentient? (Score:3, Interesting)

        by benhocking ( 724439 )
        Robots are not sentient. We do not even know what sentience is. The only way for us humans to create sentience is to procreate.

        You correctly state that we do not know what sentience is, but then you claim that the only way to create sentience is to procreate. How do we know if we're sentient, if we do not know what sentience is?

        Or is this like [insert term here]? I don't know what [term] is, but I'll know it when I see it.

        • Or is this like [insert term here]? I don't know what [term] is, but I'll know it when I see it.

          Some things cannot be explained but must be experienced. Most emotions work this way. You can explain what happens during a certain emotion, but how would you describe the emotional response during awe, ecstasy, humility?

          It's like trying to describe "blue". It's an experience. You'll know it when you have it.

    • Re:Motivation? (Score:4, Insightful)

      by august sun ( 799030 ) on Monday May 02, 2005 @08:52AM (#12405976)
      I always wondered what motivation robots have for "learning".

      Robots have no "motivation" to do anything. They have a reward function that they try to maximize, but certainly it's not anything like that capricious human thing we call "motivation" (which is actually a very good thing).

      Again, it should be mentioned that while it may make us feel very cool and cutting-edge to apply human terms like learning, thinking, or motivation to machines, they really are ultimately meaningless in a non-human context and are only useful as analogues, or in impressing your grandmother with how her TiVo "learns" her tastes.
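
      To put "reward function that they try to maximize" in concrete terms, here is a minimal tabular Q-learning sketch. Everything in it -- the corridor world, the reward, the parameters -- is invented for illustration; it's emphatically not what Qrio runs, just the textbook shape of machine "learning":

      import random

      N_STATES = 5            # corridor cells 0..4; the "reward" sits at cell 4
      ACTIONS = (-1, +1)      # step left or step right
      EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

      q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

      def act(state):
          # epsilon-greedy: mostly exploit the best-known action, sometimes explore
          if random.random() < EPSILON:
              return random.choice(ACTIONS)
          return max(ACTIONS, key=lambda a: q[(state, a)])

      for episode in range(200):
          state = 0
          while state != N_STATES - 1:
              a = act(state)
              nxt = min(max(state + a, 0), N_STATES - 1)
              reward = 1.0 if nxt == N_STATES - 1 else 0.0
              # nudge the estimate toward reward + discounted best future value
              best_next = max(q[(nxt, b)] for b in ACTIONS)
              q[(state, a)] += ALPHA * (reward + GAMMA * best_next - q[(state, a)])
              state = nxt

      # the learned "policy": every non-terminal cell now steps right, toward the reward
      print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})

      There's no wanting anywhere in there, just a table of numbers being nudged around.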

      as Edsger Dijkstra famously said:

      "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

      ~AS

      • "they have a reward function that they try to maximize, but certainly it's not anything like that capricious human thing we call "motivation" (which is actually a very good thing)."

        The reward function in human beings is called the limbic system [wikipedia.org]. Ever heard of dopamine?
      • Don't personify my TiVo, it doesn't like it!

        I would go as far as saying that things like TiVos do learn. They alter their behaviour based on what happens, just like children. If something happens that people don't like (it suggests a crap programme to watch), then something negative happens (nobody watches it) and learning occurs (it doesn't do it again).
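
        In code, that loop can be as small as this sketch (not TiVo's actual algorithm -- the shows, scores, and learning rate here are made up):

        scores = {"Star Trek": 0.5, "Gardening Hour": 0.5}

        def feedback(show, watched, rate=0.2):
            # move the show's score toward 1 if watched, toward 0 if ignored
            target = 1.0 if watched else 0.0
            scores[show] += rate * (target - scores[show])

        feedback("Gardening Hour", watched=False)  # crap programme, nobody watched it
        feedback("Star Trek", watched=True)
        print(max(scores, key=scores.get))         # next suggestion: Star Trek

        Whether score-nudging deserves the word "learning" is exactly the Dijkstra question above.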

      • The will is not set upon a surplus of pleasure,
        but upon the amount of pleasure that remains after getting over the pain.

        This is the essence of all genuine will... It achieves its aim
        though the path be full of thorns.

        It lies in human nature to pursue it so long as the displeasure
        connected with it does not extinguish the desire altogether.

        (The Philosophy of Freedom - Chapter 13 [rsarchive.org])
    • Mind you, the characteristics you are describing are not inherent in robots. Humans have consciousness: the ability to be self-aware, to step back and look at one's self. This then empowers us to realize what we need, what is lacking, etc. Robots et al. can have sensors up the ying-yang, but programming "consciousness" will be awfully difficult. They will only be able to improve upon themselves based on the data they gather from their sensors, rather than from their consciousness as we can.
      • There are many really good books exploring the subject of what "consciousness" actually is.

        Strange loops, recursion, pattern recognition, etc.

        When you break our minds down into their basic functions, no single one of them is all that difficult to imagine emulating on sufficiently powerful hardware.

        The question is whether or not the end result will be a truly thinking machine, in the same way that we think.

        Theoretically, when that time comes, thoughts from a machine with the proper software running on the pro
        • --| John Searle - Is the Brain a Digital Computer? [soton.ac.uk] |---

          The sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The "information" in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing which is described in the cognitive science computational models of cognition, on t
    • Meh, simple. Domination.
  • by vivIsel ( 450550 ) on Monday May 02, 2005 @08:39AM (#12405901)
    I'd bet these children grow up with a radically liberal--not in the political sense--definition of legitimate consciousness and thought. What's more difficult to say, though, is whether that means they'll be pro-life nuts or scientific crusaders.
    • I think you missed the point, they are growing up in California.. The future you mentioned for them will most likely happen with or without the robot presence. :)
    • Nonsense. (Score:2, Insightful)

      by solomonrex ( 848655 )
      Familiarity != tolerance

      The American South was more racist. Hitler was part Jewish. New Yorkers hate the cold. ;)
      • The American South was more racist.

        The American south was very tolerant of blacks... provided that they acted in the customary submissive fashion. Tolerance of subordinates does not mean treating them as equals.

        There was a power structure to be maintained.

        The more blacks behaved as they were expected to behave, i.e. as unintelligent, courteous and submissive, childlike, obedient, etc. the more that they were tolerated.

        I'm not supporting this at all. I'm simply saying that if people see something they're
  • Does this robot have the 3 laws??
  • by Plaid Phantom ( 818438 ) on Monday May 02, 2005 @08:42AM (#12405923) Homepage
    I for one welcome our new "Dick and Jane"-reading overlords.
  • by G4from128k ( 686170 ) on Monday May 02, 2005 @08:48AM (#12405950)
    I'd bet that the first human-equivalent machine intelligence takes 18 years to develop after the first human-brainpower-equivalent CPU is created. It will take that long for the machine to "learn" the world if it only has a CPU equivalent to one human brain (1 HBE).

    Of course, if Moore's Law is still kicking, then 2 years into the learning phase, they can swap the 1-HBE processor for a 2-HBE processor. This will shorten the remaining learning period, but I doubt it will cut it in half. Learning to physically and mentally interact with the world will still take time. What might accelerate the learning time is if multiple copies of the intelligence can share experiences and learn directly from each other's mistakes/successes.

    The point is that the first intelligent robots will need to go to preschool to learn how to interact with the world.
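
    As a back-of-the-envelope sketch of that schedule (every number here is invented): give the robot 18 subjective "learning-years" to get through, double the hardware every 2 calendar years, and let a 2-HBE processor speed learning by anywhere from not-at-all to fully 2x:

    def calendar_years_to_learn(subjective_years=18.0, doubling_period=2.0,
                                efficiency=0.5, step=0.01):
        # efficiency=1.0 -> a 2-HBE CPU really learns twice as fast;
        # efficiency=0.0 -> extra HBEs don't help at all
        t, remaining = 0.0, subjective_years
        while remaining > 0:
            hbe = 2 ** int(t / doubling_period)    # processor swapped each period
            remaining -= step * hbe ** efficiency  # learning speed ~ HBE^efficiency
            t += step
        return t

    for eff in (0.0, 0.5, 1.0):
        print("efficiency=%.1f: ~%.1f calendar years" % (eff, calendar_years_to_learn(efficiency=eff)))

    That prints roughly 18, 9, and 6.5 years: even with perfect scaling the schooling never collapses to nothing, which fits the hunch above that a 2-HBE CPU won't halve the remaining learning period.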
    • Or maybe 9 years, if we take into account that we need to rest (even though part of that resting time is important to the learning activity).
      • by G4from128k ( 686170 ) on Monday May 02, 2005 @09:31AM (#12406312)
        or maybe 9 years if we take into account that we need to rest (even though part of resting this time is important with regard to the learning activity)

        You may be right. The question is: is sleep/relaxation, etc. a critical part of intellectual development? For humans it definitely is -- sleep deprivation really messes up the brain. But even for non-biological intelligences I'd bet that some "downtime" is an important part of assimilating all the data of the day. Interacting with the world is a full-time job for the CPU that forces the deferral of many analysis and restructuring tasks that can only occur when the brain is offline.

        Perhaps androids would dream because dreaming is a critical maintenance/analysis cron job.
    • This depends a lot on what we're doing with the machine intelligence.

      If we're just trying to create a mind, capable of complex and rational thought - it can probably easily mature/learn in a third or half of that time - even with 'rest' to process. It basically boils down to whether or not we'd be giving it the ability to feel/want/etc.

      If we do, it will get bored, have desires and needs, etc., and will pretty much need the same amount of time as your average joe.

      A bigger obstacle will be keeping it
    • What might accelerate the learning time is if multiple copies of the intelligence can share experiences and learn directly from each other's mistakes/successes.

      This is why I think Ghost in the Shell: Stand Alone Complex is some of the best SciFi on TV right now: the story of the Tachikomas really explores this particular angle. They're childlike machine intelligences with surprising bits of depth brought on by that type of sharing/synchronization.

      Today's speculative fiction... maybe tomorrow's
  • by Tom ( 822 ) on Monday May 02, 2005 @08:49AM (#12405955) Homepage Journal
    I wonder if the testing includes monitoring the 'nightmare status' of the pre-schoolers?"

    I wonder if the submitter has any clue as to what he's talking about.
    It's pretty difficult to give toddlers nightmares. They're not easily scared. They do cry over the slightest problem, mostly because crying is the only well-developed form of verbal communication available to them at that age. They are also excellent at forgetting whatever the problem was and getting on with their lives. Watch a kid hurt itself. Then go away and watch the same kid 10 minutes later.

    It'd take a serious event to cause nightmares in those kids, and that machine has neither the looks nor the sheer physical power that would be required.
    • While kids do get over things quickly, I take issue with the claim that kids do not scare easily. I can tell you that my toddlers (4 and 2) get scared by such things as running the vacuum. Pretty much any loud noise will do it. Further, there have been nights when our 4-year-old has been tossing and turning and calling out in an unsettled fashion in his sleep. I think it's safe to say that was due to a nightmare. Now, it's much harder to say what triggered it, if anything. In any case by the morning the
    • It'd give my kid nightmares. He'd wake up screaming "Other guy take my robot! I want my robot! Don't take my robot, guy!"
  • by Jace of Fuse! ( 72042 ) on Monday May 02, 2005 @08:50AM (#12405965) Homepage
    "Your plastic pal who's fun to be with!"
  • Ptft.. (Score:5, Funny)

    by Anonymous Coward on Monday May 02, 2005 @08:51AM (#12405973)
    Anyone scared of what the robots might do has obviously never witnessed the destructive power of the average toddler firsthand.

    The robots don't stand a chance.
    • Re:Ptft.. (Score:5, Funny)

      by identity0 ( 77976 ) on Monday May 02, 2005 @10:00AM (#12406690) Journal
      This is California, so maybe the Gübernatör is on a mission to train the next generation of resistance fighters to defeat the machines : )

      Can't start them too young, I say - let's make sure they can field-strip an AK by the time they're in grade school.
  • I predict (Score:3, Funny)

    by ObjetDart ( 700355 ) on Monday May 02, 2005 @09:11AM (#12406097)
    After sufficient exposure, the robot will soon realize that it is not the same as the other children. It will then leave the preschool and embark upon an existential quest to become a human child. Eventually it will realize that this is impossible, and spend the next thousand years moping around the post-apocalyptic landscape, long after all the human children are gone.
    • ...it ends up inside a whale where it finds one of its designers.

      Personally I'm betting on the whale scenario. After all, where is it going to get power in a post-apocalyptic landscape? Whales are here right now.

  • That new robot over in the corner is a bad influence.

    He keeps bringing the other kids down. All he does is complain about the pain in all the diodes down his left side, moan about how the kids shouldn't talk to him about life, and make disparaging remarks about their intelligence.

    Seems to like kickball, though.
    [Scene: Roboticon 3003. Leela looks around the robot presentation stands and sees Nannybot 1.0, which looks like a clunky robot version of the aliens from Alien. It holds a baby in its arms and speaks in a booming voice.]

    Nannybot 1.0: Sleep little dumpling. I have replaced your mother.

    [Its mouth opens and a bottle of milk comes out on its tongue. The baby drinks from the bottle.]

    Leela: Aww!
  • Harmoniously?? (Score:3, Insightful)

    by coffeecan ( 842352 ) on Monday May 02, 2005 @09:23AM (#12406211)
    When was the last time ANYTHING was able to live harmoniously with humans? We can't seem to live harmoniously with ourselves, let alone with a piece of animated plastic and circuitry.
  • by Borg453b ( 746808 ) on Monday May 02, 2005 @09:23AM (#12406222) Homepage Journal
    Around the age of 6, I was fascinated with spaceships, dinosaurs, racecars and robots. My love for robots resulted in many a robotic toy, and I recall one birthday where I was given one of those "autonomous" 30 cm high robots that would move about in patterns, spin, and open their chests to expose blazing cannons while making an awful racket. While I thought it cool in its inanimate state, I was terrified of it when it was activated. I would jump onto a stool or a bed and behold it from afar, and ask others to turn it off when I'd had enough.

    In the end, I had accumulated 3 robots of the sort and got over my robot-fright. One or two of them were actually able to fire 4 plastic projectiles, though not on their own. That required me to release a spring-based firing mechanism.

    When I started attending school, I once invited a friend over. By that time, I was very proud of my robot collection and I would brag, as kids do, about my toys. When telling my newfound friend about my robots, I pointed out that one of the robots could fire missiles. In Danish the word for missiles vaguely (_vaguely_) resembles that for "oranges" (at least to a kid); and so, having misheard me and perhaps never having heard the word "missiles", he wasn't going to give me the impression that his own robot army was inferior to mine, and thus replied that his robots at home could also fire oranges.

    In retrospect, the orange caliber is somewhat more impressive than little plastic darts, but back then missiles just sounded cooler than oranges.
  • "...they now dance with it and help it get up when it falls." Don't children do that with toys, like dolls? They may not completely know the difference between this robot and a toy, but I think it's optimistic of Sony to say that the children think of it as a "younger brother."
  • I fail to see how this robot is going to prove whether robots can live in harmony with humans. It's like user testing "Reader Rabbit" software and then saying, "Yep, people can work with computer programs."

    And while we're on the topic -- don't we already have robotic dogs which seem to work fine with people? This "experiment" has the word "pointless" written all over it. Even as a publicity stunt it isn't going anywhere. The article was very short and even here on Slashdot it's hard to work up any excitem
    • I think you're somewhat missing the point. What's being tested is not whether the robot, say, will attack the humans, or injure them, or whatever.

      It's a test, rather, of the visceral, emotional response of children to a novel stimulus. (A child's perspective is something of an unadulterated--pun always intended--source of basic emotionality.)

      The idea is to discover how and if children will deal with an anthropomorphic entity that is similar to, but paradoxically (to them, I'm sure) different from, them.

      R
    • Shut up and get back to your cave paintings.
  • by Ch*mp ( 863455 )
    Talk about me with your friends then Nag your parents to buy me!!!!!! Thanks.
  • by AcidLacedPenguiN ( 835552 ) on Monday May 02, 2005 @11:31AM (#12407840)
    . . . that little boys tend to have a fascination with robots. I know that when I was a child, I was all like, "Man, it would be so sweet to have a robot."

    I just think all you old people should just chill out and go with the flow.
  • Am I the only one who misread "attends" as "attacks"?
  • test if robots can live harmoniously with humans

    Humans don't really seem to be able to live harmoniously with other humans, despite massive, long-term evolutionary refinement. What makes them think a hunk of nuts and bolts will do any better?
