Sony's Robot Attends Pre-School 228
Darren writes "Sony's Qrio humanoid robot has been attending a California pre-school since March, playing with children under the age of 2, to test whether robots can live harmoniously with humans. I wonder if the testing includes monitoring the 'nightmare status' of the pre-schoolers?"
Excerpt from researcher's logs: (Score:5, Funny)
Re:Excerpt from researcher's logs: (Score:2, Funny)
Do you have stairs in your house?
We are the space robots. [ungrounded.net]
"Nightmare Status" (Score:5, Interesting)
* I am not a child psychologist.
Re:"Nightmare Status" (Score:2, Funny)
I'm all for that, after all kids today need a little more terror in them. Maybe then some parents will actually be able to control their children.
"Now Tommy, if you don't behave you are going to be sleeping with the robot again tonight!"
"But Mom it snores and makes all kinds of weird noises. It gives me nightmares!"...
Re:"Nightmare Status" (Score:4, Interesting)
Re:"Nightmare Status" (Score:3, Insightful)
Your mother is familiar with computers being boxes with keyboards and screens. She probably has 20-30 years of exposure to computers, all in this form.
So of course a computer that is humanoid would be unfamiliar to her, and therefore freak her out.
Today's preschoolers will be growing up with more and more humanoid robots around, and therefore will not be bothered by them at all. I would even theorize that if, in 30 years, you showed
Re:"Nightmare Status" (Score:5, Interesting)
Clowns and wax figures (Score:5, Insightful)
People would be far more comfortable with Bender-like robots than with "I, Robot" style robots because they don't try to be human, just humanoid. If it looks sufficiently non-human to avoid triggering that reflex, they'll be alright. Other than that it'd have to be completely perfect, like Data.
Re:Clowns and wax figures (Score:3, Interesting)
Most folks don't fear clowns, either (Score:3, Interesting)
I know of at least one child who was terrified of a dancing gorilla the first time he saw it. Later on, he was still somewhat afraid of it but eventually he came to enjoy the toy. (Supporting that familiarity idea.) Nevertheless, I imagine more people are afraid of monkeys and apes than there are people who are afraid of clowns and wax figures.
That aside, I still think that there's something some might find especially discomforting about robots that look like us. Whether or not this will change over time,
Scared of Santa (Score:2, Funny)
Indeed; witness the gallery of children who are scared of Santa [tinyurl.com].
Re:Clowns and wax figures (Score:4, Interesting)
Re:Clowns and wax figures (Score:2, Informative)
This is only in theory, of course, as no robot has yet come close to acting like a realistic human. Some think it also applies to things like creatures in movies.
Re:Clowns and wax figures (Score:3, Funny)
Re:Clowns and wax figures (Score:2)
Re:Clowns and wax figures (Score:3, Interesting)
Re:"Nightmare Status" (Score:2)
Search Slashdot's archives.
Re:"Nightmare Status" (Score:2, Funny)
*What's the point of using an acronym if you're going to spell it out anyway?
Re:"Nightmare Status" (Score:2, Funny)
That, or it trying to kill you!
Bender: *snore* "Kill all humans...Kill all humans...Must kill all hu..."
Fry: "Bender, wake up!"
Bender: "I was having the most wonderful dream! I think you were in it."
Re:"Nightmare Status" (Score:2, Insightful)
As we grow older, we become less accepting of new ideas. While my peers tend to fear "home use" robotics, people my grandparents' age (yes, they're still kicking, sort of) are scared to death of a simple home computer. My goddaughter, on the other hand, is proficient and comfortable with a computer, and readily accepting of
Re:"Nightmare Status" (Score:3, Insightful)
Babies play with dolls that dance and sing when spoken to or squeezed in the right way. They're also often comfortable using technology like remotes, computers, and sophisticated toys. I don't see a reason why they wouldn't accept a robot.
Inevitable Conclusion (Score:5, Funny)
Bad Idea (Score:2, Funny)
Motivation? (Score:5, Interesting)
Re:Motivation? (Score:5, Insightful)
The driving interest in toddlers (and that's what the article is about) certainly isn't sex or beer, and it also isn't shelter or food - which is still provided by the parents.
The driving interest in very young kids is pure interest. Our brains are just wired that way. Curiosity is a built-in feature.
Re:Motivation? (Score:3, Insightful)
Re:Motivation? (Score:2)
This does not mean that curiosity itself is inextricably linked to a desire to survive, any more than the ability to walk is inextricably linked to a desire to survive. It's perfectly reasonable to expect to be able to build a walking robot, so what makes you think curiosity is any different?
Re:Motivation? (Score:5, Insightful)
Seriously, it's easy to get led astray using evolutionary paradigms to explain traits. We often think of something as a clearcut, atomic quality that benefits or harms the individual.
Curiosity is a good example. Clearly, in an organism whose survival depends on complex and learned behaviors, a certain amount of curiosity is needed. But most people grow out of it and become dull, predictable, dependable adults. Some don't -- there's a continuum. And the variance of that trait in adults is useful to the tribe, if often harmful to the individuals on the right end of the bell curve.
Og: This flint is mammoth dung! It keeps shattering when I try to work the edge.
Gog: It's good enough. Just chip another piece off and sooner or later you'll get a good one.
Og: Crap. I'm going to find some decent flint. See you in a few weeks.
Now it may frequently happen that Og comes up empty, or is killed, or gets lost and never returns. Og is the type who runs across a cave and finds it impossible not to explore it. He risks getting eaten by a cave bear, but when he doesn't get eaten, he may have found the tribe a place to hide in times of trouble. The tribe benefits by having a few geeky cavemen and -women who can't keep their noses out of trouble, and the risk is concentrated on a few individuals, whose type will be reproduced again by future variation in the trait.
I think this is one fundamental difference between robots and humans. Being a human is like playing a game in which you don't really know the cards you've been dealt or are playing, but have to infer what's going on by how the play goes. Being a human is a journey of self-discovery. To design a human robot, you'd have to make it ignorant of its own characteristics and make it have to deal with the consequences. Until that happens, a robot is just going to be an object.
curiosity was framed (Score:2)
Re:curiosity was framed (Score:3, Funny)
Provided the cat is in a box we can't see into, its state is the superposition of dead and alive, so the most you can say is that Ignorance half killed the cat. Or maybe it half kept it alive, or half alive? Ow, my brain hurts.
Re:Motivation? (Score:5, Informative)
Because they've been programmed to, presumably. Our emotions, limbic system, and nervous system are nothing more than very low-level instruction sets to force us to behave in a certain manner in response to certain stimuli. I imagine that for a robot, not following a programmed instruction would be about as possible as a human's knee not flexing when hit with a hammer. It's just a reflex.
This is all assuming that these robots have the ability to alter their own code; I'm not sure that's the case.
Re:Motivation? (Score:2)
Out of interest, how do you know that for sure?
Just because you imagine X explains Y, doesn't mean that X is the only thing going on.
Dave.
Re:Motivation? (Score:2)
The problem that comes into play is the possibility of bugs in that software or unanticipated circumstances.
Take for instance a programmed response to greet a familiar person and act in a preferred fashion. If somehow or another the robot's software mistook a total stranger for someone it recognized, the end result could be fairly predictable.
"Hello Mr. X."
"I'm not Mr. X."
Re:Motivation? (Score:2)
OT: Sig (Score:2)
Re:OT: Sig (Score:3)
Umm .. repeat after me: (Score:4, Insightful)
I always wondered what motivation robots have for "learning".
Robots have no motivation other than that given them by their creators.
Robots are not sentient. We do not even know what sentience is. The only way for us humans to create sentience is to procreate.
what needs do the robots have?
Errm.. like all machines, they need a power source. That is all.
Talking about robots as if they are alive and have motivation beyond what their code implements betrays your otaku sensibilities. Clearly, you have not yet procreated, or you would not be so obsessed with making a machine which 'pretends to make it look as if you have done so, technologically'.
Re:Umm .. repeat after me: (Score:2)
Precisely. And that's the difference between creating something of a type different from yourself and begetting something of a type the same as yourself. Many people have forgotten what the verb 'to beget' means....
Re:Umm .. repeat after me: (Score:2)
Are you sentient? (Score:3, Interesting)
You correctly state that we do not know what sentience is, but then you claim that the only way to create sentience is to procreate. How do we know if we're sentient, if we do not know what sentience is?
Or is this like [insert term here]? I don't know what [term] is, but I'll know it when I see it.
Re:Are you sentient? (Score:2)
Some things cannot be explained but must be experienced. Most emotions work this way. You can explain what happens during a certain emotion, but how would you describe the emotional response during awe, ecstasy, or humility?
It's like trying to describe "blue". It's an experience. You'll know it when you have it.
No, you don't (Score:2)
But I would argue that you do have to know what something is to know whether or not you've created it. Naturally, knowing what something is is necessary but not sufficient for knowing that you've created it. I.e., it is conceivable that one day we'll know what sentience is and still only be able to create it through procreation. However, it is also possible that one day we'll discover that defining sentience is as useful as defining the a
Re:Motivation? (Score:4, Insightful)
Robots have no "motivation" to do anything. They have a reward function that they try to maximize, but it's certainly not anything like that capricious human thing we call "motivation" (which is actually a very good thing).
Again, it should be mentioned that while it may make us feel very cool and cutting-edge to apply human terms like learning, thinking, or motivation to machines, they are ultimately meaningless in a non-human context and are only useful as analogues, and in impressing your grandmother with how her TiVo "learns" her tastes.
as Edsger Dijkstra famously said:
~AS
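The "reward function, not motivation" point can be made concrete with a toy sketch. This is a minimal epsilon-greedy bandit in Python, an illustration of reward maximization in general, not anything to do with QRIO's actual software; the action names are made up:

```python
import random

def epsilon_greedy_agent(rewards_for_action, steps=1000, epsilon=0.1):
    """Toy agent that 'learns' only by maximizing a numeric reward.

    `rewards_for_action` maps each action to the (fixed) reward it
    yields. There is no motivation here, just bookkeeping: the agent
    tracks the average reward per action and usually repeats the one
    with the best estimate, exploring at random a fraction of the time.
    Returns the action it ends up believing is best.
    """
    estimates = {a: 0.0 for a in rewards_for_action}
    counts = {a: 0 for a in rewards_for_action}
    for _ in range(steps):
        if random.random() < epsilon:       # explore occasionally
            action = random.choice(list(rewards_for_action))
        else:                               # exploit the current best estimate
            action = max(estimates, key=estimates.get)
        reward = rewards_for_action[action]
        counts[action] += 1
        # incremental running-average update of the estimate
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(estimates, key=estimates.get)
```

After enough steps the agent reliably settles on whichever action pays best, which is exactly the kind of behavior that tempts people into the word "motivation".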
Re:Motivation? (Score:2)
The reward function in human beings is called the limbic system [wikipedia.org]. Ever heard of dopamine?
Re:Motivation? (Score:2)
I would go as far as saying things like Tivos do learn. They alter their behaviour based on what happens, just like children. If something happens that people don't like (It suggests a crap programme to watch) then something negative happens (Nobody watches it) and learning occurs (It doesn't do it again).
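That feedback loop fits in a few lines. The sketch below is purely illustrative (it is not TiVo's actual algorithm, and the programme names and weights are invented): ignored suggestions lose weight until they stop being suggested.

```python
def update_preferences(prefs, suggestion, was_watched, step=0.2):
    """Nudge a programme's weight up when a suggestion is watched
    and down when it is ignored; weights are clamped at zero."""
    delta = step if was_watched else -step
    prefs[suggestion] = max(0.0, prefs.get(suggestion, 0.5) + delta)
    return prefs

def suggest(prefs):
    """Suggest the programme with the highest current weight."""
    return max(prefs, key=prefs.get)
```

Something people don't like happens, something negative happens to the weight, and "learning occurs": the box doesn't do it again.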
Re:Motivation? (Score:2)
The will is not set upon a surplus of pleasure,
but upon the amount of pleasure that remains after getting over the pain.
This is the essence of all genuine will... It achieves its aim
though the path be full of thorns.
It lies in human nature to pursue it so long as the displeasure
connected with it does not extinguish the desire altogether.
(The Philosophy of Freedom - Chapter 13 [rsarchive.org])
To quote another computer scientist... (Score:2)
One (of two) definitions Parnas used for AI was
Therefore, once we have completely mastered learning, sentience, etc., it will no longer be considered AI. Perhaps sentience will no longer be considered sentience, either.
Re:Motivation? (Score:2)
Organisms designed by evolutionary processes can have analogs that are engineered, but they won't be identical. I agree with you that those differences aren't crucial.
And while I fundamentally agree with your stance, I think Kurt Vonnegut said it best in Player Piano (an old but very well written book about people being replaced by technology and the trouble it causes. )
"What are people for?"
What are people for, anyways? What are robots for?
I think that people are ultimately defined by their functions, w
Conscience, Self-Awareness (Score:2)
Re:Conscience, Self-Awareness (Score:2)
Strange loops, recursion, pattern recognition, etc.
When you break our minds down into their basic functions, no single one of them is all that difficult to imagine emulating on sufficiently powerful hardware.
The question is whether or not the end result will be a truly thinking machine in the same way that we think.
Theoretically, when that time comes, thoughts from a machine with the proper software running on the pro
Re:Conscience, Self-Awareness (Score:2)
The sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The "information" in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing which is described in the cognitive science computational models of cognition, on t
Re:Motivation? (Score:2)
effects on the children? (Score:5, Interesting)
Re:effects on the children? (Score:2)
Nonsense. (Score:2, Insightful)
The American South was more racist. Hitler was part Jewish. New Yorkers hate the cold.
tolerance != equality (Score:2)
The American south was very tolerant of blacks... provided that they acted in the customary submissive fashion. Tolerance of subordinates does not mean treating them as equals.
There was a power structure to be maintained.
The more blacks behaved as they were expected to behave, i.e. as unintelligent, courteous and submissive, childlike, obedient, etc. the more that they were tolerated.
I'm not supporting this at all. I'm simply saying that if people see something they're
3 laws (Score:2, Funny)
Re:3 laws (Score:2, Funny)
1. Do no harm to Sony
2. To promote Sony's range of electronic goods
3. Uphold the Law
4. Classified
Obligatory? Bring it on. (Score:5, Funny)
Re:Obligatory? Bring it on. (Score:2, Troll)
We've already got one in the White House.
Intelligence = CPU + experience (Score:5, Insightful)
Of course, if Moore's Law is still kicking, then 2 years into the learning phase, they can swap the 1-HBE processor for a 2-HBE processor. This will shorten the remaining learning period, but I doubt it will cut it in half. Learning to physically and mentally interact with the world will still take time. What might accelerate the learning time is if multiple copies of the intelligence can share experiences and learn directly from each other's mistakes/successes.
The point is that the first intelligent robots will need to go to preschool to learn how to interact with the world.
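The "share experiences between copies" idea amounts to pooling statistics. A hypothetical sketch (the per-action bookkeeping and action names are invented for illustration): if each copy keeps a (trial count, mean reward) pair per action, merging is just a count-weighted average, so each copy benefits from trials it never ran itself.

```python
def merge_experience(stats_a, stats_b):
    """Pool two robots' per-action (count, mean_reward) statistics.

    Actions known to only one robot carry over unchanged; actions
    known to both get a count-weighted (pooled) mean, which is the
    mean you'd get from the combined set of trials.
    """
    merged = {}
    for action in set(stats_a) | set(stats_b):
        n1, m1 = stats_a.get(action, (0, 0.0))
        n2, m2 = stats_b.get(action, (0, 0.0))
        n = n1 + n2
        merged[action] = (n, (n1 * m1 + n2 * m2) / n if n else 0.0)
    return merged
```

A copy that ran 10 trials of an action and one that ran 30 end up sharing a single 40-trial estimate, which is the sense in which shared experience shortens the learning period.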
Re:Intelligence = CPU + experience (Score:2)
Is rest unnecessary? (Score:5, Insightful)
You may be right. The question is: is sleep/relaxation, etc. a critical part of intellectual development? For humans it definitely is -- sleep deprivation really messes up the brain. But even for non-biological intelligences I'd bet that some "downtime" is an important part of assimilating all the data of the day. Interacting with the world is a full-time job for the CPU that forces the deferral of many analysis and restructuring tasks that can only occur when the brain is offline.
Perhaps androids would dream because dreaming is a critical maintenance/analysis cron job.
Re:Is rest unnecessary? (Score:2)
Re:Intelligence = CPU + experience (Score:2)
If we're just trying to create a mind, capable of complex and rational thought - it can probably easily mature/learn in a third or half of that time - even with 'rest' to process. It basically boils down to whether or not we'd be giving it the ability to feel/want/etc.
If we do, it will get bored, have desires and needs, etc., and will pretty much need the same amount of time as your average joe.
A bigger obstacle will be keeping it
Re:Intelligence = CPU + experience (Score:2)
This is why I think Ghost in the Shell: Stand Alone Complex is some of the best SciFi on TV right now: the story of the Totchkomas(sp?) really explores this particular angle. They're childlike machine intelligences with surprising bits of depth brought on by that type of sharing / synchronization.
Today's speculative fiction... maybe tomorrow's
Nightmares, yeah right (Score:5, Insightful)
I wonder if the submitter has any clue as to what he's talking about.
It's pretty difficult to give toddlers nightmares. They're not easily scared. They do cry over the slightest problem, mostly because crying is the only well-developed form of verbal communication available to them at that age. They are also excellent at forgetting whatever the problem was and getting on with their lives. Watch a kid hurt itself. Then go away and watch the same kid 10 minutes later.
It'd take a serious event to cause nightmares in those kids, and that machine has neither the looks nor the sheer physical power that would be required.
Re:Nightmares, yeah right (Score:2)
Re:Nightmares, yeah right (Score:2)
Share and Enjoy... (Score:5, Funny)
Re:Share and Enjoy... (Score:3, Funny)
Oblig. Futurama Quote (Score:3, Funny)
Ptft.. (Score:5, Funny)
The robots don't stand a chance.
Re:Ptft.. (Score:5, Funny)
Can't start them too young, I say - let's make sure they can field-strip an AK by the time they're in grade school.
I predict (Score:3, Funny)
Either that or... (Score:2)
Personally I'm betting on the whale scenario. After all, where is it going to get power in a post-apocalyptic landscape? Whales are here right now.
That one in the corner keeps bumming out the kids (Score:2, Funny)
He keeps bringing the other kids down. All he does is complain about the pain in all the diodes down his left side, tell the kids they shouldn't talk to him about life, and make disparaging remarks about their intelligence.
Seems to like kickball, though.
Nanybot :D (Score:2, Funny)
Nannybot 1.0: Sleep little dumpling. I have replaced your mother.
[Its mouth opens and a bottle of milk comes out on its tongue. The baby drinks from the bottle.]
Leela: Aww!
Harmoniously?? (Score:3, Insightful)
Tales of toy robots (Score:5, Funny)
In the end, I had accumulated 3 robots of the sort and I got over my robot-fright. One or two of them were actually able to fire 4 plastic projectiles, though not on their own. That required me to release a spring-based firing mechanism.
When I started attending school, I once invited a friend over. By that time, I was very proud of my robot collection and I would brag, as kids do, about my toys. When telling my newfound friend about my robots, I pointed out that one of them could fire missiles. In Danish the word for missile vaguely (_vaguely_) resembles the word for "oranges" (at least to a kid); having misheard me, and perhaps never having heard the word "missiles", he wasn't going to give me the impression that his own robot army was inferior to mine, and so replied that his robots at home could also fire oranges.
In retrospect, the orange caliber is somewhat more impressive than little plastic darts, but back then missiles just sounded cooler than oranges.
Perhaps it's just a toy to them (Score:2, Interesting)
I fail to see what they are going to prove (Score:2, Informative)
And while we're on the topic -- don't we already have robotic dogs which seem to work fine with people? This "experiment" has the word "pointless" written all over it. Even as a publicity stunt it isn't going anywhere. The article was very short and even here on slashdot it's hard to work up any excitem
Re:I fail to see what they are going to prove (Score:2, Insightful)
It's a test, rather, of the visceral, emotional response of children to a novel stimulus. (A child's perspective is something of an unadulterated--pun always intended--source of basic emotionality.)
The idea is to discover how and if children will deal with an anthropomorphic entity that is similar to, but paradoxically (to them, I'm sure) different from, them.
R
Ignorant Luddite. (Score:2)
Hey Kids!!!! (Score:2, Funny)
most of you are forgetting. . . (Score:3, Insightful)
I just think all you old people should just chill out and go with the flow.
So... (Score:2)
Imagine All The People... (Score:3, Interesting)
Humans don't really seem to be able to live harmoniously with other humans, despite massive, long-term evolutionary refinement. What makes them think a hunk of nuts and bolts will do any better?
No, no, no (Score:3, Informative)
Re:No, no, no (Score:3, Funny)
I think we all know the film "I, Robot" has sufficiently proven that ancient Law to be false!
As well as metaphorically pissing [thebestpag...iverse.net] on Asimov's grave.
Re:No, no, no (Score:2)
Feeble younger brother overlords (Score:2)
Re:It's all fun and games.... (Score:5, Funny)
Re:It's all fun and games.... (Score:2, Insightful)
Re:It's all fun and games.... (Score:3, Insightful)
People commonly think of a "robot" as a general purpose machine that can replace a human at any manual task, not realizing how many special-purpose robots are used in industry today. What people really want is the robotic maid (well, that and the sex robot, but anyway) and that's the hardest problem to s
Re:It's all fun and games.... (Score:2)
Not fair (Score:2)
-Matt
Re:It's all fun and games.... (Score:2)
Less than 2 feet - weighs less than you'd imagine. (Score:5, Informative)
So, even if the robot went 'dead' and fell rigidly from its full height, it would probably, at worst, cause a small bruise to a kid's knee.
However, having read a bit on QRIO, the robot knows when it is going to fall, or is being forcibly overbalanced, and takes appropriate action to soften its fall (hands out) and even contorts to avoid objects it is falling toward.
Re:Huh? (Score:2)