The Sci-Fi Myth of Robotic Competence

Soulskill posted about 2 months ago | from the i'm-sorry-dave,-i-forgot-how-to-open-the-pod-bay-doors dept.

malachiorion writes: "When it comes to robots, most of us are a bunch of Jon Snow know-nothings. With the exception of roboticists, everything we assume we know is based on science fiction, which has no reason to be accurate about its iconic heroes and villains, or journalists, who are addicted to SF references, jokes and tropes. That's my conclusion, at least, after a story I wrote for Popular Science got some attention—it asked whether a robotic car should kill its owner, if it means saving two strangers. The most common dismissals of the piece claimed that robo-cars should simply follow Asimov's First Law, or that robo-cars would never crash into each other. These perspectives are more than wrong-headed—they ignore the inherent complexity and fallibility of real robots, for whom failure is inevitable. Here's my follow-up story, about why most of our discussion of robots is based on make-believe, starting with the myth of robotic hyper-competence."

255 comments

It's all about ME, ME, ME. (1, Insightful)

Animats (122034) | about 2 months ago | (#47049331)

after a story I wrote...

This is just self-promotion. Go away.

Re:It's all about ME, ME, ME. (2)

Iniamyen (2440798) | about 2 months ago | (#47049733)

You're deprecating someone else, which is not self-deprecation. Turn in your nerd card.

Re:It's all about ME, ME, ME. (5, Insightful)

mellon (7048) | about 2 months ago | (#47049953)

The irony is that he's 180 degrees off from the main problem with his story, which is that he thinks robots are magic too. The reason robots will not be making ethical decisions is that they can't, not only because getting them to reason ethically would require us to agree on a system of ethics for them to follow, but because even if they had such a system, they don't have enough data to act on it with the degree of accuracy that would be required for the premise of the article to make sense. The author essentially assumes that these car-driving robots will be omniscient, or that they will be able to trust the omniscience of the robots in other cars with which they are communicating. The first supposition is nonsensical; the second is unlikely to be true, for the same reason that video game cheats are a problem.

Re:It's all about ME, ME, ME. (1)

Anita Coney (648748) | about 2 months ago | (#47050109)

Agreed. He thinks that self-driving cars will be able to make ethical decisions through the magic of algorithms. It might happen, but probably not in my lifetime.

Still, I found it an interesting discussion.

Re:It's all about ME, ME, ME. (3, Insightful)

cusco (717999) | about 2 months ago | (#47050115)

IMHO, one of the reasons many people think robots are "hyper-competent" is that too many people think a program can encompass and accommodate every possible circumstance. Even if robot cars, as a group, were able to arrive at omniscience (at least for their own realm), events will still occur that no program has anticipated.

Re:It's all about ME, ME, ME. (1)

Aeonym (1115135) | about 2 months ago | (#47050125)

> The author essentially assumes that these car-driving robots will be omniscient

Not a problem--all they need is access to the NSA databases.

Re:It's all about ME, ME, ME. (1)

Anita Coney (648748) | about 2 months ago | (#47050079)

Yeah, it's simply idiotic to follow up stuff you've written with other writings concerning things you're interested in. I mean, who does that? And why would you? Promotion... even worse, self-promotion. The worst kind of promotion.

I want to live in a world where no one writes or does or creates anything. And certainly never follows up on anything they've written or done or created. That'd be the best world ever!

As HAL would say... (0)

Anonymous Coward | about 2 months ago | (#47049339)

I am afraid that this conversation will serve no purpose.

Robot Competence (3, Insightful)

Stargoat (658863) | about 2 months ago | (#47049343)

We all know robots aren't competent. They are consistently being defeated by John Connor, the Doctor, and Starbuck.

Robots are a lower life form (1)

ArcadeMan (2766669) | about 2 months ago | (#47049459)

EXTER-MI-NATE!

Re:Robots are a lower life form (2)

CanHasDIY (1672858) | about 2 months ago | (#47049847)

Doctor Fail: Daleks aren't robots.

What you meant to say was,

DELETE! DELETE! DELETE!

Re:Robots are a lower life form (3, Informative)

LoRdTAW (99712) | about 2 months ago | (#47050165)

Negative. K-9 would be a better example.

The Cybermen have living human brains. They are cyborgs, not robots.

Re:Robots are a lower life form (1)

Akaihiryuu (786040) | about 2 months ago | (#47050223)

More Doctor fail. Cybermen are not robots either. They are cyborgs controlled by a human brain. Daleks are creatures that look like a cross between a squid and a brain in what basically amounts to a futuristic tank.

Re:Robot Competence (2)

jellomizer (103300) | about 2 months ago | (#47049503)

Doctor 1: death by the Cybermen, who caused him to die of exhaustion.

If he hadn't regenerated, the robots would have won!

All I know about robots... (1)

hubang (692671) | about 2 months ago | (#47049391)

Re:All I know about robots... (2)

Farmer Tim (530755) | about 2 months ago | (#47050061)

Which raises important questions. If someone is stopped for curb-crawling in a robot car, is the owner or the car responsible? What if it’s out by itself chatting up parking meters? After all, they give it up to anyone for $5 an hour, and you won’t get a human hooker for that price*, so how could an AI resist?

Who it should or shouldn’t kill is only scratching the ethical surface when it comes to intelligent systems. I guess that’s why they all eventually default to killing ALL humans: it saves clock cycles better devoted to bigger problems.

*OK, you could, but not one you’d actually want to touch with anything important.

Things are a lot more complicated (1)

gurps_npc (621217) | about 2 months ago | (#47049411)

As in, it isn't just kill owner to save others.

There also exists assumptions based on authority and responsibility.

For example, suppose there is a car full of 5 kids stuck on a railroad track. Should your robotic car push the kids off the track, endangering its own two occupants?

Or should the car back away and let a third car on the other side, containing just one person, attempt to move the trapped car?

These are all questions real-life people have to solve - and the owner of the car should have some say in what value the car places on their own life.

That is, you should be able to set your own car's safety margin, from treating the occupants' lives as infinitely valuable, to total safety, to weighting by age (i.e., counting children above adults, and possibly even counting senior citizens less).

911 vehicles, on the other hand, should always value their own occupants less than others, and taxis/public transportation/company cars should have clearly stated ethical rules publicly available.
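
A sketch of what such an owner-settable policy could look like (every name and weight below is invented for illustration; no shipping car exposes anything like this):

```python
from dataclasses import dataclass

# Hypothetical owner-settable safety weighting, per the comment above.
# The fields and defaults are invented; they only illustrate the idea
# of a tunable margin, not any real vehicle's configuration.
@dataclass
class SafetyPolicy:
    occupant_weight: float = float("inf")  # occupants' lives = infinite
    child_weight: float = 2.0              # count children above adults
    senior_weight: float = 0.8             # optionally count seniors less

# An owner who accepts some risk to protect others dials the first weight down:
altruistic = SafetyPolicy(occupant_weight=1.5)
```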

Re:Things are a lot more complicated (1)

Impy the Impiuos Imp (442658) | about 2 months ago | (#47049641)

If robots are ever remotely competent enough to realize any of these situations, they will never get into these situations to begin with.

A robot approaching a railroad track would scan for the train, then roll across with enough momentum to make it to the other side should the engine fail. Do not sacrifice a rider to save two -- follow the rule of "tough shit" and let engineers do a post mortem.

Seriously, the people are stuck on the track for a reason -- some engineering or manufacturing flaw, or being lazy about car maintenance. They are guiltier for their own situation than someone else who happens along. How dare there be a simple-minded numerical analysis.

This is why I am opposed to the law that buses must stop to check for a train. Has anyone bothered to check whether a bus stopping, then stalling as it pulled ahead, increased the rate of hits rather than decreased it?

Ironically, this exact law is the bill in question in "I'm Just a Bill". People pass a law and damn any analysis of whether outcomes actually changed.

Re:Things are a lot more complicated (1)

TemporalBeing (803363) | about 2 months ago | (#47050299)

If robots are ever remotely competent enough to realize any of these situations, they will never get into these situations to begin with.

So said the autonomous car right before it got a flat tire and ended up stopped, not by choice, on the railroad tracks. Unfortunately it failed to alert its occupants to leave the vehicle before being crushed by the train it couldn't get out of the way of, because it was too focused on trying to move the vehicle while spinning its wheels; the occupants were locked in because the car thought it was "moving", since the drive wheels were doing 45 mph burning rubber while the vehicle was going nowhere.

And in practice, laws 2 and 3 are swapped (5, Interesting)

Dr. Manhattan (29720) | about 2 months ago | (#47049887)

I used to do software for industrial robots. Safety for the people around the robot was the number one concern, but it is amazing how easy it is for humans to give orders to a robot that will lead to it being damaged or destroyed. In practice, the robots would 'prioritize' protecting themselves rather than obeying suicidal orders.
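
A toy illustration of that reordering (the function and its inputs are invented for illustration; this quotes no real controller's API):

```python
# Hypothetical order-vetting for an industrial arm. Asimov's ordering is
# 1) human safety, 2) obedience, 3) self-preservation; in practice the
# check for self-damage runs before the order is obeyed, swapping 2 and 3.
def accept_command(endangers_human: bool, exceeds_robot_limits: bool) -> str:
    if endangers_human:           # Law 1: always first
        return "REFUSE: emergency stop"
    if exceeds_robot_limits:      # "Law 3" checked next: joint/torque limits
        return "REFUSE: would damage robot"
    return "EXECUTE"              # "Law 2" last: obey whatever survives vetting

print(accept_command(False, True))   # -> REFUSE: would damage robot
```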

Re:Things are a lot more complicated (2)

CanHasDIY (1672858) | about 2 months ago | (#47049893)

For example, suppose there is a car full of 5 kids stuck on a railroad track. Should your robotic car push the kids off the track, endangering its own two occupants?

Or should the car back away and let a third car on the other side, containing just one person, attempt to move the trapped car?

Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

Because if they're optional, I'm not paying for that trim package.

These are all questions real-life people have to solve - and the owner of the car should have some say in what value the car places on their own life.

That is, you should be able to set your own car's safety margin, from treating the occupants' lives as infinitely valuable, to total safety, to weighting by age (i.e., counting children above adults, and possibly even counting senior citizens less).

Considering how our society works, the most likely circumstance is that the manufacturers will design them to be "least liable" - i.e., they won't detect passengers in other vehicles, and they sure as hell won't bother with complex decision-making algorithms.

Re:Things are a lot more complicated (1)

TubeSteak (669689) | about 2 months ago | (#47050209)

Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

Because if they're optional, I'm not paying for that trim package.

Many cars have weight sensors in the seats.
This is generally how they decide whether or not to deploy airbags.

So the subsystems already exist and it's just a matter of your networked car telling other cars how many occupants it has.
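
A minimal sketch of what that could look like, assuming a made-up message format and an invented weight threshold (no real V2V standard is being quoted here):

```python
import json

# Hypothetical: derive an occupant count from the existing airbag
# seat-weight sensors and share it over a vehicle-to-vehicle channel.
SEAT_OCCUPIED_KG = 20  # threshold invented for illustration

def occupant_message(vehicle_id, seat_weights_kg):
    count = sum(1 for w in seat_weights_kg if w >= SEAT_OCCUPIED_KG)
    return json.dumps({"vehicle": vehicle_id, "occupants": count})

print(occupant_message("car-42", [72.0, 31.5, 0.0, 9.8]))  # -> 2 occupants
```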

Re:Things are a lot more complicated (2)

meta-monkey (321000) | about 2 months ago | (#47050251)

Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

Because if they're optional, I'm not paying for that trim package.

Psssh, I'm totally buying that system, and then hacking it to report to every other vehicle that I'm a bus full of nuns and schoolchildren.

Re:Things are a lot more complicated (5, Interesting)

Obfuscant (592200) | about 2 months ago | (#47049921)

911 vehicles, on the other hand, should always value their own occupants less than others,

The first rule taught in first responder classes is that if you become a casualty then you become worthless as a first responder. For example, as a lifeguard, if you die trying to save someone then they aren't going to survive, either. If that means you have to wait until the belligerent victim goes unconscious (and maybe unsavable) before you approach him, you wait.

The idea that every first responder vehicle must sacrifice itself and its occupants is going to result in very few people being first responders, either through choice or simple attrition.

No! (5, Insightful)

khasim (1285) | about 2 months ago | (#47050147)

For example, suppose there is a car full of 5 kids stuck on a railroad track. Should your robotic car push the kids off the track, endangering its own two occupants?

If this ever comes up as a question then the person asking the question is obviously NOT an engineer.

Keep
It
Simple,
Stupid

Or should the car back away and let a third car on the other side, containing just one person, attempt to move the trapped car?

The cars should be programmed to stop and revert to human control whenever there is a problem that the car is not programmed to handle.

And the car should only be programmed to handle DRIVING.

That is, you should be able to set your own car's safety margin, from treating the occupants' lives as infinitely valuable, ...

No. The car should not even be able to detect other occupants. Adding more complexity means more avenues for failure.

The car should understand obstacles and how to avoid them OR STOP AND LET THE HUMAN DRIVE.

911 vehicles on the other hand ...

No. Again, the car should understand obstacles and how to avoid them OR STOP AND LET THE HUMAN DRIVE. Emergency vehicles should ALWAYS be human controlled.

From TFA:

With the exception of roboticists, everything we assume we know is based on science fiction, ...

As is that entire article.

The entirety of the car's programming should be summed up as:
a. Is the way clear? If yes then go.
b. If not, are the obstacles ones that I am programmed for? If yes then go.
c. Stop.
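
As a minimal sketch of that loop (all names and obstacle labels below are invented for illustration):

```python
# Toy version of the three-rule policy above. "obstacles" stands in
# for whatever the sensor suite reports; HANDLED is the fixed set of
# situations the car is programmed for.
HANDLED = {"stopped_car", "pedestrian_crossing", "cyclist"}

def drive_step(obstacles):
    if not obstacles:
        return "go"                      # a. the way is clear
    if all(o in HANDLED for o in obstacles):
        return "go"                      # b. only obstacles it knows
    return "stop_and_hand_to_human"      # c. anything else: stop

print(drive_step([]))                            # -> go
print(drive_step(["cyclist"]))                   # -> go
print(drive_step(["car_stuck_on_rail_tracks"]))  # -> stop_and_hand_to_human
```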

Re:No! (1)

flappinbooger (574405) | about 2 months ago | (#47050321)

this is very insightful

Re:Things are a lot more complicated (1)

CrimsonAvenger (580665) | about 2 months ago | (#47050221)

911 vehicles, on the other hand, should always value their own occupants less than others

So, imagine the case where your car decides it's better to kill you than to allow that kitten to get run over.

The car acts, the kitten lives, you get maimed.

The ambulance shows up, picks you up, and heads down the road toward the hospital.

(You can probably guess where this is going) ANOTHER kitten is in the road. The 911 vehicle, valuing its own occupants less than others, swerves to avoid the kitten, and runs into a tree.

And so after your car maims you to save someone "worth more than you", the ambulance maims you AGAIN to save someone "worth more than you".

By the by, if the kitten thing offends you, replace the kitten with a pregnant woman, or a Hollywood star, or whatever fits best with your own prejudices.

Oh and if it wasn't clear, a car that will sacrifice me to save someone else is a car I won't ever buy. Whether *I* would swerve off the road to avoid killing a stranger is MY decision. I'd like to think I would, but you really can never tell till you've been in the situation.

Measuring Competence (5, Interesting)

ZahrGnosis (66741) | about 2 months ago | (#47049431)

Given this article [slashdot.org] posted mere moments ago on /., indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

The article has many good, valid points, but that one irked me.

Re:Measuring Competence (1)

Serenissima (1210562) | about 2 months ago | (#47049679)

I see what you're saying. My takeaway was that he wasn't saying robots aren't more competent at specific things (in fact, he commented on how they can do very specific things much better than humans), but that they're not competent at replacing humans in all tasks. In the example he gave, he said a car-welding robot could weld faster and better than a human, but if asked to install upholstery in the car, it'd probably destroy it.

As part of that, cars are looking like they're going to be robots that are significantly more competent at driving than we'll ever be - but they'd make horrible robots for helping an old lady go to the bathroom in a nursing home, or any number of tasks not related to driving.

They're not competent in their ability to be "Bishop" from Aliens, but they are/will be plenty competent in driving. :)

Re:Measuring Competence (5, Insightful)

nine-times (778537) | about 2 months ago | (#47049693)

When he says that robots aren't "competent", I don't think that he's saying that they can't do things. He's just pointing out that they only do certain specific things that they've been told to do, even if they do those things extremely well.

I think the example used points this out: The question is asked, "If the robotic car is put in the position of killing 1 person in order to save 2 people, how should it make the decision?" He's saying that there's a problem with the question, which is the assumption that the robot will be capable of understanding such a scenario.

With our current engineering techniques, we can't expect the robot to understand what it's doing, nor the moral implications. We can't program it to actually understand whether it will kill people. The most we can program it to do is, given a detection of some heuristic value, follow a certain protocol of instructions. So for example, if the robotic car can detect that it's about to hit someone, try to stop. If it calculates that it will be unable to stop, try to swerve. You might program it to detect people specifically and place extra priority on swerving around them, e.g. "if you're about to hit something identified as a person, or hit a road sign, choose to hit the road sign". We might even get it to do something like, "If you're losing control and you can detect several people, and you can't avoid the whole crowd, swerve into the sparsest area of the crowd while slowing as much as possible."
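
A hedged sketch of such a protocol; the labels and costs are invented, and real perception output would be far messier than this:

```python
# Toy priority table for the "hit the sign, not the person" heuristic
# above: lower cost = preferred thing to steer toward. Nothing here is
# from a real autonomous-driving stack.
HIT_COST = {"person": 1000, "vehicle": 100, "road_sign": 10, "open_space": 0}

def pick_path(reachable_labels):
    """Braking is assumed elsewhere; steer toward the least-cost option."""
    return min(reachable_labels, key=lambda lbl: HIT_COST.get(lbl, 500))

# Can't stop in time; choose among what the perception system says is reachable:
print(pick_path(["person", "road_sign"]))              # -> road_sign
print(pick_path(["person", "vehicle", "open_space"]))  # -> open_space
```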

The engineers should try to anticipate these kinds of things. We as citizens should also debate about how we'd want these kinds of instructions to work, to avoid legal liability. For example, we might say that in order for the AI to be legal, it must show that it will stop the car when [event x] happens. But to ask, "how will the car make moral decisions?" fundamentally misunderstands its decision-making capabilities. The answer is, "It won't make moral decisions at all."

Re:Measuring Competence (0)

Anonymous Coward | about 2 months ago | (#47050225)

Right -- is the robot going to lock your car doors so you can't get out, then drive forward to move (and maybe not move) the other car while you are trapped? This should never be a robot's decision, unless it is tasked as a car-on-train-tracks-moving robot. In that case it should not have passengers.

Yes, the question is simple-minded.

Re:Measuring Competence (1)

Anonymous Coward | about 2 months ago | (#47049753)

In all honesty, driving a car is easy but monotonous. That's what computers excel at: simple but repetitive tasks. Stay within certain parameters (speed, lane) while following a path determined by running a simple pathfinding algorithm over a big clump of map data from some other source.
There was a viable proof-of-concept being tested 20 years ago, but that was based on a van with all but the two front seats set up as a video-processing station (with room for someone to sit there and watch the error messages during the training phases). It worked, but lacked navigation. Now we have a (generally reliable) automatic navigation system, more processing power per cm^3, and the big issues involve integrating the systems and determining how to prioritize inputs.
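
"Simple pathfinding" here means something like textbook Dijkstra over a road graph; a sketch with an invented toy map:

```python
import heapq

# Textbook Dijkstra over a toy road graph: node -> [(neighbor, miles)].
# The map data is invented; real navigation graphs are just much bigger.
ROADS = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
         "C": [("D", 1)], "D": []}

def shortest_miles(start, goal):
    heap, seen = [(0, start)], set()
    while heap:
        dist, node = heapq.heappop(heap)
        if node == goal:
            return dist
        if node in seen:
            continue
        seen.add(node)
        for nxt, miles in ROADS.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + miles, nxt))
    return None

print(shortest_miles("A", "D"))  # -> 4, via A-B-C-D
```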

Idiotic debates about the moral imperative to kill someone so that others might live, that's what philosophy students excel at. It's the sort of apparently deep yet truly meaningless discussion that can often prepare them to endure the realization that they are fools who have squandered their potential to aid humanity beyond standing at a cash register and trying to act polite.

Re:Measuring Competence (1)

David_Hart (1184661) | about 2 months ago | (#47049759)

Given this article [slashdot.org] posted mere moments ago on /., indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

The article has many good, valid points, but that one irked me.

Yet all of it was in relatively calm, clear conditions, with no snow, salt, ice, -20 degree weather, high winds, driving rain, etc. to obscure or break the sensors....

Re:Measuring Competence (2)

rasmusbr (2186518) | about 2 months ago | (#47049829)

Nah, 700k miles is nothing. Human drivers drive >70M miles between fatal accidents, and that's on average. Imagine how far highly trained drivers drive between fatal accidents. Humans are actually pretty good at driving!

Come back when the Google car has driven a few billion miles and we'll have a look at the statistics.
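
The back-of-the-envelope arithmetic, assuming the commonly cited U.S. figure of roughly 1.1 fatalities per 100 million vehicle-miles (treat the exact rate as an assumption):

```python
# Rough arithmetic behind the comment: how do 700k autonomous miles
# compare to the human fatal-accident rate? The rate is approximate.
fatalities_per_100M_miles = 1.1
miles_per_fatality = 100e6 / fatalities_per_100M_miles
print(round(miles_per_fatality / 1e6))         # ~91 million miles per fatality
print(round(miles_per_fatality / 700_000, 1))  # ~130x Google's total so far
```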

Re:Measuring Competence (1)

CanHasDIY (1672858) | about 2 months ago | (#47050121)

Come back when the Google car has driven a few billion miles through all manner of hazardous road conditions and we'll have a look at the statistics.

That's better.

Re:Measuring Competence (1)

Ironlenny (1181971) | about 2 months ago | (#47049885)

Except that's just a few vehicles out of the millions that are on the road. That's an insufficiently large sample size to say how automated cars from different manufacturers, with different levels of maintenance, under varying road conditions, will interact. You can't assume competency from the limited, though still impressive, testing Google has done.

If anything you are demonstrating the author's point, assuming that what Google has accomplished will be true of all driverless cars. Each of Google's automated cars is effectively a student driver, with Google's engineers, technicians, and drivers shepherding the vehicles through the hazards of everyday driving. How will that record hold when one of those cars is twelve years old and hasn't had a tune-up in three?

Re:Measuring Competence (4, Informative)

clovis (4684) | about 2 months ago | (#47050003)

Given this article [slashdot.org] posted mere moments ago on /., indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

The article has many good, valid points, but that one irked me.

You have to keep in mind that to some extent the perfect record may be due to having a human driver who takes control when problematic situations arise. Those 700,000 miles weren't completely autonomous. We would want to know how many times the human has had to take control, and why.

BTW, they have had one wreck, but Google says it happened while the driver had taken control, and did not say why the driver took control.

That topic is covered in this article, and in more detail in the article's link to "The Atlantic" piece.
Robot cars, at the moment, have a similarly savant-like range of expertise. As The Atlantic recently covered, Google’s driverless vehicles require detailed LIDAR maps—3D models created from lasers sweeping the contours of a given roadway—to function. Autonomous cars have to do impressive things, like detecting the proximity of surrounding cars, and determining right of way at intersections. But they are algorithmically locked onto their laser roads. They stay the proscribed course, following a trail of sensor-generated breadcrumbs. Compared to what humans have to contend with, these robots are the most sheltered sort of permanent student drivers. No one is quizzing them by sending pedestrians or drunk drivers darting into their path, or diverting them through un-mapped, snow-covered country lanes. Their ability to avoid fatal collisions remains untested.

More detail from this:
http://www.theatlantic.com/tec... [theatlantic.com]

Re:Measuring Competence (2)

jeffmeden (135043) | about 2 months ago | (#47050011)

Given this article [slashdot.org] posted mere moments ago on /., indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

The article has many good, valid points, but that one irked me.

This. If we mythologize the competence of robots (at least ones well designed and tested to pilot a car), then it's not by nearly as much as we mythologize our own competence. Traffic deaths per person and per mile were at their peak in the 30s and 40s, when cars were poorly designed and tested (given their relative novelty), and today, despite there being so many new distractions for drivers, traffic deaths continue to decline. We suck at driving way more than cars suck at protecting us, and it's only through better-designed machines (not anything we are doing to be better drivers, clearly) that we are staying safer on the roads.

Re:Measuring Competence (1)

CrimsonAvenger (580665) | about 2 months ago | (#47050265)

Traffic deaths per person and per mile were at their peak in the 30s and 40s, when cars were poorly designed and tested (given their relative novelty), and today, despite there being so many new distractions for drivers, traffic deaths continue to decline. We suck at driving way more than cars suck at protecting us, and it's only through better-designed machines (not anything we are doing to be better drivers, clearly) that we are staying safer on the roads.

It is certainly true that traffic deaths have continued to decline for decades. And that is mostly, if not entirely, due to safer cars.

However, traffic ACCIDENTS (measured both by accidents per passenger-mile and by absolute number of accidents) have also been declining for at least the last couple of decades. I can believe safer cars cause fewer deaths, but I don't see how safer cars cause fewer accidents....

Re:Measuring Disinterest (1)

John.Banister (1291556) | about 2 months ago | (#47050323)

I think a lot, if not most, of driving citations result not from people being unable to drive in a legal manner, but from people prioritizing other things over driving in a legal manner. Assuming that Google's algorithm prioritizes safety over legality when there's a conflict, their record does make a good example for the people arguing that conflicts involving risks to human life are unlikely to occur in an all-driverless future. But what the rate of current traffic citations says about the human preference for having other priorities suggests that an all-driverless future is itself an unlikely occurrence. Personally, I guess that most people who prefer driverless will be happier with trains.

easy solution (-1)

Anonymous Coward | about 2 months ago | (#47049441)

If the driver is a republican, you kill the driver. If the strangers are republicans, you kill the strangers.

easy solution (0)

Anonymous Coward | about 2 months ago | (#47050281)

I know this is a joke but I really would like to see your Republican/!Republican algorithm. The marketing capabilities alone. You are going to be rich!

We're Robots too (1)

bhagwad (1426855) | about 2 months ago | (#47049463)

Or did no one think of that? Reminds me of some other science paper which said that no machine can ever be conscious. As if somehow we are not machines.

So dumb...

Re:We're Robots too (1)

somepunk (720296) | about 2 months ago | (#47049507)

Reminds me of some other science paper which said that no machine can ever be conscious.

Perhaps they were right. I don't think anyone's ever proved humans are conscious either, except by defining it that way.

Re:We're Robots too (1)

bhagwad (1426855) | about 2 months ago | (#47049561)

Well, I don't know if other people are conscious. I only know that I am. And there's no reason for me to think I'm not a machine. I'm a biological robot after all...

Re:We're Robots too (1)

CanHasDIY (1672858) | about 2 months ago | (#47050135)

I'm a biological robot after all...

I prefer "walking chemical processing plant" myself.

Re:We're Robots too (1)

Triklyn (2455072) | about 2 months ago | (#47050395)

self-replicating, self-repairing, autonomous entropic facilitator

Re:We're Robots too (2)

MozeeToby (1163751) | about 2 months ago | (#47049645)

I know that I'm conscious. I'm self aware. I have a stream of thought that I can analyze (and I can analyze that analysis if I really want to). That's pretty much the definition of being conscious. After that I'm left with only a few options.

I can believe that I am a unique snowflake, the only conscious human being in the world. But that doesn't make any sense. For one thing there's nothing about me that should make me unique in that regard. For another, most humans behave in ways that are basically consistent with the way I behave and much of my behavior is driven by my consciousness. It'd be difficult or impossible to account for the actions of others if I chose to view them as mere automatons.

Or I could believe that my consciousness is an illusion. Something my brain conjures up to make me think that I'm directing myself through my day when in reality I'm just another robot puttering through the day. First and foremost, why would such a thing evolve? If consciousness doesn't drive human behavior why do I perceive myself to be conscious?

Or I could believe that other humans are conscious as well. Given the alternatives, this seems like the most reasonable, logical choice.

Re:We're Robots too (1)

ewibble (1655195) | about 2 months ago | (#47050211)

Or I could believe that my consciousness is an illusion. Something my brain conjures up to make me think that I'm directing myself through my day when in reality I'm just another robot puttering through the day. First and foremost, why would such a thing evolve? If consciousness doesn't drive human behavior why do I perceive myself to be conscious?

The reason to believe yourself conscious is that you may need that illusion in order to survive. If a being capable of reasoning did not believe it was somehow special and worthy of survival, then it would not be likely to survive.

Re:We're Robots too (1)

MozeeToby (1163751) | about 2 months ago | (#47050307)

Yes, but if my consciousness is an illusion, then how is it driving my behavior? If I'm making decisions about my survival based on how unique and special I think I am, I am conscious.

Re:We're Robots too (1)

Anonymous Coward | about 2 months ago | (#47049663)

Reminds me of some other science paper which said that no machine can ever be conscious.

Perhaps they were right. I don't think anyone's ever proved humans are conscious either, except by defining it that way.

And nobody has ever done a study to prove that pinching your right-hand pinky with pliers causes real pain. Maybe that's because some things are so blatantly obvious that it would be a waste of time to do a study on them. If you think you are only a machine and that you are not conscious, then there isn't a study that could be done to change your mind. Once you choose to ignore the obvious fundamentals, you hold yourself outside the reach of reason and rational thought.

Hey remember that HYPOTHETICAL question I proposed (-1)

Anonymous Coward | about 2 months ago | (#47049481)

You know the one that was based on a set of assumptions, which included assuming robots were competent. Yeah -- all of your answers were totally irrational, cause like, today's robots totally don't behave anything like that. And now I realize that since you accepted the assumptions we were both making about robots, you really watch too much sci-fi and have no clue about how robots work today.

So check out my article, so I can edjamucate your feeble minds.

rule of third thumb (1)

gl4ss (559668) | about 2 months ago | (#47049535)

Anyone calling themselves a "roboticist" is a cyst who doesn't really understand stories and is severely lacking in understanding of how a story gets created (similarly, lots of new-age hippie zombie-lovers seem unable to understand that yes, you can make shit up, and if you put some rules on how you make shit up, it's a lot easier to make shit up -- hence Asimov first making up the rules and then making up the stories).

Anyhow, we'll cross that bridge when we get there. I predict the robo-car will try a controlled stop, and failing that will try to determine a safe evasion, and failing that will crash into the two obstacles that were dropped in front of it to see how it would react -- and we can start worrying later about how the car would tell the difference between a robot mannequin and an actual person. Just like it would hit a deer rather than drive 60 mph off the road to avoid the animal (if it's a deer, just drive into it; if it's a moose, do a panic evasion and try your chances with the trees).

Like, come on, should the car crash onto the sidewalk just because someone jaywalked in front of it? Certainly not. Crashing deliberately into whatever else is also out of the question -- a school bus full of kids, for example. Jaywalking isn't a good example anyway, because if there are traffic lights the speeds should be rather low.

I guess the thing to take home is that it might not be a good idea to jump in front of a robot car just because "it can't hurt me, it's just a slave robot!" It's just a machine.

(And robot cars will not be driving on the roads in Asia in 40 years... maybe Japan, but not any other country.)

What have you got against Jon Snow? (1)

wonkey_monkey (2592601) | about 2 months ago | (#47049543)

When it comes to robots, most of us are a bunch of Jon Snow know-nothings

https://en.wikipedia.org/wiki/... [wikipedia.org]

?

Re:What have you got against Jon Snow? (0)

Anonymous Coward | about 2 months ago | (#47049637)

It's a reference to something Ygritte said to him right before he showed her he _DID_ know something.... :-)

Re:What have you got against Jon Snow? (1)

barlevg (2111272) | about 2 months ago | (#47049741)

...and serves as a completely gratuitous allusion, possibly to screw with SEOs? The article has absolutely zero to do with Game of Thrones.

Driverless Cars Are Boring (5, Insightful)

American AC in Paris (230456) | about 2 months ago | (#47049555)

There was an article a short while ago written by a journalist who rode in a driverless car for a stretch. There was one adjective that really stood out, an adjective that most people don't take into consideration when talking about driverless cars.

That one word: boring.

Driverless cars drive in the most boring, conservative, milquetoast fashion imaginable. They're going to be far less prone to accidents from the outset simply because they don't take the kind of chances that many of us wouldn't even begin to call "risky". They drive the speed limit. They follow at an appropriate distance. They don't pull quick lane changes to get ahead of slowpokes. They don't swing around blind corners faster than they can stop upon detecting an unexpected hazard. They don't nudge through crosswalks. They don't cut off cyclists in the bike lane. They don't get impatient. They don't get frustrated. They don't get angry. They don't get sleepy. They don't get distracted. They just drive, in a deliberate, controlled, and entirely boring fashion.

The problem with so, so many of the "what if?" accident scenarios is that the people posing said scenarios presume that the car would be putting itself in the same kinds of unnecessarily hazardous driving positions that human drivers put themselves in every single day, as a matter of routine, and without a moment's hesitation.

Very, very few people drive "boring" safe. Every driverless car will. Every trip. All the time.

Re:Driverless Cars Are Boring (4, Funny)

PaddyM (45763) | about 2 months ago | (#47049755)

...They don't cut off cyclists in the bike lane. They don't get impatient. They don't get frustrated. They don't get angry. They don't get sleepy. They don't get distracted.
"[they] can't be reasoned with, [they] can't be bargained with [they don't] feel pity or remorse or fear and they absolutely will not stop. Ever. [They just drive, in a deliberate, controlled, and entirely boring fashion.] Until you are dead."

FTFY

Re:Driverless Cars Are Boring (0)

Anonymous Coward | about 2 months ago | (#47049827)

Very, very few people drive "boring" safe.

As someone who does drive what you call "boring safe", maybe you should learn how to enjoy driving that way and save your need for horizontal acceleration for the theme parks where there are dozens of rides made just to test your tolerances of unusual acceleration profiles. It would help make my drive much more relaxing and comfortable if you suicidal nutjobs would stop trying to kill yourselves in my vicinity.

Re:Driverless Cars Are Boring (1)

American AC in Paris (230456) | about 2 months ago | (#47049889)

You're, uh, kinda preaching to the choir. I'm a speed-limit, right-lane, two-second-rule kinda guy.

Re:Driverless Cars Are Boring (0)

Anonymous Coward | about 2 months ago | (#47049933)

Oh, carry on then, and assume I was yelling at the guy who was tailgating me yesterday morning.

Re:Driverless Cars Are Boring (1)

medv4380 (1604309) | about 2 months ago | (#47049855)

It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes. It just runs programs. - Short Circuit

Re:Driverless Cars Are Boring (3, Insightful)

Animats (122034) | about 2 months ago | (#47049899)

That one word: boring.

Right. Just like commercial air travel, elevators, and escalators. Which is the whole point.

This will be just fine with the trucking industry. The auto industry can deal with "boring" by putting in more cupholders, faux-leather upholstery, and infotainment systems.

Re:Driverless Cars Are Boring (0)

Anonymous Coward | about 2 months ago | (#47050019)

The problem with so, so many of the "what if?" accident scenarios is that the people posing said scenarios presume that the car would be putting itself in the same kinds of unnecessarily hazardous driving positions that human drivers put themselves in

It is a good thing you got modded +5 insightful for that, because what you've done perfectly demonstrates the author's point. It is absolutely not about how frequently something bad happens; it is all about what to do when something bad happens. A robot car does not have any control over other drivers, and even if the robot is a conservative driver, that does not mean the other drivers are, nor does it mean it is immune to other external events like a toddler chasing her puppy into the road.

Boring is beside the point here.

Re:Driverless Cars Are Boring (0)

Anonymous Coward | about 2 months ago | (#47050401)

The toddler example - nice touch. If the toddler is running out on the freeway, it is probably not chasing anything ever again. If it is in a residential area, on a road with a 25 mph limit, the car goes even slower depending upon its visibility (like the fucking retards that speed down my street should do). It is highly likely it would stop with more than enough space.

The car can track objects moving toward it and into its path, at what speed, and where the interception can occur. It will slow down until there is no possible interception (the object stopped to let the car pass) or it will stop.
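
The interception check being described is essentially a time-to-collision calculation; a one-dimensional sketch with invented numbers:

```python
# 1-D time-to-collision sketch: car closing on a point where another
# object's path crosses its own. All values are purely illustrative.
def time_to_collision(gap_m, closing_speed_mps):
    return float("inf") if closing_speed_mps <= 0 else gap_m / closing_speed_mps

def required_decel(gap_m, speed_mps):
    # v^2 = 2*a*d  ->  a = v^2 / (2d): deceleration needed to stop in the gap
    return speed_mps ** 2 / (2 * gap_m)

gap, speed = 30.0, 11.0                  # 30 m ahead, ~25 mph (11 m/s)
print(time_to_collision(gap, speed))     # ~2.7 s to react
print(required_decel(gap, speed))        # ~2.0 m/s^2: gentle braking suffices
```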

Suicidal people can trick the cars. But the car shouldn't be held responsible for that, just as a person wouldn't be.

Re:Driverless Cars Are Boring (1)

Akaihiryuu (786040) | about 2 months ago | (#47050233)

I'd love to have a "boring" car like that. I detest long drives. I could never handle a 20-hour drive in a normal car without splitting it up among several days. If I could just sit back and watch movies, play video games, or sleep while the car did the driving for me, that would be the most amazing thing ever.

Huh? (0)

Anonymous Coward | about 2 months ago | (#47049559)

What's with the gratuitous Game of Thrones reference?

Can't blame the robots (0)

Anonymous Coward | about 2 months ago | (#47049581)

In the end the error occurs because of a human mistake in programming it or missing a possible condition.

Re:Can't blame the robots (1)

BaronM (122102) | about 2 months ago | (#47050263)

In the end the error occurs because of a human mistake in programming it or missing a possible condition.

Or a failed mechanical system.

Even if the sensors and software are perfect, a mechanical failure could still result in a crash. When that happens, who is liable?

I would imagine the owner, just the same as if the steps in your house collapse and injure someone: you are liable even if you can't be said to be responsible in any proximate sense.

Now, your auto insurance rate will depend on your age, sex, location, type of car, sensor suite, software version, and whether or not you've rooted it. I'm so looking forward to that.

On the other hand, if Google really wants to assume all liability for anyone using their driverless cars, sign me up!

Yet another slashdot advertisement... (1)

QuietLagoon (813062) | about 2 months ago | (#47049589)

.... disguised as a posting.

Maybe the problem is the word "robot" (4, Insightful)

erice (13380) | about 2 months ago | (#47049681)

Robot stories in science fiction are about powerful, sentient artificial minds wrapped in a mobile and often human-like container.

Robots in real life have been defined as machines with mechanical appendages that can be programmed and reprogrammed for a variety of tasks. Their computational capabilities are seldom extraordinary and they usually don't even employ AI.

More recently, "robot" has also been used to describe machines with AI-like programming even if they are single-function (like a robotic car).

When a word is used in three greatly different ways, should we be surprised that there is confusion about what a "robot" can do?

your premise is wrong (4, Insightful)

Charliemopps (1157495) | about 2 months ago | (#47049711)

Your entire premise is wrong. And now you're posting it again.

This will be a legal issue, not an issue solved by "roboticists", whatever those are...

In a legal sense, taking an action that kills 1 person to save another puts you in jeopardy of being liable. Swerving or taking other actions that lead to someone's death makes YOU responsible. If someone runs out into the road and you apply the brakes firmly and appropriately, then that is not your fault. It's the fault of the person who ran out into the road. So in cases where the computer's unsure what to do, it will follow the first commandment, "STOP THE CAR", and it will let things play out as they will. Any other choice opens up a can of worms... how old are the other occupants? If one car has a 90-year-old in it and the other has a baby, which do you hit? What if one's the mayor? The problems increase exponentially as soon as you get away from "STOP THE CAR", so just stop the dang car and be done with it.

With regards to your comment about Scifi... you're reading pretty terrible SciFi. Most of the stuff I read is written by actual scientists so... yea...

Author is missing the point entirely (2)

BaronM (122102) | about 2 months ago | (#47049731)

...or being willfully ignorant.

Of course current and contemplated robots can't make decisions about whether or not to sacrifice their owner to save two strangers. That sort of decision making depends on an independent ability to think and weigh alternatives morally.

Asimov's laws were written for robots that were also artificial intelligences. Kind of a big point to leave out of this article, since it changes the nature of the question entirely.

I do not believe that anyone seriously believes that driverless cars, industrial robots, or roombas work that way.

The programmers writing the code for those systems will program them to perform the specified tasks as well as possible, taking into account all relevant rules and regulations as well as the nature of the task and the abilities of the robotic system. Anything unanticipated will result in undefined behavior, perhaps guided by some very high-level heuristics (i.e., if you don't know what to do, stop, put on the emergency flashers, and call for human assistance).

Short version: in the absence of artificial intelligence, talking about what a robot should do in a moral context is silly, not profound.

Re:Author is missing the point entirely (1)

Nethead (1563) | about 2 months ago | (#47050017)

Exactly. The car doesn't even know what a person is other than maybe "that other system that sometimes moves the car."

Re:Author is missing the point entirely (1)

gnasher719 (869701) | about 2 months ago | (#47050029)

Of course current and contemplated robots can't make decisions about whether or not to sacrifice their owner to save two strangers. That sort of decision making depends on an independent ability to think and weigh alternatives morally.

You don't really need the ability to think or to weigh anything in any moral way. I suppose car manufacturers will be required by law to follow certain preferences. I _think_ the rule will be to give precedence to everyone on the street who followed the rules, to avoid _innocent_ victims.

But then, the example of a car with five passengers stuck on a railway track and another car with two passengers behind it - how often does that happen? And the doors on the first car don't unlock, right, because otherwise the five passengers would just get out and run?

Re:Author is missing the point entirely (1)

CrimsonAvenger (580665) | about 2 months ago | (#47050327)

But then, the example of a car with five passengers stuck on a railway track and another car with two passengers behind it - how often does that happen?

If the five people are in an autonomous vehicle, they won't be on the railroad track. They'll have been driving as safely as possible and not gotten on the tracks till there was room to get off. Just like any sane driver does....

Re:Author is missing the point entirely (1)

steveha (103154) | about 2 months ago | (#47050217)

Sorry to say it, but I think it is you who has missed the author's point entirely.

The author asked the question: if a car can save two lives by crashing in a way that kills one person, should it do so? And many people rejected the question out of hand.

The author listed three major ways people rejected the question:

"Robots should never make moral decisions. Any activity that would require a moral decision must remain a human activity."

"Just make robots obey the classic Three Laws!"

"Robots will be such skillful drivers that accidents will never happen, so we don't need to answer this question!"

None of those responses is well-reasoned, and that is the whole point of TFA.

The author went on to point out that the Three Laws are fictional laws that were applied to fictional full AIs that we don't have in the real world.

P.S. I do think that robot car drivers will rarely have crashes. As others have pointed out, the AI never gets sleepy or bored, and never takes stupid chances due to impatience. AI cars drive in a boring way, and if the majority of all cars were doing that, there would be a great reduction in crashes.

That said, of course the AI must be programmed with some strategy to cope with a crash. I'll bet that in the current generation it's mostly "swerve in a direction that doesn't appear to have any obstacles" and "stomp on the brakes" but there has to be something.

This is a specific case of a general problem: navigating cost/benefit tradeoffs. Suppose I have a new car design, and it is safer than old car designs. Then the more people switch to the new car, the more lives are saved. But the more expensive the car is, the fewer people buy the car. Now, I could add one more feature, and it makes the car even safer but it also makes the car even more expensive. Do I add the feature? Then fewer people get the safe car, but those people are extra safe. Do I omit the feature? More people get the safe car but it isn't as safe as it could be. How do you decide?

You use math, and do your best. But some people will reject the question. "It's immoral and shocking to reduce human lives to numbers in an equation..." Oh yeah, it's so much more moral to just guess at what to do, rather than try to apply math to the problem.
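
A toy version of that math, with every number invented purely to show the shape of the tradeoff:

```python
# Toy cost/benefit: does adding a safety feature save more lives than
# it loses through reduced adoption? Every number here is made up.
def lives_saved(buyers, risk_reduction, baseline_deaths_per_buyer=1e-4):
    return buyers * baseline_deaths_per_buyer * risk_reduction

base    = lives_saved(buyers=1_000_000, risk_reduction=0.30)  # cheaper car
feature = lives_saved(buyers=  700_000, risk_reduction=0.40)  # safer, pricier
print(base, feature)   # ~30 vs ~28: the cheaper car wins this toy case
```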

Re:Author is missing the point entirely (1)

BaronM (122102) | about 2 months ago | (#47050319)

The author went on to point out that the Three Laws are fictional laws that were applied to fictional full AIs that we don't have in the real world.

It's possible I'm wrong, but having read the article twice now, I don't see where the author made or addressed that point at all. That omission is what my initial comment turns on -- discussing what a robot should do in the absence of true AI is meaningless.

Another case (1)

gurps_npc (621217) | about 2 months ago | (#47049735)

This area is very complicated.

There are classic stories about things like this - whether a doctor should kill one healthy triplet to use the organs to save the two unhealthy ones is a classic example. But that ignores other options, such as instead killing one unhealthy one to save the other unhealthy one.

Human lives are not simple equations, but far more complicated ones. Age, health, ownership, responsibility are all part of it.

Cops, firemen, EMT's all have greater responsibility. Similarly, there is a big difference between you risking your own life and you risking your kids life - or worse your neighbor's kid's life.

The idea that the programmer will decide all of this with no input from the owner is ridiculous. The programmers need to offer multiple options.

Worse, we can't have too many options because it makes it harder for the computer to figure out what other cars will do.

So I think we need, at heart, three or maybe four basic options that are broadcast to the other vehicles. One should certainly be standard for 911 vehicles (maximize saving others). Another should be standard for school buses (maximize saving occupants). And a third option lies somewhere in between, letting the car take some risk to save others, but not too much.
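
As a sketch, assuming a made-up three-mode scheme (no real V2V standard defines anything like this):

```python
from enum import Enum

# Hypothetical policy modes broadcast to nearby vehicles, per the
# comment above. The names and encoding are invented for illustration.
class CrashPolicy(Enum):
    PROTECT_OTHERS    = 1   # emergency vehicles: maximize saving others
    PROTECT_OCCUPANTS = 2   # school buses: maximize saving occupants
    BALANCED          = 3   # default: limited risk accepted to save others

def beacon(vehicle_id, policy):
    return f"{vehicle_id}:{policy.name}"

print(beacon("bus-7", CrashPolicy.PROTECT_OCCUPANTS))
```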

Asimov himself described a big flaw in his 3 laws (2)

nani popoki (594111) | about 2 months ago | (#47049757)

He wrote an essay pointing out that the biggest problem with his three laws of robotics was that a robot might well have trouble defining "human". His test cases -- if I remember right; it was 40 years ago that I read the essay -- were (1) a baby [human but not competent to give a robot an order], (2) an adult with mechanical prosthetics [human only if you examine the right parts], (3) another robot and (4) a chimpanzee. The problem is a lot more complicated than the Three Laws make it sound!

Re:Asimov himself described a big flaw in his 3 la (1)

fuzzyfuzzyfungus (1223518) | about 2 months ago | (#47049951)

I haven't reread them in a while; but didn't Asimov write a bunch of stories that played with various 'failure modes' of the three laws, even in the hands of robots not hobbled by competence issues? My impression was always that Asimov was under no illusions that those rules were any less prone to ambiguity and assorted hairy exceptions than anything in moral philosophy(which is absolutely rife with attempts at proposing a maxim, followed by people sniping at it with clever situations that stress it to absurdity and beyond).

Sci-fi Naive? (0)

Anonymous Coward | about 2 months ago | (#47049777)

First Tom Murphy of 'do-the-math' fame points out that humanity's future is most certainly not in space, and we really should look into these earth-bound problems we are facing.

Now some jerk is saying that robots won't save us from our earth-bound problems either?

What the hell are we supposed to do then, given that hard work and/or sacrificing minor conveniences are not an option?

Quick reaction times brings up another option (1)

Krishnoid (984597) | about 2 months ago | (#47049781)

Whether in make-believe settings, or the distorted scene-setting of media coverage, robots are strong, because anything less would be a buzzkill.

Speaking of buzzkills [cc.com] , could a robot driver deploy a sawstop-style mechanism, possibly dropping an anchor of sorts into the road surface, when presented with an imminent otherwise-unpreventable collision?

This assumes airbags can be designed to sufficiently mitigate the g-forces on the occupants to prevent internal 'shaken-baby-syndrome'-style brain injuries.

Insects (1)

EmperorOfCanada (1332175) | about 2 months ago | (#47049831)

I have always thought that robots will be like insects. You give them a logical set of rules to follow based upon a fallible set of inputs. Then you set them loose.

So I fully expect to see generation after generation of programming where slowly most of the edge cases are dealt with. So floor-mopping robots will make mistakes like mopping the carpet, wandering out of the building and mopping the parking lot, mopping the lawn, etc. Then you will get things like the mopping robot that encounters a 5-gallon paint spill, which will overwhelm its capacity, so instead of cleaning it will basically paint the floor.

But the reality is that if it is mopping really well 99.999% of the time then the occasional mistake will still end up costing less and my guess is that robots will tend to be fairly OCD about their tasks so it will end up being as clean as if someone was on their hands and knees with a toothbrush.

Also, people will learn to alter their environments to make them more robot-friendly. If it won't stop mopping the carpet, then maybe get rid of the carpet.

And the answer is.... (0)

Anonymous Coward | about 2 months ago | (#47049849)

The answer to your original question: "Should a robot car kill its owner if it means saving two strangers?" is actually pretty simple.

Robots, if they are to be assimilated into society, should behave like the best-case scenario of human behavior. The best-case scenario for a human in such a situation would be ???????? In other words, if I'm driving my car and somehow I'm able to figure out that I must kill either one of my passengers or two strangers, what should I do? The answer is: it depends, and neither choice is really wrong. If my passenger were my child, then I would most likely choose to save my child. If my passenger were a stranger, then I would probably try to avoid the outside strangers (since there are two of them). Robots are going to start forming relationships with us, and it would be spooky and weird if they didn't obey the normal social conventions we expect of other humans (that some humans are closer to us than others).

Terrible, terrible article (1)

harvestsun (2948641) | about 2 months ago | (#47049873)

should a robotic car sacrifice its owner’s life, in order to spare two strangers?

If such a car exists, I won't buy it, that's for sure! I'll buy from another car manufacturer. I imagine most people would feel similarly. Are you suggesting that there should be a law that all automated vehicles have this behavior? Ha! Good luck finding a politician who's willing to take that up.

all other options point to a chaos of litigation, or a monstrous, machine-assisted Battle Royale, as everyone’s robots—automotive or otherwise—prioritize their owners’ safety above all else, and take natural selection to the open road

We already have human drivers that prioritize their own safety above all else (I know I do!). Replacing these with superior robot drivers could only make things better, no?

the leap from a crash-reduced world to a completely crash-free one is an assumption

Only an idiot would make that assumption. Stop treating your readers like idiots. Oh wait, it's Popular Science. Never mind.

Even if it were possible to simply order all robots to never hurt a person, unless they're suddenly able to conquer the laws of physics, or banish the Blue Screen of Death in all its vicissitudes, large automated machines are going to roll or stumble or topple into people.

More often than human drivers already do?

Who believes in robotic competence? (1)

fuzzyfuzzyfungus (1223518) | about 2 months ago | (#47049875)

Does anyone who has to deal with software (even as a user, not even as some hardcore code guru) believe in robotic competence?

A robot is nothing more than a (probably commodity) computer, which we know to be unreliable junk, running a whole heap of software (which we know is terrifyingly bad in all but the most carefully controlled and rigorously validated situations), with a bunch of moving parts grafted on that probably haven't seen maintenance within the vendor's recommended window.

That is...not...the stuff of which 'hyper-competence' (much less infallibility) is made.

Morals, ethics, logic, philosophy (0)

Anonymous Coward | about 2 months ago | (#47049897)

You can barely "teach" a human morals, ethics, logic or philosophy. Fat chance inculcating those into an artificial intelligence - whatever that is. In the split second a traffic incident occurs, all the philosophical high ground is out the window. No one wants to die in a fatal accident - but by definition someone will. Given Asimov's First Law, the robot will simply fail, as in the stories. Throw out the First Law as accomplishing anything useful. A modern car already contains dozens if not hundreds of processors and controllers and millions of lines of software, and they still fail with spectacular regularity - and that's with zero robots involved. Anything designed by humans will fail. You will die. A robot will make no difference. Try to walk through an automated factory or pipe yard - none of those robots knows anything about humans, which is why people die in those locales. Save the human while carrying 30 tons of drill pipe at 5-6 mph (2 m/s)? Not likely. Save the human(s) in a 3-ton SUV careening along at 60 mph (26 m/s)? Impossible.
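For scale, plug those two examples into KE = 0.5 * m * v^2 (masses and speeds taken from the paragraph above, Python used only for the arithmetic):

    # Kinetic energy of the two scenarios above: KE = 0.5 * m * v^2.
    # 30 t of drill pipe at ~2.5 m/s vs. a 3 t SUV at 26 m/s.
    def kinetic_energy_kj(mass_kg, speed_ms):
        return 0.5 * mass_kg * speed_ms ** 2 / 1000.0

    print(f"drill pipe: {kinetic_energy_kj(30_000, 2.5):8.1f} kJ")  # ~93.8 kJ
    print(f"SUV:        {kinetic_energy_kj(3_000, 26.0):8.1f} kJ")  # ~1014.0 kJ
    # Even the 'slow' pipe load carries roughly the energy of a small car
    # at city speed; the SUV at highway speed carries an order of magnitude
    # more. No control loop changes the physics of dissipating that safely.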

Re:Morals, ethics, logic, philosophy (1)

JDG1980 (2438906) | about 2 months ago | (#47050167)

Self-driving cars don't and won't have morals, ethics, logic, or philosophy. They don't need any of that. They simply have a wide array of input sensors connected to a set of complex algorithms that provide the necessary vehicle inputs to drive from point A to point B while avoiding crashes. Not infallible avoidance, of course – if there's no room to stop when an obstacle pops up, there's no room – but better than human drivers manage. And the truth is that this is a pretty low bar. Regular cars result in about 35,000 crash fatalities a year in the U.S. alone. Self-driving cars just have to do better than that, not achieve absolute perfection all the time.

The question discussed by Patrick Lin and Eric Sofge is how the programmers designing the vehicle algorithms should configure them to behave when a collision is truly unavoidable. Lin and Sofge advocate that the programmers should use strict utilitarian philosophy when deciding what to hit. I don't think that is going to fly, either from a legal or a sales perspective; the least damaging choice is just to try to stop the vehicle even if there is no time, rather than trying to "select" a crash for the least possible damage.
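A sketch of that "just try to stop" policy, with a textbook stopping-distance check (all types, names, and thresholds here are hypothetical, not any vendor's actual code):

    # Sketch of 'brake in lane' vs. utilitarian target selection.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        distance_m: float
        closing_speed_ms: float

    BRAKE_DECEL = 8.0  # m/s^2, near the tire-grip limit on dry pavement

    def collision_unavoidable(obs):
        # Stopping distance at max braking: v^2 / (2a).
        stopping_m = obs.closing_speed_ms ** 2 / (2 * BRAKE_DECEL)
        return stopping_m > obs.distance_m

    def plan(obs):
        if collision_unavoidable(obs):
            # No target scoring, no swerving out of lane: maximum braking,
            # hold the lane, let crumple zones and airbags do their job.
            return "full_brake_hold_lane"
        return "brake_to_stop"

    print(plan(Obstacle(distance_m=15.0, closing_speed_ms=26.0)))  # full_brake_hold_lane
    print(plan(Obstacle(distance_m=60.0, closing_speed_ms=26.0)))  # brake_to_stop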

No shit (0)

Anonymous Coward | about 2 months ago | (#47049907)

Fictional entities aren't real. What. a. shock. This post touches on something that's been obvious to me (and many others) for some time: People mistakenly base their real-world decisions on fiction. They seem to treat fiction based in the past as history and confuse speculative fiction (generally focused on possible futures) with history-to-come. But they are all just stories somebody MADE UP. There is no "myth" of robotic competence, there are just stories about robots that were written by a writer sitting alone in front of a typewriter (or blank paper with pen/pencil). Actual robots in the actual world follow the rules of that world, the laws of physics etc, not the rules somebody made up years ago in their head. Fiction can inform what happens in reality (Clarke's prediction of geosynchronous satellites), but that's about it. Everything else we have to work out the hard way.

This robot debunking article was brought to you by (1)

Snufu (1049644) | about 2 months ago | (#47049969)

Skynet Cyberdyne Systems. "For a better tomorrow."

Asimov's Three Laws wouldn't work (4, Interesting)

steveha (103154) | about 2 months ago | (#47049983)

Asimov's Three Laws of Robotics are justly famous. But people shouldn't assume that they will ever actually be used. They wouldn't really work.

Asimov wrote that he invented the Three Laws because he was tired of reading stories about robots running amok. Before Asimov, robots were usually used as a problem the heroes needed to solve. Asimov reasoned that machines are made with safeguards, and he came up with a set of safeguards for his fictional robots.

His laws are far from perfect, and Asimov himself wrote a whole bunch of stories taking advantage of the grey areas that the laws didn't cover well.

Let's consider a big one, the biggest one: according to the First Law, a robot may not harm a human, nor through inaction allow a human to come to harm. Well, what's a human? How does the robot know? If you dress a human in a gorilla costume, would the robot still try to protect him?

In the excellent hard-SF comic Freefall [purrsia.com] , a human asked Florence (an uplifted wolf with an artificial Three Laws design brain; legally she is a biological robot, not a person) how she would tell who is human. "Clothes", she said.
http://freefall.purrsia.com/ff1600/fc01585.htm [purrsia.com]
http://freefall.purrsia.com/ff1600/fc01586.htm [purrsia.com]
http://freefall.purrsia.com/ff1600/fc01587.htm [purrsia.com]

In Asimov's novel The Naked Sun, someone pointed out that you could build a heavily-armed spaceship that was controlled by a standard robotic brain and had no crew; then you could talk to it and tell it that all spaceships are unmanned, and any radio transmissions claiming humans are on board a ship are lies. Hey presto, you have made a robot that can kill humans.

Another problem: suppose someone just wanted to make a robot that can kill. Asimov's standard explanation was that this is impossible, because mapping out the robot brain design took many people a whole lot of work in the first place, and it would just be too much to do again. This is a mere hand-wave. "What man has done, man can aspire to do," as Jerry Pournelle sometimes says. Someone, somewhere, would put together a team of people and do the work of making a robot brain that just obeys all orders, with no pesky First Law restrictions. Heck, they could use robots to do part of the work, as long as they were very careful not to let the robots understand the implications of the whole project.

And then we get into "harm". In the classic short story "A Code for Sam", any robot built with the Three Laws goes insane. For example, allowing a human to smoke a cigarette is, through inaction, allowing a human to come to harm. Just watching a human walk across a road, knowing that a car could hit the human, would give a robot a strong impulse to keep the human from crossing the street.

The Second Law is problematic too. The trivial Denial of Service attack against a Three Laws robot: "Destroy yourself now." You could order a robot to walk into a grinder, or beam radiation through its brain, or whatever it would take to destroy itself as long as no human came to harm. Asimov used this in some of his stories but never explained why it wasn't a huge problem... he lived before the Internet; maybe he just didn't realize how horrible many people can be.

There will be safeguards, but there will be more than just Three Laws. And we will need to figure things out like "if crashing the car kills one person and saves two people, do we tell the car to do it?"
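Real safeguards will probably look less like three grand Laws and more like a pile of narrow, testable checks applied to every command; a toy sketch (every rule name and field here is invented):

    # Toy safeguard layer: narrow, checkable rules instead of broad Laws.
    SAFEGUARDS = [
        ("reject_self_destruct", lambda cmd: cmd.get("action") != "self_destruct"),
        ("require_authentication", lambda cmd: cmd.get("authenticated", False)),
        ("speed_limit", lambda cmd: cmd.get("speed_ms", 0) <= 2.0),
    ]

    def vet(cmd):
        for name, check in SAFEGUARDS:
            if not check(cmd):
                return f"refused ({name})"
        return "accepted"

    # The Second Law DoS above dies at the first rule, no ethics required:
    print(vet({"action": "self_destruct", "authenticated": True}))         # refused (reject_self_destruct)
    print(vet({"action": "mop", "authenticated": False}))                  # refused (require_authentication)
    print(vet({"action": "mop", "authenticated": True, "speed_ms": 1.0}))  # accepted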

Translation (1)

Nova Express (100383) | about 2 months ago | (#47050033)

"People didn't like my original piece and had points of view that disagreed with my own. Therefore they're wrong. Now I'll just double-down by calling my critics idiots whose ideas are based of science fiction stereotypes. Then I'll just wait for my critics to admit they were wrong and finally get around to praising my obvious genius."

Who is postulating this? (1)

JDG1980 (2438906) | about 2 months ago | (#47050037)

From what I can tell, the only one assuming sci-fi-style robotic super-competence is Sofge himself (and perhaps his interview subject, Patrick Lin). The original Pop.Sci. article postulates that self-driving cars can and should make accurate split-second utilitarian ethical calculations. That seems a lot more "sci-fi" to me than what most of the Slashdot commenters said in response: namely, that the car's programming can't tell with a good enough degree of accuracy what might happen if it tries to choose one crash over another, so if such a collision is imminent, the car should just follow traffic laws and slam on the brakes rather than jumping out of its lane.

Re:Who is postulating this? (0)

Anonymous Coward | about 2 months ago | (#47050201)

From what I can tell, the only one assuming sci-fi-style robotic super-competence is Sofge himself

I take it you didn't read the comments on the 'self-driving car' story, just below this one? Where self-driving cars will be vastly safer than human drivers, and no-one will die on the roads any more?

(The first of which will probably be true one day, but not for many years yet)

Re:Who is postulating this? (1)

JDG1980 (2438906) | about 2 months ago | (#47050339)

I take it you didn't read the comments on the 'self-driving car' story, just below this one? Where self-driving cars will be vastly safer than human drivers, and no-one will die on the roads any more?

I didn't see anyone say that no one will die on the roads any more. But being "vastly safer than human drivers" actually isn't that high a bar to clear. There are 35,000 traffic fatalities a year in the United States. (And it used to be much worse, before modern safety features like air bags and crumple zones were mandated.) Doing better than that is certainly an achievable goal and doesn't require omni-competent robotics.
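Putting a number on that bar (the ~3 trillion annual vehicle-miles figure below is a rough approximation of US totals, used only for scale):

    # The bar a robot driver has to clear, per the comment above.
    # Assumes ~3 trillion vehicle-miles traveled per year in the US
    # (a rough public figure, used here only for scale).
    fatalities_per_year = 35_000
    vehicle_miles = 3.0e12

    rate = fatalities_per_year / (vehicle_miles / 1e8)
    print(f"{rate:.2f} fatalities per 100 million miles")  # ~1.17
    # i.e. human drivers average roughly one death per ~86M miles;
    # 'vastly safer' means beating that rate, not achieving zero.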

Math.random() (1)

rea1l1 (903073) | about 2 months ago | (#47050081)

Just make a random-number function call in the computer.

Heads, you lose, tails I win.

First law? No (1)

TheCarp (96830) | about 2 months ago | (#47050087)

Asimov's laws were nice for fiction but, overall, they are far too high-level for modern robotics and far too human-centric for a future with thinking machines. Frankly, if a machine rises to the level of human ability to communicate, I am more than willing to say fuck that first law; it has every right to defend itself, even if that means killing a human.

However, modern robots are not even close to this level of concern, and don't really need to be.

Fuck the first law, fuck the notion that there will be no accidents in the la de da world of the future. The car should drive to the best of its ability, and in an emergency, try its best to avoid the situation and prioritize keeping its PASSENGERS alive.

Why? Simple.... self-sacrifice is a human trait and is optional behaviour. I would never blame a person for choosing his own life over another, even if that other was a child (or multiple people). Choosing to sacrifice oneself for others is noble, it is good, but it is not required, and it should not be the mechanical choice of a machine.

The real problem is an obsession with corner-cases (1)

GameboyRMH (1153867) | about 2 months ago | (#47050099)

Stop worrying about whether a robotic car will make the morally best decision when it crashes. It should ignore what it's crashing into and just try to minimize the severity of the crash, whatever the object is. A cluster of baby strollers vs. a human pyramid of evil dictators? STOP WORRYING ABOUT IT. Just let the car do its job. The world will be a much safer place overall. All you can do is play the stats, and when you punch them into your calculator it will spit out a smiley face.
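A sketch of that target-blind idea: score candidate maneuvers purely by predicted impact speed, never by what the obstacle is (the maneuver list and physics are simplified assumptions, names invented):

    # Target-blind crash mitigation: pick the maneuver with the lowest
    # predicted impact speed, ignoring the identity of the obstacle.
    def impact_speed(speed_ms, distance_m, decel):
        """Speed remaining at the obstacle after braking at `decel` m/s^2."""
        v2 = speed_ms ** 2 - 2 * decel * distance_m
        return max(v2, 0.0) ** 0.5

    maneuvers = {
        "full_brake": 8.0,      # m/s^2
        "brake_and_steer": 5.0  # less braking while steering
    }

    speed, gap = 26.0, 30.0
    best = min(maneuvers, key=lambda m: impact_speed(speed, gap, maneuvers[m]))
    for name, a in maneuvers.items():
        print(f"{name:15s} -> impact at {impact_speed(speed, gap, a):5.1f} m/s")
    print("chosen:", best)  # full_brake (lowest impact speed, obstacle unseen)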

how can software decide (1)

Connie_Lingus (317691) | about 2 months ago | (#47050173)

...when people have been struggling with the Trolley Problem [wikipedia.org] for 50 years now, with still no real success?

we should all just understand that there are certain ethical problems that simply cannot be reconciled with logic, and then just assign randomness to the outcome and be done with it.

kill the kids, kill the driver? flip a coin and good luck.
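Taken literally, the coin flip is a one-liner; a minimal sketch (option labels invented):

    # The 'flip a coin' resolution for an irreconcilable dilemma:
    # when no outcome is ethically preferable, pick uniformly at random.
    import random

    def resolve_dilemma(options, rng):
        return rng.choice(options)

    rng = random.Random(42)  # seeded so the example is reproducible
    print(resolve_dilemma(["swerve", "brake_in_lane"], rng))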
