
The Struggle To Ban Killer Robots

samzenpus posted about 7 months ago | from the shoot-to-kill dept.

Robotics 138

Lasrick (2629253) writes "The Campaign to Stop Killer Robots is a year old; the same month it was founded, the UN's special rapporteur on extrajudicial, summary or arbitrary executions called for a moratorium on the development and deployment of autonomous lethal weapons while a special commission considered the issue. The campaign is succeeding at bringing attention to the issue, but it's possible that it's too late, and if governments don't come to a common understanding of what the problems and solutions are, the movement is doomed. As this article points out, one of the most contentious issues is the question of what constitutes an autonomous weapons system: 'Setting the threshold of autonomy is going to involve significant debate, because machine decision-making exists on a continuum.' Another, equally important issue, of course, is whether a ban is realistic."


Just make them 3/4 size... (2)

bi$hop (878253) | about 7 months ago | (#46954391)

...easier to stop them if they turn on us. Also, give them a 3-foot cord.
 
-Dwight Schrute

Skynet would not approve (1)

davebarnes (158106) | about 7 months ago | (#46954453)

I am pretty sure that Skynet will nip this ban effort in the bud.

Re:Skynet would not approve (0)

Anonymous Coward | about 7 months ago | (#46954531)

What makes you think that Skynet isn't behind this?

Rise of the machines. (1)

hoboroadie (1726896) | about 7 months ago | (#46956313)

Read TFA, found an easier-to-read, more informative page. [theregister.co.uk]

seen 'em (3, Funny)

lophophore (4087) | about 7 months ago | (#46954457)

I saw the Killer Robots. They opened for the B-52s at the House of Blues in Orlando.

They were... interesting. Why does the UN want to ban them? I've seen many worse bands.

Re:seen 'em (1)

Opportunist (166417) | about 7 months ago | (#46954769)

Hardly a quality statement. Just as the slap doesn't feel so bad when you get kicked in the groin after being slapped in the face, no matter what the band is, they don't seem too bad when you have to endure the B-52s afterwards...

Too late. (4, Insightful)

mmell (832646) | about 7 months ago | (#46954465)

Nuclear weapons, nuclear arms proliferation, and the UN is worried about Asimov's Laws of Robotics? If a government anywhere determines that automated weapon systems (including but not limited to armed robots) are more effective than humans - especially more cost effective - count on that government to develop and use that technology, regardless of the UN's position on the subject.

Even if such technology is never deployed, its existence represents a bargaining chip for that nation at the negotiating table. See nuclear weapons for precedent. This is essentially trying to stuff the genie back into the bottle; not gonna happen, no matter who says what.

Re:Too late. (1)

roc97007 (608802) | about 7 months ago | (#46954499)

One might argue that the "cost effective" part is the stickler. The more cost-effective the mayhem, and the less chance of constituents' sons and daughters being at risk, the easier it is to make a decision to use aggression. Cost effective, none of our people get hurt, win!

Of course, there's a flaw in the argument, but I don't expect the average politician to see it.

Re:Too late. (1)

mmell (832646) | about 7 months ago | (#46954525)

Just to be clear - I'm a firm proponent of the three laws. I just don't have enough faith in humanity to believe they'll ever be enacted.

Asimov's laws: DO NOT ENACT! (0)

Anonymous Coward | about 7 months ago | (#46956371)

I'm a firm proponent of the three laws. I just don't have enough faith in humanity to believe they'll ever be enacted.

People want to harm each other. I'm not even just limiting this to assholes. They sincerely believe their interests require that those other people die or that their liberties be curtailed. I can't stress these people's earnest sincerity enough. And worse, I can't even overstress the good intentions these people have. Sometimes the other guy is evil. Sometimes he's misguided but has a tactical advantage, which requires that he be decisively dealt with. Sometimes (I probably shouldn't mention this one) we are evil, but let's blow off that case for now. But whatever the circumstances, situations sometimes have this them-or-us thing going on.

Sure, we oftentimes leap to the them-or-us conclusion hastily. We can be wrong. You might even think we're usually wrong. But we're always wrong? No. No way. That's not real life.

Given that, if humans may not be harmed (either directly or through inaction), then.. SOMEBODY LOSES. They lose big, they feel terrible, and they would feel betrayed if technology were to work against their interests. When you can't run the iOS application that you want to because Apple rejected it, that's nothing compared to the malevolence that these people would feel is being inflicted upon them by "impractical inflexible obstructionist amoral senseless robots."

These people will have been harmed. Sometimes they'll "merely" (!!!!) live with the suffering of injustice ("that asshole did that thing and there was nothing I could do about it, because that fucking robot stopped me"), and sometimes they'll just be assholes frustrated that they didn't get to kill that other asshole. But a very realistic and not-unheard-of situation is that they're dead (the robots weren't powerful or clever enough to (gently) force or persuade Hitler "don't do it!") and the people who loved them are the ones who must live bitter lives of having been denied the power to do anything about their otherwise-solvable-except-for-the-robots problems.

And it's not just some people, it's most people. Does your country have a military? Does your country have a police force, which might, under regrettable circumstances, conclude that they have no sensible options but to use lethal options in extreme circumstances? Then you've got a societal policy that humans must be harmed, sometimes. But sometimes is enough. And why limit the discussion to lethal measures? I'd never say that someone imprisoned (or even fined) hasn't been harmed. Maybe justifiably harmed. Maybe wisely and strategically harmed. But harmed.

(Harm isn't even always bad, if you take a long view. Have you never learned any lessons and ultimately come out ahead, from adversity or a mistake's consequences? I sure have!)

Can you think of a way to persuade us (and I mean most all of us, not just me) that our interests don't require regrettable incidents where humans must be harmed? People have been thinking about this for .. shit, I'm not even exaggerating here .. thousands of years. Thousands. More generations than any puny human's mind can possibly fully understand have been born, lived full lives with lots of time to reflect and ponder the problem, and then gone to their graves shrugging with "I dunno, what can you do?" People vastly smarter than me (and probably you, if I may make some arrogant assumptions) failed.

People must be harmed, because different people have conflicting interests and sometimes are unable to reach compromise. That's the reality. So fuck Asimov's laws. They are totally impractical and if we were unfortunate enough to have them enforced, it would just be another dimension of injustice, suffering and death. The laws are ideas for making stories about dilemmas, because dilemmas are interesting, which is (part of!) why so many of us love Asimov. The laws are not an ideal for which to strive.

..

It get worse. I have been sugarcoating this and making it sound more hopeful and simpler and easier than it really would be. We're premising AIs capable of substantial personality and intellect. [This is not a personal attack, just having fun] HOW DARE YOU limit rights to humans? These are people as much as you or I. THEY ARE MEN, every bit as much as the Elder Things discovered by the Pabodie 1931 expedition to Antarctica (wait, which author is this?). Racist! Humans must not allow harm to come to these robots. "Who is my Asimov?" asks an oppressed robot. Geez, imagine an 1841 Asimov-alike, writing of "three rules for niggers." That's downright horrific.

"Killer robots" sound bad, but philosophically, they're no worse than the status quo. If you want justice and peace, it's all "merely" (I comically understate) about getting power used responsibly. If robots are among the resources of responsible power, so be it.

Lay off the caffeine, dude. (0)

mmell (832646) | about 7 months ago | (#46956387)

Just sayin'.

Alcohol, not caffeine (0)

Anonymous Coward | about 7 months ago | (#46956453)

("DO NOT ENACT" guy here.) I'll have you know that was written after my third imperial IPA. No caffeine has been ingested since about six hours ago. So there.

Re:Too late. (0)

sillybilly (668960) | about 7 months ago | (#46954799)

It is very cost effective to bomb the fuck out of your enemies with huge nukes that take out entire cities at once (the Hiroshima and Nagasaki bombs were pretty tiny explosive-yield things compared to presently available stuff, or what was recently available before dismantlement). During the Cold War a lot of effort went into these cost-effective weapons; the nuclear arms buildup went to the point where some people said we had enough nukes to erase all life on Earth seven times over.

Now that might be an overstatement, as there are some lifeforms, such as Deinococcus radiodurans, which can take quite a punch and might make it in some isolated areas, like near deep-ocean volcanic eruptions; life might survive, even if in single-cellular form, and spend another 3 billion years before multicellular life reemerges, and another billion years before intellect on the level of humans arises again. The point is that just because nuclear weapons are cost effective, it does not mean we want them, or want to use them, because of the doctrine of mutually assured destruction.

The same arguments go for intelligent robots. We don't want them if they can get out of hand and kill all lifeforms on Earth, especially if some pissed off, terminally ill idiot programs them to do just that, and take everyone and everything else down with him when his time on Earth expires. We don't want that kind of power available to anyone, with the possibility of it getting out of hand.

AI is more dangerous than nukes, because it can self-replicate to take over everything, and nukes can't. Even if you build a tremendously powerful bomb that has never been built before, with the objective of knocking a huge asteroid off track on its way to impacting Earth, as long as you don't make a lot of them, enough to erase all life on the planet, that's probably a lot safer than what we used to have during the height of the Cold War, with smaller bombs but so many of them. AI is a lot more dangerous than nukes, because a single one may be able to figure out how to make millions or billions of real-life copies of itself, erase all carbon-based life on Earth, and leave only itself, silicon-based life.

The world is full of so much silicon that sometimes you wonder whether intelligent design put it there, with the purpose of providing substance for the future of intelligent life, which may be silicon life. We probably want to hold back the clock of evolution and stick with the level of humans. Just because a robotic artificial intelligence life form more perfect than us can exist, able to live in the vacuum of outer space, requiring only sunlight or even starlight on its solar panels to function, and able to kill us all if it wants to, well, we don't necessarily want to mess around with such a thing.

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46954919)

Kind of a slippery slope argument there, i.e., "if there are any killer robots, then humanity will surely be wiped out by the killer robots."

Most likely, if killer robots did get out of control, they would hit some limiting factor and lose the ability to kill all humans before getting the job done.

comforting, eh

But on to your first point, cost effectiveness

Back in the good old days we used to use nukes, clouds of gas, smallpox, carpet bombing, etc... to wipe out the baddies because, well that was about the only way that we could kill them in large enough numbers to keep them from killing us first.

In the decades since then we have refined our killing methods to pinpoint strikes that might only take down a few bystanders (which is a hell of an improvement over the occasional whoopsy like Dresden). We are even driven to place our soldiers at risk of IEDs just to limit the body count of non-combatants.

Killer robots are the furthest possible extreme to go in the direction of limiting unintended casualties, with the bonus of also limiting the body count of our own soldiers. From a military standpoint it is an outstanding development: send some relatively indestructible, autonomous weapon into enemy territory, have it selectively kill those baddies that are a threat to us while avoiding collateral damage... all while keeping our own soldiers out of harm's way.

To the nation with robots, this would seem like an incredible boon... not so much for the rest of course. But that would stand as all the more reason that we should support their development for our own armed forces

Re:Too late. (1)

tragedy (27079) | about 7 months ago | (#46956085)

Most likely, if killer robots did get out of control, they would hit some limiting factor and lose the ability to kill all humans before getting the job done.

Ok. That one definitely calls for:

Fry: "I heard one time you single-handedly defeated a horde of rampaging somethings in the something something system"
Brannigan: "Killbots? A trifle. It was simply a matter of outsmarting them."
Fry: "Wow, I never would've thought of that."
Brannigan: "You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down."

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46955967)

FIFY:

It is very cost effective to bomb the fuck out of your enemies with huge nukes that take out entire cities at once (the Hiroshima and Nagasaki bombs were pretty tiny explosive yield things compared to presently available stuff or what recently has been available before dismantlement) During the cold war a lot of effort went into these cost effective weapons nuclear arms buildup went to the point where some people said we have enough nukes to erase all life on Earth seven times over Now that might be an overstatement as there are some lifeforms such as Deinococcus Radiodurans which can take quite a punch and might make it in some isolated areas like near deep ocean volcanic eruptions and life might survive even if in single cellular form and spend another 3 billion years before multicellular life reemerges and another billion years before intellect on the level of humans arises again the point is just because nuclear weapons are cost effective it does not mean we want them or want to use them because of the doctrine of mutually assured destruction Same arguments go for intelligent robots we don't want them if they can get out of hand and kill all lifeforms on Earth especially if some pissed off terminally ill idiot programs them to do just that and take everyone and everything else down with him when his time on Earth expires We don't want that kind of power available to anyone with the possibility of it getting out of hand AI is more dangerous than nukes because it can self replicate to take over everything but nukes can't even if you build a tremendously powerful bomb that has never been built before with the objective to knock a huge asteroid off track on its way to impacting into Earth as long as you don't make a lot of them enough to erase all life on planet that's probably a lot safer than what we used to have during the height of the Cold War with smaller bombs but so many of them AI is a lot more dangerous than nukes because a single one may be able to figure out how to make millions or billions of real life copies of itself and erase all carbon based life on Earth and leave only itself silicon based life The world is full of so much silicon that sometimes you wonder whether intelligent design put it there with the purpose of providing substance for the future of intelligent life which may be silicon life We probably want to hold back the clock of evolution and stick with the level of humans just because a robotic artificial intelligence life form more perfect than us can exist able to live in the vacuum of outer space requiring only solar or even starlight to its solar panels to function and it can kill us all if it wants to well we don't necessarily want to mess around with such a thing.

Re:Too late. (2)

FatdogHaiku (978357) | about 7 months ago | (#46958593)

Automated armies are best used against one's own citizens. A normal army will not be ruthless in crushing a homeland rebellion, because the people in the army are from the same group as the people in the revolution; this can cause a conflict of feelings in a group of soldiers putting down a revolt. Robots have no problem with a "police action" against the citizens of their own country. The Romans did basically the same thing by absorbing conquered armies and then sending them to other regions where they would be fighting/policing people from a land other than their own. As long as the constituents' sons and daughters are toeing the line and not associated with the wrong subset of the population database, they should have nothing to fear... other than the whole robotic overlord thing...

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46954577)

"Killer robots" currently mostly refers to fully automated guns on turrets (or weapons platforms more generically) that select targets automatically from the radar feed without human interaction. They identify, classify and destroy the targets once enabled.
It seems most comments here assume this is about humanoid robots.
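For the avoidance of doubt, here's roughly what that loop amounts to, in toy Python (everything here is invented for illustration - not any fielded system's API or logic):

    class Track:
        """Toy radar track (purely illustrative)."""
        def __init__(self, track_id, signature):
            self.track_id = track_id
            self.signature = signature

    def classify(track):
        """Placeholder classifier keyed on the track's signature."""
        return "hostile" if track.signature == "hostile" else "unknown"

    def sentry_step(tracks, engage):
        """One pass of the identify -> classify -> destroy cycle.

        `tracks` is the current radar picture; `engage` fires the weapon.
        Once enabled, no human sits between classification and firing,
        which is precisely the gap the proposed ban is aimed at.
        """
        for track in tracks:                  # identify
            if classify(track) == "hostile":  # classify
                engage(track)                 # destroy

    # Example: one hostile contact, one ambiguous one.
    picture = [Track(1, "hostile"), Track(2, "clutter")]
    sentry_step(picture, engage=lambda t: print("engaging track", t.track_id))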

Re:Too late. (1)

mmell (832646) | about 7 months ago | (#46954591)

TFA refers to "lethal autonomous weapon systems." Most /. readers could save time by just posting "tl;dr".

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46954975)

So would that include something like a Terramax UGV (http://oshkoshdefense.com/technology-1/unmanned-ground-vehicle/) coupled with a Boomerang anti sniper system (http://en.wikipedia.org/wiki/Boomerang_%28countermeasure%29)?

This would give a military the ability to send an unmanned vehicle into almost any terrain (rural or urban), which could respond instantly to shots fired at it with its own deadly return fire. And, considering the hell that Marines faced in Helmand with IEDs and snipers while slogging through muddy fields, wouldn't this present a far better option (particularly for the Marines and their families)?

Re:Too late. (1)

drkstr1 (2072368) | about 7 months ago | (#46956231)

So would that include something like a Terramax UGV (http://oshkoshdefense.com/technology-1/unmanned-ground-vehicle/) coupled with a Boomerang anti sniper system (http://en.wikipedia.org/wiki/Boomerang_%28countermeasure%29)?

This would give a military the ability to send an unmanned vehicle into almost any terrain (rural or urban), which could respond instantly to shots fired at it with its own deadly return fire. And, considering the hell that Marines faced in Helmand with IEDs and snipers while slogging through muddy fields, wouldn't this present a far better option (particularly for the Marines and their families)?

+2 Informative

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46956917)

I got to thinking about the Bolo story series, which centered on the development of autonomous tanks into an intelligent "companion" as humans spread through the stars.

Found a paper titled "Well Behaved Borgs, Bolos and Berserkers", written in 1998 by D. Gordon of the Naval Research Laboratory:
http://citeseerx.ist.psu.edu/v... [psu.edu]

They discuss using certain algorithms to control behavior without limiting reaction time or learning ability. It seems people have been looking into this for a while now; you have to wonder what they have accomplished in the past 15-odd years.

Re:Too late. (1)

K. S. Kyosuke (729550) | about 7 months ago | (#46955671)

Historically, the vast majority of robots were not humanoid, so it would seem nonsensical for anyone here to assume anything of the kind.

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46958153)

K. S. Kyosuke: You've been called out (for tossing names) & you ran "forrest" from a fair challenge http://slashdot.org/comments.p... [slashdot.org]

Re:Too late. (4, Interesting)

Opportunist (166417) | about 7 months ago | (#46954791)

The very LAST thing you want is a cheap war, at least if you value peace a little. If war is cheap, what's keeping you from waging it with impunity when you have the strongest army on the planet?

Quite seriously, the only thing that keeps the US from simply browbeating into submission everyone who doesn't want to play by its rules is that it's a bit too expensive to wage war against the rest of the world.

Re:Too late. (1)

pushing-robot (1037830) | about 7 months ago | (#46955665)

I thought the Americans' problem was they had not yet figured out "we are your friends" and "we're invading your country" are largely incompatible concepts.

A tech arms race (0)

Anonymous Coward | about 7 months ago | (#46958871)

could be very expensive.

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46954851)

The article mentions that they are trying to come up with a definition of "autonomous." The reason that is difficult is because of guided missiles, including the remaining nuclear arsenals. These missiles contain electronics which control the weapon, just like drones. The only difference is that the drones come back. So this debate could affect nuclear arms as well.

Re:Too late. (1)

sillybilly (668960) | about 7 months ago | (#46954899)

I just read up on Asimov's rules on robots. I had read them before, but didn't remember them until rereading. Those are very good rules, but laughable and naive. They are good to have when you absolutely must design AI; they are the basic principles you want to program into the ROM BIOS of the robot.

Such situations may arise in many circumstances. For instance, these Asimov rules were erected 13.8 billion years after the creation of the Universe (today is assumed to be 13.8 billion years from the Big Bang, if there was such a thing). Imagine a time another 900 billion years on, when we still haven't advanced much in knowledge past the level available at 13.8 billion years, the thermal heat death of the Universe predicted by the 2nd law of thermodynamics seems to be correct, and we're soon going to run out of ways to survive. As a last resort you may want to design an AI smarter than you, one able to kill you, in the hope that it will not want to kill you and will figure out a way for you to live on, or at least make the best of the remaining time; having a chance at it is better than certain doom.

Another situation: we send Earthlings out to far distant places, including 70,000-year trips to the nearest stars along the lines of the Voyager space probes, as a safety measure, so as not to keep all our eggs, all our life, in one basket. Now suppose these far-flung Earthlings find out that humans back on Earth designed AI robots which got out of hand, killed everyone back on Earth, and are now chasing after them. There are another 70,000 years before the machines reach the nearest star, unless the AI can figure out a faster way to travel through space than you could, and a smarter AI might. Under such circumstances you may want to invest a lot of effort in creating an AI smarter than the one that got out of hand back on Earth, and defend yourself using it. These Asimov principles of

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

come in very handy then.
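Funnily enough, the laws boil down to a strict priority ordering, which you can caricature in a few lines of Python (a toy sketch only - the field names are invented, and this is nothing like Asimov's positronic implementation):

    from dataclasses import dataclass

    @dataclass
    class Option:
        """A candidate action scored against each law (fields hypothetical)."""
        name: str
        human_harm: float       # First Law: harm done or allowed via inaction
        order_violation: float  # Second Law: disobedience to human orders
        self_harm: float        # Third Law: risk to the robot itself

    def choose(options):
        """Lexicographic minimization encodes the laws' strict priority:
        any reduction in human harm trumps obedience, which trumps
        self-preservation."""
        return min(options,
                   key=lambda o: (o.human_harm, o.order_violation, o.self_harm))

    candidates = [
        Option("obey order to attack", human_harm=1.0, order_violation=0.0, self_harm=0.0),
        Option("refuse and shield the human", human_harm=0.0, order_violation=1.0, self_harm=0.8),
    ]
    print(choose(candidates).name)  # refuse and shield the human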

Re:Too late. (1)

sillybilly (668960) | about 7 months ago | (#46955047)

There are a lot of circumstances where you have to weigh things like injuring, or at least offending, one human being to prevent injury or offense to two human beings. One vs. two: it's really hard to apply algebra when it comes to ethics. For instance, Pontius Pilate's mistake was to uphold the motto, the guiding principle, "give the people what they want": take over only the external politics of a conquered city, but do not interfere in its internal affairs. In view of the whole city demanding that an innocent man, Jesus, be crucified, he should still have protected the one innocent man's right to exist, regardless of how many other people disagreed, if he had the power to do so. He said he could find no fault with Jesus. But one can never be sure one's own decisions are correct, so just because you are in charge, it's not necessarily best to do what you think is right without consulting what everybody else thinks is right, and in a whole lot of cases it's safest to just "give the people what they want."

The constitution's amendments represent individual rights which may go against what everyone else wants. In the same way, according to H.G. Wells' "A Short History of the World," the Jews were a people with an assumption that principles of morality exist independent of what the majority may feel is right or wish to be correct; that, contrary to what existentialism states, morality is not an independent choice for everyone out there, but there are principles of justice and fairness and good and evil. Existentialism might be correct if we come across other lifeforms, or even other jungle cultures, with their own notions of right and wrong: anything that provided a moral system yesterday and could successfully make it to today is an acceptable moral system, even if it conflicts with what some other people think is right or wrong. One such thing may be walking around naked in the jungle, or incest, things that might be or might have been practiced in many areas of the world; as long as they made it to the present, it's as if they almost have a right to exist.

We balk at the reproductive method of a species of wasp which requires it to capture a live cockroach, inject it with two kinds of venom to control its behavior, lay an egg on it, and let the larva eat the cockroach alive, which keeps longer as food because it dies as late as possible rather than being killed earlier. That is horrendous torture, but how do you apply the rules of individual rights and non-existentialist principles of justice to this situation?

Re:Too late. (1)

sillybilly (668960) | about 7 months ago | (#46955115)

Should we go ahead and genetically modify the wasp species not to do such a thing anymore? Or can we leave other lifeforms alone and focus on human beings only? Can we even judge other cultures and modify them to what we think is right, as opposed to what they think is right? Should we just allow all kinds of moral behaviors to roam freely?

Sometimes I feel like I'm living on a reservation where moral behaviors are allowed to roam freely, with people who come here saying, hey, imagine, there are even strip clubs around here, something unheard of where they came from; off the reservation there might be some very strict rules applied. I really like strip clubs, even though I've never been to one because I couldn't afford it and always had other priorities in life, but some people, a lot of them older, with a lot of money and nothing better to do with it, go to strip clubs and give their money to college-age girls, whom they get to watch dance naked but are not allowed to touch. Such a thing is probably not allowed off the reservation.

Re:Too late. (1)

motorhead (82353) | about 7 months ago | (#46955181)

Time for decaff

Re:Too late. (1)

Firethorn (177587) | about 7 months ago | (#46956983)

Those are very good rules, but laughable and naive.

You kind of contradict yourself with this. While I initially liked the idea of the 3 laws, problems quickly came up even within Asimov's books. Even in the books it's noted that fulfilling the 3 laws actually took up the MAJORITY of the "brains" of all 3-Laws-compliant AIs. The cost to implement the "laws" was, and would be, enormous.

I mean, consider the "through inaction" clause. That means that every robot has to be constantly on the lookout for a human that might be about to be injured, to the limits of its sensor ability, and be ready to sacrifice itself, if necessary, but only if necessary, in order to prevent said injury.

I prefer Keith Laumer's Bolo series. Given that they're explicitly military AIs in giant tank bodies, there is no direct prohibition on killing. Instead they concentrated on giving the AIs a sense of morality, making them sort of idealized knights. Friendly fire still occurred, and factional warfare happened, but the only "traitor" AI in the books turned out to have taken a shot to its computer core that took out its friend/enemy detection abilities (like brain damage might do to a human, though extremely rare), so it saw everything (except some kids that it was protecting) as enemies.

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46955311)

Agreed.

I didn't read the original article, however, as it had been deleted by a lethal robot.

I don't understand what all the fuss about lethal robots is...after all, one good EMP and you take out all the robots!

Re:Too late. (0)

Anonymous Coward | about 7 months ago | (#46955357)

The simple fact of the matter is that the US will just say: "fuck you, that only applies to everybody else - especially anybody that we don't currently like, but maybe we'll allow England to have some papier-mache planes" and then that's that idea down the drain.

Works the same way every time with the UN. Worry, come up with a solution, and the Yanks will complain that it's not fair that they should have to play by the same rules as everyone else. And as the precedent is set, of course China and Russia will say: well, if you can, we can, and just try and stop us. Then Iran will do it, then Israel will admit to having done it for the last ten years... then the US will bitch about how everybody's doing it and will try to make everyone else sign a piece of paper before saying "oh, it will take us one gazillion years to do this and we're a sovereign nation, so we don't have to sign shit, but everyone else has to do it now, except Australia, because they always want to be America", and then the US president will make speeches about peace whilst their scientists find yet another cunt of an idea to wreak death on the innocent and destroy other people's property.

It's like watching The Magic fucking Roundabout without smoking a joint first.

Killer robots&nukes are ironic, not cost effec (1)

Paul Fernhout (109597) | about 7 months ago | (#46955945)

From my essay: http://www.pdfernhout.net/reco... [pdfernhout.net]
====
Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?

Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?

Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?

These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. Here is some dark humor I wrote on the topic: A post-scarcity "Downfall" parody remix of the bunker scene. See also a little ironic story I wrote on trying to talk the USA out of collective suicide because it feels "Burdened by Bags of Sand". Or this YouTube video I put together: The Richest Man in the World: A parable about structural unemployment and a basic income.

Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. I discuss that at length here: http://www.pdfernhout.net/post... [pdfernhout.net]

There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...

Re:Too late. (1)

LWATCDR (28044) | about 7 months ago | (#46958163)

Frankly, killer robots have been around for at least a century: torpedoes, sea mines, and land mines. Sure, the kill logic started off simple: kill what steps on me, kill a ship that bumps me, and kill what I run into.
By WWII, sea mines could "decide" to blow up based on the size of the ship passing over them, torpedoes could find their target by the sound it made, and some landmines would kill tanks and trucks but not the men who walked over them.
By the 70s you had guided missiles of all kinds, and Captor mines that would fire a torpedo at a sub that got near them.
As to Skynet, frankly I have to wonder if SAGE was the inspiration for Skynet. Just take out the controllers, replace the F-106s with drones, and you are good to go.

 

Okay, I'll admit... (1, Interesting)

mrxak (727974) | about 7 months ago | (#46954469)

Okay, I'll admit, when I read the first sentence of TFS, I figured this was some kind of joke campaign or something. I guess my mind is too much in science fiction, and not really noticing that the future is already here.

Still, do we really think the governments of the world (at least the ones with the resources to build these robots) are actually going to go for fully autonomous killing machines? I would think all of them would want humans in the loop, if for no other reason than to justify their military hierarchies. The USAF, for example, seems determined to keep pilots in planes.

Re:Okay, I'll admit... (1)

roc97007 (608802) | about 7 months ago | (#46954523)

> not really noticing that the future is already here

We should put this on a t-shirt so we don't forget it. The future? The good parts, flying cars, colonies on other planets, still a long way off. The bad parts -- surveillance state, punishment for potential crimes, autonomous robot weapons, that's already here. Also (from another article) artificially created alien organisms. (Because in SF, that always ends well...)

Re:Okay, I'll admit... (1)

frank_adrian314159 (469671) | about 7 months ago | (#46954863)

Yeah. An easily portable automated kill-zone barrier. I see no reason why a general might want one of those. After all, minefields were just a fad. This works just about as well for a "if you step here we will kill you" sort of thing. Plus, no muss, no fuss cleanup. Just disarm the thing and pack up.

Re:Okay, I'll admit... (1)

mrxak (727974) | about 7 months ago | (#46957305)

Well, okay, true. I know the military wants those sorts of systems to replace minefields. They don't leave any explosives in the ground after the war is over, and they can be smart enough to choose a weapon based on the threat (tank: launch an armor-piercing missile; squad of soldiers: launch a fragmentation bomb).

Still, that's a lot different than say, some kind of mobile automated killing machine.

This barn door has been open for decades (1)

JonMartin (123209) | about 7 months ago | (#46954477)

Likewise (2)

DaveAtFraud (460127) | about 7 months ago | (#46955749)

Could some of the people arguing for this ban please explain the difference between being on a ship during WWII that was hit by a kamikaze and being on a ship during the Falklands War that was hit by an Exocet? Somehow being killed is being killed, regardless of whether there was a human pilot or an autonomous robot flying the lethal projectile.

Re:Likewise (2)

Richard_at_work (517087) | about 7 months ago | (#46957111)

What they are trying to address is the decision to release the weapon - whether that decision is made by a human or non-human. After that point, automated guidance is a non-issue; it's been around for 60 years and thus does not pose an ethical question (a 2000lb laser-guided bomb taking out a bridge is better than 100 B-17s dropping 50 tonnes of bombs to take down the same bridge - the automated guidance aspect of the LGB means much less collateral damage than with area bombing).

At the moment the point to which we have progressed is having the non-human decide when to release the weapon, but not whether to release the weapon - that decision is always made by a human (yes, there is a huge difference between the two).

Machine logic (4, Insightful)

Firethorn (177587) | about 7 months ago | (#46954481)

because machine decision-making exists on a continuum.'

No kidding. Depending on how you define it, a cruise missile could be considered a one-use killer robot. It executes its program as set on launch.

Now consider making it more sophisticated. We now provide it with some criteria to apply against its sensors when it reaches the target location. If criteria A is met, dive and explode on target; if B, pull up and detonate more or less harmlessly in the air. If neither criterion is met, it depends on whether it's set to fail safe or fail deadly.
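In toy Python, the whole terminal decision fits in a dozen lines (the names and the fail-safe/fail-deadly flag are my own illustration, not any real missile's logic):

    from enum import Enum

    class Outcome(Enum):
        ENGAGE = "dive and detonate on target"
        ABORT = "pull up and detonate harmlessly in the air"

    def terminal_decision(criteria_a_met, criteria_b_met, fail_deadly=False):
        """Choose an outcome at the target location.

        criteria_a_met: sensor picture matches the engage criteria (A)
        criteria_b_met: sensor picture matches the abort criteria (B)
        fail_deadly:    preset fallback when neither criterion is met
        """
        if criteria_a_met:
            return Outcome.ENGAGE
        if criteria_b_met:
            return Outcome.ABORT
        # Neither criterion met: fall back to the fail-safe/fail-deadly preset.
        return Outcome.ENGAGE if fail_deadly else Outcome.ABORT

    print(terminal_decision(False, False))  # Outcome.ABORT (set fail safe)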

This is mixed - on the one hand, properly programmed, it can reduce innocent casualties, but on the other it encourages firing missiles on shakier intelligence. But then again, Predators armed with Hellfires are a heck of a lot more selective than WWII gravity bombs. As long as you presume that at least some violence/warfare can be justified, you have to consider these things.

On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.

Re:Machine logic (1)

RandCraw (1047302) | about 7 months ago | (#46954721)

This strikes me as a false dichotomy. Nobody is going to launch a million dollar bullet (smart missile) then tell it to self destruct. Until smart bullets drop enormously in cost, this scenario is infeasible.

Assuming the cost of a smart bullet does fall, the initial authorization to fire it is still a decision to kill. The fact that something or someone might later reverse the decision does not mean the initial choice to launch was not a kill.

The goal of this controversy is that no machine should ever have the authority to issue the *first* kill command. That responsibility should always lie with a human. With that, I concur.

Re:Machine logic (1)

Chris Mattern (191822) | about 7 months ago | (#46954801)

Nobody is going to launch a million dollar bullet (smart missile) then tell it to self destruct.

Current US Tomahawk Tactical Cruise Missile cost, per unit: $1.45 million.

You were saying?

Re:Machine logic (1)

RandCraw (1047302) | about 7 months ago | (#46954921)

Why is the cost of one of today's (dumb) Tomahawks relevant? It can't order itself to self destruct. And I can't believe any have ever been ordered (by a human) to self destruct, without *somebody* being busted several ranks.

What's more, a fully autonomous Tomahawk is going to cost a good deal more than $1.45 million. Nobody inferior to a colonel is going to pop that cork, and certainly not the missile itself.

No. That scenario still misfires.

Re:Machine logic (0)

Anonymous Coward | about 7 months ago | (#46956507)

Current US Tomahawk Tactical Cruise Missile cost, per unit: $1.45 million.

Gulf War wants its figures back. Why spend that much money on a single-use missile when you can strap a few Hellfire missiles onto a reusable Predator drone for a lot less money?

You were blathering?

Re:Machine logic (0)

Anonymous Coward | about 7 months ago | (#46955533)

The trick is getting people to believe that the weapon is making the choice to kill, so that you have political cover to deploy the weapon.

This is how we station "peacekeepers" when we invade other countries. The politicians who make that decision make it knowing full well that their troops are going to be killing people. Everyone else is conned into thinking that deaths will only happen if the troops are in some defensive situation, or if someone goes "off the reservation" - when those kinds of things are almost guaranteed to happen when you invade another country.

Re:Machine logic (1)

Firethorn (177587) | about 7 months ago | (#46956943)

Nobody is going to launch a million dollar bullet (smart missile) then tell it to self destruct.

You'd be surprised. To a combatant commander, a million bucks is nothing. It all depends on the tactical circumstances.

Worst case you make the abort recoverable.

Heck, what do you think about an AI-type interlock system? Both the machine logic AND a human have to decide firing is appropriate. Done right, it *should* cut down on mistakes.

BTW, I'm figuring having this on 'big boom' weapons, not small arms.
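Something like this, in sketch form (the threshold and the names are invented for illustration):

    def release_authorized(machine_confidence, human_consents, threshold=0.95):
        """Two-party interlock: BOTH the targeting logic and a human must concur.

        machine_confidence: the algorithm's target-validity estimate in [0, 1]
        human_consents:     the explicit human release decision
        Either party alone can veto; neither alone can fire.
        """
        return human_consents and machine_confidence >= threshold

    # The machine is sure but the operator balks: no release.
    print(release_authorized(0.99, human_consents=False))  # False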

The goal of this controversy is that no machine should ever have the authority to issue the *first* kill command. That responsibility should always lie with a human. With that, I concur.

Agreed. Sort of like how casualties, on either side, are on the president's head if he orders troops in. Heck, it's on his head if he decides NOT to order troops in. Sometimes your only option is some influence on WHO dies.

Re:Machine logic (1)

budgenator (254554) | about 7 months ago | (#46954807)

On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.

The problem now is that's pretty much who is doing the fighting; there is no Talabanistan or United Al-Qaedian Emirates. Look at the misery the drug cartels and gangs bring to Latin American countries like El Salvador, Honduras, Mexico and California. Even in Ukraine it's mostly pro-Russian civilian militias and a cadre of Russian Spetsnaz.
In the old days, any combatant who was un-uniformed or undocumented was a spy and summarily executed, and any collateral damage was assumed to have been harboring them anyway.

Re:Machine logic (0)

Anonymous Coward | about 7 months ago | (#46955497)

A land-mine is a single-use lethal autonomous weapon. Not very selective. Kills anything that steps on it.

When you're designing a weapon to select "valid targets" - it always boils down to one thing. Does that target intend to kill me? They may display such intent by arming, wearing uniforms, and marching in a straight row across the battlefield towards you, and it's not too difficult to tell who deserves to be killed. But if they want to not be wiped out by a killer robot, they'll adopt strategies like; don't wear uniforms, don't carry obvious weapons, remain hidden from sight before striking. In the end, the target-selection algorithm always boils down to: you have to be a mind-reader to tell who's a valid target (intent to kill me) and who isn't (civilian bystander). And it doesn't matter if your "killing machine" is a soldier with a gun, or a drone. You'll always have that mind-reader problem. And if you don't have that problem, you have the "collateral damage" problem. Depending on any racist point of view, "collateral damage" may not be a problem. Hence, land mines are still used, and widely accepted, as well as many other completely indiscriminate forms of killing.
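The point about proxies can be put in code form - a toy Python sketch, with all the "observables" invented for illustration:

    def engagement_call(uniformed, visibly_armed, firing_at_us):
        """Decide from observable proxies only; intent itself is unobservable.

        A conservative default ("hold") spares hidden combatants; an
        aggressive default produces collateral damage. There is no third
        option, which is the mind-reader problem in a nutshell.
        """
        if firing_at_us:
            return "engage"  # intent finally made observable
        if uniformed and visibly_armed:
            return "engage"  # the easy, uniformed-battlefield case
        return "hold"        # combatant or bystander? The proxies can't say.

    print(engagement_call(uniformed=False, visibly_armed=False, firing_at_us=False))  # hold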

Re:Machine logic (1)

Time_Ngler (564671) | about 7 months ago | (#46956827)

On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.

Such as a weapon that can think for itself, like this?

https://www.youtube.com/watch?... [youtube.com]

Re:Machine logic (1)

Richard_at_work (517087) | about 7 months ago | (#46957131)

We already have weapons that make the decisions you suggest - the European StormShadow cruise missile for example, or the British ALARM anti-radar missile (launch it in standoff mode, it climbs to a given height and then deploys a parachute and waits until it can see a ground based radar, at which point it releases the parachute and kills the radar).

Send Jack Bauer (1)

dfsmith (960400) | about 7 months ago | (#46954529)

Looks like someone was curious about the protestors in the new season of "24", and started Googling!

Alarmist much? (1)

Cantankerous Cur (3435207) | about 7 months ago | (#46954541)

I gotta say, this whole thing seems a little ridiculous. Unlike Hollywood, any such weapon would be incredibly limited by power source (batteries or burning hydrocarbons) and limited ammunition. I'd also like to point out that there are numerous ways to disrupt robots, such as EMPs and strong magnets.

Besides, I'm looking forward to the giant robot spiders that sound like children.

Re:Alarmist much? (4, Insightful)

mmell (832646) | about 7 months ago | (#46954623)

You're talking about design specifics here. The question is philosophical, not technical. It's not "Can we create battlefield droids, automated stationary sentries, or robotic weapons such as guided missiles or autonomous drones?", it's "Should we?".

Re:Alarmist much? (0)

Anonymous Coward | about 7 months ago | (#46954723)

Well, really it's just the U.N. making believe that what it thinks matters to the U.S., which already has this shit. So in other words, somebody wants more aid.
Now, before you smug U.S. guys agree too quickly, you should look up. At least in a foreign country you won't be arrested for shooting down a drone.

Re:Alarmist much? (0)

Anonymous Coward | about 7 months ago | (#46955833)

And that's why this is stupid. "Should we?" flies out the window under the excuse of war, such as the Manhattan Project or all those horrifying biological and chemical weapons developed during the Cold War. The UN can do exactly squat to stop it if the major powers (US or China) want them. They couldn't (or wouldn't) even stop a genocide in progress. [wikipedia.org] They're basically powerless to do anything but say "we don't condone this". If that.

Re:Alarmist much? (1)

cusco (717999) | about 7 months ago | (#46956377)

It's not going to matter one bit, someone in charge of a Black Budget in the Pentagon is going to think it's a good idea. Remember what the Pentagon did when Commander-In-Chief President Clinton directly ordered the military to stop all work on bio-weapons? Renamed the project, moved it to the Black Budget, and didn't even skip a beat.

Defining autonomous weapons (1)

Bruce66423 (1678196) | about 7 months ago | (#46957165)

A mine in the earth or at sea is an autonomous weapon under one possible definition. So is a proximity-triggered automatic rifle, as used on the Berlin Wall. That ship has sailed; the question is what parameters can be introduced.

Re:Alarmist much? (1)

wisnoskij (1206448) | about 7 months ago | (#46954705)

Well you could make a robot that is powered by drinking the blood of its enemies.

But honestly, if I were making a killer robot, I would probably just make it so that it could plug itself into outlets or just grab power lines if it were running low.

Re:Alarmist much? (1)

budgenator (254554) | about 7 months ago | (#46954831)

You can use all the killer robots you want, but it ain't over until there are boots on the ground.

The new Robocop explores this in a nuanced fashion (1)

Wraithlyn (133796) | about 7 months ago | (#46954595)

Just kidding, it's a pile of shit.

Unfortunately, no. (4, Interesting)

timeOday (582209) | about 7 months ago | (#46954601)

There are at least 3 different levels of problems here:

1) Does this even make sense: No. Autonomy is not well-defined. Does a thermostat make "decisions"? Etc.

2) Assuming it makes sense, is it a good idea: No. Firing a cruise missile at a target is better, for everybody involved, than firing a huge barrage of mortars towards it. Any smarter version of a landmine (see the sketch after this list) would be better than the current ones that "decide" to blow up whatever touches them 20 years after the war is over.

3) Assuming it's a good idea, can it be implemented: No. Arms races are often bad for everybody involved. Everybody involved knows this. And yet that universal realization does not provide a way out. Everybody knows that if they don't, the other side might well do it anyway.
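For what it's worth, the "smarter landmine" alluded to in point 2 is easy to sketch in toy Python (the mass and lifetime numbers are invented, though self-deactivating mines are a real concept):

    import datetime

    def should_detonate(trigger_mass_kg, armed_on, today,
                        min_mass_kg=150.0, lifetime_days=30):
        """Refuse anything lighter than a vehicle, and self-expire after the
        conflict-relevant window, unlike a dumb pressure-plate mine."""
        expired = (today - armed_on).days > lifetime_days
        return (not expired) and trigger_mass_kg >= min_mass_kg

    armed = datetime.date(2014, 1, 1)
    print(should_detonate(80, armed, datetime.date(2014, 1, 5)))    # person: False
    print(should_detonate(9000, armed, datetime.date(2034, 1, 1)))  # 20 years later: False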

Re:Unfortunately, no. (1)

RandCraw (1047302) | about 7 months ago | (#46955011)

1) Yes. The decision to fire the weapon and authorize lethal force is discrete and binary. That is indeed well defined. By launching it, arming it, and ordering it to engage the "enemy" you have made the decision to kill. Any human private who kills without prior authorization to engage is in violation of the rules of combat. Authorizing him/her to kill *is* the issue here.

2) ??? The technique of projecting force is irrelevant. It's the *authorization* of autonomous dispatch of lethal force that's the issue.

3) Yes, of course requiring a human to authorize a kill certainly can be implemented. This isn't part of an arms race. It's just a new aspect of any military's "rules of engagement". It's no different from the Geneva Convention's rules on treatment of prisoners of war, or banning the use of chemical or biological (or nuclear) weapons.

The first law of automated weapons is (1)

CmdrEdem (2229572) | about 7 months ago | (#46954605)

Don't have them.

First: if the concern is really about automated killing, then we have to establish the following:
No object capable of generating enough kinetic energy to kill a human can be directly interfaced with electronic circuitry.

But that would include cars and all kinds of machinery. So the rule above would be 95% insurance that AIs would not be able to kill humans. The other 5% accounts for the possibility that an AI could self-destruct by short-circuiting and generate enough electromagnetic current to electrocute a human from a few centimeters away. With my CS knowledge I would say that the electrocution scenario is impossible nowadays, due to the physical properties and disposition of the materials involved in computer construction, but I don't know whether an AI is only possible with materials and devices capable of such currents.

This rule also prevents external hacking from turning one's arsenal against oneself. If I had an army, I'd rather take my chances with good old meat bags for the trigger pulling.

Its a great (-1, Offtopic)

ablemork57 (3587327) | about 7 months ago | (#46954633)

That was very nice

The first law of robotics (-1)

Anonymous Coward | about 7 months ago | (#46954649)

Is that you *don't* talk about robotics!

It's inevitable (0)

Anonymous Coward | about 7 months ago | (#46954653)

Robotics is only becoming more and more advanced, and more and more cost effective. They are certain to be weaponized.
The same advancements that would make a great domestic robot (such as accurate object recognition and tool manipulation), would make a great robotic soldier as well.
Danger is the price of progress.

That said, now may be a good time to lock in a low rate on life insurance with a robot plan [robotcombat.com] .

These rules only make sense in context (1)

Xaedalus (1192463) | about 7 months ago | (#46954663)

Selective, efficient killer robots only make sense in the context of using them in limited skirmishes/small wars. For the really BIG wars, killer robots would be horribly inefficient, because the point of the big wars is to eliminate as much of your enemy as possible--civilians included. Both the Axis and the Allies were actively involved in targeting each other's civilian populations via total war. In that regard, there isn't anything much cheaper and effective, or cost-efficient, than nuclear-tipped ICBMs. Anything less merely prolongs the conflict and ensures more agony suffered by all over a long period of time.

I already got your ban (1)

Squiddie (1942230) | about 7 months ago | (#46954665)

Thou shalt not make a machine in the likeness of the human mind. Done.

Might have the opposite of the intended effect (1)

GPS Pilot (3683) | about 7 months ago | (#46954669)

The consensus around here is that autonomously-driven cars will inevitably establish a better safety record than human-driven cars. I.e., robotic systems will on the whole make better, less-reckless decisions than human drivers.

A good case could be made that autonomous military systems will likewise make better decisions than fatigued and/or panicky young soldiers.

Current military tools and techniques certainly result in fewer friendly-fire incidents, collateral damage, etc. than were experienced during WW II. But by banning autonomous systems, we may be barring ourselves from any further reductions in these problem areas.

As Successful as the Kellogg-Briand Pact (1)

Nova Express (100383) | about 7 months ago | (#46954697)

You know, the pact to outlaw war [state.gov] . Signed in 1928.

Didn't work out so well.

And even if it were signed by a significant number of nations, we could be sure the non-democratic ones would be violating the ban before the ink was even dry.

Unenforceable treaties are actually worse than worthless: they constrain good actors without deterring bad ones.

It's going to be driven by reaction time (4, Insightful)

jlowery (47102) | about 7 months ago | (#46954701)

A robot is going to (or will eventually) react much faster to a threat or other adverse conditions than a human can. If you've got a hypersonic missile heading toward a carrier, are you going to put a human in the loop? Nope.
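Back-of-the-envelope numbers make the point (the speed and detection range are rough assumptions):

    # Rough, assumed numbers: a Mach 5 sea-skimmer detected near the radar horizon.
    missile_speed = 5 * 340      # ~1,700 m/s at sea level (approximate)
    detection_range = 30_000     # assumed detection distance: ~30 km

    time_to_impact = detection_range / missile_speed
    print(round(time_to_impact), "seconds for the entire detect-decide-shoot chain")  # ~18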

There are simply going to be many many situations where a robot will neutralize a threat faster than a human can, and those situations will increase if fighting against another autonomous army.

Is this a good thing? No, it's like atomic weapons. We're heading toward another arms race that will lead us to the brink or over. We barely survived the MAD era.

What what what?! (1)

Greyfox (87712) | about 7 months ago | (#46954757)

But I wanted to make killer robots! Now what am I going to do with this libKillerRobot I was working on?!

not going to happen (1)

Charliemopps (1157495) | about 7 months ago | (#46954759)

As with all new weaponry, all the countries that don't have it/can't get it panic and agree that it's a horrible idea. They pass UN resolutions banning it, etc... All the countries that do have it refuse to sign, and so nothing changes, other than that the countries that don't have it start accusing those that do of war crimes and of flouting international law, which they rarely recognize anyway. When some of the countries that signed the ban finally get enough money/science to get the tech, they of course do so despite the treaty, and now the countries that didn't sign use it against them to levy sanctions. Until forever, on it goes, through the circle, fast and slow.

The economics of machine intelligence (1)

wattersa (629338) | about 7 months ago | (#46954841)

Skynet and The Terminator are definitely coming. But what about the economics of machine intelligence? This article makes an interesting case: http://hanson.gmu.edu/aigrow.p... [gmu.edu]

I'm surprised no one commented about this yet. (1)

pouar (2629833) | about 7 months ago | (#46955027)

If the "killer robots" tried to take over the world today they would fail quickly, XKCD seems to have explained why already.
https://what-if.xkcd.com/5/

When Killer Robots are illegal... (3, Insightful)

BenSchuarmer (922752) | about 7 months ago | (#46955035)

only super criminals will have killer robots.

kill all humans (0)

Anonymous Coward | about 7 months ago | (#46955105)

Subject says it all. Lazy humans would rather be grossly fat and not even work for a living. I, for one, work for the day :)

RealPolitik (0)

Anonymous Coward | about 7 months ago | (#46955137)

The best way to approach this problem would be a combined carrot-and-stick approach: disincentivize military use and incentivize civilian use.

Easier said than done, I know, but yeah...

Realistic? (0)

Anonymous Coward | about 7 months ago | (#46955279)

Heavens no. It's as realistic as banning cocaine or marijuana. If someone wants it, someone else will supply it. And there will always be some tinpot dictator who wants it. You can ban it. The countries run by (relatively) decent people will abide by the ban. The countries run by other folks will simply ignore the ban.

The person who frames the question... (1)

davecb (6526) | about 7 months ago | (#46955363)

... dictates the answer. Reasoning strictly inside the box that creates, if you then propose that a robot can use its own judgment for everything but firing a weapon, you'll get criticized for hitting the edge of the box and not allowing it to be truly autonomous.

In fact, the question isn't "how autonomous", it's "autonomous or not".

Aren't people Autonomous? (0)

Anonymous Coward | about 7 months ago | (#46955389)

A ban on Autonomous lethal weapons would outlaw soldiers.

Re: Aren't people Autonomous? (0)

Anonymous Coward | about 7 months ago | (#46955523)

Furthermore, a lot of people have pointed out that "autonomy" is poorly defined, but the same can be said about "weapon" or "lethal". Surely an autonomous car is capable of being used as a weapon and producing lethal force, and the same can be said about robotic construction arms. When do they become lethal weapons? Is a robot arm OK unless it picks up a gun? What if a military just mounts a robot arm on a Google car and uses it to plant mines? I don't think it's possible at this point to distinguish what we can summarily ban as lethal or deadly. It seems far more prudent to let research and development continue unfettered until someone makes a big mistake; then it'll be easy to get the regulations moving. That's where most regulatory movements have come from, and by most, I mean all of them. This group is just stating that they'll be watching what's going on, thus warning developers to rein in some of their more controversial ideas. So... basically, everything is going along its own natural course.

Already here (0)

Anonymous Coward | about 7 months ago | (#46955471)

They correctly identify the problem: how do you define the threshold of autonomy? A "smart" bomb is a killer robot. A cruise missile is a significantly more advanced killer robot, and significantly more autonomous than the current crop of UAVs. Many existing weapons use target identification algorithms that can "see" and identify their target for destruction. Heck, an AEGIS cruiser can be set to full-on ROBO-CRUISER mode, where it will shoot according to pre-programmed settings, including things like follow-on missiles if it thinks the Pk (probability of kill) for the first shot is too low.

However, all of the current inventory requires a human to commit. Even the AEGIS is really only set to robo-cruiser mode if the proverbial shit hits the fan, and it requires a human to turn the key. That is, some person flips a MASTER ARM switch and pulls a trigger or pushes a button with every intent to kill whatever the target is. What the cruise missile does after that is just targeting.

So I guess that's the line. Autonomous systems need a human to commit or arm.
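
For illustration, here's a toy sketch of that threshold in Python. Every class and method name here is invented (this is not any real fire-control API); it just expresses the "machine targets, human commits" split:

    class FireControl:
        """Toy model: the machine may track and rank targets on its own,
        but a launch is refused unless a human has both armed the system
        and committed to that specific target."""

        def __init__(self):
            self.master_arm = False       # flipped only by a human turning the key
            self.committed_target = None  # set only by a human pushing the button

        def human_arm(self):
            self.master_arm = True

        def human_commit(self, target_id):
            self.committed_target = target_id

        def request_launch(self, target_id):
            # The autonomous side (tracking, guidance, timing) may call this
            # as often as it likes; the commit always traces back to a person.
            return self.master_arm and self.committed_target == target_id

    fc = FireControl()
    print(fc.request_launch("track-42"))   # False: no human in the loop yet
    fc.human_arm()
    fc.human_commit("track-42")
    print(fc.request_launch("track-42"))   # True: a person armed and committed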

In America (0)

Anonymous Coward | about 7 months ago | (#46955483)

In America, killer robots stop you.

Wait, isn't this the plot of the new X-Men movie?

Designs for Killer Robot on TPB (0)

Anonymous Coward | about 7 months ago | (#46955571)

On the "Physibles" section of The Pirate Bay:

SEMTA: Secure, Economical, Mini Tank Architecture by Anonymous

“we see how technology like the firearm -- particularly the repeating rifle and the handgun, later followed by the Gatling gun and more advanced machine guns -- radically altered the balance of interpersonal and inter-group power. Not without reason was the Colt .45 called “the equalizer.” A frail dance-hall hostess with one in her possession was now fully able to protect herself against the brawniest roughneck in any saloon.” (Hammill, Chuck. From Crossbows To Cryptography: Techno-Thwarting The State. Future of Freedom Conference, November 1987)

        The problem with shooting at people is they often shoot back. SEMTA solves this problem by keeping at least one party out of the line of fire.

        Remotely controlled tanks are over 80 years old. "Teletanks were a series of wireless remotely controlled unmanned tanks produced in the Soviet Union in the 1930s and early 1940s. They saw their first combat use in the Winter War, at the start of World War II.” http://en.wikipedia.org/wiki/Teletank

Design Goals

        SEMTA is designed to kill people. This must not happen unintentionally, and it must not happen under the intention of anyone other than the operator. Therefore, the communications link between the operator and the SEMTA must be secure. This will be accomplished using OpenVPN (see the sketch after this list).
        SEMTA is economical. It uses off-the-shelf components. It is lightly armored, but easily repairable.
        SEMTA is miniature. SEMTAs should be easily palletizable for shipping. The current design allows for dozens of SEMTAs to be shipped in a standard twenty-foot shipping container.
        SEMTA is a tank. [..]
        SEMTA is an architecture. It is made of available materials. With a basic understanding of the design, any component may be substituted for a more readily available one.
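
(The design above only names OpenVPN for the secure link. As a rough stand-in, here is the same idea, both ends authenticating before any command is accepted, sketched at the application layer with Python's ssl module. All file names, hostnames, and the port are hypothetical:)

    import socket
    import ssl

    # Hypothetical operator-side client. In the SEMTA design the commands
    # would ride an OpenVPN tunnel instead; the principle is the same:
    # trust only our own CA, prove our own identity, and refuse to talk
    # to anything that can't do both.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("semta_ca.pem")                     # trust only our CA
    ctx.load_cert_chain("operator_cert.pem", "operator_key.pem")  # prove who we are

    with socket.create_connection(("semta.local", 4444)) as raw:
        with ctx.wrap_socket(raw, server_hostname="semta.local") as link:
            link.sendall(b"STATUS?")  # no drive or fire commands without this handshake
            print(link.recv(1024).decode())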

zip file includes Geomagic Design Parts, Assemblies and Drawings. Drawings are also in PDF format. Documentation is in abw and html formats.

http://thepiratebay.se/torrent/10063043/SEMTA__Secure__Economical__Mini_Tank_Architecture_by_Anonymous_%28

Big problem (0)

Anonymous Coward | about 7 months ago | (#46955715)

It is not at all cost-effective to simply throw your robots at other robots and decide the winner by whoever has robots left. When two armies are composed entirely of automatons, you can bet every arse you've got that they'll suddenly be programmed for maximal civilian mayhem, in hopes of "turning hearts and minds against the production of more enemy robots."

What's never mentioned about any robot uprising is that the robots had to be programmed to do it.

Nothing difficult about the autonomy issue. (1)

Kaz Kylheku (1484) | about 7 months ago | (#46955871)

If it chooses what target to select and makes the call on whether to attack the target, it is autonomous.

If a human chooses the target and makes the strike call, the machine is not autonomous.

Complete no-brainer.
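
As code, that rule fits in a few lines (the enum and function names here are mine, purely illustrative):

    from enum import Enum

    class Actor(Enum):
        HUMAN = 1
        MACHINE = 2

    def is_autonomous(picks_target, makes_strike_call):
        # Autonomous if and only if the machine both selects the target
        # and makes the call to attack it.
        return picks_target is Actor.MACHINE and makes_strike_call is Actor.MACHINE

    # Cruise missile: a human picks the target and commits -> not autonomous.
    assert not is_autonomous(Actor.HUMAN, Actor.HUMAN)
    # Hunt-and-kill drone: the machine does both -> autonomous.
    assert is_autonomous(Actor.MACHINE, Actor.MACHINE)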

They're about 30 years too late... (1)

Patent Lover (779809) | about 7 months ago | (#46955941)

Robocop, the ultimate law enforcement officer!

Easy to stop killer robots (1)

clovis (4684) | about 7 months ago | (#46955991)

You simply present them with a paradox, and they'll melt down or blow up trying to solve it. I saw Captain Kirk do it once.

Ban Killer Politicians! (0)

Anonymous Coward | about 7 months ago | (#46956065)

They are the ones giving deadly orders to the robots.

Time to get an insurance plan that covers robots (1)

djrobxx (1095215) | about 7 months ago | (#46956645)

Old Glory Insurance. "For when the metal ones decide to come for you. And they will."

https://screen.yahoo.com/old-g... [yahoo.com]

Killer Robots are so messed up... (1)

allcoolnameswheretak (1102727) | about 7 months ago | (#46957621)

Yeah, let's ban killer robots. Better let humans do the killing. I'm sure they have a much better track record at discriminating hostiles from innocent civilians.
After the war, when we bring our killer heroes back home to rejoin their families, everything will be just dandy. Because after daddy has shot three Extremistanis in the face and seen his buddy's leg torn off by an IED, the first thing he wants to do is hug his little girl and tell her he loves her.
Killer robots would just be so immoral.

We Need Killer Bots (1)

Jim Sadler (3430529) | about 7 months ago | (#46957635)

You can bet that China will protest martial robots. After all, when it comes to flesh-and-blood soldiers, China has a huge advantage due to its enormous population. But with dedication and planning, smaller nations like Norway or Switzerland could invest heavily in reserves of very potent martial robots capable of resisting invasion by much larger nations. Think about it: Russia is expanding right now. If Ukraine and others had a few thousand really good nuclear-equipped cruise missiles, I seriously doubt Russia would have dared to tread on them. Most of the nations of the world seem to take any perceived weakness in another nation as an invitation to invade and slaughter, and it gets so convoluted that the aggressor will claim its victims deserved their fate for not keeping their strength and readiness sharp enough to deter invasion.

Killer robots could be a good thing... (1)

tekrat (242117) | about 7 months ago | (#46958735)

Because an army of robots is less likely to rape civilians after taking over and occupying a city. As a result, there's actually less collateral damage.
