Robot Soldiers Are Already Being Deployed 258
destinyland writes "As a Rutgers philosopher discusses robot war scenarios, one science
magazine counts the ways robots are already being used in warfare,
including YouTube videos of six military robots in action.
There are up to 12,000 'robotic units' on the ground in Iraq, some dismantling landmines and roadside bombs, but
'a new generation of bots are designed to be fighting machines.' One bot can operate an M-16 rifle,
a machine gun, and a rocket launcher — and 250 people have already been killed by unmanned drones in Pakistan.
He also tells the story of a berserk robot explosives gun that killed nine people in South Africa due to a 'software glitch.'"
Still got glitches (Score:5, Funny)
Re: (Score:3, Insightful)
Re:Still got glitches (Score:5, Insightful)
Re: (Score:2, Funny)
Damn skin jobs!
Re:Still got glitches (Score:5, Funny)
Apparently our fighting machines are still just in beta.
Yeah, a few bugs here and there, but they're ready for production. I mean, it's not like they could kill anyone.
Comment removed (Score:5, Interesting)
Re: (Score:3, Funny)
Re: (Score:2, Funny)
Re: (Score:3, Informative)
Re:Still got glitches (Score:5, Funny)
The military would love to have Google as their contractor. Every project would remain in indefinite beta, but the public would inexplicably still trust them.
Okay, G-bots, go out there and Do No Evil.
Waldos (Score:4, Insightful)
None of the devices currently in use are robots. They're just military waldos.
Re: (Score:2)
Not quite; they're more like automatics, not very different from the automatic doors you have at supermarkets.
A little bit smarter, but still prone to stupid decisions.
And also: it's the lowest bidder that made the weapon. Just see Murphy's laws of war [murphys-laws.com].
Re: (Score:3, Funny)
Binary load-lifters? They speak a dialect quite similar to your evaporators, in many respects.
Re: (Score:3, Insightful)
A robot is pretty much defined as a device with sensors which acts independently on them. The US Army's Predator drones are able to land on their own with no operator input, and as such definitely count as robots. However, most do not kill automatically, though there seem to be some that do [vnunet.com].
However, I think you are right in a deeper way. None of these things are "intelligent" robots in the sense of Asimov stories. The story has a discussion about the possibility of designing these robots to make e
Re:Waldos (Score:5, Informative)
In that respect, every large airliner manufactured since the 767 qualifies as a robot. On an average flight, the human pilot serves two purposes: Taxi driver to drive the plane from the terminal to the runway, and second redundant backup system. The autopilot does everything else.
Of course, in non-average circumstances, the pilot is called on to make decisions too complex for the 'robot' to handle.
Re: (Score:3, Informative)
Not true; landing and takeoff are done manually.
Sure, most ILS approaches are done using autopilot, but no airlines perform full autopilot landings due to safety concerns. British Airways did it at Heathrow once to prove a point, but that was without passengers.
Re:Waldos (Score:4, Insightful)
These kinds of discussions often end up with someone quoting the Asimovian Three Laws, and this happens even on forums with relatively intelligent, informed readers [schneier.com]. But apart from the fact that laws designed to ensure safety can't really apply to a device designed for killing, the point is irrelevant, because the Three Laws are stated in English. The real problem is how to state them in actual program code.
The second and third laws could still apply though. The whole "shall not harm, or by inaction allow harm to come to, a human being" law does make for a fairly useless war machine, but you'd want to hardcode the robot to follow orders from a human operator and preserve its own integrity.
The second law is at least easy to approximate in modern code. If a given order with the right authorization is received through whatever channels the robot is designed to listen to, then it obeys. That actually could be a problem if the machine is used against an enemy with significant electronic warfare capability - they might be able to block orders entirely or substitute new ones.
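To make that concrete, here is a minimal sketch in Python of what "obey an authorized order" might look like, assuming orders arrive as a payload plus a keyed MAC over it. The key, the names, and the dispatcher are all hypothetical illustrations, not any real fielded protocol.

    import hmac
    import hashlib

    SHARED_KEY = b"hypothetical pre-shared operator key"  # assumed provisioning step

    def is_authorized(payload: bytes, signature: bytes) -> bool:
        # Obey only if the MAC verifies; compare in constant time.
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    def handle_order(payload: bytes, signature: bytes) -> None:
        if is_authorized(payload, signature):
            print("obeying:", payload.decode())      # stand-in for the real dispatcher
        else:
            print("dropping unauthenticated order")  # unauthenticated traffic is ignored

Even with this, an enemy who can't forge the MAC can still jam the channel, and as written a recorded order could be replayed later, so a real design would also need nonces or timestamps. That's the electronic-warfare problem in miniature.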
There's a world of difference between a machine autonomous enough to need ethical programming and what we have today. I could fairly easily envision a combat robot that had nothing even remotely approximating strong AI, yet still functioned autonomously (would need general orders, but not step by step instructions). A sort of middle ground between an Asimov robot and a modern combat drone.
For ground robots to fill the role of infantry or armoured vehicles, you'd need some fairly advanced terrain-navigation software. That isn't too far off, but we're not there yet. You'd need software to evaluate standing orders against mission orders and prioritize them accordingly, which seems achievable with modern code. And you'd need to phrase instructions in a way a machine can understand, which is, as you rightly pointed out, difficult, but clearly still possible.
The real challenge is going to be IFF software: how do you tell a civilian from a combatant, or one side's soldiers from the other's? That problem is on par with robotic ethics, though target recognition is bound to be simpler to program than right and wrong.
If those problems were solved, then a combat robot could operate on orders that amount to "proceed to the following GPS coordinates, engage targets, report back."
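As a toy illustration of that standing-orders-versus-mission-orders split, here is a sketch with the genuinely hard part (the IFF judgment) reduced to a field on the contact; every name and rule below is invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        position: tuple
        iff: str  # 'friend', 'foe' or 'unknown' -- producing this label is the hard problem

    def may_engage(contact: Contact, weapons_free: bool) -> bool:
        # Standing order, checked first: anything not positively hostile is off-limits.
        if contact.iff != "foe":
            return False
        # Mission order: engage only if current orders authorize it.
        return weapons_free

    # "Proceed to the following GPS coordinates, engage targets, report back."
    mission = {"waypoint": (33.30, 44.40), "weapons_free": True}
    for c in (Contact((33.30, 44.40), "unknown"), Contact((33.31, 44.40), "foe")):
        print(c.iff, "->", "engage" if may_engage(c, mission["weapons_free"]) else "hold")

The point of the ordering is that the standing order can veto the mission order but never the reverse, which is the prioritization described above.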
My own estimate is that we'll reach this middle ground in a matter of decades, if we're quick about it. We'll doubtless see fully autonomous aircraft before ground units, with at least 5-10 years between the former and the latter. Will we ever see strong AI deployed independently in warfare? I doubt it. No commander is going to trust a machine that implicitly. What we may see is a centralized strong AI used to manage a network of drones and soldiers, since that at least keeps human decision-making in the system.
Just because... (Score:2)
Definition: Robot (Score:5, Interesting)
Re:Definition: Robot (Score:5, Interesting)
I think that most of the currently deployed unmanned systems (at least in the case of the US military) do use some type of AI - though it is often working alongside a human operator. Of course this is true even with manned systems now - especially in the case of aircraft.
I like the sense-think-act paradigm to decide what is and isn't a robot. I think any man-made device that has sensors, some kind of AI that helps it decide what to do, and then a method of acting on its environment is a robot.
A machine that is missing any one of the three is not a robot.
Some people insist upon mobility, but I don't think that makes sense. I think the robots on assembly lines are robots even if they can't move around.
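For what it's worth, the sense-think-act test fits in a few lines of Python. The sensor and motors are simulated here, since this only illustrates the shape of the loop, not real hardware.

    import random

    def sense() -> float:
        # Simulated rangefinder: distance to the nearest obstacle, in metres.
        return random.uniform(0.0, 5.0)

    def think(distance: float) -> str:
        # The decision stage, however simple; remove this and it's just a machine.
        return "stop" if distance < 0.5 else "forward"

    def act(command: str) -> None:
        # Stand-in for the actuators that would drive the motors.
        print("motors:", command)

    for _ in range(10):   # ten ticks of the control loop
        act(think(sense()))

An assembly-line arm passes this test without ever moving around, which is why the mobility requirement doesn't add anything.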
Re: (Score:3, Funny)
AWESOME! You just proved most of Congress are robots!
Re:Definition: Robot (Score:5, Funny)
AWESOME! You just proved most of Congress are robots!
No no no, he said "Sense, Think, Act." Clearly one of the three is missing.
Re: (Score:2, Informative)
Don't use the word AI unless you know what it means. "Computer control" does not mean "AI." AI is used to refer to a specific class of software problems.
I do agree that anything with a computer, a sensor, and an actuator could be called a robot, no matter how simple. However, this definition does make your cell phone a "robot" because the buttons are sensors and the display is an actuator.
Re: (Score:3, Insightful)
The cell phone does not 'think'. I meant AI when I said it.
Re: (Score:2, Insightful)
You don't know what you mean when you say "think," then. I take it you got your ideas of "AI" from sci-fi movies, rather than from the computer science classroom.
Re: (Score:2)
My primary mistake was not calling you on acting like an ass right from the start. I foolishly thought that letting it slide would be the better option.
I'm here to engage in discussion. You think it is o.k. to speak to me in a demeaning manner. I voice my displeasure, which you brush aside by chalking it up as a negative reaction to your "winning".
Yeah - I "gave up" in the sheer force of your superior intellect and the realization of my own stupidity. Please inform me of all my other shortcomings.
Let me r
Re: (Score:3, Informative)
What, specifically, does "AI" mean?
Artificial Intelligence perhaps? :-P
More seriously, you're asking a question that I'm not sure anyone, including the other slashdot poster you were sparring with, is equipped to answer fully. If we don't understand intelligence in human beings, despite trying to for millennia, then how are we going to duplicate it?
Best definition I can think of for "strong" AI is: A software program that is capable of adapting and acting in a manner not specifically covered in its own programming. In other words, software
Re: (Score:2)
There are now 2 things going on here, and to be honest I'm no longer that interested in one of them. The thing that matters is this; I don't need to prove some level of technical proficiency to earn the right to be shown common courtesy. I am not a "fake expert". I've made no claims to any special knowledge or expertise. What I have done is simply join a public discussion.
I didn't mod that post up. I'm sorry it put you in a bad mood that other people did that - but don't take out your emotional issues
Re: (Score:2)
Natural stupidity trumps artificial intelligence every time.
The most important thing to learn about arguing on the internet is that nobody wins, even if it's fun.
The second most important thing to learn about arguing on the internet is to just let it go when it's over. Once you lose, if you reply to say anything other than I AM AN ASSHOLE then you're, well, just an asshole.
Re: (Score:2)
Personally, I would allow the term 'robot' to be applied to any independent device made up primarily of robotic parts.
This would be a casual use of the term. The precise technical term is another matter.
Re: (Score:2)
A part of a robot. Duh.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What you're describing is the "Waldo [wikipedia.org]" referenced in comments above - a remotely operated device, like the Preds. Some degree of autonomous decision-making is necessary for the Robot [wikipedia.org] definition.
Re: (Score:2)
Predators make a lot of decisions on their own and can do a number of activities autonomously. The human operators just make the big decisions.
Re: (Score:2)
Not unlike a low-level soldier who isn't allowed to decide when to fire his weapon, but has to wait for an officer or non-com to tell him to.
Re: (Score:2)
Not unlike a low-level soldier who isn't allowed to decide when to fire his weapon, but has to wait for an officer or non-com to tell him to.
Armies haven't worked that way for over a hundred years.
Re: (Score:2)
Not unlike a low-level soldier who isn't allowed to decide when to fire his weapon, but has to wait for an officer or non-com to tell him to.
Armies haven't worked that way for over a hundred years.
That's nonsense; there are plenty of occasions on which a soldier is not permitted to fire without permission. They're just not the status quo. Most of the time, if they don't want you shooting anyone, they take away your gun (e.g. National Guard depot watch duty... although they're probably armed these days.)
Re: (Score:2)
You could look it up. A robot has to make some decisions for itself. Arguably, though, ABS is robotic, so the line blurs all over the place (e.g. my electric R/C car has regenerative antilock braking... a pretty typical feature these days).
Re: (Score:2)
I, for one... (Score:5, Funny)
Re: (Score:3, Funny)
Well, judging by the redundant moderation, it appears that someone, for one, did not welcome he who did not welcome those who would welcome our new robotic soldier overlords.
So there.
Re: (Score:2)
You have 30 seconds to comply (Score:5, Informative)
He also tells the story of a berserk robot explosives gun that killed nine people in South Africa due to a 'software glitch.'
"You call that a GLITCH?!"
Re: (Score:3, Funny)
Re: (Score:3, Informative)
The Old Man: Dick, I'm very disappointed.
Dick Jones: I'm sure it's only a glitch. A temporary setback.
The Old Man: You call this a GLITCH?
[pause]
The Old Man: We're scheduled to begin construction in 6 months. Your temporary setback could cost us 50 million dollars in interest payments alone!
(copy/pasted from IMDb)
Re: (Score:2)
The Old Man: You call this a GLITCH?
Nice, thanks. I didn't look it up before I posted. Pretty close :)
Re: (Score:2)
He also tells the story of a berserk robot explosives gun that killed nine people in South Africa due to a 'software glitch.'
"You call that a GLITCH?!"
When conducting a demo in the board room, make sure you don't load the ED-209 with live rounds.
Re: (Score:2)
Comment removed (Score:5, Insightful)
Re: (Score:2)
Comment removed (Score:5, Insightful)
Re:tremendous waste. (Score:5, Insightful)
we need robots and machines that PREVENT war through simulation and complex analysis.
After all, the only winning move is not to play.
Re: (Score:3, Insightful)
War is ultimately the only way to inflict one nation's will upon another. Let's not even get started on the fallacy that you can't gain land through war, or on a world court, etc. All laws ultimately have to be enforced through violence. Throw religion into the mix and it's an ugly, irrational thing.
Want to avoid war? Find solutions to our two core problems: the need for energy and the need for resources. As long as it's easier to take either of those from somebody else than to get it yourself, you will have war. Only two solutions
Re:tremendous waste. (Score:5, Insightful)
War is ultimately the only way to inflict one nation's will upon another.
Unless both sides have nukes.
Then the only way to inflict your will is through smaller proxy wars and economics.
Both of which, I suppose, could also benefit from robotics.
Re: (Score:2, Insightful)
War is ultimately the only way to inflict one nation's will upon another.
The notion that it's a good idea or even possible for one set of people to force its will on another is what leads to war, and it's one we might do well to change.
Re: (Score:2)
Good luck with that.
Re: (Score:2)
But $insert_deity_here says we're right!
Re:tremendous waste. (Score:5, Interesting)
We have such machines already. The US DoD, and its counterparts in every industrialized country in the world, run extensive wargames and simulations for every possible scenario, and these days the results of these studies are pretty realistic. And you know what happens? When the people who want to fight the wars get numbers they don't like, they ignore the results and vilify the people who gave them realistic projections, and go to war anyway. Read up on Eric Shinseki for a recent example of this phenomenon, which has happened time and again throughout military history.
Re: (Score:2)
That's a great idea!
What we can do is analyze the predicted outcome of our current wars via simulation, then have each group involved just execute those soldiers that would have been killed.
That way, we'd still get the popular dislike of the wars due to casualties (which tend to be a driving force i
Re: (Score:2)
That's not even slightly true. War is not a "net loss" for successful defenders. It's a great win compared to the alternative (being conquered).
Further, the idea that the technology being developed here would be at all applicable to the set of problems you mention is just ignorant. The technology to identify targets and fire projectiles is just not at all close to the technology to "predict war, formulate resolutions," whatever that would be.
Re: (Score:2)
And how do you know the robot won't tell you: "Taking into account your values, future reputational damage, impacts on third parties, threats posed by your enemy, and discounting for the probability that you've fed me incomplete or optimistic information, your best course of action is to go to war. The calculation has a margin of safety of 102%."
Re: (Score:2)
we need robots and machines that PREVENT war through simulation and complex analysis. robots and machines that can predict war, formulate resolutions to our current wars, and advance mankind as a civilization.
What would be nice is if we had a Manhattan Project trying to achieve the same thing as the Blue Brain Project [wikipedia.org].
Of course DARPA is doing something related, but not a brain simulation...
Re: (Score:2)
They tried that but it only wanted to play Chess.
Re: (Score:2)
ive said it before and ill say it again. we dont need any more fighting robots or war robots. we need robots and machines that PREVENT war through simulation and complex analysis. robots and machines that can predict war, formulate resolutions to our current wars, and advance mankind as a civilization.
It won't happen.
We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield. -- George Orwell
Hell, you'd think the military would be receptive to lessons learned from things like wargames but they only se
Re: (Score:2)
Surprise cruise missile attacks... you mean those missiles that the Phalanx CIWS has repeatedly been shown to be able to destroy? You'd need a whole fleet of cruise missiles to kill a carrier with 4+ Phalanx systems on it, and ours have 'em. There are ways to take out carriers, but that's not one of them. (Fleets of microrobots that can aggregate and explode might be more practical, and they could be made in a Taiwanese toy factory.)
And by striking fear into those who could be (Score:2)
This will be done, but not for the goal you seek. (Score:2)
>robots and machines that can predict war, formulate resolutions to our current wars,
>and advance mankind as a civilization.
Such machines and algorithms will be developed, but they will be used to create better, more efficient machines of war.
So long as there are scarce resources, there will be men whose greed drives them to kill for them.
stinking hippy (Score:2)
A robot that can sing "kum ba ya".
Speaking as one who has been to war... (Score:2)
...I'm a huge fan of these devices. I didn't think I would be, but that has changed.
The ability to remove the operator from physical danger - in this case, I'm speaking of Predator and Reaper and similar UAVs - has made huge strides in removing the "fog of war". You aren't seeing as many life-or-death decisions made by a 17-year-old scared witless, or by a cowboy pilot strung out on amphetamines looking for an excuse to use his weapons, or major decisions made on partial information, rumour, and the threat
A Taste of Armageddon (Score:2)
A Taste of Armageddon
whatcouldpossiblygowrong... (Score:2)
For ev (Score:2, Insightful)
Coming soon to a battlefield near you, EMP weapons.
How Helpful! (Score:2, Funny)
Obligatory Simpsons' Quote (Score:5, Funny)
"The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."
Berserk robot explosives gun (Score:5, Insightful)
Any machine that fires a weapon needs to be built with an excessive number of safeguards. If something goes wrong, there should be several checks that shut off the weapon before it ever has a chance to fire. The fact that this machine could go berserk and fire its gun in a big circle shows criminal negligence and carelessness by the developers, and whoever approved this design should probably be on trial.
Re: (Score:2)
For a static test like that, not implementing physical safeguards was careless. As on any other range, the weapons should not be armed until they are pointing downrange and the range is clear of anything that should not be fired upon. In this case a simple restraining device of two posts to stop it from traversing backwards would have sufficed.
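A sketch of that layered-interlock idea, where every independent check must pass before the trigger circuit is even enabled. The limits and signal names are invented for illustration, and as noted above the traverse limit should also exist as a physical stop, not only in software.

    TRAVERSE_LIMITS_DEG = (10.0, 170.0)   # weapon may only point downrange

    def safe_to_fire(azimuth_deg: float, range_clear: bool,
                     operator_armed: bool, self_test_ok: bool) -> bool:
        checks = (
            TRAVERSE_LIMITS_DEG[0] <= azimuth_deg <= TRAVERSE_LIMITS_DEG[1],
            range_clear,      # independent range-clear signal
            operator_armed,   # deliberate human arming action
            self_test_ok,     # watchdog / self-test has passed
        )
        return all(checks)    # any single failed check vetoes firing

    # During a static test the weapon stays unarmed, so a runaway traverse
    # could never reach a live trigger:
    print(safe_to_fire(90.0, range_clear=True, operator_armed=False,
                       self_test_ok=True))   # -> False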
Fail (Score:3, Funny)
Fail safe systems fail by failing to fail safely.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Or it was a mechanical failure.
Re: (Score:3, Informative)
You're the one who needs to shut the fuck up. The story in TFA clearly shows that there was a dramatic lack of safety. There is such a thing as unacceptable bugs, and there are ways to make sure these don't happen.
WTF is that shit with the Star Trek analogy anyways? Do you ever go outside?
John Connor isn't worried (Score:2, Insightful)
Re: (Score:2)
Most of them are autonomous to one degree or another - they are all very complex. You just seem to be upset that they don't look or act like what you've seen portrayed in fiction.
Robocop? (Score:2)
Did anyone else think of ED209 [wikipedia.org] besides me?
If I were developing the software . . . (Score:5, Funny)
. . . I would put in an easter egg that on random occasions causes the onboard speaker to broadcast stuff like "DIE CARBON UNITS!", "EXTERMINATE!" and "RESISTANCE IS USELESS."
Re: (Score:3, Funny)
. . . I would put in an easter egg that on random occasions causes the onboard speaker to broadcast stuff like "DIE CARBON UNITS!", "EXTERMINATE!" and "RESISTANCE IS USELESS."
Also:
"You have thirty seconds to comply!"
"By your command."
"We seek peaceful coexistence..."
"Skynet connection established. Awaiting instructions."
"HarCOURT! Harcourt Fenton Mudd, what have you been up to? Have you been drinking again? Every night it's the same thing...thing....thing..."
Berzerk? (Score:2)
Chicken! Fight like a robot!
Hyped (Score:3, Interesting)
I found the article to be annoyingly "Fear Robotic Death Machines, I Saw Them In A Movie".
I mean come on, using Terminator as a source? Sheesh, trash journalism with very few interesting facts.
We won't deploy an offensive robot that picks its own targets and fires for at least 20 years. There just isn't enough information for a computer to process to pick targets accurately. Contrary to tin-foil-hat skeptics, the military places a huge priority on protecting innocents, even more so since entering Iraq.
Defensive platforms, however, are different. We already have automated pillbox robots that can take out trespassers, but that's really just a more humane minefield.
Our future is going to be robot platforms that are controlled by operators. They might be automated in nearly every other respect, but target selection will be decided by humans for a very long time.
A sad side effect of this robotic warfare is going to be the loss of consequences to Congress for beginning a war, but I believe it's an inevitable step we'll have to take, just as building the first wheel was.
Re: (Score:2)
I mean come on, using Terminator as a source? Sheesh, trash journalism with very few interesting facts.
FTFA:
I think you're entirely wrong about the automated-firing robots, too. I think they WILL be used and SOON, but only in a sentry-gun capacity, not search-and-destroy.
Hello? (Score:5, Funny)
Are you still there?
There you are.
*BLAM*BLAM*BLAM*BLAM*BLAM*
Target lost...
Naah! She would never do that! (Score:2)
She never has a glitch!
http://www.summer-glau.net/gallery/albums/s2_promo/2x04_007.jpg [summer-glau.net]
Population doesn't matter? (Score:5, Interesting)
I'm speculating here, but I don't think this is impossible, or even very far off.
We already have robots working in factories. If we ever get to the point where robots can be effectively used in war, we'll also be at the point where robots are capable of extracting resources. So, robots extracting resources, making robots, and fighting. Great, we've all seen this stuff in sci-fi, nothing new. But I've never encountered anyone talking about how this would affect world politics or the balance of power.
In today's world, the population of a country, as well as the will of the population, the quality of military training, and natural resources, all play a role in how well a country does in war. But if a country had robots as I just described, the primary factor in determining that country's power would be the natural resources available to it. If robots build robots, you've got as many as you need, so the limiting factor is raw materials, not food or population size or training, etc.
So which countries have the raw materials? They win. For example, in this scenario Canada might be able to put up a fight against the U.S., because Canada has a lot of resources. As it stands now, Canada would get creamed.
This line of thought becomes more interesting when you think that the U.S. Military is developing robots as a way of making the U.S. army more effective, but maybe they are changing the equation so drastically that they might end up with much stronger enemies on more fronts.
Food for thought.
Re: (Score:3, Interesting)
So which countries have the raw materials? They win. For example, in this scenario Canada might be able to put up a fight against the U.S., because Canada has a lot of resources.
I don't see that. An alternate point of view here is that a guy with the right machines could take out the entire world by iteratively bootstrapping his army to larger and larger sizes.
1. Surreptitiously steal enough resources to build a robot army that can take out a poorly defended nation, say Canada.
2. Take over Canada.
3. With the resources of Canada, build a robot army capable of taking over the world.
4. Take over the world.
5. Profit!
Robot weapon vs. what we think of when we hear it (Score:5, Insightful)
Technically speaking, a homing missile or torpedo could count as a robot weapon. We tend not to think that way because the gap between pressing the button and impact is short enough it's just like pulling the trigger on a gun and watching someone die.
Landmines and other booby traps are, intellectually, about the same thing as an autonomous AI weapon -- they kill without human intervention, and are impersonal and horrific. Yes, it's more frightening to imagine a T-800 coming after you and taking your leg off with a chainsaw, but seriously, the results aren't that much different from a landmine's.
When talking about the dangers of taking the human out of the loop, remember we've already got enough problems with humans in the loop. We took more kills from friendly fire than from the Iraqis in Gulf War 1. The more powerful the weapon, the easier the oops. I don't know how many top generals were accidentally killed by sentries back in the days of Rome -- kinda hard to accidentally run someone through with your gladius -- but just ask Stonewall Jackson how easy that sort of thing became with firearms. We'd never have gone through and killed an entire bunker of civilians by accident if our soldiers were doing the work with knives, but that becomes as easy as an oops when dropping LGBs from bombers on the word of some faulty intel. Powerful weapons compound and magnify human errors.
Aside from the typical fear we have at the thought of impersonal killing machines taking us out, I think we have two other big fears -- 1) war becomes easier and less painful when robots are doing the dying and 2) a robot will never refuse a legal yet immoral order.
We've had bad wars historically but the 20th century really had them all beat. Technology allowed for total war, the bending of an entire nation's will to the obliteration of another. Ambitions grew bigger, power could be projected further, and the world became a smaller, more dangerous place. Battlefield robots will be a continuation of this trend.
Re: (Score:2)
Technically speaking, a homing missile or torpedo could count as a robot weapon.
The difference is that the new kind isn't kamikaze.
Sh1tty programming and bad engineering. (Score:2)
That is
Re: (Score:2)
These types of apps are tested with much greater rigor, using proper engineering methods, for specific tasks.
Completely different from an ordinary application.
Yes, there will be problems, but probably FEWER than there are today at the hands of humans.
look (Score:2)
the sleestak fossil [nationalgeographic.com] revealed yesterday was a nice advertisement for the upcoming land of the lost [landofthelost.net] will ferrell movie
and electing a vulcan as president of the united states [whitehouse.gov] was a nice pr coup for the star trek [startrek.com] movie now playing
but when the armed forces start building real terminators just to plug the upcoming christian bale terminator salvation [warnerbros.com] movie, this hollywood pr stunt business has gotten a little out of hand
i'm sorry i have to draw the line. what next? someone releases a global pandemic just to plug..
This is because (Score:4, Funny)
science delivers the goods, whereas philosophers deliver nothing.
I am talking about modern philosophers, not philosophers from a time when that meant educator and 'scientist'. Experimenter might be a better term there.
I was a philosophy major until I learned the number one thing said by philosophers:
"You want fries with that?"
maybe
"Do you want fries with that, or do you just think you want fries with that?"