Robotics Technology

Packs of Robots Will Hunt Down Uncooperative Humans

Ostracus writes "The latest request from the Pentagon jars the senses. At least, it did mine. They are looking for contractors to 'develop a software/hardware suite that would enable a multi-robot team, together with a human operator, to search for and detect a non-cooperative human subject. The main research task will involve determining the movements of the robot team through the environment to maximize the opportunity to find the subject ... Typical robots for this type of activity are expected to weigh less than 100 Kg and the team would have three to five robots.'" To be fair, they plan to use the Multi-Robot Pursuit System for less nefarious-sounding purposes as well. They note that the robots would "have potential commercialization within search and rescue, fire fighting, reconnaissance, and automated biological, chemical and radiation sensing with mobile platforms."
  • Mechanical Hound (Score:5, Interesting)

    by MiKM ( 752717 ) on Friday October 24, 2008 @08:06PM (#25505185)
    This is eerily reminiscent of the "mechanical hound" from Fahrenheit 451
  • by NeutronCowboy ( 896098 ) on Friday October 24, 2008 @08:10PM (#25505213)

    .... can I just shoot them if they try to hunt me down? What about a nice EMP blast? And will they be armed? Or will they behave more like searchers from the Chronicles of Riddick?

    I'm really not sure if I'm looking forward to that. Either they won't be armed, and they'll be easily disabled, or they will be, and then.... Meh.

  • by Anonymous Coward on Friday October 24, 2008 @08:23PM (#25505341)
    No, you cannot.

    I have it on the authority of a friend that when a police dog comes out of nowhere and leaps on you and you instinctively knock it away, it PISSES THE COPS OFF and they tend to beat the crap out of you. I'm pretty sure you would get a similar reaction from them if you scratch their shiny new toy. Remember, most law enforcement considers this a battle between US and THEM, and they will include these robots in their definition of US.
  • Re:Mechanical Hound (Score:4, Interesting)

    by ColdWetDog ( 752185 ) * on Friday October 24, 2008 @08:35PM (#25505451) Homepage
    It's worse than that. It's here. [bostondynamics.com]. Well, sort of, anyway. It's more like a psychotic hydraulic mule. But I especially wouldn't want one chasing me.
  • by Devout_IPUite ( 1284636 ) on Friday October 24, 2008 @09:06PM (#25505727)

    Isn't it interesting, though, that the world has never seen a modern communist society... I wonder if one could actually work? People said a democracy would never work when the United States started, and now most of its residents would consider that statement to be false.

  • by Anonymous Coward on Friday October 24, 2008 @09:09PM (#25505755)

    As militaries get stronger, as nations become more powerful, as even individual humans attain powers of substantial destruction, we need to start looking at how we organize ourselves as a society.

    Clearly, we are on the brink of enabling every techno-nightmare ever conceived.

    Simply trying to "stop the bad guys" isn't going to work anymore. We need to change the way we govern ourselves. That's the point behind the Metagovernment project [metagovernment.org]; combining the principles of open source and democracy with the new capabilities enabled by emerging web technologies.

    Are you in, or would you like to see where developments like military human-hunter robot swarms take us in another couple of decades?

  • by Anachragnome ( 1008495 ) on Friday October 24, 2008 @09:12PM (#25505775)

    I can attest to that myself.

    It DOES piss them off (especially if you're knocking it away with Vibram-soled, steel toe boots), but they don't necessarily beat the crap out of you. They just let the now-very-pissed-off dog chew on you for a while. That way there are no marks from THEM to indicate excessive force.

    The problem here is that the DOG does NOT have to announce himself as a police officer (like I'm gonna see a badge, on the collar, in the dark). That allows the officer to apply force without clearly announcing that you are dealing with someone that you're not allowed to DEFEND yourself from. When it happened to me, I had already kicked the dog 4-5 times and been chewed on for 10-15 seconds by the time I had ANY idea there was a cop in the area.

    Personally, I think robots would just remove the normal hesitation that most people experience when confronted with the decision of killing someone else. In other words, get rid of that pesky conscience.

  • by Anonymous Coward on Friday October 24, 2008 @09:29PM (#25505895)

    "uncooperative subject" simply means a person who is not acting by a set of rules that might put the robot at an advantage. Meaning, that person is not cooperating with the robots.

    Assuming "uncooperative" has anything to do with laws or socially accepted behavior is simply projecting your fears and creating a context that was previously not there. It is simply another way of saying, "testing in the real world with no handicaps for the robots."

  • by Anonymous Coward on Friday October 24, 2008 @09:39PM (#25505949)

    There are some major upshots for protestors:

    1. It removes the justification to use lethal force. When the cops feel that their lives may be in danger, they have the asserted right in pretty much every country to use *lethal* force to defend themselves.
      But if there are no real cops on scene, then neither is there any justification to ever use greater than non-lethal force.

    2. Accountability. Every button press and everything seen on the monitor can be recorded. No more "accidentally" falling down stairs etc., and no more it coming down to your word vs. the cops' word.

    3. As far as countries with bad human rights records go, well, if the wealthy first world countries develop this technology first, then they can make a point of selling only Asimov type robots.

  • by Artifakt ( 700173 ) on Friday October 24, 2008 @10:11PM (#25506169)

    Asimov would have written a short story where a Positronic Robot series had just been developed to the point where it could decide imprisonment counted as harm, and a human had directed it that it was acceptable as it offered a chance for the human to reform and become a better person. Susan would get involved over something, like the robot breaking the prisoner out when it became apparent the prisoner wasn't going to reform, or that he already had so the rest of his sentence was superfluous and so counted as harm.
            Either way, putting someone in jail only automatically counts as harm at some particular level of mentation. Below that, the robot would assume that if the human got three squares and a cot, and better medical care than being on the run, there was no harm. Above that level, the robot would have to balance issues of human freedom with the harm a human might do to others exercising it. At still higher levels of understanding, the robot would have to consider how the human might harm himself exercising freedom. It's only an automatic violation of law 1 to a robot between the really dumb and the moderately smart levels, not to other robots.
            Returning to the thread, the robots described are in the real world = really, really dumb category, too dumb to even apply the first law at all. That means a human would actually be fully responsible for any mistakes the robots made, but tools such as this let that human pretend not to be responsible for mistakes - that's what's really a 'bad thing' (tm) here.

  • by Artifakt ( 700173 ) on Friday October 24, 2008 @10:17PM (#25506201)

    When Jack Williamson wrote "With Folded Hands", his 'humanoids' took away all freedom to do anything risky, supposedly for people's own good. Try to go mountain climbing, and they make you stay inside, but offer a nice game of chess. A little observation of what the humanoids say shows they were trying to implement Asimov's laws, and the whole story is about just the point you raise. It's a pity that not nearly as many people have read Williamson as Asimov.

  • by corsec67 ( 627446 ) on Friday October 24, 2008 @10:42PM (#25506351) Homepage Journal

    Crazy, killing a police dog is a felony, but a police officer killing someone else's dog is ... part of the job?

  • by Irish_Samurai ( 224931 ) on Friday October 24, 2008 @10:52PM (#25506401)

    It was a fucked up experience when I ate it.

    I was racked with guilt at the time. Everyone, excluding my father in law, told me I should get over it (me and him actually bonded in a weird way because of this). He has never antagonized me about it, and any time the subject is brought up in conversation he hasn't been the one to initiate it - and he never says anything critical.

    In a culinary sense, it was good. In an existential sense, it was probably the most meaningful meal I have ever had.

  • by Anonymous Coward on Friday October 24, 2008 @11:55PM (#25506705)

    Exactly. I think this is an example of a poor choice of words. "Uncooperative" means that the individual is not trying to aid in detection.

    From a military point of view, the thinking probably went somehow like this:

    Colonel DARPA: Let's ask for a robot that will help find people.

    Lieutenant Writer: Ok, sir, how should we lay out the requirements?

    Col. DARPA: Well, we want it to be used in search and rescue, so it'll have to be something that will find an unconscious guy.

    Lt Writer: Yessir...think we'll ever use it to hunt bad guys?

    Col.: Hmmmm...maybe. Not really what we're wanting, but hey, you never know. Unconscious, bad guys, whatever. Let's just put in "uncooperative" to cover the whole thing.

  • by Shatrat ( 855151 ) on Saturday October 25, 2008 @01:45AM (#25507275)
    I will never understand this sort of thing.
    Unless you're a vegetarian, it's a complete cop-out not to be able to kill an animal.
    I mean, I couldn't kill a cat or a dog, and I might kill a person who killed a cat or a dog, but I wouldn't lose sleep over killing anything I ate.
    The only thing I still hunt is dove. I don't particularly like deer, or squirrel, and people get pissed when you shoot their hogs.

    If on the other hand you ARE a vegetarian, I may eat you myself.
    I realize that I'm not particularly eloquent, but Anthony Bourdain has covered this subject much better than I could on his show 'No Reservations' a few times.
  • by symbolset ( 646467 ) on Saturday October 25, 2008 @02:42AM (#25507581) Journal

    Asimov will be proven right eventually. We may all be dead by then. The three laws are derivative of Turing's work.

    It may be that the purpose for biological intelligence is to create machine intelligence.

  • by KDR_11k ( 778916 ) on Saturday October 25, 2008 @05:27AM (#25508173)

    But what about vehicles? Is a tank not a robot just because the operators sit inside instead of hidden away in some far-away base? Is the Goliath rolling bomb of WW2 a robot since the operator sits behind cover and uses a remote control to move it?

    Aren't Asimov's robots defined by having no operator and being completely independent in their decisions?

  • Re:crocodile dundee (Score:4, Interesting)

    by bitrex ( 859228 ) on Saturday October 25, 2008 @07:30AM (#25508615)

    What I wish to say is that if the definition one has of eliminating poverty is the "American Dream" of everyone owning an 11,000 square foot home, 2 luxury cars in the driveway, and 2.5 kids going to the best universities, forget it. It can't be done! Attempting to bring the whole planet up to what is considered an American middle class standard of living will burn through what resources remain on this planet like flash paper.

    I feel the reason "poverty" exists as it is defined in the United States is finally because the resources that do exist are ultimately advertised, marketed, and distributed to the "poor" in a way that leaves them physically, emotionally, and spiritually unsatisfied - to keep people always grasping for more - and this is done intentionally by the industries involved to make sure wealth continues to always flow upward. If you can trick people into believing that just that little extra effort, that next little purchase will somehow lead to true satisfaction, you can always make them believe that it's just around the corner. It's just a con-game to make what resources are left bubble to the top.

    Finally it all comes down to breeding rights and reproduction. That's what life is here for, it's what the specialized organ at the center of our bodies is there for. Perhaps the final reason for the existence of every concept of wealth, prosperity, and economic success is that it's the current measure by which one's fitness for breeding is judged. And if the current gold standard of breeding fitness is the American way of life - then by God those who have it are going to use every trick in the book to squeeze those who don't by the balls to give them the illusion of getting there when they're really not. The worst thing that could ever happen for their breeding prospects is for the masses to wake up and realize it's all a fucking lie - the closest the U.S. ever came to that stage was the late 1960s - and such deviance was eventually sublimated by consumer culture into the packaged deviance of basically body piercing and ass tattoos.

    If all that's not worth a -1 Offtopic I don't know what is.

  • by Cyberax ( 705495 ) on Saturday October 25, 2008 @09:07AM (#25508981)

    That's not true. Rule-based programming is widely used in practice. The canonical example is automated credit scoring.

    Look here: http://en.wikipedia.org/wiki/Production_rule_system [wikipedia.org]

    And incremental rule-based processing can be done very efficiently: http://en.wikipedia.org/wiki/Rete_algorithm [wikipedia.org]

    Of course, current rule-based systems are NOWHERE NEAR complicated enough to understand concepts like 'harm'.
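    For readers unfamiliar with production rule systems, here is a minimal forward-chaining sketch in Python (the rule and fact names are invented for illustration; a real Rete engine avoids rescanning every rule on each cycle by indexing conditions in a discrimination network):

    ```python
    # Minimal forward-chaining production-rule system.
    # Facts are strings; a rule fires when all of its condition
    # facts are in working memory, adding its conclusion fact.
    # Rules/facts below are hypothetical, loosely modeled on the
    # credit-scoring example mentioned above.

    RULES = [
        ({"income:low", "debt:high"}, "risk:high"),
        ({"income:high", "debt:low"}, "risk:low"),
        ({"risk:high"}, "decision:deny"),
        ({"risk:low"}, "decision:approve"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:                      # repeat until fixpoint
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)   # rule fires
                    changed = True
        return facts

    print(sorted(forward_chain({"income:low", "debt:high"})))
    # ['debt:high', 'decision:deny', 'income:low', 'risk:high']
    ```

    The naive loop above rechecks every rule whenever working memory changes; the Rete algorithm linked above trades memory for speed by caching partial matches so only rules affected by a new fact are re-evaluated.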
