Robotics Technology

Fast Navigating Guessing Robots

holy_calamity writes "A new navigation technique for robots allows them to make predictions about what's around the corner based on where they've been already. It works well in repetitive environments like office buildings. If this were a Japanese project I'd say it'd be useful for robotic secretaries new on the job, but since it's an American one I suppose it'll be used for automated SWAT teams."
This discussion has been archived. No new comments can be posted.


  • swat (Score:5, Funny)

    by mastershake_phd ( 1050150 ) on Thursday May 10, 2007 @02:57AM (#19063999) Homepage
    but since it's an American one I suppose it'll be used for automated SWAT teams.

    Ya last corner terrorist, next corner must be terrorist, come out shooting.
    • Re:swat (Score:5, Insightful)

      by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Thursday May 10, 2007 @03:06AM (#19064055) Journal

      If it were possible to rate topics like individual posts, I'd be torn between Insightful, Flamebait and Troll.

      • Not every radical opinion is Flamebait or a Troll
        Not every too-pro-American or too-anti-American opinion is flamebait or a troll. It usually is a genuine opinion.
        People may have unpopular opinions, and might even post such opinions in public (omg !), disregarding the retarded "we must stick to the middle of the road" mentality. Moderation is not about telling people what you think of their opinion.
        • Re: (Score:3, Insightful)

          by AlHunt ( 982887 )
          > post such opinions in public (omg !), disregarding the retarded "we must stick to the middle of the road" mentality. Moderation is not about telling people what you think of their opinion.

          The story should be presented without editorial comment, however. After that it's open season.

          That said, I guess we'll start using fluffy bunnies to sniff out bombs instead of machines. We wouldn't want to violate the Robot Bill of Rights, eh?

        • by zero_offset ( 200586 ) on Thursday May 10, 2007 @07:12AM (#19065349) Homepage
          The actual problem is that when these statements are made in the story summary, they are not subject to any moderation, deserving or otherwise. Regardless of whether you feel the statement deserves moderation, it clearly isn't adding anything to the summary. It's an old slashdot problem, and you can bet that the comments which survive just happen to match the slant of the editors: in effect, by making it into an untouchable story summary, it received the ultimate up-mod...
    • Ya last corner terrorist, next corner must be terrorist, come out shooting

      I'm not sure how serious you are about that comment, but the military and police need to have someone to blame whenever a weapon goes off.

      The only weapons systems I can think of that have the ability to fully cut humans out of the loop are defensive weapons on naval ships and (soon to be?) on tanks.

      Maybe someone else knows of offensive weapons that don't need a human to pull the trigger, but AFAIK, no 'western' nation would ever deploy

      • Maybe someone else knows of offensive weapons that don't need a human to pull the trigger

        Robotic Sentry Gun [technovelgy.com]

        Whipped up by amateurs.

      • by HTH NE1 ( 675604 )

        The only weapons systems I can think of that have the ability to fully cut humans out of the loop are defensive weapons on naval ships and (soon to be?) on tanks.

        "New target acquired."
        "That's not a target. That's Church!"
        "Target locked. Firing main cannon."
    • Terrorist detected, weapons locked on. Cancel or Allow?
    • Re: (Score:1, Flamebait)

      by drinkypoo ( 153816 )
      The LAPD heard about this technology, and was very excited. But they want the code changed to "last corner Mexican, next corner black, come out swinging your riot baton."
  • Mhmmm (Score:3, Funny)

    by tttonyyy ( 726776 ) on Thursday May 10, 2007 @03:14AM (#19064087) Homepage Journal
    ...sounds like just the excuse I need to place spinning blades around random corners in the office "to fend off any attacking robot overlords".
  • by Centurix ( 249778 ) <centurix@gmPERIODail.com minus punct> on Thursday May 10, 2007 @03:16AM (#19064097) Homepage
    [Enters maze] ... First corner, bushes, snow ... Second corner, bushes, snow ... Third corner, bushes, snow ... Fourth corner, bushes, snow, Jack Nicholson behind me with an axe
    +++NO CARRIER
  • by Ohreally_factor ( 593551 ) on Thursday May 10, 2007 @03:18AM (#19064127) Journal
    I could see this being applied to game technology before it gets applied to law enforcement. This is an interesting approach to an AI (or AI-like) problem. The implementation just happens to be (and is well suited for) robots.
    • Re: (Score:3, Insightful)

      by zero_offset ( 200586 )
      Why? In a game environment, it's possible -- easy -- for the "AI" to have full and perfect knowledge of the world. Guessing is not necessary.
      • We're not talking about full and perfect knowledge of the game environment. We're talking about the game AI routines having imperfect knowledge and being able to learn about their environment. Is it realistic for a computer enemy to have perfect knowledge of the terrain? At the other end of the spectrum, is it realistic for a computer foe not to be able to take advantage of terrain it has learned about?

        (I didn't use quote marks. When discussing AI, it sometimes seems like every other word needs to be in quote marks.)
  • by Silver Sloth ( 770927 ) on Thursday May 10, 2007 @03:18AM (#19064129)
    From TFA

    But the method does have limitations, Lee says: "It works well in indoor environments, but wouldn't be very good in less-repetitive outdoors environments."
    So maybe a hybrid? Whilst in structured environments expect what you know, otherwise expect the unexpected. No one single answer will ever solve all problems (except 42)
  • by EmbeddedJanitor ( 597831 ) on Thursday May 10, 2007 @03:21AM (#19064143)
    This looks like a variant on behavior-based robotics. Instead of just prioritising behavior on sensed conditions, it also prioritises based on expected conditions.

    Currently robots really struggle with making good judgement calls. Behavior-based systems only go so far; perhaps this will go one step further.
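
    Very roughly, I'd guess the arbitration ends up looking something like the sketch below (just my own guess at the flavour of it, nothing from TFA; every name here is made up):

    def pick_behavior(behaviors, sensed, predicted, w_pred=0.5):
        # Score each behavior on how well its trigger conditions match what the
        # robot senses right now, blended with what it expects to see next.
        def score(b):
            now = sum(sensed.get(c, 0.0) for c in b["triggers"])
            soon = sum(predicted.get(c, 0.0) for c in b["triggers"])
            return now + w_pred * soon
        return max(behaviors, key=score)

    behaviors = [
        {"name": "follow_corridor", "triggers": ["corridor"]},
        {"name": "turn_left",       "triggers": ["corner_left"]},
        {"name": "avoid_obstacle",  "triggers": ["obstacle"]},
    ]
    sensed    = {"corridor": 0.9}      # what the sensors report right now
    predicted = {"corner_left": 0.7}   # what past corridors suggest comes next

    print(pick_behavior(behaviors, sensed, predicted)["name"])  # follow_corridor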

    • by timmarhy ( 659436 ) on Thursday May 10, 2007 @03:46AM (#19064277)
      The reason is they lack the ability to put things into context. Computers compute, therefore they calculate numbers and stats very well, but the context of a number, that elusive subtle meaning a number has, completely escapes them.

      Example: say I presented you with the number 42. On here you might associate it with The Hitchhiker's Guide to the Galaxy, or maybe something else, depending on the infinite number of ways I could put it in a sentence.

      • Re: (Score:3, Insightful)

        Except that context can be computed statistically very effectively by computers (only sometimes, of course).

        Say I present Google's computers with the number 42 [google.com]. Surprisingly enough it associates it with the Hitchhiker's Guide to the Galaxy [wikipedia.org] reference [wikipedia.org], much as you or I might. Indeed, if you put in a sentence it will do a surprisingly good job of responding to the context of the number 42, all through computing numbers and statistics.

        • Re: (Score:3, Interesting)

          by Farmer Tim ( 530755 )
          I think you've unintentionally summed up the problem with designing an all-purpose self-navigating robot quite well: it's as easy as putting Google's database and processing power in a box on wheels.
          • Re: (Score:2, Insightful)

            Almost, but not quite. The problem is just as you describe it, but the solution isn't to put Google in a box on wheels. Rather, we should be connecting the box on wheels to Google, wherever it is. My computer doesn't have Google's database or processing power, yet it can analyze the context of things through Google's datamining capabilities. I think autonomous robots need to be a bit less autonomous until we figure out the robot part more thoroughly. Shrinking and minimizing size will come with time in a ne
            • Re: (Score:3, Interesting)

              by Farmer Tim ( 530755 )
              I agree "Google on wheels" is not the solution, and I think part of the problem with robotics is the overwhelming desire to make them fully self-contained, when it's neither necessary nor efficient for most situations (that is, anything terrestrial).

              All a robot really needs to have built-in is the equivalent of a nervous system and a brain stem with basic (using the word "basic" very loosely) housekeeping functions like communications, avoiding fast moving objects or balance in the case of bipedal robots; th
        • Google also applies the association the other way around [google.com]
    • Re: (Score:3, Informative)

      by SnowZero ( 92219 )
      No, this is a variant on a common "Simultaneous Localization and Mapping" (SLAM [wikipedia.org]) algorithm that brings in elements similar to Image Completion [google.com] from the computer graphics community. SLAM is a sensor and sensor interpretation problem, not a behavior/reasoning or action/control problem. While there are probabilistic reasoning systems, the only thing SLAM has in common with them is that it uses conditional probabilities.

      If you want to know more, feel free to ask.
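
      To give the flavour of the "completion" part, here's a toy illustration (mine, not the paper's actual algorithm): guess an unobserved occupancy-grid cell by finding the most similar fully observed patch elsewhere in the map.

      import numpy as np

      UNKNOWN = -1  # cell not yet observed; 0 = free, 1 = occupied

      def predict_cell(grid, r, c, patch=1):
          # Guess an UNKNOWN cell from the most similar fully observed patch.
          target = grid[r-patch:r+patch+1, c-patch:c+patch+1]
          known = target != UNKNOWN
          best, best_score = 0, -1
          rows, cols = grid.shape
          for i in range(patch, rows - patch):
              for j in range(patch, cols - patch):
                  cand = grid[i-patch:i+patch+1, j-patch:j+patch+1]
                  if (cand == UNKNOWN).any():
                      continue  # only learn from patches we've fully observed
                  score = int((cand[known] == target[known]).sum())
                  if score > best_score:
                      best, best_score = int(cand[patch, patch]), score
          return best

      # A repetitive little "office": walls around identical open rooms.
      grid = np.array([[1, 1, 1, 1, 1],
                       [1, 0, 0, 0, 1],
                       [1, 1, 1, 1, 1],
                       [1, 0, 0, UNKNOWN, 1],
                       [1, 1, 1, 1, 1]])
      print(predict_cell(grid, 3, 3))  # -> 0: the unseen corner probably matches the room above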
  • by alphamugwump ( 918799 ) on Thursday May 10, 2007 @03:46AM (#19064281)

    If this were a Japanese project I'd say it'd be useful for robotic secretaries new on the job, but since it's an American one I suppose it'll be used for automated SWAT teams.

    More likely, you'd have a Japanese robot who is a waitress by day and a combat cyborg by night. And she happens to be a vampire from the future. And she wears a bunny suit. And she's also a suicidal paranoid schizophrenic.

    At least, that's what I've learned from watching anime. For God's sake, if you're going to troll, at least try to get your stereotypes right.

    • ...a Japanese robot who is a waitress...
      I dunno, I don't think I'd be giving a welcome plate of muffins to any waiting-staff-member who turns left purely out of habit. I think they'd be better on register, or coffee-machine. I can't wait until they make robotic kitchen-hands, so I can quit washing dishes and get a decent job. Like... data-entry.
  • by Anonymous Coward on Thursday May 10, 2007 @04:00AM (#19064331)

    If this were a Japanese project I'd say it'd be useful for robotic secretaries new on the job, but since it's an American one I suppose it'll be used for automated SWAT teams.
    Or, as is more likely the case, it was a bunch of American college grads being bored one night and wondering if they could make a robot guess what their lab looked like. I'm not sure why the author had to take a cheap swipe at a nationality under the flimsy guise of a guess as to its functionality. I'm not even sure why the author had to guess at its use in the first place; this is a website for nerds, and frankly, something like this is plain and simply cool.

    Naaaaah, it has to be for automating our SWAT teams, because we're a bunch of killcrazy cowboys looking for new ways to blow things up. Um... yee-haw?
    • I kinda thought the SWAT joke thing was supposed to be an ironically self-deprecating throwaway line, rather than that other sort of a line, the one with the hook in it, that everyone seems to think it is. Actually, thinking about it, after all the criticism American defense forces have come under, maybe a little prickliness from you guys is a good thing. Last thing this thread needs is someone quoting how much of the US GNP goes into ADF funding.
    • Honestly, the summary reads like something straight out of digg.
    • Re: (Score:3, Insightful)

      by stratjakt ( 596332 )
      Don't take it as an insult, but as a compliment.

      Using this type of technology for SWAT in a hostage situation could very well save lives.

      Using this type of technology to make a "robot secretary" is pretty much a waste of time and effort to create a novelty toy for rich Japanese executives.

    • by e2d2 ( 115622 )
      Guess you didn't get the memo. Anti-American is the new black. Hell even if you're an American it's open season on other Americans. After all, it could never be you they are talking about right? It must be those other idiots.

      I'm all for criticism where it's due but today's world is just plain crass. I'm no exception. I've become so damn jaded that every other statement is a complaint or sarcasm or just plain mean. I've even considered "bucking the system" and being nice! It's a sad state of affairs when bei
  • It's called Banality Bot, and the last sentence of that post is about to get its tubes cleaned.
  • This (Score:3, Funny)

    by Colin Smith ( 2679 ) on Thursday May 10, 2007 @04:05AM (#19064363)
    Sounds like the same algorithm most drivers use.
     
  • by Hammerself ( 560585 ) on Thursday May 10, 2007 @04:12AM (#19064397)
    I don't know much about AI, but is the idea of making predictions based on previous data some kind of breakthrough? I'm assuming this is just an application of some firmly established concepts in AI. When confronted with a redundant or repetitive data set, make predictions based on your experiences as to the nature of new elements in that set. I mean, aren't we paying these guys to tell machines how to recognize patterns? Is it news when they teach a machine to recognize patterns?

    I'd venture that the purpose of this post is to discuss Terminators, and Japanese robot secretaries, and to hail our coming robot overlords. This is just a guess based on a highly redundant data set I've been analyzing (rather than doing my work).
    • but is the idea of making predictions based on previous data some kind of breakthrough?

      That isn't the breakthrough, nor is it even necessarily AI. In fact, most things dubbed AI I would call CS. AI is more abstract, while the implementations are definitely CS. However, that is beside the point. It isn't necessarily simple to do the predictions, or even clear how to do them. Anyone can say this room is just like the last one (only mirrored, or with a wastebasket, or some non-trivial difference). How do you get a computer to recognise that, and do it in real time? Ok, humans do it

      • Yeah, I didn't mean to sound so dismissive. I find the concepts interesting, and it sounds like good progress. I hope they keep it up. The structure of these types of posts/articles is sometimes misconstrued by my tired brain. They give an update on some tech/science advance, making sure to justify the original research with applications of the larger field of study. Then I read: "Minor Advance in Field Paves Way for X and Y," and start talking smack. Don't mind me.
      • by Anonymous Coward
        What, exactly, is AI without a system to implement it?

        The answer is nothing. AI is tied 100% to system development.

        Also, since AI is subsumed by CS, I'm not sure what "most things dubbed AI I would call CS" means. I would change that to "Things dubbed AI, I would also call CS", but that would be redundant.
    • popfile [sourceforge.net] has been doing this for me for a while. It has inferred email classes 95,640 times with 99.71% accuracy.

      I wonder if the robots are using Bayes algorithms too?
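
      For the curious, the naive-Bayes trick behind classifiers like POPFile boils down to roughly this (a toy sketch of the idea, obviously not POPFile's actual code):

      import math
      from collections import Counter, defaultdict

      class ToyBayes:
          def __init__(self):
              self.word_counts = defaultdict(Counter)  # class -> word frequencies
              self.class_counts = Counter()            # messages seen per class

          def train(self, words, label):
              self.class_counts[label] += 1
              self.word_counts[label].update(words)

          def classify(self, words):
              total = sum(self.class_counts.values())
              best, best_lp = None, float("-inf")
              for label, n in self.class_counts.items():
                  wc = self.word_counts[label]
                  denom = sum(wc.values()) + len(wc) + 1
                  lp = math.log(n / total)                 # class prior
                  lp += sum(math.log((wc[w] + 1) / denom)  # smoothed word likelihoods
                            for w in words)
                  if lp > best_lp:
                      best, best_lp = label, lp
              return best

      nb = ToyBayes()
      nb.train(["free", "winner", "viagra"], "spam")
      nb.train(["meeting", "agenda", "minutes"], "work")
      print(nb.classify(["free", "winner"]))  # -> spam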
  • by ishmalius ( 153450 ) on Thursday May 10, 2007 @04:15AM (#19064415)
    The more you know about the context, and the more you know about the result of a given action, the less information you need from the environment (or from the other side of a communication channel). This is the Holy Grail of information theory and data compression, and it seems as if they are applying its principles here. Faster CPUs and better expert programming will likely produce some nice results in the near future.
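
    A toy example of the point, with made-up numbers: the better your prior model of what's around the corner, the fewer bits of fresh sensor data you need to pin it down.

    import math

    def entropy(dist):
        # Shannon entropy in bits of a discrete distribution.
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # What's around the corner, knowing nothing about the building:
    no_context = {"corridor": 0.25, "door": 0.25, "wall": 0.25, "open space": 0.25}

    # Same question, after learning the building is mostly repeated corridors:
    with_context = {"corridor": 0.85, "door": 0.10, "wall": 0.04, "open space": 0.01}

    print(f"bits needed with no context:   {entropy(no_context):.2f}")    # 2.00
    print(f"bits needed with good context: {entropy(with_context):.2f}")  # ~0.78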
  • I, for one, welcome our intelligently navigating robotic overlords.
  • "You could have two robots building their own maps," he says, "which then share them when they meet." This will allow a robot to make predictions based on data collected by its teammate.
    Sounds like the algorithm most Slashdotters use to avoid needing to RTFA.
  • by noddyxoi ( 1001532 ) on Thursday May 10, 2007 @05:32AM (#19064737)
    In my project, DATMO (Detection And Tracking of Moving Objects), I've made a tracker that follows people and can "guess" where they will appear next, using an industrial laser scanner. Check the video at http://miarn.sourceforge.net/videos/pv3d_peopletracking_and_scene.avi [sourceforge.net]
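
    In its simplest form that kind of prediction is just constant-velocity extrapolation between scans, something like this (simplified far below what the real tracker does):

    def predict_next(pos, vel, dt):
        # Extrapolate a tracked person's (x, y) position one scan interval ahead.
        return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

    # Last estimate: at (2.0, 1.5) m, moving (0.8, 0.0) m/s; the scanner runs at 10 Hz.
    print(predict_next((2.0, 1.5), (0.8, 0.0), 0.1))  # -> (2.08, 1.5)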
    • Re: (Score:3, Funny)

      by Farmer Tim ( 530755 )
      That's nothing, I've designed a tracker that followed people and could "guess" where they'd disappear, using an industrial laser cutter.

      I'm in desperate need of some new research assistants...
      • by PPH ( 736903 )
        If at all possible, please try to include a shark.
        • I've consulted some marine biologists, but there's some disagreement about where a shark's head ends and its body begins. Accuracy is important, because if I attach a freaking laser to a shark's neck I'll be a laughing stock (evil science is like porn: the difference between glory and ridicule is only three inches).
  • This almost sounds like some variant of reinforcement learning (the bit with confidence scores). Why do they never post real algorithm details? :-(
  • Overkill. (Score:5, Funny)

    by jpellino ( 202698 ) on Thursday May 10, 2007 @05:46AM (#19064811)
    "I'd say it'd be useful for robotic secretaries new on the job"

    As they get chased around the desk by their robot bosses? It's pretty much left, left, left, left... etc...

  • by Anonymous Coward
    Having lived in Japan and other Asian countries, I would expect the Japanese to have robot SWAT teams long before the US. I note that those most likely to make inexperienced remarks about America vs. the rest of the world are either Americans with limited or no experience of non-American countries, cultures or languages, or Europeans who have equally little actual first-hand knowledge of America.
  • "You could have two robots building their own maps," he says, "which then share them when they meet." This will allow a robot to make predictions based on data collected by its teammate.

    Let's throw them a curveball, make one of them white & the other black, see what happens.
  • where it is at all times, it knows this because it knows where it isn't. By subtracting where it is from where it isn't or where it isn't from where it is, whichever is greater, it obtains a deviation.
  • FTA:
    Davison and colleagues are designing endoscopic surgical instruments with SLAM abilities.

    Does this sound painful to anyone else?
  • Navigation is one of the biggest challenges faced by mobile robots. One popular technique, dubbed SLAM (simultaneous localisation and mapping), involves having a robot build a map of the local area, whilst also tracking its position
    SLAM is a problem not a technique for navigation. Navigation involves more than just doing SLAM because you also have to do things like obstacle avoidance.
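
    In sketch form (stand-in stubs, nothing from TFA), the loop looks roughly like this, with SLAM as only one ingredient:

    def slam_update(scan):
        # Stand-in for a SLAM step: returns an estimated pose and a map.
        return {"x": scan["odom_x"], "y": scan["odom_y"]}, {}

    def avoid_obstacles(heading_deg, scan):
        # The reactive layer SLAM doesn't give you: veer away if something is close ahead.
        return heading_deg + 90 if scan["front_range_m"] < 0.5 else heading_deg

    def navigate_step(scan, goal_heading_deg):
        pose, grid_map = slam_update(scan)   # localisation + mapping (map unused in this toy)
        # (a global path planner would also go here)
        return pose, avoid_obstacles(goal_heading_deg, scan)

    scan = {"odom_x": 1.2, "odom_y": 3.4, "front_range_m": 0.3}
    print(navigate_step(scan, goal_heading_deg=0))  # obstacle ahead, so the commanded heading becomes 90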
  • "wife back seat driver" robot?

    Robot 1: I'm going left.

    Robot 2: No, you idiot! Go right!
