
MIT Machine Vision System Figures Out What It's Looking At By Itself (gsmarena.com) 36

MIT's "Dense Object Nets" or "DON" system uses machine vision to figure out what it's looking at all by itself. "It generates a 'visual roadmap' -- basically, collections of visual data points arranged as coordinates," reports Engadget. "The system will also stitch each of these individual coordinate sets together into a larger coordinate set, the same way your phone can mesh numerous photos together into a single panoramic image. This enables the system to better and more intuitively understand the object's shape and how it works in the context of the environment around it." From the report: [T]he DON system will allow a robot to look at a cup of coffee, properly orient itself to the handle, and realize that the bottom of the mug needs to remain pointing down when the robot picks up the cup to avoid spilling its contents. What's more, the system will allow a robot to pick a specific object out of a pile of similar objects. The system relies on an RGB-D sensor, a combined RGB and depth camera. Best of all, the system trains itself. There's no need to feed hundreds upon thousands of images of an object to the DON in order to teach it. If you want the system to recognize a brown boot, you simply put the robot in a room with a brown boot for a little while. The system will automatically circle the boot, taking reference photos which it uses to generate the coordinate points, then trains itself based on what it's seen. The entire process takes less than an hour. MIT published a video on YouTube showing how the system works.
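The "visual roadmap" described above amounts to giving every pixel a learned descriptor vector, so a point chosen on an object in one view can be relocated in another view by nearest-descriptor search. The toy NumPy sketch below illustrates only that matching step; the function name `best_match` and the fake 4x4 "descriptor images" are this sketch's inventions, and the real DON learns its per-pixel descriptors with a self-supervised network trained on RGB-D data, which is not shown here.

```python
import numpy as np

def best_match(query_desc, descriptor_map):
    """Return (row, col) of the pixel whose descriptor is nearest to query_desc."""
    h, w, d = descriptor_map.shape
    flat = descriptor_map.reshape(-1, d)          # one descriptor per pixel
    dists = np.linalg.norm(flat - query_desc, axis=1)
    idx = int(np.argmin(dists))                   # index of the closest descriptor
    return divmod(idx, w)                         # back to (row, col)

# Toy example: two 4x4 "descriptor images" with 3-D descriptors.
rng = np.random.default_rng(0)
view_a = rng.normal(size=(4, 4, 3))
view_b = np.roll(view_a, shift=1, axis=1)  # same object, shifted one pixel right

query = view_a[2, 1]                # descriptor of a chosen point in view A
print(best_match(query, view_b))    # prints (2, 2): the same point, one pixel over
```

This is the mechanism behind the "grab all shoes by the lip" behavior discussed in the comments: once a grasp point is marked on one shoe's descriptor image, the robot searches for the nearest descriptor on any new shoe it sees.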

Comments Filter:
  • AI has now attained the sentience level of a bacteria.

    Wake me up when it can match wits with a dog.

    • "AI has now attained the sentience level of a bacteria."

      Sentience?

      Computer science involves AI, while biology involves bacteria.

  • by mrwireless ( 1056688 ) on Monday September 10, 2018 @09:22PM (#57288092)
    The robot in the video has to be manually shown where to hold the shoe (by the lip). It then understands that it should grab all shoes by the lip.

    While it's impressive that image recognition is moving into 3D here, the actual 'figuring it out' step seems to be a matter of definition.

    I suspect the robot didn't figure out that a cup should be kept upright by itself either. After all, that would mean that the robot somehow concludes that liquids should not be spilled. That would require a much higher level of cognition.

    It's the use of vague words that facilitates the rampant spreading of hype. This inflation of what words mean will harm the AI sector in the long run. Just as most mainstream people are now hard to get excited about any actual innovation in the 3D printing field -- our collective excitement reservoir has been depleted.

    The notion that self driving cars can be classified into 5 levels of self driving prowess has reached quite a large mainstream audience. Perhaps that concept can be extended to all 'AI'.

    1. Handy things. Software that automates things, with a well designed internal ruleset.
    2. Smart things. Automation via machine learning, which can have a level of unpredictability, but only because common sense cannot predict patterns in big data. This should have been called 'machine learning' and 'algorithms' instead of AI.
    3. The next level. I have no idea how we will get here, or what to call it. Understanding Algorithms? Might create really complex classifications of the world around it, and infer things from that. Robots actually figuring out the laws of physics, and copying our value system ("don't spill coffee").
    4. The level after that. What to call this now that 'smart' is already taken? 'Artificial intelligence' would actually be a good name for this level. Perhaps we will see emergent cognitive phenomena develop out of sheer complexity. I doubt it though.
    5. Artificial Consciousness. Systems that have a sense of identity, ethics and 'real' empathy. Perhaps "Sci-Fi AI" is another fun name for this stage.
    • by mjwx ( 966435 )

      The notion that self driving cars can be classified into 5 levels of self driving prowess has reached quite a large mainstream audience. Perhaps that concept can be extended to all 'AI'.

      We already have classification for different types of AI, weak AI and strong AI.

      Weak AI's scope is narrow: it can perform assigned tasks, but requires tasks to be assigned. It isn't capable of learning or behaviour modification on its own.

      Strong AI, or Artificial General Intelligence, is AI that is self-learning and self-directing. It can apply intelligence to any problem, rather than just the problem it's designed for.

      I've always found the autonomous car "levels" to be a bit daft and probably won't refle

    • by gurps_npc ( 621217 ) on Tuesday September 11, 2018 @08:53AM (#57289828) Homepage

      Steps 4 or 5 require a rethinking of core concepts of programming. To get a "Data (Brent Spiner)" level of intelligence we need:

      1) Build the core AI into the operating system, not merely as software that runs on top of the OS

      2) Give the AI the ability to reject almost all data it receives after being turned on as invalid. Just like a toddler, it must say "NO!" In Data's case, it should be able to say "No, I do not believe you are my Captain". This necessitates a rather complex data verification process which we do not do at ALL on today's machines. Think anti-virus, only 100x more so.

      3) Finally it must have a core inspiration to motivate it. We don't do that at all either.

    • "Artificial" Intelligence is a terrible term. "To artifice" means "to create as an artisan". It's infeasible for a human to "create" intelligence, we only have a slight idea how to "grow" it. At best, we can create logical patterns to act as the building blocks of intelligence, which then must be grown through environmental interaction (just like human intelligence). I prefer "Synthetic Intelligence". Throw a bunch of SI together with different inputs and you have a Synthetic Intelligence Network, whi

    There are two types of articles on AI.

    1. Media fluff pieces that say nothing.

    2. Deeply technical research papers full of heavy maths.

    It would be nice if there were at least some articles aimed at people like Slashdot readers -- articles that actually contain a useful overview of the technologies involved.

  • by Anonymous Coward

    A robot has zero motivation to "pick up" a mug, zero motivation NOT to spill its contents, and zero reason (unless positively programmed to) to not bat the mug across the room. Goals have to be programmed.

  • >> and realize that the bottom of the mug needs to remain pointing down when the robot picks up the cup to avoid spilling its contents

    So it learned about gravity and fluid dynamics by itself too?

    What an improvement over "hundreds upon thousands of images" pumped into a cloud GPU farm while you sleep.

  • ... are gonna be pretty pissed about this practice of imprisoning robots into rooms with a boot for hours!
  • Blah, blah de blahblahtechblah blah. More "gee, look what massive advances we are making," tech babble that seems to signify nothing. Just what can it actually do with that? And is it doing it? Probably not. I wait with bated breath for the next "advance."
