MIT Machine Vision System Figures Out What It's Looking At By Itself (gsmarena.com) 36
MIT's "Dense Object Nets" or "DON" system uses machine vision to figure out what it's looking at all by itself. "It generates a 'visual roadmap' -- basically, collections of visual data points arranged as coordinates," reports Engadget. "The system will also stitch each of these individual coordinate sets together into a larger coordinate set, the same way your phone can mesh numerous photos together into a single panoramic image. This enables the system to better and more intuitively understand the object's shape and how it works in the context of the environment around it." From the report: [T]he DON system will allow a robot to look at a cup of coffee, properly orient itself to the handle, and realize that the bottom of the mug needs to remain pointing down when the robot picks up the cup to avoid spilling its contents. What's more, the system will allow a robot to pick a specific object out of a pile of similar objects. The system relies on an RGB-D sensor, which combines an RGB camera with a depth camera. Best of all, the system trains itself. There's no need to feed hundreds upon thousands of images of an object to the DON in order to teach it. If you want the system to recognize a brown boot, you simply put the robot in a room with a brown boot for a little while. The system will automatically circle the boot, taking reference photos which it uses to generate the coordinate points, then trains itself based on what it's seen. The entire process takes less than an hour. MIT published a video on YouTube showing how the system works.
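The "visual roadmap" described above boils down to dense per-pixel descriptors: the network maps every pixel to a vector, and the same physical point on an object gets a similar vector across views. A minimal sketch of the matching step, assuming descriptor maps are already computed as NumPy arrays (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def best_match(desc_a, pixel, desc_b):
    """Find the pixel in image B whose descriptor is closest to the
    descriptor at `pixel` in image A.

    desc_a, desc_b: dense descriptor maps of shape (H, W, D),
    one D-dimensional vector per pixel.
    pixel: (row, col) tuple into desc_a.
    Returns the (row, col) of the nearest descriptor in desc_b.
    """
    target = desc_a[pixel]                      # (D,) vector for the query pixel
    dist = np.linalg.norm(desc_b - target, axis=-1)  # (H, W) distance map
    return np.unravel_index(np.argmin(dist), dist.shape)
```

Given such correspondences, grasping "the handle" reduces to finding, in a new view, the pixel whose descriptor matches a reference pixel marked on the handle once.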
Re: (Score:2)
Yes to this.
It's also counter to common sense. If a white person does something, their race is rarely mentioned at all. "Florida Man" is the meme, not "Florida White Man". If a story doesn't mention race, and you find out the person is black, that's not "Media Code Words" -- it's just having fair standards for reporting, but people get upset because they've become *used* to race being mentioned in, and only in, stories where the person isn't white.
Re: (Score:2)
In fact, if a story omits all mention of race, almost all the time you can conclude the criminal was white, otherwise race would have been mentioned. Attempts at uniform reporting isn't the bias here, it's that race is only mentioned at all when the person isn't white.
Re: (Score:2)
In fact, if a story omits all mention of race, almost all the time you can conclude the criminal was white, otherwise race would have been mentioned. Attempts at uniform reporting isn't the bias here, it's that race is only mentioned at all when the person isn't white.
Seriously? Maybe if someone is still at large, and reality needs to kick in
Otherwise, it's a "youth".
Re: (Score:2)
Re: (Score:2)
well, unless it's a cop who shoots/assaults/arrests whatever a black person. then they make sure to point it out.
Syco DON (Score:2)
So if the system goes apeshit, would you call it, SYCO-DON?
Bacteria (Score:2)
AI has now attained the sentience level of a bacteria.
Wake me up when it can match wits with a dog.
Re: (Score:2)
"AI has now attained the sentience level of a bacteria."
Sentience?
Computer science involves AI, while biology involves bacteria.
Define "figures it out" (Score:5, Insightful)
While it's impressive that image recognition is moving into 3D here, the actual 'figuring it out' step seems to be a matter of definition.
I suspect the robot didn't figure out that a cup should be kept upright by itself either. After all, that would mean that the robot somehow concludes that liquids should not be spilled. That would require a much higher level of cognition.
It's the use of vague words that facilitates the rampant spreading of hype. This inflation of what words mean will harm the AI sector in the long run. Just as it's now difficult to get most mainstream people excited about any actual innovation in the 3D printing field; our collective excitement reservoir has been depleted.
The notion that self driving cars can be classified into 5 levels of self driving prowess has reached quite a large mainstream audience. Perhaps that concept can be extended to all 'AI'.
1. Handy things. Software that automates things, with a well designed internal ruleset.
2. Smart things. Automation via machine learning, can have a level of unpredictability, but only because common sense cannot predict patterns in big data. This should have been called 'machine learning' or 'algorithms' instead of AI.
3. The next level. I have no idea how we will get here, or what to call it. Understanding Algorithms? Might create really complex classifications of the world around it, and infer things from that. Robots actually figuring out the laws of physics, and copying our value system ("don't spill coffee").
4. The level after that. What to call this now that 'smart' is already taken? 'Artificial intelligence' would actually be a good name for this level. Perhaps we will see emergent cognitive phenomena develop out of sheer complexity. I doubt it though.
5. Artificial Consciousness. Systems that have a sense of identity, ethics and 'real' empathy. Perhaps "Sci-Fi AI" is another fun name for this stage.
Re: (Score:3)
The notion that self driving cars can be classified into 5 levels of self driving prowess has reached quite a large mainstream audience. Perhaps that concept can be extended to all 'AI'.
We already have classification for different types of AI, weak AI and strong AI.
Weak AI's scope is narrow: it can perform assigned tasks but requires tasks to be assigned. It isn't capable of learning or behaviour modification on its own.
Strong AI or Artificial General Intelligence is AI that is self-learning and self-directing. It can apply intelligence to any problem, rather than just the problem it's designed for.
I've always found the autonomous car "levels" to be a bit daft and probably won't refle
Re:Define "figures it out" (Score:4, Interesting)
Steps 4 or 5 require a rethinking of core concepts of programming. To get a "Data (Brent Spiner)" level of intelligence we need:
1) Build the core AI into the operating system, not merely as software that runs on top of the OS.
2) Give the AI the ability to reject almost all data it receives after being turned on as invalid. Just like a toddler, it must say "NO!" In Data's case, it should be able to say "No, I do not believe you are my Captain". This necessitates a rather complex data verification process which we do not do at ALL on today's machines. Think Anti-Virus only 100x more so.
3) Finally, it must have a core inspiration to motivate it. We don't do that at all either.
Re: (Score:3)
"Artificial" Intelligence is a terrible term. "To artifice" means "to create as an artisan". It's infeasible for a human to "create" intelligence, we only have a slight idea how to "grow" it. At best, we can create logical patterns to act as the building blocks of intelligence, which then must be grown through environmental interaction (just like human intelligence). I prefer "Synthetic Intelligence". Throw a bunch of SI together with different inputs and you have a Synthetic Intelligence Network, whi
Zero real information (Score:2)
There are two types of articles on AI.
1. Media fluff pieces that say nothing.
2. Deeply technical research papers full of heavy maths.
It would be nice if there were at least some articles aimed at people like Slashdot readers that actually contain a useful overview of the technologies.
Re: (Score:2)
The video was a good attempt to get between those two extremes. For similar videos, check out Two Minute Papers on Youtube: https://www.youtube.com/channe... [youtube.com]
I call BullSht (Score:1)
A robot has zero motivation to "pick up" a mug, zero motivation NOT to spill its contents, and zero reason (unless positively programmed to) to not bat the mug across the room. Goals have to be programmed.
self learner (Score:2)
>> and realize that the bottom of the mug needs to remain pointing down when the robot picks up the cup to avoid spilling its contents
So it learnt about gravity and fluid dynamics by itself too?
Re: (Score:2)
Human kids also need to be told to keep the cup upright for the first 10 years, so it's not a big deal.
less than an hour per one brown boot (Score:2)
What an improvement over "hundreds upon thousands of images" pumped into a cloud GPU farm while you sleep.
Robot rights groups.... (Score:1)
What? Again? (Score:2)