AI Power Robotics Hardware

New Hardware Needed For Future Computational Brain 143

Posted by timothy
from the why-not-wipe-and-reinstall-on-regular-brains? dept.
schliz writes "Salk Institute director Terrence Sejnowski has called for more power-efficient, parallel computing architectures to support future robots that could keep up with the human brain. While human brains have 100 billion neurons and require only 20 watts of power, today's most powerful supercomputer, the 2.57 PFLOPS Chinese Tianhe-1A, requires four megawatts and still has trouble with vision, motion, and 'common sense,' he said."
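The summary's figures make the efficiency gap easy to quantify. A quick back-of-the-envelope sketch (taking the quoted numbers at face value; the brain has no agreed-upon FLOPS equivalent, so only raw power draw is compared directly):

```python
# Figures quoted in the summary above.
brain_watts = 20            # human brain, ~100 billion neurons
tianhe_watts = 4e6          # Tianhe-1A power draw: four megawatts
tianhe_flops = 2.57e15      # Tianhe-1A peak: 2.57 PFLOPS

# How many times more power the supercomputer draws than the brain.
power_ratio = tianhe_watts / brain_watts

# Conventional efficiency metric for the machine side only.
flops_per_watt = tianhe_flops / tianhe_watts

print(f"Tianhe-1A draws {power_ratio:,.0f}x the brain's power")
print(f"Tianhe-1A efficiency: {flops_per_watt / 1e6:.1f} MFLOPS per watt")
```

So the machine burns roughly 200,000 times the brain's power budget while, as Sejnowski notes, still failing at vision and common sense.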
This discussion has been archived. No new comments can be posted.
  • by lawnboy5-O (772026) on Friday March 11, 2011 @06:04AM (#35451388)
It's interesting that you think epistemology actually plays a part for the flipping computer.

I could only agree if we are speaking of a computer that intends - by and within its design - to learn like, as well as act like, us in a mature state. I agree this may be the purest way to get AI to resemble the human condition (for lack of a better way to put it), but executing on this path is entirely a red herring.

I would say that trying to understand and emulate the learning process is 10 to 100 orders of magnitude over the effort of just getting the damn thing to work at a common, layman intellectual level.

We have no real understanding of how we learn, empirically and scientifically speaking - we are only beginning to understand this now. The understanding of this process changes rapidly, and while we think we have momentum currently, more major unknowns exist. In fact, we don't know what we don't know at this point.

It's been debated as long as man has had the ability to, however... but even throughout thousands of years of philosophical deep diving, it wasn't until the Age of Enlightenment that Kant finally got everyone on board with "Epistemology First" in our understanding of our world - we must first understand how we learn about this place before we can debate the ontological status of the world around us and have any meaningful debate about its metaphysics. Theocratic or not, this rings true - and it's only added more complexities to the struggle of what we know about ourselves.

And now you want to build a robot to approach this condition... Insanity. The effort is pure insanity and full of hubris. Let's work on simple tasks, and try to get those right, first. And how about an honest look at who the fuck we are as emotional, sentient, chemically ridden and wickedly imperfect machines ourselves, before we attempt to perfect it in a model.

The only real saving grace is that this effort could actually be such a mirror for mankind, and accelerate our understanding of ourselves, if only slightly.
  • by MattSausage (940218) on Friday March 11, 2011 @01:07PM (#35454734)
Interview with Henry Markram [discovermagazine.com] This is the guy the article was about, but for the life of me I can't find the actual article where they describe the brain "lighting up like a Christmas tree," though I remember that exact phrase. Still, this describes his work pretty well, so it might be worth a read.
