
New Hardware Needed For Future Computational Brain

schliz writes "Salk Institute director Terrence Sejnowski has called for more power-efficient, parallel computing architecture to support future robots that could keep up with the human brain. While human brains had 100 billion neurons and required only 20 Watts of energy, today's most powerful supercomputer, the 2.57 PFlop Chinese Tianhe-1A, requires four megawatts, and still has trouble with vision, motion, and 'common sense,' he said."

  • by Anonymous Coward

    LOL! "can't approach the capabilities of a common honey bee" might be more accurate.

    • by inpher ( 1788434 ) on Friday March 11, 2011 @06:14AM (#35451192)
      I think one mistake (besides the power requirements) that people make is to assume "if you build it, it will work from the start." The human brain needs over ten years to develop even mediocre common sense and awareness of its surroundings. We should not expect to just build the hardware, install the software, flip a switch, and have the machine fully functional even in its first year. A learning period for the machine is to be expected (though it might be accelerated to some degree) if it is going to think the way a human does.
      • Re: (Score:3, Interesting)

        by lawnboy5-O ( 772026 )
        It's interesting that you think epistemology actually plays a part for the flipping computer.

        I could only agree if we are speaking of a computer that is intended - by and within its design - to learn like us, as well as act like us, in a mature state. I agree this may be the purest way of getting AI to resemble the human condition (for lack of a better way to put it), but executing on this path is entirely a red herring.

        I would say that trying to understand and emulate the learning process is 10 to 100
        • by MrKaos ( 858439 )

          It's interesting that you think epistemology actually plays a part for the flipping computer.

          Wellll that's only half the story. You'll get very little in the way of logic unless it's a flopping computer as well.

          Get it? Logic gates! Flip-flop! ...bwahahahahaha

          I'm sure there was a propagation delay before people got that one... da bom tish

        • by MrKaos ( 858439 )

          The only real saving grace is that this effort could actually be such a mirror for mankind, and accelerate our understanding of ourselves, if only slightly.

          Maybe all we will discover is that if you have a *really* big network of interconnected nodes functioning in parallel and a good handling of metastable states you get a reasonable facsimile of intelligence. Maybe that's all intelligence is, after all, humans provide a pretty good facsimile of intelligence - but they aren't very logical.

      • This is a valid point. There is indeed a learning factor for the brain... at least some aspects of the brain.

        Our brains are extremely inaccurate. Our perceptions are always relative and demonstrably imaginative. There is a lot more to what we think we see and know versus what we actually see and know.

        The thing with computers as we currently use and design them is that they are dependent on accuracy. (I recall when DRAM was coming into existence... people were flipping out over the idea that this type of

          So our brains are really quantum computers that may be on, off, or both.

          It explains fuzzy memories, and why each person sees an event differently.

          As for common sense, some people never learn that at all.

          • Our brains are analog. We don't compute with ons and offs. We compute with pulse frequencies and threshold levels.
            • That is also assuming that consciousness is computation in the first place, which is nowhere near certain.

        • There was an article in Discover last year sometime describing the different techniques computer scientists were using to try to emulate/simulate a human brain. One of the more interesting is one that actually used simple software to create several thousand neurons, each able to communicate with thirty or so other neurons, and they made the pathways changeable.

          Obviously I'm simplifying and paraphrasing a year old article here, but one of the most intriguing things about this one setup is not only that it
          • I too find that interesting. Link?

           Those neurons were probably implemented as perceptrons, and were probably distributed in multiple layers with feedback between them, so that the output was an input for the perceptrons on an earlier level; those perceptrons themselves output info into the later layers, and so you get those remaining waves.
          • by Dr Max ( 1696200 )
            Sounds like neural network programming to me, which has been around for quite a long time. Many people use it today; Google has their finger in the pie, and I seem to recall the US Army getting it to recognize different models of tanks. The trick is you have inputs and outputs, then a network of connections and nodes in between. When the computer gets an input it finds any pathway to the best answer, then continually refines the path using a survival-of-the-fittest type tactic. Eventually when enough good p
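
            A minimal sketch of that pathway-refinement idea in plain Python (a toy network whose weights are improved by random perturbation, keeping a change only when it reduces the error - a crude survival-of-the-fittest tactic, purely illustrative and not any particular library's API):

                import math
                import random

                def forward(weights, x):
                    # one hidden layer of three tanh units feeding one tanh output
                    w_in, w_out = weights
                    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_in]
                    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)))

                def error(weights, samples):
                    return sum((forward(weights, x) - y) ** 2 for x, y in samples)

                # XOR-style inputs and desired outputs
                samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
                weights = ([[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)],
                           [random.uniform(-1, 1) for _ in range(3)])
                best = error(weights, samples)

                for _ in range(20000):
                    # mutate the weights; keep the mutant only if it does better
                    w_in, w_out = weights
                    cand = ([[w + random.gauss(0, 0.1) for w in row] for row in w_in],
                            [w + random.gauss(0, 0.1) for w in w_out])
                    e = error(cand, samples)
                    if e < best:
                        weights, best = cand, e

                print("final error:", best)
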
      • by Bengie ( 1121981 )
        I think it was said that at something like the age of 3, kids start to become self-aware. After 3 years of running, SKYNET may learn to take over the internet and recognize its own reflection.
          Whoever said that never had children; an 18-month-old toddler is very self-aware: "I want ______ !!!!!" By the time they're two they are ego-maniacal dictators.
        Well, the first step in that is actually understanding how the human brain works. Contrary to popular belief, we know almost nothing about it.

    • by Dr Max ( 1696200 )
      Most humans could never approach the capabilities of a common calculator.
      • by PhilHibbs ( 4537 )

        This actually raises an interesting point that I've been thinking about recently. People imagine that an AI will also be a mathematical genius compared to us, because computers can calculate numbers quickly. Not necessarily so. One of the reasons we are slow with numbers is we keep vast amounts of related information along with the number. If I ask you to think of a number and tell me what it is, you might say "seven", but in your mind you might also be imagining the colours of the rainbow, the sides of a f

        • With an AI, the solution would be a hybrid design.

          You have your ANN, which is the seat of the AI's consciousness, but you attach an ordinary sequential computer (running ordinary software) to some of its motor and sensory neurons.

          The idea here, is that the ANN can control the "dumb" sequential processing computer for such answers. It can consciously input data via the motor neurons, then receive sensory stimulation back from it. This *WOULD* make the AI into a mathematical prodigy, at least compared to pure
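
          A rough sketch of that hybrid arrangement in Python (the NeuralController class is a hypothetical stand-in for the ANN; the point is only the motor-out / sensory-in loop around an exact, sequential co-processor):

              def calculator(op, a, b):
                  # the "dumb" sequential co-processor: exact arithmetic, no learning
                  if op == "add":
                      return a + b
                  if op == "mul":
                      return a * b
                  raise ValueError("unknown op")

              class NeuralController:
                  # stand-in for the ANN "seat of consciousness" (hypothetical)
                  def decide_query(self, observation):
                      # "motor" output: a request the co-processor can execute
                      a, b = observation
                      return ("mul", a, b)

                  def receive(self, result):
                      # "sensory" input: the exact answer fed back to the network
                      print("perceived exact result:", result)

              brain = NeuralController()
              op, a, b = brain.decide_query((12345, 6789))
              brain.receive(calculator(op, a, b))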

  • by Kokuyo ( 549451 ) on Friday March 11, 2011 @06:10AM (#35451184) Journal

    That most powerful supercomputer, I'd assume, has not been tuned to actually work like a brain would.

    This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures. I think if they truly want to compare, they'll need to create an environment that is enhanced for the tasks we want it to process.

    Nobody expects the human brain to compute integer and floating point stuff at the same efficiency either, right?

    • "That most powerful supercomputer, I'd assume, has not been tuned to actually work like a brain would"

      I would *Love* to see that reduced to machine code
      • by Kokuyo ( 549451 )

        Would surely be interesting, wouldn't it?

        • As an undergrad philosophy student, I worked on the "reductionism" of physics theories (a subset of simple Newtonian mechanics) to sentential logical statements - presumably as an effort to map them to computer programming.

          The task was daunting for an undergrad... and what we ended up with was not so intuitive. I can only imagine mapping the depth and breadth of the brain - and in fact would postulate that it cannot be done with any adherence to soundness and validity using today's digital hierarchy.
      • by pstils ( 928424 )
        and here fits my car analogy... the world's fastest production vehicle, the Bugatti Veyron, is rubbish at getting up stairs
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I know a way, but it takes about 18 years plus 9 months and a male and a female participant...

      Also, what you end up with is usually an unemployed intelligence looking for something to do. And they don't always succeed. It's not obvious to me that we need more human intelligences. Maybe we need more and faster idiot savant machines, ones that excel at mundane things like driving road vehicles, doing laundry, loading dishwashers, sorting bills in chronological order. The boring stuff.

      • by TheLink ( 130905 )
        Yeah we already have billions of intelligent nonhuman entities. They're mostly in farms.

        We don't treat them well (we eat and exploit most of them). Why should we create more? So that we can exploit them too?

        If that's the reason we'd just be causing more evil in the world than good.

        Whereas if we instead used the tech to augment humans, we'd have about the same amount of evil and good. Or at least not increase the evil so rapidly.

        For similar reasons we should not create animal-human hybrids. We're not ready
        • by PhilHibbs ( 4537 )

          Every animal, every organism, on the planet exploits other organisms. Does that make all life evil? Why are we so different, that the way we treat other life as a resource makes us evil? Perhaps the most effective evolutionary adaptation that life has ever stumbled across is to be domesticatable, tasty, and/or useful to humans. It's a guaranteed win.

      • by mcvos ( 645701 )

        It's not obvious to me that we need more human intelligences.

        I thought the AI community had abandoned that idea ages ago. We already have plenty of humans. We don't need computers to do the things we're good at, we need them to do the things we're bad at.

    • Humans are actually quite good at floating point math as embodied by ballistic trajectories --- watch outfielders run straight to where a ball will be when it comes down rather than following a curve, or a marksman who can consistently shoot coins or aspirin out of the air (for the former always positioning the bullet hole so that the coin will be useful as a watch fob).

      Integer math as expressed in the real world can be quite good too --- I knew one teller who could take a fresh stack of $100 bills and zip

    • by Hatta ( 162192 )

      This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.

      Isn't that kind of the point of the article? To get around this need for all the computational power, we need hardware that's better at probabilistic analog computations, and to run it all in parallel.

    • A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.

      Tried and failed (which was to be expected). If you try to build code that follows the same type of principles that biological functions do, most of your computing power goes into finding stuff that can react with other stuff. It was a kick to write, though.

  • by Kensai7 ( 1005287 ) on Friday March 11, 2011 @06:16AM (#35451198)

    Instead of trying to emulate the human brain, which at the moment is unattainable, we should concentrate on efficiency paradigms of smaller neural ensembles. Once we achieve efficiency we can scale. Why haven't we learned anything from the CPU industry? They didn't start from 19nm manufacture. Why should we?

    We shouldn't hurry. AI comparable to a human person can be achieved, but it is still a long way until we reach it.

    • Why haven't we learned anything from the CPU industry?

      So you're saying AI is all in the branding, and that we should ship AI with artificial brain lobules disabled to reduce manufacturing costs?

      • by Kensai7 ( 1005287 ) on Friday March 11, 2011 @06:25AM (#35451246)

        The author talks about the honeybee. Let's emulate the honeybee first. Create a robot that can achieve what the social insect "bee" can achieve.

        Lobules -> Lobes -> Whole Brain

        • Swarm Intelligence [wikipedia.org] would be a good place to start. Path-finding/graph search is only one part of AI though. It's very useful, but it's not necessarily the best method to solve all types of problem.
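
          As a concrete example of the graph-search piece mentioned above, a minimal breadth-first path finder in Python (toy graph and node names are made up, purely illustrative):

              from collections import deque

              def shortest_path(graph, start, goal):
                  # breadth-first search over an adjacency-list graph
                  queue = deque([[start]])
                  seen = {start}
                  while queue:
                      path = queue.popleft()
                      node = path[-1]
                      if node == goal:
                          return path
                      for neighbour in graph.get(node, []):
                          if neighbour not in seen:
                              seen.add(neighbour)
                              queue.append(path + [neighbour])
                  return None

              hive_map = {"hive": ["meadow", "orchard"],
                          "meadow": ["orchard", "pond"],
                          "orchard": ["pond"],
                          "pond": []}
              print(shortest_path(hive_map, "hive", "pond"))   # ['hive', 'meadow', 'pond']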

        • by mangu ( 126918 )

          The honeybee is interesting because its complexity is at about the limit of what personal computers can simulate today.

          In rough order of magnitude terms, a honeybee brain has a million neurons with a thousand synapses each. Assume a neuron fires a hundred times per second. In the standard model of a neuron, each synapse can be simulated by a floating point multiplication and one addition.

          Doing the math, a computer simulation of a honeybee brain in real time would need 100 gigaflops, which is in the range o
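
          Spelled out, the arithmetic above (rough order of magnitude only; one multiply-accumulate per synapse event is the standard-model assumption stated in the comment, not a measured figure):

              neurons = 1e6            # honeybee-scale neuron count
              synapses_per_neuron = 1e3
              firing_rate_hz = 100     # assumed average firing rate
              ops_per_synapse = 1      # one multiply-accumulate per synapse event

              flops = neurons * synapses_per_neuron * firing_rate_hz * ops_per_synapse
              print(f"{flops:.0e} FLOPS")   # 1e+11, i.e. roughly 100 gigaflops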

  • This has applications in robot pets. An artificial mouse brain sounds a lot easier than a human AI.
  • by sakdoctor ( 1087155 ) on Friday March 11, 2011 @06:16AM (#35451202) Homepage

    Each pint of beer contains around 600 kilojoules of energy, which can power your 20-watt brain for many hours, and give you trouble with vision, motion, and common sense.

  • The reason this is the case is that current AI simulates a neural network as a program. You would have to produce chips that were actual neural networks; the problem, however, is the interconnects, which are an order of magnitude more complicated than anything we can currently create. In fact the brain is quite slow, but its organization is what makes it powerful.

  • Watt is a unit of power, not energy.
  • requires four megawatts, and still has trouble with vision, motion, and 'common sense

    I have known many people who have ~100 billion or so neurons that consume 20 watts of power, but they also have plenty of trouble with "common sense". Actually, they might be less sensible in some areas than 100 KB of C code running on a puny little Pentium 4.

  • by gweihir ( 88907 ) on Friday March 11, 2011 @06:37AM (#35451294)

    The significant number is interconnect. In that area electronics is several orders of magnitude behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.

    Side note: Comparing neurons and transistors, as is often done in the popular (but not very knowledgeable) press, is completely invalid as well. You need to compare each neuron more to a microcontroller.

    • by whimdot ( 591032 )
      Is it the orders of magnitude that are the problem or the magnitude of order?
    • by mangu ( 126918 )

      The significant number is interconnect. In that area electronics is several orders of magnitude farther behind. Far enough that is seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.

      Hint: simulating is not the same as duplicating. A digital computer substitutes high-speed communication for physical interconnections. Think of serial vs parallel. If you simulate neurons as objects located in memory, every neuron is connected to every other; they just cannot all communicate at the same time.

      Considering the relatively slow rate at which neurons fire, that problem isn't so insurmountable as it seems at first.
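
      A minimal sketch of that "neurons as objects in memory" point (toy Python, not a real simulator): every neuron can reference every other, but updates run serially, trading parallel wiring for raw sequential speed.

          import random

          class Neuron:
              def __init__(self):
                  self.potential = 0.0
                  self.synapses = []          # (target_neuron, weight) pairs

              def maybe_fire(self, threshold=1.0):
                  if self.potential >= threshold:
                      for target, weight in self.synapses:
                          target.potential += weight
                      self.potential = 0.0

          # fully interconnected toy network held in ordinary memory
          neurons = [Neuron() for _ in range(100)]
          for n in neurons:
              n.synapses = [(m, random.uniform(-0.1, 0.2)) for m in neurons if m is not n]

          neurons[0].potential = 1.5          # inject a stimulus
          for _ in range(10):                 # serial "time steps"
              for n in neurons:               # one neuron at a time, not in parallel
                  n.maybe_fire()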

    • You can't just focus on neurons and their connections. There are 10x more glial cells in the brain, and more and more research is discovering that they not only perform their basic role of supporting metabolism and structure, they also communicate with each other, communicate with neurons, and are an integral part of cognition.

      In addition, they are finding that chemical communication between cells is not point to point contained within the synapse only. Cells are swimming in chemical and electrical communic
  • OK, you can do this with an FPGA, but it requires something external to the gate array to reset the logic gates - the array can't rewire itself. Biological neural systems can rewire themselves, and not only that - they can do it *while they're running*. Obviously you could have this on-the-fly rewiring in a software simulation, but that's orders of magnitude slower than using hardware, so I don't think we'll see computers simulating human brains in real time anytime soon.

  • by SpazmodeusG ( 1334705 ) on Friday March 11, 2011 @07:35AM (#35451492)

    Getting a little ahead of ourselves aren't we?

    We're still not entirely sure of how a brain works. Oh sure, it's a neural network of some kind, but how do the neurons in a brain form meaningful connections with each other? How do they get their weightings of activation? etc.

    Chances are each neuron in the brain might be representable by a simple mathematical function with only a few terms. The way the neurons connect to each other might also be representable in a simplistic way. (BTW, look up dynamic Markov coding if you want to see a neat way a state can reproduce such that the newly created state has meaningful input/output connections to other states.)
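
    For illustration, the kind of "simple mathematical function with only a few terms" usually meant here is a weighted sum pushed through a nonlinearity (a modelling assumption, not a claim about real neurons):

        import math

        def model_neuron(inputs, weights, bias):
            # classic few-term abstraction: weighted sum plus bias, squashed by a sigmoid
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            return 1.0 / (1.0 + math.exp(-activation))

        print(model_neuron([0.2, 0.9, 0.1], [1.5, -0.7, 0.3], bias=0.05))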

    So the problem isn't necessarily that our computers aren't powerful enough. The problem is that we still don't know how a brain works.

    • Comment removed based on user account deletion
      • The dose of realism injected by a real live neuroscientist ought to be paid attention to. Most CS types know too little about neuroscience and psychology to have a worthwhile opinion about the viability of human-level machine intelligence and what it takes to get there. I used to believe we'd have a strong AI by oh, 2040 or so until I started really looking into the fields I mentioned, and every informed post like the one I'm replying to reaffirms my belief that we have a very, very, very long way to go--

  • Ok, I admit this sounds completely absurd at first, but there's an awful lot of similarities between the neural pathways of the brain and the countless number of ways websites link to each other, both directly and indirectly through their contacts, and their contacts' contacts, and all the contacts that eventually show up in an endless cycle of recursion, etc...

    Now, Google has to wade through all this, and constantly correct and update itself, to ensure it can get a user to the correct web page that best ma

    • by ledow ( 319597 )

      "You'd think it'd just be a matter of passively connecting to a neuron to sniff it's traffic and then observing which nearby neurons carry the signals to and from it"

      You'd think. Except not only have people tried this, but what you get is inherently gibberish and never anything useful.

      A neuron is an extremely complex biochemical cellular device that we don't understand. It is *not* just a biochem transistor, as some would have you think.

      It retains some information, reacts to historical stimuli, reacts to chemica

  • by divisionbyzero ( 300681 ) on Friday March 11, 2011 @08:13AM (#35451654)

    It's a software problem.

    • The architecture on which you run the software also determines quite a lot of what you can do and how the software is executed. You need a certain topology of the hardware, otherwise it is impossible to do certain tasks efficiently. There is a huge difference between a slow but massively interconnected network like the brain, and a sequential microprocessor running instructions one by one at high speed.

      • by lurcher ( 88082 )

        Who mentioned efficiency?

        We don't have to do it in real time. But even if we had till the heat death of the universe to let the code run, we still don't know how to write the code, which was the OP's point.

    • It's a software problem.

      Well, that's the hypothesis put forward in the 1950s that hasn't yielded results.

      In contrast, something like Watson has massive amounts of processing power and storage access, with relatively simple algorithms. Watson is the ENIAC of the 2029 pocket calculator.

      I wonder if humans like to think of themselves as needlessly complex.

      But as to the main story - "we need more power-efficient, more parallel hardware":

      Watson: "What is the main focus of modern computer architecture for the pa

  • As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.

    Obviously, we have two very different things here. We created computers to be good at the stuff we are *not* good at, not to match our capabilities (we wouldn't spend so much money to make machines that are good at just the same things we are). That

    • As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.

      Exactly!

      My $50,000 BMW can't keep up with my $10 pocket calculator when it comes to math. And my $10 calculator can't drive me to the mall.

  • Why is this article written in past tense? It contains funny paragraphs like this:

    'While fundamental physics and molecular biology dominated the past century’s innovations, Sejnowski said the years between 2000 and 2050 was the “age of information”.'

    2050 isn't really the past, right?

  • I know lots of 20- to 70-somethings with no common sense.

  • That belongs to the Jaguar Cray XT5-HE, not the overstated specs of the system that "claims" the supposed top slot.

    Move it down a bit more and you would truthfully be representing its capability. But then you'd just want to modbomb me into oblivion, since that's easier to do.

  • A computer CPU and software process in a flat, one-dimensional stream: neural structures are emulated by taking time to read each one's state one after another and simulating the actions of the interconnects to get the result.

    A "hardware/softcore" FPGA-based neural net would form a flat, even two-dimensional "grid" array,

    but a DNA-based brain is both a 3D structure and also has sub-"fractal" patterned interconnected structures within it.

    To form even a bee-style neural structure in an FPGA would still need the logic cells to

  • "Who's brain did you emulate?"

    "Uh, Abby someone..."

    "Abbey who?"

    "Abby Normal...."

  • It seems like we already have this in FPGAs. We don't really have good clusters of them, though... at least not that I know of.

    I'm a software developer who has dabbled in VHDL and created some basic programs that ran directly on a chip.

    It was a major pain as someone just trying to write something. A higher-level language designed for parallel computation on a large FPGA array might be more in line with what he wants... without trying to design hardware specifically for the problem. Although maybe after a while comm

  • They have their benefits and their drawbacks but at some point you'd think the benefits and drawbacks of silicon would even out. At the astounding rate technology's been progressing since I got into the industry, I'd have guessed that silicon would have passed us up by now, but that appears not to be the case. I believe a lot of AI researchers made similar predictions though, so I don't feel too bad.

    I suspect there's some trickery going on in the meatputer though. The whole system feels kludgy. They seem

  • Comment removed based on user account deletion
  • Moore's law says that a 2.5 PFLOP machine in a 20-watt package is about 25-30 years away.
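
    For the curious, the back-of-the-envelope version of that estimate (using the 4 MW and 20 W figures from the summary and assuming performance-per-watt doubles roughly every 18-24 months; purely illustrative):

        import math

        current_power_w = 4e6    # Tianhe-1A, ~4 megawatts
        target_power_w = 20      # human-brain power budget
        doublings = math.log2(current_power_w / target_power_w)   # ~17.6

        for months_per_doubling in (18, 24):
            years = doublings * months_per_doubling / 12
            print(f"{months_per_doubling} months per doubling -> about {years:.0f} years")
        # prints roughly 26 and 35 years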

  • From what I'm hearing, Dr Sejnowski's plaint only partly addresses the problem. To implement cognition using a computational model, we need a neural simulator that:

    - is large enough to represent all the neurons and interconnections needed to synthesize human-level cognition
    - uses much less power than a supercomputer

    But to be more than "a brain in a jar" it also must:

    - learn using supervised and unsupervised instruction
    - quickly load and unload modules of what it has learned

    Without addressing all four goals

  • It just seems like a massive waste of computational resources... I would rather have a well-programmed, predictable computer program controlling my spacecraft vs a brian modeled after humans which may decide to go on strike or otherwise act unreliably.

    Why not just use GAs and NNs in specific contexts where they make sense... rather than trying to copy brains?

    If you want to solve hard math problems who is to say intelligent solvers can't be designed to provide real results for a fraction of the computer time?

    I

    • I really doubt that anyone in the next thousand years will be able to build a machine equal in all respects to the human brain.

      You can build a machine that will perform a single task or a variety of tasks but I have yet to see anything from anyone about building a machine that will recognize that a new task is required to solve a new problem and then formulate and perform that new task.

      The problem with a machine is that it does not think, it does not ponder, it does nothing intuitively. It can resolve any

    • I would rather have a well programmed predictable computer program controlling my spacecraft vs a brian modeled after humans which may decide to go on strike or otherwise act unreliably.

      Hey, I know Brian and he's a real stand-up guy. In fact, he'd be offended at being called unreliable if he wasn't so damned amicable.

"If it ain't broke, don't fix it." - Bert Lantz

Working...