Supercomputing Hardware News

A Million Node Supercomputer 116

Posted by Unknown Lamer
from the scooping-doctor-soong dept.
An anonymous reader writes "Veteran of microcomputing Steve Furber, in his role as ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, has called upon some old friends for his latest project: a brain-simulating supercomputer based on more than a million ARM processors." More detailed information can be found in the research paper.
  • by xkuehn (2202854) on Thursday July 07, 2011 @02:46PM (#36686034)

    I am not a neuroscientist. As a grad student I study artificial neural networks, which means I necessarily have some knowledge of neuroscience as well.

    The brain is not a fully connected network. It is divided into many sub-networks; I think the count is estimated at about 500k, but don't quote me on that number. These sub-networks are often layered, so in a three-layer feed-forward sub-network with 5 cells in each layer, each cell has only 5 inputs, except for the 5 cells in the input layer, which connect to other sub-networks. (If there are connections from later layers back to earlier layers, the network is called a 'feedback' rather than a feed-forward network.) Networks like these can be simulated very efficiently on parallel hardware, since each cell mostly gets information from the cells close to it.
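    A minimal sketch of the three-layer, 5-cells-per-layer feed-forward sub-network described above. The weight matrices, the sigmoid activation, and the random input are illustrative assumptions, not anything from the article; the point is just that every cell reads only the 5 outputs of the layer before it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights for a 5-5-5 feed-forward sub-network: each hidden and
# output cell has exactly 5 inputs (the previous layer's outputs).
W1 = rng.standard_normal((5, 5))  # input layer  -> hidden layer
W2 = rng.standard_normal((5, 5))  # hidden layer -> output layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Propagate a 5-element input vector through the sub-network."""
    hidden = sigmoid(W1 @ x)      # each hidden cell sees only 5 values
    return sigmoid(W2 @ hidden)   # each output cell sees only 5 values

# The input layer's values would arrive from other sub-networks;
# here we fake them with random numbers.
x = rng.standard_normal(5)
y = forward(x)
print(y.shape)  # (5,)
```

    Because each layer's computation is a small, independent matrix-vector product, many such sub-networks can run side by side with no communication between them until their input/output layers exchange values.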

    In short, your suspicion is entirely correct. Moreover, not only do you not need fast connections between many of your processing nodes; most of them don't need to be connected to each other at all.

    This is what makes neural networks interesting in the first place: they can be simulated on parallel hardware even for problems where we don't know a good parallel algorithm using conventional computing techniques. (If it interests you: another name for neural networks is 'parallel distributed processing'.)

    There is a hard limit on the 'order' (think of it as function complexity) of the functions a given network can compute. To compute a function beyond that limit, you need to give some cells a larger number of inputs, which increases the order of the network but makes it less parallel. Most everyday tasks are in fact of surprisingly low order: Fukushima's neocognitron can perform tasks like handwriting recognition using only highly local information.
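    To illustrate the "highly local information" point, here is a toy layer where each cell reads only a small sliding window of the input, in the spirit of the neocognitron's local receptive fields. The window size, weights, and signal are all made-up assumptions; the window width is what bounds the order of the function each cell computes.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_layer(signal, weights):
    """One output per window position; each cell sees only len(weights)
    neighbouring inputs (its local receptive field)."""
    k = len(weights)
    return np.array([signal[i:i + k] @ weights
                     for i in range(len(signal) - k + 1)])

signal = rng.standard_normal(10)
w = rng.standard_normal(3)   # every cell has just 3 inputs: low order
out = local_layer(signal, w)
print(len(out))  # 8 window positions for a length-10 signal
```

    Since every output depends on only 3 adjacent inputs, all window positions can be computed independently and in parallel, with no long-range communication.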
