Supercomputing Hardware News

A Million Node Supercomputer

Posted by Unknown Lamer
from the scooping-doctor-soong dept.
An anonymous reader writes "Veteran of microcomputing Steve Furber, in his role as ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, has called upon some old friends for his latest project: a brain-simulating supercomputer based on more than a million ARM processors." More detailed information can be found in the research paper.
  • But will it run Lin.... ah, nevermind.

    • Just might run win.... ah, nevermind.
      • Well, since he's ICL professor of computer engineering I'd expect it to run George 3 or VME/B.

        But, then again, maybe it'd run BBC Basic?

      • Just might run win....

        In a future OS built by vast numbers of developers each chipping in 10 lines of code without knowing what the others wrote, having the code of each in its own node should keep more of it running? Reserve 100,000 or so nodes for guest processes.

        To use the distributed GPU, each node will output to a rooftop display, viewed by satellite cam zoomed in to encompass the required number of nodes.
        Reliability will be ensured by RAIN (redundant array of inexpensive neighborhoods).

    • by rbrausse (1319883)

      more like "will it run anything anytime?"

      FTFpaper:

      [..] we have yet to gain access to silicon [..] But we have exposed the design to extensive simulation, up to [..] models incorporating four chips and eight ARM968 processor

      I like the grandness of their vision but it would be nice to see at least a real-world version of one node (the 20 processor thingy). How big are the smallest cases needed for such a node? Is it even realistically possible to place 50000 of those within the range limits of Ethernet?

      • So this thing is supposed to simulate some aspect of a brain, and they've so far simulated one small portion of it. I bet I could simulate a simulation^4 of this thing, with a rock.
      • by jpapon (1877296)
        Not to be pedantic, but I think you're focusing on the wrong thing here. If you're going to build a 50,000 node supercomputer, I'm pretty sure you're not going to let yourself be limited by ethernet cables. I imagine they'd use some sort of fiber network.

        The better question is why do you need to simulate a brain in real time? I mean, if you can make something magical happen with a million cores in real time, why can't you just use the plain old Internet and make it happen with a million cores in 1/100th or 1/1000th real time?

        • by rbrausse (1319883)

          If you're going to build a 50,000 node supercomputer, I'm pretty sure you're not going to let yourself be limited by ethernet cables.

          the Ethernet connection is directly quoted from the paper :) and it was meant more as "nice idea, but there are many unaddressed practical problems" than as a show-stopper argument.

          I mean, if you can make something magical happen with a million cores in real time, why can't you just use the plain old Internet and make it happen with a million cores in 1/100th or 1/1000th real time?

          Sometimes I miss the obvious; this is a *very* interesting thought. The BOINC project "be part of a brain" (cover title, should be changed before release...) would attract my attention.

        • It's not obvious to me that a brain simulating algorithm would be parallelizable enough to run effectively in chunks distributed over the internet. Or maybe it would but that's just not as fun as a 50,000 node computer.

          • by jpapon (1877296)
            If an algorithm can be split between 50,000 cores, I see no reason why the distance between the cores makes any difference. Sure, it will be slower, but a core is a core, whether it's in Boston or Bangkok.
            • If an algorithm can be split between 50,000 cores, it can run on one core. I see no reason why the number of cores makes any difference. Sure, it will be slower, but computation is computation, whether it's happening usefully fast on a 50,000-core behemoth or pointlessly slowly on a 50 MHz 486.

              • by jpapon (1877296)
                Indeed. Except that there are reasons for trying to build computers that can run computations faster. There's no reason to give ARM the money to make a million-core computer when you can't show that it would accomplish anything. If he could make something interesting running at 1/10000th the speed of a "real-time brain", I'd be interested in perhaps increasing that speed by a couple of orders of magnitude. Saying "Let's spend government money on a million ARM processors so we can build something that might…
          • "It's not obvious to me that a brain simulating algorithm would be parallelizable enough to run effectively in chunks distributed over the internet. "
            It wouldn't. The timing of the spikes arrival at neurons is critical to within less than the usual jitter across large networks. General purpose CPUs just are not an efficient way to simulate neurons, and never will be.

            This comment on TFA links to information on a much more technically advanced and likely-to-succeed method:

            The DARPA SyNAPSE project looks much…

    • by Verdatum (1257828)
      Yeah, and imagine a Beowul...ah, forget it.
    • Re:Oblig. Question (Score:5, Interesting)

      by Psion (2244) on Thursday July 07, 2011 @11:06AM (#36683990)
      No, no, no! Given the intended purpose, the question is: Will it run me?
  • The overhead in communications has to be staggering at that many nodes. And I suppose it depends on the workload, but I was under the impression that you get serious diminishing returns when you scale that large, to the point where it might be faster to have one hundred clusters with 1,000 nodes each. Can anyone speak to how they get around this?
    • by creat3d (1489345)
      FTFA: "To overcome this limitation, Furber - along with Andrew Brown, from the University of Southampton - suggests an innovative system architecture that takes its cues from nature. "The approach taken towards this goal is to develop a massively-parallel computer architecture based on Multi-Processor System-on-Chip technology capable of modelling a billion spiking neurons in biological real time, with biologically realistic levels of connectivity between the neurons," Furber explains in a white paper outlining…
      • by blair1q (305137)

        In other words, it's okay if it runs slow, because your brain does, and makes up for it with parallelism and fuzzy logic.

    • by betterunixthanunix (980855) on Thursday July 07, 2011 @10:38AM (#36683644)
      On the other hand, if you are simulating a brain, I suspect that you don't really need fast communication between any two nodes; localized subclusters should communicate quickly, with slower communication between clusters. This wouldn't work for *all* problems, but for the specific problem they mentioned it seems to be a workable solution.
      • by xkuehn (2202854) on Thursday July 07, 2011 @01:46PM (#36686034)

        I am not a neuroscientist. As a grad student I do study artificial neural networks, which means that I must also have a little knowledge of neuroscience.

        The brain is not a fully connected network. It is divided into many sub-networks. I think it's estimated at about 500k, but don't quote me on that number. These sub-networks are often layered, so if you have a three-layer feed-forward sub-network of 5 cells in each layer, each of these cells has only 5 inputs except for the 5 nodes in the input layer, which connects to other sub-networks. (If there are connections from later layers back to earlier layers, the network is said to be a 'feedback' rather than feed-forward network.) These sorts of networks can be simulated very efficiently on parallel hardware, as a cell mostly gets information from the cells that are close to it.

        In short, your suspicion is entirely correct. Moreover, you not only don't need fast connections between many of your processing nodes, most of them don't need to be connected to each other at all.

        This is the reason why neural networks are interesting in the first place: that they can be simulated on parallel hardware when we don't know a good parallel algorithm with conventional computing techniques. (If it interests you: another name for neural networks is 'parallel distributed computing'.)

        There is a hard limit on the 'order' (think of it as function complexity) of functions that can be computed with a given network. To compute a function beyond that limit, you need to have a larger number of inputs to some cells, thereby increasing the order of the network but making it less parallel. Most everyday things are in fact of surprisingly low order. Fukushima's neocognitron can perform tasks like handwriting recognition with only highly local information.
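A sketch of what such a layered sub-network looks like in code; the layer size, weights, and squashing function below are made-up illustrative choices, not taken from the paper. The point is that each cell reads only the layer below it, so every layer can be evaluated in parallel:

```python
import math
import random

random.seed(0)

# A toy three-layer feed-forward sub-network, 5 cells per layer:
# each cell sees only the 5 outputs of the previous layer, so no
# cell needs non-local input.
LAYER = 5
W1 = [[random.gauss(0, 1) for _ in range(LAYER)] for _ in range(LAYER)]
W2 = [[random.gauss(0, 1) for _ in range(LAYER)] for _ in range(LAYER)]

def fire(weights, inputs):
    """One layer: each cell is an independent weighted sum + squashing."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

x = [random.gauss(0, 1) for _ in range(LAYER)]  # activity from other sub-networks
hidden = fire(W1, x)    # all 5 hidden cells could run on 5 separate cores
output = fire(W2, hidden)
print(len(output))  # 5
```

Only the input layer would talk to other sub-networks; everything else stays local.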

        • Let's say it's simpler than most earlier estimates have said, say an average of 7 inputs and 1 output per neuron, each firing up to 100 times per second with about 3 bits of phase information encoded in each spike. So that's 300 bps of bandwidth out and 2100 bps in, so 300-2400 bps/neuron depending on how it's implemented. Times 20-100 billion neurons, that's still some serious bandwidth (6-240 Tbps), even if it is nearly all local. Still, a lot of non-local communication takes place, as shown by the "connectome…
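A quick sanity check of the arithmetic above, plugging in the same assumed figures (these are the parent's estimates, not measured values):

```python
# All inputs are the parent comment's assumptions.
spike_rate = 100        # spikes per second per neuron
bits_per_spike = 3      # phase information encoded per spike
fan_in = 7              # inputs per neuron

out_bps = spike_rate * bits_per_spike   # 300 bps out per neuron
in_bps = fan_in * out_bps               # 2100 bps in per neuron

low = 20e9 * out_bps                    # 20 billion neurons, output only
high = 100e9 * (out_bps + in_bps)       # 100 billion neurons, in + out
print(low / 1e12, high / 1e12)          # 6.0 240.0 (i.e. 6-240 Tbps)
```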

          • by xkuehn (2202854)

            Let's say it's simpler than most earlier estimates have said, say an average of 7 inputs and 1 output per neuron, each firing up to 100 times per second with about 3 bits of phase information encoded in each spike. So that's 300 bps of bandwidth out and 2100 bps in, so 300-2400 bps/neuron depending on how it's implemented. Times 20-100 billion neurons, that's still some serious bandwidth (6-240 Tbps), even if it is nearly all local.

            I won't insult you by checking your calculations, but your number is too large for your assumptions. Remember that when simulating a brain, each processor won't compute for a single cell, but for a local group of cells. The vast majority of the bandwidth use would therefore be between each CPU and its memory rather than between different CPUs. Each CPU would then have connections with those CPUs whose local groups of cells its own communicates with. The less local you get, the lower the total bandwidth requirements.
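The "local group of cells per CPU" point can be illustrated with a toy sketch. The window-based wiring below is a made-up stand-in for real anatomy, chosen only to show that block-mapping cells to CPUs keeps most connections CPU-local:

```python
import random

random.seed(1)

N_CELLS, CELLS_PER_CPU, FAN_IN, REACH = 1000, 50, 7, 25

def cpu_of(cell):
    # Cells are mapped to CPUs in contiguous blocks.
    return cell // CELLS_PER_CPU

# Mostly-local random wiring: each cell draws its 7 inputs from a
# small window around itself (an assumption, not real connectivity).
edges = [(c, min(N_CELLS - 1, max(0, c + random.randint(-REACH, REACH))))
         for c in range(N_CELLS) for _ in range(FAN_IN)]

cross = sum(1 for a, b in edges if cpu_of(a) != cpu_of(b))
print(f"{cross / len(edges):.0%} of connections leave the local CPU")
```

With these toy numbers, most connections stay within one CPU's block, so most "synapse traffic" is a local memory access rather than inter-CPU communication.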

            • By "local" I meant on-chip and within-node communication. The communication bandwidth does not drop off with locality as fast as you'd think - much if not most of the brain's volume is white matter devoted to long-distance communication.

    • by LWATCDR (28044)

      He is using different speeds for the interconnects based on distance to get around the issue. This is not uncommon in supercomputers today, where the cores on a node can communicate much faster than the nodes can communicate with each other over InfiniBand. It sort of reminds me of the Connection Machine, which used a hypercube-type system for its interconnects.
      The problems that this computer is going to try and solve are probably well suited to this type of system; after all, I am sure that our brains don't have a zero-latency…

  • by Yvan256 (722131) on Thursday July 07, 2011 @10:31AM (#36683550) Homepage Journal

    Shouldn't they be using BRAIN processors? /duck

  • This won't get anywhere near simulating a brain.
    • by geekoid (135745)

      And you base that on... what?

      • Well, not that I'm a mind reader, but my take on this is that we don't bloody know how those few lbs of grey matter work, how it self-organizes, the exact details of the behaviors that drive it, etc etc

        On top of that, what do you feed a brain? What is it going to do? They'd need to interface it with something that can challenge it, provide meaningful feedback, etc on and on.

        Whatever it is they simulate, I'd not call it a brain, or a mind. Perhaps a highly complex neural net, but it will be running in slow motion…

        • You pretty much hit it on the nose.

          Not only would a brain simulator have to simulate neurons, but also synapses, neurotransmitters, neurotransmitter receptor types, glial cell types (e.g. astrocyte computation), mRNA expression, and probably a Library of Congress' worth of stuff we don't even know about yet.
          • Some of that may be true, but I doubt we need to simulate all of it to reproduce a brain's function. We don't need to simulate transistors to emulate a computer architecture, after all.

        • by blair1q (305137)

          They probably won't make it think, but there's a lot of environmental input and output that is mechanistic in nature. We do know how almost all of that works. It's the language and meme thing we're just scratching.

          The boundary between symbology and mental process may be illuminated by how this thing behaves. That'd be worth its price.

    • by Sulphur (1548251)

      This won't get anywhere near simulating a brain.

      Too true. How about trying to simulate self-awareness?

      Use (say three) processes to monitor and optimize each other. These would run a virtual machine as a single self aware entity.

  • Laplace's demon, Omega, or Les? From my reading this million node supercomputer will be Les.

  • by Animats (122034) on Thursday July 07, 2011 @10:37AM (#36683624) Homepage

    OK, a mouse brain has about 1/1000 the mass of a human brain. So build a mouse brain with 1000 ARM CPUs, which ought to fit in one rack, and demonstrate the full range of mouse behavior, from vision to motor control.

    I read the paper. It's a "build it and they will come" design. There's no insight into how to get intelligence out of the thing, just faith that if we hook enough nodes together, something will happen.

    About 20 years ago, I went to hear Rodney Brooks (the artificial insect guy from MIT) talk about the next project after his reactive insects. He was talking about getting to human-level AI by building Cog [mit.edu], a robot head and hand that was supposed to "act human". I asked him why, since he'd already done insect-level AI, he didn't try mouse-level AI next, since that might be within reach. He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".

    Cog turned out to be a dead end. It was rather embarrassing to all concerned. As one grad student said, "It just sits there. That's all it does."

    • by Nidi62 (1525137) on Thursday July 07, 2011 @10:41AM (#36683678)

      As one grad student said, "It just sits there. That's all it does."

      Sounds like it actually modeled most of human behavior fairly well then.

    • by drolli (522659)

      Yeah, well. As an interested bystander (a physicist), it seems to me that AI also ran into several dead ends by setting its aims too high. For as long as I can remember, people have predicted that machines that would replace human translators were just around the corner.

      IMHO the fundamental mistake is to define something like intelligence on one hand to be very specific, and then hope to reach it by general methods while taking a few shortcuts, which are not even known yet...

      The general rule of thumb which the physicists learned the hard way is that assuming something general and hoping that the shortcuts you need to take to get a result magically fall out once you randomly scatter enough phd students at a subject, is futile.

      • by Animats (122034)

        The general rule of thumb which the physicists learned the hard way is that assuming something general and hoping that the shortcuts you need to take to get a result magically fall out once you randomly scatter enough phd students at a subject, is futile.

        I heard that exact approach proposed more than once at Stanford in the 1980s.

        AI is no longer a hardware problem. Any major data center probably has enough power to do a human brain, if we only knew how to program it. In some areas, like vision, sheer compute power has helped in a big way. Some problems can be hammered into the ground with machine learning techniques and CPU time. In other areas, we're still stuck. There's been almost no progress on "common sense reasoning" in years.

        On the other hand…

        • by anwaya (574190)

          There used to be much enthusiasm for neural nets, but it turns out that modern machine learning techniques do better on the few problems neural nets can do. The modern approaches are all matrix algebra, and you usually work in Matlab. Much of that stuff is parallelizable, and what you want is more like a GPU than neurons.

          Since the stated objective is to simulate a brain, I doubt that matrix algebra is going to cut it.

          • Likely not. But if you want to simulate something complicated, matrix algebra is just about always the best place to start. You'll often end up there anyway, no matter where you start. You may find something better for some specialized or simplified bits, but it's seldom worth the extra effort - usually more difficult, less accurate and slower (the tridefecta).

        • by blair1q (305137)

          Any major data center probably has enough power to do a human brain, if we only knew how to program it.

          I think we're underestimating what these guys know about how to program it. If they can get the architecture to be more like a brain, it may allow them to implement accurate simulations of the neuronal processes we see in a brain, and then start to work on the processes those neurons are used for.

          I highly doubt this will be the only iteration of the hardware needed to get it to think, even at a small-furry-mammal level, but getting it to do a few self-aware things could be very helpful.

    • by dr.newton (648217)

      Even if they fail to produce anything interesting, that in itself will be an interesting result.

      There are likely a number of assumptions about intelligence as an emergent behaviour of non-quantum physical phenomena that could be invalidated by the failure of this experiment.

      "Brains can't work according to such-and-such a principle, because if that were true, Furber would've succeeded."

    • Re: (Score:1, Flamebait)

      by geekoid (135745)

      Great, you read the paper. Did you also read the papers and studies that led to this? Did you know they have already simulated a few neurons, and that the activity looked like the human brain's? No? STFU.

      In our likely lifetime, we will have a simulated brain. This will just be one piece of a larger model.
      Once we have the model, we will be able to do a lot of things. Want to simulate damage and look at long-term effects? Speed up the model and get a year's worth of data in days. Want to study something more closely? Slow it down…

    • by Anonymous Coward

      Nowhere in TFA does it say they're trying to build an AI or simulate the human brain. They're aiming for 1b neurons, which would be a tiny fraction of the human brain, and they're particularly interested in proving the ability to maximize supercomputing capacity while minimizing energy use.

    • by Rolgar (556636)

      It's one thing to copy a physical human behavior, or even to be able to calculate something like the traveling salesman problem. It's another thing entirely to develop the ability to start an intelligence that naturally absorbs sensory and educational information, identifies patterns and holes in the information, turns that into a question, and then follows the correct steps to learn what is unknown. Between us and a machine that can do that, we'd still have personality and drives (survival, success, compa…

      • by DarkOx (621550)

        ---The computer can look at the U.S. budget deficit, and come up with a balanced budget that will make most citizens reasonably happy, or figure out how to structure a health care bill that will provide every one with excellent care at an affordable price. You might think that the failure of the U.S. government to do so means that nobody can, but that is a failure of politics, not intelligence.

        I doubt a computer could do a good job at maximizing the desired outputs and minimizing the required inputs. Humans have a different sense of fairness, though, and how we subjectively feel about it has a lot to do with how happy we are with it.

        ---When a computer can come into contact with a piece of art (photo, literature, comic), understand it, and have it mean something that the computer can learn from, and then examine something else, and apply the lessons learned to this new thing, and sort among many different things its learned to figure out which ones are most similar.

        Again, I am not so sure a computer will be able to respond to our art the way we do. I am not sure another intelligent biological organism, say from space, could either. It's our art, and it reflects on some basic level experiences we all share. A computer won't see an image…

    • Cog turned out to be a dead end. It was rather embarrassing to all concerned. As one grad student said, "It just sits there. That's all it does."

      Well, it's just a cog in a big machine.
    • by Calindae (1256922)

      Since when is brain mass proportionate to "intelligence" (a.k.a. hardware necessary to mimic it)? Or... maybe that's why the elephants look at me condescendingly every time I'm at the zoo...

  • by Okian Warrior (537106) on Thursday July 07, 2011 @10:45AM (#36683736) Homepage Journal

    This makes sense, how?

    It's like trying to simulate a computer by wiring 5 million transistors. Without a deep understanding of how computers work and a plan for implementation, the result will be worthless.

    I see this all the time in AI strategies. Without a deep understanding of AI, the project implements bad assumptions.

    Some examples: no way to encode the adjacency information, a fixed internal encoding system which cannot change (ie - a chess program that can't learn checkers), linear input->process->output models, and so on.

    Before building a system with a million processors capable of simulating the brain, how about we design an algorithm that embodies the simplest possible AI?

    • by PPH (736903) on Thursday July 07, 2011 @10:51AM (#36683794)
      Typical IT project philosophy: I'll go to the customer and try to get some requirements. The rest of you, start coding.
      • Yep. Leave the decision making to us. You just write the code and don't ask questions.
      • by bberens (965711)

        Typical IT project philosophy: I'll go to the customer and try to get some [strike]requirements[/strike] money. The rest of you, start coding.

        FTFY

        • by PPH (736903)

          We don't start coding until the check clears. We don't stop adding features until the money runs out. But there is no relationship with requirements.

    • by Anonymous Coward

      The problem with AI seems to be a lack of RI. You can't make an artificial anything if you don't know how the real something works.

    • by jpapon (1877296)
      I agree that the idea is somewhat silly, but I think it's more like "trying to simulate a computer by building a 5 million transistor FPGA". The connections between the cores aren't hardwired, they're configurable... so you could indeed make a "brain" out of it. The real problem is that there's no point in building such a massive system to simulate a brain in real time. Simulate it in 1/1000th time using a much less expensive system first. If THAT works, maybe we can talk.
    • by blair1q (305137)

      Project for you: identify the bad assumptions in their model without building one and trying it out.

      Okay. Go.

    • by narcc (412956)

      Without no deep understanding of AI, the project implements bad assumptions.

      Speaking of bad assumptions...

      how about we design an algorithm that embodies the simplest possible AI?

      Computationalism has been dead for 30 years. Anyone still championing that failed philosophy either doesn't understand the problem or is clinging to it out of desperation.

      • I'm attempting to solve AI and have found a dearth of informed people who can talk about it.

        I'd love to chat with you about your views. If you feel up to it, drop me a line:

        Niroz (dot) 9 (dot) okianwarrior (at) spamgourmet.com

  • Abby someone?

  • ...battery capacity problems.

    • They're using the Apple model. All brains will have a non-removable battery built in. When the battery dies the brain dies and you go buy another one.
  • The number of nodes and processing power per node is meaningless unless they can connect them together in a similar fashion to the brain. Sure, they mention a "brain like" arrangement, but the reason our brains are so sophisticated is not processing power but organisation. Brains are slow, really really slow, but the parallelism and connectivity are beyond anything we can build at the moment, and that is why we keep failing at AI. An example is adding two numbers together, easy to do for a processor…

    • by FlyingGuy (989135)

      Correct. We don't even understand how the human brain is "cognitive" much less how the information is stored or retrieved. Almost everything we know is an assumption at its base. Yes we have some ideas about which regions of the brain control certain functions but we have not a clue as to how those things work.

      • by geekoid (135745)

        Yes, we do have a clue, several in fact. We even have an incredibly simple model (a few neurons).

        We don't need to understand something to simulate it, though it certainly helps. People in a factory can assemble a plane that works perfectly well without ever having heard of Bernoulli's principle.

        This sort of work is needed so we can define 'cognitive' with more accuracy.

        We know a hell of a lot more than you imply in your post. The parent clearly doesn't know what he is talking about and didn't even bother to read the story.

    • by ceoyoyo (59147)

      People are pretty crappy at adding two numbers together too. That sounds like a triumph for AI, not a failing.

  • According to the research paper the goal is a million *processor* computer, not a million *node* computer. Each node described in the paper is made up of 20 ARM processors, so it would technically be a 50,000 node computer.

    • by blair1q (305137)

      Are those single- or multi-core ARM units? (does ARM even do multi-core units?)

      And really, if you count all the processor units in a graphics chip, there are probably some computers that could count several million individual processors in their architecture right now.

    • by 1729 (581437)

      The headline is useless. Proposing a million-core computer isn't news, since there's a 1.6 million core computer about to be deployed [wikipedia.org]. The headline should reflect what they're planning to do with this machine.

  • The problem in simulating a brain is not computing power. It is software. This is a worthless publicity stunt.

    • by narcc (412956)

      The problem in simulating a brain is not computing power. It is software.

      There is just so much wrong here I don't know where to even begin.

      • by gweihir (88907)

        Indeed.

        For example, for a realistic simulation, you would need to have a person in there. How is that going to happen? If it is possible (which I doubt, but there are possibilities; after all, biological brains do acquire a person somehow as well), is turning the simulation off then an act of murder? Also, what good is simulating a brain in the first place? If it is realistic, it is no better than the 8 billion we already have. If not, it is useless.

  • There are 100bn neurons in the human brain, each with up to 7,000 synaptic connections. You would need some multiple of 10^14 bytes of RAM; assuming you're storing only one byte per synapse, that's 636 TiB... Try again in 39 years?
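For what it's worth, the arithmetic holds up if "636 TBs" is read as binary terabytes (TiB); a quick check with the same rough figures:

```python
# Reproducing the parent's estimate (figures are rough assumptions,
# not measurements).
neurons = 100e9             # ~100 billion neurons
synapses_per_neuron = 7000  # upper-end connection count
bytes_per_synapse = 1       # one byte of state per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 2**40)  # ~636.6, the "636" figure above, in TiB
```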
