Prototype Motherboard Clusters Self-Coordinating Modules 115

An anonymous reader writes "A group of hardware hackers has created a motherboard prototype built from separate modules, each with its own processor, memory and storage. Each square cell in the design serves as a mini-motherboard and network node; the cells allocate power and independently decide whether to accept or reject incoming transmissions and programs. Together they form a networked cluster with significantly greater power than any individual module. The design, called the Illuminato X Machina, is a sharp departure from today's conventional computers, which are built around a single shared processor, memory bank and storage pool."
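A purely hypothetical sketch, in C, of the per-cell accept/reject behavior the summary describes; every name, field and threshold below is invented for illustration, since the article does not show the actual Illuminato X Machina firmware:

    /* Hypothetical model of a cell that independently accepts or
     * rejects incoming work based on local resources. All names and
     * fields are invented; the real firmware is not shown here. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint8_t  id;
        uint32_t free_ram;      /* bytes of local RAM still available */
        uint8_t  power_budget;  /* 0-100, share of power this cell grants */
    } Cell;

    /* A cell rejects any program it cannot power or fit in memory. */
    static bool cell_accepts(const Cell *c, uint32_t program_size,
                             uint8_t power_cost) {
        return c->free_ram >= program_size && c->power_budget >= power_cost;
    }

    int main(void) {
        Cell c = { .id = 3, .free_ram = 8 * 1024, .power_budget = 60 };
        printf("accept 4 KB job: %s\n",
               cell_accepts(&c, 4096, 40) ? "yes" : "no");
        return 0;
    }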
  • I.... I... (Score:0, Insightful)

    by Anonymous Coward on Wednesday August 19, 2009 @06:36PM (#29126445)

    don't understand.

  • Re:So? (Score:2, Insightful)

    by Fyre2012 ( 762907 ) on Wednesday August 19, 2009 @06:48PM (#29126577) Homepage Journal
    Perhaps just replace old modules with new ones, following ye olde Moore's Law cycle?
  • by Brietech ( 668850 ) on Wednesday August 19, 2009 @06:56PM (#29126633)
    The Connection Machine was still SIMD, even though it did have 64k (1-bit!) processors. This is just like the transputer architecture, though! There are a couple of *really* big problems with this:

    1) None of their microcontrollers is individually capable of running a large modern program. They have a few kilobytes of code and no large backing RAM.

    2) How do you get to I/O devices? If you need shared access to devices, this just makes all the problems of a normal computer enormously worse.

    3) What about communication latency (and bandwidth) between nodes? They're using serial communications between 72 MHz processors. We're probably talking several microseconds of latency, minimum, and low-bandwidth communication between nodes (just not enough pins, and the links are not nearly fast enough).

    As fun as something like this would be to build and play around with, there are reasons architectures like the transputer died out. The penalty for going 'off-chip' is so large (and orders of magnitude larger nowadays than it was back then), and the links between chips suck so much, that a distributed architecture like this just can't compete with a screaming-fast 3 GHz single node (especially multi-core).
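    For a rough sense of point 3's numbers (assuming a 72 Mbit/s serial link and 64-byte messages; neither figure is a published spec):

        /* Back-of-envelope link arithmetic; the link rate and payload
         * size are assumptions, not published specs. */
        #include <stdio.h>

        int main(void) {
            const double link_bps = 72e6;  /* assumed serial link rate */
            const int    payload  = 64;    /* assumed bytes per message */
            const double cpu_hz   = 72e6;  /* node clock from the thread */

            double xfer_s = payload * 8 / link_bps;  /* time on the wire */
            double cycles = xfer_s * cpu_hz;         /* cycles spent waiting */

            printf("64-byte transfer: %.2f us, %.0f cycles at 72 MHz\n",
                   xfer_s * 1e6, cycles);            /* ~7.1 us, ~512 cycles */
            return 0;
        }

    And that is before any protocol overhead or software latency, so "several microseconds, minimum" looks right.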
  • by commlinx ( 1068272 ) on Wednesday August 19, 2009 @07:36PM (#29126991) Homepage Journal

    I'd guess from the 14-pin connectors, and the fact that most smaller ARM microcontrollers can't do parallel data transfers under DMA, that they're using the SPI bus, which may run at up to 72 Mbit/s. Of course, that would also mean the bus either needs to be shared by every device or operated token-ring style, with the associated propagation delays (some rough hop arithmetic follows below). I'd guess the latter, because you'd be pushing it to get 72 MHz SPI data across a large number of devices, given the capacitance they would add to the transmission line.

    All in all, it sounds like an interesting academic exercise, but one of no real-world importance. I expect they'll find all their power and cost savings eaten up by needing hundreds of devices to compete with a single piece of silicon. A better commercial solution would be to put lots of ARM cores on a single chip (or on FPGAs for development), but then it would make sense to use a better bus arrangement, which would largely invalidate anything they develop.
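    To put numbers on the token-ring guess (treating each hop as store-and-forward of a 64-byte frame at 72 Mbit/s; both figures are illustrative, since the actual bus protocol isn't documented):

        /* Cumulative end-to-end latency in a store-and-forward chain.
         * Frame size and link rate are illustrative assumptions. */
        #include <stdio.h>

        int main(void) {
            const double frame_us = 64 * 8 / 72.0;  /* ~7.1 us per hop */
            for (int hops = 1; hops <= 16; hops *= 2)
                printf("%2d hops: %6.1f us end-to-end\n",
                       hops, hops * frame_us);
            return 0;
        }

    Sixteen hops would already cost over 100 us, which is why the propagation delays matter.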

  • Mainframe (Score:3, Insightful)

    by sexconker ( 1179573 ) on Wednesday August 19, 2009 @07:51PM (#29127117)

    So it's a small, shitty mainframe.

  • by Brietech ( 668850 ) on Wednesday August 19, 2009 @08:38PM (#29127515)
    Well, if you take that idea to the limit using modern technologies, you basically wind up with rockin' new Nehalem processors using QuickPath Interconnect (QPI) between them, with PCI Express (serial links) to peripherals. But that's huge, incredibly power hungry, and basically the opposite of this architecture.

    But let's think this over some more. You can access L1 cache in a single cycle; L2 might be 10-20 cycles, etc. Going over PCIe, the fastest thing around besides QPI, has a latency of something like 400-800 ns. Even on a lowly 1 GHz processor, that's 400-800 clock cycles, so you might as well be watching grass grow while you try to do anything that's not embarrassingly parallel. As soon as you pump up the clock rate and add large caches and DRAM and all that, you have *huge* power problems, and you still have somewhat crappy performance.

    Large-scale multi-core basically *does* use this architecture, only it's all on one die. It also uses an interconnect that doesn't suck, and manages to be cache coherent, so you can actually use it. Each core has its own cache, though (which is larger than the RAM on these chips), and the clock is nearly two orders of magnitude higher. Like I said, this is fun for a microcontroller project, but the performance would be atrocious for anything except embarrassingly parallel problems (and even then it would suck using these microcontrollers).
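    To make the stall cost concrete (the 400-800 ns PCIe range comes from the comment above; the core clocks are chosen for illustration):

        /* Cycles burned per off-chip round trip at various core clocks.
         * Latency range is from the comment; clock speeds are illustrative. */
        #include <stdio.h>

        int main(void) {
            const double link_ns[] = { 400.0, 800.0 };
            const double clk_ghz[] = { 0.072, 1.0, 3.0 };  /* IXM node, 1 GHz, 3 GHz */

            for (int c = 0; c < 3; c++)
                for (int l = 0; l < 2; l++)
                    printf("%5.3f GHz core, %3.0f ns link: %4.0f cycles stalled\n",
                           clk_ghz[c], link_ns[l],
                           link_ns[l] * clk_ghz[c]);  /* cycles = ns * GHz */
            return 0;
        }

    The faster the core, the more cycles each off-chip trip wastes, which is the whole argument against distributing slow nodes over slow links.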
  • shit design (Score:1, Insightful)

    by Anonymous Coward on Wednesday August 19, 2009 @08:50PM (#29127591)

    As one poster said, it would be much more sensible to integrate multiple cores onto an FPGA and put the real time into implementing a bus that could realistically move data between the cores.

    Not to mention that their choice of parts was sub-optimal. The Cortex-M3 is not the suggested replacement for the ARM7 by accident: it offers 1.25 DMIPS/MHz (compared to this ARM's 0.89 DMIPS/MHz), an instruction set with better code density for the performance, more predictable interrupt handling, an MPU, probably better power consumption, etc., for practically the same price (see the rough arithmetic below).

    If you ask me, this is an academia project run by a bunch of hippies who are spending their time on all the wrong aspects of this kind of decentralized computing concept.
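    The cited throughput figures work out as follows (the 72 MHz clock is taken from elsewhere in the thread):

        /* DMIPS comparison from the figures cited above: 0.89 DMIPS/MHz
         * (ARM7) vs 1.25 DMIPS/MHz (Cortex-M3), both at 72 MHz. */
        #include <stdio.h>

        int main(void) {
            const double mhz  = 72.0;
            const double arm7 = 0.89 * mhz;  /* ~64 DMIPS */
            const double m3   = 1.25 * mhz;  /* ~90 DMIPS */

            printf("ARM7 @ 72 MHz:      %.0f DMIPS\n", arm7);
            printf("Cortex-M3 @ 72 MHz: %.0f DMIPS\n", m3);
            printf("M3 advantage:       %.0f%%\n", (m3 / arm7 - 1) * 100);
            return 0;
        }

    Roughly a 40% throughput gain per node for the same clock and price.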

  • Redundancy (Score:2, Insightful)

    by shadowblaster ( 1565487 ) on Wednesday August 19, 2009 @11:28PM (#29128729)

    Can they make the cluster survive the destruction of several nodes?

    There are many situations where this would be beneficial, such as spacecraft design and military electronics. Even with several nodes severely damaged, the machine could re-route processing to the remaining nodes. Overall processing speed might be reduced, but there would be no loss of functionality.
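    A toy sketch of that re-routing idea (the node count, failure set, and round-robin policy are all invented for illustration; nothing here reflects the actual IXM firmware):

        /* Reassign tasks round-robin to whichever nodes survive.
         * Topology and scheduling policy are invented for this sketch. */
        #include <stdbool.h>
        #include <stdio.h>

        #define NODES 9
        #define TASKS 12

        int main(void) {
            bool alive[NODES] = { true, true, false, true, true,
                                  false, true, true, true };  /* nodes 2, 5 lost */

            /* Collect survivors, then deal tasks out among them. */
            int survivors[NODES], n = 0;
            for (int i = 0; i < NODES; i++)
                if (alive[i]) survivors[n++] = i;

            for (int t = 0; t < TASKS; t++)
                printf("task %2d -> node %d\n", t, survivors[t % n]);

            printf("%d of %d nodes lost; capacity scales to %d/%d\n",
                   NODES - n, NODES, n, NODES);
            return 0;
        }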
