Topics: Hardware Hacking, Build, Hardware, Technology

Prototype Motherboard Clusters Self-Coordinating Modules

An anonymous reader writes "A group of hardware hackers has created a motherboard prototype that uses separate modules, each of which has its own processor, memory, and storage. Each square cell in this design serves as a mini-motherboard and network node; the cells can allocate power and decide independently whether to accept or reject incoming transmissions and programs. Together, they form a networked cluster with significantly greater power than the individual modules. The design, called the Illuminato X Machina, is vastly different from the separate processor, memory, and storage components that make up computers today."
This discussion has been archived. No new comments can be posted.


  • So how do you upgrade this? I assume you'd add more modules, but that increases the size of the machine: tiny computers would be underpowered, while one the size of a large TV would be lightning fast. But who wants a huge computer, especially for a laptop or HTPC?
    • Re: (Score:2, Insightful)

      by Fyre2012 ( 762907 )
      perhaps just replace old modules with new ones, following ye olde Moore's law cycle?
    • Re:So? (Score:5, Informative)

      by TubeSteak ( 669689 ) on Wednesday August 19, 2009 @05:55PM (#29126629) Journal

      So how do you upgrade this? I assume you'd add more modules, but that increases the size of the machine: tiny computers would be underpowered, while one the size of a large TV would be lightning fast. But who wants a huge computer, especially for a laptop or HTPC?

      Define "tiny computers"
      Cellphones have more processing power than the original room-sized super computers.
      Heck, there are cellphones with more power than any desktop computer I owned during the 90's.

      And define "huge computer"
      Most of a mid-tower case is nothing but empty space
      And since you can easily do audio/video processing in hardware,
      there's no reason it wouldn't be perfectly fine for a HTPC.

      • Re: (Score:1, Offtopic)

        by b4upoo ( 166390 )

        More power! More! I want more. I want to be God! But if I can be God, I promise to be a very nice God. Just give me that new PC!

      • by Higaran ( 835598 )
        Which is why I don't see this as that good of an idea. I think it's more efficient to have the CPU/GPU, RAM, BIOS, and the whole thing on a single chip; that's the direction chip design is going anyway, isn't it? It makes more sense to me to get a whole motherboard inside a chip than to make a krap load of mini motherboards.
        • Which is why I don't see this as that good of an idea. I think it's more efficient to have the CPU/GPU, RAM, BIOS, and the whole thing on a single chip; that's the direction chip design is going anyway, isn't it? It makes more sense to me to get a whole motherboard inside a chip than to make a krap load of mini motherboards.

          Well, just do s/chip/module/g and you have described their project.

    • Re: (Score:3, Interesting)

      by Bakkster ( 1529253 )

      Larger computers are already more powerful in general than the same generation of smaller computers.

      Small, fast, and cheap: pick two.

  • Can you say "Multi-boxed Shamans"?

  • So much for the Neumann-Neumann dance.
  • Transputers, anyone? (Score:5, Informative)

    by PaulBu ( 473180 ) on Wednesday August 19, 2009 @05:47PM (#29126569) Homepage

    Am I too old to remember them? And before that, there was Connection Machine...

    Also (yes, I clicked on TFA! :) ), planar (in graph theory terms) interconnect topology would seem a bit too simplistic for anything resembling efficient routing...

    Paul B.

    • Maybe the next version will have six ports instead of four. Give them servos to extend or retract the ports and you have "living metal".
    • by Brietech ( 668850 ) on Wednesday August 19, 2009 @05:56PM (#29126633)
      The Connection Machine was still SIMD, even though it did have 64k (1-bit!) processors. This is just like the transputer architecture, though! There are a couple of *really* big problems with this:

      1) None of their microcontrollers is individually capable of running a large modern program. They have a few kilobytes of code, and no large backing RAM.

      2) How do you get to I/O devices? If you need shared access to devices, this just makes all the problems of a normal computer enormously worse.

      3) What about communication latency (and bandwidth) between nodes? They're using serial communications between 72 MHz processors. We're probably talking several microseconds of latency, minimum, and low-bandwidth (just not enough pins, and not nearly fast enough links) communication between nodes.

      As fun as something like this would be to build and play around with, there are reasons architectures like the transputer died out. The penalty for going 'off-chip' is so large (and orders of magnitude larger nowadays than it was back then), and the links between chips suck so much, that a distributed architecture like this just can't compete with a screaming fast 3 GHz single node (especially multi-core).
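      A rough back-of-envelope version of that off-chip argument (Python; the link rate and per-hop software overhead are assumptions for illustration, not published IXM figures):

          # Cost of one inter-node hop vs. cycles of a fast CPU.
          # Link rate and per-hop software overhead are assumed, not measured.
          link_rate_bps = 10e6      # assume a ~10 Mbit/s serial link
          message_bits = 32 * 8     # a small 32-byte message
          sw_overhead_s = 2e-6      # assumed interrupt/driver cost per hop

          hop_latency_s = message_bits / link_rate_bps + sw_overhead_s
          print(f"per-hop latency ~ {hop_latency_s * 1e6:.1f} us")       # ~27.6 us
          print(f"~ {hop_latency_s * 3e9:,.0f} cycles of a 3 GHz core")  # ~82,800

      Even under generous assumptions, one hop costs tens of thousands of 3 GHz core cycles, which is the off-chip penalty in a nutshell.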
      • This is exactly how the Replicators began... slow old 72 MHz processors, and then you put enough of them together and the thing goes evil and starts taking over the universe...

        • This is exactly how the Replicators began... slow old 72 MHz processors, and then you put enough of them together and the thing goes evil and starts taking over the universe...

          I feel that I should say something about the Replicators not being evil as much as a virus is evil.

          • Re: (Score:2, Funny)

            by joaommp ( 685612 )

            Yeah, stand up for the bastards. Tiny tiny replicators that can't defend themselves.

      • by PaulBu ( 473180 )

        My other thought was that having all those discrete components around a relatively slow part would decrease the bang-for-the-buck appeal of this thing.

        I'd start with something reasonably fast (but low power and with huge cache!) in the node core and surround it with a bunch of optical links (the more, the merrier), then start running fiber in interesting topologies... I do not think serial communication is inherently bad, but do agree that serial communication between slow nodes can be a real killer here.

        Paul B.

      • In response to your post, I think you're right in saying 72 MHz processors fed to each other by serial connections are a bit preposterous versus the 3 GHz chips we have powering nearly every computer these days. However, what if we start amping those chips up to 500-1000 MHz per chip, throw a better connection between them (I'm assuming something must be faster and easier than serial; I don't know speeds or electronics well enough to guess), add more RAM/EPROM, and voila!

        This would have a much higher chance ...

        • Re: (Score:3, Insightful)

          by Brietech ( 668850 )
          Well, if you take that idea to the limit using modern technologies, you basically wind up with rockin' new Nehalem processors using QuickPath Interconnect (QPI) between them, with PCI Express (serial links) to peripherals. But that's huge, is incredibly power hungry, and is basically the opposite of this architecture. But let's think this over some more. To access L1 cache, you can do it in a single cycle; L2 might be 10-20 cycles, etc. Now going over PCIe, the fastest thing going besides QPI, has a latency ...
        • by aXis100 ( 690904 )

          Agreed. Imagine a bundle of these using 1.2 GHz Intel Atom chips. Still very low power.

      • What we really need to do is put this all on a single chip running at about 3 GHz... oh wait, never mind.
      • by spazdor ( 902907 )

        The penalty for going 'off-chip' is so large and the links between chips suck so much, that a distributed architecture like this just can't compete with a screaming fast 3 GHz single-node

        BUT: If this turns out to be a viable programming and networking paradigm, then we've also got a recipe for arbitrarily scalable cpus. For if it's so expensive to go off-chip, why can't we just print entire silicon wafers tiled with these things? And then stack them? The only real limit on processor density would be the power and cooling requirements.

        For that to become the Better Technology, all we'd need is a proof-of-concept that shows software can run efficiently on this kind of "flat" hardware.

      • I believe the later Transputers had matrix switches to connect the individual boards, so that the topology could be changed dynamically, much like a self-adaptive CPU made from FPGAs. Some steps in problems need different topologies, like grid, pipeline, or hypercube.

    • Re: (Score:2, Redundant)

      by 0123456 ( 636235 )

      That was my first thought, though perhaps they're doing something new. Seems one generation has to forget what the previous generation did before the next generation comes along to reinvent it...

    • by lennier ( 44736 )

      Yay Transputers!

    • by dha ( 23395 ) on Wednesday August 19, 2009 @08:06PM (#29127695)

      I'm part of the project that produced this board.

      I am definitely, yes, old enough to remember the Transputer. And I hacked artificial-life models on the MasPar in the early '90s, which had an architecture in some ways similar to the Connection Machine.

      Although the IXM is indeed 'embarrassingly suitable' for assembly into planar grids, it certainly isn't restricted to that. With right-angle headers, for example, it's easy to make shapes like rings and cubes and so forth.

      When the global computational geometry of a machine is fixed at design time, before the ultimate task is known, routing can easily become a major problem. And general routing is hard. Maybe too hard.

      But part of exploring modular systems in the 'physical computation' space is trying to figure out ways to make the geometry of the particular computer you build better fit the behavior you're implementing, which can help ease the general purpose routing problem.

      And if one really gets into a corner, well, ribbon cable is cheap.

    • Re: (Score:3, Interesting)

      by RoccamOccam ( 953524 )
      Exactly. I have a 256-processor system down in my basement that I built in 1988-89. It's composed of Size 1 (9.3 cm x 2.7 cm) TRAMs (TRAnsputer Modules); each node had a 25 MHz T805 and 4 MB of RAM. Each transputer had four 20 Mbit/s bidirectional serial links. Starting with a single processor connected to the host PC, a downloaded program would follow the defined link topology to boot and program each processor in turn.

      Hardware-wise, it looks like the system described in the article really only trumps the ...
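      That boot-along-the-links scheme ("worm boot") is easy to model. A toy sketch (Python; the topology and the stand-in for "load and start" are invented for illustration, this is not the actual TRAM protocol):

          from collections import deque

          # Push the boot image out along the defined links, breadth-first,
          # starting from the one node wired to the host PC.
          links = {                       # assumed link topology
              "host": ["n0"],
              "n0": ["n1", "n2"],
              "n1": ["n3"],
              "n2": ["n3"],
              "n3": [],
          }

          def worm_boot(image, start="host"):
              booted, frontier = set(), deque([start])
              while frontier:
                  node = frontier.popleft()
                  if node in booted:
                      continue
                  booted.add(node)        # stand-in for "load image, start node"
                  print(f"booting {node} ({len(image)} byte image)")
                  frontier.extend(links[node])
              return booted

          worm_boot(b"\x00" * 4096)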
      • The largest one I've played with was one board with 8 or 16 TRAMs; it fit into what was the PC bus at the time, maybe as old as IDE...

        By any chance, does the second part of your nickname refer to this particular interest of yours? :)

        Paul B.

        • I haven't booted it in several years, as I no longer have a working PC-Transputer interface. The ones that I have rely on an AT-bus. I probably still have some of the old parallel-link interface chips, so I should try to build something for it.

          By any chance, does the second part of your nickname refer to this particular interest of yours? :)

          Absolutely!

          • by PaulBu ( 473180 )

            By any chance, does the second part of your nickname refer to this particular interest of yours? :)

            Absolutely!

            Yeah, Hoare's CSP, a refreshing computational model when you actually have to deal with asynchronicity and the speed of light being your limit, before the rest of the world caught up with it (though I always pointed out that the original Ethernet spec was truly relativistic technology, since the minimum packet size was defined by the maximum cable length and 'c'! :) ). It was fun attempting to simulate relativistic hardware in a Lisp implementation of Occam.
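            That Ethernet aside works out in a few lines, using the usual rounded 10BASE5 textbook figures (the velocity factor and maximum extent are assumed):

                c = 3.0e8              # speed of light, m/s
                v = 0.66 * c           # assumed signal speed in coax
                max_extent_m = 2500    # classic maximum path with repeaters
                rate_bps = 10e6        # original Ethernet bit rate

                round_trip_s = 2 * max_extent_m / v   # worst-case collision round trip
                bits_in_flight = round_trip_s * rate_bps
                print(f"round trip ~ {round_trip_s * 1e6:.1f} us, ~ {bits_in_flight:.0f} bits")

                # The standard rounds this up (plus repeater and electronics delays)
                # to a 512-bit slot time, hence the famous 64-byte minimum frame.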

            Since then, moved to even more exotic physics/tech, but still ...

            • by kohaku ( 797652 )

              I always end up plugging this (I know a guy who works there), but if you didn't know about it, David May has a new outfit. [xmos.com] It's very much in the same vein as the Transputer, and it's still based on CSP. You may want to check it out!
    • Yep - that was my first thought too. A good idea that never really took off. Heck, even Atari had their Transputer Workstations, but they only got as far as universities as I remember. I did see one demoed at a UK computer show (PCW maybe?) running some Kodak software for image manipulation, and the speed and quality of the images were amazing for the time.
      • by mdda ( 462765 )

        I was there on a summer holiday job, at a small Cambridge company called Perihelion.

        Oddly enough, when the company went bust, I was one of the few people who knew and/or cared what was in the physical building - and bought quite a few of the assets ("box of components #6") from the receiver/liquidator. I still have some transputers lying around (a friend and I managed to sell off a bunch of transputer modules over the course of the summer).

        And I still have the old Perihelion sign... Those were the days.

  • Wow (Score:5, Funny)

    by $RANDOMLUSER ( 804576 ) on Wednesday August 19, 2009 @05:50PM (#29126593)
    Can you imagine a Beowulf cluster of...oh...wait...never mind.
  • Neat (Score:2, Interesting)

    Are they hiring people to write an OS for it? Eventually all of those nodes need to be able to talk to a video card, display something on a screen, talk to a network card and communicate with the network in a fashion that the general public will expect.

    I wouldn't even do it for the money. Provide me with a suitable environment and I would do it just because it would be enjoyable. I cannot do it while sleeping on the street and eating peanut butter and jelly, though.

    I am trying to figure out if it would be ...

    • by Delwin ( 599872 ) *
      Supercomputer OS's have been doing it for a while.
    • Re: (Score:3, Interesting)

      by ZackSchil ( 560462 )

      I have mod points and I was going to re-mod this post as something other than Troll, but none of the options fit any better.

      There should be mods like "+0 Weird" or "+0 Rambling coherently".

  • Guarantee it comes short
    • Re: (Score:3, Insightful)

      by commlinx ( 1068272 )

      I'd guess, from the 14-pin connectors and the fact that most smaller ARM microcontrollers can't do parallel data transfers under DMA, that they're using the SPI bus, which may run at up to 72 Mbps. Of course, that would also mean the bus either needs to be shared by every device or operated in a token-ring style with the associated propagation delays. I'd guess the latter, because you'd be pushing to get 72 MHz SPI data across a large number of devices due to the capacitance it would introduce to the transmission line.

      All in all ...
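      A quick sketch of what those propagation delays might look like if that guess is right (Python; the link rate and per-node forwarding cost are assumptions, not measurements):

          # Store-and-forward ring estimate for a guessed SPI-style link.
          spi_rate_bps = 18e6     # assume SPI clocked at 72 MHz / 4
          frame_bits = 64 * 8     # one 64-byte frame
          per_hop_cpu_s = 1e-6    # assumed forwarding overhead per node

          def traverse_time(n_hops):
              return n_hops * (frame_bits / spi_rate_bps + per_hop_cpu_s)

          for n in (1, 8, 64):
              print(f"{n:3d} hops: {traverse_time(n) * 1e6:8.1f} us")

      At 64 hops that is pushing 2 ms end to end, which is the "associated propagation delays" made concrete.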

  • David Ackley brags, "We have a CPU, RAM, data storage and serial ports for connectivity on every two square inches."

    That sounds kinda expensive to me, even at only 72MHz/16K/128K per module.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      I agree with you. It's ludicrous when the article states that the team has no "data" comparing the system to an Intel Core Duo chip. There is no need to collect data; basic models from the specs show how pointless this exercise is.

      Let's just consider how many little chips equal the power of a 3 GHz chip. There's no direct comparison, but the ratio of clock speeds is about 50:1. So about 50 boards is roughly equivalent in clock cycles to a $100 chip. I hope those boards cost less than $2 apiece.

      One could ...
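      The arithmetic above, made explicit (clock ratio only; this deliberately ignores IPC, memory bandwidth, and whether the workloads are comparable at all):

          fast_hz, node_hz = 3e9, 72e6
          ratio = fast_hz / node_hz                # ~41.7, i.e. "about 50:1"
          print(f"clock ratio ~ {ratio:.0f}:1")
          print(f"break-even board price vs. a $100 chip: ${100 / ratio:.2f}")   # ~$2.40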

    • Re: (Score:2, Informative)

      David Ackley brags, "We have a CPU, RAM, data storage and serial ports for connectivity on every two square inches."

      That sounds kinda expensive to me, even at only 72MHz/16K/128K per module.

      Well, it seems like Ackley misspoke (or was misquoted). The actual dimensions from one of their Official Retailers [liquidware.com] are 1.87" x 1.87" x 0.25". More like "2 inches square" (or 4 square inches) as opposed to "2 square inches". But at $55 each they are definitely not going to out-price/perform any Intel/AMD desktop chips on this first production run. But that's not what they're aiming for, judging from the inspired rhetoric on their main site and their official retailer's site. They're more about a paradigm shift ...

    • by dha ( 23395 )

      David Ackley brags, "We have a CPU, RAM, data storage and serial ports for connectivity on every two square inches."

      Well, I want to say "No brag just fact."

      Except it's not quite fact: What I actually said was "under two inches squared" -- which is closer to four square inches.

      But hey, I'm glad I didn't say "under 50mm squared".

      Also, the specs got a little muddled. The raw hardware on the current board has 58KB RAM, 512KB flash for program store, and 16KB EEPROM for data.

      • by tolan-b ( 230077 )

        I'd be interested to know where you're planning to go with this, or I suppose maybe more to the point what you're planning to learn as you iterate the idea. I'm a bit young to really remember the transputer (I think I was probably about 12 when I saw the Atari transputer setup in a magazine; I nearly wet my pants ;) but as you mention you're already aware of the design, and from what I understand this seems quite similar. Do you think the transputer was just ahead of its time, or do you plan to move in another ...

  • That thing is hella cool. The longer demo is better, you may have to rtfa and click 3 links deep or so though.
  • ... to think that I wanted to cook up something similar 5 yrs ago!
    Couldn't drum up enough interest among my fellow engineering colleagues: too interested in getting a shit temp job (after a master's degree, BTW)

    Oh well, glad someone is doing it...

    If you're in engineering and want to do something, forget Italy... run, run away as fast as you can!
    And yes, even if you're scared to do it, my friend, run... get yourself out... once you're out, you can't imagine how much better it is outside... orders of magnitude!!!

  • Potentially this could still run a normal OS such as Linux, though I imagine it would need an interesting bootloader. The distributed operating system route could also be taken, using an OS similar to 'Plan 9'.

    I think the magic words for this motherboard design are Parallel Computing!

    Disclaimer: I have no CS degree and a very basic understanding of OSes and hardware.

  • by wjsteele ( 255130 ) on Wednesday August 19, 2009 @06:19PM (#29126819)
    into the 3rd Dimension. Imagine if they also had connectors on the top and bottom of the unit. We could then start to do real matrix programming: one CPU could talk to 6 and traverse the levels or talk to peers depending on the need. If they were also on the diagonals, they could get even more complex. More like the human brain.

    Wow, I'd really like to have about 512 of these to play around with! I can see doing something very cool with these and a little bit of fuzzy logic or neural network programming. I just wonder how addressing is handled.

    Bill
    • Comment removed based on user account deletion
    • We kind of already have that in the form of multilayer PCBs / daughterboards.

      Although BGA chips have all the pinouts on the bottom, the motherboard is typically composed of multiple layers of traces and vias which makes the routing feasible on densely populated boards.

      As for stacking the boards and chips in 3 dimensions, daughterboards which plug directly into the parent board have been around for ages. You don't often see them in regular PCs because they just aren't necessary to satisfy the space requirements ...

  • by Anonymous Coward on Wednesday August 19, 2009 @06:19PM (#29126837)

    Cell# 3712: Hey guys, have you noticed that #1914 never seems to accept requests?

    Cell# 141: Well, he does sometimes reject.

    Cell# 4439: I don't route to him very much anyway.

    Cell# 1142: He rejected the last three of mine. I kind of agree.

    Cell# 3712: So what should we do about it?

    Cell# 141: Can't we just fry him? There's plenty of us anyway.

    Cell# 3712: That's a bit harsh.

    Cell# 4439: Ok, I got the records here showing that he rejected 90% of requests the last week but allocated two hundred percent of average power to himself.

    Cell# 3712: That motherfucker, let's do it then.

    Cell# 1142: I don't really want to fry him, but I don't mind that much if you do.

    Cell# 141: Ok, gather up all your spare power, STAT!

    • by Yvan256 ( 722131 )

      You forgot to link to the previous relevant slashdot story [slashdot.org].

    • Re: (Score:3, Interesting)

      by dha ( 23395 )

      I love it.

      Note that there's more truth in this fantasy than one might think, at least potentially. IXM nodes don't have the ability to fry each other, but they do supply each other with power, and that power switching is under software control.

      So in many configurations, IXM nodes absolutely and literally do have the power to reach a consensus about a misbehaving neighbor and shut it down.
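      A toy sketch of what such a consensus rule could look like in software (Python; the thresholds, the observations, and the majority rule are all invented for illustration, this is not IXM firmware):

          # Each neighbor votes to cut a node that hogs power while rejecting work.
          def votes_to_cut(reject_rate, power_share):
              return reject_rate > 0.9 and power_share > 2.0

          # (reject rate, power drawn vs. grid average) as seen by four neighbors
          observations = [(0.92, 2.1), (0.95, 2.3), (0.91, 2.0), (0.50, 1.0)]
          votes = sum(votes_to_cut(r, p) for r, p in observations)

          if votes >= 3:   # majority of the four neighbors agrees
              print("consensus reached: gating power to the misbehaving node")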

  • Mainframe (Score:3, Insightful)

    by sexconker ( 1179573 ) on Wednesday August 19, 2009 @06:51PM (#29127117)

    So it's a small, shitty mainframe.

  • I notice some people are commenting that Linux or BSD etc. would work on this hardware, but I would have thought an OS like TRON [web-jpn.org] would be more ideal.
  • shit design (Score:1, Insightful)

    by Anonymous Coward

    As one poster said, it would be much more sensible to integrate multiple cores onto an FPGA, and put the real time into the implementation of a bus that could realistically move data between the cores.

    Not to mention that their choice of parts was sub-optimal. The Cortex-M3 is not the suggested replacement for the ARM7 by accident: it offers 1.25 DMIPS/MHz (compared to this ARM's 0.89 DMIPS/MHz), an instruction set with optimized code density versus performance, more predictable interrupt handling, an MPU, probably ...

  • Great (Score:4, Interesting)

    by British ( 51765 ) <british1500@gmail.com> on Wednesday August 19, 2009 @08:02PM (#29127657) Homepage Journal

    You have just re-invented Lego. Seriously, I like this idea. Want a gaming system? Put these together. Want a server? Put those together instead. Some component breaks? Swap it out.

    • by Lorkki ( 863577 )

      Yeah, I mean, wouldn't it be great if we had motherboards with connectors on them that you could use to stick in, say, more memory or like even a massively parallel stream processor for graphics, or an additional NIC or sound ca...

      Oh, right.

    • by Alef ( 605149 )
      I used to imagine desktop computers of the future as a bowl of small spheres, communicating through their contact surfaces. Whenever you need to upgrade for more processing power, just buy a bag of extra spheres and pour them into the bowl. Could be stylish as well.
      • by British ( 51765 )

        I don't see any reason why that wouldn't work. You would just need the spherical surface to have enough universal contacts so it can touch all neighboring spheres. Inside said sphere is your CPU, Capsela piece, or whatever. I'm sure some college is doing that for a fun project. Ball pit computing.

  • I've had exactly this idea for a couple years now, if not anywhere near a workable design. If it's done properly, it could be very interesting.

    It being done properly would require:
    * Distributed power
    * Very high speed and high-reliability inter-module communication
    * Hotplugging
    * Standardized inter-module APIs and connectors
    * An OS capable of organizing the entire system seamlessly (I have my ideas) and securely (I don't)

    I can't speak to the technical abilities of such a system, but if it were running it could ...

  • Redundancy (Score:2, Insightful)

    Can they make the cluster survive the destruction of several nodes?

    There are many situations where this would be beneficial such as space craft design and military electronics. Even with several nodes severely damaged, the machine can re-route processing to the remaining nodes. Although overall processing speed might be reduced, there will be no loss of functionality.
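    The routing half of that claim is just graph connectivity, as a minimal sketch shows (Python; the 4x4 grid and the failure pattern are assumed for illustration):

        from collections import deque

        # Knock three nodes out of a 4x4 grid and check that the
        # survivors can still reach each other.
        SIZE, dead = 4, {(1, 1), (1, 2), (2, 1)}   # assumed failure pattern

        def reachable(start):
            seen, queue = {start}, deque([start])
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < SIZE and 0 <= ny < SIZE
                            and (nx, ny) not in dead and (nx, ny) not in seen):
                        seen.add((nx, ny))
                        queue.append((nx, ny))
            return seen

        alive = SIZE * SIZE - len(dead)
        print(f"{len(reachable((0, 0)))} of {alive} surviving nodes still connected")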

  • Buy Your Own (Score:1, Interesting)

    by Anonymous Coward
    I buy equipment from these guys; glad to see they are still at it. Why read about it when you can buy your own copy of this project [liquidware.com]?
  • by fahrbot-bot ( 874524 ) on Wednesday August 19, 2009 @11:27PM (#29129165)
    Replicators [wikipedia.org]. First thing that popped into my mind.
    Give those "Illuminato X Machina" things legs and we're all HOSED.
  • I designed that a good 10 years ago as a means to multiply the use of military comms equipment. The idea was to combine processing units if more computing horsepower was required in theater. However, it emerged that volume was more interesting than flexibility (why sell one device if you can get paid for two?).

  • Looks like a cute idea, but a single modern CPU will easily outperform a whole table of these processors, which makes the whole exercise a bit pointless. This is especially true for problems that aren't embarrassingly parallel. A single processor will be much easier to program, too. If you want to go faster than a single processor, the most effective way is to combine already-fast CPUs with lots of memory and a fast interconnect network, preferably using a cache-coherent NUMA architecture. Those systems already ...

    • I suspect it will be tasked with things that the normal CPU isn't the best solution for.
    • Agreed. There seems to be a surprising assumption about how 'parallel' most computing tasks are. Any time the output of one computation depends on the input from another computation all the parallel computing in the world won't save you, because those computations must be performed in temporal sequence.

      There is also the cost of distribution - any time you split up a task to make it parallel, you have to spend effort to break it up and then reassemble the parts. This effort grows with how many parts into which ...
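      The point about dependent computations is Amdahl's law; a minimal sketch (Python; p is the fraction of the work that can run in parallel):

          # speedup(N) = 1 / ((1 - p) + p / N)
          def speedup(p, n):
              return 1.0 / ((1.0 - p) + p / n)

          for p in (0.5, 0.9, 0.99):
              print(f"p={p}: 64 nodes -> {speedup(p, 64):.1f}x, "
                    f"cap as N grows: {1 / (1 - p):.1f}x")

      Even a 99% parallel job tops out at 100x no matter how many cells you add.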

  • Together, they form...

    Wyld Stallyns?

    ...a networked cluster with significantly greater power than the individual modules.

    I think my version would have been better.

  • Okay, question one is why they are underclocking (or using really cheap versions of) the ARMs. I know they are pretty close to a GHz for expensive ones, and mass-produced gear (very price sensitive) doesn't go below about 200 MHz.

    The second is why they aren't using a fractal grid, i.e.:

    [uuencoded ASCII-art diagram of the proposed grid ("Ascii_art.txt"), corrupted in archiving]

  • Turns out Asimov was right! Pretty soon it'll be to the point where the transistors can't get any smaller, and we'll have to turn to a pattern just like this. When the CPU can't get any stronger, you just need more of them. Personally I'd like to be able to go out and keep buying cheap 1GB RAM modules over and over, but I can't, because everything is so integrated that I have to buy a new board, which means a new chip and GPU etc. The all-in-one design is always superior right at the moment, but I think modular ...
  • I can't find the reference, but I remember seeing something like that using 'building blocks' in an old Byte, early '80s.

  • I have for quite some time wondered why we have these monolithic motherboards rather than some kind of connector between multiple smaller boards.

    Hell, is this not somewhat similar to a mainframe, in that if two of these were hooked up to a storage module (hard drive/SSD), they could exchange data on request without affecting the rest of the cluster much?

    Hmm, if it had enough flash, could one suspend a whole process to a card, remove said card, pop it in somewhere else, and resume?
