
Multicore Chips As 'Mini-Internets'

An anonymous reader writes "Today, a typical chip might have six or eight cores, all communicating with each other over a single bundle of wires, called a bus. With a bus, only one pair of cores can talk at a time, which would be a serious limitation in chips with hundreds or even thousands of cores. Researchers at MIT say cores should instead communicate the same way computers hooked to the Internet do: by bundling the information they transmit into 'packets.' Each core would have its own router, which could send a packet down any of several paths, depending on the condition of the network as a whole."
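What the summary describes is essentially a network-on-chip. As a rough illustration only (this is not the MIT design; the mesh coordinates, port names, and the deterministic policy are assumptions), the simplest per-core router uses dimension-ordered "XY" routing, forwarding a packet first along the X axis and then along Y. An adaptive router of the kind the article hints at would instead pick among several candidate output ports based on congestion. The point of the per-core router is that many packets can be in flight on different links at once, instead of one pair of cores monopolizing a shared bus.

    /* Hypothetical sketch of dimension-ordered ("XY") routing on a small
     * mesh of cores.  Illustrative only; the MIT design may use a
     * different topology and an adaptive (congestion-aware) policy. */
    #include <stdio.h>

    struct packet {
        int dst_x, dst_y;   /* destination core's mesh coordinates */
        int payload;        /* data being carried */
    };

    enum port { PORT_LOCAL, PORT_EAST, PORT_WEST, PORT_NORTH, PORT_SOUTH };

    /* Each router forwards the packet along X until the column matches,
     * then along Y until it reaches the destination core. */
    static enum port route_xy(int my_x, int my_y, const struct packet *p)
    {
        if (p->dst_x > my_x) return PORT_EAST;
        if (p->dst_x < my_x) return PORT_WEST;
        if (p->dst_y > my_y) return PORT_NORTH;
        if (p->dst_y < my_y) return PORT_SOUTH;
        return PORT_LOCAL;  /* arrived: hand the packet to the local core */
    }

    int main(void)
    {
        static const char *names[] = { "local", "east", "west", "north", "south" };
        struct packet p = { .dst_x = 3, .dst_y = 1, .payload = 42 };
        int x = 0, y = 0;   /* packet starts at core (0,0) */

        for (;;) {          /* follow the packet hop by hop */
            enum port out = route_xy(x, y, &p);
            printf("router (%d,%d) -> %s\n", x, y, names[out]);
            if (out == PORT_LOCAL) break;
            if      (out == PORT_EAST)  x++;
            else if (out == PORT_WEST)  x--;
            else if (out == PORT_NORTH) y++;
            else                        y--;
        }
        return 0;
    }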


  • by Anonymous Coward on Tuesday April 10, 2012 @11:34PM (#39640323)

    This technology that networks the cores together could serve another purpose as well: diagnosing core failures and limiting the damage they cause. If the cores are all connected to each other, the same data can be processed by routing around a damaged core, which would make overheating or manufacturing defects less catastrophic, almost treatable. Who knows, cores might even become replaceable.

  • by tibit ( 1762298 ) on Tuesday April 10, 2012 @11:46PM (#39640403)

    Alive and well as XMOS [xmos.com] products. I love those chips.

  • Re:Say what? (Score:5, Interesting)

    by hamjudo ( 64140 ) on Tuesday April 10, 2012 @11:51PM (#39640437) Homepage Journal

    Errr... the internal "bus" between cores on modern x86 chips is already either a ring of point-to-point links or a star with a massive crossbar at the center.

    The researchers can't be this far removed from the state of the art, so I am hoping that it is just a really badly written article. I hope they are comparing their newer research chips with their own previous generation of research chips. Intel and AMD aren't handing out their current chip designs to the universities, so many things have to be re-invented.

  • by Osgeld ( 1900440 ) on Wednesday April 11, 2012 @12:19AM (#39640611)

    Pretty good. A few years ago I ran for months on a dual core with one core blown out. It worked fine until I fired up something that used both cores, then it would die.

  • by AdamHaun ( 43173 ) on Wednesday April 11, 2012 @12:23AM (#39640643) Journal

    This sort of technology already exists to an extent. TI's Hercules TMS570 [ti.com] microcontrollers have two CPUs that run in lockstep along with a bus comparison module. I think full fault tolerance might take three CPUs, but this provides strong hardware fault detection in addition to the usual ECC and other monitoring/correction stuff.

    Note that run-time fault tolerance is mostly needed for safety-critical systems. The customers who buy these products do not do so to get better yield; they do so to guarantee that their airbags, anti-lock brakes, or medical devices won't kill anyone. As such, manufacturing quality is very high. Also, die size is significantly larger than that of comparable general-market (non-safety) devices. This means they cost a small fortune. The PC equivalent would be MLC vs. SLC SSDs. Consumer products usually don't waste money on that kind of reliability unless they need it. Now a super-expensive server CPU, maybe...

    [Disclaimer: I am a TI employee, but this is not an official advertisement for TI. Do not use any product in safety-critical systems without contacting the manufacturer, or at least a good lawyer. I am not responsible for damage to humans, machinery, or small woodland creatures that may result from improper use of TI products.]
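    As a rough illustration of the lockstep idea only (a toy C sketch with made-up names, not TI's implementation): two redundant computations of the same step are compared, and any mismatch is flagged as a hardware fault. Two copies can only detect a fault; a third copy is what lets you vote the bad one out and keep running.

        /* Toy model of lockstep execution with a compare stage.
         * Hypothetical; real lockstep cores compare bus traffic in
         * hardware every cycle, not function results in software. */
        #include <stdio.h>
        #include <stdint.h>

        /* The "step" both cores execute.  inject_fault models a bad core
         * by flipping one bit of the result. */
        static uint32_t compute_step(uint32_t input, int inject_fault)
        {
            uint32_t r = input * 2654435761u + 12345u;
            return inject_fault ? (r ^ 0x4u) : r;
        }

        int main(void)
        {
            uint32_t in = 7;
            uint32_t main_core    = compute_step(in, 0);
            uint32_t checker_core = compute_step(in, 1);   /* faulty this time */

            /* The comparison module: results must match exactly. */
            if (main_core != checker_core) {
                puts("lockstep mismatch: hardware fault detected");
                return 1;   /* with only two copies we can detect, not correct */
            }
            puts("results agree: step committed");
            return 0;
        }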

  • by holophrastic ( 221104 ) on Wednesday April 11, 2012 @12:37AM (#39640717)

    Yeah, great idea. Take the very fastest communication that we have on the entire planet, and replace it with the absolute slowest communication we have on the planet. Great idea. And with it, more complexity, more caches, more lookup tables, and more things to go wrong.

    The best part is that it's totally unbalanced. Internet protocols are built for a network that's ever-changing and totally unreliable. The bus, on the other hand, is built on total reliability and a static topology.

    I'd have thought that a pool concept, or a mailbox metaphor, or a message-board analog would have been more appropriate. Something where streams are naturally quantized and sending is decoupled from receiving. Where a recipient can operate at its own rate, independent of the sender.

    You know, like typical linux interactive sockets, for example. But what do I know.
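    For what it's worth, a toy single-producer/single-consumer mailbox along those lines might look like the C sketch below (names are made up; a real inter-core mailbox would be a hardware FIFO, or at least need memory barriers). The sender deposits messages and moves on; the receiver drains them whenever it gets around to it.

        /* Minimal mailbox sketch: sending is decoupled from receiving. */
        #include <stdio.h>

        #define MBOX_SLOTS 8            /* power of two keeps the math simple */

        struct mailbox {
            int slots[MBOX_SLOTS];
            unsigned head;              /* next slot the sender writes  */
            unsigned tail;              /* next slot the receiver reads */
        };

        /* Sender never waits for the receiver; it only fails if the box is full. */
        static int mbox_send(struct mailbox *m, int msg)
        {
            if (m->head - m->tail == MBOX_SLOTS)
                return 0;
            m->slots[m->head % MBOX_SLOTS] = msg;
            m->head++;
            return 1;
        }

        /* Receiver drains messages at its own rate. */
        static int mbox_recv(struct mailbox *m, int *msg)
        {
            if (m->tail == m->head)
                return 0;
            *msg = m->slots[m->tail % MBOX_SLOTS];
            m->tail++;
            return 1;
        }

        int main(void)
        {
            struct mailbox m = { .head = 0, .tail = 0 };
            int v;

            for (int i = 0; i < 5; i++)     /* fast sender fires and forgets */
                mbox_send(&m, i * 10);

            while (mbox_recv(&m, &v))       /* slow receiver catches up later */
                printf("received %d\n", v);
            return 0;
        }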

  • Re:Sounds like... (Score:5, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Wednesday April 11, 2012 @01:02AM (#39640871) Homepage Journal

    For low-level ccNUMA, you'd want three things:

    • A CPU network/bus with a "delay tolerant protocol" layer and support for tunneling to other chips
    • An MTU-to-MTU network/bus which used a protocol compatible with the CPU network/bus
    • MTUs to cache results locally

    If you were really clever, the MTU would become a CPU with a very limited instruction set (since there's no point re-inventing the rest of the architecture and external caching for CPUs is better developed than external caching for MTUs). In fact, you could slowly replace a lot of the chips in the system with highly specialized CPUs that could communicate with each other via a tunneled CPU network protocol.

  • by Joce640k ( 829181 ) on Wednesday April 11, 2012 @03:30AM (#39641443) Homepage

    Also this is exactly what chip makers already do to a great extent: the binning of CPUs by speed is not a targeted process. You make a bunch of chips, test them, and then sell them as whatever clock speed they are robustly stable at.

    Nope. The markings on a chip do NOT necessarily indicate what the chip is capable of.

    Chips are sorted by ability, yes, but many are deliberately downgraded to fill incoming orders for less powerful chips. Bits of them are disabled/underclocked even though they passed all stability tests, simply because that's what the day's incoming orders called for.

  • by Joce640k ( 829181 ) on Wednesday April 11, 2012 @03:51AM (#39641483) Homepage

    This sort of technology already exists to an extent. TI's Hercules TMS570 [ti.com] microcontrollers have two CPUs that run in lockstep along with a bus comparison module. I think full fault tolerance might take three CPUs....

    This is just to detect when an individual CPU has failed. To build a fault-tolerant system you need multiple CPUs.

    nb. The 'three CPUs' thing isn't done to detect hardware faults; it's for software faults. The idea is to get three different programmers to write three different programs to the same specification. You then compare the outputs of the programs, and if one is different it's likely to be a bug.
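    A toy C sketch of that N-version idea (the three 'versions' and the voter here are invented for illustration): three independently written routines compute the same result, and whichever answer has no match is flagged as the likely bug.

        /* Toy N-version voter: three independent implementations of the
         * same spec, with one deliberately buggy so the vote has work to do. */
        #include <stdio.h>

        static int avg_v1(int a, int b) { return (a + b) / 2; }
        static int avg_v2(int a, int b) { return a + (b - a) / 2; }
        static int avg_v3(int a, int b) { return (a + b) / 2 + 1; }  /* buggy */

        int main(void)
        {
            int a = 4, b = 10;
            int r[3] = { avg_v1(a, b), avg_v2(a, b), avg_v3(a, b) };

            /* Flag any version whose result matches neither of the others. */
            for (int i = 0; i < 3; i++) {
                int agrees = 0;
                for (int j = 0; j < 3; j++)
                    if (j != i && r[j] == r[i])
                        agrees++;
                if (agrees == 0)
                    printf("version %d disagrees (%d): likely a bug\n", i + 1, r[i]);
            }

            /* Accept whichever result at least two versions produced. */
            printf("voted result: %d\n",
                   (r[0] == r[1] || r[0] == r[2]) ? r[0] : r[1]);
            return 0;
        }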

  • by 91degrees ( 207121 ) on Wednesday April 11, 2012 @04:33AM (#39641627) Journal
    My Computer Architecture lecturer at university was David May, lead architect of the Transputer. Our architecture notes consisted of a treatise on transputer design.

    Now that multi-processor is becoming standard, it's interesting to see the same problems being rediscovered, and often the same solutions reinvented. Their next problem will be contention between two cores that happen to be running processes that require a lot of communication. Inmos had a simple solution to that one as well.

    Rather a shame that Inmos came up with the technology a quarter of a century too early. I've heard a lot of engineers say wonderful things about them. The reason they weren't a huge success was that nobody had found a need for them yet: back then, extra silicon could be spent making the current single-core generation faster far more easily than it can be now.
  • by morgauxo ( 974071 ) on Wednesday April 11, 2012 @09:05AM (#39642903)
    Years ago I had a single-core chip with a damaged FPU. It took me forever to figure out the problem; my computer could only run Gentoo. Windows and Debian, both of which it had run previously, gave me all sorts of weird errors I had never seen before. I had to keep using it because I was in college and didn't have money for another one, so I just got used to Gentoo. Even in Gentoo, anything which wasn't compiled from scratch was likely to crash in weird ways (a clue). I finally diagnosed the problem a couple of years later when a family member gave me a disk that boots up and runs all sorts of tests on the hardware. It turned out Gentoo worked because, when software was compiled, the build recognized the lack of an FPU and compiled in floating point emulation, as if it were dealing with an old 486SX chip.

    So, anyway, if that can happen I would imagine damaging a single core of a multicore chip is quite possible.
