The Internet | Hardware | Technology

Multicore Chips As 'Mini-Internets'

An anonymous reader writes "Today, a typical chip might have six or eight cores, all communicating with each other over a single bundle of wires, called a bus. With a bus, only one pair of cores can talk at a time, which would be a serious limitation in chips with hundreds or even thousands of cores. Researchers at MIT say cores should instead communicate the same way computers hooked to the Internet do: by bundling the information they transmit into 'packets.' Each core would have its own router, which could send a packet down any of several paths, depending on the condition of the network as a whole."
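
A rough, hypothetical sketch of the idea (not the MIT researchers' actual design): in a packet-switched network-on-chip, each core's router forwards a packet one hop at a time across a 2D mesh and, when two minimal directions are available, picks whichever one currently looks less congested. The mesh size, congestion model, and names below are made up for illustration; a real design would also buffer flits, avoid deadlock, etc. This only shows the per-hop, congestion-aware path choice the summary describes.

/* Toy sketch of adaptive routing on a 2D mesh network-on-chip.
 * Hypothetical model, not the MIT design: each router moves the packet
 * one hop toward its destination, preferring the less-busy minimal
 * direction (X or Y) at every step. */
#include <stdio.h>

#define MESH 4                              /* 4x4 mesh of cores/routers */

typedef struct { int x, y; } Node;

static int load[MESH][MESH];                /* pretend per-router congestion */

static int step_toward(int from, int to) { return (to > from) - (to < from); }

/* One routing decision: advance the packet one hop toward dst. */
static Node route_hop(Node cur, Node dst)
{
    int dx = step_toward(cur.x, dst.x);
    int dy = step_toward(cur.y, dst.y);

    if (dx && dy) {                         /* both directions are minimal: adapt */
        Node nx = { cur.x + dx, cur.y };
        Node ny = { cur.x, cur.y + dy };
        cur = (load[nx.x][nx.y] <= load[ny.x][ny.y]) ? nx : ny;
    } else if (dx) {
        cur.x += dx;
    } else {
        cur.y += dy;
    }
    load[cur.x][cur.y]++;                   /* crude congestion accounting */
    return cur;
}

int main(void)
{
    Node dst = {3, 3}, cur = {0, 0};
    printf("(%d,%d)", cur.x, cur.y);
    while (cur.x != dst.x || cur.y != dst.y) {
        cur = route_hop(cur, dst);
        printf(" -> (%d,%d)", cur.x, cur.y);
    }
    printf("\n");
    return 0;
}
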
  • way back machine (Score:5, Insightful)

    by Anonymous Coward on Tuesday April 10, 2012 @11:36PM (#39640341)

    I guess MIT has forgotten about the Transputer....

  • by GumphMaster ( 772693 ) on Tuesday April 10, 2012 @11:37PM (#39640353)

    I started reading and immediately had flashbacks to the Transputer [wikipedia.org]

  • Say what? (Score:2, Insightful)

    by Anonymous Coward on Tuesday April 10, 2012 @11:38PM (#39640361)

    Errr... the internal "bus" between cores on modern x86 chips is already either a ring of point-to-point links or a star with a massive crossbar at the center.

  • by Forever Wondering ( 2506940 ) on Wednesday April 11, 2012 @02:16AM (#39641205)

    I admit that despite being a technical user, I was not aware that only two cores are allowed to "talk" at a given time. I had (erroneously, it would seem) assumed that for a chip with three or more cores to be fully useful, such a switch/router would already have to be in place.

    For [most] current designs, Intel/AMD have multilevel cache memory. The cores run independently and fully in parallel, and if they need to communicate they do so via shared memory. Thus, they all run full bore, flat out, and don't need to wait for each other [there are some exceptions--read on]. They have cache snoop logic that keeps them up to date. In other words, all cores have access to the entire DRAM space through the cache hierarchy. When the system is booted, the DRAM is divided up (so each core gets its 1/N share of it).
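
    To make "they communicate via shared memory" concrete, here is a minimal, hypothetical sketch (plain C11 threads/atomics, not Intel's or AMD's actual mechanism): one thread stores a value and publishes it through an atomic flag, and the cache-coherence protocol is what makes the store visible to a thread running on another core--the software never sends an explicit core-to-core message.

    /* Hypothetical sketch: two threads (nominally on two different cores)
     * communicate purely through shared memory. The release/acquire pair
     * orders the stores; cache coherence makes them visible across cores. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int payload;                  /* ordinary shared data       */
    static atomic_int ready;             /* handoff flag, initially 0  */

    static void *producer(void *arg)
    {
        (void)arg;
        payload = 42;                                       /* plain store */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                                      /* spin until published */
        printf("consumer saw payload = %d\n", payload);     /* prints 42   */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }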

    Let's say you have an 8-core chip. Normally, each program gets its own core [sort of]. Your email gets a core, your browser gets a core, your editor gets one, etc., and none of them waits for the others [unless they do filesystem operations, etc.]. Disjoint programs usually don't need to communicate much [and not at the level we're talking about here].

    But if you have a program designed for heavy computation (e.g. video compression or transcoding), it might be designed to use multiple cores to get its work done faster. It will consist of multiple sections (e.g. processes/threads). If a process/thread so designates, it can share portions of its memory space with other processes/threads. Each thread takes input data from a memory pool somewhere, does some work on it, and deposits the results in a memory output pool. It then alerts the next thread in the processing "pipeline" as to which memory buffer it placed the result in. The next thread does much the same. x86 architectures have some locking primitives to assist with this. It's a bit more complex than that, but you don't need a "router". If the multicore application is designed correctly, any delays for sync between pipeline stages occur infrequently and are on the order of a few CPU cycles.
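
    As a hedged illustration of that pipeline pattern (a made-up two-stage example, not any real transcoder's code): stage 1 fills buffers from a shared pool and "alerts" stage 2 through a mutex-protected mailbox and a condition variable, passing only the index of the buffer it just filled.

    /* Sketch of a two-stage pipeline over shared memory: stage 1 fills
     * buffers and hands their indices to stage 2 through a small mailbox
     * guarded by a mutex; condition variables do the "alert the next
     * thread" step described above. */
    #include <pthread.h>
    #include <stdio.h>

    #define NBUF  4                      /* buffers in the shared pool */
    #define NWORK 8                      /* items to push through      */

    static int pool[NBUF][16];           /* shared memory buffer pool  */
    static int mailbox[NBUF];            /* indices of filled buffers  */
    static int head, tail, count;
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

    static void *stage1(void *arg)       /* e.g. reads/decodes input   */
    {
        (void)arg;
        for (int i = 0; i < NWORK; i++) {
            pthread_mutex_lock(&lock);
            while (count == NBUF)                 /* wait for a free slot */
                pthread_cond_wait(&nonfull, &lock);
            int buf = head % NBUF;
            pool[buf][0] = i * i;                 /* the "work"           */
            mailbox[head % NBUF] = buf;
            head++; count++;
            pthread_cond_signal(&nonempty);       /* alert the next stage */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *stage2(void *arg)       /* e.g. compresses the result */
    {
        (void)arg;
        for (int i = 0; i < NWORK; i++) {
            pthread_mutex_lock(&lock);
            while (count == 0)                    /* wait for a filled buffer */
                pthread_cond_wait(&nonempty, &lock);
            int buf   = mailbox[tail % NBUF];
            int value = pool[buf][0];             /* copy before freeing slot */
            tail++; count--;
            pthread_cond_signal(&nonfull);
            pthread_mutex_unlock(&lock);
            printf("stage 2 got buffer %d holding %d\n", buf, value);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, stage1, NULL);
        pthread_create(&t2, NULL, stage2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }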

    This works fine up to about 16-32 cores. Beyond that, even the cache becomes a bottleneck. Or, consider a system where you have a 16-core chip (all on the same silicon substrate). The cache works fine there. But now suppose you want to have a motherboard that has 100 of these chips on it. That's right--16 cores/chip X 100 chips for a total of 1,600 cores. Now you need some form of interchip communication.

    x86 systems already have this in the form of HyperTransport (AMD) or the PCI Express bus (Intel) [there are others as well]. PCIe isn't a bus in the classic sense at all. It functions like an onboard store-and-forward point-to-point routing system with guaranteed packet delivery. This is how a SATA host adapter communicates with DRAM (via a PCIe link). Likewise for your video controller. Most current systems don't need to use PCIe beyond this (e.g. to hook up multiple CPU chips) because most desktop/laptop systems have only one chip (with X cores in it). But in the 100-chip example you would need something like this, and HT and PCIe already do something similar. Intel/AMD are already working on enhancements to HT/PCIe as needed. Actually, Intel [unwilling to just use HT] is pushing "QuickPath Interconnect", or QPI.

  • Re:Say what? (Score:4, Insightful)

    by TheRaven64 ( 641858 ) on Wednesday April 11, 2012 @07:06AM (#39642163) Journal

    The researchers can't be this far removed from the state of the art

    They aren't. The way this works is a conversation something like this:

    MIT PR: We want to write about your research, what do you do?
    Researcher: We're looking at highly scalable interconnects for future manycore systems.
    MIT PR: Interconnects? Like wires?
    Researcher: No, the way in which the cores on a chip communicate.
    MIT PR: So how does that work?
    Researcher: {long explanation}
    MIT PR: {blank expression}
    Researcher: You know how the Internet works? With packet switching?
    MIT PR: I guess...
    Researcher: Well, kind-of like that.
    MIT PR: Our researchers are putting the Internet in a CPU!!1!111eleventyone
