Sun Microsystems Hardware

Sun Unveils Direct chip-to-chip Interconnect

mfago writes "On Tuesday, September 23, Sun researchers R. Drost, R. Hopkins and I. Sutherland will present the paper "Proximity Communication" at the CICC conference in San Jose. According to an article published in the NYTimes, this breakthrough may eventually allow chips arranged in a checkerboard pattern to communicate directly with each other at over a terabit per second, using arrays of capacitively coupled transmitters and receivers located on the chip edges. Perhaps the beginning of a solution to the lag of memory and interconnect speed behind CPU frequency?"
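
As a rough, illustrative back-of-envelope check (the bus figures here are typical 2003 desktop numbers, not from the article): a terabit per second is about 125 GB/s, while a contemporary 64-bit front-side bus at 800 MT/s moves roughly 50 Gb/s:

    $1\,\mathrm{Tb/s} = 10^{12}/8\ \mathrm{B/s} = 125\ \mathrm{GB/s}$
    $64\ \mathrm{bits} \times 800\ \mathrm{MT/s} = 51.2\ \mathrm{Gb/s} \approx 6.4\ \mathrm{GB/s}$

So the claimed link would be on the order of 20x the fastest desktop bus of the day, which lines up with the Pentium 4 figure quoted in the comments below.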


Comments Filter:
  • Re:Timing? (Score:5, Insightful)

    by Usagi_yo ( 648836 ) on Monday September 22, 2003 @09:43AM (#7024011)
    No. What you don't understand or realize is that Bill Joy actually left two years ago, when he "retired" from CTO into a distinguished senior engineer role. This latest move by Bill Joy, full retirement, is merely a continuation of that. At least that's how I see it.
  • by Peridriga ( 308995 ) on Monday September 22, 2003 @09:51AM (#7024084)
    This might be the obvious question, but why hasn't anyone done this before?

    It seems obvious: the edge of one chip has pins, and the chip it will eventually connect to has pins. Instead of routing 20 trace lines to the next chip, why not redesign them so the outputs and inputs of both line up, to reduce the complexity of the design?

    Anyone wanna fill in my mental gap for me?
  • Re:Hmm (Score:2, Insightful)

    by Daniel Dvorkin ( 106857 ) on Monday September 22, 2003 @09:57AM (#7024145) Homepage Journal
    If IBM has it, it will run Linux.
  • Is this new? (Score:4, Insightful)

    by 4im ( 181450 ) on Monday September 22, 2003 @10:00AM (#7024171)

    Sounds a lot like the ol' Transputer (from INMOS), only faster, of course. One could also think of AMD's HyperTransport. So, again, except maybe for the speed, I don't see much innovation here.

    If only people could remember that "terra" has something to do with earth, "tera" is the unit...

  • or it might not (Score:4, Insightful)

    by penguin7of9 ( 697383 ) on Monday September 22, 2003 @10:00AM (#7024172)
    Placing large numbers of chips adjacent to one another has obvious problems with heat and power, in particular when running at those speeds. That, rather than interconnect technology, is probably the main reason we still package up chips in large packages.

    This might be useful for placing a small number of chips close together, in particular chips that may require different manufacturing processes.
  • by ikkonoishi ( 674762 ) on Monday September 22, 2003 @10:00AM (#7024174) Journal
    The main bottleneck is memory access, because of the system bus.

    A direct CPU to RAM connection would improve things dramatically.

    Why do you think L1 cache is so important?
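
To put a rough, purely illustrative number on that memory bottleneck (figures assumed, not from the comment): with a 3 GHz core and roughly 100 ns to fetch a line from main memory over the bus, each miss costs on the order of

    $3\ \mathrm{GHz} \times 100\ \mathrm{ns} = 300\ \text{cycles}$

of potential work, versus a few cycles for an L1 hit -- which is exactly why the cache hierarchy and a shorter CPU-to-RAM path matter so much.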
  • by Jah-Wren Ryel ( 80510 ) on Monday September 22, 2003 @10:02AM (#7024187)
    It has been done before; probably the most recent incarnation is HyperTransport from AMD. The only difference at the 50,000 ft view is that the speeds and feeds are faster. This is an evolutionary step, not a revolutionary or innovative one.
  • Bad math? (Score:5, Insightful)

    by Quixote ( 154172 ) on Monday September 22, 2003 @10:19AM (#7024332) Homepage Journal
    I hate it when the hype overshadows the technical details. Here's a snippet from the article:

    By comparison, an Intel Pentium 4 processor, the fastest desktop chip, can transmit about 50 billion bits a second. But when the technology is used in complete products, the researchers say, they expect to reach speeds in excess of a trillion bits a second, which would be about 100 times the limits of today's technology.

    If a P4 is already doing 50 Gbps (as they say), and this uber-technology will allow 1 Tbps (which is 20x a P4's 50 Gbps), then how is that "100x the limits of today's technology"?

    <shakes head>
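
The parent's arithmetic does check out:

    $\dfrac{1\ \mathrm{Tb/s}}{50\ \mathrm{Gb/s}} = \dfrac{1000}{50} = 20$

so the article's "100 times" presumably refers to some other baseline (a per-pin or per-link figure, perhaps), or is simply loose reporting.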

  • Re:or it might not (Score:2, Insightful)

    by Derivin ( 635919 ) on Monday September 22, 2003 @10:33AM (#7024464)
    Heat will definitely be an issue, but much less power will be required. The majority of the power required by chips is used to push data on and off the chip. It takes a lot of power to move a signal down a 25 micron PCB trace.

    This technology (if it pans out) will most likely enter the private sector in cell phones, DVD players and other small consumer electronics that are produced in very large numbers of units.

    Silicon wafer production has always had one major problem: impurities. The ability to use more of the wafer to produce smaller chips that can later be 'put back together' in arrays that may not be any larger than the original single-chip solution has the potential to be much cheaper to manufacture in mass quantity.

    Granted, this is part of the theory behind Six Sigma, which does not always work out.
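
The yield point can be made concrete with the simplified Poisson yield model, using made-up numbers (these are not from the comment or the article): with defect density $D$ and die area $A$,

    $Y = e^{-D A}$

so at $D = 0.5$ defects/cm^2, a single 2 cm^2 die yields $e^{-1} \approx 37\%$, while each of four 0.5 cm^2 dies yields $e^{-0.25} \approx 78\%$ -- if a cheap chip-to-chip interconnect lets the small dies be stitched back together, much more of each wafer ends up as sellable silicon.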
  • Re:Is this new? (Score:3, Insightful)

    by Ella the Cat ( 133841 ) on Monday September 22, 2003 @10:50AM (#7024611) Homepage Journal
    It doesn't sound like the Transputer to me. Sure, they resemble each other in that you can build a 2D array of chips from them by design, but you miss (or inadvertently downplay) that the innovation is in the fundamental electronic engineering issue of what happens in the bits of circuitry that drive the pins/pads. The Transputer used asynchronous links, conventional pins, and a nice but conventional memory interface; the Sun chip is doing something new, or if not new, seldom seen and highly promising. Ivan Sutherland knows his stuff.
  • Re:FINALLY! (Score:5, Insightful)

    by chrysrobyn ( 106763 ) * on Monday September 22, 2003 @11:30AM (#7024970)
    > Someone gets it. As an Electrical Engineer-in-training, I was always frustrated with people who got these big bad processors and wondered why their improvement was minimal.
    > They never quite grasped that the biggest bottleneck is between the processor and memory.

    Don't get too frustrated with this. There will always be people who don't understand something fundamental to your training. That's why you're trained: to understand these non-obvious fundamentals. Now that you understand that a CPU has to be fed data in order to process it, it's obvious, but a PHB wouldn't necessarily come to that conclusion on his own.

    > My EE instructor always said that they could improve performance by doing one simple thing: make the interconnects on the motherboard between the motherboard and RAM rounded instead of cornered. You could then increase bus speed as you wouldn't have magnetic loss at the corners like you do now.
    > You fix that, and you can see a SUBSTANTIAL improvement in performance. The only thing that can be done beyond that is to get a Platypus drive (a solid-state "hard drive", Quikdata, made from DDR RAM). Then you reduce your access time to your hard drive from milliseconds to nano/microseconds.

    Your EE instructor will tell you lots of things that can help performance. For example, making the L2 cache the size of main memory. Just because it helps performance doesn't make it worth the price. Rounded traces on the PCB are not easy to accomplish, and their benefits may not outweigh the added price -- even for exceptionally high-end servers. Without looking at the math, I would like to toss "10% performance adder, 50% cost adder" out into the air, and say that most people would rather save the dough. Another factor to consider is reliability. Intuition suggests to me that reliability would go up without sharp edges, but intuition also tells me that modelling board coupling on a 4-layer board would be a real pain in the ass, to say nothing of a server-class 6- or 8-layer (or higher) board, if you have to model curved structures. You might not find an easy way to capitalize on your wonderful curved-wire performance. Not only do you have to worry about your slowest path, but your quickest one can't arrive so quickly that the other chip can't sample the previous output.

    Take care in your classes when you use the word "only". Taking advantage of our wonderful next-generation 64-bit processors and multiple gigs of RAM, we could conceivably copy the contents of the hard drive to main memory (especially if we are only concerned with 1-4 gigs of data in a low-cost solution). Here, we get the enhanced bandwidth of main memory instead of having to kludge through the southbridge, PCI controller, and IDE/SCSI-to-RAM interface and back.

    There are many things that improve system performance -- and the system is the only thing that matters -- rounded traces and SSDs (solid-state drives) are only the beginning. Depending on the application, a larger L3 cache may make more difference, or a wider, faster CPU-to-CPU interface, or a pair of PCI controllers hanging off the southbridge for twice the bandwidth, or integrating the northbridge onto the CPU, or ...

    The best engineering advice I can give you is that the answer is always "It depends". You'll spend the next 5-30 years of your life learning how to answer the follow-on question, "Depends on what?". Almost everything has advantages and disadvantages, and there are few absolutes.

    The "Someone gets it" and "They never quite grasped" attitude may get you in trouble. Being proactive and explaining and educating instead will likely be more effective.

  • Re:FINALLY! (Score:2, Insightful)

    by ingenthr ( 34535 ) on Monday September 22, 2003 @11:46AM (#7025111) Homepage
    Quite right. Sun gets this quite well. Look for the articles on the Niagara processor. People always look at it as an SMP on a chip and try to compare it to hyperthreading or the stuff that IBM has done. However, what Sun is doing is 'fast switching' between threads based on stalls for memory access. This is another way of solving the problem you mention.

    If the software and programming model are capable of exploiting this (and most software run on Solaris is), you effectively trade off bandwidth (easy to obtain) for latency (very, very hard to obtain).

    It's cool stuff-- I'm looking forward to seeing it released as a product.
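
A rough way to see that bandwidth-for-latency trade (illustrative figures, not Niagara's actual parameters): if each thread does about c cycles of useful work between cache misses and each miss stalls for about m cycles, a core needs roughly

    $N \approx 1 + \dfrac{m}{c}$

hardware threads to stay busy. With, say, c = 100 and m = 300, that is about 4 threads -- the core stays fed, but only because it keeps several memory requests in flight at once, i.e. it spends bandwidth to hide latency.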
  • by Anonymous Coward on Monday September 22, 2003 @01:25PM (#7025998)
    You need more training. Or less ego.

    Look at a recent P4 motherboard for 45-degree traces. Look at any previous motherboard with RAMBUS (even a Nintendo 64 from November 1999) for curved traces.

    It's not so much a question of knowing about something as it is of implementing it. If it isn't affordable, it isn't worth it, because if it isn't affordable, you might be able to buy two affordable ones for the same price. And you're going to have trouble beating the performance of two systems with one.

    Finally, to make a hard drive from RAM is to totally lose track of the idea of what a hard drive is. Hard drives are supposed to be slower, but they make up for it with lower cost per megabyte. Instead of a RAM drive, just put more RAM in your machine; it will use it as a disk cache/backing store and get you all the performance you want.

    Also, at 120us per command register access, you really cannot initiate any transfer over ATA in under 0.75ms.
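
For what it's worth, taking that 120 us-per-register-access figure at face value, the 0.75 ms floor is presumably just taskfile overhead: issuing one ATA command means writing roughly half a dozen command registers plus reading status, so

    $\approx 6 \times 120\ \mu\mathrm{s} \approx 0.72\ \mathrm{ms}$

before the drive has transferred a single byte.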
