
Rethinking Computer Design For an Optical World

Posted by timothy
from the optical-floptical dept.
holy_calamity writes "Technology Review looks at how some traditions of computer architecture are up for grabs with the arrival of optical interconnects like Intel's 50Gbps link unveiled last week. The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors."
  • Here we go again (Score:5, Informative)

    by overshoot (39700) on Wednesday August 04, 2010 @02:55PM (#33141672)
    This is eerily reminiscent of Intel's flirtation with Rambus: they were so focused on bandwidth that they sacrificed latency to get it. Yeah, the Pentium4 series racked up impressive GHz numbers but the actual performance lagged because the insanely deep Rambus-optimized pipeline stalled all the time waiting for the first byte of a cache miss to arrive.

    Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

    Now, peripherals are another matter. But if bandwidth were all it took, we'd be using 10 Gb/s PCI Express for memory right now.
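A quick sketch of why the comment's point holds: a cache-line fetch is so small that first-byte latency dominates total fetch time, so raising link bandwidth barely helps. All numbers below (latency, line size, link speeds) are illustrative assumptions, not figures from the thread.

```python
# Why bandwidth alone doesn't fix cache misses: transfers are tiny,
# so the first-byte latency dominates. Numbers are assumptions.
def miss_time_ns(latency_ns: float, line_bytes: int, bandwidth_gbps: float) -> float:
    """Total time to fetch one cache line: first-byte latency + transfer time."""
    transfer_ns = line_bytes * 8 / bandwidth_gbps  # bits / (Gb/s) gives ns
    return latency_ns + transfer_ns

# 64-byte cache line, assumed 100 ns first-byte latency:
print(miss_time_ns(100, 64, 10))  # 10 Gb/s link: ~151.2 ns total
print(miss_time_ns(100, 64, 50))  # 50 Gb/s link: ~110.2 ns -- 5x bandwidth, ~27% faster
```

Quintupling the bandwidth shaves only about a quarter off the miss time under these assumptions, which is the commenter's argument in miniature.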

  • Re:Here we go again (Score:5, Informative)

    by demonbug (309515) on Wednesday August 04, 2010 @03:03PM (#33141802) Journal

    This is eerily reminiscent of Intel's flirtation with Rambus: they were so focused on bandwidth that they sacrificed latency to get it. Yeah, the Pentium4 series racked up impressive GHz numbers but the actual performance lagged because the insanely deep Rambus-optimized pipeline stalled all the time waiting for the first byte of a cache miss to arrive.

    Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

    Now, peripherals are another matter. But if bandwidth were all it took, we'd be using 10 Gb/s PCI Express for memory right now.

    I was thinking the same thing regarding latency and remote memory. If you've got your memory 1 physical meter away, you're already looking at something like 6.6 ns round-trip latency (in a vacuum) just for light traveling that physical distance; seems like once you include switching plus getting to/from the optical interconnect you're looking at some pretty serious latency issues compared to onboard RAM (I think DDR3 SDRAM is on the order of 7-9 ns).
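The round-trip figure above is easy to check from first principles; this sketch assumes propagation at vacuum light speed, as the comment does (fiber would be slower).

```python
# Back-of-envelope round-trip latency for memory a physical distance away,
# assuming propagation at c in vacuum (an optimistic lower bound).
C_VACUUM = 299_792_458  # speed of light, m/s

def round_trip_ns(distance_m: float, speed: float = C_VACUUM) -> float:
    """Round-trip propagation delay in nanoseconds for a one-way distance."""
    return 2 * distance_m / speed * 1e9

print(round_trip_ns(1.0))  # ~6.67 ns for memory 1 m away, matching the comment
```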

  • Re:Getting Entangled (Score:3, Informative)

    by Rakishi (759894) on Wednesday August 04, 2010 @03:28PM (#33142128)

    No known process allows for information transfer at speeds faster than light. Including quantum entanglement. Stop watching so much science fiction and go read up on what it actually does instead.

  • Re:LightPeak (Score:2, Informative)

    by Anonymous Coward on Wednesday August 04, 2010 @03:52PM (#33142546)

    No, the lag would be stupid.

    No the lag would not be stupid, just imperceptible. No, really. A ten meter cable will delay data sent to a Remote GPU (tm) by fifty nanoseconds. Not milliseconds. Not microseconds. Nanoseconds. You can't perceive that. Not in your wildest, most fevered gamer dreams.

    Contemporary GPUs couldn't accomplish this because they frequently interact with the host CPU in a synchronous manner. I'm guessing that is the point of the "rethinking computer design" topic.
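The 50 ns figure for a 10 m cable quoted above is consistent with propagation through glass fiber rather than vacuum; the refractive index of ~1.5 used here is an assumption.

```python
# One-way propagation delay in an optical fiber. Light in glass travels
# at roughly c/n; n ~= 1.5 is an assumed refractive index.
C_VACUUM = 299_792_458  # m/s
N_FIBER = 1.5           # assumed refractive index of the fiber

def one_way_delay_ns(length_m: float, n: float = N_FIBER) -> float:
    """Propagation delay in nanoseconds over a fiber of the given length."""
    return length_m * n / C_VACUUM * 1e9

print(one_way_delay_ns(10))  # ~50 ns for a 10 m cable, matching the comment
```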

  • Re:LightPeak (Score:5, Informative)

    by The Master Control P (655590) <<ejkeever> <at> <nerdshack.com>> on Wednesday August 04, 2010 @03:59PM (#33142652)
I recommend reading the programmer's guide [nvidia.com] to a modern graphics architecture; caching is essential to them.

    Modern GPU architectures face the same clock-speed/bus-speed disparity and memory-latency problems as CPUs, and have taken their response much further. They have several thousand registers per core and a cache of L1-like size and speed per processor group. Cache misses carry a typical penalty of several hundred cycles.
  • Re:LightPeak (Score:1, Informative)

    by Bitmanhome (254112) <bitman&pobox,com> on Wednesday August 04, 2010 @07:58PM (#33145436)

50 ns corresponds to a frequency of 20 MHz. Since modern buses (RAM and PCI) run at 100-500 MHz, I'd have to say you would most certainly notice that.
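The conversion in the comment above checks out; here is the arithmetic, with the bus-speed comparison expressed as cycles lost (the 400 MHz figure is one point within the commenter's 100-500 MHz range).

```python
# A delay of t nanoseconds corresponds to a frequency of 1/t,
# i.e. 1000/t_ns in MHz.
def delay_to_mhz(delay_ns: float) -> float:
    """Frequency whose full period equals the given delay."""
    return 1e3 / delay_ns

print(delay_to_mhz(50))      # 20.0 MHz
print(50 * 400 / 1000)       # 20.0 -- a 50 ns delay spans ~20 cycles of a 400 MHz bus
```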

  • Re:LightPeak (Score:1, Informative)

    by Anonymous Coward on Wednesday August 04, 2010 @08:42PM (#33145792)

    I work in metal fabrication. (Quality engineer at an aerospace company.)

Machined metal is vastly more expensive than sheet metal. This is why the vast majority of system chassis are made of sheet metal of one variety or another. Nearly every system case I have seen has been made of specially pressed sheet metal, and NOT machined components.

    Thus, I would say cost would be a significant obstacle to the implementation of this kind of modular design, if you were to stick to your guns on machined surfaces over formed ones.

Also, as a matter of record, aluminum extrusion has pretty crappy tolerances, because cooling during the extrusion process causes random deformations in the surface of the extrusion. I would offer links to spec sheets for some common aluminum extrusions, but those are all ASME and Pals controlled, and they want money. (Thus handing out the spec sheets would get me and my employer in trouble. You might be able to get some outdated extrusion profile documents from Assistdocs.com.)
