
Rethinking Computer Design For an Optical World

holy_calamity writes "Technology Review looks at how some traditions of computer architecture are up for grabs with the arrival of optical interconnects like Intel's 50Gbps link unveiled last week. The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling, and moving memory and computational power to peripherals like laptop docks and monitors."
Comments Filter:
  • DRM (Score:4, Interesting)

    by vlm ( 69642 ) on Wednesday August 04, 2010 @02:49PM (#33141602)

    moving memory and computational power to peripherals like ... monitors.

    They mean ever more complicated DRM. Like sending the raw stream to the monitor to be decoded there.

  • by Chirs ( 87576 ) on Wednesday August 04, 2010 @02:58PM (#33141730)

    Even without factoring in the slowdown from the fiber's index of refraction, a distance of 1 meter means roughly 7 nanoseconds of round-trip travel time alone. The bandwidth may be decent, but the latency is going to be an issue over any significant distance.
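
    As a rough sanity check of that 7 ns figure, here is a minimal sketch that treats the signal as travelling at c in vacuum (a real fiber, with a refractive index around 1.5, would be about 50% slower):

        # Round-trip propagation time over an optical link, ignoring
        # the fiber's refractive index (which would add roughly 50%).
        C = 299_792_458  # speed of light in vacuum, m/s

        def round_trip_ns(distance_m):
            """Round-trip travel time in nanoseconds for a request/response."""
            return 2 * distance_m / C * 1e9

        print(round_trip_ns(1.0))   # ~6.7 ns for 1 m, close to the 7 ns above
        print(round_trip_ns(0.3))   # ~2.0 ns for roughly one foot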

  • by derGoldstein ( 1494129 ) on Wednesday August 04, 2010 @03:00PM (#33141754) Homepage
    It would allow you to use components in a more modular way, especially around an office. If you're not a big enough company to have dedicated rendering/encoding servers, you could move the GPU around depending on who's currently doing the work that requires it. Even on a more casual basis, you could have a bunch of laptops with mid-range GPUs and one external GPU for whoever is gaming at the moment. It's just like taking turns with the home-theater rig in the living room -- you don't need to install a huge LCD + amp + speaker system in every room, you just need to take turns.
  • Re:a few extra feet (Score:4, Interesting)

    by Sarten-X ( 1102295 ) on Wednesday August 04, 2010 @03:05PM (#33141818) Homepage

    By my understanding, it's not so much the travel time as the decoding/switching/other electronic time. As one example, consider the switching time of a transistor or photodetector. The gate must collect enough energy to switch from "off" to "on". Switching faster means less time for electrons to enter the gate, so each electron has to deliver more energy, which means raising the voltage. That's why overclocking often involves fiddling with voltages. Unfortunately, with more voltage come more inductive effects, breakdown, and other headaches I don't know enough about to list.

    In contrast, light is much simpler to work with. You can make a light beam brighter without affecting other beams much. There's little chance of a beam breaking through its cable. We can send higher energies to gates with ease. Higher energy means less time to switch, and faster operation.

    Note that I am not a physicist, and not much of an electrical engineer. I may be entirely wrong.
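
    To put some illustrative numbers on that voltage-versus-switching-time trade-off, here is a minimal sketch using the textbook first-order CMOS delay model; every constant in it is assumed purely for illustration, and real devices are far messier:

        # First-order CMOS delay model: the driving transistor has to move
        # roughly C * Vdd of charge, and its saturation current grows about
        # as (Vdd - Vth)^2, so raising the voltage shortens the switching time.
        C_GATE = 1e-15   # gate capacitance, ~1 fF (assumed)
        V_TH = 0.4       # threshold voltage, V (assumed)
        K = 2e-4         # transconductance factor, A/V^2 (assumed)

        def switch_delay_ps(vdd):
            i_dsat = K * (vdd - V_TH) ** 2       # drive current in saturation
            return C_GATE * vdd / i_dsat * 1e12  # delay in picoseconds

        for vdd in (0.9, 1.1, 1.3):
            print(vdd, round(switch_delay_ps(vdd), 1))  # delay falls as Vdd rises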

  • Latency? (Score:2, Interesting)

    by Diantre ( 1791892 ) on Wednesday August 04, 2010 @03:05PM (#33141824)
    IANAEE (I Am Not An Electrical Engineer). Pardon my possible stupidity, but what was keeping us from putting the RAM a few feet from the CPU? The way I understand it, electrons don't move much slower than light. Of course you might lose some current.
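
    Roughly speaking, the signal speed isn't the problem so much as how many memory-bus clock cycles each extra metre costs, on top of the electrical headaches (attenuation, skew, crosstalk) of driving a wide parallel bus that far. A sketch, assuming a copper signal velocity of about 0.6c and a DDR3-1333 bus clock of about 1.5 ns (both ballpark figures):

        # Rough comparison of wire delay against the memory bus clock.
        SIGNAL_SPEED = 0.6 * 3.0e8   # m/s on a copper trace, assumed
        BUS_CYCLE_NS = 1.5           # DDR3-1333 I/O clock period, approx.

        def round_trip_bus_cycles(distance_m):
            round_trip_ns = 2 * distance_m / SIGNAL_SPEED * 1e9
            return round_trip_ns / BUS_CYCLE_NS

        print(round_trip_bus_cycles(0.1))  # ~0.7 cycles for today's ~10 cm traces
        print(round_trip_bus_cycles(1.0))  # ~7 cycles for RAM a few feet away
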
  • Two things... (Score:3, Interesting)

    by MarcQuadra ( 129430 ) on Wednesday August 04, 2010 @03:13PM (#33141920)

    1. The Internet already does that. How much of today's experience is already processed partly in a faraway datacenter? Even users like me use the Internet to pull the pieces of a system apart so that each part lives where it makes the most sense. I have a powerful desktop at home that I RDP into from whatever portable device I happen to be toting. I don't worry about my laptop getting stolen, the experience is pretty fast (faster than a netbook's local CPU, for sure), and I get to mix and match my portable hardware.

    2. This is going to have much more use in a datacenter than it will in a server closet or a home. I can already fit more RAM, CPU, and storage than I need in a typical desktop, and most small businesses run fine on one or two servers. Datacenters, on the other hand, could really take advantage of commoditized RAM and CPU, like they have with SANs for storage. No more 'host box with VM guests': it's time to take the next step, pool RAM and CPUs, and provision them to VMs through some sort of software/hardware control fabric. I think Cisco already knows this, which is why they're moving into building servers.

    Imagine the datacenter of the future:

    Instead of discrete PC servers, each with multiple VM guests and CAT-6 LAN plugs, you have a pool of RAM, a pool of storage, and a pool of CPUs managed by some sort of control interface. Instead of plugging a NIC on the back of each server into your network equipment, the control interface is -built into- the network core, wired right into the backplane of your LAN. Extra CPU power that's not actually being used will be put to work by the control fabric, compressing and deduplicating data in storage and RAM. The control interface will 'learn' that some types of data are better served off the faster set of drives, or out of unused RAM allocated as storage. 'Cold' data would slowly migrate to cheap, redundant arrays.

    Guest systems will change, too. No longer will VMs do their own disk caching. It makes sense for a regular server to put all its own RAM to use, but on a system like this, it makes sense to let the 'host fabric' handle the intelligent stuff. Guest operating systems will likely evolve to speak directly to the 'host' VFS to avoid I/O penalties, and to communicate needs for more or less resources (why should a VM that never uses more than 1GB RAM and averages two threads always be allocated 4GB and eight threads?).
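
    As a toy illustration of that 'control fabric' idea, here is a sketch of the bookkeeping it might do; all of the names and numbers are hypothetical:

        # Hypothetical control fabric handing out pooled CPU and RAM to guests
        # on demand, instead of fixed per-host allocations.
        from dataclasses import dataclass, field

        @dataclass
        class ResourcePool:
            cpus_free: int
            ram_free_gb: int
            allocations: dict = field(default_factory=dict)

            def provision(self, guest, cpus, ram_gb):
                """Grant resources to a guest if the pool can cover the request."""
                if cpus > self.cpus_free or ram_gb > self.ram_free_gb:
                    return False
                self.cpus_free -= cpus
                self.ram_free_gb -= ram_gb
                self.allocations[guest] = (cpus, ram_gb)
                return True

            def release(self, guest):
                """Return a guest's resources to the pool (e.g. when it idles)."""
                cpus, ram_gb = self.allocations.pop(guest)
                self.cpus_free += cpus
                self.ram_free_gb += ram_gb

        pool = ResourcePool(cpus_free=512, ram_free_gb=4096)  # pooled across the rack
        pool.provision("web-vm", cpus=2, ram_gb=1)      # gets only what it actually uses
        pool.provision("batch-vm", cpus=64, ram_gb=256)
        pool.release("batch-vm")                        # freed capacity goes back to the pool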

  • Re:LightPeak (Score:4, Interesting)

    by somersault ( 912633 ) on Wednesday August 04, 2010 @03:30PM (#33142164) Homepage Journal

    CPUs have a small amount of high-speed cache that is faster than the mainboard RAM for working on a set of data, and they swap data between cache and RAM as necessary (kind of like how you page RAM out to your hard drive when you run out of RAM).

    Such a small cache would be useless for GPUs, though, so they need faster RAM to read the massive amounts of texture/vertex/shader/whatever data they have as quickly as possible. They also benefit more from RAM that is optimised for high sequential read speeds, so it does make sense to use RAM that has been specially designed for GPUs if you actually care about graphics performance (I doubt most Mac Mini users do).
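
    A back-of-envelope illustration of why that small cache still wins for CPU workloads, using ballpark latency figures for a circa-2010 desktop part (not measurements):

        # Average access time for a working set that mostly fits in cache
        # versus one that mostly misses to DRAM. Latencies are rough figures.
        L1_NS, DRAM_NS = 1.5, 60.0

        def avg_access_ns(hit_rate):
            """Average access time given the fraction of accesses served by cache."""
            return hit_rate * L1_NS + (1.0 - hit_rate) * DRAM_NS

        print(avg_access_ns(0.99))  # working set fits in cache: ~2.1 ns
        print(avg_access_ns(0.50))  # streaming through big textures: ~30.8 ns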

  • Re:Here we go again (Score:4, Interesting)

    by hackerjoe ( 159094 ) on Wednesday August 04, 2010 @03:48PM (#33142482)

    You people are not thinking nearly creatively enough. The article doesn't make it clear why you'd want to move your memory farther away -- it would increase latency, yes, but more to the point, what else are you going to put that close to the CPU? There isn't anything else competing for the space.

    Here's a more interesting idea than just "outboard RAM": what if you replaced the RAM on a blade with a smaller but faster bank of cache memory, and for bulk memory had a giant federated memory bank that was shared by all the blades in an enclosure?

    Think multi-hundred-CPU, modular, commodity servers instead of clusters.

    Think taking two commodity servers, plugging their optical buses together, and getting something that behaves like a single machine with twice the resources. Seamless clustering handled at the hardware level, like SLI for computing instead of video if you want to make that analogy.

    Minor complaint: the summary is a little misleading with units. They're advertising not 50 gigabits/s but 50 gigabytes/s. Current i7 architectures already have substantially more memory bandwidth than this to local RAM, so the advantage here is definitely communication distance, not speed.
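
    To make the federated-memory idea above a bit more concrete, here is a hypothetical sketch of a blade that keeps a small local cache in front of a memory bank shared by the whole enclosure; the latency numbers are invented purely for illustration:

        # A miss pays one trip over the optical link to the shared bank.
        LOCAL_NS, REMOTE_NS = 10, 100

        class Blade:
            def __init__(self, shared_bank, cache_slots=4):
                self.shared = shared_bank   # federated bank shared by all blades
                self.cache = {}             # small, fast local memory
                self.cache_slots = cache_slots

            def read(self, addr):
                """Return (value, latency_ns) for a read of addr."""
                if addr in self.cache:
                    return self.cache[addr], LOCAL_NS
                value = self.shared[addr]                    # miss: go over the link
                if len(self.cache) >= self.cache_slots:
                    self.cache.pop(next(iter(self.cache)))   # crude FIFO-ish eviction
                self.cache[addr] = value
                return value, REMOTE_NS

        bank = {addr: addr * 2 for addr in range(1024)}  # the shared pool
        blade = Blade(bank)
        print(blade.read(7))   # (14, 100): first touch hits the shared bank
        print(blade.read(7))   # (14, 10): now served from the local cache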

  • Re:LightPeak (Score:4, Interesting)

    by Yvan256 ( 722131 ) on Wednesday August 04, 2010 @03:52PM (#33142536) Homepage Journal

    Most people don't want to mess around inside a computer case, just like most people don't want to mess with the engine of their car or truck, or with the insides of their televisions, etc.

    Such a modular system would be like huge LEGO bricks: nothing to open up, just connect the bricks together. Hopefully the modules would come in a standard size, with larger modules as multiples of that size. A CPU module could be 2x2x2 units, optical drives could be 2x1x2, etc.

    The system could allow connections on at least four faces, so we don't end up with very tall or very wide stacks. Proper ventilation would be part of the standard unit size (need more heatsinking than the aluminium casing allows? Make your product one unit bigger and put ventilation holes in the empty space). A standard material such as aluminium would let the modules be machined or extruded cheaply and would help them dissipate heat.

  • by Nadaka ( 224565 ) on Wednesday August 04, 2010 @03:52PM (#33142550)

    Not exactly what you had in mind, but I've already seen a LEGO-like modular computer in the embedded hobbyist market.

    It is mostly networking and user-interface elements that can be stacked, not GPUs or CPUs.

    http://www.buglabs.net/products

  • Re:dumb monitor (Score:2, Interesting)

    by jedidiah ( 1196 ) on Wednesday August 04, 2010 @04:06PM (#33142774) Homepage

    > Care to point out which ones are "hard to upgrade"?

    All the ones that don't cost an arm and a leg.

    I can easily upgrade a $300 PC. On a Mac, that's a privilege that requires a minimum $2400 buy-in.
