
Rethinking Computer Design For an Optical World

Posted by timothy
from the optical-floptical dept.
holy_calamity writes "Technology Review looks at how some traditions of computer architecture are up for grabs with the arrival of optical interconnects like Intel's 50Gbps link unveiled last week. The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors."
  • For GPUs? Finally an easy upgrade path for all future Macs?

    • by TheKidWho (705796)

      No, the lag would be stupid. You want your GPU as close as possible to the CPU...

      • by Yvan256 (722131)

My Mac mini uses an nVidia 320M which shares RAM with the CPU. According to the summary it's good enough for RAM, so why can't it be fast enough for a GPU?

        • Re:LightPeak (Score:4, Interesting)

          by somersault (912633) on Wednesday August 04, 2010 @03:30PM (#33142164) Homepage Journal

CPUs have high-speed cache that is faster than the mainboard RAM for working on a set of data, and swap the cache to/from RAM as necessary (kind of like how you page RAM to your hard drive when you run out of RAM).

Such a small cache would be useless for GPUs though, so they need faster RAM to read the massive amounts of texture/vertex/shader/whatever data they have as quickly as possible. They also benefit more from stuff like RAM that is optimised for high sequential read speeds, so it does make sense to use RAM that has been specially designed for GPUs if you actually care about graphics performance (I doubt most Mac Mini users do).

          • by Yvan256 (722131)

            But wouldn't the GPU and its own RAM be in the same box, away from the main CPU? Modular computers. Buy the CPU, RAM, GPU and storage modules you need and build your own computer accordingly.

            • by lgw (121541)

              But wouldn't the GPU and its own RAM be in the same box, away from the main CPU? Modular computers. Buy the CPU, RAM, GPU and storage modules you need and build your own computer accordingly.

Isn't that what I did to build the computer I'm typing this on right now? I barely needed a screwdriver, and that was just to secure the motherboard to the case.

              • Re:LightPeak (Score:4, Interesting)

                by Yvan256 (722131) on Wednesday August 04, 2010 @03:52PM (#33142536) Homepage Journal

                Most people don't want to mess around inside a computer case, just like most people don't want to mess with the engine of their car or truck, or with the insides of their televisions, etc.

                Such a modular system would be similar to huge LEGO bricks, nothing to open up, just connect the bricks together. Hopefully they would make the modules in standard sizes and allow multiples of that standard size. A CPU module could be 2x2x2 units, optical drives could be 2x1x2, etc.

The system could allow connections on at least four faces, so we don't end up with very tall or very wide stacks. Proper ventilation would be part of the standard unit size (you need more heatsinking than the aluminium casing allows? Make your product one unit bigger and put ventilation holes in the empty space). A standard material such as aluminium could be used so that the modules can be machined or extruded and can dissipate heat.

                • by jedidiah (1196)

It doesn't matter if it's simple, easy LEGO bricks. If people aren't interested in rolling their own then they aren't interested in rolling their own, regardless of how easy or hard it is.

                  Such large bulky systems will likely seem at best quaint.

          • Re:LightPeak (Score:5, Informative)

            by The Master Control P (655590) <ejkeever@noSpam.nerdshack.com> on Wednesday August 04, 2010 @03:59PM (#33142652)
I recommend reading the programmer's guide [nvidia.com] to a modern graphics architecture; caching is essential to them.

Modern GPU architectures face the same clock speed/bus speed disparity and memory latency problems as CPUs and have taken their response much farther. They have several thousand registers per core and an L1-class cache (in both size and speed) per processor group. Cache misses carry a typical penalty of several hundred cycles.
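
A quick back-of-the-envelope sketch of how a miss penalty of several hundred cycles plays out, using the standard average-memory-access-time formula (the hit latency and penalty below are illustrative assumptions, not figures from any particular GPU):

```python
# Average memory access time (AMAT) for an assumed L1 hit latency and a
# miss penalty of several hundred cycles. All numbers are illustrative.

def avg_access_cycles(hit_rate, hit_cycles=20, miss_penalty_cycles=400):
    # Hits pay hit_cycles; misses pay hit_cycles plus the full penalty.
    return hit_rate * hit_cycles + (1.0 - hit_rate) * (hit_cycles + miss_penalty_cycles)

for hit_rate in (0.50, 0.90, 0.99):
    print(f"hit rate {hit_rate:.0%}: ~{avg_access_cycles(hit_rate):.0f} cycles per access")
```

Even a modest hit rate cuts the average cost dramatically, which is why caching matters to GPUs despite their huge RAM bandwidth.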
      • by Beardydog (716221)
        I thought GPU operations were one-way enough that separation issues were much more about bandwidth than latency.
        • by hitmark (640295)

With today's direct attachment of screens, it probably is. But if the rendering is happening in a central location and then routed back over a network, it may be something else.

Something like using the render farm to power the workstations during office hours, and then rendering the scenes after hours.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        No, the lag would be stupid.

        No the lag would not be stupid, just imperceptible. No, really. A ten meter cable will delay data sent to a Remote GPU (tm) by fifty nanoseconds. Not milliseconds. Not microseconds. Nanoseconds. You can't perceive that. Not in your wildest, most fevered gamer dreams.

        Contemporary GPUs couldn't accomplish this because they frequently interact with the host CPU in a synchronous manner. I'm guessing that is the point of the "rethinking computer design" topic.
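
The fifty-nanosecond figure is easy to sanity-check. A minimal Python sketch, assuming the signal travels at roughly c/1.5 in glass fibre (the exact refractive index varies):

```python
# One-way propagation delay over a 10 m optical link.
# Assumes an effective refractive index of ~1.5 for the fibre.

C = 299_792_458.0        # speed of light in vacuum, m/s
length_m = 10.0
refractive_index = 1.5

delay_ns = length_m * refractive_index / C * 1e9
print(f"one-way delay over {length_m:.0f} m: {delay_ns:.0f} ns")   # ~50 ns
```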

  • moving memory and computational power to peripherals like laptop docks and monitors.

    I would think that this would make upgrading more complicated, not less so. Thoughts?

    • by derGoldstein (1494129) on Wednesday August 04, 2010 @03:00PM (#33141754) Homepage
It would allow you to use components in a more modular way, especially around an office. If you're not big enough (of a company) to have dedicated rendering/encoding servers, you could move the GPU around depending on who's currently doing the work that requires it. Even on a more casual basis, you could have a bunch of laptops with mid-range GPUs, and have an external GPU for whoever is gaming at the moment. Just like people take turns in a household with the home-theater rig in the living room -- you don't need to install a huge LCD + amp + speaker system in every room, you just need to take turns.
      • Re: (Score:3, Insightful)

        by mhajicek (1582795)
I like the mention of putting memory and such in a dock. So you have 8GB RAM in your laptop on the go, but when you get home or to the office and dock, you have 32GB. You could also have your hot and power-hungry CAD / gaming GPU in the dock and a lesser one built in.
        • Re: (Score:3, Insightful)

          Or made like LEGO Blocks. Need a quad core CPU? Go buy one and snap it onto your others.

          • Re: (Score:3, Interesting)

            by Nadaka (224565)

Not exactly what you had in mind, but I've already seen a LEGO-like modular computer in the embedded hobbyist market.

It is mostly networking and user interface elements that can be stacked, not GPUs or CPUs.

            http://www.buglabs.net/products

          • by hitmark (640295)

            hmm, motherboard interconnect, NUMA for the home.

So you can have lots of boxes that each need their own power cord / big black box.
I don't see a data cable having the power to drive a good GPU.

  • dumb monitor (Score:3, Insightful)

    by demonbug (309515) on Wednesday August 04, 2010 @02:48PM (#33141578) Journal

    The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors

Why would I want to pay for computational power in my monitor? When I buy a monitor I want it to do its job - show the best quality images for the cheapest cost possible. A good monitor should last much longer than the associated computer driving it (unless we suddenly have a huge increase in the rate of development of display technology). Why would I want added cost in my monitor that will only make it out of date more quickly?

    • Re:dumb monitor (Score:4, Insightful)

      by jack2000 (1178961) on Wednesday August 04, 2010 @02:50PM (#33141618)
So you can buy a new monitor again, and again and again. I bet this is what went through Steve Jobs' head when they made Macs hard to upgrade, that and a huge thunder of Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching ...
      • that and a huge thunder of Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching ...

        I'm almost certain that he was born with the "Ka-ching Ka-ching" sound looping in his brain.

      • Re: (Score:3, Insightful)

        by ceoyoyo (59147)

        For ages I avoided Macs and built my own machines with upgrades specifically in mind. Turns out I rarely ever actually upgraded any of them anyway, except occasionally the video card and, more often, hard drives and memory. It was usually more economical to sell the old machine to someone and buy or build another.

        When I started grad school the lab used all Macs. I've never missed the ability to upgrade.

        • Re: (Score:3, Insightful)

          by derGoldstein (1494129)
          What about the ability to re-use a good power supply and case? I've had my PSU/Case combo for 3 computers now. When I say that I've "upgraded my computer", I often mean that I've replaced the motherboard, CPU, and RAM to a new architecture. Many/most of the other components remain the same -- I often have no reason to upgrade the storage, video card, optical drives, and, as mentioned above, the PSU/case. It's more flexible and modular, even if it does take some more work.
          • by ceoyoyo (59147)

            Yeah, I kept one case for ages. A big steel monster that weighed a tonne but was far superior to the paper thin sheet metal deals they started making. It was a pain though, because it's tough to sell a computer without a case, so I usually ended up buying a new case whenever I "upgraded" anyway.

            It always was far easier, and frequently cheaper, just to sell the whole thing and buy another. Macs doubly so because they seem to hold their resale value better than a generic PC.

            • by hitmark (640295)

Not surprising, as until Apple went x86 they were something of a collector's item. The last holdout of the microcomputer era, building their own internals from the ground up.

        • by jedidiah (1196)

          Plugged a new machine into an old monitor?

          Then you've "upgraded your machine" by Apple standards.

Storage would be one key thing to make easy to upgrade. Stuff is always getting bigger and bigger and we're always finding new ways of filling up disks. Plus, one might go bad and you would want to replace it.

          The idea that you would never need to repair or upgrade storage is silly.

          It would be nice if Macs allowed for easy standardized hot (or cold) swapping of internal drives.

      • Man, there should be a SJobs version of the Godwin rule.
Care to point out which ones are "hard to upgrade"? My MacBook Pro couldn't be easier to upgrade a HD or RAM in. The G5s up through the Mac Pros seem to be as simple an upgrade path as you can get. Everything more or less slides out, no screws, nothing. [apple.com]

        The original Minis were difficult, but that probably came from cramming that amount of material into the form factor. Newer iMacs and Minis are just a twist off cover to upgrade RAM.

        • Re: (Score:2, Interesting)

          by jedidiah (1196)

          > Care to point out which ones are "hard to upgrade"?

All the ones that don't cost an arm and a leg.

I can easily upgrade a $300 PC. On a Mac, that's a privilege that requires a minimum $2400 buy-in.

    • by bsDaemon (87307) on Wednesday August 04, 2010 @02:52PM (#33141644)

You mean like an iMac? /ducks (disclaimer: typed from a 24" iMac while at work)

    • by Grishnakh (216268)

      A good monitor should last much longer than the associated computer driving it (unless we suddenly have a huge increase in the rate of development of display technology).

      Not likely. With today's LCDs (esp. the LED backlit ones), displays are already very good, and there's little reason to upgrade unless you want a bigger one. That trend is only going to go so far.

      Displays seem to make quantum leaps, so to speak. For a long time, we were all using CRT monitors. After VGA and SVGA came out, many of us wer

      • by dlgeek (1065796)
        One improvement you missed is pixel density. Think of stuff like the new Apple "Retina Display" but at a larger scale. You can get higher quality graphics on the same size screen with a higher resolution at a higher DPI.
        • by Grishnakh (216268)

          Ah yes, I did forget that. However, I'm not sure we're going to ever see higher pixel density, because while it's technically feasible, it just doesn't seem to sell very well.

          Even back in the SVGA CRT days, I came across tons of people who preferred setting their monitors at 640x480. These days, everyone is perfectly happy with 1920x1080.

  • DRM (Score:4, Interesting)

    by vlm (69642) on Wednesday August 04, 2010 @02:49PM (#33141602)

    moving memory and computational power to peripherals like ... monitors.

    They mean ever more complicated DRM. Like sending the raw stream to the monitor to be decoded there.

    • by mlts (1038732) *

      DRM comes to mind, as well as forcing/offloading various graphic rendering commands to the monitor. So when DirectX changes or gets upgraded, you have to buy not just a new card, but another monitor. I'm just waiting for HDCP to start having versions so someone with HDCP 2010a won't be able to watch Blu-Ray movies, nor HD TV unless they pitch the monitor and buy themselves a TV with HDCP 2010b or something along those goofy lines.

      • Re: (Score:3, Insightful)

        by Jesus_666 (702802)
        That would kill Blu-Ray. People flocked from VHS to DVD in droves because it didn't just offer higher quality, it offered greatly improved convenience as well. Look at the DVD-to-Blu-Ray switch: Many people are still happily using their DVDs, content with what they have. Blu-Ray only offers a modest increase in quality with no convenience increase and isn't quite as universally loved as DVD.

        Of course, Blu-Ray requires you to have compatible equipment. That's a bother (and another reason why some people do
I mean for laptops. Right now I can leave behind storage and a larger monitor when I take it with me, and of course anything that can be networked. I'd like to be able to "dock the laptop into" more RAM, a more powerful GPU, and (while I realize this is wholly unlikely) maybe a second CPU (4 cores in the laptop, 4 more on the table).

    Adding a GPU as an external peripheral has already been done, just not in a commercially viable way. Hopefully this will change.
Adding a second CPU is not that unlikely - motherboards with two sockets have existed for a long time. If you can "push out" the RAM with this tech, why not a second CPU?

  • Here we go again (Score:5, Informative)

    by overshoot (39700) on Wednesday August 04, 2010 @02:55PM (#33141672)
    This is eerily reminiscent of Intel's flirtation with Rambus: they were so focused on bandwidth that they sacrificed latency to get it. Yeah, the Pentium4 series racked up impressive GHz numbers but the actual performance lagged because the insanely deep Rambus-optimized pipeline stalled all the time waiting for the first byte of a cache miss to arrive.

    Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

    Now, peripherals are another matter. But if bandwidth were all it took, we'd be using 10 Gb/s PCI Express for memory right now.

    • by Shikaku (1129753)

      BUT BUT BUT....

      50Gbps!!!!!!1

    • Re:Here we go again (Score:5, Informative)

      by demonbug (309515) on Wednesday August 04, 2010 @03:03PM (#33141802) Journal

      This is eerily reminiscent of Intel's flirtation with Rambus: they were so focused on bandwidth that they sacrificed latency to get it. Yeah, the Pentium4 series racked up impressive GHz numbers but the actual performance lagged because the insanely deep Rambus-optimized pipeline stalled all the time waiting for the first byte of a cache miss to arrive.

      Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

      Now, peripherals are another matter. But if bandwidth were all it took, we'd be using 10 Gb/s PCI Express for memory right now.

      I was thinking the same thing regarding latency and remote memory. If you've got your memory 1 physical meter away, you're already looking at something like 6.6 ns round-trip latency (in a vacuum) just for light traveling that physical distance; seems like once you include switching plus getting to/from the optical interconnect you're looking at some pretty serious latency issues compared to onboard RAM (I think DDR3 SDRAM is on the order of 7-9 ns).
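
A small Python sketch of that comparison, using the figures from the comment above (round trip over 1 m, in vacuum and in fibre with an assumed index of ~1.5, against a DDR3-class access latency of 7-9 ns):

```python
# Round-trip flight time over 1 m versus a DDR3-class access latency.
# The fibre index of 1.5 and the 7-9 ns DRAM figure are assumptions
# carried over from the discussion, not measurements.

C = 299_792_458.0                # speed of light in vacuum, m/s
distance_m = 1.0

rt_vacuum_ns = 2 * distance_m / C * 1e9
rt_fibre_ns = rt_vacuum_ns * 1.5

print(f"round trip, vacuum: {rt_vacuum_ns:.1f} ns")   # ~6.7 ns
print(f"round trip, fibre:  {rt_fibre_ns:.1f} ns")    # ~10 ns
print("DDR3 access latency: ~7-9 ns, before any encode/decode overhead")
```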

      • Re: (Score:3, Insightful)

        by tantrum (261762)

might split things up into something resembling onboard RAM and external swap, though.

I don't need my 24GB swap space much at the moment, but it would be sweet to have it respond in something like 20ns anyways :)

      • Re: (Score:3, Funny)

        by Grishnakh (216268)

        So they just need to figure out how to make FTL optical cables...

    • Re: (Score:3, Insightful)

      by feepness (543479)

      Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

That's the thing though, isn't it? There isn't a "the processor"; there are 8, 16, 32, 128 processors. So stalling one may not be that great a loss.

    • by chrb (1083577) on Wednesday August 04, 2010 @03:21PM (#33142020)

      Same goes for optical interconnect to memory: the flood may be Biblical when it arrives

But it won't be - the system is fundamentally limited by all of the rest of the components. A top end front-side bus can already push 80Gb; scaling that up to the 400Gbit that this optical link promises will probably be practical within a few years, but the latency of encoding and decoding a laser signal and pushing it over several meters is going to be a killer for computational applications. It will be great for USBX, and for high end networking it will challenge Infiniband (which currently tops out at around 300Gb). Infiniband is already used for networking high-performance computational clusters, but nobody is using it for the CPU to memory bus because of the high latency. Even with high bandwidth, computation still has to be carried out on the data, and so it still makes sense to put the data and processor as close together as possible.

In the last decade there were many research papers proposing that co-processors would be placed on DRAM cards, or that embedded DRAM would allow memory and processor to be fabricated on a single die (e.g. 1 [psu.edu], 2 [stanford.edu]). But if you have a processor and DRAM connected to similar units via an optical interconnect, guess what - the architecture begins to look awfully similar to a regular network with optical Ethernet. So, it looks likely that this will be just another incremental improvement in architecture rather than the radical shift that TFA envisions.
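
One way to see why bandwidth alone doesn't rescue a remote-memory fetch: compare the time to move a single 64-byte cache line at the link rate with the fixed per-request costs. A rough Python sketch using the 50 Gbit/s figure from the summary; the distance and codec overhead are assumptions, not measurements:

```python
# Per-fetch cost of pulling one cache line over an optical link.
# Link rate comes from the summary; distance and codec overhead are
# illustrative assumptions.

link_rate_bps = 50e9        # 50 Gbit/s
line_bits = 64 * 8          # one 64-byte cache line
distance_m = 3.0            # "a few feet" of fibre
fibre_index = 1.5
codec_ns = 20.0             # assumed serialize/encode/decode cost
C = 299_792_458.0           # m/s

transfer_ns = line_bits / link_rate_bps * 1e9           # ~10 ns
flight_ns = 2 * distance_m * fibre_index / C * 1e9      # ~30 ns round trip

total_ns = transfer_ns + flight_ns + codec_ns
print(f"transfer {transfer_ns:.1f} ns + flight {flight_ns:.1f} ns "
      f"+ codec {codec_ns:.1f} ns  =  ~{total_ns:.0f} ns per fetch")
```

The payload transfer is only a small slice of the total; the fixed latency is what the processor stalls on.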

    • by Animats (122034)

      Yes. Not only do you have speed of light latency, you have marshaling latency, as the bits have to go into a register in parallel, then be clocked out serially for transmission, then converted to parallel at the other end. For memory access, that overhead matters.

Optical interconnects do have faster propagation than electrical ones. Radio in vacuum achieves the speed of light, but in cables and on PC boards, capacitance and inductance slow down propagation [hightech12.com] well below the speed of light. Coax is 60-75% of light speed. Traces on FR4 board are around 50%. Inner traces on multilayer PC boards are below 30% of light speed. Interconnects on chip are sometimes even worse.

      • Coax is 60-75% of light speed. Traces on FR4 board are around 50%. Inner traces on multilayer PC boards are below 30% of light speed. Interconnects on chip are sometimes even worse.

Well, I'm not aware of anyone using epoxy glass for cable insulation. You can get pretty quick (0.8 c or so) with foamed Teflon insulation, but you have to be seriously wanting to pay for it. Easy to damage, too.
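
For reference, those velocity factors translate into delay per metre roughly as follows (a sketch; the exact fractions of c depend on the dielectric and are ballpark values, not datasheet numbers):

```python
# Propagation delay per metre for the velocity factors mentioned above.

C = 299_792_458.0  # speed of light in vacuum, m/s

media = {
    "foamed-PTFE coax (~0.8c)": 0.80,
    "ordinary coax (~0.66c)":   0.66,
    "FR4 outer trace (~0.5c)":  0.50,
    "inner PCB trace (~0.3c)":  0.30,
    "optical fibre (~0.67c)":   0.67,
}

for name, velocity_factor in media.items():
    delay_ns_per_m = 1e9 / (velocity_factor * C)
    print(f"{name:26s}: {delay_ns_per_m:.1f} ns/m")
```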

    • So... uh... The 2nd Coming of FB-DIMMs?

      If that happens, I'm not thinking performance, I'm thinking short Intel stock.

      • So... uh... The 2nd Coming of FB-DIMMs?

        Without disclosing my Super Secret Identity, let's just say that I was there at the beginning of the FBDIMM fiasco, told my management to run, not walk, away from getting sucked into it, and proceeded to watch the train wreck from very close up. As in, on the field instead of front-row in the stands.

        I've made a lot of bad calls in my life but I totally nailed that one.

    • Re:Here we go again (Score:4, Interesting)

      by hackerjoe (159094) on Wednesday August 04, 2010 @03:48PM (#33142482)

You people are not thinking nearly creatively enough. The article doesn't make it clear why you'd want to move your memory farther away -- it would increase latency, yeah, but moreover, what are you going to put that close to the CPU? There isn't anything else competing for the space.

      Here's a more interesting idea than just "outboard RAM": what if you replaced the RAM on a blade with a smaller but faster bank of cache memory, and for bulk memory had a giant federated memory bank that was shared by all the blades in an enclosure?

      Think multi-hundred-CPU, modular, commodity servers instead of clusters.

      Think taking two commodity servers, plugging their optical buses together, and getting something that behaves like a single machine with twice the resources. Seamless clustering handled at the hardware level, like SLI for computing instead of video if you want to make that analogy.

      Minor complaint, the summary is a little misleading with units: they're advertising not 50 gigabits/s, but 50 gigabytes/s. Current i7 architectures already have substantially more memory bandwidth than this to local RAM, so the advantage is definitely communication distance here, not speed.

    • by LWATCDR (28044)

Well I would think that depends on the caching. If you have a good enough cache then it may not be that big of an issue.

  • by Chirs (87576) on Wednesday August 04, 2010 @02:58PM (#33141730)

Without even factoring in the slowdown from the index of refraction, at a distance of 1 meter you're looking at round-trip latencies of about 7 nanoseconds just for travel time. The bandwidth may be decent, but the latency is going to be an issue for any significant distance.

    • I bet this is going to get all tangled up in the near future.

      http://arstechnica.com/old/content/2006/01/5971.ars [arstechnica.com]

      Potential applications: Subspace radio, wide area networks on a solar system scale. Just think, no more 3 minute wait for a radio signal from Mars or beyond.

      • Re: (Score:3, Informative)

        by Rakishi (759894)

        No known process allows for information transfer at speeds faster than light. Including quantum entanglement. Stop watching so much science fiction and go read up on what it actually does instead.

        • And if you could, then you could violate causality without breaking a sweat:
          http://sheol.org/throopw/tachyon-pistols.html [sheol.org]

          Not sure I want to live in a universe where we've invented FTL communication, it would get really, really confusing.

    • Absolutely. I think the more likely case is that we're going to see RAM on the compute device, or at least on-package. In the world of cache, even traversing the processor die is a latency worth worrying about.

That said, how about optical NUMA? With HT or QPI the latency is already up above 100 ns, so adding an optical hop may be reasonable. How about using an optical cable to string together 2 single-socket motherboards into a dual-socket SMP? Not that you need optics to do this, but they make it possible

  • by PPH (736903) on Wednesday August 04, 2010 @02:59PM (#33141732)

They want their rat's nest of cables back.

    The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors.

    • by thethibs (882667)
      Actually, that's the 1960s.
Depending on what you're working on, it could be right now. Have you seen a graphic designer on his/her own "turf"? I didn't know a laptop could dock into so many things at the same time. Monitor, keyboard, mouse, Wacom tablet, storage, network, scanner, printer, and a partridge in a pear tree. Many of us have never left the rat's nest...
    • by mhajicek (1582795)
      But without a nest of cables you can't do Serial Experiments Lain!
      • Also, Ghost in the Shell teaches us that if you want a really good connection to someone's brain, it needs to be a physical one.
        • And Shadowrun shows that even if you do upgrade to wireless, everyone will live in Faraday cages.
  • by idontgno (624372) on Wednesday August 04, 2010 @02:59PM (#33141740) Journal

    because this appears to be another aspect of Wheel of Reincarnation [catb.org].

    I'm old enough to remember a time where a computer was a series of bitty boxes tied together with cables. Then someone decided to integrate a lot of the stuff onto a motherboard, with just loosely-related stuff connected by cables to the motherboard. Then the loosely-related stuff got put into cards that plugged into the motherboard. Then that stuff just got integrated into the motherboard.

    And now it's being reborn as stuff in bitty boxes connected together with cables.

I wonder what enlightenment will be like, because karma appears to have been a bitch.

    • by Anonymous Coward

      In 30 years I'll suggest integrated optical motherboards.

    • That's what I was thinking: This is going back to the way it was in the mini-computer era. CPU in one box. Additional memory in another. Framebuffer in a third. Disk in a fourth...

      What's old is new again.

      • by Grishnakh (216268)

        Except the whole thing has terrible power consumption, because each unit has its own crappy wall-wart power supply, and you have to have 3 power strips wired in series to have places for them all to plug in.

        • No. Power is easy. Again, like the old days... One large power supply module with a cable jumpering modules together. Then it can be as efficient as you want.
    • Re: (Score:3, Insightful)

      by Jah-Wren Ryel (80510)

Uh yeah, this isn't the first time around. The computer industry is constantly rediscovering previous designs. Timesharing, batch jobs, client-server, integrated/distributed processing, etc, etc. Nothing new under the sun, just smaller and faster is all.

I wonder what enlightenment will be like, because karma appears to have been a bitch.

It's called retirement - you get out of the loop and eventually you go out like the flame of a candle.

    • by timeOday (582209)
      I wouldn't confuse "what might be enabled by this new technology" with what is actually going to happen.

The vast majority of computers (even if known by other names such as "smartphone") will only become more and more integrated. I doubt we'll be buying standalone graphics cards for PCs in 10 years, and not even standalone RAM modules in many cases.

      Maybe for high performance computing there will be a big shared memory hooked up to tens of thousands of cores by optical interconnects, but not for 99% of

  • Latency? (Score:2, Interesting)

    by Diantre (1791892)
IANAEE (I Am Not An Electrical Engineer). Pardon my possible stupidity, but what was keeping us from putting the RAM a few feet from the CPU? The way I understand it, electrons don't move much slower than light. Of course you might lose current.
    • by BZ (40346)

      > The way I understand it, electrons don't move much slower than light.

      Electrons move slowly. ;)

      Electrical signals (aka electromagnetic waves) in wires move at speeds that depend on the wire and the insulation around (and within, for coax) the wire. Speeds can be as high as 0.95c and as low as 0.4c with pretty typical wiring setups.

      • by EnsilZah (575600)

        Isn't that basically what parent was saying?
        I might be missing something but a one time improvement of at most doubling the speed doesn't sound that impressive to me.
        There's a lot to be said for optical interconnects but putting your RAM 20cm away from the CPU instead of 10cm doesn't seem revolutionary.

  • My dream computer has always been a completely modular system, with every component accessible and hot-swappable. I always imagined it being about the size and shape as a normal computer, but covered in slots, with video cards, RAM, drives, etc in the form of cartridges... pin lengths designed to make sure the right things contact in the right order...

    While lamenting the poor graphical performance of my laptop, I investigated external graphics cards. While they aren't currently suitable for... well... any
    • "I would even prefer an external video card for my desktop computer (if performance matched the internal version). It could have its own case, cooling, and powerbrick, instead of murdering my internal power supply, heating my computer up, screaming like a jet engine, and possible bursting into flames when my haphazard system design blocks vital airflow."

      You're too much in the minority for a market to be built up for you. Haven't you realized that these days people want to buy -one box- with -as few cables a

    • by LWATCDR (28044)

      "My dream computer has always been a completely modular system, with every component accessible and hot-swappable." it is called a mainframe.
      Actually some of IBMs none mainframe big iron can do the same thing.
      Some of their machines can even call for support on their own. They will contact IBM and a tech will show up and inform you that the RAM or drive is failing and swap the part. Mainframes even have hot swappable CPUs.

  • Bigger computers!
    What we've been working toward all these decades!

    • Modular computers. Easier upgrade paths. More re-use/re-sell value for external components. If you want to buy an iMac in which every component is epoxied together, that's your choice.
      • by H0p313ss (811249)

        Modular computers. Easier upgrade paths.

This hadn't occurred to me, but now that you mention it I'm reminded of a friend's failure to install SIMMs correctly on an old 486-era desktop. He actually managed to damage the motherboard since he didn't notice the retaining clips and just mashed them in.

        A plug & play architecture that is so modular and simple that even the noobiest of noobs can upgrade might have some legs. Right now upgrading is such a bitch that I don't even bother anymore, I just get kick-ass machines and replace them bi-anually a

      • by jedidiah (1196)

        Resale value is always going to be inherently limited because most people don't want stuff that's old or has been abused by someone else.

        Computer components are less reusable or resell-able not so much because of shifting connector formats but because stuff gets obsolete very quickly.

        Sub-500G 3.5" drives seem positively quaint when Target is selling 750G 2.5" USB drives.

        The fact that some GPU doesn't support some feature released in the last 3 years is going to be FAR more of an issue than what kind of card

  • Two things... (Score:3, Interesting)

    by MarcQuadra (129430) on Wednesday August 04, 2010 @03:13PM (#33141920)

    1. The Internet already does that. How much of the experience today is processed partly in a faraway datacenter? I know that even users like me use the Internet as a method to pull things away from each other so each part lives where it makes sense. I have a powerful desktop at home that I RDP into from whatever portable device I happen to be toting. I don't worry about my laptop getting stolen, the experience is pretty fast (faster than a netbook's local CPU, for sure), and I get to mix-and-match my portable hardware.

2. This is going to have much more use at a datacenter than it will in a server closet or a home. I can already fit more RAM, CPU, and storage than I need in a typical desktop. Most small businesses run fine on one or two servers. Datacenters, on the other hand, could really take advantage of commoditizing RAM and CPU, like they have with SANs in storage. No more "host box/VM"; it's time to take the next step and pool RAM and CPUs, and provision them to VMs through some sort of software/hardware control fabric. I think Cisco already knows this, which is why they're moving to building servers.

    Imagine the datacenter of the future:

    Instead of discrete PC servers with multiple VM guests each and CAT-6 LAN plugs, you have a pool of RAM, a pool of storage, and a pool of CPUs controlled by some sort of control interface. Instead of plugging the NIC on the back of it into your network equipment, the control interface is -built into- the network core, wired right into the backplane of your LAN. Extra CPU power that's not actually being used will be put to work by the control fabric compressing and deduplicating stuff in storage and RAM. The control interface will 'learn' that some types of data are better served off of the faster set of drives, or in unused RAM allocated as storage. 'Cold' data would slowly migrate to cheap, redundant arrays.

    Guest systems will change, too. No longer will VMs do their own disk caching. It makes sense for a regular server to put all its own RAM to use, but on a system like this, it makes sense to let the 'host fabric' handle the intelligent stuff. Guest operating systems will likely evolve to speak directly to the 'host' VFS to avoid I/O penalties, and to communicate needs for more or less resources (why should a VM that never uses more than 1GB RAM and averages two threads always be allocated 4GB and eight threads?).

  • by sjames (1099)

Weren't Infiniband, 3GIO (now PCIe) and a plethora of other forward-looking interconnects supposed to have already done this by the early 2000s? There was even talk of extending Hyperlink to a 10m range at one point.

    Wake me up when it hits silicon.

  • 50 Gbps is bandwidth. What's the latency? That'd be kinda important for the purposes of remoting RAM.
