Is a Wireless Data Center Possible?
Nerval's Lobster writes "A team of researchers from Microsoft and Cornell University has concluded that, in some cases, a totally wireless data center makes logistical sense. In a new paper, the team argues that a data-center operator could replace hundreds of feet of cable with 60-GHz wireless connections, assuming the servers themselves are redesigned into cylindrical, prism-shaped racks, with blade servers handling both intra- and inter-rack connections. These so-called 'Cayley' data centers, named because their network connectivity subgraphs are modeled using Cayley graphs, could be cheaper than traditional wired data centers if the cost of a 60-GHz transceiver drops under $90 apiece, and would likely consume about one-tenth to one-twelfth the power of a wired data center."
90% Power Savings??? (Score:5, Interesting)
O rly? So a traditional datacenter is sinking >90% of its power into the wired network connections? Not the actual servers themselves? Not the cooling? The wired network connections? I'm not buying those power-saving estimates.
Cost justifications (Score:5, Interesting)
When the 60-GHz transceiver (which doesn't exist commercially yet) drops to $90 each, won't 10-Gig Ethernet drop to $9/port, skewing their cost-justification results? They mention using 4–15 Gbit transceivers... what's the aggregate bandwidth of a 60-GHz network? If the aggregate bandwidth is 15 Gbit, that's not going to handle a rack full of servers.
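To put numbers on that last point, here's a back-of-the-envelope check. The rack size and NIC speed are assumptions for illustration, not figures from the paper:

```python
# Rough oversubscription check for a rack behind a shared wireless aggregate.
# Assumed (not from the paper): 40 servers per rack, 10 Gbit/s NIC each,
# and a 15 Gbit/s aggregate wireless channel for the whole rack.
servers_per_rack = 40
nic_gbps = 10
wireless_aggregate_gbps = 15

rack_demand_gbps = servers_per_rack * nic_gbps              # peak demand
oversubscription = rack_demand_gbps / wireless_aggregate_gbps

print(f"Peak rack demand: {rack_demand_gbps} Gbit/s")
print(f"Oversubscription: {oversubscription:.1f}:1")
```

Under those assumptions the rack is oversubscribed at roughly 27:1 at peak, which is why the aggregate-bandwidth question matters.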
Slashdot now stealing content (Score:5, Interesting)
So Slashdot is now ripping off other sites, copying their content to Slashdot-hosted pages, adding ads, and breaking links. [slashdot.org] The original article [cornell.edu] says "Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ANCSâ(TM)12, October 29â"30, 2012, Austin, Texas, USA. Copyright 2012 ACM 978-1-4503-1685-9/12/10 ...$15.00."
In the actual paper, the power consumption bullshit part reads "Power consumption: The maximum power consumption of a 60GHz transceiver is less than 0.3 watts [43]. If all 20K transceivers on 10K servers are operating at their peak power, the collective power consumption becomes 6 kilowatts. TOR, AS, and a subunit of CS typically consume 176 watts, 350 watts, and 611 watts, respectively [9–11]. In total, wired switches typically consumes 58 kilowatts to 72 kilowatts depending on the oversubscription rate for datacenter with 10K servers. Thus, a Cayley datacenter can consume less than 1/12 to 1/10 of power to switch packets compared to a CDC." That's comparing transceiver drive power with a whole store and forward switching fabric.
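For what it's worth, the quoted arithmetic does check out, whatever one thinks of what's being compared:

```python
# Reproducing the power arithmetic quoted from the paper.
transceivers = 20_000
watts_per_transceiver = 0.3
wireless_kw = transceivers * watts_per_transceiver / 1000   # 6 kW collective

wired_kw_low, wired_kw_high = 58, 72   # quoted wired-switch range, 10K servers

print(f"Wireless switching power: {wireless_kw:.0f} kW")
print(f"Ratio: 1/{wired_kw_high / wireless_kw:.0f} "
      f"to 1/{wired_kw_low / wireless_kw:.1f}")
```

So 6 kW against 58–72 kW gives the 1/12-to-1/10 figure; the dispute above is about whether transceiver drive power vs. a full switching fabric is an apples-to-apples comparison at all.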
It's also not clear how their "Y-switch" thing, which doesn't store anything, handles busy reception points. At some point, in a forwarding network, you either have to store packets or drop them. Or set up end-to-end channels first.
ok, no wires then... (Score:4, Interesting)
Within a data center, you could use $1.00 LED emitters and receivers with integral lenses for short runs, precision (but still cheap) alignment fixtures and $0.10 mirrors. For long runs, LED laser emitters. You'd still beat $90/point by a huge margin. And as a plus, you'd have some extremely high speed connections. Power consumption... I dunno, you'd have to do an analysis. One thing that seems obvious is that for any line not sending data, the LED should be off the vast majority of the time.
Re:ok, no wires then... (Score:4, Interesting)
To your points:
LEDs can be switched in the sub-nanosecond range with a little effort, in the single-digit nanosecond range without any unusual trickery at all. 10–100 ns for an 8-bit word isn't horrible. I don't understand your use of "chill" in this context.
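For scale, those word times imply the following raw serial rates (a rough sketch: per-bit time is just word time over eight, ignoring any encoding overhead):

```python
# Implied raw serial rates for an 8-bit word at the quoted switching times.
# The 10-100 ns figures are the parent post's estimates, not measurements.
for word_ns in (10, 100):
    bit_ns = word_ns / 8            # time per bit
    rate_mbps = 1000 / bit_ns       # raw serial rate in Mbit/s
    print(f"{word_ns} ns/word -> {rate_mbps:.0f} Mbit/s")
```

So even the pessimistic end of the range is comfortably in the tens of Mbit/s per link, and the optimistic end approaches gigabit territory on a single LED.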
Also not quite sure what you mean by tight transceiver pairs. I envision a transmitter LED nested at the bottom of a flat black tube on one end (crops the easily detectable emission to a very narrow AOV), and a sensor with an integral lens on the other. The only way the sensor could see the transmitting LED is to be lined up with it; parallax would prevent it from seeing adjacent LEDs on the same spatial alleyway, as it were. All low tech. You could fit a *lot* of these on a flat plane representing the end cap of the data alleyway.
Most machines in a data center don't have a lot of connections going to them. One for sure, maybe two. Those head off to switches and routers. Those connections could be all LED. The router / switch, if consolidating to a high-traffic line, could use something else. If going out to other machines, LED again. No reason you couldn't mix tech here.