
Is a Wireless Data Center Possible?

Nerval's Lobster writes "A team of researchers from Microsoft and Cornell University has concluded that, in some cases, a totally wireless data center makes logistical sense. In a new paper, they argue that a data-center operator could replace hundreds of feet of cable with 60-GHz wireless connections—assuming that the servers themselves are redesigned into cylindrical racks, with prism-shaped blade servers handling both intra- and inter-rack connections. The so-called 'Cayley' data centers, so named because their network connectivity subgraphs are modeled using Cayley graphs, could be cheaper than traditional wired data centers if the cost of a 60-GHz transceiver drops under $90 apiece, and would likely consume about one-tenth to one-twelfth the power of a wired data center."
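
As a rough sanity check on that $90 break-even figure, here is a minimal Python sketch: the two-transceivers-per-server count comes from the paper (quoted further down the thread), while the all-in wired cost per server is an assumed number used purely for illustration.

    servers = 10_000                        # the paper's example datacenter size
    transceivers_per_server = 2             # "20K transceivers on 10K servers", per the paper
    transceiver_cost = 90                   # break-even price from the summary, in dollars

    wireless_network_cost = servers * transceivers_per_server * transceiver_cost
    wired_cost_per_server = 180             # ASSUMED: NIC + cabling + switch-port share per server
    wired_network_cost = servers * wired_cost_per_server

    print(f"wireless: ${wireless_network_cost:,}")   # wireless: $1,800,000
    print(f"wired:    ${wired_network_cost:,}")      # wired:    $1,800,000

At $90 apiece the two radios per server cost $180, so under these assumptions the wireless design only comes out ahead where the all-in wired cost per server is at least that high.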
This discussion has been archived. No new comments can be posted.

  • by laron ( 102608 ) on Monday October 15, 2012 @02:22PM (#41661765)

    Unless they plan to use microwave beams for power.

    • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Monday October 15, 2012 @02:26PM (#41661811) Homepage
      Wake up Tesla
      • by tjonnyc999 ( 1423763 ) <tjonnyc@g[ ]l.com ['mai' in gap]> on Monday October 15, 2012 @02:49PM (#41662153)
        ...the Matrix has you.
        • ok, no wires then... (Score:4, Interesting)

          by fyngyrz ( 762201 ) on Monday October 15, 2012 @03:48PM (#41662953) Homepage Journal

          Within a data center, you could use $1.00 LED emitters and receivers with integral lenses for short runs, precision (but still cheap) alignment fixtures and $0.10 mirrors. For long runs, LED laser emitters. You'd still beat $90/point by a huge margin. And as a plus, you'd have some extremely high speed connections. Power consumption... I dunno, you'd have to do an analysis. One thing that seems obvious is that for any line not sending data, the LED should be off the vast majority of the time.

          • by postbigbang ( 761081 ) on Monday October 15, 2012 @05:14PM (#41663887)

            Unless you can remodulate or make incredibly dense modulation possible, LED transmitters can manage about the same data rate as you see in WDM, so the data rate among hosts isn't quite so chill. Power would be low, and it would be tough for background noise to foul things up. But eventually you'd need alternate spectra to modulate (lambdas) and tight transceiver pairs to make it work. Your engineering cost just shot down your low cost.

            • by fyngyrz ( 762201 ) on Monday October 15, 2012 @07:52PM (#41664893) Homepage Journal

              To your points:

              LEDs can be switched in the sub-nanosecond range with a little effort, in the single-digit nanosecond range without any unusual trickery at all. 10...100 ns for an 8 bit word isn't horrible. I don't understand your use of "chill" in this context.
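
              A quick back-of-the-envelope conversion of those switching times into raw serial bit rates, in Python (plain on/off keying assumed, nothing fancier):

                for switch_time_ns in (1, 10):               # sub-ns to single-digit-ns switching, per above
                    bit_rate_mbps = 1e9 / switch_time_ns / 1e6
                    word_time_ns = 8 * switch_time_ns        # time to clock out an 8-bit word
                    print(f"{switch_time_ns} ns/bit -> {bit_rate_mbps:.0f} Mbit/s, 8-bit word in {word_time_ns} ns")
                # 1 ns/bit -> 1000 Mbit/s, 8-bit word in 8 ns
                # 10 ns/bit -> 100 Mbit/s, 8-bit word in 80 ns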

              Also not quite sure what you mean by tight transceiver pairs. I envision a transmitter LED nested at the bottom of a flat black tube on one end (crops the easily detectable emission to a very narrow AOV), and a sensor with an integral lens on the other. The only way the sensor could see the transmitting LED is to be lined up with it; parallax would prevent it from seeing adjacent LEDs on the same spatial alleyway, as it were. All low tech. You could fit a *lot* of these on a flat plane representing the end cap of the data alleyway.

              Most machines in a data center don't have a lot of connections going to them. One for sure, maybe two. That heads off to switches, routers. Those connections could be all LED. The router / switch, if consolidating to a high-traffic line, could use something else. If going out to other machines, LED again. No reason you couldn't mix tech here.

              • by postbigbang ( 761081 ) on Monday October 15, 2012 @09:28PM (#41665403)

                First, you need to use a modulation scheme that allows intense amounts of data exchange. If you don't do that, you're not trying and what did you do this for in the first place?

                You have to have pairs that are either lambda or phase delineated for rational discrimination. Then you need plenty of pairs, as this is a crossbar arrangement; otherwise it's useless and you might as well use RS-232.

                Finally, if you don't provide optimal switching, you're blocking, and if you're blocking, you're not state coherent, and why did you do this in the first place?

                • by fyngyrz ( 762201 )

                  On/off is sufficient to give you more speed than the vast majority of machines actually need. Nothing fancy required. Receivers can only see one transmitter; on/off is just as good in that context as it is within a wire, as long as you don't block the path.

                  Number of pairs isn't a challenge, really. Should be able to get the density up to about what cables give you as long as you use the short transmitter sleeve I described.

                  It's not a crossbar arrangement. It's point to point. Same as an Ethernet cable, wh

              • I have a very small home data cluster: two servers, one switch. I can hit saturation (causing the NIC to overheat) very easily running just ONE cable into each box. Problem solved, very easily, with TWO NICs per machine, two cables to the switch per machine, two IPs per machine. Ingoing data goes through one NIC, outgoing with t'other, cards stay relatively cool and nothing falls over. I'm sure those who have experience with larger data setups have seen similar problems and know therefore that doubling

              • by AmiMoJo ( 196126 )

                There are two major flaws with your plan, which I would imagine is why it hasn't been implemented.

                Firstly, you need all that empty space for the light to travel down. It has to be dead straight and perfectly aligned, which severely limits how you can lay things out. Sure, you have mirrors, but they just introduce more alignment problems, and you won't be packing them that tightly anyway. Compared to just putting in cables there is no real advantage and many disadvantages.

                Secondly you will need to keep your light path

                • by fyngyrz ( 762201 )

                  Empty space tends to be perfectly aligned, lol. Yes, of course. But what this means in practical terms is a transceiver group needs alignment -- once, unless the building shifts, etc. If the building shifts, you have other problems. The *space* isn't going to move.

                  Yes, you want to keep dust out of there, otherwise you'll see error rates go up. The good news is everything benefits from this. Servers don't like dust either.

                  The first is not a problem; the second... should be solved. So I don't see these are se

          • by FridgeFreezer ( 1352537 ) on Tuesday October 16, 2012 @07:02AM (#41667489)
            And to make sure the light beams don't get crossed over, you could use some of these new-fangled glass-fibre cables... oh hang on...
      • by nurb432 ( 527695 )

        Why wake him up? Just use what he gave us so long ago.

    • by Anonymous Coward on Monday October 15, 2012 @02:27PM (#41661839)

      Let's just set up some servers in a room, blast 'em with every form of radiation known to man, and see what happens! Sounds like a fun weekend project.

    • bigger question....how are the union guys going to bill for running wireless?
    • Smells funny (Score:4, Insightful)

      by mcrbids ( 148650 ) on Monday October 15, 2012 @07:17PM (#41664733) Journal

      Somehow, they're concluding that 90% of the power used in a datacenter is used for network adapters, switches, and routers? Something smells rather funny here...

    • Using microwaves for computer networking is called "WiFi".

  • 90% Power Savings??? (Score:5, Interesting)

    by CajunArson ( 465943 ) on Monday October 15, 2012 @02:24PM (#41661793) Journal

    Or Rlly? So a traditional datacenter is sinking > 90% of its power into the wired network connections? Not the actual servers themselves? Not the cooling? The wired network connections? I'm not buying those power saving estimates.

    • by Anonymous Coward on Monday October 15, 2012 @02:27PM (#41661851)

      Not exactly. 90% less for networking.

      • Not even close. Server NICs are generally integrated; wireless requires a dongle or card. That's extra power for each server. Even if you cut your switch power requirement by 75%, there's still the problem of the extra power required by the interface cards, which at the very least will cancel out any power savings (which will be negligible anyway).

    • by Anonymous Coward on Monday October 15, 2012 @02:31PM (#41661917)

      DNRTFA, but I imagine that the figure is quoted off of the networking equipment alone, without regard to any other aspect of the datacenter. I.e., your actual network equipment footprint would shrink 20-30 fold, and that is where the power savings come from -- and while that is far from a majority of the power utilization of a traditional, large-scale datacenter, it is not insignificant in either physical space or power consumption.

      That said, I doubt this is feasible without rethinking the datacenter design from the ground up. Simply rearranging the racks to minimize interference is not going to be enough.

  • Can someone explain how a wireless approach could use less power than a wired approach?
    I understand that if you compare a crappy wired implementation to a highly optimized wireless implementation the wireless might win out,
    but then it would be cheaper to optimize the wired one.
    • by AK Marc ( 707885 )
      Switches are inherently hub and spoke (even the last of the rings were physically hub and spoke). So you have to have the hubs (networking switches, but literal hubs). With wireless, you could mesh and reduce hubs.

      Now, if we were to get switches better optimized for power (most seem to be going the wrong way, with even datacenter-class switches being PoE capable, requiring lots of extra power), then there wouldn't be a savings. Get switches that turn off ports and cores based on load and connections. E
    • by fa2k ( 881632 )

      You could get rid of some switches. Because 60 GHz doesn't penetrate through metal well, you can have your own little private network inside the rack cylinder without a switch. Each pair of computers could communicate on a separate frequency, so you'd get the equivalent of a switched network (one could do full duplex too, by using more frequencies). The wireless approach would be more resilient to failure too. You could use N! wires between the N computers instead, possibly using even less power.

      For inter-

      • by fa2k ( 881632 )

        You could use N! wires between the N computers instead, possibly using even less power.

        OK I promise not to use maths on slashdot ever again. It's not N!, it's N + (N-1) + ... + 2 + 1. It probably can be written more easily somehow.

        • OK I promise not to use maths on slashdot ever again. It's not N!, it's N + (N-1) + ... + 2 + 1. It probably can be written more easily somehow.

          N*(N+1)/2.

        • N*(N+1)/2

          Write the same sum but in the other direction just below the previous one, and sum both lines term by term. Notice you have N times N+1, and that's for twice the sum, so one half for a single line. A visual way to let a kid old enough to know multiplication tables find it for small cases is to draw the sum as dots on a piece of grid paper as a right triangle. Then double the triangle (symmetry on the long edge) and you get a rectangle where the number of dots can be computed with a simpl
        • by psmears ( 629712 )

          It's not N!, it's N + (N-1) + ... + 2 + 1. It probably can be written more easily somehow.

          You are correct... here's an easy way of figuring it out:
          N+ ... +1 = (N+1)
          (N-1)+...+2=(N+1)
          (N-2)+...+3=(N+1)
          Pairing up a term from the beginning of the expression with one from the end always makes (N+1), and there are N/2 such pairs. So the total is N(N+1)/2 (at least for even N - though it works for odd N too).
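
          For what it's worth, a two-assert Python check of both closed forms (the sum quoted above, 1 + ... + N, is N(N+1)/2; the number of cables in a full mesh of N machines is the pair count N(N-1)/2):

            N = 10
            assert sum(range(1, N + 1)) == N * (N + 1) // 2   # 1 + 2 + ... + N
            assert sum(range(1, N)) == N * (N - 1) // 2       # 1 + 2 + ... + (N-1): cables in a full mesh
            print(N * (N - 1) // 2)                           # 45 cables for 10 machines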

  • by smooth wombat ( 796938 ) on Monday October 15, 2012 @02:29PM (#41661895) Journal

    until the wackadoodles who claim they get headaches from radio signals find out they're living next to a place which runs such an environment.

    I can't wait to see the signs they use to protest as they stand outside in the blazing sun:

    Stop killing us with radio waves!

    Radio waves kill!

    Save a life. Turn off your radio.

  • I doubt it (Score:5, Insightful)

    by SuperMooCow ( 2739821 ) on Monday October 15, 2012 @02:30PM (#41661903)

    You can't have nearly infinite bandwidth in a finite frequency spectrum, but you can keep adding a shitload of wires if needed.

    Given the problems people have when multiple wi-fi routers are too close together, like in an apartment building, I am doubtful that it would work well in a server environment, no matter which frequencies are used.

    • You can't have nearly infinite bandwidth in a finite frequency spectrum, but you can keep adding a shitload of wires if needed.

      On the contrary, any number of optical signals can pass right through each other, whereas cables (electric or fiber-optic) cannot do that. In other words, it's all a matter of how directional the signals are, and how powerful they are.

  • Cost justifications (Score:5, Interesting)

    by hawguy ( 1600213 ) on Monday October 15, 2012 @02:33PM (#41661943)

    When the 60Ghz transceiver (which doesn't exist yet commercially) drops to $90 each, won't 10Gig ethernet drop down to $9/port, skewing their cost justification results? They mention using 4 - 15gbit transceivers... what's the aggregate bandwidth of a 60Ghz network? If the aggregate bandwidth is 15gbit, that's not going to handle a rack full of servers.

    • When the 60Ghz transceiver (which doesn't exist yet commercially)

      60 GHz exists right now for point to point communications.
      You can get it on newer computers by looking for "Intel wireless display" aka WiDi
      You can use it commercially with multi-gigabit speeds at ranges up to 1.5km (about a mile assuming good weather).
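
      For context on why range and directionality dominate at 60 GHz, here is the standard free-space path-loss formula evaluated at in-rack and long-haul distances (a Python sketch; it ignores the extra oxygen absorption around 60 GHz, roughly 10-15 dB/km, which matters at 1.5 km but is negligible at 10 m):

        import math

        def fspl_db(distance_m, freq_hz):
            """Free-space path loss, 20*log10(4*pi*d*f/c), in dB."""
            return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

        for d in (10, 1500):          # intra-rack range vs. the 1.5 km point-to-point claim
            print(f"{d:>5} m: {fspl_db(d, 60e9):.1f} dB")
        #    10 m:  88.0 dB
        #  1500 m: 131.5 dB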

      They mention using 4 - 15gbit transceivers... what's the aggregate bandwidth of a 60Ghz network? If the aggregate bandwidth is 15gbit, that's not going to handle a rack full of servers.

      Talking about aggregate bandwidth for 60GHz is meaningless.
      The only number you have to worry about is the maximum bandwidth of a single transceiver, because, unlike most current wireless offerings,
      60 GHz frequencies are so directional that you can run multipl

  • by alen ( 225700 ) on Monday October 15, 2012 @02:37PM (#41661987)

    Wireless is like the old layer 1 hubs: the traffic is sent into the air and it's up to each receiver to filter the noise and ignore data not meant for it. Lots of interference.

    It's OK for Starbucks or for home use, but not by much. I have at least 10 wifi networks around me that constantly interfere with mine. I used to get regular disconnects from Xbox Live that went away when I connected my Xbox to my router with Cat5 cable. Same with video streaming.

    This is why large events have crappy data speeds. Everyone is broadcasting into the same air space and interfering with each other.

    • by niado ( 1650369 )

      Wireless is like the old layer 1 hubs: the traffic is sent into the air and it's up to each receiver to filter the noise and ignore data not meant for it. Lots of interference.

      Um, well, not exactly. They are similar in that they operate at half duplex, but WAPs operate at L2 and L3 in addition to L1 (Wi-Fi uses CSMA/CA [wikipedia.org], vs. the CSMA/CD [wikipedia.org] used by classic shared Ethernet). Interference can be an issue, but only in an uncontrolled or poorly-designed environment (Pro tip: don't put 2.4 GHz wireless phones in your wireless data center).
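
      A highly simplified Python sketch of the CSMA/CA idea (listen before transmitting, then back off by a random slot count drawn from a contention window that doubles after each busy attempt; real 802.11 adds slot timing, inter-frame spacing, ACKs and optional RTS/CTS, none of which is modeled here):

        import random

        def csma_ca_attempts(medium_busy, max_attempts=7, cw_min=16, cw_max=1024):
            """Return which attempt got the frame out, or None if we gave up."""
            cw = cw_min
            for attempt in range(1, max_attempts + 1):
                slots = random.randrange(cw)   # random backoff within the contention window
                # ...wait until the medium has been idle for `slots` slot times (not modeled)...
                if not medium_busy():          # sense before sending: collisions are avoided, not detected
                    return attempt
                cw = min(cw * 2, cw_max)       # medium busy: double the window and retry
            return None

        print(csma_ca_attempts(lambda: random.random() < 0.3))   # a medium that is busy ~30% of the time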

    • Even a moderately isolated and shielded data center that sticks to mostly directional transmission should have none of these problems. Look up omnidirectional vs. directional antennas. Considering that even off-the-shelf 802.11ac in the appropriate configuration can offer speeds of nearly 7Gbit/s (http://en.wikipedia.org/wiki/IEEE_802.11ac), I somehow think you're not really understanding the nature of what is being discussed; I don't see why this should be an issue. They aren't talking about shoving a pile

  • by Alex Belits ( 437 ) * on Monday October 15, 2012 @02:40PM (#41662027) Homepage

    I am so happy that Microsoft is doing that kind of loony shit.

    • by Nutria ( 679911 )

      Contrary to the popular belief, there indeed is no God.

      Don't act like religionists and make absolute statements when you have no evidence for them. (They're the ones who are making positive statements and so must present the proof.)

      • This is how science works. Statements about the existence of anything, made without evidence to support them, are supposed to be treated as false unless and until such evidence is provided. Given the available evidence, it's much more likely that I am a four-headed lizard who lives in a volcano than that any kind of deity exists, or ever existed.

        • by Nutria ( 679911 )

          Statements about the existence of anything, made without evidence to support them, are supposed to be treated as false unless and until such evidence is provided.

          Ironically, I'll quote the Bible: "Even a fool, when he holds his peace, is counted wise: and he that shuts his lips is esteemed a man of understanding." Proverbs 17:28

          Example: those vocal anti-tectonic geologists looked pretty foolish. OTOH, the calm "extraordinary claims require extraordinary evidence" scientists were sage and judicious.

          Now, I think it's more likely that pigs will sprout wings than an omniscient, omnipotent deity will present Himself to humanity, but still I'll stick to the Sagan Principle,

          • Too bad: the Bible is not an authority on science or logic, and you lack even a basic understanding of science. Plenty of things were called foolish and then very soon were demonstrated to be true. Characters from ancient folklore about fear of death are not among those things, and I can assure you, never will be.

  • by Animats ( 122034 ) on Monday October 15, 2012 @02:49PM (#41662157) Homepage

    So Slashdot is now ripping off other sites, copying their content to Slashdot-hosted pages, adding ads, and breaking links. [slashdot.org] The original article [cornell.edu] says "Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ANCS'12, October 29–30, 2012, Austin, Texas, USA. Copyright 2012 ACM 978-1-4503-1685-9/12/10 ...$15.00."

    In the actual paper, the power consumption bullshit part reads "Power consumption: The maximum power consumption of a 60GHz transceiver is less than 0.3 watts [43]. If all 20K transceivers on 10K servers are operating at their peak power, the collective power consumption becomes 6 kilowatts. TOR, AS, and a subunit of CS typically consume 176 watts, 350 watts, and 611 watts, respectively [9–11]. In total, wired switches typically consumes 58 kilowatts to 72 kilowatts depending on the oversubscription rate for datacenter with 10K servers. Thus, a Cayley datacenter can consume less than 1/12 to 1/10 of power to switch packets compared to a CDC." That's comparing transceiver drive power with a whole store-and-forward switching fabric.
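
    Re-running that quoted arithmetic in Python makes explicit which comparison the paper is making (transceiver drive power versus the wired switching fabric, not total datacenter power):

      transceivers = 20_000                     # 2 per server, 10K servers (figures quoted above)
      wireless_kw = transceivers * 0.3 / 1000   # 0.3 W peak per transceiver -> 6 kW

      wired_kw_low, wired_kw_high = 58, 72      # wired switching fabric, per the paper
      print(f"wireless transceivers: {wireless_kw:.0f} kW")
      print(f"ratio: 1/{wired_kw_low / wireless_kw:.1f} to 1/{wired_kw_high / wireless_kw:.1f}")
      # wireless transceivers: 6 kW
      # ratio: 1/9.7 to 1/12.0  (switching power only, not total datacenter power)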

    It's also not clear how their "Y-switch" thing, which doesn't store anything, handles busy reception points. At some point, in a forwarding network, you either have to store packets or drop them. Or set up end to end channels first.

  • Cancer? (Score:5, Funny)

    by Anonymous Coward on Monday October 15, 2012 @02:50PM (#41662185)

    Hopefully they will also pass out those cancer detecting bras to all of the staff members as well.

  • No way! (Score:4, Funny)

    by aglider ( 2435074 ) on Monday October 15, 2012 @02:52PM (#41662207) Homepage
    The overall amount of radiating energy involved would make a datacenter technician ... medium well.
    • Actually it would probably mostly just kill off all the technician's spermies, but for most datacenter technicians that probably wouldn't be an issue anyway

  • by kheldan ( 1460303 ) on Monday October 15, 2012 @02:54PM (#41662257) Journal
    Even with careful planning and management, wouldn't a completely wirelessly-networked datacenter be more of a target for hacking? Even with a high level of encryption, which would add to network overhead?
    • The datacentre would be inside a Faraday cage, so no signals would be getting in or out except through the external (presumably fiber-optic) data links.
    • by na1led ( 1030470 )
      I'm sure the network side of things would be secure, but I'd be worried about invisible radio wave attacks that could come from anywhere. A satellite or a parked van could essentially kill the whole network if it focused enough wattage at the data center. Not to mention other interference like solar flares.
  • Will be the solution. If only we dared to wait for the technology.
  • I wonder how robustly Microsoft plans to address security at a wireless data center. In many data centers, wireless devices, even encrypted ones, were simply forbidden and twisted pair was inside physically locked metal conduit. Most security schemes for wireless transmission will involve more overhead on CPU, memory, transmission and therefore, energy, air conditioning, floor space, etc., not to mention a staff division related to spectrum monitoring & analysis.

    On the other hand, if the data center i

  • 60GHZ would barely get through a wet tissue. You could track the location of the technicians by watching the server-down warnings move around.
  • Burritos (Score:4, Funny)

    by djhertz ( 322457 ) on Monday October 15, 2012 @03:15PM (#41662607)

    It'll be cool when somebody microwaves a burrito in the lunch room and random servers drop connection. 3.. 2.. 1.. ding! Hm, server connections are back.

  • I was always a bit dubious of the infrared based wireless networking (like IrDA) for an office environment, but what about optical wireless in a data center? Seems like that would solve the potential security issues and you could isolate racks (or parts of racks) on their own wireless network and then do the traditional wired scheme to join those nodes together so that you weren't stretching the bandwidth too thin?

  • These days, with VMs (and hence software switches) carrying the actual workload, and hastily programmed core switches broken down into a hundred VLANs, why are we hanging on to the ancient notion of "wires"? Clearly a wireless method for every server to be able to talk to every other server is the next logical evolution. Just sprinkle a little software on top to make sure that the servers only see/process what they are supposed to, and surely it will all work great!

    • by adri ( 173121 )

      Problem: 60GHz is currently very short-range wifi. It's also, what, a couple of gigabits' worth of bandwidth? Also, I haven't seen any studies yet looking at 60GHz saturation and lots of multi-path reflection. It's a cool technology, but it does read like someone's trying to sell the tech rather than it really being suitable for this.

      • It looks like their hope with the cylindrical orientation is that each server will communicate directly with the 5 to 7 servers opposite it via the inside (and hopefully the signal would be absorbed there as well) and with the servers above/below it on the outside (where the signal would dissipate fast enough not to interfere with other cylinders). Quite intriguing, but it creates one giant (and complex) software-managed ether in the literal sense: information will just "be there" and hopefully the softwar

    • That's like saying, "with all these fancy new technologies in hybrid cars with GPS and everything, why are we still using wheels?"

      Wires are cheap and reliable, and even if they get eclipsed in bandwidth at some point by wireless technologies, they will have a place for a long, long time for those reasons.

  • Hey, I have this awesome idea, let's take out all those expensive copper wires and make our data center wireless. It'll save so much money! But first we'll have to redesign racks to be cylindrical and servers will need to be keystone-shaped. Also, because of the new rack design, you won't have access to rear ports. If something in the center of the rack comes undone or stops working, you need to open the entire rack. And each rack will have to be a faraday cage so the signal doesn't leak out and collid
  • This will only work if the data-center is deployed as PaaS cloud-grade apps. Failing to leverage those best-of-industry game-changing paradigms, the interference from all that vapor will have a detrimental effect on the TCO and ROI KPIs.
  • From TFA: "the authors picked a Georgia Tech design with bandwidth of between 4-15Gbps and an effective range of less than or equal to 10 meters."

    Given interference, does that mean you won't get more than 15Gbit per second for all the machines in a circle of 10 meters? How much is that, 6 racks? You put what, 20 machines per rack? (I am not in IT, so I am not exactly sure.) So you share 15Gbit per second across 120 machines?
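
    Taking the question's own guesses at face value (6 racks within 10 meters, 20 machines per rack) together with the 15 Gbps upper figure from TFA, the worst case of one fully shared channel works out as below; the replies stress that directional, frequency-reused point-to-point links are meant to avoid exactly this.

      racks_in_range = 6            # the question's guess for a 10 m radius
      machines_per_rack = 20        # the question's guess
      shared_gbps = 15              # upper end of the 4-15 Gbps figure from TFA

      machines = racks_in_range * machines_per_rack                 # 120 machines
      print(f"{shared_gbps * 1000 / machines:.0f} Mbit/s each if one channel were shared by all")
      # 125 Mbit/s each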

    Assuming no other interference. Right now, you can get 10Gbit per second with 10gi

    • by rtaylor ( 70602 )

      60Ghz is very directional.

      Combined with phased arrays you could get ~10Gbit to any machine you can see directly (opposite side, perhaps 1 or 2 rows up and down), without restriction on the number of pairs.

      It actually sounds extremely useful for something like Hadoop, because it doesn't require extremely fast switches; communication can happen through less centralized means.

      • by godrik ( 1287354 )

        I understand it is directional, but I can hardly believe you will be able to target the 6th machine on the 2nd rack on the left without irradiating half of the next 4 racks on the left. Maybe directionality will reduce it to 60 machines in your path instead of 150, but I don't think you'll reach anything below 10 machines.

        I am not even mentioning that collisions will only give you half duplex, most likely even less.
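
        A rough way to put numbers on that worry in Python: the spot a beam paints is roughly distance × beamwidth in radians, so with a purely hypothetical 10-degree antenna and 60 cm racks (both values assumed for illustration only):

          import math

          beamwidth_deg = 10        # ASSUMED antenna beamwidth, for illustration only
          rack_width_m = 0.6        # ASSUMED rack width

          for distance_m in (2, 5, 10):
              spot_m = distance_m * math.radians(beamwidth_deg)   # small-angle spot width
              racks_hit = math.ceil(spot_m / rack_width_m)
              print(f"{distance_m:>2} m: beam ~{spot_m:.2f} m wide, spans ~{racks_hit} rack(s)")
          #  2 m: beam ~0.35 m wide, spans ~1 rack(s)
          #  5 m: beam ~0.87 m wide, spans ~2 rack(s)
          # 10 m: beam ~1.75 m wide, spans ~3 rack(s)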

  • Simply put, in order to pull that off you'll need fairly sophisticated data processing. Simply pointing 2 directional antennas at each other works fine outside, but it is much more problematic in a data center, which has walls and obstacles creating reflections. What you could do is use modern MIMO systems, but that would require huge amounts of processing power to get any kind of decent bandwidth. There's no point designing a system now that already peaks out at 10 GBit.
