
What Web 2.0 Means for Hardware and the Datacenter

Tom's Hardware has a quick look at the changes underway in the datacenter as more and more companies embrace a Web 2.0-style approach to hardware. So far, with Google leading the way, most companies have opted for a commodity server setup. HP and IBM, however, are betting that an even better setup exists and are striking out to find it. "IBM's Web 2.0 approach involves turning servers sideways and water cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both say that when you have applications that expect hardware to fail, it's worth choosing systems that make it easier and cheaper to deal with those failures."
  • by Junta ( 36770 ) on Monday May 26, 2008 @04:23PM (#23547995)
    The big companies are locking onto 'Web 2.0' as a moniker for embracing an idea they had been completely ignoring until Google took advantage of it and forced everyone to notice. Smaller companies had already gotten the message: hardware-fault-tolerant servers have their place, but once you're running large numbers of systems the only practical place to handle failure is in software, and at that point expensive hardware redundancy is superfluous, costing both up-front money and extra power and cooling. (A minimal sketch of the software side of that idea is at the end of this comment.)

    I'm not saying Google was by any means the first to think of this or to do it, but no one else that made it part of their core strategy has come into the spotlight to the degree Google has. To the industry at large, every one of Google's moves has become synonymous with 'Web 2.0', and so hardware designs done with an eye on Google's datacenter sensibilities logically become 'Web 2.0' related. You'll also notice them dropping 'green computing' and every other buzzword that happens to be fashionable.

    Of course, part of it is an attempt to create a self-fulfilling prophecy around 'Web 2.0'. If you help convince the world (particularly venture capitalists) that a bubble on the order of the '.com' days is there to be ridden, you inflate your own customer base. Market engineering in the truest sense of the phrase.
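    For what it's worth, here is a minimal sketch of what 'solving it in software' tends to look like: the client treats every box as disposable and simply retries against another replica. The replica list, port, and function name are made up for illustration; this isn't claiming to be how Google or anyone else actually does it.

        import random
        import urllib.request

        # Hypothetical replica list -- in practice this would come from
        # service discovery, not a hard-coded list.
        REPLICAS = [
            "http://node-01.example.internal:8080",
            "http://node-02.example.internal:8080",
            "http://node-03.example.internal:8080",
        ]

        def fetch(path, attempts=3, timeout=2.0):
            """Try randomly chosen replicas until one answers.

            A dead box (blown PSU, cooked board, whatever) just costs one
            retry; the node itself carries no redundant hardware.
            """
            last_error = None
            for _ in range(attempts):
                node = random.choice(REPLICAS)
                try:
                    with urllib.request.urlopen(node + path, timeout=timeout) as resp:
                        return resp.read()
                except OSError as err:  # covers URLError, timeouts, refused connections
                    last_error = err    # assume the node is gone and move on
            raise RuntimeError(f"all {attempts} attempts failed") from last_error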
  • Karma Whoring (Score:1, Interesting)

    by Anonymous Coward on Monday May 26, 2008 @04:40PM (#23548129)
    To sidestep the Web 2.0 naming discussion and focus on something interesting, from TFA:

    If you're an IT administrator for a bank and want to build a server farm for your ATM network, you make it fault tolerant and redundant, duplicating everything from power supplies to network cards. If you're a Web 2.0 service, you use the cheapest motherboards you can get, and if something fails, you throw it away and plug in a new one. It's not that the Website can afford to be offline any more than an ATM network can. It's that the software running sites like Google is distributed across so many different machines in the data center that losing one or two doesn't make any difference. As more and more companies and services use distributed applications, HP and IBM are betting there exists a better approach than a custom setup of commodity servers.
    Then they go on to talk about how Google uses custom power supplies, how customers are now charged by power consumption, and how blade-style servers use up too much power (?)

    They mention cheap preconfigured Linux servers, to save people the extra setup work (?)

    Etc. A jumble of suggestions for cheaper data centers, cooling many midrange servers, and so on.

    I would've thought selling VMs on a power-efficient mainframe would be more up IBM's alley, but that's not what they are selling. Anyone got any better ideas?

    (posting anonymously so as not to karma whore)
  • by fan of lem ( 1092395 ) on Monday May 26, 2008 @04:44PM (#23548175) Journal
    Servers post to Twitter whenever they "don't feel well". Web 2.0-enabled sysadmins react quicker! (Esp. with a Firefox plugin.)
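    Joke or not, the mechanism is easy enough to sketch: a periodic health check that posts a status message when a threshold is crossed. The webhook URL and load threshold below are placeholders, and it posts to a generic HTTP endpoint rather than any particular Twitter API.

        import json
        import os
        import urllib.request

        # Placeholder endpoint that relays messages to whatever feed the admins watch.
        STATUS_WEBHOOK = "https://status.example.internal/post"
        LOAD_THRESHOLD = 8.0  # 1-minute load average that counts as "not feeling well"

        def check_and_report():
            load1, _, _ = os.getloadavg()  # Unix-only
            if load1 < LOAD_THRESHOLD:
                return
            message = f"{os.uname().nodename} doesn't feel well: load average {load1:.1f}"
            req = urllib.request.Request(
                STATUS_WEBHOOK,
                data=json.dumps({"text": message}).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=5)  # fire and forget

        if __name__ == "__main__":
            check_and_report()  # run from cron every minute or so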
  • by oneiros27 ( 46144 ) on Monday May 26, 2008 @04:51PM (#23548241) Homepage
    They mention 'sideways', and I thought they just meant rotating about the depth of the rack (i.e., so a 19" rack would be about 11U wide), but the discussion talks about the fans being 15" away vs. 25" ... which makes no sense, as they mention the servers being 47" deep. I think they're talking about side venting, which is what Suns _used_ to have, but you'd have to get 30"-wide racks (so there'd be ducts on each side for airflow in and out).

    And we have the useless quote:

    "In a data center the air conditioning is 50 feet away so you blow cool air at great expense of energy under the floor past all the cables and floor tiles," McKnight said. "It's like painting by taking a bucket of paint and throwing it into the air."
    I'm not going to claim that forced air is more efficient than bringing chilled water straight to the rack -- it isn't -- but the comparison is crap. Anyone who's had to manage a large datacenter has had to balance ducts before; it's not fun, I admit, but you don't just pump the air in and expect everything to work.

    Then there's the great density: 82 TB in 7U. That's not bad, but the SATABeast is 42 TB in 4U (unformatted), and I'm going to assume it's a hell of a lot cheaper (although it's a lower class of service). And HP isn't using MAID yet; it's spinning all of the disks. (Quick per-U arithmetic at the end of this comment.)

    My suggestion -- skip the article. It reads more like a sales brochure, with very little on the actual technical details of what they're doing.
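    Taking the two raw capacity figures above at face value, the per-U arithmetic works out like this:

        # Raw (unformatted) capacity per rack unit for the two boxes mentioned above.
        hp_tb, hp_u = 82, 7                  # HP: 82 TB in 7U
        satabeast_tb, satabeast_u = 42, 4    # SATABeast: 42 TB in 4U

        print(f"HP:        {hp_tb / hp_u:.1f} TB per U")                # ~11.7 TB/U
        print(f"SATABeast: {satabeast_tb / satabeast_u:.1f} TB per U")  # 10.5 TB/U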
  • by hackstraw ( 262471 ) on Monday May 26, 2008 @05:52PM (#23548817)
    My thoughts exactly. It's like "Hmm, we need a good buzzword here... ah, Web 2.0, that will work".

    I haven't read the FA yet, but infrastructure-wise the big two concerns in a data center are 1) power and 2) cooling. Always have been, always will be. Frankly, I think pumping a bunch of cold air under the floor is a bit primitive. In the near future I think we'll see power and cooling become more a part of the racks than they are now. Some data centers are already doing this, but it's still too new to be universal.

    I've thought for a long time that the hot aisle/cold aisle arrangement is also a bit primitive. I think it would be cool if there were plenums _between_ the racks that pulled the heat from the systems _upward_, rather than front to back the way it's done now.

    I also don't understand why DC/telco-style power isn't more common: put redundant power supplies in the rack and don't give every 1U pizza box its own supply. So much energy is lost this way, it's not even funny. (Rough numbers at the end of this comment.)

    Anyway, while web 3.0 is on its way, I'll read the FA and see what is going on. I didn't know HP was in the petabyte storage arena, and I'll also see what IBM is up to...
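    On the per-box power supply waste, a rough illustration (every figure below is an assumption picked for the arithmetic, not a measured value):

        # Conversion losses: one cheap AC supply per 1U box versus a shared
        # rack-level, telco-style rectifier feeding DC to the boards.
        servers = 40                 # 1U boxes in the rack (assumed)
        load_per_server_w = 250.0    # DC power each box actually draws (assumed)

        per_server_psu_eff = 0.75    # assumed efficiency of a cheap commodity PSU
        rack_rectifier_eff = 0.92    # assumed efficiency of a shared rectifier

        wall_individual = servers * load_per_server_w / per_server_psu_eff
        wall_shared = servers * load_per_server_w / rack_rectifier_eff

        print(f"Per-server PSUs: {wall_individual / 1000:.1f} kW at the wall")
        print(f"Rack-level DC:   {wall_shared / 1000:.1f} kW at the wall")
        print(f"Difference:      {(wall_individual - wall_shared) / 1000:.1f} kW per rack")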

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...