What Web 2.0 Means for Hardware and the Datacenter 125
Tom's Hardware has a quick look at the changes being seen in the datacenter as more and more companies embrace a Web 2.0-style approach to hardware. So far, with Google leading the way, most companies have opted for a commodity server setup. HP and IBM however are betting that an even better setup exists and are striking out to find it. "IBM's Web 2.0 approach involves turning servers sideways and water cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both say that when you have applications that expect hardware to fail, it's worth choosing systems that make it easier and cheaper to deal with those failures."
Don't blame the article author.. (Score:4, Interesting)
I'm not saying Google was by any means the first to think of this or do it, but no one else that made it part of their core strategy has come into the spotlight to the degree Google has. To the industry at large, every single one of Google's moves has become synonymous with 'Web 2.0', and as such, hardware designs done with an eye on Google's datacenter sensibilities logically become 'Web 2.0'-related too. You'll also note them saying 'green computing' and every other buzzword that happens to be fashionable.
Of course, part of it is, to an extent, an attempt to create a self-fulfilling prophecy around 'Web 2.0'. If you help convince the world (particularly venture capitalists) that a bubble on the order of the '.com' days is there to be ridden, you inflate the customer base. Market engineering in the truest sense of the phrase.
Karma Whoring (Score:1, Interesting)
They mentioned cheap preconfigured Linux servers, to help people avoid the extra setup work (?)
Etc. A jumble of suggestions for cheaper data centers, cooling many midrange servers, and so on.
I would've thought selling VMs on a power-efficient mainframe would be more up IBM's alley, but that's not what they are selling. Anyone got any better ideas?
(posting anonymously so as not to karma whore)
So, after reading the article ... don't bother. (Score:5, Interesting)
And we have the useless quote: I'm not going to claim that forced air is more efficient than bringing chilled water straight to the rack -- it's not -- but the comparison is crap. Anyone who's had to manage a large datacenter has had to balance ducts before. It's not fun, I admit, but you don't just pump the air in and expect everything to work.
Then there's the great density -- 82TB in 7U. I mean, that's not bad, but the SATABeast is 42TB in 4U (unformatted), and I'm going to assume a hell of a lot cheaper (although it's a lower class of service). And HP's not using MAID yet -- it's spinning all of the disks.
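For what it's worth, the density gap is smaller than the raw numbers suggest once you normalize per rack unit. A quick back-of-the-envelope sketch using the figures above (capacities as quoted, formatting overhead ignored):

```python
# TB-per-rack-unit comparison from the figures quoted above.
hp_tb, hp_u = 82, 7            # HP: 82 TB in 7U
sb_tb, sb_u = 42, 4            # SATABeast: 42 TB (unformatted) in 4U

hp_density = hp_tb / hp_u      # ~11.7 TB per U
sb_density = sb_tb / sb_u      # 10.5 TB per U

print(f"HP: {hp_density:.1f} TB/U, SATABeast: {sb_density:.1f} TB/U")
# HP comes out only about 12% denser per rack unit.
print(f"advantage: {100 * (hp_density / sb_density - 1):.0f}%")
```

So the headline density is a modest edge, not a blowout -- which is why price per TB and class of service matter more here.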
My suggestion -- skip the article. It reads more like a sales brochure, with very little on the actual technical details of what they're doing.
Re:WTF ? The Web 2.0 approach to hardware? (Score:3, Interesting)
I haven't read the FA yet, but here are the big two with data centers, infrastructure-wise: 1) power, 2) cooling. Always has been, always will be. Frankly, I think pumping a bunch of cold air under the floor is a bit primitive. I think in the near future we'll see power and cooling become more a part of the racks than they are now. Some data centers are already doing this, but it's too new to be universal.
I've thought for a long time that the hot-row/cold-row thing is also a bit primitive. I think it would be cool if there were plenums _between_ the racks that removed the heat from the systems _upward_, not front to back like it's done now.
I also don't understand why DC/telco-type systems are not more common: put redundant power supplies in the rack instead of having each 1U pizza box carry its own. So much energy is lost this way, it's not even funny.
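To put some rough numbers on that waste, here's a sketch comparing per-box AC supplies against a shared rack-level rectifier. The efficiency and load figures are illustrative assumptions, not measurements:

```python
# Illustrative rack power comparison: 40 1U servers, each either with
# its own cheap AC power supply or fed from one shared rack rectifier.
servers = 40
dc_load_w = 250        # assumed DC load per server, in watts

per_box_eff = 0.75     # assumed efficiency of a cheap per-server AC PSU
rack_eff = 0.92        # assumed efficiency of a large shared rectifier

wall_per_box = servers * dc_load_w / per_box_eff
wall_rack = servers * dc_load_w / rack_eff

print(f"per-box supplies: {wall_per_box:.0f} W drawn from the wall")
print(f"rack-level DC:    {wall_rack:.0f} W drawn from the wall")
print(f"wasted per rack:  {wall_per_box - wall_rack:.0f} W")
```

Under those assumptions a single rack burns roughly 2.5 kW just on conversion losses -- heat the cooling system then has to remove a second time.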
Anyway, while Web 3.0 is on its way, I'll read the FA and see what's going on. I didn't know HP was in the petabyte storage arena, and I'll also see what IBM is up to...