
What Web 2.0 Means for Hardware and the Datacenter

Tom's Hardware has a quick look at the changes being seen in the datacenter as more and more companies embrace a Web 2.0-style approach to hardware. So far, with Google leading the way, most companies have opted for a commodity server setup. HP and IBM, however, are betting that an even better setup exists and are striking out to find it. "IBM's Web 2.0 approach involves turning servers sideways and water cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both say that when you have applications that expect hardware to fail, it's worth choosing systems that make it easier and cheaper to deal with those failures."
  • RTFA... (Score:4, Funny)

    by Anonymous Coward on Monday May 26, 2008 @02:36PM (#23547505)

    I'd love to RTFA but there's no link...
  • by Colin Smith ( 2679 ) on Monday May 26, 2008 @02:36PM (#23547507)
    Web 2.0 is about a thousand layers above hardware; it does not, in any manner, approach it.

     
    • by tsalmark ( 1265778 ) on Monday May 26, 2008 @02:48PM (#23547669) Homepage
      But if you run 2.0, your hardware wants to be on its side, to, um, well, run better, because it's 2.0
      • Re: (Score:2, Funny)

        by ovidus naso ( 20325 )
        As long as the servers' edges are round...
      • by CAIMLAS ( 41445 )
        What I want to know is why the Internet, the Government - whoever is responsible for this Web thing - decided to go to a full version release without a single damn point release. What gives? They didn't even bother to fix any of the bugs, and now they're not even phasing out the 1.0 nonsense.
    • When I order a mocha, now I ask for 2.0 % milk
    • I don't think the article's author defines Web 2.0 the same way I do. Commodity virtual and/or high-performance clusters have been around for at least a decade and don't have anything to do with website design.
      • by Junta ( 36770 ) on Monday May 26, 2008 @03:23PM (#23547995)
        The big companies are locking on to 'Web 2.0' as a moniker for embracing an idea they had been completely ignoring until Google took advantage of it and forced everyone to notice. Smaller companies had already gotten the message: while hardware-failure-tolerant servers have their place, in many situations with large numbers of systems the only practical place to solve failure is in software, and then expensive hardware redundancy is superfluous, costing both initial money and additional power/cooling.

        I'm not saying Google was by any means the first to think of this or do it, but no one else that did it as part of their core strategy had come into the spotlight to the degree Google has. Every single one of Google's moves has, to the industry at large, become synonymous with 'Web 2.0', and as such hardware designs done with an eye on Google's datacenter sensibilities logically become 'Web 2.0' related. You'll also note them saying 'Green computing' and every other possible buzzword that is fashionable.

        Of course, part of it is to an extent trying to create a sort of self-fulfilling prophecy around 'Web 2.0'. If you help convince the world (particularly venture capitalists) that a bubble on the order of the '.com' days is there to be ridden, you inflate the customer base. Market engineering in the truest sense of the phrase.
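
        A minimal sketch (in Python) of the "solve failure in software" pattern described above: an idempotent read is retried across a pool of interchangeable commodity boxes, so any single cheap server can die without taking the service down. The replica hostnames and URL path are hypothetical placeholders, not anything from the article.

          import urllib.error
          import urllib.request

          REPLICAS = ["app1.example.com", "app2.example.com", "app3.example.com"]

          def fetch(path, timeout=2.0):
              """Try each replica in turn; the first healthy box wins."""
              last_error = None
              for host in REPLICAS:
                  try:
                      url = "http://" + host + path
                      with urllib.request.urlopen(url, timeout=timeout) as resp:
                          return resp.read()
                  except (urllib.error.URLError, OSError) as err:
                      last_error = err  # dead or slow box: just move on to the next one
              raise RuntimeError("all replicas failed: %r" % last_error)

        With enough boxes behind a scheme like this, expensive per-server redundancy (dual power supplies, RAID on the web tier) buys very little, which is the cost argument the comment makes.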
        • by Moraelin ( 679338 ) on Monday May 26, 2008 @04:40PM (#23548705) Journal
          Unfortunately, there is one single definition of "Web 2.0", and that is that of the guy who registered the trademark: Tim O'Reilly.

          Now I'm not usually one to make a big fuss over using a word wrong, but this one is actually a trademark. Deciding to use it in any other way is a bit like deciding to call my Audigy 4 sound card a GeForce or an Audi. It just isn't one.

          And the extent to which both tech "pundits" and PHBs use it wrong, while (at least the latter) proclaiming their undying love and commitment to it, just leaves the impression that they use it as yet another buzzword. You don't proclaim your commitment to a technology, unless you actually understand what it is, how it can help you, and preferably how it compares to other technologies to the same end. Just going with a buzzword because it's popular, and ending up pledging your company to the camp of such a buzzword, is as silly (and often has the same effects) as making it your strategy to use scramjets in bicycles. Just because everyone seems to love scramjets lately, and you wouldn't want your mountain bike company to be left behind.

          To get back to the actual definition of that trademark, it's not even about technology as such. It's about people. It's not techno-fetishism, as in liking cool new technologies for their own sake; it's techno-utopianism: the misguided belief that you only need to give more internet tools to a billion monkeys to get a utopia like nothing imagined before. Said monkeys never created anything worth reading with a keyboard, but if it's keyboards connected to the Internet, now that's how you hit a gold mine.

          O'Reilly's idea is sorta along the lines of:

          - forget about publishing content (e.g., hiring expensive tech writers and marketers for your site), it's all about participation, baby. Let users write your content. Just put in some wikis and forums, and a thousand bored monkeys will do the work faster, cheaper and more accurately. (People will just flock to offer you some free, quality work, just because they like donating to a corporation, I guess. And if instead you discover comments about how much your company sucks, the CEO's sexual orientation, and his mom's weight, well, I guess it must be true, 'cause collaborative efforts can't _possibly_ be wrong.)

          - forget about setting up your own redundant servers or dealing with Akamai, use BitTorrent. (Ask a lot of people how they felt about Blizzard's going almost exclusively through BitTorrent at launch. Nowadays their own servers serve a lot more of the content, if not enough other users are stuffing your pipe. I wonder why.)

          - forget selling media on the Internet, the future is Napster letting people pirate it, like happened way back then. (No, literally, the "mp3.com --> Napster" line is part of his own page explaining Web 2.0. I guess it's a good thing no one told Steve Jobs that.)

          - forget content management systems, use wikis. (I wonder in which alternate reality the piss-poor search engines of wikis can be compared to the capabilities of those systems.)

          - for that matter, forget about structuring information in any way, like through directories and portals, just let the users tag it. (I'm _sure_ that the tags "humor, theft, oldnews, !news, digg" will so help me find the story about a manager stealing the server from earlier. Never mind that search engines were already dumping tag search in favour of full-text search, even at the time when he came up with that idea.)

          Etc.

          Basically, if you have the patience to sift through his ramblings, and don't give up at the "well, Google started up as a web database" intro, the meat begins at "Harnessing Collective Intelligence". That's what it's about. It's not so much about what technology you use on the web; it's about connecting a billion clueless monkeys and believing that the result is something a billion times more intelligent and informed. Anything that helps connect those monkeys is good, anything else is irrelevant. Even whether you us
          • The guy who founded MP3.com (IIRC) wrote a book entitled "The Cult of the Amateur", in which he basically talks about going on a weekend getaway with O'Reilly and offers an interesting critique of Web 2.0. So much so that I bought a couple of extra copies and have given them to PHBs when I'm hired to do consulting work.
          • by Junta ( 36770 )
            Merely that the companies are the ones tinting the situation for their benefit. 'Web 2.0' has become a bit of marketeering, since the original definition doesn't help a lot of those companies sell more crap.

            However, to an extent, fighting for the original spirit/meaning of 'Web 2.0' is like fighting for correct usage of 'begging the question': while you may be in the right, the masses still adopt the common usage. And in Web 2.0 in the true sense of the word, the most popular opinion tends to
            • Well, in a sense, he did it to himself, because (willingly or not) he did try to correlate his techno-utopian ideas to money. His claim is that he looked at which sites survived the dot-com bubble, and indeed thrived, and the common factors that he saw were, well, the ones I've already listed: wikis, blogs, tags, collaboration, content hauled through P2P, etc. So you too can be a part of the next great thing, if only you build your site around those.

              Kinda funny, because what the rest of us saw is: those who
          • I find your discussion of web 2.0 interesting and insightful. I have tagged it web2.0 in delicious to help me find it in the future.
      • Re: (Score:3, Informative)

        by jonbryce ( 703250 )
        Web 2.0 is a corporate buzzword that PHBs throw into discussions to make it sound like they are really up to date.
    • Re: (Score:3, Interesting)

      by fan of lem ( 1092395 )
      Servers post to twitter whenever they "don't feel well". Web 2.0-enabled system admins react quicker! (Esp. with a Firefox plugin)
    • by thatskinnyguy ( 1129515 ) on Monday May 26, 2008 @03:47PM (#23548205)

      Web 2.0 is about a thousand layers above hardware; it does not, in any manner, approach it.

      Not to be pedantic, but it depends on what model you are using. According to the OSI model, the Application Layer is 6 layers above the Physical Layer. And according to the TCP/IP model, the Application Layer sits 4 layers above hardware.

      Network models with thousands of layers?! Not only is that crazytalk, it's way too precise to be practical.

      • Not to be pedantic, but
        Those words don't belong together.

         
      • by CAIMLAS ( 41445 )
        Your word of the day is "hyperbole". Please, dictionary.com it or something. It will be on the exam.

        But seriously, "Web 2.0" is a bit more than 4 or 6 "layers" of functionality above bare hardware, if you count them all:

        Hardware is abstracted by an OS, which is abstracted by libraries, which are utilized by a web server, which has extra functionality added to it for dynamic content, which is stored in a database. Said database is abstracted by a user interface written in a high-level programming language, which runs o
        • I was just trying to be a bit of a smartass. As a network guy, those two models are pretty much my bubble. Thanks for the lesson.

          Hyperbole (n.): A word elementary school teachers made up to torture kids when spelling test time comes around. :)

    • Web 2.0 is about a thousand layers above hardware; it does not, in any manner, approach it.

      If you're running a highly redundant and completely pointless application, then you want to optimise your hardware differently than if you're running a monolithic and mission-critical one. Which is what the article is about.

    • Re: (Score:3, Interesting)

      by hackstraw ( 262471 )
      My thoughts exactly. It's like "Hmm, we need a good buzzword here, ah, Web 2.0, that will work".

      I haven't read the FA yet, but here are the big two with data centers, infrastructure-wise: 1) power, 2) cooling. Always has been, always will be. Frankly, I think that pumping a bunch of cold air in the floor is a bit primitive. I think in the near future we will see power and cooling be more a part of the racks than the way it's done now. There are some data centers that are doing this, but it's one of the things
  • by Anonymous Coward
    How about a link in the story? Or what's the deal here? We need a site to slashdot!
  • A new buzzword... move on, nothing to see here.
  • Web 2.0 (Score:5, Insightful)

    by 77Punker ( 673758 ) <(spencr04) (at) (highpoint.edu)> on Monday May 26, 2008 @02:38PM (#23547541)
    Oh, I get it. This is Web 2.0 hardware setup because users can add and modify servers as they see fit! Wait, the users have no control over the hardware?

    Sounds pretty stupid, but maybe Tom's Hardware has a good explanation... wait, there's no link to the article, or anything at all! At least we'll get some good discussion going because this is Slashdot, right?

    This is probably the worst article I've ever seen on Slashdot.
    • It's a holiday. Everyone who posts informative articles is probably off getting wasted because it's their one day off this month. That's also probably why this article made it through with no link - one too many shots of tequila to celebrate Memorial Day...
    • Re:Web 2.0 (Score:5, Informative)

      by Eponymous Bastard ( 1143615 ) on Monday May 26, 2008 @04:17PM (#23548509)

      This is Web 2.0 hardware setup because users can add and modify servers as they see fit!
      Actually, I found this part interesting, from HP's offering:

      When drives or the fans in the disk enclosures fail, the PolyServe software tells you which one has failed and where - and gives you the part number for ordering a replacement. Add a new blade or replace one that's failed and you don't need to install software manually. When the system detects the new blade, it configures it automatically. That involves imaging it with the Linux OS, the PolyServe storage software and any apps you have chosen to run on the ExDS; booting the new blade; and adding it to the cluster. This is all done automatically. Automatically scaling the system down when you don't need as much performance as you do during heavy server-load periods or marking data that doesn't need to be accessed as often, also keeps costs down. [emphasis mine]

      I know, not what you meant, but a funny coincidence.

      IBM is offering a more optimized rack, with shared and optimized power supplies, a different arrangement for the fans, a heat exchanger in every rack for your building's air conditioner (which Tom's interprets as water cooling), and a couple of other things.

      HP has a weird clustering software/hardware hybrid with large amounts/density of RAID 6 storage (for a flickr-style site, for example) together with a cluster of blades that can all access all the storage and can be added/removed at will. Interestingly, they point to scaling down the system when load is low, to keep the costs down. I wonder if they put servers on stand-by automatically or something. They are also looking at not spinning all the disks all the time, but they're not there yet. I guess having some disks acting as a write cache could allow you to at least spin down the parity disks of the LRU sections or some such (a toy sketch of that bookkeeping follows below). You could even cache the read side if you're willing to put up with the spinup delay on a cache miss.

      Supposedly this is Web 2.0 because you want a Google-style cluster with lots of generic hardware where any one computer can go down and the whole thing keeps going. IBM wants to lower the maintenance costs; HP didn't show them the server side, but pushed their storage technology.
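
      Since the comment above speculates about spinning down the least-recently-used disk groups, here is a toy sketch (in Python) of that LRU spin-down bookkeeping. The DiskGroup class and its spin-up/spin-down stand-ins are hypothetical; nothing here is HP's actual design.

        import time

        IDLE_LIMIT = 600  # seconds of silence before a group is a spin-down candidate

        class DiskGroup:
            """Toy model of one RAID group that can be spun up or down."""
            def __init__(self, name):
                self.name = name
                self.last_access = time.monotonic()
                self.spinning = True

            def touch(self):
                """Record an I/O; pay the spin-up delay on a 'cache miss'."""
                self.last_access = time.monotonic()
                if not self.spinning:
                    self.spinning = True  # stand-in for a real spin-up firmware call

        def reap_idle(groups):
            """Spin down the groups that have gone quiet past the idle limit."""
            now = time.monotonic()
            for g in groups:
                if g.spinning and now - g.last_access > IDLE_LIMIT:
                    g.spinning = False    # stand-in for a real spin-down firmware call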
    • by alx5000 ( 896642 )

      This is probably the worst article I've ever seen on Slashdot.

      You must be new here ;)

      • If this isn't the worst posting ever, what is? I've probably read 80% of the posts on this site for the last 7 or 8 years, and this one really stuck out for some reason. Just overwhelming misuse of already annoying (nearly meaningless) buzzwords combined with no link.
        • by alx5000 ( 896642 )
          Yes, I was only trying to be "funny". I completely agree with you: this is a blatant slashvertisement about two big companies taking advantage of the many buzzword-loving pointy-haired bosses out there, who are willing to puke huge piles of money into anything 'enterprisey'.

          I think we should celebrate the fact that there was no link to begin with ;)
          • That's a pretty grim outlook. I thought it was just ineptitude. I was serious about asking about the worst article ever, though. We should come up with a top/bottom 10 list of Slashdot stories on a slow news day.
  • by gnuman99 ( 746007 ) on Monday May 26, 2008 @02:44PM (#23547625)
    WTF is TFA link?

    But from the summary, it seems that "Web 2.0 servers" are like "Web 1.0 servers" but they would need more

        1. storage (for user comments)
        2. I/O (less caching, more throughput)
        3. processing power

    But then that is just common sense. Regardless, "Web 2.0" is clearly a term misused to the fullest extent possible these days. Might as well be "web enabled" or "linux" at the end of the 90s.
    • by Bodrius ( 191265 ) on Monday May 26, 2008 @02:50PM (#23547687) Homepage
      If they need those 3 things to offer the same performance, and are uglier to boot... then yeah, that's Web 2.0 all right.

      Maybe they let Ops mod their servers too.
      Gotta bring in the user content aspect into the picture.

    • From TFA:
      http://www.tomshardware.com/reviews/servers-hp-ibm,1937-3.html [tomshardware.com]

      The motherboards are industry standard SSI form factors, so although IBM only offers Intel quad-core CPUs today, if demand for AMD chips like Barcelona returns, then IBM can offer them

      I didn't say it. Netcraft didn't say it. IBM and/or Tom's Hardware did!!

    • Maybe they figured that since we true /.ers never RTFA, they don't need to link us to them anymore.
    • Nah, what they mean when they refer to "Web 2.0" hardware is the same as Web 1.0... distributed stateless servers. Commodity replaceable parts, with software architectures designed to "run anywhere, we don't care where".

      With that approach, the big SMP systems are now the detritus of technologies past. Everything is cluster this, cluster that, basically. Web 2.0 doesn't change this any, but it makes for a nice buzzword.

      But never mind that. The article presents HP as having a storage solution at a "significa
      • I think you are very confused about what they mean when they say $15/gig: that includes redundant load-balanced RAID controllers, the power required to run it all, and any internal switching such as InfiniBand, 10GigE, or FCP interfaces.

        When you add all this up then yes, a 60 TB SAN array is not going to cost the price of 60 1 TB hard drives.

        Enterprise-class storage with all the associated equipment does not come cheap, even today. When your goal isn't just bulk capacity then you have to also consider the n

        • I buy redundant active-passive HA storage controllers, all storage attached to the system and networked to the storage controllers, with fibers, SFPs, RACKED, and with IP addresses inserted for me at the factory and turnkeyed for us, for under $2/GB... per usable gigabyte, that is, after the file system and RAID-6 overhead is removed... from one of the leading and most well known vendors in the storage community today.

          That's for Tier-II. For Tier-1, it's in the $4.50 range. I am presuming that the HP soluti
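
          For readers following the price math, a back-of-the-envelope sketch (in Python) of "$ per usable GB" as the parent uses the term: raw capacity shrinks by RAID-6 parity and filesystem overhead before you divide the price by it. Every number below is a made-up illustration, not a quote from any vendor.

            drives, drive_gb = 16, 1000                  # one hypothetical RAID-6 group
            raw_gb = drives * drive_gb
            after_raid = raw_gb * (drives - 2) / drives  # RAID-6 loses two drives' worth of parity
            usable_gb = after_raid * 0.95                # assume ~5% filesystem overhead
            price = 26_000                               # hypothetical turnkey price
            print(f"{usable_gb:.0f} usable GB -> ${price / usable_gb:.2f}/GB")
            # 16,000 raw GB -> 14,000 GB after parity -> 13,300 usable -> ~$1.95/GB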
    • You forgot memory.

      web 2.0 'applications' soak up gigantic amounts of memory on the webserver side. That's not the database, that's just the webserver.

      You'd be lucky to get away with 2 gigs of RAM on the webserver.

      These things are disgusting bloated beasts.
    • by CAIMLAS ( 41445 )
      Web 2.0 is nonsense. Anyone who's done web design knows how it goes...

      Client: I'd like a Web 2.0 site.
      Me: Could you describe it for me?
      Client: You know, something that plays by Web 2.0 standards. None of the old stuff.
      Me: Do you mean you'd like a database-driven, dynamic web site?
      Client: *fires up web browser and goes to, say, digg.com* No, like this.
      Me: So you want a blue and grey color scheme?
      Client: No, buttons that look like this with the jelly bean look!
  • For New Buzzwords...

    Web 2.0 is gonna be better than Web 1.0, just like Vista was WAY WAY better than Windows XP!

    Though I mean, come on, seriously, this stuff is getting a bit dicey. Web 2.0 isn't really even a standard of OTHER standards. It's a term for how much java, shockwave, and ads you can JAM INTO A WEBSITE!

    What Web 2.0 means for hardware is that a bunch of companies late to taking in the $$$ from Web 1.0 are not gonna miss the next gravy train. Overselling to data centers a rack of watercooled 12
    • Re: (Score:2, Insightful)

      by maxume ( 22995 )
      Web 2.0 doesn't have anything to do with Java, unless you are serving your Javascript with it.
    • Calling it Web 2.0 is so...1999. I propose some more marketable names:
      • Web II: The P2P Quickening
      • Web 2: RIAA Judgment Day
      • Web II: The wrath of Ballmer
      • Web 2: The Information Hyper Highway
      • Web Episode II: Attack of the SpamBots
      • The Son of Web
      • The Second coming of Web
      • The Web Reloaded
      • The Web, Professional Edition
      • The Web, Service Pack 2
    • by anomalous cohort ( 704239 ) on Monday May 26, 2008 @03:41PM (#23548139) Homepage Journal

      The poster equates Web 2.0 to "java, shockwave, and ads" and gets modded as insightful? Riiiiight.

      Even if you were only focused on the technical aspects of Web 2.0, you would realize that these so-called Web 2.0 [blogspot.com] sites used AJAX and neither java nor shockwave. An even more relevant description of web 2.0 would include such terms as collective intelligence [blogspot.com], user generated content [transitionchoices.com], or the long tail [blogspot.com].

    • Re: (Score:2, Insightful)

      Oh. I thought Web 2 was where users use your website as an application, rather than perceive it as content, and then you charge advertisers. Client-side scripting makes the user experience better by providing a more responsive interface, but plain html would work ok, provided there's a fast enough connection on both ends. Right?
  • Commodity hardware and a solution like VMware ESX.

    High availability, built in redundancy, cheap per-unit cost. What's not to like?

    Works for your mission critical apps and your less critical stuff.
    • No, no, no, no. VMWare ESX was Web 1.5. New hardware is Web 2.0. Get with the program!

      For those not keeping up, here is my guide to Web 2.0:

      Web 1.0: House blend coffee
      Web 1.5: Tall, skinny latte with soy milk
      Web 2.0: Frappuccino.

      Web 1.0: Static HTML
      Web 1.1: Dynamic HTML
      Web 1.5: Dynamic XHTML
      Web 2.0: HTML? What's that?!

      Web 1.0: Cisco routers
      Web 1.1: Cisco routers running IOS
      Web 1.5: Nortel routers
      Web 2.0: Who needs routers? We have IPv6!

      Web 1.0: Wired
      Web 1.5: Wireless
      Web 2.0: Sharks. With friggin' LASE
    • Because when I have a virtual host fail (kernel dump), I want 15 "high availability" servers to all go down simultaneously, instead of one.
      • by Nursie ( 632944 )
        And come back up in seconds on redundant hardware or spare capacity running in the same VM cloud off the same SAN.

        Strangely enough, they thought of that.
        • If you have redundant hardware and a SAN, why not just use normal servers, and have redundancy for them? It is POSSIBLE to set up a nice ESX environment, but it's a challenge to keep current with their constant patches, their horrible support, and the gigantic machines that must be used to virtualize 15 production web servers. Load a resource-hungry application onto your farm, and watch it eat way more than it would without the overhead of the VM layer. 4x 3.0 GHz Xeons and 16 GB RAM (each), load-balanced m
          • by Nursie ( 632944 )
            "If you have redundant hardware and a SAN, why not just use normal servers, and have redundancy for them?"

            Because you don't need anywhere near as much redundancy. You don't have to double up every server, just have enough capacity for when one or two real boxes fail (a quick sketch of that sizing follows below).

            I guess I live in a server software world where the software isn't all that hungry, but you want the high availability, server encapsulation and load balancing offered by ESX.
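
            A quick sketch (in Python) of the N+k sizing being described: instead of doubling every server, add just enough spare host capacity to absorb one or two box failures. The numbers are illustrative only, not from this thread's real deployments.

              import math

              def hosts_needed(peak_load_vms, vms_per_host, failures_tolerated):
                  """Smallest host count that still carries peak load after k hosts die."""
                  base = math.ceil(peak_load_vms / vms_per_host)
                  return base + failures_tolerated

              # Doubling up 15 physical servers needs 30 boxes; N+2 with
              # consolidation onto VM hosts needs far fewer:
              print(hosts_needed(peak_load_vms=15, vms_per_host=5, failures_tolerated=2))  # -> 5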
    • by Fallon ( 33975 )
      Actually, VMware is the exact wrong way to go (although it does rock for many purposes). If you're looking to put up a server farm for "web 2.0" apps, you want to have each box running as close to 100% efficiency as possible, with no extra overhead (like VMware). As you scale up in a modular environment you'll gladly trade off the flexibility that VMware gives you for more efficiency. Redundant boxes provide high availability, rather than extra software.

      Speccing hardware for an application farm usually means piles of blades or 1
    • by rawler ( 1005089 )
      Uhm, yeah. Gotta tell that to my friends at work. Their VMware virtual hosts aren't constantly rebooting themselves, going down, crashing in the kernel, drifting in clock, etc. It just seems like it.

      Also, the way they manage to get some 15 poorly performing servers out of hardware and software investments close to 50k Euro must be brilliant. We only get decent-performing servers for some 1.5-2k Euro/server (WITH full redundancy for the services that require it, including separate disk arrays).

      Sorry, but I don't buy V
      • by Nursie ( 632944 )
        "Gotta tell that to my friends at work"

        By your comments I take it you mean you have experience with VMware Workstation. Try using ESXi, then come back to me on that. You're out of date.
  • by discord5 ( 798235 ) on Monday May 26, 2008 @02:59PM (#23547791)

    The best way to organize your server room for web 2.0 compliance is by stacking the servers diagonally. This way, air can float freely between racks, improving the flow of the system administrators' gas-based bowel attacks.

    Don't bother with those 10Gb switches, just hook it all up on wireless. Wireless network, wireless fibre storage, wireless power! Your megaflops (the rate at which a million projects per second will turn out to be a flop) will increase by a factor of 213% per watt.

    Web 2.0, the best thing to happen to your server room since buttered toast and angry system administrators, can be yours now for only $9999.95 per diagonal server! Why go for a 1U server when you can have a 2U for three times the price. Call now, and receive a free "My other server is a web 3.0" bumper sticker, which will be applied by an angry salesman who'll also slash your tires for FREE!

    Warning: servers may not be stacked diagonally on top of each other, but rather rammed into your rack repeatedly by an angry monkey (which we've nicknamed "Bob the technician"). Aforementioned technician may or may not leave presents in your servers. Do not feed Bob during the installation process, nor introduce Bob to small children and pets.

    • Warning: servers may not be stacked diagonally on top of each other, but rather rammed into your rack repeatedly by an angry monkey (which we've nicknamed "Bob the technician"). Aforementioned technician may or may not leave presents in your servers. Do not feed Bob during the installation process, nor introduce Bob to small children and pets.

      I thought the purpose of web 2.0 is so we would have FEWER managers.
    • can be yours now for only $9999.95 per diagonal server! Why go for a 1U server when you can have a 2U for three times the price.

      Don't you mean sqrt(2)U?

  • Karma Whoring (Score:1, Interesting)

    by Anonymous Coward
    To stop the web 2.0 discussion and focus on something interesting. From TFA:

    If you're an IT administrator for a bank and want to build a server farm for your ATM network, you make it fault tolerant and redundant, duplicating everything from power supplies to network cards. If you're a Web 2.0 service, you use the cheapest motherboards you can get, and if something fails, you throw it away and plug in a new one. It's not that the Website can afford to be offline any more than an ATM network can. It's that the software running sites like Google is distributed across so many different machines in the data center that losing one or two doesn't make any difference. As more and more companies and services use distributed applications, HP and IBM are betting there exists a better approach than a custom setup of commodity servers.

    Then they go on to talk about how Google uses custom power supplies, how people are now charged by power consumption, and how blade-style servers use up too much power (?)

    They mentioned preconfigured Linux servers for cheap, to help people avoid the extra work in setup (?)

    Etc. A jumble of suggestions for cheaper data centers, cooling many midrange servers, and so on.

    I would've thought selling VMs on a power-efficient mainframe would

    • by Nursie ( 632944 )
      If IBM thinks you have enough money, IBM will ram mainframes down your throat. This is for the people who want a green datacentre but for whom System Z might be a little out of reach.
  • They cool their servers from the bottom to the top. Also sideways, so kinda diagonally, but they're getting excellent results. Sounds like an efficient idea to me. http://www.ibm.com/systems/deepcomputing/bluegene/ [ibm.com] And why don't they use Western Digital GreenPower drives? They have an enterprise version of those, don't they?
  • by oneiros27 ( 46144 ) on Monday May 26, 2008 @03:51PM (#23548241) Homepage
    They mention 'sideways', and I thought they just meant rotating about the depth of the rack (i.e., so a 19" rack would be about 11U wide), but the discussion talks about the fans being 15" away vs. 25" ... which makes no sense, as they're mentioning servers being 47" deep. I think they're talking about side venting, which is what Suns _used_ to have, but you'd have to get these 30" wide racks (so there'd be ducts on each side for airflow in/out).

    And we have the useless quote:

    "In a data center the air conditioning is 50 feet away so you blow cool air at great expense of energy under the floor past all the cables and floor tiles," McKnight said. "It's like painting by taking a bucket of paint and throwing it into the air."
    I'm not going to claim that forced air is more efficient than bringing chilled water straight to the rack, as it's not -- but the comparison is crap -- anyone who's had to manage a large datacenter will have had to balance ducts before -- it's not fun, I admit, but you don't just pump the air in and expect everything to work.

    Then there's the great density -- 82TB in 7U. I mean, that's not bad, but the SATABeast is 42TB in 4U (unformatted), and I'm going to assume a hell of a lot cheaper (although it's a lower class of service; quick math below). And HP's not using MAID yet, but spinning all of the disks.

    My suggestion -- skip the article. It reads more like a sales brochure, with very little on the actual technical details of what they're doing.
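
    For what it's worth, the density comparison above, worked through in Python as terabytes per rack unit (the capacity and U figures are taken straight from the comment; the prices are not known):

      for name, tb, u in [("ExDS, per the comment", 82, 7), ("SATABeast, per the comment", 42, 4)]:
          print(f"{name}: {tb / u:.1f} TB per U")
      # -> roughly 11.7 TB/U vs. 10.5 TB/U: close enough that price and
      #    class of service, not raw density, are the real differentiators.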
    • If they are water cooling, then not having the servers stacked vertically would keep you from frying everything below the one that springs a leak.
      • The servers are stacked pretty much like they have been before, but not as deep. The water cooling is contained within the door and does not go to the servers (as the article says, they thought that currently that was too pricey to be worth doing). Besides, they left their options open: it's easier to tack on a door (or not), based on whether the datacenter uses chilled water, than it is to change system heatsinks, etc.
    • by Junta ( 36770 ) on Monday May 26, 2008 @04:25PM (#23548587)
      Most racks are on the order of 2 ft. wide and 4ft deep. The iDataplex racks are 4ft wide, and 2ft deep, with two columns each 19" wide. The cooling is still front to back with 19" wide servers, it's just that the racks are less deep. They are doubled up presumably to be in some way conventional for shipping, marketing, whatever, but ultimately aren't as exotic as some would fear. They could have just as well had 'normal' 42U racks with only half the depth and logically be analogous. They also take some of the spare horizontal space and carve out 16U of vertically oriented U space.

      As to the air cooling aspect, I think the discussion is tilted toward the extremes of bad datacenter design to sound better, but water cooling is more efficient to pump the distance even with a clear path for the air to go. I'm not saying this is specific to any particular vendor (the difficulty of sticking the converse of a radiator on the back of a rack seems like it would be low), but I think IBM is fishing for ways to take advantage of two-column racks in a remotely meaningful way. In this case, the ratio of usable surface area on the water pipes to unusable plumbing in the design is higher since they can be wider.
  • I've always wondered why they don't just place the servers in places where it's cold all year round; imagine how much power they could save by using ducts to distribute the cold air from outside into the server room. I mean, a solution like this can't be that hard to implement!
  • What?! (Score:3, Insightful)

    by Bootarn ( 970788 ) on Monday May 26, 2008 @04:02PM (#23548347) Homepage
    So-called "Web 2.0" means JavaScript. JavaScript is run on the client side.
    I fail to see why this requires supercooled servers, and until now I didn't even think it was possible to use the "Web 2.0" buzzword on hardware.
  • Web 2.0 FAQ (Score:5, Funny)

    by trollebolle ( 1210072 ) on Monday May 26, 2008 @04:05PM (#23548379)

    This seems like a good opportunity to mention the famous Web 2.0 FAQ by Rich "Lowtax" Kyanka on somethingawful.com. For those readers who are not entirely sure what web 2.0 is:

    Question: What is Web 2.0?

    Answer: Web 2.0 is a combination of Web 1.0 and being punched in the dick.

    Question: How do I know I'm using a website / service / product that is officially "Web 2.0" and not actually "Web 1.0" with various patches and enhancements added to it?

    Answer: Web 2.0 is made obvious by the addition of completely and highly unnecessary bells and whistles that don't do anything besides annoy you and make life more complicated. If Web 1.0 was the equivalent of reading a book, Web 2.0 is reading a book while all the words are flying around and changing pages as the book rotates randomly and sets your hands on fire. Also there's this parrot that keeps on flying towards your head in repeated attempts to gouge out your eyes.

    Question: I read about this one website in Wired Magazine. Is that Web 2.0??

    Answer: Oh definitely. Wired won't even mention Web 1.0 sites. Every single site in their magazine is at least Web 2.0. Sometimes they're even up to Web 45.2 (such as www.ebutts-and-credit-reports-delivered-via-carrier-pidgeon.com)!

    Question: My roommate said he "digged" a "wikipedia entry" about "the blogosphere" which mentioned "podcasting" as a viable form of "crowdsourcing."

    Answer: Your roommate is a faggot. Also, this wasn't technically a question.

    Question: What's Web 3.0?

    Answer: It's a product or service planned for release in spring 2008, and it consists solely of websites enabling the user to create even more detailed Kirby ASCII art. (O'.')-o

  • Web 2.0 (Score:3, Insightful)

    by EricVonZippa ( 719996 ) on Monday May 26, 2008 @04:11PM (#23548439)
    Web 2.0 really has nothing to do with 1U or 2U servers being configured in any specific manner, nor with the layout in the racks being "sideways", upside down, or water-cooled. Web 2.0 is about moving the complexity required to support an application from the physical hardware into the application stack. This happens when an application provider builds resiliency and redundancy into the application, and the application then utilizes the compute power of a series of systems merely as processing stations. If a node goes offline or fails, the application moves to the next logical set or online node.

    This really is nothing new to the industry, other than the capability now being available on the x86 platform. The hardware provider that will win in this space will be the provider that can build, design, and architect the highest possible compute spec while utilizing the least amount of both space and power. It's not about virtualizing applications or operating systems. It's about squeezing as many processing units into the smallest amount of space while utilizing as little power as possible, with the application architected in a manner that can utilize that.

    Gone are the days of needing to build fault-tolerant hardware platforms with backup power supplies, clustering, etc. Today we have smart applications that see that additional processing power is required, or that a processing node is down, and the application fails over to the next node in line. What's new is the capability being available in the x86 space. That's a beautiful thing: it means the customer/consumer wins.
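
    A minimal sketch (in Python) of the failover pattern the poster describes, where the application itself routes work to "the next logical online node". The NodePool class and its heartbeat bookkeeping are hypothetical and purely illustrative; a real system would drive this from network health checks.

      import itertools
      import time

      class NodePool:
          def __init__(self, nodes, heartbeat_timeout=5.0):
              self.nodes = list(nodes)
              self.timeout = heartbeat_timeout
              self.last_seen = {n: time.monotonic() for n in self.nodes}
              self._ring = itertools.cycle(self.nodes)

          def heartbeat(self, node):
              """Called whenever a node checks in as alive."""
              self.last_seen[node] = time.monotonic()

          def next_online(self):
              """Round-robin over nodes, skipping any whose heartbeat went stale."""
              now = time.monotonic()
              for _ in range(len(self.nodes)):
                  node = next(self._ring)
                  if now - self.last_seen[node] <= self.timeout:
                      return node
              raise RuntimeError("no online nodes")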
    • Mainframe Tech (Score:4, Insightful)

      by maz2331 ( 1104901 ) on Monday May 26, 2008 @04:48PM (#23548787)
      So, basically all we're doing is taking some mainframe tech and moving it to x86 servers. Add in some hardware-based virtualization (say, to run old code on different physical processor technology), mix it with virtualizing the rest of the hardware, and give it a proper hypervisor and you have....

      A Z9 mainframe.

      Maybe IBM should just make some nice REALLY low-end mainframe-type PC servers with a "clustering" port.

      Mainframe tech is great, except it's just too damn expensive, especially when you're not doing enterprise-level data crunching.

      • Essentially, that is dead on the mark. Add to it that companies like Google, Microsoft, and Salesforce (and others) are leveraging this capability to deliver applications and services built on it. It's cool stuff, and really is the next "big thing" in the industry. Imagine having access to all your data and applications from any place, on any device, anytime. This is where we are heading technically.
  • My servers are still running Synergy and I see no reason to upgrade in the near future.
  • Why not use mineral oil as a coolant? It has been used in transformers and other electrical applications for years. If it leaks, it won't do a whole lot of damage to the equipment or cause deadly electrical shorts.
  • Turn it sideways and watercool it? While you're at it, throw some D's on it.
