Hardware

Open Blade Servers? 207

Greg Smith points to this ZDNet story on new Intel chips aimed at blade servers, writing "Proprietary blade servers are coming on strong from IBM, Dell and HP. Where are the open blade servers? How did Google roll out 10,000 servers at such a low cost?"
This discussion has been archived. No new comments can be posted.

  • Amusing... (Score:5, Funny)

    by flewp ( 458359 ) on Saturday October 26, 2002 @09:58PM (#4539666)
    I got an ad for AMD server solutions for that story..
  • by Devil's BSD ( 562630 ) on Saturday October 26, 2002 @10:03PM (#4539686) Homepage
    Imagine a beowulf cluster of those...
    Really, though, the fact that this blade server consumes so much less electricity would be very meaningful to me. The server room at our school was not intended to be a server room. The wiring also is lacking, and every once in a while the breakers go pop!
    BTW, is it possible to use this in a laptop? Just imagine the power (or less consumption thereof) if you packed two processors in parallel on a laptop...
  • by Anonymous Coward
    My barber uses an open blade shaver, but I don't think it would be safe around all the wires in a server room. To be honest though, I don't see why you'd need any kind of blade in a server.
  • blade server (Score:5, Informative)

    by rob-fu ( 564277 ) on Saturday October 26, 2002 @10:07PM (#4539700)
    If you're too lazy to read the article and don't know what a blade server is...

    Server blades got their name because of their design. A blade server typically resembles a circuit board more than anything else. They're made to be stacked vertically. These types of servers are growing in popularity for more mundane tasks such as delivering Web pages or housing protective firewalls because they use less floor space and electricity than racks of traditional servers. Server blades also share a power supply, cables and memory, which further cuts down on costs and space. Although the down server market has dampened sales, analysts believe blades will eventually form a substantial part of the market.

    Maybe I'm retarded, but I didn't immediately picture exactly what a blade server was when I saw the name...so there it is.
    • by Anonymous Coward on Saturday October 26, 2002 @10:36PM (#4539817)

      Where Linux will really shine is the new PICMG 2.16 standard. It's an enhancement/alternative to CompactPCI where a chassis uses Ethernet signalling on the backplane instead of CompactPCI signals. That means a single chassis can hold Intel, Sun, and/or Motorola blades, and they communicate via TCP/IP instead of hardware-specific signalling. It also means that a Linux-based blade can work in *any* manufacturer's chassis. This removes a big barrier to entry for Linux in the telecom market.

      Other cool things about PICMG 2.16 blades:

      • Blades (like ethernet hosts) are more easily hot-swappable
      • Depending on the chassis switch, bus speeds could approach 24GB/s in the near future
      • Device drivers need only speak TCP/IP (one driver works on multiple blade operating systems)
      For more info see: The Next Big Thing (pdf) [picmg.org] and there might be something here since these guys designed part of the spec.
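
      As a rough illustration of what "device drivers need only speak TCP/IP" buys you in practice: once the backplane is just Ethernet, a chassis health poll is nothing more than ordinary sockets. This is a minimal sketch, not anything from the PICMG spec - the port number and "OK" payload are invented for illustration.

      # Sketch: blades on an Ethernet backplane need only plain TCP/IP to talk
      # to each other or to a chassis manager - no vendor-specific drivers.
      # The port and payload are assumptions, not part of PICMG 2.16.
      import socket

      HEALTH_PORT = 9500  # hypothetical health-poll port

      def serve_health(bind_addr="0.0.0.0"):
          """Run on each blade: answer health polls over plain TCP."""
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind((bind_addr, HEALTH_PORT))
          srv.listen(5)
          while True:
              conn, _ = srv.accept()
              conn.sendall(b"OK\n")   # any OS on any vendor's blade can answer
              conn.close()

      def poll_blade(addr):
          """Run from the chassis manager (or any other blade)."""
          with socket.create_connection((addr, HEALTH_PORT), timeout=2) as c:
              return c.recv(16).strip() == b"OK"
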
    • Re:blade server (Score:3, Interesting)

      Server blades also share a power supply, cables and memory

      Normally, redundancy is a high priority. Is the savings in hardware and electricity worth the risk of losing (say) 10 machines because one power supply failed?
      • Re:blade server (Score:5, Informative)

        by John Paul Jones ( 151355 ) on Saturday October 26, 2002 @10:52PM (#4539853)

        Normally, redundancy is a high priority. Is the savings in hardware and electricity worth the risk of losing (say) 10 machines because one power supply failed?

        Blade servers are akin to modular switches and routers. All servers share a backplane, delivering power and network connectivity, both within the chassis and to network patch panels. Some solutions have break-out boxes that permit KVM access to individual blades, while others run that through the backplane as well. Redundant power isn't the issue, since the backplane usually has redundant power; the issue is that these servers usually don't have multiple hard drives, so redundant disk isn't possible per blade. There are some that do have mirrorsets, but they are less dense than the single-disk models.

        The use of blades is normally for webserving, thin client servers, etc, where the failure of a single blade simply decreases the capacity of the overall farm, rather than rendering a service unavailable.

        The best designs implement SAN HBAs into the backplane, providing common disk to all devices, and with netbooting, the devices won't need local disk at all. That's probably going to be the future of compute farms...

        -JPJ

      • Usually the shared components are redundant. So instead of 20 power supplies for 10 machines, you have 2.
      • Re:blade server (Score:3, Informative)

        by ncc74656 ( 45571 )
        Normally, redundancy is a high priority. Is the savings in hardware and electricity worth the risk of losing (say) 10 machines because one power supply failed?

        I'm sure there's still more than one power supply. You just don't have 42 of them, like you would in a rack full of 1U servers...instead, you'd have maybe two or three (like you do in some conventional servers with redundant power supplies).
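
        A back-of-the-envelope illustration of why a couple of shared, redundant supplies can actually beat one supply per box; the per-supply failure probability below is an assumed number for illustration, not a measured one.

        # Sketch: chance of a PSU fault taking out service, 1U boxes vs. a chassis.
        p_fail = 0.03            # assumed chance a given supply dies in a year
        servers = 10

        # Ten separate 1U boxes, one supply each: expected boxes lost in a year.
        expected_lost_1u = servers * p_fail

        # Blade chassis with two shared (N+1) supplies: all ten blades go dark
        # only if both fail (ignoring repair time, so this understates uptime).
        p_chassis_dark = p_fail ** 2

        print(f"1U boxes: expect ~{expected_lost_1u:.1f} of {servers} to lose power sometime in a year")
        print(f"Chassis with 2 shared supplies: P(all {servers} blades dark) ~ {p_chassis_dark:.4f}")
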

    • I wonder whether we will see a small blade housing being sold for desktop use. A box that sits under your desk and holds between one and four blades (possibly made to look like a single machine with MOSIX). In other words could the 'blade' form factor become a rival for ATX?
      • I wonder whether we will see a small blade housing being sold for desktop use.

        I think this is what you're looking for:

        from rocketcalc [rocketcalc.com]

        as seen on slashdot [slashdot.org]

        (granted it was a long time ago...)
      • It'd be nice to ditch the ATX, but there are a number of mini-ITX and PC104ish form factors out there. The big problem with blade servers for desktop use is that they don't have video or audio - if you don't mind quiet, you could use one with an X terminal, but for a typical office, you could do just as well with a big blade server in the back room somewhere. Also, blade servers usually run wimpy little disk drives - for a server machine, I'd rather have a row of removable 3.5s in a RAID configuration.

        The other problem with adapting current blade technology to the desktop is price - most of them are too expensive per blade compared to an equivalent-speed ATX.

        • That's what I meant: will the blade form factor become cheaper and more common?

          If history is any guide, it's more likely that someone will come up with ATX-blade (a rack where you can fit several ATX motherboards) and that this will take over the server market from the low end upwards.
  • Open What? (Score:5, Insightful)

    by coene ( 554338 ) on Saturday October 26, 2002 @10:07PM (#4539704)
    A blade server is a hardware product; it really has nothing to do with software, outside of the operating system's clustering/scaling functionality.

    Google does not use blade servers, last I knew it was just a large amount of x86 boxes running Linux.

    Open Source hardware? Does that even make sense? Either have drivers (or release the specs) that allow your hardware to be used on an Open Source operating system, or don't.

    Want an "Open Source Blade Server"? Yeah, thats called an HP with Clustered Linux on each blade...
    • Re:Open What? (Score:4, Informative)

      by Istealmymusic ( 573079 ) on Saturday October 26, 2002 @10:14PM (#4539738) Homepage Journal
      What, you mean like Open Hardware [open-hardware.org]?
      • I'm aware. From the site:

        "This program is designed to develop and promote a test suite, and to serve as an information repository that will allow you to verify that your hardware configuration is Linux or FreeBSD ready."

        Dell, IBM, AND HP (the 3 blade server manufacturers in the article text) are all on that list...

        To get back to the point of my other comment, what's the purpose of this entire article? The poster is seemingly asking for Blade Servers that support open source software, and the 3 manufacturers he lists already do.
    • You haven't seen the newer (than two years old) Google racks. While the older racks are nothing more than 1RU machines, they've evolved to four machines per RU, last I saw their racks. Considering they're all diskless and completely on remote access, I think that qualifies as blade.
      • Actually, I -have- seen the newer Google racks. They're still using Rackable Systems' 1- and 2-U servers, and they most certainly are -NOT- diskless.

        The last time I saw them was when I was last in the San Jose MFNX datacenter (and, coincidentally, recognized all the servers that had been moved there from Exodus/GlobalCenter in Sunnyvale, where my former employer's equipment is still colocated). Still normal PCs with disks. Most of their servers are, as I said, the Rackable Systems' machines, with servers on both sides of the rolling racks, and an HP ProCurve switch in the middle on each side, running 100BaseTX to each box and Gigabit to the upstream.
    • s/or release the specs/release the specs/

      Drivers without specs may be somewhat helpful for an individual open-source operating system, but they are nearly or completely useless for "open source operating systems" as a whole.

    • Re:Open What? (Score:2, Insightful)

      by Moekandu ( 300763 )
      I believe what he is asking is whether or not someone will develop a "blade" hardware standard that is not proprietary. That way you can go down to Fry's, buy the parts, and put together your own chassis with blade servers and storage enclosures.

      Much like the ATX, NLX and FlexATX form factors. Off-the-shelf backplanes, barebones blades, etc.

      Give it a couple years or so. 1U and 2U barebones server kits are getting pretty prevalent. I think it will come down to how quickly small/medium business will embrace blade servers before Intel and others will start putting out "whitebox" solutions.

      Patiently waiting,
      Moekandu

      "It is a sad time when a family can be torn apart by something as simple as a pack of wild dogs."
    • like PCI, USB, etc. (Score:4, Interesting)

      by g4dget ( 579145 ) on Sunday October 27, 2002 @12:46AM (#4540129)
      The notion of "open" makes sense for hardware, although it is slightly different than from software. "Open" hardware that is documented, hardware that conforms to standards, hardware that has well-defined interfaces for software, hardware that is at least licensed under reasonable and non-discriminatory terms. RS232C, parallel ports, PC104, PCI, ISA, USB, IDE, etc., all can be considered reasonably open. Stuff that comes only from a single company, requires proprietary drivers, etc., is not open.

      An "open" standard for blade servers would be nice. And, in fact, there are such standards: passive PCI backplanes, networking backplanes, and EuroBoards. Look around the web--there are plenty of systems to build open blade servers on--servers that are open in terms of both hardware and software.
  • Pentium IIIs? (Score:3, Interesting)

    by SexyKellyOsbourne ( 606860 ) on Saturday October 26, 2002 @10:08PM (#4539706) Journal
    The usage of Pentium IIIs for these monsters of serious computing only goes to show what a badly designed marketing ploy the Pentium 4 is.
    • by coene ( 554338 ) on Saturday October 26, 2002 @10:11PM (#4539719)
      Not really...

      Intel knows that they can't get the P4's power consumption low enough to hit the numbers, so they use a P3.

      Blade servers are already marketed by everyone that makes them as "a tad slower, but much more energy efficient," and the main goal is better density, to allow more power in the given space. The Pentium III fits this bill perfectly.

      Intel is smart enough to know that the P4 isn't everything. Engineering > Marketing; whenever that happens, it's a good thing!
    • Re:Pentium IIIs? (Score:5, Informative)

      by silentbozo ( 542534 ) on Saturday October 26, 2002 @10:17PM (#4539751) Journal
      While I won't argue about the Pentium 4 being designed around the need to advertise a higher clock speed (regardless of what that means in terms of actual computing power), the Pentium III is a more mature design and benefits from lots of improvements to its power consumption. In a blade server, power consumption is one of the main issues, so using a PIII doesn't necessarily mean they wouldn't use a Pentium 4 if they could get away with it - they just can't afford the power/heat issues.

      Now consider that fact with laptops using the P4 - that's one area where they can get away with it, at the cost of battery life...
      • Re:Pentium IIIs? (Score:3, Interesting)

        by Moridineas ( 213502 )
        As a side note, from talks I've heard given by Intel engineers, their goal was definitely to up the megahertz AND overall speed. To do this, they needed to design a totally scalable architecture. Looks like they got it right, too - what's the fastest P4 today? Somewhere in the 3.0GHz range (just below, I believe, with overclocking success well above), whereas Athlons have yet to break the 2.0GHz barrier afaik. No question MHz-for-MHz the Athlon is faster, but when it's outpaced by over 1GHz, the advantage moves back to the P4's court.
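
        The trade-off described above is just instructions-per-clock times clock rate. A tiny sketch with illustrative numbers (the IPC ratio is an assumption, not a benchmark result) shows how a large clock gap can outweigh a per-MHz deficit:

        # Sketch: relative throughput ~ IPC x clock. The IPC figures are assumed.
        def relative_perf(ipc, clock_ghz):
            return ipc * clock_ghz

        athlon = relative_perf(ipc=1.0, clock_ghz=1.8)   # baseline IPC, ~1.8 GHz
        p4 = relative_perf(ipc=0.75, clock_ghz=3.0)      # assume 25% less work per MHz

        print(f"Athlon: {athlon:.2f}  P4: {p4:.2f}  ->  P4/Athlon = {p4/athlon:.2f}x")
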
  • by Anonymous Coward on Saturday October 26, 2002 @10:10PM (#4539715)
    That could be dangerous. Over 50 people* die per month in server blade accidents. Sadly, these needless deaths could've been prevented by simple server blade cover kits.

    Only YOU can prevent fatal server room mishaps.

    Keep those blades covered, kids.

    This is a public service message brought to you by Sally Shark, official mascot for server room safety.

    *note: statistics may be fictional.
  • by puto ( 533470 ) on Saturday October 26, 2002 @10:12PM (#4539728) Homepage
    Well, I am sure free advertising has a lot to do with the Google rollout.

    The <insert powered by DELL/COMPAQ/YOMAMA> tags will start appearing all over Google.

    I also reckon they were free or pretty much at cost. Companies know what they're doing. Cost at that kind of margin is probably 200 bucks a pop straight outta the factory when you consider markup is about 500 percent on computer parts. Remember, buying in bulk is power.

    Example: you can get a PC on Pricewatch with a 20 gig drive, 256 megs of RAM, and a gig or faster processor for 250, so think about it.

    Puto
    • by Ferrule ( 82308 ) on Saturday October 26, 2002 @10:49PM (#4539846)
      500% markup, huh? I would still be in the hardware business if it was. You're way off... Mod -3: uninformed and wrong.

      Basically the entire hardware industry runs off slim margins.

      I heard Dell runs at about 6%. Most distributors run a 1-2% margin, computer stores anywhere from 5-10%.

      As for the manufacturers, I haven't a clue, but they must have astronomical costs.

      Buying in bulk isn't that big of a deal anymore. When a company goes ITQ (invitation to quote) the vendors know they aren't going to win unless they at least halve their markup.
      • by puto ( 533470 ) on Saturday October 26, 2002 @11:37PM (#4539969) Homepage
        I was unclear: when I said coming straight out, I meant right out of the factory. No distributor costs, but Dell's or Compaq's actual costs. Which they would never publish; they only allude to a 6% margin.

        I certainly didn't mean a PC shop.

        But if a PC shop has an account with Tech Data and the other biggies, then it is not unusual for a 20-20 point markup on a PC. A PC that can be built with a 2 gig processor, 256 megs, a 19-inch monitor, GeForce, NIC, sound, yadda, for about 700 bucks will easily sell for 1200 bucks, and people will think they are getting a deal. Well, let's subtract 100 bucks for XP Home. And let's also remember that with a corporate account there are places cheaper than Pricewatch. So we say profit is 400 dollars. That is a little more than 6 percent on the home front.

        Now I am not saying all PC shops do this, but most still have a healthy margin on the hardware. Because Johnny Six Pack still thinks if you buy cheap, you get cheap.

        The actual manufacturers of the separate parts run on fairly slim margins, but the retailers control the prices. That is why we see such a disparity of prices on the web.

        I was a buyer for a fairly large shop up until recently, and we sold good boxes at good prices and made at least 200 a box on a bad day. The only time we lost out was when we went up against some of HP's all-in-one deals from Walmart or Circuit City. And we could still match them with better equipment and make 50 bucks on the hardware, but we didn't; of course, we gave three-year warranties, and people usually shelled out the extra 200 bucks, 'cause they had someone to throw the thing at if it didn't work.

        I also was a buyer in South America and bought things right off the boat. And you would not believe some of the things I was able to get at very low prices.

        The hardware market is all price controlled. The margins are higher than they make out. But the larger the company, the more staff, perks, etc. you gotta support, and you've got to make shareholders happy. So you gotta fudge margins; it is what we Americans are good at.

        So to end this rant: I know I can call Tech Data, get quality brand parts, sell a PC at a good price comparable with Dell, and - I am going low here - make 150 bucks on top of my built-in build cost. And knowing I have nowhere near the bulk discounting on hardware parts that they do, Dell is making mucho bucks.

        Puto
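
        For what it's worth, the margin arithmetic in the post above works out like this (the dollar figures are the poster's examples, not distributor quotes):

        # Sketch of the markup math: build cost, sale price, and OS license
        # are taken from the figures in the post above.
        parts_cost = 700        # build cost from a distributor
        sale_price = 1200       # typical shop sale price
        os_license = 100        # XP Home

        profit = sale_price - parts_cost - os_license
        markup_pct = profit / (parts_cost + os_license) * 100
        margin_pct = profit / sale_price * 100

        print(f"Profit per box: ${profit}")
        print(f"Markup on cost: {markup_pct:.0f}%   Margin on sale price: {margin_pct:.0f}%")
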
        • OK, so why can't Dell make a server for any less than 50% more than I can make one for? Wouldn't that indicate a 50% markup? It's either that or plain stupidity... (probably the latter)
        • by puto ( 533470 )
          For argument's sake, "I am not the world's best computer guy." I do not claim to be, but I have been in the networking/hardware end for a long time, and can consistently make good money at it while using quality equipment.

          Let me break down a few of the responses' concerns:

          Pricewatch: the best thing since sliced bread for the tech world. Pricewatch might be fine for the home user or a buddy, but you actually have no idea what you are getting, or why it is so cheap: RMA stuff, returns, "fell off the truck." So while all this transfers into savings for you and your customer, you have this problem: TechData or any of the biggies will cross-ship, or ship an RMA ASAP, and you put the old one in the box, no questions asked. 99.9% of Pricewatch suppliers will not do this. Plus most of the stuff is OEM, with a 1-year, 90-day, or 30-day warranty. I do not know about you guys, but with hardware reliability being what it is these days compared to what it was years ago, I would rather pay the extra 12 bucks for the retail item and get the 3-year warranty. That way when something burns out in 91 days, I do not have an angry customer bitching at me. In the 'real' world you know part of your profit will be eaten up by service over the next two or three years. So you need quality from a reputable company. Not a 20-dollar motherboard with everything built in and 1 PCI slot; something burns out, and you gotta replace it out of pocket because the company that sold it to you disappeared three days later.

          I do trust NewEgg. But I can wangle better prices with some of the larger places. My new box is entirely NewEgg, but there is nothing generic in it either. The only el cheapo things I buy are NICs, and those are lend-out spares for my customers until I grab them something decent. I picked up 100 NICs at 2 bucks a piece, 10/100 Realtek. Sell 'em for six bucks, everyone is happy, and they work well.

          TechData - I only mentioned Tech Data because it is one of the largest and best-known ones out there. I also use other companies. Sometimes Tech Data has really good deals on some things and not on others. Also, Tech Data has different prices for different customers, all depending on how much you buy from them. Different prices for different people. And you can always get your rep to cut you a break.

          Yeah, maybe my margin was way up there, and too high. But if you are making 6%, you:
          1. Do not have accounts with the right companies.
          2. Shouldn't be on the hardware end. (I make my money on the service end; hardware is a pain in the ass, and I do not trust anyone under 28 to build a box for my company. I trust someone who has had his hands in an XT. Computers have gotten way too easy to build in the past few years, so hardware skills have dropped dramatically. I have noticed in the shops I have run that the techs 25 and under tend to RMA more burned-out stuff... maybe because they weren't working on them in the day when a PC cost 4 grand, and don't take the same amount of care.)
          3. Do not know how to market your product to justify the price.

          I honestly make all my money networking these days. I only build my personal PCs. I assess the site and order Dells, and again, I call someone at Dell. I always see on Slashdot, "Yeah, well on Pricewatch/Dell/HP's site, here is the price." Price for who? You can call and cut deals, and if you have been in this business for any amount of time you have a nice little Rolodex with all sorts of contacts. I can usually get 12 percent knocked off Dell's web price by calling. Do a little research, pick up the phone, get those dusty people skills out. My customers will buy from me on bids even if I am a few dollars higher. Why? Personality, reliability, and quality.

          And the reported earnings for Compaq, or whatever computer company - you think these are actually the truth? Especially with what we are seeing in business today? Enron, AOL, Andersen? Come on! HP/Compaq employees went public with how management doctored the sheets so the merger would go through. And Compaq has been losing its ass for a long time. You actually believe all of what you read? Then give me about five minutes to throw a web page together. And for the paltry sum of 1000 dollars I will sell you a product that will make you ejaculate 5000% more, get you hot chicks, triple your earnings, and let you reliably predict the time of Cher's next facelift.

          And as for people asking why they can't build an (insert company name here) server for 50% less: because the market is price controlled, get that through your heads. Easy analogy: the GeForce Ultra Mega 10000 comes out. It costs $500. The GeForce Ultra Mega 90000 from last month drops to $99.99.

          You think Dell can carry all that baggage on 6 percent margins? They can with 6 percent reported margins.

          I understand business and markets very well, because I do not believe the facts and figures reported by companies. They report what they want us to hear.

          Jeez it is rant Sunday for me.

          Puto
          • You have a very good write-up... but about people with XTs: the XT doesn't have anything similar to modern computers, so it's kinda worthless for experience. The only thing you'll gain there is confidence... and that's not ALWAYS good.
  • Clarifying (Score:4, Funny)

    by davisshaver ( 583015 ) <canyougrokme AT hotmail DOT com> on Saturday October 26, 2002 @10:13PM (#4539729) Homepage
    So a blade server is sort of like a pie, but in reverse. Instead of making it smaller you make it bigger by adding some more stuff, but you still share the same pan.
  • by spineboy ( 22918 ) on Saturday October 26, 2002 @10:13PM (#4539731) Journal
    Is it bad, when you see stuff like this, to think about how you can use it to further boost your SETI@home scores?
  • by Anonymous Coward
    Blade Servers - Kickin' vampire ass (just in time for Halloween) and servin' web pages. That's just too slick.
  • by jba ( 3566 ) on Saturday October 26, 2002 @10:15PM (#4539741) Homepage
    Blade servers are not supposed to be stacked vertically, and you can fit *way* more than 42 blade servers in a single rack. The author is thinking of 1U boxes, which have only been around for say... 10 years!

    look at: http://www.compaq.com/products/servers/platforms/index-bl.html

    280+ servers in a rack.
    • 280 servers in a rack?! is that why they call it open whordware?
    • by Anonymous Coward on Saturday October 26, 2002 @10:29PM (#4539794)
      Yes, you can fit more than 42 blade servers in a single rack. Good thing the article SAID that you can fit 100 or more in a rack. The 42-server figure was referring to traditional single-unit-high servers, the market that blades are in many cases replacing.

      Read the article before commenting.
    • 280+ servers in a rack

      280? feh. We [rlx.com] do 336. Not that I'm biased or anything (I'm an employee of RLX), but to top that off--IMHO our management software beats the pants off of that of *cough* *cough* others.

      • Yeah, I was wondering whatever happened to the IBM/RLX deal [ibm.com]. I was getting all ready to deploy RLX blade, when suddenly we were pushing our own. Don't know what happened there.
    • If you're using your own real estate, it's pretty easy to power the things, but if you're buying commercial hosting space, blade servers and 1U rack servers quickly start running into problems with electricity. The problem is that Intel/AMD CPUs are fairly power-intensive, and increasing the density by a factor of 5-20 over traditional PC designs also increases the amount of power that a rack of servers needs to levels beyond what the typical hosting center is designed for. If you're getting a rack with 2 20-amp circuits, you've got 4KW to play with - doesn't go very far if you've got to feed 200 or 336 Xeon chips, and for that matter, isn't really ideal for 42 1U rack-mounted boxes, if you want to have redundant power supplies and you're burning 75W per CPU plus some more power for the disk drives.
      And of course, all those watts of heat require cooling. If you're planning to do it, have a serious talk with your real estate suppliers.
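
      A quick budget check of the numbers above (the derating factor and per-CPU overhead are assumptions; the circuit sizes and 75W figure come from the comment):

      # Sketch: does a dense rack fit inside a 2 x 20 A colo power budget?
      circuits = 2
      amps_per_circuit = 20
      volts = 120
      usable_fraction = 0.8          # assumed derating of the circuits

      budget_w = circuits * amps_per_circuit * volts * usable_fraction   # ~3.8 kW

      cpu_w = 75                     # per-CPU draw, per the comment
      overhead_w = 25                # assumed disks/fans/PSU loss per CPU

      for cpus in (42, 200, 336):
          need_w = cpus * (cpu_w + overhead_w)
          print(f"{cpus:3d} CPUs -> ~{need_w/1000:.1f} kW needed vs {budget_w/1000:.1f} kW available")
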
  • Old Article (Score:5, Informative)

    by hopbine ( 618442 ) on Saturday October 26, 2002 @10:19PM (#4539759)
    The article is quite old now - March 19 - and HP appears to favour the blade servers from the former Compaq. That being said, the advantage that blade servers give is that they save a great deal of space and make cabling much easier. In essence, you can stuff a lot of processors in a rack, also put in a small disk farm and a network switch using copper or fiber, and away you go.
    • Re:Old Article (Score:2, Informative)

      by Anonymous Coward
      Not to mention you should save power. Electricity costs money, too.

      It's no surprise HP favors Compaq's blade servers. Compaq got into the blade server game when a bunch of former employees (including employee #3 Gary Stimac, IIRC) left en masse for RLX Technologies, the company that first created blade servers. Fearing that their balls were about to be cut off by this new startup, Compaq ramped up their efforts to head off any threats. HP was kind of so-so with their blade servers.

      RLX seems to be heading down a slippery slope. I think they laid off a lot of employees, including Mr. Stimac.

      And yes, I am a little biased towards Compaq, if it shows.
  • Blade... ick (Score:5, Interesting)

    by LinuxHam ( 52232 ) on Saturday October 26, 2002 @10:23PM (#4539778) Homepage Journal
    I'm currently involved in a server consolidation project where the customer has dictated that they want to see some blade. Our primary platforms are some kickin' Intel servers (8-way 1.6GHz, 8GB RAM, max 16-way 64GB) running VMWare ESX, but the customer is insisting on seeing some blade. I am personally unimpressed by them. You need to make sure that your apps can and are built to either cluster or failover cleanly when you get blade involved. Or just not run any mission critical stuff on it.

    I prefer the VMWare ESX on our nearly-non-stop Intel hardware, the x440.

    • The only way to run a mission critical app is to make sure that your apps can and are built to either cluster or failover cleanly. If you aren't doing this, you're gambling, and the game doesn't change whether you're using blades or not, just the odds. I prefer the odds stacked in my favor, which means you use the most reliable hardware you can find for a reasonable price, and then assume it has a high failure rate and cluster the crap out of it.
    • Other obvious problems are heat density and power consumption. At some point, the real-estate is cheaper than all the provisions (and failure modes) required for the ultra-dense configurations.
  • Ok, this statement confuses me. IBM's BladeCenter line is based on Intel's new standard for blade servers, which means that the blade rack should be able to accept blades from ANY manufacturer that follows the new Intel server blade standard.

    HP/Compaq is also (supposedly) planning to use this standard for their new blade servers, so you'll be able to use HP blades in an IBM rack, and vice versa.

    The only server blade company that seems to be sticking with a "proprietary" design is RLX Technologies, which uses a more compact blade system that was originally designed around Transmeta Crusoe processors. They also have Intel blades as well, which use a similar RLX proprietary form factor.
  • by mzito ( 5482 ) on Saturday October 26, 2002 @10:28PM (#4539791) Homepage
    We won't see open blade servers for quite a while, if ever. Normal servers are only "open" because they use a common set of interconnects (standard power, ps/2 keyboard, 100BaseT), etc. On a blade server, you have to unify all of those interconnects in a hot-swappable fashion. The result? A customized connector and backplane architecture.

    In addition, there's no incentive for companies to open a standard for blade servers - they'll make more money by selling the chassis and blades, as well as the management software that is generally required for these types of servers.

    As far as Google goes, they rolled out their infrastructure for such a low cost because they did the following things:

    1) didn't use blade servers (more on that in a sec)
    2) bought in large quantities
    3) bought generic/semi-generic servers (by which I mean "not IBM")

    Not using blade servers was a sharp idea because the real advantages of blade servers come in certain situations. These include where power/heat/space is really expensive or where you need a lot of hosts without a lot of performance (like QA, staging and development environments). Remember that while they use less space, power, etc., they also use laptop/low-power CPUs and hard drives, so the performance can be lower, especially for I/O-intensive operations. If you're not hugely space-constrained, using 1U servers will save you money in the long run.

    Thanks,
    Matt
    • You can build yourself a blade server right now, depending on your price target and CPU requirements. At the low end, VIA sell entire fanless EPIA systems that draw 60W of power including disk, and you can pack them in tight. With a bit of origami you should be able to get quite a few of them in a 5U case.

      I've seen small wind tunnels on auction sites; is that the future of blade computing with Athlons - round machines with a chimney 8)

  • by deviator ( 92787 ) <bdp@NosPaM.amnesia.org> on Saturday October 26, 2002 @10:38PM (#4539823) Homepage
    why is there almost ALWAYS a knee-jerk reaction to the word "proprietary" by the slashdot editors?

    Just because the blades are "proprietary" doesn't mean they're bad. They're denser, thus easier to physically manage and run with lower power requirements than other types of servers. Just because they weren't created by a committee of "free-thinking" open source advocates doesn't mean they're useless to companies who need more processing power at lower cost.

    Seriously, the commercial market offers added value in its products that is still lacking in many open source projects.

  • by Anonymous Coward
    Well, I cannot believe that there are no pictures of those 10,000 x86 boxes that Google has. C'mon, I bet at least 50% of Google employees check /. regularly.

    Anyone?
  • How Google did it (Score:5, Informative)

    by faster ( 21765 ) on Saturday October 26, 2002 @10:50PM (#4539850)
    First, they planned to use a distributed architecture from the beginning. Then they used cheapo machines until the lack of reliability started costing more than the cheap hardware saved, and then they started buying Rackable Systems [rackable.com] boxes. 1U, half-depth, 82 to a cabinet, with a hub (or was it a switch?) at the top on each side.

    From there, they figured out a functional failover system and set up four geographically distributed data centers.

    Oh, and they coded up a search engine thing at the same time.

    • The questions I had were "Did Google, in fact, do it at low cost?" and "Define low cost."
  • I'm imagining a new market on the horizon for low-powered PCs: laptops that can run for 6+ hours without heavy batteries, and the sub-micro-ATX form factor systems. The latter are an interesting case, useful for roll-your-own multimedia appliances and servers that you can leave on but that won't chomp on your power budget. I have a PC I keep on constantly in a corner and use for my firewall, mail server, etc. I used an underclocked Celeron to keep the heat down and to keep the power usage to a minimum. But it could do so much more if it wasn't so lightweight. The LP Pentium III would easily outpace what I currently have.

    So, it would be cool to see these chips and motherboards commoditized for just this use. For a bit of extra money up front you can get double your money back in power savings (vs. a high-power CPU). There aren't many sub-11W IA processors that can get the job done.
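
    A rough payback sketch for the "double your money back in power savings" idea; the electricity price, price premium, and high-power wattage are assumptions, not quotes.

    # Sketch: payback on a low-power CPU for an always-on box. Inputs assumed.
    high_power_w = 60          # typical desktop CPU draw (assumed)
    low_power_w = 11           # LP Pentium III class part, per the comment
    price_premium = 50         # assumed extra cost of the low-power part, USD
    kwh_price = 0.10           # assumed electricity price, USD/kWh

    hours_per_year = 24 * 365
    saved_kwh = (high_power_w - low_power_w) * hours_per_year / 1000
    saved_usd = saved_kwh * kwh_price

    print(f"Saved per year: {saved_kwh:.0f} kWh ~ ${saved_usd:.0f}")
    print(f"Payback on a ${price_premium} premium: {price_premium / saved_usd:.1f} years")
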
    • I'd personally love to see some sort of .13 micron process 486 DX100/133 chip, or maybe an ARM processor, hooked up to a 16-shade B&W LCD display in one of those smaller widescreen Sony laptops. Give me a gig on an IBM Microdrive (or 2 512 meg CompactFlash cards, whichever uses less power) and a form-factor lithium-ion battery like in the iPod.

      Or hell, an iPod with a keyboard and a 7" 16:9 B&W high-res display. No, I don't want a widescreen Palm; I want something that has an FPU and can run AIM, Mozilla, and 2 or 3 telnet sessions with CPU to spare.
      • There was a time back during the dotcom bubble that you could get StrongARM boards with 200MHz CPUs and PC-like interfaces (PCI, PCMCIA) so you could put it in a small case. There was that netwinder product that was a tablet-like device. Like an uber-palm pilot. We tried to buy one but they went under before they could ship.

        Too bad. They sipped power (500-1000 mA at 5V, nothing compared to laptops). This is exactly what you described. With a touch screen to boot!

        OTOH, the ARM processors lacked an FPU, so there's no free lunch, I guess. :-P A .13 micron 486... wow. I had this idea some time ago after being disillusioned by the difficult-to-comprehend insanity that was the Pentium Pro arch. But after further study I realized that a suitably clocked P6-line CPU could probably outperform the 486 on the same code and run cooler. The P6 has a larger die area and fancier logic. Probably for cheaper, too :-)

  • by LuxuryYacht ( 229372 ) on Saturday October 26, 2002 @11:26PM (#4539933) Homepage
    True open standard blade servers are just around the corner. Up until now the current offerings from RLX, HP and IBM have been proprietary blade server designs. The next generation of blade servers will be based on an open hardware standard where different vendors' blades can be swapped with each other, the same way that CompactPCI is a standard blade design where all CPU boards are interchangeable with each other.

    Low power CPUs are needed for the current crop of blade server designs since they forgot to deal with any heat management. The current blade designs rely entirely on airflow across the cpu package for cooling in a 2U or 3U high blade with 0.7" between each blade. Oops!!... how many blades can you stuff into a rack with each processor pulling 30 - 60 watts each and keep the temp down under 70 deg C at the cpu package? The next gen of blade servers will have at least 3X the current density of cpus (1K cpus per 42U rack) while still using Xeon and other x86 processors that produce over 60W of heat each.
    • Sorry to repost here, but the first try was truncated.

      True open standard blade servers are just around the corner. Up until now the current offerings from RLX, HP and IBM have been proprietary blade server designs. The next generation of blade servers will be based on an open hardware standard where different vendors' blades can be swapped with each other, the same way that CompactPCI is a standard blade design where all CPU boards are interchangeable with each other.

      Low power CPU's are needed for the current crop of blade server designs since they forgot to deal with any heat management. The current blade designs rely entirely on airflow across the cpu package for cooling in a 2U or 3U high blade with 0.7" between each blade. Oops!!... how many blades can you stuff into a rack with each processor pulling 30 - 60 watts each and keep the temp down under 70 deg C at the cpu package?

      The next gen of blade servers will have at least 3X the current density of cpus (1K cpus per 42U rack) while still using Xeon and other x86 processors that produce over 60W of heat each.
      • The next gen of blade servers will have at least 3X the current density of cpus (1K cpus per 42U rack) while still using Xeon and other x86 processors that produce over 60W of heat each.

        And how do you propose to get the heat out? Water cooling? People moved away from water cooling for a reason.
        • Getting the heat out won't be a problem if you can't get the electricity in to power them. If you're using your own real estate, it's one thing, but if you're actually using 60KW of electricity in one rack, that's about how much power a typical colo center provides for 10-40 racks of servers, depending on how you're counting redundant power feeds. If you're trying to fit that many processors in one rack, and using heavy-power Xeons instead of low-power Transmetas, you need to start looking at room airflow and not just in-box airflow. The obvious solution is to imitate a Cray-2, and use Fluorinert or some other liquid fluorocarbon coolant piped in from a big honking Air Conditioner outside your building, possibly combined with some kind of gas turbine to turn some of that waste heat back into electricity.
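
          To put numbers on it, here is a rough cooling estimate using the common sensible-heat rule of thumb (CFM ~ 3.16 x watts / deltaT in deg F); the rack wattage follows the figures in this thread, and the air temperature rise is an assumption.

          # Sketch: cooling load for an ultra-dense rack, per the figures above.
          cpus_per_rack = 1000
          watts_per_cpu = 60
          rack_watts = cpus_per_rack * watts_per_cpu          # 60 kW of heat

          tons_of_cooling = rack_watts / 3517                 # 1 ton of AC ~ 3.517 kW
          delta_t_f = 20                                      # assumed air temp rise, deg F
          cfm_needed = 3.16 * rack_watts / delta_t_f          # rule-of-thumb airflow

          print(f"Rack heat load: {rack_watts/1000:.0f} kW ~ {tons_of_cooling:.0f} tons of cooling")
          print(f"Airflow at a {delta_t_f} F rise: ~{cfm_needed:,.0f} CFM through one rack")
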
  • by goombah99 ( 560566 ) on Saturday October 26, 2002 @11:41PM (#4539982)
    Los Alamos has a large Transmeta-processor cluster on blades. It is so low power that the entire 200-blade system does not need any special cooling. It sits in an open room with offices and occupies just one ordinary rack. There are 24 blades per 3U of space, sharing redundant power supplies and built-in network switches.

    The really interesting thing is that in actual use it appears to be faster than a Pentium at the same clock speed. What, you say? How can this be, since Transmeta has a rep for being slow?

    Well, it turns out that for scientific applications - ones where you tend to sit in tight loops a lot - the thing is faster. Its chips compile the Intel instructions into their internal processor code. Once the overhead of compiling is over, it's internally faster than a Pentium III.

    The reason it got a bad rep for being slow is that for GUI type applications where the code is running all over the place and never doing the same thing for very long, it loses out.

    Given the incredible stability (120 days, no reboot), the increasing speed of the Transmeta chips (1.2 GHz), the extremely low power, the high density, and the lack of any need for special cooling, these things may revolutionize scientific and industrial computing. But they may not dent the desktop market for raw power in GUI applications.

  • so why does? (Score:2, Interesting)

    by robpoe ( 578975 )
    I could see where it would be nice to have a bunch of dual 800MHz PIII boxen, taking less power input and generating less heat...

    Applications?

    1. Citrix farm. They're NOT disk intensive. You can do load balancing on them. If one goes down the ICA client only has to hit reconnect.
    2. Web service farm. One server goes down (MS), kernel panics for some reason...remote reboot .. back up no biggy, nobody sees or knows the difference.
    3. Novell (or NT) clusters. Exchange or Groupwise. Box dies / need to upgrade..
    4. Home control system..Building control system. have 2-6 blades controlling different things..

    there's a lot of benefit from cheap blade dual proc boxes..

  • by kinkie ( 15482 ) on Sunday October 27, 2002 @02:13AM (#4540419) Homepage
    How did Google roll out 10,000 servers at such a low cost?

    Certainly not by using blade servers. Contrary to popular belief, blade servers cost more than their non-blade equivalents - just like notebooks vs. desktops. Their selling points are (in some vendors' opinions) integrated management and supposed flexibility.
    • Certainly not by using blade servers. Contrary to popular belief, blade servers cost more than their non-blade equivalents.

      While it is true that the average cost of a blade server is higher than the cost of a 1U server, that's not the whole picture. You need to look at the 3 year TCO. Start by thinking about floor space. A typical rack might have 21 1U servers. Using RLX blade servers, with 24 blades per 3U, you can fit 7x24=168 servers in the same floorspace.

      You'd need 8 racks to achieve that with low cost servers. If you've ever managed a data center (or rented space in one) you'll know they charge per square foot (or sq.m in Europe).

      When you have a large number of servers, you'll also need to look at the costs of power consumption. Especially with Transmeta processors, you can save on power -- AND COOLING costs, which are quite significant.

      Finally, I love the fact that the RLX have integrated switches, which saves me money on the network infrastructure, plus each blade has 3 LAN interfaces, which makes them ideal for IDS applications.
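
      A sketch of the floor-space side of that TCO argument, using the densities above; the per-rack colo fee is a placeholder assumption, not a real quote.

      # Sketch: space cost of 168 servers as 1U boxes vs. blades over 3 years.
      servers_needed = 168
      u1_per_rack = 21                 # per the comment
      blades_per_rack = 168            # 7 chassis x 24 blades, per the comment

      rack_fee_month = 1000            # assumed colo charge per rack per month
      months = 36

      racks_1u = -(-servers_needed // u1_per_rack)         # ceiling division -> 8
      racks_blade = -(-servers_needed // blades_per_rack)  # -> 1

      for name, racks in (("1U servers", racks_1u), ("blades", racks_blade)):
          print(f"{name}: {racks} rack(s), ~${racks * rack_fee_month * months:,} in space over 3 years")
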

      • While I can agree with you on principle, of course the picture is even more complex.
        For instance, when you have 8k servers, I don't think you're colocated. So the cost for floor space is greatly reduced, if nothing else because physical security is much easier to manage.
        Transmeta processors are good, but they're especially advantageous when you don't need lots of horsepower, which is not the case with google where clustering is for high performance first and redundancy later. So I figure you'd need at least 4 blades just to replace your typical dual-p3 1U server, without taking the load balancing inefficiencies into account. Also, you have to take into account management costs, which are of course higher when you have multiple servers.

        This is all to say, I believe that in the end the TCO of blades and 1u servers balances out, and if there's an advantage for one or the other, it's fairly marginal.
  • At work we currently have 2 high-powered (4 Xeon CPUs/4GB RAM) web servers (IIS/Win2k) that they are thinking of replacing with blade servers. I have problems with this notion and would like some other views.
    • Replacing 2 high end Wintel boxen with 20 low power blades just increases the maintenance. Now instead of updating software twice, we have to do it 20 times.
    • This particular application is licensed per CPU. We have just increased our license cost 10 fold.
    • I don't think this application is very cluster-friendly on the front end (JRun servlets on IIS) and it is stateful (unlike something like Google). So instead of two JVM instances running, I have 20 instances.
    Am I just ignorant, or is there some hidden value here that I just can't see? Blade servers don't "automatically" make apps/systems cluster-friendly, do they? I understand that replacing them becomes physically easier, but you still have to install and configure the OS and apps anyway. And that's what takes the most time and effort, right?
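
    The licensing concern is easy to put numbers on, and the multiplier depends on whether the vendor counts CPUs or server instances; the per-unit price and the single-CPU-per-blade assumption below are illustrative, not from the post.

    # Sketch: license scaling when going from 2 big boxes to 20 blades.
    price_per_unit = 2000   # assumed USD per licensed CPU or instance

    current = {"servers": 2, "cpus": 2 * 4}      # two 4-way Xeon boxes
    proposed = {"servers": 20, "cpus": 20 * 1}   # twenty single-CPU blades (assumed)

    for basis in ("cpus", "servers"):
        before = current[basis] * price_per_unit
        after = proposed[basis] * price_per_unit
        print(f"Licensed per {basis[:-1]}: ${before:,} -> ${after:,} ({after/before:.1f}x)")
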
  • Yes, remember Transmeta's Crusoe processor?

    http://www.rlx.com - they're usually at LinuxWorld showing off their blade server units. They sell them with either Pentium III chips or Crusoe chips.

    Pretty cool stuff..
  • How Google did it. (Score:3, Informative)

    by Marasmus ( 63844 ) on Sunday October 27, 2002 @12:30PM (#4542006) Homepage Journal
    1. Start with a 24" rack, 72" tall. Rip the doors off the front and back.
    2. Get sheet-metal 24" trays to fit into the rack. Mount them every 2U, on both the front and back of the rack. Leave a few U open in the middle of the rack for your switch and KVM.
    3. Contract a company to build you custom power supplies that are 1U tall, use 90w of power, and only have 1 ATX connector and 1 molex hookup for a hard drive.
    4. Put two Tyan dual-PIII mini-ATX motherboards w/ onboard LAN and video side-by-side on each tray. Slap two 1ghz PIII's in there with good passive heatsinks. Add a small amount of RAM (128-256mb) and strap a 10-20gb hard drive to the free space on the tray using a velcro strap.
    5. Cluster 'em up! Heat is a HUGE problem, even with using the relatively-cold PIII's instead of P4's or Athlon MP's.

    After seeing the Ashburn facility in person a year or so ago, I figured out that it would have cost about $700 per node to build the cluster. Considering it was an approximately 960-node setup, it was most likely around $700,000 for the 1920-processor cluster. That's REALLY freakin' cheap!
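
    The estimate above works out roughly as follows; the individual part prices are guesses chosen to add up to the quoted ~$700 per node, not a real bill of materials.

    # Sketch of the per-node and whole-cluster cost estimate above.
    node_parts = {
        "dual-PIII mini-ATX board + 2 CPUs": 450,   # guessed split of the $700
        "128-256MB RAM": 60,
        "10-20GB disk": 80,
        "share of custom PSU, tray, cabling": 110,
    }
    node_cost = sum(node_parts.values())            # ~$700

    nodes = 960
    cpus = nodes * 2
    total = node_cost * nodes

    print(f"Per node: ~${node_cost}")
    print(f"Cluster: {nodes} nodes / {cpus} CPUs -> ~${total:,} (~${total/cpus:.0f} per CPU)")
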
  • The Google people (Schmidt et al.) gave a keynote talk at the IEEE Hot Chips conference this year. Basically, they depend on replication and software to get reliability from a vast array of commodity parts and rapidly failing disk drives. They have buyers who check the daily pricing on the cheapest motherboards and disk drives and buy in bulk.

    One major limiting factor is a 500W per square foot limit in most hosting facilities (that's why a lot of their systems are still PIII based). But if a low-power blade cost only 20% more per "MIP", it might still be cheaper to pay the server facility for the extra cooling and power.

    They said the HP/IBM/Dell salesmen just cry because there's no way those vendors can compete with the cheapest daily far-East motherboard import prices. The only salesmen who must like Google are the ones who sell them the diesel locomotives (err. backup power generators) when they exceed the power limit of some hosting facility.
