Open Blade Servers?
Greg Smith points to this ZDNet story on new Intel chips aimed at blade servers, writing "Proprietary blade servers are coming on strong from IBM, Dell and HP. Where are the open blade servers? How did Google roll out 10,000 servers at such a low cost?"
Amusing... (Score:5, Funny)
The inevitable comment (Score:3, Interesting)
Really, though, the fact that this blade server consumes so much less electricity would be very meaningful to me. The server room at our school was not intended to be a server room. The wiring is also lacking, and every once in a while the breakers go pop!
BTW, is it possible to use this in a laptop? Just imagine the power (or less consumption thereof) if you packed two processors in parallel on a laptop...
I'm no shaving technician, but.... (Score:1, Funny)
blade server (Score:5, Informative)
Server blades got their name from their design: a blade server typically resembles a circuit board more than anything else, and they're made to be stacked vertically. These types of servers are growing in popularity for more mundane tasks such as delivering Web pages or housing protective firewalls because they use less floor space and electricity than racks of traditional servers. Server blades also share a power supply, cables and memory, which further cuts down on costs and space. Although the downturn in the server market has dampened sales, analysts believe blades will eventually form a substantial part of the market.
Maybe I'm retarded, but I didn't immediately picture exactly what a blade server was when I saw the name...so there it is.
PICMG 2.16 Is where Linux can really shine (Score:5, Informative)
Where Linux will really shine is the new PICMG 2.16 standard. It's an enhancement/alternative to CompactPCI where a chassis uses Ethernet signalling on the backplane instead of CompactPCI signals. That means a single chassis can have Intel, Sun, and/or Motorola blades in the same chassis, and they communicate via TCP/IP instead of hardware-specific signalling. It also means that a Linux-based blade can work in *any* manufacturer's chassis. This removes a big barrier to entry for Linux in the telecom market.
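To make the vendor-neutral part concrete, here's a minimal Python sketch of one blade health-checking another over the backplane. The addresses and the little PING/PONG protocol are made up; PICMG 2.16 just gives you standard Ethernet between slots, so anything that speaks TCP works:

import socket

# Hypothetical backplane address; any vendor's blade that speaks TCP/IP
# over the PICMG 2.16 Ethernet backplane could fill either role.
PEER_BLADE = ("10.0.0.2", 9000)  # e.g. a Motorola blade in another slot

def peer_alive():
    """Health-check a neighboring blade over plain TCP -- no
    hardware-specific bus signalling involved."""
    try:
        with socket.create_connection(PEER_BLADE, timeout=2) as s:
            s.sendall(b"PING slot-1\n")
            return s.recv(64).startswith(b"PONG")
    except OSError:
        return False  # peer down or unreachable

if __name__ == "__main__":
    print("peer alive:", peer_alive())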
Other cool things about PICMG 2.16 Blades:
Re:PICMG 2.16 Is where Linux can really shine (Score:2)
Linux blades are here now (Score:2)
Power Appliance [etagon.com] is already available, running Oracle 9i on Linux on up to 48 dual-processor (PIII, 800-1200 MHz) blades in a single chassis, and the software to manage them.
Re:blade server (Score:3, Interesting)
Normally, redundancy is a high priority. Is the savings in hardware and electricity worth the risk of losing (say) 10 machines because one power supply failed?
Re:blade server (Score:5, Informative)
Normally, redundancy is a high priority. Is the savings in hardware and electricity worth the risk of losing (say) 10 machines because one power supply failed?
Blade servers are akin to modular switches and routers. All servers share a backplane, delivering power and network connectivity, both within the chassis and to network patch panels. Some solutions have breakout boxes that permit KVM access to individual blades, while others run that through the backplane as well. Redundant power isn't the issue, since the backplane usually has redundant power; the issue is that these servers usually don't have multiple hard drives, so redundant disk isn't possible per blade. There are some that do have mirror sets, but they are less dense than the single-disk models.
The use of blades is normally for web serving, thin-client servers, etc., where the failure of a single blade simply decreases the capacity of the overall farm, rather than rendering a service unavailable.
The best designs integrate SAN HBAs into the backplane, providing common disk to all devices, and with netbooting the devices won't need local disk at all. That's probably going to be the future of compute farms...
-JPJ
Re:blade server (Score:1, Redundant)
Re:blade server (Score:3, Informative)
I'm sure there's still more than one power supply. You just don't have 42 of them, like you would in a rack full of 1U servers...instead, you'd have maybe two or three (like you do in some conventional servers with redundant power supplies).
Blades on the desktop? (Score:2)
Re:Blades on the desktop? (Score:2)
I think this is what you're looking for:
from rocketcalc [rocketcalc.com]
as seen on slashdot [slashdot.org]
(granted it was a long time ago...)
Re:Blades on the desktop? (Score:2)
The other problem with adapting current blade technology to the desktop is price - most of them are too expensive per blade compared to an equivalent-speed ATX.
Re:Blades on the desktop? (Score:2)
If history is any guide, it's more likely that someone will come up with ATX-blade (a rack where you can fit several ATX motherboards) and that this will take over the server market from the low end upwards.
Open What? (Score:5, Insightful)
Google does not use blade servers; last I knew, it was just a large number of x86 boxes running Linux.
Open Source hardware? Does that even make sense? Either have drivers (or release the specs) that allow your hardware to be used on an Open Source operating system, or don't.
Want an "Open Source Blade Server"? Yeah, that's called an HP with clustered Linux on each blade...
Re:Open What? (Score:4, Informative)
Re:Open What? (Score:2)
"This program is designed to develop and promote a test suite, and to serve as an information repository that will allow you to verify that your hardware configuration is Linux or FreeBSD ready."
Dell, IBM, AND HP (the 3 blade server manufacturers in the article text) are all on that list...
To get back to the point of my other comment, what's the purpose of this entire article? The poster is seemingly asking for Blade Servers that support open source software, and the 3 manufacturers he lists already do.
Re:Open What? (Score:2)
Re:Open What? (Score:2)
The last time I saw them was when I was last in the San Jose MFNX datacenter (and, coincidentally, recognized all the servers that had been moved there from Exodus/GlobalCenter in Sunnyvale, where my former employer's equipment is still colocated). Still normal PCs with disks. Most of their servers are, as I said, the Rackable Systems' machines, with servers on both sides of the rolling racks, and an HP ProCurve switch in the middle on each side, running 100BaseTX to each box and Gigabit to the upstream.
Re:Open What? (Score:2)
Drivers without specs may be somewhat helpful for an individual open-source operating system, but they are nearly or completely useless for "open source operating systems" as a whole.
Re:Open What? (Score:2, Insightful)
Much like the ATX, NLX and FlexATX form factors. Off-the-shelf backplanes, barebones blades, etc.
Give it a couple years or so. 1U and 2U barebones server kits are getting pretty prevalent. I think it will come down to how quickly small/medium businesses embrace blade servers before Intel and others start putting out "whitebox" solutions.
Patiently waiting,
Moekandu
"It is a sad time when a family can be torn apart by something as simple as a pack of wild dogs."
like PCI, USB, etc. (Score:4, Interesting)
An "open" standard for blade servers would be nice. And, in fact, there are such standards: passive PCI backplanes, networking backplanes, and EuroBoards. Look around the web--there are plenty of systems to build open blade servers on--servers that are open in terms of both hardware and software.
Re:like PCI, USB, etc. (Score:2)
It's not quite that simple. For blade servers, it's not just the individual PC compatibility that matters, but also whether the backplane, interconnect, and maintenance interfaces are open.
Re:like PCI, USB, etc. (Score:2)
Open hardware to me means that there can be multiple compatible implementations. Whether that's because of a de-facto standard or a real standard doesn't really matter that much.
Pentium IIIs? (Score:3, Interesting)
Re:Pentium IIIs? (Score:5, Funny)
Intel knows that they can't get the P4's power consumption low enough to hit the numbers, so they use a P3.
Blade servers are already marketed by everyone that makes them as "a tad slower, but much more energy efficient," and the main goal is better density, to allow more power in the given space. The Pentium III fits this bill perfectly.
Intel is smart enough to know that the P4 isn't everything. Engineering > Marketing; whenever that happens, it's a good thing!
Re:Pentium IIIs? (Score:5, Informative)
Now consider that fact with laptops using the P4 - that's one area where they can get away with it, at the cost of battery life...
Re:Pentium IIIs? (Score:3, Interesting)
Re:Pentium IIIs? (Score:2, Informative)
Re:Pentium IIIs? (Score:2)
Re:Pentium IIIs? (Score:2)
open blades? (Score:5, Funny)
Only YOU can prevent fatal server room mishaps.
Keep those blades covered, kids.
This is a public service message brought to you by Sally Shark, official mascot for server room safety.
*note: statistics may be fictional.
Google - Free Servers (Score:3, Interesting)
The <insert powered by DELL/COMPAQ/YOMAMA> tags will start appearing all over Google.
I also reckon they were free, or pretty much at cost. Companies know what they're doing. Cost at that kind of margin is probably 200 bucks a pop straight outta the factory, when you consider markup is about 500 percent on computer parts. Remember, buying in bulk is power.
Example: you can get a PC on Pricewatch with a 20 gig drive, 256 megs of RAM, and a gig-or-faster processor for $250, so think about it.
Puto
Re:Google - Free Servers (Score:5, Informative)
Basically the entire hardware industry runs off slim margins.
I heard Dell runs at about 6%. Most distributors run a 1-2% margin, computer stores anywhere from 5-10%.
As for the manufacturers, I haven't a clue, but they must have astronomical costs.
Buying in bulk isn't that big of a deal anymore. When a company goes ITQ (invitation to quote) the vendors know they aren't going to win unless they at least halve their markup.
Re:Google - Free Servers (Score:4, Insightful)
I certainly didn't mean PC shop.
But if a PC shop has an account with Tech Data and the other biggies, then it is not unusual to see a 20-point markup on a PC. A PC that can be built with a 2 gig processor, 256 megs, 19 inch monitor, GeForce, NIC, sound, yadda, for about 700 bucks will easily sell for 1200 bucks, and people will think they are getting a deal. Well, let's subtract 100 bucks for XP Home, and let's also remember that with a corporate account there are places cheaper than Pricewatch. So we say profit is 400 dollars. That is a little more than 6 percent on the homefront.
Now, I am not saying all PC shops do this, but most still have a healthy margin on the hardware. Because Johnny Six-Pack still thinks if you buy cheap, you get cheap.
The actual manufacturers of the separate parts run on fairly slim margins, but the retailers control the prices. That is why we see such a disparity of prices on the web.
I was a buyer for a fairly large shop up until recently, and we sold good boxes at good prices and made at least 200 a box on a bad day. The only time we lost out was when we went up against some of HP's all-in-one deals from Walmart or Circuit City. And we could still match them with better equipment and make 50 bucks on the hardware, but we didn't; of course, we gave three-year warranties, and people usually shelled out the extra 200 bucks, 'cause they had someone to throw the thing at if it didn't work.
I was also a buyer in South America and bought things right off the boat. And you would not believe some of the things I was able to get at very low prices.
The hardware market is all price controlled. The margins are higher than they make out. But the larger the company, the more staff you gotta support, more perks, got to make shareholders happy. So you gotta fudge margins; it is what we Americans are good at.
So, to end this rant: if I know I can call Tech Data, get quality brand parts, sell a PC at a good price comparable with Dell's, and (I am going low here) make 150 bucks on top of my built-in build cost, knowing I have nowhere near the bulk discounting on hardware parts that they do, then Dell is making mucho bucks.
Puto
Re:Google - Free Servers (Score:2)
Re:Google - Free Servers (Score:3, Interesting)
Let me break down a few of the responses' concerns:
Pricewatch: The best thing since sliced bread for the tech world. Pricewatch might be fine for the home user or a buddy, but you actually have no idea what you are getting, or why it is so cheap: RMA stuff, returns, "fell off the truck". So while all this transfers into savings for you and your customer, you have this problem: Tech Data or any of the biggies will cross-ship, or ship an RMA ASAP, and you put the old one in the box, no questions asked. 99.9% of Pricewatch suppliers will not do this. Plus most of the stuff is OEM, with a 1 year, 90 day, or 30 day warranty. I do not know about you guys, but with hardware reliability being what it is these days compared to what it was years ago, I would rather pay the extra 12 bucks for the retail item and get the 3 year warranty. That way, when something burns out in 91 days, I do not have an angry customer bitching at me. In the 'real' world you know part of your profit will be eaten up by service over the next two or three years. So you need quality from a reputable company, not a 20 dollar motherboard with everything built in and 1 PCI slot, where when something burns out you gotta replace it outta pocket because the company that sold it to you has disappeared three days later.
I do trust NewEgg. But I can wangle better prices with some of the larger places. My new box is entirely NewEgg, but there is nothing generic in it either. The only el cheapo things I buy are NICs, and those are loaner spares for my customers until I grab them something decent. I picked up 100 NICs at 2 bucks a piece, 10/100 Realtek. Sell 'em for six bucks, everyone is happy, and they work well.
Tech Data: I only mentioned Tech Data because it is one of the largest and best-known ones out there. I also use other companies. Sometimes Tech Data has really good deals on some things, and sometimes not. Also, Tech Data has different prices for different customers; it's all about how much you buy from them. Different prices for different people. And you can always get your rep to cut you a break.
Yeah, maybe my margin was way up there, and too high. But if you are only making 6%, you:
1. Do not have accounts with the right companies.
2. Shouldn't be on the hardware end. (I make my money on the service end; hardware is a pain in the ass, and I do not trust anyone under 28 to build a box for my company. I trust someone who has had his hands in an XT. Computers have gotten way too easy to build in the past few years, so hardware skills have dropped dramatically. I have noticed in the shops I have run that the techs 25 and under tend to RMA more burned-out stuff... maybe because they weren't working on them in the day when a PC cost 4 grand and took the same amount of care.)
3. Do not know how to market your product to justify the price.
I honestly make all my money networking these days. I only build my personal PCs. I assess the site and order Dells, and again, I call someone at Dell. I always see on Slashdot, "Yeah, well, on Pricewatch/Dell/HP's site, here is the price." Price for who? You can call and cut deals, and if you have been in this business for any amount of time, you have a nice little Rolodex with all sorts of contacts. I can usually get 12 percent knocked off Dell's web price by calling. Do a little research, pick up the phone, get those dusty people skills out. My customers will buy from me on bids even if I am a few dollars higher. Why? Personality, reliability, and quality.
And the reported earnings for Compaq, or whatever computer company: you think these are actually the truth? Especially with what we are seeing in business today? Enron, AOL, Andersen? Come on! HP/Compaq employees went public with how management doctored the sheets so the merger would go through. And Compaq has been losing its ass for a long time. You actually believe all of what you read? Then give me about five minutes to throw a web page together, and for the paltry sum of 1000 dollars I will sell you a product that will make you ejaculate 5000% more, get you hot chicks, triple your earnings, and let you reliably predict the time of Cher's next facelift.
And as for people saying, "Then why can't I build <insert company name here>'s server for 50% less, or why doesn't the company sell it for 50% less?": because the market is price controlled; get that through your heads. Easy analogy: the GeForce Ultra Mega 10000 comes out. It costs $500. The GeForce Ultra Mega 90000 from last month drops to $99.99.
You think Dell can carry all that baggage with 6 percent margins? They can with 6 percent reported margins.
I understand business and markets very well, and that is because I don't just believe the facts and figures reported by companies. They report what they want us to hear.
Jeez it is rant Sunday for me.
Puto
Re:Google - Free Servers (Score:2)
Re:Google - Free Servers (Score:2)
Clarifying (Score:4, Funny)
Re:Clarifying:REAL QUESTION (Score:2)
Setiathome Yess!!! (Score:5, Funny)
Look out Wesley Snipes (Score:1, Funny)
While we're on the movie theme... (Score:1)
Thanks, I'll be here all week, and don't forget to tip your waitresses.
whoever wrote this article is on crack. (Score:4, Informative)
look at : http://www.compaq.com/products/servers/platforms/
280+ servers in a rack.
Re:whoever wrote this article is on crack. (Score:3, Funny)
Re:whoever wrote this article is on crack. (Score:4, Informative)
Read the article before commenting.
Re:whoever wrote this article is on crack. (Score:2)
280? feh. We [rlx.com] do 336. Not that I'm biased or anything (I'm an employee of RLX), but to top that off--IMHO our management software beats the pants off of that of *cough* *cough* others.
Re:whoever wrote this article is on crack. (Score:2)
Re:whoever wrote this article is on crack. (Score:2)
Before I continue--I'm the lead IT architecture guy at RLX, not in marketing or sales. One of the things I do is give our products (both hardware and software) a spin as a "customer" of RLX. With Control Tower [rlx.com] I'm able to manage a HUGE number of servers per administrator--many times over the industry standard (i.e. the textbook ratios of 8-to-1 for Windows servers, etc.). And these are NOT in a cluster configuration... they do different things like web serving, DHCP, DNS, printing, IDS, firewalls, etc. We've got 81 servers in our primary NOC being managed by two administrators (well... one admin, plus 1/2 my time). Plus, with CT, provisioning servers (new ones, or replacements for ones that have failed) is a snap--and takes mere minutes. Can you install a fully configured Win2K server in less than 10 minutes?
It's the savings on operational costs that are really compelling for all blade servers, but, yes, it _is_ a hard sell to some. The funny thing is, once most people see a demo of our management software in real life, their jaws drop. This is the stuff that originally attracted me to RLX (in TX of all places!) over two years ago, before you (the wider "you") had even heard the term "blade server."
Anyways...I tried to keep the propaganda to a minimum, but had to jump in with a reply there. Like I said, I'm in IT, so I don't have the price list handy. If you want, get the sales number off our web site [rlx.com], and give them a call to find out. We DO have demo gear, too--if you're really serious about checking it out.
(Now, I need to duck from the things that our sales & support folks will be throwing at my noggin!)
The usual applies--these are my opinions. It's the weekend, so my employer didn't pay me for them so they're mine, damnit!
Re:whoever wrote this article is on crack. (Score:2)
Like I said before...it's all in the configuration. Same can be said for ANY vendor...not just RLX, not just blade servers.
I can buy a 1U server for about $1K, or I can configure it so it costs $5K.
Also, the chassis don't have motherboards--they generally just have the midplane, power supplies, and networking hookups. It's the blade servers themselves that would be recognized as motherboards. The market is young... too young for a true standard to exist.
Re:whoever wrote this article is on crack. (Score:2)
The point is that you can run many many more of them in a smaller space, for less power. And, in RLX's case, with far less administration overhead.
That's what I was referring to when I mentioned the "economies of scale."
So, yes...if you want to run a small number of servers, blades are probably not for you. If you have a cluster of hundreds, the savings in operational costs are very compelling.
'nough said, me thinks.
(Since it's been a few messages since I've said it--disclaimer: I work for RLX.)
Powering blades and 1U servers (Score:2)
And of course, all those watts of heat require cooling. If you're planning to do it, have a serious talk with your real estate suppliers.
Old Article (Score:5, Informative)
Re:Old Article (Score:2, Informative)
It's no surprise HP favors Compaq's blade servers. Compaq got into the blade server game when a bunch of former employees (including employee #3 Gary Stimac, IIRC) left en masse for RLX Technologies, the company that first created blade servers. Fearing that their balls were about to be cut off by this new startup, Compaq ramped up their efforts to head off any threats. HP was kind of so-so with their blade servers.
RLX seems to be heading down a slippery slope. I think they laid off a lot of employees, including Mr. Stimac.
And yes, I am a little biased towards Compaq, if it shows.
Blade... ick (Score:5, Interesting)
I prefer the VMWare ESX on our nearly-non-stop Intel hardware, the x440.
Re:Blade... ick (Score:2)
The only way to run a mission-critical app is to make sure your apps are built to either cluster or fail over cleanly. If you aren't doing this, you're gambling, and the game doesn't change whether you're using blades or not, just the odds. I prefer the odds stacked in my favor, which means you use the most reliable hardware you can find for a reasonable price, and then assume it has a high failure rate and cluster the crap out of it.
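As a toy illustration of that "assume it fails" stance, here's a minimal client-side failover sketch in Python. The hostnames are hypothetical, and a real deployment would use a proper cluster manager or load balancer, but the principle is the same:

import socket

# Hypothetical replicas of the same service -- blades or 1U boxes,
# the failover logic doesn't care which.
REPLICAS = [("app1.example.com", 8080), ("app2.example.com", 8080)]

def connect_with_failover(replicas, timeout=2):
    """Try each replica in turn; one dead node just shifts the load."""
    last_error = None
    for host, port in replicas:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as e:
            last_error = e  # node down: move on to the next one
    raise ConnectionError("all replicas down") from last_error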
Re:Blade... ick (Score:2)
Re:Blade... ick (Score:2)
IBM's "proprietary" blade servers? (Score:2)
HP/Compaq is also (supposedly) planning to use this standard for their new blade servers, so you'll be able to use HP blades in an IBM rack, and vice versa.
The only server blade company that seems to be sticking with a "proprietary" design is RLX Technologies, which uses a more compact blade system that was originally designed around Transmeta Crusoe processors. They also offer Intel blades, which use a similar RLX-proprietary form factor.
Re:IBM's "proprietary" blade servers? (Score:2)
Re:IBM's "proprietary" blade servers? (Score:2)
And it looks to me [picmg.org] like it's pretty easy to get the specs. The current specs (not AdvancedTCA) cost $95.00. Hell, I can afford that. If you don't think $95.00 is reasonable/negligible, you don't need the spec because you're not manufacturing computer hardware.
In conclusion, it looks very much like an open standard.
Re:IBM's "proprietary" blade servers? (Score:2)
Easy - by not using blade servers.... (Score:5, Insightful)
In addition, there's no incentive for companies to open a standard for blade servers - they'll make more money by selling the chassis and blades, as well as the management software that is generally required for these types of servers.
As far as Google goes, they rolled out their infrastructure for such a low cost because they did the following things:
1) didn't use blade servers (more on that in a sec)
2) bought in large quantities
3) bought generic/semi-generic servers (by which I mean "not IBM")
Not using blade servers was a sharp idea because the real advantages of blade servers come in certain particular situations: where power/heat/space is really expensive, or where you need a lot of hosts without a lot of performance (like QA, staging and development environments). Remember that while they use less space, power, etc., they also use laptop/low-power CPUs and hard drives, so the performance can be lower, especially for I/O-intensive operations. If you're not hugely space-constrained, using 1U servers will save you money in the long run.
Thanks,
Matt
Re:Easy - by not using blade servers.... (Score:2)
I've seen small wind tunnels on auction sites; is that the future of blade computing with Athlons - round machines with a chimney 8)
proprietary doesn't necessarily mean bad... (Score:3, Insightful)
Just because the blades are "proprietary" doesn't mean they're bad. They're denser, thus easier to physically manage and run with lower power requirements than other types of servers. Just because they weren't created by a committee of "free-thinking" open source advocates doesn't mean they're useless to companies who need more processing power at lower cost.
Seriously, the commercial market offers added value in its products that is still lacking in many open source projects.
Google cluster. Anyone? (Score:1, Interesting)
Anyone?
Re:Google cluster. Anyone? (Score:4, Funny)
How Google did it (Score:5, Informative)
From there, they figured out a functional failover system and set up four geographically distributed data centers.
Oh, and they coded up a search engine thing at the same time.
Re:How Google did it (Score:2)
Re:How Sun did it (Score:2, Informative)
Low Powered Palmtops and "home servers" (Score:2)
So, it would be cool to see these chips and motherboards commoditized for just this use. For a bit of extra money up front, you can get double your money back in power savings (vs. a high-power CPU). There aren't many sub-11W IA processors that can get the job done.
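A back-of-the-envelope payback calculation (every figure here is my assumption: a 49W saving over a desktop-class CPU, $0.10/kWh, always-on, a $100 price premium) suggests the "double your money back" claim is plausible over the life of the box:

# Back-of-the-envelope payback for a low-power CPU in an always-on box.
# Every figure here is an assumption for illustration.
watts_saved    = 60 - 11        # desktop-class CPU vs. a sub-11W part
hours_per_year = 24 * 365
price_per_kwh  = 0.10           # US$, assumed

kwh_saved     = watts_saved * hours_per_year / 1000   # ~429 kWh/yr
dollars_saved = kwh_saved * price_per_kwh             # ~$43/yr

premium = 100  # assumed extra up-front cost of the low-power part
print(f"saves ~${dollars_saved:.0f}/yr; premium paid back in "
      f"{premium / dollars_saved:.1f} years")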
Re:Low Powered Palmtops and "home servers" (Score:2)
or hell, an iPod with a keyboard and a 7" 16:9 B&W high-res display. No, I don't want a widescreen Palm; I want something that has an FPU and can run AIM, Mozilla, and 2 or 3 telnet sessions with CPU to spare.
Re:Low Powered Palmtops and "home servers" (Score:2)
Too bad. They sipped power (500-1000 mA at 5V, nothing compared to laptops). This is exactly what you described. With a touch screen to boot!
OTOH, the ARM processors lacked an FPU, so there's no free lunch, I guess.
True Open Blade Servers (Score:3, Informative)
Low power CPUs are needed for the current crop of blade server designs, since they forgot to deal with any heat management. The current blade designs rely entirely on airflow across the CPU package for cooling, in a 2U or 3U high blade with 0.7" between each blade. Oops!!... how many blades can you stuff into a rack, with each processor pulling 30-60 watts, and keep the temp down under 70 deg C at the CPU package?
The next gen of blade servers will have at least 3X the current density of CPUs (1K CPUs per 42U rack) while still using Xeon and other x86 processors that produce over 60W of heat each.
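To put numbers on that heat problem (the per-CPU wattage is from above; the overhead factor is an assumption):

# Why 1K CPUs per 42U rack is a cooling problem, not a packaging problem.
cpus_per_rack = 1000
watts_per_cpu = 60            # Xeon-class, per the figure above

cpu_heat_kw = cpus_per_rack * watts_per_cpu / 1000    # 60 kW
# Disks, RAM and power-conversion losses add more; call it 30% (assumed).
total_kw = cpu_heat_kw * 1.3

print(f"~{total_kw:.0f} kW of heat from a single rack")
# Typical facilities budget a few kW per rack, so getting the heat
# out, not packing the blades in, is the hard part.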
Re:True Open Blade Servers (Score:2)
True Open Standard Blade Servers are just around the corner. Up until now, the current offerings from RLX, HP and IBM have been proprietary blade server designs. The next generation of blade servers will be based on an open hardware standard where different vendors' blades can be swapped with each other, the same way that CompactPCI is a standard blade design where all CPU boards are interchangeable with each other.
Low power CPUs are needed for the current crop of blade server designs, since they forgot to deal with any heat management. The current blade designs rely entirely on airflow across the CPU package for cooling, in a 2U or 3U high blade with 0.7" between each blade. Oops!!... how many blades can you stuff into a rack, with each processor pulling 30-60 watts, and keep the temp down under 70 deg C at the CPU package?
The next gen of blade servers will have at least 3X the current density of cpus (1K cpus per 42U rack) while still using Xeon and other x86 processors that produce over 60W of heat each.
Re:True Open Blade Servers (Score:2)
And how do you propose to get the heat out? Water cooling? People moved away from water cooling for a reason.
Heat vs. Electricity (Score:2)
transmeta and its applications (Score:5, Interesting)
The really interesting thing is that, as it is used, it appears to be faster than a Pentium at the same clock speed. What? you say. How can this be, since Transmeta has a rep for being slow?
Well, it turns out that for scientific applications (ones where you tend to sit in tight loops a lot) the thing is faster. Its code-morphing layer compiles the Intel instructions into its internal processor code. Once the overhead of compiling is over, it's internally faster than a Pentium 3.
The reason it got a bad rep for being slow is that for GUI-type applications, where the code is running all over the place and never doing the same thing for very long, it loses out.
Given the incredible stability (120 days, no reboot), the increasing speed of the Transmeta chips (1.2 GHz), and the extreme low power, high density, and no need for special cooling, these things may revolutionize scientific and industrial computing. But they may not dent the desktop market for raw power in GUI applications.
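A toy cost model (invented constants, nothing like the real code-morphing engine) shows why tight loops amortize the translation overhead while branchy GUI code never does:

# Toy model of dynamic translation: pay once to translate a code block,
# then every re-execution of that block is cheap. Constants are invented.
TRANSLATE_COST = 500   # cycles to translate one block
NATIVE_CYCLE   = 1     # cycles per execution after translation
PENTIUM_CYCLE  = 2     # assumed cycles per execution on a same-clock P3

def crusoe(blocks, runs_per_block):
    return blocks * (TRANSLATE_COST + runs_per_block * NATIVE_CYCLE)

def pentium(blocks, runs_per_block):
    return blocks * runs_per_block * PENTIUM_CYCLE

# Scientific code: few blocks run millions of times -- overhead vanishes.
print(crusoe(10, 1_000_000) < pentium(10, 1_000_000))   # True
# GUI code: many blocks each run a handful of times -- overhead dominates.
print(crusoe(100_000, 5) < pentium(100_000, 5))         # False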
Re:transmeta and its applications (Score:4, Informative)
cnn article [cnn.com]
infoworld article [infoworld.com]
Here's a link directly to a page w/in LANL [lanl.gov] and just for the heck of it a little something from google [google.com].
Standard Disclaimer + I work for RLX.
Re:transmeta and its applications (Score:2)
Anyways...the quick answer is, that would be a difficult prospect for a business to make money at. For now, it's probably best to focus on what we do, and do it better than anyone else.
Just my $0.02. Not that I don't like the idea!
so why does? (Score:2, Interesting)
Applications?
1. Citrix farm. They're NOT disk intensive. You can do load balancing on them. If one goes down, the ICA client only has to hit reconnect... back up, no biggie, nobody sees or knows the difference.
2. Web service farm. One server goes down (MS), kernel panics for some reason... remote reboot.
3. Novell (or NT) clusters. Exchange or GroupWise. Box dies / need to upgrade...
4. Home control system / building control system: have 2-6 blades controlling different things...
There's a lot of benefit from cheap dual-proc blade boxes.
Not by using blade servers (Score:3, Informative)
Certainly not by using blade servers. Contrary to popular belief, blade servers cost more than their non-blade equivalents, just like notebooks vs. desktops. Their selling points are (in some vendors' opinions) integrated management and supposed flexibility.
Re:Not by using blade servers (Score:2)
While it is true that the average cost of a blade server is higher than the cost of a 1U server, that's not the whole picture. You need to look at the 3 year TCO. Start by thinking about floor space. A typical rack might have 21 1U servers. Using RLX blade servers, with 24 blades per 3U, you can fit 7x24=168 servers in the same floorspace.
You'd need 8 racks to achieve that with low cost servers. If you've ever managed a data center (or rented space in one) you'll know they charge per square foot (or sq.m in Europe).
When you have a large number of servers, you'll also need to look at the costs of power consumption. Especially with Transmeta processors, you can save on power -- AND COOLING costs, which are quite significant.
Finally, I love the fact that the RLX have integrated switches, which saves me money on the network infrastructure, plus each blade has 3 LAN interfaces, which makes them ideal for IDS applications.
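Working those density figures through, with an assumed colo price (since floor space is billed per rack or per square foot):

# Density comparison using the figures above, in the same 21U of rack space.
one_u_servers = 21                       # 21 x 1U
blade_chassis = 21 // 3                  # 7 x 3U chassis
blades        = blade_chassis * 24       # 7 * 24 = 168 blades

rack_cost_per_month = 1000.0             # US$, assumed colo price for the space
print(f"1U:    {one_u_servers} servers, "
      f"${rack_cost_per_month / one_u_servers:.2f}/server/mo")
print(f"blade: {blades} servers, "
      f"${rack_cost_per_month / blades:.2f}/server/mo")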
Re:Not by using blade servers (Score:2)
For instance, when you have 8k servers, I don't think you're colocated. So the cost for floor space is greatly reduced, if nothing else because physical security is much easier to manage.
Transmeta processors are good, but they're especially advantageous when you don't need lots of horsepower, which is not the case with Google, where clustering is for high performance first and redundancy second. So I figure you'd need at least 4 blades just to replace your typical dual-P3 1U server, without taking the load balancing inefficiencies into account. Also, you have to take into account management costs, which are of course higher when you have multiple servers.
This is all to say, I believe that in the end the TCO of blades and 1u servers balances out, and if there's an advantage for one or the other, it's fairly marginal.
Are Blade Servers a good thing? (Score:2)
Re:Are Blade Servers a good thing? (Score:2)
The only fly in this ointment is that we are forced to run MS crap. Again, I find a solution that would be elegant, cheap, reliable and easy to maintain, but it won't work in the MS world. Damn, if I could just be free to set up my own stuff - the right way - my company would have a lot more flexibility and power for less money.
Why don't they ever listen to the Geeks!?!?
Blade servers.. with Crusoe!! (Score:2)
http://www.rlx.com - they're usually at LinuxWorld showing off their blade server units. They sell them with either Pentium III chips or Crusoe chips.
Pretty cool stuff..
How Google did it. (Score:3, Informative)
2. Get sheet-metal 24" trays to fit into the rack. Mount them every 2U, on both the front and back of the rack. Leave a few U open in the middle of the rack for your switch and KVM.
3. Contract a company to build you custom power supplies that are 1U tall, use 90W of power, and only have 1 ATX connector and 1 molex hookup for a hard drive.
4. Put two Tyan dual-PIII mini-ATX motherboards w/ onboard LAN and video side-by-side on each tray. Slap two 1GHz PIIIs in there with good passive heatsinks. Add a small amount of RAM (128-256MB) and strap a 10-20GB hard drive to the free space on the tray using a velcro strap.
5. Cluster 'em up! Heat is a HUGE problem, even using the relatively cool PIIIs instead of P4s or Athlon MPs.
After seeing the Ashburn facility in person a year or so ago, I figured out that it would have cost about $700 per node to build the cluster. Considering it was an approximately 960-node setup, it was most likely around $700,000 for the 1920-processor cluster. That's REALLY freakin' cheap!
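For what it's worth, the arithmetic behind that estimate:

# The arithmetic behind the estimate above.
nodes         = 960
cost_per_node = 700     # US$, as estimated
cpus_per_node = 2       # dual-PIII trays

total_cost = nodes * cost_per_node   # $672,000 -- "around $700,000"
total_cpus = nodes * cpus_per_node   # 1,920 processors
print(f"${total_cost:,} for {total_cpus:,} processors "
      f"(~${total_cost / total_cpus:.0f} per processor)")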
Google Hot Chips keynote (Score:2, Informative)
One major limiting factor is a 500W per square foot limit in most hosting facilities (that's why a lot of their systems are still PIII based). But if a low-power blade cost only 20% more per "MIP", it might still be cheaper to pay the server facility for the extra cooling and power.
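To see what a 500W-per-square-foot budget means in practice (the rack footprint and per-node draws here are my assumptions, not Google's figures):

# What a 500 W per square foot limit allows per rack.
watts_per_sqft      = 500
rack_footprint_sqft = 8     # assumed: rack plus service clearance

rack_budget_w = watts_per_sqft * rack_footprint_sqft   # 4,000 W

# Assumed whole-node draw for dual-CPU boxes, PIII- vs. P4-class parts.
piii_node_w, p4_node_w = 90, 180
print(f"{rack_budget_w // piii_node_w} PIII nodes vs "
      f"{rack_budget_w // p4_node_w} P4 nodes per rack")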
They said the HP/IBM/Dell salesmen just cry because there's no way those vendors can compete with the cheapest daily far-East motherboard import prices. The only salesmen who must like Google are the ones who sell them the diesel locomotives (err. backup power generators) when they exceed the power limit of some hosting facility.
Re:Honestly, no thanks (Score:2)
Re:Wrong (Score:1)
Re:Honestly, no thanks (Score:2, Interesting)
Re:Honestly, no thanks (Score:2, Informative)
Now if these were actually real chips and not paper launches, it would all mean something.
Re:The math doesn't add up (Score:2)
Re:The math doesn't add up (Score:2)
How is that supposed to be a problem? CFLAGS="-msse2 -mfpmath=sse"
Re:Completely Worthless Post.... (Score:3, Insightful)
How did Google roll out 10,000 servers at such a low cost?" Am I supposed to know what timothy is talking about here? I sure don't. Google hasn't informed me of any "low cost" and the article and timothy's write-up don't say anything (else) about it. Perhaps if he's going to make a big deal of something he should explain what is revolutionary or amazing here, or at least do more than imply a special amazing but unmentioned price.
Not only that, but the last line of the ZDNet write-up says The new 800MHz chip, which uses ServerWorks' LE3 chipset, will list for $289 each in 1,000 unit quantities. OK, low power is nice, dual processors are OK, but hardly anything special, particularly when they only run at 800MHz. After all, the reason for a dual processor is to gain more processing power and speed, but a dual-processor 800MHz setup will not perform as well as a simple single-processor 1600MHz chip and is more complex to program for. A single-processor 1600MHz AMD chip is less expensive and will outperform this chip. I see no reason to get excited if the CPU chip price is $289!
Re:Completely Worthless Reply to the Post.... (Score:3, Informative)
Also, it is not just the MHz that determines the usefulness of a given configuration. Case in point: for many large multi-user database applications, the number of concurrent processes (so many per CPU, based on the app itself) that the system can handle is much more important than the clock speed of the CPUs. Hence the need for dual, quad and oct servers, and clustering with shared storage.
Moekandu
"It is a sad time when a family can be torn apart by something as simple as a pack of wild dogs."
Re:Completely Worthless Post.... (Score:2, Informative)
After all, the reason for a dual processor is to gain more processing power and speed, but a dual-processor 800MHz setup will not perform as well as a simple single-processor 1600MHz chip and is more complex to program for.
Well, it depends. Well written web applications under a moderate to heavy load tend to perform better under the multi-processor configuration. More complex to program for? Yessss....scalability often is.