Liquid Blade Brings Immersion Cooling To Blade Servers
1sockchuck writes "In the past year we've seen several new cooling systems that submerge rack-mount servers. Now liquid immersion cooling is coming to blade servers. Liquid-cooled PC specialist Hardcore Computer has entered the data center market with Liquid Blade, which pairs two Intel Xeon 5600 processors with an S5500HV server board in a chassis filled with dielectric fluid. Hardcore, which is marketing the product for render farms, says it eliminates the need for rack-level fans and room-level air conditioning. In recent months Iceotope and Green Revolution Cooling have each introduced liquid cooling for rack-mount servers."
The blind leading the ignorant. (Score:2, Insightful)
Immersion liquid cooling is something I have done in the past, and that is all well and good; it is, after all, hobby-level tech.
For commercial-grade tech it isn't even a joke: imagine opening the bonnet/hood of your new 2010 car and finding a big tub full of water with the engine immersed in it.
Internal combustion engines have had closed-circuit internal liquid cooling for decades, and frankly computers and electronics have had it for decades too.
Think backplane technology and hollow main boards: the liquid coolant flows through the hollow PCB and mates at either side with the "backplane".
All the advantages of liquid cooling, and almost none of the disadvantages of liquid cooling.
Air cooling has one great advantage: "leaks" don't matter. Provided you have sufficient mass flow, you can leak air all over the place.
Older internal combustion engines didn't even have forced circulation in their closed-loop coolant systems; they used thermosyphon circulation, much like the convection in the space between the racks.
The salient fact here is that you have to design the cooling circuit in at the engine-block / PCB mechanical design stage. Until and unless you do that, you are going to be dealing with some god-awful Heath Robinson kludge, like fitting an old "stationary engine" (google it) into a 2010 Dodge rolling chassis.
Instead of a 50-buck case containing a 100-buck mobo, you end up with a 100-buck case and a 200-buck mobo for closed-circuit air cooling, or a 200-buck case and a 400-buck mobo for closed-circuit liquid cooling, and those prices assume large-volume manufacture with full economies of scale.
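In round numbers, those figures work out to:

    # Cost multiples from the figures above.
    standard      = 50 + 100   # off-the-shelf case + mobo      -> 150
    closed_air    = 100 + 200  # closed-circuit air cooling     -> 300 (2x)
    closed_liquid = 200 + 400  # closed-circuit liquid cooling  -> 600 (4x)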
Now go back to your Dodge dealership and take in two 2010 rolling chassis for the annual service, one running a bog-standard Cummins and the other a kludged-up stationary engine, and ask the mechanics which one will be more expensive to service.
Closed-circuit liquid-cooled electronics are not new; the technique is routinely used in avionics, which of course means you can pack 200 watts of thermal rejection (a modern desktop computer) into a package the size of an iPhone and run it flat out 24/7.
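To see why it costs, here is a rough back-of-envelope budget; every number below is my own assumption, nothing from an avionics spec:

    # Thermal budget for rejecting 200 W from an iPhone-sized package.
    # All figures are illustrative assumptions.
    P          = 200.0  # watts of heat to reject
    T_junction = 100.0  # max allowable die temperature, deg C (assumed)
    T_coolant  = 40.0   # coolant supply temperature, deg C (assumed)

    # Maximum junction-to-coolant thermal resistance the design can tolerate:
    theta_max = (T_junction - T_coolant) / P
    print(theta_max)  # 0.3 deg C per watt, far tighter than an air heatsink that size can manage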
But it costs.
Unless you are somewhere like Hong Kong, land by the acre is cheap, air is free, leaks don't matter, and the coolant doesn't cause shorts.
The only other advantage of liquid cooling is that it is much quieter, but even so, you can cure that problem by making everything bigger to accommodate much larger passive heatsinks.
See http://hackedgadgets.com/wp-content/_hs2.JPG for an example: this stuff is extruded and bought by the metre, and it doesn't have a failure mode.
Re:Or you would get a raise. (Score:2, Insightful)
So you have to pay twice as much?
Re:Upholding Moore (Score:4, Insightful)
Do we really NEED liquid cooled servers in datacenters? Is this just our feeble attempt to validate Moore's Law despite diminishing returns on smaller process size and core multiplication...?
Yes. No.
The massive densities you can achieve with liquid-to-liquid cooling allow for much smaller data centers (or much more performance in existing data centers).
Just being able to build a smaller data center can mean you've recouped the liquid cooling investment, even before factoring in the savings from better cooling efficiency per watt, no AC, and no cooling fans.
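As a crude payback sketch (every number below is an invented placeholder, not a vendor figure):

    # Back-of-envelope cooling-energy savings; all inputs hypothetical.
    servers          = 1000
    watts_per_server = 400.0
    pue_air          = 1.8    # assumed PUE for an air-cooled facility
    pue_liquid       = 1.1    # assumed PUE with immersion cooling
    cost_per_kwh     = 0.10   # USD, assumed
    hours_per_year   = 8760

    it_load_kw     = servers * watts_per_server / 1000.0
    saved_kw       = it_load_kw * (pue_air - pue_liquid)
    annual_savings = saved_kw * hours_per_year * cost_per_kwh
    print(round(annual_savings))  # ~245000 USD/yr under these assumptions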
Re:serviceability (Score:3, Insightful)
I would not characterize what he said as reactionary. He does have a valid question, which is how easy this is to service. You're right that the hard drive is a non-issue, since you would want to use a SAN; but hard drives generate heat too, so why would we not want immersion for them as well?
Not everybody uses Blades. I looked into it and I found it costly and proprietary compared to other solutions that could provide even greater density.
Even if we did create a completely sealed 1U server case, we would still need to hook up inlet and outlet ports. One of those will bust eventually, and we are left with a pretty damn big mess on the floor and a server with no way to cool itself. What happens if a leak in one server eventually drains an entire rack's coolant loop? These are valid questions even for these liquid-cooled Blades.
Complaining just to complain is stupid. However, I have yet to see a liquid cooling solution for data centers that really addresses all of these issues and provides contingencies for malfunctions.
Eventually you will need to remove a module and service it. How easy is it to service? How messy is it going to be? How reliably can you seal it back up? Will there be tests you can run under pressure to check proper seal before putting it back into production?
Under normal operation are there any safety valves that can detect loss of pressure and isolate a module and shut it down? What redundancy can be provided on coolant loops?
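At minimum I would expect something like the following loop per module. This is purely hypothetical: read_pressure(), close_isolation_valves(), shutdown() and raise_alert() are stand-ins for whatever the vendor actually ships.

    import time

    LOW_PRESSURE_KPA = 80.0  # assumed trip threshold

    def monitor(module):
        # Poll loop pressure; on loss of pressure, isolate the module
        # so it can't drain the shared loop, then power it down.
        while True:
            if module.read_pressure() < LOW_PRESSURE_KPA:
                module.close_isolation_valves()  # stop draining the rack loop
                module.shutdown()                # it can no longer cool itself
                module.raise_alert("coolant pressure lost")
                return
            time.sleep(1)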
You see, there really are a lot of valid questions here about how this is going to work, what else we are not thinking about, normal operations, and so on. I hardly think we could call that reactionary.