Small Startup Prevails In Server Cooling 'Chill Off'

miller60 writes "A small startup has shown exceptional energy efficiency in a data center 'chill off' comparing server cooling technologies. Clustered Systems posted the best numbers in the 18-month vendor evaluation sponsored by the Silicon Valley Leadership Group. The Menlo Park, Calif., company built a prototype server that uses no fans and cools processors with a cold plate whose tubing is filled with liquid coolant. The testing accidentally highlighted an opportunity for additional energy savings when the Clustered Systems unit continued to operate during a cooling failure that raised the chiller plant water temperature from 44 to 78 degrees F."
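For reference, the two Fahrenheit chiller-water temperatures in the summary convert to Celsius as follows (a quick sketch; only the 44 F and 78 F figures come from the story):

```python
# Convert the summary's chiller-water temperatures to Celsius.
def f_to_c(temp_f):
    """Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

for temp_f in (44, 78):
    print(f"{temp_f} F = {f_to_c(temp_f):.1f} C")
# 44 F = 6.7 C
# 78 F = 25.6 C
```

These are the same temperatures the commenters below refer to as roughly 6 C and 25 C.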
  • by Animats ( 122034 ) on Sunday October 17, 2010 @05:10PM (#33926278) Homepage

    The "Silicon Valley Leadership Group" is kind of a joke. It used to be the "Silicon Valley Manufacturing Group", the lobby for the semiconductor industry, but after most of the semiconductor plants closed, it lost focus.

  • by Ancient_Hacker ( 751168 ) on Sunday October 17, 2010 @05:13PM (#33926310)

    Seymour Cray's 6600 was cooled with liquid-filled cold plates... in 1962. That's, er, 48 years?

  • by bananaendian ( 928499 ) on Sunday October 17, 2010 @06:04PM (#33926594) Homepage Journal

    The video shows a full size rack with 36 standard 1U rack servers installed on it.

    On each server they have installed milled metal blocks on all the components to bring them into contact with the upper cover of the server, which has a metal-foil interface to complete the fit for maximum heat conduction.

    The actual coolant is circulated in the rack in cold plates or shelves installed between the servers. Coolant is exchanged from the top of the racks into the piping that takes it to the heat exchanger outside.

    Comment: with this kind of system, cooling is a function of the coolant temperature and flow. Given the metal blocks, interfaces, and surface areas I could see, it is nothing special to be able to cool the components down to very low temperatures. The engineer talks of 450 W dissipation per server, with 150 W previously going to the fans alone, so getting 300 W of heat out of there isn't a problem with a cold plate that size.

    Military avionics use this approach a lot: conduction-cooled cPCI and other standard cards. No need for liquid flow even; just use the aircraft structure as a cold plate. Those custom milled metal interfaces are expensive to make, but it's still a lot cheaper than anything really MILSPEC, and there are no issues with vibration on this one. This would be called modified COTS.
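The per-rack arithmetic implied by this comment can be sketched as follows (the 450 W, 150 W, and 36-server figures are all taken from the comments above; nothing else is from the source):

```python
# Back-of-envelope rack heat load using the figures quoted above:
# 450 W total dissipation per server, of which 150 W went to fans,
# and 36 servers per rack (from the video description).
SERVERS_PER_RACK = 36
SERVER_TOTAL_W = 450   # per-server dissipation including fans
FAN_W = 150            # fan power eliminated by the fanless cold-plate design

with_fans = SERVERS_PER_RACK * SERVER_TOTAL_W               # 16200 W
without_fans = SERVERS_PER_RACK * (SERVER_TOTAL_W - FAN_W)  # 10800 W

print(f"Rack load with fans:   {with_fans / 1000:.1f} kW")
print(f"Rack load, cold plate: {without_fans / 1000:.1f} kW")
print(f"Saved per rack:        {(with_fans - without_fans) / 1000:.1f} kW")
# Rack load with fans:   16.2 kW
# Rack load, cold plate: 10.8 kW
# Saved per rack:        5.4 kW
```

That 5.4 kW per rack is cooling load the chiller plant never has to remove in the first place, which is where the fanless design's efficiency edge comes from.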

  • by dbIII ( 701233 ) on Sunday October 17, 2010 @09:44PM (#33928192)

    If the water temperature is up to 25C, the component temperature may still be relatively low, since the system is probably overdesigned for 6C water anyway.
    Also, say the CPU temperature is 40C and the water temperature is as high as 25C: that's still a 15C temperature difference across the conductive path to move a lot of heat and keep the CPU from getting much hotter.
    78F/25C is still slightly colder than the air at the back of my air-cooled server racks anyway, and I expect to run most of that gear until it is obsolete.
    Your point about remaining life lost to overheating is valid (thermal fatigue, plain expansion of drive bearings, etc.), but 25C isn't very hot so long as there is good conduction to where the fluid is and the fluid keeps moving.
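The reasoning in the comment above amounts to a steady-state thermal-resistance budget: to move Q watts across a temperature difference dT, the end-to-end resistance from CPU to coolant must satisfy R ≤ dT / Q. A minimal sketch, using the 300 W and 15 C figures from the comments above (`max_thermal_resistance` is just an illustrative helper, not anything from the source):

```python
# Steady-state conduction budget: heat flow Q = dT / R, so the largest
# allowable thermal resistance for a given Q and dT is R = dT / Q.
def max_thermal_resistance(q_watts, delta_t_c):
    """Largest allowable CPU-to-coolant thermal resistance in K/W."""
    return delta_t_c / q_watts

# 300 W of server heat across the 15 C between a 40 C CPU and 25 C water:
budget = max_thermal_resistance(300, 15)
print(f"Whole-path budget: {budget:.3f} K/W")  # 0.050 K/W
```

As long as the milled blocks, foil interface, and cold plate together stay under that budget and the coolant keeps flowing, the CPU holds its temperature even with 25 C water.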
