Small Startup Prevails In Server Cooling 'Chill Off'

miller60 writes "A small startup has shown exceptional energy efficiency in a data center 'chill off' comparing server cooling technologies. Clustered Systems posted the best numbers in the 18-month vendor evaluation sponsored by the Silicon Valley Leadership Group. The Menlo Park, Calif., company built a prototype server that uses no fans and cools processors with a cold plate whose tubing is filled with liquid coolant. The testing accidentally highlighted the opportunity for additional energy savings, when the Clustered Systems unit continued to operate during a cooling failure that raised the chiller plant water temperature from 44 to 78 degrees F."
  • by Animats ( 122034 ) on Sunday October 17, 2010 @05:10PM (#33926278) Homepage

    The "Silicon Valley Leadership Group" is kind of a joke. It used to be the "Silicon Valley Manufacturing Group", the lobby for the semiconductor industry, but after most of the semiconductor plants closed, it lost focus.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Why is it a joke? It used to be called the Silicon Valley Manufacturing Group because it represented manufacturers. Now those same manufacturers are primarily designers who offshore their manufacturing, so they changed the name of the lobbying group. The group still represents a number of very powerful and successful firms; I fail to see how they are "a joke".

  • by Ancient_Hacker ( 751168 ) on Sunday October 17, 2010 @05:13PM (#33926310)

    Seymour Cray's 6600 was cooled with liquid-filled cold plates... in 1962. That's, er, 48 years?

    • by roguer ( 760556 ) on Sunday October 17, 2010 @07:23PM (#33927102)

      Seymour ran refrigerant (Fluorinert?) through cold plates on all his designs (and their descendants) up until the Cray-2. I am told that he used to call himself "the best refrigerator repairman in the industry". His downfall came when he abandoned cold plates for the full refrigerant immersion that gave the Cray-2 its distinctive "aquarium" look. Unfortunately, in later designs he had to run refrigerant across the immersed boards so fast that it actually caused friction corrosion.

      But, yeah, you have a point. Cold plates are old hat in the supercomputing industry. BTW, RISC is too. We used to joke that it stood for "Really Invented by Seymour Cray".

  • Ambient noise (Score:5, Interesting)

    by rcw-home ( 122017 ) on Sunday October 17, 2010 @05:20PM (#33926362)

    I'm pretty impressed by how quiet their demo rack is. It would be a challenge to get a good audio recording of a conversation right next to a full rack of air-cooled 1U servers, and it's frustrating to use a cell phone in most server rooms just because of the fan noise. 1U systems are the worst simply because the form factor requires a large number of tiny fans running at high speed.

    Even if there are some serious impracticalities with their approach, eliminating that fan noise is a huge selling point.

  • Cost (Score:5, Interesting)

    by markbao ( 1923284 ) on Sunday October 17, 2010 @05:44PM (#33926498)
    The world is no stranger to liquid cooling in computers, but this is pretty impressive. Does anyone have any numbers on how much traditional cooling costs compared to the estimated costs of this company's approach?
  • by bananaendian ( 928499 ) on Sunday October 17, 2010 @06:04PM (#33926594) Homepage Journal

    The video shows a full-size rack with 36 standard 1U rack servers installed in it.

    On each server they have installed milled metal blocks on all the components to bring them into contact with the server's upper cover, which has a metal foil interface to complete the fit for maximum heat conduction.

    The actual coolant is circulated in the rack in cold plates or shelves installed between the servers. Coolant is exchanged from the top of the racks into the piping that takes it to the heat exchanger outside.

    Comment: with this kind of system, cooling is a function of coolant temperature and flow. With the metal blocks, interfaces and surface areas that I could see, it is nothing special to be able to cool the components down to very low temperatures. The engineer talks of 450 W dissipation per server, with 150 W previously going to the fans alone, so getting 300 W of heat out of there isn't a problem with a cold plate that size (a rough sanity check of those numbers is sketched below). Military avionics use this a lot: conduction-cooled cPCI and other standard cards. No need for liquid flow even. Just use aircraft structure as a cold plate. Those custom milled metal interfaces are expensive to make, but it's still a lot cheaper than anything really MILSPEC, and there are no issues with vibration on this one. This would be called modified COTS.
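
    For anyone who wants to sanity-check that back-of-envelope figure, here is a minimal sketch of the steady-state conduction arithmetic. The thermal resistances and the helper function below are illustrative assumptions, not figures from Clustered Systems or the article.

# Rough steady-state conduction check. The thermal-resistance figures are
# assumed placeholders for illustration, not measured values from the article.

def plate_temp_rise(heat_w: float, r_thermal_k_per_w: float) -> float:
    """Temperature rise across a conduction path at steady state: dT = Q * R."""
    return heat_w * r_thermal_k_per_w

heat_per_server_w = 300.0    # ~450 W per server minus the ~150 W formerly spent on fans
r_block_to_cover = 0.03      # assumed milled-block + foil interface resistance, K/W
r_cover_to_coolant = 0.02    # assumed cover-to-coolant resistance through the cold plate, K/W

dt_c = plate_temp_rise(heat_per_server_w, r_block_to_cover + r_cover_to_coolant)
coolant_c = 6.7              # roughly the 44 F chiller water quoted in the summary
print(f"Plate-side temperature ~ {coolant_c + dt_c:.0f} C ({dt_c:.0f} C above the coolant)")

    Even with fairly generous interface resistances, the rise over the coolant stays modest, which is consistent with the poster's point that a cold plate of that size handles 300 W comfortably.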

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      actually most of them on modern aircraft are cooled with jet fuel.

      • Re: (Score:3, Interesting)

        actually most of them on modern aircraft are cooled with jet fuel.

        Wrong. You are confusing avionics with jet engine components, which are cooled with the fuel. Avionics racks are sometimes cooled with bleed air from the engine (no, not the exhaust, the other side).

        • Re: (Score:2, Interesting)

          by shougyin ( 1920460 )
          Yes, but only for fixed-wing aircraft; most helicopters have their own separate air intake, which is filtered. However, the new designs are still causing heating problems. I think the engineers' only real concern is that a component won't overheat and fry itself, because the air isn't filtered very well outside of the closet. I would still like to see any of this come into military aviation, but I doubt that will be for a long, long time.
        • by Anonymous Coward on Sunday October 17, 2010 @08:53PM (#33927820)

          No, I'm not. I actually fly some of them, and our avionics are cooled by cold plates with jet fuel running through them. Think about aircraft built this century, not last century, with very dense avionics.

    • by dbIII ( 701233 )

      Just use aircraft structure as a cold plate.

      In that situation you have a nice cold surface on the other side of the conductor, which is a bit hard to organise in a server room without some nice cold liquid flowing about. Most people forget that conduction is the easiest way to move heat about until you hit other constraints.

  • False inferences. (Score:4, Interesting)

    by Richy_T ( 111409 ) on Sunday October 17, 2010 @06:13PM (#33926628) Homepage

    The testing accidentally highlighted the opportunity for additional energy savings, when the Clustered Systems unit continued to operate during a cooling failure that raised the chiller plant water temperature from 44 to 78 degrees F.

    Nonsense. I've seen equipment continue to function during AC failures at very high temperatures. The real test is how much the lifetime of the equipment is reduced during those failures. It's not unusual to see higher rates of hard drive failure months after the event.

    • by dbIII ( 701233 ) on Sunday October 17, 2010 @09:44PM (#33928192)
      If the water temperature is up to 25C, the component temperature may still be relatively low, since the system is probably overdesigned for 6C water anyway.
      Also, say the CPU temperature is 40C and the water temperature is as high as 25C; that's still a 15C temperature difference to move a lot of heat through the conductive path and keep the CPU from getting much hotter (a rough worked version of that arithmetic is sketched below).
      78F/25C is still slightly colder than the air at the back of my air-cooled server racks anyway, and I expect to run most of that gear until it is obsolete.
      Your point about remaining life lost to overheating is valid (thermal fatigue, plain expansion of drive bearings, etc.), but 25C isn't very hot so long as there is good conduction to where the fluid is and so long as the fluid keeps moving.
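
      A minimal sketch of that headroom argument, assuming a made-up cold-plate conductance and CPU case temperature (neither figure comes from the article; only the two water temperatures do):

# Illustrative only: the conductance and CPU temperature below are assumptions,
# not measured values. Steady-state heat a conduction path can move: Q = G * dT.

def max_heat_w(conductance_w_per_k: float, t_hot_c: float, t_cold_c: float) -> float:
    """Heat flow through a conduction path at steady state."""
    return conductance_w_per_k * (t_hot_c - t_cold_c)

def f_to_c(temp_f: float) -> float:
    return (temp_f - 32.0) * 5.0 / 9.0

g_plate = 20.0   # assumed cold-plate conductance, W/K
t_cpu = 40.0     # assumed CPU case temperature, C

for water_f in (44.0, 78.0):   # the two chiller water temperatures from the summary
    water_c = f_to_c(water_f)
    print(f"{water_f:.0f} F ({water_c:.1f} C) water: up to "
          f"{max_heat_w(g_plate, t_cpu, water_c):.0f} W per path")

      On those assumed numbers, the warmer water roughly halves the available headroom but still leaves plenty, which fits with the rack continuing to run through the 78F excursion.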
      • Re: (Score:2, Insightful)

        by chromatix ( 231528 )

        I wouldn't call it "potential for extra efficiency" so much as "robustness in the face of hardware failures". If the cooling remains adequate when a pump or a fan fails or a blockage occurs in an air path, then the increased coolant temperature provides a signal for admins to react to, and the servers don't suffer any downtime.

        Which is a very good thing for a cooling system. How many stories are there about overheated and dead servers due to an aircon unit failure?
