Hardware

Cirocco Live Liquid Cooled Rack

Mark Grant writes "Cirocco have developed a liquid cooled rack of AMD Duron 1.1Gs in a Beowulf cluster. The rack has been installed at Cambridge University, England, and has been under trial since Christmas. The system is being put through its paces running chemical research algorithms. Critical to Cirocco's liquid cooling system are the hot-swappable quick couplings. These allow servers to be disconnected whilst the cooling system is in operation." The graph with live temperature readings is pretty neat.
  • by Anonymous Coward on Thursday March 20, 2003 @05:49PM (#5559958)
    Taking just one of those from the cluster.

    it would be like having a PC.

    wouldn't it?

  • by questamor ( 653018 ) on Thursday March 20, 2003 @05:51PM (#5559997)
    The idea of "Hot swappable" when it comes to cooling couplings is making my head spin.
  • Hmmmmm (Score:3, Funny)

    by GeorgeH ( 5469 ) on Thursday March 20, 2003 @05:52PM (#5560007) Homepage Journal
    You have to wonder if "hot swappable" is the right term for this kind of system.
  • Quite Dangerous (Score:3, Interesting)

    by Anonymous Coward on Thursday March 20, 2003 @05:54PM (#5560025)
    While it's possible to secure this sort of thing so that water doesn't leak out from the cooling system, it's much harder to manage a _very_ large cluster of processors cooled by water. Cooling a cluster with air is easier, because you only have to watch to see if a fan breaks down, and that can be automated. Cooling a cluster with water is more dangerous in that you really should inspect it visually every so often to see if moisture is leaking out from the couplings (or condensing on the pipes). So really, could you imagine having to maintain a Beowulf cluster of these?
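    The fan-failure half of that really can be scripted. A minimal sketch of the idea in Python (the sensor file path and the 60C threshold are made up for illustration; a real cluster would poll lm_sensors, IPMI or SNMP instead):

    import time

    TEMP_LIMIT_C = 60.0                      # hypothetical alarm threshold
    SENSOR_FILE = "/proc/cluster/cpu_temps"  # hypothetical path; real setups read lm_sensors/SNMP

    def read_temps():
        # One temperature per node, whitespace separated (assumed format).
        with open(SENSOR_FILE) as f:
            return [float(x) for x in f.read().split()]

    while True:
        for node, t in enumerate(read_temps()):
            if t > TEMP_LIMIT_C:
                print("ALERT: node %d at %.1fC - dead fan or coolant trouble?" % (node, t))
        time.sleep(30)

    Leaks and condensation, as the parent says, are the part a temperature probe won't reliably catch for you.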
  • nice rack! (Score:5, Insightful)

    by joe_bruin ( 266648 ) on Thursday March 20, 2003 @05:55PM (#5560034) Homepage Journal
    didn't cray master the liquid cooled cabinet design, like, 30 years ago?
    • Kind of.....

      Cray used (uses?) a liquid Freon system where each board had a metallic layer that connected into a slot formed with the cooling tubes, thus "sinking" the unit to the coolant frame itself.

      This system (the Cray one) did not "re-plumb" liquid in/out of cases.

      So (IMHO) Cray "mastered" the way to do it. This is just another way to attempt the same effect with different hardware.

    • Re:nice rack! (Score:2, Informative)

      by Curl E ( 226133 )
      The Cray T3E [cray.com] is cooled with fluorinert [3m.com]. The heat is then dumped into cooling water with an external heat exchanger unit. The processor element modules (PEMs), each a board with 8 Alpha processors (4 on the bottom mounted against a solid aluminium block and 4 on the top mounted upside down against the same block), slide into the processor cabinet and have quick-release cooling hose couplings.
    • Yes, Cray, IBM, HP have all used liquid cooling in the past, normally with exotic fluids (get out the chemistry notes). The major difference is that most previous systems were plumbed up like a house. Servers in racks need to be hot swappable or physically removable from the system.
  • by sssmashy ( 612587 ) on Thursday March 20, 2003 @05:56PM (#5560061)

    Personally, I always thought a liquid cooled rack is what happens when Pamela Anderson spills beer down her shirt...

    • actually if you substitute natalie portman for pam anderson and hot grits for beer then this post has it all!

      oh yeah, in soviet russia all of us are belong to your base.
      • In soviet russia, your sig says "Super lemons clone YOU!" And aren't the grits supposed to go in the pants? I don't think you can compare Portman's rack to Anderson's.

        But seriously, go easy on these jokes. I found out the hard way that nobody but you and me find them funny anymore. My karma's been like a yo-yo lately. (here it goes again!)

        P.S. You forgot the beow... never mind.
  • Fluid connectors (Score:4, Insightful)

    by Gordonjcp ( 186804 ) on Thursday March 20, 2003 @05:57PM (#5560080) Homepage
    Do those special, magic, fluid connectors look like scaled down versions of ordinary hydraulic dry-disconnect spools to anyone else?
  • by moosesocks ( 264553 ) on Thursday March 20, 2003 @05:59PM (#5560103) Homepage
    I wonder what type of chemical research these systems will be conducting... perhaps they will be determining the reaction between water (H2O) and silicon printed circuit boards? (come to think of it, research isn't the only thing they'll be conducting)
    • Printed circuit boards are (usually) made from fiberglass, resin, copper, and tin. Silicon goes in the integrated circuits soldered to the PCB.</Pedantic Ass>
  • maybe... (Score:5, Interesting)

    by deadsaijinx* ( 637410 ) <animemeken@hotmail.com> on Thursday March 20, 2003 @06:00PM (#5560114) Homepage
    I'm missing something. As cool as it is, why do you need to liquid cool 1.1GHz Athlons? It's nothing a fan can't handle adequately, and at a much more desirable cost. Are they just going for the wow factor, or is there an actual reason for the liquid cooling?

    Was going to make a beowulf joke, but then you insensitive clods would mark me redundant (I'm only like the 20th poster, how redundant can I be?)

    • I was wondering the same thing. Not only that, but they're just Durons... Unless they were donated I would much rather have spent the little extra and gotten actual Athlons. Even if it meant fewer. Unless they needed a certain number of processors for some reason.
      • I would expect that they had tested Duron vs. Athlon in advance, and bought the cpu type and number that gave them the most bang for the buck.

        Suppose they only run stuff that fits in the Duron's cache; then the extra price for the Athlon is a bad idea.

    • You should be able to get a much denser rack without having to worry about proper air flow.
    • I'm missing something. As cool as it is, why do you need to liquid cool 1.1GHz Athlons? It's nothing a fan can't handle adequately, and at a much more desirable cost. Are they just going for the wow factor, or is there an actual reason for the liquid cooling?

      I'm no expert at this (and I didn't read the entire article), but by using liquid cooling instead of fans you can stack a lot more CPUs into the same space. Getting rid of the heat would also be easier since you can put the radiator somewhere,

      • Their site seems to be down, but Angstrom has a 1U server that supports quad Athlon processors [angstrommicro.com] in a single case by having two Athlon MP motherboards... mind you, all in one rack unit. That's definitely a feat, even without liquid cooling, but that machine must be one loud, whiny, heat-soaked machine... especially if you want to stick 40-42 of those in one cabinet.
      • I have a Duron 1.1 system. You'd have to stack a hell of a lot of these on top of each other to require any sort of special cooling.

        I have a small $5 aluminum Cooler Master heatsink with an underpowered fan on it which is able to keep the CPU comfortably cool (without a case fan)

        I suppose it could get a bit tight in a 1u configuration, but anything more dense would probably require special hardware.

        Speaking of special hardware, why can't they just squeeze a bunch of mini-itx motherboards into a server ca
    • Re:maybe... (Score:5, Insightful)

      by ComputarMastar ( 570258 ) on Thursday March 20, 2003 @06:40PM (#5560450)
      From the site:
      Each cpu dissipates just under 50W which is traditionally air cooled using a large heatsink and fan. This is fine for a stand alone computer but when multiple computers are used, eg a Beowulf cluster, the rise in room temperature and hotspots are an increasing problem. Normally air-conditioning is used but this is very inefficient. Cirocco directly cool the heatsink with water which can be cooled remotely and recirculated.
      Air cooling works well enough until you get many hot devices in a small space. Then you have the problem of some running too hot because the air that's supposed to be cooling them is already hot from cooling others. A rough back-of-the-envelope sketch of the numbers follows.
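      Only the ~50 W per CPU comes from the site; the 42-server rack and the 10 C allowable air temperature rise are assumed for illustration:

      # Rough per-rack heat load and the airflow needed to carry it away.
      cpu_power_w = 50          # from the Cirocco site
      servers_per_rack = 42     # assumed: a full 42U rack of 1U single-CPU nodes
      delta_t_air_c = 10.0      # assumed allowable air temperature rise

      rack_heat_w = cpu_power_w * servers_per_rack          # 2100 W
      rho_air, cp_air = 1.2, 1005.0                         # kg/m^3, J/(kg*K)
      airflow_m3_s = rack_heat_w / (rho_air * cp_air * delta_t_air_c)
      print("rack heat: %d W, airflow: %.2f m^3/s (about %d CFM)"
            % (rack_heat_w, airflow_m3_s, airflow_m3_s * 2119))

      A few hundred CFM of well-directed air per rack is doable; multiply by a room full of racks and the hot-air-recooling-other-boxes problem above is exactly what you hit.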
    • The later Athlon XP cores (Thoroughbred) run much cooler and faster at higher clock speeds. So downclocking such a chip would result in even less cooling needed.
    • Ok. Your fan can cool 1 Athlon, no problem. The question is how you'd cool, say, a fairly densely packed system of 1500 Athlons in 10 racks (8 procs per 2U). That's 75kW. Liquid cooling is simply more practical once you start packing enough hot processors into a small space. (This sort of density isn't implausible; it's only a short evolution from what Cray was achieving in the T3E 8 years ago, and they did it with liquid cooling.)
    • In order to air cool, you need quite a considerable volume of airspace above the CPU (and probably a heatsink as well), and a relatively unobstructed airflow from front to back of the PCB. (Some people do side to side, but usually not more than once.) This is fine for one or two CPUs, but it makes the CPU and its cooling space effectively about two inches tall. Also, you cannot have one CPU behind another in the airflow direction, because the second will have its air preheated by the first. For supercomput
  • by nick_davison ( 217681 ) on Thursday March 20, 2003 @06:01PM (#5560120)
    For those of us who've grown used to making "Woo, imagine a Beowulf cluster of them!" jokes yet have no clue what a Beowulf cluster actually is, the definition, history and so on are available at:

    NASA's Beowulf site [nasa.gov]

    In brief overview:
    In the summer of 1994 Thomas Sterling and Don Becker, working at CESDIS under the sponsorship of the ESS project, built a cluster computer consisting of 16 DX4 processors connected by channel bonded Ethernet. They called their machine Beowulf. The machine was an instant success and their idea of providing COTS (Commodity off the shelf) base systems to satisfy specific computational requirements quickly spread through NASA and into the academic and research communities. The development effort for this first machine quickly grew into what we now call the Beowulf Project. Some of the major accomplishments of the Beowulf Project will be chronicled below, but a non-technical measure of success is the observation that researchers within the High Performance Computer community are now referring to such machines as "Beowulf Class Cluster Computers." That is, Beowulf clusters are now recognized as a genre within the HPC community.
  • by MoTec ( 23112 ) on Thursday March 20, 2003 @06:01PM (#5560128)
    Perhaps they should look into using one of these for their webserver.
  • by Anonymous Coward
    The graph with live temperature readings is pretty neat.

    Even more impressive when slashdotted.

  • by ihatewinXP ( 638000 ) on Thursday March 20, 2003 @06:03PM (#5560145)
    Yeah, I used to have a Scirocco, water cooled and everything. A great Volkswagen, but damn what a parts hog. With a 2.8 liter engine and a 5-speed manual it was a blast to drive.

    Oh, Cirocco? Not Scirocco? Whoops. But why on earth anyone would dig up that dead, convoluted name is beyond me. A few VW enthusiasts might always remember you, but I think you're just alienating your audience, naming your company after trade winds.... Maybe iCirocco? Nah.
  • That they don't use their products on their own webservers...

    The /. effect nabs another victim!
  • by sakusha ( 441986 ) on Thursday March 20, 2003 @06:12PM (#5560216)
    This is a stupid idea. They say that air conditioning is inefficient, but they could have easily done it efficiently with ductwork.
    I've worked with quick-couplings on megawatt lasers, and I can just give em one tip: couplings fail more often than computers. Just wait til they spring a leak because some idiot forgets to twist the ring properly, and he floods the whole rack.
  • by SandSpider ( 60727 ) on Thursday March 20, 2003 @06:15PM (#5560237) Homepage Journal
    You know what I'm talking about. Don't do it.
  • I wouldn't mind having one of those to get my juices flowing, if you know what I mean. Wink Wink, Nudge Nudge.
  • by Dave21212 ( 256924 ) <dav@spamcop.net> on Thursday March 20, 2003 @07:12PM (#5560762) Homepage Journal

    ...that "Live Liquid Cooled Rack" was some sort of wet T-shirt contest for geeks ?

    Seriously though, match this with the IBM Ice-Cube storage cluster [theregister.co.uk] and you really would have one cool machine (ducking).

    "In a few years, one storage administrator should be able to manage a petabyte of storage, which is 100 times more than is typical today." - IBM Almaden Research Center [ibm.com]
  • I understand that mounting multiple machines in this fashion would necessitate a different cooling solution, but I still cannot understand why liquid cooling would be the proper one. I would think a properly designed air-flow cabinet would be the cheapest and most easily maintainable solution.

    I am thinking something like a front-to-back fan design to blow the hot air out of the 1U cases (and I know those cases are pretty tight and cramped, so intelligent routing of cables, or custom device interconnects wou

  • Hey, the temperature is increasing on the cluster, probably as more people are checking out the live temperature monitor. Also, it seems to be loading slower.

    You can literally watch the slashdot effect on a server this way.

  • Why not just install rackmounting rails on the inside of a refrigerator? That's what I would do if I were stupid.
  • It already is one; a Beowulf cluster would be just the same but bigger. Move along.
  • by Smallpond ( 221300 ) on Thursday March 20, 2003 @07:49PM (#5561088) Homepage Journal

    Quite apart from heat capacity (water wins there too, especially per unit volume), the thermal conductivity of water is about 25 times that of air. Also, plumbing lets me move the water to exactly where I want the cooling to take place without heating it along the way.

    IBM mainframes (ECL-based) used water-cooled plates for the CPU, and IBM spent a lot of design effort on quick-connect couplings that don't leak. I just wish they had transferred some of that knowledge to the Sears washing machine group.
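    To put rough numbers on that, here's a quick per-litre comparison using textbook property values (not figures from the article):

    # Heat carried per litre of coolant per degree of temperature rise.
    rho_water, cp_water = 998.0, 4186.0   # kg/m^3, J/(kg*K)
    rho_air,   cp_air   = 1.2,   1005.0

    per_litre_water = rho_water * cp_water / 1000.0   # J/(L*K)
    per_litre_air   = rho_air   * cp_air   / 1000.0

    print("water: %.0f J/(L*K)  air: %.2f J/(L*K)  ratio about %d:1"
          % (per_litre_water, per_litre_air, per_litre_water / per_litre_air))

    Which is why a thin pipe to each heatsink can do the job of a serious volume of moving air.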

  • Imagine a Beowulf cluster of... damnit.
  • I just took a quick look at the site (at 7:09pm EST one of the machines, clinux7, showed at 55C - wonder what happened) and it only shows 9 machines. Apart from that, it's a very neat idea if you consider the efficiency of water vs air cooling. Why chill a large room when all that you need to cool is a small chip? The trouble is one chiller and many small areas; the solution is to have some way of disconnecting an area (machine) if you have to do some kind of repair. I think that I would have used taps, and out all
  • 50 watts isn't much at all (yeah, yeah, I know, it's been said), but I was just thinking. With my current setup my CPU outputs around 172 watts, and with a 120 CFM fan on a heater core (that's the thing in your car that delivers heat) I get around 40-45C with an ambient of 25. Do they use one enormous radiator or a few little ones? Because their temps are pathetic (highest 43C), not to mention that all of those have peaked at 55C - 80C within the last HOUR! (surprised some of those are still working)

    guess they know not to put gel
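    For what it's worth, the quoted figures can be reduced to an effective thermal resistance, heatsink to coolant, in C per watt (a rough sketch; the Cirocco numbers are peak readings off the live graph, and the comparison ignores the difference between ambient air and water inlet temperature):

    # Effective thermal resistance = (device temp - coolant temp) / power.
    my_cpu_w, my_cpu_c, my_ambient_c = 172.0, 45.0, 25.0      # parent's heater-core rig
    node_cpu_w, node_cpu_c, water_in_c = 50.0, 43.0, 26.5     # Cirocco graph, approximate

    print("parent's heater core: %.2f C/W" % ((my_cpu_c - my_ambient_c) / my_cpu_w))
    print("Cirocco node:         %.2f C/W" % ((node_cpu_c - water_in_c) / node_cpu_w))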

  • Many good reasons to use liquid cooling. Firstly, it's *very* efficient and it allows very high volumetric density. Secondly, there are times when air cooling is a Truly Bad Idea, like on boats, ships, and submarines, but also even "ground-based" transportation applications. It's pretty clear that blowing salt air over a circuit board (even with conformal coating) is a Bad Idea(tm), but it's also true in cars, trucks (aka "lorries"), and things like earthmovers. And in large systems, the efficiency part
  • In the IBM mainframe world, everybody, including IBM, was glad to get rid of liquid cooling. By the early 1990s, all IBM mainframes were air-cooled. This seems a step backwards.

    If cooling is a real problem, the usual solution in dense avionics racks is engineered airflow. All heat sources are measured, and small ducts and diverters are sized and built to deliver air to the key spots, while not wasting it on stuff that doesn't need it. If you're building rackmount servers as a product, it's worth the t

  • CPU's: 37-44, Ambient: 24, water in: 26.5, water out: 26...

    Isn't "cooling" supposed to create more than an accidental 0.5 degree temperature difference?
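    Not necessarily: with enough flow, a big heat load only warms the water a little. A quick sketch (the 9 nodes and the ~50 W per CPU come from the comments and the site above; the rest is just Q = m*c*dT):

    # Water flow implied by a 0.5 C rise across the whole rack.
    heat_w = 9 * 50.0      # ~9 nodes at roughly 50 W each
    delta_t_c = 0.5        # water out minus water in
    cp_water = 4186.0      # J/(kg*K)

    flow_kg_s = heat_w / (cp_water * delta_t_c)
    print("implied flow: %.2f kg/s (about %.0f litres per minute)"
          % (flow_kg_s, flow_kg_s * 60))

    So a half-degree rise at a dozen-odd litres a minute is consistent with the rack's whole heat output, not a sign that nothing is happening (the "out" reading sitting below the "in" reading is presumably sensor noise).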
  • I don't own a Duron, let alone many Durons that would necessitate an entire rack.
