Raised Flooring Obsolete or Not?

mstansberry writes "In part three of a series on the price of power in the data center, experts debate the merits of raised flooring. It's been around for years, but the original raised floors weren't designed to handle the air flow people are trying to get out of them today. Some say it isn't practical to expect air to make several ninety-degree turns and actually get to where it's supposed to go. Is cooling with raised floors the most efficient option?"
  • Short Article. (Score:4, Interesting)

    by darkmeridian ( 119044 ) <william.chuangNO@SPAMgmail.com> on Thursday November 03, 2005 @04:46PM (#13944538) Homepage
    Says that raised floors may be inefficient if they get blocked. Then says alternatives are expensive. Direct AC where you need it, the article says.

    Why would raised floors be bad if you used them properly?
  • by dextromulous ( 627459 ) on Thursday November 03, 2005 @04:50PM (#13944575) Homepage
    If we get rid of the raised floors, how am I supposed to impress people with my knowledge of zinc whiskers? [accessfloors.com.au]
  • by Seltsam ( 530662 ) on Thursday November 03, 2005 @04:50PM (#13944576)
    I interned at ARL inside of Aberdeen Proving Grounds this past summer and when touring the supercomputer room (more like cluster room these days), the guide said they used one of the computers in the room to simulate the airflow in that room so they could align the systems for better cooling. How geeky is that!
  • Re:Turns? (Score:5, Interesting)

    by AKAImBatman ( 238306 ) * <akaimbatman@gmaYEATSil.com minus poet> on Thursday November 03, 2005 @04:52PM (#13944598) Homepage Journal
    Indeed. It's been years since I've seen a raised floor. As far as I know, most new datacenters use racks and overhead wire guides instead. The reason for this is obviously not the air flow. The raised floor made sense when you had only a few big machines that ran an ungodly number of cables to various points in the building. (At a whopping 19.2K, I'll have you know!) Using a raised floor allowed you to simply walk *over* the cabling while still allowing you to yank some tiles for easy troubleshooting.

    (Great way to keep your boss at bay, too. "Don't come in here! We've got tiles up and you may fall in a hole! thenthegruewilleastyouandnoonewillnoticebwhahaha")

    With computers being designed as they are now, the raised floor no longer makes sense. For one, all your plugs tend to go to the same place. i.e. Your power cords go to the power mains in one direction, your network cables go to the switch (and ultimately the patch panel) in another, and your KVM console is built into the rack itself. With the number of computers being managed, you'd be spending all day pulling up floor tiling and crawling around in tight spaces trying to find the right cable! With guided cables, you simply unhook the cable and drag it out. (Or for new cables, you simply loop them through the guides.)

    So in short, times change and so do the datacenters. :-)
  • Re:Turns? (Score:5, Interesting)

    by convolvatron ( 176505 ) on Thursday November 03, 2005 @04:52PM (#13944600)
    you're right in some sense, the pressure underneath the plenum will force air through no matter what. there are however two problems. the first is that turbulence underneath the floor can turn the directed kinetic energy of the air into heat... this can be a real drag. in circumstances where you need to move a lot of air, the channel may not even be sufficiently wide.

    more importantly, the air ends up coming out where the resistance is less, leading to uneven distribution of air. if you're grossly overbudget and just relying on the ambient temperature of the machine room, this isn't a problem. but when you get close to the edge it can totally push you over.
  • by G4from128k ( 686170 ) on Thursday November 03, 2005 @04:54PM (#13944629)
    Someone needs to create an air interconnect standard that lets server room designers snap cold air supplies onto a standard "air-port" on the box or blade. The port standard would include several sizes to accommodate different airflow needs and distribution from large supply ports to a rack of small ports on servers. A Lego-like portfolio of snap-together port connections, tees, joints, ducts, plenums, etc. would let an IT HVAC guy quickly distribute cold air from a floor, wall or ceiling air supply to a rack of servers.
  • No Raised Floors? (Score:3, Interesting)

    by thebdj ( 768618 ) on Thursday November 03, 2005 @04:55PM (#13944633) Journal
    We had an issue where I once worked because we had so many servers that the general server room that many different groups used was no longer adequate for our needs, since we were outgrowing our allotted space. Now instead of building us a new server room with the appropriate cooling (which presumably would have included raised flooring), we got a closet in a new building. This is obviously not much fun for the poor people who worked outside the closet, because the servers made a good deal of noise and even with the door closed were quite distracting.

    Now, we had to get building systems to maximize the air flow from the AC vent in the room to ensure maximum cooling and the temperature on the thermostat was set to the minimum (about 65 F I believe). One day, while trying to do some routine upgrades to the server, I noticed things not going so well. So I logged off the remote connection and made my way to the server room.

    What do I find when I get there? The room temperature is approximately 95 F (the outside room was a normal 72) and the servers are burning up. I check the system logs and guess what, it has been like this for nearly 12 hrs (since sometime in the middle of the night). To make this worse our system administrator was at home for vacation around X-Mas, so of course all sorts of hell was busting loose.

    We wound up getting the room down after the people from building systems managed to get us more AC cooling in the room; however, the point is it was never really enough. Even on a good day it was anywhere from 75 F to 80 F in the room, and with nearly a full rack and another one to be moved in there, it was never going to be enough. This is what happens, though, when administrations have apathy when it comes to IT and the needs of the computer systems, particularly servers. Maybe we should bolt servers down and stick them in giant wind tunnels or something...
  • by Iphtashu Fitz ( 263795 ) on Thursday November 03, 2005 @04:55PM (#13944638)
    If something is airtight, putting air in one end will move air out the other end.

    The problem lies with larger datacenter environments. Imagine a room the size of a football field. Along the walls are rows of air conditioners that blow cold air underneath the raised floor. Put a cabinet in the middle of the room and replace the tiles around it with perforated ones and you get a lot of cooling for that cabinet. Now start adding more rows & rows of cabinets along with perforated tiles in front of each of them. Eventually you get to a point where very little cold air makes it to those servers in the middle of the room because it's flowing up through other vents before it can get there. What's the solution? Removing servers in the middle of hotspots & adding more AC? Adding ducting under the floor to direct more air to those hotspots? Not very cheap & effective approaches...
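    A toy illustration of that path-of-least-resistance effect (not from the article; every constant below is a made-up assumption): treat the underfloor static pressure as dropping a little with each row of cabinets it passes, and the flow through each perforated tile as scaling roughly with the square root of the local pressure. The rows farthest from the air conditioners then see noticeably less air.

```python
# Toy model: static pressure under the floor falls off with distance from the
# CRAC units, and each perforated tile passes flow roughly proportional to the
# square root of the pressure beneath it. All constants are illustrative.
import math

crac_pressure_pa = 40.0    # assumed static pressure near the CRAC units (Pa)
loss_per_row_pa = 3.0      # assumed pressure lost per row of cabinets (Pa)
tile_coefficient = 0.05    # assumed tile flow coefficient (m^3/s per sqrt(Pa))

for row in range(1, 11):
    pressure = max(crac_pressure_pa - loss_per_row_pa * row, 0.0)
    tile_flow = tile_coefficient * math.sqrt(pressure)
    print(f"row {row:2d}: {pressure:5.1f} Pa under floor -> "
          f"{tile_flow:.3f} m^3/s through each tile")
```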

  • Not obsolete. (Score:2, Interesting)

    by blastard ( 816262 ) on Thursday November 03, 2005 @05:03PM (#13944720)
    Where I've worked it was primarily for running wires, not cooling. I've also worked in places that have the overhead baskets, and quite frankly, although they are convenient, they are 'tugly. They are great for temporary installations and where stuff gets moved a lot, but I'd rather have my critical wires away from places where they can get fiddled with by bored individuals.

    So, no, I don't think they will be obsolete any time soon. But hey, I'm an old punchcard guy.
  • by mslinux ( 570958 ) on Thursday November 03, 2005 @05:04PM (#13944735)
    "Some say it isn't practical to expect air to make several ninety-degree turns and actually get to where it's supposed to go."

    I wonder how all those ducts throughout America (with tons of 90 degree turns) carry air that heats and cools houses and office buildings every day?
  • by Ed Almos ( 584864 ) on Thursday November 03, 2005 @05:07PM (#13944754)
    I'm in a data center right now with two rack mounted clusters and three IBM Z series machines plus a load of other kit. Without the raised flooring AND the ventilation systems things would get pretty toasty here, but it has to be done right. The clusters are mounted in back to back Compaq network racks which draw air in the front and push it out the back. We therefore have 'cold' aisles where the air is fed in through the raised floor and 'hot' aisles where the hot air is taken away to help heat the rest of the building.

    The only other option would be water cooling but that's viewed by my bosses as supercomputer territory.

    Ed Almos
  • by circusboy ( 580130 ) on Thursday November 03, 2005 @05:08PM (#13944765)
    it can turn on a dime, but also stay on that dime. poor circulation results. trumpets have nice (if tight) curves, and even building ducts can have redirects inside the otherwise rectangular ducts to minimize trapped airflow in corners. for the most part even those corners are curved to help the stream of air.

    most server rooms aren't part of the duct, for example, the one here is large and rectangular, with enormous vents at either end. not very well designed.

    airflow is a very complicated problem, my old employer had at least three AC engineers on full time staff to work out how to keep the tents cold ( I worked for a circus, hence the nick.) the ducting we had to do in many cases was ridiculous.

    why do you think apple engineering used to use a cray to work out the air passage through the old macs. just dropping air-conditioning into a hot room isn't going to do jack if the airflow isn't properly designed and tuned. air, like many things, doesn't like to turn 90 degrees, it needs to be steered.
  • by Anonymous Coward on Thursday November 03, 2005 @05:09PM (#13944773)
    We worked very closely with Liebert ( http://www.liebert.com/ [liebert.com] ) when we recently renovated our data center for a major project. The traditional approach of CRAC (Computer Room AC) units supplying air through a raised floor is no longer viable for the modern data center. CRAC units are now used as supplemental cooling, and primarily for humidity control. When you have 1024 1U, dual processor servers producing 320 kW of heat in 1000 sq ft of space, an 18 inch raised floor (with all kinds of crap under it) is not adequate to supply the volume of air needed to cool that much heat in so small a space.

    We had intended to use the raised floor to supply air, but Liebert's design analysis gave us a clear indication of why that wasn't going to work. We would have needed to generate air velocities in excess of 35 MPH under the floor. There were hotspots in the room where negative pressure was created and the air was actually being sucked into the floor rather than being blown out from it. So we happened to get lucky, as Liebert was literally just rolling its Extreme Density cooling system off the production line. The system uses rack mounted heat exchangers (air to refrigerant), each of which can dissipate 8 - 10 kW of heat, and can be tied to a building's chilled water system or to a compressor that can be mounted outside the building.

    This system is extremely efficient as it puts the cooling at the rack, where it is needed most. It's far more efficient than the floor based system, although we still use the floor units to manage the humidity levels in the room. The Liebert system has been a workhorse. Our racks are producing between 8 - 9 kW under load and we consistently have temperatures between 80 - 95 F in the hot aisle, and a nice 68 - 70 F in the cold aisles. No major failures in two years (two software related things early on; one bad valve in a rack mounted unit).
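    A rough sanity check on the scale of that problem (the temperature rise and the free underfloor area below are assumptions for illustration, not the poster's actual design figures): carrying 320 kW away on air alone takes a very large volume flow, and squeezing it through whatever open area is left under an 18 inch floor produces the kind of velocities described above.

```python
# Back-of-the-envelope airflow for a 320 kW heat load, using standard air
# properties. The temperature rise and free plenum area are assumptions made
# up for illustration, not figures from the parent post.

RHO_AIR = 1.2        # kg/m^3, air density near room temperature
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.9  # m^3/s -> cubic feet per minute

def required_airflow_m3s(heat_w: float, delta_t_k: float) -> float:
    """Volume flow needed to remove heat_w watts with a delta_t_k air temperature rise."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_k)

heat_load_w = 320_000.0   # 1024 servers at roughly 312 W each
delta_t_k = 12.0          # assumed supply-to-return temperature rise

flow = required_airflow_m3s(heat_load_w, delta_t_k)
print(f"Required airflow: {flow:.1f} m^3/s ({flow * M3S_TO_CFM:,.0f} CFM)")

# Assume obstructions leave only ~1.5 m^2 of open cross-section under the floor:
free_area_m2 = 1.5
velocity_ms = flow / free_area_m2
print(f"Bulk underfloor velocity: {velocity_ms:.1f} m/s (~{velocity_ms * 2.237:.0f} mph)")
```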
  • by Flying pig ( 925874 ) on Thursday November 03, 2005 @05:13PM (#13944817)
    This seems to be more about bad rack design than raised floors. It's a basic principle of ducting design that, as the airflow spreads out from the source through different paths, the total cross section of the paths should stay roughly constant. (Yes, I am simplifying and I am sure someone can explain this better and in more detail. Yes, duct length and pressure drop is important. But the basic concept is true. If I want consistent airflow in my system, and the inlet is one square metre, the total of all the outlets should be around one square metre too.)

    Standard racks tend completely to ignore this. They rely on the internal modules handling their own airflow with fans, which is fine if the inlet area to the modules is much less than the size of the duct entering the cabinet. But if the total area of the inlets to the modules is more than the incoming duct area, the modules furthest from the duct (i.e. the ones at the top) will be starved of air. 1U servers are inevitably going to worsen the problem because they create a large number of competing inlets, stratified up the cabinet. Sucking air out at the top will only work if the air flow is so great it creates a significant pressure drop across the servers, which leads to noise problems, is inefficient, and may adversely affect local cooling inside the server. Blades are potentially much better because, with fewer modules in the cabinet, each with similar requirements, it should be easier to design a cabinet-wide ducting system. However, the most logical solution is to go back to designing the entire cabinet as an integrated system - in which case the entire base of the cabinet can be the inlet duct opening, with appropriate internal structures and blade design to fulfil the objectives of keeping consistent flow to each blade rack and across each blade.

    It's the old engineering issue - ad hoc design leads to suboptimal results, and systems need to be considered as a whole. Blades are, depending on how you look at it, a step in the right direction or a return to the way things used to be designed when real computers were loads of tight packed boards full of ECL and proper cooling design of the cabinet was essential if the thing was to work at all.
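    A minimal sketch of that area-matching check (the duct size, server count and inlet areas are invented for illustration): compare the total free inlet area of the modules in a cabinet against the area of the duct feeding the cabinet. If the ratio is well above one, the supply is the bottleneck and the modules farthest from the duct go hungry.

```python
# Crude area-matching check for a cabinet fed from a single duct opening.
# The duct size, server count and per-server inlet area are assumed values.

feed_duct_area_m2 = 0.25         # assumed opening in the cabinet base
server_count = 40                # 1U servers stacked in the rack
inlet_area_per_server_m2 = 0.03  # assumed free inlet area per 1U front panel

total_inlet_area = server_count * inlet_area_per_server_m2
ratio = total_inlet_area / feed_duct_area_m2

print(f"Total module inlet area: {total_inlet_area:.2f} m^2")
print(f"Feed duct area:          {feed_duct_area_m2:.2f} m^2")
print(f"Inlet-to-duct ratio:     {ratio:.1f}x")
if ratio > 1.0:
    print("Supply duct is the bottleneck; modules farthest from the duct "
          "will be starved of air.")
```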

  • Re:Turns? (Score:5, Interesting)

    by LaCosaNostradamus ( 630659 ) <[moc.liam] [ta] [sumadartsoNasoCaL]> on Thursday November 03, 2005 @05:14PM (#13944838) Journal
    Obviously you realize that as the equipment contents of datacenters change, it doesn't make sense to change the room structure all that much? Hence many older datacenters have retained their raised floors. Of course, their air conditioners were also designed for raised floors.

    I don't know where you've worked, but every datacenter I've seen has had a raised floor, and all of them still had at least one mainframe structure still in use ... hence, they still routed cables under the floor for them, by design.
  • Re:Turns? (Score:4, Interesting)

    by nettdata ( 88196 ) on Thursday November 03, 2005 @05:17PM (#13944872) Homepage
    Actually, with the way computers are being designed now, raised flooring and proper cooling is even MORE of an issue than it was.

    With the advent of blades, the heat generated per rack space is now typically MUCH higher than it was back in the day. If anything, the raised flooring should be redesigned, as it can't cope with the airflow that is needed for higher density server rooms.

    You'll find that a number of racks are being redesigned with built-in plenums for cooling... a cold feed on the bottom, and a hot return at the top, with individual ducts for various levels of the rack.

    There are even liquid-cooled racks available for the BIG jobs.

    I think that it's not so much that we're going to get rid of raised floors, but rather that we'll redesign their materials and layout to be more effective for today's needs.

  • Re:sub-floor (Score:3, Interesting)

    by Clemensa ( 800698 ) <Aranell@gmai[ ]om ['l.c' in gap]> on Thursday November 03, 2005 @05:24PM (#13944941)
    Yes...the bodies of mice. In all seriousness, every so often we get the most awful smell in our server room. That's when we call Rentokil, and they inevitably find the bodies of dead mice in our raised flooring in our server room. Bear in mind it's a couple of floors up....when people said to me "you are never more than 10 foot away from a rat when you are in London" I took it to mean horizontal distance, and not *actual* distance (I didn't imagine that many rats lived on every floor of buildings...)
  • by ka9dgx ( 72702 ) on Thursday November 03, 2005 @05:26PM (#13944956) Homepage Journal
    The problem is that power density has gone through the roof. It used to be that a rack of computers was between 2 kW and 5 kW. Modern blade servers easily push that up to 25 kW per rack. You'd have to have 10 feet or more of space below the floor to accomplish cooling with an external source, thus the move to in-rack cooling systems, and the new hot aisle / cold aisle systems.

    Wiring is now usually ABOVE the equipment, and with 10Gigabit copper, you can't just put all of the cables in a bundle any more, you have to be very careful.

    It's a brave new datacenter world. You need some serious engineering these days, guessing just isn't going to do it. Hire the pros, and save your career.

    --Mike--

  • HVAC concerns (Score:3, Interesting)

    by Elfich47 ( 703900 ) on Thursday November 03, 2005 @05:37PM (#13945083)
    Heating Ventilation and Air Conditioning (HVAC) design is based upon how air moves through a given pipe or duct.

    When you are designing for a space (such as a room) you design for the shortest amount of ductwork for the greatest amount of distribution. Look up in the ceiling of an office complex sometime and count the number of supply and return diffusers that work to keep your air in reasonable shape. All of the ducts that supply this air are smooth, straight and designed for a minimal amount of losses.

    All air flow is predicated on two important points within a given pipe (splits and branching within the ductwork are not covered here): pressure loss within the pipe and how much power you have to move the air. The higher the pressure losses, the more power you need to move the same amount of air. Every corner, turn, rough section and extra length of pipe contributes to the amount of power needed to push the air through at the rate you need.

    Where am I going with all of this? Well, under floor/raised floor systems do not have a lot of space under them, and it is assumed that the entire space under the floor is flexible and can be used (i.e. no impediments or blockages). Ductwork is immobile and does not appreciate being banged around. Most big servers need immense amounts of cooling. A 10"x10" duct is good for roughly 200 CFM of air. That much air is good for 2-3 people (this is rough, since I do not have my HVAC cookbook in front of me... yes, that is what it is called). Servers need large volumes of air, and if that ductwork is put under the floor, pray you don't need any cables in that area of the room. Before you ask "why don't we just pump the air into the space under the floor and it will get there?": air is like water, it leaves through the easiest path possible. Place a glass on the table and pour water on the table and see if any of the water ends up in the glass. Good chance it ends up spread out on the floor where it was easiest to leak out. Unless air is specifically ducted to exactly where you want it, it will go anywhere it can (always to the easiest exit).

    Ductwork is a very space consuming item. Main trunks for two and three story buildings can be on the order of four to five feet wide and three to four feet high. A server room by itself can require the same amount of cooling as the rest of the floor it is on (ignoring wet bulb/dry bulb issues, humidity generation and filtering; we are just talking about the number of BTUs generated). A good sized server room could easily require a separate trunk line and return to prevent the spreading of heated air throughout the building (some places do actually duct the warm air into the rest of the building during the winter). Allowing this air to return into the common plenum return will place an additional load on the rest of the building's AC system. Place the server room on a separate HVAC system to prevent overloading the rest of the building's AC system (which is designed on a per square foot basis, assuming a given number of people/computers/lights per square foot if the floor plan does not include a desk plan layout).
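    To make the pressure-loss point concrete (a minimal sketch; the base flow, base loss and fan efficiency below are assumed values, not HVAC design data): for a fixed duct or plenum, friction and turning losses grow roughly with the square of the flow rate, so the fan power needed grows roughly with the cube. Pushing twice the air through the same underfloor path costs around eight times the power.

```python
# Fan-law sketch: for a fixed duct/plenum geometry, pressure loss scales
# roughly with the square of the flow and fan power with the cube.
# All the base numbers are illustrative assumptions.

def fan_power_w(flow_m3s: float, pressure_loss_pa: float, efficiency: float = 0.6) -> float:
    """Shaft power needed to move flow_m3s against pressure_loss_pa of losses."""
    return flow_m3s * pressure_loss_pa / efficiency

base_flow_m3s = 1.0    # assumed flow through a given underfloor path
base_loss_pa = 150.0   # assumed friction/turning losses at that flow

for multiple in (1, 2, 3):
    flow = base_flow_m3s * multiple
    loss = base_loss_pa * multiple ** 2      # square law for the same geometry
    power = fan_power_w(flow, loss)
    print(f"{multiple}x flow -> {loss:6.0f} Pa of losses, {power:7.0f} W of fan power")
```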

  • by twiddlingbits ( 707452 ) on Thursday November 03, 2005 @06:02PM (#13945330)
    I wouldn't use water, but something where nothing bad happens if a leak occurs. Anti-freeze is pretty much inert and transfers heat well. IIRC, some of the Cray supercomputers were water cooled. So I guess that technology belongs to SGI (for now) since they bought Cray.
  • by cvd6262 ( 180823 ) on Thursday November 03, 2005 @06:13PM (#13945439)
    Another big reason for raised floors is to handle wiring.

    or plumbing. I'm serious. (And a bit OT)

    When I was at IBM's Cottle Rd. facility, now (mostly) part of Hitachi, they had just finished rebuilding their main magnetoresistive head cleanroom (Taurus). They took the idea from the server techs, and dug out eight feet from under the existing cleanroom (without tearing down the building) and put in a false floor.

    All of the chemicals were stored in tanks under the floor. Pipes ran vertically, and most spills (unless it was something noxious) wouldn't shut down much of the line. It was a big risk but, if what I hear is correct, people still say it's the best idea they had in a while.
  • by avronius ( 689343 ) * on Thursday November 03, 2005 @06:30PM (#13945581) Homepage Journal
    There are a number of slashdot visitors that do actually care about server room issues. The fact that you don't understand the need does not negate its importance.

    Large organizations rely on server rooms for their computing environment. Having a cobbled environment where the file server is on the 3rd floor, and the application server is in the janitor's closet, etc. is a recipe for disaster. Troubleshooting connectivity issues (among others) can end up costing more than the apparent simplicity of such a design.

    Understanding ways to better cool the space that our servers occupy is important. And being able to do so in a cost effective manner is also important. The organization that I work in has one in-house server room (containing 60 racks of servers), and one 'co-located' server room (containing 72 racks of servers). Heat and power are the two killers. If we experience a 50% power loss (assume that one power grid is knocked out), do we have enough power to run AND cool the server room? If not, what percentage of my gear do I need to shut down in order to prevent overheating, without impacting critical business systems (like payroll)?

    If we can find a cheaper / better / more cost effective method for cooling that utilizes less power, or find a way to use the cooling systems that we have in a more efficient manner, is that not worth an article on slashdot?

    IMHO, This is a valid topic.
  • Hell no (Score:5, Interesting)

    by Spazmania ( 174582 ) on Thursday November 03, 2005 @06:33PM (#13945610) Homepage
    Raised floor cooling was designed back when the computer room held mainframe and telephone switch equipment with vertical boards in 5-7 foot tall cabinets. The tile was holed or removed directly under each cabinet, so cool air flowed up, past the boards and out through the top of the cabinet. It then wandered its way across the ceiling to the air conditioners' intakes and the cycle repeated.

    Telecom switching equipment still uses vertically mounted boards for the most part and still expects to intake air from the bottom and exhaust it out the top. Have any AT&T/Lucent/Avaya equipment in your computer room? Go look.

    Now look at your rack mount computer case. Doesn't matter which one. Does it suck air in at the bottom and exhaust it out at the top? No. No, it doesn't. Most suck air in the front and exhaust it out the back. Some suck it in one side and exhaust it out the other. The bottom is a solid slab of metal which obstructs 100% of any airflow directed at it.

    Gee, how's that going to work?

    Well, the answer is: with some hacks. Now the holed tiles are in front of the cabinet instead of under it. But wait, that basically defeats the purpose of using the raised floor to move air in the first place. Worse, that mild draft of cold air competes with the rampaging hot air blown out of the next row of cabinets. So, for the most part your machines get to suck someone else's hot air!

    So what's the solution? A hot aisle / cold aisle approach. Duct cold air overhead to the even-numbered aisles. Have the front of the machines face that cold aisle in the cabinets to either side. Duct the hot air back from the odd-numbered aisles to the air conditioners. Doesn't matter that the hot aisles are 10-15 degrees hotter than the cold aisles because air from the hot aisles doesn't enter the machines.

  • Re:Where else? (Score:3, Interesting)

    by InvalidError ( 771317 ) on Thursday November 03, 2005 @07:15PM (#13945991)
    One of the new trends is side-to-side flow. Draw cooled air from the raised floor on the left side and exhaust hot air through the suspended ceiling on the right. To reduce interference, route power through the floor and data cables through the ceiling or vice-versa. This way, no system has to take any other's heat.

    Some datacenters have very odd cooling systems... some even distribute cold air from the top and collect hot air at the floor, quite a questionable choice.
  • by Anonymous Coward on Thursday November 03, 2005 @07:26PM (#13946089)
    Actually, the design work was done for free by Liebert and verified by the local consulting engineering firm that actually did the design for the renovations. If the Extreme Density System had not been available, the project would have been scrapped and millions of dollars lost. It's not like we were coerced to buy the system; there was no other option for the project's success, none, zip, zero, nada. Unfortunately, your sarcasm has no basis in reality for this case since you were not there and don't have all the facts.
  • Re:No (Score:4, Interesting)

    by Keruo ( 771880 ) on Thursday November 03, 2005 @07:56PM (#13946317)
    Well, leave out raised floors and install servers on floor level then.
    But remember, this is what happens when shit hits the fan [novell.com] and servers are on floor level.
  • by pvera ( 250260 ) <pedro.vera@gmail.com> on Thursday November 03, 2005 @07:59PM (#13946345) Homepage Journal
    I spent the first 8 years of my professional life stuck working in NOCs with standard raised flooring; the cooling was just one of the many things it was needed for.

    Examples:

    Wiring: Not everyone likes to use overhead ladders to carry cables around. In the Army we had less than 50% of our wiring overhead, the rest was routed thru channels underneath the raised flooring.

    HVAC Spill protection: Many of our NOCs had huge AC units above the tile level, and these things could leak at any moment. With raised flooring the water will pool at the bottom instead of run over the tiles and cause an accident. We had water sensors installed, so we knew we had a problem as soon as the first drop hit the floor.

    If the natural airflow patterns are not enough for a specific piece of equipment, it does not take a lot to build ducts to guarantee cold air delivery underneath a specific rack unit.

    The one thing I did not like about the raised floors was when some dumbass moron (who did NOT work within a NOC) decided to replace our nice, white, easy to buff tiles with carpeted tiles. 10 years later and I still can't figure out why the hell he would approve that switch, since our NOC with its white tiles looked fricking gorgeous just by running a buffer and a clean mop thru it. The tiles with carpeting were gray, so they darkened our pristine NOC.

    I bet many of the people against raised flooring are landlords that don't want to get stuck with the cost of rebuilding flooring if the new tenant does not need a NOC area. I have been to a NOC in a conventional office suite; they basically crammed all of their racks into what seemed to be a former cubicle island. The air conditioning units were obviously a last-minute addition and it looked like the smallest spill would immediately short the loose power strips on the first row of racks in front of them. Shoddy as hell.
  • by ErikFreitag ( 758879 ) on Thursday November 03, 2005 @08:52PM (#13946699)
    I don't think it is a very good idea to hide your cabling, either power or data. Raised floor just becomes a place to hide things and collect dust, and makes it much harder to make changes. I've seen shallow raised floor which could not be re-seated after it was pulled because of the volume of cable underneath. I've also seen a raised floor environment that became a hazard when the Loma Prieta earthquake popped up every fifth tile or so.

    I believe the idea of hiding cable came from early IBM promotional photos that showed a beautiful sea of white tile with an IBM-logoed monolithic rectangular solid standing there in all of its phallic glory. The purchasers, who were not the operators, came to see this as a natural way to install and manage hardware. In my high school days I saw a Sperry Univac 1107 that was not only mounted on raised floor, but actually had components installed in decorative columns that matched the building deco, kind of like a light switch would be in an office -- the whole room became the computer.

    Cabinets also make little sense. Why make it hard to connect, disconnect, mount or dismount your hardware? The telcos have been using open racks since the beginning of time -- a much more efficient way to handle hardware that changes or must be inspected frequently.

    Power and data should run in separate ladder/tray overhead, where it can be seen and pulled, inspected or added to easily. 20A or 30A power outlets installed in the tray (or overhead duct dropped from the ceiling where electrical codes require) make it easy to attach your cabinet (or better, relay rack) power distribution.
  • by Anonymous Coward on Thursday November 03, 2005 @11:11PM (#13947424)
    Raised flooring works where the cabinet placed over the hole in the floor is required to be slightly pressurised. A lot of older network equipment cabinets, and current closed network equipment cabinets, have no perforations to let air escape, and the airflow moves from bottom to top (it doesn't matter how the switches/routers move air about themselves); the hot air always goes out the top.
    Contrast that with modern servers that move air from front to back through the server chassis; in the HP cabinets you must have the perforated/grill front door on the rack to allow the server to pull air through it. You must use the correct type of cabinet for the servers; it's specified by HP.

    Some big-iron servers / Suns and whatnot require airflow from the bottom of the server (as it is in its own chassis / cabinet), and this is where having a raised / pressurised floor system is required.

  • Re:Turns? (Score:3, Interesting)

    by Hannes Eriksson ( 39021 ) <hannes.acc@umu@se> on Friday November 04, 2005 @04:54AM (#13948607)
    Granted, this is 70mph wind stuff we're talking about, so it likely wouldn't apply in a datacenter environment.

    You've obviously not been in our data center. Raised floor, two rows of racks, air blown up from the floor in front of the racks (every panel immediately in front of the racks), hot-air returns in the ceiling behind the racks (center aisle). There's about a 10 degree difference between the front and back side of the racks, and more than one person has complained about the "Marilyn Monroe" effect on the front side.


    That "Marilyn Monroe" effect is quite nice on rainy days, drying your trousers after the bicycle ride to work, without the risk of getting ugly looking folding marks on them. No ironing textile care! Oh, and did I mention the nice side effect of letting the moisture help the AC keep the room antistatic? The heated airflow between two rows of racks allow for a quicker drying procedure, but that doesn't keep you away from those pesky users as long, does it?
