
Data Center Designers In High Demand

Posted by timothy
from the blinky-blue-is-the-new-dull-amber dept.
Hugh Pickens writes "For years, data center designers have toiled in obscurity in the engine rooms of the digital economy, amid the racks of servers and storage devices that power everything from online videos to corporate e-mail systems. But now people with the skills to design, build and run a data center that does not endanger the power grid are suddenly in demand. 'The data center energy problem is growing fast, and it has an economic importance that far outweighs the electricity use,' said Jonathan G. Koomey of Stanford University. 'So that explains why these data center people, who haven't gotten a lot of glory in their careers, are in the spotlight now.' The pace of the data center build-up is the result of the surging use of servers, which in the United States rose to 11.8 million in 2007, from 2.6 million a decade earlier. 'For years and years, the attitude was just buy it, install it and don't worry about it,' says Vernon Turner, an analyst for IDC. 'That led to all sorts of inefficiencies. Now, we're paying for that behavior.'" On a related note, an anonymous reader contributes this link to an interesting look at how a data center gets built.
  • Green IT (Score:4, Informative)

    by mattwarden (699984) on Tuesday June 17, 2008 @10:14AM (#23822813) Homepage
    There is a growing area of interest in so-called "Green IT" (mostly due to inevitable regulations), and the first area being looked at is data center organization. It's always the first stat a consulting firm throws out, because it's relatively easy to show significant cost savings in such an environment (just by reorganizing the appliances to distribute heat in a different manner).
  • by zappepcs (820751) on Tuesday June 17, 2008 @10:23AM (#23822933) Journal
    From TFA:

    Mr. Patel is overseeing H.P.'s programs in energy-efficient data centers and technology. The research includes advanced projects like trying to replace copper wiring in server computers with laser beams. But like other experts in the field, Mr. Patel says that data centers can be made 30 percent to 50 percent more efficient by applying current technology.
    At least Mr. Patel is doing the expected. He and others are applying current technology the way it was meant to be applied. The article did not cover the wide array of companies that are addressing this problem. Data center efficiency is all about applying the technology correctly. What was not covered explicitly in the "also linked" article is how one company is building data center 'cells' to minimize cooling costs and create efficient compartmentalized units inside a huge warehouse.

    Those of you who have been in data centers have seen forced-air cooling that is not used correctly: cabinets not over vent tiles, vent tiles in the middle of the floor, cabinets over air vent tiles but with a bottom in the cabinet so no air flows.

    When equipment is nearing end of life and hardly being used, it sits there and turns electricity into heat while doing nothing. There is often a grand mix of cabinet types that do not all make the best use of the cooling system, along with undersized cooling systems; very dense blade-style cabinets replacing less dense ones unbalance the heat/cooling process in the whole data center. Not to mention what doing so does to the backup power system when it's needed.

    There are hundreds of 'mistakes' made in data centers all over the country. Correcting them and pushing the efficiency of the data center is a big job that not many people were interested in paying for in years gone by.

    If you are interested in what you can do for your small data center, try looking at what APC does, or any cabinet manufacturer. They have lots of glossy marketing materials and websites and stuff. There is plenty of information available. Here's a first link for you http://www.datacenterknowledge.com/archives/apc-index.html [datacenterknowledge.com]
  • by Sobrique (543255) on Tuesday June 17, 2008 @10:53AM (#23823313) Homepage
    It's civil engineering, intersecting with 'real world IT'. Off the top of my head:
    • Power - redundancy and resiliency, as much as 'just having enough'
    • Cooling - air conditioning is a BIG deal for a data centre - you need good air flow, and it probably doubles your electric bill.
    • Specialist equipment - datacentres are _mostly_ modular, based around 19" racks. But there are exceptions, such as stuff that is 'multi-rack', like tape silos.
    • Equipment accessibility - you'll need to add and remove servers, and possibly some really quite big and scary bits of big iron - IIRC a Symmetrix is 1.8 tonnes. You'll need a way to get that into a datacentre which doesn't involve '10 big blokes' - and the spacing of your racks might not help.
    • Putting new stuff in - a rack is 42U high. Anything right at the top of that rack is going to require overhead lifting.
    • Cabling. Servers use a lot of cables. Some are for power, some are for networking, some are for serial terminals. You've got a mix of power cables, copper cables and fiber cables. They need to fit, they need to be possible to manipulate on the fly, and they must not break the fibers when you lay them. You also need to be aware that a massive bundle of copper cables is not perfectly shielded, so you'll get crosstalk and interference. And every machine in your datacentre will have 4 or more cables running into it, probably from different sources, so you need to 'deal' with that.
    • Operator access - if that server over there blows up, how do I get on the console to fix it? And if I am on the console to fix it, how do you ensure I'm not twiddling that red button over there that I shouldn't be?
    • Remote/DR facilities - most datacentres have some concept of disaster planning - things as simple as 'farmer Joe dug up the cable to the ISP' all the way to 'plane flew into primary data centre'. These things are relatively cheap and easy to deal with on day one, and utter nightmares to retrofit onto a live data centre.
    • Expansion - power needs change, space needs change, technology changes and ... well, demand for servers increases steadily. It's something to be considered that you will, sooner or later, run out of space, or have to swap out assets.
    That's what springs to mind off the top of my head. There are probably a few more things. So yes, civil engineering, but with a smattering of IT constraints and difficulties.
  • by peragrin (659227) on Tuesday June 17, 2008 @10:53AM (#23823315)
    Because there hasn't been a large power plant built in the USA in 20-odd years, with the last one coming online 12 years ago.

    We are building high-power devices on a power grid that was designed 20 years ago, before such concepts were even thought of.

    Environmentalists won't let new plants be built, and solar and wind aren't yet generating enough to even begin to offset the demand.

    It isn't distribution, it is availability. By 2015 I expect rolling blackouts during hot summers, simply to keep the air conditioners going.
  • by j79zlr (930600) on Tuesday June 17, 2008 @11:53AM (#23824197) Homepage
    I am an HVAC engineer and design some data centers. The power usage on some new densely populated centers can range up to 6,000 watts per square foot. For perspective, the average office building is around 10-12 watts per square foot.
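Since essentially every watt delivered to IT equipment ends up as heat, the cooling load implied by a given power density can be estimated directly. A rough sketch in Python (the densities and floor area below are illustrative round numbers, not figures from the parent's projects):

```python
# Back-of-the-envelope cooling load from power density.
# All inputs are illustrative assumptions, not measured figures.

WATTS_PER_TON = 3517  # 1 ton of refrigeration = 12,000 BTU/hr ~ 3,517 W

def cooling_tons(watts_per_sqft: float, area_sqft: float) -> float:
    """Tons of cooling needed, assuming all IT power becomes heat."""
    return watts_per_sqft * area_sqft / WATTS_PER_TON

office = cooling_tons(12, 10_000)      # typical office floor
data_hall = cooling_tons(200, 10_000)  # moderately dense data hall

print(f"office: {office:.1f} tons, data hall: {data_hall:.1f} tons")
```

Even at a modest 200 W/sq ft, the same floor area needs well over ten times the cooling plant of an office, which is why the HVAC design dominates the budget.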
  • by postbigbang (761081) on Tuesday June 17, 2008 @12:18PM (#23824591)
    Blade servers do end up sharing power supplies, and possibly switches and other gear, for efficiency. But the CPUs burn and the disks turn. Density means that a single rack, when loaded up, consumes voracious amounts of power and requisite cooling. It's a great idea in a lot of ways, but data centers weren't designed for either the power draw or the chilling needs, let alone the weight. Add in the fact that denser instances mean more can die in a single chassis, and blades pose questions that older data centers were just not designed for.
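The parent's density point can be put in rough numbers. A minimal sketch (all wattages are assumed round figures, not vendor specs) of how a blade rack's draw compares with a legacy rack of 1U servers:

```python
# Compare a legacy 1U-server rack to a blade rack.
# Wattages are illustrative assumptions, not measured vendor figures.

def rack_watts(units: int, watts_each: float, shared_overhead: float = 0.0) -> float:
    """Total draw for one rack: per-device load plus shared supplies/switches."""
    return units * watts_each + shared_overhead

legacy = rack_watts(20, 250)                       # 20 older 1U servers
blades = rack_watts(64, 300, shared_overhead=800)  # 4 chassis x 16 blades + shared gear

print(f"legacy rack: {legacy/1000:.1f} kW, blade rack: {blades/1000:.1f} kW")
# -> legacy rack: 5.0 kW, blade rack: 20.0 kW
```

A 4x jump in per-rack draw means 4x the heat in the same footprint, plus the matching UPS and generator capacity when the room fails over to backup power.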
  • Re:It's the NIMBYs. (Score:3, Informative)

    by Kohath (38547) on Tuesday June 17, 2008 @01:37PM (#23826277)
    Not in this case. It's the Big Stone 2 plant in South Dakota. The locals want it. The environmental elites that live hundreds of miles away are trying to kill it.
  • by Gilmoure (18428) on Tuesday June 17, 2008 @02:58PM (#23827989) Journal
    A new small college data center (in a new library) had some of its design features changed from what ITS requested. The two big ones were 1. getting rid of the raised floor and putting in carpeting, and 2. putting the fire sprinkler and building alarm controls behind the locked server room door. There was a bit of a political struggle in the college, with the head of the library asserting his authority over IT, since they were in his building. He decided on the changes as a way of saving money and freeing up 'unneeded space' by not having a separate building controls room. Small school politics are the worst.
