
Data Center Designers In High Demand 140

Hugh Pickens writes "For years, data center designers have toiled in obscurity in the engine rooms of the digital economy, amid the racks of servers and storage devices that power everything from online videos to corporate e-mail systems, but now people with the skills to design, build and run a data center that does not endanger the power grid are suddenly in demand. 'The data center energy problem is growing fast, and it has an economic importance that far outweighs the electricity use,' said Jonathan G. Koomey of Stanford University. 'So that explains why these data center people, who haven't gotten a lot of glory in their careers, are in the spotlight now.' The pace of the data center build-up is the result of the surging use of servers, which in the United States rose to 11.8 million in 2007, from 2.6 million a decade earlier. 'For years and years, the attitude was just buy it, install it and don't worry about it,' says Vernon Turner, an analyst for IDC. 'That led to all sorts of inefficiencies. Now, we're paying for that behavior.'" On a related note, an anonymous reader contributes this link to an interesting look at how a data center gets built.
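
As a rough illustration of that pace, the server counts quoted above imply roughly 16% compound annual growth in the installed base. A quick sketch in Python (it assumes "a decade earlier" means 1997; the code is illustrative only):

    server_count_1997 = 2.6e6   # US servers a decade earlier, per the figure quoted above
    server_count_2007 = 11.8e6  # US servers in 2007, per the figure quoted above
    years = 10

    cagr = (server_count_2007 / server_count_1997) ** (1 / years) - 1
    print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 16.3% per year
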
  • News at 11 (Score:5, Funny)

    by spikedvodka ( 188722 ) on Tuesday June 17, 2008 @09:00AM (#23822653)
    Qualified Professionals in demand, news at 11
    • Re: (Score:2, Funny)

      by ch-chuck ( 9622 )
      One demerit for error in format: that should be film at 11.
    • Ok, here's how it's going to work out...

      We can send people to college and have them study things like thermodynamics, the flow of air and water in a system, physics, electricity, scale, and perhaps even a little economics. (Things that would be useful for data center design) ...we will call them "ENGINEERS"

      And here's the real kicker: They can APPLY what they have learned in classrooms and labs to actual mechanical and electrical systems in a datacenter!

      Wow, that was rough sailing for a while there.

      Note, ho
      • by homer_s ( 799572 )
        Note, however, that this does not solve the problem of nobody wanting to PAY these "engineers" a real salary to build out their $50 million data-center.

        And what would be a "real salary" and how do you arrive at that number?
  • by Paranatural ( 661514 ) on Tuesday June 17, 2008 @09:00AM (#23822659)
    Get some folding card tables, throw yer servers on there, then get yerself a extension cord and a couple of power strips to give ya enough outlets offa those two plugs in th' wall, and get yerself one of them fans from Walmart ta blow over 'em if yer feelin fancy. Voila. Them college kids think they're so smart, that wasn't hard at all. You can even get a bucket of water in case anything catches fire!
  • So what does a data centre designer actually do?

    Do they constrain the end user to particular hardware, or is it just basic civil engineering?

    I can see that a well planned installation can reduce cooling costs, but if Customer A insists on having his Superdome rather than a more energy efficient alternative, what does the designer do then?

    • by Anonymous Coward on Tuesday June 17, 2008 @09:18AM (#23822879)
      A designer needs to understand how cooling, power and building design affect each other. High density cooling and power management is a different beast. (And disaster management! Redundant power, fire suppression that won't destroy your computers, etc...)

      I think that is the point here: Data centers have become large enough that you don't want to just stuff them into a random office building and hope everything will work out fine. Specialization is valuable in this case.
      • Re: (Score:3, Informative)

        by Gilmoure ( 18428 )
        New small college data center (in the new library) had some of its design features changed from what ITS requested. The two big ones were 1. getting rid of the raised floor and putting in carpeting and 2. putting fire sprinkler and building alarm controls behind the locked server room door. There was a bit of a political struggle in the college, with the head of the library asserting his authority over IT, since they were in his building. He decided on the changes, as a way of saving money and freeing up 'unneeded
        • Re: (Score:2, Interesting)

          by Cramer ( 69040 )
          Oh yes! Carpet in a server room. I wouldn't even put "NSA carpet" in there -- it has conductive filaments to ground out any EMI.

          I had the same several-month-long arguments in planning our new office. It's expensive. We cannot raise the ceiling (the building HVAC systems are in the plenum.) Do we really need 5-ton air handlers? Do we have to have 2 of them? Etc., etc. Well, my 12" floor became a 10" floor -- a compromise to make the ramp 2ft shorter, and 2 Lieberts became one because no one listened to
          • by Gilmoure ( 18428 )
            I think they had an outage when someone (not in IT), with access to server room, decided to plug a laser printer into an orange outlet. Just bad planning being laid over good.

            As for that carpet, a bunch of us showed up on a Saturday and busted our collective asses scraping that shit up. Was glued to slab!
    • by zappepcs ( 820751 ) on Tuesday June 17, 2008 @09:23AM (#23822933) Journal
      From TFA:

      Mr. Patel is overseeing H.P.'s programs in energy-efficient data centers and technology. The research includes advanced projects like trying to replace copper wiring in server computers with laser beams. But like other experts in the field, Mr. Patel says that data centers can be made 30 percent to 50 percent more efficient by applying current technology.
      At least Mr. Patel is doing the expected. He and others are applying the current technology the way that it was meant to be applied. The article did not cover the wide array of companies that are addressing this problem. Data center efficiency is all about applying the technology correctly. What was not covered explicitly in the "also linked" article is how one company is building data center 'cells' in order to minimize cooling costs and create efficient compartmentalized units inside a huge warehouse.

      Those of you who have been in data centers have seen forced air cooling that is not used correctly; cabinets not over vent tiles, vent tiles in the middle of the floor, cabinets over air vent tiles but with a bottom in the cabinet so no air flows.

      When equipment is nearing end of life and hardly being used, it sits there and turns electricity into heat while doing nothing. There is often a grand mix of cabinet types that do not all make the best use of the cooling system, undersized cooling systems, and very dense blade-style cabinets replacing less dense ones, which unbalances the heat/cooling process in the whole data center. Not to mention what that does to the backup power system when it is needed.

      There are hundreds of 'mistakes' made in data centers all over the country. Correcting them and pushing the efficiency of the data center is a big job that not many people were interested in paying for in years gone by.

      If you are interested in what you can do for your small data center, try looking at what APC does, or any cabinet manufacturer. They have lots of glossy marketing materials and websites and stuff. There is plenty of information available. Here's a first link for you http://www.datacenterknowledge.com/archives/apc-index.html [datacenterknowledge.com]
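
      As a rough, hedged illustration of why the airflow mistakes described above matter, here is a back-of-the-envelope estimate using the standard sensible-heat relation for air; the 5 kW cabinet load and 20 °F temperature rise are assumptions for the example, not figures from the comment:

          def required_cfm(heat_load_watts, delta_t_f):
              """Airflow (CFM) needed to carry away a sensible heat load at a given air temperature rise."""
              btu_per_hr = heat_load_watts * 3.412       # watts -> BTU/hr
              return btu_per_hr / (1.08 * delta_t_f)     # sensible heat: Q(BTU/hr) ~ 1.08 * CFM * dT(F)

          print(f"{required_cfm(5000, 20):.0f} CFM")     # ~790 CFM for one 5 kW cabinet

      If the perf tile in front of the cabinet can't actually deliver that much cold air to the intakes, the cabinet ends up recirculating its own exhaust.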
      • by atrus ( 73476 )
        Placing your average cabinet over a perf tile will do... nothing.

        Servers vent, in their standard setup, front to back. While a vertical system with a hot-air plenum is fantastic, it's going to require specialized hardware, and since most datacenters are an odd mix of new and old, it is never going to happen. Google could afford to do it, but Random Company X or Colo Y can't.

        If you're talking about a full cold-aisle containment system, in which a standard cold-air subfloor is much more closely controlled, the

    • by Sobrique ( 543255 ) on Tuesday June 17, 2008 @09:53AM (#23823313) Homepage
      It's civil engineering, intersecting with 'real world IT'. Off the top of my head:
      • Power - redundancy and resiliency, as much as 'just having enough'
      • Cooling - air conditioning is a BIG deal for a data centre - you need good air flow, and it probably doubles your electric bill (see the rough PUE sketch just after this list).
      • Specialist equipment - datacentres are _mostly_ modular, based around 19" racks. But there's exceptions, such as stuff that is 'multi-rack' like tape silos.
      • Equipment accessibility - you'll need to add and remove servers, and possibly some really quite big and scary bits of big iron - IIRC a Symmetrix is 1.8 tonnes. You'll need a way to get that into a datacentre which doesn't involve '10 big blokes' - and the spacing of your racks might not help
      • Putting new stuff in - a rack is 42U high. Right at the top of that rack, is going to require overhead lifting.
      • Cabling. Servers use a lot of cables. Some are for power, some are for networking, some are for serial terminals. You've got a mix of power cables, copper cables, fiber cables. They need to fit, they need to be possible to manipulate on the fly, and they need to not break the fibers when you lay them. You also need to be aware that a massive bundle of copper cables is not perfectly shielded, so you'll get crosstalk and interference. And every machine in your datacentre will have 4 or more cables running into it, probably from different sources, so you need to 'deal' with that.
      • Operator access - if that server over there blows up, how do I get on the console to fix it? And if I am on the console to fix it, how do you ensure I'm not twiddling that red button over there that I shouldn't be?
      • Remote/DR facilities - most datacenters have some concept of disaster planning - things as simple as 'farmer joe dug up the cable to the ISP' all the way to 'plane flew into primary data centre'. These things are relatively cheap and easy to deal with on day one, and utter nightmares to retrofit onto a live data centre.
      • Expansion - power needs change, space needs change, technology changes and ... well, demand for servers increases steadily. It's something to be considered that you will, sooner or later, run out of space, or have to swap out assets.
      That's what springs to mind off the top of my head. There's probably a few more things. So yes, civil engineering, but with a smattering of IT constraints and difficulties.
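
      Picking up the cooling bullet above: a minimal PUE sketch (Power Usage Effectiveness = total facility power / IT power). The loads are hypothetical round numbers, not measurements from any real site:

          it_load_kw = 500.0          # servers, storage, network gear (assumed)
          cooling_kw = 400.0          # CRAC units, chillers, pumps (assumed)
          power_losses_kw = 75.0      # UPS conversion losses, lighting, etc. (assumed)

          total_kw = it_load_kw + cooling_kw + power_losses_kw
          pue = total_kw / it_load_kw
          print(f"PUE = {pue:.2f}")   # ~1.95 here: the overhead roughly matches the IT load itself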


      • Putting new stuff in - a rack is 42U high. Right at the top of that rack, is going to require overhead lifting.

        With blades and other new high density things (48 disk trays, anyone), a lift will be required regardless.

        But yah, good analysis.

        The thing about being a data center designer is that it is multidisciplinary. You have to know a lot about facilities in particular to be anywhere near qualified, and all the computer stuff is there as well on so many levels.

  • If data center people really want respect, all they have to do is send an urgent email to the CEO that the dilithium crystals are deteriorating, and that the antimatter containment fields are failing, and we can't take much more of this, captain
  • by Anonymous Coward on Tuesday June 17, 2008 @09:05AM (#23822711)

    '...these data center people, who haven't gotten a lot of glory in their careers, are in the spotlight now.'
    Glory? If I wanted glory I would have become a firefighter or something. I got into the data center business to read people's email, plain and simple. That's reward enough!
    • Amen (Score:3, Insightful)

      by Anonymous Coward
      The only "glory" we'd receive in the data center is when something goes wrong. That is the only time we ever got noticed.

      Only now they want people like myself? Screw that. I gave up on that long ago when it was a dead end.
    • Don't forget that downloads and torrents to the servers are lightning fast. Then you can just bring a thumb drive next time you go down to fix an "outage."
  • There is such a thing as a tesseract
  • by Kohath ( 38547 ) on Tuesday June 17, 2008 @09:08AM (#23822739)
    This is only a problem because the power grid has become very fragile.

    Electricity generation hasn't grown ahead of demand due to government meddling, atom-ophobia, and environmentalist obstruction in the courts and on planning boards.

    The rolling blackouts will be coming soon. It'll start with small ones. Then everyone will buy battery backups that draw a lot of power to recharge once power is restored. This will cause the duration of the periodic blackouts to go from a few minutes to a few hours in about 2 years.

    Not long after that, we'll start building power generation capacity in the US again.
    • by samkass ( 174571 ) on Tuesday June 17, 2008 @09:15AM (#23822819) Homepage Journal
      How many letters have you written to your congressman advocating that the government build a coal plant on your block? That's the fuel that America has the most of.
      • Re: (Score:2, Insightful)

        by Kohath ( 38547 )
        There's already one a short distance from my house, thanks. They are trying to build more in the region (a long way away from everyone's house), but the environmentalists won't allow it.
      • by j-pimp ( 177072 )

        How many letters have you written to your congressman advocating that the government build a coal plant on your block? That's the fuel that America has the most of.

        Because it would make less sense to import uranium. And I'd have no problem living next to a nuclear reactor plant, except of course I live in Queens (NYC borough) and they would never put one that close to Manhattan. Assuming I lived in Sachem, though, I'd be all for them firing up the plant there. It's a small risk for cheap energy.

    • by bsDaemon ( 87307 ) on Tuesday June 17, 2008 @09:32AM (#23823035)
      On the other hand, it might be the final push that people need to start making their homes and businesses as energy efficient as possible, up to and including home solar and/or wind; use of more energy-efficient appliances, low-power-consumption electronics, etc.

      I would dare say that the future looks good for ARM and Via on that last account, at least.
      • by Kohath ( 38547 )

        On the other hand, it might be the final push that people need to start making their homes and businesses as energy efficient as possible...
        None of that generates any power.

        It's like asking a hungry person to cut back on his food intake just a little more to share with the other hungry people in his family or village.

        Or we could just generate some more power and have good lives instead of slowly starving ourselves.
        • by bsDaemon ( 87307 ) on Tuesday June 17, 2008 @09:49AM (#23823255)
          No, it doesn't generate power -- but it prevents me from needing as much. The less I need, the more I can make myself. If I can cut my use from say, 200Mega Watts to 800Kilo Watts by proper deployment of insulation, energy star appliances, replacing a desktop PC with a Pico-ITX system, etc -- then if I can generate half what I need with solar panels or a wind mill (if I live in an area where I can fit one), then I'd no longer be that big of a draw on the grid, would I?

          Of course, those numbers are all just pulled out of my ass for an example, but still -- you get the point. Cost saving measures at home are also going to lead to energy savings at large. With power prices going up ~30% next month, I think more people will start looking at the alternatives and where they can cut costs.

          I agree that things need to be done on the supply side as well, but they should be done in a responsible manner. Building more nuclear plants, for instance. But by reducing consumption, we can then close down fossil plants instead of doing a 1:1 replacement.
          • by Firehed ( 942385 )
            If you're going from 200MW to 800kW from those changes, you should really consider dealing with that short-circuit before anything else.
            • by bsDaemon ( 87307 )
              Yeah, I know the numbers are BS. They aren't based on anything real, and I probably didn't even capitalize correctly.
        • Re: (Score:3, Insightful)

          by Anonymous Coward
          It's like asking a hungry person to cut back on his food intake just a little more to share with the other hungry people in his family or village.

          No, energy efficiency is like getting a well fed person, who currently throws away 30% of their food because it goes bad before they eat it, to do a bit more planning and a bit less impulse buying so they reduce their wastage to (say) 10% or less and save themselves significant money for a very small effort, and do their bit for the environment as a side effect.
          • by Kohath ( 38547 )
            People already do the things that will "save themselves significant money for a very small effort".

            Why not just make more? It's easy and we know how and it's cheap and people are happy to buy it because they want their lives to be lives of plenty instead of lives of desperate want.
            • "People already do the things that will "save themselves significant money for a very small effort".

              Why not just make more? It's easy and we know how and it's cheap and people are happy to buy it because they want their lives to be lives of plenty instead of lives of desperate want."

              I disagree. I think many people are becoming more energy conscious, but there is still a ton of room to improve. Many people are still very wasteful in their energy consumption (leaving the tv on all hours of the day, not turni
              • by Kohath ( 38547 )

                I just feel that making a few changes in our lives to reduce our energy usage is quite different from "desperate want".
                And what would happen to make that opinion (or "feeling") change? How much conservation is enough? What evidence would you have to see to decide we need to switch gears and just make more energy available?

                How do we know when we've sacrificed enough of our wealth and wellbeing to meet your standards?

                The blackouts are coming.
                • "And what would happen to make that opinion (or "feeling") change? How much conservation is enough? What evidence would you have to see to decide we need to switch gears and just make more energy available?

                  How do we know when we've sacrificed enough of our wealth and wellbeing to meet your standards?"

                  Turning the lights out when I leave my office or my home does not decrease my supposed wealth. If anything it helps increase my monetary resources as they are not being spent on wasted electricity. Certainly it
                  • by Kohath ( 38547 )
                    Restricting production to try to force people to save energy will just increase the price and put a big burden on the poorest and most vulnerable in society. Those who can still afford the increased price will need to spend their time and effort saving energy instead of producing useful goods and services. Prices on all goods and services will increase as a result. Again, this hurts poor and vulnerable people the most.

                    As energy is made more and more artificially scarce, production will move to areas of t
    • by Hasai ( 131313 )

      ....Not long after that, we'll start building power generation capacity in the US again.
      And burning treehuggers at the stake?

      I'll supply the gas^h^h^hhydr^h^h^h^hethan^h^h^h^h^hbeer.
  • Green IT (Score:4, Informative)

    by mattwarden ( 699984 ) on Tuesday June 17, 2008 @09:14AM (#23822813)
    There is a growing area of interest in so-called "Green IT" (mostly due to inevitable regulations), and the first area being looked at is data center organization. It's always the first stat a consultant firm throws out, because it's relatively easy to show significant cost savings in such an environment (just by reorganizing the appliances to distribute heat in a different manner).
  • by apathy maybe ( 922212 ) on Tuesday June 17, 2008 @09:15AM (#23822833) Homepage Journal
    Before someone else does it, I direct you to BOFH, http://www.theregister.co.uk/2008/05/23/bofh_2008_episode_19/ [theregister.co.uk]
    The BOFH cares about important things:

    "There are seven pubs and two Indian joints within a one block radius, a tube station a couple of blocks away and a women's fitness centre across the road."
    Like service:

    "WE COULD CUT A HOLE IN THE FLOOR OF MISSION CONTROL, INSTALL A POLE, KEEP A CAR IN THE SERVICE BAYS AND CALL IT THE BATMOBILE!"


    So, besides electricity usage, what else should you care about? How about heat? Your room can't be too hot (you can send all the heat to the swimming pool in the fitness centre...).

    What about wires? Both an OHS issue, and a potential to kill off half your servers if you trip over an exposed power cord or network line. So you lay them under the floor?

    Complicated stuff this...
    • Your room can't be too hot (you can send all the heat to the swimming pool in the fitness centre...).

      I've been wondering about heat storage since we had the article about the new data center in the desert near Las Vegas. If you have a large enough mass (e.g. a couple hundred tons of water) you could store some amount of heat during the day until you can vent it during the night. That would require water-cooled heat exchangers everywhere, but you would need less power for active cooling.
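
      A quick feasibility sketch of that thermal-mass idea. It assumes 200 metric tons of water, a 10 °C allowable temperature swing, and a 1 MW heat load; all three numbers are illustrative, not from the comment:

          water_kg = 200_000                  # ~200 metric tons of water (assumed)
          specific_heat = 4186                # J/(kg*K) for water
          delta_t = 10                        # allowed temperature rise in C (assumed)

          stored_joules = water_kg * specific_heat * delta_t
          stored_kwh = stored_joules / 3.6e6
          hours_buffered = stored_kwh / 1000  # against a hypothetical 1 MW heat load

          print(f"{stored_kwh:,.0f} kWh absorbed, ~{hours_buffered:.1f} h of buffering at 1 MW")

      So a couple hundred tons of water buys a few hours of daytime buffering per megawatt of heat, which is in the right ballpark for shifting rejection to cooler night air.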

  • by binaryspiral ( 784263 ) on Tuesday June 17, 2008 @09:24AM (#23822947)
    Companies with full data centers and in need of more servers are turning to virtualization technologies to increase their server density, reduce their physical server deployment, and improve efficiency in cooling, hardware maintenance, and administration.

    It's amazing to see the difference VMware has made in my career in just a few short years... going from deploying hardware servers in weeks to deploying a virtual one in seconds.
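
    For a rough sense of the density and power gain being described, here is an illustrative consolidation calculation; the server count, per-server draw, consolidation ratio, and the assumption that a loaded host draws about twice as much are all made up for the example:

        physical_servers_before = 300             # legacy 1U boxes (assumed)
        avg_watts_per_server = 400                # draw at the plug per lightly loaded box (assumed)
        consolidation_ratio = 10                  # VMs per physical host after virtualizing (assumed)

        hosts_after = -(-physical_servers_before // consolidation_ratio)   # ceiling division -> 30 hosts
        watts_before = physical_servers_before * avg_watts_per_server
        watts_after = hosts_after * avg_watts_per_server * 2               # busier hosts draw roughly 2x (assumed)

        print(f"{hosts_after} hosts; {watts_before/1000:.0f} kW -> {watts_after/1000:.0f} kW at the plug")

    The cooling load shrinks roughly in step with the plug load, which is where the facility-level savings come from.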
    • by flajann ( 658201 )
      A colleague of mine has created this wonderful system around Puppet that now allows him to spin-up a fully-configured server in just minutes.

      In our operation, virtual servers would NOT cut the mustard. They are nice for development and QA -- maybe -- but when you are dealing with thousands of simultaneous users, you need real iron, not virtual.

      • Re: (Score:3, Insightful)

        by Sobrique ( 543255 )
        Virtualisation is a way of managing capacity and demand. It is not an either/or case with 'real iron', it's just a different way of considering the problem domain.
    • by outcast36 ( 696132 ) on Tuesday June 17, 2008 @09:40AM (#23823131) Homepage
      yes and no.

      I've seen companies that turned to virtualization to solve their power and cooling problems. Yes, you can serve more OS instances with less hardware. That is good.

      However, these places are generally not managing their infrastructure well in the first place. Now you start running into problems with server sprawl and storage management. The management costs are going up because you have more servers running more applications. That takes more management, not less.

      I think it is great that we are seeing more specialization in this space. I think that SysAdmins need to look at how they want to specialize moving forward. Are you going to manage hardware & resources? Are you going to focus more on OS and application tuning? We can't expect one person to have enough breadth to cover HVAC/electric/network/storage/OS/application. I'd hate to see a tape ape get into Data Center design because "hey, they're down in the Data Center anyway"

      Don't get me wrong, I think virtualization (server & application & desktop) is the wave of the future. But I don't think a lot of firms see this yet. I think they are still trading one problem for another.

      • However, these places are generally not managing their infrastructure well in the first place. Now you start running into problems with server sprawl and storage management. The management costs are going up because you have more servers running more applications. That takes more management, not less.

        Solving their infrastructure problems would be a good thing for everyone.
        But if they don't, it'd also be better for everyone if their increased costs go towards management and not towards increased electricity usage.

        I think they are still trading one problem for another.

        Labor is a "problem" with a solution that can be ramped up to full speed a lot faster than a new power plant. Not to mention the external costs from labor are preferable to the external costs of more coal fired power.

    • by dkf ( 304284 )

      Companies with full data centers and in need of more servers are turning to virtualization technologies to increase their server density, reduce their physical server deployment, and improve efficiency in cooling, hardware maintenance, and administration.

      It buys you a few years, but that's all. There's massive growth going on in the amount of server capacity needed, and all virtualization can do is to get your utilization up closer to the 95% level (bad idea to go much over that; you need a little space for admin overhead). But once you've done that, you're going to need more real capacity anyway (or the business isn't growing...) At best, use virtualization to buy yourself the time to get your physical server systems in order so you can host more physical

      • EC2 is also virtualized to hell and back. So, if you're already virtualizing, why go EC2? If you cannot virtualize for performance reasons, you still can't do EC2.
  • by khallow ( 566160 ) on Tuesday June 17, 2008 @09:30AM (#23823019)

    I don't understand the peculiar emphasis the New York Times places on "endangering" the power grid. Even though a data center uses a lot of electricity, it's a high-value operation that needs a stable power supply. What's wrong with the idea of paying more to ensure that your power supply is sufficiently stable for your needs? The power company accepting those checks can then work on delivering that power. It's like saying that I'm somehow responsible for the stability of the oil production and distribution infrastructure because I drive a car. Perhaps, if I tweak my engine just so, I can engineer a democratic transformation of Saudi Arabia. I'll see if changing the oil does the trick.

    At some point, you have to realize that the consumer, no matter how big, isn't responsible for the supply of resources by another party. If there's a problem with how those resources are supplied, be it fixed price (regardless of demand) power transmission lines, pollution, or deforestation, then that problem should appear as an increase in cost to the consumer. If it isn't, then it's a problem with how the resource is distributed, not a problem with the consumer.

    • Re: (Score:3, Informative)

      by peragrin ( 659227 )
      Because there hasn't been a large power plant built in the USA in 20-odd years, with the last one coming online 12 years ago.

      We are building high-power devices on a power grid that was designed 20 years ago, before such concepts were even thought of.

      Environmentalists won't let new plants be built. Solar and wind are not yet generating enough to even begin to offset the demand.

      It isn't distribution, it is availability. By 2015 I expect rolling blackouts during hot summers, simply to keep the air conditioners
      • by guruevi ( 827432 )
        How about large datacenters having their own fairly large power generator? If you can buy gas, oil or coal in bulk it might be on par or cheaper and more reliable than buying general grid-based power. If you need cleaner energy, there was a news post a few months ago about a "backyard" nuclear generator similar to the ones used on nuclear powered subs and vessels.

        For smaller applications like home, a Radioisotope Thermoelectric Generator might do, they've been used in military applications since the middle
    • In many cases it's not just being ecologically friendly, it's also about self interest. With the price of oil getting higher, the price of electricity will also increase. Well as a company, I would think that you want performance and efficiency. As a company, you'll pay for the electricity that you need but if you're wasting it on inefficient systems that's money that could go for other things. I guess the point of the article was many were just built without the thought of efficiency (power, space, coo
    • Re: (Score:2, Interesting)

      by Sobrique ( 543255 )
      Well, between uninterruptible power, and air conditioning, datacentres are probably one of the highest 'power overhead' applications. There's a hell of a lot of 'waste' there, which you can design out, in some measure.

      But yes, it's a priority application too - datacentres score as 'business critical' in most companies, so no matter how much it costs to run, it's cheaper than it 'not running'.

      Part of the point of DC design is resiliency, and therefore you _do_ have to consider available services and supp

      • Re: (Score:3, Informative)

        by j79zlr ( 930600 )
        I am an HVAC engineer and design some data centers. The power usage on some new densely populated centers can range up to 6,000 watts per square foot. For perspective, the average office building is around 10-12 watts per square foot.
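
        Taking those densities at face value, here is what they imply in cooling terms for a hypothetical 1,000 sq ft room; the room size is an assumption for the example, and the densities are the ones quoted above:

            def cooling_tons(watts_per_sqft, area_sqft):
                """Refrigeration tons needed to remove an electrical load (1 ton = 12,000 BTU/hr)."""
                return watts_per_sqft * area_sqft * 3.412 / 12_000

            area = 1_000                                                    # sq ft, assumed
            print(f"office   @   12 W/sqft: {cooling_tons(12, area):7.1f} tons")    # ~3.4 tons
            print(f"dense DC @ 6000 W/sqft: {cooling_tons(6000, area):7.1f} tons")  # ~1,700 tons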
        • by khallow ( 566160 )
          Out of curiosity, when you include area for air conditioning, power/communication conduits, and other support infrastructure, what is the consumption rate per square foot? I'm thinking of it in relation to power generation. In comparison, current best-effort solar power is somewhere around 10-20 watts per square foot (averaged over 24 hours) and the world's largest nuclear plant [wikipedia.org] has a power density of somewhere around 200 watts per square foot (including the 4.2 square kilometer buffer of land, but not the
    • I think it's mostly a matter of the journalist misunderstanding the reasons for saving power in a datacenter.
  • Sound proof booth!

    That way, the server dude who is wrangling my server issue CAN HEAR ME when he calls me, elbow deep in whirring fans, spinning disks and humming thingees. Even if he listens, gives me a BRB, puts me on hold and dives into the machines.

    Or if not feasible, maybe the server dudes could wear priestly robes made of this stuff [physorg.com]
  • Instead of everyone hiring their own designer and doing a one-off solution, go for the data center in a shipping container [sun.com]. It'll cost you less than the architects will charge you for the building design, and a proper industrial design can make the HVAC more efficient and save lots of $$ in the long run.
    • Re: (Score:2, Interesting)

      We just thought about doing this for a slightly different reason. Trailers are a funny loophole in US regulations. If you pull a trailer inside a building, the building inspectors come (enviro, electric, whatever) and you just tell them it is a trailer that is currently stored inside. They assume it is regulated by the DMV or some other travel safety org. But if you never actually drive the trailer on the roads, you never incur those regulations/inspections. Seriously, trailers are a sneaky way to avoid
    • When you put it behind the building, make sure it is secured and monitored.
      I know of one case where the whole trailer disappeared overnight.
  • by aaarrrgggh ( 9205 ) on Tuesday June 17, 2008 @09:42AM (#23823155)
    There has been a shortage of architectural engineers for the past two decades. I say architectural engineers because very few mechanical engineers go into HVAC, and very few electrical engineers do power systems. It doesn't seem quite as bad structurally.

    It is a shame because it really has a lot of great career opportunities.

    Data center work is just a subset of that-- it is hard to find people with the experience, but not impossible to train.
  • From the second FA:

    The centre is fitted with state-of-the-art security and has undergone an array of checks to allow it to process data up to government level.
    If they're up to UK Government data security level I wouldn't trust them with anything
  • by trybywrench ( 584843 ) on Tuesday June 17, 2008 @09:52AM (#23823297)
    I used to have a few friends who worked for UUNET in Richardson TX. After Worldcom bought them and then the scandal happened, their datacenter was reduced to a skeleton crew (including security). My buddy worked nights, so some weekends I'd drive up to Richardson from Dallas with some beer and he'd sneak me into the datacenter through a door that the smokers used and we'd hang out, drink, and download movies/watch pron. Good times.

    Their UPS was pretty impressive. It was about a 2,000-square-foot room full of what looked like car batteries. I didn't like to go in there; I don't like being around large, uninsulated potential. (I was shocked pretty badly as a kid once.)
  • by ILongForDarkness ( 1134931 ) on Tuesday June 17, 2008 @09:54AM (#23823327)
    You hear a lot about the big datacentres that are being planned by the likes of Google, Yahoo, etc. I realize this is probably an oversimplification, but it seems like they know ahead of time what the systems will be for the datacentre. They seem to know the apps they will run, the servers they like, etc.

    While I admit that these datacentres are huge and get a lot of publicity, and thus a lot of pressure to design right and "green", I don't think that level of advance knowledge is typical for SMBs or even most non-IT-centric businesses, regardless of size.

    In practice a company has a few servers and one or two system admins; then they grow, staff leaves, they start thinking about different technologies, required software changes, etc. What they end up with is a few vendors' servers, a few vendors' disk arrays, and probably a few flavors of networking.

    In short, the "real world" problem for the majority of companies/sys-admins isn't the very academic concept of building a single-purpose datacentre, but handling growth and change. I've yet to see a good reference for how to handle this. At best I see vendors showing how great their new server/rack combination is in isolation. Another popular thing is the "look how low our power needs per FLOP are" pitch for a data centre built entirely on their products. Yeah, like we are likely to use identical systems for databases as we do for LDAP, and the same one for a fileserver as we use for an MPI cluster.

    Anyways, does anyone know a good reference to deal with these "real world" problems?

    • Re: (Score:3, Interesting)

      by aaarrrgggh ( 9205 )
      All companies face those same challenges, including Microsoft and Google. I work mostly with banks, and we are always faced with micro and macro change and growth planning. With help, some banks can go from a one-quarter projection to a reasonable three year projection. Five to six years is harder, but sometimes possible.

      The biggest secret is in providing enough space to allow for growth, changing needs, and eventually equipment replacement.

      As for efficiency, I have to tell an aspiring co-lo that they will
  • All of this crazy crap is necessary because

    a) Stoopid computer parts can't just sit outside and work right.

    b) Cheap heartland desert acres are beloved by accountants.

    No roof == no heat problem. If there's wind. And you're not in the Sahara.

    If only my data center was water-resistant...
    • by Gilmoure ( 18428 )
      I wonder what the temperature is, at 20,000 feet? Why isn't Nepal the server capital of the world?
  • by miller60 ( 554835 ) * on Tuesday June 17, 2008 @10:05AM (#23823473) Homepage
    Here's an interesting related issue: how many people does it take to operate a data center? Google always says that it will create 200 full-time positions at each of its new data centers. But an analysis of data center staffing [datacenterknowledge.com] for new Yahoo and Microsoft facilities in Washington State suggests that these companies can run a data center with 30 to 50 staffers.


    Data center employment often comes up in discussions of economic development. Many communities are eager to attract data center projects, but struggle to define the economic benefits of these facilities. Jobs have always been the primary benchmark by which economic development projects are measured. Incentive packages offered by state and local governments are often based on the number of full-time jobs created by a new business. And do data centers really hire locally, or do trained data center engineers migrate from other existing data center hubs? In some cases, local officials try to stipulate local hires, which is a sticky wicket.

    • In some cases, local officials try to stipulate local hires, which is a sticky wicket.

      Well that might be an issue with skilled labor. Some skillsets cannot simply be trained, for example mechanical and electrical engineers. They require degrees and, in some cases, professional certification. Computer admins might be trained but more likely require experience and some certification. Electricians, HVAC technicians can be found locally but the question is then a matter of skill and availability. Unskille

      • That's a good point, and one of the reasons that data center site location specialists are focusing on markets where there's a local college or university with a program that trains IT staff, especially if they offer security certifications that meet NSA standards.
      • by Firehed ( 942385 )

        Some skillsets cannot simply be trained, for example mechanical and electrical engineers. They require degrees and, in some cases, professional certification.


        And how the hell do people get degrees and professional certification? Sure, those things tend to have a much longer training period, but it can't be that difficult to find people capable of Doing The Math, which is really what a huge amount of engineering boils down to.
        • A new data center is coming to town in 6 months. They need a mechanical engineer. Can a local convenience clerk be trained for that position in that amount of time? No. They need a degree (4+ years). If they are doing any work that involves construction, then they need to be a licensed Professional Engineer to sign off on documents. Professional Engineer License requires up to 4 years of work experience (depends on state) and passing of two technical exams similar to the bar for lawyer or medical boar

          • by jbengt ( 874751 )
            I have to disagree, though maybe some of my points are too pedantic.
            If a new data center is coming to town in 6 months, they better have had the planning of construction and staffing done already.
            As far as the design of the construction goes, there's nothing saying that the engineers have to be located in the town. I've designed systems constructed in about 35 different states without moving from my state.
            For mechanical operation and maintenance, they might want, but probably don't need, to employ a l
    • by ostiguy ( 63618 )
      It's got to be the IT equivalent of sports arenas (heretofore the most infamous boondoggle as a jobs creator/economic engine) - let's have a 100k sq ft thing consume far more power than 10-20 100k sq ft office parks, and we've got TWO HUNDRED jobs to show for it. Fantastic.
    • Re: (Score:3, Insightful)

      And do data centers really hire locally, or do trained data center engineers migrate from other existing data center hubs? In some cases, local officials try to stipulate local hires, which is a sticky wicket

      When Google plants a data center in The Dalles, OR, and MSFT plants one in Moses Lake, WA, I guarantee you that most of the hires aren't local. Initially. They just plain don't exist there.

      As far as the local economy goes, though, even if every hire comes in from out of the area, it's likely to be goo
  • It's not just data center designers that are in demand. There are a ton of listings for data center managers at the Data Center Jobs [datacenterknowledge.com] site.
  • by miller60 ( 554835 ) * on Tuesday June 17, 2008 @10:26AM (#23823747) Homepage
    On the "how a data center gets built" front, last week I had a tour of a new $250 million data center facility in Virginia that is getting ready to open later this month. The facility manager provided a walk-through of the power and cooling infrastructure, explaining the company's approach to designing these systems for energy efficiency and scale. I shot video, which is now posted online [datacenterknowledge.com]. The data center operator, Terremark, separated most of the electrical infrastructure from the IT equipment, putting them on separate floors and housing the generators in a separate facility. They have 11 generators now, but will have 55 Caterpillar 2.25-megawatt units when the entire complex is finished.
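
    For scale, the generator plant described above works out to well over 100 MW of standby capacity; a trivial check (using only the figures quoted in the comment):

        generators = 55
        mw_each = 2.25
        print(f"Standby generation at full build-out: {generators * mw_each:.2f} MW")  # 123.75 MW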
  • They've already taken over the governorship of California and have many more ambitions.
  • IT = Volatile (Score:4, Insightful)

    by Tablizer ( 95088 ) on Tuesday June 17, 2008 @11:51AM (#23825131) Journal
    One thing I've learned from my 20-odd years of experience is that a career in IT is volatile. Your specialty will go up and down in value over the years as globalization, fads, and technology changes ebb and flow.

    The problem is that if you have a family, such volatility can be problematic. Possible solutions are to save during the good times (nearly impossible if you are married), or be a generalist, such as the only IT person at a small company or department. Generalists tend not to be paid well, but they do seem to weather downturns or paradigm changes better. It's a trade-off.
         
    • Re: (Score:3, Interesting)

      by Gilmoure ( 18428 )
      I've been a low level tech grunt for almost 20 years. Nice thing is, I'll always have work. Am the 21st C. equivalent of a general auto mechanic. Nice thing is, I've been able to make a decent living and pretty much pick my work environment. Have never been afraid to pick up and leave, even with family. What's funny is, there are several younger guys in my shop and they are chafing at the work. They don't like the hands on stuff and are just bitching and moaning, hoping to get a team lead position. Only one
  • What is all this computing infrastructure doing that's useful? Other than advertising delivery?

  • by bryce4president ( 1247134 ) on Tuesday June 17, 2008 @12:20PM (#23825867)
    I don't get it. Besides "think of the Polar Bears", I can't see why we don't just move all our data centers to the Arctic. We can pump oil right out of the ground and burn it for the electricity needed to power the units. We don't need AC, just some air pumps to push the cold air through the building. That solves two problems right there.

    1)We can now pump oil out of the national reserves in Alaska
    2)We don't have to work very hard to cool the data centers.

    Win Win if you ask me ;)
    • by jsailor ( 255868 ) on Tuesday June 17, 2008 @12:28PM (#23826079)
      While it may appear that you don't have to work hard to cool the data centers, you will have to work hard to humidify them if you do not want your equipment to die. This is a non-trivial cost and is the reason that "free cooling" (taking in outside air to cool a data center) is often not free.
      One answer may be heat wheels, but they are fairly new and unproven in the data center space. Take a look at http://www.kyotocooling.com/ [kyotocooling.com]
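
      A minimal psychrometric sketch of why that humidification cost shows up: heating cool outside air at constant moisture content drops its relative humidity sharply. It uses the Magnus approximation for saturation vapour pressure; the temperatures and starting humidity are assumptions for the example:

          import math

          def saturation_vp_hpa(temp_c):
              """Saturation vapour pressure in hPa (Magnus approximation)."""
              return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

          outside_temp_c, outside_rh = 5.0, 60.0          # cool intake air (assumed)
          room_temp_c = 25.0                              # after warming up in the data hall (assumed)

          vapour_pressure = outside_rh / 100 * saturation_vp_hpa(outside_temp_c)
          indoor_rh = 100 * vapour_pressure / saturation_vp_hpa(room_temp_c)
          print(f"Indoor RH ~ {indoor_rh:.0f}%")          # ~17%, dry enough to make static discharge a real risk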

      • by Hatta ( 162192 )
        First, how are electronics hurt by a lack of humidity?

        Second, just bubble your dry warm air through some cold water. You'll get cold wet air.
        • by jsailor ( 255868 )
          Below about 20% RH, you get Electrostatic Discharge (ESD). The sparks you can see aren't the ones that kill the equipment, it's the ones that occur inside the equipment without human intervention that kill it.

          As the low-humidity cool air you've blown in heats up, its relative humidity drops even lower. You may find humidifying the volume of air present in a 100,000-200,000 square foot data center with a 4 foot raised floor, a 12-15 foot ceiling, and a 4 foot ceiling plenum a little challenging. Espec
  • Much like the unintended consequences of the ethanol disaster, people who want to go paperless to save the trees have to look at the non-trivial task of meeting a data center's energy needs. When you realize this, the importance of green IT becomes apparent. Whether or not global warming is indeed the threat that they say it is, developing energy solutions that don't rely on fossil fuels is imperative. Basically, we need to use more nuclear power!
