
Green buildings, Green Server Farms? 263

mstansberry writes "Has IT evolved to the point where it can consider energy efficiency without sacrificing uptime or performance? According to an interview with APC's Richard Sawyer, the answer is yes. The green buildings movement, spearheaded by the USGBC and other organizations, has some people thinking about computing infrastructure's impact on the environment. Is it an IT issue or something from C-level executives?"
Comments Filter:
  • But (Score:4, Insightful)

    by Anonymous Coward on Monday May 16, 2005 @01:36PM (#12545066)
    Last time I checked my computer was a box full of toxic chemicals
    • by PopeAlien ( 164869 ) on Monday May 16, 2005 @01:49PM (#12545232) Homepage Journal
      Last time I checked my computer was a box full of toxic chemicals

      Ah! but what color are these chemicals?
    • Re:But (Score:4, Insightful)

      by Politburo ( 640618 ) on Monday May 16, 2005 @02:26PM (#12545667)
      Right, so because computers contain some toxic substances that are not emitted to the air or ground during normal use, that means we shouldn't attempt to mitigate the environmental impact of computer use?

      +5, Insightful, but only if you're a simple-minded idiot.
      • that means we shouldn't attempt to mitigate the environmental impact of computer use?

        The original point is more that the production wastes of making a computer are so nasty that any contributions towards making the running of the thing more environmentally friendly have no practical effect on the balance. The manufacturing and refining processes are so nasty that a PC would have to OUTPUT free, clean energy for hundreds of years to come out even.
  • by guildsolutions ( 707603 ) on Monday May 16, 2005 @01:36PM (#12545070)
    Considering my Mac mini takes less power than just my AMD CPU, never mind the video card, etc., I'm really wondering if the push for massive CPU power at the cost of extreme electrical usage is really worth it.

    Green everything should be a good thing, but what if the cost of going green is greater than that of reclamation and regeneration?
    • by Anonymous Coward
      The mini reminds me of a friend who used an old 68k Macintosh as a webserver. Her desktop was plugged into mains power, but the little web server only used 17W of power to run all day every day, and was on a solar power setup with battery backup. Last time I heard from her it had gone down from lack of power only twice in a year.

      I bet if it wasn't a home-built power system but a professional one with some better power management it could be used 24/7 too.
  • by btempleton ( 149110 ) on Monday May 16, 2005 @01:39PM (#12545112) Homepage
    Even without environmental questions: CPUs have been getting faster and faster per dollar you spend on them, but they have not been getting faster at the same rate per _watt_ you put into them. And each watt put into them also costs power to cool them.

    This applies even in the home. Here in California, land of the 14-cent kWh, a 100-watt PC running 24/7 costs about $120 per year in power. Over a 3-year life the power is more expensive than the CPU or any other major component except perhaps the monitor, sometimes more expensive than the whole PC.

    This also plays big on ideas like getting an old computer and putting linux on it to act as a router or music player or other special functions. You are much better off buying a dedicated box like a WRT54G than making use of the "free" old hardware.

    And yes, this does have environmental issues, but you can see the problem right away just by looking at costs.
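    The $120/year figure above is easy to sanity-check. A quick sketch of the arithmetic, using the poster's 100 W draw and 14¢/kWh rate (everything else is plain unit conversion):

```python
def annual_power_cost(watts, cents_per_kwh, hours_per_year=24 * 365):
    """Dollar cost of a device drawing `watts` continuously for a year."""
    kwh = watts * hours_per_year / 1000   # 100 W over 8760 h = 876 kWh
    return kwh * cents_per_kwh / 100      # cents -> dollars

print(annual_power_cost(100, 14))         # about $122.64/year, i.e. the ~$120 quoted
```

    At 3 years that is roughly $370 in electricity for one always-on 100 W box, which is indeed in the price range of a mid-2005 CPU or monitor.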
    • Okay, maybe me.

      However, these new |337 modded overclocked mega-boxes with a zillion fans, accelerator cards, lighting, speaker systems, external super-spinning hard drives and 300-watt power supplies use a tad more fuel than that.

      I'd guess that with a CRT monitor, you're looking at an annual cost of at least twice that for a standard-vanilla (non modded) desktop, and the mods go up from there.

      I agree with the post about using laptop parts, and if I'm correct, that's what some manufacturers are starting t
      • I have to agree with you. My home system is a 1.0 GHz PowerBook from 2 1/2 years ago. I'm not a big games player. I surf the web, email, and use SSH to get in to the Linux servers I admin at work.

        The system is fast, smooth, and rock-solid. The fan is tiny, the system is silent and power consumption is LOW.

        Sometimes it's simply a matter of realizing what the right tool is for the job. I don't need a high-end data cruncher at home -- I do enough of that at work.
      • You must be out of touch. A heavily modded computer can easily use 600 watts. My new one came with a 560 watt power supply that can peak at 650.
        • by Mr Guy ( 547690 ) on Monday May 16, 2005 @02:53PM (#12546032) Journal
          You must be out of touch if you think the vast majority of people use that much power all the time.

          PSU Needs Calculator

          Using this calculator, a sample system I just made up only needed 319 watts of peak power. To get that, I needed to be running the 3GHz Barton chip, 2 sticks of RAM, 2 hard drives, a Radeon X800, sound, NIC, with 3 fans full blast and 2 cathode tubes, and a DVD drive. Keep in mind that's PEAK power required, which means all of that has to be going top speed to get there, which means something along the lines of running 3DMark while copying a DVD from one drive to the other while playing sound while downloading a file over the internet while having all your fans and lights cranked up.

          Hate to break it to you, bud, but just because you have it doesn't mean you are using it.
        • You must be out of touch. A heavily modded computer can easily use 600 watts.

          But they don't need to (and in fact, often don't - that 600W power supply might only ever draw 200W in many situations).

          For example, I recently upgraded my main machine to an Athlon 64 3000 (Winchester core). Measured at-the-plug (which even takes PS losses into consideration), it consumes a whopping 64W idle (how auspicious for an Athlon 64, eh?), or just under 100W with absolutely everything going (burning a DVD, CPU pegged
      • Well, the overclocked pimped-out boxes with 14 fans require a lot more, I'm sure, but 99% of computers aren't like that. I'd say the typical plain-Jane desktop computer does average about 100 watts when not doing any sort of major operation.

        The monitor will most likely double that, though.

        The best thing to do for home computers is probably to turn on the power-saving options, like turning off the monitor/hard drives after 5 minutes of idle and having the computer sleep after 15 or so.
        • I remember doing a quick calculation to convince a medium-sized firm to switch to LCDs several years back. They had about 150-200 computers running, a fair few with multiple displays. The amount that they saved in electricity running the monitors was about $35k a year, and air conditioning another $15-20k, purely because of that change. The savings weren't important for them, but the amount of space created was; especially with multiple-monitor setups, desk space is a scarcity. The financial incentive helped out wi
      • by Cecil ( 37810 ) on Monday May 16, 2005 @03:59PM (#12546735) Homepage
        My computers can make even a 250-watt power supply catch fire (panic and terror ensued, but the system survived).

        They're all relatively green though, because I pay extra to my local utility to have them put enough power from wind farms onto the grid to power my home. It's a different solution perhaps, but everyone has different needs.

        And I know what some of you want to say, so let me pre-empt you: Yes, I know that my computers are powered by minced bird guts (B.S.) and weather pattern destruction (prove it)! Ha ha ha! I don't care. It's better than coal or gas or oil, so bite me, ok? Until direct solar energy becomes feasible, it's among the best solutions we've got.
    • And let's not kid ourselves; it affords a good avenue of attack for the marketing department.
    • by Shalda ( 560388 ) on Monday May 16, 2005 @01:59PM (#12545352) Homepage Journal
      I think you've hit the issue right on the head. Your average data-center manager could not care less about whether his server farm is environmentally friendly or not. On the other hand, electricity is a major expense. A dozen racks of 1U servers pulling 100-200 watts each will probably run you upwards of $80k/year. And that doesn't even include the cost of cooling your server room (which will add another $20k or so). Server consolidations and energy efficient servers save money. And that will always be your driving force. If company A says they have a "green" server room, it's just marketing. Their first concern and only concern is the bottom line.

      On the other hand, I live in Minnesota, and 5 months of the year, we can use that server energy to heat the rest of the building. :)
      • by periol ( 767926 )
        Their first concern and only concern is the bottom line.

        While we can argue the short-sightedness of this perspective all we want (and it is tremendously short-sighted to allow companies to pass on environmental costs to society), the truth is that we will win if we start impacting the bottom line of companies. It's possible, and getting more possible every day.
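      The ballpark $80k/year for a dozen racks above is plausible. Here's a rough sketch; the rack density and electricity rate are my assumptions for illustration, not numbers from the post:

```python
# All figures below are illustrative assumptions:
racks, servers_per_rack = 12, 40   # "a dozen racks" of 1U servers, assumed density
watts_each = 150                   # midpoint of the 100-200 W range quoted
dollars_per_kwh = 0.13             # assumed commercial electricity rate

load_kw = racks * servers_per_rack * watts_each / 1000   # 72 kW continuous draw
annual_cost = load_kw * 24 * 365 * dollars_per_kwh
print(f"${annual_cost:,.0f} per year")                   # ~$82,000, in line with "upwards of $80k"
```

      Cooling roughly scales with the same 72 kW of heat that has to be removed, which is where the extra ~$20k comes from.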
    • I wonder if using a heat pump to cool servers would be more efficient than using fans and A/C.

      And using a geothermal heat pump is significantly more efficient than using an atmospheric heat pump. The former pumps heat to and from the 50-degree-Fahrenheit ground, while the latter tries to pump heat into the hot air during the summer and get heat out of the air during the winter.

      Server farms using these types of pumps would save significant amounts of money using the same equipment.

  • by toby ( 759 ) on Monday May 16, 2005 @01:40PM (#12545118) Homepage Journal
    Most server hardware is massively overspecified. 90% of websites could run on a 486 and nobody would notice a difference - assuming, of course, that you are running a sane, frugal (UNIX family) O/S.

    Make enormous energy savings simply by consolidating services...

    Stop buying new servers and extend the lifetime of older ones. (Account for the energy costs of manufacture as well as running costs.)

    • by Radix37 ( 670836 ) on Monday May 16, 2005 @02:05PM (#12545426) Homepage
      90% of websites could run on a 486 and nobody would notice a difference

      Until Slashdot strikes...

    • by leoc ( 4746 ) on Monday May 16, 2005 @02:05PM (#12545432) Homepage
      Because they are already there. In fact I'd say 90% of all web sites out there are already running on less than the power of a 486 today. All 3 of my extremely low-volume web sites, for example, are not even running on real hardware. They are all virtually hosted along with hundreds of other sites on a single high power box. Web hosting companies operate on such a slim margin these days that they are the first to take advantage of any technology that saves energy.
    • Most server hardware is massively overspecified. 90% of websites could run on a 486 and nobody would notice a difference - assuming, of course, that you are running a sane, frugal (UNIX family) O/S.

      That's why many sites are virtually hosted on a single, more powerful box. It is usually much cheaper to simply buy a newer, more powerful box than to pay the maintenance costs associated with an older server that your vendor may no longer support.

    • Stop buying new servers and extend the lifetime of older ones.

      This only makes sense if, by continuing to use the old equipment, you are not losing out compared to more power-efficient hardware that would result in an overall power savings, thus saving $$$.

      Account for the energy costs of manufacture as well as running costs.

      These costs are accounted for when you purchase the hardware; you paid for these manufacturing costs up front.

      I would suspect that your arguments would be better focused on the waste or

  • My server farm... (Score:4, Insightful)

    by Bananatree3 ( 872975 ) on Monday May 16, 2005 @01:40PM (#12545124)
    Would be racks and racks of laptops! No need to buy expensive low-power servers, just pump money into high-end laptops that already run low on power. And the best thing is, I don't have to pay for APCs, as they all come with batteries!
    • Re:My server farm... (Score:4, Informative)

      by SuperBanana ( 662181 ) on Monday May 16, 2005 @01:53PM (#12545274)
      And the best thing is, I don't have to pay for APC's, as they all come with batteries!

      They do, but my experience with laptops (particularly old laptops) has been that their battery capacity gauges don't like being left on A/C power for a couple of months; either the battery gets discharged, or the chip thinks the battery has no capacity left, and the machine dies instead of going on battery power when the A/C shuts off.

      PS: they're Uninterruptible Power Supplies. Not "APCs". Those are Armored Personnel Carriers.

      • American Power Conversion... the market leader for UPS's (no, that is not a transport company).
        • Actually, it stands for ain't protectin' crap. If you replace power strips periodically you will probably have sufficient surge protection. Decent switching power supplies, if sufficiently over-rated, will ride out most brownouts. Lightning will probably fry your UPS and your PC :P
        • American Power Conversion.. the market leader for UPS's (no that is not a transport company).

          I'm well aware, considering that I've spec'd out 16kVA room-wide UPS's and the like. My original comment was a slightly sarcastic comment aimed at the original poster who was "pulling a Xerox" (aka confusing the company name with the type of equipment).

      • ...the chip thinks the battery has no capacity left...

        As an aside, I have a Fujitsu Lifebook from '98. When unplugged with the battery in, it thinks there's no charge in the battery. I have a bad habit of plugging my laptops in whenever I can for various reasons. I wonder if what you said is the problem with this laptop. Hmmm.

      • The reason why a laptop *appears* to have a battery capacity gauge that doesn't like being left on A/C power for a couple of months is not the gauge; it is the battery.

        Lithium-ion batteries work poorly in constant full-charge conditions and in hot temperatures. Their effectiveness degrades in heat and constant full charge. And guess what? A constantly plugged-in laptop has BOTH! Heat from the computer and full charge all the time. So a laptop left plugged in for months will kill the battery fast with the heat it
    • Have terrible performance; that's why laptops are usually miles slower than a desktop system. Servers usually need the fastest hard disks you can find for them.

    • Nice idea, but I'm sure you realize it's not that simple. Servers can run high-speed hard drives, use ECC RAM, etc. etc.; laptops are currently not able to sustain 24/7 usage with an acceptable failure rate.
  • Interview? (Score:5, Insightful)

    by SuperBanana ( 662181 ) on Monday May 16, 2005 @01:41PM (#12545137)

    "Interview?" More like, "opportunity to mention APC's UPS efficiency and then yack about how important that is."

    Somewhere, APC's PR firm is quite pleased.

    • As somebody in the business: APC does have significantly lower UPS losses on big systems compared to the other industry mainstays (Powerware and Liebert).

      However, they require twice the number of batteries which will quickly eat away at any total cost savings. (Assuming that you have flooded batteries and actually care about uptime.)

  • by gtrubetskoy ( 734033 ) * on Monday May 16, 2005 @01:42PM (#12545145)

    We wrote about the environmental benefits of virtualization on our site a while back. I even started a little thread on NANOG about any numbers on the relationship of server utilization and energy cost, but it looked like few people cared. To see how underutilized your Linux server is, do:

    # cat /proc/uptime
    1122029.25 1101982.75

    The first number is the system uptime in seconds, the second is the number of seconds it's been idle. The number above is from my laptop - 98% idle.

    Virtualization is also going to be the way hardware vendors will keep server prices up - suddenly very powerful servers will start making sense. The question is who will win - Xen, UML, or Linux VServer. We're banking on VServer. :-)
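    For what it's worth, the idle percentage from the /proc/uptime sample above can be computed like this (the two fields are seconds since boot and seconds spent idle; note that on multi-CPU boxes the idle figure is summed across CPUs and can exceed uptime):

```python
# The poster's /proc/uptime output: total uptime and idle time, in seconds.
sample = "1122029.25 1101982.75"
uptime_s, idle_s = map(float, sample.split())
print(f"{idle_s / uptime_s:.1%} idle")   # matches the ~98% idle figure quoted
```

    On a live system you would read the string from open("/proc/uptime") instead of the hard-coded sample.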

    • I like the info about /proc/uptime, thanks!

      for me a more-telling stat is:

      # uptime
      16:42:03 up 314 days, 23:10, 1 user, load average: 0.00, 0.00, 0.00

      The reason: even when a CPU is busy doing x number of things (maxing out the CPU graph to 100%), it still manages to be "idle" for a good chunk of those CPU cycles. Might have something to do with the way that threads are sliced for multi-tasking.

      That's my guess at least.
    • I still have to wonder what the real point of virtualizing is. Yes, Microsoft pulled an amazing coup by convincing sysadmins that they should have a separate box for every tiny little service they wanted to run. But Microsoft got away with it because of the crappy design of Windows as a server OS. (i.e. You have to plan for complete system wipes and upgrades, security is such that one service could compromise another, and system software components are such that they happily interfere with each other.)

      Back in the land of all things sane (i.e. Unix style OSes), I see no reason why NOT to run a billion services on one machine. As long as you've got spare system resources, why shouldn't you make use of them? Why do I NEED the domain controller, file server, mail server, and ftp server to all be different machines? One big Unix box does the job better, and for a lower up front (and longterm!) cost than lots of tiny Windows boxes!

      Granted, there are still some issues that can't be overcome. But which really makes more sense, spending millions of dollars on tons of machines and an army of support staff, or spending a few hundred thousand on a couple of redundant machines and an admin or two to maintain them?
      • I've always liked it conceptually, as then I can have all the advantages of separate machines on a single machine, with few of the downsides.

        It makes it much easier to do security. I can limit which machines and services can interact via a firewall much more easily, as each machine will have a different IP.

        If one kernel deadlocks, or I need to change a setting that needs a reboot (very rare with a Linux box, but they do exist), I can reboot each service independently. Generally, any PAM or li

      • Ever see how horribly Linux handles running out of memory?

        Slows to a complete crawl, then either deadlocks, or goes on a process-killing spree, taking out even system daemons (nscd in my case) when it was a user process (Firefox in my case) that used up the memory. Also often kills X, which leaves a garbled console.
        • Ever see how horribly Linux handles running out of memory?

          A long time ago, yes. That's why I only trust Sun Machines for mission critical work. :-)

          But if your primary reasoning for needing virtualization is to protect against OS failures, doesn't that suggest that you need a better OS instead? (Or wait until someone fixes the OS you're using?)
  • by dstone ( 191334 ) on Monday May 16, 2005 @01:43PM (#12545167) Homepage
    I thought I was filling out the cover pages on my TPS Reports properly, but I don't know what a "C-Level Executive" is. Do I have to meet with the Bobs to find out?
    • by Quikah ( 14419 ) on Monday May 16, 2005 @01:48PM (#12545220)
      CEO, CTO, CFO, etc.
    • C was their GPA (Score:3, Insightful)

      by Kyont ( 145761 )
      C is for Chief, as in Chief Information Officer, Chief Executive Officer, etc.

      In America, it also refers to the grade-point average they barely managed to maintain while drinking their way through college and bonding with their frat brothers' dads so they could get hired onto corporate management tracks at age 23 so they could schmooze their way up to officer-level positions by age 46 and make outrageous salaries "providing leadership" for the rest of us and offering cushy internships to their sons' margina
  • by Kainaw ( 676073 ) on Monday May 16, 2005 @01:45PM (#12545177) Homepage Journal
    I want a low power/low heat computer because I want to be able to leave it on all the time. Every PC I've had has been both a computer and a space heater. It is hot enough. I want a computer without the space heater. It isn't that I care so much about global warming. I care about the warming in my own house and all the wasted electricity I have to pay for (both in the PC and my extra AC use). The problem is that it is hard to find a low heat PC. I would like to take the motherboard I have out of the case and drop in a low-heat one. But, all I can find are extremely overpriced complete systems with the obligatory Windows pre-install.
    • Get a slightly older-spec laptop. They're specifically designed to be low-power.

      PowerPC is lower power than Intel which is lower power than AMD. Transmeta if you can find one. StrongARM is also low power.

    • by Brian Stretch ( 5304 ) * on Monday May 16, 2005 @02:02PM (#12545395)
      1) Seasonic S12 series high-efficiency power supply. It makes a VERY noticeable difference.
      2) Athlon 64 CPU (preferably the new Venice or San Diego core) and Socket 939 motherboard. Enable PowerNOW! power management (current Linux distros like FC3 support it automagically, some BIOSes don't enable it by default). The CPU runs at 800MHz at 1.1V core while idle, jumping to full speed as needed (just like a notebook). Even at full speed power consumption is about half that of an Intel P4 blast furnace. Run 64-bit Linux and get even more work done per watt.
      3) Avoid high-wattage video cards like the GeForce 6800 series in favor of 6600GT's. MASSIVE power consumption difference. Depending on how hard-core a gamer you are, the 6600GT's are good enough and a lot cheaper.

      See Newegg, etc for the parts.
    • by sffubs ( 561863 ) on Monday May 16, 2005 @02:06PM (#12545451)
      If you're more worried about heat than speed, something using a VIA Epia board would do the trick.
      • I did take an interest in the Mini-ITX boards, but I don't have the time to hunt and pick through the internet to learn about it. I want to buy a PC. It is just too hard to get a motherboard, CPU, and memory and be guaranteed that it will all work together. When I search, everything is very vague about what you get. When it says 512MB RAM, does it mean you get 512MB RAM or does it mean the motherboard supports 512MB RAM? As I said, I just don't have the time to research and find out what works with wha
    • As was hinted at above (WRT54G), I cannot recommend enough getting a hackable appliance running an embedded linux.

      Check out the Linksys NSLU2 NAS device. It has a couple of USB ports, a network adapter, a 266MHz ARM processor, 32MB RAM, and an active community porting apps to it.

      A website running on this obviously couldn't stand up to a slashdotting, but it will work for a personal site, and it does a good job of streaming media around the house (aside from its primary function as a Samba server).

      The thing

    • So buy a laptop! I did 5 years ago and I'll never go back.
    • I want a low power/low heat computer because I want to be able to leave it on all the time.

      How about something like a Mac Mini, some sort of system with adaptive processor usage and an active cooling fan system? Having a good hardware sleep mode helps, too, unless you're actually running a server or something that needs to be up 24/7... my home computer spends most of its time 'asleep', but is ready to use pretty damn quickly. I don't reboot short of a system upgrade...

      LCD monitors are probably the best

  • Pretty weak article (Score:3, Informative)

    by under_score ( 65824 ) <{moc.gietreb} {ta} {nikhsim}> on Monday May 16, 2005 @01:47PM (#12545209) Homepage
    It really only mentions cost and green. I could say to someone, "data centers have huge electrical bills and you can save a lot of money by using energy-efficient equipment." That's basically what the article says.

    What about specific solutions? Even just general principles? Where would someone look to get help in reducing energy costs? What about alternative energy supplies? Are they reliable enough? Enough power density?

    I would have liked an article with a lot more information.
  • by Tenebrious1 ( 530949 ) on Monday May 16, 2005 @01:50PM (#12545237) Homepage
    That drives initiatives like consolidation. If you have 10,000 servers that are only 20% utilized, can't you get by with 2,000? The answer is probably no. But you might be able to get by with 4,000 and cut your cost in half on the equipment side. And then you start to look at not only the capital investment, but also the expense investment.

    What kind of wacky PHB approves the purchase of 10,000 servers when he only needs 4000? And more importantly, is he hiring?

    • If a boss mismanages resources that badly he might be hiring now... ...but I wouldn't count on long-term employment with that company.

    • PHB: "Oops, end of the budget year and we are way under. If we don't spend it, we get our budget cut. Quick, order something fast."

      Such are the efficiencies of large organizations...
  • Load balancing (Score:3, Interesting)

    by Colin Smith ( 2679 ) on Monday May 16, 2005 @01:51PM (#12545251)
    Save money, don't buy more machines, balance the performance more evenly. Condor, Sun Grid Engine etc.

  • by Animats ( 122034 ) on Monday May 16, 2005 @01:53PM (#12545267) Homepage
    Server software technology keeps getting worse, as .NET, J2EE, Perl, PHP, Flash etc. are deployed for pages that could just as well be static. How many barrels of oil per day go into "ad personalization"?
    • That's not insightful, it's stupid.

      American Idol has about 26 million viewers. If each of those TV sets consumes 100 watts, then that's 2.6 million kWh per week. Assuming 25 new episodes per season, that's 65 million kWh, not even counting the broadcast side of things.

      That's about 38,000 barrels of oil per year for American Idol.

      My point isn't that we should get rid of that stupid show, my point is a lot of things use a lot of energy (a hell of a lot more energy than a few CPU seconds uses). So what?
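      The 38,000-barrel estimate above is reproducible if you assume one-hour episodes and roughly 1,700 kWh of thermal energy per barrel of oil; both are my assumptions for illustration, since the post doesn't state them:

```python
viewers = 26_000_000
tv_watts = 100
hours_per_episode = 1        # assumed one-hour episodes, one per week
episodes = 25                # "25 new episodes per season"

kwh = viewers * tv_watts / 1000 * hours_per_episode * episodes
kwh_per_barrel = 1700        # rough thermal energy content of a barrel of crude
print(f"{kwh:,.0f} kWh, ~{kwh / kwh_per_barrel:,.0f} barrels")  # ~38,000 barrels
```

      Note the barrel figure uses the oil's raw thermal content; counting generation losses would roughly triple the barrels, which only strengthens the "lots of things use lots of energy" point.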
  • by asoap ( 740625 ) on Monday May 16, 2005 @01:54PM (#12545287)
    I had to look it up:

    From Wikipedia, the free encyclopedia.

    c-level is an adjective used in a variety of industries to refer to "chief" or highest-level executives. The term arises from an urge to group together the alphabet soup of acronyms (CEO, CFO, COO, etc.) found in the upper echelons of the corporate world.

  • Is it an IT issue or something from C-level executives?

    It seems like it's an issue that has relevance to both, since executives can likely benefit over the long haul (tax incentives to go green, the PR value, lower power expenditures, etc.), while IT people will be intimately involved in any implementation of green measures that relate to computing.

  • Pollution might not be a strictly "IT" issue. But neither is "paycheck", and that issue is a top priority for most people in IT.
  • A number of years ago now a company determined that the standard PC power supply is horribly inefficient (say around 30%) and that a better, more efficient p/s applied across the [then] millions of PCs in use would save a significant amount of power nationwide.

    They built it.

    It cost about twice as much as the existing PC p/s.

    Virtually nobody bought it.

    End of story.

    • Re:A Short Story... (Score:3, Informative)

      by NerveGas ( 168686 )

      30% efficient? Your numbers are hugely off. That might have been true waaaaaaay back in the day before switching power supplies, but it's not now. If that were true, a power supply delivering 300 watts to the computer would have to pull a kilowatt from the wall, and two computers would be enough to trip the 15-amp circuits so prevalent in newer construction; three computers would be much more than enough to trip a 20-amp circuit.

      At normal load, most power supplies are around or above 70% efficient.
      • Of course, if we simply increased the CAFE (Corporate Average Fuel Economy) by just five MPH, we would likely do far, FAR more good not just for the environment, but for world stability as well.

        Surely you mean MPG. I think we've got all the MPH we need :-\

    • PC power supplies are reasonably efficient, much better than the 30% you mention. There is room for improvement. The problem is that the market is very price sensitive, so government regulation would probably be required to force manufacturers to use best practices in their designs. They have already done something similar with power factor correction in the EU.
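      The 30%-vs-70% sanity check above is easy to reproduce; the 15 A / 120 V branch-circuit values are standard U.S. residential figures:

```python
delivered_w = 300
efficiency = 0.30                       # the (disputed) 30% efficiency figure
wall_draw_w = delivered_w / efficiency  # 1000 W pulled from the wall
breaker_15a_w = 15 * 120                # 1800 W limit on a U.S. 15 A circuit

print(wall_draw_w)                      # 1000.0
print(2 * wall_draw_w > breaker_15a_w)  # True: two such PCs would trip the breaker
```

      At a more realistic 70% efficiency the same 300 W load draws only about 429 W, so several machines fit comfortably on one circuit.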
  • Move the servers (Score:4, Interesting)

    by G4from128k ( 686170 ) on Monday May 16, 2005 @01:58PM (#12545348)
    For many applications, the location of the server is not that important. Servers could be relocated to a cooler climate (avoiding the overhead of air-conditioning) or to an area of lower-cost electricity (e.g., Norway has aluminum smelters that take advantage of low-cost hydropower). At the very least, the server could be collocated at a nearby power plant to reduce transmission losses. One could also look into cogeneration -- using the heat of the server to warm water that is then used for another industrial process.
  • by Guano_Jim ( 157555 ) on Monday May 16, 2005 @02:05PM (#12545429)
    If you're not interested in running your own alternative-energy IT setup, you can always outsource it:

    Solar Hosting uses renewables (i.e. solar, hence the name) to power all their web servers.

    Looks like they offer a complete solution package, from web design to hosting.
  • Assuming you are running a portable operating system and applications, it would be useful if vendors quoted the MIPS per Watt that their systems delivered. Back in the days of big iron, people paid close attention to the number of MIPS a system delivered, what their jobs required, and what was the most cost-effective model for their needs.
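    A MIPS-per-watt comparison of the kind suggested above might look like this; the numbers are entirely made up for illustration, not vendor benchmarks:

```python
# Hypothetical systems: (MIPS delivered, watts drawn). Made-up figures.
systems = {
    "old_big_iron": (400, 2000),
    "commodity_1u": (12000, 250),
}

for name, (mips, watts) in systems.items():
    print(f"{name}: {mips / watts:.1f} MIPS/W")
```

    The ranking is what matters: for a fixed workload, the box with the higher MIPS/W figure does the same jobs on a smaller power bill.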
  • solar (Score:2, Interesting)

    by DavidDeLux ( 650471 )

    It's funny that this topic appeared on /. today - I've been considering changing my computers to make them more energy efficient.

    My electric bill has been increasing, thanks to having an ever increasing number of servers and workstations chugging away whilst I do development work on them.

    I've also moved from Windows to Linux development, and have been shocked at just how good Linux is... good as in how little it needs in terms of hardware:

    • my Windows 2003 systems run P4 processors with 1G RAM, huge hard dr
    • How about moving to a Unix-like operating system that doesn't require you to compile everything? Having the computer recompiling this, that, and the other uses much more energy than letting it idle and downloading binary packages.
  • It's a great idea that I was hoping the article would expound on. Now I'm tempted to work up the numbers, comparing a full AC-fed battery backup system with a solar-based off-grid power setup. With a separate HVAC system for temperature control, the solar system would completely replace the traditional online UPS. In fact, this would be something I'd love to make money on as a VAR, selling to people. I'm sure tax advantages and environmental recognition are even possible.
  • Is it an IT issue or something from C-level executives?

    What a strange question; it's everyone's problem when we unnecessarily consume energy or any other non-renewable resource!

  • Customers don't think about their power bills when they're buying computers, typically. They think about how fast their browsers come up or their screens refresh.

    Engineers don't think about overall power efficiency when designing a computer, typically. They think about getting the heat out of the components or out of the case, depending on what part of the problem they're tackling.

    If the customers wanted more watt-efficient computers, the engineers would optimize for that.

    On the other hand, this seems
    • If the rack is passively cooled today, sure, go ahead and try it. If it's actively cooled, you can only gain in total energy efficiency if you're replacing a bad heat conductor with a better one that is also capable of generating power. If you can do that, you could have put an even better heat conductor there in the first place, thereby avoiding a few fan revolutions.

      (Hint: an unpowered Peltier cooler will make a very bad heat sink. It might be able to power its own status

  • by NormAtHome ( 99305 ) on Monday May 16, 2005 @02:27PM (#12545685)
    During the good years (gone but not forgotten), I worked in several large office buildings: six, eight and ten stories, none of which could be considered new, and I can tell you the people who designed them had no idea what the PC revolution would bring. With anywhere from fifty to two hundred PCs to a floor, the building's air conditioning system in each case was totally incapable of handling the kind of heat thrown off by that many PCs. In one building (in the warmer months) they had to have someone in at 5am to crank the air conditioning as low as it would go (the air conditioning system was centrally programmed to shut off at night, nothing we could do about it); then as the day went on it would go from 60 degrees with all machines off to just under 100 by the end of the day.

    On my last move from one building to another, I was thinking how buildings should now be built with special exhaust conduits in the floor and exhaust ducts on the PCs, like a gas dryer. That way the building's air conditioning system wouldn't have to deal with all that heat, and in the winter you could use it to help warm the building.
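The scale of that heat load is easy to estimate with standard unit conversions; a rough sketch, assuming 150 W of dissipation per PC (an assumption, not a measured figure):

```python
# Rough cooling-load estimate for an office floor full of PCs.
WATTS_PER_PC = 150        # assumed average draw per PC (not measured)
BTU_PER_WATT_HR = 3.412   # 1 W dissipated continuously = 3.412 BTU/hr
BTU_PER_TON = 12000       # 1 ton of air conditioning = 12,000 BTU/hr

def cooling_tons(num_pcs, watts_each=WATTS_PER_PC):
    """Tons of AC capacity needed just to remove the PCs' heat."""
    btu_per_hr = num_pcs * watts_each * BTU_PER_WATT_HR
    return btu_per_hr / BTU_PER_TON

print(f"{cooling_tons(200):.2f} tons")  # 200 PCs -> 8.53 tons
```

Eight and a half tons of cooling for the PCs alone, before people, lights, and sun load, is a lot to ask of a building designed before the PC era.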
  • If you have a battery system and an inverter big enough to run everything, start feeding those batteries from solar panels. When the batteries are full (default state), run the inverter and run on sunlight all day.

    Building a sun farm over your server farm makes sense to me. Oh, sure, payback is like 10 years when you buy a photovoltaic generation system. I hope some current server farm operators expect to be around that long.
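That roughly 10-year payback can be sanity-checked with simple payback arithmetic; the dollar and kWh figures below are illustrative assumptions, not quotes:

```python
def payback_years(system_cost, kwh_per_year, rate_per_kwh):
    """Simple payback: years until avoided utility bills cover the system cost.

    Ignores financing, degradation, and rate changes -- a first-order estimate.
    """
    annual_savings = kwh_per_year * rate_per_kwh
    return system_cost / annual_savings

# Hypothetical: a $25,000 PV system offsetting 25,000 kWh/yr at $0.10/kWh.
print(payback_years(25000, 25000, 0.10))  # -> 10.0 years
```

Tax credits or rising electric rates would shorten that; a server farm planning to operate for a decade or more can treat it as a capital investment like any other.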
  • by lpangelrob2 ( 721920 ) on Monday May 16, 2005 @02:51PM (#12546014) Journal
    Would you be willing to save energy by turning off your computers when you're not at work/home? Would you do it to forgo being on the top of this list []? Or is finding aliens / folding proteins more important than saving energy?

    Actually, to take the planks out of my own eye first, I probably ought to shut down the PC at 5:00p myself. (I'm at work) :-) The Macs at home (should) automatically go to sleep, though they haven't lately...

    • It's a good question - I need some of my machines to be available, not on.

      However, I don't have an easy/secure/reliable way to, say, send a WOL packet to these computers when I'm not physically there. Maybe someone could turn a WRT54G into a WOL appliance that I could leave running and HTTPS into to wake my other computers. Better yet, it could detect my ssh connection through it and automatically do it for me.
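The magic packet itself is simple: six 0xFF bytes followed by the target's MAC address repeated 16 times, sent as a UDP broadcast (ports 7 and 9 are conventional). A minimal sketch, with the MAC as a placeholder:

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# send_wol("00:11:22:33:44:55")  # placeholder MAC
```

A small CGI script or daemon on the router could call send_wol() when it sees an authenticated request, which is all the "WOL appliance" would really need.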
  • It's time to move the hard drives out of computers. We should create cheap, fast, easy-to-set-up OS storage outside the computer so that we can remove those power- and space-hungry hard drives from the box. Bootable IP-based block devices with small 'usb key' style local boot partitions seem like a reasonable way to make this happen. Linux is probably flexible enough to make this happen quickly, but to work right, network block devices should be fully integrated into the kernel. Windows might have more tro