Why Don't Servers Support Power Management?

Cerlyn asks: "I am the network administrator of three server-grade machines purchased from three separate companies. The recent power problems in California reminded me that none of these servers seem to support power management. The operating systems these systems run (Linux 2.2, 2.4, and FreeBSD 4.2) are compiled to support power management, but do not detect any power management capabilities at all. Granted, no one wants a server sleeping on the job. But the way things seem to be coded, processors cannot even sleep while idle without known hardware support. Lightly loaded machines are often idle 75% of the time or more; sleeping while idle could save a significant amount of power. For many companies, the extra ten seconds it would take to spin up a backup server's hard drive(s) would likely be a non-issue. So, why don't server-grade computers support Advanced Power Management (APM), ACPI and the like?" And in the land of the rolling blackout, one has to wonder if the potential power saved could help the situation, assuming a good percentage of the big iron in Silicon Valley were configured to conserve what power it could (as opposed to adding to the drain as it does now).
  • by Anonymous Coward
If people really want to save power on their machines, just switch off the monitor. These things use about 200W even with what laughably gets called a screensaver these days. I find it incredible how much power is wasted in offices when monitors are left on overnight or over the weekend for no reason other than that some people are just too damn lazy to press the power-off switch.
  • by Anonymous Coward
    First of all, you seem to be only talking about the drives, which have to spin up. There is no cost or wear and practically no latency involved with changing the processor mode. Second, many servers sit idle at night. If you were to spin down the drives after 30 minutes idle, they would only cycle on and off once a day. And if you are worried about a drive failing on power up, that's what RAID is for. As for the UPS argument, adding 10 seconds to the shutdown sequence (for a sleeping drive to spin up) should be a non-issue unless you purchased the wrong size UPS.
  • by Anonymous Coward
    How many times have you driven by your work at night and noticed every light in every building on, even though probably less than 10% are being used? Or the lights in a lab? Or the lights in a server room? Even during the day, there are too many lights on. The point is that there are better ways to conserve than to make a server slower and less reliable.
  • Sure, RAID is great and all, but having the drives power up and down repeatedly will still shorten their life considerably. Why piss money away?

    - A.P.

    --
    * CmdrTaco is an idiot.

  • FreeBSD doesn't HLT on an SMP system, because they have not yet solved the problem of waking up the HLTed CPU in all cases. On a single-processor system HLT is called.

    For those who want to save power on a server, turn the monitor off when you leave the console. There isn't much more that you can do.

  • How about instead of worrying about buggy APM support on servers, companies start to lobby California's government for some hard regulatory changes so power companies can actually start building new power plants there? I'm sorry if the environmentalists don't like this idea, but this is the only way to fix the situation. I find it personally offensive that the federal government has to force neighboring states to sell power to California because their own short-sightedness has put them in this situation. What did they think was going to happen? Huge population growth and no new power plants in years (decades?) are a recipe for disaster.
    You could try to get everyone to suddenly decide to put solar panels on their roofs and start conserving electricity. Not bloody likely for those newly rich Internet millionaires who just bought their first $5 million home with completely automated toilet-flushing facilities and 10,000-watt lighting in the backyard so they can play nerf-gun wars in the middle of the night.


    Anyway, as others have pointed out, spinning down drives in ANY machine is a BAD idea. You're just putting more wear and tear on the system, causing it to fail sooner. That may be fine for your $1k PC with the $100 ATA hard drive in it, but when you're spinning down $50k worth of disks every once in a while you're going to kill their MTBF! If your community cannot provide adequate power for your businesses, you should move somewhere that can. Come to the midwest, for example. We'd be happy to build as many nuclear power plants as you need to get some of those fat tech jobs and money. ;-)

  • A hard drive is between 10W and 20W.

    A motherboard/CPU is maybe 80W.

    A light bulb is 100W

    A television is 300W

    A washing machine is 500W

    A vacuum cleaner is 1500W

    A hot air clothes dryer is 2000W

    A cold air conditioner is 2500W

    A hot water heater is 4000W

    So what's my point? Leaving a light bulb on overnight is far more wasteful. Watching TV for 2 hours is far more wasteful. Having a hot shower rather than a cold shower is more wasteful. Using an air conditioner rather than opening a window (or having a properly designed house) is far more wasteful.

    Try saving power in the real world first, then start worrying about the piddling small amounts of power consumed by your 20W hard disk. Turn off some light bulbs, use energy saving globes, don't use the air conditioner, have a cold shower, go read a book rather than watch the 80cm TV, ride a bicycle rather than drive that car to the local supermarket. These are all REAL energy savings.
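
    To put rough numbers on that, here's a small back-of-envelope sketch (using the approximate wattages above; the hours of use are assumptions for illustration, not measurements):

      # Daily energy for a few of the items above: just watts * hours.
      # All wattages and hours are ballpark assumptions.
      appliances = {
          "idle hard disk (24 h)":      (15,   24),
          "motherboard/CPU (24 h)":     (80,   24),
          "light bulb left on (12 h)":  (100,  12),
          "television (2 h)":           (300,   2),
          "air conditioner (8 h)":      (2500,  8),
      }

      for name, (watts, hours) in appliances.items():
          kwh = watts * hours / 1000.0
          print(f"{name:28s} {kwh:6.2f} kWh/day")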

  • Hmm. I hate to burst your bubble,

    That's OK, you didn't.

    but... Fluorescent light: 20W.

    And I said light BULB. I also later stated you should move over to energy saving globes, which are simply compact fluoros.

    46" (115CM): 220W.

    Which is trivially close to my 300W.

    317W
    Now, on to my computer...
    14" Monitor: 80W
    HDD 12V@500mA, 5V@1200mA: 12W
    Mobo/CPU: 80W
    CD-ROM drive (playing CD): ~50W
    Cable modem: 20W
    Speakers: 50W nominal
    Miscellaneous other things: 50W
    Total: 342W
    well there ya go.

    Seeing as I never mentioned computer OR monitor (I only mentioned motherboard/CPU or hard disk) I honestly can't see what you're trying to argue here. If you work out your HDD there: 12*0.5 + 5*1.2 = 12W. That's pretty close to the 10-20W figure I gave! And your motherboard/CPU is very unlikely to be 12W: the Pentium-4 is 55W just by itself.

    Next time you try and burst someone's bubble, try a little harder.

  • Good article in last week's Economist on the subject. Those ignorant politicians didn't understand deregulation, were more interested in appeasing lobby groups, and screwed it all up. They claimed that the free market of deregulation would help... but they never really allowed a free market! Now they've given deregulation a bad name and a big political stigma. They froze retail prices, but Californians are still going to have to pay extra through other means.
  • I'd say most businesses' accounting, print and file servers could be allowed to power down from closing time until opening the next day.

    But if you are in an office like mine, we have batch jobs constantly being run during that time frame. Sure, some departmental servers could be powered down, but it would be more effective to have everyone shut down their desktops when they leave for the night. (Of course, the real solution to the CA problem is to build some new power plants, which they haven't done for a decade.)

  • by the red pen ( 3138 ) on Thursday February 01, 2001 @03:20AM (#465001)
    • Do I even have to mention that Windows 2000 comes complete with a robust power management system?
    ...and when we can run it on a Sun E10000, an IBM RS/6000, or an HP K380 we'll call you up and ask how to enable the power management.
    • Microsoft covers all the bases, you people are stuck in the outfield.
    Seeing as how no one is playing baseball right now, this analogy is ironically apt.
  • The environmental regs are a small part of the problem. Deregulation dragged on for so long and was such a half-assed mess that nobody would have built a power plant out here even if they could have belched coal smoke all over a wildlife refuge. Why risk a few hundred million bucks if you have no idea what your ROI will be?

    There is some evidence that power generators in California (and other recently de-regulated states) may actually be decreasing output to increase the wholesale price of electricity.

    That is clear evidence of a market with difficult entry requirements (because if they could cut supply to increase profit, someone else would come in and build more plants to make money). It is my understanding that there have been no large power plants built in California in the last twenty years, which, given the increase in power demand, boggles the mind.

    While there is additional government power regulation risk, I think the expense of environmental regulations (particularly the impact statements) is keeping new plants from being built.

    Because of this, even if power distribution companies in CA could pass on the true wholesale price of power to ratepayers (which regulation prevents today, and which is why they are going broke), the power generating companies themselves could continue to decrease supply and keep raising prices.

    Significant environmental de-regulation of power plants is the only real solution. Better do it carefully though.
  • Got some evidence for this? The deregulation battle dragged on for years. Several new power plants have been approved since deregulation was finally settled, and except for one recent approval for a peaker plant near SFO, I'm not aware of a case where they lowered environmental regs. But since they take years to build, I don't think any of them have come on line.

    An article [eei.org] from EEI in November claims: "In fact, virtually no new large powerplants have been built in California or New York for nearly two decades--at a time when the economy has surged and new demands have been imposed on the electric grid." Another article [bcentral.com] from the San Jose Business Journal in 1998 claims: "In Northern California, the newest utility thermal unit began operating 26 years ago."

    I think the key here is large powerplants. I know there are two Northern California powerplants being built to provide about 500 MW each, but I think the definition of "large" EEI is talking about is 1000 MW and up.

    To get an idea of scale, today's forecast peak demand in California is around 30,000 MW. During the summer, it can be as high as 45,000 MW. If California has a problem now, wait until this summer which is supposed to be hot (i.e. air conditioning loads) and dry (i.e. less hydroelectric from the Northwest).

    If it was easy to build large powerplants, someone would have done so, since the wholesale electricity prices in California have been high for quite a while now, and it has been obvious to many for years that demand was outpacing supply.
  • We have about 50,000 sq ft and it's the raised floor cooling, AC, lights that suck current. We also have a ton of routers and switches and firewalls that can't power down. We have load balancing clusters that are always running.

    In a commercial environment you can't get away with an SLA that says "we'll power down your servers at random which will create 30% greater latency".
  • But perhaps someone here could answer the question: in the real world, are the heating demands of spooling a server in and out of an idle state worse or better than leaving it on?

    You make some good points. Unfortunately, it hasn't been my experience that customers would even be willing to pay less for power-thrifty service. Even if they get 1 hit an hour, they all seem convinced that the giant whale of all hits is coming and they have to be prepared for it.

    The problem with load balancers is that if you've built them right they're always busy and every node in the cluster is always doing something, say at 25-35% capacity. Otherwise you've wasted your cluster dollars if you have a whole node doing nothing for a period of time.
  • Well, I haven't found a customer willing to pay for offline or nearline spare capacity. And an online HACMP failover cluster is very expensive. We would need a completely different service model to rent capacity on demand. Probably in 2-3 years we'll be able to do that reliably. In the meantime, sure, we need a better approach than just sucking volts out of the wall for nothing.
  • by FFFish ( 7567 ) on Thursday February 01, 2001 @09:00AM (#465015) Homepage
    Why should a hard drive stop spinning? There's a great amount of inertia to overcome when it's stopped, plus a lot of static friction.

    Seems to me that the HD should spin at 15,000 RPM when it's in use, and operate on a sliding scale when not in use: the longer unused, the slower it spins, down to perhaps 1,000 RPM.

    But keeping it spinning: I should think that's important in achieving fast spin-up times and reducing the power demand during spin-up.

    --
  • (and every server should have one) Your UPS is always drawing power, whether it's an off-line model where it's keeping the battery charged, or a line-interactive or on-line model, where it's rectifying the AC power to DC and then reconstructing the AC sine wave.

    The point is, you can turn off the load side of the UPS, but the rectifier/battery charger/inverter will still draw power. Granted, it's less power, but it's still power, and now it's doing absolutely nothing. It's one thing to pay the penalty for this overhead when the UPS is doing actual work, but it's another when it's just sucking up juice.

    Check out HomePower Magazine [homepower.com] for the best scoop on renewable, off-grid power. Even if you don't want to go off the grid, most of what they talk about is applicable to rolling your own disaster recovery systems...

  • If you read my comment: let's say you have a pool of 10 servers, but sometimes you have a load that 3 can handle fine. Send 7 of them to sleep, and bring them up (it takes maybe 15 seconds) when the traffic load on the 3 active servers gets above a certain level.

    You could do this with an independent network, with Wake-on-LAN cards used only for bringing servers up and down.
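
    A minimal sketch of the wake-up side, assuming Wake-on-LAN cards as described (the MAC address and broadcast address below are placeholders):

      # Send a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the
      # target NIC's MAC address repeated 16 times, broadcast over UDP.
      import socket

      def wake_server(mac="00:11:22:33:44:55", broadcast="192.168.1.255"):
          mac_bytes = bytes.fromhex(mac.replace(":", ""))
          packet = b"\xff" * 6 + mac_bytes * 16
          with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
              s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
              s.sendto(packet, (broadcast, 9))  # UDP port 9 (discard) by convention

      wake_server()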
  • by Barbarian ( 9467 ) on Thursday February 01, 2001 @03:13AM (#465018)
    I like your load-balanced server idea... on large installations, you could have an authoritative server that's always on and, in a light-traffic situation, tells, say, 7 of the other 9 to go to sleep, then wakes them up when /. links to you...
  • Yup. That's part of it. Another part is thermal stress, which is one of the big killers for electromechanical systems.

    They are made of different materials, which expand and contract at different rates as they heat and cool. This causes microscopic flexing every time the device is power cycled. Some components, with wide engineering margins, can handle lots of power cycles. Others can't. Which one do you have? Only one way to find out, and you don't want to...

  • My ML370 here does support some functions of ACPI. Under Windows 2000, it allows the monitors and hard drives to go to sleep, and also offers hibernate. No sleep options, though.

    Also, the fans are not just low and high, they do change speed based on temperature. Running the d.net client will increase the fan speed over time.

    With tools like the Compaq remote insight boards, server managers could properly shut down and bring back servers when needed from any location in the world. Or a cheaper method would involve the Insight Manager program and WOL setup.
  • by alhaz ( 11039 ) on Thursday February 01, 2001 @05:39AM (#465022) Homepage
    Well, the issue is, the APM specification does not cover multiple cpu systems.

    As Alan Cox said [linuxcare.com], "If making that APM call reformats your disk and plays tetris on an SMP box the bios vendor is within spec (if a little peculiar). No APM call of any kind is SMP safe."

  • by rnturn ( 11092 ) on Thursday February 01, 2001 @06:00AM (#465023)
    ``drive spinup seems to be the time when a borderline drive will fail''

    Not to mention that it's the time when the drive's current requirements are the greatest. These inrush spikes are not a big problem for a system with a drive or two, but I've seen places with large RAID arrays attached to servers that popped breakers if the power came back on while the drive cabinets were sitting there with their power switches in the ON position. Apparently, not all setups allow you to, or are configured to, use the SCSI start command to sequence the drives' startups like they used to do in the days of yore. Happily, newer drives are not as power hungry (I can remember some old 5.25-inch disks that used 40+W of power), but now that these 15,000 RPM drives are coming out...

    If you're trying to save power, turning off the monitors when no one's actually sitting in front of them helps enormously. Where I used to work, whenever there was a power outage and we switched over to the UPS (no generator while I was there), standard procedure was to immediately turn off any monitors that no one was actively working on. That gave us well over another half hour of battery time. Switching to KVM boxes to handle, say, eight servers with a single monitor helped out a lot too.


    --

  • by arivanov ( 12034 ) on Thursday February 01, 2001 @04:31AM (#465024) Homepage

    First, I agree power management in a server makes sense. But not because of California legislation; rather, because the most important server parameter is MTBF. Power management can increase the MTBF and efficiency of the cooling subsystem. This in turn increases the MTBF of the disks and the entire system. One degree away from the optimum operating temperature can decrease a disk's life by a year or more.

    Also, you do not spin down disks on servers, for both business and reliability reasons. The business reason is server latency. The reliability reason is that most server HDUs hate to be spun down and their MTBF decreases (which is again business, in a sense). Also, the biggest power eaters in most modern servers are the cooling systems and the CPUs, not the disks. Disks hardly go above 2-10W nowadays, while a PIII with its fans can go up to 100W. Alpha goes even beyond that. Also, spinning disks up and down to 7200-10000 RPM can actually generate more heat and consume more power than keeping them running.

    Some bits of info by platform:

    • x86 - APM does not work at all, or has only limited functionality, with SMP systems and any newer boards. Which means that only an ACPI-supporting system will have working power management. ACPI is a new addition in Linux and BSD. Neither ACPI nor APM exists in Solaris. NT is not really using it for power management on servers, to the extent of my knowledge. So only a very up-to-date installation can actually use power management, and it will be only the CPUs. I have yet to see an x86 server where the fan is actively controlled by chassis temperature; usually servers have them hardwired at MAX. Which makes the entire exercise meaningless, as you are not actually improving your MTBF that much.
    • Alpha - Only recently someone (I forget who) modified the original DGUX PAL code to do power management on the newer CPUs. This is hardly used and unusable in all AlphaBios installs. Which is a pity, as the Alphas have always had the fan speed controlled by CPU temperature.
    • MIPS - never heard of power management. Server lines of PPC derived (u)Sparc - same.

    So overall the situation is that for one of the most popular platforms, power management is hardly used, because the OS support has only just arrived. For the second most popular platform (Sun), power management was never there. The others are pretty much in the same boat.

    And to conclude: I do not feel comfortable installing linux 2.4.0 or the ACPI support for BSD on real production machines yet.

  • by peter303 ( 12292 ) on Thursday February 01, 2001 @05:16AM (#465026)
    Not only do the servers consume kilowatts of power,
    but also require kilowatts of air conditioning.
  • by kentborg ( 12732 ) on Thursday February 01, 2001 @06:29AM (#465027)
    Three Points.

    First, APM itself might not be a good idea for serious servers, but building (and configuring) servers with some consideration of power efficiency would be smart. The power used by server farms is a horrible expense. The cooling costs of server farms are horrible. But up to now it seems that getting a computer to work at all has been the only point; how many watts it takes and how many BTUs it dumps is mostly ignored. Being Biggest and Baddest sells; efficiency does not. I expect this will soon change...

    Second, most servers are not on server farms. My basement server might be on a DSL connection that is faster than most leased lines of yore, but it is still IO-limited. So it works quite well for me to run a little hacked Think NIC box (www.thinknic.com): I added an otherwise missing hard disk and underclocked (!) the CPU, and the result takes very little power--it has to, the power supply on the thing is too small to draw much. I keep the CRT off when I am not using it. I also bought a little UPS--but the server takes so little power my backup time should be very good. Certainly I am a minimal case, but I suspect that many servers out there are over powered and misused.

    Third, why don't computers and related equipment have small builtin UPSs? They already have DC power supplies, and DC is what is needed to charge most batteries. DC is what the computer actually needs, and DC is what batteries produce. Doing some battery backup inside each box would be pretty easy. How much battery does a little ethernet hub need? External UPSs need to make AC from DC (which is never terribly efficient) and they themselves become single points for potential failure. Sure, if you need a survivable facility, buy big UPSs and generators, but the failover and resistance to tripping over power cords would be so much better if each piece of equipment had a few minutes of backup built in. A well maintained generator should be able to start up and be running smoothly within just a few minutes. If the equipment itself could last a dozen minutes or so, there would be no need for any external UPSs other than for a few CRTs. As most power problems are very short, even home users would like a few minutes of backup time.

    -kb, the Kent who thinks computers are in a brute-force '50s "muscle car" era and that there is a lot of room for a little design and deployment efficiency.

    P.S. Don't forget that most so called "screen savers" are really just entertainments that don't save anything.

  • And that's exactly why the USA is responsible for 25% of the CO2 production in the world.
  • by funkman ( 13736 ) on Thursday February 01, 2001 @03:00AM (#465030)
  • They're not meant to. If a server is in a position where it can go into a power-saving mode, then someone has not done a good job on the server farm. Consolidate any boxes that have a load so light that they would frequently go to sleep. With consolidation you save on administration (fewer boxes), you should be more secure since there are fewer boxes to administer, AND you save power because you are using fewer boxes.

    But let's say you need a box that has to be on its own and has the ability (time) to spin down. I personally would not want this, not because of the extra time for the spin-up, but because the spin-up is hard work for a motor, and for a server, once that hard drive is spinning, keep it spinning. There is much less wear on the motor in keeping it spinning than in the spin-up process. This should give a more predictable life to the drive.

  • I think a better idea would be low-power machines, rather than having the machine sleep. Let's face it: most people don't need a 1.2 GHz CPU. Many people would do fine with about 400 MHz, unless they are super-serious gamers. Even then, 500 MHz is probably fast enough. They have the technology to make lower-power CPUs (2.3 volt), so why don't they use this technology to make other components lower power too? Like video cards. The human eye can only process about 72 fps, but we have cards that do over 200 fps. What's that for, seizures? What about a 100 fps video card that used half or a quarter of the power of a 200+ fps card? Granted, a computer uses little power as it is, but when you have a company and multiply that by thousands, you can see why we in CA have a power problem. (That and other reasons.)

    There is no reason that other technology in a computer could not be low power. They do it for laptops, why not desktops?

    Maybe what the PC really needs is a redesign, so that you can have smaller and lower-power components as standard. Maybe desktop PCs should use PCMCIA or some other small technology to make them not only low power, but smaller in size.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • My point was that not everyone needs a super-fast machine. Many people who use computers in offices do word processing, Excel spreadsheets, and other things that are rarely that CPU intensive; mostly they are memory intensive. A 400 MHz machine is plenty fast for lots of people. I occasionally load up my machine, and it is a 233 MHz, but I can still listen to my MP3s, write code, and surf the web just fine. There seems to be this "I need a faster machine" thing going on when the reality is that MOST people don't. Generally speaking, gamers are the number one users of fast computers, them and weather forecasting. Yes, it is true that some people do need fast computers, but not a large group.

    As for MHz and volts, well, yes, they are related. Most faster processors have lower core voltages; this is how they get them faster, that and the fact that they use more transistors and smaller gates. They could use this smaller-gate technology to make a 400 MHz processor that would run cooler, with smaller gates but the same schematic layout. A cooler, smaller processor with a lower core voltage would use less power than the current 400 MHz processor.

    Some graphics cards can do over 200 fps. Your eye can't see it, so why do you need it? Go ahead, have your seizures, just tape it and put it on the net so I can have a laugh.

    The tech exists today to give the average person a low-power, inexpensive computer; just look at Netpliance. Their big flaw was making a home networking appliance that had to be connected to the Internet. If they had made it a low-power computer, they might have made it more popular.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • Why, in the world of high-end UNIX servers for example, would you want your backend database/web/application server to even THINK of powering down one of its drives even for a SECOND?

    Well, in the world of high end servers, you obviously wouldn't. But not all servers are high end servers. Some servers are 99% idle, and those are the candidates for smarter power use. It's a shame the hardware isn't designed to make that easy.

    Perhaps the hard disk guys should take a few months off from the race to make disks bigger (incidentally, giving the tape drive guys some time to catch up) and spend their effort on other issues, such as 1) making their disks better at surviving power-up and 2) reducing seek time.


    ---
  • Most of the wear on servers is typically on their drives, so sleeping the disks would increase failures and shorten their lifespan. For example, the large corporation that I did contract work for in San Diego had about 100 PC/NT servers, with another 100 HP-UX servers. For Y2K, they had checked for possible issues with disks, but they only restarted servers and left the disk arrays going. This is because the spinning up/down of the disk increases wear and opportunity for failure (motor bearings, etc).

    The second issue that is slightly incorrect is the state of California's power problem. The state deregulated and totally fscked up the way power was sold by allowing people to sell power at open-market prices. Power plants were then purposely shut down, decommissioned, or reduced in capacity to raise the price of power. For example: you have two power plants, PPA and PPB. They each generate 1000 MW of power at 1 cent per megawatt, for a total income of $20. You create an excuse to shut down PPB, causing a shortage of available power. This in turn raises the selling price of power to 2 cents. You have just kept your same income but halved your operating costs.

    There is a shortage of power, but not because California's usage suddenly went insane. This problem started back in the early summer in San Diego, and no one took action until the end of the summer.

    If you really wanted to conserve power, then have all the Slashdot readers retire SETI@home until this all blows over and let their boxes sit powered off or do Wake-on-LAN, as I am sure far more power is consumed by SETI@home users in CA than by non-sleeping server processors.
  • Is there a way you could only use the power management at night, though?
    Pray tell, when is "night" on the Internet?

  • What's the advantage of this over a traditional UPS, other than being built into the case? Switching back and forth between two batteries will actually use more power than using the standard AC-DC-battery float-inverter configuration that most (if not all) commercial UPS systems use.

    It takes more energy to charge a battery to its capacity than the capacity itself, due to various resistances in the charging system. With most batteries, you can only charge at a certain replenishment rate, as the chemical changes can happen only so fast, and any excess power during the cycle will be dissipated as heat.

    Which, also incidentally, blows away your theory of "much less time to charge a cell than to draw it down". You can do this in small configurations (cellphone charger whilst using the phone at the same time), but I can draw down a deep cycle lead-acid cell faster than I can fast charge it with no load on it.

    "All those tubes and wires and careful notes!"

  • We have bad electricity at work and a UPS on every box. My laptop can last about two days on the UPS without external power because of power management. The file servers (which are not being used 100% of the time) and my desktop only get about two hours before dying.
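
    As a rough sketch of why the gap is so large (runtime is just stored energy divided by load; the wattages and battery capacity below are assumptions, not measurements):

      # Runtime on a UPS scales inversely with the load it carries.
      ups_capacity_wh = 300.0   # assumed usable battery energy
      loads_w = {
          "laptop with power management": 6,        # assumed draw
          "desktop / file server, no power mgmt": 150,
      }
      for name, watts in loads_w.items():
          print(f"{name}: ~{ups_capacity_wh / watts:.0f} hours")
      # ~50 h (about two days) vs. ~2 h -- the same gap the poster sees.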

    -m

  • I was thinking the other day how much equipment I have at home that just sits there doing nothing most of the time. The amount of energy consumed by these machines is quite large. Let's see:
    - TV, standby most of the day
    - Video, same thing
    - Printer power supply (I rarely use the printer), always feels warm.
    - Power supplies of PCs. Even when turned off, the power supply is still active. One of my PCs actually has a second switch on the back to turn it off. However, I rarely use that switch.
    - Sub woofer, always has a led blinking.
    - Adsl modem, comes with a power supply too.
    - Microwave
    - Refrigerator

    All these machines are using energy constantly, even when I'm not there, 24 hours a day, 365 days a year. That's a lot of energy.
  • by Alan Cox ( 27532 ) on Thursday February 01, 2001 @04:22AM (#465055) Homepage
    Compiling in power management support on the test boxes I use cut the power bill by 20%. A lot of that actually seems to come from monitor powerdown rather than CPU idling, but with an Athlon drawing 60 watts of power at peak (or 240W once we all have nice quad-Athlon boxes) it's still a substantial saving.

    For most boxes, the CPU halting BSD and Linux do will actually give almost as good results as the APM BIOS. On laptops the APM BIOS is often measurably better, as it is able to reconfigure SDRAM timings and the like in ways only practical for box-specific code.
  • the shutdown thing so that the computer came back to the same state after a shutdown that it was in before, then I'd shut my computer off at night. The way it is, I often have 5 minutes to get something off the computer and run out the door. If I have to wait 3.5 minutes for the thing to boot up then it won't happen, so I leave it running 24/7.

    I know of the swsusp program, but it isn't included in the stock kernel, and the only available patch is for an old version of the kernel. Besides, if I change it, then security patches will no longer work.

    So, why are the kernel hackers resistant to this technology? If they are doing it for their own pleasure on their own machines, why is an instant-reboot feature not included? Why is 16-way multiprocessor support more important than instant shutdown and reboot to a hacker with one computer? My guess is that Linus et al. actually do know on which side their bread is buttered.

  • I'm sorry to say that I agree with you.

    The main beef I have against MS is that they produce a system designed for a single user, but they try to promote it as a multiuser system. When they do this, all the assumptions they built the system on are wrong, and the thing is just impossible to get to work as advertised. Sort of like trying to use an Indy race car as an off-road vehicle by putting big tires on it.

    It appears that Linux is beginning to show the same cracks. It's designed as a server and is just not fitting into the 'home/SOHO' setting. This is just one example. A home/SOHO computer will sit idle for 12hrs a day, even if it is a server. This much downtime is bound to make up for any startup cost.

    So, what would be the assumptions for a good multi-user OS for the home and small office? Would Linux require a fork to fit that niche?

  • How about instead of worrying about buggy APM support on servers, companies start to lobby
    California's government for some hard regulatory changes so power companies can actually start building new power plants there?


    Please. The environmental regs are a small part of the problem. Deregulation dragged on for so long and was such a half-assed mess that nobody would have built a power plant out here even if they could have belched coal smoke all over a wildlife refuge. Why risk a few hundred million bucks if you have no idea what your ROI will be?

    And they were smart to wait, too. Now that the faux deregulation has blown up, people are muttering about seizing the power plants. Or at least limiting them to 'fair' profit levels. Great way to attract investment, eh?

    The real solution is a dereg where retail prices aren't fixed by the government. Even here in California, not many people even bother to conserve power. Why? Because supply vs. demand doesn't matter.

    Anyway, as others have pointed out, spinning down drives in ANY machine is a BAD idea.

    Yeah, yeah. But that doesn't explain why you can't do other laptop tricks, like dropping processor speed during idle times.
  • We have about 50,000 sq ft and it's the raised floor cooling, AC, lights that suck current.

    Note that if your computers use less power, you need less cooling.

    In a commercial environment you can't get away with an SLA that says "we'll power down your
    servers at random which will create 30% greater latency".


    In load balancing situations, you could probably boot idle boxes out of the set. As utilization increases, you wake idle boxes up and bring 'em back in. It could be accomplished with little or no latency.

    And suppose you offer a discount in exchange? A quick back-of-envelope calculation suggests that these kinds of tricks would save a couple thousand dollars a year per cabinet of hardware in a California colo.
  • Pray tell, when is "night" on the Internet?

    I dunno about you, but I see a clear "night" on the internet.

    I just looked at logs for a site that draws 15 megabits/sec at peak, and less than half that during off times. In a load-balanced configuration, that suggests you can put a lot of capacity to sleep when you don't need it.
  • oh great! i was wondering when you wackos where gonna pop up on this thread. people like you
    are the reason CA and some of the other western states like WA are in the mess their in now.


    Wow! Having opinions entirely unconstrained by facts is so much easier, isn't it?

    The problem here in California is with a half-assed 'deregulation', which the utilities practically wrote. It has very little to do with environmental laws. Go pick up any recent edition of, say, The Economist.

    25% of the CO2? ok... so what? CO2 keeps us from freezing our asses off! 100% of the worlds freedom/economy/security/consumers/productivity/science/FOOD hell why we only putting out 25%? seems a bit low to me, i think we should work on it!

    Have you ever actually left the country, pal? I've lived on four continents so far, and it's more complex than you think.

    We hardly have 100% of the freedom or the security. For a first-world country, we have an extraordinary amount of crime and violence. We have about 5% of the consumers. The US may produce a lot of good science, but have you peeked in a faculty room lately? The percentage of native-born Americans is actually pretty small.

    wish you chicken littles would just go hide in the corner till the world ends. [...] after all if your right it will be next week.

    Try living in India or China for a month and then tell me what you think of environmental regulations, chief. In Delhi, many change shirts after lunch, as the air makes the shirts too grubby to look good for an entire day. In Xi'an, many people wear breathing masks in public.
  • While there is additional government power regulation risk, I think the expense of
    environmental regulations (particularly the impact statements) are keeping new plants from being built.


    Got some evidence for this? The deregulation battle dragged on for years. Several new power plants have been approved since deregulation was finally settled, and except for one recent approval for a peaker plant near SFO, I'm not aware of a case where they lowered environmental regs. But since they take years to build, I don't think any of them have come on line.

    Significant environmental de-regulation of power plants is the only real solution. Better do it carefully though.

    That doesn't clearly follow. Studying the environmental impact is one barrier to entry, but it's not a huge one. Building and running a power plant is a huge affair, and the environmental work is just one of many expenses.

    --

    Note also that lowering environmental standards may not save money overall; it can just shift costs from the people who use the power to other people. Increasing particulate emissions may save on hardware to remove it, but it increases health care costs. Why should I increase my risk of bronchitis so some bozo can fill his 4500 sq ft house with 500W halogen lamps?

    Hell, I moved to California partly because of the environment. I even pay (slightly) extra for environmentally friendly power [green-e.org]. It may be a reasonable thing for society to say "fuck the environment, I want cheap power", but that's not what voters here generally say. If citizens are willing to pay the costs of a clean environment, what's the problem?
  • If it was easy to build large powerplants, someone would have done so, since the wholesale electricity prices in California have been high for quite a while now, and it has been obvious to many for years that demand was outpacing supply.

    I was actually asking for evidence for the specific claim that environmental regulations are the reason that no new plants have been built.

    My impression is that the main reason nobody built plants for ages was that a) for quite a while, there was sufficient supply, and b) by the time the need was obvious, deregulation was in the air, making it impossible to forecast the ROI on the hundreds of millions of dollars needed to build a power plant.

    The fact that new power plants are now in the works (and have been for at least a couple of years) without major concessions on environmental regulations further suggests that the environmental regulations aren't the main problem.
  • Good article in last week's Economist on the subject. Those ignorant politicians didn't understand deregulation, were more interested in appeasing lobby groups, and screwed it all up.

    Living in San Francisco, I can assure you that the politicians aren't the only ones who don't get it. Most people here have heard of this whole "market economy" thing, but only in the way people in Indiana have heard of communism: they may not know much about it, but they know it's bad and dangerous and all right-thinking people should be against it.

    If you need proof, look no further than the bursting of the Internet bubble [fuckedcompany.com], which strikes me a much more authentic California product than cheese [realcaliforniacheese.com].
  • But perhaps someone here could answer the question: in the real world, are the heating demands of spooling a server in and out of an idle state worse or better than leaving it on?

    Agreed; some hard data on this would be great.

    The problem with load balancers is that if you've built them right they're always busy and every node in the cluster is always doing something, say at 25-35% capacity. Otherwise you've wasted your cluster dollars if you have a whole node doing nothing for a period of time.

    That's not necessarily true when you consider peak load versus off-peak load. Suppose you have a site where, on the peak day at the peak time, you serve a million hits per hour. Suppose further that to allow for surges, you never want to exceed 75% utilization. And further suppose that each box can serve 70 hits/second, or roughly 250,000 per hour.

    For peak load then, you'd need 6 boxes. But looking at stats for the biggest site I can easily check, the least busy hour is only 44% of the traffic of the busiest hour. Under these assumptions, during slack time you could put three servers into some low-power idle state and take them out of the load balancing set. And likely you'd keep a warm spare in the cabinet, which could also be idling except when needed during a failover.

    At the very least, this would save a fair bit of power. As others have pointed out, stopping and starting disk drives can reduce their MTBF. But being able to stop some of your drives for many hours a day could well increase their calendar lifespan, as well as reducing the risk of simultaneous failure.

    ---

    Of course, the software for this doesn't exist yet, but there's at least a plausible reason to have it. A back-of-envelope calculation suggests that the power bill for a setup like this would be in the $2000/yr range at $0.10/kWh. But wholesale power prices are much higher in California now than that; at $0.30/kWh, a 30% power reduction over 3 years would be $5400, which is not chump change.
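
    A quick sketch to check that arithmetic (the traffic figures, utilization cap, and prices are the assumptions stated above, not measurements):

      import math

      peak_hits_per_hour = 1_000_000
      hits_per_box_hour  = 250_000        # ~70 hits/second
      max_utilization    = 0.75
      off_peak_fraction  = 0.44           # least busy hour vs. busiest hour

      capacity = hits_per_box_hour * max_utilization
      boxes_peak = math.ceil(peak_hits_per_hour / capacity)                      # 6
      boxes_off  = math.ceil(peak_hits_per_hour * off_peak_fraction / capacity)  # 3
      print(boxes_peak, boxes_off)

      # $2000/yr at $0.10/kWh scales to ~$6000/yr at $0.30/kWh, so a 30%
      # reduction over 3 years is roughly:
      print(0.30 * 6000 * 3)              # 5400.0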
  • by dubl-u ( 51156 ) <2523987012@pota . t o> on Thursday February 01, 2001 @05:57AM (#465080)
    They're not meant to. If a server is in a position where it can go into a power-saving mode, then someone has not done a good job on the server farm.

    This just isn't true. I dunno about you folks, but even with the nominally 24x7 web sites I work on, the difference between lowest and peak is more than 2x. So if you have a load balanced configuration, it's plausible that half the servers could leave the active set and sleep at nights. And in an office setting, the peak-to-valley gap should be even higher.

    As many posters mention, stopping a running hard drive is asking for trouble. But it would be nice if all processors could drop speed when idle, which apparently works well in laptops. And in power saving modes, it would seem to make sense to gently drop a disk drive's rotational speed. If it never stops and the spin-up to full speed is very gradual, this might extend disk lifetimes rather than reducing them.
  • by redelm ( 54142 ) on Thursday February 01, 2001 @04:31AM (#465082) Homepage
    APM is basically useless for servers. You certainly don't want to be spinning down their disks (wear and high start-up power) and they don't have monitors attached.

    The server OSes (*BSD, Linux, OS/2, and even MS Windows NT) all have HLT in their idle thread. When the machine has no tasks to run, it runs the idle thread. For x86 CPUs after the 486SL, this automatically drops the CPU into power savings. Typically a CPU that would draw ~20-30W will drop to less than 1 watt at HLT. That's all you want.

    APM is more targeted at desktops where it's especially important to turn off that power-hungry monitor (100+W) and to compensate for the failings of MS-Windows9*|Me which idles in a busyloop.

    For non-x86 CPUs I cannot speak. I would hope that Sun & Alpha have something equivalent to x86 HLT powersavings by now. But my older Alpha 21066 does not. Perhaps the thinking is the machine will be busy all the time.
  • by lildogie ( 54998 ) on Thursday February 01, 2001 @05:41AM (#465084)
    SACRAMENTO - The California power grid was taken down today by a so-called "packet storm," where script kiddies coordinated themselves to ping every sleeping server in California to wake it up ...
  • Spinning up and down the drives is one thing, but there are other ways to save power. Many CPUs are able to micro-sleep (basically halt their clock) so they retain complete state (incl cache?). Any interrupt wakes them up. Even if it takes a couple of thousand cycles to come up to speed, this is plenty fast to respond to any external situation (*).

    Apart from heat expansion/contraction, I don't see how this could add any wear or tear to the computer. In fact, as the CPU will tend to run cooler, this might be a good thing in general.

    (*) if your computer can run a quarter of a second on the juice in the power-supply's capacitors after you yank the plug, this is something like 250 million cycles on a modern cpu. And I doubt it takes the cpu 1000 cycles to wake from a halted clock.
  • If a server is in a position where it can go into a power saving mode, then someone has not done a good job on the server farm.

    As others have pointed out, the load on the server farm varies and parts of it could go down when it's light.

    Unfortunately, the time when the load is light may not match the time when the load on the grid is heavy. There's no point in shutting down or "sleeping" the servers if it doesn't happen when it might make the difference on the rolling blackouts.

    Further, the servers are a drop in the bucket compared to the clients. Workstations in many businesses are NOT needed during the worst of the daily peak, and many of them are only in use part of the day anyhow. There's little to be saved by the effort to hack the server load balancing to accommodate sleeping servers, and a LOT to be gained by activating the automatic sleep-mode features on clients.

    So put the effort where there's something to be gained.
  • your 'mission' is most likely a small piece of [deleted]. like keeping a small department up. if it fails, not much of the world is going to be affected.

    But if it fails his employment is likely to be affected. I've always taken "mission critical" as meaning "paycheck critical". B-)

    Who cares about "the big scheme of things" when he's hunting for his family's next meal?

    And it seems to me that the only "big scheme of things" issue I've ever considered important is whether "the people" can, individually, handle their individual issues. Like their quality of life. Which often comes down to continued employment.
  • Yes there is a point in sleeping hardware whether or not it makes a difference on the rolling blackouts! We should be conserving energy, not wasting it, and we shouldn't wait until things are critical before taking action.

    First point: the truly limited resource is administration time. If you want to use it to save energy, you have a choice:

    - You can throw it at the servers and save a little power, which is consumed at a time where it DOESN'T risk blacking out California.

    - You can throw it at the clients and save a LOT of power, much of which is consumed at a time where it DOES risk blacking out California.

    Take your pick.

    Secondly:

    We are less than a hundred million miles from a STAR for crying out loud. There is NO energy shortage. There is only an energy CONVERSION shortage.

    The only reason we're still burning fossil fuels to generate power is that it's still CHEAPER than putting up solar panels or powersats.

    And the only reason California is browning out is that government and left-wing pressure groups interfered with the market, first by throwing roadblocks in the way of building power plants until the capacity was too small, and second by imposing controls on the power market and breaking the feedback loop.
  • i don't know what ALR's doing these days

    They got bought out a couple of years back by Gateway.
  • ../linux/Documentation/Configure.help

    Power Management support
    CONFIG_PM...

    ACPI Support
    CONFIG_ACPI
    ACPI/OSPM support for Linux is currently under development. As such, this support is preliminary and EXPERIMENTAL.

    Advanced Power Management BIOS support
    CONFIG_APM
    APM is a BIOS specification for saving power using several different techniques.

    etc...etc...etc...

    Some links of interest:

    http://phobos.fs.tum.de/acpi/index.html

    --

  • Perhaps the hard disk guys should take a few months off from the race to make disks bigger (incidently, giving the tape drive guys some time to catch up) and spend their effort on other issues, such as 1) making their disks better at surviving power up and 2) reducing seek time.

    I'll agree with you that the tape drives could do with a little catching up, but don't pretend that hard drives aren't improving in more ways than capacity.

    1) There is no way that startup cannot be the most stressful mode of operation for a hard drive. There's this thing called Newtonian physics, which applies to most objects larger than electrons; a metal disk that took less energy to accelerate from a stop to 7.5k-10k RPM than to keep spinning at that speed would be a violation of Newton's laws. There is also the problem of electric motors: by nature they appear to be of extremely low resistance when stopped, and thus allow a relatively high current to flow when power is first applied (a combination of Ohm's law and the lack of back EMF produced by moving a conductor through a B field, as described by Faraday's law). These sudden current surges can be tempered somewhat by good circuit design, but they cannot be completely eliminated. Drives are becoming more reliable overall and are better able to handle spin-up stress, but the laws of physics can't be changed, and they insist that spinning up a drive will be stressful.

    2) Reducing seek time can be accomplished in a few different ways:

    • Reduce disk diameter
    • Move heads faster
    • Increase rotational speed
    All of these things are in fact happening. 3.5" disks are now much more common than the 5.25" disks of yore, and 2.5" laptop drives are becoming more and more common; the problem is that when the disk gets smaller, its capacity gets smaller too. Drive heads are moving faster by the day, but they too are subject to Newton's laws: if you want to move drive heads faster, there's going to be more wear on the drive. Increasing rotational speed is probably the area where drives are making the biggest speed gains these days. Low-end IDE drives are now spinning at 7200 RPM; a year ago the $100 drive was spinning at 5400. This stuff is really increasing fast, and it's cool.
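
    To put a number on the rotational-speed point: average rotational latency is half a revolution, so it falls directly with RPM. A tiny sketch (the speeds are just illustrative):

      # Average rotational latency = half a revolution.
      for rpm in (5400, 7200, 10000, 15000):
          ms_per_rev = 60_000 / rpm
          print(f"{rpm:6d} RPM: {ms_per_rev / 2:.2f} ms average rotational latency")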
    _____________
  • by Nailer ( 69468 ) on Thursday February 01, 2001 @03:58AM (#465098)
    (Sorry, the spelling on that was atrocious).

    ...and always have been. The specifications aren't always fully implemented and don't perform reliably even in consumer environments - e.g. every shipping copy of Win98SE is unable to recover once the machine goes into suspend unless a patch is applied. In terms of (SME) servers, the support hasn't existed. Windows NT4 didn't support the full capabilities of either spec, and while Win2K does, it is still not in widespread use. As for Unix-likes, Linux has supported APM for some time now, fairly reliably, but some applications (specifically poorly written FTP servers) still have issues with it. Anyone know about ACPI?

    Powering down hard disks does indeed cause wear and tear, but there are other components - monitors (if you use monitors on servers), KVMs, and even switches that aren't in use during certain hours - which won't be significantly harmed by powering down.
  • You think that it's gonna sell machines if you tell people that the box is going to run slower than it has to?
    Sure. You tell people that the machine runs only as fast as it needs to, and saves you power the rest of the time. What good does a server do if it is spending 92% of its time executing an idle loop at 1.1 GHz? Spending 68% of its time executing that loop at 275 MHz (and 1/4 the power, or less) is no worse for the business's LAN and better for their power bill. Powering down during dead time is even better, especially in California, where night-time energy savings would help with the shortage of natural gas for power plants. Everybody wins.
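
    Those two idle figures are consistent, by the way: the real work stays the same, it just occupies a bigger slice of a slower clock. A quick check (treating the 1.1 GHz / 275 MHz numbers as the illustration they are):

      busy_cycles_per_sec = 0.08 * 1.1e9          # 8% busy at 1.1 GHz
      idle_at_slow_clock  = 1 - busy_cycles_per_sec / 275e6
      print(f"idle at 275 MHz: {idle_at_slow_clock:.0%}")   # ~68%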
    --
    Knowledge is power
    Power corrupts
    Study hard
  • Since we're talking about servers here, anyone who is running a bunch of servers without a KVM (keyboard, video, mouse) switch is pretty foolish. With a KVM you need only one monitor for a whole raft of servers. The technology is improving, there is at least one company that is sending all the KVM info from the servers to the switch via a single dedicated Cat5/RJ45 cable (per machine).

    Hooking this up to a flat panel monitor would save power, of course.

    The KVM is nice for when you need to have access to the console, but if you can handle a headless server, an xterm is even nicer (as long as you are running a Unix variant).

    As for a flat panel being cheaper in the long run: my 21" monitor is rated at about 200W; suppose a comparable flat panel draws 30% of that, or 60W. I'll assume an (outrageous) rate of $0.50/kWh. It costs about $0.10/hr to run the CRT and $0.03/hr to run the FPD, a savings of $0.07/hr. Pricewatch gives me about $500 for a cheap 21" CRT and $1300 for a cheap 18" FPD, an $800 price difference. You would have to run your monitors for about 11,500 hours for the price of the FPD to beat the CRT. If you work 10 hr/day, 5 days a week (and turn the monitor off when you're not there), it would take almost 4.5 years for the FPD to pay for itself. I'm not exactly sure of California electricity rates; I encourage someone who has better numbers to correct mine.

    At a more typical rate of $0.10/kWh, it costs $0.02/hr for the CRT and $0.006/hr for the FPD; it would take 57,000 hours, or almost 22 years, for the FPD to pay for itself.
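
    Since the break-even point swings so much with the electricity rate, here is the same payback arithmetic as a small sketch you can rerun with local numbers (all inputs are the figures assumed above):

      def payback_hours(crt_w=200, fpd_w=60, price_diff=800.0, rate_per_kwh=0.10):
          savings_per_hour = (crt_w - fpd_w) / 1000.0 * rate_per_kwh
          return price_diff / savings_per_hour

      for rate in (0.50, 0.10):
          hours = payback_hours(rate_per_kwh=rate)
          years = hours / (10 * 5 * 52)   # 10 hr/day, 5 days/week
          print(f"${rate:.2f}/kWh: {hours:,.0f} hours (~{years:.1f} years)")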

    Flat panels are nice, but I think you're going to need another argument to convince your boss.
  • I guess the real question I've got is this: Why would you want your server to sleep?

    Pretty simple, really. Let's say you're using load balancing, and to handle your peak load you need 20 servers, but at off-peak times you only need 25% of the capacity: 5 servers. If you can shut down the other 15, you get about a 75% power savings during off-peak times. This is especially the case if the market you are serving is regional.

  • I guess I can't fully convince you yet, but I do know one thing for sure: the capability to do this has to be built into the hardware first, before the software can use it. That hardware is available right now, since Wake-on-LAN is becoming popular. Manufacturers are building it into products right now; if there were not a demonstrated need, they wouldn't be. As for software that supports it... I dunno, I'm a hardware guy.

    History has proven that software will be written that will fill the needs that present themselves. If it isn't available now, it will be.

    Energy savings just for the sake of the environment don't take off quickly, but if there is money to be saved, stand back, it's gonna happen.
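
    Since wake-on-LAN came up: the "magic packet" itself is trivially simple - six 0xFF bytes followed by the target's MAC address repeated sixteen times, usually sent as a UDP broadcast. A minimal Python sketch (the MAC address here is made up):

        import socket

        def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
            # Payload: 6 x 0xFF, then the target MAC repeated 16 times.
            mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
            payload = b"\xff" * 6 + mac_bytes * 16
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))
            s.close()

        send_magic_packet("00:11:22:33:44:55")   # hypothetical MAC of the sleeping server
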
  • by Lion-O ( 81320 ) on Thursday February 01, 2001 @04:28AM (#465117)
    Maybe this is an option for Windows machines (no trolling intended, since I really wouldn't know), but I'd consider this a big NO-NO for *nix based machines. Sure, maybe you can save some power in theory by letting some hardware sleep or spin down for a while. But how much power would it cost to get everything back up & running every hour of the day?

    Getting things back up usually costs more power than letting them spin & run. I don't know about you and your servers, but mine cannot give up on those hourly cron jobs, since some jobs simply have to be done. So basically I think it would end up consuming even more power than it does now - not to mention the extra wear and shortened lifetime of the hardware, which will surely not please my boss.

  • A recent /. story had an MS drone pointing out that Linux didn't have the ability to hot-swap RAM and CPUs, a needed enterprise feature.

    The ability to shut down a CPU or a bank of RAM would have great benefit in large high-availability systems.

    It wouldn't take much to extend this to power savings, which shouldn't affect semiconductors as much as mechanical devices.

    If you had an 8-way, 4GB server that goes mostly idle during extended periods - overnight or on weekends - it could throttle back to 2 CPUs and a gig of RAM, powering the rest off. If the load average starts increasing, bring more CPUs online. Same thing if there is a hardware problem: shut down the affected parts and limp along on the rest till it's fixed.

    Given that Linux has only just (in 2.4) started supporting multiprocessor systems of this sort, it is a few versions away, but it is coming. A rough sketch of what that could look like follows.
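
    Purely as an illustration of the policy - nothing like this exists in the 2.2/2.4 kernels, so assume a much later kernel that exposes CPU hotplug under /sys - the load-based throttling could look something like this Python sketch:

        import glob, os

        def rebalance(min_cpus=2):
            # The 15-minute load average is the third field of /proc/loadavg.
            load15 = float(open("/proc/loadavg").read().split()[2])
            wanted = max(min_cpus, int(load15) + 1)    # crude policy: keep one CPU of headroom
            # cpu0 cannot be unplugged; it is always online and counts toward `wanted`.
            others = sorted(glob.glob("/sys/devices/system/cpu/cpu[1-9]*"))
            for idx, path in enumerate(others, start=2):
                with open(os.path.join(path, "online"), "w") as f:
                    f.write("1" if idx <= wanted else "0")

        rebalance()
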
  • Besides the wear and tear that HDD spindown and spinup put on the server, you have to give up some things that I for one wouldn't want to be without. I once tried hard to make a server (for home use) that would spin down when there was nothing to do. It did - and after some time Linux woke it up to write some log entries and "do its thing". The drive spun down, Linux woke it, the drive spun down, etc...

    It made me crazy. I saw patches which could make Linux stop doing "its thing", but as far as I remember they did so by disabling things that I wouldn't want to be without in a failover situation :/ (One possible workaround is sketched below.)
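
    For what it's worth, the usual recipe for a box like that is to give the drive an idle timeout and keep the chatty writers off that disk. A sketch with a made-up device name - adjust everything for your own machine:

        import subprocess

        DISK = "/dev/hda"    # hypothetical device; substitute your own

        # hdparm -S takes a timeout in units of 5 seconds for values 1-240,
        # so 120 means "spin down after 10 minutes of idle".
        subprocess.run(["hdparm", "-S", "120", DISK], check=True)

        # The rest is configuration rather than code: mount filesystems with
        # 'noatime' so reads don't cause writes, and point syslog at a ramdisk
        # or another spindle, otherwise the periodic log flush keeps waking
        # the drive - exactly the cycle described above.
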
  • by zensonic ( 82242 ) on Thursday February 01, 2001 @03:21AM (#465120) Homepage
    There are too many things that can go wrong when drives spin up and down. Particularly if the drive hasn't been idle for a long period of time.

    I have had disks which spun for years without problems in a server, but when the machine was taken down in order to be upgraded in some way, some of those disks didn't survive. Why, you might ask? The above post gives one reason. Another is that years of heat degrade the components of an old drive, so it can no longer stand the power-up cycle, which puts more stress on the components than steady running does.

    Other drives have used up their internal lubricant over the years, and when powered up after a server upgrade the motor no longer has the power to compensate for the not-so-smooth motion.

    If it ain't broken don't stop it :)
  • Think "Intranet".
    Of course if the company's multinational there is no night, but for an 8-5 company with no night employees the servers could all go into low power mode at midnight and wake back up at 6:00 AM.
  • by ChristTrekker ( 91442 ) on Thursday February 01, 2001 @04:43AM (#465128)

    I'm surprised nobody's hit on this yet. Ditch the x86 boxen and use PPC instead. The power savings could be significant, maybe 50% or better. (Unless you've been using the waste heat from your servers to supplement the office toaster and microwave.) No need to spin the drive up and down from sleep; just use an efficient processor.

    Those millions of dollars that the gov't loaned out to the power companies could have been put to better use helping finance this upgrade. At least you'd have a better architecture to show for it, instead of being 17 days further on with no improvement.

    (For the humor-impaired: that was a joke.)

  • I don't understand this. This has been said many times, but IME it isn't true. Every night I hibernate my laptop, go home, and then boot it up again. The battery lasts an hour when the machine is running, and it takes me half an hour to get home - yet waking it doesn't consume half of the battery, or even 1% of it.

    I do agree that shutting it down and bringing it back up will wear the parts down more.

  • 25% of the CO2? OK... so what? CO2 keeps us from freezing our asses off! 100% of the world's freedom/economy/security/consumers/productivity/science/FOOD - hell, why are we only putting out 25%? Seems a bit low to me, I think we should work on it!

    Unfortunately, people like you vote.

  • I'm sure this will open up a feeding frenzy on Slashdot, but some of us NT admins can't do that. While some consolidate onto 1 or 2 boxes, a well-designed NT network will run at least 3 servers even if it only has a few dozen users. For example, you should have a PDC, a BDC, and an Exchange server. If you combine the functions (which I've done at small sites), you open yourself up to headaches later on.

    Besides, my PDC/BDCs get hammered as people log in, but there is little login activity during the rest of the day. Exchange doesn't get hit much at night, as only the coders are logged in and less e-mail is sent. This is a silly argument.
  • Servers are all about running faster, longer, harder. You think that it's gonna sell machines if you tell people that the box is going to run slower than it has to?

    If it costs money to implement and it's not going to sell machines, it's not going to happen.
    --

  • Plus the fact that drive spinup seems to be the time when a borderline drive will fail. The only 'safe' power management I can see for servers would be adaptively reducing processor speed in relation to CPU load. Sleep and suspend are dangerous on a server, especially since a power failure can happen at any time, even in places other than California; servers need to respond to a UPS power-fault signal at a moment's notice. As for DPMS monitor power savings, I don't see any harm - the monitor can be changed with or without the server running, so there's little danger of downtime from that. (A sketch of load-driven speed scaling follows.)
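
    That adaptive-speed idea could look something like the Python sketch below, assuming hardware and a kernel that expose a cpufreq-style interface under /sys (a later addition - stock 2.2/2.4 kernels have nothing of the kind); the load threshold is arbitrary:

        import time

        CPU = "/sys/devices/system/cpu/cpu0/cpufreq"

        def read(name):
            return open("%s/%s" % (CPU, name)).read().strip()

        def set_speed(khz):
            # Requires the 'userspace' scaling governor to be selected.
            with open("%s/scaling_setspeed" % CPU, "w") as f:
                f.write(str(khz))

        lo, hi = read("scaling_min_freq"), read("scaling_max_freq")
        while True:
            load1 = float(open("/proc/loadavg").read().split()[0])
            set_speed(hi if load1 > 0.5 else lo)    # full speed under load, minimum when idle
            time.sleep(10)
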
  • by Trepalium ( 109107 ) on Thursday February 01, 2001 @02:54AM (#465142)
    There are too many things that can go wrong when drives spin up and down, particularly if the drive hasn't been idle for a long period of time. There's actually a problem where the drive heads can accumulate material from the disk itself, and when the drive is powered down and the heads come to rest, the residue either falls onto the platters or virtually glues the heads to the platter. Not good.

    CPU and video card power management would be far less worrisome, since there's no mechanical wear and tear caused by them going in and out of sleep - with the exception of the monitor, which can be replaced externally without downing a server. However, I'd be concerned that they not go so completely to sleep that a power failure drains the UPS before the server ever tries to shut itself down.

  • I must not understand your math. Let's say:
    • A = amount of power available
    • B = amount of power requested

    Currently we have A < B. You're saying the only "solution" to the "problem" is to make A bigger? How about making B smaller? Duh...


    MyopicProwls

  • But aren't many servers idle during the night while people sleep?

    Name one hour of the day when everybody in the world with Internet access is asleep. The world is more than the USA, you know.


    Like Tetris? Like drugs? Ever try combining them? [pineight.com]
  • I guess that's one of the really cool things about the Crusoe; it adapts the speed of the processor to the load it has to manage. Recently people discovered that Crusoe is actually quite useful on server systems.

    Coming from an Amiga background, I still think it's a crime that normal home computers require a fan at all ;-), so that should also be dealt with sooner or later. With a constant 1% CPU usage (heh, except with Mozilla of course ;-), I don't think I really need that fan most of the time - if only the design of my computer were a little better thought out.

    It's... It's...
  • This is a classic example of a "generated shortage." There's enough power in California, but almost 10,000 MW of generation is "offline for maintenance." And has been for a year.

    This is at best a distortion. It's true that there are plants off-line for maintenance, but that happens every winter. Plants can't run 24/7/365, and there is some work that requires plants to be down for extended periods, rather than just at night. The generators cleverly schedule this for periods (like the winter) when peak demand is low so that it doesn't completely hose the grid. The problem right now is that:

    1. Power consumption is higher this year than last year because of population growth
    2. Power consumption is higher this year because it's been an unusually cold winter in California
    3. More plants are off-line than usual because they ran harder than normal to meet last summer's power crisis
    4. The utilities are having trouble convincing power suppliers to sell them power because the producers are (rightly) worried about getting paid when the utilities go bankrupt

    The current power crisis is the result of a number of factors, and is not just the product of a big conspiracy among power producers to drive up prices.

  • Please. The environmental regs are a small part of the problem.

    This is very true. A recent article in the LA Times pointed out that there is a grand total of one power plant in the state operating at reduced capacity because of environmental regulations - and that one had refused to participate in a program that would have allowed it to operate at 100% had it wanted to. The NIMBY (Not In My Back Yard) syndrome has a lot more to do with it than environmental regs. Nobody wants a power plant near them, and California is full enough that it's tough to find anywhere in the state to put one. When people discover that comparatively clean power plants can generate a lot of local jobs, they'll be much more willing to let them be built.

  • by vchoy ( 134429 ) on Thursday February 01, 2001 @03:21AM (#465159)
    Maybe one of the reasons we do not see power management is that most critical servers, even though they are not used most of the time, are monitored all of the time.

    I have some critical servers that I am responsible for, and I run network monitoring agents on these machines that poll for disk utilization, capacity, CPU load, network ping status, etc. Big Brother [bb4.com] is one such network monitoring system I use; others I have tried are What's Up, BMC Patrol, and HP OpenView. I know these do not take up many system resources, but the server needs to be 'awake' in order to collect and report network management data. Servers that are polled this often never get a chance to sleep, unfortunately. Even wake-on-LAN enabled servers might never go to sleep if a network monitoring program pings them every 5 minutes to see if they are alive.


  • And in the land of the rolling blackout, one has to wonder if the potential power saved could help the situation, assuming a good percentage of the big iron in Silicon Valley were configured to conserve what power it could (as opposed to adding on to the drain as it is now).
    Oh come on. California has huge heavy industries, gigantic metropolitan areas, teeming millions of people with personal computers, dishwashers, air conditioners, televisions... it's a huge, populous, and industrial state. The impact of the information industry is still a minor fraction of the total demand. Even if you turned out the lights on Silicon Valley completely, California would still have a problem, because they haven't built enough new plants to supply the demand (of a huge, populous, industrial state).

    If you had the exact same deregulation fiasco in Texas, or New York, or Illinois, you'd see the same thing, and there aren't Silicon Valleys of even comparable size in any of those states.

    Is it just Geek Hubris to assume that our industry is the most important and central of all others? I see this same thing in reports on how our economy is supposedly tanking right now, just because of the NASDAQ. There's an entire world out there beyond our walled garden, you know...

  • ...to save power on a server, I think. These machines are definitely the wrong place to save. I really like the idea of power-saving systems, but on a server??? There are so many other machines running in a regular office which could save power, and honestly I don't feel comfortable with any power management technique yet - these things just don't really work! I saw a TV show lately where they said that the air conditioning of the World Trade Center consumes as much energy per day as my hometown (Munich, 1.9 million citizens). That's where the real energy-suckers are, not down in the server room...

    Slightly off-topic: I live in Germany, and when I heard about the power shortage in California I couldn't believe it. It reminded me of playing SimCity 2000 and really messing the power plants up, hehe...

    Lispy
  • It does no good whatsoever if the hardware doesn't support it. I believe this guy is referring to high-end server equipment. It takes a combination of hardware and software support to get power management to work, and even then it's still so freakin' buggy that I wouldn't trust it on a server.
  • For many companies, the extra ten seconds it would take to spin up a backup server's hard drive(s) likely would be a non-issue.

    My users get annoyed at sub-second response-time lags. What company can stand 10 seconds of waiting? (Excluding NT servers, because those users are used to the regular daily or weekly reboot of their server.)

    Furthermore, I try to exclude every non-useful line of code from the OS and daemons that I run on the server. That includes power management.
  • by ma11achy ( 150206 ) on Thursday February 01, 2001 @03:28AM (#465178)
    Yep, I'm fully in agreement on the wear-and-tear issues of power management. The spin-up and spin-down of hard drives is the one area that causes the most wear and tear. I personally would not like to power down any drives for any reason on one of our systems running in the field. These machines (and the drives they contain) are DESIGNED to run 24x7 if configured properly. They are not designed (and in most cases, not desired) to spin down and spin up several times during a week, or a day. Why, in the world of high-end UNIX servers for example, would you want your backend database/web/application server to even THINK of powering down one of its drives even for a SECOND? Especially with the high-volume traffic of today's IT industry.
  • by clare-ents ( 153285 ) on Thursday February 01, 2001 @03:26AM (#465180) Homepage
    "
    Consolidate any boxes that have that light of load so light that they may frequently go to sleep.
    "

    What if there is only one box? E.g., the router/server machine in my house does nothing for 95% of the day and only has stuff to do when I'm actively using it. I'm fine with it going to sleep / powering off hard disks etc., since it's only a couple of seconds of wakeup time. I can't power it off entirely, since that requires physical access to bring it up again [I can't be bothered to try wake-on-LAN], plus every time it boots it seems to force a disk check on me, taking about an hour (a possible fix for that is sketched after this comment).

    This is a valid point for server farms, though - the main servers I use have an obvious 24-hour periodic cycle [loaded while the US and Europe are awake, empty at other times] and it would be great to bring up additional machines as required.
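
    On the forced disk check: if it's the periodic ext2 mount-count check rather than recovery from an unclean shutdown, it can be switched off - at the cost of losing that safety net. A sketch with a made-up partition name:

        import subprocess

        PARTITION = "/dev/hda1"    # hypothetical; substitute your own partition

        # -c 0 disables the maximum-mount-count check, -i 0 the time-based check.
        subprocess.run(["tune2fs", "-c", "0", "-i", "0", PARTITION], check=True)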

  • by sootman ( 158191 ) on Thursday February 01, 2001 @04:13AM (#465185) Homepage Journal
    I used to live in California, and back when there were water shortages there, everyone was asked to put bricks or half milk cartons in their toilet tanks, water their lawns every other day, etc. Later, I learned that agriculture in CA uses 85% of the state's water. So, if every urban person in the whole state (about 30 million people) had cut their water usage in *half* (not bloody likely, or even possible), that would have only made a difference of about 7%.

    (Besides, at the same time, I got a job cleaning up a gov't construction site. The boss, at one point, took a running hose and stuck the nozzle into a urinal to save himself from having to walk to turn it off. I mentioned the drought. "The government," he said, "does not have a water shortage.")

    Similarly, a quick look at the laws of thermodynamics reminds us that a computer's energy cost doesn't end at the computer: the air conditioning has to spend still more energy pumping the heat it gives off back out of the room. Air conditioning, lighting, and utilities for new residents are some of the reasons behind the brownouts in the Golden State. A few idling CPUs and spun-down hard drives, while a Good Thing, wouldn't make much difference.

  • If a server had power-down capability, it would also last longer on a UPS. In the event of a brownout/blackout, if the UPS told the server to go into power-save mode, the system would recover more quickly when power came back up, and could live longer on UPS power than a fully 'up' server.
  • by Helix150 ( 177049 ) on Thursday February 01, 2001 @02:49AM (#465203)
    Good point. IMHO another good idea would be for load-balanced setups to put one server in suspend when there is low traffic.
  • by Helix150 ( 177049 ) on Thursday February 01, 2001 @03:43AM (#465204)
    Well, if you have only one app/db/http/etc. server running, then you have neither the want nor the need for it to go offline. In such a case, the first user that wants something is going to be waiting 10-45 seconds for it to come back online.

    However, if you have clustered servers, redundant servers, or load balancing where several servers do the same job, but not all the time, then APM is good. For example, say you have a website with a load-balanced HTTP cluster and a redundant backup cluster. You would want one of the main machines to be always on. When it got overloaded it would wake up one of the others, and grab more of them as load increased during peak hours. Then when everyone went to bed, it would suspend those it woke. (A rough sketch of this policy appears at the end of this comment.)

    For the redundant cluster, they should be kept in a constant state of suspension, but be ready to wake up should the main cluster fail.

    As for the reliability of the drives, that's why you have RAID arrays. I would rather periodically weed out the weak ones and have the RAID redistributed than shut them all down and have half not come back on. It's absurdly unlikely that two drives are going to fail at the same power-up. If one dies, that's OK; if two die, it's not. Give them all ample opportunity to fail and they will do so one at a time. Give them few opportunities and they will die in clumps.
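
    A sketch of that wake/suspend policy in Python - the capacity number and the callbacks are invented for illustration; the wake callback could be a wake-on-LAN sender, the suspend callback an APM/ACPI suspend command:

        REQUESTS_PER_SERVER = 500                  # assumed capacity of one box, in req/s

        def servers_wanted(req_per_sec, minimum=1, spare=1):
            needed = -(-req_per_sec // REQUESTS_PER_SERVER)   # ceiling division
            return max(minimum, needed + spare)               # keep some headroom awake

        def reconcile(num_awake, req_per_sec, wake, suspend):
            # wake(n) / suspend(n) bring n servers up or put n servers to sleep.
            target = servers_wanted(req_per_sec)
            if num_awake < target:
                wake(target - num_awake)
            elif num_awake > target:
                suspend(num_awake - target)

        # Example: 1800 req/s -> ceil(1800/500) = 4 busy servers + 1 spare = 5 awake.
        print(servers_wanted(1800))
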
  • by JamesGreenhalgh ( 181365 ) on Thursday February 01, 2001 @03:13AM (#465207)
    Spinning drives up and down will (depending, obviously, on your load and spindown configuration) actually end up using more electricity in the worst case. It's like people who turn their engine off in light traffic jams - starting it up costs a lot more fuel than just leaving it running at idle. With desktop PCs it makes sense, since they honestly tend to do nothing at all for long periods unless people are sitting at them (let's discount the seti@homers ;-) ), so you can make a big power saving.

    The other thing here is that, as various people have pointed out, the most common failure point on drives is during spinup - the time at which the most stress is being exerted on the drive. If you've got a big ultra-reliable server, you'll want it to stay that way, and the best way to do that is by keeping everything the same. The car analogy fits here too: a car that does 100,000 miles in its life with lots of stopping and starting and small journeys will be considerably worse off than one that just ran long distances.

    In case anyone was wondering - the adverse wear+tear of power up/down also affects PCs (cpu/PSU/drives), and even monitors. I've seen it happen often enough, and it would be interesting to know how much money the PC hardware industry makes out of components failing early due to "power saving" measures (be they system controlled or little Johnny turning his PC off at night).

    If we move over to better renewable resources, like vast farms of hamsterwheels - this won't be a problem :-)
  • I hate responding to my own story, but it is the *hardware* that doesn't support power management. Instead of your standard APM suite, these machines add frills I don't use, such as BIOS-based remote diagnostics that can be run over a modem, and a boot menu choice that lets me boot off a special partition to configure the RAID array (useless if the array fails, I might add). The *software* (operating system, etc.) cannot enable something that the hardware cannot do.

    And yes, I realize that spinning hard drives down may be a very bad thing, but who says the CPUs can't be put into an idle mode? Even Ethernet controllers nowadays have lower-power modes they can go into while not actively transmitting.

  • by Decado ( 207907 ) on Thursday February 01, 2001 @02:49AM (#465216)
    I assume that power saving only works if there aren't frequent power-downs and power-ups; if these machines were power-saving for one minute and then had to power up again, there probably wouldn't be much (if any) saving, whereas the wear on the servers would be a lot greater.
  • I think this isn't an OS thing as much as it is a hardware design issue. No Intel platform that I know of can do this, and I venture to say most Linux boxes are Intel.

    Now that SGI is offering Linux support, that kind of changes everything. I find it hard to believe that SGI hasn't written their own Linux utilities to deal with this, since it would be a waste to load Linux on an Origin 2000 or similar computer if you don't get the benefit of being able to control the hardware the way it is designed to be controlled.
    -

  • And servers are more than just web servers, you know. Plenty of corporations have a lot of boxes that don't serve anyone other than internal users, and those are generally only used during business hours.
  • I don't even need sophisticated power management; I'd be happy just to be able to suspend or hibernate my desktop when I'm not using it. Windows seems to know how - there's a "sleep" button on the case - but the Linux 2.2 kernel doesn't, no matter which power management options I configure it with.

    There is a generic "hibernate" patch [worldvisions.ca] for the Linux kernel, but it doesn't seem to have ever made it into the main kernel tree. Too bad.

  • by rigor6969 ( 240549 ) on Thursday February 01, 2001 @03:03AM (#465241) Homepage
    Power supply fans and hard drive motors take the most wear and tear during power-up; always have. They are also 99% of your power utilization - that's where all the power goes: the redundant power supplies, the RAID unit. I certainly wouldn't want my n+1 power supply to be sleeping when the primary fails. Besides, a nice RAID 0+1 like the one I use at work takes upward of a few minutes to power up correctly; stage that with some sleeping power supplies and you're talking minutes of downtime. I'm sure most Unix apps won't tolerate that. And most NOCs use polling software like What's Up and the redalert.com service, which test your SQL server and so on - those won't work either.

    You Cali folks just need to have off-site backups in states where there is no issue (hint: I have an empty NOC, msg me :) or just keep the lights off. I highly doubt "the Internet" is truly eating up all of your power. How many light bulbs versus computers are there in California? Why don't ya turn some of those off?

    ==sam=== free server vulnerability scan = www.vulnerabilities.org
  • Comment removed based on user account deletion
  • No -- this is a reason to bitch about Microsoft. In 1995, some of their "Product Managers" came into our company and promised us that NT 4.0 would support APM (and Plug'n'Pray) as part of a pitch to get us to standardize on NTW clients.

    They backed off on that promise and punted the feature to NT5, which was going to ship in 1997, or 98, or 99, or last year. In the meantime, laptop vendors had to hack their own APM drivers, but desktop support hardly shipped. The bottom line is that there are four years' worth of NT machines out there that can't even shut down the monitor when idle.

