Benchmarking Power-Efficient Servers

modapi writes "According to the EPA, data centers — not including Google et al. — are on track to double power consumption in the next five years, to 3% of the US energy budget. That is a lot of expensive power. Can we cut the power requirement? We could, if we had a reliable way to benchmark power consumption across architectures. Which is what JouleSort: A Balanced Energy-Efficiency Benchmark (PDF), by a team from HP and Stanford, tries to do. StorageMojo summarizes the key findings of the paper and contrasts it with the recent Google paper, Power Provisioning for a Warehouse-sized Computer (PDF). The HP/Stanford authors use the benchmark to design a power-efficient server — with a mobile processor and lots of I/O — and to consider the role of software, RAM, and power supplies in power consumption."
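
As a rough illustration of the kind of metric JouleSort reports (roughly, records sorted per unit of energy), here is a minimal sketch. The record count, average wattage and run time below are invented placeholders, not figures from the paper:

    # Hypothetical JouleSort-style scoring: records sorted per joule of energy used.
    # All numbers here are made-up examples, not results from the HP/Stanford paper.

    def joulesort_score(records_sorted: int, avg_power_watts: float, elapsed_s: float) -> float:
        """Return records sorted per joule (energy = average power * time)."""
        energy_joules = avg_power_watts * elapsed_s
        return records_sorted / energy_joules

    if __name__ == "__main__":
        # e.g. a node sorting 100 million records in 600 s at an average draw of 80 W
        score = joulesort_score(100_000_000, avg_power_watts=80.0, elapsed_s=600.0)
        print(f"{score:,.0f} records/J")

The point of a balanced benchmark like this is that you can raise the score either by sorting faster or by drawing less power, which is exactly the trade-off the mobile-CPU-plus-lots-of-I/O design explores.
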
  • ummmmm (Score:3, Insightful)

    by djupedal ( 584558 ) on Tuesday August 21, 2007 @09:27AM (#20304243)
    When someone considers the impact, end-to-end, from carving copper ore out of the ground to throwing the out-of-date server chassis into a furnace, then I'll pay attention...maybe.

    Until then, this is just marketing 101...
    • Not really (Score:3, Insightful)

      by WindBourne ( 631190 )
      The lifetime costs of the chips are what you can control directly. As it is, the manufacturing of the chips (and even the whole systems) is going to be relatively close from one to the next in terms of energy. The CPU/GPU is the single biggest lever we have for controlling energy.
    • Re:ummmmm (Score:5, Insightful)

      by kebes ( 861706 ) on Tuesday August 21, 2007 @09:55AM (#20304691) Journal
      Your basic point, which is that we need to consider not just operating costs but also manufacturing and disposal costs, is a valid one.

      However the way you've worded it amounts to "since we can't account for all aspects of impact, I'm not going to worry about any aspect of impact." That's a bit extreme. Surely reducing our power consumption during the operating lifetime of our servers is a step towards greater environmental and fiscal responsibility.

      Now, if you can show that the "energy saving" chips generate more pollution during production than the "normal" chips (and that this increase in pollution/energy-use/cost is greater than the savings during the lifetime operation of the chip), that's important. However I doubt that is the case. Thus, to ignore the potential advantages of power-saving measures in the data-center, simply because such measures don't address the orthogonal concerns of production impact, is silly.
      • You're right - let's do what we can, with what we've got. But at the same time, the rush to convert to so-called energy-efficient, 'sustainable' and/or renewables has already gotten out of hand.

        The crazy pork-loaded policy of subsidising turning feedstuff into ethanol is already distorting world food prices and policies, causing harm to the poor.

        The Toyota Prius (pious?) uses more fuel than a good small diesel car, and is less functional. In fact, you'd be doing more good for the planet if you just bought
        • Re: (Score:3, Insightful)

          by Azghoul ( 25786 )
          ... except that the Prius promises far more benefit long term than yet another diesel, by way of popularizing the idea that a car can, in fact, be driven by something other than burning dead shit.

          Never misunderestimate [:)] the power of technological progress - you gotta start somewhere.
          • Urm...like the fuel it burns? Remember, the Prius is not, unless you hack it, a 'plug-in'...it uses fuel like any other car, and more than some. Also, it's hard to recycle (battery pack) and performs poorly in a crash. So, not really a good role model then.

            But a Tesla, http://www.teslamotors.com/ [teslamotors.com], hmmmm

            Of course, the electricity to recharge the cells is mostly generated by coal-fired power stations. Damn...

            The sad fact is, (and yes, I mean 'sad' - I have kids, so I'm concerned about the future of the pl
            • by Azghoul ( 25786 )
              Yes of course the Prius burns fuel. Did you think we'd ever be able to one day just flip a switch (pun) and never use oil again? My point is that the Prius is a stepping stone towards making alternatives feasible in the marketplace. You can't mandate change.

              And I'm with you on nuke power, absolutely.
        • by Spoke ( 6112 )

          The Toyota Prius (pious?) uses more fuel than a good small diesel car, and is less functional.
          Please elaborate, because I don't believe you.

          I don't see any diesel cars on the market (or any diesels for that matter) that are similar in fuel economy, functionality and emissions, let alone more efficient or less polluting.
          • No problem, here in Europe we've got the Volks Polo for a start. I'm not sure if you guys in the States get the same models...
            Also, check out the latest Volks, Merc & BMW 'super efficient' models - low rolling resistance, engine cut-off on coast and at red lights...in 'real world' driving (not EPA bullshit - yup, I'm an avid Car & Driver reader too), they beat the shit out of the Prius.
            • by Spoke ( 6112 )
              But none of the latest Volkswagens, Mercedes and BMWs are clean enough for sale in the USA (yet, though they should be in the next year). With fancy (and expensive) urea injection and particle capture exhaust systems they manage to meet emissions requirements, but they still aren't for sale.

              BTW, did you remember to take into account the fact that diesel has ~10-20% more energy than gasoline and as a result ~10-20% more CO2 emissions for same fuel economy?

              Hybrids won't really get good until they start puttin
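
              To make the parent's back-of-the-envelope point concrete, here is a small sketch. The ~2.3 and ~2.7 kg CO2 per litre emission factors are commonly cited approximations, and the fuel-economy figure is an invented example:

                  # Rough CO2-per-km comparison. Emission factors are approximate;
                  # the 4.5 L/100km economy figure is an invented example.
                  CO2_PER_L_GASOLINE = 2.31  # kg CO2 per litre (approx.)
                  CO2_PER_L_DIESEL = 2.68    # kg CO2 per litre (approx., ~16% higher carbon content)

                  def co2_g_per_km(l_per_100km, kg_co2_per_l):
                      return l_per_100km * kg_co2_per_l * 1000.0 / 100.0

                  print(co2_g_per_km(4.5, CO2_PER_L_GASOLINE))  # ~104 g/km
                  print(co2_g_per_km(4.5, CO2_PER_L_DIESEL))    # ~121 g/km at the same L/100km

              So at identical litres-per-100km the diesel emits roughly 15% more CO2, which is the parent's point about comparing fuel economy across fuels.
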
        • by redcane ( 604255 )
          Looking at http://www.greenvehicleguide.gov.au/ [greenvehicleguide.gov.au], the Prius comes first in terms of greenhouse gases and air pollution, and equals the diesels in terms of fuel consumption. The Prius also offers more usable space than its most fuel-efficient competitors. Sure, the embodied energy of any new car isn't amortized across as many passenger/kms, so you are nearly always better off putting the kilometres on a used car than a new one in terms of total energy use, but the Prius amortizes those manufacture costs muc
    • This is important in the IT industry for reasons other than being a tree-hugging hippie (not that there's anything wrong with that.)

      More efficient, lower power servers directly relate to a cash savings on your electric bill. One server operating at 10% greater efficiency may not be a big deal, but it starts to matter when multiplied over a room of servers. Servers that use less power (generally) put off less heat, so you also save electricity because you don't have to cool them as much, and you can cram mor
      • However, if those more efficient servers cost twice as much to purchase per unit of work, not to mention the energy used in manufacturing, the savings are reduced.
        • "However, if those more efficient servers cost twice as much to purchase per unit of work, not to mention the energy used in manufacturing, the savings are reduced."

          I think you missed something obvious - the cost of the servers includes the cost of the energy used to make them.

          That said, they could help reduce everyone's energy consumption by posting their stuff as plain html instead of pdf. Less data to transfer, no need to open up a pdf reader, etc.

      • I've always wanted to see better benchmarking systems [livejournal.com] that show how efficient various architectures are for various types of tasks, rather than just lists of MFLOPS and MIPS numbers. I know there's always a business push to be able to compare everything from vector-oriented supercomputers with huge registers to commodity clusters to massively parallel RISC arrays using just one single "magic" number (typically MFLOPS), but, this has so little effect on real-world performance. I want a benchmark to spit ou
  • Units? (Score:4, Funny)

    by niceone ( 992278 ) * on Tuesday August 21, 2007 @09:29AM (#20304279) Journal
    I'm hoping the units are going to be kWh/slashdotting.
  • I'm not a hardware guy - but in an all hands meeting the other week, we were told that virtualization was going to save us a bunch of money on power. Our data center isn't all that big and they were talking about 2-3 thousand (US$) a month or something like that.
    • Re: (Score:3, Interesting)

      by eln ( 21727 ) *
      Everyone is looking at virtualization for this sort of thing, and it does hold some promise. Currently, though, virtualization still comes with some very significant performance penalties. I think if virtualization can further mature, and we can get more cores and cheaper memory, we will be where we want to be.

      Eventually, if RAM continues to get bigger and cheaper, more cores get packed into chips, and virtualization becomes what it is intended to be in terms of performance and stability, we will start to
      • by growse ( 928427 )

        So you mean, big boxes with loads of CPUs and tonnes of memory, all connected to a huge storage system?

        Sounds like IBM did a good thing keeping their mainframe business open :p

        Commodity hardware was sold over big mainframes on the basis that it's much more scalable. If you want to do something else, just buy a couple of relatively cheap boxes and away you go. The thing that no-one mentioned is that it suddenly starts to cost a lot more $$$ to keep the things powered and cooled properly, so now we're se

          • It makes a lot more sense when you look at the cost of computers over the past 20-30 years. In the '80s, a low-end desktop cost $3,000. Today, you can get a high-end desktop for that cost. Mainframes used to have seven-figure prices. Today, you can get them for six figures. As the price of mainframes has come down, their advantage has grown, especially when you take into account the ease of maintaining redundancy on a single mainframe vs. tens (or more) of servers.

          Basically, mainframes are cheaper
      • Yippee, back to the mainframe! I wonder if I still have my old MVS and CICS manuals in the garage...

        I think we have the virtualisation technology we need - in fact we've had it for a long time. I just don't see massive adoption happening until there's a fat, cheap and secure pipe everywhere... Until then, I'll stick with my laptop, and home PC, and server, and think my kids will too. OK salesforce.com works, but it's still peanuts compared to the PC users worldwide - and what do people connect to salesf
      • by suggsjc ( 726146 )
        First, I want to state that I haven't actually worked directly with any virtualization implementations.

        Now, when the whole "virtualization is the answer to everything" wave started rolling in, I got excited and thought it was going to actually make good on all of its promises (and it still may). However, what is keeping me from actually putting it to use is that when you put several different VMs inside one box, then all of those VMs can be taken down by a single failure (disk, power supply, nic, etc) th
        • However, what is keeping me from actually putting it to use is that when you put several different VMs inside one box, then all of those VMs can be taken down by a single failure (disk, power supply, nic, etc) that would have normally only taken down a single "real" machine.
          If you have lots of tasks that don't take much computing power by today's standards and you don't have a massive budget, then you have two choices.

          1: put each one on its own cheap shit box
          2: put them all on one higher grade box which has
          • by suggsjc ( 726146 )

            you have the option of designing your software so machine failures are tolerated, but I can't see how you can do that with a mix of legacy applications

            I really want to post my proposed architecture, but without going into too much detail, I am working on designing a system that uses mainstream software on commodity hardware in such a way as to break down each component into trivial tasks. Each of the tasks is stateless and therefore can be spread across different machines, so that parallel requests (even by the same user) can be handled by different hardware components concurrently. By having at least 2 (more in most cases) machines that can perform each

    • by jon287 ( 977520 )
      Absolutely. I've seen a dozen or more corporate datacenters with racks and racks of outdated Windows servers, mostly Pentium 2s and 3s that are nothing more than desktops turned on their sides with metal brackets from the hardware store holding them in.

      Literally dozens of those things could be displaced by single modern VMware or Xen hosts. It's all a matter of manpower and know-how. (As well as convincing the PHB that his initial outlay will be made up quickly with power savings and administration cost sa
      • In case anyone is wondering how to estimate the cost of power for running a server, I've found that simply plugging the server into a Kill A Watt EZ [the-gadgeteer.com] is quite effective. Just enter the cost for power and let it measure the power usage for a few days to get a good average, and it will calculate the cost for power per month or year automatically. To account for the cost of cooling, you may need to double that amount.
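
        For anyone doing the same arithmetic by hand, a minimal sketch of the watts-to-dollars conversion the meter automates. The 150 W draw and $0.10/kWh rate are made-up examples, and doubling for cooling is the rough rule of thumb mentioned above:

            # Estimate monthly electricity cost of a server from its measured average draw.
            # All inputs are illustrative; plug in your own meter reading and utility rate.
            avg_watts = 150.0        # average draw reported by the meter (example)
            rate_per_kwh = 0.10      # electricity price in $/kWh (example)
            hours_per_month = 730.0  # ~24 * 365 / 12

            kwh_per_month = avg_watts / 1000.0 * hours_per_month
            power_cost = kwh_per_month * rate_per_kwh
            total_with_cooling = power_cost * 2  # rough rule of thumb: cooling costs about as much again

            print(f"~{kwh_per_month:.0f} kWh/month, ${power_cost:.2f} power, ${total_with_cooling:.2f} incl. cooling")
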
      • Write up a paper/memo/whatever and send it to the guys who pay for the power over the possible cost reductions in power. Write up a whatever and send it to the guys who pay for the replacement parts and management of the aging servers on the costs of managing 50 P2s over one larger server.

        If at that point they don't get together and work (apply pressure) to reduce the number, your company is full of idiots.
        • "Write up a paper/memo/whatever and send it to the guys who pay for the power over the possible cost reductions in power."

          We did just that a few years ago. It was justification for switching from CRT to LCD monitors.
          In one year we saved enough in electricity to pay for the difference in price. Like many businesses, our electricity costs are based on our highest month's bill. By reducing that we ended up saving for the whole year.

          We're regretting our move from a single IBM pSeries to 10 HP rack-mount servers
          • Any chance your new IT director has enough "street cred" (sorry, can't think of a better term), to convince enough of the right people to consolidate and eliminate the old servers? I'd think that a modern server could replace at least 10 P2 machines and consume far less power.
            • Unfortunately, this would require a lot of convincing. These servers run our mission-critical 24/7 operations. They use a Marathon solution for high availability, and re-configuring them would be a big task. The last time we did a change (reconfiguring hard drive arrays), we had 2 hours of downtime during the backup and restore, and the Operations dept wasn't very happy. I'm going to try and talk him into replacing the servers with a single one next time we have a major software release, usually every 2-4 yea
    • by Znork ( 31774 )
      "we were told that virtualization was going to save us a bunch of money on power."

      Sort of. Unfortunately, the ease of deployment and the price reductions accomplished tend to result in a vast expansion of virtual servers instead. You're likely to end up with just as much hardware, except it's doing several times more than it used to.

      At least from what I've seen of virtualization, your bill isn't going to get smaller; you're just going to get more for it. Which isn't too bad anyway.
  • by Anonymous Coward
    EPA Official: You've gone mad with power!
    Russ Cargill: Of course I've gone mad with power! Have you ever tried going mad without power? It's boring and no one listens to you!
  • When looking at all the servers in datacenters, I often wonder how many are not actually needed. I do not mean redundant servers or ones due to be decommissioned; I mean there are too many dedicated servers. It is a common theme to have every service using a dedicated server. Is there any reason a server cannot do more than just DNS? A properly built, configured and maintained server should be able to fill multiple roles, thereby reducing space, power and cooling requirements.
    • We have several web servers with IIS. Due to different software requirements (.Net 1.1 and .Net 2.0, some add-on or another), we can't guarantee that every site will work on every server. Adding another web domain (developed with other tools) could be tricky.
      I would really like to have one server for each of the web sites running on Windows. Too bad virtualisation is out of the question, as Windows is a big memory hog.
      • by SlamMan ( 221834 )
        I've found Windows to do better under virtualization than one would expect. ESX server does some nice tricks for shaving overhead amounts when you have multiple clients of the same OS.
      • For the most part this is due to software vendors not wanting to deal with support calls on more complicated systems. It's simply easier for them to support if you take out all of the variables introduced by other applications running on the machine.

        The simplest thing for them to do is say or recommend it goes on a dedicated server. Much less of a headache come support time. Of course businesses spend millions because of it, and Dell/HP/IBM rake in the bucks.

        I mean really, do license servers need to be de

    • by samkass ( 174571 )
      This is exactly the question answered by virtualization. Mixing lots of server processes on one OS instance makes it difficult to maintain status, monitoring, fault tolerance, etc. And before virtualization, a different OS instance required a different piece of hardware. When you hear about companies using VMware to save thousands on server power and air conditioning bills, this is where that savings is being realized.
    • by LWATCDR ( 28044 )
      Well, I question whether x86 is the way to go. This one-size-fits-all mentality with CPUs needs to go. For things like web serving, why not something like the Sun T1 or T2? For NAS, PowerPC, ARM and MIPS might be more power efficient.
      For render farms, database servers, and HPC: x86-64, POWER5, UltraSPARC T2.

      The x86-64 does a good job at about everything, but it is not the best at anything. The new low-power laptop CPUs are not terrible, but I don't think they can match ARM, PowerPC, or MIPS in the
    • I have been optimising server resource utilisation for decades.

      The real problem is that most I.T. staff are either as dumb as bricks and have no idea how to make use of one or have plenty of profit to burn and just don't care.

       
    • Cramming lots of things onto one server running a single OS is generally problematic at best; it's the classic small-IT-shop mistake for long-term reliability. VMware can alleviate that by dedicating an OS to a function, so you can do things like reboots without affecting a small pile of ancillary systems. Add a SAN and now you're not adding piles of disks with each new server, and you can do VM-level load balancing and clustering; if you have to replace some hardware, move all the VMs off without any outage and fix the g
    • Why? Reliability!!!

      If every service / customer is independent of every other service / customer, then outages tend to stay small and simple. And with a simple, stupid datacenter your failure modes also tend to be simple and stupid. 99.999% uptime is often worth the cost to keep too many servers running.
  • Don't worry (Score:3, Funny)

    by Colin Smith ( 2679 ) on Tuesday August 21, 2007 @09:46AM (#20304533)
    "The Singularity" predicts that processing power will continue to increase exponentially for ever. So obviously, electricity generation will also do the same. Not a problem.

     
  • Imagine if we ever ran into widespread power availability issues: a natural or economic disaster, perhaps a series of them, or we just degenerate into a civil war between political factions. No one ever imagines we could go through a near-collapse and fragmentation similar to the old Soviet Union.

    We'd likely have bigger worries than whether we could keep our data centers running, but it's an interesting scenario to contemplate. I honestly had no idea data centers in the US consumed that much powe

  • DC power (Score:3, Insightful)

    by rhaig ( 24891 ) <rhaig@acm.org> on Tuesday August 21, 2007 @10:02AM (#20304795) Homepage
    Everything uses DC internally. Some hardware allows for DC inputs. Using DC across the board would greatly reduce cooling costs.
    • OTOH, the cabling will be more expensive and complex. AFAIK there are no standard connectors for -48V DC.
      • Sort of.

        The smaller gear we use has standard-ish connectors. Positive, negative and ground go into a terminal block, which in turn gets plugged into the device. That may just be a Cisco thing, I'm not sure -- even then, it's only on the relatively low draw devices with high gauge wire that use it.

        The bigger stuff is all lugs, which I guess you could call a standard, but it's a nightmare to wire. Cut and crimp positive, negative and ground, fight with heavy gauge wire, find suitable ground points, or make yo
        • Re: (Score:3, Interesting)

          by Doctor Memory ( 6336 )

          Cut and crimp positive, negative and ground, fight with heavy gauge wire, find suitable ground points, or make your own if you have to...
          Nonsense. You just run the wires over to the bus bars that run between the racks, drill a new hole for each one and bolt it up. No muss, no fuss, and your wires are easy to manage. Plus, open bus bars hold serious comic potential if one of your cow-orkers likes to wear a big key ring on their belt...
    • by Anonymous Coward
      The only place DC power makes sense is large data centers, where AC is converted to DC in only a few places, instead of in each machine.

      That's because DC power distribution suffers from massive losses if it's transmitted across any decent distance.
        • What is it about DC that makes losses lower? AFAIK HVDC is used widely in power transmission, especially undersea, where AC suffers losses due to induced currents in the water. I'd have thought corona losses would also be lower with DC, though correct me if I'm wrong.
        Modern solid state equipment means DC-DC conversion is more efficient than ever - AC was originally chosen because of how hard it was to convert DC between different voltages (the high ones required for transmission and the low ones required
        • note that this should read 'what is it about DC that makes losses higher'?
          • It isn't that it's DC. It's that the DC is low voltage compared to 110V AC. Wire losses are inversely proportional to the square of the voltage, so 110V AC has roughly (110/12)^2, or about 84 times, lower wire loss than 12V DC (think I^2 x R for the resistive power loss). It is very difficult to change the voltage of DC and very easy to do it with AC. For that reason, high(er)-voltage AC is preferred for power distribution.

            HVDC makes the most of the voltage limit in the wire insulation and the relative lack of inductive losses to ground fro
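
            A quick back-of-the-envelope sketch of that scaling; the 1 kW load and 0.05 ohm of wiring resistance are invented numbers, chosen only to show the ratio:

                # Resistive wiring loss for the same delivered power at different voltages.
                # P_loss = I^2 * R, with I = P_load / V. Load and wire resistance are made-up examples.
                def wiring_loss_watts(p_load_w, volts, r_wire_ohms):
                    current = p_load_w / volts
                    return current ** 2 * r_wire_ohms

                loss_12v = wiring_loss_watts(1000.0, 12.0, 0.05)    # ~347 W lost in the wires
                loss_110v = wiring_loss_watts(1000.0, 110.0, 0.05)  # ~4 W lost in the wires
                print(loss_12v / loss_110v)                         # (110/12)^2, about 84x

            The exact ratio depends only on the two voltages, not on the invented load or resistance, which is why distribution always happens at the highest practical voltage.
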
      • The only place DC power makes sense is large data centers, where AC is converted to DC in only a few places, instead of in each machine.

        Actually, traditionally the reverse is generally true: it's only the small

        That's because DC power distribution suffers from massive losses if it's transmitted across any decent distance.

        Low voltage implies high current; high current causes losses in the wiring. Traditionally, it was hard to convert DC voltages the way AC is converted with transformers. This precluded having tr

    • by javaxjb ( 931766 )
      You mean like these http://www.rackable.com/solutions/dcpower.htm [rackable.com]?
    • Hmm, I've had this suggested to me before, and the main thing that worries me (apart from MASSIVE cable losses - seriously, you work them out), is that when you turn something on or off that's really meaty you'd get some lovely magnetic fields developing. I've not checked the maths for that yet, because it's not quite back-of-envelope for me, but I'd want to double check them pretty urgently before suggesting this near your hard drives.
    • everything uses DC internally. Some hardware allows for DC inputs. using DC across the board would greatly reduce cooling costs.

      Weeeelll... not necessarily. When you start dealing with long wires, you end up having to deal with voltage drops across those wires. If your computer needs 5.000V to run reliably, you simply can't feed it with 5.000V produced by a power supply ten metres away, because by the time the electricity reaches the computer it won't be 5.000V any more.

      Which means you need to feed it w

    • using DC across the board would greatly reduce cooling costs.

      Except that it wouldn't...

      Switching (AC) power supplies have the potential to be just as efficient as, if not more efficient than, DC power supplies.

      With a DC datacenter, you have to have a big central AC/DC converter, and then a bunch of DC/DC voltage converters. There's very little to gain, even in theory.

      In practice, you'd probably do far better if you took a fraction of the money it would cost to make a DC datacenter, and instead replace all the PSUs w
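
      A minimal sketch of that comparison, with invented efficiency figures used purely to show how the conversion stages multiply out:

          # End-to-end efficiency is the product of each conversion stage's efficiency.
          # All percentages below are invented for illustration, not measured values.
          def chain_efficiency(*stages):
              eff = 1.0
              for s in stages:
                  eff *= s
              return eff

          # Conventional: facility AC -> decent server PSU (AC/DC) -> on-board DC/DC regulators
          ac_path = chain_efficiency(0.92, 0.90)
          # DC datacenter: central rectifier -> DC distribution -> on-board DC/DC regulators
          dc_path = chain_efficiency(0.95, 0.98, 0.90)
          print(f"AC path ~{ac_path:.0%}, DC path ~{dc_path:.0%}")

      With numbers in that range the two paths land within a point or two of each other, which is the parent's argument: the win, if any, comes from better power supplies rather than from the distribution scheme.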

    • That's what blades are for. Run a ton of machines off a pair of redundant power supplies.
  • Kudos to HP for taking the initiative on this, but one must question the validity of any benchmark designed by a corporation that will itself use it: will it be a true reflection of real-world performance and efficiency, or just shine a flattering light on HP's design practices?

    If nothing else, maybe this will spur the design of other relevant energy efficiency benchmarks.

    db

  • Seems to me someone left out another viable alternative.... especially since we just had that article last week about IBM consolidating several hundred PCs into a handful of Big Iron boxen...
  • I think a better solution would be for corporations to meter, document and budget power expenditures in the server room. Once costs like this are recognized corporations will be more motivated to do something about it. You can justify the cost of switching to a 12 volt setup if you have the numbers to back it up.
  • Two sentences I would highlight from the StorageMojo article [storagemojo.com]:

    1) Developers, the time may not be too far away when your code is measured on power efficiency.
    2) Software effects will be found significant as well because widely used software affects so many systems.

    This reminds me of an article here on /. [slashdot.org], about how Microsoft could become the world's greenest company with a few small changes in code to be more aggressive about using power saving modes by default. Hardware makers have been harping on about
  • by maillemaker ( 924053 ) on Tuesday August 21, 2007 @11:24AM (#20306137)
    Years ago we heard how PCs were going to be embedded in everything from the dishwasher to the refrigerator, and I was left wondering, "why?"

    Perhaps now I know.

    It would be nice if I could set my house up on a "power budget", and let my appliances vie for electrical power and load-balance themselves to stay within that budget. If all appliances spoke over the in-house wiring (or perhaps wireless) and could turn themselves off or adjust their power usage that would be awesome.

    You could implement something similar to this today with an X10 system or the like, but this is more of an off/on scenario, and is not based on actual power demands.

    It would be great if all of my electrical things in my house could get together and say, "OK, guys, we have X amount of electricity to use today between all of us. Let's figure out, based on past usage patterns, who needs to be on and when in order to hit this budget".
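
    As a toy sketch of what that negotiation might look like, a greedy scheduler that hands out a daily energy budget in priority order; the appliance names, priorities and budget are all invented:

        # Toy "household power budget" scheduler: grant each appliance energy in priority
        # order until the daily budget runs out. Everything here is a made-up example.
        DAILY_BUDGET_KWH = 20.0

        # (name, priority: lower runs first, requested kWh for today)
        requests = [
            ("fridge", 1, 1.5),
            ("water heater", 2, 4.0),
            ("dishwasher", 3, 1.2),
            ("dryer", 3, 3.0),
            ("air conditioner", 4, 12.0),
        ]

        remaining = DAILY_BUDGET_KWH
        schedule = {}
        for name, _priority, wanted in sorted(requests, key=lambda r: r[1]):
            granted = min(wanted, remaining)
            schedule[name] = granted
            remaining -= granted

        print(schedule)  # the lowest-priority load gets whatever is left over

    A real system would obviously schedule by time of day and live demand rather than a single daily number, but the budget-and-priority idea is the same.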

    • It would be great if all of my electrical things in my house could get together and say, "OK, guys, we have X amount of electricity to use today between all of us. Let's figure out, based on past usage patterns, who needs to be on and when in order to hit this budget".
      It sounds great now, but wait for the screams you hear from the shower when the hot water heater cuts out 2 minutes in...
    • Re: (Score:3, Funny)

      by RAMMS+EIN ( 578166 )
      ``You could implement something similar to this today with an X10 system or the like...''

      Dude, we've been using X11 [x.org] for some time now! X10 has been obsolete for almost exactly 20 years...
    • It would be nice if I could set my house up on a "power budget", and let my appliances vie for electrical power and load-balance themselves to stay within that budget.

      I wonder if you are mixing up two things - power and energy.

      Power is important. Enough power plants and transmission capacity have to be built to handle the peak power load. Leveling out power usage can save money in construction costs and reduce the footprint of the electrical infrastructure.

      As individuals, most of us pay for electrical

    • Smart-Power appliances sound good until you consider that each of 50 appliances may be dissipating 5W while in standby. Oops! There goes another 2000 kWh per household per year.
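
      The arithmetic behind that figure, as a quick sketch (the 50 appliances at 5 W each is the parent's hypothetical):

          # Annual energy burned by always-on standby loads (parent's hypothetical numbers).
          appliances = 50
          standby_watts_each = 5.0
          hours_per_year = 24 * 365  # 8760

          kwh_per_year = appliances * standby_watts_each * hours_per_year / 1000.0
          print(kwh_per_year)  # 2190 kWh/year, i.e. roughly the 2000 kWh quoted above
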
  • by chthon ( 580889 ) on Tuesday August 21, 2007 @11:46AM (#20306531) Journal

    Answer me this: how much power is lost through the use of inefficient programming languages and architectures which only emphasize processor speed, instead of balancing memory, processor and IO?

    Python, Perl and PHP all suffer from one big drawback: when you scale up, you need that much extra processor power. One programming language I know of (Common Lisp) offers their advantages but can be compiled to near C/C++ speeds. I suppose there are others. Don't come saying that programmers are expensive; it seems that what you gain on programmers, you lose in the cost of your datacenter. I don't know how Java fares here; it probably depends on the deployment of more recent JIT compilers.

    If you see how much a process has to wait on IO, how come there are still no good solutions for providing enough IO bandwidth that the processor can be used fully? (Unless you buy a mainframe or iSeries system, that is.)

    Just asking.

    • Re: (Score:3, Interesting)

      by DamonHD ( 794830 )
      As to Java: I have just moved a rack of (Solaris) servers @670W onto a single (Linux) laptop @18W (~25W from the mains, but sometimes it runs off-grid on solar PV).

      http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]

      I actually now control the CPU-speed setting with another small Java app (see the update for 2007/08/20 on the same page), and in particular, watching it with strace I can't see the JVM doing anything in the main loop that hand-crafted C wouldn't.

      In fact, the whole machine, including several Java and static Web ser
      • I'm interested in knowing more about how you are booting from flash while mirroring to disk. Would you explain?
        • by DamonHD ( 794830 )
          There is no live mirroring going on.

          The 4GB SD card was/is essentially a dd-ed mirror of the first three hard disc partitions (/, /usr, + spare), sized to fit 4GB, though now it has drifted as I have not applied all recent changes to the hard disc copy (manually, by mounting the partitions and hacking files with vi, etc).

          The idea is that if/when the SD card dies from too many writes (I have no idea how long this will take even though I have minimised writes) then I can boot off hard disc with minimal work
    • That depends on the application. I do believe that most of the heavy processing in Python, Perl, and PHP apps normally occurs in the database, so the database is probably responsible for most of the power consumption in those apps.
      • I should also add that if the database leaves any spare cycles, XML processing will use them up, long before the scripting code will.
        • by chthon ( 580889 )

          What would be the key here? Are there optimisations possible to speed up XML processing? Can guidelines be written to enhance XML designs for faster processing? Have there been profiling tests to see where in the XML processing the bottlenecks are? Should you even use XML at all?

  • SiCortex [sicortex.com]

    They're more focused on computation than giant racks of storage, but their 2 systems are rated at max 3W/core total power consumption including drives, power supplies, interconnect, etc. I suspect the actual power draw will be much less.

    How much storage does a "typical" datacenter have? (I know any answer would have huge variances.) For probably over two million USD or so, you should be able to get their larger system with 8TB of RAM and run their RAM-based Lustre filesystem along with the

  • So, is there some kind of standard for data centers for computing power - some combination of raw processing and data throughput - per megawatt? I mean, is there some magic number that data centers should try to achieve? And if there is, how do you measure that?

    -John Mark

  • According to the EPA, data centers -- not including Google et al.
    So data centers, excluding large data corporations, are going to use a lot of power???
  • This was an idea I had at my last job; I never got a chance to implement it. We had a lot of cluster nodes for computing various business-related things. There was a queue of work, and jobs got assigned to nodes if they matched the right criteria. The issue is that we had a few things, like crunching financial data at month/year end, which required us to have some database servers that were beefier than others.

    So the idea was to have systems repurposed automatically, using something like
    • Ah, a man after my own heart.

      Add VMware or Xen to the mix and you can pretty much get rid of the boot time as well as the install time. And if you have uniform hardware with LOM cards you can even automate the powering on/off of the base servers depending on the load of all the existing machines in the grid.

       
        • Since the install time is so little, it didn't matter much to me. VMware and Xen add complexity that wasn't necessary in my ideal, plus those solutions weren't terribly well vetted in the enterprise when I came up with the idea 3 years ago.

          The real benefit to those is when you have systems that don't need resources in chunks that large (i.e. 2-4 dual-core Opterons, 8-16 GiB of RAM). That chunk size seemed to be fine, and it was similar to what the DBAs were used to throwing into our V880s when they need
  • Sun Microsystems is going to make a killing over the next couple of years. Say what you will about "red-shift" and all their other hokey marketese, they definitely got ahead of the curve [sun.com] when it comes to the idea that power, cooling and infrastructure costs are going to become limiting factors on business growth in the coming decades.
  • ...if /. and Digg went offline for a week?

    On a serious note, since I've got nothing else to do at work (as it's mind-numbingly boring), I was trying to figure out how many electrical plugs I would actually need to live a happy life (note, this is an extreme).

    My answer: 2

    One for the fridge, the other for a radio. Of course, it would be nice to have ceiling fans too, but those don't require power plugs, just direct wiring. Think about how many things you have plugged in to your house (e.g. those cell phone
