Server Power Consumption Doubled Over Past 5 Years

Watt's up writes "A new study shows an alarming increase in server power consumption over the past five years. In the US, servers (including cooling equipment) consumed 1.2% of all electricity in 2005, up from 0.6% in 2000. The trend is similar worldwide. 'If current trends continue, server electricity usage will jump 40 percent by 2010, driven in part by the rise of cheap blade servers, which increase overall power use faster than larger ones. Virtualization and consolidation of servers will work against this trend, though, and it's difficult to predict what will happen as data centers increasingly standardize on power-efficient chips.'" We also had a recent discussion of power consumption in consumer PCs that you might find interesting.
  • by JusticeISaid ( 946884 ) on Friday February 16, 2007 @04:09PM (#18044162)
    Well, I blame Al Gore ... for inventing the Internet in the first place.
    • How dare you blame the man who has ridden the mighty moon worm!
    • by IflyRC ( 956454 )
      Well, you certainly can't blame the SUV on this one.
    • It's a bogus statistic anyways, just another liberal whacko with his panties in a bunch. Did you RTFA? "US servers now use more electricity than color TVs." Clearly they're scrambling to invent an impressive statistic by discounting black & white televisions. There must be thousands of those still out there! We might as well just give up making computers more efficient.
  • There's a Gizoogle [blogspot.com] new machines on line!
  • by bilbravo ( 763359 ) on Friday February 16, 2007 @04:10PM (#18044172) Homepage
    Nah... the figure doubled. I'm sure the overall power consumption in the US (or elsewhere) has not lessened while servers have doubled.
     
    Nitpicking, I know...
    • Re: (Score:3, Insightful)

      by bilbravo ( 763359 )
      Hit submit, not preview...

      I wanted to add, I'm sure that means the number has more than doubled; I'm sure power consumption has grown, so if the percentage doubled, that needs to be multiplied by whatever factor energy consumption OVERALL has increased.

      I got too excited about my nitpicking to post my actual thought.
  • Another nitpick: they claim servers use more electricity than TVs. But looking at the graph, half the electricity they're counting for the servers is cooling. Did they count the electricity used to cool the TVs? Might sound silly since we don't think about "cooling" TVs, but if you're running AC, any appliance you use adds to the heat burden.
  • Solution (Score:4, Interesting)

    by Ziest ( 143204 ) on Friday February 16, 2007 @04:24PM (#18044380) Homepage
    48 volt DC. Why the hell are we still putting 110 AC into the power supply and stepping it down to 24 volt DC? And what do you get when you do that? HEAT. And to compensate for not having a better power system, you then get to spend a fortune on HVAC to cool the room that you heat by stepping down the voltage. 110 power supplies make sense in the home, but in a data center it is stupid.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      The frustrating part is that some of the equipment has that ability built in, it's just not standardized enough to be used. A bunch of our cisco gear has a plug for backup power, and we had some DEC equipment years back that did, but they were different plugs and different voltages. If it were standardized, life would be good.
      I think what it would take is for UPS manufacturers to standardize a set of voltages (12, 5, 3.3 perhaps) and a plug so that it would be very easy to replace standard power supplies wi
      • A bunch of our cisco gear has a plug for backup power, and we had some DEC equipment years back that did, but they were different plugs and different voltages. If it were standardized, life would be good.

        So switch to Redback gear. It can all be powered by telco-standard 48VDC supplies. B-)
        • by Pr0Hak ( 2504 )

          A bunch of our cisco gear has a plug for backup power, and we had some DEC equipment years back that did, but they were different plugs and different voltages. If it were standardized, life would be good.

          So switch to Redback gear. It can all be powered by telco-standard 48VDC supplies. B-)

          Cisco and most (all?) of the other high-performance router/switch manufacturers have -48V DC power supplies for their equipment. Some of their equipment has only a -48V DC power option, no AC option.
    • So how do you get 12 volt, 5 volt, 3.3 volt, and 1.5 volt DC from that?

      • With DC-DC downconverters, which also generate heat (and potentially EMI).
      • Re:Solution (Score:4, Interesting)

        by Ungrounded Lightning ( 62228 ) on Friday February 16, 2007 @05:06PM (#18044968) Journal
        So how do you get 12 volt, 5 volt, 3.3 volt, and 1.5 volt DC from that?

        High-efficiency switching regulators on the blades. (They're actually getting so good that you have less heat loss by putting a local switcher near a power-hungry chip than by bringing its high current in at its low voltages through the PC-board power planes.)

        Getting the raw AC->DC conversion out of the way outside the air-conditioned environment saves you a bunch of heat load, as does distributing at a relatively high voltage (such as "relay-rack" standard 48VDC) to reduce I-squared-R losses. And switchers are more efficient with higher raw DC supplies, so going to 48V (about the highest you can while avoiding touch-it-and-die shock hazard - which is why Bell standardized on it) is much better than 12 or 24.
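
        A rough sketch of the I-squared-R arithmetic (the 1 kW load and the 10 milliohm cable run below are made-up illustrative numbers, not figures from the thread):

# Resistive distribution loss: I = P / V and P_loss = I^2 * R,
# so going from 12 V to 48 V (4x the voltage) cuts cable loss ~16x.
def distribution_loss(power_w, voltage_v, cable_resistance_ohm):
    current_a = power_w / voltage_v
    return current_a ** 2 * cable_resistance_ohm

for volts in (12, 24, 48):
    loss = distribution_loss(1000, volts, 0.010)
    print(f"{volts:>2} V distribution: {loss:.1f} W lost in cabling")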
    • DC power would be dandy if it weren't cost prohibitive to convert older, massive, well-established operating systems to it. And small incremental additions to such an existing, large installation don't justify the added expense of DC power on their own. As a result, it's not so easy for data centers to do this conversion. If it were able to pay for itself in 8 weeks, you might see more activity...
      • Re: (Score:3, Informative)

        by NerveGas ( 168686 )
        I wasn't aware that operating systems really cared which voltage was powering the hardware... :-)

        While individual systems may vary, I've noticed that the older the facility where I was working, the more likely they were to have DC power - since the facilities were "telco" before they were "telecom", and most telco stuff is DC. Even in newer datacenters, it's only the small outfits that haven't had DC, most of the larger ones have had DC available.

    • Re:Solution (Score:5, Insightful)

      by NerveGas ( 168686 ) on Friday February 16, 2007 @06:01PM (#18045676)

            Get a grip on reality.

            Even if you switch to 48V DC, you still have to convert 120 VAC to 48 V DC, then down to 12/5/3.3/1.x volts for motors and logic, so all you're doing is moving the conversion from a decentralized setup (a power supply in each computer) to a centralized one (a single large power supply). In the end, however, you still have to get from 120 down to around 1 volt for the CPU, and you're not going to suddenly make an order-of-magnitude change in the efficiency of that - or even near a doubling.

          To keep it in perspective, though, there are vastly overshadowing losses which make the small differences in centralized/decentralized conversion efficiency moot. Your 120 VAC leg is probably coming from a 440 VAC lead coming into the building, and going through a very large transformer to get 120 VAC - and the 440 VAC that comes in is coming from a much higher voltage that was converted down at least once (and perhaps more) after being transmitted very long distances. The losses in all of that are much, much higher than the losses in conversion that you mention.

          Sure, if you could generate and transmit a nice, smooth, regulated 48V DC from the power station to your computer, that would be great - but that's so unfeasible that you might as well wish for a pink unicorn while you're at it.
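
      To put rough numbers on the centralized-vs-decentralized point, here is a sketch that just multiplies assumed per-stage efficiencies; the percentages are illustrative guesses, not measurements:

# End-to-end efficiency is the product of the conversion stages either way;
# the per-stage figures below are assumptions for illustration only.
from math import prod

chains = {
    "decentralized (ATX PSU in each box)": [0.99, 0.98, 0.85, 0.90],
    # utility xfmr, building xfmr, 120VAC->12VDC PSU, 12V->1V VRM
    "centralized 48VDC plant":             [0.99, 0.98, 0.92, 0.88],
    # utility xfmr, building xfmr, 120VAC->48VDC rectifier, 48V->1V converter
}

for name, stages in chains.items():
    print(f"{name}: {prod(stages) * 100:.1f}% end-to-end")

      Either way you land in the same ballpark; the centralized plant buys a few points, not anything close to a doubling.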
      • Sure, if you could generate and transmit a nice, smooth, regulated 48V DC from the power station to your computer, that would be great - but that's so unfeasible that you might as well wish for a pink unicorn while you're at it.

        I wouldn't put it past Google to have already considered building their own power generation facilities.

        Actually doing it, on the other hand, is another question entirely.
      • Actually, stepping down AC voltage is one of the most efficient energy conversion processes man has ever produced (we can get up to ~99.75% efficiency).
    • by jd ( 1658 )
      Well, AC is much easier to transport, for a start. DC is, in general, horrible for anything of size. I'd actually go the other way - increase the voltage to homes. The UK domestic power outlets are 240 volts, 50 Hz, 13 amps. Modern American homes have two independent sets of circuits, so that they can provide power to major appliances, as they only handle 120 volts each. In the end, you might waste anywhere from twice to eight times as much over the American home's electrical supply, through the use of much
      • by amorsen ( 7485 )
        Well, AC is much easier to transport, for a start. DC is, in general, horrible for anything of size.

        What is horrible about 400V DC? If that's too dangerous, go with 240V DC instead.
        • by jd ( 1658 )
          Danger isn't the issue, and if it were, current is vastly more dangerous than voltage. (Static discharges that you survive on a regular basis can be tens of millions of volts.) DC is much more troublesome to transport because of waste, not because it's a hazard. If you want to conserve power, then anything that wastes energy is a Bad Idea.
          • by amorsen ( 7485 )
            DC is much more troublesome to transport because of waste, not because it's a hazard.

            Why do you keep believing that DC wastes energy when transported? DC is a tiny bit MORE efficient to transport at the same voltage, since there is no skin effect. So use the same voltages as you did before with AC.
  • by mdsolar ( 1045926 ) on Friday February 16, 2007 @04:25PM (#18044390) Homepage Journal
    Maybe, if they are sending out data. The standby power use of TVs and such is greater.

    Sun's David Douglas, VP Eco Responsibility, estimates that the cost of running computers (power use) will exceed the cost of buying computers in about 5 years: http://www.ase.org/uploaded_files/geed_2007/douglas_sun.pdf [ase.org]. This site has more (mainly corporate) musings on energy efficiency: http://www.ase.org/content/article/detail/3531 [ase.org].
    --
    Get abundant, get solar. http://mdsolar.blogspot.com/2007/01/slashdot-users-selling-solar.html [blogspot.com]
    • Re: (Score:3, Funny)

      >> Sun's David Douglas, VP Eco Responsibility, estimates that the
      >> cost of running computers (power use) will exceed the cost of
      >> buying computers in about 5 years

      i think if you're running linux/intel it's already the case.
      maybe the cost of sun's hardware is so high that the problem is still 5 years out for them.
      • You might be right. I know that old suns can last a very very long time, but boy they cost when they were new.
  • Moore's law (Score:5, Insightful)

    by k3v0 ( 592611 ) on Friday February 16, 2007 @04:27PM (#18044434) Journal
    Considering that processing power has more than doubled over that amount of time, it would seem that we are still getting more bang per watt than before.
  • by G4from128k ( 686170 ) on Friday February 16, 2007 @04:29PM (#18044456)
    Why does this alarm anyone and is it even really true? Several factors conspire to make this statistic both bogus and unalarming.

    1. More computers are classed as "servers." I'd bet that many of the workgroup and corporate IT computers and mainframes weren't previously classed as "servers." It's the trend toward hosted services, web farms, ASPs, etc. that is moving more computers from dispersed offices to concentrated server farms.

    2. More of the economy runs on servers - this would be like issuing a report during the industrial revolution that power consumption by factories increased at an "alarming" rate. Moreover, I'd wager that a good chunk of that server power is paid for by exporting internet and IT-related services.

    3. Electricity is only a small fraction of U.S. energy consumption. Most of the energy (about 2/3) goes into transportation (of atoms, not bits).

    It's only natural and proper that server power consumption should rise with the increasing use of the internet in global commerce. This report should be cause for celebration, not cause for alarm. (But then celebration doesn't sell news, does it?)
    • by fred fleenblat ( 463628 ) on Friday February 16, 2007 @04:52PM (#18044784) Homepage
      More to the point energy-wise, people using those servers (for on-line shopping, telecommuting, etc.) are saving tons of energy by not driving to the store, the mall, or the office to accomplish everything.
      • Excellent point! The key is not "how much energy is item X using?" but how wisely is that energy being used? Moving bits is greener than moving atoms. YouTube or Bit Torrent is thousands of times "greener" than driving to Blockbuster. Using online banking is greener than driving to the ATM or mailing a check.
    • Re: (Score:3, Interesting)

      by AusIV ( 950840 )
      I agree. Server power consumption may have doubled over the past 5 years, but what has the increase in data throughput been? Using a mutilated version of Moore's law, I'll assume that each server is doubling its throughput every 18 months. 5 years is 60 months, so each server should have doubled 3 and 1/3 times, meaning each server is over 8 times more productive than it was 5 years ago (it's closer to 10, but we'll round down, as I'm trying to make this a conservative estimate).

      It's also safe to say th
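
      A quick sketch of that doubling arithmetic (assuming a clean 18-month doubling, which is only a loose reading of Moore's law):

# 60 months at one doubling per 18 months: 2^(60/18) is a bit over 10x,
# which the parent rounds down to 8x to stay conservative.
months = 60
doubling_period_months = 18
doublings = months / doubling_period_months      # ~3.33
growth = 2 ** doublings                          # ~10.1
print(f"{doublings:.2f} doublings -> {growth:.1f}x throughput per server")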

      • by Da Fokka ( 94074 )

        Suggesting that this increase of power consumption is alarming is absurd.
        I don't think there is anyone here who claims that. However, this does mean that server power is a more suitable candidate to look to for potential energy savings now than it was 5 years ago.
        • by AusIV ( 950840 )

          Suggesting that this increase of power consumption is alarming is absurd.
          I don't think there is anyone here who claims that.
          From the summary:

          A new study shows an alarming increase in server power consumption over the past five years
  • In the average home, the refrigerator was the biggest power consumer for a long time. Now that place has been taken by the computer. Computers at home can be switched off when not in use, but for a server this is hardly possible. I'm not a computer hardware designer, but I am curious in what ways the power consumption of computers can be reduced. Using better cooling equipment? Using another semiconductor than silicon for the CPU? Or a radical change in the design of the CPU or other components? Are there experts here who can elaborate on this?
    • Re: (Score:3, Informative)

      by Technician ( 215283 )
      Using another semiconductor than silicon for the CPU? Or a radical change in the design of the CPU or other components? Are there experts here who can elaborate on this?

      Performance per watt is a biggie for chip manufacturers. Having a less than 10 watt server chip is possible, but who wants to use a Palm Pilot for a transaction server?

      Having the performance to handle a slashdotting is what is needed in many servers. Performance is first, power consumption is second. That is why the performance per watt i
      • Core 2 may use less power, but FB-DIMMs eat a lot more than DDR2 ECC, and that is where AMD is better.
      • Look at what the Core 2 Duo and quad are bringing to the server market.
        Please note the Woodcrest and Opteron are now obsolete.


        I'll grant that C2D is faster than Opteron, but Opteron is hardly "obsolete".

        Also, Woodcrest isn't obsolete at all - it's the server version of Core 2 Duo (Conroe).
    • How about not buying Windows Vista?

      All the time this power increase has been happening, chips have been getting more efficient (in terms of power per operation). However, they're also doing a lot more work. 6 years ago a typical new computer was something like a 700-1000 MHz Pentium III (except for the Celeron cheapies) with 128-256 MB of RAM. The computer I built myself this last Christmas is 2.1 GHz dual core with a gig (for now) of RAM. That's 4-6 times the clock cycles (at 64 bits, no less) and 4
      • Every time hardware improves, we don't use it to cut down power or anything like that. We use it to increase the number of operations we perform.

        Not always. One can look at chips like the VIA EPIA and AMD Geode to see strides in that area. For instance, the project I'm working on right now uses PCEngines WRAP boards, based on the AMD Geode SC1100, which is basically a low-power 266 MHz 80486. Complete with 128 megabytes of RAM, a couple of gigabytes of CompactFlash storage, and a 802.11g mini-PCI car
    • by TheLink ( 130905 )
      Refrigerator? I'd think for the average home in the USA it's either air conditioning or heating. Only in a few places is the temperature "just right" all the time.

      Of course, if you talk about the average home _globally_, then it's probably totally different. It'll be heating, cooking food, and boiling water...
  • by gavint ( 785035 )

    driven in part by the rise of cheap blade servers

    Rubbish. One of the biggest myths in server sales today is that blades consume more power. If you fill racks full of them they consume more power per square metre of floor space, not per server. If you need the same number of servers they should consume less power, largely due to the centralised AC/DC conversion.

    HP especially are working to make blades some of the most efficient servers on the market.

    • Re: (Score:3, Funny)

      by timeOday ( 582209 )
      But compared to other computers, blade servers have a higher density of processors to other expenses (especially if you include server space), so your $100K buys more CPUs. I know it's certainly arguable which is the most relevant metric, but look at it this way: the ENIAC [bookrags.com] pulled 150,000 watts. Since computers are so much more efficient now, the total burden from computers must have fallen, right? Wrong. Because the economics now allow Google to run 200K computers (a guess, since it's a secret). Sure,
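
      Rough arithmetic on that comparison; the per-server wattage and the fleet size below are guesses (as the parent says), not real figures:

# Per-machine efficiency is enormously better than ENIAC, but the fleet is
# enormously bigger, so the total draw still dwarfs the old machine.
eniac_watts = 150_000
assumed_watts_per_server = 200       # guessed draw of a modern 1U box
guessed_server_count = 200_000       # the parent's guess at the fleet size

fleet_watts = assumed_watts_per_server * guessed_server_count
print(f"Guessed fleet draw: {fleet_watts / 1e6:.0f} MW, "
      f"about {fleet_watts / eniac_watts:.0f}x one ENIAC")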
    • by TheLink ( 130905 )
      The problem with blades is in most server rooms or data centers you can't fill racks full of them anyway.

      This is because there's only so much cooling and power the data center can provide per square metre of floor space.

      So if less dense solutions are cheaper/performance you might as well use them instead of blades.

      I guess data centers will be upgrading their power and cooling, but it may be cheaper to build more data centers than to make them more dense.
  • by stratjakt ( 596332 ) on Friday February 16, 2007 @04:42PM (#18044622) Journal
    "If current trends continue" is almost always followed by a fallacious argument. Current trends rarely continue. Be it world population, transistor density, climatology, and especially at the blackjack table.

    Just pointing that out.
    • by mcrbids ( 148650 )
      Current trends rarely continue. Be it world population, transistor density, climatology, and especially at the blackjack table.

      Except that current trends have continued for 30 years in the case of Moore's law.
    • Yeah, I have to say that the numbers don't really make sense.

      In 5 years, server power has gone from 0.6% to 1.2% of the US' total energy usage.

      Is it a linear growth or a quadratic growth? (With two data points, I can say whatever I want, of course).

      So we can expect server power usage to be either 1.8% of total energy usage (linear) or 2.4% (quadratic growth).

      Neither of these numbers seem like 40% to me. Of course, back in '99 we were all talking about how w
  • by Phanatic1a ( 413374 ) on Friday February 16, 2007 @04:42PM (#18044642)
    In the US, servers (including cooling equipment) consumes 1.2% of all the electricity in 2005, up from 0.6% in 2000. The trend is similar worldwide. 'If current trends continue ...then by the year 2100, server rooms and cooling equipment will consume over 300,000% of all the electricity!
    • by Knetzar ( 698216 )
      With only 2 data points there isn't a trend. It could be linear (12.6%), exponential (~630,000%), logarithmic (???), or something else (???).
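
      A sketch of how far apart those extrapolations land when both models are fit to the same two data points (0.6% in 2000, 1.2% in 2005):

# Two points fit any growth model; linear and exponential fits diverge wildly.
p2000, p2005 = 0.6, 1.2   # percent of total US electricity

def linear(year):
    slope = (p2005 - p2000) / 5          # 0.12 points per year
    return p2000 + slope * (year - 2000)

def exponential(year):
    rate = (p2005 / p2000) ** (1 / 5)    # doubles every 5 years
    return p2000 * rate ** (year - 2000)

for year in (2010, 2100):
    print(f"{year}: linear {linear(year):.1f}%, exponential {exponential(year):,.1f}%")

      That is where the 12.6% and ~630,000% figures above come from; with two points you can justify almost any forecast.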
  • by WoTG ( 610710 ) on Friday February 16, 2007 @04:49PM (#18044738) Homepage Journal
    It's not like we plug in computers to sit around idling all day. They're doing stuff. I can send an email to anywhere on the planet instead of stuffing an envelope to have it carried by truck, boat, or plane. Cars have better power plants than ever before... they didn't get that way with back-of-the-envelope calculations! A lot of forms that I used to submit by fax or snail mail? All gone electronic.

    So, computers are using more power than 5 years ago? Who cares? If it bothers you, then get off the grid and have fun in your cave.

    • computers are using more power than 5 years ago

      There's your problem, right there. You are thinking on such a short time scale. If you look back 100 years, the amount of electricity being used by computers is INFINITELY more than before. In no time at all, COMPUTERS WILL USE ALL THE ELECTRICITY IN THE UNIVERSE.

      Clearly this is a problem. Think about it - those electrical cords have two wires. Electricity comes in one side, swirls around your computer for a bit, heating things up and showing you devil imag

  • Expect to see local government force data centers to get more efficient. Right now there are many moves afoot to reduce the amount of AC (that is, air conditioning, not alternating current) that can be provided to buildings. It will not take much of a push in this direction to make us start talking about "cooling-bound" data centers. For example, in Washington State and other states there are already limits on the amount of heating capacity (BTUs) per square foot, so this is a logical extension.
  • Won't heat become a much bigger problem before we get to the point that electricity is constrained? Rack servers are very dense from a BTU/sq ft perspective. Won't we bump against an inability to handle the cooling requirements if we double our power density per sq ft?
    • Yes, in many cases cooling is the limit (unless you install rack water/air heat exchangers). Reducing power also reduces heat output, so either way you might as well do it.
  • by DumbSwede ( 521261 ) <slashdotbin@hotmail.com> on Friday February 16, 2007 @05:08PM (#18044984) Homepage Journal
    This tends to be the trend with any useful technology. As technologies become more cost effective and energy efficient the rise in demand outpaces the energy savings as the economic advantage they offer is more fully utilized. This happened first with steam powered devices, then automotive, then air travel.

    While it may seem disturbing that computers are consuming a larger percentage of energy usage, one has to realize they probably more than offset their own energy use -- this by allowing other resources to either be used more efficiently or by enabling other economic activity that discovers and distributes resources, energy among them.
    • As technologies become more cost effective and energy efficient the rise in demand outpaces the energy savings as the economic advantage they offer is more fully utilized. This happened first with steam powered devices, then automotive, then air travel.

      Rail is more efficient than automobiles, but was still replaced by the automobile. The problem with your statement (which does apply HERE) is that often political concerns trump practical ones. We take a step forward, we take a step back. (We take a step forward, we

  • When the machines in their lust for power exhaust the conventional sources... they will turn to the only source left... mankind.

    Then we'll all have that inconvenient blue/red pill choice thingy.
  • Bullshit (Score:3, Insightful)

    by MindStalker ( 22827 ) <mindstalker AT gmail DOT com> on Friday February 16, 2007 @05:09PM (#18045012) Journal
    Trend continues. That's like saying people have been using more 120W bulbs than when they used to use 60W bulbs; if this trend continues, everyone will be using 500W bulbs by 2015.

    Yeah, as computing has gotten cheaper, people are using more of it, but that's because the relative cost of powering it has remained cheap. Don't expect the trend to continue once it becomes expensive compared to other things.
      That's like saying people have been using more 120W bulbs than when they used to use 60W bulbs; if this trend continues, everyone will be using 500W bulbs by 2015.

      Well, that makes sense. As ISPs' services return lower and lower margins, more ISPs will turn to setting up grow rooms where the heat and power consumption is hidden by that of the servers. You need a lot of high-wattage lights for that kind of operation. They probably would have gone into meth (as it uses less space for the same dollar volume), bu

  • I can attest to this personally.

    I have several white-box servers in a co-lo that together with a good stiff tailwind draw about 4 amps total.

    I also have several Dell 1950 and 2950 servers in a data center for my day job. Each one by itself draws about 3 amps (dual supply, 1.5 amps per supply, surging to 3 amps when one of the supplies is turned off for whatever reason). Granted, there are many more fans in the Dell servers than in my whitebox servers, but I have more storage in my whitebox servers.

    • by Spoke ( 6112 )
      Did you measure power factor on all servers? How many is "several" white-box servers? The main power draw in most servers is the processor. I'd bet that "several" means 3-5 single processor servers. And the Dells are probably dual-quad processor.

      Dell servers load balance incoming power over both PSUs, which is why power consumption spikes when you pull the power on one.

      Did you measure power factor on your white-box servers? If they don't have power factor correction (preferably active PFC), they likely show
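
      A sketch of why nameplate amps alone can mislead; the voltages, currents, and power factors below are illustrative numbers, not measurements of the boxes above:

# Real power is V * I * power factor; amps without PF overstate the draw,
# and load-balanced dual PSUs split the same total current between them.
def real_power_w(volts, amps, power_factor):
    return volts * amps * power_factor

print("dual PSU, both up :", real_power_w(120, 1.5, 0.95) * 2, "W")   # ~1.5 A per supply
print("dual PSU, one down:", real_power_w(120, 3.0, 0.95), "W")       # survivor carries ~3 A
print("no-PFC white box  :", real_power_w(120, 3.0, 0.65), "W at the same 3 A")

      Same real power whether one supply or two carries the load; a poor power factor just means more apparent amps for the same watts.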
  • Trends (Score:5, Funny)

    by glwtta ( 532858 ) on Friday February 16, 2007 @05:26PM (#18045244) Homepage
    "This baby is only six months old and she already has one head and two arms; if these trends continue, she'll have 4 heads and 8 arms by the time she's two!"
  • by Aphrika ( 756248 ) on Friday February 16, 2007 @05:32PM (#18045340)
    ...but how much did performance increase by?
  • The article is the usual tabloid trash which confuses the issues and has some strange tie-ins to Chernobyl in the hopes of spreading panic. Exactly what I've come to expect on /.

    The story everywhere is that servers are getting more efficient, smaller, and more dense. This means that data centres all over Europe are at their capacity for supplying electricity and cooling, with lots of empty space they can't rent out. Even the newer centres designed a few years ago are having problems. I hear the same thing a
  • From what I remember, one of the focus points of MIPS was a low footprint when it came to electrical power. I seem to recall that when AMDs and Intels were about 60W, the equivalent MIPS CPUs were about 17W or so.
    This was some years ago so things are probably different now, but at the time this was a big selling point. Same computing power, lower electrical bill.

    .haeger

  • Total XBox 360 power consumption has gone up inf% since 2000.
  • OK, so the input has increased by 2x. In terms of output, how do current servers compare to five years ago? If the output is only 1.5x and the power consumption doubles, that sucks. If you're getting 5-10x the output, then perhaps it's not really such a big deal. Through refining and better engineering most things can be made smaller, faster, and more efficient over time, but there is still a point where efficiency and output diverge.
  • A sickening thought, actually. Reminds me of when the Sudbury nickel smelter belched out 2% of the world's SO2.
  • by bcrowell ( 177657 ) on Friday February 16, 2007 @08:16PM (#18046862) Homepage
    Personally, I use an insanely wasteful server because I don't have any choice. 99% of the time its cpu is 99% idle. However:
    • You can't get webhosting with good support and reliability unless you pay for the level of webhosting that gets you your own box.
    • I need my server to be able to stand up to a spike in demand caused by ten thousand spams hitting it in three seconds...
    • ... or 1000 ssh login requests in one minute from a bot searching for weak passwords...
    • ... or a brain-dead bot requesting the same 5 MB PDF file 10,000 times in one hour, and sucking down 60 MB worth of partial-content responses.
    Similar deal with multi-core CPUs. People are talking about making desktop machines into the equivalent of a 1980 supercomputer, and one of the main justifications seems to be that anti-virus software can run all the time without affecting responsiveness. This is nuts. The internet and its protocols weren't designed for a world infested by Windows machines controlled by malware.
  • I'm not sure if it is new, but I've noticed that a lot of software vendors totally bloat the hardware requirements for their software. For example, our small college (300+ enrolled students) is looking to purchase PowerCampus. The "recommended" specs for this are something like 4 servers, 3 database and one application, each with multiple RAID arrays, for a total of some 20+ disks. The actual data storage requirements are somewhere around 4 GB total. Though I'd be surprised if it was even that much. I mean, how much da

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...