AMD Hardware

AMD 'Venice' Core Shows Big Drop in Power Needs

dtjohnson writes "Lost Circuits has carefully measured the power consumption of four recent Athlon 64 cores and has found that power consumption has been dramatically reduced in the new 'Venice' core, from the relatively low (compared to Intel's P4) numbers of the original 2003 'Clawhammer' core to less than 30 watts under load and less than 10 watts for Windows at idle. This huge power reduction was apparently accomplished by a combination of 90 nm die shrink, Silicon-on-Insulator technology, and something called 'dual-stress liner technology.' As Lost Circuits points out, power consumption worldwide has been exploding as more CPUs come online and the CPU power requirements increase so a significant power reduction will reduce the burden on electrical grids everywhere."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by plover ( 150551 ) * on Monday May 02, 2005 @04:18PM (#12411805) Homepage Journal
    I did the math a few years ago to figure out how much energy was being used by the distributed.net project. I no longer have all the numbers handy, but I remember I came up with roughly 10 trains filled with coal being required to break RC5-64. That was assuming an idle CPU consumed 15 W and a busy CPU consumed 60 W.

    Now, these numbers were completely extrapolated from the key-cracking rates I saw on my Athlon 1200 and from estimates based on published power consumption. But it pointed out to me that these distributed contests are not good for us, and they're not free. It personally cost me about $40.00/year in electricity. So I don't play the distributed computing games any more. (A rough sketch of this kind of estimate follows this thread.)

    • by Anonymous Coward
      Agreed. I get rather huffy when I think about how many tons of CO2 are released and how much of our limited fossil fuels have been spent on a frivolous project like SETI, especially when radio telescopy is the least likely method of contacting extraterrestrials.
    • by mmkkbb ( 816035 ) on Monday May 02, 2005 @04:24PM (#12411910) Homepage Journal
      is this you? [everything2.com]
    • Uh.... did you count your hard drive spinning as well? As far as I remember, I've never had my box power up and not be able to get to a load screen.... HOWEVER, I've been stuck there a few times because of my extra hard drive / etc.... One might think that the finely tuned physical movement of the drive platters accounts for as much of the power consumption as the raw CPU usage, if not more...
      • by Anonymous Coward
        Alas not. Hard drive power consumption is pretty small. 10-12W in normal action. Now compare to CPU and GPU usage. These are the buggers that can take the juice to stop your drives firing up.
    • by marcus ( 1916 )
      While I was cracking with d.net, the heat generated by the PCs involved was simply replacing the heat that would have been generated by my home heater anyway. It's an even exchange and 100% efficient. That is, all of the energy expended in crunching the keys ended up heating my house.

      A completely different argument is that any advance costs. So, we learn about RCx, distributed processing pros and cons, some d.net politics, etc. If you expect to gain this knowledge for no cost you are simply being naive.
      • You actually raise a very good point, in winter, that is. In summer, you need more AC to offset the heat gain. It is probably a net gain of about zero for many areas (more if you heat in winter more than cool in summer, for example) if you consider all actual costs.

        The only actual power loss is the photons emitted by your monitor when it's in use, which is likely less than 1% of the energy used, so yes, 99%-100% efficient is pretty accurate. I hear lots of people complaining about "wasting" energy with
      • Do you use electric heat in your house? Do you live in a cold climate?

        If you live in, say, Norway, I suppose there's a good chance the answer to both questions is yes.

        Otherwise, your argument doesn't stand. If you live in a warm climate, for at least part of the year, the CPU heat is at best not welcome or at worst increases your A/C load. In the winter, the production of the CPU heat may result in more energy use/pollution than what would have been produced by, say, a gas furnace, depending on the sou
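
    A minimal Python sketch of the kind of back-of-envelope estimate described at the top of this thread: what an always-busy CPU costs in electricity over a year. The 15 W idle / 60 W load figures are the poster's assumptions; the electricity price is an illustrative assumption, not a quoted rate.

      # Extra electricity cost of keeping an otherwise-idle CPU busy with a
      # distributed-computing client. Inputs are the thread's assumptions
      # plus an assumed electricity price.
      IDLE_WATTS = 15.0        # assumed CPU draw when idle
      LOAD_WATTS = 60.0        # assumed CPU draw while crunching keys
      RATE_PER_KWH = 0.10      # assumed electricity price, USD per kWh
      HOURS_PER_YEAR = 24 * 365

      extra_watts = LOAD_WATTS - IDLE_WATTS
      extra_kwh_per_year = extra_watts * HOURS_PER_YEAR / 1000.0
      cost_per_year = extra_kwh_per_year * RATE_PER_KWH

      print(f"Extra energy: {extra_kwh_per_year:.0f} kWh/year")  # ~394 kWh
      print(f"Extra cost:   ${cost_per_year:.2f}/year")          # ~$39 at $0.10/kWh
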
  • good news! (Score:3, Interesting)

    by ShaniaTwain ( 197446 ) on Monday May 02, 2005 @04:20PM (#12411836) Homepage
    Excellent!

    This is quite a welcome change from the days of the old AMD chips that would tan you as you worked.
    Looks like it's time for Intel to spend a bit more time looking at power consumption.

    hooray for competition!
  • 90nm (Score:5, Informative)

    by Anonymous Coward on Monday May 02, 2005 @04:20PM (#12411843)
    "a combination of 90 nm die shrink"

    No, the Winchester core preceding it was 90nm. There was no die shrink with Venice.

    Still a great core, but this is a blatant error on the front page.
    • Re:90nm (Score:3, Informative)

      by Anonymous Coward
      "a combination of 90 nm die shrink"

      The front page is only saying that Venice achieves power gains from the combination of those technologies. Nowhere does it say that the Venice is the first to have any one of them.
  • Transmeta (Score:5, Funny)

    by Anonymous Coward on Monday May 02, 2005 @04:20PM (#12411846)
    Great, now my transmeta stock is going to go negative.
  • WOw (Score:2, Interesting)

    by PunkOfLinux ( 870955 )
    That really is a big drop. That's what they should put in those computers for third-world countries >.>
    • Re:WOw (Score:5, Interesting)

      by Cadef ( 880567 ) * on Monday May 02, 2005 @04:32PM (#12412062)
      You laugh, but it's true. I spent some time living in Kenya, and Internet cafes are everywhere. The power grids in these countries are already so stressed that to have a chip that drastically reduces power consumption in these places would be a tremendous help. And not just for Internet cafes, but for point-of-sale terminals, businesses, etc. (And yes, they really do have point-of-sale systems in the "third world.")
  • Power used by /. (Score:3, Informative)

    by javamann ( 410973 ) on Monday May 02, 2005 @04:23PM (#12411895)
    I wonder how many cars of coal have been used to read /.? While every watt counts, I could do much better by replacing my light bulbs with lower-wattage ones. In California it's like installing a low-flush toilet to save 1,000 gallons a month when the Central Valley uses 80% of the water for irrigating crops.
    • ...don't use any less water, since you have to flush them 5 times to get the crap down the hole.

      The old ones at least worked the first time around, even after a big meal.
      • by Grishnakh ( 216268 ) on Monday May 02, 2005 @04:44PM (#12412290)
        That's what you get for being a cheap-ass and buying the cheapest toilet you can find. If you'd buy a high-quality (over $125) 1.6gpf toilet, it'd flush the crap just as effectively as any 3.5gpf toilet, and probably better.

        I've had lots of 3.5gpf toilets clog on me; does that mean they all suck too? The high-efficiency toilets have gotten a bad rap because stupid house builders, who buy the cheapest crap they can find in order to maximize their profit, installed cheap toilets. So now that everyone's stuck with them (and they're apparently all too damn cheap to go to Home Depot or Lowes and get an American Standard Cadet II for $150 or so), they sit around whining about government regulations instead of blaming their builder.

        The government probably should have instituted a minimum performance test when they instituted the 1.6gpf requirement.
        • I have to say this: For someone who posts on Slashdot you sure know your shit.
    • by Roadkills-R-Us ( 122219 ) on Monday May 02, 2005 @04:36PM (#12412136) Homepage
      While a typical home user probably does have other, larger energy hogs, we have almost 300 systems between desktops and the compute farm. This would be a huge savings for us, both on the front end (direct power to run computers) and on the backend (air conditioning).

      For someone with a huge sim farm (ATI, Nvidia) or other giant compute farm (Google, MS), it's a phenomenal win. (A rough sketch of the fleet-level arithmetic follows this thread.)
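
    A small Python sketch of the fleet-level arithmetic mentioned above: direct power plus air-conditioning overhead across a few hundred machines. The per-machine savings, cooling overhead, and electricity price are illustrative assumptions, not figures from the thread or the article.

      # Fleet-level savings estimate: direct power plus cooling overhead.
      # All numeric inputs below are illustrative assumptions.
      MACHINES = 300               # roughly the fleet size mentioned above
      WATTS_SAVED_PER_BOX = 30.0   # assumed per-CPU power reduction under load
      COOLING_OVERHEAD = 0.5       # assumed extra W of cooling per W of heat avoided
      RATE_PER_KWH = 0.10          # assumed electricity price, USD per kWh
      HOURS_PER_YEAR = 24 * 365

      direct_kwh = MACHINES * WATTS_SAVED_PER_BOX * HOURS_PER_YEAR / 1000.0
      total_kwh = direct_kwh * (1 + COOLING_OVERHEAD)
      print(f"Energy saved: {total_kwh:,.0f} kWh/year")
      print(f"Cost saved:   ${total_kwh * RATE_PER_KWH:,.0f}/year")
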
  • Venice? (Score:4, Funny)

    by Anonymous Coward on Monday May 02, 2005 @04:23PM (#12411902)
    I would be a little paranoid if I had a 'Venice' core and was using water cooling, what with the rising water and all...
  • to less than 30 watts under load and less than 10 watts for Windows

    Ta*dit*boom!

    Remember kids, it doesn't take much effort to break Windows, so be careful.
  • by brennanw ( 5761 ) on Monday May 02, 2005 @04:26PM (#12411948) Homepage Journal
    ... shouldn't it also reduce the heat produced by processors, therefore extending processor life?

    Or, for an overclocked machine, extending the amount of time it takes for the processor to die? :)
  • by G4from128k ( 686170 ) on Monday May 02, 2005 @04:27PM (#12411973)
    Lowering the power consumption per core is a first step to upping the number of cores. I imagine that CPU power consumption for desktops will level out in the 100 W range and makers will add cores, cache, and clock speed to maximize performance within a given power budget. I could also see some innovators creating new cooling technologies to boost the power budget and thus boost the permissible CPU performance within that expanded budget.
  • by djsmiley ( 752149 ) <djsmiley2k@gmail.com> on Monday May 02, 2005 @04:29PM (#12412003) Homepage Journal
    "As Lost Circuits points out, power consumption worldwide has been exploding as more CPUs come online and the CPU power requirements increase so a significant power reduction will reduce the burden on electrical grids everywhere."

    Erm? As more CPUs? Or CPUs with stupidly high power usage?

    Someone once told me that 7/10ths of the world doesn't have a phone line, let alone a computer. Now you're telling me that the power usage of the world has increased due to all these people getting computers? I seriously doubt it.

    How about all these people finally getting electricity to their houses? They finally have electric kettles, ovens, irons, microwaves...

    I'm not saying that a lower-usage CPU wouldn't make a difference, but I'm saying it's going to make a very small difference compared to some things.

    Plus it's going to be a LONG while before we see any difference. The only chips that really take the pi££ when it comes to power usage are the top-end P4s; it's not like the A64s etc. are as bad as those.

    As newer low-power chips are already out, I doubt the P4s are going to make much of an impact either way.
  • by bersl2 ( 689221 ) on Monday May 02, 2005 @04:29PM (#12412011) Journal
    While it doesn't really make that much of a difference, the core lines go
    Clawhammer(754) -> Clawhammer(939) -> nothing -> San Diego (1MB L2), and
    Newcastle(754) -> Newcastle(939) -> Winchester -> Venice (512kB L2).

    But whatever. I'm sure the extra cache doesn't make too much of a difference.
  • by John Seminal ( 698722 ) on Monday May 02, 2005 @04:29PM (#12412022) Journal
    Do they still let users overclock their CPUs? I know Intel locked their CPUs. I wonder if AMD still lets people play with their products more.
    • by Anonymous Custard ( 587661 ) on Monday May 02, 2005 @04:37PM (#12412171) Homepage Journal
      Their Athlon 64 FX-5x line is unlocked, designed for the enthusiast crowd. Their Athlon 64 xx00 series is multiplier-locked, but you can still play with the FSB.
      • For the low-power / HTPC crowd, you can always run a lower multiplier on A64/Opterons despite the package marking. The 'FX' series is factory-tested to the speed on the package, but the multiplier lock isn't set, so you can attempt to clock it up.

        IMO this is the right, hacker-friendly way to let overclockers have their fun and also curb illicit remarking.
      • by doormat ( 63648 ) on Monday May 02, 2005 @06:27PM (#12413634) Homepage Journal
        Their Athlon 64 xx00 series is multiplier-locked

        It's only multiplier-locked upward. You can, however, turn the multiplier down, which is actually really nice because of all the advances in DDR1-500MHz and faster RAM. You can take a 2GHz A64 and, instead of running it at 10x200, run it at 8x250 (or something like that); for the same clock speed (2000MHz) you get better performance (more memory bandwidth). (A quick sketch of that trade-off follows this thread.)
    • Yes and no.

      I've got a 3000+ Winchester (1.8GHz nominal) which goes up to 2.8GHz (with a 0.1V increase, 1.4V -> 1.5V; I'll use 2.7GHz at 300FSB as the reference in this post). However, the vast majority of the time it runs at 1000MHz @ 1.1V, because contrary to the Gentoo jokes, that doesn't take all that much CPU time, and it more or less just idles.

      This is achieved simply by messing with the FSB and having a motherboard that allows other modifications (A8N SLI).

      The biggest problem with overclocking is that I have
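
    A simplified Python sketch of the multiplier/reference-clock trade-off described in this thread. It assumes the memory runs 1:1 with the reference clock ("FSB"); the DDR labels are illustrative, and what a given board and RAM will actually do depends on dividers and timings not modeled here.

      # Same CPU clock from a lower multiplier and a higher reference clock,
      # which (under a 1:1 memory divider assumption) also raises the memory clock.
      def effective_clocks(multiplier, ref_clock_mhz):
          cpu_mhz = multiplier * ref_clock_mhz
          mem_mhz = ref_clock_mhz   # assumes memory runs 1:1 with the reference clock
          return cpu_mhz, mem_mhz

      for mult, ref in [(10, 200), (8, 250)]:
          cpu, mem = effective_clocks(mult, ref)
          print(f"{mult} x {ref} MHz -> CPU {cpu:.0f} MHz, memory {mem:.0f} MHz (DDR{2 * mem:.0f})")
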
  • Monitors (Score:3, Informative)

    by lotus_anima ( 862243 ) on Monday May 02, 2005 @04:32PM (#12412055) Homepage
    This is a pretty good decrease in consumption, but according to http://computer.howstuffworks.com/monitor10.htm [howstuffworks.com] "CRTs are somewhat power-hungry, at about 110 watts for a typical display, especially when compared to LCDs, which average between 30 and 40 watts."
  • Let's say I have a small shop that wants to keep four of these babies running constantly - various Net-facing servers - and I'd like to mount just enough solar cells outside to keep this going. What are the options for installing about 150 or 200 watts of constant solar power? We're considering putting in a backup generator anyway, so could this be done competitively?
    • by pclminion ( 145572 ) on Monday May 02, 2005 @04:45PM (#12412307)
      What are the options for installing about 150 or 200 watts of constant solar power?

      You need to get a solar chart for your area of the world, and look up the equivalent insolation in terms of hours. Around here, we get an equivalent of 3.5 hours of maximum sunlight per day, averaged over the course of the year. Assuming your numbers are similar, you'll need about (24/3.5)*200 watts worth of solar panels -- that's 1370 watts. Assuming you get a great deal, you might pay $2.25 per watt, uninstalled cost, so that's over $3000 just for the panels. You'd also have to build a mounting system and possibly install a small motor to keep the panels pointed in the optimum direction.

      On top of that, you need a battery system to provide power during hours of darkness. I could continue BS'ing the numbers to figure out how many batteries you'd need, but would rather not. Needless to say, it's going to be several thousand dollars for the whole system. (A rough sketch of the sizing math follows this thread.)

      (Yes, I've done this before)

    • What are the options for installing about 150 or 200 watts of constant solar power?

      My guess is about 4 acres of land in Southern California to have enough solar panels to power 4 servers ;) Don't forget, you still have to have power for the hard drives, fans, lights, a/c, video card, and the rest of the motherboard.

      As to "constant" solar power, I am not aware of this concept. It gets cloudy everywhere at least some of the time, even Southern California. Maybe a wind-powered generator hooked to the powe
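
    A rough Python sketch of the solar sizing arithmetic from the reply above. The load, sun-hour, and price figures are the commenter's example numbers; wiring losses, charge-controller and battery inefficiencies, and installation costs are ignored here.

      # Size a solar array for a continuous load, using yearly-average sun hours.
      LOAD_WATTS = 200.0           # continuous load to be supplied
      SUN_HOURS_PER_DAY = 3.5      # equivalent full-sun hours per day (yearly average)
      PANEL_COST_PER_WATT = 2.25   # assumed uninstalled panel price, USD per watt

      energy_per_day_wh = LOAD_WATTS * 24                   # Wh needed per day
      panel_watts = energy_per_day_wh / SUN_HOURS_PER_DAY   # panel capacity needed
      panel_cost = panel_watts * PANEL_COST_PER_WATT

      print(f"Panel capacity needed: {panel_watts:.0f} W")    # ~1371 W
      print(f"Panel cost (panels only): ${panel_cost:,.0f}")  # ~$3,086
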
  • by tayhimself ( 791184 ) on Monday May 02, 2005 @04:41PM (#12412246)
    Xbitlabs found around a month ago that Venice uses slightly more power than Winchester (the older 0.09u core). They tested cores at the same speed, unlike Lost Circuits, and while LC is a good site, Xbit is generally better. Not to mention the guy at LC blew up a few motherboards before "finding out" how to do his measurements. Also, Xbit is the only site I know of that has an accurate video card power consumption database. http://www.xbitlabs.com/articles/cpu/print/athlon64-venice.html [xbitlabs.com]
    • Yes, they are somewhat at odds, aren't they? The Xbit review provides some graphs of power consumption and generally finds that the Winchester and Venice cores have similar profiles, but it does not mention how they were able to make those measurements. The Lost Circuits review, OTOH, provides enough detail on their power measurement procedure to allow someone else to reproduce their results. More importantly, the Lost Circuits information shows just how difficult and time-consuming it really was to measure t
  • by mikeophile ( 647318 ) on Monday May 02, 2005 @04:42PM (#12412260)
    In other words, cautiously we project the current power consumption of all computers running somewhere in the order of at least 20 Hoover Dam power plants

    If 9,000 MWh is the equivalent of 4 Hoover Dams and the current estimate is 20 Hoover Dams, then current consumption by CPUs is around 45,000 MWh.

    This [ecen.com] site quotes 10.9 cubic meters of oil per megawatt-hour.

    If my math and sources are right, then CPUs alone, worldwide, consume the equivalent of nearly 500,000 cubic meters of oil each year.

    According to this [eppo.go.th] site, one American barrel of oil is 0.15899 cubic meters.

    That means that the power consumption of all the CPUs in the world equates to over 3 million barrels of oil per year.

    Perspective? The US currently uses a bit over 20 million barrels of oil per day, so CPUs worldwide are using the equivalent of around 0.04% of annual US oil consumption. (A unit-conversion sketch follows this thread.)
    • That means that the power consumption of all the CPUs in the world equate to over 3 million barrels of oil/year.

      According to this page [teachersdomain.org], Hoover Dam generates 4e9 kilowatt-hours, or 4e6 MWh, per year. 20 Hoover Dams would account for 8e7 MWh per year. Using your conversion factors, that comes out to 5.48e9 barrels (~5.5 billion barrels) of oil a year, or 15 million barrels a day. Scarcely a drop in the bucket, eh? Worldwide, that is 3/4 of the American oil burn rate being consumed by CPUs (note that much electricity i
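
    A Python unit-conversion sketch for the oil-equivalent estimates in this thread. The conversion factors are the ones cited above (10.9 cubic meters of oil per MWh, 0.15899 cubic meters per barrel); the two posts disagree on what the "20 Hoover Dams" figure refers to, so the total energy is left as an input.

      # Convert electrical energy in MWh to barrels of oil equivalent,
      # using the conversion factors quoted in the thread above.
      M3_OIL_PER_MWH = 10.9     # cubic meters of oil per MWh (thread's cited figure)
      M3_PER_BARREL = 0.15899   # one US barrel of oil in cubic meters

      def mwh_to_barrels(mwh):
          return mwh * M3_OIL_PER_MWH / M3_PER_BARREL

      print(f"{mwh_to_barrels(45_000):,.0f} barrels")     # parent's reading: ~3 million
      print(f"{mwh_to_barrels(8e7):,.0f} barrels/year")   # reply's reading: ~5.5 billion
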

  • by ewhac ( 5844 ) on Monday May 02, 2005 @04:49PM (#12412362) Homepage Journal
    Some time back, I was watching a discussion on a mailing list about the total worldwide power consumption of computers. As I recall, there are too many variables to make an accurate assessment of "true" power consumption. The consensus eventually settled on: yes, CPU power consumption is rising, but it is still dwarfed by the power needs of heavy industry. And if you measure energy consumption by the amount of oil burned, then computers trail far, far behind passenger cars.

    Back in 2000, during the California power "crisis," Amory Lovins of the Rocky Mountain Institute was asked what citizens could do to conserve power. His response: "Conserve water. The largest consumer of power in California is electric water pumps. So if you save water, you'll save power."

    Still, every little bit helps. By residents switching over from incandescents to screw-in fluorescents during the power "crisis," California reversed approximately 8-10 years of power consumption growth (according to some estimates).

    Schwab

  • by Pesticide01 ( 865676 ) on Monday May 02, 2005 @04:49PM (#12412376)
    For some time now Intel has relied on slick marketing and big numbers while AMD did the same thing... better. Efficient computing is where AMD has gained a nice edge over the years. Intel is playing catch-up at this point. Keep it up. Competition helps us all.
  • paying twice (Score:5, Insightful)

    by plopez ( 54068 ) on Monday May 02, 2005 @04:50PM (#12412385) Journal
    Don't forget that in many large server rooms you actually end up paying twice:

    1) the first time to power the chips
    2) the second time to remove the waste heat in the server room.

    The payoff in some cases may be more than originally anticipated.
    • Re:paying twice (Score:3, Informative)

      by gr8_phk ( 621180 )
      Actually, it's more like 3 to 4 times, since the server room is cooled by a heat pump (a refrigerator) and not by convection (blowing the heat out and replacing it with fresh outside air). The heat pump is only ~30% efficient, i.e. it takes 3 joules of energy to remove 1 joule from the room. That means 1 W for the PC and 3 more to remove the heat (a sketch of this arithmetic follows this comment). The same goes for anything (lights, etc.) you leave on in your house with the air conditioning on. You'd think big servers would all be moving to the northern states or
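
    A Python sketch of the "paying more than once" arithmetic above, with the cooling overhead expressed as a coefficient of performance (COP): watts of heat removed per watt the cooling system draws. The parent's "3 joules in per joule removed" corresponds to a COP of about 0.33; typical vapor-compression air conditioners are rated well above 1, so both cases are shown as different values of the same parameter.

      # Total facility power: IT load plus the cooling power needed to remove its heat.
      def total_power(it_watts, cop):
          cooling_watts = it_watts / cop   # power drawn by the cooling system
          return it_watts + cooling_watts

      for cop in (0.33, 3.0):
          print(f"COP {cop}: 1 W of server load costs {total_power(1.0, cop):.2f} W total")
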
  • Logical fallacy (Score:5, Insightful)

    by booch ( 4157 ) <slashdot2010@cra ... UGARom minus cat> on Monday May 02, 2005 @04:51PM (#12412405) Homepage
    power consumption worldwide has been exploding as more CPUs come online

    I think you've made a huge leap there. You've tried to imply that CPUs are what's causing the increased demand for power. That's the logical fallacy of Correlation implies causation [wikipedia.org]. I'd be willing to bet that computers account for very little of the additional power consumed. Think about it: if you lived in a developing country and had limited resources to spend but an increasing energy supply, would you be more likely to spend money on a PC, air conditioning, a washing machine, or a TV? And of those, the PC probably uses the least energy already.

  • by Hewhosaysni ( 780774 ) on Monday May 02, 2005 @04:56PM (#12412483)
    ...because less heat means fewer fans and smaller enclosures (they don't need as much room for the air to flow), i.e. __quieter__ machines.

    oooh sweet...
  • by Nom du Keyboard ( 633989 ) on Monday May 02, 2005 @05:46PM (#12413125)
    power consumption has been dramatically reduced

    This is just in time for my next Nvidia PCI-E video card with two 75-watt auxiliary power connectors in addition to the 75W through the slot.
