Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell

MojoKid writes "Kirk Skaugen, Senior Vice President and General Manager of the PC Client Group at Intel, snuck in some additional information on stage at IDF this week about Broadwell, the 14nm follow-up to Haswell that was mentioned during Brian Krzanich's opening-day keynote. In a quick demo, Kirk showed a couple of systems running the Cinebench multi-threaded benchmark side by side. One of the systems featured a Haswell-Y processor, the other a Broadwell-Y. The benchmark results weren't revealed, but power was being monitored on both systems during the Cinebench run, and the Broadwell-Y rig consumed roughly 30% less power than the Haswell-Y while running fully loaded at under 5 watts. Without knowing clocks and performance levels, we can't draw many conclusions from the power numbers shown, but they do hint at Broadwell-Y's relative health, even at this early stage of the game."


  • 30%? (Score:2, Informative)

    by Anonymous Coward

    Meaningless number unless we know they are comparing at the same performance level. You could take an IvyBridge CPU, downclock it, and get 30% less power use.
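
    (A back-of-the-envelope sketch of the parent's point, using made-up voltage and frequency values rather than anything Intel published: dynamic CMOS power scales roughly with frequency and the square of voltage, so a modest downclock plus undervolt on the very same silicon already buys a cut of about 30%.)

      # Rough dynamic-power model: P ~ C * V^2 * f, with the capacitance term held constant.
      # The voltage/frequency pairs below are made-up illustrative values, not
      # measurements of any real Ivy Bridge part.
      def relative_dynamic_power(v, f, v_ref, f_ref):
          return (v / v_ref) ** 2 * (f / f_ref)

      baseline = relative_dynamic_power(v=1.00, f=3.5, v_ref=1.00, f_ref=3.5)
      downclocked = relative_dynamic_power(v=0.90, f=3.0, v_ref=1.00, f_ref=3.5)
      print(f"downclocked part draws {downclocked / baseline:.0%} of baseline power")
      # -> about 69%, i.e. roughly 30% less, with no process change at all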

  • by Anonymous Coward

    Without knowing clocks and performance levels, we can't draw many conclusions from the power numbers shown

    Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell

    So a processor running at an unknown speed is using less power than a different processor running at an unknown speed, not to mention several other unknown factors, and we're going to write a story about that with a specific power savings?

    • you forgot the part about accomplishing an unknown amount of work on a benchmark with unknown results

    • by wmac1 ( 2478314 )

      Still, it makes sense that a 14nm circuit would use 30% less power than a 22nm one (I'd guess even a bit more than that would make sense).
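
      (A naive scaling sketch of why ~30% is at least plausible from the shrink alone, assuming switched capacitance tracks feature size and ignoring leakage, voltage changes, and design differences entirely:)

        # Naive first-order scaling: dynamic power ~ C * V^2 * f, and per-transistor
        # capacitance shrinks roughly with feature size. Leakage, voltage scaling and
        # architectural changes are all ignored here.
        old_node, new_node = 22.0, 14.0
        cap_ratio = new_node / old_node          # ~0.64
        power_ratio = cap_ratio                  # same V and f assumed
        print(f"ideal-scaling power: {power_ratio:.0%} of the 22nm part, i.e. ~{1 - power_ratio:.0%} savings")
        # -> roughly 36% less, so a measured ~30% is in the right ballpark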

    • Yes, please buy a new computer with the new chip in it.

      For the children.

    • Just to nitpick, wouldn't you suppose that Intel's power-consumption claim takes two chips of equal performance and specifications, and says that across the board the new fab process provides 30% lower power based purely on the process?

      So, the real question isn't 30% compared to something else. That one is easily justified. Just assume a broadwell will use 30% less power than a haswell... Same architecture, smaller die.

      The question is, how fast is the 5W part?
  • How much does lowering CPU power usage help? How much of a computer's power usage comes from the CPU, instead of the GPU, the screen, the LEDs, the disks, etc?

    • by Anonymous Coward

      If you lower the power consumption by 33% for the same performance, you can cram 150% performance into the same thermal envelope.

      So I would say it's quite important.
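
      (The arithmetic behind that claim spelled out, assuming power scales roughly linearly with delivered performance at a fixed design point:)

        # If the new part delivers the same work for ~33% less power, then inside the
        # same thermal budget you can afford ~1.5x the work, to a first approximation.
        power_saving = 1.0 / 3.0                      # the "30-ish percent" from the demo
        power_per_unit_work = 1.0 - power_saving      # ~0.67 of the old power per unit of work
        headroom = 1.0 / power_per_unit_work
        print(f"performance that fits in the old envelope: {headroom:.2f}x")   # -> 1.50x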

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Pretty huge.

      1) It means a smaller design, which means you can pack more in for the same power.
      2) It means simpler cooling, which means you could fit it in smaller cases.

      Both of those are very good because you could fit both scenarios into a production line trivially.
      Larger procs go one way, smaller mobile ones the other way.

      Hell, I am just surprised they are at 14nm. I never thought they could get down that low because of leakage.

    • by Kjella ( 173770 )

      When you're talking about 5W SoCs there are almost no trivial uses of power: if you're visiting a JavaScript-heavy site the CPU eats power, if you're playing games the GPU eats power, if you're watching a movie the screen eats power, and the RAM, chipset, motherboard and so on all eat power too. On a 5W package I'd estimate the CPU gets 1-2W, the GPU 2-3W and the rest of the system 1W, but if you're running full tilt at 5W your battery won't last very long. From what I've understood w

    • In low-power systems the CPU is the GPU; they are integrated units. Gone is the time when integrated Intel GPUs were worthless. These days they can handle stuff quite well, even modern games at lower resolutions. The display is still a non-trivial power user too, but the CPU is a big one.

      Disks aren't a big deal when you go SSD, which is what you want to do for ultra-low-power units. They use little in operation, and less than a tenth of a watt when idle.

      So ya, keeping CPU power low is a big thing for lo

      • by Khyber ( 864651 )

        "Gone is the time when integrated Intel GPUs were worthless"

        Actually, here's something funny about that. You want to know why Intel GMA945/950/X3100 sucked balls?

        They were all deliberately crippled by Intel. Their original spec speed was supposed to be 400 MHz, but every desktop and laptop that had them ran them at 133/166 MHz. Unusable even for Quake 3.

        But suddenly - if you fixed that clock speed issue, holy crap, you could play Q3 in OpenGL mode! Suddenly newer games like Faster Than Light run full speed instea

    • by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Saturday September 14, 2013 @11:01PM (#44853711) Journal

      Helps a lot. But there are many factors that affect power usage.

      Power supplies used to be awful. I've heard of efficiencies as bad as 55%. Power supplies have their own fans because they burn a lot of power. Around 5 years ago, manufacturers started paying attention to this huge waste of power and started a website, 80plus.org. Today, efficiencies can be as high as 92%, even 95% at the sweet spot.

      GPUs can be real power pigs. I've always gone with low-end graphics, not just because it's cheap, but to avoid another fan and save power. The low-end cards and integrated graphics use around 20W, which is not bad. I think a high-end card can use over 100W.

      A CRT is highly variable, using about 50W if displaying an entirely black image at low resolution, going up to 120W to display an all white image at its highest resolution. An older flatscreen, with, I think, fluorescent backlighting, uses about 30W no matter what is being displayed. A newer flatscreen with LEDs takes about 15W.

      Hard drives aren't big power hogs. Motors take lots of power compared to electronics, but it doesn't take much to keep a platter spinning at a constant speed. It could be that moving the heads takes most of the power.

      These days, a typical budget desktop computer system, excluding the monitor, takes about 80W total. That can easily climb over 100W if the computer is under load. So, yes, a savings of 5W or more is significant enough to be noticed, even on a desktop system.
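
      (To put those efficiency figures in concrete terms, using the ~80W budget-desktop load above as an illustrative number, not a measurement:)

        # Wall draw = DC load / PSU efficiency; the difference is lost as heat in the supply.
        def wall_draw(dc_load_w, efficiency):
            return dc_load_w / efficiency

        for eff in (0.55, 0.80, 0.92):
            wall = wall_draw(80.0, eff)              # the ~80W budget desktop above
            print(f"{eff:.0%} efficient PSU: {wall:.0f}W from the wall, {wall - 80:.0f}W lost as heat")
        # 55% -> ~145W from the wall; 92% -> ~87W. Same computer, very different bill.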

      • by tlhIngan ( 30335 )

        Power supplies used to be awful. I've heard of efficiencies as bad as 55%. Power supplies have their own fans because they burn a lot of power. Around 5 years ago, manufacturers started paying attention to this huge waste of power and started a website, 80plus.org. Today, efficiencies can be as high as 92%, even 95% at the sweet spot.

        They had to, because at 50% efficiency, if you wanted a 500W power supply, you'd be talking about drawing 1000W. And that would be a practical limit because a typical 15A outlet wo
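
        (Roughly where that practical limit sits, assuming a North American 120V/15A circuit and the usual 80% continuous-load rule; the PSU sizes are illustrative:)

          # A 15A, 120V branch circuit is ~1800W peak, ~1440W continuous (80% rule).
          # At 50% PSU efficiency, a "750W" supply would already flirt with that limit.
          volts, amps = 120.0, 15.0
          outlet_peak = volts * amps                 # 1800W
          outlet_continuous = outlet_peak * 0.8      # 1440W
          for psu_dc_watts in (500, 750):
              wall = psu_dc_watts / 0.50             # 50%-efficient supply
              print(f"{psu_dc_watts}W DC at 50% efficiency -> {wall:.0f}W at the wall "
                    f"(continuous limit {outlet_continuous:.0f}W)")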

  • ARM (Score:2, Interesting)

    by Anonymous Coward

    ARM meanwhile has 8-core processors suitable for smartphones (and yes, they can run all 8 cores simultaneously).

    What they need right now is a chip *now* that uses 30% less power THAN AN EQUIVALENT ARM, with more cores and a lower price, oh, and it also needs to be available as an SoC.

    Really, saying your next chip uses 30% less power than the one you just launched means the one you just launched draws 30% too much power. Which is true, but not something to point out.

    • ARM vs x86 (Score:5, Interesting)

      by IYagami ( 136831 ) on Saturday September 14, 2013 @06:49AM (#44847887)

      There is a good comparison of ARM vs x86 power efficiency at anandtech.com: http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown [anandtech.com]

      "At the end of the day, I'd say that Intel's chances for long term success in the tablet space are pretty good - at least architecturally. Intel still needs a Nexus, iPad or other similarly important design win, but it should have the right technology to get there by 2014."
      (...)
      "As far as smartphones go, the problem is a lot more complicated. Intel needs a good high-end baseband strategy which, as of late, the Infineon acquisition hasn't been able to produce. (...) As for the rest of the smartphone SoC, Intel is on the right track."

      The future for CPUs is going to be focused on power consumption. The new Atom core is twice as powerful at the same power level as the current Atom core. See http://www.anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested [anandtech.com]:

      " Looking at our Android results, Intel appears to have delivered on that claim. Whether we’re talking about Cortex A15 in NVIDIA’s Shield or Qualcomm’s Krait 400, Silvermont is quicker. It seems safe to say that Intel will have the fastest CPU performance out of any Android tablet platform once Bay Trail ships later this year.
      The power consumption, at least on the CPU side, also looks very good. From our SoC measurements it looks like Bay Trail’s power consumption under heavy CPU load ranges from 1W - 2.5W, putting it on par with other mobile SoCs that we’ve done power measurements on.
      On the GPU side, Intel’s HD Graphics does reasonably well in its first showing in an ultra mobile SoC. Bay Trail appears to live in a weird world between the old Intel that didn’t care about graphics and the new Intel that has effectively become a GPU company. Intel’s HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4. It’s a huge step forward compared to Clover Trail, but clearly not a leadership play, which is disappointing."

      • Re:ARM vs x86 (Score:5, Insightful)

        by Sycraft-fu ( 314770 ) on Saturday September 14, 2013 @07:55AM (#44848111)

        Ya I think ARM fanboys need to step back and have a glass of perspective and soda. There seems to be this article of faith among the ARM fan community that ARM chips are faster per watt, per dollar, whatever, than Intel chips by a big amount, and that ARM could, if they wished, just scale their chips up and make laptop/desktop chips that would annihilate Intel price/performance-wise. However, for some strange reason, ARM just doesn't do that.

        The real reason is, of course, it isn't true. ARM makes excellent very low power chips. They are great when you need something for a phone, or an integrated controller (Samsung SSDs use an ARM chip to control themselves) and so on. However that doesn't mean they have some magic juju that Intel doesn't, nor does it mean they'll scale without adding power consumption.

        In particular you can't just throw cores at things. Not all tasks are easy to split down and make parallel. You already see this with 4/6-core chips on desktops. Some things scale great and use 100% of your CPU (video encoding, for example). Others can use all the cores, but only to a degree. You see some games like this: they'll use one core to capacity, another near to it, and the 3rd and 4th only partially. Still other things make little to no use of the other cores.

        So ARM can't go and just whack together a 100 core chip and call it a desktop processor and expect it to be useful.
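
        (Amdahl's law makes that ceiling concrete; the parallel fractions below are made up for illustration, not measurements of any particular workload:)

          # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
          # fraction of the work and n is the core count.
          def amdahl_speedup(parallel_fraction, cores):
              return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

          for p in (0.95, 0.75, 0.50):                   # encode-ish, game-ish, mostly serial
              for n in (4, 100):
                  print(f"p={p:.2f}, {n:3d} cores -> {amdahl_speedup(p, n):5.2f}x")
          # Even with 100 cores, a 50%-parallel workload tops out below 2x.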

        Really, Intel is quite good at what they do and their chips actually are pretty efficient in the sector they are in. A 5-10 watt laptop/ultrabook chip does use a lot more than an ARM chip in a smartphone, but it also does more.

        Also, Intel DOES have some magic juju ARM doesn't, namely that they are a node ahead. You might notice that other companies are talking about 22/20nm stuff: they are getting it ready to go, demonstrating prototypes, etc. Intel, however, has been shipping 22nm parts in large volume since April of last year, and they are now getting ready for 14nm. Not ready as in far-off talk: they are putting the finishing touches on the 14nm fab in Chandler, they have prototype chips out and testing, and they are getting ready to finalize things and start ramping up volume production.

        Intel spends billions and billions a year on R&D, including fab R&D, and thus has been a node ahead of everyone else for quite some time. That alone gives them an advantage: even if all other things are equal, they have smaller gates, which gives them lower power consumption.

        None of this is to say ARM is bad; they are very good at what they do, as their sales in the phone market show. But ARM fans need to stop pretending they are some sleeping behemoth that could crush Intel if only they felt like it. No, actually, Intel's stuff is pretty damn impressive.

        • In particular you can't just throw cores at things. Not all tasks are easy to split down and make parallel.

          Hah. That will change once programmers actually *learn* program algebra.

          • Re: (Score:3, Funny)

            If you have a way to split all tasks down and make them parallel, could you please share it with the rest of us? If it's this 'program algebra' of which you speak, could you please provide us with a link?

          • wtf is 'program algebra'
        • x86 needs to die a.s.a.p. because of the legacy crap it carries.
          Please look up the A20 gate phenomenon.
          Or just the pain that multiple FP/SIMD implementations cause devs: mix the wrong ones and your performance crashes.

          The x86 architecture is hampering progress because it is so successful.

          • Nobody's running their x86 in a mode that's impacted by A20 any more. And hardly anybody's writing in assembler. So it doesn't matter. And for the minority who *are* writing in assembler, ARM isn't going to help them (unless they're writing ARM assembler of course).

            If x86's legacy carried a significant power or performance impact, it *would* matter. But it doesn't.

          • Actually it appears that Intel removed the A20 line starting with Haswell.

            Check out page 271 of the Intel System Programmers Manual Vol. 3A from June 2013 [intel.com]. Notice the following excerpt: "The functionality of A20M# is used primarily by older operating systems and not used by modern operating systems. On newer Intel 64 processors, A20M# may be absent."

            Now check out page 368 from the May 2011 [fing.edu.uy] version of that same document. In the same paragraph, the statement above is not present.

            From this, we can infer that

            • Re: (Score:2, Informative)

              by Anonymous Coward
              Wikipedia claims [wikipedia.org] that "Support for the A20 gate was removed in the Nehalem microarchitecture." but does not provide a citation.
        • by Anonymous Coward

          ARM (Acorn RISC Machines) already made desktop chips (and computers) that wiped the floor with Intel's... You are just too young. It was in the 80s. Google Acorn Archimedes.

          • Right which is why I can go out and buy one of those right now! ...

            Oh wait I can't. They haven't made a desktop chip since the ARM2 in the 80s.

            We are talking about the actual real world, here today, where you can buy Intel laptop, desktop, and server CPUs but not ARM CPUs in those markets.

  • At least that is what is implied. That is great for corporate energy use, but when will the real power hogs be addressed? Expansion video cards can use many multiples of the power consumed by the rest of the system combined.

  • If the trend keeps up, we'll get a 1-3% IPC improvement, and even less overclocking headroom with Broadwell. It's absolutely disappointing that after waiting ~5 years, a fully overclocked 4770K (~4.4GHz) is only 1.37x as fast as a fully overclocked i7 920 (~4GHz).
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      The IPC has hit a brick wall. The proportion of time spent on cache misses and branch mispredictions simply is a limit.
      After all, IBM's Power8 will have 8 threads/core (as announced at Hot Chips, though as far as I know there has been no article about it on Slashdot). I'm not sure 8 is very useful, but apparently on many workloads the 4 threads/core of Power7/Power7+ gives more throughput than 2 threads. Several threads per core increase aggregate IPC, but not per-thread IPC.
      The reason I'm doubtful on 8 threa

    • by Khyber ( 864651 )

      "a fully overclocked 4770K (~4.4GHz) is only 1.37x as fast as a fully overclocked i7 920 (~4GHz)."

      You got some benchmarks on that?

      • I believe it's generally accepted that Haswell is roughly 110% of IvyBridge, 113% of SandyBridge, and 125% of Nehalem in per-clock performance, not counting special cases such as AVX2. Hence, a 4.4GHz 4770K ≈ a 5.5GHz i7 920, or ~137% of one at 4GHz.
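
        (Taking the ~125% Haswell-vs-Nehalem per-clock figure above at face value, the grandparent's 1.37x works out as follows; this crude model ignores memory, turbo, AVX2 and everything else:)

          # Effective throughput ~ clock * IPC (very crude approximation).
          ipc_haswell_vs_nehalem = 1.25        # per-clock ratio quoted above (assumed)
          clk_4770k, clk_i7_920 = 4.4, 4.0     # GHz, both overclocked
          speedup = (clk_4770k * ipc_haswell_vs_nehalem) / clk_i7_920
          print(f"4.4GHz 4770K vs 4.0GHz i7 920: {speedup:.3f}x")   # -> 1.375x, the ~1.37x figure above
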
  • by Anonymous Coward

    Suppose they actually scaled the transistors proportionally with the 22nm-to-14nm feature size reduction. That would be a reduction to less than half the area, but still 70% of the power. That means the power density (and thus heat per unit area) would be higher, 1.7x the old value. One of the hopes for the smaller process is to be able to run faster, which means even more power. This seems unrealistic given that current processors are already thermally limited. We are way past the point where die sh
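
    (Checking that back-of-the-envelope under the same assumptions as the comment: perfect area scaling with the shrink and 70% of the old power:)

      # Area scales with the square of the linear shrink; power density = power / area.
      shrink = 14.0 / 22.0
      area_ratio = shrink ** 2                 # ~0.40 of the old area
      power_ratio = 0.70                       # the ~30% saving from the demo
      density_ratio = power_ratio / area_ratio
      print(f"area: {area_ratio:.2f}x, power: {power_ratio:.2f}x, "
            f"power density: {density_ratio:.2f}x")   # -> ~1.7x more heat per unit area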

  • So that's great for mobile chips in devices with batteries, but Intel tends to go the same direction with its desktop chips. That opens the door for AMD to release a chip at double or triple the wattage that's less efficient but faster overall, and to price it at a far better "speed vs. price" ratio that takes money right out of Intel's pockets. My advice to Intel: release some hyper-efficient but still 90W-ish 4.5-5.0GHz chip. AMD wouldn't get anywhere near that kind of performance.
