Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell
MojoKid writes "Kirk Skaugen, Senior Vice President and General Manager of the PC Client Group at Intel, while on stage at IDF this week, snuck in some additional information about Broadwell, the 14nm follow-up to Haswell that was mentioned during Brian Krzanich's opening-day keynote. In a quick demo, Kirk showed a couple of systems running the Cinebench multi-threaded benchmark side by side. One of the systems featured a Haswell-Y processor, the other a Broadwell-Y. The benchmark results weren't revealed, but power was being monitored on both systems during the Cinebench run, and the Broadwell-Y rig consumed roughly 30% less power than the Haswell-Y while running fully loaded at under 5 watts. Without knowing clocks and performance levels, we can't draw many conclusions from the power numbers shown, but they do hint at Broadwell-Y's relative health, even at this early stage of the game."
30%? (Score:2, Informative)
Meaningless number unless we know they are comparing at the same performance level. You could take another Ivy Bridge CPU, downclock it, and get 30% less power use.
Re:30%? (Score:5, Funny)
Re: 30%? (Score:5, Insightful)
Parent is correct.
Power usage goes up with *square* of voltage, but is *linear* with clock speed.
Frequency does not matter much, voltage does.
Re: (Score:2)
Ah, but decreasing the frequency generally allows the voltage to be decreased as well without becoming unstable.
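To put rough numbers on that: the usual first-order model for CMOS switching power is P ≈ α·C·V²·f, so dropping the clock alone buys only a linear saving, while dropping the voltage along with it compounds quadratically. A minimal Python sketch with made-up values (nothing here is measured from Haswell, Broadwell, or any real chip):

    # Dynamic (switching) power: P ~ alpha * C * V^2 * f
    # All numbers below are illustrative, not measurements of any real part.
    def dynamic_power(c_eff, volts, freq_hz, activity=1.0):
        return activity * c_eff * volts**2 * freq_hz

    base   = dynamic_power(1e-9, 1.00, 2.0e9)  # baseline clock and voltage
    slower = dynamic_power(1e-9, 1.00, 1.4e9)  # -30% clock, same voltage
    dvfs   = dynamic_power(1e-9, 0.85, 1.4e9)  # -30% clock lets voltage drop too

    print(round(slower / base, 2))  # ~0.70x -- linear in frequency
    print(round(dvfs / base, 2))    # ~0.51x -- voltage enters squared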
Oh Yes We Can (Score:1)
Without knowing clocks and performance levels, we can't draw many conclusions from the power numbers shown
Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell
So a processor running at an unknown speed is using less power than a different processor running at an unknown speed, not to mention several other unknown factors, and we're going to write a story about that with a specific power-savings figure?
Re: (Score:3)
you forgot the part about accomplishing an unknown amount of work on a benchmark with unknown results
Re: (Score:2)
Still, it makes sense that a 14nm circuit would use 30% less power than a 22nm one (I'd guess even a bit more than that would make sense).
Re: (Score:1)
Yes, please buy a new computer with the new chip in it.
For the children.
Re: (Score:2)
So, the real question isn't the 30% compared to something else. That one is easily justified. Just assume a Broadwell will use 30% less power than a Haswell... same architecture, smaller die.
The question is, how fast is the 5W part?
How much does this help? (Score:2)
How much does lowering CPU power usage help? How much of a computer's power usage comes from the CPU, instead of the GPU, the screen, the LEDs, the disks, etc?
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
"led monitors use less power than ccfl at the expense of some colour quality"
What? Not even close. Go look at a spectrograph of a white LED versus any fluorescent. A white LED is only beaten by an incandescent lamp as far as a complete light-emission range goes.
Here you go. [scoutnetworkblog.com]
Re: (Score:2)
when playing games generally the video card will use the most cpu followed by the monitor followed by the cpu
I think you accidentally something there.
Re: How much does this help? (Score:2)
> Monitors just have to draw to a screen, at most at 120Hz.
Incorrect.
Most monitors refresh at 60 Hz.
My Asus VG248 is a 144 Hz monitor
Re: (Score:1)
If you lower the power consumption by 33% for the same performance, you can cram 150% performance into the same thermal envelope.
So I would say it's quite important.
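That follows from simple arithmetic, under the (simplified) assumption that performance scales linearly with the power you can spend inside the same thermal envelope:

    # Illustrative arithmetic only: equal-performance power drops to ~0.67x,
    # so the same thermal envelope fits roughly 1/0.67 ~ 1.5x the work.
    power_ratio = 1.0 - 0.33
    print(round(1.0 / power_ratio, 2))  # ~1.49x performance in the same envelope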
Re: (Score:2, Interesting)
Pretty huge.
1) means smaller design, which means you can pack more in for the same power
2) simpler cooling, which means you could fit it in smaller cases
Both of those are very good because you could fit both scenarios in to a production line trivially.
Larger procs go one way, smaller mobile ones the other way.
Hell, I am just surprised they are at 14nm. I never thought they could get down that low because of leakage.
Re: (Score:2)
When you're talking about 5W SoCs there are almost no non-trivial uses of power: if you're visiting a JavaScript-heavy site the CPU eats power, if you're playing games the GPU eats power, if you're watching a movie the screen eats power; RAM eats power, the chipset eats power, the motherboard eats power, and so on. On a 5W package I'd estimate the CPU gets 1-2W, the GPU 2-3W and the rest of the system 1W, but if you're running at full tilt at 5W your battery won't last very long. From what I've understood w
Quite a bit (Score:2)
The CPU is the GPU in low-power systems; they are integrated units. Gone is the time when integrated Intel GPUs were worthless. These days they can handle stuff quite well, even modern games at lower resolutions. The display is still a non-trivial power user too, but the CPU is a big one.
Disks aren't a big deal once you go SSD, which is what you want for the ultra-low-power units. They use little in operation, and less than a tenth of a watt when idle.
So ya, keeping CPU power low is a big thing for lo
Re: (Score:3)
"Gone is the time when integrated Intel GPUs were worthless"
Actually, here's something funny about that. You want to know why Intel GMA945/950/X3100 sucked balls?
They were all deliberately crippled by Intel. Their original spec speed was supposed to be 400 MHz, but every desktop and laptop that had these ran them at 133/166 MHz. Unusable for even Quake 3.
But suddenly - if you fixed that clock speed issue, holy crap, you could play Q3 in OpenGL mode! Suddenly newer games like Faster Than Light run full speed instea
Re:How much does this help? (Score:4, Informative)
Helps a lot. But there are many factors that affect power usage.
Power supplies used to be awful; I've heard of efficiencies as bad as 55%. Power supplies have their own fans because they burn a lot of power. Around 5 years ago, manufacturers started paying attention to this huge waste of power and started a website, 80plus.org. Today, efficiencies can be as high as 92%, even 95% at the sweet spot.
GPUs can be real power pigs. I've always gone with low-end graphics not just because it's cheap, but to avoid another fan and to save power. The low-end cards and integrated graphics use around 20W, which is not bad. I think a high-end card can use over 100W.
A CRT is highly variable, using about 50W if displaying an entirely black image at low resolution, going up to 120W to display an all white image at its highest resolution. An older flatscreen, with, I think, fluorescent backlighting, uses about 30W no matter what is being displayed. A newer flatscreen with LEDs takes about 15W.
Hard drives aren't big power hogs. Motors take lots of power compared to electronics, but it doesn't take much to keep a platter spinning at a constant speed; it could be that moving the heads takes most of the power.
These days, a typical budget desktop computer system, excluding the monitor, takes about 80W total. That can climb over 100W easily if the computer is under load. So, yes, a savings of 5W or more is significant enough to be noticed, even on a desktop system.
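A rough sketch of how the PSU efficiency multiplies everything downstream, using hypothetical component numbers in the same ballpark as the figures above (not measurements of any specific system):

    # Hypothetical desktop DC load, in watts (illustrative values only)
    components = {"cpu": 65, "gpu": 20, "drives": 8, "board_ram_fans": 17}
    dc_load = sum(components.values())  # 110 W at the DC rails

    for label, eff in [("old ~55% supply", 0.55), ("80 PLUS ~92% supply", 0.92)]:
        wall = dc_load / eff            # AC draw at the wall
        print(f"{label}: {wall:.0f} W from the wall, {wall - dc_load:.0f} W lost as heat")
    # old ~55% supply: 200 W from the wall, 90 W lost as heat
    # 80 PLUS ~92% supply: 120 W from the wall, 10 W lost as heat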
Re: (Score:3)
They had to, because at 50% efficiency, if you wanted a 500W power supply, you're talking about drawing 1000W. And that would be a practical limit because a typical 15A outlet wo
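Filling in the arithmetic behind that, assuming a North American 120 V / 15 A circuit and the usual 80% continuous-load rule of thumb:

    # Rough circuit math, illustrative only
    circuit_max = 120 * 15          # 1800 W absolute maximum for the outlet
    continuous  = circuit_max * 0.8 # ~1440 W sustained is the usual guideline
    psu_output  = 500               # desired DC output in watts
    for eff in (0.50, 0.92):
        print(f"{eff:.0%} efficient PSU draws {psu_output / eff:.0f} W at the wall")
    # 50% efficient PSU draws 1000 W at the wall
    # 92% efficient PSU draws 543 W at the wall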
Look at all the silicon used for crypto (Score:3, Interesting)
Take a look at this slide, on the right is the system on a chip version of their Broadwell 2 core processor:
http://hothardware.com/image_popup.aspx?image=big_idf-2013-8.jpg&articleid=27335&t=n
See how much of the chip is assigned to crypto functions? It's almost as big as one of the processor cores. All that silicon used for crypto and it's completely wasted because it cannot be trusted because of the NSA. It wouldn't surprise me if some of that silicon is NSA back door functionality because that's o
Re: (Score:1)
See how much of the chip is assigned to crypto functions? It's almost as big as one of the processor cores.
No, it's not. Better get your eyes checked.
ARM (Score:2, Interesting)
ARM meanwhile has 8-core processors suitable for smartphones (and yes, they can run all 8 cores simultaneously).
What they need right now is a chip *now* that uses 30% less power THAN AN EQUIVALENT ARM, with more cores and a lower price, oh, and it also needs to be available as an SoC.
Really, saying your next chip uses 30% less power than the one you just launched means the one you just launched draws 30% too much power. Which is true, but not something to point out.
ARM vs x86 (Score:5, Interesting)
There is a good comparison of ARM vs x86 power efficiency at anandtech.com: http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown [anandtech.com]
"At the end of the day, I'd say that Intel's chances for long term success in the tablet space are pretty good - at least architecturally. Intel still needs a Nexus, iPad or other similarly important design win, but it should have the right technology to get there by 2014."
(...)
"As far as smartphones go, the problem is a lot more complicated. Intel needs a good high-end baseband strategy which, as of late, the Infineon acquisition hasn't been able to produce. (...) As for the rest of the smartphone SoC, Intel is on the right track."
The future for CPUs is going to be focused on power consumption. The new Atom core is twice as powerful at the same power level as the current Atom core. See http://www.anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested [anandtech.com]:
" Looking at our Android results, Intel appears to have delivered on that claim. Whether we’re talking about Cortex A15 in NVIDIA’s Shield or Qualcomm’s Krait 400, Silvermont is quicker. It seems safe to say that Intel will have the fastest CPU performance out of any Android tablet platform once Bay Trail ships later this year.
The power consumption, at least on the CPU side, also looks very good. From our SoC measurements it looks like Bay Trail’s power consumption under heavy CPU load ranges from 1W - 2.5W, putting it on par with other mobile SoCs that we’ve done power measurements on.
On the GPU side, Intel’s HD Graphics does reasonably well in its first showing in an ultra mobile SoC. Bay Trail appears to live in a weird world between the old Intel that didn’t care about graphics and the new Intel that has effectively become a GPU company. Intel’s HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4. It’s a huge step forward compared to Clover Trail, but clearly not a leadership play, which is disappointing."
Re:ARM vs x86 (Score:5, Insightful)
Ya, I think ARM fanboys need to step back and have a glass of perspective and soda. There seems to be this article of faith among the ARM fan community that ARM chips are faster per watt, per dollar, per whatever than Intel chips by a big margin, and also that ARM could, if they wished, just scale their chips up and make laptop/desktop chips that would annihilate Intel price/performance-wise. However, for some strange reason, ARM just doesn't do that.
The real reason is, of course, that it isn't true. ARM makes excellent very-low-power chips. They are great when you need something for a phone, or an integrated controller (Samsung SSDs use an ARM chip to control themselves), and so on. However, that doesn't mean they have some magic juju that Intel doesn't, nor does it mean their designs will scale up without adding power consumption.
In particular, you can't just throw cores at things. Not all tasks are easy to split up and run in parallel. You already see this with 4/6-core chips on desktops. Some things scale great and use 100% of your CPU (video encoding, for example). Others can use all the cores, but only to a degree. You see some games like this: they'll use one core to capacity, another near to it, and the 3rd and 4th only partially. Still other things make little to no use of the other cores.
So ARM can't go and just whack together a 100 core chip and call it a desktop processor and expect it to be useful.
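This is essentially Amdahl's law. A tiny sketch, assuming a made-up workload that is 90% parallelizable, shows why piling on cores runs out of steam:

    # Amdahl's law: speedup is capped by the serial fraction of the work.
    def speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    for cores in (4, 8, 100):
        print(f"{cores:>3} cores, 90% parallel: {speedup(0.90, cores):.1f}x")
    #   4 cores: 3.1x
    #   8 cores: 4.7x
    # 100 cores: 9.2x -- nowhere near 100x, which is the point above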
Really, Intel is quite good at what they do and their chips actually are pretty efficient in the sector they are in. A 5-10 watt laptop/ultrabook chip does use a lot more than an ARM chip in a smartphone, but it also does more.
Also, Intel DOES have some magic juju ARM doesn't, namely that they are a node ahead. You might notice that other companies are talking about 22/20nm stuff: they are getting it ready to go, demonstrating prototypes, etc. Intel, however, has been shipping 22nm parts in large volume since April of last year, and they are now getting ready for 14nm. Not ready as in far-off talk, either: they are putting the finishing touches on the 14nm fab in Chandler, they have prototype chips out and testing, and they are getting ready to finalize things and ramp up volume production.
Intel spends billions and billions a year on R&D, including fab R&D, and thus has been a node ahead of everyone else for quite some time. That alone gives them an advantage: even if all other things are equal, they have smaller gates, which gives them lower power consumption.
None of this is to say ARM is bad; they are very good at what they do, as their sales in the phone market show. But ARM fans need to stop pretending they are some sleeping behemoth that could crush Intel if only they felt like it. No, actually, Intel's stuff is pretty damn impressive.
Re: (Score:2)
Also, Intel spends less than the sum of all the ARM manufacturers, sure, but those guys aren't exactly collaborating either. Much of their R&D is redundant (developing Krait and Exynos wasn't cheap I'm sure, yet the two chips are fairly similar in performance), which makes the comparison pointless at best, misleading at worst.
Re: (Score:2)
you're right. what intel does do is an excellent version of frequency scaling, turning off unused execution units, etc. this is better than a dedicated low-power core, because it's finer grained and there's some benefit even without software participation.
What's it like when you put a real workload on it? Benchmarks are interesting and all, but the proof is always reality. (I mistrust benchmarking because it's so easy to tune things to do well in a benchmark without actually being particularly good on anything else; this has definitely happened in the past with CPU design too.)
Mind you, Intel's main problem is that there's a large and expanding market out there where their CPUs just aren't the things that people choose. People select ARM-based systems becaus
Re: (Score:2)
In particular you can't just throw cores at things. Not all tasks are easy to split down and make parallel.
Hah. That will change once programmers actually *learn* program algebra.
Re: (Score:3, Funny)
If you have a way to split all tasks down and make them parallel, could you please share it with the rest of us? If it's this 'program algebra' of which you speak, could you please provide us with a link?
Re: (Score:2)
Re: ARM vs x86 (Score:1)
x86 needs to die a.s.a.p. because of the legacy crap it carries.
Please look up the A20 gate phenomenon.
Or just the pain that multiple FP/SIMD implementations cause devs: mix the wrong ones and your performance crashes.
x86 architecture is hampering progress because it is so successful.
Re: (Score:2)
Nobody's running their x86 in a mode that's impacted by A20 any more. And hardly anybody's writing in assembler. So it doesn't matter. And for the minority who *are* writing in assembler, ARM isn't going to help them (unless they're writing ARM assembler of course).
If x86's legacy carried a significant power or performance impact, it *would* matter. But it doesn't.
Re: (Score:2)
Actually it appears that Intel removed the A20 line starting with Haswell.
Check out page 271 of the Intel System Programmers Manual Vol. 3A from June 2013 [intel.com]. Notice the following excerpt: "The functionality of A20M# is used primarily by older operating systems and not used by modern operating systems. On newer Intel 64 processors, A20M# may be absent."
Now check out page 368 from the May 2011 [fing.edu.uy] version of that same document. In the same paragraph, the statement above is not present.
From this, we can infer that
Re: (Score:2, Informative)
Re: (Score:1)
ARM (Acorn RISC Machines) already made desktop chips (and computers) that wiped the floor with Intel's... You are just too young. It was in the 80s. Google Acorn Archimedes.
Re: (Score:2)
Right which is why I can go out and buy one of those right now! ...
Oh wait I can't. They haven't made a desktop chip since the ARM2 in the 80s.
We are talking about the actual real world, here today, where you can buy Intel laptop, desktop, and server CPUs but not ARM CPUs in those markets.
Re: (Score:2)
- Apple and Samsung have 60% of the smartphone market. And they produce their own cpus. Why would they drop their own chips in favor of Intel?
Apple doesn't manufacture their own CPUs. I don't know why Samsung would drop their own chips in favor of Intel's, but they already have in at least one tablet.
Re: ARM vs x86 (Score:2)
14nm is expensive to make: it requires at least double patterning, yields are relatively low, and it's only worth it if you want an expensive high-performance part.
ARM chips are already being made at 14/20nm, so this size is not a long-term advantage for Intel.
A lot of chips are still made at larger than 60nm because it's cheap. Some are even made at 160nm to cut mask cost with single patterning.
There is no doubt that Intel currently makes the highest performance parts with an equivalent power dissipation.
integrated graphics solution? (Score:2)
At least that is what is implied. That is great for corporate energy use, but when will the real power hogs be addressed? Expansion video cards can use many multiples of the power consumed by the rest of the system combined.
Yawn (Score:1)
Re: (Score:2, Insightful)
The IPC has hit a brick wall. The proportion of time spent on cache misses and branch mispredictions simply puts a limit on it.
After all, IBM's Power8 will have 8 threads/core (as announced at Hot Chips, though as far as I know there has been no article about it on Slashdot). I'm not sure 8 is very useful, but apparently on many workloads the 4 threads/core of Power7/Power7+ give more throughput than 2 threads. Several threads per core increase aggregate IPC, but not per-thread IPC.
The reason I'm doubtful on 8 threa
Re: (Score:2)
"a fully overclocked 4770K (~4.4GHz) is only 1.37x as fast as a fully overclocked i7 920 (~4GHz)."
You got some benchmarks on that?
Re: (Score:1)
Power density (Score:1)
Suppose they actually scaled the transistors proportionally with the 22nm-to-14nm feature-size reduction. That would be a reduction to less than half the area, but still 70% of the power. That means that the power density (and thus heat per unit area) would be higher, about 1.7x the old value. One of the hopes for the smaller process is to be able to run faster, which means even more power. This seems unrealistic given that current processors are already thermally limited. We are way past the point where die sh
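Checking that arithmetic, under the assumption of a perfectly proportional shrink and the ~30% power drop from the demo:

    # Illustrative sanity check of the density claim, not real die measurements
    area_ratio  = (14 / 22) ** 2    # ~0.41x the area for the same transistor count
    power_ratio = 0.70              # the ~30% reduction Intel showed
    print(round(power_ratio / area_ratio, 2))  # ~1.73x the watts per unit area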
setting up to lose to AMD again (Score:2)