
Intel Details Power Management Advancements in Haswell

MojoKid writes "Intel's next-generation CPU architecture, codenamed Haswell, puts heavy emphasis on reducing power consumption. Pushing Haswell down to a 10W TDP is an achievement, but hitting these targets requires collaboration. Haswell will offer finer-grained control over areas of logic that were previously either on or off, up to and including specific execution units. These optimizations are impressive, particularly the fact that idle CPU power is approaching tablet levels, but they're only part of the story. Operating system changes matter as well, and Intel has teamed up with Microsoft to ensure that Windows 8 takes advantage of current and future hardware. Haswell's 10W target will allow the chip to squeeze into many of the convertible laptop/tablet form factors on display at IDF, while Bay Trail, the 22nm, out-of-order successor to Clover Trail, arrives in 2013 as well. Not to mention the company's demonstration of the first integrated digital WiFi radio. Folks have been trading blows over whether Intel could compete with ARM's core power consumption. Meanwhile, Santa Clara has been busy designing many other aspects of the full system solution for low power consumption and saving a lot of wattage in the process." It's mildly amusing that Windows 8 is the first version to gain dynamic ticks, something Linux has had working since around 2007.
  • Contrary to other markets, the mobile devices market is basically computer architecture agnostic. Since Intel cannot or does not want to manufacture CPUs more cheaply than ARM licensees, and they still have lousy performance/watt, their only remaining market is something which takes advantage of the vast catalog of pre-existing software for the x86 architecture, namely Windows. I have little doubt Intel will eventually succeed in building a cheaper x86 CPU with better performance/watt than ARM given their manufacturing
    • Re: (Score:2, Insightful)

      by aliquis ( 678370 )

      They won't be, or aren't they leading in performance/watt already?

      Or do they just have less poor performance?

      It's a question not a troll. And feel free to answer with future processors from both sides.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        If I've been keeping up properly, Intel has a pretty solid lead over ARM in pure performance/watt (look at how ARM clusters work out), but Intel has never been able to scale down well enough to compete with ARM in the 10W area.

        • by Anonymous Coward

          Arg, /. ate my less than symbol... that's less than 10W.

        • by Locutus ( 9039 ) on Monday September 17, 2012 @09:39PM (#41370591)
          The OP was more likely talking about the low end of the scale, as you noticed.

          If all portable devices got the battery life of, say, an e-ink Kindle, there wouldn't be an ARM domination at the low end. But as we've seen, once you scale the screen up to full color and a slightly larger size, and add more software to run apps, you start seeing how putting large enough batteries on the things affects their "portability".

          We all know Microsoft has been in the tablet market for well over a decade, almost two, and they've failed constantly because the resulting products were huge, heavy and battery life was not so great. Here we see Microsoft trying it yet again and this time they are tuning the hardware to the OS to try and get something even close to the current ARM platforms while providing x86 compatibility. I'm looking forward to seeing what they come up with this time.

          As for ARM, how crippled will the OS and its capabilities be to get comparable usability to existing options (iOS or Android)? There's still a lot of secrecy in this area, as recently noted with Microsoft's secret SDK. They want you to think it's about extra features for a marketing surprise, but come on, when was the last time Microsoft surprised anyone with new useful capabilities? Most likely it's to hide how immature the platform is, and possibly how limited it has to be to operate in the realm of existing battery life expectations. We'll know pretty soon though.

          LoB
      • NO, not in the low power segment. Here are some hard numbers from another /. article today.
        http://www.phoronix.com/scan.php?page=news_item&px=MTE4NjU [phoronix.com]

        CPU model - Performance / TDP (W) / Performance per watt
        AMD FireStream 9370 - 528 / 225 / 2.35
        Intel Atom N570 - 6.7 / 8.5 / 0.79
        ARM Cortex-A9 - 2 / 0.5 / 4

        As you can see, in the low power domain ARM is still more than 4x as efficient and uses 17x less power than Intel.
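
        For what it's worth, the efficiency column is just performance divided by TDP. A quick sanity check of the two ratios quoted above, using only the Atom N570 and Cortex-A9 rows from the table (a sketch with the table's own numbers, not new measurements):

```python
# Sanity-check the "more than 4x as efficient" and "17x less power"
# claims using the Atom N570 and Cortex-A9 rows of the table above.
atom_perf, atom_tdp = 6.7, 8.5   # performance units as in the table, TDP in watts
arm_perf, arm_tdp = 2.0, 0.5

atom_eff = atom_perf / atom_tdp   # ~0.79 per watt
arm_eff = arm_perf / arm_tdp      # 4.0 per watt

print(f"efficiency advantage: {arm_eff / atom_eff:.1f}x")   # ~5.1x, i.e. "more than 4x"
print(f"power advantage:      {atom_tdp / arm_tdp:.0f}x")   # 17x
```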

        • by yoshman ( 1416413 ) on Tuesday September 18, 2012 @12:41AM (#41371533)

          Well, comparing an Atom N570 based system vs. some Cortex-A9 SoC isn't really a fair comparison, is it? The Atom system has to power things like PCI buses, SATA controllers, etc.

          How about redoing that comparison using Medfield (an Atom-based SoC)? It still uses an Atom CPU (the Bonnell core) that can hit 1.6GHz, but uses FAR less power when looking at the system as a whole.

    • I don't see it happening. They've been trying for years, and what do we get.. Atom? That's not even close... they need to drop power consumption by an order of magnitude to compete.

      And like you say, even if Intel made a magical chip with ARM-esque power consumption and better processing power... they'd charge too bloody much for it and still wouldn't be able to compete. Unless they dumped it at a loss... but that seems like a losing battle against all the ARM licensees.

      • They've been trying for years, and what do we get.. Atom? That's not even close... they need to drop power consumption by an order of magnitude to compete.

        True, this has been going on way too long to be explained by incompetence or lack of focus, though there certainly was some of the latter in the past. So what is the problem? Maybe this is it: as transistor count drops, the relative cost of the CISC decoding circuitry goes up, way up. Or is it something else?

    • >Since Intel cannot or do not want to manufacture CPUs cheaper than ARM licensees plus they still have lousy performance/watt their only remaining market is something which takes advantage of the vast catalog of pre-existing software for the x86 architecture namely Windows

      Wrong.

      http://www.engadget.com/2012/09/05/motorola-razr-m-europe-intel/ [engadget.com]

      http://www.anandtech.com/show/5770/lava-xolo-x900-review-the-first-intel-medfield-phone/6 [anandtech.com]

      • Well, the numbers on that second page seem better than I expected, but it is still below the top-of-the-line ARM offerings in terms of usable hours you can get from the device. I have heard of smartphones with their chips, namely the K800 lePhone [engadget.com] by Lenovo, but they aren't exactly something you can easily find.
        • Medfield was likely just an exercise of sorts for Intel. I'm guessing they're going for a big splash with Airmont in 2014, since they're finally pulling up the Atom die size schedule along with the Core line. I'm not sure what Silvermont is going to be. If paired with Haswell graphics, Intel might be able to compete with ARM seriously, but I'm guessing they're going to concentrate on the Airmont design to create something that has a definite edge over ARM. Apple might be a wild card partner with Intel. They seem
          • What's amusing about the Apple timeline is that they've been through this one before*.

            PPC RISC => Intel CISC

            ARM RISC => Intel CISC

            Both times they kept a branch of the code base up to date on the other architecture and have plenty of experience in both (I assume they have an x86-compatible iOS).

            *In the meanwhile they've secretly been creating their own architecture and instruction set.

            • What's amusing about the Apple timeline is that they've been through this one before*.

              PPC RISC => Intel CISC

              ARM RISC => Intel CISC

              Both times they kept a branch of the code base up to date on the other architecture and have plenty of experience in both (I assume they have an x86-compatible iOS).

              They have an x86-compatible kernel called XNU, a bunch of x86-compatible kernel-loadable modules, and a bunch of x86-compatible libraries and system daemons. Whether they ensure that, when those are built as part of iOS (e.g., with `CONFIG_EMBEDDED` #defined) rather than as part of OS X, they continue to work on x86, and whether the stuff that's iOS-only is and remains x86-compatible, is another matter.

              *In the meanwhile they've secretly been creating their own architecture and instruction set.

              ...assuming they think they can create one so much better that it's worth the effort of maintaining

    • Re: (Score:3, Insightful)

      by CajunArson ( 465943 )

      Since Intel cannot or do not want to manufacture CPUs cheaper than ARM licensees plus they still have lousy performance/watt

      Show me an ARM solution with better performance per watt than a standard Ivy Bridge Xeon server (or even Sandy Bridge)... and yes, I am *waiting* for you to dredge up those idiotic Calxeda "benchmarks" that claim Sandy Bridge runs at maximum TDP while running at a load of 15% and being substantially faster than Calxeda's yet-to-be-released quad-core ARM server running at

      • Ya I find it hilarious when people talk about Intel desktop CPU power specs vs ARM mobile as though the chips had even remotely the same performance.

        Call me when ARM produces a chip that is faster than one of Intel's desktop/laptop chips, even a low end one, and does so with less power.

        However don't trot out a mobile chip and say "See! It only uses one watt!" and act like it is in the same league as the much larger Intel chips.

      • by Pulzar ( 81031 ) on Monday September 17, 2012 @10:02PM (#41370763)

        You have confused performance per watt with total power consumption. ARM is very good at the latter, but is by no means the best at the former.

        Performance per watt isn't a single number that can be compared to tell the full story. In an envelope desired by small portable devices, ARM has a significant edge in performance per watt over Intel's Atom.

        In the server market, Intel has an edge, of course, as they have chips specifically designed for those kinds of high-power workloads. ARM is still a few years away from having anything designed for similar use.

        Market share numbers in both categories reflect this.

    • Intel has started making Windows-only chips. Until they back off of that stance they are off my "recommended" list. It's a huge risk for me. They have power. All I have is conscience, but I can't let it go whatever the cost. My customers trust me and I will not recommend a processor vendor who demands a sole-source software vendor. I would rather sell some other stuff to get my bread.
      • by Kurlon ( 130049 )

        Intel doesn't make Windows-only chips; they make CPUs, and it just happens that for some they gave the specs to Microsoft for support and are letting others figure it out on their own. If there is interest in Linux running on them, it'll happen when the community dives in and makes it work, just like it has with nearly every other architecture out there. Intel isn't actively blocking other OSes from the platform. Just because Intel isn't devoting manpower to Linux support for a given chip you don't have

        • So you see what's going on, you just don't agree that it's a problem. You're entitled to that opinion.
        • Once Microsoft has had a year or two to patent every obvious or reasonable software implementation of the hardware, then the open software folks can have a peek at the specs? That's the cure you want to give?

          Time to go whole hog into ARM technologies then, where they don't feed one software vendor ownership of progress for no reason.

    • by slick7 ( 1703596 )
      *? [youtube.com]
    • by slick7 ( 1703596 )

      Contrary to other markets, the mobile devices market is basically computer architecture agnostic. Since Intel cannot or does not want to manufacture CPUs more cheaply than ARM licensees, and they still have lousy performance/watt, their only remaining market is something which takes advantage of the vast catalog of pre-existing software for the x86 architecture, namely Windows. I have little doubt Intel will eventually succeed in building a cheaper x86 CPU with better performance/watt than ARM given their manufacturing prowess and the increasingly high amounts of integration they are providing. But it may take another processor generation or two.

      intel business plan [youtube.com]

  • Power Consumption. (Score:4, Informative)

    by bobwrit ( 1232148 ) on Monday September 17, 2012 @07:13PM (#41369579) Homepage Journal
    "Folks have been trading blows over whether Intel could compete with ARM's core power consumption. " For the mobile markets, Here's the best numbers I could find on the various processor's power output: http://www.xbitlabs.com/news/mobile/display/20110921142759_Nvidia_Unwraps_Performance_Benchmarks_of_Tegra_3_Kal_El.html [xbitlabs.com] The 10W Intel processor is still ~8x outside the power output of a Tegra 3 at 1GHz/Core, and ~6.662x the power output of a OMAP4 processor. While Intel is clearly working on getting down to the ~1W power range, they still have a ways to go. They may get there, but until I see silicon, I'm not holding my breath for it.
    • by aliquis ( 678370 )

      And it's how much faster than Tegra 3?

      • by Locutus ( 9039 )
        because the Tegra 3 isn't fast enough to do what? Run a desktop OS? Sorry, it might not have the power to run Windows.... see what's going on here?

        LoB
        • by aliquis ( 678370 )

          People say performance per watt all the time, even though all they compare is watts, it seems. That's why.

          Also no, Tegra 3 isn't very fast. The same webpage had a comparison of Lenovo's "fatblet" (a laptop with a swivel screen), and I think it was about 8 times faster than the iPad 2 or something like that? Don't remember. I think it had a 35 watt model though, whereas some Samsung one in the same test had a 17 watt one.

          "It doesn't use as little power when idle!" might not matter so much since the battery life when idle is l

          • Convertible tablets are naturally going to be a lot faster if you compare the balls-out models. My lady has a Fujitsu Lifebook T900 with a Core i7. Yes, it is certainly several times more powerful than an ARM tablet. It's also several times thicker and yet gets less battery life. Without actually benchmarking that really doesn't prove anything about performance per watt either way, though, because the correspondingly larger machine has a correspondingly larger battery, and you also have to benchmark for a u

            • I still don't understand why nobody seems capable of putting a swivel hinge (and Wacom digitizer) on the equivalent of a Macbook Air...
              • Hopefully it will be a lot of years before I'm cracking open this T900 to find out. I'm thinking that the cost of a unibody laptop plus the cost of the swivel hinge crap plus the cost of multitouch plus wacom equals too much to have any significant market.

                • Who says it has to have multitouch? (Okay, maybe the answer is "everybody but me...")

                  Besides, a MacBook Air is $1000, and some equally-sleek Windows ultrabooks are in the $600-$800 range. Even if a 'wacom-tablet-ultrabook' cost $1500-$2000, I think the niche it would serve is big enough to be worth it.

                  • You can get a fat one refurb'd for $900 direct from Fujitsu. I don't know if that's an i7 or just an i5. The i7 is wicked fast, though, as you might imagine. They have wacom and they can have multitouch or daylight but not both, so you ought to be happy.

                    • The fact that it's fat is the problem. My point was that given the existence of other thin computers (e.g. ultrabooks), I see no technical reason it couldn't be thin.

    • Re: (Score:2, Insightful)

      by CajunArson ( 465943 )

      Uh... Fanboi much? Those Tegra 3 benchmarks have been shown to be *extreme* wishful thinking on Nvidia's part, and if you are naive enough to believe that Intel's lowest-power CPU burns 10 watts then I have a bridge to sell you...

    • They may get there, but until I see silicon, I'm not holding my breath for it.

      Does that mean when you do see the silicon you'll hold your breath? For how long?

  • by CajunArson ( 465943 ) on Monday September 17, 2012 @07:26PM (#41369685) Journal

    Look at the pie charts on this page: http://hothardware.com/Reviews/Intels-Game-Changer-One-Size-Fits-All-Haswell/?page=4 [hothardware.com]

    Notice how the display is quickly dominating the power consumption? The whole ARM vs. x86 power consumption bit is bunk. Intel has proven it can be competitive with ARM, and even if ARM could magically make a chip that uses zero power, your display isn't going to suck down any less juice based on the instruction set of the processor running your device....

    • Notice how the display is quickly dominating the power consumption? The whole ARM vs. x86 power consumption bit is bunk.

      Whoa, I think you're getting a little ahead of the curve there. Try running a moderately intense game and watch the battery drain.

      • by CajunArson ( 465943 ) on Monday September 17, 2012 @09:14PM (#41370427) Journal

        Try running a moderately intense game and watch the battery drain.

        I have... on my Motorola phone running on an ARM CPU using an embedded GPU that happens to be made by the exact same company that makes embedded GPUs for Medfield phones... So please explain to me how the exact same GPU magically uses zero power when it happens to be sitting next to an ARM core vs. an Intel core... your new learning amazes me!

    • Speak for yourself, I want to use phone hardware to run applications almost continuously, mainly for mesh networking, with the screen off.
  • Funny (Score:5, Interesting)

    by viperidaenz ( 2515578 ) on Monday September 17, 2012 @08:31PM (#41370143)

    It's mildly amusing that Windows 8 is the first version to gain dynamic ticks, something Linux has had working since around 2007.

    It's also mildly amusing that Windows has always trumped Linux in battery life, despite not implementing this power saving feature.

    • Re:Funny (Score:4, Insightful)

      by Tough Love ( 215404 ) on Monday September 17, 2012 @09:19PM (#41370457)

      It's mildly amusing that Windows 8 is the first version to gain dynamic ticks, something Linux has had working since around 2007.

      It's also mildly amusing that Windows has always trumped Linux in battery life, despite not implementing this power saving feature.

      Windows has always trumped Linux in battery life, you claim? That seems rather sweeping, whereas reports from the field seem mixed, with a significant number in fact reporting an advantage for Linux. I think it depends on a number of factors, including how much access Linux devs have to power management specs for a given OEM chipset. And there have been occasional regressions indeed. These get picked up pretty fast these days and usually corrected after a kernel bump or two.

    • Or we could find other features Windows has ... That Linux just recently got and find that "mildly amusing".

      It is rather silly. Yes, OSes have different feature sets. They don't all implement everything at the same time.

      • by humanrev ( 2606607 ) on Monday September 17, 2012 @09:56PM (#41370711)

        To be honest it embarrasses me to want to associate myself with any "side" when it comes to operating systems and hardware. If I try to say why Windows is better than Linux at something (and make my statement completely without any emotional inflection or attachment), I'm gonna get piled on pretty quickly by a lot of hate posts that don't legitimately counter my points (posts that I would appreciate reading, since I don't know everything). If I go to, say, Neowin.net, and try to make a comment about how I feel Windows 8 sucks for my workflow or how I like a particular feature in Linux that Windows doesn't have, I'll be piled on pretty quickly there too.

        There are a LOT of seasoned, battle-hardened vets of the operating system wars out there on the net who have nothing better to do than fight against those who don't have the same viewpoint as they do. The mere fact that people can't discuss things and see both sides of an issue without getting into an emotional wreck reminds me how fucking annoying and stupid humans really are.

      • What's amusing is people think MS couldn't do this before, or that it's some brilliant revolutionary idea. Microsoft has money, they have Ph.D.s, and they have ridiculously experienced software developers. If they choose not to implement something, it's not because they're technically incompetent or unable to do it.
        • True, but they must maintain backwards compatibility, which Linux's implementation of dynamic ticks can impact. If application X is using system timer Y to time something, and the kernel reprograms that timer because it wants to skip interrupts that wouldn't execute any code, it changes how much time a given number of ticks on that timer represents, and application X will report that something happened quicker than it really did.
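
          To make the failure mode concrete, here's a small user-space analogy (a sketch only; it isn't how Windows or Linux actually expose timers): a program that infers elapsed time from an assumed-fixed tick period drifts as soon as the ticks don't arrive exactly on schedule, while one that reads a monotonic clock doesn't care.

```python
# Sketch: timing by counting assumed-fixed ticks vs. reading a monotonic
# clock. If the tick source is coalesced, skipped, or simply overshoots
# (as sleep() does here), the tick-counting estimate drifts away from
# what a monotonic clock reports.
import time

TICK = 0.010          # the program assumes a 10 ms tick period
ticks = 0
start = time.monotonic()
while ticks < 200:
    time.sleep(TICK)  # stands in for "wait for the next timer tick"
    ticks += 1

inferred = ticks * TICK                # what a tick counter would report
actual = time.monotonic() - start      # what a monotonic clock reports
print(f"inferred: {inferred:.3f}s   actual: {actual:.3f}s")
```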
      • Or we could find other features Windows has ... That Linux just recently got and find that "mildly amusing".

        Like what.. Metro UI? No thanks. A standard Linux distro comes with an insanely full set of software and utilities that "just work" out of the box. Don't know why I would waste my time installing software for days just to make Windows usable. That OS doesn't even have a decent software repository for it yet.

        • Don't know why I would waste my time installing software for days just to make Windows usable.

          I'm not sure what, but you're doing something wrong.

        • Between Metro UI, GNOME 3 and Unity, it's all a wash. The only thing is that w/ Linux or BSD, one can use older versions of the DEs, which is not possible in Windows 8 w/o external add-ons.
          • Between Metro UI, GNOME 3 and Unity, it's all a wash.

            I'm not disagreeing with you on that point. But I don't run older versions of KDE -- the latest and greatest are fully functional for me. Perhaps it's time you tried a different distro?

            • Oh, I agree on KDE - that's definitely good. I'd try giving Razor-qt a spin as well. Aside from that, I wish there was a GNUSTEP based DE - I loved NEXTSTEP when it was around, and would welcome something like it.
        • First, I think your sig is awesome.

          Now. Come on now, you really want to go there? "Just works"? If you want to spend all day researching why libfoo-4.2.1-5r3.1.so isn't good enough for suparbar-3.6.1.35-r5.3-custom, that's cool, I'll be over here getting work done.

          The software repository you seek is called "just about every store" and "most of the Internet". It takes about an hour to install everything you really need to get productive on Windows, including the updates. Maybe more if you need Cygwin for som

    • It's also mildly amusing that Windows has always trumped Linux in battery life, despite not implementing this power saving feature.

      Really? I'd like to see a citation on that.

      When my T410s was brand new, battery life was exactly even between Win7 and OpenSUSE 11.x for web browsing and other non-intensive tasks.

      • by Zan Lynx ( 87672 )

        I have personal evidence that the default original install of Windows 7 on a Samsung Series 9 uses less power than a default install of Fedora 15.

        Now, after I tweaked the Linux install using PowerTop and an rc.local script, Linux uses less power. This took some time and specialized knowledge of Linux systems.

        So for an average computer user just installing an operating system, Windows 7 would use less power and have better battery life.
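
        For reference, the tweaks PowerTop suggests mostly boil down to writing a few sysfs/procfs knobs at boot. Here is a rough sketch of that kind of rc.local-style script, done in Python (the exact paths and supported values vary by kernel and hardware, it needs root, and it's an illustration rather than a recommended configuration):

```python
# Apply a few classic PowerTop suggestions by writing sysfs/procfs knobs.
# Needs root; paths and supported values vary by kernel and hardware.
import glob

def write(path, value):
    try:
        with open(path, "w") as f:
            f.write(value)
    except OSError as err:
        print(f"skipped {path}: {err}")

# SATA link power management
for policy in glob.glob("/sys/class/scsi_host/host*/link_power_management_policy"):
    write(policy, "min_power")

# USB autosuspend
for control in glob.glob("/sys/bus/usb/devices/*/power/control"):
    write(control, "auto")

# Intel HDA audio codec power saving
write("/sys/module/snd_hda_intel/parameters/power_save", "1")

# Delay VM writeback ("laptop mode")
write("/proc/sys/vm/laptop_mode", "5")
```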

    • by tlhIngan ( 30335 )

      It's mildly amusing that Windows 8 is the first version to gain dynamic ticks, something Linux has had working since around 2007.

      Windows CE and PocketPC have had dynamic ticks far longer, actually - it was a BSP option you could have. The scheduler supported it (it told you how long to idle, you told it how long you actually idled, so your interrupts had to determine the time idled).

      OS X has supported dynamic ticks for god knows how long - I think practically from the beginning. It was immune to the CPU time [usenix.org]

  • Dynamic ticks (Score:5, Informative)

    by MtHuurne ( 602934 ) on Monday September 17, 2012 @11:04PM (#41371111) Homepage

    Linux has had the dynamic ticks (CONFIG_NO_HZ) feature for a while, but that only shuts down the timer tick on a CPU while it is idle. There is a new feature in the works named "adaptive tickless" (see the announcement [lkml.org] and a recent progress update [lwn.net]) that will also shut down the timer tick while a CPU is running a single task.
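
    If you want to see which tick mode your own kernel was built with, and roughly how often the local timer actually fires, something like this works on most Linux boxes (a sketch; it assumes a readable /boot/config-$(uname -r) and an x86-style "LOC" line in /proc/interrupts):

```python
# Print the kernel's CONFIG_NO_HZ* build options and the observed
# local-timer interrupt rate. On an idle NO_HZ system the rate stays
# well below HZ * number_of_cpus; on a busy one it approaches it.
import os
import re
import time

def local_timer_irqs():
    with open("/proc/interrupts") as f:
        for line in f:
            if line.startswith("LOC:"):
                return sum(int(n) for n in re.findall(r"\d+", line))
    return 0

config = f"/boot/config-{os.uname().release}"
if os.path.exists(config):
    with open(config) as f:
        print([l.strip() for l in f if l.startswith("CONFIG_NO_HZ")])

before = local_timer_irqs()
time.sleep(5)
after = local_timer_irqs()
print(f"local timer interrupts/sec: {(after - before) / 5:.0f}")
```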

    • It should be pointed out that the adaptive tickless work has nothing to do with power savings. It's about reducing the CPU time the OS takes away from running tasks. By itself the time is insignificant, but when you factor in cache impact and critical sections it can cause non-trivial performance and latency impacts in high-performance and real-time workloads.

      Windows isn't used for high-performance computing, nor as a real-time OS, so it probably won't ever get this feature.

      • Windows isn't used for high-performance computing, nor as a real-time OS, so it probably won't ever get this feature.

        O [microsoft.com] RLY [top500.org]? (okok, only 2 of the top500, but it's not like it's NOT used. I'd be surprised if it's used anywhere that's not being paid by Microsoft to do it though.)

        • Windows isn't used for high-performance computing, nor as a real-time OS, so it probably won't ever get this feature.

          O [microsoft.com] RLY [top500.org]? (okok, only 2 of the top500, but it's not like it's NOT used. I'd be surprised if it's used anywhere that's not being paid by Microsoft to do it though.)

          As I said :-)

      • My primary interest for this is games and emulators on handheld consoles, which have to do a certain amount of work per frame and then sleep until it is time for the next frame. If performance increases because there are fewer interruptions, a frame's work will be finished sooner, so the CPU can spend more time in sleep mode, thus saving power.
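
        In other words, the per-frame loop looks roughly like this (a generic sketch; do_frame_work() is a hypothetical stand-in for the real rendering/emulation work). The sooner the work finishes, the longer the sleep, and with a tickless kernel the CPU isn't woken during that sleep just to service an idle tick.

```python
# Generic frame-pacing loop: do one frame's work, then sleep until the
# next frame deadline so the CPU can drop into a low-power state.
import time

FRAME = 1 / 60.0                 # target: 60 frames per second

def do_frame_work():
    time.sleep(0.004)            # placeholder for real rendering/emulation work

deadline = time.monotonic()
for _ in range(600):             # roughly ten seconds of frames
    do_frame_work()
    deadline += FRAME
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)    # idle time; fewer wakeups here means more power saved
```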

  • The feature was added to the Linux kernel in 2.6.21: http://lwn.net/Articles/223185/ [lwn.net]
  • So Intel's CPU architecture is so clunky and complicated compared with the likes of ARM that it needs special, intricate OS kernel hacks to get close to the same level of power consumption as the more efficient processors?
