big.LITTLE: ARM's Strategy For Efficient Computing

MojoKid writes "big.LITTLE is ARM's solution to a particularly nasty problem: smaller and smaller process nodes no longer deliver the kind of overall power consumption improvements they did years ago. Before 90nm technology, semiconductor firms could count on new chips being smaller, faster, and drawing less power at a given frequency. Eventually, that stopped being true. Tighter process geometries still pack more transistors per square millimeter, but the improvements to power consumption and maximum frequency have been falling with each smaller node. Rising defect densities have created a situation where — for the first time ever — 20nm wafers won't be cheaper than the 28nm wafers they're supposed to replace. This is a critical problem for the mobile market, where low power consumption is absolutely vital. big.LITTLE is ARM's answer to this problem. The strategy requires manufacturers to implement two sets of cores — the Cortex-A7 and Cortex-A15 are the current match-up. The idea is for the little cores to handle the bulk of the device's work, with the big cores used for occasional heavy lifting. ARM's argument is that this approach is superior to dynamic voltage and frequency scaling (DVFS) because it's impossible for a single CPU architecture to retain a linear performance/power curve across its entire frequency range. This is the same argument Nvidia made when it built the Companion Core in Tegra 3."
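The DVFS-versus-big.LITTLE trade-off described in the summary can be sketched numerically. The numbers and power curves below are invented for illustration (they are not ARM's figures): dynamic power scales roughly with C·V²·f, voltage must rise with frequency, and the big core pays a fixed leakage overhead, so a heterogeneous pair can pick whichever core meets a workload's throughput demand at lower power.

```python
# Illustrative sketch (made-up numbers): why two core types can beat one
# core with DVFS alone. Dynamic power scales roughly with C * V^2 * f,
# voltage rises with frequency, so power grows super-linearly with speed.

def core_power(freq_ghz, cap, v_min, v_slope, static_w):
    """Approximate power (W) of one core design at a given frequency."""
    voltage = v_min + v_slope * freq_ghz           # V must rise with f
    return cap * voltage**2 * freq_ghz + static_w  # dynamic + leakage

# Hypothetical "LITTLE" core: small, low leakage, tops out at 1.2 GHz.
def little_power(f):
    return core_power(f, cap=0.4, v_min=0.8, v_slope=0.2, static_w=0.05)

# Hypothetical "big" core: ~3x the per-cycle throughput, more leakage.
def big_power(f):
    return core_power(f, cap=1.5, v_min=0.9, v_slope=0.3, static_w=0.3)

def cheapest_core(perf_demand):
    """Pick the core (by name) that meets perf_demand, expressed in
    'LITTLE-core GHz equivalents', at the lowest power."""
    options = []
    if perf_demand <= 1.2:                 # LITTLE can cover the demand
        options.append(("LITTLE", little_power(perf_demand)))
    big_freq = perf_demand / 3.0           # big core does 3x work/cycle
    if big_freq <= 2.0:
        options.append(("big", big_power(big_freq)))
    return min(options, key=lambda o: o[1])

if __name__ == "__main__":
    for demand in (0.3, 1.0, 3.0):
        core, watts = cheapest_core(demand)
        print(f"demand {demand}: {core} core at {watts:.2f} W")
```

With these toy curves the LITTLE core wins at low and moderate demand and the big core only pays off for heavy loads, which is the shape of the argument ARM is making.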


  • by Anonymous Coward on Wednesday July 10, 2013 @04:08AM (#44235261)

    Powered-down circuits have no leakage. Also a "little" implementation has vastly lower leakage than a bigger core.

  • by m6ack ( 922653 ) on Wednesday July 10, 2013 @04:27AM (#44235347)

    Big/little is a lazy way out of the power problem... Instead of investing in design and development and in fine-grained power control in your processor, you make the design decision of, "Heck with this -- silicon is cheap!" and throw away a good chunk of silicon when the processor goes into a different power mode... You have no graceful scaling -- just a brute-force throttle and a clunky interface for the kernel.

    So, not all ARM licensees have been convinced of the need to go to a big/little architecture, because big/little has the big disadvantages of added complexity and wasted real estate (and cost) on the die. Unlike Nvidia (Tegra) and Samsung (Exynos), Qualcomm has thus far been able to keep power under control in its Snapdragon designs without resorting to big/little, and has thus been able to excel on the phone. So far, the Qualcomm strategy seems to be a winning one for phones in terms of both overall power savings and performance per milliwatt -- and on phones, every extra hour of battery life is a cherished commodity. The same may not be true for tablets, which can stand to have larger batteries, and where performance at "some reasonable expectation" of battery life may be the more important factor.

  • by TheRaven64 ( 641858 ) on Wednesday July 10, 2013 @05:00AM (#44235447) Journal

    This solution _might_ be more power efficient. But it cannot be more die-space efficient

    Two words: Dark Silicon. As process technologies have improved, the amount of the chip that you can have powered at any given time has decreased. This is why we've seen a recent rise in instruction set extensions that improve the performance of a relatively small set of algorithms. If you add something that needs to be powered all of the time, all you do is push closer to the thermal limit where you need to reduce clock speed. If you add something that is only powered infrequently, then you can get a big performance win when it is used but pay a price when it isn't.

    TL;DR version: transistors are cheap. Powered transistors are expensive.

  • by TheRaven64 ( 641858 ) on Wednesday July 10, 2013 @05:19AM (#44235497) Journal
    The power difference between an A7 and an A15 is huge. There's really nothing you could do to something like an A15 to get it close to the power consumption of the A7 without killing performance. They're entirely different pipeline structures (in-order, dual-issue-if-you're-lucky vs. out-of-order multi-issue). The first generation from Samsung had some bugs in cache coherency that made them painful for the OS, but the newer ones are much better: they allow you to have any combination of A7s and A15s powered at the same time, so if you have a single CPU-bound task you can give it an A15 to run on and put everything else on one or more A7s (depending on how many other processes you've got, running multiple A7s at a lower clock speed may be more efficient than running one at full speed). The OS is in a far better place to make these decisions than the CPU, because it can learn a lot about the prior behaviour of a process and about other processes spawned from the same program binary.
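    The OS-side placement decision described in this comment (give a sustained CPU-bound task a big core, batch everything else on little cores) can be sketched as a simple heuristic. The `Task` structure, utilization threshold, and core counts below are invented for illustration; a real kernel scheduler tracks far more state than this.

    ```python
    # Toy sketch of an OS-side big.LITTLE placement heuristic, in the
    # spirit of the comment above. The Task structure and thresholds are
    # invented for illustration, not a real kernel interface.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        recent_cpu_util: float  # 0.0..1.0, observed over a recent window

    def place_tasks(tasks, big_cores=2, util_threshold=0.8):
        """Map tasks to 'big' or 'LITTLE' cores: sustained CPU-bound
        tasks get big cores (up to big_cores of them); everything else
        shares the LITTLE cores, possibly at a reduced clock."""
        placement = {"big": [], "LITTLE": []}
        # Heaviest tasks first, so big cores go to the heaviest work.
        for task in sorted(tasks, key=lambda t: t.recent_cpu_util,
                           reverse=True):
            if (task.recent_cpu_util >= util_threshold
                    and len(placement["big"]) < big_cores):
                placement["big"].append(task.name)
            else:
                placement["LITTLE"].append(task.name)
        return placement

    if __name__ == "__main__":
        tasks = [Task("video_encode", 0.97), Task("ui", 0.15),
                 Task("mail_sync", 0.05), Task("game", 0.90)]
        print(place_tasks(tasks))
    ```

    The point the comment makes is that this history (recent_cpu_util here) is exactly what the OS has and the CPU itself does not.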
  • Re:and... (Score:5, Insightful)

    by TheRaven64 ( 641858 ) on Wednesday July 10, 2013 @05:31AM (#44235547) Journal

    Nothing, except that Intel's most power efficient chips are in the same ballpark as the A15 (the power-hungry, fast 'big' chip) and they currently have nothing comparable to the A7 (the power-efficient, slow 'LITTLE' chip). And in the power envelope of the A7, an x86 decoder is a significant fraction of your total power consumption.

    One of the reasons why RISC had an advantage over CISC in the '80s was the large amount of die area (10-20% of the total die size) that the CISC chips had to use to deal with the extra complexity of decoding a complex non-orthogonal variable-length instruction set. This started to be eroded in the '90s for two reasons. The first was that chips got bigger, whereas decoders stayed the same size and so were proportionally smaller. The second was that CISC encodings were often denser, and so used less instruction cache, than RISC.

    Intel doesn't have either of these advantages at the low-power end. The decoder is still a significant fraction of a low-power chip and, worse, it is a part that has to be powered all of the time. They also don't win on instruction density, because x86 and Thumb-2 are approximately equally dense.

    MIPS might be able to do something similar. They've been somewhat unfocussed in the processor design area for the past decade, but this has meant that a lot of their licensees have produced chips with very different characteristics, so they may be able to license two of these and implement something similar quite easily. Their main problem is that no one cares about MIPS.
