Hardware

big.LITTLE: ARM's Strategy For Efficient Computing 73

MojoKid writes "big.LITTLE is ARM's solution to a particularly nasty problem: smaller and smaller process nodes no longer deliver the kind of overall power consumption improvements they did years ago. Before 90nm technology, semiconductor firms could count on new chips being smaller, faster, and drawing less power at a given frequency. Eventually, that stopped being true. Tighter process geometries still pack more transistors per square millimeter, but the improvements to power consumption and maximum frequency have been falling with each smaller node. Rising defect densities have created a situation where — for the first time ever — 20nm wafers won't be cheaper than the 28nm processors they're supposed to replace. This is a critical problem for the mobile market, where low power consumption is absolutely vital. big.LITTLE is ARM's answer to this problem. The strategy requires manufacturers to implement two sets of cores — the Cortex-A7 and Cortex-A15 are the current match-up. The idea is for the little cores to handle the bulk of the device's work, with the big cores used for occasional heavy lifting. ARM's argument is that this approach is superior to dynamic voltage and frequency scaling (DVFS) because it's impossible for a single CPU architecture to retain a linear performance/power curve across its entire frequency range. This is the same argument Nvidia made when it built the Companion Core in Tegra 3."
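The migration idea described in the summary — little cores by default, big cores only under bursty load — can be sketched as a toy threshold policy. Everything here (the thresholds, the hysteresis band, the function itself) is invented for illustration; real kernels use cpufreq governors and scheduler heuristics, not this logic.

```python
# Toy sketch of a big.LITTLE cluster-migration policy (hypothetical
# thresholds; real schedulers are far more sophisticated).

LITTLE, BIG = "Cortex-A7", "Cortex-A15"

def pick_cluster(load: float, current: str = LITTLE,
                 up: float = 0.8, down: float = 0.3) -> str:
    """Return the cluster that should run the workload.

    Hysteresis: migrate up only above `up`, back down only below
    `down`, so the policy does not ping-pong around one threshold.
    """
    if current == LITTLE and load > up:
        return BIG
    if current == BIG and load < down:
        return LITTLE
    return current

# Light work stays on the little cores; a burst migrates to the big ones.
assert pick_cluster(0.2) == LITTLE
assert pick_cluster(0.95) == BIG
assert pick_cluster(0.5, current=BIG) == BIG  # inside the hysteresis band
```

The hysteresis band is the interesting part: without it, a workload hovering near a single threshold would bounce between clusters, and every migration costs energy and latency.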
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    This solution _might_ be more power efficient. But it cannot be more die- and space-efficient. If you're trying to keep die sizes down so you can place more other crap on the die, a ginormous core complex does not really fit the bill. Besides, if you want to keep core context-switch times low, you must keep all the caches etc. on the larger cores hot, and that draws power. This solution probably fits when you start a game, so that you have an explicit trigger to switch to the larger cores. If you are talking on demand ultra

    • by TheRaven64 ( 641858 ) on Wednesday July 10, 2013 @04:00AM (#44235447) Journal

      This solution _might_ be more power efficient. But it can not be more die and space efficient

      Two words: Dark Silicon. As process technologies have improved, the amount of the chip that you can have powered at any given time has decreased. This is why we've seen a recent rise in instruction set extensions that improve the performance of a relatively small set of algorithms. If you add something that needs to be powered all of the time, all you do is push closer to the thermal limit where you need to reduce clock speed. If you add something that is only powered infrequently, then you can get a big performance win when it is used but pay a price when it isn't.

      TL;DR version: transistors are cheap. Powered transistors are expensive.
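      The dark-silicon point above can be made with back-of-the-envelope arithmetic. Every number below is invented for illustration: assume a fixed thermal budget, transistor count doubling per node, but per-transistor power dropping by only ~35% per node. The fraction of the die you can power at once then shrinks with every generation.

      ```python
      # Back-of-the-envelope dark-silicon arithmetic (all numbers assumed).
      budget = 1.0        # fixed thermal budget, normalized
      transistors = 1.0   # normalized transistor count at the starting node
      power_per_t = 1.0   # normalized per-transistor power at that node

      for node in range(4):       # four hypothetical process shrinks
          transistors *= 2.0      # density still doubles per node
          power_per_t *= 0.65     # but power drops only ~35% per node (assumed)

      # Fraction of the chip that can be lit simultaneously:
      powered_fraction = min(1.0, budget / (transistors * power_per_t))
      assert powered_fraction < 0.5  # well under half the die after 4 shrinks
      ```

      Under these assumptions, after four shrinks barely a third of the transistors can be powered at once, which is exactly why rarely-used accelerators and extra core clusters are cheap to add but expensive to run.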

      • by RoboJ1M ( 992925 )

        Agreed.
        If a perfect one-size-fits-all solution (such as Intel tries to build) ever arrives, it won't be for a very long time. If ever.
        Transistors are cheap, so just build for whatever your target usage is.
        I seem to remember something about AMD and some sort of interconnect tech that would (one day) let you quickly/cheaply/easily connect modular chip bits and really easily build for a target market.

    • At 28nm? It's the difference between 'tiny' and 'almost as tiny.' The packaging is many times the size of the chip, so if you can get both chips in one package it won't add anything. Power is more important.

    • by RoboJ1M ( 992925 ) on Wednesday July 10, 2013 @05:22AM (#44235755)

      Found it:

      http://semiaccurate.com/2013/05/01/sonics-licenses-fabric-tech-to-arm/ [semiaccurate.com]

      "Sonics and ARM just made an agreement to use Sonics interconnects patents and some power management tech in ARM products."

      "If Sonics is to be taken at face value on their functionality, then you can slap just about any IP block you have on an ARM core now with a fair bit of ease."

      This is kind of relevant too, the internet will eat all our electricities:

      http://www.theregister.co.uk/2012/11/26/interview_rod_tucker/ [theregister.co.uk]

      "and if we don’t do anything, it could become ten percent between 2020 and 2025"

      Although if you read it, the lion's share of the internet's electricity usage is actually those amp-happy DSL connections we have.

    • An asymmetric SMP machine is nothing new.

      That's a contradiction in terms. An asymmetric symmetric multiprocessor can't exist by definition, so it certainly can't exist as a new thing.

  • old news (Score:3, Informative)

    by Anonymous Coward on Wednesday July 10, 2013 @02:52AM (#44235203)

    Advertising much?

  • Some marketdroid had a field day finding that name, sheesh...

    I would have added "i" in front of it, personally. Everybody knows i-anything is teh kewl these days.

  • It's like a hybrid vehicle: when you only need to go slow it runs on a small motor; when you need power the big engine kicks in, but needs more juice.
  • by m6ack ( 922653 ) on Wednesday July 10, 2013 @03:27AM (#44235347)

    Big/little is a lazy way out of the power problem... Instead of investing in design and development and in fine-grained power control in your processor, you make the design decision of, "Heck with this -- silicon is cheap!" and throw away a good chunk of silicon when the processor goes into a different power mode... You have no graceful scaling -- just a brute-force throttle and a clunky interface for the kernel.

    So not all ARM licensees have been convinced of, or seen the need for, a big/little architecture, because big/little has the big disadvantages of added complexity and wasted real estate (and cost) on the die. Unlike Nvidia (Tegra) and Samsung (Exynos), Qualcomm has so far been able to keep power under control in its Snapdragon designs without resorting to big/little, and has thus been able to excel on the phone. So far, the Qualcomm strategy seems to be a winning one for phones in terms of both overall power savings and performance per milliwatt -- on phones, every extra hour of battery life is a cherished commodity. The same may not be true for tablets, which can carry larger batteries and where performance at "some reasonable expectation" of battery life may be the more important thing.
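    The DVFS-versus-big.LITTLE trade-off both camps are arguing about comes down to physics: dynamic power scales roughly as C·V²·f, and voltage must rise with frequency, so energy per operation climbs at the top of a core's frequency range. A toy model (all constants invented, including the assumed V/f relationship) shows the shape of the curve:

    ```python
    # Toy model of why one core's energy efficiency bends at high clocks.
    # P = C * V^2 * f, with an assumed linear V(f); all constants invented.

    def energy_per_op(freq_ghz: float) -> float:
        volts = 0.8 + 0.25 * freq_ghz        # assumed voltage/frequency curve
        power = 1.0 * volts ** 2 * freq_ghz  # dynamic power, C normalized to 1
        return power / freq_ghz              # energy per op scales with V^2

    low = energy_per_op(0.8)    # little-core territory
    high = energy_per_op(2.0)   # big-core territory
    assert high > 1.5 * low     # each op costs far more energy at high clocks
    ```

    This is the gap a little core (or, in Qualcomm's approach, aggressive per-core DVFS and binning) tries to exploit: the same work done slower costs measurably fewer joules.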

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      where on phones every extra hour of battery life is a cherished commodity. Such may not be true for tablets that can stand to have larger batteries and where performance at "some reasonable expectation" of battery life may be the more important.

      This isn't directly for phones and tablets and it isn't "a lazy way out of the power problem".
      We are not talking about a gradual increase in efficiency here, this is to solve the standby energy requirements for permanently powered consumer devices like TV-sets. (See the One Watt Initiative [wikipedia.org])
      The first generation of devices that solved the problem had dual power supplies. One that was optimized for high efficiency for a low load. This was used to power a microcontroller that dealt with the remote control and s

    • by Anonymous Coward

      Lazy and effective is wiser than difficult and effective, every time. When you favour doing a clever design over adding a whole lot of cheap silicon you wind up with unreliable, hard to design, hard to build, hard to write-for monsters like the PS2 and PS3 architectures.

  • In the original 1965 paper, Moore was talking about the density of the least-cost process, and this part is often left out when summarizing what Moore's law is. The full Moore's law is "the density of the least-cost process doubles every 18 months." This means Moore's law can fail in two ways: if we can't get any denser (not yet), or if we can get denser, but not at a lower cost too (now?). The following says that the "full Moore's law" stops at 28 nm:

    Rising defect densities have created a situation where — for the first time ever — 20nm wafers won't be cheaper than the 28nm processors they're supposed to replace.

    The economic part is often left out on tech sites discus
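    The "least-cost" framing can be made concrete with a cost-per-transistor calculation. Every figure below is an assumption for illustration, not industry data: a mature 28nm process with good yield versus a 20nm process with double the density but pricier wafers and a worse defect rate.

    ```python
    # Illustrative cost-per-transistor comparison (all figures assumed).

    def cost_per_transistor(wafer_cost, dies_per_wafer, yield_frac,
                            transistors_per_die):
        good_dies = dies_per_wafer * yield_frac
        return wafer_cost / (good_dies * transistors_per_die)

    # 28nm: mature process, decent yield (assumed numbers).
    c28 = cost_per_transistor(wafer_cost=4000, dies_per_wafer=500,
                              yield_frac=0.85, transistors_per_die=1.0e9)

    # 20nm: ~2x density, but a pricier wafer and worse defect density.
    c20 = cost_per_transistor(wafer_cost=6500, dies_per_wafer=500,
                              yield_frac=0.55, transistors_per_die=2.0e9)

    # Density doubled, yet cost per transistor went UP:
    assert c20 > c28
    ```

    Under these made-up but plausible-shaped numbers, the denser node is the more expensive place to buy a transistor, which is exactly the "full Moore's law" failure mode the comment describes.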

    • 20nm wafers won't be cheaper than the 28nm

      Won't be, or currently aren't? There's always the possibility of improved 20nm yields.

    • It's still too early to declare the end of Moore's Law, if for no other reason than that very few fabs can produce 20nm chips yet, so nobody can tell if Intel made a mistake somewhere.

      Yields increase throughout the life of a fab process, and those 18 months aren't quite exact anyway. We can still get back to normal.

  • The cost of a 45 nm wafer was higher than that of a 65 nm wafer, etc. It was only the cost of an individual die that went down, because with a smaller geometry an equivalent die was smaller, thus there were more of them per wafer.
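  The geometry behind that comment is easy to check with an idealized sketch (square dies, no edge loss, invented wafer prices and die sizes): a 65nm→45nm shrink scales linear dimensions by ~45/65, so die area falls by (45/65)² and die count per wafer roughly doubles, letting per-die cost drop even as per-wafer cost rises.

  ```python
  # Idealized dies-per-wafer sketch (square dies, no edge loss,
  # all prices and sizes invented for illustration).
  import math

  WAFER_DIAMETER_MM = 300
  wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # mm^2

  die_area_65 = 100.0                          # mm^2, assumed 65nm design
  die_area_45 = die_area_65 * (45 / 65) ** 2   # same design, optically shrunk

  dies_65 = wafer_area // die_area_65
  dies_45 = wafer_area // die_area_45

  # Per-die cost falls even though the newer wafer costs more (assumed prices):
  cost_65 = 3000 / dies_65   # assumed 65nm wafer price
  cost_45 = 4000 / dies_45   # assumed pricier 45nm wafer
  assert dies_45 > dies_65 and cost_45 < cost_65
  ```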
  • Came here for a companion cube analogy, leaving disappointed :(

  • The same strategy enabled high-EER air conditioning: use a small compressor which runs most of the time plus a larger one to handle peak cooling loads, rather than an even bigger compressor which cycles on and off frequently.
