
ARM's Cortex-A72 and Mali-T880 GPU Announced For 2016 Flagship Smartphones

MojoKid writes: ARM's Cortex-A57 is just now starting to hit its stride, with design wins and full-ramp production in new mobile products. However, ARM is already releasing a wealth of information on its successor, the Cortex-A72. ARM is targeting a core clock of 2.5GHz for the Cortex-A72, and it will be built on a 14nm/16nm FinFET+ process. Using the Cortex-A15 (NVIDIA Tegra 4, Tegra K1) as a baseline, ARM says the Cortex-A57 (Qualcomm Snapdragon 810, Samsung Exynos 5433) offers 1.9x the performance. The Cortex-A72, which will begin shipping in next year's flagship smartphones, offers 3.5x the baseline performance of the Cortex-A15. These performance increases are achieved within the same power envelope across all three architectures, so the Cortex-A72 can perform the same workload as the Cortex-A15 while consuming 75 percent less power. Much like the Snapdragon 810, which uses a big.LITTLE configuration (four low-power Cortex-A53 cores paired with four high-performance Cortex-A57 cores), future SoCs using the Cortex-A72 will also be capable of big.LITTLE pairings with the Cortex-A53. ARM has also announced its new Mali-T880 GPU, which offers 1.8x the performance of the current-generation Mali-T760. Under identical workloads, the Mali-T880 offers a 40 percent reduction in power consumption compared to its predecessor, and ARM points to optimizations in the Mali-T880 to efficiently support 4K video playback.
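A rough back-of-envelope sketch (C++, illustrative only, not ARM's published methodology) of how a 3.5x sustained-performance gain at the same power envelope relates to the quoted "75 percent less power" figure for a fixed workload:

#include <iostream>

int main() {
    // Relative figures quoted in the summary; baseline is the Cortex-A15.
    const double a15_perf = 1.0;
    const double a72_perf = 3.5;   // claimed sustained performance vs. A15
    const double power    = 1.0;   // same power envelope for both cores

    // Energy for a fixed workload = power * time, and time scales as 1/perf.
    const double a15_energy = power / a15_perf;
    const double a72_energy = power / a72_perf;

    // ~29% of the baseline energy, i.e. roughly 71% less; the quoted 75%
    // figure presumably also folds in process (FinFET) improvements.
    std::cout << "A72 energy vs A15: "
              << 100.0 * a72_energy / a15_energy << "%\n";
    return 0;
}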


  • by Anonymous Coward

    They always say things like that, but we just keep using bigger and bigger batteries (partly because of bigger screens) and yet battery life seems to only get worse year after year.

    • They always say things like that, but we just keep using bigger and bigger batteries (partly because of bigger screens) and yet battery life seems to only get worse year after year.

      I just got home from work. My Galaxy4 battery is at 92% after 8 hours. If you are having problems with battery life, you may want to change your habits. I only turn wifi and Bluetooth on when I am actively using them. I watch how much power different apps use, and replace those that suck too much (similar apps can differ by orders of magnitude). I exit (not just close) apps when I am not using them. If my battery life is dropping faster than expected, I reboot to clear out any background processes.

      • I don't do any of that jazz, and my Nexus 4 would last for days, and my Moto G lasts for days... 2-3 of them, approximately. Any halfway decent phone has plenty of battery now unless you're really burning it up, in which case it's not unreasonable to expect to have to charge it once a day.

      • by ebrandsberg ( 75344 ) on Thursday February 05, 2015 @12:04AM (#48986431)

        Based on my experience, the #1 power consumer is... a bad cell signal. If you are at 92% after 8 hours on ANY phone, you are likely sitting in a building with a cell tower a few feet from your head, or you are just straight up lying about your power usage (or both). I've taken a few last-gen phones, put them on airplane mode, then powered up wifi, and they can last over a week. What burns the battery? Mobile data access, and the screen.

        • Re: (Score:2, Informative)

          by gl4ss ( 559668 )

          Your last-gen phones turn off wifi in sleep if they last over a week (there's an option for that).

          What keeps data-connected phones burning battery is being data connected, which leads to phones having stuff running; updating the news, weather and all that shit doesn't come free. Lots of stuff doesn't get woken up if there is no data connection.

          So, the easiest thing is to just turn off data when you're not using it. Of course, you then can't receive skype, gtalk or whatever voip calls or instant messaging on i

          • by Anonymous Coward

            A crappy old Huawei Y300 does well over a week with wifi and cellular data on, and happily gets hangouts/gmail/fb/... notifications.

            Install skype and it's flat after less than 2 days.

            What keeps data-connected phones burning battery is shit apps that constantly wake the CPU and chatter over the network for no good reason.

            So, the easiest thing is to just get rid of skype and enjoy not having to play human power management system.

      • My Sony Z2 checks the GPS every now and then and doesn't even bother trying to use Wifi if I'm not actually physically near any of my known wifi points.
        Stuff like that is quite sensible and practical. I equally get ridiculous battery life.

      • by AK Marc ( 707885 )
        My Galaxy S3 will not last an hour playing Angry Birds (it's a great entertainer for the kids when we find we're stuck somewhere and have to wait a while). For pure standby, it eats about 10% per hour. Much better in airplane mode with all apps forcibly closed, but then it's not a phone, just a tiny tablet with no connection.
    • by raymorris ( 2726007 ) on Thursday February 05, 2015 @12:34AM (#48986557) Journal

      The newer SoCs have two high-performance cores and two low-power cores. Like the old Quadrajet carburetors, efficiency drops quite a bit when the high-performance side kicks in.

      That said, the screen and radios take up most of the power for most people. Dim the screen and turn off Bluetooth and WiFi as appropriate, or use power-saving mode to automate that process.

      • Actually, perf per watt, or computations performed using N joules of energy, is frequently better for the bigger cores. That's especially true for newer low-leakage deep submicron process nodes.

        • Computations per joule is not the relevant measurement. The relevant measurement is hours per charge. If you keep the computations per second below the threshold that the 53s can handle, the big cores never light and the battery lasts longer.

          A tractor-trailer gets better mileage per pound than a sedan. So do you drive a big rig to work to save gas?

          • No. As the other poster says, these cores consume negligible amounts of power when not in use. The performance per Watt of the bigger cores can often be better, so it can consume less energy to power one of the big cores for 250ms than power the little core for 1s, yet still get the same (or more) work done. If your OS scheduler is able to coalesce events then this can be a big energy saving (and, remember, it's energy not power that matters for battery life: your battery can - more or less - supply a fi
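            A toy race-to-idle comparison along these lines (the numbers are hypothetical, chosen only to illustrate the energy-versus-power point; no real SoC is being measured here):

            #include <iostream>

            int main() {
                // Hypothetical figures for one bursty task.
                const double big_w    = 1.00;  // big core power (W)
                const double big_s    = 0.25;  // finishes in 250 ms
                const double little_w = 0.30;  // little core power (W)
                const double little_s = 1.00;  // needs the full second
                const double idle_w   = 0.01;  // deep-sleep power (W)
                const double window_s = 1.00;  // same 1 s window for both

                // Energy = power * time, summed over the window.
                const double big_j =
                    big_w * big_s + idle_w * (window_s - big_s);
                const double little_j = little_w * little_s;

                std::cout << "big core + sleep: " << big_j << " J\n";
                std::cout << "little core:      " << little_j << " J\n";
                return 0;  // prints 0.2575 J vs 0.3 J
            }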
            • You realize you're claiming that ARM's chip architects are completely wrong and have been for a while now? You know they actually measure this stuff before they spend a few billion dollars fabbing chips.

              > can consume less energy to power one of the big cores for 250ms than power the little core for 1s

              If you need to do 500 million operations, you're close to the point where it makes sense to power the faster core, yes. Your phone spends 99% of its time with picoseconds of CPU work to b

              • by TheRaven64 ( 641858 ) on Thursday February 05, 2015 @05:35AM (#48987333) Journal

                No, I'm not claiming that they're wrong - I'm repeating things that they've told me. We have a project with them to investigate good power-efficient scheduling behaviour for precisely this reason: The big.LITTLE configuration does not mean that it's always better to use the little cores, it means that it's better to use the little core for long-running tasks that have a lot of I/O and so can't put the core to sleep, but aren't CPU-bound. If you have something CPU-bound, then you're often better off doing it on the big core and then going back to sleep. Detecting these workloads is not a trivial problem.

                There are also some corner cases that are also quite interesting. The A7 has lower latency access to L1 than the A15, so for workloads with a very small working set, running them on the A7 can actually be faster (this shows up in one of the SPEC benchmarks).
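                A sketch of that kind of decision in C++ (my paraphrase of the heuristic described above, with made-up threshold values; not ARM's or any real kernel's scheduler):

                enum class Core { Little, Big };

                struct TaskStats {
                    double cpu_utilisation;  // fraction of time executing
                    double wakeups_per_sec;  // interrupt / I/O wakeup rate
                };

                // Interrupt-heavy but not CPU-bound work stays on the
                // little core; CPU-bound bursts go to the big core,
                // which then races back to sleep.
                Core pick_core(const TaskStats& t) {
                    const double cpu_bound = 0.7;   // assumed thresholds
                    const double chatty    = 50.0;
                    if (t.cpu_utilisation > cpu_bound) return Core::Big;
                    if (t.wakeups_per_sec > chatty)    return Core::Little;
                    return Core::Little;
                }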

                • > that it's better to use the little core for long-running tasks that have a lot of I/O and so can't put the core to sleep, but aren't CPU-bound.

                  If I'm understanding you correctly, you're saying it only saves power to use the little cores if there is I/O involved, such as a mobile network, which is obviously much slower than the CPU cores. Or maybe a storage device, like an SD card. Or any user interaction.

                  You're right, very few things that you do on a mobile phone would involve either the network, the S

                  • If I'm understanding you correctly, you're saying it only saves power to use the little cores if there is I/O involved, such as a mobile network, which is obviously much slower than the CPU cores. Or maybe a storage device, like an SD card. Or any user interaction.

                    No, I'm saying that it saves power to use the little cores if you have a load of interrupts that prevent the core from going to sleep, but are not CPU-bound. For some interactive tasks (lots of moderately demanding apps), you're CPU-bound for short bursts but you can then put the core to sleep and wait.

                    User interaction is often on a timescale where you can put the core into a low power state while you wait for a ponderously slow user (in comparison to CPU speeds) to press a button. Simple animations can

                    • > Before you try to sound patronising again,

                      Sorry about that.

                      If I'm NOW understanding you correctly, you're saying that the big core is better IF the pause is long enough to enter low-power and sleep long enough to make it worth it, correct? Further, I'm reading between the lines and thinking you're saying that on a phone, that's normally the case - that the 53 cores aren't used often, or shouldn't be. Is that correct?

                    • If I'm NOW understanding you correctly, you're saying that the big core is better IF the pause is long enough to enter low-power and sleep long enough to make it worth it, correct?

                      Kind of. The big core is usually able to perform more computation per Joule, but draws more power while in its high-power state, so if you can complete some work and then sleep, the big core is usually better. If you have a constant stream of work, the little core is better.

                      Further, I'm reading between the lines and thinking you're saying that on a phone, that's normally the case - that the 53 cores aren't used often, or shouldn't be. Is that correct?

                      No, they're both used, but it isn't always a clear-cut decision which one is optimal. There are some other issues too. They don't have a shared L1 cache, so you take a small performance hit every time you migrate between them.

                      A pho

                • by Prune ( 557140 )
                  ARM still sucks. Note how comparisons between ARM and mobile x86 are based on raw compute power, which ignores the elephant in the room: x86 enforces a much stronger memory model (Intel themselves were guilty of doing this with Itanium). To implement the same lockless multithreaded algorithms on ARM, you'd have to insert explicit barriers; how do you think that would affect its performance relative to x86, which has much stricter reordering constraints? You can say "just use locks", but comparisons should b
                  • It should be noted that most programmers will never write or directly use a lockless multithreaded algorithm. The number of things on a phone or tablet that need (or even would benefit significantly from) such an algorithm is relatively small.

                    Most of the time I suspect that the various cores on a mobile device are doing independent things. The percentage of time that the average phone/tablet is going to be doing massively parallel cpu-bound work is tiny.

                  • To implement the same lockless multithreaded algorithms on ARM, you'd have to insert explicit barriers; how do you think that would affect its performance relative to x86, which has much stricter reordering constraints?

                    How does POWER (which has a very similar memory model to ARMv8) fare against x86? It's not as clear-cut as you make it out to be. Explicit barriers amount to bus traffic and that's what adds the overhead (in performance and power). On x86, you're paying that cost whenever you have cache lines aliased across cores, even if you don't need it. On ARM, you only pay the cost when you need it. If you're programming with the C[++]11 concurrency model, then the compiler will sort out the barriers for you and t
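                    A minimal C++11 sketch of that last point (assuming nothing beyond the standard <atomic> header): you state the ordering you need, and the compiler emits whatever barriers the target requires, so the weaker ARM model only pays for them where they matter.

                    #include <atomic>

                    std::atomic<bool> ready{false};
                    int payload = 0;

                    void producer() {
                        payload = 42;
                        // Release store: on ARM this becomes a
                        // store-release (or a barrier); on x86 a
                        // plain store already suffices.
                        ready.store(true, std::memory_order_release);
                    }

                    int consumer() {
                        // Acquire load pairs with the release store.
                        while (!ready.load(std::memory_order_acquire)) {
                        }
                        return payload;  // guaranteed to observe 42
                    }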

          • Computations per joule is not the relevant measurement. The relevant measurement is hours per charge. If you keep the computations per second below the threshold that the 53s can handle, the big cores never light and the battery lasts longer.

            A tractor-trailer gets better mileage per pound than a sedan. So do you drive a big rig to work to save gas?

            If you never tax the motor of your sedan to save gas, why didn't you buy one with a smaller motor in the first place?

      • by amiga3D ( 567632 )

        I find that if I don't use my phone the battery lasts for days. Whatever happened to those fuel cells that used lighter fluid to power laptops? That's what we need for smartphones. Zippo batteries.

      • Like the old Quadrajet carburetors, efficiency drops quite a bit when the high-performance side kicks in.

        Not necessarily. Efficiency can actually increase if the high-power cores are able to bring the whole system to a low-power state sooner.

        • blah blah blah, everyone keeps saying that, and yet my battery life is always better when I keep the CPU max clock at about 80% of full speed.
          I'm sorry, but physics are a bitch, and you are too for claiming that power doesn't follow the cube of voltage in SoCs. (yes, cube. It follows the cube of voltage, not the I^2R you're used to seeing)
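          For reference, the cube claim follows from the usual first-order dynamic-power model; a rough sketch, ignoring leakage and assuming voltage scales roughly linearly with frequency under DVFS:

          #include <cmath>
          #include <iostream>

          int main() {
              // P_dyn ~ C * V^2 * f, and with V scaling roughly
              // linearly with f under DVFS, power goes as f^3.
              const double clock_scale = 0.8;  // 80% of max clock
              const double rel_power   = std::pow(clock_scale, 3.0);
              std::cout << "dynamic power at 80% clock: ~"
                        << 100.0 * rel_power << "%\n";  // ~51%
              return 0;
          }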

          • Right, you own a phone so you're an expert :)

            Keep in mind that the power management software in your phone may suck and fall short of achieving all the efficiencies that the hardware is capable of. BTW, it is not necessary to lecture me about power curves, far from it.

            • No, I studied electrical engineering at a school that's probably ranked much higher than yours; advanced semiconductor fundamentals was my second-favorite class, and embedded microcontroller design was my third.

              In addition, if what these clowns said were worth listening to, I wouldn't achieve lower idle battery drain by setting a low max-screen-off frequency.

              • EEs are famous for thinking they have a clue about software :)

                • my job title is software engineer and I've written several drivers that have 100% up-time in multi-million dollar production deployments so...?

                  • So you most probably overestimate your ability in power management. Think about what might be necessary to achieve a win from sprint-to-power-save, and why the phone you own might not implement that. Think about the whole system.

    • by tlhIngan ( 30335 )

      They always say things like that, but we just keep using bigger and bigger batteries (partly because of bigger screens) and yet battery life seems to only get worse year after year.

      Well, ARM's power consumption has been pretty stable the entire time - about 1mW/MHz.

      The reason it's consuming tons of power and you need thermal throttling is because you're starting to pack a lot of MHz on a die.

      I mean, say 2.5GHz quad core. That's 2.5W/core at full tilt. With 4 of them, that's 10W! There's no way to get that s

    • In Scandinavian countries, overheating is a desired attribute. Cold weather really hurts batteries, so if the phone generates a little internal heat, it prolongs battery life.

      • Tbh, you have two choices for batteries today: charge them fast and they don't last as long, or charge them slow and they last forever. Heat is the problem the batteries have. If you charge them just fast enough that they are full in the morning, or at least never get hot, you are going to do well. Scandinavian conditions are hardly likely to make a difference, since the phone is in a pocket, your hand, or indoors while charging.

        The other idea is to buy a phone wi

        • Read the datasheets and whitepapers from the battery manufacturers. Charging them too slowly isn't good for them...plus it makes it harder to figure out when they're fully charged.

      • by Kjella ( 173770 )

        Only if you've got one of those new 5-6" "phones" that don't fit in your pocket; otherwise you usually have an ample supply of body heat that far exceeds what the phone will provide. And Scandinavia is not ridiculously cold: it's been colder in the lower 48 (Montana) than anywhere here; it's not Alaska or Siberia. You might have heard that Norway is a big country for Tesla? We wouldn't be if the batteries kept freezing to death.

        And if you want to spend battery, launch Skype. I swear that even with no chatting

  • That's all I want to know!

  • Especially when talking about silicon, since versions and die shrinks actually matter; e.g. the K1 T132 is Project Denver, which is 64-bit and uses a JIT compiler to get a speedup, while version T124 is the Cortex-A15 r3.

    The interesting thing will be the uptake of Unix-like OSes vs Windows 10 on ARM, which is sure to annoy Intel, who are losing market share!

    regards

    John Jones

  • by Anonymous Coward

    Android is holding ARM back.

    They have desktop-class processors held back by an OS that won't run multiple apps on a screen at once (well, without Samsung's extensions it won't). Meanwhile, the head of Android is focusing on Chrome at the expense of Android. As if a Chrome wrapper for Android to let it do multiple windows is somehow acceptable!

    It's ridiculous that ARM chips drive > 4K screens and yet Android has the calculator full screen.

    And while people and businesses expect their desktop PCs to be professional

  • ...now they need to find someone that has a 14nm FinFET process, since Intel isn't that interested in selling theirs. That seems to be the biggest issue holding people outside of Intel back these days: I hear a lot of talk about 20nm and smaller, but I'm not seeing much in the way of delivery; products still seem to be 28nm by and large.

    I think it may be a bit overoptimistic to think that TSMC will be doing 14nm by next year, given their recent history of over-promising and under-delivering on process tec

  • Can I have this part on a $37 Raspberry Pi mod next please?

  • by Billly Gates ( 198444 ) on Thursday February 05, 2015 @07:11AM (#48987551) Journal

    AMD is still stuck at 28nm while these are 14nm. Wow.

    Even the latest Intel ones are all 22nm.

    • Re: (Score:3, Informative)

      by the Hewster ( 734122 )
      Intel has shipped 14nm Core M CPUs (Broadwell) since December 2014, and these ARM chips will only ship in 2016, so Intel still has a healthy lead.
    • by Kjella ( 173770 ) on Thursday February 05, 2015 @08:33AM (#48987749) Homepage

      I'm fairly sure AMD has pretty much quit making new designs and is exiting the market, same as with Bulldozer. The APU sales are tanking; they did a $57 million inventory write-down on top of a $56 million operating loss on $662 million revenue in the "Computing and Graphics" segment last quarter, and are forecasting another 15% decline in revenue. Carrizo is probably coming, but I expect only incremental improvements; they're diversifying into so many other things there can't possibly be any money left for the R&D they'd need to create a new architecture.

      Sure, they can do die shrinks, that's not so hard, but a premium process costs premium money and AMD can't afford it; they need a value process to sell value chips. And it all depends on Samsung, Apple and TSMC - ARM can create the design but they still need to succeed with the production process. Intel struggled; maybe that's just Intel, or it'll be tough for everybody. In AMD's position they certainly don't want to jump the gun and suffer delays or an immature process with bad yields. I expect they'll go 20nm once Apple has moved to 14/16nm, and not before.

      • Part of the AMD fabs -> GlobalFoundries selloff tied AMD to using only GloFo for their CPU masks. The discrete GPUs are allowed to be on TSMC et al.
        In other words, the 'premium' you cite is not why.

    • by Anonymous Coward

      You're behind the times on tech news. Intel has certainly advanced into small form factors. But seriously, comparing an ARM chip to any Intel chip besides an Atom is ridiculous. ARM chips are fine in devices which basically run a mobile OS, but are really terrible at more complex OSes. Yes, they are getting better, but I would also argue so is Intel. AMD is by far out of touch with mobile platform support. Always has been, always will be.

