ARM's Cortex-A72 and Mali-T880 GPU Announced For 2016 Flagship Smartphones 85
MojoKid writes: ARM's Cortex-A57 is just now starting to hit its stride, with design wins and full-ramp production in new mobile products. However, ARM is already releasing a wealth of information on its successor, the Cortex-A72. ARM is targeting a core clock of 2.5GHz for the Cortex-A72, and it will be built on a 14nm/16nm FinFET+ process. Using the Cortex-A15 (NVIDIA Tegra 4, Tegra K1) as a baseline, ARM says the Cortex-A57 (Qualcomm Snapdragon 810, Samsung Exynos 5433) offers 1.9x the performance. The Cortex-A72, which will begin shipping in next year's flagship smartphones, offers 3.5x the baseline performance of the Cortex-A15. These gains come within the same power envelope across all three architectures, so the Cortex-A72 can perform the same workload as the Cortex-A15 while consuming 75 percent less power. Much like the Snapdragon 810, which uses a big.LITTLE configuration (four low-power Cortex-A53 cores paired with four high-performance Cortex-A57 cores), future SoCs using the Cortex-A72 will also be capable of big.LITTLE pairings with the Cortex-A53.

ARM has also announced its new Mali-T880 GPU, which offers 1.8x the performance of the current-generation Mali-T760. Under identical workloads, the Mali-T880 offers a 40 percent reduction in power consumption compared to its predecessor. ARM also points to optimizations in the Mali-T880 that efficiently support 4K video playback.
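For those checking the math: 3.5x the performance in the same power envelope means a fixed task finishes in 1/3.5 of the time at the same draw, i.e. roughly 71% less energy, which is in the same ballpark as the quoted 75 percent (ARM's figure presumably also banks on dropping to a lower voltage/frequency point when merely matching, rather than exceeding, A15 performance). A quick back-of-the-envelope sketch, using only the ratios from the summary:

```cpp
// Back-of-the-envelope check of the summary's ratios; nothing here is measured.
// Same power envelope => relative energy per fixed task = 1 / speedup.
#include <cstdio>

int main() {
    struct Core { const char *name; double speedup; };
    const Core cores[] = {
        {"Cortex-A15", 1.0}, {"Cortex-A57", 1.9}, {"Cortex-A72", 3.5},
    };
    for (const Core &c : cores) {
        double energy = 1.0 / c.speedup; // relative to the A15 baseline
        std::printf("%s: %.1fx perf, %.0f%% less energy per task\n",
                    c.name, c.speedup, (1.0 - energy) * 100.0);
    }
    return 0;
}
```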
...while consuming 75 percent less power (Score:1, Insightful)
They always say things like that, but we just keep using bigger and bigger batteries (partly because of bigger screens) and yet battery life seems to only get worse year after year.
Re: (Score:1)
That's a lie. When someone says 75% less power, it's supposed to mean 75% less power for PEAK performance, running for days with zero degradation.
Underclocking to save power is cheating!
Re: (Score:2)
They always say things like that, but we just keep using bigger and bigger batteries (partly because of bigger screens) and yet battery life seems to only get worse year after year.
I just got home from work. My Galaxy S4 battery is at 92% after 8 hours. If you are having problems with battery life, you may want to change your habits. I only turn wifi and Bluetooth on when I am actively using them. I watch how much power different apps use, and replace those that suck too much (similar apps can differ by orders of magnitude). I exit (not just close) apps when I am not using them. If my battery life is dropping faster than expected, I reboot to clear out any background processes.
Re: (Score:1)
I don't do any of that jazz, and my Nexus 4 would last for days, and my Moto G lasts for days... 2-3 of them, approximately. Any halfway decent phone has plenty of battery now unless you're really burning it up, in which case it's not unreasonable to expect to have to charge it once a day.
Re: (Score:1)
Well, I, for one, want to see 4K videos on a five-inch screen.
Re:...while consuming 75 percent less power (Score:5, Informative)
Based on my experience, the #1 power consumer is... a bad cell signal. If you are at 92% after 8 hours on ANY phone, you are likely sitting in a building with a cell tower a few feet from your head, or you are just straight up lying about your power usage (or both). I've taken a few last-gen phones, put them on airplane mode, then powered up wifi, and they can last over a week. What burns the battery? Mobile data access, and the screen.
Re: (Score:2)
I've noticed that if I put my Samsung S4 into airplane mode it'll use about 2% of its battery power over the course of a day. So, clearly the apps and the software aren't the issue, but just accessing the network even if you're not using it seems to cause the trouble.
Yes and no. In airplane mode those apps might not be running as often. An app running in the background of course directly consumes power for CPU, and also via use of mobile data - I have no idea which consumes more.
Re: (Score:2, Informative)
Your last-gen phones turn off wifi in sleep if they last over a week (there's an option for that).
What keeps data-connected phones burning battery is being data connected, which leads to phones having stuff running; updating the news, weather, and all that shit doesn't come free. Lots of stuff doesn't get woken up if there is no data connection.
So, easiest is to just turn off data when you're not using it. Of course, you can't then receive Skype, GTalk, or whatever VoIP calls or instant messaging on it.
Re: (Score:1)
A crappy old Huawei Y300 does well over a week with wifi and cellular data on, and happily gets Hangouts/Gmail/FB/... notifications.
Install Skype and it's flat after less than 2 days.
What keeps data-connected phones burning battery is shit apps that constantly wake the CPU and chatter over the network for no good reason.
So, easiest is to just get rid of Skype and enjoy not having to play human power-management system.
Re: (Score:2)
My Sony Z2 checks the GPS every now and then and doesn't even bother trying to use Wifi if I'm not actually physically near any of my known wifi points.
Stuff like that is quite sensible and practical. I equally get ridiculous battery life.
like the quadrajet carb, the big is BIG in big.lit (Score:4, Informative)
The newer SoCs have two high-performance cores and two low-power cores. Like the old Quadrajet carburetors, efficiency drops quite a bit when the high-performance side kicks in.
That said, the screen and radios take up most of the power for most people. Dim the screen and turn off Bluetooth and WiFi as appropriate, or use power-saving mode to automate that process.
Re: like the quadrajet carb, the big is BIG in big (Score:1)
Actually, perf per watt, or computations performed using N joules of energy, is frequently better for the bigger cores. That's especially true for newer low-leakage deep submicron process nodes.
that's not the measure. Measure is hours per charg (Score:2)
Computations per joule is not the relevant measurement. The relevant measurement is hours per charge. If you keep the computations per second below the threshold that the 53s can handle, the big cores never light and the battery lasts longer.
A tractor-trailer gets better mileage per pound than a sedan. So do you drive a big rig to work to save gas?
You're much smarter than ARM's chip designers. (Score:3)
You realize you're claiming that ARM's chip architects are completely wrong and have been for a while now? You know they actually measure this stuff before they spend a few billion dollars fabbing chips.
> ... can consume less energy to power one of the big cores for 250ms than to power the little core for 1s
If you need to do 500 million operations, you're close to the point where it makes sense to power the faster core, yes. Your phone spends 99% of its time with picoseconds of CPU work to be done.
Re:You're much smarter than ARM's chip designers. (Score:5, Informative)
No, I'm not claiming that they're wrong - I'm repeating things that they've told me. We have a project with them to investigate good power-efficient scheduling behaviour for precisely this reason: The big.LITTLE configuration does not mean that it's always better to use the little cores, it means that it's better to use the little core for long-running tasks that have a lot of I/O and so can't put the core to sleep, but aren't CPU-bound. If you have something CPU-bound, then you're often better off doing it on the big core and then going back to sleep. Detecting these workloads is not a trivial problem.
There are also some corner cases that are also quite interesting. The A7 has lower latency access to L1 than the A15, so for workloads with a very small working set, running them on the A7 can actually be faster (this shows up in one of the SPEC benchmarks).
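To put toy numbers on that scheduling trade-off, here is a minimal sketch of the two cases described above. All wattages and the speedup are invented for illustration; real values depend on the SoC and its DVFS operating points:

```cpp
// Toy race-to-idle model: big core + sleep vs. little core, over one window.
// P_BIG, P_LITTLE, P_SLEEP and SPEEDUP are hypothetical, not measured.
#include <cstdio>

const double P_BIG = 1.0, P_LITTLE = 0.4, P_SLEEP = 0.02; // Watts (invented)
const double SPEEDUP = 3.0; // big core finishes the same work 3x faster

// Energy (Joules) over a window of T seconds for work that takes t seconds
// on the little core. If interrupts keep the core awake (can_sleep == false),
// idle time burns active power instead of sleep power.
static void compare(double t, double T, bool can_sleep) {
    double idle_l = can_sleep ? P_SLEEP : P_LITTLE;
    double idle_b = can_sleep ? P_SLEEP : P_BIG;
    double e_little = P_LITTLE * t + idle_l * (T - t);
    double e_big = P_BIG * (t / SPEEDUP) + idle_b * (T - t / SPEEDUP);
    std::printf("little = %.2f J, big = %.2f J\n", e_little, e_big);
}

int main() {
    compare(0.8, 1.0, true);  // CPU-bound burst, then sleep: big core wins
    compare(0.2, 1.0, false); // interrupt-bound, never sleeps: little wins
    return 0;
}
```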
just io-bound, like mobile networks, sd card, user (Score:2)
> that it's better to use the little core for long-running tasks that have a lot of I/O and so can't put the core to sleep, but aren't CPU-bound.
If I'm understanding you correctly, you're saying it only saves power to use the little cores if there is I/O involved, such as a mobile network, which is obviously much slower than the CPU cores. Or maybe a storage device, like an SD card. Or any user interaction.
You're right, very few things that you do on a mobile phone would involve either the network, the SD card, or the user.
Re: (Score:2)
If I'm understanding you correctly, you're saying it only saves power to use the little cores if there is I/O involved, such as a mobile network, which is obviously much slower than the CPU cores. Or maybe a storage device, like an SD card. Or any user interaction.
No, I'm saying that it saves power to use the little cores if you have a load of interrupts that prevent the core from going to sleep, but are not CPU-bound. For some interactive tasks (lots of moderately demanding apps), you're CPU-bound for short bursts but you can then put the core to sleep and wait.
User interaction is often on a timescale where you can put the core into a low power state while you wait for a ponderously slow user (in comparison to CPU speeds) to press a button. Simple animations can
sorry about the tone (Score:2)
> Before you try to sound patronising again,
Sorry about that.
If I'm NOW understanding you correctly, you're saying that the big core is better IF the pause is long enough to enter low-power and sleep long enough to make it worth it, correct? Further, I'm reading between the lines and thinking you're saying that on a phone, that's normally the case - that the 53 cores aren't used often, or shouldn't be. Is that correct?
Re: (Score:2)
If I'm NOW understanding you correctly, you're saying that the big core is better IF the pause is long enough to enter low-power and sleep long enough to make it worth it, correct?
Kind of. The big core is usually able to perform more computation per Joule, but draws more power while in its high-power state, so if you can complete some work and then sleep, the big core is usually better. If you have a constant stream of work, the little core is better.
Further, I'm reading between the lines and thinking you're saying that on a phone, that's normally the case - that the 53 cores aren't used often, or shouldn't be. Is that correct?
No, they're both used, but it isn't always a clear-cut decision which one is optimal. There are some other issues too. They don't have a shared L1 cache, so you take a small performance hit every time you migrate between them.
A pho
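The "kind of" can be made exact in the toy model from the earlier sketch (P_b, P_l, P_s are the big, little, and sleep powers, S the big core's speedup; all hypothetical):

```latex
% Energy over a window T for work taking t seconds on the little core,
% assuming the idle core really can sleep at power P_s:
E_{\mathrm{big}} = P_b \tfrac{t}{S} + P_s \left( T - \tfrac{t}{S} \right),
\qquad
E_{\mathrm{little}} = P_l\, t + P_s\, (T - t).
% "Run big, then sleep" wins when E_big < E_little, which reduces to
\frac{P_b - P_s}{S} < P_l - P_s ,
% independent of t and T. If interrupts prevent sleep, each core idles at
% its active power instead, the comparison becomes P_b T vs. P_l T, and
% the little core wins whenever P_l < P_b.
```

Which matches the rule of thumb above: with a constant stream of work that never lets the core sleep, the little core wins on raw Watts; with bursty work, the per-Joule comparison favours the big core.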
lockless multithreaded not exactly common (Score:2)
It should be noted that most programmers will never write or directly use a lockless multithreaded algorithm. The number of things on a phone or tablet that need (or even would benefit significantly from) such an algorithm is relatively small.
Most of the time I suspect that the various cores on a mobile device are doing independent things. The percentage of time that the average phone/tablet is going to be doing massively parallel cpu-bound work is tiny.
Re: (Score:2)
To implement the same lockless multithreaded algorithms on ARM, you'd have to insert explicit barriers; how do you think that would affect its performance relative to x86, which has much stricter reordering constraints?
How does POWER (which has a very similar memory model to ARMv8) fare against x86? It's not as clear-cut as you make it out to be. Explicit barriers amount to bus traffic and that's what adds the overhead (in performance and power). On x86, you're paying that cost whenever you have cache lines aliased across cores, even if you don't need it. On ARM, you only pay the cost when you need it. If you're programming with the C[++]11 concurrency model, then the compiler will sort out the barriers for you and t
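A minimal sketch of what "the compiler will sort out the barriers" looks like in practice (the names and values here are mine, for illustration):

```cpp
// C++11 release/acquire handoff. The source is identical on every target;
// the compiler inserts whatever ordering the hardware needs: on AArch64
// typically STLR/LDAR (or explicit barriers), on x86-64 plain MOVs, since
// TSO already provides the ordering whether you asked for it or not.
#include <atomic>

int payload;                        // data handed from producer to consumer
std::atomic<bool> ready{false};

void producer() {
    payload = 42;
    // Release store: all writes above become visible to any thread whose
    // acquire load observes `true`.
    ready.store(true, std::memory_order_release);
}

int consumer() {
    // Acquire load: pairs with the release store above.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    return payload;                 // guaranteed to read 42
}
```

That is the asymmetry described above: ARM pays for ordering only at the marked operations, while x86's stronger model charges you on shared cache lines whether you needed the ordering or not.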
Re: (Score:2)
Computations per joule is not the relevant measurement. The relevant measurement is hours per charge. If you keep the computations per second below the threshold that the 53s can handle, the big cores never light and the battery lasts longer.
A tractor-trailer gets better mileage per pound than a sedan. So do you drive a big rig to work to save gas?
If you never tax the motor of your sedan to save gas, why didn't you buy one with a smaller motor in the first place?
Re: (Score:2)
I find that if I don't use my phone the battery lasts for days. Whatever happened to those fuel cells that used lighter fluid to power laptops? That's what we need for smartphones. Zippo batteries.
Re: (Score:3)
Like the old quadrajet carburetors, efficiency drops quite a bit when the high-perfomance side kicks in.
Not necessarily. Efficiency can actually increase if the high-power cores are able to bring the whole system to a low-power state sooner.
Re: (Score:2)
blah blah blah, everyone keeps saying that, and yet my battery life is always better when I keep the CPU max clock at about 80% of full speed.
I'm sorry, but physics are a bitch, and you are too for claiming that power doesn't follow the cube of voltage in SoCs. (yes, cube. It follows the cube of voltage, not the I^2R you're used to seeing)
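For reference, the cube comes from the standard dynamic-power relation: P = C·V²·f, and f scales roughly linearly with V across the DVFS range, so P ∝ f³. A tiny idealized sketch (leakage and race-to-idle are deliberately ignored, which is exactly what the rest of this thread argues about):

```cpp
// Idealized dynamic-power scaling: P = C * V^2 * f with f proportional to V,
// hence P ~ f^3. Real DVFS curves are table-driven and include leakage.
#include <cstdio>
#include <cmath>
#include <initializer_list>

int main() {
    for (double clock : {1.0, 0.9, 0.8}) {         // fraction of max clock
        double power = std::pow(clock, 3.0);       // relative dynamic power
        double perf_per_watt = clock / power;      // = 1 / clock^2
        std::printf("clock %3.0f%% -> power %3.0f%%, perf/W %.2fx\n",
                    clock * 100.0, power * 100.0, perf_per_watt);
    }
    return 0;
}
```

At 80% clock this prints ~51% of peak dynamic power for 80% of the performance, which is the effect the parent is reporting.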
Re: (Score:2)
Right, you own a phone so you're an expert :)
Keep in mind that the power management software in your phone may suck and fall short of achieving all the efficiencies that the hardware is capable of. BTW, it is not necessary to lecture me about power curves, far from it.
Re: (Score:2)
No, I studied electrical engineering at a school that's probably ranked much higher than yours, and advanced semiconductor fundamentals was my second favorite class, and embedded microcontroller design was my third.
In addition, if what these clowns said were worth listening to, I wouldn't achieve lower idle battery drain by setting a low max-screen-off frequency.
Re: (Score:2)
EEs are famous for thinking they have a clue about software :)
Re: (Score:2)
my job title is software engineer and I've written several drivers that have 100% up-time in multi-million dollar production deployments so...?
Re: (Score:2)
So you most probably overestimate your ability in power management. Think about what might be necessary to achieve a win from sprint-to-power-save, and why the phone you own might not implement that. Think about the whole system.
Re: (Score:2)
Yes, I'm just saying the marketers and pedants claiming that TECHNICALLY it CAN save power are overrated.
Re: (Score:2)
If you want to kick somebody's butt about it, aim in the general direction of Qualcomm, not marketing but engineering.
Re: (Score:2)
Well, ARM's power consumption has been pretty stable the entire time - about 1mW/MHz.
The reason it's consuming tons of power and you need thermal throttling is because you're starting to pack a lot of MHz on a die.
I mean, say 2.5GHz quad core. That's 2.5W/core at full tilt. With 4 of them, that's 10W! There's no way to get that s
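Spelling out that arithmetic (the 1mW/MHz figure is the parent's ballpark, not a datasheet value):

```cpp
// Parent's ballpark: ~1 mW per MHz per core. At the top end, the clocks,
// not the ISA, set the power bill.
#include <cstdio>

int main() {
    const double mw_per_mhz = 1.0;   // rough figure from the parent comment
    const double clock_mhz = 2500.0; // 2.5 GHz
    const int cores = 4;

    double w_per_core = mw_per_mhz * clock_mhz / 1000.0; // 2.5 W per core
    std::printf("%.1f W/core, %.1f W total for %d cores at full tilt\n",
                w_per_core, w_per_core * cores, cores);
    // A phone chassis can sustain only a couple of watts, hence the
    // thermal throttling the parent describes.
    return 0;
}
```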
Samsung rumored to drop 810 due to overheating (Score:3)
Re: (Score:3)
In Scandinavian countries, overheating is a desired attribute. Cold weather really hurts batteries, so if the phone generates a little internal heat, it prolongs battery life.
Re: (Score:2)
Why would you think he's American?
Re: (Score:2)
tbh, you have two choices for batteries today: charge them fast, and they don't last as long, or charge them slow, and they last forever. Heat is the problem the batteries have. If you charge them just fast enough that they are full in the morning, or at least never get hot, you are going to do well. Being in a Scandinavian country is hardly likely to make a difference, since the phone is in a pocket, your hand, or indoors while charging.
The other idea is to buy a phone wi
slow charging not great (Score:2)
Read the datasheets and whitepapers from the battery manufacturers. Charging them too slowly isn't good for them...plus it makes it harder to figure out when they're fully charged.
Re: (Score:3)
Only if you got one of those new 5-6" "phones" that don't fit in your pocket, otherwise you usually have an ample supply of body heat that far exceeds what the phone will provide. And Scandinavia is not ridiculously cold, it's been colder in the lower 48 (Montana) than anywhere here, it's not Alaska or Siberia. You might have heard that Norway is a big country for Tesla? We wouldn't be if the batteries kept freezing to death.
And if you want to spend battery, launch Skype. I swear that even with no chatting
Re: (Score:2)
Wow, warmer than MONTANA!!!! [montanatourismnews.org] That is a real endorsement. :)
How long till I can put this in my Raspberry Pi? (Score:1)
That's all I want to know!
you have to be specific... (Score:2)
Especially when talking about silicon, as versions and die shrinks actually matter: e.g. the K1 T132 is Project Denver, which is 64-bit and uses a JIT compiler to get its speedup, while version T124 is the Cortex-A15 r3.
The interesting thing will be the uptake of Unix-like OSes vs Windows 10 on ARM, which is sure to annoy Intel, who are losing market share!
regards
John Jones
Android holding it back (Score:2, Interesting)
Android holding ARM back.
They have desktop-class processors held back by an OS that won't run multiple apps on a screen at once (well, without Samsung's extensions it won't). Meanwhile, the head of Android is focusing on Chrome at the expense of Android. As if a Chrome wrapper for Android to let it do multiple windows is somehow acceptable!
It's ridiculous that ARM chips drive >4K screens and yet Android has the calculator full screen.
And while people and business expect their desktop PCs to be professional
That sounds great (Score:2)
...now they need to find someone that has a 14nm FinFET process, since Intel isn't that interested in selling theirs. That seems to be the biggest issue holding everyone outside of Intel back these days: I hear a lot of talk about 20nm and smaller, but I'm not seeing much in the way of delivery; products still seem to be 28nm by and large.
I think it may be a bit over-optimistic to think that TSMC will be doing 14nm by next year, given their recent history of over-promising and under-delivering on process technology.
Can I have that with Pi. (Score:2)
Can I have this part on a $37 Raspberry Pi mod next please?
Lower NM size than desktop CPUs (Score:3)
AMD is still stuck at 28 nm while these are 14. Wow.
Even the latest Intel ones are all 22 nm.
Re:Lower NM size than desktop CPUs (Score:4)
I'm fairly sure AMD has pretty much quit making new designs and is exiting the market, same as Bulldozer. The APU sales are tanking; they did a $57 million inventory write-down on top of a $56 million operating loss on $662 million revenue in the "Computing and Graphics" segment last quarter, and are forecasting another 15% decline in revenue. Carrizo is probably coming, but I expect only incremental improvements; they're diversifying into so many other things there can't possibly be any money left for the R&D they'd need to create a new architecture.
Sure, they can do die shrinks, that's not so hard, but a premium process costs premium money and AMD can't afford it; they need a value process to sell value chips. And it all depends on Samsung, Apple and TSMC - ARM can create the design, but they still need to succeed with the production process. Intel struggled; maybe that's just Intel, or it'll be tough for everybody. In AMD's position they certainly don't want to jump the gun and suffer delays or an immature process with bad yields. I expect they'll go 20nm once Apple has moved to 14/16nm and not before.
Re: (Score:2)
Part of the AMD fabs -> GlobalFoundries selloff tied AMD to using only GloFo for their CPU masks. The discrete GPUs are allowed to be on TSMC et al.
In other words, the 'premium' you cite is not the reason.
Re: (Score:2)
Your information is quite outdated; AMD has been making CPUs at TSMC since 2011 [extremetech.com] because GloFo couldn't deliver. It's possible AMD paid something to get out of that deal, but it's long ago now.
Re: (Score:2)
oh. weird.
Re: (Score:1)
You're behind the times on tech news. Intel has certainly advanced into small form factors. But seriously comparing an ARM chip vs any Intel chip besides an Atom is ridiculous. ARM chips are fine in devices which basically run a mobile OS, but are really terrible at more complex OSes. Yes, they are getting better, but I would also argue so is Intel. AMD is by far out of touch with mobile platform support. Always has been, always will be.