Qualcomm Announces Next-Gen Snapdragon 808 and 810 SoCs
MojoKid (1002251) writes "Qualcomm has announced two fundamentally new chips today, with updated CPU cores as well as Qualcomm's new Adreno 400-class GPU. The Snapdragon 808 and the Snapdragon 810 have been unveiled with a host of new architectural enhancements. The Snapdragon 810 will be the highest-end solution, with four ARM Cortex-A57 cores paired with four low-power Cortex-A53 cores.
The Snapdragon 808 will also use a big.LITTLE design, but the core layout will be asymmetric: two Cortex-A57 cores paired with four Cortex-A53 cores. The Cortex-A57 is, by all accounts, an extremely capable processor, which means a pair of them in a dual-core configuration should be more than capable of driving a high-end smartphone. Both SoCs will pair a 20nm modem with a 28nm RF transceiver. That's a major step forward for Qualcomm (most RF today is built on 40nm); RF circuits typically lag behind digital logic by at least one process node. Given that RF currently accounts for some 15% of total die area and 30-40% of the PCB, the benefits of moving the RF circuitry to a smaller manufacturing process are significant." To clarify, the 810 can combine the Cortex-A57 and Cortex-A53 cores, so a single task that needs a lot of power won't cause as large a power jump. All of the chips are 64-bit ARM, too.
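For anyone curious how that asymmetric layout shows up in practice, here's a rough sketch (not from the announcement; it assumes an ARM Linux device and uses the published MIDR part numbers, 0xd07 for Cortex-A57 and 0xd03 for Cortex-A53):

    /* List which cores are big and which are little by reading the
     * "CPU part" field the kernel exposes in /proc/cpuinfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        char line[256];
        int cpu = -1;

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "processor : %d", &cpu) == 1)
                continue;                      /* remember which core this block describes */
            if (strstr(line, "0xd07"))
                printf("cpu%d: Cortex-A57 (big)\n", cpu);
            else if (strstr(line, "0xd03"))
                printf("cpu%d: Cortex-A53 (LITTLE)\n", cpu);
        }
        fclose(f);
        return 0;
    }

On a Snapdragon 808 you'd expect two A57 lines and four A53 lines; on an 810, four of each.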
Re:Needs x86 emulation. (Score:5, Insightful)
I can't see how wasting cycles on implementing an x86/x64 instruction set would be of much use commercially. I don't get the impression that many ARM manufacturers have any interest in trying to beat Intel on its own platform.
Re: (Score:3)
Why are you designing your drone for x86? Are you writing it in assembly code?
He's running Windows on it, duh.
Re: (Score:3)
Even Windows programs (i.e., ones created with Visual Studio) can be recompiled to ARM instructions. I guess he just can't install Windows itself on it.
Moral: don't lock yourself in to anything!
Re: (Score:3)
They need to build in an x86 emulation layer to make these more attractive to general-purpose programmers ... if they had that, I might be able to make them work with the drone I'm designing for I/O and avionics control, but I do not feel like rewriting the whole damn code base to run on these frankenchips.
You're programming your drone in assembler language?
SVLTE/SVDO? (Score:5, Interesting)
So does this finally mean we'll get Simultaneous voice and LTE/SVDO back? All the current-generation Qualcomm processors lack dual radio paths for some reason, despite the fact that the previous generation had them. I have to assume it's because they spent so much of the power/transistor budget on 'more cores!!!!' that they didn't have room for an RF design that accommodates features which are actually useful.
Re:SVLTE/SVDO? (Score:4, Informative)
So does this finally mean we'll get Simultaneous voice and LTE/SVDO back?
64-bit ARM and support for simultaneous voice and LTE/SVDO are completely different things.
The 64-bit ARM cores are application processors (AP). They do not control the modem (which can be part of the SoC together with the AP or an external component): Qualcomm modems have a nifty, internally developed (and publicly documented) VLIW CPU called "Hexagon" that offers DSP-like instructions to control the modem. Some modems have two, and another Hexagon is used to process audio and can also run user-provided applications. You can find some information here http://en.wikipedia.org/wiki/Q... [wikipedia.org] and a lot more is linked from there.
And even this has nothing to do with dual radios. They are independent things.
Roberto
"There's zero benefit a consumer gets from that" (Score:5, Informative)
"I know there's a lot of noise because Apple did [64-bit] on their A7. I think they are doing a marketing gimmick. There's zero benefit a consumer gets from that," -Anand Chandrasekher, former Qualcomm CMO
Re:"There's zero benefit a consumer gets from that (Score:4, Insightful)
Now that one person is doing it, everyone is going to have to do it. It's going to be difficult selling a 32-bit processor when the guy across the street is selling a 64-bit one.
There's a lot more reason to go 64 bit than that. The biggest is that it's not going to be long before smartphones and tablets have > 3 GiB RAM. Yeah, there are all sorts of workarounds you can use to access larger amounts of RAM with 32-bit pointers, but it's much nicer to have a flat address space, including plenty of address space for memory-mapped devices. Granted that we're probably a couple of years away from needing 64 bits, but it's coming, fast.
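As a quick illustration of that limit, here's a minimal sketch (not from the parent; assumes Linux, and is meant to be built once with -m32 and once as a normal 64-bit binary):

    /* Try to reserve a single 6 GiB mapping. A 32-bit build cannot even
     * express the request, while a 64-bit build reserves it trivially:
     * that's the "flat address space" convenience being described. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>

    int main(void)
    {
        uint64_t want = 6ULL << 30;            /* 6 GiB of address space */

        if (want > SIZE_MAX) {                 /* true on a 32-bit build */
            fprintf(stderr, "can't even express a 6 GiB mapping here\n");
            return 1;
        }
        void *p = mmap(NULL, (size_t)want, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("reserved %llu bytes at %p\n", (unsigned long long)want, p);
        munmap(p, (size_t)want);
        return 0;
    }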
Re: (Score:2)
The biggest is that it's not going to be long before smartphones and tablets have > 3 GiB RAM.
That was my thought. Chips that are being announced now are still going to be on the market when 3 or 4 gigs of RAM is normal, so not having 64-bit support is starting to be a problem.
Re:"There's zero benefit a consumer gets from that (Score:4, Informative)
Now that one person is doing it, everyone is going to have to do it. It's going to be difficult selling a 32-bit processor when the guy across the street is selling a 64-bit one.
There's a lot more reason to go 64 bit than that. The biggest is that it's not going to be long before smartphones and tablets have > 3 GiB RAM. Yeah, there are all sorts of workarounds you can use to access larger amounts of RAM with 32-bit pointers, but it's much nicer to have a flat address space, including plenty of address space for memory-mapped devices. Granted that we're probably a couple of years away from needing 64 bits, but it's coming, fast.
32-bit ARM can already address more than 4 GiB of memory: recent 32-bit ARM architectures support LPAE, which widens the physical address space to 40 bits, and several chips implement 36 or 40 address bits. The problem of individual applications addressing at most 32 bits is minor at this stage, but sooner or later we will have big graphics editing applications on tablets, and larger address spaces help.
The main advantage that AArch64 has at this very moment is that it offers a more streamlined instruction set (which makes instructions easier to reorder) and more registers. Even just recompiling existing 32-bit code for the new model can yield impressive performance gains.
Roberto
Re: (Score:3, Informative)
32-bit ARM can already address more than 4 GiB of memory: recent 32-bit ARM architectures support LPAE, which widens the physical address space to 40 bits, and several chips implement 36 or 40 address bits. The problem of individual applications addressing at most 32 bits is minor at this stage, but sooner or later we will have big graphics editing applications on tablets, and larger address spaces help.
It can, but it's not really fun that way. Realistically, using high memory (memory that is not within the directly mapped part accessible to the kernel) eventually causes headaches. The default configuration on arm32 (and x86-32, for that matter) is to have only 768MB of lowmem; if you go beyond that you get into trouble, because a lot of the kernel's data structures (page tables, inodes, socket buffers, ...) have to live in lowmem. You can push that limit to 2 or 3 GB, but only at the expense of limiting the user address space.
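If you want to see that split on a real device, here's a minimal sketch (assumes Linux; the LowTotal/HighTotal lines only exist on 32-bit kernels built with highmem support, which is rather the point):

    /* Print how much RAM the kernel treats as directly mapped "lowmem"
     * versus "highmem", straight from /proc/meminfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (!strncmp(line, "LowTotal:", 9) || !strncmp(line, "HighTotal:", 10))
                fputs(line, stdout);           /* absent on 64-bit kernels */
        }
        fclose(f);
        return 0;
    }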
Re: (Score:2)
There's a reasonable argument for moving to 64-bit on security grounds too. The increase in virtual address space makes ASLR far more effective, since there are many more options for positioning compared to 32-bit code. On top of that, any attacks are more likely to hit an unallocated page as opposed to anything useful (with some limitations, of course).
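A tiny sketch to make that concrete (not from the parent; assumes Linux, built as a normal position-independent binary; run it a few times and the addresses change, and a 64-bit build has far more bits to randomize over):

    /* Print a few addresses; with ASLR enabled they differ on every run. */
    #include <stdio.h>
    #include <stdlib.h>

    int global;                                /* data segment */

    int main(void)
    {
        int local;                             /* stack */
        void *heap = malloc(16);

        printf("stack: %p\n", (void *)&local);
        printf("heap : %p\n", heap);
        printf("data : %p\n", (void *)&global);
        printf("code : %p\n", (void *)main);   /* randomized for PIE builds */

        free(heap);
        return 0;
    }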
Re: (Score:3)
"it's not going to be long before smartphones and tablets have > 3 GiB RAM"
You mean like the MS Surface Pro (4 GiB and 8 GiB models?), the Acer Iconias, Fujitsu Stylistic tablets etc.?
Re: (Score:2)
"which of course have 64-bit x86 CPUs and run a 64-bit Windows."
So, problem solved. Excellent.
Re: (Score:2)
"Qualcomm is somewhat forced to go with 64-bit though, because Apple has taken the genie out of the bottle."
The Qualcomm part was already near completion by the time anyone knew anything about the A7. Going 64-bit as a "response" in ~6 months is not plausible.
My point was to poke fun at Qualcomm naysaying 64-bit at the same time they were developing 64-bit.
Re: (Score:3)
"I know there's a lot of noise because Apple did [64-bit] on their A7. I think they are doing a marketing gimmick. There's zero benefit a consumer gets from that," -Anand Chandrasekher, former Qualcomm CMO
According to Anandtech ( http://www.anandtech.com/show/... [anandtech.com] )
"Integer performance: The AES and SHA1 gains are a direct result of the new cryptographic instructions that are a part of ARMv8. The AES test in particular shows nearly an order of magnitude performance improvement. This is similar to what we saw in the PC space with the introduction of Intel's AES-NI support in Westmere. The Dijkstra workload is the only real regression. That test in particular appears to be very pointer heavy, and the increase in
Re: (Score:2)
and a handful of other things. But all of the above aren't really benefits of 64-bitness, just improvements to the architecture. The real benefit of a 64-bit architecture is the larger virtual address space... processes with >2-3 GB of memory. Every other improvement is usually just something that could have been added in 32-bit mode but was thrown into the 64-bit arch instead.
Re: (Score:2)
Don't tell it to me, tell it to Anand Chandrasekher.
Binary drivers (Score:2, Flamebait)
Did Qualcomm also announce their commitment to binary-only drivers and refusal to work with the free and open source software community?
Qualcomm is just another greedy corp trying to take everyone's stuff.
Re: (Score:1)
Exactly, these might be interesting processors, but not as long as there are no open specs. Fuck Qualcomm.
Re: (Score:2)
Do you need drivers for a CPU?
Re: (Score:2)
Oops. Forgot about the GPU.
I don't know (Score:1)
I think this may not be enough for a smartphone. After all, it's not for making phone calls. Right? It's for viewing ads. The more the better for all of us.
I don't get it. (Score:2)
I really don't 100% get this big.LITTLE architecture.
The idea of having a weak but low-power core capable of running all the basic functions and driving the GPU is reasonably obvious. I get that. I get the idea of having it backed by four big, fast cores which can be switched on when a task overloads the little CPU.
But why have 4 little CPUs?
Re: (Score:2, Informative)
The idea of the "little" part of big.LITTLE is that a device can be much more power efficient. While the big cores (Cortex-A57) handle the heavy processing, for lower CPU requirements the little cores (Cortex-A53) will suffice, and the big cores can be switched off to save power.
http://www.arm.com/products/processors/technologies/biglittleprocessing.php (It's helpful, but a bit like an advertisement)
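As a concrete sketch of how software cooperates with that (my own illustration, not from the ARM page; the core numbering is an assumption and should really be read from /proc/cpuinfo or sysfs on a real device):

    /* Pin a background task to the little cluster so the big cores can
     * stay powered down. Assumes CPUs 0-3 are the Cortex-A53s. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t little;
        CPU_ZERO(&little);
        for (int cpu = 0; cpu <= 3; cpu++)     /* assumed little-core IDs */
            CPU_SET(cpu, &little);

        if (sched_setaffinity(0, sizeof little, &little) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* Low-intensity background work goes here; the scheduler keeps it
         * off the Cortex-A57s, letting them power down. */
        printf("running on CPU %d\n", sched_getcpu());
        return 0;
    }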
Re: (Score:2)
Sorry, I wasn't clear.
Why have more than one little CPU? If you have something intense enough to use four, why not just fire up one big CPU?