Intel Medfield SoC Specs Leak
MrSeb writes "Specifications and benchmarks of Intel's 32nm Medfield platform — Chipzilla's latest iteration of Atom and its first real system-on-a-chip aimed at smartphones and tablets — have leaked. The tablet reference platform is reported to be a 1.6GHz x86 CPU coupled with 1GB of DDR2 RAM, Wi-Fi, Bluetooth, and FM radios, and an as-yet-unknown GPU. The smartphone version will probably be clocked a bit slower, but otherwise the same. Benchmark-wise, Medfield seems to beat the ARM competition from Samsung, Qualcomm, and Nvidia — and, perhaps most importantly, it's also in line with ARM power consumption, with an idle TDP of around 2 watts and load around 3W."
2W idle power consumption! (Score:2, Interesting)
That just doesn't cut it. Based on that, I'd assume the mobile version of the chip to consume at least 1W at idle loads. That _still_ doesn't cut it.
beat ARM on what, 45nm? (Score:4, Interesting)
So here we have Intel putting their low-cost product on their high-cost process and claiming a victory? I don't buy it, but since Intel is going to be selling these things at deep discounts, I might buy a product or two. I don't think they can continue this game in the long run, but it's fun to see them attempting it.
LoB
apples and oranges? (Score:4, Interesting)
How can you compare a 1.6GHz, presumably single-core CPU against dual-core CPUs on a single-threaded benchmark?
I just compared my laptop, a 2.2GHz dual core, with my desktop, a 3GHz single core. The laptop gets 16,000, the desktop gets 24,000. The laptop was at 50% CPU, the desktop at 100%.
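One back-of-the-envelope way to make those numbers comparable (a sketch only, using the figures quoted above; the `per_ghz` helper is mine, not from any benchmark suite) is to normalize the single-thread score by clock speed, since a one-thread test only ever loads one core:

```python
# Back-of-the-envelope: normalize single-threaded benchmark scores
# by clock speed. A one-thread test only loads one core, which is
# why the dual-core laptop sits at 50% CPU and the single-core
# desktop at 100%. Figures are the ones quoted in the comment above.

def per_ghz(score, ghz):
    """Single-thread score per GHz of the one core actually working."""
    return score / ghz

laptop = per_ghz(16_000, 2.2)   # dual-core, but only one core busy
desktop = per_ghz(24_000, 3.0)  # single-core, fully loaded

print(round(laptop))   # ~7273 points per GHz
print(round(desktop))  # 8000 points per GHz
```

On a per-GHz basis the two chips land much closer together, which is the commenter's point: raw scores across different core counts and clocks say little on their own.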
Just let x86 die, please. (Score:4, Interesting)
It's bloated. It had its time. I LOVED writing in assembly on my 80286, the rich instruction set made quick work of even the most complex of paddle ball games...
However, that was when I was still a child. Now I'm grown, it's time to put away childish things. It's time to actually be platform independent and cross platform, like all of my C software is. It's time to get even better performance and power consumption with a leaner or newer instruction set while shrinking the die.
Please, just let all those legacy instructions' microcode go. You can't write truly cross-platform code in assembly. It's time to INNOVATE AGAIN. Perhaps create an instruction set that lets you get more out of your manufacturing process; maybe one that's cross-platform (like ARM is). Let software emulation provide legacy support. Let's get software vendors used to releasing source code, or compiling for multiple architectures and platforms. Let's look at THAT problem and solve it, perhaps with a new type of linker that turns object code into the proper machine code for the system during installation (sort of like how Android does). DO ANYTHING other than the same old: the same inefficient design made more efficient via shrinking.
Intel, it's time to let x86 go.
Re:One benchmark (Score:5, Interesting)
According to what I could dig up (memory, and corroboration here [blogspot.com]), Snapdragons use about 500mW at idle. That's one quarter to one sixth the power consumption of Intel's offering.
Doing some research, it looks like Tegra 3s use about 0.5W per core as well. Again, Intel is pretty far back if they're throwing out a single core and hitting 2-3 watts.
Re:Just let x86 die, please. (Score:5, Interesting)
I scoured your post for one actual reason why you think x86 is an inferior ISA, but I couldn't find any. I'll give you a couple of reasons why it is superior to, or at least on par with, any given RISC ISA on its own merits, not taking into account any backwards-compatibility issues:
x86 should be able to compete quite well with any RISC ISA on its own merits today.
Re:Dubious (Score:5, Interesting)
Bloodthirsty bastard, aren't you? Killing off the competition is fun?
I haven't liked Intel very much since I read the first story of unethical business practices. Intel doesn't rank as highly on my shitlist as Microsoft, but they are on it.
Re:Dubious (Score:5, Interesting)
RISC isn't an instruction set - it's a design strategy.
RISC = reduced instruction set computing
CISC = complex instruction set computing
The idea of RISC (a small, highly regular/orthogonal instruction set) goes back to the early days of computing, when chip design and compiler design weren't what they are today. The idea was that a small, simple instruction set would allow a simpler chip design that could be clocked faster than a CISC design, while also making it easier to generate optimized code.
Nowadays, advances in chip design and compiler code generation/optimization have essentially undone these benefits of RISC. The remaining benefits are that RISC chips have small die sizes, and hence low power requirements, high production yields and low cost; these are the real reasons ARM is so successful, not the fact that the instruction set is "better".
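The load/store distinction at the heart of this can be shown with a toy model (Python standing in for assembly; these are illustrations of the two styles, not real ISA encodings). A CISC-style instruction may operate on memory directly, while a RISC load/store design needs three simpler instructions for the same effect:

```python
# Toy model only: Python functions stand in for machine instructions.

# CISC style: one instruction may read and write memory directly.
def cisc_add_to_mem(mem, addr, r1):
    mem[addr] += r1                # e.g. x86-style: add [addr], r1

# RISC (load/store) style: memory is touched only by loads and
# stores, so the same operation takes three simpler instructions.
def risc_add_to_mem(mem, addr, regs):
    regs["tmp"] = mem[addr]        # LDR tmp, [addr]
    regs["tmp"] += regs["r1"]      # ADD tmp, tmp, r1
    mem[addr] = regs["tmp"]        # STR tmp, [addr]

m1, m2 = [10, 20], [10, 20]
cisc_add_to_mem(m1, 0, 5)
risc_add_to_mem(m2, 0, {"r1": 5})
print(m1, m2)  # both end up [15, 20]
```

Both styles reach the same final state; the trade-off is whether complexity lives in the decoder (CISC) or in the longer instruction stream (RISC), which is exactly the balance that modern chip and compiler advances have shifted.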
Re:Dubious (Score:3, Interesting)
> RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows
Modern ARM processors aren't pure RISC processors. Most ARM code is written in Thumb-2, which is a variable-length instruction encoding just like x86. Back in the 90s, when transistor budgets were tiny, RISC was a win: when you only have a hundred thousand gates to play with, you're best off spending them on a simple pipelined execution unit. The downsides of RISC have always been the increased size of the program code and reduced freedom to access data efficiently (i.e., with unaligned accesses, byte addressing and powerful address-offset instructions). With modern transistor budgets it is worth spending some gates to make the processor understand a compact and powerful instruction set; that way you save more gates in the rest of the computer than you spend doing this (i.e., in the caches, data buses and RAMs).
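The Thumb-2 length rule itself is compact: per the ARM architecture manuals, an instruction is 32 bits when the top five bits of its first 16-bit halfword are 0b11101, 0b11110 or 0b11111, and 16 bits otherwise. A sketch of that rule (the function name is mine, and the example halfwords are illustrative values, not tied to specific instructions):

```python
def thumb2_length(first_halfword):
    """Length in bytes of a Thumb-2 instruction, given its first
    16-bit halfword.

    Per the ARM architecture manuals, bits [15:11] equal to
    0b11101, 0b11110 or 0b11111 mark a 32-bit encoding;
    any other value means a 16-bit instruction.
    """
    top5 = (first_halfword >> 11) & 0b11111
    return 4 if top5 in (0b11101, 0b11110, 0b11111) else 2

# Illustrative halfword values:
print(thumb2_length(0x2001))  # top bits 00100 -> 2 (16-bit encoding)
print(thumb2_length(0xF000))  # top bits 11110 -> 4 (32-bit encoding)
```

So even though the encoding is variable-length like x86, the decoder only has to inspect five bits to know where the next instruction starts, which keeps the decode cost far below x86's.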
As a result of all this, in some ways, ARM chips are evolving to look more and more like an Intel x86 design. I'm still a big fan of ARM though. Intel will have a long way to go to compete on price, even if they can compete on power.
Re:Just let x86 die, please. (Score:4, Interesting)
Indeed. And the ARM ISA isn't even RISC anyway. In fact, which ARM ISA are we even talking about here? Thumb, Thumb2, ThumbEE, Jazelle or the 32-bit ISA? And which extensions, I wonder? NEON, maybe? Or one of the two different sorts of FPU? That's already a significantly complex instruction decoder. The x86 microcode-for-uncommon-instructions approach is probably better.
Whenever this topic comes up, the discussion is immediately flooded with ARM fanboys insisting that x86 can never compete for magical reasons that don't stand up to sensible analysis. And as Intel approaches ARM's level of power consumption, as they inevitably must (for there is no magic in ARM and there is nothing physically preventing parity), what we hear is denial: the insistence that Intel is playing dirty tricks.
At least, post OnLive, nobody is claiming that there is no demand for x86 applications on mobile devices. I suppose the "ARM = magic" power claims will have a similar lifetime, and one day will look as silly as claims that Windows XP will be a failure because everyone will be using Linux by 2005. Hope is a good thing, but this is just foolishness.