
Intel Medfield SoC Specs Leak

MrSeb writes "Specifications and benchmarks of Intel's 32nm Medfield platform — Chipzilla's latest iteration of Atom and its first real system-on-a-chip aimed at smartphones and tablets — have leaked. The tablet reference platform is reported to be a 1.6GHz x86 CPU coupled with 1GB of DDR2 RAM, Wi-Fi, Bluetooth, and FM radios, and an as-yet-unknown GPU. The smartphone version will probably be clocked a bit slower, but otherwise the same. Benchmark-wise, Medfield seems to beat the ARM competition from Samsung, Qualcomm, and Nvidia — and, perhaps most importantly, its power consumption is also in line with ARM's, at around 2W idle and 3W under load."
  • by Anonymous Coward on Tuesday December 27, 2011 @11:14PM (#38511016)

    That just doesn't cut it. Based on that, I'd expect the mobile version of the chip to consume at least 1W at idle. That _still_ doesn't cut it.

  • by Locutus ( 9039 ) on Tuesday December 27, 2011 @11:26PM (#38511112)
    Come on, when comparing embedded SoCs, is it really fair to say a new die-shrunk version of one architecture bests another built on a much larger process?

    So here we have Intel putting their low-cost product on their high-cost process and claiming victory? I don't buy it, but since Intel is going to be selling these things at deep discounts, I might buy a product or two. I don't think they can keep this game up in the long run, but it's fun to see them try.

    LoB
  • apples and oranges? (Score:4, Interesting)

    by viperidaenz ( 2515578 ) on Tuesday December 27, 2011 @11:44PM (#38511244)
    It looks like CaffeineMark 3 is single-threaded. At least the online version is, anyway.
    How can you compare a 1.6GHz, presumably single-core, chip against dual-core CPUs on a single-threaded benchmark?

    I just compared my laptop (2.2GHz dual-core) with my desktop (3GHz single-core): the laptop gets 16,000, the desktop gets 24,000. The laptop was at 50% CPU, the desktop at 100%.
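
    A minimal sketch of that point, assuming a POSIX system with pthreads (this is not CaffeineMark, and the workload size and thread count below are arbitrary): a fixed CPU-bound job only finishes faster on a dual-core machine when the benchmark actually splits the work across threads, which is why a single-threaded score mostly tracks clock speed.

        /* Minimal sketch, not CaffeineMark: a fixed CPU-bound workload only
         * benefits from a second core when it is split across threads.
         * Build (Linux/glibc assumed): gcc -O2 -pthread spin.c -o spin */
        #define _POSIX_C_SOURCE 200112L
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define TOTAL_WORK 400000000UL   /* arbitrary iteration count */

        static void *spin(void *arg) {
            volatile unsigned long n = *(const unsigned long *)arg;
            while (n--)      /* volatile keeps the loop from being optimized away */
                ;
            return NULL;
        }

        static double run(int nthreads) {
            pthread_t tid[2];
            unsigned long share = TOTAL_WORK / (unsigned long)nthreads;
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (int i = 0; i < nthreads; i++)
                pthread_create(&tid[i], NULL, spin, &share);
            for (int i = 0; i < nthreads; i++)
                pthread_join(tid[i], NULL);
            clock_gettime(CLOCK_MONOTONIC, &end);

            return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
        }

        int main(void) {
            printf("1 thread : %.2f s (one core busy, like a single-threaded benchmark)\n", run(1));
            printf("2 threads: %.2f s (faster only because the work was split)\n", run(2));
            return 0;
        }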
  • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Wednesday December 28, 2011 @12:57AM (#38511718)

    It's bloated. It had its time. I LOVED writing in assembly on my 80286, the rich instruction set made quick work of even the most complex of paddle ball games...

    However, that was when I was still a child. Now I'm grown, and it's time to put away childish things. It's time to actually be platform-independent and cross-platform, like all of my C software is (see the sketch at the end of this comment). It's time to get even better performance and power consumption with a leaner or newer instruction set while shrinking the die.

    Please, just let all that legacy instructions' microcode go. You can't write truly cross-platform code in assembly. It's time to INNOVATE AGAIN. Perhaps create an instruction set that lets you get more out of your manufacturing process; maybe one that's cross-platform (like ARM is). Let software emulation provide legacy support. Let's get software vendors used to releasing source code, or compiling for multiple architectures and platforms. Let's look at THAT problem and solve it, perhaps with a new type of linker that turns object code into the proper machine code for the system during installation (sort of like how Android does). DO ANYTHING other than the same old: the same inefficient design made more efficient via shrinking.

    Intel, it's time to let x86 go.
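
    For what it's worth, a minimal sketch of the "truly cross-platform C" point: the same source builds unchanged for x86, ARM, or anything else, and the only per-architecture part is what the compiler emits (the predefined macros below are the usual GCC/Clang ones).

        /* Minimal sketch: identical portable C source, built for whatever ISA
         * the compiler targets; the macros are the usual GCC/Clang ones. */
        #include <stdio.h>

        int main(void) {
        #if defined(__x86_64__) || defined(__i386__)
            puts("This binary was compiled for x86.");
        #elif defined(__aarch64__) || defined(__arm__)
            puts("This binary was compiled for ARM.");
        #else
            puts("This binary was compiled for some other ISA.");
        #endif
            return 0;
        }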

  • Re:One benchmark (Score:5, Interesting)

    by LordLimecat ( 1103839 ) on Wednesday December 28, 2011 @01:32AM (#38511900)

    According to what I could dig up (memory, and corroboration here [blogspot.com]), Snapdragons use about 500mW at idle. That's one quarter to one sixth the power consumption of Intel's offering.

    Doing some research, it looks like Tegra 3s use about 0.5W per core as well. Again, Intel is pretty far behind if they're putting out a single core and hitting 2-3 watts.

  • by AcidPenguin9873 ( 911493 ) on Wednesday December 28, 2011 @02:37AM (#38512242)

    I scoured your post for one actual reason why you think x86 is an inferior ISA, but I couldn't find one. I'll give you a few reasons why it is superior to, or at least on par with, any given RISC ISA on its own merits, without taking any backwards-compatibility issues into account:

    • Variable length instruction encoding makes more efficient use of the instruction cache. It is basically code compression, and as such it gives a larger effective ICache size than a fixed length instruction encoding. Even if you have to add marker bits to determine instruction boundaries, it's still a win or at least a wash.
    • x86 has load-op instructions. Load-op is a very, very common programming idiom, both for hand-written assembly and for compiler-generated code. ARM and other RISC ISAs require two instructions to accomplish the same thing (see the sketch at the end of this comment).
    • AVX, the new encoding from Intel and AMD, gives you true RISC-like three-operand instructions: two sources and one non-destructive destination.
    • A dedicated stack-pointer register allows push/pop/call/return optimizations that unlink dependence chains from unrelated functions. With a GPR-based stack, RISC has false-dependence problems for similar code sequences that it can't really optimize.
    • AMD64 got rid of cruft, added more GPRs, and added modern features like PC-relative addressing modes, removing that advantage from RISC too.
    • ARM's 64 bit extensions were just announced and won't be shipping until 2014. x86 has been 64 bit for 8 years.

    x86 should be able to compete quite well with any RISC ISA on its own merits today.
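
    To make the load-op point concrete, here is a small sketch; the instruction sequences in the comments are typical hand-written illustrations of what compilers emit for this idiom (Intel syntax on the x86 side), not output captured from any particular compiler.

        #include <stdio.h>

        /* The loop body is the classic "accumulate from memory" idiom. */
        long sum_array(const long *a, long n) {
            long total = 0;
            for (long i = 0; i < n; i++) {
                /* x86-64 can fold the load into the add (one load-op instruction):
                 *     add  rax, [rdi + rcx*8]
                 * A load/store ISA such as ARM's A64 needs a separate load, then an add:
                 *     ldr  x2, [x0, x1, lsl #3]
                 *     add  x3, x3, x2
                 */
                total += a[i];
            }
            return total;
        }

        int main(void) {
            long v[4] = {1, 2, 3, 4};
            printf("%ld\n", sum_array(v, 4));   /* prints 10 */
            return 0;
        }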

  • Re:Dubious (Score:5, Interesting)

    by Runaway1956 ( 1322357 ) on Wednesday December 28, 2011 @03:10AM (#38512414) Homepage Journal

    Bloodthirsty bastard, aren't you? Killing off the competition is fun?

    I haven't liked Intel very much since I read the first story of unethical business practices. Intel doesn't rank as highly on my shitlist as Microsoft, but they are on it.

  • Re:Dubious (Score:5, Interesting)

    by SpinyNorman ( 33776 ) on Wednesday December 28, 2011 @04:31AM (#38512804)

    RISC isn't an instruction set - it's a design strategy.

    RISC = reduced instruction set computing
    CISC = complex instruction set computing

    The idea of RISC (have a small, highly regular/orthogonal instruction set) goes back to the early days of computing, when chip design and compiler design weren't what they are today. The idea was that a small, simple instruction set would correspond to a simpler chip design that could be clocked faster than a CISC design, while also being easier to generate optimized code for.

    Nowadays, advances in chip design and compiler code generation/optimization have essentially undone those benefits of RISC. The remaining benefits are that RISC chips have small die sizes, and hence low power requirements, high production yields, and low cost. Those are the real reasons ARM is so successful, not the fact that the instruction set is "better".

  • Re:Dubious (Score:3, Interesting)

    by abainbridge ( 2177664 ) on Wednesday December 28, 2011 @05:58AM (#38513152)

    > RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows

    Modern ARM processors aren't pure RISC processors. Most ARM code is compiled to Thumb-2, which is a variable-length instruction encoding, just like x86. Back in the 90s, when transistor budgets were tiny, RISC was a win: when you only have a hundred thousand gates to play with, you're best off spending them on a simple pipelined execution unit. The downsides of RISC have always been larger program code and less freedom to access data efficiently (i.e. unaligned accesses, byte addressing, and powerful address-offset instructions; see the sketch at the end of this comment). With modern transistor budgets it is worth spending some gates to make the processor understand a compact and powerful instruction set, because that way you save more gates in the rest of the computer than you spend (i.e. in the caches, data buses, and RAM).

    As a result of all this, in some ways, ARM chips are evolving to look more and more like an Intel x86 design. I'm still a big fan of ARM though. Intel will have a long way to go to compete on price, even if they can compete on power.
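
    A small illustration of the unaligned-access point (a sketch, assuming a little-endian host): portable C that reads a 32-bit value from an arbitrary byte offset. On ISAs that permit unaligned loads (x86, and ARMv7+ in the usual configuration) compilers generally lower the memcpy to a single load instruction, while older ARM cores get several byte loads and shifts instead.

        /* Sketch of the unaligned-access point. Assumes a little-endian host.
         * The memcpy avoids the undefined behaviour of casting an unaligned
         * pointer; where unaligned loads are legal it typically compiles to
         * a single load. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static uint32_t load_u32(const uint8_t *p) {
            uint32_t v;
            memcpy(&v, p, sizeof v);
            return v;
        }

        int main(void) {
            uint8_t buf[8] = {0x00, 0x78, 0x56, 0x34, 0x12, 0x00, 0x00, 0x00};
            printf("0x%08x\n", (unsigned)load_u32(buf + 1));  /* unaligned offset; prints 0x12345678 */
            return 0;
        }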

  • by JackDW ( 904211 ) on Wednesday December 28, 2011 @09:47AM (#38514120) Homepage

    Indeed. And the ARM ISA isn't even RISC anyway. In fact, which ARM ISA are we even talking about here? Thumb, Thumb2, ThumbEE, Jazelle or the 32-bit ISA? And which extensions, I wonder? NEON, maybe? Or one of the two different sorts of FPU? That's already a significantly complex instruction decoder. The x86 microcode-for-uncommon-instructions approach is probably better.

    Whenever this topic comes up, the discussion is immediately flooded with ARM fanboys insisting that x86 can never compete for magical reasons that don't stand up to sensible analysis. And as Intel approaches ARM's level of power consumption, as they inevitably must (for there is no magic in ARM and there is nothing physically preventing parity), what we hear is denial: the insistence that Intel is playing dirty tricks.

    At least, post-OnLive, nobody is claiming that there is no demand for x86 applications on mobile devices. I suppose the "ARM = magic" power claims will have a similar lifetime, and will one day look as silly as the claims that Windows XP would be a failure because everyone would be using Linux by 2005. Hope is a good thing, but this is just foolishness.
