
Intel Medfield SoC Specs Leak

Posted by Soulskill
from the just-over-the-horizon dept.
MrSeb writes "Specifications and benchmarks of Intel's 32nm Medfield platform — Chipzilla's latest iteration of Atom and its first real system-on-a-chip aimed at smartphones and tablets — have leaked. The tablet reference platform is reported to be a 1.6GHz x86 CPU coupled with 1GB of DDR2 RAM, Wi-Fi, Bluetooth, and FM radios, and an as-yet-unknown GPU. The smartphone version will probably be clocked a bit slower, but otherwise the same. Benchmark-wise, Medfield seems to beat the ARM competition from Samsung, Qualcomm, and Nvidia — and, perhaps most importantly, its power consumption is in line with ARM's, drawing around 2 watts at idle and around 3W under load."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:One benchmark (Score:5, Informative)

    by icebike (68054) * on Tuesday December 27, 2011 @11:23PM (#38511084)

    It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".

    Nothing fishy about that at all.

    Quoting VR-Zone:

    Intel Medfield 1.6GHz currently scores around 10,500 in Caffeinemark 3. For comparison, NVIDIA Tegra 2 scores around 7500, while Qualcomm Snapdragon MSM8260 scores 8000. Samsung Exynos is the current king of the crop, scoring 8500. True - we're waiting for the first Tegra 3 results to come through.

    But the same paragraph says

    Benchmark data is useless in the absence of real-world, hands-on testing,

    If the performance figures are realistic, this is one fast processor, and it appears to be a single-core chip (or at least I saw nothing to the contrary). That's impressive.

    Single cores can get busy handling games or complex screen movements, leading to a laggy UI. If they put a good strong GPU on this thing you might never see any lag.
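    For scale, the CaffeineMark 3 scores quoted above work out to roughly a 24–40% lead for Medfield. A quick back-of-envelope sketch, using only the numbers from the VR-Zone quote:

    ```python
    # Relative lead implied by the CaffeineMark 3 scores quoted above.
    # Scores are as reported by VR-Zone; nothing else is assumed.
    scores = {
        "Samsung Exynos": 8500,
        "Qualcomm MSM8260": 8000,
        "NVIDIA Tegra 2": 7500,
    }
    medfield = 10500  # Intel Medfield 1.6GHz

    for name, score in scores.items():
        pct_faster = 100.0 * (medfield - score) / score
        print(f"{name}: Medfield leads by {pct_faster:.0f}%")
    ```

    Of course, as the quoted paragraph itself warns, a single leaked benchmark says little about real-world behavior.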

  • by Sycraft-fu (314770) on Wednesday December 28, 2011 @12:00AM (#38511382)

    These days 32nm is their main process. They use 45nm still but not for a ton of stuff. Almost all their chips have moved to it. Heck they have 22nm online now and chips will be coming out rather soon for it (full retail availability in April).

    One of Intel's advantages is that they invest massive R&D in fabrication and thus are usually a node ahead of everyone else. They don't outsource fabbing the chips and they pour billions into keeping on top of new fabrication tech.

    So while 32nm is new to many places (or in some cases 28nm; places like TSMC skipped the 32nm node and did the 28nm half-node instead), Intel has been doing 32nm for almost 2 years now (first commercial chips were out in January 2010).

  • Re:One benchmark (Score:5, Informative)

    by teh31337one (1590023) on Wednesday December 28, 2011 @12:14AM (#38511486)

    Yeah... no.

    vr-zone [vr-zone.com]

    As it stands right now, the prototype version is consuming 2.6W in idle with the target being 2W, while the worst case scenarios are video playback: watching the video at 720p in Adobe Flash format will consume 3.6W, while the target for shipping parts should be 1W less (2.6W).

    extremeTech [extremetech.com]

    The final chips, which ship early next year, aim to cut this down to 2W and 2.6W respectively. This is in-line with the latest ARM chips, though again, we’ll need to get our hands on some production silicon to see how Medfield really performs.

  • Re:whoosh (Score:5, Informative)

    by Svartalf (2997) on Wednesday December 28, 2011 @01:41AM (#38511944) Homepage

    Recent track record... Yeah, sure...

    http://www.pcper.com/reviews/Graphics-Cards/Larrabee-canceled-Intel-concedes-discrete-graphics-NVIDIA-AMDfor-now [pcper.com]

    There's a few others like this one. This includes the GMA stuff, where they claimed the Xy000 series of GMAs were capable of playing games, etc. They're better than their last passes at IGPs, but compared to AMD's lineup in that same space, they're still sub-par. Chipzilla rolls out stuff like this all the time. Been doing it for years now.

    Larrabee.
    Sandy Bridge (at its beginnings...).
    GMA X-series.
    Pentium 4's NetBurst.
    iAPX 432.

    There's a past track record that implies your faith in this is a bit misplaced at this time.

  • Re:One benchmark (Score:4, Informative)

    by Tr3vin (1220548) on Wednesday December 28, 2011 @01:52AM (#38512012)
    UI lag is almost exclusively limited by fill rate on mobile devices. This is a problem on Android, since it is hard for them to optimize it for all of the various chipsets. If the GPU cannot quickly fill pixels, more of the preparation of a frame has to be offloaded to the CPU. In modern GUIs, each pixel can be touched several times per frame, so without a good fill rate, more heavy lifting is required from the CPU. Multiple cores can help, since more processing power can be dedicated to quickly updating the UI.
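    The point about each pixel being touched several times is easy to quantify. A rough sketch of the fill-rate demand for a tablet-class UI; the resolution, refresh rate, and overdraw factor are illustrative guesses for a 2011-era tablet, not Medfield specs:

    ```python
    # Rough fill-rate demand for a tablet UI. Overdraw captures the parent's
    # point: every extra time a pixel is touched per frame (layers, blending)
    # multiplies the GPU's per-frame workload.
    width, height = 1280, 800   # assumed tablet panel resolution
    fps = 60                    # target refresh rate
    overdraw = 2.5              # assumed pixels touched ~2.5x per frame

    pixels_per_second = width * height * fps * overdraw
    print(f"Required fill rate: {pixels_per_second / 1e6:.0f} Mpixels/s")
    ```

    At these (assumed) numbers the GPU has to fill on the order of 150 Mpixels/s just to keep the UI at 60fps; anything it cannot sustain falls back on the CPU.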
  • Re:whoosh (Score:4, Informative)

    by 0123456 (636235) on Wednesday December 28, 2011 @02:26AM (#38512180)

    To Intel, perception is everything, reality is nothing -- as proven by their continuous predominance on the desktop despite AMD's frequent performance-per-dollar and performance-per-watt lead, and occasional absolute performance lead.

    Ah, yes. No-one ever buys Intel chips because they're the best option, poor old AMD keep building the best x86 chips on the planet but stoopid consumers keep buying Intel anyway.

    Back in the real world, at the time when AMD were the best choice you could hardly find anyone at all knowledgeable who was recommending Intel Pentium-4 space-heaters, and now that Intel is the best choice for desktop systems the only people recommending AMD CPUs are the dedicated fanboys. And in the low-power space, no-one uses Intel x86 CPUs because that would be absurd; even a 2W CPU can't compete against ARM.

  • Re:Dubious (Score:3, Informative)

    by the linux geek (799780) on Wednesday December 28, 2011 @02:53AM (#38512322)
    Every Windows release from the NT line since NT 3.1 has run on at least one RISC architecture.
  • by Anonymous Coward on Wednesday December 28, 2011 @03:58AM (#38512654)

    1. Having variable-length instructions complicates instruction decoding, which costs die space and cycles (once for actual decoding and once for instructions spanning fetch boundaries). Also, several processor architectures have 16-bit instructions (ARM, SH, MIPS, TI C6x off the top of my head) while still having access to 16 general-purpose registers, versus x86-64 with its up-to-16-byte insns.

    2. Load-op insns and many others are split up internally into smaller micro-ops. They are about as useful as assembler macros. Load-op insns also hurt performance - for example, on Intel processors a load-op is split into two µops, one of which is dispatched to port 2, which means that two load-ops cannot be dispatched in the same cycle, whereas up to three simple ops can be dispatched in one cycle.

    3. AVX is good, having the same style for general purpose insns is better.

    4. Dedicated SP engine is a solution to a problem, which does not exist on common RISC architectures anyway. The dependency, which is eliminated by the stack pointer tracker is the dependency of a push/pop insn on that value of SP, which is a result of a previous push/pop. There's no such dependency if simple moves to/from memory (e.g. `movq %rbx, 10(%rsp)') are used as in typical RISC (or in x86 too). Also ARM (and THUMB) can save/restore multiple registers on stack with a single insn, so no dependency there either.

    5. The advantage of a 64-bit address space for an architecture traditionally targeted at embedded and mobile applications is quite dubious.

    x86 has no merits, just age-old quirks, which are solved by throwing in a ton of additional logic and gigahertz. Make no mistake, x86-64 CPUs are good because the manufacturing process is good - not because of, but despite, the ISA.
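    The port-contention claim in point 2 can be illustrated with a toy dispatch model. This is a deliberate simplification (loads restricted to a single "port 2", simple ALU ops free to use ports 0, 1, and 5), not an exact map of any real Intel core:

    ```python
    # Toy model of µop dispatch-port contention: each port accepts one µop
    # per cycle. Loads are restricted to one port, so two load µops serialize;
    # simple ALU ops can spread across several ports and dispatch together.
    # Port assignments are assumptions for illustration only.
    ALU_PORTS = frozenset({0, 1, 5})   # simple ops can use any of these
    LOAD_PORT = frozenset({2})         # load µops are restricted to this one

    def cycles_to_dispatch(uops):
        """Greedily dispatch µops (each given as its set of usable ports)."""
        cycles = 0
        pending = list(uops)
        while pending:
            cycles += 1
            free = set(ALU_PORTS | LOAD_PORT)  # all ports free this cycle
            waiting = []
            for ports in pending:
                usable = ports & free
                if usable:
                    free.discard(min(usable))  # claim one port this cycle
                else:
                    waiting.append(ports)      # stall to a later cycle
            pending = waiting
        return cycles

    print(cycles_to_dispatch([ALU_PORTS] * 3))  # three simple ops: 1 cycle
    print(cycles_to_dispatch([LOAD_PORT] * 2))  # two load µops: 2 cycles
    ```

    Under these assumptions, three simple ops dispatch in one cycle while two load µops take two, which is the asymmetry the comment describes.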

  • Re:One benchmark (Score:4, Informative)

    by LordLimecat (1103839) on Wednesday December 28, 2011 @10:47AM (#38514672)

    My mistake -- those numbers are at full load, not idle. That certainly doesn't help Intel at all.
