Intel Hardware

Why Intel Leads the World In Semiconductor Manufacturing

Posted by Soulskill
from the nobody-else-seems-to-want-to dept.
MrSeb writes "When Intel launched Ivy Bridge last week, it didn't just release a new CPU — it set a new record. By launching 22nm parts at a time when its competitors (TSMC and GlobalFoundries) are still ramping their own 32/28nm designs, Intel gave notice that it's now running a full process node ahead of the rest of the semiconductor industry. That's an unprecedented gap and a fairly recent development; the company only began pulling away from the rest of the industry in 2006, when it launched 65nm. With the help of Mark Bohr, Senior Intel Fellow and the Director of Process Architecture and Integration, this article explains how Intel has managed to pull so far ahead."
This discussion has been archived. No new comments can be posted.

Why Intel Leads the World In Semiconductor Manufacturing

  • by Anonymous Coward on Wednesday May 02, 2012 @05:14AM (#39865825)

    The shrink from 22 to 32nm is a staggering size change - 33% finer lithography - and it uses their much-hyped 3D transistor technology on top of that. Yet Ivy Bridge, being just a shrink of the older Sandy Bridge die, shows no improvement over the 32nm version. Traditionally, Intel has always been able to show lower power consumption and a more than tangible performance improvement from a process shrink alone, but Ivy Bridge does nothing extra in terms of performance and consumes no less power than its older 32nm sibling - and let's not mention the inefficient heat packaging causing temperatures hotter than the 32nm Sandy Bridge. There's a problem here, Intel.
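    [Ed: the shrink arithmetic is easy to check. Taking the node names at face value (they are marketing labels as much as physical measurements), a quick sketch:]

```python
old_node, new_node = 32.0, 22.0  # nm, Sandy Bridge -> Ivy Bridge

# Reduction in linear feature size, and in area for the same layout.
linear_shrink = 1 - new_node / old_node
area_shrink = 1 - (new_node / old_node) ** 2

print(f"linear: {linear_shrink:.0%}")  # ~31%, often rounded to "a third finer"
print(f"area:   {area_shrink:.0%}")    # ~53% less die area for the same design
```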

  • by SuricouRaven (1897204) on Wednesday May 02, 2012 @05:50AM (#39865947)
    That's partly because they put the die space freed up by the shrink to a new purpose: graphics performance. If you just look at CPU performance, Ivy is no better. Benchmark the built-in graphics and it's far ahead.

    Of course, anyone who actually needs decent graphics wouldn't be using the on-chip graphics anyway, so I question just how useful this really is.
  • by QQBoss (2527196) on Wednesday May 02, 2012 @06:21AM (#39866069)

    The shrink from 22 to 32nm is a staggering size change - 33% finer lithography - and it uses their much-hyped 3D transistor technology on top of that. Yet Ivy Bridge, being just a shrink of the older Sandy Bridge die, shows no improvement over the 32nm version. Traditionally, Intel has always been able to show lower power consumption and a more than tangible performance improvement from a process shrink alone, but Ivy Bridge does nothing extra in terms of performance and consumes no less power than its older 32nm sibling - and let's not mention the inefficient heat packaging causing temperatures hotter than the 32nm Sandy Bridge. There's a problem here, Intel.

    While I will accept that you reversed some numbers (the shrink was from 32 to 22, not the other way around) and that Intel is using tri-gate transistors, almost everything else you describe is just flat-out wrong. Ivy Bridge DOES show lower power consumption at stock voltages (TDPs of 77W vs 95W are a testament to that), and it is higher performance at that lower power consumption (though not by huge amounts, nor was it intended to be). Since it draws less power than Sandy Bridge at the same frequency, it is not having any issues related to thermals and packaging.

    Now, if you want to rant about the fact that it doesn't handle overvoltage well for overclocking purposes, that is fine, but it is a separate discussion from stock behavior. What you are seeing is that Intel (probably extremely wisely for the market they are chasing most heavily [anandtech.com]) has tuned its process node for stock voltages, but this results in very leaky transistors at high voltages. Additionally, while the current packaging can remove heat just fine at stock voltages, when the transistors start leaking too much the heat builds up too quickly, which is certainly a 22nm node issue and not actually a packaging issue. [tweaktown.com] At some point, though how far in the future I can't begin to guess, they will probably tweak the process for the Extreme Edition CPUs to make them handle an overclock without leaking so much, but it will take time to learn how they can play with the various knobs to get what they want without destroying what they need.
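    [Ed: the stock-vs-overvolted behavior described above can be sketched with a toy power model. Dynamic power scales roughly as C·V²·f, while leakage current grows roughly exponentially with voltage; all constants below are illustrative, not Ivy Bridge measurements.]

```python
import math

def chip_power(v, f_ghz, c_eff=1.0, i_leak0=0.05, v_nom=1.0, s=0.1):
    """Toy CPU power model: dynamic C*V^2*f term plus an
    exponentially voltage-dependent leakage term.
    All parameters are made-up illustrative values."""
    dynamic = c_eff * v ** 2 * f_ghz
    leakage = i_leak0 * v * math.exp((v - v_nom) / s)
    return dynamic + leakage

stock = chip_power(1.0, 3.5)      # operating point the process is tuned for
overclock = chip_power(1.3, 4.5)  # overvolted: the leakage term explodes
print(f"stock:     {stock:.2f} (arbitrary units)")
print(f"overclock: {overclock:.2f} (arbitrary units)")
```

    At stock, leakage is a rounding error next to dynamic power; overvolted, the exponential term becomes a large fraction of the total, which is the qualitative shape of the behavior the comment describes.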

    This leaves me with the feeling that the only problem here is your expectation of an overclocking king from a CPU that was manufactured with the intent of taking the mobile market by storm (and they have tuned the process properly for that). Let's see how they tune the process technology for the Extreme Edition (and hopefully carry that into other desktop-bound CPUs) before deciding that they have screwed the pooch on overclocking.

  • Re:How come Apple (Score:4, Interesting)

    by afidel (530433) on Wednesday May 02, 2012 @11:12AM (#39868265)
    I'm not sure about GM, but I know Ford is no longer casting around here; they recently started tearing down the 60+ year old casting plant in Brook Park. The reason for the plant's demise is that it's an ironworks: Ford basically doesn't use cast-iron blocks any longer, and it doesn't view block casting as a core area, so there was no way it was going to invest the massive amount of capital it would have taken to convert the plant to casting aluminium.
  • by SecurityTheatre (2427858) on Wednesday May 02, 2012 @05:20PM (#39872943)

    It might be worth pointing out that Core wasn't on the roadmap. It was a happy accident.

    The design came from the Pentium M, which was just a reworked Pentium III. The P4 "NetBurst" had been on the roadmap for a decade when it came out, to be followed by IA-64.

    The Pentium M was intended to be a "mobile" chip for mid-range laptops where the P4 ran too big and hot. The then little-known Israeli design team was put on it and produced a really remarkable product that scaled far better than anyone expected. As a result, after release, they were set to the task of improving it and reworking it into a real desktop chip (the Core) and then, because it was still so tiny, the Core Duo and later the Core 2 Duo.

    Talk about flukes: sometimes engineering stems from them. It's not that they did it wrong; it's just how it is.

    As far as I know, they are still benefiting from some of the amazing hand-layouts that were done on the Pentium M and early Core chips. Nobody else would even consider doing a manual layout on a modern chip. They had a few people who did just that and it made all the difference.

