
Intel's 14nm Broadwell Delayed Because of Low Yield

judgecorp writes "Intel has put back the delivery of its 14nm Broadwell desktop chip by a quarter because of a manufacturing issue that leaves it with too high a density of defects. The problem has been fixed, according to CEO Brian Krzanich: 'This happens sometimes in development phases.'" The good news is that it is just a defect density issue. A first round of tweaks failed to increase yield, but Intel seems to think a few more improvements to the 14nm process will result in acceptable yield.
  • by Anonymous Coward on Thursday October 17, 2013 @04:16PM (#45156821)

    14 nanometers should be enough for anyone.

    • Nah, we're not quite at that point yet. I've seen estimates that we could see 1nm processes by 2030, but many people say anything below 5nm (expected circa 2020) isn't feasible. Either way, we're at about the manufacturing limit of the Newton and Thomson/Bohr/Rutherford universe. Atoms are between 0.3 and 3 Angstroms in size. That's 0.03nm to 0.3nm. If we want to go smaller than that, we have to construct our devices out of something other than atoms, and that's assuming that subatomic/quantum forces don'

      • or stop using electrons

        • Two minor points.

          Electrons are easy to use and last a long time.

          Subatomic particles, on the other hand, are much harder to deal with. Also, not many of them are actually smaller.

      • If we want to go smaller than that, we have to construct our devices out of something other than atoms

        You could shrink the atoms using muons, but overclockers will suffer a nasty surprise.

      • The thing to remember is that roughly 1nm is the end of the line for the current process for chip making. It doesn't mark the end of circuits, it just means we need a different method. That could be with silicene, nanotubes, or heck, even quantum computers. This isn't the end of the progress of processors, it's just the end of Moore's Law. There might even be a 0.5nm or 0.1nm era, but there will be some serious diminishing returns for that (unless someone is really clever)
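The atomic-limit arithmetic in the thread above can be sanity-checked in a couple of lines. This sketch assumes a silicon covalent radius of roughly 111 pm (0.111 nm), a commonly cited textbook value rather than a figure from the article:

```python
# Back-of-envelope check of the "atomic limit" argument.
# Assumes a silicon covalent radius of ~111 pm (0.111 nm).
SI_ATOM_NM = 2 * 0.111  # approximate diameter of one silicon atom, in nm

for feature_nm in (14, 5, 1):
    atoms_across = feature_nm / SI_ATOM_NM
    print(f"a {feature_nm} nm feature is only ~{atoms_across:.0f} silicon atoms wide")
```

At 14nm a feature is already only on the order of sixty atoms across, which is why estimates in the thread put the hard wall a few process generations out.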

    • 14 nanometers should be enough for anyone.

      It's not a problem of 14 being enough, it's a problem of 14 being too much.

    • by hairyfeet ( 841228 ) <bassbeast1968@@@gmail...com> on Thursday October 17, 2013 @08:43PM (#45159691) Journal

      Actually, if the rumors some of the other sites are reporting are true, the 14nm delay isn't because of low yield...it's because nobody is buying. Oh sure, the yields aren't great, but like AMD, it looks like Intel has realized there really ain't a point in constantly putting out faster and better chips when they can't move the ones they got.

      What both need to realize, and what will be biting ARM right in the ass in less than 18 months by my calculations, is this....The software just hasn't kept up with the hardware, and X86, by switching from MHz wars to core wars, went from "good enough" to "insanely overpowered", and when you can't even stress the one you have, what is the point of buying a new one? Intel and AMD are finding this carries over to other areas as well, take laptops for example. Used to be you could set your watch by my customers replacing their laptops, every 2 years for the business guys and every 3 max for the home users, because the combo of heat cycling and software requirements would make them break or run painfully slow. Now? Well most of the time the laptop is twiddling its thumbs, so it's not getting hot enough to kill it, and even a 5+ year laptop these days is a C2D or Turion X2 with 3+ GB of RAM and a 300GB+ HDD, more than Joe and Sally Average need frankly.

      Oh and for the guys praising ARM and thinking that train is gonna keep on rolling? You've got 18-24 months by my calculations, and then? Hope you enjoy the same boat Intel and AMD are in now. The reason why is simple...ARM doesn't scale well, and there are only so many cores and so much MHz you can push in a thin and light before you end up with battery life measured in minutes, so just like Intel and AMD hit the heat wall? So too will ARM hit the battery wall. When you combine this with the incredible race to the bottom going on right now, we are talking dual core tablets in the $70 range at Chinamart and quads starting at $100? It won't be long before everybody and their dog has a phone and tablet that is faster than they know what to do with, and then, like X86, they won't replace until the unit dies.

      So I wouldn't be surprised if Intel just sits on 14nm until they get it down so well they can sell it as cheap or cheaper than current chips. After all, AMD has already said it'll be a year before they release a new chip, and why should they? Thanks to having a mature process they can sell hexacores for $100 and octocores for $130, and their yields on the APUs are so good the OEMs are selling quad laptops for $399, so why spend all that money on a new chip when sales are already depressed? The same goes for Intel: they have chips at just about every price point, a mature process means high yields and more profit per wafer, and with the global economy at a crawl and PCs becoming like appliances, why come out with a new chip? Stick with what you've got; they are already far faster than Joe and Sally know what to do with anyway.

      • Yeah, because nobody that runs devices on battery power wants faster, more efficient devices. Until my phone can last a week while running Crysis, you have no point.
  • Short term setback for Intel. They will get yield up eventually. I just hope it's before they run out of cash to run operations...

    • I just hope it's before they run out of cash to run operations...

      Lolwut? Yeah, umm, that's not even remotely a concern.

      • Sorry, tongue was firmly in cheek on that one..

        • On the other hand, the guys at Altera have bet the bank on Intel, so they're likely praying that Xilinx's 16nm TSMC process gets delayed.
          While Intel has utter dominance on their market, Altera is in catch-up mode...

  • Since, in practice, they'd want to get rid of old stock before selling their shiny new product, this isn't really that much of a problem.

    It's not like AMD is going to magically beat Haswell before Broadwell is released. It would be nice if they did, though...

  • ... where does it end? I had to actually check what the atomic size of silicon is (111pm), so there are only a few years left (maybe 10-20) to reach the atomic level. Then what? I'm really curious, as I'm quite impressed by how quickly this development came...

    • by stms ( 1132653 ) on Thursday October 17, 2013 @04:46PM (#45157275)

      Potentially it can keep going until the size of a transistor is just a few electrons across, but as we get closer to that point quantum teleportation becomes more of an issue. This is a cool video that explains some basic stuff about transistors and the end of Moore's Law.
      https://www.youtube.com/watch?v=rtI5wRyHpTg [youtube.com]

      • by Anonymous Coward

        Potentially it can keep going until the size of a transistor is just a few electrons across but as we get closer to that point quantum teleportation becomes more of an issue.

        More often referred to as electron tunneling or quantum tunneling.

      • by Anonymous Coward

        a few electrons across?

        An electron is a point particle, it has no diameter. I think you mean a few *atoms* across. You cannot make a transistor out of a single atom.

        Quantum teleportation?

        I think you mean quantum tunneling.

        sigh...what has happened to the Slashdot of Old, where the comments were insightful and informative, and written by people who actually knew the subject they were talking about....

      • Is the next microchip frontier something else entirely? Specifically I'm wondering if RAM will become more important as CPUs stop shrinking. It seems like RAM is 15 to 20 times slower than the CPU (for DDR3 [wikipedia.org]) at this time. Will this ever change? Will cache RAM grow by a greater factor than CPU transistor size will shrink? If, hypothetically, RAM became as fast as the CPU, we would have vastly increased performance. But how likely is this?
        • by Bengie ( 1121981 )
          RAM and cache are two different things. Cache latency scales something like O(n^2) with size, but from a small base, while RAM latency is roughly O(1), but a large number. The latency of a huge SRAM cache would be horrendous. Very, very generalized.
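The grandparent's 15-to-20-times figure can be put in cycle terms with a quick sketch. The latency numbers below are order-of-magnitude assumptions for a ~3 GHz part, not measurements:

```python
# Rough cycle-count illustration of the memory gap discussed above.
# All latency numbers are ballpark assumptions, not benchmarks.
cpu_ghz = 3.0
cycle_ns = 1.0 / cpu_ghz  # ~0.33 ns per clock cycle at 3 GHz

latency_ns = {
    "L1 cache":  1.0,   # assumed
    "L3 cache": 10.0,   # assumed
    "DDR3 DRAM": 50.0,  # assumed
}

for level, ns in latency_ns.items():
    print(f"{level}: ~{ns / cycle_ns:.0f} cycles stalled per access")
```

With these assumptions a DRAM access costs on the order of 150 cycles, which is exactly why ever-larger caches matter more than raw core speed once the CPU itself stops shrinking.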
    • Re: (Score:3, Funny)

      Maybe we could build computers out of Planck planks. They're really small.

    • If you made a law preventing any transistors below 14nm, the architectural work would continue making things faster.
      Architectural changes have contributed more to speedup over time than transistor size. They are not independent, since smaller transistors allow more integration and co-location, but from where I sit, there's plenty to be done in computer architecture to make them faster and plenty to do in software to stop blowing away so much performance on fripperies, bad drivers, bad memory management and

  • They released Haswell in June, they've barely had time to sell that so Q4 2013 to Q1 2014 is still ahead of their yearly tick-tock. They're not announcing any delay to Airmont which is their mobile 14nm chip and we all know one quarter to or from won't change much in the desktop/server market. In related news AMD posted their Q3 earnings today and their CPU sales are still down, their gross margin is down but on the bright side the console sales are finally coming in so overall they're making a profit this

    • their CPU sales are still down

      So are Intel's. This is not a good time to sell x86 CPUs. AMD has a chance to reverse their trends, since they are just a small player, but they'll have to steal that market from Intel.

  • Not the real issue (Score:4, Interesting)

    by Anonymous Coward on Thursday October 17, 2013 @05:11PM (#45157623)

    Intel has produced two new generations of processor that were WORSE than Sandy Bridge. Higher power use (under load) and far less over-clockability. The newer parts were ONLY better (in desktop systems) if you intended to use their new instructions (vanishingly unlikely) or the integrated graphics (which would be pointless- people buy expensive Intel CPUs to partner them with expensive GPUs from AMD or Nvidia).

    Intel, of course, were in the same position with the waves of Core2 parts, each of which essentially overlapped each other in performance generation on generation (although power consumption was much improved over the first generation of Core2 i7 parts).

    Intel currently doesn't know exactly where to go in the near future, and is attempting to hedge its bets by trying various things. It is currently undercutting its own HYPER-expensive ULV mobile high-end parts with the new 4-core 'atom' Bay-trail chips that seek to go head-to-head with ARM. Because current high-end ARM is so good, Intel is forced to sell a very dangerously good chip (dangerous to Intel's profits, that is) into low and mid-end tablets, running Android or Windows8.1

    However, even Intel's first decent 'Atom' part ever (after 5+ attempts) is beaten by Nvidia's somewhat lame Tegra 4, and Qualcomm's Snapdragon 800. It is exterminated by Apple's new ARM chip, soon to be seen in Apple's new iPad refresh.

    Intel's 22nm process, and use of FinFETs, has been a total disaster so far. A process advantage and custom designed chips don't allow Intel to beat ARM parts coming from commodity foundries at TSMC and Samsung at 28nm. Sure, Intel can make its own chips smaller than those on the previous process, and theoretically get more parts per wafer, but the per-wafer costs rocket, the yields drop (initially), and insanely expensive new plants have to be built to service the new process.

    What does Intel get from spending all this new money on R+D? At this moment in computer history, almost nothing. The x86 is dying, and everyone BUT Intel builds ARM solutions. Every major player has a GPU (graphics) solution as good as Intel, and Intel isn't within a million miles of matching the AAA-gaming GPU designs from AMD and Nvidia (despite the fact that Intel has spent more money than every graphics company combined, across their combined periods of existence, to create its own GPU solutions).

    Intel simply has no current use for its expensive 14nm process. It has built the factories, so it is engaged in a waiting game- waiting for mobile parts to roll off the 14nm production lines that have clear market advantages over its current mobile chips. It just isn't worth Intel's time launching another round of non-improved parts. The market has changed forever, and no-one wants to buy "this season's Intel" for the brand loyalty reasons previously apparent.

    Intel fanboys want 6-core and 8-core parts, but Intel is extremely loath to risk introducing better value into the desktop market. If Intel properly sold 6-core solutions, they would have to sell 6-core i5 parts, and these would beat up their EXTREMELY profitable 4-core i7 parts. Intel is too in love with the status quo.

    If Intel's Bay-trail 4-core parts prove good enough for tablets and non-gaming laptops, and they will, having greater performance than the more-than-adequate mobile 2-core Core2 parts used in the first decent cheap laptops years ago, where does most of Intel's mobile biz go from here? Bay-trail parts (unlike those years-old mobile 2-core Core1/Core2 laptop chips) also do all the video decoding in hardware, allowing flawless playback of all current video content (and Bay-trail is strong enough to do CPU-enhanced decode of 4K video recorded in h264).

    Bay-trail is the part Intel moved Heaven and Earth NOT to produce. Bay-trail is the final step in the race to the bottom for the x86 based computers that most non-AAA gamers will need. If the only real money Intel ends up making comes from chips like Bay-trail, Intel is done.

    Think about this. In a few weeks, you will be able to buy quite decent Android tablets for $150 using 4-core bay-trail. A little hacking, and you've got yourself a $150 dollar Windows8.1 tablet. A $150 tablet running PROPER unrestricted Windows.

    • by 0123456 ( 636235 )

      In a few weeks, you will be able to buy quite decent Android tablets for $150 using 4-core bay-trail. A little hacking, and you've got yourself a $150 dollar Windows8.1 tablet. A $150 tablet running PROPER unrestricted Windows.

      Yeah, 'cause Windows is, you know, free and stuff.

      At retail, you'd be paying about $100 for Windows alone.

      • Re-read this part:

        A little hacking, and you've got yourself a $150 dollar Windows8.1 tablet.

        Do you really think we won't be pirating the shit outta Windows when we do that?

    • by alvinrod ( 889928 ) on Thursday October 17, 2013 @08:54PM (#45159767)
      The reason that there's less overclocking headroom has little to do with the architecture design and a lot to do with the crappy thermal solution Intel started using on their IB and Haswell chips. Basically they started using some really crappy thermal paste instead of soldering the IHS like they did with Sandy Bridge. People who delid their chips and use better thermal interface material get far better results. Some people can get upwards of an extra GHz in speed when over-clocking and see some fairly substantial temperature drops as well.
  • by slashmydots ( 2189826 ) on Thursday October 17, 2013 @05:43PM (#45157973)
    So make a "triple core" edition called an i4 where really 1 core just didn't pass quality control so they turn it off. AMD did it and it sold so well they had to purposely cripple working quad cores to meet the demand for triple core chips.

    I have a theory that those T-edition chips Intel made that are just underclocked, hyper-efficient, ultra-low-wattage editions of their recent chips are actually just ones that wouldn't run properly at the normal stock clock. I never heard a solid claim that they actually had different voltage regulation circuits or anything like that. They just underclocked them and made them have a higher tendency not to clock up to a full multiplier level as often or for as long.
    • Re:Pull an AMD (Score:5, Interesting)

      by 0123456 ( 636235 ) on Thursday October 17, 2013 @06:09PM (#45158233)

      Except the cores are probably small compared to the L3 cache, so most failures will be in the cache, not the cores.

      Back when I worked in the chip business, we designed them so some components could be disabled if they failed the manufacturing tests, but there were very few that we could actually sell that way. Either the fault would be in the components that couldn't easily be disabled, or there'd be multiple faults in too many places to make it viable.

      I have wondered myself whether the low-power CPUs are just the bin that wouldn't work at normal power levels.

      • Actually check out Haswell's die configuration [wccftech.com], the integrated graphics takes up about 2 times more area than the L3 cache. Also, look at how dense the transistors are in the GPU area, looks as dense or maybe even more dense than the cache. It wouldn't surprise me if graphics are a source of manufacturing problems in addition to L3 at this point.
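The binning discussion above has a standard back-of-envelope behind it: the Poisson yield model, Y = exp(-A*D), the probability that a die of area A (cm^2) catches zero defects at defect density D (defects/cm^2). The die area and defect densities below are made-up illustrative numbers, not Intel's actual figures:

```python
import math

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Probability that a die of the given area has zero defects (Poisson model)."""
    return math.exp(-area_cm2 * defects_per_cm2)

die_area_cm2 = 1.8  # hypothetical ~180 mm^2 die; not an actual Broadwell figure
for d in (0.1, 0.5, 1.0):  # defects per cm^2, illustrative
    print(f"D = {d}/cm^2 -> yield ~{poisson_yield(die_area_cm2, d):.0%}")
```

Because yield falls off exponentially with defective area, fusing off a bad core or some bad cache (the "triple core" idea upthread) recovers a large fraction of dies when D is high, which is exactly why salvage SKUs appear early in a process's life.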

  • The good news is that it is just a defect density issue.

    And what kind of problem can you have in a fab that is not a "defect density issue"?

    In a related question, can I declare Moore's law dead already, or is there some current fab upgrade that isn't delayed by at least 18 months?

    • 'And what kind of problem can you have in a fab that is not a "defect density issue"?'

      Systematic flaws, like all horizontal wires are printing 10% narrower than intended or the effective dielectric constant at a particular layer trails off evenly from the center of a wafer to the edges.

  • ... because of a manufacturing issue that leaves it with too high a density of defects.

    Sorry, after a long time, I have to put my Grammar Nazi hat on for (I think) the first time. It's not just you; this is just the example that tipped me over the edge - much like the "leaves it with too high [of] a" phrasing leaves the reader tipping off into.... what?

    This type of construction has become endemic in conversation in the last few years, and I'm sorry, but it's cumbersome, ungainly, unsightly, and painful to hear or see. Perhaps, just perhaps, if I say something, this bad practice will lose so

    • by Daikiki ( 227620 )

      It's too short a season to grapple with so harsh a critique of this minor a transgression.

      Sorry. I guess that was a bit like scratching a chalkboard, but I personally rather like this particular grammatical construct. It's efficient and it front loads the subjective point the author is trying to make, making comprehension easier. Compare: "The season is too short to grapple with a critique that's so harsh of a transgression that's this minor."

      • It's too short a season to grapple with so harsh a critique of this minor a transgression.
        Wow, you managed to get three of them into a single sentence! :)
        How about "The season is too short to spend it grappling with such a harsh critique of a transgression this minor".

        Our brains read predictively, constructing the most probable usage as we go. (There was an article on slashdot about this recently.) In this case, I would say that "of this minor a transgression" is interpreted by our brains first as re
