
Intel Says It Will Move Away From 'Tick-Tock' Development Cycle

An anonymous reader writes: In its latest annual report, Intel says that it will be moving away from its decade-old "tick-tock" strategy (PDF) for developing new chips. From the company's 10-K filing: "We expect to lengthen the amount of time we will utilize our 14nm and our next generation 10nm process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions." AnandTech's Ian Cutress explains, "Intel's Tick-Tock strategy has been the bedrock of their microprocessor dominance of the last decade. Throughout the tenure, every other year Intel would upgrade their fabrication plants to be able to produce processors with a smaller feature set, improving die area, power consumption, and slight optimizations of the microarchitecture, and in the years between the upgrades would launch a new set of processors based on a wholly new (sometimes paradigm shifting) microarchitecture for large performance upgrades. However, due to the difficulty of implementing a 'tick', the ever decreasing process node size and complexity therein, as reported previously with 14nm and the introduction of Kaby Lake, Intel's latest filing would suggest that 10nm will follow a similar pattern as 14nm by introducing a third stage to the cadence."
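
For readers parsing the filing language, here is a minimal illustrative sketch (not Intel's wording) of how the two-step tick-tock cadence stretches into three steps on a single node, using the publicly known 14nm codenames; the 10-K suggests 10nm will follow the same pattern. Note that the 2016 entry was still upcoming at the time of the filing.

    # Illustrative only: the 14nm generation as a three-step cadence.
    # Codenames and years are public knowledge; Kaby Lake was expected, not yet shipping.
    cadence_14nm = [
        ("Broadwell", 2014, "process shrink ('tick')"),
        ("Skylake",   2015, "new microarchitecture ('tock')"),
        ("Kaby Lake", 2016, "optimization (the new third step)"),
    ]

    for name, year, step in cadence_14nm:
        print(f"{year}: {name:9s} -> {step}")
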
  • R.I.P. Andy Grove (Score:5, Insightful)

    by Thud457 ( 234763 ) on Wednesday March 23, 2016 @10:06AM (#51761023) Homepage Journal

    My grandfather's clock was too large for the shelf,
    So it stood ninety years on the floor;
    It was taller by half than the old man himself,
    Though it weighed not a pennyweight more.
    It was bought on the morn of the day that he was born,
    And was always his treasure and pride;
    But it stopped short – never to go again –
    When the old man died.

    Ninety years without slumbering
    (tick, tock, tick, tock),
    His life's seconds numbering,
    (tick, tock, tick, tock),
    It stopped short – never to go again –
    When the old man died.

    • I was thinking the same thing...

      (for those who haven't heard: http://arstechnica.com/informa... [arstechnica.com] )

      Maybe the board was waiting for him to be safely on The Other Side before doing this?

      • Maybe the board was waiting for him to be safely on The Other Side before doing this?

        Then they got it a year too early. Articles about this were already floating around last year, and really, why would they need to wait? Is it an offence to someone to change a strategy that has been in place for a very, very long time, given a changing market, technology reaching its limits, and a law of physics or two in there somewhere?

        I would have liked to believe a man of Andy Grove's genius would have been the first to recognise that doing the same thing over and over again will eventually lead to probl

  • by Anonymous Coward

    In other words, AMD finally catches up with Intel and ARM gets a big leg up over Atom because of Intel's lost fab advantage. In good news for us, though, computer chips will get cheaper because it will finally make sense to build more fabs. If we are going to be stuck at 10nm for an indeterminate period of time, the process gets cheaper and it makes sense to build more foundries.

  • by JoeyRox ( 2711699 ) on Wednesday March 23, 2016 @10:10AM (#51761071)
    At $5B+ for a single fab, and with the market for computers continuing its backward slide, it's no surprise that Intel is putting the brakes on its capital expenditures.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      I also wonder what they call paradigm shifting micro-architectures. Basically, apart from the PIV/Netbu(r)st series, all Intel processors are descendants of the Pentium-Pro, their first OOO processor. The changes have been in the area of completely different FPU units (good riddance for the x87 stack), switching to 64 bit (you have to thank AMD for that), and a few other improvements. But putting the memory controller or the GPU on the chip, using faster/wider I/O busses, or multiplying the number of core

      • What about all the new instructions they have added outside of x64? MMX, SSE, and I forget what else.
        • Re: (Score:2, Informative)

          by Anonymous Coward

          New instructions don't change the fundamental flow of data inside the processor. In the PPro presentation, they said that the FPU needed 86-bit-wide busses (80 bits data + status) and that this was "a lot of bits". Now they have 256-bit AVX and 512-bit is right around the corner. Using parallelism to implement vector instructions is great for some tasks, but a compiler, for example, or an interpreter (Python, Ruby, Perl) still executes mostly basic i386 instructions (or their 64-bit extensions).
          Making the instructio

      • by armanox ( 826486 )

        It is my understanding that the PPro line ended with the Core 1, and the Core 2 was a redesign, and then Sandy Bridge was another redesign.

    • And there simply is no competition anymore. Tick-tock was designed to hammer, hammer and keep on hammering against AMD until they were dead, deAD, DEAD! (for those that don't know AMD used to compete against Intel)

      • The competition now are the chips that are already out there and for most people they work just fine.
      • by tlhIngan ( 30335 )

        And there simply is no competition anymore. Tick-tock was designed to hammer, hammer and keep on hammering against AMD until they were dead, deAD, DEAD! (for those that don't know AMD used to compete against Intel)

        AMD is still around, and I'm sure Intel is keeping them alive because they serve as "competition".

        Should AMD disappear, Intel would be in a world of hurt from government regulators (the EU has found Intel to be in violation of anti-monopoly laws). So AMD right now is right where Intel wants them -

    • by castionsosa ( 4391635 ) on Wednesday March 23, 2016 @10:37AM (#51761271)

      You hit the nail on the head. "Good enough" has knocked Moore's Law off the rails. Since there isn't that much demand, other than adding cores for virtualization [1], it isn't surprising that Intel is backing off the gas pedal with CPU development.

      There are other things to work on besides the CPU cores themselves. Disk I/O hasn't kept up with capacity gains, and there is always room for better power management, which I'm sure Intel's enterprise customers are heavily demanding, if only for PR reasons.

      [1]: The ideal would be faster cores, since Microsoft has hopped on the Oracle and Sybase bandwagon and started licensing by core rather than by CPU socket, but more cores are better than nothing.
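
      A quick, purely hypothetical illustration of the per-core licensing point above: under per-core pricing, a box built from fewer, faster cores can cost noticeably less to license than a many-core box of similar total throughput. The price and core counts below are invented for illustration, not real Microsoft figures.

          # Hypothetical numbers only -- invented to illustrate per-core vs. per-socket licensing.
          price_per_core = 1000                # made-up $ per core per year
          fast_cores, slow_cores = 16, 32      # two servers with roughly equal total throughput

          print("fewer, faster cores:", fast_cores * price_per_core)   # 16000
          print("many, slower cores: ", slow_cores * price_per_core)   # 32000
          # Per-socket licensing (the older model) would have priced both servers identically.
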

      • by dlenmn ( 145080 ) on Wednesday March 23, 2016 @11:16AM (#51761605)

        I'm not picking on you in particular, but I'm seeing a lot of posts implying that Moore's law could keep going but it's too expensive, there's not enough competition to warrant it, etc. The fact is that physics is the nail in the coffin for Moore's law. Making small fab processes is getting more and more difficult because these size scales are super tiny, and the difficulty means that Moore's law simply cannot keep going because we have to develop fundamentally new technology -- not just scaled down current technology.

        There's a reason Intel is planning to stop using silicon at 7 nm (not clear what they'll move to -- maybe indium gallium arsenide), and getting up to production quality with a new material is a huge task that is fundamentally incompatible with Moore's law. (InGaAs is not "new" per se, but it has never seen real commercial use; it has been confined to research labs.)

        There's also a reason that classical (not only quantum) computing with superconducting circuits is again being seriously researched by commercial enterprises -- including companies like Northrop Grumman that are not traditionally associated with designing computer chips. (IBM poured a lot of money into superconducting computers in the 1980s but ultimately gave up because Si computing was marching along just fine. I think IBM is back in the superconducting game too.) Again, getting superconducting circuits up and running is _hard_ and fundamentally incompatible with Moore's law.

        Moore's law is intrinsically dead. End of story. Even if/when the non-Si chips get up and running, I don't expect that Moore's law will be revived. 7 nm equates to about 14 silicon atoms. The end of the road is in sight. It's trying to march through quicksand from here on out.
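
        As a rough sanity check on the "about 14 silicon atoms" figure: silicon's lattice constant is roughly 0.543 nm, so a 7 nm feature spans only on the order of a dozen unit cells (somewhat more if you count individual atoms along the path). A back-of-the-envelope calculation, assuming nothing beyond that one lattice constant:

            # Order-of-magnitude check: how many silicon unit cells fit across 7 nm?
            si_lattice_constant_nm = 0.543   # edge of the silicon unit cell, ~0.543 nm
            feature_nm = 7.0

            unit_cells = feature_nm / si_lattice_constant_nm
            print(f"~{unit_cells:.0f} silicon unit cells across a 7 nm feature")  # ~13
            # Counting atoms instead of unit cells gives a larger number, but either way
            # the feature is only a dozen-odd lattice spacings wide, which is why tunneling
            # and atomic-scale variability dominate at these sizes.
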

        PS. I don't get the "lack of competition" hypothesis for why Intel is slowing down; there are a number of manufacturers matching or closing in on Intel's fab process. E.g., Samsung and GlobalFoundries are already at 14 nm, and TSMC is at 16 nm. These aren't in direct competition with Intel at the moment, but they will be if Intel ever gets serious about putting its chips in things other than desktops/laptops/servers. Intel isn't stupid; they see these other companies as competitors, and Intel really wants a leg up on them. If Intel could keep up with Moore's law, they would.

        • The premature death of Moore's law due to physics has been falsely predicted for about three generations of fabs. Those predictions were wrong then and they're wrong now. The profit motive and billions of dollars have always found a way around supposedly intractable technical problems - and that profit motive is now disappearing (for Intel specifically) due to conditions in their markets.
          • Moore's rule of thumb expired two years ago [slashdot.org].

            It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens.
            - Gordon Moore, in 2005 [wikipedia.org]
          • Guess what: Moore's law has been failing for several generations of fabs! The divergence from Moore's law has been gradual. No one is saying that progress will suddenly stop, but we've been slowly falling behind the "doubling every two years" schedule for a while now (arguably since at least 2012).

            Now, you can argue about why that is. However, the problem is not a lack of effort or funding. I have a bunch of friends who work at Intel, and they're not taking it easy. They're working their asses off but makin

          • by Alomex ( 148003 )

            Three? Try 15... I remember reading thirty years ago how fabs would fail once we reached dust-sized particles / visible-light circuit sizes. For the first problem we got the humans out of the fab; for the second we moved to x-ray frequencies.

          • by Ramze ( 640788 )

            Intel is at 14nm and working hard to push out 10nm after many setbacks, but not long after that, they're done. Did you not read the whole explanation about why Intel is stopping at 7nm on silicon and/or 5nm with other materials? 7nm is 14 silicon atoms wide. Any smaller, and quantum tunneling becomes such a serious issue, they need new materials. With other materials, they MIGHT be able to go as small as 5nm. That's it, though. Any smaller and you basically either need an optical computer or a quan

        • by DrJimbo ( 594231 )

          Your overall point may (or may not) be valid but this passage in particular is either incorrect or grossly misleading:

          Making small fab processes is getting more and more difficult because these size scales are super tiny, and the difficulty means that Moore's law simply cannot keep going because we have to develop fundamentally new technology -- not just scaled down current technology.

          We have had to develop new technology after new technology for decades to keep pace with Moore's Law. This is one of the things that makes Moore's Law so fascinating -- it has already spanned over five orders of magnitude (powers of ten). Take a look at the section on enabling factors and future trends on the Wikipedia page [wikipedia.org]. It is possible we have finally reached the end of Moore's Law
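
          To put "over five orders of magnitude" in concrete terms: Intel's 4004 from 1971 had roughly 2,300 transistors, while a large 2015-era Xeon is in the billions, which works out to more than six orders of magnitude. A quick check, with the transistor counts rounded to well-known approximate figures:

              import math

              # Approximate, rounded transistor counts.
              intel_4004_1971 = 2_300            # Intel 4004 (1971)
              large_xeon_2015 = 5_500_000_000    # rough figure for a big 2015-era Xeon

              ratio = large_xeon_2015 / intel_4004_1971
              print(f"growth: {ratio:,.0f}x  (~{math.log10(ratio):.1f} orders of magnitude)")
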

          • Back in the day, moving from bipolar to MOSFET transistors was a fundamentally new technology, but we haven't done anything like that any time recently. Almost all of the examples on that list are old or speculative. All the chips in recent memory have been silicon MOSFETs made using ultraviolet photolithography. Moving from planar transistors to FinFETs is the closest thing to a new technology, but that really seems like a refinement. Moreover, banking on a fundamentally new technology won't save Moore's la

            • by DrJimbo ( 594231 )

              I fully agree with you that if we are at the end of Moore's Law, then it is because of physical limitations and not economics. As for the claim that there were no preceding tech breakthroughs, Intel's first CTO said [sys-con.com] (in 2008):

              I compare Moore's Law to driving down the road on a foggy night, how far can you see? Does the road stop after 100 metres? How far can you go?

              [...] That's what it's been like with Moore's Law. We thought there were physical limits and [now] we casually speak about going to 10 nanometres. We have work going on different transistor structures. Silicon has become scaffolding for the rest of the periodic table. We're putting these other structures into the materials. We see no end in sight and we've had 10 years of visibility for the last 30 years.

              I think it is quite possible he is wrong about Moore's Law extending out to 2028 but I find it very hard to believe he is wrong about the history of Moore's Law leading up to 2008. He was in a position to see the tech breakthroughs first-hand. I don't see why he would lie about it.

        • by MetricT ( 128876 )

          Moore's Law isn't completely dead, it's just metamorphosing into a new form.

          If Intel is moving into spintronics (as rumor suggests), then next-generation chips should use millions of times less power (like, run your CPU for a month on a AAA).

          If so, it becomes possible to start stacking CPU layers like memory/flash is today. Imagine a next-generation Moore's law stating the number of transistors in a 3D stack doubling every X months.

      • In a given technology, going faster means more power dissipated, and there's a limit on the temperature that silicon semiconductors can tolerate. Power management is one way to gain a little on the speed-heat tradeoff. It's more than PR.
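
        The speed-heat tradeoff mentioned above is usually summarized by the dynamic-power relation P ≈ C·V²·f, and higher clocks generally also need higher voltage, so power grows much faster than frequency. A toy illustration, with capacitance and voltage values invented purely for the example:

            # Toy sketch of dynamic CPU power: P ~ C * V^2 * f (illustrative values only).
            def dynamic_power(c_farads, volts, freq_hz):
                return c_farads * volts ** 2 * freq_hz

            base = dynamic_power(1e-9, 1.0, 3e9)   # baseline: 1.0 V at 3 GHz
            fast = dynamic_power(1e-9, 1.2, 4e9)   # higher clock usually needs more voltage

            print(f"~{fast / base:.1f}x the power for ~1.3x the clock")  # roughly 1.9x
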
      • by trparky ( 846769 )
        I'd have to disagree with the disk I/O part of your comment.

        Disk I/O speed and bandwidth have been growing by leaps and bounds in the last three years due to SSDs. SSDs have made huge improvements in computer performance lately, so much so that if you were to take even a four-year-old computer and put an SSD into it, it would figuratively take off like a rocket. That just goes to show you: if you can't get the data and instructions to the CPU fast enough, you're going to be staring at a screen wondering why your ap
    • by Kjella ( 173770 )

      Actually it's just Moore's law breaking down: the difficulty is in producing smaller transistors, and the technology can't keep up. We know Intel had to delay the 14nm launch because of bad yields, and on 10nm it's probably a lot worse. To go beyond that you need extreme ultraviolet lithography (EUV), which is still heavily in the R&D phase. I'm guessing that what Intel really knows at this point is that with a lot of tweaking they can probably do 10nm with acceptable yields using mostly known technology

      • Actually it's an admission they can't do 10nm. Their current process will not support it. They'll need to follow IBM's lead in order to produce 10nm.
  • by 110010001000 ( 697113 ) on Wednesday March 23, 2016 @10:23AM (#51761161) Homepage Journal
    This will make a LOT of people here mad, but the exponential growth in the computational power of digital computers is ending. We can no longer count on the computers of tomorrow being significantly faster or having more memory than today's. If you have been following the industry closely, you could already see this starting to happen 10 years ago. So we can forget about projections that used the argument of exponential growth creating the "Singularity" or "AI". There simply won't be enough processor power available with classical digital computers. The computer you use 10 years from now will look and perform a lot like the one you have today.
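
    To put the scale of that claim in perspective: doubling every two years compounds to about 32x over a decade, while a more modest cadence compounds to only a few times. The growth rates below are illustrative assumptions, not measurements.

        # Illustrative compounding only -- both growth rates are assumptions, not data.
        years = 10

        moore_doubling = 2 ** (years / 2)   # doubling every 2 years (classic Moore's law)
        modest_gain    = 1.10 ** years      # hypothetical ~10% improvement per year

        print(f"doubling every 2 years over {years} years: ~{moore_doubling:.0f}x")  # ~32x
        print(f"10% per year over {years} years:           ~{modest_gain:.1f}x")     # ~2.6x
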
    • by Anonymous Coward

      Your comment is bad and you should feel bad for making it

    • This will make a LOT of people here mad, but the exponential growth in the computational power of digital computers is ending. We can no longer count on the computers of tomorrow being significantly faster or having more memory than today's. If you have been following the industry closely, you could already see this starting to happen 10 years ago. So we can forget about projections that used the argument of exponential growth creating the "Singularity" or "AI". There simply won't be enough processor power available with classical digital computers. The computer you use 10 years from now will look and perform a lot like the one you have today.

      Heck, the computer I use 10 years from now might very well be the same computer that I'm using today.

      • What you are forgetting is that, to keep processor fabs paid for, they keep shrinking everything else. While CPUs are at 14nm, most GPUs are not. RAM is not.

        In time, expect to see RAM, GPUs, and the other components shrink as well. In 10 years you will buy a computer where all the transistors inside it are at 14nm or less and it is using a fraction of the power.

        • by tlhIngan ( 30335 )

          What you are forgetting is that, to keep processor fabs paid for, they keep shrinking everything else. While CPUs are at 14nm, most GPUs are not. RAM is not.

          In time, expect to see RAM, GPUs, and the other components shrink as well. In 10 years you will buy a computer where all the transistors inside it are at 14nm or less and it is using a fraction of the power.

          Memory almost certainly is at 14nm, if not smaller (they're usually a half-node ahead).

          Memory is the most transistor-dense device you could make - of the billi

    • This will make a LOT of people here mad, but the exponential growth in the computational power of digital computers is ending. We can no longer count on the computers of tomorrow being significantly faster or having more memory than today's. If you have been following the industry closely, you could already see this starting to happen 10 years ago. So we can forget about projections that used the argument of exponential growth creating the "Singularity" or "AI". There simply won't be enough processor power available with cl

      • by Psiren ( 6145 )

        At that same conference, my colleague talked to someone working in Xpoint R&D who told him, "If you need a solid-state drive right now, buy the cheapest Samsung model you can get by with, because in the next two years we're going to blow the competition completely away."

        Oddly enough, I was at a conference last week where we had a keynote by HP, and the speaker was saying pretty much the same thing about memristors. He held up a roughly credit-card-sized model that would apparently hold 1.5PB of data. It all sounds cool, but I'll believe it when I see it. These "just around the corner" technologies sometimes take a lot longer than expected to reach market.

          Oddly enough, I was at a conference last week where we had a keynote by HP, and the speaker was saying pretty much the same thing about memristors. He held up a roughly credit-card-sized model that would apparently hold 1.5PB of data. It all sounds cool, but I'll believe it when I see it. These "just around the corner" technologies sometimes take a lot longer than expected to reach market.

          Even more oddly enough, I actually did some consulting with HP back in 2001 concerning their prototype memristor chips. They were h

          • by PCM2 ( 4486 )

            They've been developing memristor technology for more than 15 years, so hopefully they've finally licked the problems.

            Doesn't sound like it. Their world-changing God device known as The Machine is supposed to be based entirely on non-volatile memristor storage. But the first demo units are going to ship based on DRAM.

            The HP mouthpiece's excuse? "DRAM essentially is non-volatile as long as the power doesn't go out."

        • by bored ( 40072 )

          I heard the same thing nearly two years ago in the form "we are going to produce a 10TB 2.5" drive this year that will kill flash based devices".

          I was skeptical then, because while I could see someone making the device, I just couldn't see them making it at a price that would make all the flash vendors up and give up.

          As no one has actually seen an XPoint device, I suspect it's still a couple of years out for high-end applications. Once Intel/Micron/etc. milk that market for a couple of years you might see one for your PC, maybe...

    • by Maritz ( 1829006 )
      The cost per calculation per second achievable by humanity has been following that exponential, Moore-like curve since at least the late 1800s. When 2-D silicon bounces off physical limitations, a new paradigm will likely keep the curve going. I guess we'll see.
    • We can no longer count on the computers of tomorrow being significantly faster or having more memory than today's.

      Sure we can. We just can't count on them being in the same form factor. I fully expect the computer I use 10 years from now to have 4x the RAM and 4x the number of processing cores. There's still plenty of space in my case for it. I don't expect my laptop to achieve the same thing.

    • Sorry, but that's simply not true. Look at the case of Nvidia: they had hoped for 16nm for Maxwell (their Kepler successor), but it simply wasn't ready in time. So they redesigned it and made it more efficient and faster, and that was despite it being on the same 28nm process as Kepler.

  • by jones_supa ( 887896 ) on Wednesday March 23, 2016 @10:38AM (#51761291)
    For an ordinary Joe it won't matter that much. Most of the services he uses (Facebook, Twitter, Spotify, Netflix, Skype, lightweight gaming) could be implemented even on a Pentium II with a little bit of optimization. Even Microsoft does not bother artificially bloating their operating system anymore.
    • by Anonymous Coward
      That's the most stupid commentary ever. Software has become so bloated that it's hard to find any machine that can handle it gracefully, let alone in a 'snappy' way. The fact that software is rewritten for browsers behind 10 additional layers of abstraction doesn't help either.
      • I did not mean software as it currently is, but how much horsepower is actually needed for a given application if it is programmed in an optimized fashion. Of course you lose performance if you do facepalmy things like building applications inside a web browser or on top of the .NET Framework.
      • It's funny that you first say it's "the most stupid commentary ever" and then follow up by rephrasing his very complaint about the accidental complexity of software in your own words.
  • by avandesande ( 143899 ) on Wednesday March 23, 2016 @10:43AM (#51761337) Journal
    Differences in performance (speed, power consumption etc.) are now almost imperceptible between process changes.
    • LOL, that's because Intel only puts out modest increases in processor performance. You're dealing with a business limitation, not a technology limitation.
      • What do you mean by "puts out"? They do that by shrinking the die and building a new fab, which is precisely what the article is talking about. Back in the day you could see clock speed double or triple with each iteration. Do you really think Intel is sitting on a 10GHz i8 for business reasons?
  • It's pretty telling when the CPU single-thread desktop performance leader, the i7-4790K, is almost two years old [cpubenchmark.net]. That used to be an eternity in silicon fab. Intel is busy on the server side cramming ever-more cores into their Xeons for high-density server rooms and reducing power consumption on the mobile side. The market (and Intel, who in part sets the market) has decided the Devil's Canyon is apparently fast enough for any single-threaded work you'll ever do. That doesn't help those of us who count
  • Sooo... Tick-Tock-Tock then?
  • My dream is to have a computer waiting for my input. Today, even with the fastest machine, I am continually waiting on the machine. Blame it on crappy software, networks, whatever, but CPUs really need to be a LOT faster IMO.
  • Who are these engineers who design the new, smaller manufacturing processes? I'm quite sure Intel or TSMC will reward you quite handsomely if you are an engineer on a research team that makes 10nm feasible. Can you imagine? Those guys change the world.
  • by Chas ( 5144 ) on Wednesday March 23, 2016 @11:49AM (#51761835) Homepage Journal

    I vote that we call it "Boom Shaka Laka"!

  • The article says: "Intel introduced 14nm back in August 2014, and has since released parts upwards of 400mm2, whereas Samsung 14nm / TSMC 16nm had to wait until the launch of the iPhone to see 100mm2 parts on the shelves."
    This is not really a fair statement, as Intel's 14nm process began with very poor yields, while TSMC had very good yields from the start. It was only in mid-2015 that Intel fixed its yield problems.

  • "We're going to artificially slow our release cycle to squeeze as much money out of the consumer as possible."

    Of course they've already been doing this all along. As we rapidly approach the size of a molecule the new frontier will just be power consumption.
  • Remember the good old days when hard drive capacities doubled or tripled every single year? There was a time when, if a 20 GB drive was the biggest thing this year, you could expect 60 GB or 80 GB drives next year. Those days are over. We see higher capacities, but they take a lot longer between cycles. The same thing is happening with CPUs. Once the dies got below about 50 nm, it became increasingly hard to keep shrinking them further. I'm not saying that 1 nm is impossible, but it's going to be very difficul
  • by Tough Love ( 215404 ) on Wednesday March 23, 2016 @05:24PM (#51765079)

    Oh, tick-tock was the bedrock of Intel's success? Silly me, I thought it was more about monopoly control and cutting off AMD's air supply.

  • Intel and "Tick-Tock" basically ground AMD into dust. With AMD unable to keep up with R&D spending, they are no longer really competitive in many of the CPU segments. Meaning that Intel doesn't need to bother anymore (or at least for a while), as they are really only competing against themselves. Not only are most of Intel's CPU offerings "good enough", they are also "better than anything else", so why bother...

  • As CPU progress slows to a crawl and soon comes screeching to a... pause, I'm thinking that water cooling and phase-change cooling are going to get a boost. People can finally justify spending money -- real money -- on sophisticated cooling systems. I already have a high-end water cooling setup that I haven't used for years, but I've never seriously considered making the jump to phase change... until now.

  • Time for programmers to once again enthusiastically embrace assembly language. The age of depending on ever-faster hardware as an excuse for fast/lazy/elegant programming is about to end.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...