Intel Hardware

Intel Details Comeback Plan To Leapfrog Chipmaking Rivals by 2025 (cnet.com) 72

Intel unveiled on Tuesday a smorgasbord of new technologies designed to help it reclaim processor manufacturing leadership within four years. The plans bear the fingerprints of newly installed CEO Pat Gelsinger, who has pledged to restore the company's engineering leadership and credibility. From a report: The developments include a new push to improve the power usage of Intel chips, a key element of battery life, while simultaneously raising chip performance. The technologies involve deep redesigns to how processors are constructed.

One technology, RibbonFET, fundamentally redesigns the transistor circuitry at the heart of all processors. Another, PowerVia, reimagines how electrical power is delivered to those transistors. Lastly, Intel is updating its Foveros technology for packaging chip elements from different sources into dense stacks of computing horsepower. Intel's commitments, unveiled at an online press event, will mean faster laptops with longer battery life, if realized. And the advancements could boost technologies like artificial intelligence at cloud computing companies and speed up the services on mobile phone networks. "In 2025, we think we will regain that performance crown," Sanjay Natarajan, who rejoined Intel this year to lead the company's processor technology development, said in an interview.
Further reading: Intel's foundry roadmap lays out the post-nanometer "Angstrom" era.
This discussion has been archived. No new comments can be posted.
  • by davermaine ( 7077933 ) on Tuesday July 27, 2021 @10:51AM (#61625309)
    It's easy to make promises, but for Intel to change direction like this will take a massive amount of effort. Color me dubious.
    • It's easy to make promises, but for Intel to change direction like this will take a massive amount of effort. Color me dubious.

      Me too. One of the most obvious problems is the current incompetence of middle and senior management at Intel. Ain't gonna fix that any time soon.

      (I remember when Intel and HP used to be "good" companies as far as tech went. Not as a low level bar for standards of corporate stupidity)

      • Re: (Score:2, Insightful)

        by jwymanm ( 627857 )
        For HP it's because they gutted everyone who was loyal to the company and kept the cheapskates, thanks to that horrible Carly and whatever bean counters and board members wanted to pump and dump. Every company has followed suit, though, so I guess I can't blame a single company. Nobody keeps good employees anymore; they make life hell for them, while companies bend over backwards to fill positions over and over with what are basically temp workers.
    • Another day, another Intel press release about how their amazing new chip that will beat everything is right around the corner. It's already been a couple years. I'm sure it'll happen, but this is them begging people to not try AMD.

    • by jellomizer ( 103300 ) on Tuesday July 27, 2021 @12:11PM (#61625619)

      I have seen it done before.

      Back in 2005-2006, Intel was lagging behind AMD because its Pentium lineup was starting to underperform and was too expensive compared to AMD's lineup.

      Then Intel released the Core CPU models. This offered better performance, parallel processing on a single chip, and lower power use. It blindsided AMD and put them roughly 3 years behind Intel. Intel peaked around the 6th-generation Core CPUs, at which point AMD took over again.

      Back in 1994-1995, Intel was lagging behind AMD because its 386/486 had been copied by AMD, whose parts were able to outperform Intel's 486. The Pentium pushed AMD back again.

      Intel, being the larger company, has more money for R&D and can dump money into new mass-production methods faster than AMD. However, AMD is smaller and more nimble, so it can play the catch-up game quicker and overtake Intel, because Intel is stuck with large factories that it cannot afford to stop and retool as often.

      It is kind of like Apple and Samsung Galaxy phones. For 6 months Apple has the best phone, having stolen many of Samsung's best ideas, improved on them, and added some new stuff. 6 months later Samsung makes the top phone, taking a lot of what Apple put in, making its version better, and adding its own new stuff.

      This kind of thing happens; the trick is not to feel so emotionally tied to a brand name.

      • "Then Intel released the Core CPU models. This offered better performance, parallel processing on a single chip"
        The Pentium D offered two Pentium IV cores in a single package, and I think it predated the Core2 families.

        • by bjb ( 3050 )

          The Pentium D offered two Pentium IV cores in a single package, and I think it predated the Core2 families.

          The D did predate the Core/Core2, but the major difference is that the D was hardly more than two Pentium 4 cores in the same package. If you look at the actual silicon, it was a technical equivalent of sticking two of them on the same die with some hot glue and scotch tape to hold them together. The two cores did not have any direct paths to work with each other.

          The first time that Intel (IMHO) m

      • Re: (Score:3, Informative)

        by Anonymous Coward

        I have seen it done before.

        Back in 2005-2006, Intel was lagging behind AMD because its Pentium lineup was starting to underperform and was too expensive compared to AMD's lineup.

        Then Intel released the Core CPU models. This offered better performance, parallel processing on a single chip, and lower power use. It blindsided AMD and put them roughly 3 years behind Intel.

        This is superficially accurate, but it sort of whitewashes history. After years of "budget" processors, AMD came out with the Athlon, which took everyone by surprise, and from around 2000 the P III could no longer keep up (Intel even had to recall a 1.13GHz version). They switched to the P4, which was actually a step back in performance, while AMD jumped further ahead with the Athlon 64 in 2003. Between 2000-2005 AMD was far ahead in performance. I dare you to try using a P4 from the era vs. an Athlon 64. And yet,

      • by gweihir ( 88907 )

        Not this time. All those times before Intel had worse CPU designs, but a real manufacturing edge. That edge is gone, today they just have worse CPU design.

      • Exactly. Intel has done it before and I'm not counting them out. AMD isn't going to roll over and play dead, and we may see additional competition from ARM and possibly even RISC-V CPUs, so Intel has work to do. But if they succeed and again become a leading edge manufacturer and designer, I will consider it a good thing.
    • Intel is hoping that no other manufacturer creates a new processor in the next 3 years.

    • by Guspaz ( 556486 )

      If nothing else, Intel is making all the right moves for a turnaround. You can't really point to anything in their turnaround strategy and say it's the wrong move or that something is missing. As to whether they can pull it off, that's another story.

      • by gweihir ( 88907 )

        They may be making the right moves now. But a 10-year history of ripping off customers and not innovating is not something you can easily get rid of.

        • by Guspaz ( 556486 )

          They've had a significant shakeup at the upper levels, including a new CEO. An engineer this time, unlike the people who ran the company from 2005-2021.

    • "In 2025, we think we will regain that performance crown" is actually not an easy thing for the CEO of Intel to say. It certainly isn't much of an endorsement of their current and near-term offerings, is it? Your point is taken, catching up is easier said than done. But it sounds like they have taken the first step - admit you have a problem.
    • by gweihir ( 88907 )

      These are just marketing claims (i.e. "lies") from an obviously and increasingly desperate enterprise. Grasping at straws, like predicting some "miracle" that will make them not suck anymore, is typical of that. Fortunately, the world does not need Intel. Besides overpriced CPUs, they have very little to offer today.

  • I thought the big problem with things like power consumption is that Intel is one generation behind in manufacturing?

    I thought their 10nm process is, in broad terms, the same as TSMC's 7nm, but TSMC has a 5nm process working too?

    • Kinda (Score:5, Interesting)

      by JBMcB ( 73720 ) on Tuesday July 27, 2021 @11:29AM (#61625473)

      The difference in process can explain the somewhat slight advantage AMD has over them. However, ARM's performance-per-watt has a HUGE lead over Intel, for desktop/laptop workloads at least. The Apple M1 is faster than any Intel desktop or laptop chip, per-core and per-MHz, about on-par with any AMD chip, but beats the pants off of both architectures in performance-per-watt.

      I think the underlying cause of this is the legacy cruft baked into Intel's (and AMD's) silicon: rarely used memory-access modes (implemented in the memory controller), 16-bit ACPI and I/O support. Basically, a lot of hardware support for accelerating real-mode x86 code, which at this point is kind of silly. An all-software real-mode emulator runs 16-bit software thousands of times faster than the original hardware did.

      • Both are definitely true, but don't underestimate the amount of heat that has to be moved to keep 10nm cool on the same workload as 7nm or 5nm. It's huge. Performance per watt is higher when fewer watts are needed and those watts are travelling a shorter distance.

        I don't think anyone doubts that M1 could be faster than a chip emulating x86 on-die (which Intel chips basically do now). Intel has 64-bits wide worth of instructions that can be accessed. They have room to create an entire new instruction set

        • Intel's 10nm is roughly the same feature size as TSMC and Samsung's 7nm, so you need to go back and retool your theories. Simple version: Intel screwed up the node. Yields too low, couldn't ramp the clocks. There were multiple reasons for this but the central one is simple: the handset market far exceeds the PC market as of today, and therefore so does the engineering investment in new nodes.

          • Correct me if I'm wrong, but I think their advantage is only in density. The features are packed closer together but are not truly smaller, so you can get more performance but not better power efficiency.

            • "Packed together closely" and "density" are the same thing. Feature size is (roughly speaking) a measure of density. But note: nobody outside of actual engineering circles talks about feature size any more, because it doesn't shrink fast enough for the marketing department.

              • They're not the same thing unless you're the marketing department. Intel found a way to bring features closer together with shorter interconnects, but that doesn't decrease the power requirements (or heat) in any significant way. The fact that you think Intel's 10nm is close to the same is purely believing the marketing.

      • It's hard to really compare those, since much of the performance/W difference comes down to better fabs and more premium bins from the same fab. The hidden downside to all the Ryzen desktop processors is that they are all dies which were not good enough for use as an Epyc or even Threadripper SKU.

        Otoh, the architecture difference is hard to line up because there aren't any general-purpose ARM chips, and although the M1 is one of the better attempts at achieving this, it doesn't have a reasonable hypervisor implementation,

        • by JBMcB ( 73720 )

          there aren't any general purpose ARM chips, and although the M1 is one of the better attempts at achieving this, it doesn't have a reasonable hypervisor implementation,

          Xen will run on ARM64, which is what the M1 is based on, so I'm not sure what you are talking about. In fact, Rosetta 2 uses the ARM virtualization modes (VIMD?) to speed up x86 emulation, which is fairly decent considering it's a completely different architecture.

      • ... legacy cruft baked into Intel's (and AMDs) silicon.

        I do wonder how much we pay for backward compatibility and being able to run old code on new machines. I stuck with 32-bit for longer than was reasonable, on account of not needing the bigger RAM address space. But when I changed to 64-bit, I noticed that GCC was compiling more efficient code for the newer 64-bit CPU architectures. My floating-point number crunching became more efficient. Basically, GCC could compile in SSE instructions for a 64-bit target, whereas I had to plumb that in by hand if

        • by _merlin ( 160982 )

          You can get the same effect with GCC for 32-bit x86 - just add "-msse2 -mfpmath=sse" and it will favour the SSE unit, like it does by default for x86-64. However, having SSE2 available as a baseline feature isn't the only architectural benefit you get with x86-64. You also get twice the architectural registers, a more orthogonal instruction set, and PC-relative addressing modes. On top of that, the register-based calling convention and improved C++ exception handling model lead to significant performance i

      • by gweihir ( 88907 )

        Besides Microsoft crap there is actually no reason to stay with this old architecture....

    • Retooling is expensive, and most companies want to use what they have for as long as they can get away with it. Intel is optimized for its 10nm Core CPU production; given that it sells much more than AMD does, it would be a big hit to retool as often as AMD can.

      • AMD no longer has fabs, so it doesn't have to retool at all.
        And Intel's strategy was to retool a factory to the latest process technology, build premium processors, then build cheap processors when the "+1" process is implemented in a different fab, then build chipsets when the "+2" process is implemented, then retool.
        It went quite well for them for a long time.

    • Not necessarily. Intel's latest flagship outperforms AMD's flagship in select benchmarks by having the chip use a lot more power. To me that was the only way Intel could still claim the lead in benchmarks. The reason Intel had to do this is that its 10nm fabs are 4 years behind schedule. Intel has made 10nm parts for years, but yield problems meant it was stuck with 14nm for most of its products, squeezing as much performance as possible from that node. AMD spun off their fabs (Global Foundries) a
  • did a video about this yesterday. [youtube.com] Both the video, and the comments are on it, are interesting.

  • Good luck to them (Score:4, Interesting)

    by crgrace ( 220738 ) on Tuesday July 27, 2021 @10:58AM (#61625343)

    Intel certainly would like for this to be true, but a decade or more of "financialization" has really put them behind the 8 ball. Targeted R&D takes years to yield results. While their manufacturing R&D base is still strong, they have lost a ton of design expertise to other companies over the last few years especially. With Moore's Law slowing down, differentiating by innovative design is becoming increasingly important, and this is where AMD (and others) have started to lead.

    It will be tough for Intel to catch up.

    • by Z80a ( 971949 )

      I hope they do, or AMD will turn into Intel.

    • When someone suggests that Intel can survive the near future, point out that it generally takes 5 years to go from initial design to full production on a fab.

      Design does not mean instruction set, and there is significant interplay between the lithography capabilities of the fab and a design that can be profitably fabbed. Maybe the litho has problems making tightly packed SRAM cells, so the design might have to skimp on much of the otherwise expected L0 cache in order to be fabbable, but maybe the problem p
  • Was it really power inefficient chips that dethroned Intel though? Or was it making chips that are vulnerable to countless security flaws? Lol. Improve the right thing, Intel! Yes your next gen chip needs to be more powerful than the previous gen or nobody will buy it. But not at the expense of security!
    • Was it really power inefficient chips that dethroned Intel though?

      Yep. The power envelope defines the maximum performance.

      Or was it making chips that are vulnerable to countless security flaws? Lol.

      LOL is the correct word here. Absolutely no one outside of maybe 10 Slashdot users and a bunch of cloud providers cares about these security flaws. They are irrelevant to desktop and laptop PC users, and if it gives a performance boost, then I wish AMD would be looser with speculative execution too, if it would boost my system's performance as well.

      But I am a reckless nut. I mean I'm sitting here next to a window. A WINDOW. Someone could throw a b

      • by Z80a ( 971949 )

        If I'm a malware writer, I will want to do the least effort to hit the biggest number of individuals possible. If you have the same combination of flaws that millions have, it's not like I will be targeting you specifically, but you're getting hit too.

        • You're absolutely right. So why spend a lot of effort executing a complex attack that needs to be customised to each user's case when you could simply create a phishing email and blast it out to 10 million email addresses, or buy a set of credentials on the dark web?

          You inadvertently emphasised my point. Speculative-execution attacks are not a risk to users because there are a myriad of easier ways to attack said users. Incidentally, this is also why we effectively see zero actual attacks on users using the

      • +1 Reckless

    • Currently their problem is power efficiency while maintaining performance. The 11900K can beat AMD's 5950X in some benchmarks, but the 11900K requires a lot more power to do so. AMD could release an updated 5900 chip that does the same; however, they do not have to, as AMD can claim top performance in most benchmarks without melting customers' computers.
    • by godrik ( 1287354 )

      Intel is a $200B company. Companies of that size can do many things at the same time: they can improve process, supply chain, architecture, and security at once. They do have process problems, and that is a significant fraction of their (lack of) competitive advantage.

      Intel is not one guy. It is thousands of teams around the world; they don't all look at a single problem.

      • by crgrace ( 220738 )

        You'd be surprised what an effect company culture can have. They've tried for decades to get into mobile and communications electronics and haven't been able to do it properly.

    • All of the above. Actually, it was a thousand little cuts, including having lame-ass tick-tock man Gelsinger at the helm. The biggest of all the issues is simply that the PC market is way smaller than the handset market now, and Intel failed to get the memo that the time for owning and operating in-house fabs was a decade ago.

  • Leave the Intel processors in the museum where they belong. Preferably switched off so that they aren't vulnerable to attacks.

  • , as we called this back in the day. Intel must be really nervous about AMD's recent success with the Ryzens. On the other hand, in the datacenter, where Intel makes most of its revenue AFAIK, they still hold a market share of more than 90 percent, with a customer base that is really conservative.
  • I just hope technical sites will be putting the real number next to Intel's marketing speak. So "Intel 7 (10 nm)", "Intel 5 (7 nm)", "Intel 20A (5 nm)", etc.

  • I like the shrinking CPU feature size too, but what about BEOL interconnect delay and transistor switching speed? I mean is there even a limit on theoretical transistor switching speed? Who is to say transistors cannot switch a hundred or even a thousand times faster than they do currently?

    • by ceoyoyo ( 59147 )

      Yes, there are a whole bunch of limits. Charge only moves so fast through silicon. Most of the limits are improved by... decreasing size.

    • Yeah, it's still healthy. Higher density continues to mean lower power and better use of silicon. But the doubling rate has slowed, now running around 6 years instead of 1.5. So shrinking is still a thing, it just isn't the primary thing now.

  • Obviously, if Intel liquidated the company and bought back shares it would increase the share price, so why are they wasting all this time developing technology?
  • The proof is in the pudding. There is no need to pay them any mind until they actually start making better chips. They aren't the underdog: they were the fastest runner, then they took a few years off, ate too much, got way out of shape, and now they are talking about making a comeback. They should shut up until they have something important to say, but given their history, there is no chance of that.

  • "In 2025, we think we will regain that performance crown," Sanjay Natarajan, who rejoined Intel this year to lead the company's processor technology development, said in an interview.

    Did this just happen?

    Intel actually, in writing, admitted they lost the performance crown?
    And for years more??

    Did Tartarus freeze over and no one told me?
    Are we at the end of days?
    Is the sky falling?


    Back to reality, this is good news. Hopefully AMD will at least keep up this time. I don't mind if they exchange performa

    • Back to reality, this is good news. Hopefully AMD will at least keep up this time.

      With my free market economist hat on, this looks healthy. A near-monopoly like Intel responding to technically superior competitors, by investing in technical advances.

      There is one question, though. What exactly makes a better CPU these days? I think the raw data throughput wars have long gone beyond what the average consumer requires. Lower power and better security appear to be the selling points these days. We have GHz coming out of our ears. I don't think Intel are at any great disadvantage by being stu

  • Comment removed based on user account deletion
  • All cars will be electric.
    Diabetes and cancer will be cured
    Intel will be number one

  • RibbonFET
    PowerVia

    what other miracles of secret technology are they going to finally unleash from the crypt?

  • Longer battery life? Sure, they can promise that. But the computer makers will then turn around and make laptops even thinner and lighter, shrinking the battery and thereby negating any power savings from the chips.
  • Hoping Intel and AMD keep leapfrogging each other to keep processors improving! Bonus points if growth in processor manufacturing/fabs helps reduce the chip shortages.
