Intel CEO Promises Quicker Return To Technological Leadership (bloomberg.com) 37
Intel CEO Pat Gelsinger, facing investor skepticism about his turnaround bid, said the company now expects to reach a key technological milestone sooner than planned, helping the storied chipmaker regain its edge. From a report: Under Gelsinger, Intel has been working to restore its leadership in semiconductor process technology -- an effort that requires the company to retool factories. The CEO has previously promised investors that Intel could reach that point by 2025. "Now we think late 2024," he said in an interview with Bloomberg Television.
The remarks follow a weaker-than-expected forecast from Intel that sent its shares on their worst slide in months Friday. A slowdown in personal-computer sales is weighing on its outlook, but some on Wall Street also see Gelsinger's comeback plan as an uphill fight. He's spending tens of billions of dollars to get Intel back on track and expand into new markets, a push that includes new factories in the U.S. and Europe. Intel, the largest producer of computer processors, dominated the chip industry for decades and was synonymous with Silicon Valley innovation. That was based on a foundation of having the most advanced production. How chips are made is crucial to improving their ability to process and store information, how efficient they are and how much they cost.
Re: (Score:2, Insightful)
Zen4 may be poised to retake the lead (we don't know), but as it sits right now, Alder Lake outperforms Zen3 in nearly every way, and not by a small amount.
It's hard to argue that it isn't a superior design.
Comment removed (Score:5, Interesting)
Re: (Score:2, Informative)
Athlon came out in 1999, long after speculative execution which (1) nobody thought for 20 years was a security risk (or maybe you can post to the comments you made back in the late 1990s criticizing Intel for it)
If you had followed the conversations we had here about it you'd know that they actually knew it was a bad idea back then, too. There are citations in those old discussions proving it.
and (2) is, actually, a legitimate way to increase performance
It's not because they used speculative execution, fanboy. It's because they deliberately compromised security to make it faster by doing security checks after the horse already escaped the barn.
Re: (Score:3, Insightful)
2) Not just they. The Industry at large did.
AMD did in fact avoid making that decision, but they were also making noncompetitive processors, so bad (worse performance, power usage, and cost per unit of work done) at the time that they ultimately nearly bankrupted the company.
Person you're replying to is obviously not the fanboy here. Yo
Re: (Score:3)
1) You're conflating Spectre with Meltdown. Stop doing that. AMD is vulnerable to Spectre-class (speculative execution) sidechannels, like any modern superscalar processor.
No, I am not. AMD is less vulnerable to both Spectre-type and MELTDOWN-style attacks.
2) Not just they. The Industry at large did.
Not everyone, only Intel, IBM, and Sun. IBM actually designed the problem out of their later processors, Intel has depended on mitigations.
AMD did in fact avoid making that decision
Well, at least you said one completely true thing here.
but they were also making noncompetitive processors
They were making more secure processors. Intel's competitive advantage was based in large part on nobody knowing that they were compromising security.
worse performance, power usage, and cost per unit of work done
At the time power was cheap and cost per unit of work done was far, far superior for
Re: (Score:2)
No, I am not. AMD is less vulnerable to both Spectre-type and MELTDOWN-style attacks.
Incorrect.
Meltdown is a very specific class of attacks. Intel was far more vulnerable to them. AMD was only vulnerable to one Meltdown variant.
Both are vulnerable (along with all speculatively executing processors) to Spectre-type attacks. Spectre variants will continue to be produced forever now. That genie is out of the bottle. Only software, and not trusting the processor you're running on, can fix the issue. This is exactly what the industry is doing.
Not everyone, only Intel, IBM, and Sun. IBM actually designed the problem out of their later processors, Intel has depended on mitigations.
And Arm. You may know them as the producers of the m
Re: (Score:2)
Meltdown is a very specific class of attacks. Intel was far more vulnerable to them. AMD was only vulnerable to one Meltdown variant.
Both are vulnerable (along with all speculatively executing processors) to Spectre-type attacks
From the VERY BEGINNING the AMD processors were not only less vulnerable to spectre-class attacks, but the vulnerabilities also leaked less data. When you learn to read you will see that I did not say AMD processors were invulnerable to these attacks, only less vulnerable. But I guess your Intel-fellating fervor is interfering with your ability to do that.
Re: (Score:2)
From the VERY BEGINNING the AMD processors were not only less vulnerable to spectre-class attacks, but the vulnerabilities also leaked less data.
Again, incorrect.
You're conflating Meltdown with Spectre again.
Worse, as time goes on, it looks like AMD is going to be vulnerable to its own share of Meltdown-style attacks (the really bad ones) that were simply missed, because at the time, there just wasn't as much focus on them.
What you don't understand, my rabid little friend, is that these problems aren't Intel's problem, or AMD's problem. They're every out-of-order, speculatively executing CPU's problem. AMD dodged a bullet by not allowing their C [arxiv.org]
Re: (Score:2, Interesting)
Every speculative executing CPU is vulnerable to Spectre-class exploits, even if not the officially published POCs.
That's the entire fucking point of the original whitepapers. Only changing how code is compiled can reasonably mitigate the possibility of timing sidechannels in this manner.
The real "whoops" for Intel was Meltdown (though they were far from the only people affected by that design decision), which really was mo
Re: (Score:2)
"Athlon came out in 1999, long after speculative execution which (1) nobody thought for 20 years was a security risk"
I can tell you weren't on usenet.
I knew about the problem of executing instructions before they were actually required to be executed long ago. It's pretty common fucking sense; you do shit out of order, something's GOING to fuck up in some unexpected way eventually. This is practically a law for ordered systems.
Back to a basic systems engineering class for you.
Re: (Score:2)
I can tell you weren't on usenet.
And I can tell that if you did in fact lurk there, the conversations were so fucking far above your head that anything approaching an interpretation of them from your point of view could be reasonably considered output from an RNG.
I knew about the problem of executing instructions before they were actually required to be executed long ago. It's pretty common fucking sense; you do shit out of order, something's GOING to fuck up in some unexpected way eventually. This is practically a law for ordered systems.
That's some stupid shit, right there.
Nothing is fucking up, electrical characteristics of the machine are being gleaned by advanced timing heuristic measurements.
Beyond that, the claim is provably nonsensical.
Back to a basic systems engineering class for you.
Dude, the most complicated thing you ever engineered was a cheeseburger.
Re: (Score:2, Interesting)
After 5 years of failure, they then switched back to traditional FinFET, but because they were virgins they went from "14nm" up to "16nm", and maturing that then took years as well (technically they are still worki
Re: (Score:1)
Their performance lead starting after the awful Pentium 4 was mainly from being at least a generation ahead on fabrication processes, an advantage that lasted until they spun their wheels for half a decade trying to get their 3D Tri-Gates working at a size/performance that they could call "10nm"
2 things.
First, no, the performance lead after the awful Pentium 4 was mostly because Bulldozer was one of the worst CPU architectures ever shipped.
Beyond the dubiously named "cores" on them, actual hyper-threading simply performed better, using less power, regardless of fab process.
Second, ya, that's a pretty good description of the Great Stagnation era.
After 5 years of failure, they then switched back to traditional FinFET, but because they were virgins they went from "14nm" up to "16nm", and maturing that then took years as well (technically they are still working on mastering "14nm" FinFET unlike those that have been dedicated to FinFET for over a decade)
To be fair, Intel wasn't the only person that tried to make tri-gates work. They're the only ones who wouldn't let it go, though. I think they were just coc
Re: (Score:1)
First, no, the performance lead after the awful Pentium 4 was mostly because Bulldozer was one of the worst CPU architectures ever shipped.
What about those 5 years in between, shit-for-brains?
You know this one fact on this, yeah, and you then spun a bunch of fucking yarn around it, yeah, because you are a dishonest fuck, yeah?
You know "bulldozer sucked" but then get everything wrong anyways
Intel's first Core series dropped in 2006 - this was Intel dropping the P4 architecture and re-promoting their P3 architecture (which they had still used for mobile) - because P4 was a bad design, Intel immediately took the lead back because they h
Re: (Score:2)
What about those 5 years in between, shit-for-brains?
There was no performance lead after the K6, until Skylake v. Bulldozer.
K6-K10 were highly competitive.
You know this one fact on this, yeah, and you then spun a bunch of fucking yarn around it, yeah, because you are a dishonest fuck, yeah?
Why do you keep interspersing "yeah" into your sentence? Are you talking yourself up in a mirror? Did you practice this?
There's no yarn, just a bunch of facts to help paint the picture.
Nothing dishonest about facts.
You know "bulldozer sucked" but then get everything wrong anyways
Sure didn't. I eagerly await your explanation of this claim, though.
Intel's first Core series dropped in 2006 - this was Intel dropping the P4 architecture and re-promoting their P3 architecture (which they had still used for mobile) - because P4 was a bad design, Intel immediately took the lead back because they had more than a generation of process advantage and P3 wasn't so much a bad design.
Core architecture was not superior to the contemporary K8. An Athlon 64 X2 was comparable to a Core 2 Duo.
You're talking out of
Sincerely wish them the best (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
Ok sure, I just meant in the sense that the US in general doesn't produce as many chips as it should.
I do not understand what you mean by "should" as that implies there is a requirement or tenet that does not exist. The US produces a lot of chips. The issue the last several years is every single chip fab has had supply chain and logistics issues at the same time as high demand. Before the pandemic, Intel was struggling with making chips at the leading edge of 10nm. Their older fabs pumped out chips just fine.
Simply that chip production is RETURNING to the US. And honestly, it's going to take years to build those plants here, so it's still going to be a while.
Again I do not understand this. The US has lots of chip foundries. A lot. There are around 506 curr [wikipedia.org]
Re: (Score:1)
Re: (Score:1)
Re:Pat Gelsinger is no Andy Grove (Score:4, Insightful)
1) M1
2) Alder Lake and a very distant:
3) Zen3.
Intel was definitely riding a wave for a long fucking time, but people seem to think the turnaround hasn't already happened, and I don't really get that.
The CPU market is fickle, as AMD showed us. People aren't hesitant to switch to a different vendor.
The switch back to Intel has already happened.
Is the future a fun tit-for-tat between Intel and AMD? I think it is.
Is Intel still stuck in the Skylake(n*+) era? No, they most certainly are not.
Re: (Score:1)
The best performing CPUs per clock on the planet right now are:
1) M1
2) Alder Lake and a very distant:
3) Zen3.
Don't forget POWER10... The fact that it is out of reach for most shouldn't disqualify it from this list, especially after you factor in its 120MB of L3-cache/chip, 15 cores per chip, 2MB L2-cache/core, 8-way SMT/core and 8 FPUs/core, beyond other niceties... I've always loved the POWER machines in our national labs.
Re: (Score:2)
Re: (Score:2)
Power10
Re: (Score:2)
Intel quality control (Score:2)
Re: (Score:2)
The real solution to these is software mitigations and changing how we think about trust and security on processors from a software perspective.
Wow (Score:2)
They will be ready December 31st 2024 instead of January 1st 2025.
Re: (Score:2)
Caught that did ya?
Morris Chang Disagrees With Intel (Score:1)
If they want tech leadership (Score:2)
Maybe they should get some technical leaders.
Promise (Score:2)