Intel's 10nm 'Cannon Lake' Processors Won't Arrive Until Late 2019 (digitaltrends.com)
At the company's second quarter 2018 financial results conference call, Intel chief engineering officer Venkata Renduchintala said the "Cannon Lake" 10nm processors won't appear in products until the 2019 holiday season. "The systems on shelves that we expect in holiday 2019 will be client systems, with data center products to follow shortly after," Renduchintala said. Interim CEO Robert Swan went on to tout the company's "very good lineup" of 14nm products. Digital Trends reports: "Recall that 10nm strives for a very aggressive density improvement target beyond 14nm, almost 2.7x scaling," Renduchintala said during the call. "And really, the challenges that we're facing on 10nm is delivering on all the revolutionary modules that ultimately deliver on that program." Although he acknowledged that pushing back 10nm presents a "risk and a degree of delay" in the company's roadmap, Intel is quite pleased with the "resiliency" of its 14nm roadmap. He said the company delivered in excess of 70 percent performance improvement over the last few years. Meanwhile, Intel's 10nm process should be in an ideal state to mass produce chips towards the end of 2019.
Intel's Cannon Lake chip is essentially a shrink of its seventh-generation "Kaby Lake" processor design. Given the previous launch window, the resulting chips presumably fell under the company's eighth-generation banner despite the older design. But with mass production pushed back to late 2019, the 10nm chips will fall under Intel's ninth-generation umbrella along with CPUs based on its upcoming "Ice Lake" design. Intel claims that its 10nm chips will provide 25 percent increased performance over their 14nm counterparts. Even more, they will supposedly consume 50 percent less power than their 14nm counterparts.
What processing pipeline bugs are present? (Score:1)
Given that "ntel's Cannon Lake chip is essentially a shrink of its seventh-generation "Kaby Lake" processor design" what execution silo bugs are currently present in the designs?
Re: (Score:2)
Silo?
As usual, some "bugs" (Intel errata) will be fixed and some new ones created.
Don't know what that has to do with Intel's failure to get their 10nm process up and running, though.
Re: (Score:1)
Do you have the slightest clue about how modern-day processors are created? Just building and configuring the fabs and tools needed to build the tools that finally allow mass production of the 10nm CPU is a major undertaking. And the production cycle cannot be rushed until the actual CPU has been produced for review, benchmarking, and extensive testing. From the whiteboard to mass production of a CPU probably ranks as the most complicated process the human race undertakes today. Without the continuing effort
Re: (Score:2)
Also, Intel's 10nm process is about on par with GF/TSMC 7nm, so basically the "same technology level".
Re: (Score:2)
Given that "ntel's Cannon Lake chip is essentially a shrink of its seventh-generation "Kaby Lake" processor design" what execution silo bugs are currently present in the designs?
No, it is a shrink of Skylake-X (with AVX-512). Skylake-X is newer than Kaby Lake, which CPU-wise is just a straight run-of-the-mill Skylake with no changes at all.
Anyway, process shrinks throw up all kinds of issues, and apparently many of the tricks they have used in all the 14nm generations simply don't work at 10nm, and they are learning that the hard way. They got ahead of the competition with those clever tricks, and now they no longer work.
Re:What processing pipeline bugs are present? (Score:4, Insightful)
They got ahead of the competition through anticompetitive business practices, and by deliberately compromising security. You can call that clever tricks, but I call it sociopathic behavior.
Re: (Score:2)
and by deliberately compromising security
Not at all. Yes to anti-competitive, but they didn't deliberately compromise anything. They implemented a common mechanism to speed up processes, used by a variety of vendors, which at the time had no demonstrated security implications; arguably you could call them theoretical, but even then only indirectly.
Re: (Score:2)
"Yes to anti-competitive, but they didn't deliberately compromise anything. "
What? Yes, of course they did. They decided to do security checks later than, for example, AMD. That others also made the same poor decision does not excuse Intel; it only indicts IBM and the like.
Re: (Score:2)
They decided to do security checks later than, for example, AMD.
Yes, when you're a Time Lord and can happily bounce around between 2018 and the time you made the decision, then you could knowingly have introduced a security vulnerability.
However, since that is just BBC fantasy, the reality is they chose to optimise the timing of the security check like many other vendors with the knowledge that no such side channel exploit exists. That isn't deliberately compromising security any more than you are deliberately compromising your security by not living in a bank vault.
Re: (Score:2)
the reality is they chose to optimise the timing of the security check like many other vendors with the knowledge that no such side channel exploit exists.
No, a thousand times no. There was no knowledge that no such side channel exploit existed. There was, contrarily, knowledge that they were doing the checks in the wrong order deliberately to eke out more performance. They went so far as to patent it, and they were warned that it was a bad approach by many at the time. They did it anyway and the rest is history, plus people like you making apologies for intel due to your cognitive dissonance. You think you're smart, and you bought intel, so you want to belie
Re: (Score:2)
There was, contrarily, knowledge that they were doing the checks in the wrong order deliberately to eke out more performance.
I can't believe you dare to go online to post this. Don't you understand the deliberate security risks of connecting to a network? Are you some kind of crazy person?
Nope, just one who doesn't understand risk.
Re: (Score:2)
I can't believe you dare to go online to post this. Don't you understand the deliberate security risks of connecting to a network? Are you some kind of crazy person?
Nope, just one who bought AMD instead of Intel.
There, fixed that for you.
I do own one intel system, it's a c2d and it's in storage in case I need it for something. It's small so I kept it. I also do own one arm64 platform, it's a pine64 and I use it for an in-home server and I don't websurf on it.
I am managing my risk responsibly, and I'm not making bullshit excuses for Intel's antisocial behavior since that's not my dog.
Re: (Score:2)
I am managing my risk responsibly
So was Intel. Now if you put down your 20/20 hindsight and realise that the decision and optimisation they made had no known risk at the time, not even in a lab, you may have made that optimisation too. Just like everyone except AMD did.
Re: (Score:2)
if you put down your 20/20 hindsight and realise that the decision and optimisation they made had no known risk at the time,
Total falsehood. The risk was understood at the time. This has been discussed in these threads repeatedly.
Re: (Score:2)
The risk was understood at the time.
It was. Risk is likelihood and consequence. And at the time the likelihood was deemed to be never and the consequence was well understood for this giant non-event that it is.
Now with your hindsight the likelihood has changed. Claiming they knew this back then is just stupid. Side channel attacks were first theorised in 1995 and remained a theory for over two decades. The practical security impacts of it still remain theoretical for most computing workloads where you don't hand complete
Re: (Score:2)
It was. Risk is likelihood and consequence. And at the time the likelihood was deemed to be never and the consequence was well understood for this giant non-event that it is.
You can't say it was well-understood to be a giant non-event when it is currently a gigantic event. You can say it was believed to be a giant non-event, but even that is only partially true. To wit: Not every chipmaker chose to do it the irresponsible way that was highly likely to cause problems. AMD didn't, and the rest is just bullshit fucking excuses.
Now please get off the internet, there are scary malware-looking packets out there. It's really not safe here.
Yes, because of cognitively dissonant lames like you who make bullshit excuses for people who chose to do things wrong.
Re: (Score:2)
You can't say it was well-understood to be a giant non-event when it is currently a gigantic event
Except it isn't a giant event. Pick your colloquialism: storm in a teacup, mountain out of a molehill, or just call it what it is: media driven absurdity and fear about a security risk that is not well understood by the media.
To wit: Not every chipmaker chose to do it the irresponsible way that was highly likely to cause problems. AMD didn't, and the rest is just bullshit fucking excuses.
And you can't say "not every". That is being intentionally dishonest when the actual answer was: 1. One chip maker chose not to do this. IBM, ARM, Sun (now Oracle) all followed Intel's path in some of their products.
who make bullshit excuses
Keep trying mate, you'll understand the issue one day.
Re: (Score:2)
And you can't say "not every". That is being intentionally dishonest when the actual answer was: 1. One chip maker chose not to do this. IBM, ARM, Sun (now Oracle) all followed Intel's path in some of their products.
That is definitely an excellent reason to purchase AMD processors, and basically nothing else unless you cannot avoid it. Thanks for making my point for me.
Re: (Score:2)
Yeah. I too hate performance.
Re: (Score:3, Insightful)
At this point it looks like Intel needs a major re-design of its CPUs to mitigate all the Spectre variants and associated issues. Since they are not doing that (cheaper to spread FUD about the competition and downplay the problems, not enough people suing them) the best thing you can do now is buy AMD.
AMD CPUs are better for many reasons anyway.
Re:What processing pipeline bugs are present? (Score:5, Insightful)
AMD CPUs are better for many reasons anyway.
Yeah, unless you need the best single-threaded performance available.
Quit being a zealot. Use the right tool for the job at hand.
Re: (Score:2)
"Quit being a zealot. Use the right tool for the job at hand."
The right tool is one that won't break on you while you're using it. I was just looking at generators. It costs twice as much to buy a Honda as some cheap Chinese crap. The crap generator is almost as good until you have a problem, which will not only happen sooner, but which you'll then have to get resolved through some crappy company.
In order to get better single-thread performance from Intel over AMD, you must spend significantly more money. N
Re: (Score:3)
Keeping in mind that your Ryzen 2 won't clock higher than 4.3GHz in a sustainable scenario, that's what you need to shoot for in your Intel. That's for example when you can compare an R5 2600X with an i5-8600(non-K), where the latter also allows you to overclock to 4.3GHz on all cores given the right motherboard.
Re: (Score:2)
That's for example when you can compare an R5 2600X with an i5-8600(non-K), where the latter also allows you to overclock to 4.3GHz on all cores given the right motherboard.
You are incorrect, sir. Intel locks all multipliers with the exception of their K and X processors. There is no overclocking that i5-8600(non-K) no matter which motherboard you buy.
Re: (Score:2)
I was under the impression that non-K or -X Intel CPUs do not allow modifying the multiplier at all. At least it hasn't on any of the Intel CPUs I've owned. The motherboard doesn't matter if the multiplier is locked, as I said. As far as boost clocks go, that's all done by the processor itself. Where did I become irrational? I was talking from experience. Also, BCLK overclocks normally don't turn out any good because it overclocks your entire system. I have heard there are boards with an extra clock for pcie t
Re: (Score:2)
It's true that BCLK can get problematic, but a very mild rise by something like 5MHz is considered to be stable in all scenarios that are known to me. Anyway, the point here is that you can push a lot of Intels even further this way, while you'd be lucky to get a Ryzen 2 that can run at 4.3GHz consistently. Getting that higher single core performance from Intel is not as exorbitantly more expensive as
Re: (Score:2)
You claim I'm being irrational, but you don't address the fact that you claim you can change the multiplier on non-K and -X CPUs. You also don't have very much experience overclocking, and damn sure not for 24/7 daily use. BCLK is about the worst way to overclock your system (hint: that's why they unlock the multiplier for you; well, Intel makes you pay extra for that "feature"), and I'm assuming at this point that you have never used NVMe storage; even your little 5MHz overclock will still corrupt data. not to say
Re: (Score:2)
BCLK overclocking is still in use on various systems, usually on unlocked CPUs or older systems, which of course do not include that i5-8600. As far as older systems go I've been running a i5-3570 with a 5% BCLK increase for 6 years, without issues. It's true that I don't use an
Re: (Score:2)
All is well, took a few posts but you at least owned up to your mistake. It happens; thank you for not turning toxic about it like users have done to me in the past for correcting them. However, as I said above, you make some valid points, one being the latency issue on Ryzen, which a Threadripper user alerted me to under certain memory-intensive processing. https://old.reddit.com/r/Amd/c... [reddit.com] There is the link if you want to read up on what he has come to so far. I gathered a few other IRC users I know that ha
Re: (Score:2)
Oh, I also forgot to add that ECC memory *may* mitigate the corrupted data on NVMe; I however do not know for sure, as I have never had a chance to test that theory. And as I also pointed out, I've heard of boards that have extra clocks for PCIe clock generation, leaving the CPU's BCLK only to modify CPU/memory clocks. Once again, I cannot verify this information first hand.
Re: (Score:2)
Ever since overclocking through multipliers was introduced, changing the BCLK only made sense in cases where multipliers were locked. Now that Intel apparently has taken every measure to prevent end users from doing this, I do hope even more that AMD will catch up.
As far as leaks/rumours go, I've also seen a lot of leaks in the past. I remember the first presentation slides for Ryzen showing their performance in GPU bottlenecked s
Re: (Score:2)
That 15% was before the node shrink, so at the 14nm they are currently on. I would assume the node shrink from 14 to 7 should get them at least another 10-15%, and I forget what, or if, they mentioned clock speed. But they're saying TR2 is going to be 4.2GHz base clock, I believe, or maybe that was turbo. Can't remember too well. And I've disliked Intel since I found out about the dirty business practices they used to keep AMD's better CPUs out of people's hands. And I used to love Intel. But I won't support their shit
Re: (Score:3)
You are just playing top trumps, selecting one specific metric that "proves" your choice of CPU is better.
AMD gives you more cores for the money. You get advanced features like encrypted RAM. More PCIe lanes. ECC memory support even on the base models.
Unless single-core performance being 10% better is all that matters and you don't care at all about any other features, cost, or lifespan of the mobo, then Intel is better. Otherwise Ryzen/Threadripper wins.
Quit being a zealot. Use the right tool for the job at h
Re: (Score:2)
Zen2 is going to kill Intel if the leaks are true. It's claiming a 15% IPC uplift before the node shrink.
Re: (Score:2)
I didn't mention a specific vendor -- you did.
Let me give you a very real use case that we deal with every day -- our facility uses a piece of commercial software with key parts that are single-threaded. It is no longer under active development and there is no expectation that another (multi-threaded) software package that does the same thing will become available any time soon. The faster it runs, the less money production costs because our employees spend more time doing actual work and less time starin
Re: (Score:2)
Not nearly as bad as Intel is. A lot of the Spectre variants AMD is susceptible to can be mitigated by a good admin.
Intel is so far behind (Score:3)
It is amazing to see, and a sign that the PC is the new mainframe and not as cutting edge.
10nm has been out for cell phones for years. By the time Intel finally gets it right, AMD will have 7nm Ryzen 2 CPUs on the market. Samsung and GlobalFoundries have risen to take over, blindsiding Intel. I am glad I don't own any Intel stock.
Intel did release some i3s at 10nm. The reason why is that the cores had so many defects. On Ars Technica, a guy who owned a shop saw a huge failure rate as well after a few months with the chips. I don't blame Intel for halting production and trying again next year.
No one would have believed this 15 years ago.
Re:Intel is so far behind (Score:5, Insightful)
10nm has been out for cell phones for years.
The nm figures are just labels when it comes to chips. The manufacturers call it whatever they want. There is no mass-production chip that actually has meaningful features measured at 10nm, much less 7nm.
Intel manufacturing is about level with the competitors, possibly slightly ahead. This however is a massive change from most of chip history, where mass produced Intel chips could be counted on to be at least one and sometimes two generations ahead of mass produced competitors.
Re: (Score:2)
That number normally refers to gate pitch, which isn't meaningless, but isn't actual transistor size. That being said, Intel has been slipping since they thought they killed AMD off, which they did a pretty good job of accomplishing until that lawsuit and the console market blowing up. Intel should look into separating the foundry from R&D. They might not have a choice if Apple drops their contract.
Re: (Score:2)
10nm has been out for cell phones for years.
Intel's 14nm is more like their 10nm. Marketing now controls the advertised feature size for a process and it has little to do with reality.
Re: Intel is so far behind (Score:3, Insightful)
Apple should just switch to AMD CPUs first. At least that avoids a big architectural jump. That could buy them time to ensure a flawless transition to ARM CPUs sometime later on. People are already unhappy with the current state of affairs in the Mac world. A bungled transition to ARM CPUs could potentially destroy the Mac brand.
Re: (Score:1)
What you said makes sense except Apple really doesn't give a shit about its customers because they rationalize any problem anyway and keep on coming back for more.
Re: (Score:3)
With the way things are going right now because of Tim Cook and Jony Ive, there won't be much of a Mac brand to destroy soon, ARM processors or not.
Re: (Score:2)
We are unhappy because:
a) the OS sucks
b) the UI sucks more and more
c) the hardware, e.g. the touchpad, sucks ... or for some even the keyboard
We are in no way concerned about the CPU; who cares about fuck like that? IMHO they should go straight to RISC-V ...
Re: (Score:2)
Nope. Apple wins by vertical integration in this day and age. Apple needs something it controls rather than relying on a vendor. Unlike the PC world, Mac users are happy to fork over money to repurchase software again, and the platform doesn't have the kind of whiners that business and legacy Windows users have.
Look at PowerPC to Intel as an example. Meanwhile, we all remember the XP loyalists here on Slashdot five years ago, furious after a mere 13 years of support, lol.
ARM Cortex is their IP, basically. The only thing missin
Re: (Score:2)
All I read was "so long, bye" - the rest is just made-up words.
Awful. (Score:3)
If they are merely shrinking the existing architecture then that means they still haven't fixed the fundamental issue behind the Meltdown vulnerability. Anybody that wants fast I/O rates should avoid Intel like the plague until further notice.
Won't fix this decade, if ever (Score:5, Interesting)
> merely shrinking the existing architecture then that means they still haven't fixed the fundamental issue behind the Meltdown vulnerability.
That fundamental issue won't be changed in the next ten years, if ever. They can either keep playing whack-a-mole with different hardware and microcode side effects, or they can add a very simple (and slow) separate CPU for security-sensitive operations.
Current CPUs are very complex, with out-of-order execution, speculative execution based on branch prediction, multiple concurrent threads of execution, various different types of caches, etc. All of this complexity is there for a good reason - it makes a huge improvement in performance. For that reason, it's not going away; we're not going back to the 8086. All the complexity also means operations will affect caches and predictive microcode and other things, so CPU operations will have side effects. Side effects mean you get Spectre and Meltdown style vulnerabilities.
A very simple CPU which doesn't have any modern optimizations (complexity), with a single core running one thread at a time, could be much more secure in this regard. It would also be much slower, so it wouldn't be good as the main general-purpose CPU. It would need to be used to offload things like handling private keys that are particularly sensitive.
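As a concrete illustration of the "side effects" described above, here is a minimal sketch, assuming an x86 CPU and the GCC/Clang intrinsics from x86intrin.h, of the timing primitive that Spectre/Meltdown-class attacks build on. It performs no attack; it only shows that an access to cached memory is measurably faster than an access to memory that was flushed from the cache, which is exactly the kind of observable state a speculative execution can leave behind.

```c
/* Minimal cache-timing sketch (x86, GCC/Clang).  Demonstrates only the
 * timing primitive itself; it does not perform any actual attack. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp */

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                          /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    static uint8_t probe[64];
    volatile uint8_t *p = probe;

    (void)*p;                          /* touch once so the line is cached */
    uint64_t hot = time_access(p);

    _mm_clflush((const void *)p);      /* evict the line from the caches */
    uint64_t cold = time_access(p);

    printf("cached access:  ~%llu cycles\n", (unsigned long long)hot);
    printf("flushed access: ~%llu cycles\n", (unsigned long long)cold);
    /* The gap between the two numbers is the "side effect": if a
     * speculative, secret-dependent access decides which line ends up
     * cached, the secret can later be inferred from timings alone. */
    return 0;
}
```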
Re: (Score:2)
Javascript is both security-sensitive and performance-critical. Locking it to a single in-order core would be awful for browsing.
We could hope that Javascript developers would then fix their code, of course. Good luck.
Re: (Score:1)
Well, if JavaScript is locked to a single in-order core then developers would be forced to fix and improve their "code", or people would stop browsing those sites.
Re: (Score:2)
Javascript or other code would by default be handed to a normal fast core. However, it could use a special command/API call to request to run some code on a secure core, if it's doing something sensitive. Kind of like a TPM or Apple's secure enclave.
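A purely hypothetical sketch of what such an API call might look like: the secure_core_mac function and the whole interface are invented for illustration (no OS or CPU ships anything like this), and the "secure core" is faked with an ordinary function so the example at least compiles and runs.

```c
/* Hypothetical sketch only: hand the sensitive work to a simple,
 * non-speculative core, the way a TPM or secure enclave is used.
 * Nothing here is a real API. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Imaginary call that would run on the slow, secure core. */
static uint32_t secure_core_mac(const uint8_t *msg, size_t len)
{
    uint32_t acc = 0x1234u;               /* stand-in for a key held off-core */
    for (size_t i = 0; i < len; i++)
        acc = (acc << 5) + acc + msg[i];  /* toy hash, not real crypto */
    return acc;
}

int main(void)
{
    const uint8_t msg[] = "payment request";
    /* Application code on the fast, speculative core only ever sees
     * the result, never the key material. */
    printf("mac = 0x%08x\n", (unsigned)secure_core_mac(msg, sizeof msg - 1));
    return 0;
}
```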
Re: (Score:2)
Everything that Javascript does is sensitive. I don't want any site to be able to inspect what I am doing at any other site. That is the problem with Spectre, you cannot spot-mitigate it, everything needs hardening.
Re: (Score:3)
I think that is an excellent idea. If we locked it to one core and made it useless then it would die off faster. I like that.
Re:Won't fix this decade, if ever (Score:5, Informative)
That fundamental issue won't be changed in the next ten years, if ever.
Meltdown is a fairly simple hardware fix that AMD had already done right: don't speculate with memory that belongs to a different process. Intel fucked up big there, but once the fix is in it's not likely to resurface. The Spectre class of exploits is tough, but it's fairly trivially solved through software design: don't put secrets in the same process space as untrusted code (like, say, JavaScript you download online) and there'll be nothing to steal even if you find a new side effect. That's the direction Chrome is going with Site Isolation, and it's pretty much a blanket protection for web browsing. It's still a big deal for cloud services etc., but if you'd rather be safe than sorry then run your own dedicated servers with just your code. Which is probably a good idea for all sorts of reasons if it's that sensitive.
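A rough sketch of that process-isolation idea, assuming a POSIX system: the function names are made up, and a real browser would exec() the renderer as a separate binary rather than just fork it, but the shape is the same: untrusted code gets its own address space and only a narrow pipe back to the trusted side.

```c
/* Toy demonstration of the process boundary behind "site isolation".
 * Whatever the untrusted side leaks via side channels, it can only
 * leak from its own address space. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static void untrusted_work(int out_fd)
{
    /* Stand-in for a JS engine or renderer. */
    const char result[] = "rendered page";
    write(out_fd, result, sizeof result);
}

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                 /* child: isolated, untrusted side */
        close(fds[0]);
        untrusted_work(fds[1]);
        _exit(0);
    }

    close(fds[1]);                  /* parent: trusted side, holds secrets */
    const char secret[] = "session cookie";
    char buf[64] = {0};
    read(fds[0], buf, sizeof buf - 1);
    printf("untrusted side said \"%s\"; secret (%zu bytes) stayed here\n",
           buf, sizeof secret - 1);
    waitpid(pid, NULL, 0);
    return 0;
}
```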
Neither addresses the fundamental issues (Score:2)
Those are whack-a-mole. Site isolation helps *reduce* the impact of Spectre attacks that happen to be done in JavaScript, in the same way that eating fruit reduces your risk of a heart attack. It doesn't do anything for the majority of Spectre-class attacks.
Similarly, so long as you have speculative execution, you're running code that wasn't supposed to ever run. Running code will have physical and microarchitecture side effects, too. Just as writing to one memory location has a side-effect on other mem
Typo: there are NOW. Not there are no (Score:2)
That should read:
Hacking has been around long enough that there are NOW standard, well-known methods for turning minor issues into major ones. Something that doesn't seem like a big deal (guesstimating whether a given value might be cached) is leveraged into "read any memory location you want". Spectre is an example. If you read the basic vulnerability, it seems like not a really big deal. Hackers came up with ways to make it a big deal, though, to turn something small into something much bigger.
Re: (Score:2)
Those are whack-a-mole. Site isolation helps *reduce* the impact of Spectre attacks that happen to be done in JavaScript, in the same way that eating fruit reduces your risk of a heart attack. It doesn't do anything for the majority of Spectre-class attacks.
If site isolation involves separate CPU processes on a CPU which checks access control before speculative loads, like AMD does, then Spectre-type attacks do not work. Without the speculative load and test, there is no access to the data.
Spectre attacks rely on brain-dead JIT compilers which are executing code from different sources all in the same task. Why is it the CPU's fault that a process can access its own data?
This attack works on AMD (Score:2)
You're posting this in comments to an article about yet another Spectre-class attack which affects AMD - one that is network accessible and has nothing whatsoever to do with any JIT.
You're focusing on just one of the seven Spectre related CVEs currently known.
Re: (Score:2)
You're posting this in comments to an article about yet another Spectre-class attack which affects AMD - one that is network accessible and has nothing whatsoever to do with any JIT.
You're focusing on just one of the seven Spectre related CVEs currently known.
All of the Spectre class attacks are a problem but process isolation turns them into Meltdown class attacks which access control can prevent ... except on Intel.
Otherwise, preventing Spectre class attacks relies on programmers, compilers, and some new CPU features to prevent speculative execution where state can leak, which is a much more difficult problem.
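One example of the kind of programmer/compiler mitigation being referred to, sketched under the assumption of an x86 target: inserting a serializing LFENCE after a bounds check so the CPU cannot speculate past it into the guarded load. This is one of the documented Spectre variant 1 mitigations, shown here in bare-bones form.

```c
/* Bounds-check-bypass mitigation sketch: the LFENCE stops speculation
 * from racing ahead of the branch into the dependent load. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_lfence */

static uint8_t table[16] = { 1, 2, 3, 4 };

uint8_t load_checked(size_t idx, size_t len)
{
    if (idx >= len)
        return 0;
    _mm_lfence();        /* speculation barrier: the load below cannot
                            execute until the branch above has resolved */
    return table[idx];
}

int main(void)
{
    printf("%u\n", (unsigned)load_checked(2, sizeof table));
    return 0;
}
```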
You can't put the transistors in a sandbox (Score:3)
> if the code you run is properly sandboxed so you don't have to care what is run.
If you are talking about a script, that runs inside of a program, that runs in a process, that runs inside of an operating system, you can model things as "kind of like a kids sandbox". You can implement this metaphorical sandbox using the idealized model of a simple computer that is exposed to C++, the language the browser is written in.
There is no sand inside the CPU. In the microcode, there are no processes. The microcode
Re: (Score:2)
"Which is completely irrelevant if the code you run is properly sandboxed "
Nothing is properly sandboxed, though. It's a spherical cow, an abstraction to explain to undergrads. There is no sandbox. If I run code on your box, I can own it; if not me, then someone with more resources.
My spherical cow checks access privileges before executing speculative loads. It is not my fault that Intel's spherical cows did not bother doing this and it is not my fault if the software writers are using the spherical cows incorrectly.
Re: (Score:2)
It's not just memory access that is an issue for Intel, although that is the most severe one and not as easy as you make out to solve (memory from other processes and the kernel can be read using Meltdown). By exploiting the branch prediction and hyperthreading it is possible to infer secrets from other processes as well.
Re: (Score:2)
It's not just memory access that is an issue for Intel, although that is the most severe one and not as easy as you make out to solve (memory from other processes and the kernel can be read using Meltdown). By exploiting the branch prediction and hyperthreading it is possible to infer secrets from other processes as well.
Don't all of the exploits ultimately come down to executing a speculative load, speculatively testing it, and then speculatively creating a result which alters timing which can be detected?
Depends on the slowest part (Score:2)
Suppose that a task can be broken down into two parts. On a single core, part A takes 20 seconds on a typical CPU, part B takes 80 seconds. Part B is fully parallelizable. Part A is sequential. What is the minimum amount of time the task can take with an infinite number of cores?
Suppose you have a BILLION cores, each much simpler than a Core i7, but ten times slower. What would be the total time?
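For anyone who wants to check the arithmetic behind those two questions, here is a small sketch of the Amdahl's-law calculation, using only the numbers given above (20 s serial, 80 s parallelizable, a billion cores each ten times slower).

```c
/* Amdahl's-law arithmetic for the two questions above. */
#include <stdio.h>

int main(void)
{
    const double serial_s   = 20.0;   /* part A, cannot be parallelised */
    const double parallel_s = 80.0;   /* part B, perfectly parallelisable */

    /* Question 1: infinite cores of normal speed.
     * Part B's time goes to zero, so the floor is part A alone. */
    printf("infinite normal cores: %.2f s\n", serial_s);

    /* Question 2: a billion cores, each 10x slower.
     * Part A runs on one slow core; part B is spread over 1e9 slow cores. */
    const double slowdown = 10.0;
    const double cores    = 1e9;
    double total = serial_s * slowdown + (parallel_s * slowdown) / cores;
    printf("1e9 slow cores:        %.6f s\n", total);

    /* ~20 s vs ~200 s: the sequential part dominates, which is why a few
     * fast, complex cores still beat a sea of simple ones here. */
    return 0;
}
```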
Re: (Score:2)
A massively parallel computer (2048+ cores) with each core having sufficient cache to eliminate most memory bottlenecks would give the same results as all that complexity
The problem is that "enough cache" is quite large, and you shouldn't be multiplying it by 2048 willy nilly.
Re: (Score:2)
A massively parallel computer (2048+ cores) with each core having sufficient cache to eliminate most memory bottlenecks would give the same results as all that complexity, providing the software was written to fully support it.
But only at lower performance.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
merely shrinking the existing architecture then that means they still haven't fixed the fundamental issue behind the Meltdown vulnerability.
That fundamental issue won't be changed in the next ten years, if ever. They can either keep playing whack-a-mole with different hardware and microcode side effects, or they can add a very simple (and slow) separate CPU for security-sensitive operations.
Does this mean I should dig out that Acer netbook from the closet? Atom CPU, no speculative execution, simple straight-through execution!
Re: (Score:2)
That fundamental issue won't be changed in the next ten years, if ever. They can either keep playing whack-a-mole with different hardware and microcode side effects, or they can add a very simple (and slow) separate CPU for security-sensitive operations.
Or they could perform access control checks before speculative loads and enforce process isolation.
How will the access control affect cache rate? (Score:2)
And how exactly will performing those access control checks affect the contents of the L1 and L2 caches, and therefore their hit rate? Further to the point, leaving caches aside, what does that do for AVX state?
Re: (Score:2)
And how exactly will performing those access control checks affect the contents of the L1 and L2 caches, and therefore their hit rate? Further to the point, leaving caches aside, what does that do for AVX state?
Who cares? If the speculatively loaded data is not loaded then it cannot be speculatively operated on and there is no state to leak through the caches or AVX state.
Re: (Score:2)
>> Further to the point, leaving caches aside, what does that do for AVX state?
> Who cares?
Anyone who understands how Spectre, Meltdown, etc. attacks work cares. The "problem" is that what one process does can affect the state of the hardware implementation - caches, how full the pipeline is, etc - and the state of the hardware can affect the timing or other things in a different process. Therefore, by measuring rates in the affected process many times, you can infer some things about the process wh
Re: (Score:2)
Anybody that wants fast I/O rates should avoid Intel like the plague until further notice.
Why? 99.99% have workloads where Meltdown is completely irrelevant.
Re: (Score:2)
And thus, awful.
will apple go AMD or delay mac pro to 2020? (Score:2)
will apple go AMD or delay mac pro to 2020?
Re: (Score:3)
will apple go AMD or delay mac pro to 2020?
When was the last time Apple cared one shit about the performance of their "Pro" products? They will slap in whatever Intel has that sounds good, or will replace it with a mobile processor of their own making, because they just don't care.
Moore's Law (Score:1)
Re: (Score:2)
You keep getting it wrong. Moore's law is transistor count.
People still buy Intel? Google AMD Zen (Score:5, Interesting)
After Intel's laughable Netburst initiative (shilled by Slashdot at the time as 'genius'), Intel gave up the 'very long pipeline' race to 10GHz, and went back to the Pentium 3 architecture, which they crossed with AMD's advances used in the excellent AMD x64 chips of the time. Legal because of cross-patent agreements between the two.
Pentium 3 + AMD tech = 'CORE', the horrid name Intel has used to describe all its architectures since Netburst (at first Core 1/2, and now just 'Core'). Despite the confusing name, all 'core' Intel chips have one common feature- ZERO hardware protection of interthread memory access.
On a multi-threaded chip, you are supposed to use lock and key. A thread has a 'key' (thread id), and this key must be used to unlock a 'chest' containing any RAM access.
Lock and key takes a LOT of transistors. A lot of energy. And significant time delay added to RAM access. By secretly dropping this CS requirement, Intel gained a massive power and speed advantage over AMD.
Today, thanks to a genius CPU architect, AMD's Zen has lock and key, and a less than 10% disadvantage in IPC for software compiled to be optimum on Intel's core architecture (most commercial software). If software were optimised for Zen (which can issue multiple complex instructions while Intel is optimised for 1 complex and 3 simple instructions), Zen would have a greater than 10% advantage over Intel.
AMD's last downside is a 700Mhz gap with Intel (when both are clocked to sane max). Most chips sold do not show this gap, of course, since very few Intel chips are ultra high-end. Intel offers far more cores (and hyper-threading per core) than Intel per dollar.
In early 2019, AMD's Zen 2 (confusingly, the new AMD Zen parts from this year are Zen+) will pass Intel on IPC, and almost catch up on max clocks. All this, remember, with Zen having 'lock and key' and no Intel part until 2021 at the earliest fixing Meltdown and Spectre.
When IBM selected Intel to provide the dreadful 16-bit processor for IBM's home PC, every other chip company had better 16-bit designs, and some vastly better (Motorola). IBM selected Intel precisely because its chip was so awful (and thus didn't compete with IBM proprietary hardware). However, Intel eventually used the mega profits from being the heart of the now generic PC design to create the excellent 486/Pentium 1, just before Intel illegally stole RISC tech from all major players to design the Pentium Pro/2 (for which Intel later paid billions in fines).
Since that date, Intel's 'lead' has been a pure consequence of Intel outspending the competition by thousands to one in R+D (and even then AMD has had the lead over Intel in at least 3 periods).
Intel's final advantage was a 'process' lead- but as this article points out, that lead is GONE- TSMC, Samsung and GF are now ahead of Intel. Unless you game at 120 Hz, there is literally no reason to buy Intel today. Intel was always a lousy company. Now its social engineering policies have sunk the entire company.
PS can't use 'less than' and 'greater than' symbols in my text? WTF slashdot.
Re: (Score:3, Interesting)
Brand name matters. Intel/Nvidia is the best gaming combo. It just works, and games are tested and optimized for both as they own 90% of the CPU/GPU market. Corporations buy whatever HP and Dell throw at them. They like their Intel contracts and want to stay good with Intel for cheap pricing.
Intel means reliability to corporate buyers. It works well and everyone else uses it, so they need to use it too. Brand name again, and lastly, drivers and issues are fewer with Nvidia and Intel. Always have been. AMD is playing cat
Re: (Score:2)
PS can't use 'less than' and 'greater than' symbols in my text? WTF slashdot.
You mean WTF HTML. &lt; gives < and &gt; gives >.
Re:People still buy Intel? Google AMD Zen (Score:4, Interesting)
Actually, "IBM" -- which is to say, the small skunkworks project that was given the job of making an IBM-brand PC, isolated from the rest of IBM corporate -- picked Intel's processor because it was backwards-compatible with existing personal computers. The 8088 could use the same cheap, widely-manufactured, well-known support chips as the common 8080/8085 (and to some extent Z80) machines, and it was easy to cross-assemble 8080/8085 and machine-translate Z80 code into 8088 code, particularly with PC-DOS's high level of compatibility with CP/M-80 calls.
We will particularly note that compatibility concerns with the rest of the personal computer market drove the PC's project's selection of Microsoft BASIC (when IBM had its own corporate BASIC, included on earlier 51x0 model number machines before the 5150 PC), and that Microsoft didn't have BASIC for any 16-bit processors at the time. MS BASIC on the 8088 was a performance dog because its core was old 8080 assembly code assembled for the 8088; Microsoft's programming efforts for the PC were extensions rather than a rewrite.
Had IBM picked a rival 16-bit processor, it would have required a whole bunch more expensive support chips and it would have had a lot less software on day one. Which would have affected the ability of the IBM PC to make sales. After all, the 5100/5110/5120/5130 with their 16-bit PALM processors didn't sell all that well, did they? The 5150, with a Microsoft BASIC dialect, a CP/M-80 clone OS, and popular CP/M-80 programs like WordStar and dBase available on day 1, on the other hand, did.
Ever since the MITS Altair debuted, the dominant "personal computer" architecture has been a direct lineal descendant of the Altair's 8080 processor. Every attempt to substitute a "better" architecture failed, even the three times Intel itself attempted it (iAPX 432, i860, Itanium). There's no reason at all to expect that had IBM picked a substantially-different chip, that would have made that chip successful; it's far more likely that a different chip would have simply made the IBM PC 5150 a failure.
Re: (Score:2)
"Ever since the MITS Altair debuted, the dominant "personal computer" architecture has been a direct lineal descendant of the Altair's 8080 processor."
Amd64 may be a direct descendant of x86, but it is sufficiently different to be considered a new ISA. It's so good, even intel had to adopt it. It really has none of the problems of x86; notably, it not only has enough GPRs to do real work, it also actually has GPRs. x86 really doesn't, because so many instructions have to read from and store to specific regi
Re: (Score:3)
If you take written-in-1974 8080 assembly code program and run it through a circa 1978 Intel 8086 assembler, the object code produced by the assembler will execute on any AMD64 processor operating in real mode.
Further, if you have an operating system that allows it, you can take 16-bit protected-mode code compiled in 1982 for the then-new 80286 and execute it directly on an AMD64 processor even when using it in 64-bit protected mode.
So, whether you consider it the "same" ISA or not in some sense (certainly
Re: (Score:2)
After Intel's laughable Netburst initiative
Which at the time pushed them significantly in front of AMD.
Despite the confusing name, all 'core' Intel chips have one common feature- ZERO hardware protection of interthread memory access.
A distinction that produced a speed advantage for many years with no security downsides.
Intel offers far more cores (and hyper-threading per core) than Intel per dollar.
I know right? It's like AMD hasn't been in the race for so long that you can't even talk them up right.
When IBM selected Intel to provide the dreadful 16-bit processor for IBM's home PC, every other chip company had better 16-bit designs, and some vastly better (Motorola). IBM selected Intel precisely because its chip was so awful (and thus didn't compete with IBM proprietary hardware).
LOL. Or maybe it's because the 68000 was so very good and fast it was printing errata at record rates. The wonderful thing about being years ahead in performance and features is that you're also years behind in debugging. Thank god the 68000 wasn't chosen, it
Cannot Lake (Score:2)
The little lake that couldn't.
Both disappointing and not unexpected (Score:2)
This is disappointing, and no amount of marketing spin can change the fact that 10nm was supposed to be launched in 2016, and personally I would not bet on it actually being available before the end of 2019. So this is a three year delay at best, for a technology transition that should have taken 3 years. Of course tick tock is dead, and maybe we are approaching physical limits.
And it is predictable, because Intel needs to fix the whole Spectre family of bugs. This will need a radically new CPU architecture
Playing catch up for a change... (Score:2)
This is a major shift in the power balance of the CPU business.
For many years, one of the key things that kept Intel on top is that they were the best in the business at making chips. Not at designing an architecture, and not always at executing that architecture (like the era of Pentium 4 vs Athlon 64), but their chip fabrication technology was on top.
But now they find themselves in the unaccustomed position of playing catch-up. Samsung and TSMC are already shipping 7nm, with GlobalFoundries close behind,
Re: (Score:2)
"He said the company delivered an excess of 70 percent performance improvement over the last few years."
I would say 7 or more years is quite a bit more than a few.
They are counting multithreaded workloads and the best improvements from instruction set extensions like AVX2. Single threaded scalar code has not improved nearly as much.