Intel's 10nm 'Cannon Lake' Processors Won't Arrive Until Late 2019 (digitaltrends.com)

At the company's second quarter 2018 financial results conference call, Intel chief engineering officer Venkata Renduchintala said the "Cannon Lake" 10nm processors won't appear in products until the 2019 holiday season. "The systems on shelves that we expect in holiday 2019 will be client systems, with data center products to follow shortly after," Renduchintala said. Interim CEO Robert Swan went on to tout the company's "very good lineup" of 14nm products. Digital Trends reports: "Recall that 10nm strives for a very aggressive density improvement target beyond 14nm, almost 2.7x scaling," Renduchintala said during the call. "And really, the challenges that we're facing on 10nm is delivering on all the revolutionary modules that ultimately deliver on that program." Although he acknowledged that pushing back 10nm presents a "risk and a degree of delay" in the company's road map, Intel is quite pleased with the "resiliency" of its 14nm roadmap. He said the company delivered in excess of a 70 percent performance improvement over the last few years. Meanwhile, Intel's 10nm process should be in an ideal state to mass-produce chips towards the end of 2019.

Intel's Cannon Lake chip is essentially a shrink of its seventh-generation "Kaby Lake" processor design. Given the previous launch window, the resulting chips presumably fell under the company's eighth-generation banner despite the older design. But with mass production pushed back to late 2019, the 10nm chips will fall under Intel's ninth-generation umbrella along with CPUs based on its upcoming "Ice Lake" design. Intel claims that its 10nm chips will provide 25 percent better performance than their 14nm counterparts. What's more, they will supposedly consume 50 percent less power than their 14nm counterparts.

  • by Anonymous Coward

    Given that "Intel's Cannon Lake chip is essentially a shrink of its seventh-generation "Kaby Lake" processor design", what execution silo bugs are currently present in the designs?

    • by Megol ( 3135005 )

      Silo?
      As usual, some "bugs" (Intel errata) will be fixed and some new ones created.

      Don't know what that has to do with Intel's failure to get its 10nm process up and running.

      • by Anonymous Coward

        Do you have the slightest clue how modern-day processors are created? Just building and configuring the fabs and tools needed to build the tools that finally allow mass production of a 10nm CPU is a major undertaking. And the production cycle cannot be rushed until the actual CPU has been produced for review, benchmarking, and extensive testing. Taking a CPU from the whiteboard to mass production probably ranks as the most complicated process the human race undertakes today. Without the continuing effort

    • Given that "Intel's Cannon Lake chip is essentially a shrink of its seventh-generation "Kaby Lake" processor design", what execution silo bugs are currently present in the designs?

      No, it is a shrink of Skylake-X (with AVX-512). Skylake-X is newer than Kaby Lake, which CPU-wise is just a straight run-of-the-mill Skylake with no changes at all.

      Anyway, process shrinks throw up all kinds of issues, and apparently many of the tricks they used across the 14nm generations simply don't work at 10nm, and they are learning that the hard way. They got ahead of the competition with those clever tricks, and now the tricks no longer work.

      • They got ahead of the competition through anticompetitive business practices, and by deliberately compromising security. You can call that clever tricks, but I call it sociopathic behavior.

        • and by deliberately compromising security

          Not at all. Yes to anti-competitive, but they didn't deliberately compromise anything. They implemented a common mechanism, used by a variety of vendors, to speed up processing, which at the time had no demonstrated security implications; arguably you could call the risk theoretical, but even then only indirectly.

          • "Yes to anti-competitive, but they didn't deliberately compromise anything. "

            What? Yes, of course they did. They decided to do security checks later than, for example, AMD. That others also made the same poor decision does not excuse intel, it only indicts IBM and the like.

            • They decided to do security checks later than, for example, AMD.

              Yes, if you were a time lord who could happily bounce around between 2018 and the time the decision was made, then you could knowingly have introduced a security vulnerability.

              However, since that is just BBC fantasy, the reality is that they chose to optimise the timing of the security check, like many other vendors, with the knowledge that no such side-channel exploit existed. That isn't deliberately compromising security any more than you are deliberately compromising your security by not living in a bank vault.

              • the reality is they chose to optimise the timing of the security check like many other vendors with the knowledge that no such side channel exploit exists.

                No, a thousand times no. There was no knowledge that no such side-channel exploit existed. There was, on the contrary, knowledge that they were deliberately doing the checks in the wrong order to eke out more performance. They went so far as to patent it, and they were warned at the time by many that it was a bad approach. They did it anyway, and the rest is history, plus people like you making apologies for Intel due to your cognitive dissonance. You think you're smart, and you bought Intel, so you want to belie

                • There was, contrarily, knowledge that they were doing the checks in the wrong order deliberately to eke out more performance.

                  I can't believe you dare to go online to post this. Don't you understand the deliberate security risks of connecting to a network? Are you some kind of crazy person?

                  Nope, just one who doesn't understand risk.

                  • I can't believe you dare to go online to post this. Don't you understand the deliberate security risks of connecting to a network? Are you some kind of crazy person?
                    Nope, just one who bought AMD instead of Intel.

                    There, fixed that for you.

                    I do own one intel system, it's a c2d and it's in storage in case I need it for something. It's small so I kept it. I also do own one arm64 platform, it's a pine64 and I use it for an in-home server and I don't websurf on it.

                    I am managing my risk responsibly, and I'm not making bullshit excuses for Intel's antisocial behavior since that's not my dog.

                    • I am managing my risk responsibly

                      So was Intel. Now, if you put down your 20/20 hindsight, you'd realise that the decision and optimisation they made had no known risk at the time, not even in a lab, and you might have made that optimisation too. Just like everyone except AMD did.

                    • if you put down your 20/20 hindsight and realise that the decision and optimisation they made had no at the time known risk,

                      Total falsehood. The risk was understood at the time. This has been discussed in these threads repeatedly.

                    • The risk was understood at the time.

                      It was. Risk is likelihood and consequence. And at the time the likelihood was deemed to be never and the consequence was well understood for this giant non-event that it is.

                      Now, with your hindsight, the likelihood has changed. Claiming they knew this back then is just stupid. Side-channel attacks of this kind were first theorised in 1995 and remained a theory for over two decades. The practical security impact still remains theoretical for most computing workloads where you don't hand complete

                    • It was. Risk is likelihood and consequence. And at the time the likelihood was deemed to be never and the consequence was well understood for this giant non-event that it is.

                      You can't say it was well-understood to be a giant non-event when it is currently a gigantic event. You can say it was believed to be a giant non-event, but even that is only partially true. To wit: Not every chipmaker chose to do it the irresponsible way that was highly likely to cause problems. AMD didn't, and the rest is just bullshit fucking excuses.

                      Now please get off the internet, there are scary malware-looking packets out there. It's really not safe here.

                      Yes, because of cognitively dissonant lames like you who make bullshit excuses for people who chose to do things wrong.

                    • You can't say it was well-understood to be a giant non-event when it is currently a gigantic event

                      Except it isn't a giant event. Pick your colloquialism: storm in a teacup, mountain out of a molehill, or just call it what it is: media driven absurdity and fear about a security risk that is not well understood by the media.

                      To wit: Not every chipmaker chose to do it the irresponsible way that was highly likely to cause problems. AMD didn't, and the rest is just bullshit fucking excuses.

                      And you can't say "not every". That is intentionally dishonest when the actual answer is: one chip maker chose not to do this. IBM, ARM, and Sun (now Oracle) all followed Intel's path in some of their products.

                      who make bullshit excuses

                      Keep trying mate, you'll understand the issue one day.

                    And you can't say "not every". That is intentionally dishonest when the actual answer is: one chip maker chose not to do this. IBM, ARM, and Sun (now Oracle) all followed Intel's path in some of their products.

                      That is definitely an excellent reason to purchase AMD processors, and basically nothing else unless you cannot avoid it. Thanks for making my point for me.

                    • Yeah. I too hate performance.

  • by Billly Gates ( 198444 ) on Sunday July 29, 2018 @01:36PM (#57029072) Journal

    It is amazing to see, and a sign that the PC is the new mainframe rather than cutting edge.

    10nm has been out for cell phones for years. By the time Intel finally gets it right, AMD will have 7nm Ryzen 2 CPUs on the market. Samsung and GlobalFoundries have risen to take over, blindsiding Intel. I am glad I don't own any Intel stock.

    Intel did release a few 10nm i3 parts, but the cores had many defects. On Ars Technica, a guy who owned a shop reported seeing a huge failure rate as well after a few months with the chips. I don't blame Intel for halting production and trying again next year.

    No one would have believed this 15 years ago.

    • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Sunday July 29, 2018 @02:49PM (#57029422)

      10nm has been out for cell phones for years.

      Nanometers are just labels when it comes to chips. The manufacturers call their processes whatever they want. There is no mass-production chip that actually has meaningful features measured at 10nm, much less 7nm.

      Intel manufacturing is about level with the competitors, possibly slightly ahead. This however is a massive change from most of chip history, where mass produced Intel chips could be counted on to be at least one and sometimes two generations ahead of mass produced competitors.

      • That number normally refers to gate pitch, which isn't meaningless, but isn't actual transistor size either. That being said, Intel has been slipping since they thought they had killed AMD off, which they did a pretty good job of accomplishing until that lawsuit and the console market blowing up. Intel should look into separating the foundry from R&D; they might not have a choice if Apple drops their contract.

    • by Agripa ( 139780 )

      10nm has been out for cell phones for years.

      Intel's 14nm is more like the phone chipmakers' 10nm. Marketing now controls the advertised feature size for a process, and it has little to do with reality.

  • by Gravis Zero ( 934156 ) on Sunday July 29, 2018 @01:43PM (#57029096)

    If they are merely shrinking the existing architecture then that means they still haven't fixed the fundamental issue behind the Meltdown vulnerability. Anybody that wants fast I/O rates should avoid Intel like the plague until further notice.

    • by raymorris ( 2726007 ) on Sunday July 29, 2018 @02:23PM (#57029294) Journal

      > merely shrinking the existing architecture then that means they still haven't fixed the fundamental issue behind the Meltdown vulnerability.

      That fundamental issue won't be changed in the next ten years, if ever. They can either keep playing whack-a-mole with different hardware and microcode side effects, or add a very simple (and slow) separate CPU for security-sensitive operations.

      Current CPUs are very complex, with out-of-order execution, speculative execution based on branch prediction, multiple concurrent threads of execution, various different types of caches, etc. All of this complexity is there for a good reason: it makes a huge improvement in performance. For that reason, it's not going away; we're not going back to the 8086. All the complexity also means operations will affect caches and predictive microcode and other things, so CPU operations will have side effects. Side effects mean you get Spectre- and Meltdown-style vulnerabilities.

      A very simple CPU which doesn't have any modern optimizations (complexity), with a single core running one thread at a time, could be much more secure in this regard. It would also be much slower, so it wouldn't be good as the main general-purpose CPU. It would need to be used to offload things like handling private keys that are particularly sensitive.
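      The kind of side-effect channel described above can be sketched with a toy model. This is a pure simulation, not a real exploit: the cache and its latencies are modelled explicitly, and all names and numbers are made up for illustration, but the inference step, recovering a secret purely from timing, has the same shape as a flush+reload attack:

```python
# Toy flush+reload model: a victim's secret-dependent memory access
# leaves a footprint in a simulated cache; the attacker then recovers
# the secret purely by timing its own accesses afterwards.

HIT_NS, MISS_NS = 1, 100      # modelled load latencies (arbitrary units)
cache = set()                 # indices currently "cached"

def timed_load(i):
    """Load slot i, returning the modelled latency (and caching i)."""
    latency = HIT_NS if i in cache else MISS_NS
    cache.add(i)
    return latency

def victim(secret):
    timed_load(secret)        # the access pattern depends on the secret

def recover_secret(secret, n_slots=16):
    cache.clear()                                      # FLUSH: evict everything
    victim(secret)                                     # victim leaves a footprint
    timings = [timed_load(i) for i in range(n_slots)]  # RELOAD: time each slot
    return timings.index(HIT_NS)                       # the fast slot reveals it

print(recover_secret(7))      # prints 7 without ever reading the secret directly
```

      Real Spectre/Meltdown attacks add speculative execution on top of this: the secret-dependent access happens in code that was never supposed to retire, but the cache footprint survives and can be timed just like above.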

      • by amorsen ( 7485 )

        Javascript is both security-sensitive and performance-critical. Locking it to a single in-order core would be awful for browsing.

        We could hope that Javascript developers would then fix their code, of course. Good luck.

        • by Anonymous Coward

          Well, if javascript is locked to a single in-order core then developers would be forced to fix and improve their "code" or people will stop browsing those sites.

        • by mentil ( 1748130 )

          Javascript or other code would by default be handed to a normal fast core. However, it could use a special command/API call to request to run some code on a secure core, if it's doing something sensitive. Kind of like a TPM or Apple's secure enclave.

          • by amorsen ( 7485 )

            Everything that Javascript does is sensitive. I don't want any site to be able to inspect what I am doing at any other site. That is the problem with Spectre, you cannot spot-mitigate it, everything needs hardening.

        • by jwhyche ( 6192 )

          I think that is an excellent idea. If we locked it to one core and made it useless, then it would die off faster. I like that.

      • by Kjella ( 173770 ) on Sunday July 29, 2018 @03:16PM (#57029530) Homepage

        That fundamental issue won't be changed in the next ten years, if ever.

        Meltdown is a fairly simple hardware fix that AMD had already done right: don't speculate with memory that belongs to a different process. Intel fucked up big there, but once the fix is in, it's not likely to resurface. The Spectre class of exploits is tough, but it's fairly trivially addressed through software design: don't put secrets in the same process space as untrusted code, like, say, Javascript you download online, and there'll be nothing to steal even if you find a new side effect. That's the direction Chrome is going with Site Isolation, and it's pretty much blanket protection for web browsing. It's still a big deal for cloud services etc., but if you'd rather be safe than sorry, run your own dedicated servers with just your code. Which is probably a good idea for all sorts of reasons if it's that sensitive.

        • Those are whack-a-mole. Site isolation helps *reduce* the impact of Spectre attacks that happen to be done in JavaScript, in the same way that eating fruit reduces your risk of a heart attack. It doesn't do anything for the majority of Spectre-class attacks.

          Similarly, so long as you have speculative execution, you're running code that wasn't supposed to ever run. Running code will have physical and microarchitecture side effects, too. Just as writing to one memory location has a side-effect on other mem

          • That should read:

            Hacking has been around long enough that there are NOW standard, well-known methods for turning minor issues into major ones. Something that doesn't seem like a big deal (guesstimating whether a given value might be cached) is leveraged into "read any memory location you want". Spectre is an example. If you read the basic vulnerability, it seems like not a really big deal. Hackers came up with ways to make it a big deal, though, to turn something small into something much bigger.

          • by Agripa ( 139780 )

            Those are whack-a-mole. Site isolation helps *reduce* the impact of Spectre attacks that happen to be done in JavaScript, in the same way that eating fruit reduces your risk from a heart attack . It doesn't do anything for the majority of Spectre-class attacks.

            If site isolation involves separate CPU processes on a CPU which checks access control before speculative loads, like AMD does, then Spectre-type attacks do not work. Without the speculative load and test, there is no access to the data.

            Spectre attacks rely on brain-dead JIT compilers which execute code from different sources all in the same task. Why is it the CPU's fault that a process can access its own data?

            • You're posting this in comments to an article about yet another Spectre-class attack which affects AMD - one that is network accessible and has nothing whatsoever to do with any JIT.

              You're focusing on just one of the seven Spectre related CVEs currently known.

              • by Agripa ( 139780 )

                You're posting this in comments to an article about yet another Spectre-class attack which affects AMD - one that is network accessible and has nothing whatsoever to do with any JIT.

                You're focusing on just one of the seven Spectre related CVEs currently known.

                All of the Spectre-class attacks are a problem, but process isolation turns them into Meltdown-class attacks, which access control can prevent ... except on Intel.

                Otherwise, preventing Spectre-class attacks relies on programmers, compilers, and some new CPU features to prevent speculative execution where state can leak, which is a much more difficult problem.

        • by AmiMoJo ( 196126 )

          It's not just memory access that is an issue for Intel, although that is the most severe one and not as easy as you make out to solve (memory from other processes and the kernel can be read using Meltdown). By exploiting the branch prediction and hyperthreading it is possible to infer secrets from other processes as well.

          • by Agripa ( 139780 )

            It's not just memory access that is an issue for Intel, although that is the most severe one and not as easy as you make out to solve (memory from other processes and the kernel can be read using Meltdown). By exploiting the branch prediction and hyperthreading it is possible to infer secrets from other processes as well.

            Don't all of the exploits ultimately come down to executing a speculative load, speculatively testing it, and then speculatively creating a result which alters timing which can be detected?

      • A massively parallel computer (2048+ cores), with each core having sufficient cache to eliminate most memory bottlenecks, would give the same results as all that complexity, provided the software was written to fully support it.
        • Suppose that a task can be broken down into two parts. On a single core, part A takes 20 seconds on a typical CPU, part B takes 80 seconds. Part B is fully parallelizable. Part A is sequential. What is the minimum amount of time the task can take with an infinite number of cores?

          Suppose you have a BILLION cores, each much simpler than a Core i7, but ten times slower. What would be the total time?
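          The arithmetic behind the two questions above is Amdahl's law; here is a quick sketch with the numbers from the comment (the function name is mine):

```python
def total_time(seq_s, par_s, cores, slowdown=1.0):
    """Amdahl's law: the sequential part cannot be split across cores."""
    return slowdown * (seq_s + par_s / cores)

# One fast core: 20 s + 80 s = 100 s.
assert total_time(20, 80, cores=1) == 100

# Effectively infinite cores: the 80 s parallel part vanishes, but the
# 20 s sequential part remains, so the task can never beat ~20 s.
assert abs(total_time(20, 80, cores=10**12) - 20) < 1e-6

# A billion cores, each 10x slower: the sequential part alone now takes
# 200 s, so the giant machine loses to the single fast core.
print(total_time(20, 80, cores=10**9, slowdown=10))   # ~200 s
```

          The point of the exercise: once the parallel part is free, only the sequential fraction matters, and slower cores make that fraction strictly worse.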

        • A massively parallel computer (2048+ cores) with each core having sufficient cache to eliminate most memory bottlenecks would give the same results as all that complexity

          The problem is that "enough cache" is quite large, and you shouldn't be multiplying it by 2048 willy nilly.

        • by Agripa ( 139780 )

          A massively parallel computer (2048+ cores) with each core having sufficient cache to eliminate most memory bottlenecks would give the same results as all that complexity, provided the software was written to fully support it.

          But only at lower performance.

          https://en.wikipedia.org/wiki/... [wikipedia.org]

      • merely shrinking the existing architecture then that means they still haven't fixed the fundamental issue behind the Meltdown vulnerability.

        That fundamental issue won't be changed in the next ten years, if ever. They can either keep playing whack-a-mole with different hardware and microcode side effects, or add a very simple (and slow) separate CPU for security-sensitive operations.

        Does this mean I should dig out that Acer Netbook from the closet? Atom CPU, no speculative execution, simple straight-through execution!

      • by Agripa ( 139780 )

        That fundamental issue won't be changed in the next ten years, if ever. They can either keep playing whack-a-mole with different hardware and microcode side effects, or add a very simple (and slow) separate CPU for security-sensitive operations.

        Or they could perform access control checks before speculated loads and enforce process isolation.

        • And how exactly will performing those access control checks affect the contents of the L1 and L2 caches, and therefore their hit rate? Further to the point, leaving caches aside, what does that do for AVX state?

          • by Agripa ( 139780 )

            And how exactly will performing those access control checks affect the contents of the L1 and L2 caches, and therefore their hit rate? Further to the point, leaving caches aside, what does that do for AVX state?

            Who cares? If the data is never speculatively loaded, then it cannot be speculatively operated on, and there is no state to leak through the caches or AVX state.

            • >> Further to the point, leaving caches aside, what does that do for AVX state?

              > Who cares?

              Anyone who understands how Spectre, Meltdown, etc. attacks work cares. The "problem" is that what one process does can affect the state of the hardware implementation (caches, how full the pipeline is, etc.), and the state of the hardware can affect the timing or other things in a different process. Therefore, by measuring timings in the affected process many times, you can infer some things about the process wh

    • Anybody that wants fast I/O rates should avoid Intel like the plague until further notice.

      Why? 99.99% of users have workloads where Meltdown is completely irrelevant.

  • Will Apple go AMD, or delay the Mac Pro to 2020?

    • Will Apple go AMD, or delay the Mac Pro to 2020?

      When was the last time Apple cared one shit about the performance of their "Pro" products? They will slap in whatever Intel has that sounds good, or will replace it with a mobile processor of their own making, because they just don't care.

  • Further proof that Moore's Law is dead. Of course, people hate to hear it, because that means that many things they dreamed of in the digital world won't happen. From now on you can expect only marginal improvement in digital computing year over year.
  • by Anonymous Coward on Sunday July 29, 2018 @06:21PM (#57030048)

    After Intel's laughable Netburst initiative (shilled by Slashdot at the time as 'genius'), Intel gave up the 'very long pipeline' race to 10GHz and went back to the Pentium 3 architecture, which they crossed with AMD's advances from the excellent AMD x64 chips of the time. Legal, because of cross-patent agreements between the two.

    Pentium 3 + AMD tech = 'Core', the horrid name Intel has used to describe all its architectures since Netburst (at first Core 1/2, and now just 'Core'). Despite the confusing name, all 'Core' Intel chips have one common feature: ZERO hardware protection of interthread memory access.

    On a multi-threaded chip, you are supposed to use lock and key. A thread has a 'key' (thread id), and this key must be used to unlock a 'chest' containing any RAM access.

    Lock and key takes a LOT of transistors. A lot of energy. And significant time delay added to RAM access. By secretly dropping this CS requirement, Intel gained a massive power and speed advantage over AMD.

    Today, thanks to a genius CPU architect, AMD's Zen has lock and key and less than a 10% disadvantage in IPC for software compiled to be optimal on Intel's Core architecture (most commercial software). If software were optimised for Zen (which can issue multiple complex instructions, while Intel is optimised for 1 complex and 3 simple instructions), Zen would have a greater than 10% advantage over Intel.

    AMD's last downside is a 700MHz gap with Intel (when both are clocked to sane max). Most chips sold do not show this gap, of course, since very few Intel chips are ultra high-end. Intel offers far more cores (and hyper-threading per core) than Intel per dollar.

    In early 2019, AMD's Zen 2 (confusingly, the new AMD Zen parts from this year are Zen+) will pass Intel on IPC and almost catch up on max clocks. All this, remember, with Zen having 'lock and key', and with no Intel part until 2021 at the earliest fixing Meltdown and Spectre.

    When IBM selected Intel to provide the dreadful 16-bit processor for IBM's home PC, every other chip company had better 16-bit designs, and some vastly better (Motorola). IBM selected Intel precisely because its chip was so awful (and thus didn't compete with IBM's proprietary hardware). However, Intel eventually used the mega-profits from being the heart of the now-generic PC design to create the excellent 486/Pentium 1, just before Intel illegally stole RISC tech from all the major players to design the Pentium Pro/2 (for which Intel later paid billions in fines).

    Since that date, Intel's 'lead' has been a pure consequence of Intel outspending the competition by thousands to one in R&D (and even then AMD has had the lead over Intel in at least three periods).

    Intel's final advantage was a 'process' lead- but as this article points out, that lead is GONE- TSMC, Samsung and GF are now ahead of Intel. Unless you game at 120 Hz, there is literally no reason to buy Intel today. Intel was always a lousy company. Now its social engineering policies have sunk the entire company.

    PS can't use 'less than' and 'greater than' symbols in my text? WTF slashdot.

    • Re: (Score:3, Interesting)

      Brand name matters. Intel/Nvidia is the best gaming combo. It just works, and games are tested and optimized for both, as the two own 90% of the CPU/GPU market. Corporations buy whatever HP and Dell throw at them. They like their Intel contracts and want to stay good with Intel for cheap pricing.

      Intel means reliability to corporate buyers. It works well, and everyone else uses it, so they need to use it too. Brand name again, and lastly, driver issues are fewer with Nvidia and Intel. Always have been. AMD is playing cat


    • by jaa101 ( 627731 )

      PS can't use 'less than' and 'greater than' symbols in my text? WTF slashdot.

      You mean WTF HTML. &lt; gives < and &gt; gives >.
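      Which is easy to check from Python, for instance:

```python
import html

# '<' and '>' must be written as entities in HTML source text;
# html.escape performs exactly the substitution described above.
print(html.escape("a < b"))   # a &lt; b
```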

    • by SEE ( 7681 ) on Sunday July 29, 2018 @09:59PM (#57030696) Homepage

      Actually, "IBM" -- which is to say, the small skunkworks project that was given the job of making an IBM-brand PC, isolated from the rest of IBM corporate -- picked Intel's processor because it was backwards-compatible with existing personal computers. The 8088 could use the same cheap, widely-manufactured, well-known support chips as the common 8080/8085 (and to some extent Z80) machines, and it was easy to cross-assemble 8080/8085 and machine-translate Z80 code into 8088 code, particularly with PC-DOS's high level of compatibility with CP/M-80 calls.

      We will particularly note that compatibility concerns with the rest of the personal computer market drove the PC's project's selection of Microsoft BASIC (when IBM had its own corporate BASIC, included on earlier 51x0 model number machines before the 5150 PC), and that Microsoft didn't have BASIC for any 16-bit processors at the time. MS BASIC on the 8088 was a performance dog because its core was old 8080 assembly code assembled for the 8088; Microsoft's programming efforts for the PC were extensions rather than a rewrite.

      Had IBM picked a rival 16-bit processor, it would have required a whole bunch more expensive support chips and it would have had a lot less software on day one. Which would have affected the ability of the IBM PC to make sales. After all, the 5100/5110/5120/5130 with their 16-bit PALM processors didn't sell all that well, did they? The 5150, with a Microsoft BASIC dialect, a CP/M-80 clone OS, and popular CP/M-80 programs like WordStar and dBase available on day 1, on the other hand, did.

      Ever since the MITS Altair debuted, the dominant "personal computer" architecture has been a direct lineal descendant of the Altair's 8080 processor. Every attempt to substitute a "better" architecture failed, even the three times Intel itself attempted it (iAPX 432, i860, Itanium). There's no reason at all to expect that had IBM picked a substantially-different chip, that would have made that chip successful; it's far more likely that a different chip would have simply made the IBM PC 5150 a failure.

      • "Ever since the MITS Altair debuted, the dominant "personal computer" architecture has been a direct lineal descendant of the Altair's 8080 processor."

        AMD64 may be a direct descendant of x86, but it is sufficiently different to be considered a new ISA. It's so good that even Intel had to adopt it. It really has none of the problems of x86; notably, it not only has enough GPRs to do real work, it actually has GPRs at all. x86 really doesn't, because so many instructions have to read from and store to specific regi

        • by SEE ( 7681 )

          If you take a 1974-vintage 8080 assembly program and run it through a circa-1978 Intel 8086 assembler, the object code produced by the assembler will execute on any AMD64 processor operating in real mode.

          Further, if you have an operating system that allows it, you can take 16-bit protected-mode code compiled in 1982 for the then-new 80286 and execute it directly on an AMD64 processor even when using it in 64-bit protected mode.

          So, whether you consider it the "same" ISA or not in some sense (certainly

    • After Intel's laughable Netburst initiative

      Which at the time pushed them significantly in front of AMD.

      Despite the confusing name, all 'core' Intel chips have one common feature- ZERO hardware protection of interthread memory access.

      A distinction that produced a speed advantage for many years with no security downsides.

      Intel offers far more cores (and hyper-threading per core) than Intel per dollar.

      I know right? It's like AMD hasn't been in the race for so long that you can't even talk them up right.

      When IBM selected Intel to provide the dreadful 16-bit processor for IBM's home PC, every other chip company had better 16-bit designs, and some vastly better (Motorola). IBM selected Intel precisely because its chip was so awful (and thus didn't compete with IBM's proprietary hardware).

      LOL. Or maybe it's because the 68000 was so very good and fast that it was printing errata at record rates. The wonderful thing about being years ahead in performance and features is that you're also years behind in debugging. Thank god the 68000 wasn't chosen, it

  • The little lake that couldn't.

  • This is disappointing, and no amount of marketing spin can change the fact that 10nm was supposed to launch in 2016; personally, I would not bet on it actually being available before the end of 2019. So this is a three-year delay at best, for a technology transition that should have taken three years. Of course, tick-tock is dead, and maybe we are approaching physical limits.

    And it is predictable, because Intel needs to fix the whole Spectre family of bugs. This will need a radically new CPU architecture

  • This is a major shift in the power balance of the CPU business.

    For many years, one of the key things that kept Intel on top is that they were the best in the business at making chips. Not at designing an architecture, and not always at executing that architecture (like the era of Pentium 4 vs Athlon 64), but their chip fabrication technology was on top.

    But now they find themselves in the unaccustomed position of playing catch-up. Samsung and TSMC are already shipping 7nm, with GlobalFoundries close behind,
