Intel Performance Hit 5x Harder Than AMD After Spectre, Meltdown Patches (extremetech.com) 170

Phoronix has conducted a series of tests to show just how much the Spectre and Meltdown patches have impacted the raw performance of Intel and AMD CPUs. While the patches have resulted in performance decreases across the board, ranging from virtually nothing to significant depending on the application, it appears that Intel received the short end of the stick, as its CPUs have been hit five times harder than AMD's, according to ExtremeTech. From the report: The collective impact of enabling all patches is not a positive for Intel. While the impacts vary tremendously from virtually nothing to significant on an application-by-application level, the collective whack is about 15-16 percent on all Intel CPUs with Hyper-Threading still enabled. Disabling Hyper-Threading increases the overall performance impact to 20 percent (for the 7980XE), 24.8 percent (8700K) and 20.5 percent (6800K).

The AMD CPUs are not tested with HT disabled, because disabling SMT isn't a required fix for the situation on AMD chips, but the cumulative impact of the decline is much smaller. AMD loses ~3 percent with all fixes enabled. The impact of these changes is enough to change the relative performance weighting between the tested solutions. With no fixes applied, across its entire test suite, the CPU performance ranking is (from fastest to slowest): 7980XE (288), 8700K (271), 2990WX (245), 2700X (219), 6800K (200). With the full suite of mitigations enabled, the CPU performance ranking is (from fastest to slowest): 2990WX (238), 7980XE (231), 2700X (213), 8700K (204), 6800K (159).
In closing, ExtremeTech writes: "AMD, in other words, now leads the aggregate performance metrics, moving from 3rd and 4th to 1st and 3rd. This isn't the same as winning every test, and since the degree to which each test responds to these changes varies, you can't claim that the 2990WX is now across-the-board faster than the 7980XE in the Phoronix benchmark suite. It isn't. But the cumulative impact of these patches could result in more tests where Intel and AMD switch rankings as a result of performance impacts that only hit one vendor."
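The aggregate scores quoted above are enough to sanity-check the percentage figures directly; a quick sketch in Python (the scores are treated as the unitless aggregates ExtremeTech reports):

```python
# Aggregate benchmark scores quoted by ExtremeTech (higher is better),
# before and after enabling the full suite of Spectre/Meltdown mitigations.
scores = {
    "7980XE": (288, 231),
    "8700K":  (271, 204),
    "2990WX": (245, 238),
    "2700X":  (219, 213),
    "6800K":  (200, 159),
}

def perf_hit(before, after):
    """Percentage of aggregate performance lost after mitigations."""
    return 100.0 * (before - after) / before

for cpu, (before, after) in scores.items():
    print(f"{cpu}: {perf_hit(before, after):.1f}% slower")
```

The Intel parts come out roughly 20-25 percent slower and the AMD parts roughly 3 percent slower, consistent with the per-CPU figures quoted earlier in the summary.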
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward

    With all of these x64 security and performance issues happening, would now be a good time for the entire industry to switch to ARM architecture CPUs in all desktops, laptops and servers? Most phones and tablets are already there.

    • With all of these x64 security and performance issues happening, would now be a good time for the entire industry to switch to ARM architecture CPUs in all desktops, laptops and servers?

      No, because speculative execution is the problem, so high-end ARM chips are vulnerable too. A Raspberry Pi isn't, because it has an in-order CPU, unlike most decent phones and tablets. You can get low-end phones with the A53 core but they're not very good.

      Try out a quad core pi 3 and see how fast it is. It's OK, but no match f

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This is unrelated to CPU ISA, and related to design choices.

      The biggest takeaway is that Intel did not design for secure context switches, and AMD did.

      This gave Intel a nice boost in some benchmarks, but now they are way behind. The mitigations have to use slow state clearing mechanisms in the Intel design, whereas AMD have fast (not as fast as not doing it, obviously) state clearing and proper security checks.

    • ARM is really great at what it's designed for: a very high performance-to-power ratio at low power usage.

      It just doesn't scale up to real desktop/server work. x86 is an absolute dog of an architecture but it'll still whip ARM up and down the block for most desktop/server scenarios.

      • x86 is an absolute dog of an architecture but

        ...it's not an architecture, it's an instruction set. And the x86 instruction decoder is a minuscule portion of a modern, multicore CPU with a boatload of cache. And most of us aren't even processing that many x86 instructions, they are mostly amd64 instructions, or handed off to some kind of multimedia coprocessor.

        There's a reason why the majority of supercomputers are built with AMD or Intel and not with something else, and that is that the best performance comes from these CPUs. Since they went NUMA (Fir

  • by Anonymous Coward on Monday May 20, 2019 @09:19PM (#58627296)

    We should go back to making software efficient again, like we had to in the 1990s and before. It wasn't even that hard to do. It just takes native languages like C, C++, Pascal or Delphi, and a little bit of care.

    Right now so much computing power is wasted on frivolous UI animations and other stupid effects. Computing power is also wasted on bloated programming language virtual machines and scripting languages.

    Hardware performance issues become less of a problem when not using slow and bloated programming languages, and when not doing stupid things like pointless UI effects.

    • by Cmdln Daco ( 1183119 ) on Monday May 20, 2019 @09:26PM (#58627336)

      You're talking to the wrong crowd. Slashdot is festering with "web developers" these days.

      • Here's something to make you feel safe. Guess what language all those GNOME plugins are written in? JavaScript!

      • by Chozabu ( 974192 )
        Then why is your comment rated +5 insightful?
        Perhaps these web developers don't mind their participation being referred to as "festering"? Or perhaps they just don't have the mod points.


        I guess it is also possible they are too busy trying to debug giant typeless piles of steaming JS :)
    • The animations are totally not the point, but I see why you'd think that...

      It's the boxing and unboxing of millions of objects, their respective heap allocations and cleanup (GC), and generally not giving a fuck about performance by developers. Most of whom don't use what they work on...

      C'est la vie.

    • by Anonymous Coward on Monday May 20, 2019 @10:14PM (#58627602)

      I agree. We're still deploying embedded systems running on 4, 8 and 16K of RAM - not megabytes or gigabytes.

      PC operating systems and applications are overly wasteful - greenies should be targeting those instead of sweating over the alleged 5% of greenhouse emissions caused by air travel.

    • by spongman ( 182339 ) on Tuesday May 21, 2019 @01:04AM (#58628126)

      most computing power is wasted waiting for I/O. very few CPU cores are pegged.

      instead of worrying about trivial micro-benchmarks, they should be more concerned about the latency holes in the I/O pipelines.

      • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday May 21, 2019 @06:26AM (#58628952) Journal
        Aside from the commercially interesting move of AMD back to not sucking; hasn't improvement(though by no means perfection) in I/O been the biggest thing in computing in the past few years? SSDs moving down to even trash-tier consumer stuff; fast NVMe getting downright affordable; the HBM/silicon interposer stuff for putting big stacks of RAM really near your die without having to actually fab it on the same die; Intel's work on nonvolatile storage fast enough to hang off memory controllers; RDMA moving from being purely for pricey exotic interconnects to including merely expensive Ethernet, etc.

        In terms of number of CPU clocks spent waiting it still doesn't match the old days(with contemporary clock speeds increasing rather faster than the speed of light it's not clear that we'll ever see those ratios again, unless someone deliberately builds the CPU down to the speed of the RAM and/or onto the die or both); but improvements to I/O have really been a pretty solid area of late.
        • I agree. I manage or work with about a half-dozen VMware clusters, and aggregate CPU demand is really at the bottom of the list. Seldom do you see a cluster with more than 50% CPU utilization.

          The only real CPU utilization "problem" is single-threaded performance, which mostly seems to tie into the fact that CPUs with high clock rates are fucking expensive, and probably because there's such a huge market for virtualization and people mostly aren't worried about aggregate CPU demand.

          If I had a complaint abo

          • by Kokuyo ( 549451 )

            Ya know what would really give me a stiffy, being a VMware admin and all? If we could have beastly single-core coprocessors. From time to time there actually is that one customer that has an application that makes the infrastructure sweat. More often than not because they throw way too many cores at the thing wanting to speed things up, but in the process fucking the scheduler.

            I'd like to have a dual core Xeon with 4.5 GHz next to the 2.5GHz multi-core CPUs. If a VM demands raw power, the scheduler simpl

            • I wonder what the system board limitations are of differently clocked CPUs.

              • I wonder what the system board limitations are of differently clocked CPUs.

                As long as the bus speeds are the same, it should be physically possible. Nobody has built a PC system that way that I'm aware of, though. We have multicore processors that can increase clock rate on some cores while other cores are idle. Most tasks are parallelizable, and from what we know of physics and the difficulty of increasing clock rates further, there's probably more benefit going forward to adding more parallelism than to figuring out how to get better single-thread performance.

                • I think you're right in general, the problem is nearly everybody keeps dealing with "bad" software that seems unable or unwilling to take advantage of parallelism and only benefits from a lot of extra clock.

                  I'm not sure how many of these cases are just "bad" software where nobody has bothered to try to enhance parallelism and just depended on Moore's Law type increases or whether they're corner cases where parallelism *is* just not possible or too hard to be worth trying.
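The single-thread-versus-parallelism tradeoff this sub-thread is circling is essentially Amdahl's law: the serial fraction of a workload caps the benefit of extra cores no matter how many you throw at it. A small illustration (the fractions here are hypothetical):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only a fraction of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A workload that is 95% parallel gains a lot from a few cores...
print(amdahl_speedup(0.95, 8))    # ~5.9x
# ...but flattens out well short of the 1/serial = 20x ceiling.
print(amdahl_speedup(0.95, 64))   # ~15.4x
# A 50%-serial "bad" workload barely benefits at all; it wants clock, not cores.
print(amdahl_speedup(0.50, 64))   # ~2.0x
```

Which is exactly why the "bad" software described above only responds to extra clock: once the serial fraction dominates, adding cores stops helping.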

        • Aside from the commercially interesting move of AMD back to not sucking;

          Let's be clear here, AMD never sucked. Their 386s were just as good as Intel's, their 486s were better, the 586 was faster than the Pentium when writing code optimized for it and not for the Pentium, and the K6 was faster than the P2 clock-for-clock under similar circumstances. The K7, as we all know, was superior on every level to Intel's chips at the time. The only reason the K6 was slower was that everyone optimized for Intel, not AMD. Gentoo built for the K6 was screaming fast.

          Intel sucked plenty, though. They may have been faster, but they deliberately created provably less secure designs for the sake of that speed. That sucks, in my book. And even putting aside the technical level, Intel behaved anticompetitively at every turn. That sucks too.

        • no. if you look at the total processing power available on an average machine (clock speed, multiple cores, GPU), that has increased significantly more than usable I/O bandwidth.

      • Are you considering tensorflow.org workloads?
    • We should go back to making software efficient again

      Or we could just not conform to the mass hysteria we've been told to and treat Spectre and Meltdown with the proper respect, which is to ignore it unless you're a cloud hosting provider or being actively targeted by the CIA.

      • Yes. I think Linux allows you to choose if you want to run with Spectre mitigation or not, and in Windows maybe you can just not install or uninstall the relevant fixes. Maybe we have to accept that security vs. speed is a zero-sum game and we have to choose. I would normally choose speed, but it depends on the context. I just hope that choice is not taken away. Most software is coded with fast CPUs in mind. If CPUs become much, much slower it could make certain software not even usable.

        • GRUB_CMDLINE_LINUX_DEFAULT="text nospectre_v1 nospectre_v2 spectre_v2=off spectre_v2_user=off spec_store_bypass_disable=on net.ifnames=0 biosdevname=0"

          that's what I now use.
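For anyone experimenting with kernel flags like these: on Linux 4.15+ the kernel reports its current per-vulnerability mitigation status under sysfs, so you can verify the effect after a reboot. A minimal sketch (the sysfs path is the standard one; the function simply returns an empty dict where it doesn't exist):

```python
import os

def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Map each CPU vulnerability the kernel knows about to its reported status."""
    status = {}
    if not os.path.isdir(base):
        return status  # non-Linux, or a kernel too old to expose this
    for name in sorted(os.listdir(base)):
        with open(os.path.join(base, name)) as f:
            status[name] = f.read().strip()
    return status

for vuln, state in mitigation_status().items():
    print(f"{vuln}: {state}")
```

With flags like nospectre_v2 applied, the corresponding entries should read "Vulnerable" rather than "Mitigation: ...".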

      • Maybe someone qualified can explain if these exploits are even a thing for your average PC user. I guess any malware must already have found its way onto my PC in order to exploit any CPU vulnerabilities. At that point, my PC is compromised anyway and many more bad things can happen, which are probably much easier to abuse than a cryptic and fairly random CPU vulnerability.
        I agree with parent that this can be a problem for hosting providers whose CPUs might be shared with all sorts of totally unrelated sof

        • Maybe someone qualified can explain if these exploits are even a thing for your average PC user. I guess any malware must already have found its way onto my PC in order to exploit any CPU vulnerabilities.

          In theory, many of the vulnerabilities could be exploited by Javascript. If you run any code on your computer that might be less than perfectly trustworthy, you could be at risk, even if it's in a container that is supposed to be able to keep it contained.

          • Interesting point. I'm not an expert on JavaScript, but like most modern languages, I suppose it is relatively high-level or runs in a VM, so the memory is managed for it, and there is no direct access to any CPU caches.
            I would assume you would have to do systems-level programming in C to be able to access something like that.

            • Interesting point. I'm not an expert on JavaScript, but like most modern languages, I suppose it is relatively high-level or runs in a VM, so the memory is managed for it, and there is no direct access to any CPU caches. I would assume you would have to do systems-level programming in C to be able to access something like that.

              VMs are no protection. The purpose of VM-based isolation is to defend against vulnerabilities in the operating system, but these are hardware-level vulnerabilities.

              Yes, Javascript is high-level... but all modern Javascript environments do JIT compilation to native code. Getting the JIT compiler to generate exactly the right code to exploit these vulnerabilities is likely very hard (no one has managed it yet, that we know of, other than for one of the earliest and simplest speculative execution vulns), b

              • The way I understand this is that you would also have to get extremely lucky to even get any sensitive data in those caches. So it is in no way a targeted attack, but rather some sort of random snooping, without any context of what that data could even be. So overall it sounds to me like an extremely theoretical security vulnerability. Have to wonder if it's really worth sacrificing CPU performance to fix that.
                Except on hosted or shared environments of course. I can imagine a malicious entity creating softw

              • As of right now, I don't think the average computer user is at any risk of being successfully attacked via a speculative execution attack, even without the mitigations.

                People's computers get infected with malware all the time, there's no reason these attacks can't be delivered onto the system in the usual ways. We don't need to invoke Javascript, even if it is a feasible means of attack.

                • As of right now, I don't think the average computer user is at any risk of being successfully attacked via a speculative execution attack, even without the mitigations.

                  People's computers get infected with malware all the time, there's no reason these attacks can't be delivered onto the system in the usual ways. We don't need to invoke Javascript, even if it is a feasible means of attack.

                  The problem with that argument is that "malware" is an extremely broad category. allcoolnameswheretak said that when his PC is infected by malware "at that point, my PC is compromised anyway and many more bad things can happen", but that isn't necessarily true. It depends on exactly what privileges the malware has, and what privilege escalations are available... meaning, basically, the software security posture. The risk of these hardware attacks is that they provide an avenue that is exploitable even if

      • We should go back to making software efficient again

        Or we could just not conform to the mass hysteria we've been told to and treat Spectre and Meltdown with the proper respect, which is to ignore it unless you're a cloud hosting provider or being actively targeted by the CIA.

        So far, I agree with this. But attacks will improve, and it's not inconceivable that a paper could be published tomorrow that demonstrates how to exploit one or more of the vulnerabilities from Javascript.

    • by sad_ ( 7868 )

      there is a benefit to these 'slow' programming & scripting languages and that is it's much easier to write software without certain types of bugs.
      things like overflows or double frees are almost non-existent, and before you say that these are well known these days, just remember that the Linux kernel just recently had a double-free security bug discovered.
      i do agree about efficiency though, but in most cases that is mostly not an issue of the chosen language.

    • by AmiMoJo ( 196126 )

      Optimizing software tends to make it harder to maintain. It's usually the last thing you do if possible.

      So while modern languages are less efficient to run, they are more efficient to write.

      Also remember that in the early 90s and earlier the operating systems were also much more primitive. Real multi-tasking was fairly rare (AmigaOS was one of the first for microcomputers, but MacOS and DOS/Windows took well into the 90s to catch up) and so was memory protection, so there was basically zero security (any ta

    • by Somervillain ( 4719341 ) on Tuesday May 21, 2019 @08:13AM (#58629388)

      Computing power is also wasted on bloated programming language virtual machines and scripting languages.

      Hardware performance issues become less of a problem when not using slow and bloated programming languages, and when not doing stupid things like pointless UI effects.

      Agreed that UI animations are pointless, but I think a major culprit is layers of abstraction. Look at the UI tier. I have seen 1000-line package.json files for a simple form-based UI. I have no clue what most of the dependencies are, and the developers who added them barely have one either. Look at the node-based text editors. They are crazy slow on fast computers. My biggest frustration is that every time I turn around, someone is inventing an abstraction layer. 15 years ago, it was Java...inventing dozens of redundant abstraction layers just to get to form input. This only went away when we moved to REST-based architecture and moved form binding to JavaScript.

      Most of the UIs I work with are modern...and slow and terrible. However, I do have to work with one legacy system...an old JSP and servlet-based system from 20 years ago with 20-year-old HTML and only a little JavaScript(a typical UI of the time) that no one has updated...it is an actual delight. It loads FAST. I can scroll through 10,000 rows without my computer breaking a sweat. When the page says it's loaded, it's ready to use (that is my biggest pet peeve...pages that say they're loaded, but need to make 20 server calls and you never know when you have all the data). I hate to sound old, but I LOVE fast legacy UIs so much more than the trendy new ones that download megabytes of JavaScript and CSS and have convoluted complex layouts in HTML just to accommodate mobile users (and the UIs look like crap anyway on a phone, plus most corporate apps are not ones people want to use on their phone, so why make your primary users suffer?)

      Server-Side Java used to be terribly bloated, but has matured and even slightly trimmed down (or, more precisely, people abandoned the slowest technologies) for JAX-RS-based REST services. I hope the same happens to the UI tier...that minimal and fast becomes trendy.

      From what I've seen of native apps, they're following a similar trajectory...lots of layers of abstractions and toolkits that tangibly slow down the UI but add no value for the end user. I hope minimal becomes trendy again. It's not that hard to implement. I wish there were a way to incentivize good engineering and quick, snappy UIs over quick turnaround, excessive layering and abstraction, and bloated toolkits.

    • Aside from the valid point you have about making software efficient, and throwing "modern" UI crap into the shitter... there's another side of the coin here, which most developers don't see.

      Namely, the users. All the smooth, nice animation-via-JS stuff came to be because "users" found it helpful. I found that the more I add nice loader animations, smooth transitions, etc. in the apps I make, the more positively the users respond toward the aforementioned app, and the reason I make these apps ... is for them "user
  • Waiting... (Score:5, Funny)

    by mschaffer ( 97223 ) on Monday May 20, 2019 @09:29PM (#58627346)

    I am waiting to find out that the only way Intel was able to compete with AMD was to intentionally introduce these flaws.

    • Re:Waiting... (Score:4, Interesting)

      by Mashiki ( 184564 ) <mashiki@[ ]il.com ['gma' in gap]> on Monday May 20, 2019 @10:47PM (#58627744) Homepage

      Shouldn't have to wait long. Both Intel and Nvidia have been busted multiple times for creating optimizations for popular benchmark programs at the microcode level in order to give a false performance boost. And with the gigantic shift of servers and consoles to AMD CPUs and APUs, it'll just be a matter of time before more garbage is found out.

    • I suspect that this will only come out in someone's memoirs, if that; but I'd be honestly fascinated to know why AMD dodged the bullet in this case: They haven't escaped all the 'speculative execution, actually really nontrivial' issues(though neither has anyone making CPUs aimed at that performance segment); but they have mostly come out better for any of the specific issues where Intel and AMD are affected differently.

      Did AMD recognize risks that Intel didn't? Did both recognize risks but only Intel f
      • Look at when these flaws were introduced... when Intel went back to the P6 core after AMD was destroying them on the performance/power consumption of the Pentium 4. The Pentium 4 hasn't been shown to be vulnerable, AFAIK.

        Intel was desperate to catch up to AMD, and they got more performance by having hidden unsafe underpinnings of their processors for years. If they hadn't been under such pressure from AMD, maybe they would have stuck with Netburst? Or maybe they would have fixed the problems in P6's security be

    • I am waiting to find out that the only way Intel was able to compete with AMD was to intentionally introduce these flaws.

      You don't have to wait. We already know. You can tell because Intel's performance lead is erased when mitigations are enabled.

    • Comment removed based on user account deletion
  • by Narcocide ( 102829 ) on Monday May 20, 2019 @09:47PM (#58627448) Homepage

    I predicted this in 2006.

    • by ToTheStars ( 4807725 ) on Monday May 20, 2019 @10:51PM (#58627758)
      So did Intel, but for the sake of performance, they let another thread go ahead as if it were a good idea...
    • Well that was obviously a mistake
    • by gweihir ( 88907 ) on Monday May 20, 2019 @11:17PM (#58627862)

      Speculative execution is fine, as long as you keep security in mind. AMD did and has a minor problem. Intel did not, screwed up its customers, and delivered ill-gotten performance at inflated prices. The sheep kept buying Intel and even today keep buying. People are stupid.

      Both AMD and Intel were warned a long time ago by the research community that speculative execution is dangerous and needs extra care.

      • by Anonymous Coward

        Saying it is "fine" is one thing; the implementation is another. There are no fixes--only mitigations, and they add cost in performance, power, and complexity. Unwinding state, cache effects, etc. is non-trivial, and more than likely just moves the problem rather than eliminating it, because buffers are finite.

        The rate at which new speculative exploits are being discovered makes your assertion extremely dubious. In-order is the only safe option at present; anyone claiming otherwise is selling something or w

        • Saying it is "fine" is one thing; the implementation is another. There are no fixes--only mitigations, and they add cost in performance, power, and complexity.

          If you're truly worried about the risk of Spectre and Meltdown then we wouldn't be having this conversation as you wouldn't be silly enough to do such a risky thing as connect a computer to a foreign network like the internet. Cloud providers may want to seek some compensation, as would those people who are actively being targeted by a nation state. To everyone else, why do you bother with the mitigations? The complexity of attack is orders of magnitude higher than any normal person needs to worry about.

        • by gweihir ( 88907 )

          A "mitigation" is a fix in some other place than where the problem is. This is pretty much a fixed term in IT security engineering and it has a different meaning from normal English.

      • Speculative execution is fine, provided you remember security is a sliding scale of risk and not an absolute. Spectre and Meltdown are not a practical threat to nearly any of the computers in the hands of people out there; they are a risk only for highly targeted attacks and situations where you can carefully characterise the machine (e.g. VM hosts).

        To anyone who doesn't fall into this category and worries about this, do you ever get tired of living in an underground bunker with a giant bank vault for a door, and if you actually l

      • by AmiMoJo ( 196126 )

        Intel gets a lot of sales from laptops, because AMD's mobile offerings are not as good on battery life. You can buy Ryzen laptops but the battery life just isn't competitive with Intel.

        Hopefully AMD can fix that because the Ryzen/APU combo is great performance-wise, they just need to get the power management stuff sorted out.

        • by gweihir ( 88907 )

          I have an older APU netbook. Works nicely under Linux and had a far better price-point than Intel-based alternatives.

          I do expect that the end of what can be done performance-wise is pretty much reached (both AMD and Intel), so power will be next for AMD.

      • Comment removed based on user account deletion
        • by gweihir ( 88907 )

          And that turns out to actually not be true. For example, AMD has vastly superior multiprocessing, so games stay well playable at significantly lower FPS than on Intel. The focus on benchmarks is flawed, and so is only comparing the fastest offerings. What matters is the user experience.

        • People aren't stupid.

          Really? Then how do you explain all those people running Windows?

  • by slashmydots ( 2189826 ) on Tuesday May 21, 2019 @02:19AM (#58628300)
    How affected are non-HT CPUs? I have a pure 6-core i5-8400 CPU and I commonly use it for Photoshop and Premiere editing.
    • Non-HT Intel CPUs are effectively immune from the security/performance issues. What’s funny is that these non-HT CPUs likely are just CPUs that had defective HT and were binned and sold cheaper. e.g. The same chip will be produced for the i5-8400 and i7-8850H, and then separated/binned into working-HT and non-working-HT(they also get binned on lots of other categories)
    • by AmiMoJo ( 196126 )

      Unfortunately, they are hit pretty badly. With Premiere, rendering might maintain decent speed if you are using GPU acceleration; otherwise it's probably going to take a fair hit, I'm afraid.

    • I have a better question for you: How affected are *you*? Security is the answer to risk. Risk is personally assessed, based on the consequences and likelihood of your situation.

      If you're running a photoshop and premier machine the likelihood of you being affected by Spectre and Meltdown is so close to zero that if you're worried about it I also have asteroid impact insurance to sell you.

      There's a reason why the Kernel team made many of the mitigations not only optional but also disabled by default. Why would
