Intel Performance Hit 5x Harder Than AMD After Spectre, Meltdown Patches (extremetech.com)
Phoronix has conducted a series of tests to show just how much the Spectre and Meltdown patches have impacted the raw performance of Intel and AMD CPUs. While the patches have resulted in performance decreases across the board, ranging from virtually nothing to significant depending on the application, it appears that Intel received the short end of the stick, as its CPUs have been hit five times harder than AMD's, according to ExtremeTech. From the report: The collective impact of enabling all patches is not a positive for Intel. While the impacts vary tremendously from virtually nothing to significant on an application-by-application level, the collective whack is about 15-16 percent on all Intel CPUs with Hyper-Threading still enabled. Disabling Hyper-Threading increases the overall performance impact to 20 percent (for the 7980XE), 24.8 percent (8700K) and 20.5 percent (6800K).
The AMD CPUs are not tested with HT disabled, because disabling SMT isn't a required fix for the situation on AMD chips, but the cumulative impact of the decline is much smaller. AMD loses ~3 percent with all fixes enabled. The impact of these changes is enough to change the relative performance weighting between the tested solutions. With no fixes applied, across its entire test suite, the CPU performance ranking is (from fastest to slowest): 7980XE (288), 8700K (271), 2990WX (245), 2700X (219), 6800K (200). With the full suite of mitigations enabled, the CPU performance ranking is (from fastest to slowest): 2990WX (238), 7980XE (231), 2700X (213), 8700K (204), 6800K (159). In closing, ExtremeTech writes: "AMD, in other words, now leads the aggregate performance metrics, moving from 3rd and 4th to 1st and 3rd. This isn't the same as winning every test, and since the degree to which each test responds to these changes varies, you can't claim that the 2990WX is now across-the-board faster than the 7980XE in the Phoronix benchmark suite. It isn't. But the cumulative impact of these patches could result in more tests where Intel and AMD switch rankings as a result of performance impacts that only hit one vendor."
Time for ARM CPUs in all computers? (Score:1, Interesting)
With all of these x64 security and performance issues happening, would now be a good time for the entire industry to switch to ARM architecture CPUs in all desktops, laptops and servers? Most phones and tablets are already there.
Re: (Score:2)
Re: (Score:2)
Look, we're just looking to get in on the anti-Intel bandwagon. All the other go-fast, low-drag cool kids are doing it.
Re: (Score:2)
With all of these x64 security and performance issues happening, would now be a good time for the entire industry to switch to ARM architecture CPUs in all desktops, laptops and servers?
No, because speculative execution is the problem, so high-end ARM chips are vulnerable too. A Raspberry Pi isn't, because it has an in-order CPU, unlike most decent phones and tablets. You can get low-end phones with the A53 core, but they're not very good.
Try out a quad core pi 3 and see how fast it is. It's OK, but no match f
Re: (Score:2)
The Pi is marketing genius. It started with a chip where the main CPU was meant to be the VideoCore, with an ARM CPU on the side for minor tasks, and that chip didn't sell worth a damn for Broadcom. Eben and co managed to make an 'ARM' SBC out of it, bit-banging USB, etc., and somehow it sells like hotcakes. Most of the Pi's faults aren't so much decisions by the Pi team themselves as the fact that they're using an SoC that was never intended for this usage to begin with.
Re: (Score:2)
The Pi is marketing genius.
No it's not. Marketing genius implies that the entire popularity is due to marketing, and that there are other, largely superior products the savvy person would be better off buying instead. This is not the case; rather, it fills a large niche that was inadequately filled before.
There are other products which are faster, and there are other products which are cheaper. Sometimes there are both. However there's a lot more to a product like this than raw CPU speed or volum
Re: (Score:2)
USB isn't bit banged, but some of the other protocols the board speaks (on the Pi bus) are. The USB does, however, suck rocks. They've mitigated the problems but performance is still crap as a result. Hopefully someday they get a sufficiently different core that it has decent USB.
The Pi wins over other SBCs because it has the best support for the money. The community is just huge. If they only could bring one out with some decent RAM, they would probably kill almost all other similar SBCs. The Pi is fine fo
Re: (Score:2, Informative)
This is unrelated to CPU ISA, and related to design choices.
The biggest takeaway is that Intel did not design for secure context switches, and AMD did.
This gave Intel a nice boost in some benchmarks, but now they are way behind. The mitigations have to use slow state clearing mechanisms in the Intel design, whereas AMD have fast (not as fast as not doing it, obviously) state clearing and proper security checks.
Re: (Score:2)
ARM is really great at what it's designed for: a very high performance-to-power ratio at low power usage.
It just doesn't scale up to real desktop/server work. x86 is an absolute dog of an architecture but it'll still whip ARM up and down the block for most desktop/server scenarios.
Re: (Score:2)
x86 is an absolute dog of an architecture but
...it's not an architecture, it's an instruction set. And the x86 instruction decoder is a minuscule portion of a modern, multicore CPU with a boatload of cache. And most of us aren't even processing that many x86 instructions, they are mostly amd64 instructions, or handed off to some kind of multimedia coprocessor.
There's a reason why the majority of supercomputers are built with AMD or Intel and not with something else, and that is that the best performance comes from these CPUs. Since they went NUMA (Fir
Re: (Score:2)
You think speculative execution is a marketing gimmick?!
You think removing it isn't an even bigger hit?
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
If that were true, there would be huge numbers of servers getting in on that performance / cost action.
Most discussions of cost/performance ratio only involve the price of the CPU itself. If you want to decide whether to use Intel, AMD, or ARM processors in your server room, you would have to include other costs of using ARM processors. In particular, you'd have to also take into account the cost of other hardware and space. You can pay $500 for a single Intel processor, or pay $400 for a dozen ARM processors with the same total processing power (numbers made up for this example). But to use the dozen ARM pr
Re: (Score:2)
Mobile ARM performance / power consumption ratio is actually higher. ARM instruction set hasn't much to do with performance.
If that were true, there would be huge numbers of servers getting in on that performance / cost action.
GP: It's about power consumption
You: If it's about cost, then...
Fact: It's not about cost, it's about power consumption.
Intel can't make x86 as efficient as ARM. They've tried, and they're still well off. Hell, they couldn't make ARM as efficient as ARM. They bought StrongARM, remember? It was the fastest ARM implementation, but it was also the least power-efficient, and Intel couldn't find a way to change that — so they put it out to pasture instead.
ARM isn't used in servers because power consumption
Re: (Score:3)
Netcraft confirms, Intel is dying.
We could go back to making software efficient (Score:5, Insightful)
We should go back to making software efficient again, like we had to in the 1990s and before. It wasn't even that hard to do. It just takes native languages like C, C++, Pascal or Delphi, and a little bit of care.
Right now so much computing power is wasted on frivolous UI animations and other stupid effects. Computing power is also wasted on bloated programming language virtual machines and scripting languages.
Hardware performance issues become less of a problem when not using slow and bloated programming languages, and when not doing stupid things like pointless UI effects.
Re:We could go back to making software efficient (Score:4, Insightful)
You're talking to the wrong crowd. Slashdot is festering with "web developers" these days.
Re: (Score:1)
Here's something to make you feel safe. Guess what language all those gnome plugins are written with? Javascript!
Re: (Score:2)
It's the default on the most popular distro; that makes it significant. Javascript has absolutely no business being run in a desktop environment.
Re: (Score:1)
Perhaps these web developers don't mind their participation being referred to as "festering"? Or perhaps they just don't have the mod points.
I guess it is also possible they are too busy trying to debug giant typeless piles of steaming JS
Re: We could go back to making software efficient (Score:1)
The animations are totally not the point, but I see why you think that...
It's the boxing and unboxing of millions of objects, their respective heap allocations and cleanup (GC), and developers generally not giving a fuck about performance. Most of whom don't use what they work on...
C'est la vie.
Re:We could go back to making software efficient (Score:5, Insightful)
I agree. We're still deploying embedded systems running on 4, 8 and 16K of RAM - not megabytes or gigabytes.
PC operating systems and applications are overly wasteful - greenies should be targeting those instead of sweating over the alleged 5% of greenhouse emissions caused by air travel.
Re: (Score:1)
Alright William, 64K should be enough for anybody, we get it... But seriously, that is insightful, please mod parent up!
Re: (Score:2)
Indeed it is; it should be at least a +2, so mod GP up some more.
Re:We could go back to making software efficient (Score:5, Informative)
most computing power is wasted waiting for I/O. very few CPU cores are pegged.
instead of worrying about trivial micro-benchmarks, they should be more concerned about the latency holes in the I/O pipelines.
Re: We could go back to making software efficient (Score:4, Interesting)
In terms of the number of CPU clocks spent waiting, it still doesn't match the old days (with contemporary clock speeds having increased rather faster than the speed of light, it's not clear that we'll ever see those ratios again, unless someone deliberately builds the CPU down to the speed of the RAM, puts the RAM on the die, or both); but improvements to I/O have really been a pretty solid area of late.
Re: (Score:2)
I agree. I manage or work with about a half-dozen VMware clusters, and aggregate CPU demand is really at the bottom of the list. Seldom do you see a cluster with more than 50% CPU utilization.
The only real CPU utilization "problem" is single-threaded performance, which mostly seems to tie into the fact that CPUs with high clock rates are fucking expensive, and probably because there's such a huge market for virtualization and people mostly aren't worried about aggregate CPU demand.
If I had a complaint abo
Re: (Score:3)
Ya know what would really give me a stiffy, being a VMware admin and all? If we could have beastly single-core coprocessors. From time to time there actually is that one customer that has an application that makes the infrastructure sweat. More often than not because they throw way too many cores at the thing wanting to speed things up, but in the process fucking the scheduler.
I'd like to have a dual core Xeon with 4.5 GHz next to the 2.5GHz multi-core CPUs. If a VM demands raw power, the scheduler simpl
Re: (Score:2)
I wonder what the system board limitations are of differently clocked CPUs.
Re: (Score:2)
I wonder what the system board limitations are of differently clocked CPUs.
As long as the bus speeds are the same, it should be physically possible. Nobody has built a PC system that way that I'm aware of, though. We have multicore processors that can increase clock rate on some cores while other cores are idle. Most tasks are parallelizable, and from what we know of physics and the difficulty of increasing clock rates further, there's probably more benefit going forward to adding more parallelism than to figuring out how to get better single-thread performance.
Re: (Score:2)
I think you're right in general, the problem is nearly everybody keeps dealing with "bad" software that seems unable or unwilling to take advantage of parallelism and only benefits from a lot of extra clock.
I'm not sure how many of these cases are just "bad" software where nobody has bothered to try to enhance parallelism and just depended on Moore's Law type increases or whether they're corner cases where parallelism *is* just not possible or too hard to be worth trying.
Re: We could go back to making software efficient (Score:4, Insightful)
Aside from the commercially interesting move of AMD back to not sucking;
Let's be clear here, AMD never sucked. Their 386s were just as good as Intel's, their 486s were better, the 586 was faster than the Pentium when writing code optimized for it and not for the Pentium, and the K6 was faster than the P2 clock-for-clock under similar circumstances. The K7, as we all know, was superior on every level to Intel's chips at the time. The only reason the K6 was slower was that everyone optimized for Intel, not AMD. Gentoo built for the K6 was screaming fast.
Intel sucked plenty, though. They may have been faster, but they deliberately created provably less secure designs for the sake of that speed. That sucks, in my book. And even putting aside the technical level, Intel behaved anticompetitively at every turn. That sucks too.
Re: (Score:2)
When people say AMD sucked, they're talking about the previous generation: the FX/Bulldozer line of CPUs. They had problems with performance, power, and gaming. There were use cases for the FX desktop line (multi-core rendering, streaming), but they seriously lagged in single-core tasks.
What single-core tasks? Everything complicated I do is multi-core. That's why my FX-8350 is still doing the things I need it to do.
Re: (Score:2)
no. if you look at the total processing power available on an average machine (clock speed, multiple cores, GPU), that has increased significantly more than usable I/O bandwidth.
Re: (Score:2)
Re: (Score:2)
yup.
i didn't say _all_ workloads. _most_ workloads (almost all).
Re: (Score:2)
most workloads are I/O bound. i stand by that 100%.
Re: (Score:2)
i know plenty, thanks.
Re: We could go back to making software efficient (Score:1)
Re: (Score:2)
great. i didn't say 'all'. do you understand the difference between 'most' and 'all'?
Re: (Score:3)
We should go back to making software efficient again
Or we could just not conform to the mass hysteria we've been told to and treat Spectre and Meltdown with the proper respect, which is to ignore it unless you're a cloud hosting provider or being actively targeted by the CIA.
Re: (Score:2)
Yes. I think Linux allows you to choose whether you want to run with Spectre mitigation or not, and in Windows maybe you can just not install, or uninstall, the relevant fixes. Maybe we have to accept that security vs. speed is a trade-off and we have to choose. I would normally choose speed, but it depends on the context. I just hope that choice is not taken away. Most software is coded with fast CPUs in mind. If CPUs become much, much slower it could make certain software not even usable.
Re: (Score:2)
GRUB_CMDLINE_LINUX_DEFAULT="text nospectre_v1 nospectre_v2 spectre_v2=off spectre_v2_user=off spec_store_bypass_disable=on net.ifnames=0 biosdevname=0"
that's what I now use.
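For anyone copying a line like that, it's worth checking afterwards what the running kernel actually applied. Recent kernels report the state of each mitigation under sysfs (the exact set of files varies by kernel version), so a quick check is:
grep . /sys/devices/system/cpu/vulnerabilities/*
Each file reports "Not affected", "Vulnerable", or a "Mitigation: ..." string for that particular issue.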
Re: (Score:2)
Maybe someone qualified can explain if these exploits are even a thing for your average PC user. I guess any malware must already have found its way onto my PC in order to exploit any CPU vulnerabilities. At that point, my PC is compromised anyway and many more bad things can happen, which are probably much easier to abuse than a cryptic and fairly random CPU vulnerability.
I agree with parent that this can be a problem for hosting providers whose CPU's might be shared with all sorts of totally unrelated sof
Re: (Score:2)
Maybe someone qualified can explain if these exploits are even a thing for your average PC user. I guess any malware must already have found its way onto my PC in order to exploit any CPU vulnerabilities.
In theory, many of the vulnerabilities could be exploited by Javascript. If you run any code on your computer that might be less than perfectly trustworthy, you could be at risk, even if it's in a container that is supposed to be able to keep it contained.
Re: (Score:2)
Interesting point. I'm not an expert on JavaScript, but like most modern languages, I suppose it is relatively high-level or runs in a VM, so the memory is managed for it, and there is no direct access to any CPU caches.
I would assume you would have to do systems-level programming in C to be able to access something like that.
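For a rough sense of what that systems-level access buys, the whole family of attacks ultimately leans on being able to tell a cached memory access from an uncached one by timing it. A minimal, harmless sketch of that timing primitive (assuming x86-64 and the GCC/Clang intrinsics headers; it only times a load of its own variable, nothing sensitive):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush */

static uint8_t probe;                 /* the memory we time */

/* Return how many timestamp-counter ticks one load of 'probe' takes. */
static uint64_t time_load(void)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    volatile uint8_t v = probe;       /* the timed access */
    (void)v;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    probe = 1;                        /* warm the cache line */
    printf("cached:  %llu ticks\n", (unsigned long long)time_load());
    _mm_clflush(&probe);              /* evict it from the cache */
    printf("flushed: %llu ticks\n", (unsigned long long)time_load());
    return 0;
}

The flushed load should take noticeably longer than the cached one. That latency difference is the side channel; the speculative-execution bugs are just ways of getting secret-dependent data into the cache so a measurement like this can read it back out. Browsers responded to exactly this by coarsening their high-resolution timers and temporarily disabling SharedArrayBuffer, which had been used to build timers precise enough to make the same measurement from JavaScript.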
Re: (Score:2)
Interesting point. I'm not an expert on JavaScript, but like most modern languages, I suppose it is relatively high-level or runs in a VM, so the memory is managed for it, and there is no direct access to any CPU caches. I would assume you would have to do systems-level programming in C to be able to access something like that.
VMs are no protection. The purpose of VM-based isolation is to defend against vulnerabilities in the operating system, but these are hardware-level vulnerabilities.
Yes, Javascript is high-level... but all modern Javascript environments do JIT compilation to native code. Getting the JIT compiler to generate exactly the right code to exploit these vulnerabilities is likely very hard (no one has managed it yet, that we know of, other than for one of the earliest and simplest speculative execution vulns), b
Re: (Score:2)
The way I understand this is that you would also have to get extremely lucky to even get any sensitive data in those caches. So it is in no way a targeted attack, but rather some sort of random snooping, without any context of what that data could even be. So overall it sounds to me like an extremely theoretical security vulnerability. Have to wonder if it's really worth sacrificing CPU performance to fix that.
Except on hosted or shared environments of course. I can imagine a malicious entity creating softw
Re: (Score:2)
As of right now, I don't think the average computer user is at any risk of being successfully attacked via a speculative execution attack, even without the mitigations.
People's computers get infected with malware all the time, there's no reason these attacks can't be delivered onto the system in the usual ways. We don't need to invoke Javascript, even if it is a feasible means of attack.
Re: (Score:2)
As of right now, I don't think the average computer user is at any risk of being successfully attacked via a speculative execution attack, even without the mitigations.
People's computers get infected with malware all the time, there's no reason these attacks can't be delivered onto the system in the usual ways. We don't need to invoke Javascript, even if it is a feasible means of attack.
The problem with that argument is that "malware" is an extremely broad category. allcoolnameswheretak said that when his PC is infected by malware "at that point, my PC is compromised anyway and many more bad things can happen", but that isn't necessarily true. It depends on exactly what privileges the malware has, and what privilege escalations are available... meaning, basically, the software security posture. The risk of these hardware attacks is that they provide an avenue that is exploitable even if
Re: (Score:2)
We should go back to making software efficient again
Or we could just not conform to the mass hysteria we've been told to and treat Spectre and Meltdown with the proper respect, which is to ignore it unless you're a cloud hosting provider or being actively targeted by the CIA.
So far, I agree with this. But attacks will improve, and it's not inconceivable that a paper could be published tomorrow that demonstrates how to exploit one or more of the vulnerabilities from Javascript.
Re: (Score:2)
there is a benefit to these 'slow' programming & scripting languages, and that is that it's much easier to write software without certain types of bugs.
things like overflows or double frees are almost non-existent, and before you say that these are well known these days, just remember that the linux kernel just recently had a double-free security bug discovered.
i do agree about efficiency though, but in most cases that is mostly not an issue of the chosen language.
Re: (Score:1)
Optimizing software tends to make it harder to maintain. It's usually the last thing you do if possible.
So while modern languages are less efficient to run, they are more efficient to write.
Also remember that in the early 90s and earlier the operating systems were also much more primitive. Real multi-tasking was fairly rare (AmigaOS was one of the first for microcomputers, but MacOS and DOS/Windows took well into the 90s to catch up) and so was memory protection, so there was basically zero security (any ta
Abstractions are the culprit (Score:5, Insightful)
Computing power is also wasted on bloated programming language virtual machines and scripting languages.
Hardware performance issues become less of a problem when not using slow and bloated programming languages, and when not doing stupid things like pointless UI effects.
Agreed that UI animations are pointless, but I think a major culprit is layers of abstraction. Look at the UI tier. I have seen 1000-line package.json files for a simple form-based UI. I have no clue what most of the dependencies are, and the developer that added them barely has one either. Look at the node-based text editors. They are crazy slow on fast computers. My biggest frustration is that every time I turn around, someone is inventing another abstraction layer. 15 years ago, it was Java...inventing dozens of redundant abstraction layers just to get to form input. This only went away when we moved to REST-based architecture and moved form binding to JavaScript.
Most of the UIs I work with are modern...and slow and terrible. However, I do have to work with one legacy system...an old JSP and servlet-based system from 20 years ago with 20-year-old HTML and only a little JavaScript(a typical UI of the time) that no one has updated...it is an actual delight. It loads FAST. I can scroll through 10,000 rows without my computer breaking a sweat. When the page says it's loaded, it's ready to use (that is my biggest pet peeve...pages that say they're loaded, but need to make 20 server calls and you never know when you have all the data). I hate to sound old, but I LOVE fast legacy UIs so much more than the trendy new ones that download megabytes of JavaScript and CSS and have convoluted complex layouts in HTML just to accommodate mobile users (and the UIs look like crap anyway on a phone, plus most corporate apps are not ones people want to use on their phone, so why make your primary users suffer?)
Server-side Java used to be terribly bloated, but has matured and even slightly trimmed down (or more precisely, people abandoned the slowest technologies) for JAX-RS-based REST services. I hope the same happens to the UI tier...that minimal and fast becomes trendy.
From what I've seen of native apps, they're following a similar trajectory...lots of layers of abstractions and toolkits that tangibly slow down the UI, but add no value for the end user. I hope minimal becomes trendy again. It's not that hard to implement. I wish there were a way to incentivize good engineering and quick, snappy UIs over quick turnaround, excessive layering and abstraction, and bloated toolkits.
Re: (Score:2)
Namely, the users. All the smooth, nice animation-via-JS stuff came to be because "users" found it helpful. I found that the more nice loader animations, smooth transitions, etc. I add to the apps I make, the more positively the users respond toward the aforementioned app, and the reason I make these apps
Re: (Score:1)
I agree but I'd dump Delphi.
Only C or Pascal were ever "good" languages to develop with
Pascal??
Looks like you were never on tier 1, and you probably never will be.
Tier 1 folks are experts in machine language and asm.
Only morons use C.
Re: (Score:2)
No, your assembly programs aren't recompiled. The rest, I guess, is trying to describe out-of-order execution, but it is still very wrong.
Since the day processors needed caches* one cannot count clock ticks directly, but one can still count cycles for the best case and, with some analysis, the average/worst case. One can still optimize at the assembly level and get better performance.
But most old assembly language programmers weren't actually good at it, so improved compilers removed the need in general.
(* actually even
Waiting... (Score:5, Funny)
I am waiting to find out that the only way Intel was able to compete with AMD was to intentionally introduce these flaws.
Re:Waiting... (Score:4, Interesting)
Shouldn't have to wait long. Both Intel and Nvidia have been busted multiple times for creating optimizations for popular benchmark programs at the microcode level in order to give a false performance boost. And with the gigantic shift of everything from servers to consoles moving to AMD for CPUs and APUs, it'll just be a matter of time before more garbage is found out.
Re: (Score:2)
Making microcode that runs faster than the competition... those sneaky bastards.
One of the things they did was reduce FPU precision without telling anyone. Sneaky is exactly the word for this shit.
Re: Waiting... (Score:3, Interesting)
Did AMD recognize risks that Intel didn't? Did both recognize risks but only Intel f
Re: (Score:2)
Look when these flaws were introduced... when Intel went back to the P6 core after AMD was destroying them on the performance/power consumption of the Pentium 4. The Pentium 4 hasn't been shown to be vulnerable AFAIK.
Intel was desperate to catch up to AMD, and they got more performance by having hidden unsafe underpinnings of their processors for years. If they hadn't been under such pressure from AMD, maybe they would have stuck with Netburst? Or maybe they would have fixed the problems in P6's security be
Re: (Score:2)
I am waiting to find out that the only way Intel was able to compete with AMD was to intentionally introduce these flaws.
You don't have to wait. We already know. You can tell because Intel's performance lead is erased when mitigations are enabled.
Re: (Score:3)
Speculative execution was a mistake. (Score:3, Interesting)
I predicted this in 2006.
Re:Speculative execution was a mistake. (Score:5, Funny)
Re: (Score:2)
Re:Speculative execution was a mistake. (Score:5, Interesting)
Speculative execution is fine, as long as you keep security in mind. AMD did and has a minor problem. Intel did not, screwed up its customers, and delivered ill-gotten performance at inflated prices. The sheep kept buying Intel and even today keep buying. People are stupid.
Both AMD and Intel were warned a long time ago by the research community that speculative execution is dangerous and needs extra care.
Re: (Score:1)
Saying it is "fine" is one thing; the implementation is another. There are no fixes--only mitigations, and they add cost in performance, power, and complexity. Unwinding state, cache effects, etc. is non-trivial, and more than likely just moves the problem rather than eliminating it, because buffers are finite.
The rate at which new speculative exploits are being discovered makes your assertion extremely dubious. In-order is the only safe option at present; anyone claiming otherwise is selling something or w
Re: (Score:1)
Saying it is "fine" is one thing; the implementation is another. There are no fixes--only mitigations, and they add cost in performance, power, and complexity.
If you're truly worried about the risk of Spectre and Meltdown then we wouldn't be having this conversation as you wouldn't be silly enough to do such a risky thing as connect a computer to a foreign network like the internet. Cloud providers may want to seek some compensation, as would those people who are actively being targeted by a nation state. To everyone else, why do you bother with the mitigations? The complexity of attack is orders of magnitude higher than any normal person needs to worry about.
Re: (Score:2)
A "mitigation" is a fix in some other place than where the problem is. This is pretty much a fixed term in IT security engineering and it has a different meaning from normal English.
Re: (Score:2)
Speculative execution is fine providing you remember security is a sliding scale of risk and not an absolute. Spectre and Meltdown won't affect nearly every computer in the hands of people out there, and are a risk only for highly targeted attacks and situations where you can carefully characterise the machine (e.g. VM hosts).
To anyone who doesn't fall into this category and worries about this, do you ever get tired of living in an underground bunker with a giant bank vault for a door, and if you actually l
Re: (Score:2)
Intel gets a lot of sales from laptops, because AMD's mobile offerings are not as good on battery life. You can buy Ryzen laptops but the battery life just isn't competitive with Intel.
Hopefully AMD can fix that because the Ryzen/APU combo is great performance-wise, they just need to get the power management stuff sorted out.
Re: (Score:2)
I have an older APU netbook. Works nicely under Linux and had a far better price-point than Intel-based alternatives.
I do expect that the end of what can be done performance-wise is pretty much reached (both AMD and Intel), so power will be next for AMD.
Re: (Score:3)
Re: (Score:2)
And that turns out to actually not be true. For example, AMD has vastly superior multiprocessing, so games stay well playable at significantly lower FPS than on Intel, for example. The focus on benchmarks is flawed and so is only comparing the fastest offerings. What matters is the user experience.
Re: (Score:2)
People aren't stupid.
Really? Then how do you explain all those people running Windows?
Re: (Score:2)
Ah, yes. Those too stupid to mount a heatsink, yet insisting on doing it themselves. And, of course, it is never your fault, so you never learn.
I never had problems with AMD CPUs and I had one from every generation except Zen-1.
Re: (Score:1)
Thanks for this. Made me chuckle. :)
Re: (Score:1)
Ooh, but why does it make you so very angry though? It's almost as though you feel somehow shamed yourself by having not predicted this...
Did you spend a lot of money on a highly-vulnerable Intel chip and now you dearly regret it, you poor thing? I would have saved you the money you know, if you had just asked me for advice. Pride is ultimately the downfall of us all, isn't it?
I have a dumb question (Score:3)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Unfortunately they are hit pretty badly. With Premiere, rendering might maintain decent speed if you are using GPU acceleration; otherwise it's probably going to take a fair hit, I'm afraid.
Re: (Score:3)
I have a better question for you: how affected are *you*? Security is the answer to risk. Risk is personally assessed, based on the consequence and likelihood in your situation.
If you're running a Photoshop and Premiere machine, the likelihood of you being affected by Spectre and Meltdown is so close to zero that if you're worried about it, I also have asteroid impact insurance to sell you.
There's a reason why the Kernel team made many of the mitigations not only optional but also disabled by default. Why would
Re: (Score:1)
No
Re: (Score:3)
[1] Rust code is compiled using LLVM infrastructure. That is to say, whatever compiler mitigations Rust programs have, the same mitigations are available to C++, and vice versa.
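As a concrete illustration of that point (treat the exact flag spellings as an assumption to check against your toolchain version; they are the ones documented for recent Clang releases), the same LLVM-level Spectre mitigations can be requested when compiling C or C++:
clang -O2 -mretpoline -c foo.c
clang -O2 -mspeculative-load-hardening -c foo.c
Since both the Rust and C/C++ front ends lower to the same LLVM backend, a hardening pass implemented there is available to either language; foo.c here is just a placeholder source file.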