
Intel Responds To Alleged Chip Flaw, Claims Effects Won't Significantly Impact Average Users (hothardware.com) 375

An anonymous reader quotes a report from Hot Hardware: The tech blogosphere lit up yesterday afternoon after reports that a critical bug in modern Intel processors has the potential to seriously impact systems running Windows, Linux, and macOS. The alleged bug is so severe that it cannot be corrected with a microcode update; instead, OS vendors are being forced to address the issue with software updates, which in some instances require a redesign of kernel software. Some early performance benchmarks have even suggested that patches to fix the bug could result in a performance hit of as much as 30 percent. Since reports on the issue exploded over the past 24 hours, Intel is looking to cut through the noise and tell its side of the story. The details of the exploit and the software/firmware updates to address it were scheduled to go live next week. However, Intel says it is speaking out early to combat "inaccurate media reports."

Intel acknowledges that the exploit has "the potential to improperly gather sensitive data from computing devices that are operating as designed." The company further states that "these exploits do not have the potential to corrupt, modify or delete data," that the "average computer user" will be negligibly affected by any software fixes, and that any negative performance outcomes "will be mitigated over time." In a classic case of pointing fingers at everyone else, Intel adds that "many different vendors' processors" are vulnerable to these exploits.
You can read the full statement here.
  • Video streaming? (Score:2, Interesting)

    by Anonymous Coward

What about video streaming (writing, compressing) with Intel's Quick Sync? We do a lot of I/O. Presumably this is going to kill our performance. I wonder if a class-action lawsuit is incoming.

    • Re:Video streaming? (Score:5, Interesting)

      by Hal_Porter ( 817932 ) on Wednesday January 03, 2018 @05:21PM (#55858251)

If the hit is really 30% for FUCKWIT [mail-archive.com], I wonder if there's a case to be made for an "I know all the software on my box, don't protect me against kernel-to-user-mode data leakage" option.

You could have a "--bareback" switch the user could pass to the kernel from the bootloader.
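Something close to this switch actually exists: mainline Linux gained a `nopti` (later also `pti=off`) kernel boot parameter to disable the page-table-isolation mitigation, and patched kernels report mitigation status under sysfs. A sketch of how an admin might check and opt out (the sysfs path and GRUB file locations assume a Linux kernel carrying the KPTI patches and a Debian-style GRUB setup):

```shell
# On a patched kernel, check whether the mitigation is active:
cat /sys/devices/system/cpu/vulnerabilities/meltdown

# Opt out at boot: append "nopti" to the kernel command line,
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nopti"
# then regenerate the config and reboot:
#   sudo update-grub && sudo reboot
```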

I guess they won't be affected unless their application reads and writes data from/to the disk one byte at a time. People have already run network tests, and the KPTI patch shows minimal performance loss.
The questioner said "We do a lot of I/O". If you do I/O 512 bytes at a time, this may be noticeable. But that was a poor choice to begin with: 8192 bytes can be a lot faster, even without this issue, and even more so now. Each disk read is a call into kernel space; to minimize the number of calls, grab more data each time.

        Try different values and benchmark. It can make a big difference.
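The benchmarking advice above can be sketched quickly; this is an illustrative Python example (the 1 MiB file size and the chunk sizes tested are arbitrary choices, not figures from the thread):

```python
import os
import tempfile
import time


def read_in_chunks(path, chunk_size):
    """Read a file chunk_size bytes at a time; unbuffered, so each read() is a syscall."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            total += len(data)
    return total


def benchmark(path, chunk_size):
    """Return (bytes read, elapsed seconds) for one pass over the file."""
    start = time.perf_counter()
    nbytes = read_in_chunks(path, chunk_size)
    return nbytes, time.perf_counter() - start


if __name__ == "__main__":
    # Create a 1 MiB scratch file, then compare chunk sizes.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(1 << 20))
        scratch = f.name

    for size in (512, 8192, 65536):
        nbytes, elapsed = benchmark(scratch, size)
        print(f"{size:6d}-byte reads: {nbytes} bytes in {elapsed:.4f}s")

    os.unlink(scratch)
```

Larger chunks mean fewer kernel crossings for the same data, which is exactly the cost the KPTI patch makes heavier.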

      • by MrKaos ( 858439 )

        I guess they won't be affected unless your application reads and writes data from/to the disk one byte at a time.

That's only one part. Reads wait on writes, and I/O *to* disk forces a context switch from the CPU scheduler, but that doesn't mean the CPU isn't going to context switch when it's de/compressing a block in memory or running some other memory-bound process.

Reads and writes to disk provide an opportunity to mask the context-switch latency in the I/O latency; however, there is no such opportunity in a CPU-cache-to-system-RAM operation, and this is where a lot of the impact will be felt. It's every task switch and that will
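The kernel-crossing cost being debated here is easy to get a rough feel for by timing a trivial syscall against a pure user-space call. A Python sketch (numbers vary wildly by machine and kernel, and on some libc versions getpid() results may be cached, so treat the syscall figure as illustrative only):

```python
import os
import time


def time_per_call(fn, n=100_000):
    """Average wall-clock time per call of fn, in nanoseconds."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1e9


def user_space_noop():
    """A call that never leaves user space, for comparison."""
    pass


if __name__ == "__main__":
    syscall_ns = time_per_call(os.getpid)        # crosses into the kernel
    userspace_ns = time_per_call(user_space_noop)  # stays in user space
    print(f"syscall  ~{syscall_ns:.0f} ns/call")
    print(f"function ~{userspace_ns:.0f} ns/call")
```

KPTI adds a page-table swap (and, on pre-PCID hardware, TLB flushes) to every one of those kernel crossings, which is why syscall- and interrupt-heavy workloads show the biggest hit.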

  • Performance (Score:5, Interesting)

    by phantomfive ( 622387 ) on Wednesday January 03, 2018 @05:06PM (#55858137) Journal
    "All you little people, performance doesn't matter for you." I do like this quote, though:

    "Intel believes its products are the most secure in the world"

Yeah, more secure than all those other products that don't let you log in with an empty password.

    • Re:Performance (Score:5, Insightful)

      by Anonymous Coward on Wednesday January 03, 2018 @05:15PM (#55858193)

      "Intel believes its products are the most secure in the world"

      Jerry, just remember: it's not a lie if you believe it

I wonder if large corporations have any notion of conscience; the Intel ME fiasco is still too fresh in our memory for such outrageous claims.
    • Significant and average. Weasel words to deflect attention from their poor product. I had an average heart attack.... but it wasn't significant.
Just think of all the new processors they will sell when everyone's brand-new processor ends up slower than Sandy Bridge processors from 2011.

    • by gweihir ( 88907 )

      This statement nicely illustrates the difference between "belief" and "knowledge". Also refer to "delusion".

  • by Joe_Dragon ( 2206452 ) on Wednesday January 03, 2018 @05:06PM (#55858139)

Why are non-broken AMD chips flagged as insecure along with Intel's?

From what I'm reading, it's because the code is still in development, so they basically have it turned on for everything. They plan on fixing that soon.

      https://www.phoronix.com/scan.... [phoronix.com]

      https://www.phoronix.com/scan.... [phoronix.com]

Why is Intel saying "many different vendors"? In their big reveal, AMD does not have this bug.

        • Because they're lying and trying to spread the blame around so they don't look so bad?

  • Nice try (Score:5, Interesting)

    by blackomegax ( 807080 ) on Wednesday January 03, 2018 @05:09PM (#55858157) Journal
Nice try Intel, but Phoronix benchmarks prove you wrong, showing up to a 60% loss in some workloads.
The workloads with significant performance losses are more or less completely artificial; e.g., average users don't create hundreds of thousands of files day in and day out, and even in that case only SSDs are affected. Considering that SSD operations are sometimes several orders of magnitude faster than those of spinning disks, this performance loss is still nothing to worry about.
Nice try Intel, but Phoronix benchmarks prove you wrong, showing up to a 60% loss in some workloads.

They do nothing of the sort. Phoronix benchmarks have hardly anything to do with "average computer users," who, provided they aren't surfing some web site serving up Coinhive malware, probably don't even regularly exceed 40% CPU usage.

  • by Anonymous Coward on Wednesday January 03, 2018 @05:13PM (#55858175)

    Intel says "Intel believes these exploits do not have the potential to corrupt, modify or delete data."
    They do not say anything about reading. This means the exploit lets you read protected memory.

    • by gweihir ( 88907 )

      They also do not say that the things that can be read (like credentials and crypto-keys) can of course be used to "corrupt, modify or delete data". A shameless lie by misdirection.

  • by ilsaloving ( 1534307 ) on Wednesday January 03, 2018 @05:16PM (#55858205)

    I think their magic excuse 8-ball is broken too, because I think this is the exact same excuse they've used for all their previous screw-ups.

  • by Swave An deBwoner ( 907414 ) on Wednesday January 03, 2018 @05:18PM (#55858225)

    All my users are above average.

  • Some info (Score:4, Informative)

    by Artem S. Tashkinov ( 764309 ) on Wednesday January 03, 2018 @05:27PM (#55858287) Homepage

    I like how they've weaseled out [intel.com] of the whole fiasco (why didn't /. post a link to the original press release?):

    "Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time".

    I'm not sure I can read between the lines properly, but I guess new revisions of Coffee Lake/Kaby Lake/Skylake(-X) CPUs are coming and will contain a hardware fix (though that still seems highly unlikely considering how difficult it is to deploy a new hardware design; on the other hand, unlike fabless companies such as AMD/NVIDIA/ARM, Intel owns its fabs and has everything under control). After all, they've known about this issue for almost half a year.

    Meanwhile, as for consumer workloads, they are correct. Two [hardwareluxx.de] German [computerbase.de] websites have already tested a Windows build with the fix and found very little performance loss.

    Phoronix [phoronix.com] has also run a number of tests on Linux and found that only a few (mostly artificial) tasks are seriously affected.

    Intel home users may sleep well. As for enterprise customers, no one has run virtualization tests yet, though; that's what's truly important for large deployments (clouds).

    • Re:Some info (Score:4, Interesting)

      by RogueyWon ( 735973 ) on Wednesday January 03, 2018 @06:24PM (#55858743) Journal

      The Hardwareluxx benchmarks are interesting. They certainly don't show "no" impact on gaming. In fact, what they show is more or less what you would expect to see with decreased CPU performance.

      If you look at the 4K benchmarks, there is minimal-to-no impact. That's not surprising, because you would expect most modern games to be GPU-constrained at 4K, outside of some really fringe cases. Drop to 1080p, however, and you are looking at roughly a 4% reduction in framerates. Their test rig has a 1080 Ti - one of the best gaming cards money can buy right now and one that you would expect to be able to eat most games for breakfast at 1080p. It's not unusual for games on high-end graphics cards to hit CPU constraints at 1080p and, indeed, this is usually how sites like Eurogamer's Digital Foundry benchmark CPUs for gaming performance. By their usual standards, that 4% performance loss is pretty severe.

      Will it actually affect anybody's gaming performance in the real world? Possibly. Gamers with older CPUs but a more recent graphics card (a fairly common combination) still using 1080p monitors may well see modest but still noticeable performance hits based on those benchmarks. Even if it's not a huge real-world impact, it's a massive reputational blow for Intel.

  • Heard this before (Score:4, Insightful)

    by Jason1729 ( 561790 ) on Wednesday January 03, 2018 @05:36PM (#55858347)
    When they had the Pentium floating-point division bug, they also said it wouldn't affect the average user. All they did was piss off their customers before they recalled the chips anyway.

    Some people never learn.
  • by Anonymous Coward on Wednesday January 03, 2018 @05:38PM (#55858363)

    If the 'sensitive information' they can gather includes credentials or tokens the user wouldn't otherwise have access to, it sure as shit allows modification of data.

    Nice catch; however, to be honest, you're talking about possible ramifications, not about direct modification of RAM that your process/application shouldn't have access to.
      • by gweihir ( 88907 )

        Yes, but users do not care about this. Users care whether their data is at risk of being "corrupted, modified or deleted" by this severe bug, and yes, it very much is. Intel is using the tactic of lying by shameless misdirection here, apparently hoping that nobody understands what they are actually saying.

    • They're being honest, more or less. It's standard to describe what the exploit allows you to do directly.

      Being able to read anything in kernel space will allow credential theft, true, but the exploit alone doesn't allow modification of data. Vulnerability reports typically describe exactly what is possible via the exploit and expect the reader to understand the implications---or to ask someone who does.

      Anyone who rates vulnerabilities is going to put this into the highest risk category anyway, so it's not l

    • by vadim_t ( 324782 )

      So the paper is out. I'm not yet done reading it, but so far what I gathered is this:

      There's a demonstrated attack capable of dumping all of kernel memory at a speed of 503 KB/s. That's 34 minutes per GB, so a full dump is going to take a while at this rate, but it seems plenty fast to cause huge amounts of trouble if the attacker knows where the juicy stuff is.

      There's also a version for reading the memory of another process. This seems trickier to pull off, and the paper describes a speed of 10 KB/s.
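The arithmetic behind those rates is easy to sanity-check (using the two speeds quoted above and binary units):

```python
KB = 1024
GB = 1024 ** 3

kernel_rate = 503 * KB   # bytes/sec: kernel-memory dump rate quoted above
process_rate = 10 * KB   # bytes/sec: cross-process read rate quoted above

minutes_per_gb = GB / kernel_rate / 60   # ~34.7 minutes
hours_per_gb = GB / process_rate / 3600  # ~29 hours

print(f"1 GB of kernel memory: ~{minutes_per_gb:.1f} minutes")
print(f"1 GB of another process's memory: ~{hours_per_gb:.1f} hours")
```

Slow for a full dump, but an attacker targeting known structures (key material, password caches) needs only a tiny fraction of that gigabyte.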

  • by QuietLagoon ( 813062 ) on Wednesday January 03, 2018 @05:44PM (#55858413)
    That was one of the most uninformative, denying-we-did-anything-wrong press releases I've read in a long while. Therefore I suspect it came from the legal team. If only Intel's CPU designers were as good as the Intel legal team.
  • by tsqr ( 808554 ) on Wednesday January 03, 2018 @05:50PM (#55858453)

    Intel will soon be announcing a $29 CPU replacement program for qualifying customers.

    • by sl3xd ( 111641 )

      Intel will soon be announcing a $29 CPU replacement program for qualifying customers.

      ...speaking of $29 to fix the battery (to speed Apple's iPhones back up): since ARM64 is also affected, every iOS device since the iPhone 5s (late 2013), as well as Android devices of similar vintage, will also see a slowdown from this.

      Here's the hard reality: It takes roughly a year to go from tape-out (end of chip development) to a fabricated chip. That doesn't count manufacturing time, integration into designs, physical distribution, and so on.

      Even if Intel (or any of the ARM64 makers) were to find and

  • PR lies (Score:5, Insightful)

    by gweihir ( 88907 ) on Wednesday January 03, 2018 @06:00PM (#55858525)

    Does not "corrupt, modify or delete data". Yes, nice. It can just steal your passwords and encryption keys and then use them to do that corruption, modification or deletion. A shameless lie by misdirection. Intel has no honor at all.

  • by Anonymous Coward on Wednesday January 03, 2018 @06:03PM (#55858541)

    Now I have nothing to complain about: the same performance at a much lower price.

  • by fahrbot-bot ( 874524 ) on Wednesday January 03, 2018 @08:51PM (#55859473)

    ... any negative performance outcomes "will be mitigated over time."

    Meaning, when you buy a new CPU or computer - i.e. "fixed in the next release".

  • by mveloso ( 325617 ) on Wednesday January 03, 2018 @08:54PM (#55859491)

    From what I've read, this "problem" looks to be a design decision on the part of Intel. Speculative access needs to be fast, and making it subject to access control basically removes the benefit of speculative access.
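The design decision described here is exactly what makes the leak possible: a speculatively executed load can touch memory before the permission check takes effect, and although its architectural result is discarded, the cache line it warmed remains observable. A toy Python simulation of that cache side channel (the `ToyCache` class and all names are invented for illustration; a set stands in for the CPU cache, and no real speculation or timing is involved):

```python
PAGE = 4096  # one probe slot per possible byte value, page-sized to dodge prefetching


class ToyCache:
    """Stand-in for the CPU data cache: remembers which lines were touched."""

    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def touch(self, addr):
        self.lines.add(addr // PAGE)

    def is_cached(self, addr):
        return addr // PAGE in self.lines


def speculative_leak(cache, secret_byte):
    # The faulting load's value briefly steers a dependent load into the
    # probe array; the result is rolled back, but the cache line stays warm.
    cache.touch(secret_byte * PAGE)


def recover_byte(cache):
    # Flush+Reload: the single warm probe slot reveals the secret value.
    return next(v for v in range(256) if cache.is_cached(v * PAGE))


if __name__ == "__main__":
    cache = ToyCache()
    cache.flush()
    speculative_leak(cache, ord("K"))
    print(recover_byte(cache))  # prints 75, i.e. ord("K")
```

In the real attack the "is it cached?" question is answered by measuring load latency, which is why enforcing access control on the speculative load itself (rather than only at retirement) would have closed the channel at a performance cost.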

    Given how Intel the company operates, there's no way that this could be a bug.

    I myself would rather run with the current behavior, since I don't particularly care about the problem; it's more an issue for shared hardware, and I don't generally share my hardware.

  • by ffkom ( 3519199 ) on Wednesday January 03, 2018 @09:38PM (#55859709)
    ... written in 2015 at https://danluu.com/cpu-bugs/ [danluu.com]

    As someone who worked in an Intel Validation group for SOCs until mid-2014 or so, I can tell you: yes, you will see more CPU bugs from Intel than you have from the post-FDIV-bug era until recently. Why? Let me set the scene: it's late in 2013. Intel is frantic about losing the mobile CPU wars to ARM. Meetings with all the validation groups. The head honcho in charge of Validation says something to the effect of: "We need to move faster. Validation at Intel is taking much longer than it does for our competition. We need to do whatever we can to reduce those times. We can't live forever in the shadow of the early 90's FDIV bug; we need to move on. Our competition is moving much faster than we are" - I'm paraphrasing. Many of the engineers in the room could remember the FDIV bug and the ensuing problems it caused Intel 20 years prior. Many of us were aghast that someone highly placed would suggest we needed to cut corners in validation - that wasn't explicitly said, of course, but that was the implicit message. That meeting in late 2013 signaled a sea change at Intel to many of us who were there, and it didn't seem like it was going to be a good kind of sea change. Some of us chose to get out while the getting was good.

    It's basically the same fuck-up as in the software industry: Profits and "time-to-market" prioritized over security.
