Google Researchers Say Software Alone Can't Mitigate Spectre Chip Flaws (siliconrepublic.com) 98

A group of researchers says it will be difficult to avoid Spectre-class bugs in the future unless CPUs are dramatically overhauled. From a report: Google researchers say that software alone is not enough to prevent the exploitation of the Spectre flaws present in a variety of CPUs. The team of researchers -- including Ross McIlroy, Jaroslav Sevcik, Tobias Tebbi, Ben L Titzer and Toon Verwaest -- works on Chrome's V8 JavaScript engine. The researchers presented their findings in a paper distributed through arXiv and came to the conclusion that all processors that perform speculative execution will always remain susceptible to various side-channel attacks, despite mitigations that may be discovered in the future.
This discussion has been archived. No new comments can be posted.

  • ....not surprised.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      ... actually quite surprised.

      Software engineers admitting something cannot be just fixed in software? Astounding.

I am. Though maybe it's just bad technology reporting. When they say "all processors that perform speculative execution will always remain susceptible," something is being reported wrong. They should clarify that this does not mean ALL processors (past, present, future, and from any vendor), and should end with "unless processors are redesigned."

      Reading the summary as-is, literally, it is disagreeing with itself.

  • by KirbyCombat ( 1142225 ) on Thursday February 21, 2019 @11:56AM (#58158436)
    Is my understanding not correct? I thought that these vulnerabilities were due to processors not applying memory access controls during speculative execution. For me personally, I was very surprised to find out that memory access controls could be bypassed at all. Isn't it just a matter of always applying memory access controls? Isn't that why the access control is in the hardware?
    • Well the hardware implementation was buggy, so you can bypass it after all. And since you cannot patch hardware (unless the patch is disabling speculative execution altogether and making all the affected hardware much slower) people try to find all kinds of workarounds to mitigate the issue.
      • Is it even possible, though? That's like trying to make a turd sandwich taste good.

        Maybe they should go back to the drawing board and find something better than speculative execution.

        • But until that happens they just have to... sell slower hardware? Good luck explaining to Joe at Best Buy why his new laptop will be slower than the one he has (despite similar form factor and thickness) because something called "vulnerabilities".
          • Because "Security". Joe at Best Buy knows that security always slows things down (driving, airport, etc).

            • Still, good luck convincing Joe that an ultrabook should revert to running like a 386 when someone else (be it Via or some Chinese company, if both Intel and AMD hypothetically agree not to use speculative execution, or to use it in significantly diminished capacity) offers the performance Joe has come to expect, or close to it. Which raises the question: should the government regulate such things?
      • by msauve ( 701917 ) on Thursday February 21, 2019 @12:10PM (#58158548)
        But the summary claims "...all processors that perform speculative execution will always remain susceptible...". That's a blanket statement which covers all processors, existing or future.
        • by Anonymous Coward

          Technically it might be possible to fix, but it would require restoring the state of all caches and buffers to the state before speculation started, which would defeat the performance gain of speculation.
          So yes, it's impossible.
          The memory-access issue is Meltdown and has nothing to do with Spectre.

        • by Anonymous Coward

          Not all. Less powerful processors, such as the ones in Raspberry Pi devices, do not perform speculative execution and are thus not affected by Spectre.

      • I wonder if this can be mitigated in a hypervisor (Green Hills INTEGRITY, KVM, etc.). Maybe PC makers would be wise to have an onboard hypervisor as a way to limit damage.

        • Then the hypervisor might have a bug.
          • True. However, it is about risk management. A hypervisor has a smaller attack surface than a general purpose OS. Yes, there are bugs in hypervisors, but generally a lot fewer.

    • by suutar ( 1860506 )

      As I recall, the problem is that speculative execution alters the state of the CPU and its cache, and there are ways to determine information about that state afterwards that don't involve violating memory access restrictions, like "hey, loading that address didn't take as long as expected, something must've pulled it into the cache already" - that sort of thing. So the question is how much can they make the state act like the speculative execution never happened without actually reaching the point that spe
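The cache-footprint inference described above can be sketched as a toy simulation. No real hardware state is involved: the "cache" is just a Python set, and every name here (SimCache, victim, attacker, SECRET) is invented for illustration. It shows the shape of a Flush+Reload-style probe -- flush, let the victim run, then check which line became "hot":

```python
# Toy model of the side channel: loads leave a persistent footprint in
# the cache, and "fast access" is modeled as "present in the set".

class SimCache:
    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def load(self, line):
        # Loading a line leaves it resident -- this footprint is the
        # side channel, even if the architectural result is discarded.
        self.lines.add(line)

    def is_hot(self, line):
        # Stand-in for "this access completed suspiciously fast".
        return line in self.lines

SECRET = 0x2A

def victim(cache):
    # Models a speculatively executed, secret-dependent load: the result
    # is never returned, but the cache line indexed by the secret stays.
    cache.load(SECRET)

def attacker(cache):
    cache.flush()
    victim(cache)
    # Probe every possible line; the hot one reveals the secret byte.
    return next(i for i in range(256) if cache.is_hot(i))

print(hex(attacker(SimCache())))  # recovers 0x2a
```

A real attack replaces `is_hot` with a cycle-accurate timing measurement of a memory access, which is exactly the part the mitigations try to make unreliable.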

      • For now, separate processes into security domains and never let two processes from different domains run on the same core. In the future, get rid of that thing and design computer systems that don't require squeezing out every drop of sequential execution speed you possibly can. There doesn't really seem to be a lot of ways of getting rid of that problem.
        • by Anonymous Coward

          Computers would be slow as dirt. Even branch prediction would have to be thrown out.

        • by AuMatar ( 183847 )

      Do you want to go back to the computer you used in the early 90s? Because that's what you're going to get if you remove it entirely. As for the idea of not letting different apps share the same core -- so you want to get rid of multitasking? Because you have very few cores compared to the number of processes running. Sharing cores is basically how all multitasking works.

          • Do you want to go back to the computer you used in the early 90s? Because that's what you're going to get if you remove it entirely

            I'm not really sure that the computer I used in the early 90s had thousands of cores enabled by sub-20nm lithography.

          • A 3 GHz Apple IIgs would still be pretty useful. Extend it to 64 bits and add a graphics coprocessor. If these attacks do turn out to be serious issues we might have to take a step back from the current levels of complexity.

        • You have assumed a multicore CPU that does not share the last level cache between cores, which is not necessarily true.

          While I have not seen a description of a Spectre exploit that specifically involves a multilevel cache, I have not seen that case specifically ruled out either.

    • by imgod2u ( 812837 )

      The issue is that you don't *know* whether the memory being fetched is accessible until the data comes back from DRAM. So you can assume all accesses to DRAM are not accessible -- very very poor for performance -- and play it safe. Or assume it's accessible until it comes back from DRAM.

      Of course, during that time you've also launched a whole lot of other memory requests that are speculative. And it doesn't matter that you end up discarding the results from the software interface perspective -- the cache (or
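The ordering problem described above -- the load is issued before the permission result comes back -- can be sketched as a toy pipeline simulation. Everything here (Pipeline, dram, perms) is an invented model, not a real CPU: the forbidden load raises a fault and its architectural result is squashed, but the cache footprint survives.

```python
# Toy pipeline that issues a load immediately and only checks
# permissions when the "DRAM" response arrives.

class Pipeline:
    def __init__(self, dram, perms):
        self.dram = dram          # address -> value
        self.perms = perms        # address -> may this program read it?
        self.cache = set()        # lines touched, even by squashed loads

    def load(self, addr):
        value = self.dram[addr]   # issued immediately, before the check
        self.cache.add(addr)      # microarchitectural footprint persists
        if not self.perms[addr]:  # permission result arrives "late"
            raise PermissionError("access denied")
        return value              # architectural result, only if allowed

dram  = {0: 7, 1: 99}
perms = {0: True, 1: False}
cpu = Pipeline(dram, perms)

try:
    cpu.load(1)                   # forbidden load: no value is returned...
except PermissionError:
    pass

print(1 in cpu.cache)             # ...but the footprint remains: True
```

Checking permissions *before* touching the cache would close the channel, but that is precisely the serialization that costs the performance speculation was meant to buy.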

    • The side channel attacks do things like measure how long a set of operations take to execute, which gives extra information about what the operations actually did. That can tell you if some relevant data was already in the cache or whether it had to be loaded from RAM. Do the appropriate magic and you can start to deduce things about the contents of memory that you can't see.

      That is, it's a side-channel attack. They're still treating the processor as a black box that you can't peek inside of, but you can sti
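A deterministic way to illustrate that kind of timing inference is to count operations instead of measuring wall-clock time (operation count is what a real attacker estimates by timing). All names here (check_pin, recover_pin, SECRET_PIN) are made up for the example; the early-exit comparison leaks how many leading characters matched.

```python
# The "extra information from how long operations take" idea, made
# deterministic: check_pin returns the number of comparison steps it
# performed, standing in for measured execution time.

SECRET_PIN = "7341"

def check_pin(guess):
    steps = 0
    for g, s in zip(guess, SECRET_PIN):
        steps += 1
        if g != s:      # early exit: running "time" depends on the secret
            return steps
    steps += 1          # success path does extra work (e.g. unlocking)
    return steps

def recover_pin(length=4):
    # Recover the PIN digit by digit: the guess that runs longest
    # matched one more leading character than the others.
    known = ""
    for pos in range(length):
        best = max("0123456789",
                   key=lambda d: check_pin(known + d + "0" * (length - pos - 1)))
        known += best
    return known

print(recover_pin())  # prints 7341
```

The standard fix at the software level is a constant-time comparison (always examine every character), which is the same spirit as the hardware mitigations: make the observable cost independent of the secret.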

    • by ras ( 84108 )

      I thought that these vulnerabilities were due to processors not applying memory access controls during speculative execution.

      Not really. First of all, Spectre is not about the hardware letting you access memory it thinks you don't have permission to access, so memory access controls aren't relevant here. That variant did exist and has another name: Meltdown. Meltdown was a disaster. An Intel hardware bug meant an unprivileged program could use the Spectre method to read all of kernel memory, even th

  • Some people are always trying to poo-poo old technology. I suspect this is a play to act like every CPU made before yesterday is no good. "Sorry folks, you're going to have to turn in all those old computers we can't control^H^H^H Uh, I mean, those vulnerable systems, for your own safety." That, or it's so some smug security weenie can sit and smirk, pointing to some ridiculous "researcher" saying "it's impossible to prevent." I just think there may be more going on here than just "old stuff sucks."
  • Maybe make it run at a few hundred MHz, or upgrade to the 65816 architecture... I think I'd rather have that.

    • Isn't the 68000 series immune to all this bullshit? The 68040 is better than the 65816, isn't it?

      • Not in the 040 but in the 060, yes. Check out how the branch cache works and the timings for branches. The branch prediction is like the static prediction in the 68040 where previous branch targets are assumed occupied because they are probably loops and new targets are assumed unoccupied. When the 060 sees a branch for the first time in order to aid in dynamic prediction it creates an entry in the branch cache. However, I don't know enough about the engineering they did to know if it's definitely vulne
      • The 65816 is only a 16-bit processor. It's famous for a variant used in old Nintendo consoles, but its first appearance was actually in the Apple IIgs.

        Several IIgs technologies re-emerged after the IIgs faded: the first was the 65816 CPU used in later consoles, and the other was the Ensoniq DOC2 audio chipset, which eventually found its way into the Gravis UltraSound (GUS) as the GF1 (actually based on the DOC3 chipset).

        But honestly, 16-bit... is a non-starter, and in fact 32-bit is also a non-starter.

        I would gladly trad
  • We need to get away from this unsigned, unreviewed, wild code (like javascript) running on your machines. Lock it down and stuff like this won't be a problem.
    • by bigpat ( 158134 )

      We need to get away from this unsigned, unreviewed, wild code (like javascript) running on your machines.

      Lock it down and stuff like this won't be a problem.

      Systems for whitelisting apps and websites can help. But then the problem just shifts to how much do you trust whichever app stores or website whitelists you are using which are basically the same thing as a signing system. I mean I try to be careful about which apps I download, but if you want your computer to be a general purpose computer then you have to have some flexibility to run unsigned code. As a developer that often means my own code. Otherwise it is an appliance.

    • We need to get away from this unsigned, unreviewed, wild code

      As a representative of programmers everywhere, can you kindly take your idea and go fuck yourself?

  • Having handrails helps mitigate people dying from falling down the stairs. It doesn't stop falls, or even prevent them; it is just a tool to help regain balance after you have lost your step.

    Software can mitigate the problem by catching the most common and easiest calls that cause the issue.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Thursday February 21, 2019 @01:22PM (#58158954)
    Comment removed based on user account deletion
    • So far, it has only been 'successfully' exploited in tightly controlled lab conditions. If you ask me, and no one did, it has been a way overblown risk by researchers; then fixes were hurriedly attempted with way too little testing; now researchers are saying 'it is still not fixed' and 'it can't be fixed', once again blowing the actual risk for us simple consumers way out of proportion.

      In the server and cloud world there might be a legitimate risk at some point, once we actually see an exploit in the wi
    • The fundamental flaw dates all the way back to the Power 1 processor/architecture from the early 1990s. Intel didn't come to the party until 1996 (the chip was in development sometime before that, likely after 1991 or 1992 when the Pentium engineering samples would've first been released.) Even in that case the problem doesn't reveal itself until you have single package cache-coherent processors, which limits practical application of these flaws to Hyperthreaded pentium 4s, Core series processors, and later

    • by Anonymous Coward

      If they deliver, I will probably switch back to AMD. Intel burned a lot of goodwill in the past few years.

      Just keep in mind that *any* CPU design that uses speculative execution is fundamentally and unavoidably vulnerable to such attacks.
      That includes AMD as well as Intel.

      So if you're switching due to disliking Intel as a company more than AMD as a company, that's all good and well.
      If you're switching under the assumption only Intel CPUs do speculative execution and are vulnerable, you won't be solving any problems.

      The unfortunate fact of the matter is raw instruction execution speed in CPUs isn't increasing le

      • by dryeo ( 100693 )

        Speculative execution is pretty flawed, with new ones found seemingly every day.
        But getting rid of it entirely would set us back to the Pentium 1 and 2 days, when a 400 MHz chip was the peak of performance and priced out of most people's reach, and when most of our $1k USD processors ran at 100 MHz.

        That's a HUGE cost. One not everyone would even be willing to pay.

        But if you want to build a new PC without speculative execution vulnerabilities in it, go fish a 486 or SPARC out of the trash and go to town. Waiting may only give you fewer options as such hardware becomes rarer.

        The first-generation Atom had no speculative execution, and while it dispatched instructions at about the same rate as a Pentium, it ran much faster, had SIMD instructions, and was cheap.

    • Let me start by defending Intel before turning this around. Speculative execution attacks are extremely difficult to execute in any constructive way without detailed targeted knowledge and access to the machine. As yet there's no known case of it actually being used and with very good reason: Pretty much every other security exploit is easier and more effective to execute with the exception being attacking a VM from another VM. Spectre / Meltdown really shouldn't come into consideration when buying a CPU un

    • by lkcl ( 517947 ) <lkcl@lkcl.net> on Thursday February 21, 2019 @05:58PM (#58160868) Homepage

      I'm assuming at this point we're probably at least a couple of CPU generations away from Intel fixing this properly.

      unfortunately, it's much worse than the press is making out. i've had to investigate this in-depth as part of the design of the libre-riscv soc, because we critically rely on out-of-order execution for the vectorisation. i was shocked to learn that even in-order systems are potentially vulnerable to timing attacks.

      the first thing that people need to get absolutely clear: spectre was *just the first* in a *class* of timing attacks that opened researchers' and hardware designers' eyes to a blind-spot in computing architectures.

      the definition of a timing attack is as follows: one instruction may affect the completion time of past OR future instructions through resource starvation / contention, OR through state not being reset after use to a known uniform initial state.

      the FIRST spectre attacks were against memory and caches, on speculative designs.

      however it is perfectly possible, for example, for a multi-issue IN-ORDER system to have insufficient numbers of register ports, such that a certain unique combination of instructions may be arranged by an attacker to starve future instructions of the ability to complete in a uniform time... and REQUIRE that they stall.

      by forcing instructions to stall, that is the very DEFINITION of a timing attack.

      against an *IN-ORDER* design.

      now, it is possible to put in place certain speculation mitigation barriers in hardware, however these barriers *ONLY* occur at interrupts, exceptions and, at a software / OS level, on "context switches". hence the reason why this paper says that no matter what hardware designers try to do, *intra-process* attacks simply CANNOT be mitigated without moving to an *INTER*-process software security model.

      FastCGI is toast, basically.

      there is a solution, and it's going to require a massive world-wide campaign to introduce a concept to the entire computing software world: the creation of intra-process speculation barriers. if we wish to keep using FastCGI, and if we wish to keep using Firefox and python-gevent (the single-process paradigm), we *need* a hardware instruction that "quiesces" internal state *AS IF* the hardware had just made a context-switch, terminating all speculative execution, resetting all internal state and so on.

      one way in which that may be possible to do in an out-of-order system that does not have such hardware-assisted in-process speculation barrier instructions is to issue about a hundred NOPs. the back-lash against doing so will be extreme, however it's not like there's much of a choice, here.

      bottom line is: this has been a major, major oversight by the entire computing industry for over 25 years. it's a problem *across the entire industry*, not just Intel, not just AMD, it's *everybody*. it's not going to be fixed in a couple of hardware revisions by one company.

  • They want GOOGLE Cloud, not FreeBSD cloud or Linus' cloud. Spectre can be mitigated by implementing a move operation that moves central data to a cloud, then fails, then recovers using self-repairing techniques, then defaults back to a node. I don't know much about hacking, though.
  • If you are executing someone else's code natively on the CPU, then it's true that it cannot be fully secured. However, if you execute (JavaScript) code through an interpreter engine rather than a JIT/dynamic-recompilation engine, then by default Spectre cannot be exploited. But this would be throwing away years of effort in making JavaScript fast so that advertisers can exploit you more easily without you noticing the slower execution speed. For this reason, the safest and simplest JavaScript

  • or more precisely, per security principal.

    I for one welcome our new architectural overlords, that is, whoever can make an efficient multi-core, multi-cache,.... multi-everything architecture, perhaps that only shares over high-level interfaces over fibre connections, or whatever.

    And I guess those high-level physical interfaces will have to include timing randomization "chaff".
  • The irony (Score:4, Interesting)

    by OneHundredAndTen ( 1523865 ) on Thursday February 21, 2019 @02:52PM (#58159566)
    And the much-maligned, all-but-dead Itanium is immune to Spectre. Fancy that.
    • Spectre affects any processor with branch prediction. Itanium has branch prediction, therefore it isn't immune.
  • Best for security if everyone backs off from multi-tenant cloud servers for now. It was a nice try, but too insecure.

  • This is the sort of thing that scares me about IoT. What happens when the processor in my desk lamp has an unpatchable hardware bug? I doubt that I'll be able to replace just the processor.

    Of course, PATCHING an IoT object has its own issues. How do I control who does the patching, and when?
