Security Encryption Hardware

Stealthy Dopant-Level Hardware Trojans 166

DoctorBit writes "A team of researchers funded in part by the NSF has just published a paper in which they demonstrate a way to introduce hardware Trojans into a chip by altering only the dopant masks of a few of the chip's transistors. From the paper: 'Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against "golden chips."' In a test of their technique against Intel's Ivy Bridge Random Number Generator (RNG), the researchers found that by setting selected flip-flop outputs to zero or one, 'Our Trojan is capable of reducing the security of the produced random number from 128 bits to n bits, where n can be chosen.' They conclude that 'Since the Trojan RNG has an entropy of n bits and [the original circuitry] uses a very good digital post-processing, namely AES, the Trojan easily passes the NIST random number test suite if n is chosen sufficiently high by the attacker. We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests. The higher the value n that the attacker chooses, the harder it will be for an evaluator to detect that the random numbers have been compromised.'"
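The core of the result is that a tiny amount of true entropy, once run through strong post-processing, is statistically indistinguishable from the real thing. A minimal Python sketch of that effect (not the paper's code: SHA-256 in counter mode stands in for the chip's AES whitening, and a 32-bit seed stands in for the n bits the Trojan leaves):

# Sketch: a stream whose entire entropy is a 32-bit seed, whitened by a
# cryptographic function, still looks random to a simple statistical test.
import hashlib
import os
import secrets

def trojaned_stream(n_bits: int, num_bytes: int) -> bytes:
    """Output stream whose only entropy is an n_bits-wide seed."""
    seed = secrets.randbelow(2 ** n_bits).to_bytes(16, "big")
    out = bytearray()
    counter = 0
    while len(out) < num_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:num_bytes])

def monobit_balance(data: bytes) -> float:
    """Fraction of 1 bits; a healthy stream sits very close to 0.5."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (len(data) * 8)

if __name__ == "__main__":
    weak = trojaned_stream(n_bits=32, num_bytes=1_000_000)
    good = os.urandom(1_000_000)
    print("trojaned (32-bit entropy):", monobit_balance(weak))
    print("healthy reference:        ", monobit_balance(good))
    # Both print ~0.500, so the test cannot tell them apart, yet the "weak"
    # stream could be brute-forced by trying only 2^32 seeds.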
This discussion has been archived. No new comments can be posted.

  • Fascinating... (Score:2, Insightful)

    by CajunArson ( 465943 )

    So all the NSA needs to do is kidnap your chip, microscopically re-dope it, and shove it back in your computer without you noticing!

    Phew... I'm glad there are absolutely no simpler ways for the NSA to spy on us than re-doping chips! I'll just superglue mine into the socket so I know I'm safe.

    • by Anonymous Coward

      Silicon is just politics by other means. So presume both the Chinese and the West are trying to flood supply channels with compromised/counterfeit silicon in hopes of it finding its way into the other side's hardware.

    • by h4rr4r ( 612664 )

      Why would they bother with that, when they can have someone working at the fab do it?

    • Re:Fascinating... (Score:5, Insightful)

      by Anonymous Coward on Friday September 13, 2013 @09:36AM (#44839983)

      NSA? Probably not. The Chinese chip fab that has been known to have a third shift and has full access to masks and such? Certainly.

      The NSA isn't the only agency wanting to know everything a person does.

    • Why? So many other avenues of attack. Don't bring out the silliest arguments and expect us to debate them from an extremely silly point of view.

    • Re:Fascinating... (Score:4, Interesting)

      by omnichad ( 1198475 ) on Friday September 13, 2013 @09:42AM (#44840025) Homepage

      All they need to do? It's already been done at the fab! Why else would this be coming out now? These researchers have been under a gag order for years and only now got bold enough to stand up to the NSA.

      Opinions above are exaggerated for entertainment purposes only

    • So all the NSA needs to do is kidnap your chip, microscopically re-dope it, and shove it back in your computer without you noticing!

      They could have a batch of compromised chips and replace the one in your computer.

      Would you ever know? I really doubt it.

      • So all the NSA needs to do is kidnap your chip, microscopically re-dope it, and shove it back in your computer without you noticing!

        They could have a batch of compromised chips and replace the one in your computer.

        Would you ever know? I really doubt it.

        The fact that Windows wants you to reactivate would be your first clue.

        • That only works if you've replaced enough other stuff in your computer, that the compromised chips don't have a code that the compromised Windows is programmed to ignore, that you didn't buy a compromised chip in the first place, etc...

          Also, if I'm bothering with custom compromised chips, I might just have the CPU ID be reprogrammable on them, and bring with me a device capable of reading the code from the removed CPU and burning it into the replacement.

          In reality though, they'll just use an unadvertised zer

        • The fact that Windows wants you to reactivate would be your first clue.

          You think the NSA would go to all that trouble but not have a valid Windows activation key...?

  • by Overzeetop ( 214511 ) on Friday September 13, 2013 @09:01AM (#44839685) Journal

    Can an entire three-letter-agency get a corporate hard-on? 'Cause if they can, this gave our favorite one the biggest boner in the known universe.

    • Can an entire three-letter-agency get a corporate hard-on? 'Cause if they can, this gave our favorite one the biggest boner in the known universe.

      On the contrary... more likely, either the NSA or the Chinese (or both!) will read this and say "Crap! They figured it out!"

      If it's the NSA, we'll see some new laws passed soon giving them broad new secret vetoing power over publishing in scientific journals.

      • by JanneM ( 7445 )

        If it's the NSA, we'll see some new laws passed soon giving them broad new secret vetoing power over publishing in scientific journals.

        How would you know they don't have that already?

  • by stewsters ( 1406737 ) on Friday September 13, 2013 @09:01AM (#44839689)
    I would guess that an intelligence agency figured this out a few years ago. One that can plant moles at Intel. That's why they also want to remove rdrand from Linux.
    http://linux.slashdot.org/story/13/09/10/1311247/linus-responds-to-rdrand-petition-with-scorn [slashdot.org]
    • by Anonymous Coward on Friday September 13, 2013 @09:43AM (#44840037)

      If I were a disgruntled member of the intelligence industrial complex and knew that this was actually being done by a government agency, and I did not relish the thought of a Russian sabbatical, couldn't I surface the news by telling researcher friends of mine how to do it?

  • If you modify a chip, you can make it behave differently?

    What's the news here please?

  • There are easy numeric methods for determining how random data is. Optical inspection would be unnecessary to discover this modification. You might even get away with generating a few megabytes of data, zipping it, and then comparing the resulting compression ratio to that of a known good chip.
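A rough sketch of that compression check (zlib standing in for "zipping it"); as the reply below points out, it only measures how random the output looks, not how it was produced:

# Compare how well zlib can shrink a random-looking stream versus an
# obviously patterned one. Random-looking data does not compress.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    return len(zlib.compress(data, 9)) / len(data)

sample = os.urandom(4 * 1024 * 1024)                    # stand-in for chip output
patterned = bytes(i % 251 for i in range(4 * 1024 * 1024))

print("random-looking:", round(compression_ratio(sample), 3))     # ~1.0
print("patterned:     ", round(compression_ratio(patterned), 3))  # well below 1.0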

    • by Anonymous Coward on Friday September 13, 2013 @09:24AM (#44839893)

      There are easy numeric methods for determining how random data is.

      Actually, no. Technically speaking, there is no such thing as random data, only a random process. You can certainly test how random a data stream seems, but if the data source is a black box, you never really know.

      From TFS:

      Since the Trojan RNG has an entropy of n bits and [the original circuitry] uses a very good digital post-processing, namely AES, the Trojan easily passes the NIST random number test suite if n is chosen sufficiently high by the attacker. We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests.

      What if your black box is just feeding you encrypted bits of pi? You would never know, but the black box's maker could trivially reproduce your "random" numbers.

      • Oh, you mean like RSA tokens and the seed files? :P

      • The NIST 800-22 suite has bit-length parameters. The article doesn't indicate whether it passed the 128-bit NIST test after they reduced the entropy to 32 bits, only that it passed *some* NIST test. From another poster it seems the standard parameters used for the NIST test may not be sufficient to verify that the PRNG exhibits the level of entropy people are relying on it to exhibit. The LavaRnd folks pass a billion-bit NIST test, so it is possible to run longer versions of the test. If the reduced e

        • It would have to be based on statistical analysis, which means it isn't a proof; it is demonstrated to a confidence level. How confident do you need to be?

          Secondly, properly evaluating a greater number of bits of entropy is going to require a larger sample, and I expect this grows exponentially. How much time do you have to reach your confidence?

          The testing would be balancing those two questions, but in no case could an absolute answer be found.

          But, from the horse's mouth:

          The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.

          Random Number Generation [nist.gov]

          I

      • As a person who has worked in semiconductors since the first SSI 7400, I can say for certain that many things have been done and there are some really talented people who can do things that -almost- defy reason. I know that engineers put their own little signatures in ASICs and that some engineers are far more competent than can be understood by most. I have seen many circuits that were situationally controlled or externally controlled by means that would not be obvious without an understanding of the phy
      • by jhol13 ( 1087781 )

        Actually, no. Technically speaking, there is no such thing as random data, only a random process.

        Actually, there is random data. That is, data generated by a random process.

        Unsurprisingly, there are quite a few different tests which can determine, or perhaps "predict the chance," whether some data was produced by a random process (i.e., is random) or not. The easiest for a layman is to try to compress it: with overwhelmingly high probability, random data of sufficient size won't compress.

        • by jhol13 ( 1087781 )

          (Sorry for screwing the quote ... not the first time ... apparently my brain is a random process)

        • by vux984 ( 928602 )

          Actually, there is random data. That is, data generated by a random process.

            I build two boxes:

          The first produces its data stream by a random process.
          The 2nd box, as its process, copies the data from the first box.

          Any test that would grade the first data stream as random would grade the 2nd data stream as random.

          The 2nd data stream is not random, as the owner of the first box can tell you, in advance, what every output of the 2nd box will be.
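A toy version of the two-box argument (os.urandom standing in for the random process):

# Box 1 draws from a random process; box 2 just replays box 1's log. Any
# statistical test sees the exact same bytes and so grades both streams
# identically, even though box 2 is fully predictable to whoever holds the log.
import os

box1_log = os.urandom(1_000_000)   # "random process"

def box1():
    yield from box1_log

def box2():                        # deterministic replay of box 1's output
    yield from box1_log

def ones_fraction(stream) -> float:
    data = bytes(stream)
    return sum(bin(b).count("1") for b in data) / (len(data) * 8)

print(ones_fraction(box1()), ones_fraction(box2()))  # identical scores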

          • by jhol13 ( 1087781 )

            Are you really claiming that exactly the same data can be, mathematically speaking, both random and non-random?

            • by vux984 ( 928602 )

              I am claiming that the same data can be produced by a random process or a non-random process.

                Therefore one cannot merely examine the data to determine if it's truly random. One MUST examine the process.

              • by jhol13 ( 1087781 )

                In that case we get to the philosophical question: is there anything "truly" random? Certainly no process describable by mathematics is.

                • by vux984 ( 928602 )

                  We could but we don't have to.

                  Box 1 is random to the best of our ability. Sure, we can discuss the philosophical question of free will vs. determinism and absolute cause and effect, and whether or not something can be truly random.

                  But we can agree that right now, nobody has the faintest idea what's going to come out of box 1 next.

                  Box 2 isn't random at all. It runs in lock step to box 1. Anyone with access to box 1 knows what's going to come out of box 2.

  • This should still be detectable. It just requires more time. One could also reduce the time by looking at the combined output of an entire batch of chips. If they all have the same mask, they will all produce the same reduced set of random numbers. So one additional meta-test of data from a lot could show they have been compromised.
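A sketch of that batch meta-test, under the assumption that the Trojan pins the whitening key so each chip can only ever emit one of 2^n distinct output blocks; here n is shrunk to 16 and the "chips" are simulated so the demo runs instantly:

# If a trojaned mask leaves only 2^n possible seeds and a fixed key, then
# outputs collide across a batch of chips far sooner than the birthday bound
# for a healthy 128-bit generator would allow.
import hashlib
import os
import secrets

N_BITS = 16          # hypothetical residual entropy per chip (toy value)
CHIPS = 2000

def chip_output(trojaned: bool) -> bytes:
    if trojaned:
        seed = secrets.randbelow(2 ** N_BITS).to_bytes(4, "big")
        return hashlib.sha256(b"fixed-whitening-key" + seed).digest()[:16]
    return os.urandom(16)

for label, trojaned in (("trojaned batch", True), ("healthy batch", False)):
    outputs = [chip_output(trojaned) for _ in range(CHIPS)]
    dupes = len(outputs) - len(set(outputs))
    print(label, "- duplicate outputs across chips:", dupes)
# The trojaned batch shows duplicates (2000 draws from only 65536 possible
# blocks); a healthy batch of 128-bit outputs essentially never does.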
    • Tell me, what hardware will you test the chips with?

      You are now aware that the infamous Ken Thompson Compiler / Microcode Hack was well known to the government before he pontificated on it during his ACM acceptance speech / paper. [bell-labs.com]

      Acknowledgment
      I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.

      Which was published in the very apt year of 1984, I might add...

      Tell me, indeed, how exactly would you select the chips that did not already have such a modification for comparison? Oh, it should take more time indeed, but far more than you realize. Get out your Oscillosc

  • Since the Ivy Bridge random number generator is supposedly "unauditable", how are these researchers able to prove anything about re-doping a black-box design? Shouldn't they just look at it and spot the massive array of transistors that spells out "NSA BACKDOOR UNIT" instead of having to worry about all this subterfuge?

    • by h4rr4r ( 612664 )

      What do you mean unauditable?
      Do you mean inconvenient to audit? It might take a long time but there are methods to check how good the random number generator is.

      • by ssam ( 2723487 )

        No, there aren't. The digits of pi have no pattern other than being the digits of pi, so they will pass randomness tests. A good pseudo-random number generator will pass randomness tests, but can be easily reproduced if you know the starting seed. Also, putting a simple sequence (1, 2, 3, 4...) through an encryption algorithm will give you an output that passes randomness tests.
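A quick illustration of that last point (SHA-256 standing in here for "an encryption algorithm"): the sequence 1, 2, 3, 4... is maximally predictable, yet its hashed image has an essentially flat byte distribution:

# Chi-square over byte values: huge for the raw counter, unremarkable for
# the hashed counter (255 degrees of freedom, so ~255 is "random-looking").
import hashlib
from collections import Counter

def byte_chi_square(data: bytes) -> float:
    expected = len(data) / 256
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

raw = b"".join(i.to_bytes(4, "big") for i in range(100_000))
hashed = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest()
                  for i in range(100_000))

print("raw counter bytes:", round(byte_chi_square(raw)))     # enormous
print("hashed counter:   ", round(byte_chi_square(hashed)))  # roughly 200-310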

    • I thought we already covered this in the Linux rdrand story. It's called unauditable because it whitens the raw entropy output using encryption on chip, making even quite non-random source data appear random. It is not called unauditable because it's a black-box design. The paper states that the design is very well known.

      The attack described in this paper is to modify both the entropy source output "c" and the post-processing encryption key "K", undetectably setting a fraction of them to constant b

      • I looked at the paper from CRI; they apparently did do testing on the raw (pre-whitening) entropy source on test chips that give direct access to it. Unfortunately the goal of that audit was to build confidence in the general design; the NSA wasn't an issue when that was done.

        What I take away from this is - the good news is, the RDRAND circuitry has an open, well documented design which is apparently robust. Thus, if we can obtain confidence that it's not backdoored by the NSA, it's a great feature to have.

  • I wonder if it's possible for an attacker to mess with microcode in such a way as to trojan things like random number generation, without having any other effects that would be more easily noticed. It doesn't seem likely.

    Of course, true RNG depends on things like timing keystrokes, mouse clicks, network packets, etc. The LSBs of such times probably aren't used for anything else, and could thus be tampered with more easily.

    It's pretty hard to get reliable crypto when your adversaries are the SIGINT arms of s

  • Linux uses the Ivy Bridge random number generator in the kernel, along with other sources of randomness.

    That makes it OK, because as everyone knows, mixing the other sources with a predictable string makes the output even more random!

    Didn't Linus completely settle [slashdot.org] this issue?

    • by gweihir ( 88907 )

      Also notice that this attack does not make RdRand unusable. It still gives you some bits of entropy per output value, just a lot less than expected. However, if you expect nothing or very little, the output is good even in the compromised version. And for various reasons, RdRand has a lot less entropy than 1 bit per output bit anyway (theoretically as low as 1 bit per 512 bits), so hashing it together large-scale is necessary in any case (I bet many people overlooked that little gem...).

    • No. What this story means is that if you want to write trustworthy code, you have to make your own IC chips.
  • by RichMan ( 8097 ) on Friday September 13, 2013 @09:51AM (#44840121)

    These parts would not pass the standard verification process and would be rejected from being assembled into devices.
    Standard testing of ICs for functional faults includes a scan process. Per the design specification the part was supposed to be built to, a number of scan vectors are passed through the device. These scan vectors check as much of the device as possible. The goal is to check every flop and every logic path between flops. The tests are designed to detect manufacturing errors and can find single faults in devices.
    Typical errors are stuck-at-1 or stuck-at-0 faults, as well as shorts, and scanning would easily expose modifications of this sort, especially at a scale large enough to radically change things.

    • by return 42 ( 459012 ) on Friday September 13, 2013 @10:29AM (#44840467)

      Sigh.

      "Hello, Intel. Under the terms of this national security letter, you must change your verification software to ignore certain errors. The engineers who carry out this order must not reveal anything about this. Anyone who does will be subject to a term of incarceration not exceeding..."

      Tell me why this would not happen.

      • Because national security letters can only be used to request information.

    • by ssam ( 2723487 )

      So intel runs a scan to check that the random number generator gives the correct output?

      Well, that settles it.

      • by RichMan ( 8097 )

        1) Computer-generated "random numbers" of the type this covers are fully state-to-state defined; they are not random in any way. To make them random you need to seed the initial state and then reduce the output.
        2) The automated scan check works bit by bit on the logic; it does not care that 64 bits make up a random number. It looks at the logic-cone input for every single bit independently and verifies the functionality. This is done to make sure all the logic works.

  • I doubt that an altered chip would pass BIST [siliconfareast.com] testing.

  • by hormiga ( 600498 ) on Friday September 13, 2013 @10:08AM (#44840293)

    Given Hanlon's razor, an accidental, rather than malicious, error in doping would be even more likely. If the chip were inadvertently doped incorrectly, it would pass visual inspections and even software tests without awareness of the defect. How many defective dice, not merely with RNGs but also with other circuits, are already in service due to inspection failures?

    Although this paper shows how insidious a threat from a well-funded adversary might be, even more it shows the need for more comprehensive inspection mechanisms to discover misdoping which might go undetected by existing standard procedures.

    BTW, the paper includes a well written and readable introduction to the context of the problem. Good job.

    • For us uninformed, please define doping.

      • Re: (Score:3, Informative)

        by hormiga ( 600498 )

        In semiconductor manufacturing, doping is the introduction of slight amounts of impurities into a semiconducting material, to create a condition of surplus or deficit electrons. Donors such as arsenic and phosphorus add electrons, creating n-type semiconductors, while acceptors such as boron and aluminum cause a deficit of electrons, making a p-type semiconductor. The terms surplus and deficit are relative to a state where all of the atomic orbitals are filled and the semiconductor has almost no conductivit

    • by floodo1 ( 246910 )
      hard and fast rules are always wrong.
    • A misdoping would light up the equipment alarms, in-line electrical tests, and end-of-line electrical tests (both on the chips themselves and on special test regions in the lines between the chips). Doping is performed relatively early in the manufacturing process, and Intel et al. know just how big a risk a misdoping is and test for it extensively in-line. This is because if you only catch it at the end of the line you potentially have hundreds of millions of dollars' worth of product to scrap because from the 20

      • by hormiga ( 600498 )

        I would agree almost all the time. An error in doping, not being selective, would likely be obvious, because it would affect the other components on the same layer.

        However, there is a small amount of boutique production which is done almost by hand, and more subject to errors. The chips are usually less complex, and given the right kind of circuit (such as the RNG from the paper) errors are more likely to slip through, especially if the circuit were to be confined, by itself, to layers not used in the inter

  • Then we can buy them from fabs that we trust, and they will have to more explicitly compete on the issue of trust.

    There is also some possibility that buyers could inspect the manufacturing processes.

    Anomalies in other computational functions are less of a concern, IMHO, because any environment with a mix of CPUs and chipsets should reveal tainted chips at least occasionally. Random number generation is an exception here.

  • by gweihir ( 88907 ) on Friday September 13, 2013 @11:31AM (#44841077)

    This can only be used for attacks on things that can be compromised in a way such that they do not need to perform their original function perfectly anymore. A CPRNG is an ideal target, as it does not need to produce good _and_ bad numbers after the attack; it is sufficient if it produces bad numbers that look good. The AES whitener in the CPRNG this was demonstrated on makes this very easy, and while it looks convenient, it may have been put in there exactly to make compromised versions of this CPRNG hard to detect. On the other hand, if you attacked, say, a hash function or a block cipher in this way, it would start producing wrong outputs, potentially for a large number of cases; not only would it fail at its original function, this would also be pretty obvious.

    Still, this is a significant attack and underlines why a single source of entropy should never be fully trusted and that CPRNGs should always be open software and use multiple entropy sources that get mixed.
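A sketch of that mixing advice: hash every available source into a pool, so the result is at least as unpredictable as the best input even if one of them (say, a hardware RNG) turns out to be backdoored. os.urandom stands in here for the untrusted RDRAND output:

# Conservative mixing: a compromised input can't weaken the pool below the
# strength of the other inputs, because everything goes through one hash.
import hashlib
import os
import time

def mixed_seed(hw_rng_bytes: bytes) -> bytes:
    """hw_rng_bytes: output of a hardware RNG we do not fully trust."""
    pool = hashlib.sha512()
    pool.update(hw_rng_bytes)                            # possibly compromised
    pool.update(os.urandom(64))                          # OS entropy pool
    pool.update(time.monotonic_ns().to_bytes(8, "big"))  # timing jitter
    return pool.digest()[:32]

seed = mixed_seed(hw_rng_bytes=os.urandom(64))   # stand-in for RDRAND output
print(seed.hex())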

  • I don't believe the authors attacked the Ivy Bridge RNG in the way described. They described a way; they didn't do it.

    Why?
    1) They show a plot of a DFFR_X1. This is a normal D-type flip-flop you would find in Synopsys libraries and many other libraries you would use in an SoC process. These are not the flops used in the Ivy Bridge DRNG. Also, the plot was from a layout program, not a micrograph.

    2) The proposed attack required an average of 2.1 billion attacks (fixing k and v until you hit the right CRC). I do

  • to create a verifiable fast RNG. There may be other parts of the kernel that can be optimized with some HW acceleration.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...