
Unspoofable Device Identity Using Flash Memory 145

Posted by timothy
from the double-edged-sword dept.
wiredmikey writes with a story from Security Week that describes a security silver lining to the inevitable errors that arise in NAND flash chips. By seeking out (or intentionally causing) defects in a given part of the chip, a unique profile can be created for any device using NAND flash, which the author says may be obscured, but not reproduced: "[W]e recognize devices (or rather: their flash memory) by their defects. Very much like humans recognize faces, by their defects (or deviations from the 'norm'): a bigger nose, a bit too bushy eyebrows, bigger cheeks. The nice twist is that if an attacker manages to read your device identity, he cannot inscribe it into his own device. Yes, he can create errors — like we did. But he cannot control where in the block they occur as this relies solely on microscopic manufacturing defects in the silicon."
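The fingerprinting idea in the summary can be reduced to a short sketch (the function names and block representation here are illustrative, not from the article): program a block and derive an identifier from the positions of the bits that refuse to take the programmed value.

```python
import hashlib

def fingerprint(block_reads):
    """Derive a device ID from stuck bits in a flash block.

    block_reads: repeated reads of the same block after programming it
    to all-zeros. Any bit that still reads back as 1 failed to program,
    i.e. it is a manufacturing defect, and its position becomes part of
    the fingerprint.
    """
    stuck = set()
    for read in block_reads:
        for pos, bit in enumerate(read):
            if bit == 1:
                stuck.add(pos)
    # Hash the sorted defect positions into a fixed-size identifier.
    digest = hashlib.sha256(repr(sorted(stuck)).encode()).hexdigest()
    return stuck, digest
```

An attacker who learns the digest still cannot manufacture a chip whose defects sit at exactly those positions, which is the article's central claim.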
  • by zero.kalvin (1231372) on Friday October 15, 2010 @05:02AM (#33905746)
    Just because we don't know a way, that doesn't mean it can't be done.
    • by Joce640k (829181) on Friday October 15, 2010 @05:56AM (#33905924) Homepage

      Can't you create a device emulator and emulate the defects?

      • by Nerdfest (867930)
        This translates to "It can't currently be done cheaply".
        • How cheap is some code that:

          1. reads an existing pattern
          2. outputs that pattern via the known device interface.

        • by Joce640k (829181) on Friday October 15, 2010 @07:26AM (#33906360) Homepage

          If it can be done in software then it's cheap...hackers have a lot of spare time.

          • by multisync (218450)

            If you have "spare time" you're probably not a hacker.

          • er... done before? (Score:5, Interesting)

            by dogsbreath (730413) on Friday October 15, 2010 @11:07AM (#33908314)

            What they are saying is that this hardware has a unique "biometric" and that can be used to definitively identify true chips/boards from fake. Hmmm...

            First thought that popped up is that this isn't new: floppy disks were "copy protected" using defects punched into the original disk. That didn't work out very well so why would this?

            Second thought was that biometrics have strengths and weaknesses and are not unspoofable. Why would this be any different?

            Several other things come to mind:

            1. If a h/w encoded w/o (write only) serial number is not good enough, why is this better? Is it because it is cheaper? ie: the flash mem is already there so additional gates/circuitry is not required?

            2. What happens if the h/w tech changes? ie: flash mem is no longer cheap and ubiquitous because the whole h/w base has moved to a new technology? In other words, it binds h/w verification, which we want to be a reliable long term solution, to h/w technology which is highly volatile. Probably not a good idea.

            3. There is an assumption that these defects are random. I know from experience that many things we assume to be random are actually patterned and predictable. For example: I have observed DRAM chips that power on with repeatable bit patterns that sometimes vary with production run. Highly consistent, quality controlled production runs tend to remove entropy from the product. Faults and errors occur but within well defined constraints. So... disk drives used to fail within a fairly broad standard deviation from the MTBF but now, in a storage centre with hundreds of drives, I get multiple drive failures almost all at once. The standard deviation is much narrower because the manufacturing process is so well controlled. I used to replace drives when they failed and I was confident that the spares and RAID set redundancy would be sufficient to cover the rebuild time. Now I replace drives before the point in time when I expect failures to start because I can get multiple drive failures within the disk rebuild time. The failures are random but correlated. Go figure. Fortunately, tech change often happens before pre-emptive replacement is required.

            If we base a h/w verification scheme on the randomness of some aspect of a manufactured product then the scheme is bound to the manufacturing process. If you change your process then you change the verification confidence and security. Not good to make these things dependent.

            I think that if there is a need to provide h/w verification then the scheme should be controllable and independent of h/w technology and processes. It should also be able to encode other information with it (er ... it should be extensible). Code a w/o number onto the chip that works like PGP or a cert. Forget about biometrics.
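The cert-like alternative proposed in the last paragraph could be as simple as a challenge-response against a per-device secret written once at manufacture; this is a generic sketch, not anything from the article:

```python
import hashlib
import hmac
import os

def provision_device():
    """Generate the per-device secret that would be written, once, to
    write-once storage on the chip at manufacture time."""
    return os.urandom(32)

def respond(device_secret, challenge):
    """Device side: prove possession of the secret without revealing it."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verify(expected_secret, challenge, response):
    """Verifier side: check the response against the registered secret."""
    return hmac.compare_digest(respond(expected_secret, challenge), response)
```

Unlike a defect map, this scheme survives technology changes: only the write-once property matters, not the storage physics.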

            • Outstandingly informative post. Thanks for taking the time to share your invaluable knowledge and experience!
            • by sjames (1099)

              A zillion years ago, I wrote an Int 13h intercept that could emulate floppy errors on an IBM XT. It seemed to be the obvious thing to do.

              Likewise, the solution here will be to emulate the HW errors. The more that depends on the HW "fingerprint", the more motive there will be to create such an intercept for flash chips. Once created, it won't be very expensive.

              That, of course, assumes that you can't just hack the routine that scans for the errors and cause it to report whatever you want it to or that you don

            • by jhantin (252660)

              1. A write-once serial number doesn't prevent an attacker obtaining a brand-new, unimprinted device and cloning the original device's serial number. Which cells in a block of flash memory fail early is entirely a physical process issue, and falls in the general category of intrinsic physical unclonable functions [wikipedia.org]. The entire point of a PUF is that it can't be duplicated; by definition, this means it can't be backed up, customized, or otherwise controlled, only observed.

              2. Using flash memory's bad bits as a

        • by Abstrackt (609015)

          This translates to "It can't currently be done cheaply".

          So it's reasonably secure for day to day stuff then.

      • by tepples (727027) <tepples@g[ ]l.com ['mai' in gap]> on Friday October 15, 2010 @06:26AM (#33906032) Homepage Journal
        The device emulator that you suggest would fail a Trusted Platform Module check. From the article: "run a secure boot or a reliable software-based attestation scheme".
        • by KiloByte (825081) on Friday October 15, 2010 @07:03AM (#33906232)

          If you have a working Treacherous Computing setup that you believe isn't breached, what would you want the technique in the article for? With working TC, you have all of that and more. Without TC, it can be worked around with a simple kernel patch.

          • by RichiH (749257)

            Not simple, as timing will definitely be used in a scheme like this. Still, it sounds hard, not impossible.

            • by Joce640k (829181)

              I think the hard part will be getting some quality time alone with the device you're trying to clone.

              Still ... this is just some academic paper. I doubt it will ever be used in practice.

          • by maztuhblastah (745586) on Friday October 15, 2010 @09:59AM (#33907512) Journal

            If you have a working Treacherous Computing setup that you believe isn't breached, what would you want the technique in the article for?

            Funding.

      • by nospam007 (722110) *

        Sure, 30 years ago there were floppy copy protection schemes with damaged areas ("uncopyable!", they called it); after a couple of weeks somebody had found a way to simulate the damage.
        They never learn.

    • Re: (Score:3, Insightful)

      Very true.

      It's getting almost funny how someone states that something is "unbreakable" or "uncopiable" (remember the quantum encryption stories?) and then a few months later someone finds a workaround, or some previously unthought-of method of breaking the security.

      That said, though, relying on random microscopic flaws for unique identity is very clever and would be *extremely difficult* (not impossible) to copy.

      Not saying I can do it, but I'm sure someone...somewhere...will figure out a way.

      Just my $0.02.

      -JJS

  • by migla (1099771) on Friday October 15, 2010 @05:08AM (#33905766)

    "I'm unspoofable! Not even Zeus himself could spoof me!"

  • Unspoofable? (Score:5, Insightful)

    by Anonymous Coward on Friday October 15, 2010 @05:13AM (#33905790)

    ...you mean I can't create a simple device that works as a flash drive, but every time the OS requests a bad block, it responds with an entirely fake response that just so happens to match the identity of the spoofed drive? Say, by using any low-cost prototyping board to spoof a USB interface? Or SATA interface?

    • Re: (Score:3, Interesting)

      by somersault (912633)

      wiredmikey writes with a story from Security Week where they admit "we're idiots who don't know anything about computing or, indeed, security"

      These are the sorts of guys that get publishers to buy into moronic DRM schemes..

      • My first thought was about DRM too, and from there immediately to "it'll only last for a few days"
        • Yeah. I don't know that much about security, but I think having a board on the device to digitally sign data from the drive would be better (public key cryptography type thing). At least that way you couldn't simply copy the device's signature using a program in the machine it's plugged into. If you design it right you'd have to have physical access to the internals of the device to copy its private key.

    • Re:Unspoofable? (Score:5, Insightful)

      by Thanshin (1188877) on Friday October 15, 2010 @06:05AM (#33905966)

      And the most retarded part is that just about everyone in any technical community can tell them why the idea is idiotic, useless and dangerous. I mean, there are pretty few things the internet does better than highlight your stupidity; they should learn to use that wonderful virtue.

      Can someone send them a simple email explaining how to first post their new ideas in a tiny forum so children can tell them why it won't work, before talking to the news?

    • Re:Unspoofable? (Score:5, Informative)

      by tepples (727027) <tepples@g[ ]l.com ['mai' in gap]> on Friday October 15, 2010 @06:32AM (#33906060) Homepage Journal

      you mean I can't create a simple device [...] by using any low-cost prototyping board to spoof a USB interface? Or SATA interface?

      Markus Jakobsson wrote in the article:

      No need for error-correcting codes; in fact, we will read and write "raw", which is possible since all of this will be done on OS level.

      He's talking about using raw NAND flash without a (hardware) controller, which is more than likely soldered to the motherboard. All USB flash drives have a controller performing error correction, as do all CompactFlash, SD, and Memory Stick memory cards. The only popular consumer flash storage devices that don't have a built-in controller are SmartMedia and xD-Picture cards; the controller for these is inside the camera or the USB card reader.

      • Can this be emulated - Yes

        Since this relies on flaws in the silicon - it is surely really easy to accidentally change the error profile of a device so it is no longer recognised ...

        This is the flaw in facial recognition as well: tuning it so that it does not report false negatives makes it easier to fool, too.
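That trade-off is easy to see in a toy matcher (the threshold and set representation are assumptions for illustration): the looser the tolerance for newly worn bits, the fewer false negatives, and the easier the identity is to approximate.

```python
def matches(enrolled, observed, max_new_errors=8):
    """Fuzzy match of defect-position sets, biometric style.

    Flash wears, so the observed defect set may gain positions the
    enrolled set lacks. Reject if any enrolled defect is missing (a hard
    fault should not heal), otherwise tolerate up to max_new_errors
    newly appeared defects.
    """
    if not enrolled <= observed:
        return False
    return len(observed - enrolled) <= max_new_errors
```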

        • by tepples (727027)

          Since this relies on flaws in the silicon - it is surely really easy to accidentally change the error profile of a device so it is no longer recognised

          But who is going to solder in such a TSOP flash chip? In addition, reading the "key" from a device changes its error profile because setting it back to all 1's is an erase, and an erase causes wear. It's like a vinyl record, deteriorating with each play, but that might be exactly what publishers of works of entertainment want.

          • by Belial6 (794905)
            Obviously reading wouldn't destroy the key because if it did, then the legitimate reader wouldn't be able to read the key either. Of course, even if reading doesn't put enough wear on it to really matter, your point does bring up the fact that chips do wear. This is a lock that is eventually going to fail just due to age.
          • by tftp (111690)

            But who is going to solder in such a TSOP flash chip?

            I can do it for you. Or you can buy an analog of a ZIF socket. None of that is rocket science - there are billions of TS[S]OP chips installed (and safely removed) on this planet. To give you an example, you can use a low temperature alloy that melts in hot water. This way you can install and remove TSOP chips until you die from old age, and the ICs will still work fine. They do that with 1000-ball BGAs at trade shows.

            In addition, reading the "key" f

  • by JensR (12975) on Friday October 15, 2010 @05:14AM (#33905792) Homepage

    So what? We connect another memory device through an FPGA and emulate the error pattern. At least to the extent detected by the software.

    • by Anonymous Coward

      What if one more bit goes bad during normal usage? Identity is gone, and anything tied to it will stop working. "Very much like humans recognize faces: by their defects" ... if your son had plastic surgery without your knowledge, would you fail to recognize him?

      • by jojoba_oil (1071932) on Friday October 15, 2010 @05:36AM (#33905848)

        What if one more bit goes bad during normal usage? Identity is gone, and anything tied to it will stop working. "Very much like humans recognize faces: by their defects" ... if your son had plastic surgery without your knowledge, would you fail to recognize him?

        Especially if that plastic surgery was done unintentionally just by looking at him one time too many.

      • by jgoemat (565882)
        They are relying on the fact that once a bit goes bad it will never fix itself, so it doesn't matter much if a few more errors creep in. Let's say
    • I get the impression that they would apply a specific "identity creation" process to the NAND chip, and that would bring out the inherent flaws in the chip fab process. You can apply the same process to other silicon, but you won't get the same result.

      You may well be able to emulate it using some awesome hardware, but how is that going to help if this is using your mobile phone internal memory for purchase authentication? You going to carry around your FPGA emulation rig to spoof payment authorisation?
      • by Peeteriz (821290)

        How does an external device know what defects are in my silicon? The bits flowing through the connector will tell it whatever I want.

        • by tepples (727027)

          How does an external device know what defects are in my silicon?

          The device is not external. Please see my other comment [slashdot.org].

      • but how is that going to help if this is using your mobile phone internal memory for purchase authentication?

        There are already systems in place for that: SIM cards and electronic serial numbers. Neither of those require purposefully breaking read-write memory in a way that provides no benefit over simple ROM, and both are just as "unspoofable" as this is. Not to mention that SIMs/ESNs have a much reduced chance of randomly changing the identity.

        • by tepples (727027)

          There are already systems in place for that: SIM cards and electronic serial numbers.

          Neither of which will help if you have removed the SIM card in your unlocked smartphone to use it as a PDA.

          • Neither of which will help if you have removed the SIM card in your unlocked smartphone to use it as a PDA.

            With the example of iPhone/iPodTouch, they still have EDID (basically ESN) when no SIM is present... And if you never connect the device to a network, the whole reason for having this kind of ID is moot anyway.

            I don't understand what you're trying to argue here.

    • I hear your point; the flip argument might be that security is never an absolute, but rather a question of the time it takes to break it. (Safes are in fact rated by the hours it takes to breach them.)
      One can emulate; however, the emulation is often not as time-efficient as the real thing, so I wonder if a reader could detect the timing difference of the emulation?

  • Someone could just create an emulator/interpreter to sit between the chip and the PCB. It reads the input, responds correctly for the emulated chip, and reports a good or bad bit according to what the emulated chip should contain.
  • Why? (Score:4, Insightful)

    by jgoemat (565882) on Friday October 15, 2010 @05:17AM (#33905802)
    I fail to see the utility... If the OS can be compromised, this doesn't help at all. If it can't, then why bother?
    • by amn108 (1231606)

      Precisely. And compromising an OS is in fact expected by any self-respecting computer user; it goes together with having the right to, e.g., change system files. So the second question is valid: why bother? For those with "trusted computing" systems, however, it's a whole other mixed bag.

    • Re: (Score:3, Insightful)

      by Wonko the Sane (25252)

      Stop discouraging them! Let them think their scheme is flawless so that they'll actually implement it instead of something stronger.

  • Now if. (Score:2, Interesting)

    by leuk_he (194174)

    The last line in TFA gives the problem in this scheme:

    "If we run a secure boot or a reliable software-based attestation scheme before we ID a device, we know that there is no active malware that may modify the report that results from reading the machine identity. So we know that the reading actually comes from the intended block, and that it was done correctly."

    However if this secure boot thingy is compromised you can force it to read from a virtualized memory block that contains a forged block. You ca

    • The maxim of security is that if you have physical access to the hardware then all security is pointless

      This is why you are always kept one (or more) steps away from the actual hardware in secure systems

      If you can securely clean-boot the system then you have physical access and can manually verify the system, so this is unnecessary; if you can't, then it is not reliable

      Another solution looking for a problem ....

    • Re: (Score:3, Informative)

      by Amouth (879122)

      because this "trusted" hardware will/can have a specialized chip that contains a non-tamperable key.

      It's not easy - but TPM has been proven breakable.

      http://hackaday.com/2010/02/09/tpm-crytography-cracked/ [hackaday.com]

      http://www.nzherald.co.nz/technology/news/article.cfm?c_id=5&objectid=10625082 [nzherald.co.nz]

    • What do I need the NAND for?

      This whole thing is dumb. If I had a system which already couldn't be tampered with, I wouldn't need this NAND thing. And for the NAND, I can read out all the info about the NAND that could possibly be used as a key and then replicate it in a hardware-based emulator that I attach to the board in place of the NAND, leaving the rest of the system in place so it can answer any difficult security questions that are asked.

      The NAND system adds nothing of value because it is replicable. Even

  • by Sprite_tm (1094071) * on Friday October 15, 2010 @05:26AM (#33905828)

    From what I know of flash, the 'bad bits' aren't repeatedly bad. The bad-sector-swap-out-routine in most flash drives and USB sticks will actually swap out a sector after a single read that can't be ECC-corrected, but that doesn't mean all the bits in the sector can't be written correctly ever again.

    For example, in this [ieee.org] article (IEEExplore, so paywalled for you, sorry) a generic NAND flash chip has been tested for bit-error-rates. In the 5K write cycles after an average bit has failed, it only failed to be written correctly 4 times more. That would mean that a single erase-rewrite cycle would write the complete sector without any bit errors 99% of the time: to find 'most' of the bad bits, the sector would have to be rewritten 1000s of times every time the software would want to check the fingerprint.

    Not only would that take a fair amount of time, it would also introduce new failed bits. That would mean the ID of the flash chip can only be checked so many times before the complete sector goes bad.
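Given those failure statistics, the harvesting procedure the parent describes would look roughly like this (the cycle counts and the `program_block` interface are illustrative stand-ins for real hardware access):

```python
def harvest_bad_bits(program_block, cycles=5000, min_failures=2):
    """Find marginal bits by brute repetition.

    program_block() models one erase-and-program cycle of the target
    block and returns the set of bit positions that failed to program
    this time. Because a marginal bit fails only a few times per
    thousand cycles, many cycles are needed, and every cycle adds real
    wear to the block being fingerprinted.
    """
    failures = {}
    for _ in range(cycles):
        for pos in program_block():
            failures[pos] = failures.get(pos, 0) + 1
    return {pos for pos, n in failures.items() if n >= min_failures}
```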

  • by jojoba_oil (1071932) on Friday October 15, 2010 @05:28AM (#33905834)

    So let me get this straight... They "create" an ID by writing and rewriting a bunch of bits until they start failing, then mark the whole block bad. To "read" the identity, they set all bits to 0 and see which ones are stuck at 1 and then set all bits to 1 to see which are stuck at 0. The "bad block" ID area has already been written to thousands of times intentionally. What's going to guarantee that by "reading" the bad block ID (with 2 assignments each time), we won't unintentionally be making the final write to an extra bit or two?
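The two-pass read described above can be sketched as follows (the `write_block`/`read_block` interface is hypothetical); note that every identity check costs two full program cycles of wear:

```python
def read_identity(write_block, read_block):
    """Recover the defect fingerprint from an already-worn block.

    Program every bit to 0 and note the positions stuck at 1, then
    program every bit to 1 and note the positions stuck at 0. The pair
    of position sets is the claimed device identity.
    """
    write_block(0)
    stuck_at_1 = {pos for pos, bit in enumerate(read_block()) if bit == 1}
    write_block(1)
    stuck_at_0 = {pos for pos, bit in enumerate(read_block()) if bit == 0}
    return stuck_at_1, stuck_at_0
```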

    • Re: (Score:3, Interesting)

      by redhog (15207)
      That can be fixed by using some kind of error recovery code. But I still don't see the utility of this. It's just a ROM with random content for every device.

      If all you want is random content on my machine that I send multiple times to you, it can be stored in normal undamaged flash and generated in a multitude of ways.

      If all you want is data I can't change, on my general purpose machine, sorry, that's not gonna happen - I can just swap the whole chip (or even the whole machine).

      if all you want is data I can
      • If all you want is data I can't change, on my general purpose machine, sorry, that's not gonna happen

        Then the major motion picture studios can choose not to sell or rent their works to you if you choose to use only a general-purpose machine. Some major video game developers have made similar decisions.

        • Then the major motion picture studios can choose not to sell or rent their works to you if you choose to use only a general-purpose machine.

          And, of course, no one can live without those movies...

          • by tepples (727027)
            For one thing, intentionally remaining ignorant of part of the popular culture would make me look like the Area Man Constantly Mentioning He Doesn't Own a Television [theonion.com]. For another, even if I don't buy copies of movies, enough other people do that MPAA studios' business model remains sustainable. Once enough things are blocked from general-purpose computers, the economies of scale in making and selling home computers become weaker, and home computers eventually get discontinued in favor of home-priced applian
    • by thijsh (910751)
      Yeah, this sounds more like unspoofable stupidity...
    • by hAckz0r (989977)
    Not only that, but the law of entropy demands that new errors will happen just when you need to use it online to pay your mortgage. Now try replacing it. If the premise is that you can't intentionally create an error to match a known pattern then how does one replace a "failed" identity? You simply do it like this http://www.flylogic.net/blog/?p=10 [flylogic.net] Actually, forcing an error is easy; it's getting rid of an unwanted error that is hard, and you can't prevent new errors. In other words, John Doe is toast when a
  • by thegarbz (1787294) on Friday October 15, 2010 @05:37AM (#33905852)
    Spoofing means to make a parody of or misrepresent. Spoofing does not imply that you're duplicating the original device; it means that you make others think it's the original device. You don't need to re-create the hardware errors to do this, just intercept the calls which are looking for this hardware ID, and then spoof it.

    This may be an unduplicatable ID, but it is a far cry from unspoofable.
    • by vegiVamp (518171)
      In a sense, you're mis-representing your own ID / hardware signature -- just like when you're spoofing a MAC address.
  • by airfoobar (1853132) on Friday October 15, 2010 @05:42AM (#33905872)
    Microsoft's Windows activation thing could become even more annoying in the future.
  • by betasam (713798) <betasam@@@gmail...com> on Friday October 15, 2010 @06:11AM (#33905986) Homepage Journal

    Bad blocks are inherent in NAND flash. SLC NAND Flash devices are more reliable (have fewer errors) and costly. MLC NAND Flash devices are less reliable (have more inherent errors) but are affordable and easily available. NAND Flash devices are known to progressively degrade [cyclicdesign.com] until the number of bad blocks is too high to reliably store data. Inherent errors during manufacturing increase with usage (both read and write). Most Flash Storage Devices will ultimately become too error-prone to store data. The industry might want to justify inherent errors (and gradually increasing errors) by calling it a fingerprint. They are still searching [intel.com] for techniques to make NAND Flash more reliable.

    The article fails to provide a mathematical basis to prove that two NAND flashes cannot have the same bad blocks on manufacturing or at some point of usage, thereby obscuring identity. NAND flash controllers are designed to check and resolve errors using known algorithms. Most controllers allow hardware to hide errors while allowing OS device drivers to read the NAND flash medium. The Operating System and the NAND Flash Controller are at least two points where any such fingerprint can be compromised. The Filesystem adds another layer of abstraction. The number of "real" bad blocks and remaps is usually stored on the NAND Flash. Altering the Bad Block Table is not difficult.

    Hard disks, interestingly, have similar failure rates and complex issues like data remanence which have been studied. I wonder why no one proposed a signature scheme using errors on hard drive platters to identify them; computer forensics for hard drives has a longer track record of study. Marketing FUD can be ignored.

  • There are tons of problems with this, not the least of which lies in the fact that if you have a secure boot and trusted environment, you don't really need a NAND chip to read an identity from; you can make do with a file that the user cannot remove or alter, i.e. a system file. That's what trusted would mean here. This however presents another problem - the number of users willing to use such a "trusted" system is inversely proportional to how well those users grok computers. Typically, your mildly paranoid hac

  • Driver level (Score:3, Informative)

    by SharpFang (651121) on Friday October 15, 2010 @06:52AM (#33906160) Homepage Journal

    Easy to spoof by implementing a flash memory emulation in a microcontroller: a chip that behaves like a flash chip, but in fact provides an extra abstraction layer and simulates faulty areas. Just as an HDD controller remaps faulty sectors to a free area at the end of the disk, so that from the PC's viewpoint the disk is fault-free and continuous, building a similar device for flash (which, on top of remapping bad sectors, simulates bad ones where none are present and makes them look precisely as expected) seems quite easy.
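Modelled in software, such an interposer is only a few lines (a toy model, not microcontroller firmware): it replays a stolen defect map over an otherwise healthy store.

```python
class SpoofedFlash:
    """Interposer that fakes another chip's defect pattern.

    Sits between the host and a healthy memory, and makes writes to the
    cloned device's 'bad' positions fail the same way the genuine chip's
    would, so a fingerprint check sees the stolen error pattern.
    """
    def __init__(self, size, fake_stuck_at_1):
        self.cells = [0] * size
        self.fake_stuck = set(fake_stuck_at_1)

    def program(self, pos, value):
        # A faked defect: this cell never programs to 0.
        self.cells[pos] = 1 if pos in self.fake_stuck else value

    def read(self, pos):
        return self.cells[pos]
```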

  • SafeDisc (and older DRM schemes) detected bad sectors on disks, which are hard to duplicate. On the other hand, they're very easy to emulate. This technique sounds very similar, and the fact that they haven't addressed the emulation issue makes me VERY skeptical.
    • That was my immediate thought as well. Not only were those systems easy to emulate, they also had the problem that damage would make the disk (or disc) unusable by the application long before it was actually unreadable by screwing up the pattern. As with most content protection, it didn't work and screwed legitimate consumers while not harming pirates at all, yet for some reason the idea keeps coming back with every new data medium.

  • lol (Score:3, Funny)

    by Charliemopps (1157495) on Friday October 15, 2010 @07:37AM (#33906410)
    Unspoofable? Buahahahaha!
  • Why this won't work. (Score:3, Informative)

    by gmarsh (839707) on Friday October 15, 2010 @08:22AM (#33906658)

    I'm an embedded designer, and I recently created a system which has a raw NAND flash memory chip installed on it. We've manufactured a few hundred of these already, and the majority of NAND chips come from the factory with half a dozen bad blocks marked, but I've personally seen a few NAND chips which have *no* bad blocks.

    And devices which do have bad blocks have the blocks marked as bad by programming them, so you can mark any good block as bad if you want. So there's nothing stopping me from buying a few trays of NAND, reading the bad blocks and picking out the few error-free ones, and cloning the NAND chip from one of these supposedly "unclonable because of its bad blocks" devices referred to in the original post - copying bad blocks and all.

    But you don't even have to do that.

    Even devices which *do* have bad blocks may not have hard failures in those blocks, where a bit is completely unable to be programmed or erased. And if you successfully erase a bad block, you've just marked that block as good again. So with enough program/erase cycles, you may be able to successfully make a bad block good again and hold the data you want. If not, move onto the next chip from your tray of NAND and try again.

    And you might not even have to get that 100% right, provided you don't have more than 1 bit of error per sector between the original device and the clone. The ECC will correct that bit error, and the now-cloned device (assuming it uses a proper NAND filesystem) should just encounter the bad sector, move the block and mark the previously-bad block as bad again. At this point, you may only need to buy a few NAND chips instead of a few trays in order to clone any given NAND chip.

    Now as a protection against this last idea, the device could fsck its NAND on boot and set a maximum # of new bad blocks as a tripwire for cloning protection. But if you know what that threshold is, just throw more NAND chips at the problem until you successfully program one below that threshold.
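The ECC-tolerance argument above can be stated in a few lines (the sector layout and the one-correctable-bit figure follow the comment; the rest is illustrative): a clone passes as genuine whenever its per-sector difference stays within what the ECC silently repairs.

```python
def clone_passes_ecc(original_sectors, clone_sectors, correctable_bits=1):
    """Check whether an imperfect clone reads identically to the original.

    If each sector of the clone differs from the original by no more
    bits than the ECC can correct, the error correction masks the
    mismatch and the clone is indistinguishable on read-back.
    """
    for orig, clone in zip(original_sectors, clone_sectors):
        mismatches = sum(a != b for a, b in zip(orig, clone))
        if mismatches > correctable_bits:
            return False
    return True
```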

  • by time961 (618278) on Friday October 15, 2010 @08:42AM (#33906774)
    Schemes like this are (as others have observed) pretty common, and don't address the real problem: what we "desperately need" is a trustworthy way of knowing that an automated system is acting in accord with its owner's intent. Alas, that does not seem to be on the horizon.

    I mean, suppose my computer has "secure boot" and "unspoofable identity" and "remote attestation". That's great, if my goal is to prove to the secure server at the other end of the connection that I am running various specific (albeit a priori bug-infested versions) of Windows, drivers, browsers, JVMs, etc. But that's a silly goal, because my adversary is just going to take advantage of it, by running malware on my system that looks like it's acting on my behalf (after all, it has ready access to my unspoofable identity) but is actually transferring the contents of my bank account to the Netherlands Antilles without my knowledge.

    Not to mention the general uselessness of "remote attestation" that a TPM provides: it may be able to attest to your configuration (modulo flaws in your gigabytes of software that enable attestation to be subverted or bypassed), but how on earth is the remote end going to make a meaningful decision based on the identity of hundreds or thousands of components that are attested to? Sure, it can reject known bad (flawed) components, but it's preposterous to imagine that you can know what all the bad components might be. Remote attestation is a plausible way of validating that a machine's configuration is the same as it was when it left a corporate IT department, but for making decisions about arbitrary machines in the hands of arbitrary consumers, it's useless.

    And as for this specific scheme, come on: it might be a way to identify a flash device reliably if you have the hardware in hand, but, as described, it's done in software. That's right, software, which can be made to emulate any particular configuration of bit errors it desires, without there necessarily even being a physical flash device in the picture. Yes, for limited-resource embedded systems, and environments where access timing can be inferred with high accuracy, there are tricks one can use to make such attacks difficult, but for general-purpose PCs connecting over unreliable high-latency networks? Nope... not without mountains of false alarms.

    Make no mistake: trustworthy computing is a hard problem. Unique IDs are fun to research, but not closely related to the solution.
  • by mobilityguy (627368) on Friday October 15, 2010 @08:44AM (#33906786)
    This sounds like an early 80s copy-protection scheme that depended on the bad-sector map of the installed hard drive to identify it. It was reliable because only a low-level format would change the pattern, and very few people ever did a low-level format to their drives. The scheme failed when production improved and most drives could be manufactured error-free.
  • ...I think the authors are oversimplifying way too much.

    OK, you take your flash, write it until it breaks, and use the resulting cells to determine an identity. How do you read it? Write it, then read back which cells aren't written. Uh, wait, if you read it by writing it, won't you cause more failures?

    Furthermore, flash often doesn't fail so cleanly. Some cells will simply not write to 0. However, other times, they will become leaky and read as 0, but then flip back to 1 at some later time. So the ap

  • And over time... (Score:3, Interesting)

    by aero6dof (415422) <aero6dof@yahoo.com> on Friday October 15, 2010 @11:07AM (#33908324) Homepage
    So what happens as you continue to use the flash and new error regions show up?
