Unspoofable Device Identity Using Flash Memory 145
wiredmikey writes with a story from Security Week that describes a security silver lining to the inevitable errors that arise in NAND flash chips. By seeking out (or intentionally causing) defects in a given part of the chip, a unique profile can be created for any device using NAND flash, which the author says may be obscured but not reproduced: "[W]e recognize devices (or rather: their flash memory) by their defects. Very much like humans recognize faces: by their defects (or deviations from the 'norm'): a bigger nose, a bit too bushy eyebrows, bigger cheeks. The nice twist is that if an attacker manages to read your device identity, he cannot inscribe it into his own device. Yes, he can create errors — like we did. But he cannot control where in the block they occur, as this relies solely on microscopic manufacturing defects in the silicon."
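The read-out the article describes ("program the block, see which cells stick") can be sketched in a few lines. This is a toy Python simulation, not the author's code: the seeded random defect mask stands in for real silicon, and all names are illustrative.

```python
import random

BLOCK_BITS = 4096  # illustrative block size

def make_chip(num_defects, seed):
    """Toy 'chip': a set of bit positions permanently stuck at 1."""
    rng = random.Random(seed)
    return set(rng.sample(range(BLOCK_BITS), num_defects))

def read_fingerprint(stuck_at_1):
    """Program the block to all 0s, then read back: only stuck cells
    still read 1. The positions of those cells are the fingerprint."""
    readback = [1 if i in stuck_at_1 else 0 for i in range(BLOCK_BITS)]
    return tuple(i for i, bit in enumerate(readback) if bit)

chip_a = make_chip(12, seed=1)
chip_b = make_chip(12, seed=2)
assert read_fingerprint(chip_a) == read_fingerprint(chip_a)  # stable per chip
assert read_fingerprint(chip_a) != read_fingerprint(chip_b)  # differs per chip
```

Note that real NAND cells need many program/erase cycles before they misbehave at all, a point several comments below pick up on.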
Argument from ignorance (Score:5, Insightful)
Re:Argument from ignorance (Score:5, Insightful)
Can't you create a device emulator and emulate the defects?
Re: (Score:2)
Re: (Score:2)
How cheap is some code that:
1. reads an existing pattern
2. outputs that pattern via the known device interface.
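Very cheap indeed. As a rough sketch (toy Python, all names hypothetical): record the fingerprint from the genuine device once, then answer every probe from the recording.

```python
class ReplaySpoofer:
    """Answers identity probes from a recorded defect pattern instead
    of consulting any real flash cells."""

    def __init__(self, recorded_positions):
        # recorded_positions: stuck-bit positions read once from the real chip
        self.positions = tuple(recorded_positions)

    def read_block_raw(self, block_bytes=512):
        """Return a raw block image whose only 1-bits are the recorded
        stuck cells."""
        block = bytearray(block_bytes)
        for pos in self.positions:
            block[pos // 8] |= 1 << (pos % 8)
        return bytes(block)

genuine = (7, 130, 1055, 2048, 4090)   # hypothetical stuck-bit positions
spoof = ReplaySpoofer(genuine)
image = spoof.read_block_raw()
recovered = tuple(i for i in range(4096) if image[i // 8] >> (i % 8) & 1)
assert recovered == genuine  # the verifier sees the original fingerprint
```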
Re:Argument from ignorance (Score:5, Insightful)
If it can be done in software then it's cheap...hackers have a lot of spare time.
Re: (Score:2)
If you have "spare time" you're probably not a hacker.
er... done before? (Score:5, Interesting)
What they are saying is that this hardware has a unique "biometric" and that can be used to definitively identify true chips/boards from fake. Hmmm...
First thought that popped up is that this isn't new: floppy disks were "copy protected" using defects punched into the original disk. That didn't work out very well so why would this?
Second thought was that biometrics have strengths and weaknesses and are not unspoofable. Why would this be any different?
Several other things come to mind:
1. If a h/w encoded w/o (write-once) serial number is not good enough, why is this better? Is it because it is cheaper? i.e.: the flash mem is already there, so additional gates/circuitry is not required?
2. What happens if the h/w tech changes? i.e.: flash mem is no longer cheap and ubiquitous because the whole h/w base has moved to a new technology? In other words, it binds h/w verification, which we want to be a reliable long-term solution, to h/w technology, which is highly volatile. Probably not a good idea.
3. There is an assumption that these defects are random. I know from experience that many things we assume to be random are actually patterned and predictable. For example: I have observed DRAM chips that power on with repeatable bit patterns that sometimes vary with production run. Highly consistent, quality-controlled production runs tend to remove entropy from the product. Faults and errors occur, but within well-defined constraints.

So: disk drives used to fail within a fairly broad standard deviation from the MTBF, but now, in a storage centre with hundreds of drives, I get multiple drive failures almost all at once. The standard deviation is much narrower because the manufacturing process is so well controlled. I used to replace drives when they failed, confident that the spares and RAID-set redundancy would be sufficient to cover the rebuild time. Now I replace drives before the point in time when I expect failures to start, because I can get multiple drive failures within the disk rebuild time. The failures are random but correlated. Go figure. Fortunately, tech change often happens before pre-emptive replacement is required.
If we base a h/w verification scheme on the randomness of some aspect of a manufactured product then the scheme is bound to the manufacturing process. If you change your process then you change the verification confidence and security. Not good to make these things dependent.
I think that if there is a need to provide h/w verification, then the scheme should be controllable and independent of h/w technology and processes. It should also be able to encode other information with it (er ... it should be extensible). Code a w/o (write-once) number onto the chip that works like PGP or a cert. Forget about biometrics.
Mod Parent Up, Please! (Score:2)
Re: (Score:2)
A zillion years ago, I wrote an Int 13h intercept that could emulate floppy errors on an IBM XT. It seemed to be the obvious thing to do.
Likewise, the solution here will be to emulate the HW errors. The more that depends on the HW "fingerprint", the more motive there will be to create such an intercept for flash chips. Once created, it won't be very expensive.
That, of course, assumes that you can't just hack the routine that scans for the errors and cause it to report whatever you want it to or that you don
Re: (Score:2)
1. A write-once serial number doesn't prevent an attacker from obtaining a brand-new, unimprinted device and cloning the original device's serial number. Which cells in a block of flash memory fail early is entirely a physical process issue, and falls in the general category of intrinsic physical unclonable functions [wikipedia.org]. The entire point of a PUF is that it can't be duplicated; by definition, this means it can't be backed up, customized, or otherwise controlled, only observed.
2. Using flash memory's bad bits as a
Re: (Score:2)
This translates to "It can't currently be done cheaply".
So it's reasonably secure for day to day stuff then.
Defeated by Trusted Computing (Score:4, Informative)
Re:Defeated by Trusted Computing (Score:5, Insightful)
If you have a working Treacherous Computing setup that you believe isn't breached, what would you want the technique in the article for? With working TC, you have all of that and more. Without TC, it can be worked around with a simple kernel patch.
Re: (Score:2)
Not simple, as timing will definitely be used in a scheme like this. Still, it sounds hard, not impossible.
Re: (Score:2)
I think the hard part will be getting some quality time alone with the device you're trying to clone.
Still ... this is just some academic paper. I doubt it will ever be used in practice.
Re: (Score:2, Funny)
input "Is this VM running slow? (y/n) ", runningSlow$
Re: (Score:2)
It can easily run 1000 times slower without the code knowing it.
Does NTP have authentication?
Re: (Score:2)
I assumed the attack to be done against non-TPM stuff as the technique is useless and the attack scenario different for TPM, anyway.
Re: (Score:3, Informative)
When you test for specific hardware behavior as a means of authentication, it's always a good idea to include speed measurements & checks in your code. That way, it's harder for the emulator to fake stuff. As this is common practice, an attack against this scheme would need to take care of these tests, as well.
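A sketch of such a speed check in Python (thresholds here are illustrative; real NAND timing profiles are much noisier than a single fixed window):

```python
import time

def timed_probe(read_fn, expected_ns, tolerance=0.5):
    """Run the identity read and flag responses whose latency falls
    outside the known hardware's timing window."""
    start = time.perf_counter_ns()
    result = read_fn()
    elapsed = time.perf_counter_ns() - start
    lo, hi = expected_ns * (1 - tolerance), expected_ns * (1 + tolerance)
    return result, lo <= elapsed <= hi

# With a wide-open window everything passes; the game is choosing a
# window tight enough that an emulator's extra latency falls outside it.
result, plausible = timed_probe(lambda: "device-id",
                                expected_ns=1_000_000_000, tolerance=1.0)
```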
Re: (Score:2)
When you test for specific hardware behavior as a means of authentication, it's always a good idea to include speed measurements & checks in your code. That way, it's harder for the emulator to fake stuff.
Give me an eval board of a microcontroller with a USB device interface and a week of time, and you will get all of that and more. As a matter of fact, I have an AVR32UC3A based board here that I built myself. I can plug it into a PC and it will do whatever the code in the MCU tells it to do, including
Re: (Score:2)
Please note that I did not really believe in the defect thing in the first place. I was stating that timing would be an issue. There may be weird command paths that produce unique delays and unless you tested all possible combinations of commands so you can emulate everything, you can't be sure there isn't something left.
But yah, any random dev board is a good place to start with this.
Re: (Score:2)
There may be weird command paths that produce unique delays and unless you tested all possible combinations of commands so you can emulate everything, you can't be sure there isn't something left.
That would indeed be a somewhat worthy challenge, though the hacker would just run the tool for a day or two and collect all feasible code paths and the delays associated with them. That alone wouldn't be a big deal to spoof.
However, in practice a dependency on those "unique delays" is impossible. The average life
Re: (Score:2)
Indeed, this entire concept is stupid.
What they mean is that the flash problems can't be replicated. They can trivially be spoofed.
This is like a copy-protected CD, except in this case it's like the CD is built into a USB-CDROM, so it's even easier to fake.
Yeah, it sure would be a hassle trying to get another USB-CDROM exactly like that...but, um, if you're faking hardware, you just get some hardware that, duh, says exactly the right things instead of replicating the hardware.
Re:Defeated by Trusted Computing (Score:4, Insightful)
If you have a working Treacherous Computing setup that you believe isn't breached, what would you want the technique in the article for?
Funding.
Re: (Score:2)
Sure, 30 years ago there were floppy copy-protection schemes with damaged areas ("uncopyable!" they called it); after a couple of weeks somebody had found a way to simulate the damage.
They never learn.
Re: (Score:3, Insightful)
Very true.
It's getting almost funny how someone states that something is "unbreakable" or "uncopiable" (remember the quantum encryption stories?) and then a few months later someone finds a workaround, or some previously unthought-of method of breaking the security.
That said, though, relying on random microscopic flaws for unique identity is very clever and would be *extremely difficult* (not impossible) to copy.
Not saying I can do it, but I'm sure someone...somewhere...will figure out a way.
Just my $0.02.
-JJS
Re: (Score:3, Funny)
The paternity test and court-ordered child support, however, is compelling evidence.
Re: (Score:2)
Not to mention the restraining order.
Famous last words? (Score:5, Funny)
"I'm unspoofable! Not even Zeus himself could spoof me!"
Re:Famous last words? (Score:4, Insightful)
At first it was mechanical punctures on floppies, then random laser marks on CDs, now this...
Re: (Score:2)
"All this has happened before, and all this will happen again!"
At first it was mechanical punctures on floppies, then random laser marks on CDs, now this...
Yeah, holes punched in floppies. However could that be circumvented? Har har har
Glad to see there are other old guys around.
Re: (Score:2)
I think I heard that before... (Score:2)
I think the Germans said the same thing about their Enigma machines.
Unspoofable? (Score:5, Insightful)
...you mean I can't create a simple device that works as a flash drive, but every time the OS requests a bad block, it responds with an entirely fake response that just so happens to match the identity of the spoofed drive? Say, by using any low-cost prototyping board to spoof a USB interface? Or SATA interface?
Re: (Score:3, Interesting)
wiredmikey writes with a story from Security Week where they admit "we're idiots who don't know anything about computing or, indeed, security"
These are the sorts of guys that get publishers to buy into moronic DRM schemes..
Re: (Score:2)
Re: (Score:2)
Yeah. I don't know that much about security, but I think having a board on the device to digitally sign data from the drive would be better (public-key cryptography type thing). At least that way you couldn't simply copy the device's signature using a program in the machine it's plugged into. If you design it right, you'd have to have physical access to the internals of the device to copy its private key.
Re: (Score:2)
[...] We're trying to defeat DRM --- not enhance it. [...]
Those two are the same. Any truly secure DRM renders the protected content unusable, thereby rendering the DRM useless.
Re: (Score:3, Funny)
They have been doing that on the PS3 for years.
Re: (Score:3, Informative)
This isn't so much about DRM as verifying the source of information. Similar technologies are involved, but it's not the same concept. DRM is about obscuring information to all but authorised users, while signing information is about making sure that an authorised source has written a message (or a driver for example), and anyone is free to read it.
Re: (Score:2)
I wasn't talking about public/private keys for DRM, I was talking about verifying sources of information (information which anyone is free to read). I also implied it would probably be possible to copy the private key if you had physical access to the device. With the Wii or an iPhone or whatever you wouldn't even need to have access to the private key to sign software, you would just need some way of making the device think that all sources are authorised.
Re:Unspoofable? (Score:5, Insightful)
And the most retarded part is that just about everyone in any technical community can tell them why the idea is idiotic, useless and dangerous. I mean, there are pretty few things the internet does better than highlight your stupidity; they should learn to use that wonderful virtue.
Can someone send them a simple email explaining how to first post their new ideas in a tiny forum so children can tell them why it won't work, before talking to the news?
Re: (Score:2)
It's a nice theory but if that was the case, wouldn't they get a higher level response by presenting an idea with less obvious flaws?
Or do you mean they didn't have anyone in the vicinity technically able to help them correct some of the most obvious problems?
Re:Unspoofable? (Score:5, Informative)
you mean I can't create a simple device [...] by using any low-cost prototyping board to spoof a USB interface? Or SATA interface?
Markus Jakobsson wrote in the article:
No need for error-correcting codes; in fact, we will read and write "raw", which is possible since all of this will be done on OS level.
He's talking about using raw NAND flash without a (hardware) controller, which is more than likely soldered to the motherboard. All USB flash drives have a controller performing error correction, as do all CompactFlash, SD, and Memory Stick memory cards. The only popular consumer flash storage devices that don't have a built-in controller are SmartMedia and xD-Picture cards; the controller for these is inside the camera or the USB card reader.
Re: (Score:2)
Can this be emulated? Yes.
Since this relies on flaws in the silicon, it is surely really easy to accidentally change the error profile of a device so it is no longer recognised ...
This is the flaw in facial recognition as well: to keep it from reporting false negatives, it has to be made easier to fool.
Re: (Score:2)
Since this relies on flaws in the silicon - it is surely really easy to accidentally change the error profile of a device so it is no longer recognised
But who is going to solder in such a TSOP flash chip? In addition, reading the "key" from a device changes its error profile because setting it back to all 1's is an erase, and an erase causes wear. It's like a vinyl record, deteriorating with each play, but that might be exactly what publishers of works of entertainment want.
Re: (Score:2)
Re: (Score:2)
But who is going to solder in such a TSOP flash chip?
I can do it for you. Or you can buy an analog of a ZIF socket. None of that is rocket science - there are billions of TS[S]OP chips installed (and safely removed) on this planet. To give you an example, you can use a low temperature alloy that melts in hot water. This way you can install and remove TSOP chips until you die from old age, and the ICs will still work fine. They do that with 1000-ball BGAs at trade shows.
In addition, reading the "key" f
Re: (Score:2)
Or, you can just offer substitute hardware.
Either replace the controller with one that reports to the OS what you want it to report, or replace the flash with a circuit that ACTS like flash ram but returns the error profile you want to the controller.
Sigh. We can emulate it. (Score:5, Insightful)
So what? We connect another memory device through an FPGA and emulate the error pattern. At least to the extent detected by the software.
What if one more bit goes bad during normal usage. (Score:3, Insightful)
What if one more bit goes bad during normal usage? Identity is gone; anything tied to it will stop working. "Very much like humans recognize faces: by their defects": if your son had plastic surgery without your knowledge, would you fail to recognize him?
Re:What if one more bit goes bad during normal usa (Score:5, Funny)
What if one more bit goes bad during normal usage? Identity is gone; anything tied to it will stop working. "Very much like humans recognize faces: by their defects": if your son had plastic surgery without your knowledge, would you fail to recognize him?
Especially if that plastic surgery was done unintentionally just by looking at him one time too many.
Re: (Score:2)
Re: (Score:2)
You may well be able to emulate it using some awesome hardware, but how is that going to help if this is using your mobile phone internal memory for purchase authentication? You going to carry around your FPGA emulation rig to spoof payment authorisation?
Re: (Score:2)
How does an external device know what defects are in my silicon? The bits flowing through the connector will tell it whatever I want.
Re: (Score:2)
How does an external device know what defects are in my silicon?
The device is not external. Please see my other comment [slashdot.org].
Re: (Score:2)
Re: (Score:2)
System requirements for the technology that you mention include soldering, and a lot of end users don't have the coordination to solder a TSOP [wikipedia.org] or especially a BGA [wikipedia.org] correctly.
When you say a lot of end users don't, you're implicitly admitting that not all end users don't. So by your own logic there are users who have the coordination to solder correctly. Remind me how this maintains the "unspoofable" part of this "technology" that you seem to be struggling so hard to justify.
As I said in a comment that you haven't responded to [slashdot.org], I don't understand what you're trying to argue.
Re: (Score:2)
So by your own logic there are users who have the coordination to solder correctly.
There are users who can solder, but the number of them is commercially insignificant, just as the number of people with a gaming PC connected to a TV is commercially insignificant. Providing an unlocking service is already a crime (17 USC 1201 and foreign counterparts).
Re: (Score:2)
but how is that going to help if this is using your mobile phone internal memory for purchase authentication?
There are already systems in place for that: SIM cards and electronic serial numbers. Neither of those requires purposefully breaking read-write memory in a way that provides no benefit over a simple ROM, and both are just as "unspoofable" as this is. Not to mention that SIMs/ESNs have a much lower chance of randomly changing identity.
Re: (Score:2)
There are already systems in place for that: SIM cards and electronic serial numbers.
Neither of which will help if you have removed the SIM card in your unlocked smartphone to use it as a PDA.
Re: (Score:2)
Neither of which will help if you have removed the SIM card in your unlocked smartphone to use it as a PDA.
With the example of iPhone/iPodTouch, they still have EDID (basically ESN) when no SIM is present... And if you never connect the device to a network, the whole reason for having this kind of ID is moot anyway.
I don't understand what you're trying to argue here.
Re: (Score:2)
I hear your point; the flip argument might be that security is never an absolute, but rather a question of the time it takes to break it. (Safes are in fact rated by the hours it takes to breach them.)
One can emulate; however, the emulation is often not as time-efficient as the real thing, so I wonder whether a reader could detect the timing difference of the emulation?
Emulate? (Score:2)
Why? (Score:4, Insightful)
Re: (Score:2)
Precisely. And compromising an OS is in fact an expected norm for any self-respecting computer user. It goes together with having the right to, e.g., change system files etc. So the second question is valid: why bother? For those with "trusted computing" systems, however, it's a whole other mixed bag.
Re: (Score:3, Insightful)
Stop discouraging them! Let them think their scheme is flawless so that they'll actually implement it instead of something stronger.
Now if. (Score:2, Interesting)
The last line in TFA gives the problem in this scheme:
"If we run a secure boot or a reliable software-based attestation scheme before we ID a device, we know that there is no active malware that may modify the report that results from reading the machine identity. So we know that the reading actually comes from the intended block, and that it was done correctly."
However, if this secure boot thingy is compromised, you can force it to read from a virtualized memory block that contains a forged block. You ca
Re: (Score:2)
The maxim of security is that if you have physical access to the hardware then all security is pointless
This is why you are always kept one (or more) steps away from the actual hardware in secure systems
If you can secure clean boot the system then you have physical access and you can manually verify the system so this is unnecessary, if you can't then it is not reliable
Another solution looking for a problem ....
Re: (Score:3, Informative)
because this "trusted" hardware will/can have a specialized chip that contains a non-tamperable key.
It's not easy, but TPM has been proven breakable.
http://hackaday.com/2010/02/09/tpm-crytography-cracked/ [hackaday.com]
http://www.nzherald.co.nz/technology/news/article.cfm?c_id=5&objectid=10625082 [nzherald.co.nz]
if this has a non-tamperable key (Score:2)
What do I need the NAND for?
This whole thing is dumb. If I had a system which already couldn't be tampered, I wouldn't need this NAND thing. And for the NAND, I can read out all the info about the NAND that could possibly be used as a key and then replicate it in a hardware-based emulator that I attach to the board in place of the NAND, leaving the rest of the system in place so it can answer any difficult security questions that are asked.
The NAND system adds nothing of value because it is replicable. Even
Not sure if that'd work... (Score:4, Interesting)
From what I know of flash, the 'bad bits' aren't repeatedly bad. The bad-sector-swap-out-routine in most flash drives and USB sticks will actually swap out a sector after a single read that can't be ECC-corrected, but that doesn't mean all the bits in the sector can't be written correctly ever again.
For example, in this [ieee.org] article (IEEExplore, so paywalled for you, sorry) a generic NAND flash chip has been tested for bit-error-rates. In the 5K write cycles after an average bit has failed, it only failed to be written correctly 4 times more. That would mean that a single erase-rewrite cycle would write the complete sector without any bit errors 99% of the time: to find 'most' of the bad bits, the sector would have to be rewritten 1000s of times every time the software would want to check the fingerprint.
Not only would that take a fair amount of time, it would also introduce new failed bits. That would mean the ID of the flash chip can only be checked so many times before the complete sector goes bad.
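The arithmetic behind that objection, sketched in Python using the figures quoted above (a failed bit misbehaves 4 times in the following 5,000 write cycles):

```python
# Per the figures above, a marginal bit misbehaves on any single rewrite
# with probability roughly:
p_show = 4 / 5000   # = 0.0008

def p_seen_after(n, p=p_show):
    """Chance the bad bit has shown up at least once in n rewrites."""
    return 1 - (1 - p) ** n

for n in (1, 100, 1000, 5000):
    print(f"after {n:>5} rewrites: {p_seen_after(n):.3f}")
# Even after 1000 wearing rewrites, a given marginal bit has been seen
# only ~55% of the time, so mapping 'most' bad bits takes thousands.
```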
How long do you want your ID to last? (Score:4, Insightful)
So let me get this straight... They "create" an ID by writing and rewriting a bunch of bits until they start failing, then mark the whole block bad. To "read" the identity, they set all bits to 0 and see which ones are stuck at 1 and then set all bits to 1 to see which are stuck at 0. The "bad block" ID area has already been written to thousands of times intentionally. What's going to guarantee that by "reading" the bad block ID (with 2 assignments each time), we won't unintentionally be making the final write to an extra bit or two?
Re: (Score:3, Interesting)
If all you want is random content on my machine that I send multiple times to you, it can be stored in normal undamaged flash and generated in a multitude of ways.
If all you want is data I can't change, on my general purpose machine, sorry, that's not gonna happen - I can just swap the whole chip (or even the whole machine).
if all you want is data I can
Sorry, appliances only. (Score:2)
If all you want is data I can't change, on my general purpose machine, sorry, that's not gonna happen
Then the major motion picture studios can choose not to sell or rent their works to you if you choose to use only a general-purpose machine. Some major video game developers have made similar decisions.
Re: (Score:2)
And, of course, no one can live without those movies...
Re: (Score:2)
Re: (Score:2)
If the major motion picture studios want to sell or rent their works to someone to get money for it, they better sell what the market wants, not whatever kind of spyware fantasies they themselves entertain.
The market has shown its willingness to go along with the spyware fantasies of major publishers of works of entertainment. Look at the popularity of DVD players, BD players, video game consoles, and other video-playing appliances, and the dearth of home theater PCs.
All that needs to be done is find out how to emulate the key.
And watch a copyright owner successfully sue to block distribution of the key. Do you remember what happened to Lik Sang?
Re: (Score:2)
One minute you're talking about the market supporting DRM, the next about government force doing so. Which is it?
Re: (Score:2)
And watch a copyright owner successfully sue to block distribution of the key. Do you remember what happened to Lik Sang?
One minute you're talking about the market supporting DRM, the next about government force doing so. Which is it?
What exactly did you mean by "government force doing so"? The United States government didn't force Nintendo, Microsoft, and Sony to add lockouts to all their products, but they did anyway. The market supports DRM, and the electorate supports its government backing by continuing to elect the Republican Party and the Democratic Party to the United States Congress.
Re: (Score:3)
Re: (Score:2)
Doesn't know what spoofing means. (Score:5, Insightful)
This may be an unduplicatable ID, but it is a far cry from unspoofable.
Re: (Score:2)
Re: (Score:2)
If they were alive today, they'd be connecting water mains to your Internet tubes, so you would get a splash in your face when you pull the cable on your DSL.
This post really isn't getting the attention it deserves. The Three Stooges were great and "A Plumbing We Will Go [youtube.com]" is possibly one of my favorites.
Seems like.. (Score:3, Funny)
Obscure Security and Marketing Fud? (Score:4, Informative)
Bad blocks are inherent in NAND flash. SLC NAND Flash devices are more reliable (have fewer errors) and costly. MLC NAND Flash devices are less reliable (have more inherent errors) but are affordable and easily available. NAND Flash devices are known to progressively degrade [cyclicdesign.com] until the number of bad blocks is too high to reliably store data. Errors inherent from manufacturing increase with usage (both reads and writes). Most Flash Storage Devices will ultimately become too error-prone to store data. The industry might want to justify inherent errors (and gradually increasing errors) by calling them a fingerprint. They are still searching [intel.com] for techniques to make NAND Flash more reliable.
The article fails to provide a mathematical basis to prove that two NAND flashes cannot have the same bad blocks at manufacturing or at some point of usage, thereby obscuring identity. NAND flash controllers are designed to check and resolve errors using known algorithms. Most controllers allow hardware to hide errors while allowing OS device drivers to read the NAND flash medium. The Operating System and the NAND Flash Controller are at least two points where any such fingerprint can be compromised. The Filesystem adds another layer of abstraction. The number of "real" bad blocks and remaps is usually stored on the NAND Flash. Altering the Bad Block Table is not difficult.
Hard disks interestingly have similar failure rates and complex issues, like data remanence, which have been studied. I wonder why no one has proposed a signature scheme using errors on hard drive platters to identify them. Computer forensics for hard drives has a longer track record of being studied. Marketing fud can be ignored.
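For what it's worth, under the uniform-randomness assumption the article leans on (exactly the assumption questioned above), the missing collision math is a standard birthday-style bound. A sketch with illustrative parameters:

```python
from math import comb

def p_identical(bits, defects):
    """Chance two chips with `defects` uniformly random stuck bits out of
    `bits` positions share the exact same defect set: 1 / C(bits, defects)."""
    return 1 / comb(bits, defects)

def p_any_collision(n_chips, bits, defects):
    """Union (birthday) bound on some pair colliding among n_chips;
    fine as an estimate while the result is tiny."""
    pairs = n_chips * (n_chips - 1) // 2
    return min(1.0, pairs * p_identical(bits, defects))

# Illustrative numbers: a 16 KiB block (131072 bit positions), 20 stuck
# bits per chip, a billion devices in the field:
print(p_any_collision(10**9, 131072, 20))  # vanishingly small
```

If defects are correlated by the manufacturing process rather than uniform, this bound no longer applies, which is the article's unproven step.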
A problem... (Score:2)
There are tons of problems with this, not the least of which lies in the fact that if you have a secure boot and trusted environment, you don't really need a NAND chip to read an identity from, you can make do with a file that user cannot remove or alter, i.e. a system file. That's what trusted would mean here. This however presents another problem - amount of users willing to use such a "trusted" system is inversely proportional to how many of these users grok computers. Typically, your mildly paranoid hac
Driver level (Score:3, Informative)
Easy to spoof by implementing a flash memory emulation in a microcontroller: a chip that behaves like a flash chip but in fact provides an extra abstraction layer and simulates faulty areas. It's just like an HDD controller that remaps faulty sectors to a spare area at the end of the disk, so that from the PC's viewpoint the disk is fault-free and continuous. Building a similar device for flash (one which, on top of remapping bad sectors, simulates faults where none are present and makes them look precisely as expected) seems quite easy.
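A toy Python model of that layer (a real microcontroller would do the same in firmware; every name here is illustrative): healthy storage behind it, with the target device's recorded stuck bits overlaid on every read.

```python
class DefectEmulator:
    """Serves reads from healthy storage, but ORs the target device's
    recorded stuck-at-1 bits into every read-back."""

    def __init__(self, size, stuck_at_1):
        self.backing = bytearray(size)       # healthy, fault-free storage
        self.stuck_at_1 = set(stuck_at_1)    # absolute bit positions

    def write(self, offset, data):
        self.backing[offset:offset + len(data)] = data

    def read(self, offset, length):
        out = bytearray(self.backing[offset:offset + length])
        for pos in self.stuck_at_1:
            byte_i, bit_i = pos // 8, pos % 8
            if offset <= byte_i < offset + length:
                out[byte_i - offset] |= 1 << bit_i   # fake the stuck cell
        return bytes(out)

emu = DefectEmulator(16, stuck_at_1={5, 42})  # "defects" at bits 5 and 42
emu.write(0, bytes(16))                       # host programs all zeros...
image = emu.read(0, 16)                       # ...but reads back stuck bits
assert image[0] == 0b00100000 and image[5] == 0b00000100
```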
Sounds like ancient DRM (Score:2)
Re: (Score:2)
That was my immediate thought as well. Not only were those systems easy to emulate, they also had the problem that damage would make the disk (or disc) unusable by the application long before it was actually unreadable by screwing up the pattern. As with most content protection, it didn't work and screwed legitimate consumers while not harming pirates at all, yet for some reason the idea keeps coming back with every new data medium.
lol (Score:3, Funny)
Why this won't work. (Score:3, Informative)
I'm an embedded designer, and I recently created a system which has a raw NAND flash memory chip installed on it. We've manufactured a few hundred of these already, and the majority of NAND chips come from the factory with half a dozen bad blocks marked, but I've personally seen a few NAND chips which have *no* bad blocks.
And devices which do have bad blocks have the blocks marked as bad by programming them, so you can mark any good block as bad if you want. So there's nothing stopping me from buying a few trays of NAND, reading the bad blocks and picking out the few error-free ones, and cloning the NAND chip from one of these supposedly "unclonable because of its bad blocks" devices referred to in the original post - copying bad blocks and all.
But you don't even have to do that.
Even devices which *do* have bad blocks may not have hard failures in those blocks, where a bit is completely unable to be programmed or erased. And if you successfully erase a bad block, you've just marked that block as good again. So with enough program/erase cycles, you may be able to successfully make a bad block good again and hold the data you want. If not, move onto the next chip from your tray of NAND and try again.
And you might not even have to get that 100% right, provided you don't have more than 1 bit of error per sector between the original device and the clone. The ECC will correct that bit error, and the now-cloned device (assuming it uses a proper NAND filesystem) should just encounter the bad sector, move the block and mark the previously-bad block as bad again. At this point, you may only need to buy a few NAND chips instead of a few trays in order to clone any given NAND chip.
Now as a protection against this last idea, the device could fsck its NAND on boot and set a maximum # of new bad blocks as a tripwire for cloning protection. But if you know what that threshold is, just throw more NAND chips at the problem until you successfully program one below that threshold.
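That tripwire and its counter can both be stated in a few lines of toy Python (the threshold and block numbers are hypothetical):

```python
def clone_tripwire(enrolled_bad, observed_bad, max_new=2):
    """Accept the device only if no more than max_new bad blocks have
    appeared since enrollment. The attacker's counter, as described
    above: keep burning chips until a clone lands under the threshold."""
    new_bad = set(observed_bad) - set(enrolled_bad)
    return len(new_bad) <= max_new   # True = accepted

assert clone_tripwire({3, 17}, {3, 17, 41}) is True           # one new block
assert clone_tripwire({3, 17}, {3, 17, 41, 42, 43}) is False  # tripwire trips
```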
We don't "desperately need" device identity (Score:3, Insightful)
I mean, suppose my computer has "secure boot" and "unspoofable identity" and "remote attestation". That's great, if my goal is to prove to the secure server at the other end of the connection that I am running various specific (albeit a priori bug-infested versions) of Windows, drivers, browsers, JVMs, etc. But that's a silly goal, because my adversary is just going to take advantage of it, by running malware on my system that looks like it's acting on my behalf (after all, it has ready access to my unspoofable identity) but is actually transferring the contents of my bank account to the Netherlands Antilles without my knowledge.
Not to mention the general uselessness of "remote attestation" that a TPM provides: it may be able to attest to your configuration (modulo flaws in your gigabytes of software that enable attestation to be subverted or bypassed), but how on earth is the remote end going to make a meaningful decision based on the identity of hundreds or thousands of components that are attested to? Sure, it can reject known bad (flawed) components, but it's preposterous to imagine that you can know what all the bad components might be. Remote attestation is a plausible way of validating that a machine's configuration is the same as it was when it left a corporate IT department, but for making decisions about arbitrary machines in the hands of arbitrary consumers, it's useless.
And as for this specific scheme, come on: it might be a way to identify a flash device reliably if you have the hardware in hand, but, as described, it's done in software. That's right, software, which can be made to emulate any particular configuration of bit errors it desires, without there necessarily even being a physical flash device in the picture. Yes, for limited-resource embedded systems, and environments where access timing can be inferred with high accuracy, there are tricks one can use to make such attacks difficult, but for general-purpose PCs connecting over unreliable high-latency networks? Nope... not without mountains of false alarms.
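To make the emulation point concrete: once an attacker has read a victim device's defect fingerprint, a few lines of software can overlay it on any backing storage. A hypothetical sketch (the stuck-at-0 defect model and the `{page: bit offsets}` format are assumptions for illustration):

```python
def make_spoofed_reader(defects):
    """defects: {page_index: iterable of bit offsets that read stuck-at-0}.

    Returns a page-read function that applies the recorded defect
    fingerprint to any backing data, so software querying the "flash"
    sees the victim's error pattern with no physical chip involved.
    """
    def read_page(page: int, backing: bytes) -> bytes:
        out = bytearray(backing)
        for bit in defects.get(page, ()):
            out[bit // 8] &= ~(1 << (bit % 8))  # force the defective cell to 0
        return bytes(out)
    return read_page
```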
Make no mistake: trustworthy computing is a hard problem. Unique IDs are fun to research, but not closely related to the solution.
History repeating itself (Score:5, Informative)
Nice try but (Score:2)
...I think the authors are oversimplifying way too much.
OK, you take your flash, write it until it breaks, and use the resulting cells to determine an identity. How do you read it? Write it, then read back which cells aren't written. Uh, wait, if you read it by writing it, won't you cause more failures?
Furthermore, flash often doesn't fail so cleanly. Some cells will simply not write to 0. Others become leaky: they read as 0 for a while, but then flip back to 1 at some later time. So the apparent defect pattern isn't even guaranteed to be stable over time.
And over time... (Score:3, Interesting)
Re: (Score:2)
Controllerless flash (Score:2)
With regard to the actual flash ID technique, I thought any decent flash device (e.g. all SD cards) would have a wear-leveling controller sitting between the host and the raw cells, so how would software ever see the physical defects?
Markus Jakobsson wrote in the article:
No need for error-correcting codes; in fact, we will read and write "raw", which is possible since all of this will be done on OS level.
It appears he refers to raw NAND flash chips without a dedicated hardware controller in front, soldered directly to the motherboard. These chips don't behave like an SD or CF or SATA or USB device; they're more like xD-Picture cards.
Re: (Score:2)
It means that I control the hardware (motherboard) which does the identity checking - and for any practical purpose of identification, it means that I can force it to claim that "yes, I have a NAND chip with xxxx pattern" to anyone who wants to identify my hardware - say, a software program installer or a networked device. It's just like the processor ID numbers, which were embedded in the silicon and 'unchangeable'.
It's just like spoofing a hardware dongle - since the consumer OS isn't "trusted", almost anything it reports can be faked in software.