## RunCore Introduces Self-Destructable SSD 168

Posted by timothy

from the magic-smoke-that-is dept.


jones_supa writes

*"RunCore announces the global launch of its InVincible solid state drive, designed for mission-critical fields such as aerospace and the military. The device improves upon a normal SSD by offering two strategies for quickly rendering itself blank. The first method sweeps through the disk, overwriting all data with garbage. The second is less discreet and lets the smoke out of the circuitry by driving overcurrent through the NAND chips. Either can be triggered with a single push of a button, allowing a James Bond-style rapid response to the situation in the field."*
## Re:Encryption (Score:5, Interesting)

Considering the (mostly) invincible state of good encryption, this seems unnecessary. Sure, it is a fun idea, but not a practical one.

No encryption is invincible. Especially 5 years from now... Computing power has advanced to the point where you can just brute force "invincible encryption" from a few years back...

A few have pointed out that the keys are too large to brute force. I figure you ought to know why that is: http://everything2.com/title/Thermodynamics+limits+on+cryptanalysis [everything2.com]

That is a good little write-up on the subject. Short, sweet, and easy to follow. It demonstrates that non-quantum 256-bit keys are safe from brute force attacks for... ever.
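The thermodynamic argument in that write-up can be sanity-checked in a few lines. This is only a sketch: the 2.7 K background temperature, the solar luminosity, and the 10-billion-year solar lifetime are my own rough assumptions, not figures from the linked article.

```python
import math

# Landauer's limit: minimum energy to flip one bit at temperature T
k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 2.7                          # cosmic microwave background temperature, K
e_bit = k_B * T * math.log(2)    # roughly 2.6e-23 J per state change

# Energy just to *count* through all 2**256 keys
# (ignoring the work of actually testing each one)
e_total = e_bit * 2**256

# Compare with the Sun's entire energy output over ~10 billion years
sun_lifetime_output = 3.8e26 * 1e10 * 3.156e7   # W * years * s/yr

print(f"energy per bit flip:  {e_bit:.2e} J")
print(f"energy to count keys: {e_total:.2e} J")
print(f"suns required:        {e_total / sun_lifetime_output:.2e}")
```

Counting through the key space alone needs the lifetime output of on the order of ten billion suns, which is the point the write-up is making.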

Two wrenches (one esoteric, one practical): reversible computing and quantum computers.

First, the "practical" one: quantum computers. The algorithm for searching an unsorted database for a key is Grover's algorithm. It gives a speed-up to O(sqrt(N)) time with O(log N) space. For a 256-bit key that means a time complexity of 2**128 and a (base-10 log) space figure of 78. That time complexity will kill you. Move to a 512-bit key and we are back to a 2**256 time complexity (just like in the linked article), with the space figure going to 155.

That might not seem like a big deal, but adding another qubit to a quantum machine isn't trivial. In fact it is properly hard, and gets harder with every extra qubit. Also, that space complexity is a multiplier, not a count: you need something on the order of log N of them (Big O notation describes the rate of growth as things go to infinity, so small problems can be dominated by other factors until they "scale up"). Obviously even quantum computation isn't going to help crack a 256-bit key, and a 512-bit key restores the same level of security even IF such machines could be built large enough, numerous enough and fast enough to threaten the 256-bit version (LOTS of ifs, and with an easy out: as pointed out, increasing an encryption key's size is relatively trivial).
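The figures above are easy to reproduce. A minimal sketch, assuming the space figure is a base-10 logarithm of the key space rounded up (that is what makes 78 and 155 come out):

```python
import math

def grover_cost(key_bits: int):
    """Rough Grover's-algorithm cost for brute-forcing a key_bits-bit key."""
    n = 2 ** key_bits                    # size of the key space
    time_exponent = key_bits / 2         # Grover: O(sqrt(N)) = 2**(bits/2) queries
    space = math.ceil(math.log10(n))     # base-10 log of the key space
    return time_exponent, space

for bits in (128, 256, 512):
    t, s = grover_cost(bits)
    print(f"{bits}-bit key: ~2**{t:.0f} Grover iterations, log10-space ~ {s}")
```

For 256 bits this prints 2**128 iterations and 78; for 512 bits, 2**256 and 155, matching the numbers in the comment.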

Now for the one that caused me some trouble: reversible computing. That's a fancy way of saying the computation expends no energy once it has been performed and then reversed (actually, arbitrarily little energy, approaching zero as closely as you care to come... kinda. Physical devices pose practical problems, but let us set that aside for a moment). This is a theory, and a good one. The problem is that you still need to drive through all of the states.

Let us assume that a computation takes one Planck time on our perfect reversible computer (this is impossible, of course; it would be far higher even with a "perfect" device, but this is a lower bound given to us by nature). You need 1.4 * 10**16 times the current age of the universe (1.979 * 10**26 years) worth of computer time to go through all the states of a 256-bit key space; on average it takes half that to find the correct key. Now you'll want to parallelize this computer to get to that (wholly impractical) time faster. How many can you build? How large are they? I'll leave it as an exercise to the reader to determine how many you might be able to construct before they collapsed into a black hole.

Also: one Planck time is a few dozen orders of magnitude smaller than any computation done with matter can achieve. It takes 4.48 * 10**20 Planck times for a photon to pass an electron (if Wolfram Alpha is being nice to me, that is). Scale your time to be, say, the time it takes a photon to cross your theoretical perfect reversible computer, then work out how many of them you would need to crack the key within a reasonable time. You'll get a black hole, or distances beyond mortal ken.

Conclusion: Brute forcing any appreciably sized cryptographic key (512 bit or greater) will never, ever be possible no matter what happens with technology so long as computers are made of matter and compute in space. Period.

256-bit keys will remain equally unchallenged until we can create and power quantum computers the size of grains of sand, trillions at a time.

Take that, Moore's law.

## Re:Old News (Score:3, Interesting)

That reminds me of a double WD hard drive failure within a week: the main HD and its backup (this happened in a cloudless life). Amazing, they were even synchronized...

This is totally f@#$ standard. You have two drives bought at the same shop at the same time. Do you think the manufacturers took special care to mix them in with drives from different places? They aren't even just from the same batch; they were probably produced within seconds of each other.

What do you think happens when a drive fails? Some capacitor was made with the wrong chemicals; some piece of metal has impurities; some part was screwed in too tight and is weakening the rest of the structure. It's not deliberate; it may not even be outside the manufacturer's normal specifications, but it's the thing that ends up as the weakest link in your drive. The drives made just before have the same metal and the same capacitors, and were put together by the same machine, which is reaching the point where it needs calibration. They are all going to fail at about the same time.

At the point where the first drive fails, the second one suddenly gets more load (e.g. double the number of reads, or up to N-1 times the number of reads if it's part of a RAID array, before you even take into account that some idiot normally rebuilds the array before backing it up). It's extremely close to failing as it is. Rebuilding is likely to push it over the edge and kill the array.
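The correlated-failure argument above can be sketched with a toy Monte Carlo simulation. Every number here is an illustrative assumption, not field data: drives from the same batch share a nearly identical lifetime, and we ask how often the surviving mirror dies within the rebuild window after the first failure.

```python
import random

def p_double_failure(same_batch: bool, rebuild_days: float = 2.0,
                     trials: int = 100_000) -> float:
    """Fraction of trials where the second drive fails within
    rebuild_days of the first. Lifetimes in days; all parameters
    are made up for illustration."""
    hits = 0
    for _ in range(trials):
        batch_mean = random.gauss(1500, 300)     # batch-to-batch spread
        if same_batch:
            # both drives share the batch's weak point; small unit spread
            a = random.gauss(batch_mean, 20)
            b = random.gauss(batch_mean, 20)
        else:
            # first drive comes from an independent batch
            a = random.gauss(random.gauss(1500, 300), 20)
            b = random.gauss(batch_mean, 20)
        first, second = sorted((a, b))
        if second - first < rebuild_days:
            hits += 1
    return hits / trials

random.seed(1)
print("same batch:       ", p_double_failure(True))
print("different batches:", p_double_failure(False))
```

With these made-up spreads, the same-batch pair is roughly an order of magnitude more likely to die inside the rebuild window, which is the whole point of mixing serial numbers.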

Don't talk to me about the idiots at HP, IBM and every other bloody server manufacturer who put a series of ten identical drives with consecutive serial numbers in their RAID arrays and then sell it as ...

For more, start here [miracleas.com]; this [zdnet.com] also seems worth reading.