Is SSD Density About To Hit a Wall?
Zombie Puggle writes "Enterprise Storage Forum has an article contending that solid state disks will stay stuck at 20-25nm unless the materials and techniques used to design Flash drives change, and soon. 'Anything smaller and the data protection and data corruption issues become so great that either the performance is abysmal, the data retention period doesn't meet JEDEC standards, or the cost increases. Though engineers are working on performance and density improvements via new technologies (they're also trying to drive costs down), these are fairly new techniques and are not likely to make it into devices for a while.'"
Re:So... (Score:3, Interesting)
Agreed. And, I believe that 34nm is near the best they can do today, in any kind of production.
So, if you can go from a 34nm * 34nm feature to a 20nm * 20nm feature, you can almost triple the density.
So, in the same space where you can produce a 128G drive, you could then produce a roughly 384G drive, going from 34nm to 20nm.
So, if a USB keychain is produced w/ 128G, a 384G one could be produced at the same size, barring other issues.
That assumes they are even using 34nm-process flash today to produce 128G USB drives. If they are using a 40nm process, then expect 512G USB SSDs as a future possibility.
This doesn't even take into consideration stacking SSD vertically and horizontally in a RAID configuration on a drive and maximizing use of space (packaging, support chips, etc.) or making larger physical USB devices.
In the future, hardware compression, deduplication, etc., may further add to storage improvements.
My best guess? 1 Terabyte uncompressed on a keychain, eventually, assuming a 20nm process.
If they can go further than 20nm or improve in other ways, all the better.
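The parent's scaling arithmetic can be sketched in a few lines (a hypothetical helper for illustration; real capacities also depend on cell design, controller overhead, and yield):

```python
# Idealized NAND density scaling with a process shrink: cell area goes
# as (feature size)^2, so capacity at a fixed die area scales with the
# square of the ratio of old to new feature size.
def scaled_capacity(capacity_gb, old_nm, new_nm):
    """Capacity after a shrink, ignoring overhead and yield."""
    return capacity_gb * (old_nm / new_nm) ** 2

print(scaled_capacity(128, 34, 20))  # ~370 GB, "almost triple"
print(scaled_capacity(128, 40, 20))  # 512 GB
```

At 34nm to 20nm the exact factor is (34/20)^2 = 2.89, which is why "almost triple" rounds to the 384G figure above.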
Sure it might hit a wall... (Score:5, Interesting)
Re:Just lower the manufacturing cost (Score:3, Interesting)
'Just'.
It really does cost quite a lot to make flash.
For example, a fab capable of the latest geometries will set you back over a billion dollars.
This fab is only cutting edge for a year or so before it needs to be retooled, or moved down the value chain to make cheaper - less profitable - stuff.
Re:The wall, and the end of the world. (Score:1, Interesting)
The current CPU architecture will be ditched and designs will go asynchronous.
L1, L2, and L3 cache will become a conscious concern for high-performance applications.
The only things we will see the end of are the half-hearted abstractions that make no sense; programmers should know, at the CPU level, what their code does.
However, hardware abstractions are fine, and they would allow many, many threads to use every bit of that 40GHz speed.
CPUs could go back to 6500-style simplicity, and we would have thousands of them at 40GHz with 64KB of on-die cache per processor.
Then maybe a few terabytes at the L2 level and a few thousand TB of main memory
The operating system would just be a hypervisor to manage the loading of all the processes on all of the processors
There is no wall; there is only a wall in one's perception. A very simple processor @ 20Ghz with thousands of cores would kick ass.
Re:Or more likely PCM (Score:5, Interesting)
34nm is better tech than 25nm (Score:5, Interesting)
The smaller the NAND flash process size, the shorter the write endurance and data retention times. A 25nm NAND flash SSD will have a much shorter lifespan and hold data for a much shorter period of time than current 34nm tech. Does this mean that 2010 NAND flash SSDs will be better than 2011 ones? Well, I guess that depends on how much you value reliability and longevity in your storage devices. Lower cost and shorter life is a win/win for the manufacturers. This limit on NAND flash technology has been known since the start. I don't see the big deal. Just stop at 34nm and work on other technologies that are faster or scale in size better. We usually think of a smaller process size as being better, but in this case it's not.
http://features.techworld.com/storage/3212075/is-nand-flash-about-to-hit-a-dead-end/?intcmp=ft-hm-m [techworld.com]
http://hardforum.com/showthread.php?t=1492711 [hardforum.com]
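The endurance trade-off can be put in back-of-envelope terms. The P/E cycle counts and write-amplification figure below are assumptions for illustration only, not datasheet values:

```python
# Rough drive lifetime from program/erase (P/E) endurance.
# Smaller MLC nodes are widely reported to endure fewer P/E cycles;
# the specific cycle counts below are illustrative assumptions.
def lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day, write_amp=1.5):
    """Years until the flash wears out, given host writes per day and
    an assumed write-amplification factor from the controller."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_host_writes_gb / writes_gb_per_day / 365

# Same 128 GB drive, 20 GB of host writes per day:
print(lifetime_years(128, 5000, 20))  # assumed 34nm-class MLC endurance
print(lifetime_years(128, 3000, 20))  # assumed 25nm-class MLC endurance
```

Whatever the exact figures, the shape of the result is the parent's point: shrinking the cell cuts P/E endurance, and lifetime scales linearly with it.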
Re:Or more likely PCM (Score:4, Interesting)
Re:The wall, and the end of the world. (Score:4, Interesting)
I'd like to see stuff start getting tougher.
When that 2TB SSD can fall 4 stories (while in use) and carry on without even noticing, then I start getting tingles...
Re:Or more likely PCM (Score:3, Interesting)
This is not true. You need to be aware of one thing: "memristors" were not new when they were "discovered". The memory industry knew the concept years before as RRAM [wikipedia.org]. I can assure you that all other nonvolatile memory vendors are developing RRAM or are at least looking into the possibilities. Samsung had been publishing about NiO-based RRAM long before it was "discovered" again, and IBM has some interesting papers from the Zurich labs. Furthermore, there are several start-up companies looking into 3D RRAM, which may offer densities far above that of flash: Matrix Semiconductor (bought by SanDisk), and a company founded by a former Micron engineer.
One significant issue with RRAM (and the memristor) so far is that the memory cells have to be "formed". They need an initial high voltage pulse to induce the switching behaviour. This is something that is difficult to do when you have billions of memory cells. To my knowledge no good solution to this problem has been found yet, although there is progress.
Conductor pairing (Score:3, Interesting)
The effects of EM fields can be significantly reduced by conductor pairing. When two currents of equal and opposite magnitude run side by side, the EM field is almost entirely confined to a space around those conductors. This can be achieved by creating cell pairs arranged so they are side by side, but turned in opposite directions. This allows the current of one to be in the opposite physical direction of the other, when the same operation is being performed on each. Since erase and read (but not write) can always be done at the same time, this reduces the number (in the case of read) and severity (in the case of erase) of EM fields, reducing the overall effect of EM fields on adjacent inactive cells.
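The cancellation described above can be sketched with the infinite-straight-wire approximation B = mu0*I/(2*pi*r): on the line through an antiparallel pair, the net field falls off roughly as 1/r^2 instead of 1/r. The current and spacing values below are illustrative, not taken from any real device:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def b_single(current, r):
    """Field magnitude of one infinite straight wire at distance r."""
    return MU0 * current / (2 * math.pi * r)

def b_pair(current, r, d):
    """Net field of two antiparallel currents separated by d, at a
    point on the line through both wires, distance r from the midpoint
    (the two contributions point in opposite directions and subtract)."""
    return abs(b_single(current, r - d / 2) - b_single(current, r + d / 2))

# The pair's field decays ~1/r^2, so the relative suppression grows with r:
I, d = 1e-3, 50e-9  # illustrative: 1 mA, 50 nm conductor spacing
for r in (1e-6, 10e-6):
    print(r, b_single(I, r), b_pair(I, r, d))
```

At 1 micron the pair's field is already an order of magnitude below a lone conductor's, and the suppression improves with distance, which is why adjacent inactive cells see much less disturbance.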
Sometimes the Grail is found. (Score:2, Interesting)
Until 2008 the memristor was a theoretical construct - a presumed fourth element to complete the symmetry among the resistor, inductor, and capacitor. Then, in a moment, it went from theoretical to provably found, and the theory became real. It turns out it took researchers this long to find it because the effect doesn't appear at all at larger process sizes; they needed the recently achieved finer geometries to definitively observe it. Now they have found it, and it works. Since it's a new discovery that only appears at feature sizes of 50nm and below, one would presume we will need to explore new, finer lithography technologies for some time before its minimum feature size is found.
The innate nature of the technology is that it's stackable. It can exploit dimension z. That's not even debatable - it's given in the fine article. It doesn't rely on dopants embedded in the silicon, but on the junctions between metallic elements laid upon it. It is fast. Cells are analog, so it's possible to store multiple bits in a cell, to the limit of how finely the programming current can be regulated - a factor that improves over time. It's low-power, and obviously therefore low-heat. There are some thermal implications for filesystems built on this storage, which should distribute the thermal load of writing, but that's a programming issue easy to overcome.
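The multiple-bits-per-cell point reduces to bits = log2(levels), where the level count is set by how finely the programming current can be stepped. The current range and step size below are hypothetical numbers chosen only to make the arithmetic concrete:

```python
import math

def levels_from_current(full_scale_ua, step_ua):
    """Distinguishable analog levels if the programming current can be
    set in increments of step_ua over a full_scale_ua range
    (hypothetical figures, for illustration)."""
    return int(full_scale_ua // step_ua)

def bits_per_cell(levels):
    """Information capacity of one cell with that many levels."""
    return math.log2(levels)

for levels in (2, 4, 16, 256):
    print(levels, bits_per_cell(levels))  # 1.0, 2.0, 4.0, 8.0 bits

# Halving the programming step doubles the levels, adding one bit:
print(bits_per_cell(levels_from_current(100.0, 6.25)))  # 16 levels -> 4.0 bits
```

Each halving of the programming step adds exactly one bit per cell, which is why better current regulation translates directly into density.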
It's also already small. It doesn't even work at feature sizes larger than 50nm. We won't know how small a feature size it works at until we develop lithography methods with finer resolution than anything it has yet been tested at, and it could be quite some time before that happens. We're stretching the limits of ultraviolet already, and up from there are X-rays and gamma rays, which are hard to produce.
Between the three-dimensional stacking, the fine feature size, the multiple bits per cell, and the high speed of access and programming, this does look like the technology to carry us forward from flash memory, if it can be produced commercially. The partnership between HP and Hynix to implement commercial production does imply that it's coming. They've announced a plan and a schedule. One would presume their engineers are hard at work and the remaining practical questions involve laying out the memory grids to optimize performance to the interface and providing sufficient indirection to deal with inherent physical media reliability.