Data Storage Technology

Is SSD Density About To Hit a Wall?

Zombie Puggle writes "Enterprise Storage Forum has an article contending that solid state disks will stay stuck at 20-25nm unless the materials and techniques used to design Flash drives change, and soon. 'Anything smaller and the data protection and data corruption issues become so great that either the performance is abysmal, the data retention period doesn't meet JEDEC standards, or the cost increases.' Though engineers are working on performance and density improvements via new technologies (they're also trying to drive costs down), these are fairly new techniques and are not likely to make it into devices for a while."
  • Re:So... (Score:3, Interesting)

    by mikehoskins ( 177074 ) on Saturday September 18, 2010 @07:50PM (#33623102)

    Agreed. And I believe that 34nm is about the best they can do in any kind of production today.

    So, if you can go from a 34nm * 34nm feature to a 20nm * 20nm feature, you can almost triple the density.

    So, in the same space where you can produce a 128G drive today, you could then produce a roughly 384G drive, going from 34nm to 20nm.

    So if a 128G USB keychain can be produced today, a 384G one could be produced at the same size, barring other issues.

    That assumes they are even using a 34nm process today to produce 128G USB flash drives. If they are using a 40nm process, then expect 512G USB drives as a future possibility.

    This doesn't even take into consideration stacking SSD vertically and horizontally in a RAID configuration on a drive and maximizing use of space (packaging, support chips, etc.) or making larger physical USB devices.

    In the future, hardware compression, deduplication, etc., may further add to storage improvements.

    My best guess? 1 Terabyte uncompressed on a keychain, eventually, assuming a 20nm process.

    If they can go further than 20nm or improve in other ways, all the better.
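
    A quick sanity check of the scaling arithmetic above (just a sketch; the 128G starting capacity, the 34nm/40nm starting nodes, and the 20nm target are the figures from this comment, not vendor data):

        # Planar density scales roughly with the square of the feature size,
        # so capacity at constant die area goes up by (old_nm / new_nm) ** 2.

        def scaled_capacity(capacity_gb, old_nm, new_nm):
            """Capacity that fits in the same area after a process shrink."""
            return capacity_gb * (old_nm / new_nm) ** 2

        print(scaled_capacity(128, 34, 20))  # ~370 GB -- roughly the "384G" above
        print(scaled_capacity(128, 40, 20))  # 512 GB if today's parts are 40nm
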

  • by gman003 ( 1693318 ) on Saturday September 18, 2010 @07:57PM (#33623128)
    But who says the wall is going to win that collision? I've seen it time and time again: a problem is encountered, and dealt with. Optical disc rotation speed. Parallel data buses. Processor clock speeds. They all hit a wall, and we got around that wall. We shortened the wavelength of the laser instead of going to 56x CDs. We switched to serial buses when parallel encountered clocking issues. We switched to multicore processors when we couldn't keep upping the gigahertz. I'm fully confident we'll figure out a solution to this problem as well, whether it be new manufacturing techniques, memristors, or just larger Flash chips.
  • by queazocotal ( 915608 ) on Saturday September 18, 2010 @08:07PM (#33623174)

    'Just'.
    It really does cost quite a lot to make flash.
    For example, a fab capable of the latest geometries will set you back over a billion dollars.

    That fab is only cutting-edge for a year or so before it needs retooling or moves down the value chain to make cheaper, less profitable, stuff.

  • by Anonymous Coward on Saturday September 18, 2010 @08:14PM (#33623220)

    The current CPU architecture will be ditched and will go asynchronous.
    L1, L2, and L3 cache will become a conscious thing for high-performance applications.
    The only things we will see the end of are the half-hearted abstractions that make no sense; programmers should know at the CPU level what their code does.
    However, hardware abstractions are fine, and would allow many, many threads to use every bit of that 40GHz speed.
    CPUs could go back to 6500-style simplicity, and we would have thousands of them at 40GHz with 64KB of on-die cache per processor.
    Then maybe a few terabytes at the L2 level and a few thousand TB of main memory.
    The operating system would just be a hypervisor to manage the loading of all the processes on all of the processors.
    There is no wall; there is only a wall in one's perceptions. A very simple processor at 20GHz with thousands of cores would kick ass.

  • by cheesybagel ( 670288 ) on Saturday September 18, 2010 @08:15PM (#33623230)
    Phase-change memory... Oh dear. I still remember when it was being pushed as Ovonic Unified Memory (OUM), or chalcogenides. I certainly hope Samsung and the usual suspects can get this to work. But it has been a long time in coming. Well, maybe not as long as MRAM, but still...
  • by 0111 1110 ( 518466 ) on Saturday September 18, 2010 @08:59PM (#33623452)

    The smaller the NAND flash process size, the shorter the write endurance and data retention times. A 25nm NAND flash SSD will have a much shorter lifespan and hold data for a much shorter period of time than current 34nm tech. Does this mean that 2010 NAND flash SSDs will be better than 2011 ones? Well, I guess that depends on how much you value reliability and longevity in your storage devices. Lower cost and shorter life is a win/win for the manufacturers. This limit on NAND flash technology has been known since the start. I don't see the big deal. Just stop at 34nm and work on other technologies that are faster or scale in size better. We usually think of a smaller process size as being better, but in this case it's not.

    http://features.techworld.com/storage/3212075/is-nand-flash-about-to-hit-a-dead-end/?intcmp=ft-hm-m [techworld.com]

    http://hardforum.com/showthread.php?t=1492711 [hardforum.com]
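
    To put rough numbers on the endurance point above, here is a minimal wear-out estimate using the usual capacity-times-P/E-cycles arithmetic. The cycle counts, write amplification, and daily workload are illustrative assumptions only, not measured figures for any particular node:

        def drive_lifetime_years(capacity_gb, pe_cycles, host_gb_per_day,
                                 write_amplification=2.0):
            """Rough wear-out estimate: total program/erase budget divided by
            the amount of flash physically written per day."""
            write_budget_gb = capacity_gb * pe_cycles
            physical_gb_per_day = host_gb_per_day * write_amplification
            return write_budget_gb / physical_gb_per_day / 365

        # Assumed: ~5,000 cycles for an older node vs ~3,000 for a smaller one,
        # on a 128 GB drive seeing 20 GB of host writes per day.
        print(drive_lifetime_years(128, 5000, 20))  # ~44 years
        print(drive_lifetime_years(128, 3000, 20))  # ~26 years
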

  • by TheRaven64 ( 641858 ) on Saturday September 18, 2010 @09:20PM (#33623540) Journal
    PC-RAM stands a good chance of being the long-term future (I had the good fortune recently to share a very nice bottle of port with one of the scientists behind the underlying technology, and came away quite convinced, and rather drunk), but the largest currently shipping PC-RAM modules are 64MB. It has a lot of catching up to do before it reaches, let alone passes, the density of flash.
  • by X0563511 ( 793323 ) on Saturday September 18, 2010 @10:44PM (#33623882) Homepage Journal

    I'd like to see stuff start getting tougher.

    When that 2TB SSD can fall four stories (while in use) and carry on without even noticing, then I'll start getting tingles...

  • by Bender_ ( 179208 ) on Sunday September 19, 2010 @04:23AM (#33625642) Journal

    This is not true. You need to be aware of one thing: "memristors" were not new when they were "discovered". The memory industry knew the concept years before as RRAM [wikipedia.org]. I can assure you that all the other nonvolatile memory vendors are developing RRAM or are at least looking into the possibilities. Samsung had been publishing about NiO-based RRAM long before it was "discovered" again, and IBM has some interesting papers from the Zurich labs. Furthermore, there are several start-up companies looking into 3D RRAM, which may offer densities far above that of flash: Matrix Semiconductor (bought by SanDisk) and a company founded by a former Micron guy.

    One significant issue with RRAM (and the memristor) so far is that the memory cells have to be "formed". They need an initial high voltage pulse to induce the switching behaviour. This is something that is difficult to do when you have billions of memory cells. To my knowledge no good solution to this problem has been found yet, although there is progress.

  • Conductor pairing (Score:3, Interesting)

    by Skapare ( 16644 ) on Sunday September 19, 2010 @11:34AM (#33627642) Homepage

    The effects of EM fields can be significantly reduced by conductor pairing. When two currents of equal and opposite magnitude run side by side, the EM field is almost entirely confined to a space around those conductors. This can be achieved by creating cell pairs arranged so they are side by side, but turned in opposite directions. This allows the current of one to be in the opposite physical direction of the other, when the same operation is being performed on each. Since erase and read (but not write) can always be done at the same time, this reduces the number (in the case of read) and severity (in the case of erase) of EM fields, reducing the overall effect of EM fields on adjacent inactive cells.
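
    The cancellation described above is easy to see numerically: the field of a single long conductor falls off as 1/r, while the net field of a closely spaced pair carrying equal and opposite currents falls off roughly as 1/r^2. A small sketch with idealized infinite straight wires and made-up numbers (1 mA cell current, 50 nm conductor spacing), evaluated along the line through both conductors:

        from math import pi

        MU0 = 4e-7 * pi  # vacuum permeability, T*m/A

        def b_single(current_a, r_m):
            """Field magnitude of one infinite straight wire at distance r."""
            return MU0 * current_a / (2 * pi * r_m)

        def b_pair(current_a, r_m, spacing_m):
            """Net field of two parallel wires carrying opposite currents, at a
            point on their common line, distance r from the midpoint."""
            return abs(b_single(current_a, r_m - spacing_m / 2)
                       - b_single(current_a, r_m + spacing_m / 2))

        for r in (1e-6, 10e-6):  # 1 um and 10 um away
            print(r, b_single(1e-3, r), b_pair(1e-3, r, 50e-9))
        # At 1 um the pair's net field is ~20x weaker than the single wire's;
        # at 10 um it is ~200x weaker.
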

  • by symbolset ( 646467 ) on Monday September 20, 2010 @12:26AM (#33632508) Journal

    Until 2008 the memristor was a theoretical construct - a presumed fourth element to complete the symmetry between resistor, inductor and capacitor. But then in a moment it went from theoretical to provably found, and the theory became real. It turns out that it took researchers this long to find it because the effect doesn't work at all in larger process sizes. They needed to try it at the recently evolved process sizes to definitively find the effect. Now they have found it, and it works. Since it's a new discovery limited by feature size at 50nm maximum, one would presume that we will need to explore new finer lithography technologies for some time before its minimum feature size is found.

    The innate nature of the technology is that it's stackable. It can exploit dimension z. That's not even debatable; it's given in the fine article. It doesn't rely on dopants embedded in the silicon, but on the junctions between metallic elements laid upon it. It is fast. Cells are analog, so it's possible to store multiple bits in a cell to the limit of how finely the programming current can be regulated, which is a factor that improves over time. It's low-power, and obviously low-heat as well. There are some thermal implications for filesystems built on this storage, which will need to distribute the thermal load of writing, but that's a programming issue that's easy to overcome.

    It's also already small. It doesn't even work at feature sizes larger than 50nm. We won't know how small a feature size it works at until we develop new lithography methods with finer resolution than we have today, and it could be quite some time before that happens. We're stretching the limits of ultraviolet already, and the next steps up are X-rays and gamma rays, which are hard to produce.

    Between the three-dimensional stacking, the fine feature sizes, the multiple bits per cell, and the high speed of access and programming, this does look like the technology to carry us forward from flash memory, if it can be produced commercially. The partnership between HP and Hynix to implement commercial production does imply that it's coming. They've announced a plan and a schedule. One would presume their engineers are hard at work, and that the remaining practical questions involve laying out the memory grids to optimize performance to the interface and provide sufficient indirection to deal with inherent physical media reliability.
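
    A footnote on the multiple-bits-per-cell point above: the bit count grows only with the logarithm of how many distinct analog levels the programming circuitry can reliably set and sense, so every extra bit halves the margin between adjacent states. A tiny illustration with hypothetical level counts:

        from math import log2

        def bits_per_cell(levels):
            """Bits storable in one analog cell with `levels` distinguishable states."""
            return log2(levels)

        for levels in (2, 4, 8, 16):
            print(f"{levels:2d} levels -> {bits_per_cell(levels):.0f} bits/cell, "
                  f"adjacent-state separation ~{100 / (levels - 1):.1f}% of full range")
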
