Bug | Data Storage | Hardware

SSD Latency, Error Rates May Spell Bleak Future 292

Posted by timothy
from the everything-counts-in-large-amounts dept.
Lucas123 writes "A new study by the University of California and Microsoft shows that NAND flash memory experiences significant performance degradation as die sizes shrink. Over the next dozen years, latency will double as circuitry shrinks from 25 nanometers today to 6.5nm, the research showed. Speaking at the Usenix Conference on File and Storage Technologies in San Jose this week, Laura Grupp, a graduate student at the University of California, said tests of 45 different types of NAND flash chips from six vendors, using 72nm to 25nm lithography techniques, showed that performance degraded across the board and error rates increased as die sizes shrank. Triple-Level Cell NAND performed the worst, followed by Multi-Level Cell NAND and Single-Level Cell. The researchers said MLC NAND-based SSDs won't be able to go beyond 4TB, and TLC-based SSDs won't be able to scale past 16TB, because of the performance degradation, so it appears the end of the road for SSDs will be 2024."
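The headline claim can be sketched numerically. The baseline latency and the linear interpolation below are illustrative assumptions, not figures from the paper:

```python
# Rough sketch of the study's claim: write latency roughly doubles as
# NAND feature size shrinks from 25 nm (today) to 6.5 nm (~2024).
# The 250 us baseline and the linear model are assumed for illustration.

def projected_latency_us(feature_nm, base_nm=25.0, base_latency_us=250.0):
    """Interpolate latency between 1x at 25 nm and 2x at 6.5 nm."""
    scale = 1.0 + (base_nm - feature_nm) / (base_nm - 6.5)  # 1.0 -> 2.0
    return base_latency_us * scale

for nm in (25, 20, 16, 10, 6.5):
    print(f"{nm:>5} nm: ~{projected_latency_us(nm):.0f} us")
```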
This discussion has been archived. No new comments can be posted.

  • Sounds legit (Score:5, Insightful)

    by sbrown7792 (2027476) on Thursday February 16, 2012 @04:12PM (#39065855)
    Because there could *never* be a breakthrough discovery/invention found within the next 10 years.
    • by jedidiah (1196) on Thursday February 16, 2012 @04:14PM (#39065885) Homepage

      OK then. You've got 10 years. Get going.

      • by fuzzyfuzzyfungus (1223518) on Thursday February 16, 2012 @04:48PM (#39066401) Journal
        Tiny monks with tiny paintbrushes, inscribing ones and zeros on individual electrons. No problem.
      • Re:Sounds legit (Score:5, Insightful)

        by mcavic (2007672) on Thursday February 16, 2012 @05:38PM (#39067049)
        Well, to start with, you can make an SSD as big as you want by taking smaller SSDs and chaining them together with an intelligent front-end.
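That front-end idea is essentially RAID-0-style striping. A toy sketch of the address mapping, where the chunk size and drive count are assumptions:

```python
# Stripe logical blocks across N member drives, RAID-0 style.
# This is a toy model, not any real driver or controller API.

STRIPE_BLOCKS = 128  # blocks per stripe chunk (assumed)

def locate(lba, num_drives):
    """Map a logical block address to (drive index, block on that drive)."""
    chunk, offset = divmod(lba, STRIPE_BLOCKS)
    drive = chunk % num_drives
    local_chunk = chunk // num_drives
    return drive, local_chunk * STRIPE_BLOCKS + offset

# Logical capacity is just the sum of the members:
drives = [4 * 10**12] * 4      # four 4 TB SSDs (bytes)
print(sum(drives))             # one 16 TB logical device
print(locate(0, 4), locate(128, 4))
```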
    • Re:Sounds legit (Score:5, Insightful)

      by Anonymous Coward on Thursday February 16, 2012 @04:18PM (#39065953)

      Oh there will be a great discovery/invention in the next 10 years. Unfortunately it will be tied up in patent litigation for the next 50 years after that. All fun and games when it is a hard drive. Not so funny when it is a medicine that can save your kid.

      • by LearnToSpell (694184) on Thursday February 16, 2012 @05:11PM (#39066727) Homepage
        Won't somebody think of the hard drives!
      • Re: (Score:3, Insightful)

        by sonicmerlin (1505111)

        HDD tech has advanced without patent litigation tying anything up. What makes you think it will be different for NAND's successor?

        • by hairyfeet (841228)

          Because Seagate and WD have a nice MAD thing going on, since each company has bought out half the former competitors (Hitachi and Samsung being the last two they bought up), so each side knows the other would bury them in patents in court. With flash there are still enough players and patents out there not owned by a megacorp for it to turn nasty; just look at how a little company that most had probably never heard of, called Rambus, was able to troll the RAM market for years.

          I say give it a decade though, and

    • Re:Sounds legit (Score:5, Interesting)

      by Grishnakh (216268) on Thursday February 16, 2012 @04:26PM (#39066079)

      We already have the breakthrough, but it's not Flash, it's PRAM [wikipedia.org].

      • by bughunter (10093) <bughunter&earthlink,net> on Thursday February 16, 2012 @04:32PM (#39066153) Journal

        Yes, but what I heard about PRAM is that you have to push it. A lot.

        • by Anonymous Coward on Thursday February 16, 2012 @04:37PM (#39066233)

          But I *LIKE* to push the PRAM a lot!

      • Re:Sounds legit (Score:5, Informative)

        by maxwell demon (590494) on Thursday February 16, 2012 @04:39PM (#39066255) Journal

        We already have the breakthrough, but it's not Flash, it's PRAM [wikipedia.org].

        And MRAM. [wikipedia.org] And FeRAM. [wikipedia.org]

      • Samsung managed 1 Gbit at 58 nm in February 2011. The rest of the alternatives are significantly lower density than even PRAM. Not particularly promising IMO.

    • Re:Sounds legit (Score:5, Interesting)

      by Zouden (232738) on Thursday February 16, 2012 @04:27PM (#39066093)

      Perhaps it's already been found:
      http://en.wikipedia.org/wiki/Phase-change_memory [wikipedia.org]
      PCM still has hurdles to overcome, but it's generally considered that performance increases as size decreases, the opposite of NAND.

    • by westlake (615356)

      Because there could *never* be a breakthrough discovery/invention found within the next 10 years.

      Tomorrow's "breakthrough" doesn't mean you have a commercially viable product within the next ten years --- or even the next twenty.

    • by sgt scrub (869860)

      Weren't there about 6 alternatives to NAND discovered last year? I think IBM announced 2 of them.

      • No, actually. I wrote an article about some of the alternatives last year. MRAM, FeRAM, and PCRAM have all been under development by various companies for well over a decade. The only real new discovery is memristor memory. The reason the other three are starting to appear on sites like Slashdot is that they're now getting to the point where they can make the transition to shipping product.
    • by 0racle (667029)
      So obviously research into the limits of the current technology is pointless?
    • by Bengie (1121981)

      Memristors are supposed to come out in the next 1-2 years. They will even be used as system memory because they have no effective "wear". That "breakthrough" is already done; it's just being readied for production.

    • Re:Sounds legit (Score:4, Interesting)

      by PopeRatzo (965947) on Thursday February 16, 2012 @05:34PM (#39067021) Homepage Journal

      Because there could *never* be a breakthrough discovery/invention found within the next 10 years.

      Didn't you hear? We've reached the limitations of technology and innovation.

      That's why it's so stupid to put any money into non-fossil energy. If we can't power a house by solar energy now, we'll never be able to and we just have to accept it.

      It's the End of History. Again.

    • by hairyfeet (841228)

      The problem with SSDs is the hot/crazy scale [codinghorror.com]: you get hot performance but crazy failure rates. For this tech to get the economies of scale needed to lower prices, there simply can't be a hot/crazy scale; it needs to be at LEAST as reliable as the tech it wants to replace, and right now it's anything but. Sure, a geek knows to "backup backup backup," but consumers don't, and those are the ones getting burnt by SSDs. Hell, notice how quickly Tiger and Newegg switched back to HDDs for their kits instead of

      • And to make matters worse, they don't "fail gracefully" as the old spinning rust does. Honestly, I can't remember a HDD that failed without warning in the past... oh hell, the last one was probably a Deathstar around 2000. Now thanks to SMART you'll usually get SOME kind of indication, be it SMART or noise or weird errors, something, and then you can get your data off. My gamer customers went back to running Raptors in RAID because they bought SSDs and lost data; just one day they flipped the switch and poof! No drive even in BIOS, and no way for me to get a single byte of data back.

        FLASH does (should) indeed fail gracefully. Once a block wears out, programming it will fail; the FLASH and controller will know this and mark it bad. But other blocks will still be readable, and the now-dead block contained no useful data (else the controller wouldn't be erasing it).

        What you're talking about are firmware-based bugs: the controller not making the FLASH contents accessible. These problems are probably the result of block translation tables being corrupted, and are entirely the fault of th
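The graceful-failure behavior described above can be sketched as a toy controller that retires blocks when a program operation fails. This is a simplified model, not real FTL firmware:

```python
# Toy model: on a failed program (write), mark the erase block bad and
# keep the rest of the device usable, as the comment above describes.

class ToyController:
    def __init__(self, num_blocks):
        self.bad = set()
        self.num_blocks = num_blocks

    def program(self, block, ok):
        """'ok' simulates whether the raw NAND program op succeeded."""
        if not ok:
            self.bad.add(block)      # retire the worn-out block
            return False
        return True

    def usable_blocks(self):
        return self.num_blocks - len(self.bad)

c = ToyController(1024)
c.program(7, ok=False)                 # a worn-out block fails to program
print(7 in c.bad, c.usable_blocks())   # True 1023
```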

  • SSD =/= NAND Flash (Score:5, Informative)

    by MischaNix (2163648) on Thursday February 16, 2012 @04:14PM (#39065879)
    There will be other solid-state storage solutions. The only reason NAND is currently used is its relative cheapness and reliability.
  • In other news... (Score:4, Informative)

    by Troyusrex (2446430) on Thursday February 16, 2012 @04:14PM (#39065887)
    An old study (well, an old executive) showed that there was a worldwide demand for "maybe 6" computers. This might all be true at current technology levels, but technology will have changed an awful lot by 2024.
    • by jedidiah (1196)

      ....conflating what's possible with what's desired.

      One of these depends on human nature and the other one depends on physics.

    • by dyingtolive (1393037) <brad.arnett@NosPam.notforhire.org> on Thursday February 16, 2012 @04:22PM (#39066025)
      "Hey, would you want a computer? It's a city block large, uses all of these punchcards for I/O, and doesn't really do much other than crack Enigma. Hey, where are you going?"

      "Hey, would you want a computer? It can fit in your pocket, let you talk to anyone in the world, can take pictures and provide you god damn near any information written down by a human being, and you can watch porn on it!"

      Computers are the same thing they were even 20 years ago in name only.
      • by steelfood (895457)

        You're not looking at it from the right level of abstraction.

        The information of yesterday was Enigma codes. The information of today is pictures, wikipedia, and pr0n. The computers of yesterday and today are effectively doing the same things: storing and moving information and making calculations to glean new information.

        They are the same. You can just do more with it now, because your potential is limited, while a computer's potential is only limited by technological progress. But the theory backing comput

  • Stuff like this... (Score:5, Insightful)

    by blahplusplus (757119) on Thursday February 16, 2012 @04:16PM (#39065935)

    ... always ignores other areas of innovation. The same way processors were thought not to scale down to x nm, and we're at 20-ish nm now. The same way hard drives were thought only to have x capacity, and we're now in the terabytes. If NAND is really so limited, then something different than NAND will take its place. But a few terabytes will be more than enough for 99% of applications, and hard disks will be for packrats and those who need large amounts of longer-term storage.

    • by PRMan (959735)

      One time, my wife's cousin (who was studying RAM at MIT and is now a brain specialist teaching at Stanford) said that "you will never be able to put more than 40MB on a PCMCIA card."

      I replied, "Within 5 years, we'll be carrying a GB around in our pocket the size of a postage stamp." I was right. Sometimes smart guys are so focused on their area that they fail to see the realities of supply and demand combined with Moore's law.

  • Obligatory ... (Score:2, Redundant)

    by techstar25 (556988)
    "16TB ought to be enough for anybody."
    • "16TB ought to be enough for anybody."

      Didn't you mean "640TB blah blah blah"?
      We've already got at least 20TB of fixed disks at home (including online backups). The media server alone has 12TB.

  • I want HAL's memory (Score:4, Interesting)

    by na1led (1030470) on Thursday February 16, 2012 @04:21PM (#39066001)
    Still waiting for the Holographic Memory that should have been hear a decade ago.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Holographic memory requires fusion power.

      • Not a problem, since I have that in my goddamned flying car.

    • by roman_mir (125474) on Thursday February 16, 2012 @05:04PM (#39066617) Homepage Journal

      Still waiting for the Holographic Memory that should have been hear a decade ago.

      - there is the problem.

      With holographic memory you shouldn't be trying to 'hear' anything, it's something likely in visible electromagnetic spectrum instead!

    • by HiThere (15173)

      Holographic memory is capacious enough, but it's SLOW. And nobody's come up with a way to make it cheaply. (Much less to make it Read/Write & cheaply.)

    • by ShakaUVM (157947)

      I worked on holographic memory. It has a huge capacity, but very very slow write times. It was something like 1 byte per second, or something ridiculous like that.

      If people could come up with a medium that could be developed quickly, it could be neat.

  • as reported here [semi.org] and here [nature.com]? I thought people have been busy about it for quite some time.
  • by John Hasler (414242) on Thursday February 16, 2012 @04:27PM (#39066087) Homepage

    Yes. They'll all stop working then and it will become impossible to make any more.

    • by Surt (22457)

      Well, not so much that, but rather that hard drive rotational latencies will finally catch up to NAND. With our disks spinning at a paltry 100,000,000 rpm, latency will finally be a worry of the past.

      • by rtaylor (70602)

        I've wondered what a spinning disk could do in a vacuum chamber and with a non-contact magnetic bearing.

        • by T-Bone-T (1048702)

          You still have the limits of the disk to deal with. That's why optical media like DVDs and CDs aren't getting any faster. The disks are already spinning as fast as they can.

  • Not bleak at all (Score:2, Informative)

    by Anonymous Coward

    From the article, "This will reduce the write latency advantage that SSDs offer relative to disk from 8.3x (vs. a 7 ms disk access) to just 3.2x.". Yeah, doom and gloom.
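The quoted ratios work out as follows, assuming the article's 7 ms disk access:

```python
# The article's figures: SSD write latency advantage over disk drops
# from 8.3x today to a projected 3.2x.
disk_ms = 7.0
today = disk_ms / 8.3      # implied SSD write latency now, ~0.84 ms
future = disk_ms / 3.2     # implied projected latency, ~2.19 ms
print(round(today, 2), round(future, 2), round(future / today, 1))
```

Roughly a 2.6x latency increase, yet still several times faster than disk.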

  • by goldcd (587052) on Thursday February 16, 2012 @04:33PM (#39066167) Homepage
    But I'm choosing to ignore it all, entirely based on font.
    http://cseweb.ucsd.edu/~lgrupp/CV.pdf [ucsd.edu]
  • 4TB limit (Score:5, Insightful)

    by afidel (530433) on Thursday February 16, 2012 @04:36PM (#39066205)
    Yeah, about that 4TB limit, I think these [fusionio.com] folks will be surprised that their 5TB and 10TB drives won't be possible in the next few years....
    • by Desler (1608317)

      I hate to break it to you but that is 8 drives in one device. Hence the "octal" name.

    • by Surt (22457)

      They're talking about on a single device. Those drives are arrays of something like 64 devices.

      • by blueg3 (192743)

        It's the size of a single double-wide PCI card. Okay, scratch that, it *is* a single double-wide PCIE card. That counts as a single device. Just like how if you put a bunch of hard drive platters behind a common interface within a standard-size hard drive shell, it counts as one hard drive.

    • They probably cheated by putting more than 96 NAND dies in their device. 96 NAND dies should be enough for anyone. (sorry)

    • by rtaylor (70602)

      That's not exactly a single chip.

  • Can't scale past 16TB? Why not just stack them?
    • Re:Just add more (Score:5, Informative)

      by Surt (22457) on Thursday February 16, 2012 @04:56PM (#39066503) Homepage Journal

      It costs money to stack. At a much higher rate than it does to scale. Or at least that has been the case. It will be a significant hit to the industry when they can no longer count on device scaling to help bring up density, and get forced to wire multiple chips in ever expanding arrays.

  • I only ask because Apple is the largest flash customer/reseller in the world and they just bought this company

  • Along with error rates, what will happen to retention times as the cell size shrinks?

    Supposedly, flash memories have expected retention times as short as 5-10 years or so (if not refreshed by re-writing), thanks to gradual leakage of the trapped charges they use to record data; this value is expected to drop as flash cells get smaller. I've had gadgets whose firmware mysteriously became corrupted after sitting around for a few years, and sometimes they could be revived by re-flashing them -- I sometimes wo

    • by PRMan (959735)
      I have an 8MB SD card from the first camera I bought (in about 2003). Because of the small size, we immediately replaced it. I found it the other day (late 2011) and I was able to read a couple of test pictures just fine over 8 years later. I can still read my first CD-Rs too. I don't believe any of this digital media rot stuff. I haven't seen it happen at all in anything that supposedly rots.
      • by imp (7585)

        Retention time in 2003-time-frame flash is tens of years. Retention time for the latest 25nm flash is measured at one year. Much less if you wear it out. Your 8MB SD card likely hasn't had the level of cycling needed to see reduced data life.
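The refresh-by-rewriting idea mentioned above can be sketched as a simple scrub check. The one-year retention figure comes from the comment; the half-window refresh margin is an assumption:

```python
# Decide whether a block's data should be rewritten before charge leakage
# corrupts it. Interval and margin are illustrative, not from any spec.
import time

RETENTION_S = 365 * 24 * 3600     # ~1 year retention (per the comment)
MARGIN = 0.5                      # refresh at half the retention window

def needs_refresh(last_written_ts, now=None):
    now = time.time() if now is None else now
    return (now - last_written_ts) > RETENTION_S * MARGIN

# A block written 8 months ago exceeds the 6-month margin:
eight_months = 8 * 30 * 24 * 3600
print(needs_refresh(0, now=eight_months))   # True
```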

  • ... microelectronics fabrication, making them vulnerable to inductive effects?

  • I was kind of hoping we'd have something better than NAND Flash within 5 or so years. Maybe something using memristors? NAND is just too expensive to be useful. Prices haven't dropped in a couple years.

    • Re:NAND Successor (Score:4, Informative)

      by WuphonsReach (684551) on Thursday February 16, 2012 @10:13PM (#39070005)
      Prices haven't dropped in a couple years.

      Prices are now down to about $1.50/GB for standard 2.5" SSDs. And you can sometimes find them for $1.25/GB. That's lower than the $2.50-$3.00 of 18-24 months ago.

      Sure, it's expensive compared to the $0.10/GB of bulk storage like 1/2/4TB drives, but when you compare it to things like 10k RPM SATA/SAS and 15k SAS (about $1/GB) it starts to not look so expensive. The only things that make me nervous about them are that SSDs still have some controller issues and it's a younger technology compared to traditional hard drives.

      At $1.50/GB, that means you can purchase a 120GB SSD for about $180. For a lot of people, that's big enough and cheap enough in exchange for vastly improved performance. And if you can keep the users from storing stuff locally, you could go with one of the 64/80GB units which are in the $100-$125 range.

      I've converted a few users over to SSD over the past 2 years. It's been worth the money every time. The machines are far more responsive to user input, they don't sit there and spin, and it generally means that the CPU starts being the bottleneck again. Not all of these are power users, either.

      I paid about $1.75/GB for my 250GB SSD. Do I wish it was bigger? Sometimes. But it turned a 4-year old laptop from something that I hated using due to the slowness of the old 500GB 5400 RPM hard drive into something that is fast and responsive. For work it made me much more productive.
  • Type one: bandwidth sensitive. OS files, application files, cached application data, etc.

    Type two: bandwidth insensitive, e.g. streamed. E.g. Video, audio.

    Store type one on an SSD. Store type two on a higher-capacity magnetic drive. How fast is type one data expected to grow? Perhaps not that fast. My home machine is a ~8 year old Dell laptop with a 60G disk. It's not even half full. That includes an OS, browser, Office suite, and a few other applications.
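A minimal sketch of that two-tier placement policy; the extension list is illustrative:

```python
# Route latency-sensitive files to the SSD and streamed bulk media to
# the magnetic drive, per the two types described above.
import os

STREAMED = {".mkv", ".mp4", ".avi", ".mp3", ".flac"}  # assumed list

def tier_for(filename):
    ext = os.path.splitext(filename)[1].lower()
    return "hdd" if ext in STREAMED else "ssd"

print(tier_for("movie.mkv"), tier_for("libc.so"))   # hdd ssd
```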

  • There's been tons of research on alternative technologies, including phase-change memory (http://en.wikipedia.org/wiki/Phase-change_memory) and magnetic tunneling junction (http://drl.ee.ucla.edu/index.php?page=research&function=sttram) memory. Obviously commercializing them is expensive, but some progress has already been made there. I'm sure other competing technologies will be developed in that time as well.

  • Only use SSD partitions where it makes sense. No reason to install the whole OS to the SSD; just mount /tmp or /var/lib/mysql on an SSD slice. Use plain old SATA RAID for the rest.

  • by jcrb (187104) <[jcrb] [at] [yahoo.com]> on Thursday February 16, 2012 @06:03PM (#39067371) Homepage

    While they discuss individual SSDs, modern flash storage arrays ( http://www.violin-memory.com/products/6000-flash-memory-array/ [violin-memory.com] ) can hide all the write latency and its effects on read latency. When you start talking about 16TB SSDs the same techniques can be used.

    As far as bandwidth and IOPs, they use a 4K/8K write size for MLC/TLC, but MLC already exists with 8K pages, as well as having the ability to write more than one plane at once, which doubles the write bandwidth. Double the page size again and you double the BW.

    Now, bigger page sizes only help on reads if you can use more than a single user read's worth of data in the page, which might be possible depending on what the system knows about access patterns. But even without making assumptions about the ability to store data together that's likely to be read together, garbage collection, which can wind up reading more bytes than the user does, can use most of the data in a page.

    So there are factors of 2X, 4X maybe 8X in performance that the paper misses out on.
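Those 2X/4X/8X factors follow from write bandwidth scaling with the number of bytes programmed in parallel. A back-of-envelope sketch:

```python
# Relative write bandwidth vs. a 4K single-plane baseline: bandwidth
# scales with page size times the number of planes programmed at once.

def relative_bw(page_kb, planes=1, base_page_kb=4):
    """Write bandwidth relative to a single-plane 4K-page baseline."""
    return (page_kb * planes) / base_page_kb

print(relative_bw(8))              # 2.0  (double the page size)
print(relative_bw(8, planes=2))    # 4.0  (plus two-plane programming)
print(relative_bw(16, planes=2))   # 8.0  (double the page size again)
```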

    As far as density, it is not necessary to go to smaller features to get more bits per chip by using 3D techniques such as Toshiba's P-BiCS (Pipe-shaped Bit Cost Scalable) MLC NAND which allow vertical stacking which increases density without using smaller features with their worse performance and lifetime.

    The group at UCSD that authored this has done some nice work so I don't mean to be too negative, but they are trying to predict too far from a limited and faulty set of assumptions which unfortunately negates much of the validity of this paper.

    jon

    p.s. in the interests of full disclosure, I make the arrays in the first link :)
