
Google-Backed SSD Endurance Research Shows MLC Flash As Reliable As SLC (hothardware.com) 62

MojoKid writes: Even for mainstream users, it's easy to feel the difference between using a PC that has an OS installed on a solid state drive and one running off a mechanical hard drive. With SSD pricing where it is right now, it's also easy to justify including one in a new configuration for the speed boost, and there's an obvious benefit in the enterprise and the data center for both performance and durability. As you might expect, Google has chewed through a healthy pile of SSDs in its data centers over the years, and the company appears to have been one of the first to deploy SSDs in production at scale. New research results Google is sharing via a joint research project encompass SSD use over a six-year span at one of Google's data centers, and looking over the results led to some expected and unexpected findings. One of the biggest discoveries is that SLC-based SSDs are not necessarily more reliable than MLC-based drives. This is surprising, as SLC SSDs carry a price premium with the promise of higher durability (specifically in write operations) as one of their selling points. It will come as no surprise that there are trade-offs between SSDs and mechanical drives, but ultimately, the benefits SSDs offer often far outweigh those of mechanical HDDs.
  • Anyone try to just submit a random string of characters? I am fairly certain it would end up posted.

  • by gweihir ( 88907 ) on Tuesday March 01, 2016 @09:54AM (#51614733)

    Just as worthless as their last "study" on storage reliability, as they do not name manufacturers and models. Research published by Google sucks badly.

    • as they do not name manufacturers and models

      When you're studying differences from a purely technological point of view, trying to address the perception of SLC vs. MLC, what does the manufacturer have to do with it? People constantly post about the differences between SLC and MLC regardless of which manufacturer makes the drive, so when attempting to study that, naming names would just add distractions from the point.

      Yes it would be nice to know who's doing the best and the worst.
      No that was not at all the point of the study.

      • by gweihir ( 88907 )

        Several points:

        1. Manufacturers and models are critical to repeatability and verifiability. As it is, they could have pulled those numbers from their backsides and nobody could tell.
        2. There are quite a few SSDs out there that have problems in the relevant time-of-purchase time-span. For example, OCZ had much higher failure rates in a number of models. Without knowing whether any of those (and how many) were in the sample, you do not get a realistic picture, as you are comparing devices at different maturity levels.

  • by U2xhc2hkb3QgU3Vja3M ( 4212163 ) on Tuesday March 01, 2016 @10:16AM (#51614919)

    All I know is that even Intel can't make a decent SSD. The first SSD I bought was their Intel SSD 530 Series 120GB and I've never been able to use the damn thing. I've tried it on two computers, a Mac mini 2010 and a DIY PC with a recent motherboard, and in both of them the drive just won't boot after a warm reset. Even after all these years, Intel hasn't published a firmware upgrade to fix the problem.

    • Are you sure that's not just a defective drive? I've put the same SSD in a MacBook Pro 13 2011 and some random Toshiba laptop (Windows 8.1) for my sisters-in-law, both with the 240GB version of the drive. Seems to work perfectly fine and they've been running for a couple of years without issues.

      • That particular problem with the 530 Series has been known for years.

        Intel says it's a problem with Macs.
        Apple says it's a problem with the drive.

        • It's almost certainly Intel's fault. Some of their SSDs do not follow the SATA spec properly on reset which can cause the initial probe to fail with a timeout. If you probe a second time it will succeed. I actually had to add a second probe to DragonFlyBSD's AHCI driver to work around the problem. It doesn't seem to be related to startup time, even with a long delay I'll see first-probe failures on Intel SSDs in various boxes.

          Strangely enough the failures occur with Intel AHCI chipsets + Intel SSDs, but
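          The workaround described above boils down to retrying the probe once when the first attempt times out. A minimal sketch of that retry logic in Python (the `probe` callable, retry count, and delay are hypothetical stand-ins; the real fix lives in DragonFlyBSD's C AHCI driver):

          ```python
          import time

          def probe_with_retry(probe, retries=1, delay_s=0.1):
              """Probe a device, retrying on timeout.

              Mirrors the workaround above: the first probe of some drives
              after reset times out, but an immediate second probe succeeds.
              """
              for attempt in range(retries + 1):
                  try:
                      return probe()
                  except TimeoutError:
                      if attempt == retries:
                          raise  # out of retries, surface the failure
                      time.sleep(delay_s)

          # Hypothetical drive that fails its first probe, succeeds on the second:
          calls = {"n": 0}
          def flaky_probe():
              calls["n"] += 1
              if calls["n"] == 1:
                  raise TimeoutError("first probe timed out")
              return "drive-identified"

          print(probe_with_retry(flaky_probe))  # drive-identified
          ```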

    • by Kobun ( 668169 )
      I have a couple dozen each of 520 and 530 models deployed right now. Those machines have no problems with their drives.
      • by SB5407 ( 4372273 )
        I have about 10 of them (120 GB and 180 GB Intel 530 Series SSDs) deployed in my environment in HP laptops and they've been great. They've been much more reliable than the failed half-height laptop HDDs they replaced.
    • Tell you what, I'd do a no questions asked exchange for an OCZ Vortex drive for you. How does that sound?
      Yep I'll take one for the team.

  • by swb ( 14022 ) on Tuesday March 01, 2016 @10:20AM (#51614951)

    ...then it would stand to reason that other storage vendors mostly know this, too.

    So why aren't there more MLC based flash arrays, especially all-flash models? For storage capacities under 24 TB raw, it would be pretty price competitive to HDD but produce a storage device with insane I/O potential.

    • by fnj ( 64210 )

      it would be pretty price competitive to HDD

      No it wouldn't/isn't. Not even close.

    • by tlhIngan ( 30335 )

      So why aren't there more MLC based flash arrays, especially all-flash models? For storage capacities under 24 TB raw, it would be pretty price competitive to HDD but produce a storage device with insane I/O potential.

      Because flash is expensive compared to spinning rust.

      24TB of hard drive storage can be had for maybe $1000 or so, 4 x 8TB hard drives in a RAID5 style array.

      a 1TB SSD runs around $400. 24TBs of that is $96K, raw storage. Maybe you can get a bulk discount and pay $60k.

      Sure, you can buy 2/4 TB SS

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Your math is off... 400*24=$9600, not 96K.

      • by swb ( 14022 )

        First off, your math is way off -- 24 x $400 is $9600.

        Secondly, nobody would build a 24 TB array with 4x8TB in RAID 5. The risk of data loss during a disk rebuild is too high and it would provide so little I/O that it would be all but useless for anything but low-access archiving.

        A better comparison for disks would be 1 TB 15k SAS, and these retail for $225, so the math on disk cost alone is a lot more competitive.

        It becomes more competitive when you look at the performance -- 24 SSDs would give you close
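        For what it's worth, the corrected math above is easy to check (a quick sketch using the 2016 street prices quoted in this thread; illustrative only):

        ```python
        # Rough cost comparison for ~24 TB raw, per-drive prices from the thread.
        ssd_price_per_tb = 400   # $ per 1 TB SSD
        sas15k_price = 225       # $ per 1 TB 15k SAS drive
        tb_needed = 24

        ssd_cost = tb_needed * ssd_price_per_tb  # 24 x $400
        sas_cost = tb_needed * sas15k_price      # 24 x $225

        print(ssd_cost)  # 9600, not 96000
        print(sas_cost)  # 5400
        ```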

    • by jon3k ( 691256 )

      So why aren't there more MLC based flash arrays

      What companies are you referring to? I just installed an EMC VNX2 with a tier of MLC flash, which uses FAST VP. Nimble's arrays also use MLC flash [nimblestorage.com] - not eMLC, MLC:

      Today's SSDs degrade when burdened with continual patterns of random writes. When SSDs receive random writes, the write activity within the SSD is greater than the actual number of writes. This write amplification dramatically increases the number of write cycles that the SSD must support. Multi-level cell (MLC) flash is typically not suitable for traditional storage systems because it can only endure 5,000 to 10,000 write cycles. Instead, traditional systems must use single-level cell (SLC) SSDs and will soon begin using enterprise multi-level cell (eMLC) SSDs. SLC and eMLC technologies can endure up to 100,000 write cycles, but cost 4 to 6 times more than traditional MLC flash.

      Nimble Storage approaches the problem of write amplification differently. The CASL file system is optimized to aggregate a large number of random writes into sequential I/O stripes. It only writes to flash in multiples of full-erase block width sizes. As a result, write amplification is minimized, allowing the use of lower-cost MLC SSDs.
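      Nimble's claim can be illustrated with a toy model (the erase-block and write sizes below are hypothetical round numbers for illustration, not Nimble's actual flash geometry):

      ```python
      # Toy model of why aggregating random writes cuts write amplification.
      ERASE_BLOCK_KIB = 256  # assumed erase-block size
      WRITE_KIB = 4          # assumed host write size
      writes = 1000          # number of 4 KiB host writes

      # Worst case: each small random write lands in a different erase block,
      # forcing a read-modify-erase-write of the whole block every time.
      naive_flash_kib = writes * ERASE_BLOCK_KIB

      # CASL-style: buffer random writes and flush only in full erase-block
      # stripes, so the flash sees exactly the host payload.
      aggregated_flash_kib = writes * WRITE_KIB

      host_kib = writes * WRITE_KIB
      print(naive_flash_kib / host_kib)       # amplification: 64.0
      print(aggregated_flash_kib / host_kib)  # amplification: 1.0
      ```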

      • by swb ( 14022 )

        I know that Compellent uses MLC in their flash tiers, too, but they refer to it as "read intensive," and in the certification class it was explained that it's only used for cached reads.

        • by jon3k ( 691256 )
          Neither Pure nor EMC markets these as read-only. The VNX2 we just installed uses them for FAST VP. Writes are cached on a handful of SLC-based drives (2-4 disks usually), when possible, called "FAST Cache [emc.com]," to increase write performance. Then FAST VP [emc.com] moves the most-used blocks from the slower tiers (SAS, NL-SAS, SATA) to the MLC drives.
  • Calculate the cost of the replacement cycle too and suddenly SSDs look a lot cheaper. It's just that most people can't think beyond the end of their noses, so if the up-front cost looks expensive they stop right there.

    I bought my last HDDs last year: two 4TB 'archival' drives for backups. My existing pile of new 1TB and 2TB HDDs (I have around a dozen 3.5" and half a dozen 2.5" left) will be dribbled out as needed, but I won't be buying any new HDDs from now on. In fact, I couldn't even foist off some of
