Consumer-Grade SSDs Survive Two Petabytes of Writes
crookedvulture writes The SSD Endurance Experiment previously covered on Slashdot has reached another big milestone: two freaking petabytes of writes. That's an astounding total for consumer-grade drives rated to survive no more than a few hundred terabytes. Only two of the initial six subjects made it to 2PB. The Kingston HyperX 3K, Intel 335 Series, and Samsung 840 Series expired on the road to 1PB, while the Corsair Neutron GTX faltered at 1.2PB. The Samsung 840 Pro continues despite logging thousands of reallocated sectors. It has remained completely error-free throughout the experiment, unlike a second HyperX, which has suffered a couple of uncorrectable errors. The second HyperX is mostly intact otherwise, though its built-in compression tech has reduced the 2PB of host writes to just 1.4PB of flash writes. Even accounting for compression, the flash in the second HyperX has proven to be far more robust than in the first. That difference highlights the impact normal manufacturing variances can have on flash wear. It also illustrates why the experiment's sample size is too small to draw definitive conclusions about the durability of specific models. However, the fact that all the drives far exceeded their endurance specifications bodes well for the endurance of consumer-grade SSDs in general.
HDD endurance? (Score:3, Interesting)
Re:HDD endurance? (Score:5, Informative)
In average desktop use, and even on a non-video media workstation, it's rare to see a drive that has written 10TB. Most people will never wear out an SSD through straight-up media wear.
Re: (Score:3)
A PVR drive could easily see 17TB of writes in a year, and that's just a very conservative estimate based on a small number of tuners and typical broadcast bitrates.
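For a sense of scale, here is the back-of-the-envelope math (a sketch: the tuner count, bitrate, and recording hours are illustrative assumptions, not figures from the post):

    # Rough PVR write estimate. The tuner count, bitrate, and recording
    # hours are illustrative assumptions, not figures from the post.
    TUNERS = 2
    MBIT_PER_SEC = 8        # typical-ish HD broadcast bitrate
    HOURS_PER_DAY = 6       # recording hours per tuner per day

    bytes_per_year = TUNERS * (MBIT_PER_SEC / 8) * 1e6 * 3600 * HOURS_PER_DAY * 365
    print(f"~{bytes_per_year / 1e12:.1f} TB written per year")   # ~15.8 TB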
Re:HDD endurance? (Score:5, Insightful)
Of course video writing is the perfect application for hard drives: a constant datastream at a fixed rate and large amounts of data over time, with little random IO and only bulk deletes. If you are trying to stick an SSD in a PVR you are doing it wrong.
Re: (Score:3)
A constant datastream at a fixed rate
Not completely fixed (it depends on the channel being recorded at the time), I would think, but yeah, a fixed rate that's substantially below an HD's write speed.
Remember that SSDs are relatively slow at writing compared to reading. HDs are generally equally fast in either direction, so given a sufficiently sequential write process I can see them actually being able to write faster than the SSD.
Re: (Score:3)
He said:
Remember that SSDs are relatively slow at writing compared to reading
You said:
Relatively slow compared to what, its own read speeds?
So... yes?
Re: (Score:3)
It's been a while since you shopped for an SSD? The larger capacity ones have nearly equal read and write speeds except for the most extreme budget brands.
Re: (Score:2)
They only hit their advertised write speeds with highly compressible data in most cases. With random data, the write speed typically drops to less than half the read speed. It also depends on the nature of the writes, as some scenarios cause a massive amount of write amplification.
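For a feel of why compressibility matters so much to those controllers, here's a quick sketch (zlib merely stands in for the drive's proprietary on-controller compression):

    import os, zlib

    # SandForce-style controllers compress data before it hits the
    # flash; zlib stands in here for the drive's proprietary algorithm.
    compressible = b"A" * 1_000_000        # highly repetitive data
    random_data = os.urandom(1_000_000)    # incompressible data

    for name, payload in (("compressible", compressible), ("random", random_data)):
        ratio = len(zlib.compress(payload)) / len(payload)
        print(f"{name}: {ratio:.2%} of the original size reaches the flash")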
Re: (Score:2)
Not all SSDs compress.
Re: (Score:3)
Japanese PVRs need multiple HDDs because a single one can't keep up. A few years ago they started to record everything... All over-the-air channels simultaneously, 24/7, allowing you to watch anything that was broadcast at any time in the last week. No need to set up recording for anything, just grab it any time up to a week after broadcast.
Once SSDs catch up on capacity they would be ideal for that application. Until then, they use multiple HDDs and a fair-sized RAM cache.
Re: (Score:2)
I work for a Danish IPTV provider, and we do the same thing. Everything is recorded and kept for at least 7 days, so our customers can watch whenever they want. It's proven to be extremely popular.
Re:HDD endurance? (Score:4, Informative)
Recording TV is not a typical scenario. Besides, at around 8GB/hour (HD), 17TB is around 2000 hours a year, which is little more than what my BeyondTV machine does, and its 3TB WD Green is still alive and kicking. You just have to disable the insanely aggressive head parking on those drives, otherwise they might die...
http://www.storagereview.com/h... [storagereview.com]
Re: (Score:2)
So then, 0.1 times what an SSD will take, even if you keep it for a decade?
Re: (Score:3)
Well, before the crash in HDD prices you could see the rare drive with a 10-year warranty for $150. The Fujitsu drives I used to use were consumer grade and had a 10-year warranty. Of course then we went to 5 years, then 3, and I think some are even 1 year now. It's just like the market crash back in the late '90s and early '00s. Give it a few years and the warranties will start coming back up... that is, if HDDs survive SSDs becoming the mainstream choice for storage.
Re: HDD endurance? (Score:2)
The difference, Hairy, is that you can pick the controller. SandForce and OCZ? Run.
Crucial is buggy and will die without a firmware update.
I've run SanDisk and Samsung Pros in RAID 0 for years. Their controllers are good, and I asked Microcenter which brands had the lowest RMA rates. You should switch, Hairy; it's not 2010 anymore and they've improved. Yes, I owned two 2011 Seagates that died :-)
No way will I go back. SWTOR and Battlefield 4 are unusable on a mechanical drive.
Also, Intel has TRIM in RAID 0 that AMD doesn't, which blows for AMD fans, but with that combo
Re: HDD endurance? (Score:2)
I read Maximum PC and Googled a few others. Samsung and SanDisk use proprietary controllers. Toshiba uses OCZ, which supposedly fixed its modded, crappy SandForce.
Intel had a few buggy firmwares back in 2010; the new ones are fine. I do not trust OCZ, anything SandForce, or Crucial. The newer ones use Marvell too, which is OK. But OCZ truly does suck. I still have a mechanical drive for backup files, and OneDrive too.
I've had 4 SSDs in 2 RAID arrays for almost 2 years. They've survived probably 10 reimages and full-disk writes :-)
Re: (Score:2)
Doesn't anyone hibernate their computer at the end of the day? 8 GB × 365 days ≈ 3 TB in one year for my main machine.
Re: (Score:3)
Since every time I tried that it caused weird issues every week or so, I would hazard a guess that next to nobody does that, yeah. At least not on Windows.
But even if you do, SSDs can handle that load. 1 PB / 3 TB = ~300 years (or ~100 if you count 16 GB of non-hibernate writes per day). Thus cell wear is not your most likely problem.
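The arithmetic, spelled out (the 1 PB endurance figure is this thread's round number for what the weaker test drives survived, not a datasheet spec):

    # Years of life at a given daily write load. The 1 PB endurance
    # figure is the thread's round number for what the weaker test
    # drives survived, not a datasheet spec.
    ENDURANCE_TB = 1000
    for daily_gb in (8, 24):   # hibernate only, or hibernate + 16 GB of other writes
        years = ENDURANCE_TB * 1000 / (daily_gb * 365)
        print(f"{daily_gb} GB/day -> ~{years:.0f} years")
    # 8 GB/day -> ~342 years; 24 GB/day -> ~114 years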
Re: (Score:2)
I wouldn't use 1 PB as the benchmark; only half of the drives in the sample made it that long. But 3 TB per year means 33 years to even reach 100 TB. It's pretty likely your entire computer will be obsolete by then, even if Moore's law bottoms out in the next decade or so.
Re:HDD endurance? (Score:4, Funny)
Just out of curiosity, how well do traditional HDDs fare in comparison?
They cave in big time after around 150 Win98 shutdown/restarts...
Re:HDD endurance? (Score:4, Informative)
Impossible to test in the same way due to time constraints. Filling the entire hard drive takes a very long time, unlike a much smaller and much faster SSD.
Re: (Score:2)
Sequential I/O with big write sizes can be pretty fast (~180MB/s) on modern large drives. SSDs can be ~4x that, so the tests would only take ~4x as long for similar data sizes.
Re: (Score:2)
Now remind yourself of comparable hard drive size, comprehend that you're looking at something that is several times slower AND several times larger.
Now consider that this test for SSDs has been running for well over a year now. How many years, or even decades would you need to get the same test of HDDs?
Re: (Score:3)
Let's do some math here, shall we? At 200 MB/s you can overwrite a 1 TB drive in about an hour and a half, and write 1 PB in about two months. The hard drives are a few times larger than the SSDs, so a proportional test would mean writing ~10 PB instead of 2, which puts you at well over a year and a half.
Include all the actual variables, and you might get a usable answer. Just blowing data onto the disk isn't the only thing this test is doing (AFAIK). You've got to detect errors, so you've got to read the data back and validate it. This page goes through their full testing methodology (hint: they're using Anvil to write a static file collection that includes a copy of a Windows install, some applications, some movies, and some incompressible data, among other things, and every file has its md5sum checked after writing): htt [techreport.com]
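A minimal sketch of that write-and-verify style of testing (this is not the actual Anvil tool, and the mount point is hypothetical):

    import hashlib, os

    # Minimal write-and-verify endurance loop in the spirit of the Anvil
    # methodology described above. This is a sketch, not the actual tool;
    # TARGET is a hypothetical mount point for the drive under test.
    TARGET = "/mnt/testdrive"
    FILE_SIZE = 64 * 1024 * 1024   # 64 MiB per file

    def write_and_verify(path):
        data = os.urandom(FILE_SIZE)            # incompressible payload
        expected = hashlib.md5(data).hexdigest()
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())                # force it out to the device
        with open(path, "rb") as f:             # a real rig would bypass the page cache here
            return hashlib.md5(f.read()).hexdigest() == expected

    written = 0
    while write_and_verify(os.path.join(TARGET, "blob.bin")):
        written += FILE_SIZE                    # keep going until verification fails
        # ... rotate files, log totals, pause for retention tests, etc.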
Re: (Score:2)
Partly it depends on whether you care about being able to write a certain amount of data, or rewrite the blocks the same number of times...
Most consumers of hard-drive services care about how much data they can safely store and retrieve and the speed & cost of the device, not how many times they can rewrite a flash block before it fails...
Re: (Score:2)
Not really, for streaming writes a HDD is only about 1/3rd the speed of these drives (WD Caviar Black 1TB, 150MB/s sustained streaming write vs Intel 335 at 450MB/s streaming writes).
Re: (Score:2)
Now compare the relative size of similar HDD. Now comprehend that this test has been running over a year now. Do the math on how many years it would take for similar test to be done on HDDs.
Then understand the original statement.
Re: (Score:2)
Eh? It would take less than 100 days to write 1 PB to a 3TB drive. One could write to a 240GB area of the drive repeatedly if they wanted to.
A 3TB drive can be filled roughly 3 to 4 times a day.
Re: (Score:3)
Seriously, you're the third person on Slashdot who hasn't even read the OP.
The SSD test has been going on for over a year now. Consider that the best case for an HDD means something several times slower as well as several times larger. Understand that you're looking at many years, possibly over a decade, of test time.
Re: (Score:2)
You haven't done any math; if you had, you'd realise they haven't been running the test 24/7.
Look at the screengrab on this page:
http://techreport.com/review/2... [techreport.com]
Here:
http://techreport.com/r.x/ssd-... [techreport.com]
It shows that their test was rather slow; they were only writing at 208MB/s.
At that speed it would take 58 days, 19 hours, and 45 minutes to write 1 petabyte.
I already did the math for a HDD - it would take about 100 days to write 1 Petabyte to a HDD.
Since they started the test in 2013, they could have done that easi
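For what it's worth, the duration math is easy to check yourself (treating a petabyte as binary; the result lands in the same ballpark as both figures above):

    # Days to write one binary petabyte (PiB) at a sustained rate.
    # 208 MB/s is the figure from the screenshot above; 150 MB/s is the
    # HDD streaming-write figure quoted earlier in the thread.
    PIB = 2**50
    for label, rate_mib_s in (("SSD test rig", 208), ("HDD estimate", 150)):
        days = PIB / (rate_mib_s * 2**20) / 86400
        print(f"{label}: {days:.1f} days per PiB")
    # SSD test rig: ~59.7 days; HDD estimate: ~82.9 days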
Re: (Score:2)
No, the reason for the slowness is that they actually, shockingly, TEST the drive.
Not just write to it.
It's pretty sad that you went to the length of posting a link to the very article that straight up debunks your claims.
Re: (Score:2)
No, sad is people who can't admit they're wrong.
Re: (Score:2)
Ditto. Now read the article and consider doing just that. I've been following the test for the year it's been running, and it's very obvious to anyone tech-minded why this test is unfeasible on HDDs.
Re: (Score:2)
I have explained how it is feasible, you on the other hand have insisted it isn't feasible without giving any logical reason why.
Re: (Score:2)
No, you have explained why a straight-up "write only and do nothing else" test that you yourself devised would be feasible.
What you have not even touched on is the actual subject: why the test that techreport has been performing is not feasible on HDDs. And this in spite of linking to the actual testing methodology a few posts before this one.
Re: (Score:2)
This isn't rocket science; all that's needed is a script to copy files to the HDD, delete them, rinse and repeat, checking SMART stats occasionally.
HDDs write at over 100MB/s, and the test they ran wasn't a whole lot faster. It is simple to deduce that the test is easily possible on an HDD. No one has said the drives have to be filled the same number of times, merely that a petabyte has to be written. That can be done, there is no reason why it can't be done, and you haven't given any valid reason why it can't
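The script being described could be as simple as this sketch (the device path, mount point, and intervals are made up; smartctl is from the smartmontools package mentioned elsewhere in the thread):

    import os, subprocess, time

    # Fill-and-delete endurance loop with occasional SMART checks, as
    # described above. The device path and mount point are made up.
    DEVICE = "/dev/sdb"
    MOUNT = "/mnt/endurance"
    CHUNK = b"\0" * (256 * 1024 * 1024)   # 256 MiB per file

    passes = 0
    while True:
        i = 0
        try:
            while True:                   # fill the disk...
                with open(os.path.join(MOUNT, f"fill{i}.bin"), "wb") as f:
                    f.write(CHUNK)
                i += 1
        except OSError:                   # ...until it's full, then wipe it
            for name in os.listdir(MOUNT):
                os.remove(os.path.join(MOUNT, name))
        passes += 1
        if passes % 10 == 0:              # check SMART stats occasionally
            subprocess.run(["smartctl", "-A", DEVICE])
        time.sleep(1)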
Re: (Score:2)
Thanks for sharing your inability to read your own link.
They're leaving money on the table... (Score:4, Insightful)
No, I think it means that the first ones were over-engineered, and the next generation will meet their stated MTBF number to within 1 standard deviation.
Re: They're leaving money on the table... (Score:1)
Either way, telling us the number of samples is too small to accurately determine anything, then extrapolating a happy future for consumers from that same faulty setup, seems rather Pollyannaish.
Most people write far less. (Score:5, Insightful)
Most hard drives I see in consumer and business use write far less than that over their lifetimes. I have a customer's hard drive I am copying data from right now: it has 15,147 power-on hours yet has only written 1.3TB of data. It's very uncommon to see drives with over 6TB of data written (in the 500GB to 1TB drive range).
Another client SSD on my bench is a Samsung 830 256GB that I just migrated to a 1TB SSD for a customer. It was used for about a year and a half before they needed a bigger drive, running Outlook, a number of AutoCAD applications, lots of project files, and a good-sized collection of work-related photos. The drive has 995GB of writes and is showing no SMART issues.
Average computer users have nothing to worry about when it comes to wearing an SSD out. Power users might have a problem depending on the nature of their work, but they also get the most benefit from high write speeds and IOPS. Servers, depending on their usage patterns, could have a problem; I certainly recommend the enterprise-style drives that reserve a much larger amount of spare area.
Re:Most people write far less. (Score:4)
However, my company found in testing that the more writes a flash device has seen, the sooner the stored data leaks away. After 10,000 writes to the same location, I can read the data back a month later with no errors, but at 50,000 writes I start getting errors after about 2 hours. Flash storage is like a bucket of water: each erase pokes a tiny hole in the bucket, and after a while those tiny holes add up and the bucket leaks pretty fast. So long-term storage is not as safe as on a conventional hard drive.
Wear leveling will prevent any cell from getting even close to that. The article is about the wonder of SSDs surviving over 2,000,000GB of writes across 240GB of flash. That's roughly 8,300 erase cycles per cell in what is certainly an "extreme" scenario. In consumer desktop usage almost no one will pass the 1,000 mark, and most will stay below 500 before they scrap their PC for a new one.
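The division behind those numbers, with the usually-hidden write-amplification factor made explicit (the 1.0 factor is a simplifying assumption; real workloads are worse):

    # Average erase cycles per cell = total host writes / capacity,
    # assuming ideal wear leveling and a write-amplification factor of
    # 1.0 (a simplifying assumption; real workloads are worse).
    HOST_WRITES_GB = 2_000_000        # the 2 PB from the experiment
    CAPACITY_GB = 240
    WRITE_AMPLIFICATION = 1.0

    cycles = HOST_WRITES_GB * WRITE_AMPLIFICATION / CAPACITY_GB
    print(f"~{cycles:,.0f} erase cycles per cell")   # ~8,333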
Re: (Score:1)
Hmm, this is interesting, but how do you find those statistics for a drive (on a Mac)?
Thank you
Re: (Score:1)
It's from S.M.A.R.T.:
http://en.wikipedia.org/wiki/C... [wikipedia.org]
This has a Mac version too:
http://www.smartmontools.org/w... [smartmontools.org]
Or you can pay up for something with pretty graphs.
Re: (Score:2)
As the other AC said, use a tool that shows SMART data for your OS. That said, some drives do not report LBA information, and some really sucky drives do not give accurate SMART information at all, though in general a drive in a Mac should.
Re: (Score:2)
I have a customer's hard drive I am copying data from right now: it has 15,147 power-on hours yet has only written 1.3TB of data.
How can you tell? Does the HDD keep track of this info somewhere in the firmware?
Re: (Score:1)
It's called S.M.A.R.T...
Re: (Score:1)
Use smartmontools or another SMART program. If you are using smartmontools, execute smartctl --scan and it will spit out device names. Then run smartctl -A device-name and it will usually tell you. It has other useful commands as well, like -a, -t, -c, etc.
Re: (Score:2)
What's the math to be applied to LBAs? How big is an LBA? A 512-byte sector?
My nearly 4-year-old Samsung shows just under 2 TB written if I multiply the SMART-provided Total LBAs Written by a 512-byte block size.
Re: (Score:2)
What's the math to be applied to LBAs? How big is an LBA? A 512-byte sector?
My nearly 4-year-old Samsung shows just under 2 TB written if I multiply the SMART-provided Total LBAs Written by a 512-byte block size.
Correct.
Though there could be differences depending on the model of drive you have, it's very likely 512B LBAs:
http://www.samsung.com/global/... [samsung.com]
Since you said you have a samsung, you can run the Samsung Magician 4.0 and it'll do the conversions for you (assuming you're running Windows or Mac; AFAIK, Magician isn't available for Linux).
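If you'd rather do the conversion by hand, it's just the raw value of SMART attribute 241 times the sector size (a sketch with an example value, not output from a real drive):

    # Convert SMART attribute 241 (Total_LBAs_Written) to terabytes.
    # The raw value below is just an example; read your own drive's
    # value with `smartctl -A`. 512 bytes per LBA is the usual
    # assumption for these SATA drives.
    LBA_SIZE = 512
    total_lbas_written = 3_906_250_000   # example raw value

    tb_written = total_lbas_written * LBA_SIZE / 1e12
    print(f"{tb_written:.2f} TB written")   # 2.00 TB for this example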
Even power users don't have much to worry about (Score:3)
I write a lot more to my SSDs than most do because of lots of application installs, playing with audio, etc. 6TB to date on a drive purchased about 20 months ago. Assuming I maintain that rate of writing (3.6TB/year), it would be 13 years before I'd hit 50 TB of writes, on a 512GB drive which can probably take 1PB or more.
Even if you hit it harder than the norm, you still don't hit it that hard. It really has to be used for something like database access or a file server or the like before endurance becomes an issue.
Re: (Score:3)
It really has to be used for something like database access or a file server or the like before endurance becomes an issue.
Even that isn't enough, because the drives in the test are being written essentially 24/7 (with just a little time off for the retention tests), and the drives remaining have been at it for 15 months.
You have to have an insanely busy database or file server to never have any time off from writes.
Re: (Score:2)
>And I am a programmer
Depending on how big the projects you compile are, some of them really do hit the drive pretty hard with small writes. That said, at your current usage it would take 20 years to write 100TB, which even the crappy drives managed before seeing issues, and no one expects spinning disks to last that long.
I have an 840 EVO 256GB myself. On Windows, use of RAPID mode can reduce the number of writes (greatly reducing write amplification); I don't know if OSX provides anything like that. At 220 days of
Re: (Score:2)
Servers, depending on their usage platters, could have a problem
FTFY.
Re: (Score:1)
You expect us to believe that Outlook AND Windows fit on 256GB? Bullshit.
Random failures (Score:4, Interesting)
Great, so now we just need to fix the sudden random failures, where a drive that is 6 months old and showed no signs of degradation completely dies. A coworker of mine just had that happen with a Crucial SSD.
Re: (Score:3)
It's crucial that we find out!
Re:Random failures (Score:4, Informative)
Great, so now we just need to fix the sudden random failures, where a drive that is 6 months old and showed no signs of degradation completely dies.
Just counted: the stack of completely dead SSDs on my workbench is 13 high. I think I've seen one hard drive ever go completely dead. I literally don't understand how the vendors think they can get away with such junk in SSD controllers. I know flash will fail, but that's no reason to hang dead on the SATA bus and not talk to anybody. Admit defeat via SMART and move on.
I don't always use SSDs for journals, but when I do, they're in a RAID configuration. Stay speedy, my friends.
Re: (Score:2)
In the shop I work out of, we have stacks of hundreds of hard drives with bad sectors and a large number that are just dead. We see very few dead SSDs, but then we only use Samsung or Intel drives. Don't use anything else.
Re: (Score:2)
As maligned as the DeathStars were, I never lost any data on them. They always gave signs of their impending doom and lasted long enough to copy the data off. In comparison, I've seen enough SSDs suddenly just stop working, and anything stored on them is simply gone.
Re: (Score:2)
The problem with SSDs is that the data is not written linearly. They have a kind of internal filesystem that maps the visible sectors onto the flash memory.
Newer mechanical HDs are also starting to do this, and SMR drives do it extensively, even more so than SSDs.
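A toy illustration of that remapping layer (a real flash translation layer is vastly more involved, with garbage collection and wear leveling on top):

    # Toy flash translation layer: the logical sectors the OS sees are
    # remapped to whichever physical flash page happens to be free,
    # like a tiny internal filesystem. Real FTLs add garbage collection
    # and wear leveling on top of this.
    class ToyFTL:
        def __init__(self, pages):
            self.mapping = {}               # logical sector -> physical page
            self.free = list(range(pages))
            self.flash = {}                 # physical page -> data

        def write(self, sector, data):
            old = self.mapping.get(sector)
            if old is not None:
                self.free.append(old)       # old page becomes garbage to reclaim
            page = self.free.pop(0)         # always write to a fresh page
            self.flash[page] = data
            self.mapping[sector] = page

        def read(self, sector):
            return self.flash[self.mapping[sector]]

    ftl = ToyFTL(pages=8)
    ftl.write(0, b"hello")
    ftl.write(0, b"world")   # same logical sector, different physical page
    print(ftl.read(0))       # b'world'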
Re: (Score:2)
Was it a Crucial M4? If so, maybe it hit that firmware bug where it craps out after a few thousand hours? There is a firmware update to fix it.
Re: (Score:2)
The SSDs will have lots of regulation on-board because there are very specific voltages required to read and write to Flash memory. They should be just as reliable as USB flash drives and RAM and CPU and video cards and other electrically-sensitive things that require particular voltages to operate.
Drives obsolete by the time the test completes (Score:3, Interesting)
Unfortunately these tests don't say much about the drives you can buy NOW, and write endurance in consumer drives is probably getting worse as geometries shrink and relentless price pressure causes corners to be cut. It's good that the Samsung 840 Pro is holding up so well (its predecessor, the 830, was also ridiculously durable), but it's now been replaced by the 850 Pro, which uses radically new technology (stacked 3D NAND). The Intel 320 was also very durable, so the failure of the 335 doesn't bode well for the idea that newer models should hold up better than older ones.
Write wear isn't everything anyway. Another thing to test is whether the drive bricks if the power fails while it is writing. Better drives have capacitors to ride out this event; consumer drives lack them and can lose data or fail unrecoverably.
Re: (Score:3)
It's good that the Samsung 840 Pro is holding up so well (its predecessor, the 830, was also ridiculously durable), but it's now been replaced by the 850 Pro, which uses radically new technology (stacked 3D NAND).
I suspect the 10 year warranty for the 850 Pro is a good indicator of how long Samsung expects it to last compared to the 840 Pro (which has a 5 year warranty).
Re: (Score:1)
Write wear isn't everything anyway. Another thing to test is whether the drive bricks if the power fails while it is writing. Better drives have capacitors to ride out this event; consumer drives lack them and can lose data or fail unrecoverably.
This is one reason to check that the computer you're using includes capacitors to deal with this event -- so you can use consumer drives and not have to worry about whether they've got built-in protection circuitry.
Any criticism? (Score:2)
Re:Any criticism? (Score:4, Insightful)
The only weakness is that it needs to be repeated on newer SSDs as they hit the market. The results of this test are relevant for drives released back when the experiment started in 2013, less so for drives released now, and even less so for future drives. As the manufacturers realise that the drives last much longer than they are specified to, they'll decide they are overengineered and rework them to wear out sooner. Aside from the obvious cost-cutting benefit, it also keeps the market segmented into various grades between "low-end consumer SSDs" and "high-end enterprise SSDs".
Re: (Score:2)
Or, in the better case, they increase the warranty (and the price) and boast about the warranty.
Don't put too much stock in this... (Score:1)
I think the idea is neat, but nothing meaningful can be said by sampling _one_ of each drive.
Moreover, from what I understand about flash, the more writes you make to a cell, the more quickly those bits tend to rot when left alone.
So being able to overwrite again and again and again isn't particularly important if those worn cells would just forget their contents over a few hours, days, weeks, etc.
I'd much rather have a drive that can take a moderate write load and hold on to my data than an Alzheimer's dis
Re: (Score:2)
Maybe a tiering system would be useful. I've seen some drive arrays that use SSDs for caching, so an SSD that takes a lot of writes and forgets them after a month or two can be good enough in this case, assuming enough ECC to realize the cache data is damaged and to fetch from the spinning platters the bits needed to complete the read. Another example of this would be a write cache on an HBA. That way, the machine could send writes to the SSD cache, the HBA tells the machine the write is complete and then
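A sketch of that read path (all names here are invented for illustration, and the checksum stands in for the ECC mentioned above):

    import hashlib

    # Read-through cache tier: serve from the SSD cache if its checksum
    # still matches, otherwise fall back to the authoritative HDD copy.
    # All names are invented; the checksum stands in for real ECC.
    ssd_cache = {}   # key -> (data, checksum): fast but forgetful tier
    hdd_store = {}   # key -> data: slow but durable tier

    def checksum(data):
        return hashlib.sha256(data).hexdigest()

    def write(key, data):
        hdd_store[key] = data
        ssd_cache[key] = (data, checksum(data))

    def read(key):
        if key in ssd_cache:
            data, stored = ssd_cache[key]
            if checksum(data) == stored:    # cache entry still intact
                return data
            del ssd_cache[key]              # cache rotted; drop it
        data = hdd_store[key]               # fetch from the spinning platters
        ssd_cache[key] = (data, checksum(data))
        return data

    write("block42", b"important bits")
    print(read("block42"))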
from experience, SSD failures not from wear (Score:4, Interesting)
From my experience, most SSD failures come from dead controllers, not wear. Or bad firmware; I'm looking at you, Crucial, with your 5,000-hour bug. Also your weird incompatibilities on the MX100 series.