Endurance Experiment Kills Six SSDs Over 18 Months, 2.4 Petabytes
crookedvulture writes: Slashdot has previously covered The Tech Report's SSD Endurance Experiment, and the final chapter in that series has now been published. The site spent the last 18 months writing data to six consumer-grade SSDs to see how much it would take to burn out their flash. All the drives absorbed hundreds of terabytes without issue, far exceeding the needs of typical PC users. The first one failed after 700TB, while the last survived an astounding 2.4 petabytes. Performance was reasonably consistent throughout the experiment, but failure behavior wasn't. Four of the six provided warning messages before their eventual deaths, but two expired unexpectedly. A couple also suffered uncorrectable errors that could compromise data integrity. They all ended up in a bricked, lifeless state. While the sample size isn't large enough to draw definitive conclusions about specific makes or models, the results suggest the NAND in modern SSDs has more than enough endurance for consumers. They also demonstrate that very ordinary drives are capable of writing mind-boggling amounts of data.
No warning ? (Score:5, Interesting)
The fact that 2 of them died without warning is disappointing. I would rather have a shorter lifetime but a clear indication that the drive is going to die.
Re:No warning ? (Score:5, Insightful)
What pisses me off is that the Intel drive suicided. OK, I can understand that they track writes and shut it down once confidence goes down. I get that. However, the drive should be read-only after that!
If I had a drive that still held my perfect, pristine data, but I could not actually get to it, I would be pissed. What is wrong with going into a read-only mode?
Re: (Score:2)
I presume that the drive treats its firmware as just a special range of blocks/sectors, subject to the same management as everything else. Eventually, you power cycle it, and the bootloader can't find any viable firmware blocks. It then appears bricked. That's the only explanation I see.
Re: (Score:2)
But those sectors shouldn't have seen a write since the factory, so why should they fail?
Re: (Score:2)
Sectors are logical, not physical.
Which means, if GP were correct that the firmware is also stored in main NAND... it's terribly bad design.
No doubt about it, firmware should be on its own physical chip.
Re: (Score:2)
NAND also comes in sectors that are physical. Several logical drive sectors are mapped onto a physical NAND sector.
The physical NAND sectors the firmware is stored on shouldn't be altered once the drive leaves the factory.
Consider, does your BIOS in flash wear out?
Re: (Score:2)
Several logical drive sectors are mapped onto a physical NAND sector.
umm... no...
First, I'm going to assume you are just terminology-ignorant, because NAND comes in BLOCKS composed of PAGES.. so I am going to translate your poor use of "sector" with regards to flash as what you probably really meant.... "page" (if you meant something else, you are even more ignorant... so take this kindness)
A logical drive sector can be mapped to literally any physical page on the flash, and which page a specific logical sector maps to changes over time.
Now why the hell did you open
Re: (Score:2)
You are apparently only aware of one convention. Other documentation speaks of sectors and considers blocks a hard drive thing.
Sounds more like you're butthurt that I called you on saying something silly.
The mapping only changes when a logical block is written OR a sufficient number of logical blocks are invalidated in the sector.
Nobody who wasn't recently kicked in the head by a horse is going to put the drive's firmware somewhere where it will be moved around and re-written. Beyond the many other problems
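For anyone who wants the mapping idea spelled out: here is a toy sketch (in Python, with made-up names and sizes, not any real controller's design) of a flash translation layer, where a logical sector can land on any physical page and gets remapped on every rewrite.

    # Toy flash translation layer (FTL): logical sectors map to physical NAND
    # pages, and the mapping moves every time a sector is rewritten.
    class ToyFTL:
        def __init__(self, num_physical_pages):
            self.l2p = {}                                   # logical sector -> physical page
            self.free_pages = list(range(num_physical_pages))

        def write(self, logical_sector, data):
            new_page = self.free_pages.pop(0)               # always program a fresh page
            old_page = self.l2p.get(logical_sector)
            self.l2p[logical_sector] = new_page             # remap: same sector, new page
            if old_page is not None:
                self.free_pages.append(old_page)            # old page reclaimed by GC later
            # nand_program(new_page, data) would happen here on real hardware

    ftl = ToyFTL(num_physical_pages=1024)
    ftl.write(0, b"boot sector v1")
    ftl.write(0, b"boot sector v2")   # same logical sector, different physical page

Which is exactly why firmware you never want relocated shouldn't live behind that mapping.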
Re: (Score:2)
IIRC that's what it was supposed to do, but it must have had a firmware bug and didn't quite manage it.
Re: (Score:2)
Long term, what's really needed are more sophisticated backup programs than the stuff we have now, since once an SSD fails, it fails for good. Backup programs not just for recovering files, but ones that can handle bare-metal restores and are initiated by the backup device (so malware on the backed-up client can't trash the backup data.)
For desktops, this isn't too bad, because one can buy a NAS, or an external drive at minimum. For laptops, it becomes harder, especially if one factors in robust security measures whil
Re: (Score:2)
Ummmmm. Solid state drives don't actually HAVE heads. RTFA (actually read the first article in the series). The Intel drive counts the bytes written. When it reaches its limit, boom. It goes read-only, but only until the next reboot. Then, it goes dead.
This happens NO MATTER WHAT the state of the spare sectors is.
Re: (Score:2)
SSDs don't have heads. They are solid state: no moving parts.
Re: (Score:2)
Nonsense. In almost all scenarios, losing all your data is worse than risking that some of your data has been silently corrupted - especially when you consider that many file formats will detect a wide range of corruption, as will various checksum systems (and there are a number of file systems that incorporate just such integrity measures, not to mention RAID).
If Intel is so concerned about "correctness", then the most user-friendly reaction would be to "brick" the drive instantly rather than switching to r
Re: (Score:2)
Er. I agree with you on the known failure case, which can be prepared for and alarmed against.
I don't agree that no data is better than possibly corrupted data, unless you're trying to actually run a process on that data with no error checking of what you recovered. The disk should not force you to lose your possibly corrupt data; discarding it should be your call, as part of your process for recovering the disk.
At the point it goes into fail-safe mode, most, if not all of the data is probably just fine a
Re: (Score:2)
It's a clearly defined failure mode, and it failed predictably and exactly as documented.
No one is questioning whether they did or didn't do exactly as they said. What is absolute garbage is the design of the failure mode.
Would you say it's "correct" if your car had a faulty indicator and suddenly vaporized while going down the road, leaving you sliding your backside painfully along the bitumen at 100 km/h, just because that's how the failure mode was designed?
No, the Intel drive is incredibly disappointing.
Re: (Score:3, Informative)
This is simply not true. NAND wear degradation causes cells to stop holding charge, which means data consistency is not guaranteed once a cell is degraded.
Intel's limit may be artificial, but there's logic behind the decision.
It's an irrelevant point anyway. Intel's documented behavior says pretty much that the drive will stop functioning once the wear parameter is exceeded. You can tell at any time what that parameter is, and when it will be exceeded. Your failure to act on that information will cause
Re: (Score:3)
Right, but what of the cells that aren't yet failed? Why not allow them to be read to see what you can salvage? The ones that degraded and lost data will fail a checksum. With any luck, the most critical data is still there and passing checksum unless the drive sabotages you by not letting you even try to read it.
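To make that concrete, a salvage pass over a drive that is still readable (but no longer trustworthy) could look something like this rough sketch; the device path, block size, and sidecar digest list are assumptions for illustration, not any real recovery tool.

    # Hypothetical salvage pass: copy whatever blocks still read back and verify,
    # and zero-fill the rest so you at least know what was lost.
    import hashlib

    BLOCK = 1024 * 1024  # 1 MiB

    def salvage(device_path, expected_digests, out_path):
        good, bad = 0, 0
        with open(device_path, "rb") as src, open(out_path, "wb") as dst:
            for i, expected in enumerate(expected_digests):
                src.seek(i * BLOCK)
                try:
                    data = src.read(BLOCK)
                except OSError:                      # unreadable region: keep going
                    data = b""
                if data and hashlib.sha256(data).hexdigest() == expected:
                    dst.write(data)
                    good += 1
                else:
                    dst.write(b"\x00" * BLOCK)       # placeholder for a lost/corrupt block
                    bad += 1
        return good, bad

None of that is possible if the drive refuses to answer reads at all.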
Re: (Score:2)
These things are still made with a certain amount of built in obsolescence in mind. Make them too durable, and you quickly saturate the market, resulting in a big price collapse. That's a big no-no.
According to the wear monitor on my laptop SSD, it's going to expire around the year 2200. I suspect I'll have replaced it with a bigger drive well before then.
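For the curious, that kind of estimate is just linear extrapolation from the wear indicator; a back-of-the-envelope version, with every number invented for illustration, looks like this:

    # Rough lifetime extrapolation from a SMART-style wear indicator.
    # All figures below are made-up examples, not from any specific drive.
    wear_used_percent = 2          # indicator has dropped from 100 to 98 ...
    years_in_service = 3.7         # ... after this many years of use

    years_per_percent = years_in_service / wear_used_percent
    years_remaining = (100 - wear_used_percent) * years_per_percent
    print(f"Estimated remaining life: {years_remaining:.0f} years")   # ~181 years at this rate

The point stands either way: the drive will be obsolete long before the NAND wears out.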
Re: (Score:2)
These things are still made with a certain amount of built in obsolescence in mind. Make them too durable, and you quickly saturate the market, resulting in a big price collapse. That's a big no-no.
There's nothing wrong with "price collapse" in tech markets. If it didn't happen to some degree all the time, it would still cost $2000 for a PC with 2MB RAM and a 10MB hard drive.
The only time "price collapse" is a problem is when it leads to monopoly. Which does not usually happen, since tech companies are not generally reliant on a single product.
"Price collapse" is one of those many largely-failed Keynesian concepts.
Re: (Score:2)
A price collapse may or may not be good for consumers, depending on the consequences. (If it's from a glut that's not going to be permanent, and it puts manufacturers out of business, the remaining manufacturers may not have sufficient capacity when demand rebounds, and that's bad. If it doesn't drive manufacturers out of business, but forces them to cut prices and become more efficient, it's an overall win.)
A price collapse is not what the vendor wants, though, no matter what, and so a vendor will try
Re: (Score:2)
How often do we need to repeat this mantra to people?
BACKUP BACKUP BACKUP BACKUP BACKUP BACKUP BACKUP BACKUP BACKUP !!!!!!!!!
Backups are necessary and proper, but they won't help you recover any data that was written to the SSD recently (i.e. after the most recent backup was made). Not to mention that more than one person has found out the hard way (i.e. post-drive-crash) that their backup system had not been working correctly for some time.
Therefore it would still be useful if you could read your data off the failed SSD. In fact, I seem to recall that that was the one of the touted benefits of SSD technology -- that when it fa
Re: (Score:2)
You can try adding "and it isn't backed up unless you're sure you can recover it" to the mantra, but that's too complex to be easily absorbed by the consumer, who really wouldn't know how to check for a failing backup system.
Much better to provide an automatic backup system and allow the data to be recovered from as many places as possible.
Re: (Score:3)
Not quite as often as you think, if this is just an excuse to regard an accessible but possibly degraded primary copy as worse than having no backup of your backup at all.
Having in my possession a ZFS backup with some corrupt nodes, I could still have a provable hash from the Merkle tree of the content desired, which I could recover from a corrupt primary copy (i.e. the live drive itself) with no concern whatsoever about the corruption, so long as the ch
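The mechanics are simple enough to sketch in a few lines (generic Python, not ZFS's actual on-disk format): if the backup keeps a content hash for every block, a block pulled off a flaky primary copy can be trusted the instant it verifies, regardless of how degraded the source drive is.

    # Trust a block recovered from a degraded drive only if it matches the hash
    # recorded by the backup system (e.g. a node in a Merkle tree).
    import hashlib

    def recovered_block_is_trustworthy(recovered_bytes, known_sha256_hex):
        return hashlib.sha256(recovered_bytes).hexdigest() == known_sha256_hex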
Re: (Score:2, Informative)
The fact that 2 of them died without warning is disappointing. I would rather have a shorter lifetime but a clear indication that the drive is going to die.
The fact that it is a drive means it is going to fail.
You have been warned.
Re: (Score:2)
The fact that 2 of them died without warning is disappointing. I would rather have a shorter lifetime but a clear indication that the drive is going to die.
What I found disturbing is that TFA claims that the drives intentionally bricked themselves, rather than going into read-only mode. Why would they be designed to do that? I always assumed that even if a SSD died, I would still be able to recover the data. Apparently that isn't true.
Re: (Score:2)
That test found that Samsung SSDs are the best, but our own experience is that they are not good.
Had a couple of 840 Pros (512GB) in a RAID1 array. After some time the array would slow down. One drive became slower than the other (fio reports 7k random write IOPS on one drive and the number is constant, but the other drive gets 6k IOPS and the number sometimes drops to 400).
OK, what about the 850 Pro (1TB)? Well, after some time the array became very slow because one drive became slow.
No more Samsung SSDs for us. Obvio
Re: (Score:2)
My concern is that they brick. I understand that a newly written sector may fail miserably, and that if it cannot find a functional empty sector it may lose that sector entirely, but why can't it allow the existing successfully written sectors to be read off in a read-only mode?
Re:No warning ? (Score:4, Insightful)
I think better backup strategies apply here. If someone steals your computer, you get just as much warning as you did from the SSD. Just saying.
Ugly intel failure mode. (Score:5, Insightful)
Talk about your planned obsolescence - not a single sector reallocation registered, but the firmware counter says its write tolerance is reached, so it kills itself. I suppose it's nice that it switches to read-only mode when it dies, except for the fact that it bricks itself entirely after a power cycle. I mean come on - if it's my OS and/or paging drive then switching to read-only mode is going to kill the OS almost immediately, and there goes my one chance at data recovery. Why not just leave it in permanent read-only mode instead? Sure it's useless for most applications, but at least I can recover my data at my leisure.
Why does a drive commit suicide when writes fail? (Score:5, Insightful)
Who thought this was a good idea? If the drive thinks future writes are unstable, good for it to go into read only mode. But to then commit suicide on the next reboot? What if I want to take one final backup, and I lose power?
Re: (Score:2)
Some additional info from an earlier article [techreport.com]:
According to Intel, this end-of-life behavior generally matches what's supposed to happen. The write errors suggest the 335 Series had entered read-only mode. When the power is cycled in this state, a sort of self-destruct mechanism is triggered, rendering the drive unresponsive. Intel really doesn't want its client SSDs to be used after the flash has exceeded its lifetime spec. The firm's enterprise drives are designed to remain in logical disable mode after the MWI bottoms out, regardless of whether the power is cycled. Those server-focused SSDs will still brick themselves if data integrity can't be verified, though.
SMART functionality is supposed to persist in logical disable mode, so it's unclear what happened to our test subject there. Intel says attempting writes in the read-only state could cause problems, so the fact that Anvil kept trying to push data onto the drive may have been a factor.
All things considered, the 335 Series died in a reasonably graceful, predictable manner. SMART warnings popped up long before write errors occurred, providing plenty of time—and additional write headroom—for users to prepare.
So, it sounds like this is the intended behavior for *enterprise* drives. It may not be the same for *consumer* drives, but that's a bit unclear.
While it may make you feel better if consumer SSD drives would go into a permanent read-only mode, it seems extremely unlikely that a typical consumer would ever actually reach this point in an SSD's life at all. So, I'm not really losing sleep that my own Intel SSD drives are going to brick themselves, when at a typic
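Reading the quoted description as a state machine, the documented client-drive behavior seems to boil down to something like this simplified model (inferred from the article's wording, not from Intel's firmware):

    # Simplified model of the end-of-life behavior described above:
    # healthy -> read_only when the media wearout indicator (MWI) bottoms out,
    # read_only -> bricked on the next power cycle.
    class ClientSSDModel:
        def __init__(self):
            self.state = "healthy"

        def update_mwi(self, mwi):
            if self.state == "healthy" and mwi <= 0:
                self.state = "read_only"      # writes refused, reads still work

        def power_cycle(self):
            if self.state == "read_only":
                self.state = "bricked"        # drive no longer responds at all

    drive = ClientSSDModel()
    drive.update_mwi(0)       # wear spec exceeded -> read-only
    drive.power_cycle()       # next boot -> bricked
    print(drive.state)        # "bricked"

The enterprise drives, per the quote, simply never take that last transition on a power cycle.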
Seat belt switch (Score:3)
Re: (Score:2)
If I trust the suicide, I suppose the upside would be that the drive can be safely tossed without worrying about the data on it.
With the proper equipment, I'm sure the data can be recovered. Still best to thoroughly destroy the drive.
Not particularly useful, unfortunately (Score:5, Interesting)
As SSD cells wear, the problem is that they hold charge for less time. When new, a cell will hold its charge for years, but as the SSD wears, its retention time declines.
Consequently, continuous write tests will continue to report "all good" with a drive that is useless in practice, because while the continuous write will re-write a particular cell once every few hours, it might only hold a charge for a few days - meaning if you turned it off for even a day or so, you'd suffer serious data loss.
SSDs are amazing but you definitely can't carry conventional wisdom from HDDs over.
Re:Not particularly useful, unfortunately (Score:5, Insightful)
PS: You DO have backups.... right?
Re: (Score:2)
PS: You DO have backups.... right?
That's what the other SSDs are for.
Oh, wait.
Re: (Score:2)
Re:Not particularly useful, unfortunately (Score:4, Insightful)
Conventional HDDs (and other magnetic storage) can suffer from random loss of magnetization. Any permanent magnet will slowly weaken over time, and the nature of magnetic media - especially high density - means neighboring domains can alter a weakened bit more easily.
The solution in both cases: Rewrite the data periodically to keep it "fresh" and include error correction to help mitigate minor losses.
=Smidge=
Re: (Score:2)
The solution is actually to read and check the ECC. If you get an error, re-construct with the ECC and reallocate the sector.
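That read-check-reallocate cycle is essentially a scrub pass. A generic sketch of the idea (the decode_with_ecc and reallocate callables here are placeholders standing in for controller internals, not a real drive API):

    # Generic scrub pass: read each sector, verify/correct with ECC, and rewrite
    # anything that needed correction to a fresh location before it degrades further.
    def scrub(sectors, read_raw, decode_with_ecc, reallocate):
        for sector in sectors:
            raw = read_raw(sector)
            data, corrected, uncorrectable = decode_with_ecc(raw)
            if uncorrectable:
                print(f"sector {sector}: unrecoverable, flagging for the host")
            elif corrected:
                reallocate(sector, data)   # refresh the data while it is still recoverable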
Re: (Score:2)
"Unpowered retention tests were performed after 300TB, 600TB, 1PB, 1.5PB, and 2PB of writes. The durations varied, but the drives were left unplugged for at least a week each time."
Re: (Score:2)
How is it not useful?
According to the tester:
Unpowered retention tests were performed after 300TB, 600TB, 1PB, 1.5PB, and 2PB of writes. The durations varied, but the drives were left unplugged for at least a week each time.
Let's Fix That (Score:2)
the results suggest the NAND in modern SSDs has more than enough endurance for consumers
"Challenge accepted." - some guy trying to invent octo-level-cell flash
And where were the tests of spinners? (Score:2)
Re: (Score:2)
Figuring out the resonant frequency of the platters and moving the heads back and forth at that frequency is much more fun.
I know it worked back in the 5.25" days. The resonant frequency of the platters should be higher with smaller drives, but the heads should move fast enough.
Re: (Score:3)
I've owned Tivos since 2002 and I've only had one blow a drive, a series 3 I bought from WeakKnees with an upgraded disk in it. The drive didn't fail spectacularly or even completely, we just had a ton of playback problems and recordings that grew increasingly unreliable. That Tivo was bought in 2007 and the drive was replaced last fall.
The original Tivo I bought in 2002 finally got tossed without a drive failure when Comcast gave up on analog SD channels a couple of years ago. I think this was after bro
Re: (Score:2)
That's just crappy design. It should be able to hold an hour of programming in RAM and so never touch the disk unless you actually use the rewind feature.
Re: (Score:2)
RAM isn't all that expensive these days. As for the rest, like I said, crappy design.
I'd like a mix of drives on my next box (Score:2)
I'd like a mix of drives on my next box. A moderate "traditional" spinning oxide 1TB drive with a lot of cache for the primary boot, swap, and home directories, and an SSD mounted as my project workspace under my home directory. The work directory is where I do 99% of my writes, producing roughly 3GB for one particular project in about an hour's time.
My existing drive on my main box has survived a god-awful number of writes, despite its 1TB size. My work has been emphatically I/O bound for the past month o
Re: (Score:2)
I think what you'd really want is something where the SSD takes all writes, mirrors them to HDDs and caches all reads to SSD, but can read AND write to the HDDs if there is a loss of SSDs.
Bonus points for actual SAN-like behavior, where the total system capacity is actually measured by the HDD capacity and the system is capable of sane behavior, like redirecting writes to HDD if the SSD write cache overflows and preserving some portion of high-count read cache blocks so that unusually large reads don't dest
Re: (Score:2)
Yeah, but all the critical project data gets imaged to GitHub and to another machine, so there is no need to back it up from the SSD to a platter. When it only takes 10 minutes to restore data from offsite, there isn't much point backing it up to multiple devices locally (unless they're on different machines, of course.)
Re: (Score:2)
I still think there's so much performance advantage to be gained from the OS and apps on the SSD that the only real purpose of spinning rust is capacity and whatever reliability it provides over SSDs. The torture test seems to indicate that the reliability factor isn't that much to worry about.
Intel Bricks the device once it hits the limit!? (Score:2)
Instead, just set the drive to always boot in read-only mode, with secure erase being the only other allowed command. Then someone can recover their data and wipe the drive for good.
Intel doesn't have confidence in the drive at that point, so the 335 Series is designed to shift into read-only mode and then to brick itself when the power
What of other more disastrous modes of failure? (Score:2)
This experiment only documents the survivability of the NAND Flash itself, really. I've had two consumer SSDs and at least one SD fail completely for other reasons; they became completely un-usable, not just un-writable. In the case of the SSDs at least, I was told it was due to internal controller failure, meaning the NAND itself was fine but the circuits to control and access it were trashed. I suppose a platter-drive analog to that would be having the platters in mint condition with all data intact bu
Re: (Score:2)
That is good to know (and first I'm hearing of it). Thanks!
Then suddenly.... (Score:2)
Long story short (Score:2)
Re:Swap drive now? (Score:5, Insightful)
Coming from an AC, I would chalk this up to a joke or trolling. But.... on the off chance that you are serious, I will bite.
Yes, you COULD use an SSD as swap, but it will not help THAT much. An SSD is much faster than a mechanical disk, but still a couple of orders of magnitude slower than real RAM. That upgrade would be like the difference between jogging with 50 pounds on your back, and then lowering it to 35 pounds. Yes, it will make a difference and make things better, but how much better to have no weight at all?
Just get more RAM. If your system cannot hold more RAM, then get a new mobo. If you regularly go over 16 GB of actual RAM in use, even going to a slower processor will be an improvement if you stop swapping. Hitting the swap file is a great way to make a fast processor do nothing for a while.
Re: (Score:2, Interesting)
I'm actually not convinced it's a terrible idea. It seems to me that when swapping really gets slow, its because of thrashing the hard disk. SSD may be several orders of magnitude slower than memory, but it's also several orders of magnitude faster than a thrashing HDD. It seems like it might be a reasonable tradeoff to make. New memory might require a new motherboard (and maybe a new processor), costing several hundred dollars, a not-insignificant amount of effort to swap it out, possibilities of new bugs
Re: (Score:3)
If all programs and operating systems were perfect, increasing RAM would help more than faster swap. But things aren't perfect and operating systems like to talk to swap 'just because'.
On my aging Mac Pro with 32 GB RAM I put a 60 GB SSD (left over from a laptop upgrade) in as swap. Seems to make a modest difference with Premiere, After Effects and Vue, especially renders. But it really isn't all that noticeable. Not as noticeable as increasing the RAM. But if you have an extra drive caddy and an extr
Re: (Score:2)
As swap, it is nowhere near as good as RAM, but it has one advantage -- SSDs excel at random writes, which is what swap is usually doing, so just because of this, it is better than a regular disk. To boot, if one has the bay for it in a desktop, it might just be worth tossing in a 100-200 gig drive and using it for swap, as well as possibly moving the OS's partition to it, although it is good to have a lot of free pages on an SSD to wear-level a swapfile.
Re: (Score:2)
Also, an SSD is much cheaper per gigabyte than RAM, and makes regular disk access much faster. So for cost-benefit analysis, upgrade to an SSD before trying to get a huge amount of RAM.
Re: (Score:2)
That was the whole idea behind Microsoft's ReadyBoost, which allowed you to plug in a USB (!) thumbdrive, and if Windows deemed it fast enough, it would use it as swap space rather than (or in addition to) the hard drive. My experience is that it actually worked somewhat well, given a laptop with a 5400RPM drive and a hardware limitation of 3GB of RAM. Though I eventually ditched it when I replaced the HDD with an SSD, which made a tremendous difference.
Interestingly, ReadyBoost won't let you use an SSD hooked
Re: (Score:2)
Okay, those numbers you quoted are very arbitrary; I'd like to see anything to back them up. The near-instantaneous seek time of an SSD compared to a mechanical disk ought to be a major factor when it comes to swap performance, far more so than throughput. In any case, there are many SSD-only systems now, in which case the swap space is on the SSD whether you like it or not, so it's certainly not an unreasonable thing to try.
Re:Swap drive now? (Score:5, Informative)
SATA revision 3.0 = 6 Gbit/s
DDR3-1600 = 12800 MB/s
"MB" = megabytes, so multiply by 8 for bits per second
DDR3-1600 = 102400 Mbit/s
DDR3-1600 = 102.4 Gbit/s
So, the peak bandwidth is about 17 times faster!
Now, let's look at latency.
Typical DDR RAM latency is around 10 ns (give or take, but that is an average number)
Typical SSD latency is around 0.1 us, which is around 100 ns. About ten times more.
One more thing here about these numbers.... An SSD is **NOT** RAM. If you page, you have to get the data FROM the SSD and put it INTO your RAM. From there, the RAM must be read again. So, even IF your SSD were exactly the same speed as your RAM, it will still be slower because it must be copied into RAM first before it can be used.
As to whether it is unreasonable, that depends. It will not cost much to try, but still a rather bad idea if you do a LOT of swapping.
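The bandwidth comparison above reduces to a couple of lines of arithmetic (same figures as quoted; real hardware will vary, and the latency gap is a separate matter):

    # Peak-bandwidth comparison using the figures quoted above.
    sata3_gbit_s = 6.0
    ddr3_1600_gbit_s = 12800 * 8 / 1000       # 12800 MB/s -> 102.4 Gbit/s
    print(ddr3_1600_gbit_s / sata3_gbit_s)    # ~17x, ignoring latency and the copy into RAM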
Re: (Score:3)
Hmm, wikipedia suggests that a typical SSD access time is actually about 10us, so about 1,000x slower than DDR RAM, not 10x
However, typical HDD access times are around 10ms, or about 1,000x slower still.
So while an SSD page file will be much slower than RAM, it will also be much faster than an HDD. Moreover, page file access patterns typically involve megabyte-sized block writes and kilobyte-sized (single page) random reads - which play directly to the strengths of an SSD.
Certainly more RAM will potentially gr
Re: (Score:2)
In any case, there are many SSD-only systems now, in which case the swap space is on the SSD whether you like it or not, so it's certainly not an unreasonable thing to try.
The software that comes with my Samsung disables the Windows swapfile if you want it to. Since I have plenty of RAM, that's okay with me.
Re: (Score:2)
You also have issues with memory fragmentation. If there is not a large enough contiguous free range of memory addresses, the OS can page out
Re: (Score:2)
You haven't actually tried it, have you? Putting a swap file on a SSD instead of HDD helps tremendously.
It's
Re: (Score:2)
Yes, swapping is better on a SSD. But, it is MUCH MUCH better to not swap at all. That is my point. If you have to have swap, you are better off just buying more RAM.
Re: (Score:2)
No, really, I've done it. Loaded up a 4GB ultrabook with lots of applications and Chrome tabs and a couple of VirtualBox CentOS instances, over 6GB in active use. Switching apps initially took a couple of seconds as it settled down to a realistic working set, but after that you couldn't tell that it was swapping at all.
I've done it on spinning disk too, of course, and that couple of seconds was closer to a minute.
As long as you're not actively thrashing - as long as your working set still fits in RAM
Re: (Score:2)
If you can't get more RAM (especially with the trend in newer laptops being to have soldered-in chips), buy as large an SSD as possible that you can dedicate to swap. The reason is that this gives the drive more cells to wear-level the swapfile writes over, prolonging the drive's life.
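Rough numbers show why the bigger drive lasts longer; every figure here is a made-up assumption purely to illustrate the scaling:

    # Illustrative endurance math for a dedicated swap SSD (all numbers assumed).
    pe_cycles = 3000                 # assumed program/erase cycles per cell
    swap_writes_gb_per_day = 20      # assumed daily swap traffic

    for capacity_gb in (64, 256):
        write_budget_gb = capacity_gb * pe_cycles            # ignores write amplification
        years = write_budget_gb / swap_writes_gb_per_day / 365
        print(f"{capacity_gb} GB drive: ~{years:.0f} years of swap at this rate")

Four times the capacity means four times the write budget for the same workload.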
Re: (Score:2)
I have been using a Samsung 840 (not Pro) 120GB SSD as a disk drive with a 3GB swap partition since May 2013. It is my work computer so it sees a lot of action (since my work laptop only has 4GB of RAM it ends up using a lot of swap), and the SSD has not failed on me yet. I can corroborate what some other people are saying here: it still gets dog slow when I need to use the swap, but I did not compare with disk-based swap.
Re: (Score:2)
Yes. I've done exactly this, on both cheap ultrabooks with 4GB ram and huge Linux servers with 512GB of RAM. (We have a 2.5TB Redis cluster that was running out of space waiting for additional nodes to be commissioned.)
It works. It works well. It's not a panacea, but it's an enormous improvement over swapping to spinning disk. Night and day.
Re: (Score:2)
Making your page file large enough could reduce writes, or you could just get enough memory. I do find it interes
Re: (Score:2)
Computers with memory leaks?
Re: My first SSD died (Score:2)
I think that is a fair assessment.
Personally I have a budget ADATA SX900 128GB SSD in my primary workstation as the primary disk. To store all of my family's media, backup files, etc. I have 3x2TB HDDs:
- one for /home, BitTorrent Sync shares/folders (laptop, my phone, wife's phone), and misc non-media files too big for the primary SSD
- one for media files, also exported as an NFS share across the subnet (music, movies, ebooks, etc.)
- one for backups; daily rsync of
- one to backup the aforementioned backup drive; cronjob set t
Re: (Score:2)
"Send us your encryption keys so we can properly secure your prize"
Re:My first SSD died (Score:5, Insightful)
Re: (Score:2)
Re:My first SSD died (Score:5, Informative)
Re: (Score:3)
Where did you get those numbers? I've worked for a manufacturer of computer retail products such as OCZ, and knowing the market, I know that 10% would have killed the business.
re: 10% would have killed the business (Score:5, Informative)
Funny you should mention that:
http://arstechnica.com/gadgets/2013/11/once-great-ssd-manufacturer-ocz-filing-for-bankruptcy/
Re: (Score:2)
I know that 10% would have killed the business.
You don't say :-)
Their SSDs had excellent performance, and were excellent value for money. And then they went bankrupt.
Re: (Score:2)
In the Pentium 4 days they had some of the best deals on low-latency 2-2-2-5 RAM after BH-5 ceased to exist. And since then, I've bought a number of great bang/buck PSUs and an SSD from OCZ as well. And did I mention none of these components ever failed before I retired them naturally? I've also in that time span had a Gigabyte motherboard, an Asus motherboard, an Intel CPU, and two Western Digital hard drives fry.
To each their own. =)
Re: (Score:2)
Citation needed.
Would you ask the same if I told you the sky was blue? OCZ had an industry-wide reputation for incredibly high failure rates and poor customer service, and that cost the company dearly, resulting in their bankruptcy despite their drives offering excellent value for money and performance. In some cases it wouldn't even be data loss. My Vertex 3 suffered from several computer-locking issues related to the garbage firmware on their drives.
I consider myself lucky that I have a surviving OCZ drive, just like I had an
Re:My first SSD died (Score:5, Funny)
I haven't gone back to hard drives since I tried an IBM Deskstar. Windows 7 makes for a lot of floppy-swapping, but at least my data is safe.
Re: (Score:2)
Yeah OCZ had a string of shitty SSDs. Pretty much a thing of the past starting with the Intel G2 Postville, Samsung 470/830 and Crucial M4. Since then it's been smooth sailing if you stuck to the "premium" brands, as well as most cheapo brands. It's about time to give it another shot - go for something like a Samsung 850 Evo or 850 Pro and you'll be fine. Or, if you want to be extra careful, an 840 Pro, as it's been on the market for a while now.
Re: (Score:2)
I have the 1TB Samsung 850 Pro, and I'm not giving it up for anything. Any new laptop I buy from now on will have an SSD in it. It's the biggest performance improvement I've ever had, except that time when I installed my first graphics card :)
Re: (Score:2)
they don't spin much at all