Samsung SSD 840 EVO 250GB & 1TB TLC NAND Drives Tested 156
MojoKid writes "Samsung has been aggressively bolstering its solid state drive line-up for the last couple of years. While some of Samsung's earlier drives may not have particularly stood out versus the competition at the time, the company's more recent 830 and 840 series of solid state drives have been solid, both in terms of value and overall performance. Samsung's latest consumer-class solid state drive is the just-announced 840 EVO series. As the name suggests, the SSD 840 EVO series of drives is an evolution of the Samsung 840 series. These drives use the latest TLC NAND flash to come out of Samsung's fab, along with an updated controller, and also feature some interesting software called RAPID (Real-time Accelerated Processing of IO Data) that can significantly impact performance. Samsung's new SSD 840 EVO series SSDs performed well throughout a battery of benchmarks, whether using synthetic benchmarks, trace-based tests, or highly compressible or incompressible data. At around $0.65 to $0.76 per GB, they're competitively priced, relatively speaking, as well."
Call me old fashion (Score:3, Interesting)
How many effective READ/WRITE cycles can the chips in an SSD perform before they start degrading?
Has there been any comparison made between the reliability (e.g. read/write cycles) of old-fashioned spinning-platter HDDs versus that of SSDs?
Re: (Score:2)
Ok, you're old fashioned.
This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...
--Q
Re: (Score:3)
The question is still relevant. Manufacturers talk about erase cycles, but are there any massive-scale studies done by a third party on SSD failure modes?
Re: (Score:2, Informative)
Wearout is not a significant failure mode. Nearly all failures are due to non-wearout effects such as firmware bugs and I/O circuit marginality.
Re: (Score:2)
Hence we want to see a study that compares overall failure rates to old-fashioned drives, taking all failure modes into account.
I would like to see some evidence that SSDs are more reliable.
Hot vs Crazy (Score:5, Informative)
Here's the thing. SSDs are now more reliable than when this guy logged this report. [codinghorror.com]
But they are still maybe not as steady-Eddie as a good-quality HDD. But we still want them, because having an SSD boot drive changes the whole computing experience due to their awesome speed. And since we are good about backups (are we not?) we can relax as we ride the smokin'-fast SSD roller coaster. SSD or HDD, what's the problem if we have data security? Both are gonna FAIL. So what if Miss SSD stabs me for no good reason? It was a helluva ride, Bro. And well worth the stitches. I do wish SLC NAND was not priced out of reach, but hey, when it comes to hotness we take what we can get. Right?
Okay. This is Slashdot; we get no hotness... no hotness at all. No no no hotness. It's pathetic really. ....
Re: (Score:3)
Just don't buy OCZ crap?
http://www.behardware.com/articles/881-7/components-returns-rates-7.html [behardware.com]
http://www.behardware.com/articles/881-6/components-returns-rates-7.html [behardware.com]
It's not a study but it's good enough for me.
Re: (Score:3)
The problem with large-scale studies on this is that failures take too long to happen to actually study. You need to study real-world usage patterns, and in the real world it takes decades before the flash actually wears to death. Controller failure (which is possible with HDDs too) will generally happen long before the flash becomes unwritable.
Re:Call me old fashion (Score:5, Informative)
It depends on what you use it for. I managed to wear out an Intel X25-M 160GB SSD a few years ago by hitting the 14TB re-write limit.
Modern SSDs do a lot of compression and de-duplication to reduce the amount of data they write. If your data doesn't compress or de-duplicate well (e.g. video, images) the drive will wear out a lot faster. I think what did it for me was building large databases of map tiles stored in PNG format. Intel provide a handy utility that tells you how much data has been written to your drive, and mine reached the limit in about 18 months so it had to be replaced under warranty.
Re:Call me old fashion (Score:5, Insightful)
Intel provide a handy utility that tells you how much data has been written to your drive, and mine reached the limit in about 18 months so it had to be replaced under warranty.
You were (amplified?) writing 32.8 GB per day, on average.
Clearly you will run into SSD erase-limit problems at such a rate, but such workloads normally turn out to not be tasks that actually benefit from an SSD to begin with (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you)
You were either very clever and knew you would hit the limit and get a free replacement, or very foolish and squandered the lifetime of an expensive device when a cheap device would have worked.
In any event, in general the larger the SSD, the longer its erase-cycle lifetime will be. For a particular flash process it's a completely linear 1:1 relationship, where twice the size buys twice as many block erases (a 320GB SSD on the same process would have lasted twice as long as your 160GB SSD with that workload)
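Here's that arithmetic as a quick Python sketch; the 32.8 GB/day and 14 TB figures come from this thread, and the "double the capacity" line just illustrates the linear-scaling claim, nothing vendor-specific:

```python
# Quick sanity check of the arithmetic above (decimal units throughout).
def avg_throughput_kb_s(gb_per_day):
    """Average write rate implied by a steady daily write volume."""
    return gb_per_day * 1e6 / 86400  # GB/day -> KB/s

def days_to_limit(limit_tb, gb_per_day):
    """Days to exhaust a host-write limit at that daily rate."""
    return limit_tb * 1000 / gb_per_day

print(round(avg_throughput_kb_s(32.8)))    # ~380 KB/s, as quoted
print(round(days_to_limit(14, 32.8)))      # ~427 days to burn through 14 TB
print(round(days_to_limit(2 * 14, 32.8)))  # double the capacity: ~854 days
```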
Re: (Score:2)
Clearly you will run into SSD erase-limit problems at such a rate, but such workloads normally turn out to not be tasks that actually benefit from an SSD to begin with (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you)
Actually, most devices will survive several years at such a rate. GP was unlucky to see failures quite
Re:Call me old fashion (Score:5, Insightful)
(32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you)
That's an odd way to look at it. You assume that GP spreads out his writes evenly over 24h, and has no desire to speed things up.
Re:Call me old fashion (Score:4, Informative)
In my case having an SSD made a huge impact. I was using offline maps of a wide area built from PNG tiles in an SQLite database with RMaps on Android. Compiling the databases was much faster with an SSD. I was doing it interactively, so performance mattered.
I can only tell you what I experienced. I installed the drive and I didn't think about it wearing out, just carried on as normal. The Intel tool said that it had written 14TB of data and sure enough writes were failing to the point where it corrupted the OS and I had to re-install.
I was using Windows 7 x64, done as a fresh install on the drive when I built that PC. I made sure defragmentation was disabled.
I'm now wondering if the Intel tool doesn't count bytes written but instead is some kind of estimate based on the amount of available write capacity left on the drive. I wasn't monitoring it constantly either so perhaps it just jumped up to 14TB when it noticed that writes were failing and free space had dropped to zero.
It was a non-scientific test, YMMV etc etc.
Re: (Score:2)
That is only true for SandForce based drives as the tech behind it is LSI's "secret sauce". Samsung, Marvell, and Toshiba do not do any kind of compression or dedupe; they write out on a 1:1 basis.
The latter group could probably create their own compression and dedupe tech if they really desired to, but it's a performance tradeoff rather than something that has clear and consistent gains. SandForce performance is
Only some do (Score:3)
New Intel drives do, as they use the SandForce chipset. However, Samsung drives don't. Samsung makes their own controller, and they don't mess with compression. All writes are equal.
Also, 14TB sounds a little low for a write limit. MLC drives, as the X25-M was, are generally spec'd at 3000-5000 P/E cycles. It should actually be higher, since that is the spec for 20nm-class flash and the X25-M was 50nm flash. Even assuming 1000, and assuming a write amplification factor of 3 (it usually won't be near that high), you
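For anyone redoing that estimate, a quick sketch; the 1000-cycle figure and the write-amplification factor of 3 are the deliberately conservative assumptions above, not Intel's spec:

```python
# Rough endurance: host TB you can write before the flash wears out.
def host_write_limit_tb(capacity_gb, pe_cycles, write_amp):
    """Capacity times rated P/E cycles, divided by write amplification, in TB."""
    return capacity_gb * pe_cycles / write_amp / 1000

# 160 GB drive, a deliberately low 1000 cycles, write amplification of 3:
print(round(host_write_limit_tb(160, 1000, 3)))  # ~53 TB, well above the 14 TB quoted
```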
Re: Call me old fashion (Score:2)
Intel provides*, not Intel provide.
Re: (Score:2)
Some reviewers take popular devices and see if they can kill them by bombarding them with writes.
So far, the consensus is that, for typical consumer workloads, the limits on NAND writes are high enough not to be a problem, even with Samsung's TLC NAND.
The same should apply to heavy professional workloads when using decent devices (Samsung 830/840 Pro and similar).
As for servers, the question is a bit more difficult to answer, but even assuming a very bad case, SSDs make sense if they can replace a couple of mechan
Re: (Score:2, Informative)
Well said.
Nothing lasts forever. If a hard-driven SSD lasts 3-4 years, I don't really care if it's used up some large fraction of it's useful lifetime, because I'm going to replace it just like I'd replace a 4 year old spinning disk.
And the replacement will be cheaper and better.
And if the SSD was used to serve mostly static data at the high speed they provide, then it's not going to have used up its write/erase cycle lifetime by then anyway.
Re:Call me old fashion (Score:4, Insightful)
For people that use their computers for regular stuff like browsing the web, streaming video, playing video games, and software development: get the damn SSD -- it's a no-brainer for you folks. You will love it, and it will certainly die of something other than the erase limit long before you approach that limit.
Re: Call me old fashion (Score:3)
Hmmm, I replace my hard drives when I start to see RAID errors. I don't plan to run SSDs in RAID, as the on-board fault tolerance should be OK.
Would be nice to have hard data on expected failures so that I know whether to plan for a three- or a six-year lifespan. I generally replace my main machine on a six-year cycle as I have a lot of expensive software. Looking to upgrade this year when the higher-performance Intel chips launch.
1tb is quite a lot. Probably more than I need in solid state. The price is also quit
Re: (Score:2)
its*, not it's ("it's" == "it is").
Re: (Score:3)
Ok, you're old fashioned.
This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...
--Q
Technically it becomes less and less reliable each time they do a die shrink on the flash. Adding a whole extra bit level makes things worse still. In the early 2000s you were looking at 100'000 P/E cycles, maybe a million for the really good stuff. Good TLC memory seems to be rated around 3000, with a figure of 1000 being widely quoted, and in some cases, less.
Realistically, they've designed the drive to fight tooth and nail to avoid doing rewrites, and in actual fact it looks like they've put a layer o
Re:Call me old fashion (Score:5, Interesting)
Technically it becomes less and less reliable each time they do a die shrink on the flash. Adding a whole extra bit level makes things worse still. In the early 2000s you were looking at 100'000 P/E cycles, maybe a million for the really good stuff. Good TLC memory seems to be rated around 3000, with a figure of 1000 being widely quoted, and in some cases, less.
Let's not neglect the fact that while every die shrink does reduce the erase limit per cell, it also (approximately) linearly increases the number of cells for a given chip area. In other words, for a given die area the erase limit (as measured in bytes, blocks, or cells) doesn't actually change with improving density. What does change is overall storage capacity and price.
When MLC SSDs dropped from ~2000 cycles per cell to ~1000 cycles per cell, their capacities doubled (so erases per device remained about constant) and prices also dropped from ~$3/GB to about ~$1/GB. Now MLC SSDs are around ~600 cycles per cell, their capacities are larger still (again, erases per device remain about constant), and they are selling for ~$0.75/GB (and falling.)
By every meaningful measure these die shrinks improve the technology.
So now let's take it to the (extreme) logical conclusion, where MLC cells have exactly 1 erase cycle (we have a name for this kind of device: WORM, Write Once Read Many.) To compensate, the device capacities would be about 600 times today's current capacities, so in the same size package as today's 256 GB SSDs we would be able to fit a 153 TB SSD WORM drive, and it would cost about $200.
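To put the scaling argument in numbers, a sketch: the cycle counts and $/GB are the rough figures quoted above, while the capacities are only illustrative:

```python
# Per-cell cycles fall with each shrink, but capacity rises, so the total
# erase budget of a device stays roughly constant while $/GB drops.
generations = [
    # (cycles_per_cell, illustrative_capacity_gb, rough_price_per_gb)
    (2000, 128, 3.00),
    (1000, 256, 1.00),
    (600,  512, 0.75),
]
for cycles, cap_gb, price in generations:
    budget_tb = cycles * cap_gb / 1000  # total raw erase budget, in TB
    print(f"{cycles} cycles x {cap_gb} GB -> ~{budget_tb:.0f} TB budget at ${price:.2f}/GB")

# The extreme endpoint described above: 1 cycle per cell, ~600x the density.
print(f"WORM extrapolation: ~{256 * 600 // 1000} TB in today's 256 GB package")
```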
Re: (Score:2)
By every meaningful measure these die shrinks improve the technology.
How about data retention? That is also a function of the cell size, since the more electrons you have in the charge trap, the greater the difference between 1 and 0. Intel's drives, for example, are only guaranteed to hold their contents for three months without power. And when they are powered, they keep the data alive by periodically rewriting it, which I strongly suspect amounts to a P/E cycle. (Not sure about flash, but a lot of memory devices use an 'erase' to set the bits high, and then short out
Re:Call me old fashion (Score:5, Informative)
Yes, many sites have done the maths on such things. The conclusion: "finite life" is not the same thing as "short life". SSDs will, in general, outlast HDDs, and will, in general, die of controller failure (something which affects HDDs too), not flash lifespan.
The numbers for the 840 (which uses the same flash, with the same life span) showed that for the 120GB drive, writing 10GB per day, you would take nearly 12 years to cause the flash to fail. For the 240/480/960 options for the new version you're looking at roughly 23, 47 and 94 years respectively. Given that the average HDD dies after only 4 years (yes yes yes, we all know you have a 20 year old disk that still works, that's a nice anecdote), that's rather bloody good.
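A sketch of how those year figures fall out; the ~1000 P/E cycles and write-amplification factor of 3 here are assumptions chosen to land in the same ballpark, not Samsung's published numbers:

```python
# Years of life at a steady 10 GB of host writes per day.
def lifespan_years(capacity_gb, pe_cycles=1000, write_amp=3.0, gb_per_day=10):
    return capacity_gb * pe_cycles / (write_amp * gb_per_day * 365)

for cap in (120, 240, 480, 960):
    print(cap, round(lifespan_years(cap)))  # ~11, 22, 44, 88: same ballpark as 12/23/47/94
```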
Re: (Score:2)
Re:Call me old fashion (Score:4, Informative)
Yes, they were solved in a firmware patch a long time ago.
Re: (Score:2)
Noting that March is a long time ago in tech terms, and that one of the (incredibly small sample of 2) HDDs suffered issues as well.
Re: (Score:2)
Yes, the 840 did indeed suffer from this, but as I said up the thread, the firmware was patched to fix the issue.
Re: (Score:2)
Better yet, have you got a link to an independent test that confirms whatever Samsung may be claiming?
Re: (Score:3, Informative)
Power failure?
You don't have a UPS or other standby power source available? You know it's 2013, right...
Willing to spend hundreds on an ultra-fast STORAGE device and have no backup power available? Really? Come on...
That's some messed-up priorities there... Spend a hundred bucks on a UPS already.
Then you don't ever have to worry about data corruption. Or the much more common loss of unsaved work due to power failure...
Re: (Score:2)
I guess I'm old school, got used to saving regularly back before UPSes were a consumer product. If I lose power 3 times a year I've lost a total of maybe 15 minutes of work, and it's a rare year where I have three power outages while working. So the insurance of a UPS would cost me $100/ 0.25 = $400/hour. Even if it lasts a decade that comes to $40/hour for saving my ass from some inconvenience. Pretty steep price.
Re: (Score:2)
We get interruptions to our supply less than once every five years. Even at 95% efficiency a UPS would cost a fair bit to run. It would be better if, like hard drives, SSDs were simply designed not to die in the event of unexpected power failure.
Data corruption isn't an issue with modern file systems. I suppose there is loss of work, but the cost/benefit ratio is too low.
UPSes are usually near 100% efficient (Score:2)
Most UPSes these days are line-interactive. That means they are not doing any conversion during normal operation. The line power is directly hooked to the output. They just watch the line level. If the power drops below their threshold, they activate their inverter and start providing power. So while their electronics do use a bit of power while on, it is very little. The cost isn't in operation, it is in purchasing the device and in replacing the batteries.
That aside SSDs don't have problems with it (it was a
Re: (Score:2)
That aside SSDs don't have problems with it (it was a firmware bug, Samsung fixed it) and if your data is important, you probably don't want to rely on your journal to make sure it is intact.
If it was a firmware bug and has been patched then the point is irrelevant, but journals wouldn't have made it better anyway. Because of wear leveling etc, stuff gets written all over an SSD basically at random. And when a block of data gets full the SSD logic will move anything useful in that block to other places on the disk and then erase the original block. If the power failed and a firmware bug meant that pointers weren't saved then you've essentially taken a shotgun to your disk and potentially a whol
Re: (Score:3)
It would be better if, like hard drives, SSDs were simply designed not to die in the event of unexpected power failure.
About 80% of the hard drive failures on our servers over the last few years have been due to power failures. They run fine for years, then the power goes out and they're dead on boot.
So 15k HDDs don't seem to like power failures either.
Re: (Score:2)
What is it with SSD controller failure?
The processor, bridges, memory controller and memory, and all the other chips in a modern computer, can run flat-out for many years without failure.
What makes the controller chips in a SSD fail so often? (And I don't believe you about the controllers in a HDD failing, I've never had one fail, or even known anyone who had one fail, out of hundreds of hard drives run for many years, but I've heard of several SSDs failing in just the few that my friends have tried). Do
Re: (Score:2)
What makes the controller chips in a SSD fail so often?
It isn't the controller chips that are failing (a hardware fault); it's buggy logic in the controller firmware (a software fault) that leaves the data stored within the flash in an incoherent state (garbage in, garbage out.)
Re: (Score:2)
I have, personally, had about eight hard drive controllers fail. In all cases, I was able to replace the controller board on the drive and recover my data. (And generally keep using the drives as scratch disks.) I'd get the boards by buying headcrashed drives.
Most of these were quite some time ago, when I was dealing with a lot of identical, fairly small hard drives. Back when SCSI controllers had an option for drives that took extra-long to spin up. (We called it 'Seacrate' mode.) I've also had my share of
Re: (Score:2)
What makes the controller chips in a SSD fail so often?
Intel and Samsung controllers are pretty reliable. Most SSDs from other vendors use either SandForce or Marvell controllers.
Marvell doesn't provide firmware at all, so vendors have to write their own. Many of these vendors are small companies with little in-house expertise, and what effort they do put in to their firmware is often devoted to focusing on speed (so they appear at the top of review sites' benchmarks) rather than stability.
Sandforce is at t
Re: (Score:2)
Given that the average HDD dies after only 4 years...
Sorry, not even close.
Re: (Score:2)
You have hard facts? The 2007 Google study said about six years for all enterprise and consumer grade magnetic disks; however, for low-utilization disks most fail in only three years (contrary to most people's expectations)
Re: (Score:2)
You have hard facts? The 2007 Google study said about six years for all enterprise and consumer grade magnetic disks; however, for low-utilization disks most fail in only three years (contrary to most people's expectations)
Bullshit. That's not at all what the google study said.
In fact, it said absolutely nothing about the six year timeframe, since it only had 4-5 years of data ;-)
Also most people don't write as much as they think (Score:2)
Usually, once you have your computer set up with your programs, you don't write a ton of data. A few MB per day or so. Samsung drives come with a little utility so you can monitor it.
As a sample data point I reinstalled my system back at the end of March. I keep my OS, apps, games, and user data all on an SSD. I have an HDD just for media and the like (it is a 512GB drive). I play a lot of games and install them pretty freely. In that time, I've written 1.54TB to my drive. So around 11GB per day averaged ou
Re: (Score:2)
I hear people say that, but my first SSD I used as a scratch disk for everything since it was so fast, it burned through the 10k writes/cell in 1.5 years. My current SSD (WD SiliconEdge Blue 128GB) has been treated far more nicely and has been operational for 1 year 10 months, SSDLife indicates it'll die in about 2 years for a total of 3 years 10 months. Granted it's been running almost 24x7 but apart from downloads running to a HDD it's been idle most of that time, unlike a HDD where the bearings wear out
Re: (Score:2)
Why? Once again, actually figuring out the life of these devices shows that TLC devices will live a perfectly acceptable length of time, so why should we use SLC or MLC for consumer devices?
Re: (Score:2)
Actually, no, that maths was done with the assumption that wear levelling would be able to do the average case job for a consumer drive which is reasonably full. If you assume ideal conditions the life span is in fact much longer than that.
Re: (Score:2)
No, I mean this [anandtech.com], which has a detailed explanation of what's going on, and why you shouldn't care.
Re: (Score:2)
This is total bullshit. Every single SSD I have owned has failed within a year.
Damn it, I covered off one anecdote approach, and you found a different one.
For those who don't know how anecdotes work – if your sample size is incredibly small, you cannot draw meaningful results from it. I don't care if you have a single 20 year old disk, 20 six-year-old disks, or 20 SSDs that failed within the first year; none of these gives you a full enough picture to actually tell you what's going on.
The actual stats on return rates show that SSD return rates are around 0.5% of all drives per yea
Re: (Score:2)
I don't buy it when you discount a large number of anecdotal experiences with short-lived SSDs... add up all the anecdotes and eventually you reach reality, when most of the stories match...
The problem is that for each person with anecdotal evidence of SSDs failing, there's 200 other people not commenting about their entirely working SSD.
Re: (Score:2)
This is total bullshit. Every single SSD I have owned has failed within a year.
Stop buying from OCZ and start using reputable brands.
Re: (Score:2)
And as an additional anecdote, because I can... I have 6 SSDs (2 Intel X25-M bought 2008, 1 OCZ Vertex 2, 1 OCZ Vertex 3, 2 Samsung 830s). One Intel died when it was about 90 days old, and Intel replaced it free of charge, free shipping both ways, and they cross-shipped it. All 6 are still running today. The other Intel SSD is in one of my Internet-facing Web/FTP/Email/MySQL servers serving a few thousand clients and has been running 24x7 for 5 years. Last I checked it, it was less than 5% "used up" i
Re: (Score:2)
I guess I must have had exceptionally good HDDs. I only had one HDD failure....
As grandparent said "that is a nice anecdote"... but were you "writing 10GB per day"?
I don't know where the 4 years number comes from... but it's not completely unlikely. Just take a look at the research:
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/da//archive/disk_failures.pdf [googleusercontent.com]
Granted it's a few years old...
Re: (Score:2)
For an OS drive (page file off, 8+GB of RAM), I don't see any "premium" (i.e. non-cheapo) SSD with 120+GB of capacity failing within 10 years... There are a few forum posts where people have actually tested how much data they could write to SSDs (i.e. permanently writing at max. speed until the drive fails), and the results are pretty good. The few drives that did eventually break just switched to read-only mode... Can't for the life of me find the damned thread though. Maybe someone here knows which one I'
Re:Call me old fashion (Score:4, Interesting)
The problem with such tests of writing as much as you can as fast as you can is that they're rather deceptive. They don't allow TRIM and wear levelling to do their thing (as they normally would), and hence show a much worse scenario than you would normally be dealing with. Actual projections of real life usage patterns writing ~10GB to these drives per day show you can get their life span in years (specifically the 840 we're talking about here) by dividing the capacity (in gigabytes) by 10.
Re: (Score:2)
That's definitely true, but with the drives not showing any signs of abnormally early failure even in the worst-case scenario, I'd say that's a good thing. :)
Re: (Score:2)
Found it: http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm [xtremesystems.org]
It's a bit out of date, but basically: Stay the hell away from OCZ and certain Intel drives, and you'll be fine in nearly all cases.
Re: (Score:2)
I'm sticking to Samsung - my 830 is still working (light workload). My guess is something other than NAND wear will kill it.
Comment removed (Score:5, Interesting)
Re: (Score:3)
TLC endurance was tested in this article:
http://uk.hardware.info/reviews/4178/10/hardwareinfo-tests-lifespan-of-samsung-ssd-840-250gb-tlc-ssd-updated-with-final-conclusion-final-update-20-6-2013 [hardware.info]
Re: Call me old fashion (Score:2)
OK. I have had an OWC SSD in my Mac for a year and get about 450 MB/s reads and writes. Totally worth it.
And it's "old fashioned", not "old fashion".
Re: (Score:2)
There are user-done studies on such matters [xtremesystems.org] and some of them are quite impressive - to the point where you'll scrap the computer first before encountering failure.
The main reason why SSDs fail prematurely is that their tables get corrupted. An SSD uses an FTL (flash translation layer) that translates the externally visible sector address to the internal flash array address. FTLs are heavily patented algorithms and there ar
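As a toy illustration of what an FTL does, grossly simplified (a plain per-sector dictionary and a naive allocator, nothing like a real, patented FTL):

```python
# Toy flash translation layer: logical sector -> physical flash page.
# Writes always go to a fresh page; the map is what must survive power loss,
# because losing it leaves the data in flash but unreachable.
class ToyFTL:
    def __init__(self):
        self.map = {}        # logical sector -> physical page
        self.next_free = 0   # naive allocator: always take the next fresh page

    def write(self, logical_sector, data, flash):
        phys = self.next_free            # never overwrite in place
        self.next_free += 1
        flash[phys] = data
        self.map[logical_sector] = phys  # the old page becomes garbage to collect later

    def read(self, logical_sector, flash):
        return flash[self.map[logical_sector]]

flash = {}
ftl = ToyFTL()
ftl.write(7, b"hello", flash)
ftl.write(7, b"world", flash)  # the rewrite lands on a new physical page
print(ftl.read(7, flash))      # b'world'
```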
Re: (Score:3)
Modern SSDs have big capacitors in them to avoid that (well, some of them do...). They can complete pending writes on capacitor power alone.
Re: (Score:2)
Re: (Score:2)
Isn't "some" the word I used, AC?
Re: (Score:2)
ERASE cycles? around the 10k order of magnitude, for the latest generation TLC. And it will get smaller as they shrink the capacity of the floating gates (electrons get stuck there, and it fills quicker).
haha, try 300-500 for latest smallest TLC
Re: (Score:2)
Re: (Score:2)
Isn't wear leveling mostly orthogonal to erase cycles? I.e. it tries to spread out the erasures evenly among the flash blocks, but doesn't actually change how many times any given block can be erased.
In fact I would think wear leveling would actually *decrease* the total number of user visible erase cycles for a drive since it means that infrequently changing data is continuously being shuffled from low-usage areas to high-usage ones to make the low-usage areas available, and every one of those user-invisi
Re: (Score:2)
Not quite: in any practical usage, there's a lot of sectors that are only written once, and a few sectors that are written a lot more often. As an example: System files, and the TEMP folder, respectively. With wear leveling, the controller is free to swap them around, so that all the sectors are used more.
Of course, that sector rotation could result in the TOTAL number of writes/erases going up, but without wear leveling, all the sectors corresponding to the often-written files (TEMP, logs, pagefile/swap, .
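A minimal sketch of one piece of that, assuming the controller simply hands out the least-erased free block for each new write; real firmware also migrates cold data, which is the "swap them around" part described above:

```python
import heapq

# Hand out the least-erased free block for each new write so wear spreads evenly.
erase_counts = {0: 12, 1: 3, 2: 7, 3: 3}  # block id -> erases so far
free_blocks = [(count, blk) for blk, count in erase_counts.items()]
heapq.heapify(free_blocks)

def allocate_block():
    count, blk = heapq.heappop(free_blocks)        # least-worn free block first
    heapq.heappush(free_blocks, (count + 1, blk))  # it is about to be erased once more
    return blk

print([allocate_block() for _ in range(4)])  # [1, 3, 1, 3]: the worn blocks 0 and 2 get a rest
```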
Re: (Score:2)
Well yeah - which is why I said it's orthogonal: wear leveling has absolutely zero effect on the number of erase cycles a given block can handle, it just spreads the load around since otherwise some blocks will get hammered while others are almost never modified.
Similarly the number of write cycles available has almost no effect on wear leveling beyond setting the "danger limits" for each block.
Ah, okay, I think I see the source of confusion - yes, in most usage patterns wear leveling will dramatically incr
Re: (Score:2)
around the 10k order of magnitude, for the latest generation TLC.
You are thinking of SLC, not TLC, and are also probably off by a generation.
Re: (Score:2)
So what exactly is the difference between a write cycle and an erase cycle in practical terms? Yeah, I know an erase block typically contains many write blocks, but for any given write block I can't write to it a second time until it's erased, so it would seem the number of write cycles can't possibly exceed the number of erasures.
Or is there some marketing mumbo-jumbo being applied to the terms now?
Re: (Score:3)
So what exactly is the difference between a write cycle and an erase cycle in practical terms?
The difference is that there is no such thing as a "write cycle." The guy that you are replying to doesn't actually know much about what he is talking about.
In regard to the general difference between writes and erases, the terms on the table are write amplification, block size (typically 256KB), and page size (typically 4KB). Write amplification occurs when data smaller than a page is frequently written or "modified."
In practice write amplification is typically below 2x on modern co
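A rough illustration of the page-granularity part of write amplification, using the typical sizes mentioned above; the update patterns are made up, and garbage collection at the block level adds more on top of this:

```python
# Host bytes asked for vs. flash bytes physically programmed.
PAGE = 4 * 1024     # typical page size from the comment above
BLOCK = 256 * 1024  # typical erase-block size (garbage collection works at this granularity)

def write_amplification(host_bytes, pages_programmed):
    """Flash bytes physically written divided by bytes the host asked to write."""
    return (pages_programmed * PAGE) / host_bytes

# Updating 512 bytes still costs at least one whole page program:
print(write_amplification(512, 1))            # 8.0
# A 1 MB sequential write maps cleanly onto whole pages:
print(write_amplification(1024 * 1024, 256))  # 1.0
```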
Re: (Score:2)
Certainly, but my point is that you can't meaningfully write to a cell without first erasing it, so the number of writes to a cell cannot exceed the number of times its block has been erased.
Perhaps it will be clearer if you consider an SSD where each cell can be erased independently - it would be more expensive to make, but in principle there's no reason you couldn't do such a thing.
Re: (Score:2)
Re: (Score:2)
Well yeah, but I figure in a discussion of hundreds or thousands of write cycles the very first write is irrelevant to the point - so okay, technically the number of writes can't exceed the number of erases +1.
As for TRIM, it's totally irrelevant to the discussion, those are logical-level operations on sectors, and wear leveling will bounce those sectors all over the drive. I'm discussing the physical flash memory cell that actually stores data. A breakdown from the perspective of the internal control cir
Re: (Score:2)
I'm pretty sure your idea of how TRIM works is flawed. Close, but flawed. An erase "cycle" sets all the bits in a block to the erased state (all 1s). You can then write to the sector to flip some of those bits from 1 to 0. You CAN re-write the same sector, but ONLY if you are clearing bits, and not setting any back. NAND cells wear out from performing too many erases, not reads or writes. Note that for most rewrites you aren't just clearing bits, so you typically have to do an erase and then a write, but not always.
Now as for TRIM.
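A toy model of that program/erase rule on a single byte-sized group of cells (block and page structure ignored):

```python
# Rule of thumb for NAND: erase sets every bit in a block to 1; a program
# can only clear bits (1 -> 0). Anything else needs an erase first.
def erased():
    return 0xFF  # erased state: all ones

def can_program_in_place(old, new):
    """True if 'new' only clears bits that are still set in 'old'."""
    return (old & new) == new

stored = erased()
print(can_program_in_place(stored, 0b1010_1010))  # True: only clearing bits
stored = 0b1010_1010
print(can_program_in_place(stored, 0b1110_1010))  # False: needs a 0 -> 1, i.e. an erase
```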
NAND, or Exclusive NOR? (Score:2)
Was the RAPID sw used throughout the test (Score:2)
One assumes this is Windows software. Did the competing drives have their drivers installed too? I would want to see its performance without drivers installed and used as a plain SATA drive. And I would like to see with and without RAPID numbers.
Is RAPID a sophisticated buffer cache that is doing lazy writes to the SSD?
Re: (Score:2)
I would want to see its performance without drivers installed and used as a plain SATA drive. And I would like to see with and without RAPID numbers.
Is RAPID a sophisticated buffer cache that is doing lazy writes to the SSD?
There you go: http://www.anandtech.com/show/7173/samsung-ssd-840-evo-review-120gb-250gb-500gb-750gb-1tb-models-tested/5 [anandtech.com]
I wonder if RAPID could bring such huge performance improvements on Linux too, or if this just means the Windows cache sucks. Because from the article I still don't see exactly what RAPID does that the OS's cache shouldn't do already.
Re: (Score:2)
I keep hearing these drives are doing compression. Maybe the driver offloads the compression onto the CPU.
posted too soon (Score:2)
from your kind link, it looks to be doing lazy writes.
Re: (Score:2)
I keep hearing these drives are doing compression.
Only the SSDs using a SandForce controller use compression so it's not the case here.
from your kind link, it looks to be doing lazy writes.
The OS's cache should already be doing something like that. However the benchmarks normally force it to flush to disk at key points to ensure they test the performance of the disk and not the cache. So maybe the RAPID driver ignores the flush commands in some circumstances?
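If RAPID really is a host-RAM write-back layer, the effect speculated about here would look roughly like this toy sketch; nothing in it reflects Samsung's actual implementation, and all of the names are made up:

```python
# Illustrative write-back layer: writes land in RAM and reach the device only
# when (and if) a flush is honored. All names here are made up.
class FakeDevice:
    def __init__(self):
        self.log = []  # what actually reached the "disk"

    def write(self, offset, data):
        self.log.append((offset, data))

class LazyWriteCache:
    def __init__(self, device):
        self.device = device
        self.dirty = {}  # offset -> data still only in RAM

    def write(self, offset, data):
        self.dirty[offset] = data  # acknowledged at RAM speed

    def flush(self, honor_flush=True):
        if not honor_flush:
            return  # the speculated "ignore the flush" case
        for offset, data in sorted(self.dirty.items()):
            self.device.write(offset, data)
        self.dirty.clear()

dev = FakeDevice()
cache = LazyWriteCache(dev)
cache.write(0, b"fast")
cache.flush(honor_flush=False)
print(dev.log)  # []: the benchmark thinks the data is safe
cache.flush()
print(dev.log)  # [(0, b'fast')]
```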
Is anyone building home SANs out of SSDs yet? (Score:2)
In the 2-5 TB range?
I previously would have maybe wanted this but not been willing to spend the money or expose my storage to disk failure with consumer SSD.
I'm thinking now it's getting to the point where it might be reasonable. I usually do RAID-10 for the performance (rebuild speed on RAID-5 with 2TB disks scares me) with the penalty of storage efficiency.
With 512GB SSDs sort of affordable, I can switch to RAID-5 for the improved storage efficiency and still get an improvement in performance.
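The storage-efficiency side of that trade-off, as a simple sketch (raw usable capacity only; ignores hot spares, metadata, and controller overhead):

```python
# Usable capacity for n drives of a given size (raw, no spares or metadata).
def raid10_usable(n, size_gb):
    return n // 2 * size_gb   # mirrored pairs

def raid5_usable(n, size_gb):
    return (n - 1) * size_gb  # one drive's worth of parity

for n in (4, 6, 8):
    print(n, "x 512 GB:", raid10_usable(n, 512), "GB in RAID-10 vs",
          raid5_usable(n, 512), "GB in RAID-5")
```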
Why? (Score:2)
You'd need a better network for it to be of any use. A modern 7200rpm drive is usually around the speed of a 1gbit link, sometimes faster, sometimes slower depending on the workload. Get a RAID going and you can generally out-do that bandwidth nearly all the time.
SSDs are WAY faster. They can slam a 6gbit SATA/SAS link, and can do so with nearly any workload. So you RAID them and you are talking even more bandwidth. You'd need a 10gig network to be able to see the performance benefits from them. Not that you can't h
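The rough line-rate numbers behind that comparison (nominal rates; real-world throughput is lower after protocol overhead):

```python
# Nominal link rates converted to MB/s for a rough comparison.
def gbit_to_mb_s(gbit):
    return gbit * 1000 / 8

print("1 GbE:     ", gbit_to_mb_s(1), "MB/s")   # ~125, roughly one fast 7200rpm HDD
print("SATA 6Gb/s:", 600, "MB/s")               # ~600 usable after 8b/10b encoding
print("10 GbE:    ", gbit_to_mb_s(10), "MB/s")  # ~1250, needed to feed a fast SSD array
```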
Re: (Score:2)
Re: (Score:2)
The core reason why is to avoid the shitty reliability nightmare that contemporary mechanical HDDs represent and to get a bump in performance.
My current environment is 6x2TB Seagate 7200s in RAID-10 and I find with a virtualization workload good throughput dies off pretty quickly. Sure, a single contiguous large write or read can saturate the link, but 2 ESXi hosts and 6-8 busy VMs really brings up the latency and brings down the performance.
After setting this up last fall, I find I made it bigger than I r
Re: (Score:2)
Re: (Score:2)
A RAID array is not a SAN but I have yet to see an actual SAN configured JBOD only, if it's even an option. I'm not sure how you would aggregate the storage of single SSDs without RAID.
And I don't know what's moronic about home SANs, mine has ~7 TB storage and volumes exported via iSCSI and NFS to 3-4 systems.
What's wrong with RAID redundancy techniques for SSDs? Given the aggregation required for larger LUNs, I would think you would want to hedge against the risk of a device failure.
Idiotic acronyms? (Score:2)
Real-time Accelerated Processing of IO Data
Nope, definitely not contrived at all.
TLC not worth it yet (Score:2)
I'd be willing to consider TLC despite its drawbacks if the price was considerably lower than with MLC-based drives, but that's currently not the case. The Samsung 840 EVO costs about $185 for the 250GB model, while the 840 Pro (using MLC) is about $230-$250. So we're talking about 75 cents a gigabyte for TLC, and about a buck a gigabyte for MLC. I'm willing to take the 25% cost hit for far better endurance. In my opinion, TLC really needs to get down to 25-40 cents a gigabyte before it would be worth it. I
A different technology is available (Score:2)
I hate to see this discussion go entirely to the "wearout" issue. Clearly there are some posters here heavily invested in spinning disk. There are more exciting flash technologies in the pipeline.
Samsung has a new flash technology for the Enterprise called 3D V-NAND [xbitlabs.com]. By using 24 separate layers of flash on one chip they can keep the feature size up and still keep pace with storage density. They believe they can go to hundreds of layers. There is talk of a 384GB single chip for smartphones and tablets [bgr.com].
Re: (Score:2)
Whoa. An AC with useful information and references? Who let you in here?
(No, I was not that AC.)
Re: (Score:2)
Reviewing the links now though I see no reference to Samsung, nor their multilayer developments that are a true departure from the traditional methods, nor product pipeline info. It's all "Crossbar". Also no reference to HP, who had some discovery in the memristor area. Maybe it's time to post a new article. When we see this theoretical stuff we usually think "a decade out, if ever."
Multilayer is a really big deal. One problem with Moore's law is that it has heretofore existed in flatland - dimensions
Re: (Score:3)
Re: (Score:2)
>all major manufacturers these days promise that even if all the cells failed in the whole drive you should be able to read them
and you should take such promises as having the same integrity as all other marketing claims - i.e. they're probably not blatant lies.
A Samsung 840 endurance test posted in a comment above: They ran for ~3000 erase cycles until encountering the first unrecoverable read error at which point they declared the drive "dead" - it still seemed to work, but data had already been lost an
Re: (Score:2)
I bought a Samsung 840 128 gig drive (not Pro) and I love it.
Yeah, we all like the look of the "hot" axis on the graph.
We're worried about the "crazy" axis. Most of us have a long term relationship with our data.
(for those who don't know: http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html [codinghorror.com] )
nb. I've been using a 40GB Intel SSD as my boot drive for a couple of years; it's still going strong AFAICT, but there's not too many writes to that drive (swapfile and $home are on a VelociRaptor).
Re:Not paying for TLC at that price (Score:4, Informative)
Except if you actually bothered to educate yourself, you'd find that at the capacities Samsung is offering you, if you write to them at 10GB a day, every day, they'll last entirely respectable times (12, 23, 47, and 94 years respectively for the 120, 240, 480 and 960GB drives).
Re: (Score:2)
You're talking about testing a device that doesn't even saturate 3 PCIe lanes, and complaining the test bed "only" has 16? Really?
Re: (Score:2)
excuse me, is this the retard thread?