Data Storage

Samsung SSD 840 EVO 250GB & 1TB TLC NAND Drives Tested

MojoKid writes "Samsung has been aggressively bolstering its solid state drive line-up for the last couple of years. While some of Samsung's earlier drives may not have particularly stood out versus the competition at the time, the company's more recent 830 series and 840 series of solid state drives have been solid, both in terms of value and overall performance. Samsung's latest consumer-class solid state drive is the just-announced 840 EVO series of products. As the name suggests, the SSD 840 EVO series of drives is an evolution of the Samsung 840 series. These drives use the latest TLC NAND Flash to come out of Samsung's fab, along with an updated controller, and also feature some interesting software called RAPID (Real-time Accelerated Processing of IO Data) that can significantly impact performance. Samsung's new 840 EVO series SSDs performed well throughout a battery of benchmarks, whether using synthetic benchmarks, trace-based tests, or highly compressible or incompressible data. At around $0.65 to $0.76 per GB, they're competitively priced, relatively speaking, as well."

  • Call me old fashion (Score:3, Interesting)

    by Taco Cowboy ( 5327 ) on Sunday August 18, 2013 @04:24AM (#44598983) Journal

    How many effective READ/WRITE cycles can the chips in an SSD perform before they start degrading?

    Has there been any comparison made between the reliability (e.g. read/write cycles) of old fashion spinning-platter HDs versus that of SSDs?

    • by quarrel ( 194077 )

      Ok, you're old fashioned.

      This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...

      --Q

      • The question is still relevant. Manufacturers talk about erase cycles, but are there any massive-scale studies done by a third party on SSD failure modes?

        • Re: (Score:2, Informative)

          by Anonymous Coward

          Wearout is not a significant failure mode. Nearly all failures are due to non-wearout effects such as firmware bugs and I/O circuit marginality.

          • Hence why we want to see a study that compares overall failure to old-fashioned drives, taking all failure modes into account.

            I would like to see some evidence that SSDs are more reliable.

        • The problem with large-scale studies on this is that wearout takes too long to happen to actually study. You need to study real-world usage patterns, and in the real world it takes decades before the flash actually wears to death. Controller failure (as is possible with HDDs too) will generally happen long before the flash becomes unwritable.

          • by AmiMoJo ( 196126 ) * on Sunday August 18, 2013 @05:36AM (#44599171) Homepage Journal

            It depends on what you use it for. I managed to wear out an Intel X25-M 160GB SSD a few years ago by hitting the 14TB re-write limit.

            Modern SSDs do a lot of compression and de-duplication to reduce the amount of data they write. If your data doesn't compress or de-duplicate well (e.g. video, images) the drive will wear out a lot faster. I think what did it for me was building large databases of map tiles stored in PNG format. Intel provide a handy utility that tells you how much data has been written to your drive and mine reached the limit in about 18 months so had to be replaced under warranty.

            • by Rockoon ( 1252108 ) on Sunday August 18, 2013 @06:35AM (#44599327)

              Intel provide a handy utility that tells you how much data has been written to your drive and mine reached the limit in about 18 months so had to be replaced under warranty.

              You were (amplified?) writing 32.8 GB per day, on average.

              Clearly you will run into SSD erase-limit problems at such a rate, but such workloads normally turn out not to be tasks that actually benefit from an SSD to begin with (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you).

              You were either very clever and knew you would hit the limit and get a free replacement, or very foolish and squandered the lifetime of an expensive device when a cheap device would have worked.

              In any event, in general the larger the SSD, the longer its erase-cycle lifetime will be. For a particular flash process it's a completely linear 1:1 relationship, where twice the size buys twice as many block erases (a 320GB SSD on the same process would have lasted twice as long as your 160GB SSD with that workload).
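
              A rough back-of-the-envelope in Python (the cycle count below is an illustrative assumption, not Intel's rating; units are decimal):

                # Converts the quoted daily write volume into a sustained rate, and shows
                # how total raw endurance scales linearly with capacity on a fixed process.
                daily_write_gb = 32.8
                kb_per_sec = daily_write_gb * 1e9 / 86400 / 1e3
                print(f"{kb_per_sec:.0f} KB/sec sustained")          # ~380 KB/sec

                assumed_cycles = 1000                                # placeholder erase-cycle rating
                for capacity_gb in (160, 320):
                    print(capacity_gb, "GB ->", capacity_gb * assumed_cycles / 1e3, "TB of raw erases")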

              • Clearly you will run into SSD erase-limit problems at such a rate, but such workloads normally turn out not to be tasks that actually benefit from an SSD to begin with (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you).

                Actually, most devices will survive several years at such a rate. GP was unlucky to see failures quite

              • by hackertourist ( 2202674 ) on Sunday August 18, 2013 @07:39AM (#44599465)

                (32.8GB/day = 380KB/sec, so the device's speed wasn't actually an issue for you)

                That's an odd way to look at it. You assume that GP spreads out his writes evenly over 24h, and has no desire to speed things up.

              • by AmiMoJo ( 196126 ) * on Sunday August 18, 2013 @07:52AM (#44599503) Homepage Journal

                In my case having an SSD made a huge impact. I was using offline maps of a wide area built from PNG tiles in an sqlite database with RMaps on Android. Compiling the databases was much faster with an SSD. I was doing it interactively, so performance mattered.

                I can only tell you what I experienced. I installed the drive and I didn't think about it wearing out, just carried on as normal. The Intel tool said that it had written 14TB of data and sure enough writes were failing to the point where it corrupted the OS and I had to re-install.

                I was using Windows 7 x64, done as a fresh install on the drive when I built that PC. I made sure defragmentation was disabled.

                I'm now wondering if the Intel tool doesn't count bytes written but instead is some kind of estimate based on the amount of available write capacity left on the drive. I wasn't monitoring it constantly either so perhaps it just jumped up to 14TB when it noticed that writes were failing and free space had dropped to zero.

                It was a non-scientific test, YMMV etc etc.

            • Modern SSDs do a lot of compression and de-duplication to reduce the amount of data they write.

              That is only true for SandForce based drives as the tech behind it is LSI's "secret sauce". Samsung, Marvell, and Toshiba do not do any kind of compression or dedupe; they write out on a 1:1 basis.

              The latter group could probably create their own compression and dedupe tech if they really desired to, but it's a performance tradeoff rather than something that has clear and consistent gains. SandForce performance is

            • New Intel drives do, as they use the Sandforce chipset. However Samsung drives don't. Samsung makes their own controller, and they don't mess with compression. All writes are equal.

              Also 14TB sounds a little low for a write limit. MLC drives, as the X25-M was, are generally spec'd at 3000-5000 P/E cycles. Actually it should be higher, since that is the spec for 20nm-class flash and the X25-M was 50nm flash. Even assuming 1000, and assuming a write amplification factor of 3 (it usually won't be near that high) you

            • Intel provides*, not Intel provide.

        • Some reviewers take popular devices and see if they can kill them by bombarding them with writes.

          So far, the consensus is that, for typical consumer workloads, the limits on NAND writes are high enough not to be a problem, even with Samsung's TLC NAND.

          Same should apply to heavy professional workloads when using decent devices (Samsung 830/840 Pro and similar).

          As for servers, the question is a bit more difficult to answer, but even assuming a very bad case, SSDs make sense if they can replace a couple of mechan

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Well said.

        Nothing lasts forever. If a hard-driven SSD lasts 3-4 years, I don't really care if it's used up some large fraction of its useful lifetime, because I'm going to replace it just like I'd replace a 4 year old spinning disk.

        And the replacement will be cheaper and better.

        And if the SSD was used to serve mostly static data at the high speed they provide, then it's not going to have used up its write/erase cycle lifetime by then anyway.

        • by Rockoon ( 1252108 ) on Sunday August 18, 2013 @06:47AM (#44599339)
          Indeed. I just don't see how the erase-limit issue applies for most people. The most common activity where it might apply is in a machine used as a DVR (don't use an SSD in a DVR), with the next being a heavily updated database server (you may still prefer the SSD if transaction latency is important).

          For people that use their computers for regular stuff like browsing the web, streaming video off the web, playing video games, and software development... then get the damn SSD -- it's a no-brainer for you folks. You will love it, and it will certainly die of something other than the erase-limit long before you approach that limit.
        • Hmmm I replace my hard drives when I start to see RAID errors. I don't plan to run SSD raid as the on board fault tolerance should be ok.

          Would be nice to have hard data on expected failures so that I know whether to plan for a three or a six year lifespan. I generally replace my main machine on a six year cycle as I have a lot of expensive software. Looking to upgrade this year when the higher-performance Intel chips launch.

          1tb is quite a lot. Probably more than I need in solid state. The price is also quit

      • Ok, you're old fashioned.

        This was a thing, yes, but only for that brief period when you actually got your slashdot id. Since then? Not so much ...

        --Q

        Technically it becomes less and less reliable each time they do a die shrink on the flash. Adding a whole extra bit level makes things worse still. In the early 2000s you were looking at 100,000 P/E cycles, maybe a million for the really good stuff. Good TLC memory seems to be rated around 3000, with a figure of 1000 being widely quoted, and in some cases, less.

        Realistically, they've designed the drive to fight tooth and nail to avoid doing rewrites, and in actual fact it looks like they've put a layer o

        • by Rockoon ( 1252108 ) on Sunday August 18, 2013 @07:08AM (#44599393)

          Technically it becomes less and less reliable each time they do a die shrink on the flash. Adding a whole extra bit level makes things worse still. In the early 2000s you were looking at 100,000 P/E cycles, maybe a million for the really good stuff. Good TLC memory seems to be rated around 3000, with a figure of 1000 being widely quoted, and in some cases, less.

          Let's not neglect the fact that while every die shrink does reduce the erase-limit per cell, it also (approximately) linearly increases the number of cells for a given chip area. In other words, for a given die area the erase limit (as measured in bytes, blocks, or cells) doesn't actually change with improving density. What does change is overall storage capacities and price.

          When MLC SSDs dropped from ~2000 cycles per cell to ~1000 cycles per cell, their capacities doubled (so erases per device remained about constant) and prices also dropped from ~$3/GB to about ~$1/GB. Now MLC SSDs are around ~600 cycles per cell, their capacities are larger still (again erases per device remain about constant), and they are selling for ~$0.75/GB (and falling).

          By every meaningful measure these die shrinks improve the technology.

          So now let's take it to the (extreme) logical conclusion, where MLC cells have exactly 1 erase cycle (we have a name for this kind of device: WORM, Write Once Read Many). To compensate, the device capacities would be about 600 times today's current capacities, so in the same size package as today's 256 GB SSDs we would be able to fit a 153 TB SSD WORM drive, and it would cost about $200.
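
          The arithmetic behind that, as a sketch (cycle counts and prices are the rough figures quoted above; the capacities are illustrative picks, not actual product history):

            # Each shrink roughly trades cycles per cell for capacity, so the product
            # capacity * cycles -- total erasable data per device -- barely moves.
            generations = [
                # (cycles per cell, capacity in GB, $/GB)
                (2000, 128, 3.00),
                (1000, 256, 1.00),
                (600,  512, 0.75),
            ]
            for cycles, cap_gb, price in generations:
                tb = cycles * cap_gb / 1e3
                print(f"{cycles} cycles x {cap_gb} GB = {tb:.0f} TB erasable, ${price:.2f}/GB")

            # The WORM extreme: 1 cycle per cell, ~600x today's capacity in the same die area.
            print(600 * 256 / 1e3, "TB in the package of today's 256 GB drive")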

          • By every meaningful measure these die shrinks improve the technology.

            How about data retention? That is also a function of the cell size, since the more electrons you have in the charge trap, the greater the difference between 1 and 0. Intel's drives, for example, are only guaranteed to hold their contents for three months without power. And when they are powered, they keep the data alive by periodically rewriting it, which I strongly suspect amounts to a P/E cycle. (Not sure about flash, but a lot of memory devices use an 'erase' to set the bits high, and then short out

    • by beelsebob ( 529313 ) on Sunday August 18, 2013 @05:00AM (#44599057)

      Yes, many sites have done the maths on such things. The conclusion: "finite life" is not the same thing as "short life". SSDs will, in general, outlast HDDs, and will, in general, die of controller failure (something which affects HDDs too), not flash lifespan.

      The numbers for the 840 (which uses the same flash, with the same life span) showed that for the 120GB drive, writing 10GB per day, it would take nearly 12 years to cause the flash to fail. For the 240/480/960 options for the new version you're looking at roughly 23, 47 and 94 years respectively. Given that the average HDD dies after only 4 years (yes yes yes, we all know you have a 20 year old disk that still works, that's a nice anecdote), that's rather bloody good.
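
      Roughly how those figures fall out, as a sketch (the cycle rating and write amplification below are my assumptions, chosen so the arithmetic lands near the numbers above; they are not Samsung's specs):

        # Rough endurance model: total writable data = capacity * P/E cycles / write
        # amplification; lifetime = that total divided by the daily write volume.
        def endurance_years(capacity_gb, daily_gb=10, pe_cycles=1000, write_amp=2.7):
            total_writable_gb = capacity_gb * pe_cycles / write_amp
            return total_writable_gb / daily_gb / 365

        for cap in (120, 240, 480, 960):
            print(cap, "GB:", round(endurance_years(cap), 1), "years at 10 GB/day")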

      • by jamesh ( 87723 )
        have they solved the corruption-on-power-failure issues yet?
        • by beelsebob ( 529313 ) on Sunday August 18, 2013 @05:07AM (#44599083)

          Yes, they were solved in a firmware patch a long time ago.

        • Re: (Score:3, Informative)

          by Anonymous Coward

          Power failure?

          You don't have a UPS or other standby power source available? You know it's 2013, right...

          Willing to spend hundreds on an ultra fast STORAGE device and have no backup power available? really? come on...

          That's some messed up priorities there... Spend a hundred bucks on a UPS already.

          Then you don't ever have to worry about data corruption. Or the much more common... Loss of unsaved work due to power failure...

          • I guess I'm old school, got used to saving regularly back before UPSes were a consumer product. If I lose power 3 times a year I've lost a total of maybe 15 minutes of work, and it's a rare year where I have three power outages while working. So the insurance of a UPS would cost me $100/ 0.25 = $400/hour. Even if it lasts a decade that comes to $40/hour for saving my ass from some inconvenience. Pretty steep price.

          • by AmiMoJo ( 196126 ) *

            We get interruptions to our supply less than once every five years. Even at 95% efficiency a UPS would cost a fair bit to run. It would be better if, like hard drives, SSDs were simply designed not to die in the event of unexpected power failure.

            Data corruption isn't an issue with modern file systems. I suppose there is loss of work, but the cost/benefit ratio is too low.

            • Most UPSes these days are line-interactive. That means they are not doing any conversion during normal operation. The line power is directly hooked to the output. They just watch the line level. If the power drops below their threshold, they then activate their inverter and start providing power. So while their electronics do use a bit of power being on, it is very little. The cost isn't in operation, it is in purchasing the device and in replacing the batteries.

              That aside SSDs don't have problems with it (it was a

              • by jamesh ( 87723 )

                That aside SSDs don't have problems with it (it was a firmware bug, Samsung fixed it) and if your data is important, you probably don't want to rely on your journal to make sure it is intact.

                If it was a firmware bug and has been patched then the point is irrelevant, but journals wouldn't have made it better anyway. Because of wear leveling etc, stuff gets written all over an SSD basically at random. And when a block of data gets full the SSD logic will move anything useful in that block to other places on the disk and then erase the original block. If the power failed and a firmware bug meant that pointers weren't saved then you've essentially taken a shotgun to your disk and potentially a whol

            • by 0123456 ( 636235 )

              It would be better if, like hard drives, SSDs were simply designed not to die in the event of unexpected power failure.

              About 80% of the hard drive failures on our servers over the last few years have been due to power failures. They run fine for years, then the power goes out and they're dead on boot.

              So 15k HDDs don't seem to like power failures either.

      • What is it with SSD controller failure?

        The processor, bridges, memory controller and memory, and all the other chips in a modern computer, can run flat-out for many years without failure.

        What makes the controller chips in a SSD fail so often? (And I don't believe you about the controllers in a HDD failing; I've never had one fail, or even known anyone who had one fail, out of hundreds of hard drives run for many years, but I've heard of several SSDs failing in just the few that my friends have tried). Do

        • What makes the controller chips in a SSD fail so often?

          It isn't the controller chips that are failing (a hardware fault); it's buggy logic in the controller firmware (a software fault) that leaves the data stored within the flash in an incoherent state (garbage in, garbage out).

        • I have, personally, had about eight hard drive controllers fail. In all cases, I was able to replace the controller board on the drive and recover my data. (And generally keep using the drives as scratch disks.) I'd get the boards by buying headcrashed drives.

          Most of these were quite some time ago, when I was dealing with a lot of identical, fairly small hard drives. Back when SCSI controllers had an option for drives that took extra-long to spin up. (We called it 'Seacrate' mode.) I've also had my share of

        • What makes the controller chips in a SSD fail so often?

          Intel and Samsung controllers are pretty reliable. Most SSDs from other vendors use either Corsair or Marvell controllers.

          Marvell doesn't provide firmware at all, so vendors have to write their own. Many of these vendors are small companies with little in-house expertise, and what effort they do put into their firmware is often devoted to speed (so they appear at the top of review sites' benchmarks) rather than stability.

          Sandforce is at t

      • by sribe ( 304414 )

        Given that the average HDD dies after only 4 years...

        Sorry, not even close.

        • You have hard facts? The 2007 Google study said about six years for all enterprise and consumer grade magnetic disks; however, for low-utilization disks most fail in only three years (contrary to most people's expectations).

          • by sribe ( 304414 )

            You have hard facts? The 2007 Google study said about six years for all enterprise and consumer grade magnetic disks; however, for low-utilization disks most fail in only three years (contrary to most people's expectations).

            Bullshit. That's not at all what the Google study said.

            In fact, it said absolutely nothing about the six-year timeframe, since it only had 4-5 years of data ;-)

      • Usually, once you have your computer set up with your programs, you don't write a ton of data. A few MB per day or so. Samsung drives come with a little utility so you can monitor it.

        As a sample data point I reinstalled my system back at the end of March. I keep my OS, apps, games, and user data all on an SSD. I have an HDD just for media and the like (it is a 512GB drive). I play a lot of games and install them pretty freely. In that time, I've written 1.54TB to my drive. So around 11GB per day averaged ou

      • by Kjella ( 173770 )

        I hear people say that, but my first SSD I used as a scratch disk for everything since it was so fast, it burned through the 10k writes/cell in 1.5 years. My current SSD (WD SiliconEdge Blue 128GB) has been treated far more nicely and has been operational for 1 year 10 months, SSDLife indicates it'll die in about 2 years for a total of 3 years 10 months. Granted it's been running almost 24x7 but apart from downloads running to a HDD it's been idle most of that time, unlike a HDD where the bearings wear out

    • For an OS drive (page file off, 8+GB of RAM), I don't see any "premium" (i.e. non-cheapo) SSD with 120+GB of capacity failing within 10 years... There are a few forum posts where people have actually tested how much data they could write to SSDs (i.e. permanently writing at max. speed until the drive fails), and the results are pretty good. The few drives that did eventually break just switched to read-only mode... Can't for the life of me find the damned thread though. Maybe someone here knows which one I'

      • by beelsebob ( 529313 ) on Sunday August 18, 2013 @05:09AM (#44599087)

        The problem with such tests of writing as much as you can as fast as you can is that they're rather deceptive. They don't allow TRIM and wear levelling to do their thing (as they normally would), and hence show a much worse scenario than you would normally be dealing with. Actual projections of real-life usage patterns writing ~10GB to these drives per day show you can get their life span in years (specifically the 840 we're talking about here) by dividing the capacity (in gigabytes) by 10.

        • That's definitely true, but with the drives not showing any signs of abnormally early failure even in the worst-case scenario, I'd say that's a good thing. :)

      • Found it: http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm [xtremesystems.org]

        It's a bit out of date, but basically: Stay the hell away from OCZ and certain Intel drives, and you'll be fine in nearly all cases.

        • by TheLink ( 130905 )
          That link contradicts your earlier post. Few of those SSDs fail in read-only mode.

          I'm sticking to Samsung - my 830 is still working - light workload. My guess is something other than NAND wear will kill it.
    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Sunday August 18, 2013 @05:15AM (#44599117)
      Comment removed based on user account deletion
    • OK. I have had an OWC SSD in my Mac for a year and get about 450 MB/s reads and writes. Totally worth it.

      And it's "old fashioned", not "old fashion".

    • by tlhIngan ( 30335 )

      How many effective READ/WRITE cycles can the chips in an SSD perform before they start degrading?

      There are user-done studies on such matters [xtremesystems.org] and some of them are quite impressive - to the point where you'll scrap the computer first before encountering failure.

      The main reason why SSDs fail prematurely is their tables get corrupt. An SSD uses an FTL (flash translation layer) that translates the externally visible sector address to the internal flash array address. FTLs are heavily patented algorithms and there ar
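
      A toy illustration of the idea (nothing like any vendor's actual, patented algorithm): the mapping table is the only link between logical sectors and physical pages, which is why corrupting it is fatal even when the flash itself is healthy.

        # Toy flash translation layer: every rewrite goes to a fresh physical page
        # and the old page is marked stale for later garbage collection / erasure.
        class ToyFTL:
            def __init__(self, num_pages):
                self.table = {}                    # logical sector -> physical page
                self.free = list(range(num_pages))
                self.stale = set()

            def write(self, sector):
                if sector in self.table:           # the old copy becomes garbage
                    self.stale.add(self.table[sector])
                self.table[sector] = self.free.pop(0)

            def read(self, sector):
                return self.table.get(sector)      # lose the table, lose the data

        ftl = ToyFTL(num_pages=8)
        ftl.write(0)
        ftl.write(0)                               # same sector, new physical page
        print(ftl.read(0), ftl.stale)              # -> 1 {0}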

  • The argument went on for about half an hour in EE lab, before the teacher came along and announced: "Yes."
  • One assumes this is Windows software. Did the competing drives have their drivers installed too? I would want to see its performance without drivers installed and used as a plain SATA drive. And I would like to see with and without RAPID numbers.

    Is RAPID a sophisticated buffer cache that is doing lazy writes to the SSD?

    • by fgouget ( 925644 )

      I would want to see its performance without drivers installed and used as a plain SATA drive. And I would like to see with and without RAPID numbers.

      Is RAPID a sophisticated buffer cache that is doing lazy writes to the SSD?

      There you go: http://www.anandtech.com/show/7173/samsung-ssd-840-evo-review-120gb-250gb-500gb-750gb-1tb-models-tested/5 [anandtech.com]

      I wonder if RAPID could bring such huge performance improvements on Linux too, or if this just means the Windows cache sucks. Because from the article I still don't see exactly what RAPID does that the OS's cache shouldn't do already.

      • by Marrow ( 195242 )

        I keep hearing these drives are doing compression. Maybe the driver offloads the compression onto the CPU.

        • from your kind link, it looks to be doing lazy writes.

        • by fgouget ( 925644 )

          I keep hearing these drives are doing compression.

          Only the SSDs using a SandForce controller use compression so it's not the case here.

          from your kind link, it looks to be doing lazy writes.

          The OS's cache should already be doing something like that. However the benchmarks normally force it to flush to disk at key points to ensure they test the performance of the disk and not the cache. So maybe the RAPID driver ignores the flush commands in some circumstances?
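
          A quick way to see how much a cache layer that swallowed flushes would appear to "win" (a generic sketch of buffered vs. flushed writes, not a test of RAPID itself; file names are arbitrary):

            # Compares buffered writes against writes forced out with fsync; the gap is
            # roughly what ignoring flush requests would hide from a benchmark.
            import os, time

            def timed_writes(path, flush_each, block=4096, count=1024):
                buf = os.urandom(block)
                start = time.perf_counter()
                with open(path, "wb") as f:
                    for _ in range(count):
                        f.write(buf)
                        if flush_each:
                            f.flush()
                            os.fsync(f.fileno())   # push the data out of the OS cache
                return time.perf_counter() - start

            print("cached :", timed_writes("bench_cached.bin", False))
            print("flushed:", timed_writes("bench_flushed.bin", True))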

  • In the 2-5 TB range?

    I previously would have maybe wanted this but not been willing to spend the money or expose my storage to disk failure with consumer SSD.

    I'm thinking now it's getting to the point where it might be reasonable. I usually do RAID-10 for the performance (rebuild speed on RAID-5 with 2TB disks scares me) with the penalty of storage efficiency.

    With 512GB SSDs sort of affordable, I can switch to RAID-5 for the improved storage efficiency and still get an improvement in performance.
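
    The storage-efficiency trade, for anyone doing the sums (plain RAID arithmetic, not tied to any particular controller):

      # Usable capacity for n identical drives: RAID-10 keeps half the raw space,
      # RAID-5 gives up one drive's worth to parity.
      def usable_tb(n_drives, drive_tb, level):
          if level == "raid10":
              return n_drives // 2 * drive_tb
          if level == "raid5":
              return (n_drives - 1) * drive_tb
          raise ValueError(level)

      print(usable_tb(6, 0.5, "raid10"), "TB usable from six ~512GB SSDs in RAID-10")
      print(usable_tb(6, 0.5, "raid5"), "TB usable from the same drives in RAID-5")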

    • You'd need a better network to have any use for it. A modern 7200rpm drive is usually around the speed of a 1gbit link, sometimes faster, sometimes slower depending on the workload. Get a RAID going and you can generally out-do the bandwidth nearly all the time.

      SSDs are WAY faster. They can slam a 6gbit SATA/SAS link, and can do so with nearly any workload. So you RAID them and you are talking even more bandwidth. You'd need a 10gig network to be able to see the performance benefits from them. Not that you can't h

      • Comment removed based on user account deletion
      • by swb ( 14022 )

        The core reason why is to avoid the shitty reliability nightmare that contemporary mechanical HDDs represent and to get a bump in performance.

        My current environment is 6x2TB Seagate 7200s in RAID-10 and I find with a virtualization workload good throughput dies off pretty quickly. Sure, a single contiguous large write or read can saturate the link, but 2 ESXi hosts and 6-8 busy VMs really bring up the latency and bring down the performance.

        After setting this up last fall, I find I made it bigger than I r

        • Virtualization really eats up the IOPS. Generally you see a huge increase in VM performance going with SSD-backed storage. It's also great for heavy video editing.
  • Real-time Accelerated Processing of IO Data

    Nope, definitely not contrived at all.

  • I'd be willing to consider TLC despite its drawbacks if the price was considerably lower than with MLC-based drives, but that's currently not the case. The Samsung 840 EVO costs about $185 for the 250GB model, while the 840 Pro (using MLC) is about $230-$250. So we're talking about 75 cents a gigabyte for TLC, and about a buck a gigabyte for MLC. I'm willing to take the 25% cost hit for far better endurance. In my opinion, TLC really needs to get down to 25-40 cents a gigabyte before it would be worth it. I

  • I hate to see this discussion go entirely to the "wearout" issue. Clearly there are some posters here heavily invested in spinning disk. There are more exciting flash technologies in the pipeline.

    Samsung has a new flash technology for the Enterprise called 3D V-NAND [xbitlabs.com]. By using 24 separate layers of flash on one chip they can keep the feature size up and still keep pace with storage density. They believe they can go to hundreds of layers. There is talk of a 384GB single chip for smartphones and tablets [bgr.com].
