Micron Releases 16nm-Process SSDs With Dynamic Flash Programming
Lucas123 writes: Micron's newest client flash drive line, the M600, uses its first 16nm process technology and dynamic write acceleration firmware that allows the flash to be programmed as SLC or MLC on the fly, instead of overprovisioning or reserving a permanent pool of flash cache to accelerate writes. The ability to dynamically program the flash reduces power use and improves write performance by as much as 2.8 times over models without the feature, according to Jon Tanguy, Micron's senior technical marketing engineer. The new lithography process also allowed Micron to reduce the price of the flash drive to 45 cents a gigabyte.
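Micron has not published the firmware internals, so the following is only a rough sketch of the idea behind dynamic write acceleration: any free block can be programmed in fast SLC mode during a write burst and folded into denser MLC mode later. All names and thresholds here are hypothetical (Python).

# Toy model of dynamic SLC/MLC programming (illustrative only, not Micron's firmware).
FREE_POOL_LOW_WATERMARK = 0.10   # hypothetical free-capacity threshold

def choose_program_mode(free_blocks, total_blocks, burst_write):
    """Pick SLC for speed when spare capacity exists, MLC otherwise."""
    free_ratio = free_blocks / total_blocks
    if burst_write and free_ratio > FREE_POOL_LOW_WATERMARK:
        return "SLC"   # 1 bit/cell: faster programming, lower energy
    return "MLC"       # 2 bits/cell: full density

def fold_slc_to_mlc(slc_data_blocks, free_ratio):
    """During idle time, rewrite SLC-mode data as MLC to reclaim capacity."""
    if free_ratio < FREE_POOL_LOW_WATERMARK:
        return [("MLC", data) for (_mode, data) in slc_data_blocks]
    return slc_data_blocks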
Lifetime at 16nm? (Score:5, Insightful)
Seems like the durability of flash cells decreases with every process shrink. It makes me wonder what the lifetime of this new stuff will be. A 10% reduction in cost is no bargain if it comes with a 10% reduction in lifetime.
Re: Lifetime at 16nm? (Score:5, Informative)
I read it. It makes some claims that are not actually related to cell lifetime but rather to tricks they can play with the fancy firmware that allow them to do fewer writes and erases. That has nothing to do with the native cell lifetime.
Re: Lifetime at 16nm? (Score:4, Informative)
I don't understand how the native cell life is relevant.
You're not buying flash chips from them, you're buying an SSD. The write endurance of the drive is what matters. How that is achieved is irrelevant.
Re: (Score:2, Funny)
bc trim is application- dependant. Their assumptio (Score:4, Insightful)
Making assumptions about how often trim might be used for any given workload only obscures the actual write endurance. Much like a 100GB-capacity tape that's marked as 200GB because some data set that the manufacturer chose compressed 2:1 before being sent to the tape drive. Your mpeg movies aren't going to compress, so you'll only be able to put 100GB of movies on that tape. The 200GB number is pure marketing BS.
At least with tapes, all of the companies use the same 2:1 BS factor, so they can be compared. There's no telling what assumptions Micron made about the use of trim, so there's no way to compare this drive's endurance to any other, or to estimate its actual endurance for any real workload.
Re: (Score:2)
And the native cell endurance has very little to do with it.
Erase block size is more important, as is over-provisioning.
They don't go into details, but if they can re-purpose an MLC cell as SLC after it has worn out too much to function as MLC, that's going to increase the drive's endurance and decrease the amount of over-provisioning required.
Re:bc trim is application- dependant. Their assump (Score:4, Insightful)
There are a lot of misconceptions here, so I'll try to address them.
Making assumptions about how often trim might be used for any given workload only obscures the actual write endurance.
TRIM has nothing to do with endurance. TRIM erases cells that are scheduled for erasure anyways; all TRIM does is try to time that erasure such that it occurs at a time that will not affect performance. What affects endurance is wear levelling, which is an entirely separate technique that does actually work. As capacity increases, wear levelling ensures that the endurance of the drive as a whole increases.
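To make the distinction concrete, here is a minimal wear-levelling sketch (illustrative only, not any vendor's actual algorithm): writes always land on the free block with the fewest erases, so wear spreads across the whole drive and total endurance scales with capacity (Python).

def pick_block(erase_counts, free_blocks):
    """Return the free block that has been erased the fewest times."""
    return min(free_blocks, key=lambda b: erase_counts[b])

def write_page(data, erase_counts, free_blocks, storage):
    """Erase-then-program the least-worn free block and record the wear."""
    block = pick_block(erase_counts, free_blocks)
    erase_counts[block] += 1      # reuse of a block requires an erase cycle
    storage[block] = data
    free_blocks.discard(block)
    return block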
Much like a 100GB-capacity tape that's marked as 200GB because some data set that the manufacturer chose compressed 2:1 before being sent to the tape drive. Your mpeg movies aren't going to compress, so you'll only be able to put 100GB of movies on that tape. The 200GB number is pure marketing BS.
When tape manufacturers (or organizations, like the one behind LTO) cite a compression factor like 2:1, it is based on a standard body of data like the Calgary corpus, which includes both compressible and incompressible data. This allows you to compare different technologies with different compression standards.
In the real world on LTO (which I assume you are referring to) I have seen compression factors ranging from ~1.5 to 2.5, so it's not really accurate to call it marketing BS. They also always (as far as I have seen) mark the tapes something like "800GB/1600GB" with the subtext explaining that the smaller number is native and that the larger one assumes 2:1. It's not dishonest because the compression is part of the (well-defined) standard, and the native capacity is right next to the compressed capacity. It's also not the manufacturer doing this; those numbers are explicitly defined in the spec.
all of the companies use the same 2:1 BS factor,
Which begins to make sense when you realize that's because LTO itself defines the compression factor of 2:1 based on the Calgary corpus.
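Putting those numbers together, using the 800GB/1600GB marking mentioned above as the example (a quick sanity check in Python, not a spec):

native_gb = 800                       # physical capacity of the tape
marketed_gb = native_gb * 2.0         # the 2:1 Calgary-corpus figure
real_world = [native_gb * r for r in (1.5, 2.0, 2.5)]   # ratios seen in practice
print(marketed_gb, real_world)        # 1600.0 [1200.0, 1600.0, 2000.0]
# Already-compressed data (video, archives) gets roughly the native 800GB only.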
There's no telling what assumptions Micron made about the use of trim
But, as we've established, TRIM has literally no effect on endurance, so it's irrelevant what they might assume about it.
so there's no way to compare this drive's endurance to any other, or to estimate its actual endurance for any real workload.
Not to be harsh, but there is if you actually take the time to understand the tech. They usually do provide endurance stats (e.g., "100PB data endurance"), and tests by Anandtech and others have often validated those as being realistic.
Re: (Score:2)
The article actually mentions 100 TB, not 100 PB (for the 128 GB model). That's the equivalent of only about 800 full drive rewrites.
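For reference, the arithmetic behind that figure (decimal units assumed):

endurance_tb = 100
capacity_gb = 128
print(round(endurance_tb * 1000 / capacity_gb))   # ~781 full drive writes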
Re: (Score:3)
Anecdotal, yes, but nice to know it was 3x bigger than the manufacturer's specs.
Re: (Score:2)
To clarify, 100PB is a number I pulled out of thin air. On reflection, you would not expect your SSD to do 100PB of data; I simply meant that a number IS usually provided, and that those numbers have been validated by multiple parties as generally being ballpark accurate.
Did you RTFA? (Score:2)
Did you read the article? Micron claims their write endurance isn't a problem because of the way they implemented trim. That could make sense if they sometimes avoided erasing and writing at all.
Re: (Score:2)
Honesty time: I didn't read the article, but to say that TRIM fixes write endurance problems is highly misleading.
TRIM does impact endurance in that it CAN reduce write amplification (I believe), and it's write amplification that eats into the lifetime of your SSD, but it does not really change the fact that erase cycles are REQUIRED in order to reuse a cell. Again, all TRIM really does is schedule when that erase occurs-- directly prior to when it is needed, or at some idle time. Apparently (according to Wikipedia) SSDs using their o
I said it was BS (Score:2)
> TRIM does impact endurance in that it CAN reduce write amplification
Yes. Like I originally said. Trim, by avoiding write amplification in some cases, increases endurance. However, it only helps for otherwise unused blocks, so the impact of trim is application dependent, as I said right in the subject line of my original post.
> TRIM has nothing to do with endurance. TRIM erases cells that are scheduled for erasure anyways; all TRIM does is try to time that erasure such that it occurs at a time that
Re: (Score:2)
I guess you now realize that's wrong. The main purpose of trim is to avoid reading and writing pages that are unused anyway. The SSD doesn't need to reallocate trimmed blocks, because the OS isn't using that data anyway. Less physical reading and writing == more endurance.
It's not wrong.
1) TRIM simply alerts the drive when a block is ready for erasure; it's right there in the article I linked. Its primary purpose is not reallocation or anything else; it's just garbage collection for performance reasons.
2) The endurance effect applies ONLY if the firmware is using a hack to implement its own garbage collection, which could induce write amplification. TRIM does not, in itself, reduce endurance if the SSD isn't doing anything fancy / out-of-spec.
3) Rea
Re: Lifetime at 16nm? (Score:4, Informative)
Well, sometimes they make convenient little assumptions about write amplification and other things in coming up with that number. Also, it's the number they use for warranty claims, so it may not reflect the kind of endurance you'd normally expect. The latest trick is to basically use part of your drive as a semi-permanent SLC cache and only write it to MLC/TLC NAND later, if ever, so what you actually get will depend on your usage pattern. If you just keep rewriting a small file, it'll probably never leave SLC at all, while if you use it as a scratch disk, filling it up with large files and emptying it, you'll hit the MLC/TLC hard. The rating is just to give consumers who don't want an in-depth look something to relate to.
Personally, my first idea was: if they can deliver us an MLC drive at 45 cents/GB, doesn't that mean they should be able to deliver us an SLC drive at 90 cents/GB? That's not disturbingly much, it's considerably faster, and it should have all the endurance you'll ever need. That said, TechReport got 3 (out of 6) consumer drives they've written >1 PB to, so I'm guessing most drives fail from something other than NAND exhaustion. And I don't reinstall my OS disk every day... I just checked and I've used up 50 of my 3000 P/E cycles after 150 days of 24x7 running, so at this rate it should take 25 years.
I know people who turn on their computer maybe 2-3 hours a day on average, just streaming, no heavy media usage. Any SSD will last them forever; it's all about $/GB. Now, if you want a guess: they said 5000 P/E -> 3000 P/E (60%) going from 25nm -> 20nm MLC, so I'm guessing 3000 * 0.6 = 1800 P/E for 16nm. And TLC is probably like 500 P/E, though this drive doesn't use that.
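The projections above can be reproduced in a couple of lines (the 1800 P/E figure for 16nm is the poster's extrapolation, not a published spec):

def years_to_wear_out(rated_pe, pe_used, days_elapsed):
    """Linear extrapolation of P/E-cycle consumption to end of life."""
    cycles_per_day = pe_used / days_elapsed
    return rated_pe / cycles_per_day / 365

print(round(years_to_wear_out(3000, 50, 150), 1))   # ~24.7 years at current usage
print(round(years_to_wear_out(1800, 50, 150), 1))   # ~14.8 years if 16nm is ~1800 P/E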
Re: (Score:2)
NAND exhaustion is going to depend on manufacturing tolerance and operating conditions. The figures manufacturers supply are conservative estimates. I'm not at all surprised a handful of drives considerably outperform their specs.
Re: (Score:2)
Really? Not even a little bit surprised that random samples outperform endurance specs by ONE HUNDRED TO ONE THOUSAND TIMES?
Re: (Score:2)
Not at all.
I'm pretty sure they were not testing them at the maximum operating temperature, or with the worst-case workload (according to each drive's specific implementation of wear levelling).
Re: (Score:2)
I don't understand how the native cell life is relevant.
You're not buying flash chips from them, you're buying an SSD. The write endurance of the drive is what matters. How that is achieved is irrelevant.
It's extremely relevant.
How do you securely erase a flash drive?
How do you securely erase a single file on a flash drive?
How do you (attempt to) restore lost data from a failing flash drive?
How do you (attempt to) restore lost data from a failed flash drive?
I'll never buy a spinning disk again, but this shit matters. If you're wondering about my answers to the four questions above, they're:
Fuck it, encrypt sensitive info.
Fuck it, encrypt sensitive info.
Fuck it, restore from backup.
Fuck it, restore from backup.
Re: (Score:3)
All of those questions are about the controller and its wear-levelling software, not the flash chips.
In regards to your questions about security, the specific number of times a cell can be erased is irrelevant; what matters is that wear levelling takes place and physical data is moved around to different locations and not immediately (or potentially, ever) erased from the old location.
In theory, you should just need to delete the encryption key, because the controller encrypts all the data on the flash chips with 256-bit AES encryption. Again, that's entirely in the controller software.
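The "delete the key" approach (crypto-erase) can be illustrated with Python's cryptography package standing in for the drive's hardware AES engine (Fernet is not the drive's 256-bit AES scheme, but the principle is the same):

from cryptography.fernet import Fernet

key = Fernet.generate_key()                       # the drive-internal key
ciphertext = Fernet(key).encrypt(b"user data as stored in the NAND")

key = None                                        # "secure erase" = destroy the key
# The ciphertext still physically present in the flash is now unrecoverable,
# so no cell-level erase of the user data is needed.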
Re: (Score:2)
All of those questions are about the controller and its wear-levelling software, not the flash chips.
In regards to your questions about security, the specific number of times a cell can be erased is irrelevant; what matters is that wear levelling takes place and physical data is moved around to different locations and not immediately (or potentially, ever) erased from the old location.
In theory, you should just need to delete the encryption key, because the controller encrypts all the data on the flash chips with 256-bit AES encryption. Again, that's entirely in the controller software.
If you don't know the details of what your controller is doing, how can you be sure that a full format (including a zero fill) is actually hitting all the data? If your drive has more storage than it presents, your format utility has no way of actually overwriting all of it. If you grab a utility from the vendor and it doesn't explicitly spell out what it's doing, how do you know? If something dies, what's the best way to copy as much data off the device as possible in an attempt to recover? How do you tell the c
Re: (Score:2)
Yet going back to the original "tell me the native cell life", it's completely irrelevant to everything you've said.
Re: Lifetime at 16nm? (Score:5, Interesting)
It may not have to do with cell lifetime, but it does relate to overall endurance. If their "tricks" are legitimate algorithmic approaches to improving endurance, then the native cell lifetime becomes less of a solid metric for endurance. It's analogous to when clock speeds of CPUs became less relevant because manufacturers began focusing on increasing pipeline throughput instead of clock speed.
If a decrease from 20nm to 16nm feature size increases density by 25% and only decreases cell lifetime by 10%, then they will have more than enough new capacity to over-provision for the difference, and if their algorithmic improvements are legitimate, that mitigates the need for additional over-provisioning.
There's a lot of "if"s in there, of course, because you can't always take such PR at face value; the rough arithmetic sketched below just shows the best case.
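A back-of-the-envelope version of that trade-off, using the hypothetical 25% / 10% figures above (the 3000 P/E baseline is arbitrary; only the ratio matters):

pe_20nm, density_20nm = 3000, 1.00        # baseline cycles and relative capacity
pe_16nm, density_16nm = 3000 * 0.9, 1.25  # -10% cycles, +25% density (assumed)

endurance_20nm = pe_20nm * density_20nm   # relative total bytes writable
endurance_16nm = pe_16nm * density_16nm
print(endurance_16nm / endurance_20nm)    # 1.125 -> ~12.5% more total endurance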
Re: (Score:2)
That has nothing to do with the native cell lifetime.
It does when capacity increases faster than durability decreases. This has been addressed many, many times at each process shrink. The net effect is generally that you're better off spending your money on the newer-process SSDs; they will last longer per dollar spent.
Re: (Score:1)
This is Slashdot - I ain't got time to read articles before commenting
Re: (Score:2)
Seems like the average life expectancy of SSDs is well beyond the needs of most people at the moment, unless you're doing some serious content creation with massive amounts of reads/writes.
Re: (Score:1)
Samsung went the opposite direction: using larger features, but stacking vertically.
Less critical lithography...
The V-NAND gives 500MB/s AND a 10-year warranty!
Re: (Score:2)
Seems like the average life expectancy of SSDs is well beyond the needs of most people at the moment, unless you're doing some serious content creation with massive amounts of reads/writes.
The lifetime has been exaggerated from Day 1. Further, multiplying this problem manyfold, is that when an SSD fails, it tends to fail totally. In contrast, when a hard drive is failing, you tend to get a few bad sectors which flag an impending problem, and you may lose a file or two. A bad SSD usually means "everything gone with no warning".
If you use an SSD, you should have a good HDD backup.
Re: (Score:3)
Any important data should be backed up.
Re: (Score:2)
And if you use a HDD, you still should have a good backup as well.
Re: (Score:2)
Anandtech disagrees [anandtech.com]. So does Techreport [techreport.com]. So, in fact, do huge numbers of user reports which suggest that SSDs really do last a long time.
Further, multiplying this problem manyfold, is that when an SSD fails, it tends to fail totally.
I have seen this happen, but it's not due to the endurance of the flash cells but to the quality of the firmware / controller. The actual cell failures apparently cause reallocations (according to techreport's tests, and to common sense). And you create an interesting dichotomy; what does it look like for an SSD or HDD or CPU or RAM to fail "not totally"? You get most of your bits
Re: (Score:2)
Anandtech disagrees. So does Techreport. So, in fact, do huge numbers of user reports which suggest that SSDs really do last a long time.
This is not "disagreement". I didn't claim they don't last a long time. What I stated was that the claims of average lifetime have tended to be exaggerated. They can still last a long time.
I have seen this happen, but it's not due to the endurance of the flash cells but to the quality of the firmware / controller.
Absolute nonsense, and the manufacturers themselves will tell you so. The issue *IS* the endurance of the flash cells, and the tremendous improvement in firmware is a direct result of this limitation. The manufacturers have expended enormous effort to produce management schemes that mitigate the short lives of the cells, which i
Re: (Score:2)
When a hard drive fails, it is almost always the electronics or the bearings. The interface boards can be replaced, leaving the data on the drive intact. When bearings seize, it is usually possible to free them up long enough to recover the data. As I mentioned before: I know because I've done it.
The only truly permanent, unrecoverable error on a hard drive is a catastrophic head crash, and those are extremely rare. But they do happen. I opened one up once to try to recover a guy's data
Re: (Score:1)
Seems like the durability of flash cells decreases with every process shrink. It makes me wonder what the lifetime of this new stuff will be. A 10% reduction in cost is no bargain if it comes with a 10% reduction in lifetime.
Are you trashing hard drives that much faster than the hardware they are sitting in?
I guess I'm not so worried about all this warranty talk. I think we're past the era of catastrophic failures occurring way too early in SSD hardware. My average spinning rust drive lifespan was/is 7-10 years. If it lasts at least 3 though, I'm pretty happy. Much like tires on a car, it's a piece of hardware that you constantly back up because you're concerned from day one that you might suffer a failure regardless of fac
Re:Lifetime at 16nm? (Score:5, Interesting)
HP has gone one step further and is creating a dynamically allocated mram system that works as both system memory and data storage, so your hard drive and memory are all from the same pool. This reduces power usage dramatically and increases performance dramatically. At least in their own load test, they've gotten about an 8x reduction in datacenter power usage and almost a 2x increase in average workload throughput.
They're currently working on custom Linux kernels that can dynamically allocate memory and storage instead of having to partition the pool between the two. A cool side effect is that "memory mapped files" are literally in memory all the time as storage is memory.
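HP hasn't released code for this, but ordinary memory-mapped files give a feel for the programming model being described. A minimal Python example (file name arbitrary; on a conventional system the OS still pages this to a block device behind the scenes, whereas in HP's design the "file" would already live in the shared memory pool):

import mmap

with open("example.bin", "w+b") as f:
    f.truncate(4096)                     # back the mapping with 4 KiB of file
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"                # ordinary in-memory writes...
        m.flush()                        # ...explicitly pushed back to storage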
Re: (Score:1)
A cool side effect is that "memory mapped files" are literally in memory all the time as storage is memory.
It's Multics all over again!
http://www.fact-index.com/m/mu/multics.html
Re: (Score:2)
At least in their own load test, they've gotten about an 8x reduction in datacenter power usage
WOW! So the datacenter is generating 7 times the electricity it used to consume?!?!?!
?!?!?!?!
?!?!?!?!?!
Mixed units (Score:3)
In other news, Nanon is expected to release 16m ICs soon.
Re:Mixed units (Score:5, Funny)
Stupid Slashdot can't even display UTF8 correctly. That was supposed to read "16um".
Thanks for nothing, "nerds" website. We're in 2014, get with the damn program instead of fucking about with your stupid beta layout.
Re: (Score:2)
Do I really have to explain the joke to you?
Re: (Score:3)
In completely unrelated news, Slashcode 14.08 has full UTF-8 functionality, and has been live on Soylent for almost a month now [soylentnews.org].
Re: (Score:2)
Yes, but only if you want to see Slashdot in beta.
I stay logged in in classic mode just so I never see the beta. I always know when I get logged out because Slashdot looks like shit.
Re: (Score:2)
Soylent is unrelated to news you say?
Re:Mixed units (Score:4, Informative)
The problem was that back then people were abusing that functionality to screw with everything. If you google "site:slashdot.org erocS" [google.ca], you get hints of what people were doing. If you don't get what that string is, try "5:erocS".
As a result, /. implemented a Unicode whitelist because they keep adding all sorts of stuff to Unicode.
Re:Mixed units (Score:4, Insightful)
It sounds like you're saying /. doesn't support Unicode. Make all the excuses you want about it being hard -- they might be true -- but Unicode support on /. does not exist. The idea that a whitelist (that doesn't even include mu) is evidence of support is like claiming that an F1 car is road-legal because you added headlights.
Re: (Score:3)
Well, you must also know the HTML entities, even in plain text mode... writing æøå directly doesn't work, but &aelig;&oslash;&aring; works. In this case &micro; doesn't work though. And I think all languages have Unicode support good enough to strip control characters and shit if you're not lazy. My impression was that it was more to sabotage the ASCII "art" than anything else.
Re: (Score:3)
As a result, /. implemented a Unicode whitelist because they keep adding all sorts of stuff to Unicode.
Is there anything in this whitelist?
Re: (Score:1)
A bunch of white. Think, McFly. Think!
Confusion over TRIM (Score:4, Interesting)
To deal with the added write amplification, Tanguy said Micron increased the TRIM command set, meaning blocks of data no longer required can be erased and freed up more often
Did they mean "implemented" rather than "increased?" Or did they mean that they added something new to the TRIM command?
Re: (Score:3)
Well, a 'command set' implies a set of functions, one command per function. So if you increase the command set, you've increased the number of functions, which means they added something new to the TRIM command.
What that might be, I don't know. Going by his description, it sounds like they managed to implement some detection of non-allocated cells, which would allow them to re-allocate said cells without actually copying junk data to the new location.
IE the system decides that block 105 is under-used and 657