Flash Destroyer Tests Limit of Solid State Storage
An anonymous reader writes "We all know that flash and other types of solid state storage can only endure a limited number of write cycles. The open source Flash Destroyer prototype explores that limit by writing and verifying a solid state storage chip until it dies. The total write-verify cycle count is shown on a display — watch a live video feed and guess when the first chip will die. This project was inspired by the inevitable comments about flash longevity on every Slashdot SSD story. Design files and source are available at Google Code."
Interesting! (Score:4, Interesting)
Re:Interesting! (Score:5, Informative)
Still, since this test isn't on an actual, shipping solid state drive (SSD) product, the results will be discounted by a lot of critics.
Re: (Score:2)
Re:Interesting! (Score:5, Insightful)
Or connect the drive inside any computer running a Prescott P4 with 100% CPU utilization.
Re:Interesting! (Score:4, Funny)
He said an oven, not a nuclear fusion core.
Re: (Score:3, Funny)
When I first read the title of the summary, I thought to myself "Shit, yet another one about Apple versus Adobe..."
Re:Interesting! (Score:4, Insightful)
Assuming that the flash is of equivalent technology (e.g. SLC NAND, cell size, etc) to that used for SSD, then this would present a best case test, since it is exercising all cells equally.
An SSD tries to do wear leveling (distribute writes evenly), but that can't be done perfectly, as it is in this test.
Re:Interesting! (Score:4, Informative)
So, I'm violating my usual rule of not responding to ACs, only because you're such an idiot (which conveniently explains why you are posting AC).
See, that's the thing. Once a sector is written to, it won't be touched again, unless the data changes. You end up with some subset of sectors which are frequently modified, while others never are. That is NOT an even distribution of writes across all sectors, nor is it "perfect" in any sense of the word. So, fill up 75% of your SSD with files which don't change, then beat up on the remaining sectors 4 times as much as truly evenly distributed writes would cause. It's not clear what your "MLC" comment was about, since I specifically mentioned that as an example of flash technology.
So keep track of how many times each erase block has been written, and if some blocks get erased too often relative to the rest, move data from the least-erased blocks onto the most-erased blocks. You do a few extra writes this way, but a negligible number if you set the thresholds high enough. And then you'll get fully leveled writes. I'm sure the clever folks at places like Intel have figured out strategies like this (although for the cheap stuff, who knows).
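For illustration, here's a minimal Python sketch of that kind of static wear leveling - per-block erase counters plus a content swap whenever the block just written gets too far ahead of the least-worn block. The threshold and structure are invented for this example, not taken from any vendor's firmware:

    # Toy static wear leveling: count erases per physical block and, when the
    # block just written is far more worn than the least-worn block, swap
    # their contents and remap, parking cold data on worn silicon.
    THRESHOLD = 100
    N_BLOCKS = 1024

    erase_count = [0] * N_BLOCKS
    data = [None] * N_BLOCKS
    phys = list(range(N_BLOCKS))          # logical block -> physical block

    def write(logical, value):
        p = phys[logical]
        erase_count[p] += 1               # every rewrite costs an erase
        data[p] = value
        c = min(range(N_BLOCKS), key=erase_count.__getitem__)
        if erase_count[p] - erase_count[c] > THRESHOLD:
            data[p], data[c] = data[c], data[p]
            erase_count[p] += 1           # the migration costs one erase each
            erase_count[c] += 1
            lc = phys.index(c)            # logical block currently mapped to c
            phys[logical], phys[lc] = c, p

    # Hammer a single logical block; the wear still spreads across the device.
    for i in range(200_000):
        write(0, i)
    print(max(erase_count) - min(erase_count))   # spread stays near THRESHOLD

Even with every write aimed at one logical block, no physical block gets more than about THRESHOLD erases ahead of the rest, and the overhead is just the two migration erases per swap.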
Re:Interesting! (Score:5, Informative)
And in fact, the more advanced wear leveling algorithms do this already. There are spare blocks specifically so that data can be moved; the old block can then be freed.
Re:Interesting! (Score:5, Informative)
Actually, I believe *you* are incorrect. Different AC here, but I had to respond because your response doesn't match what I understand to be the case as an engineer working with vendors selecting NAND flash for use in consumer devices. I'll be interested to see if I'm incorrect or if this even gets read as an AC post.
Specifically, it doesn't matter to the flash device if the host has written a sector and never touched it again; that sector *will* be moved when it's been read enough times that the ECC indicates it's likely to become unreadable soon. This is called read disturbance, and it can happen surprisingly frequently with MLC cells at small process sizes (i.e. at sufficient density to make multi-GB modules). It also happens on SLC devices, but to a lesser extent, because they can cope with more voltage decay per bit and still read the bit correctly. This is done as a function of even the simplest block-access controllers, because otherwise you wouldn't be able to read your own data back more than a few hundred times. In fact, if you wish to get technical about it, there is also a massive dependency on the temperature of the module when the data was originally written, since this directly affects the number of electrons that can be stored.
In addition to moving data to counter read disturbance, most controllers (even the very simple ones in SD cards & eMMC devices) will move sectors around (actually not filesystem sectors, but individual blocks, although the distinction isn't important here) in order to optimise wear across the entire device even if the content hasn't changed. If you think about it, this has to happen at some level even without wear levelling, since the sector is massively smaller than the superblock size for most of the densities we have available today - it's not unusual to see a device with an erase block size of 256KB, which is normally way larger than a sector.
I don't know much about SSD controllers, they're far too expensive for our devices, but they can't possibly work the way you think they do - not if they use the same raw NAND that is used for other block storage abstractions.
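To make the read-disturb relocation concrete, here's a rough Python sketch of the logic being described - a controller that checks how many bits the ECC had to fix on each read and rewrites the block elsewhere once that crosses a threshold. The threshold, names, and structure are illustrative guesses, not any real controller's design:

    ECC_RELOCATE_AT = 8   # corrected bits per read that triggers a move (made up)

    class Controller:
        def __init__(self, flash):
            self.flash = flash                      # physical block array
            self.map = {}                           # logical -> physical block
            self.spares = list(range(len(flash)))   # unmapped physical blocks

        def write(self, logical, data):
            p = self.spares.pop()
            self.flash[p] = data
            old = self.map.get(logical)
            if old is not None:
                self.spares.append(old)             # old copy erased/reused later
            self.map[logical] = p

        def read(self, logical):
            p = self.map[logical]
            data, corrected_bits = read_with_ecc(self.flash, p)
            if corrected_bits >= ECC_RELOCATE_AT:
                self.write(logical, data)           # refresh onto a fresher block
            return data

    def read_with_ecc(flash, p):
        # Placeholder: a real controller decodes BCH/LDPC here and reports
        # how many bit errors it had to correct on this read.
        return flash[p], 0

The point being that relocation is triggered by reads, not writes - a sector the host never rewrites still gets moved.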
It's not the worst case (Score:3, Insightful)
The AC actually posited a worse scenario: the whole disk was filled, and only one "spot" was repeatedly changed.
There are two ways to handle this:
Re:Interesting! (Score:5, Insightful)
Re:Interesting! (Score:5, Funny)
A SSD with flash that averages 1,000,000 writes before blocks start to fail but does it gracefully with little/no data loss could be better than one that averages 2,000,000 but goes out in a blaze of glory as soon as the first block fails.
That depends on how you define "better", and for my personal definition, it depends on exactly how glorious a blaze it is. :)
Re:Interesting! (Score:4, Funny)
That depends on how you define "better", and for my personal definition, it depends on exactly how glorious a blaze it is. :)
Really. Don't all of us Slashdotters love a good explosion? Sure, we mostly prefer them to be scheduled explosions but, still, an explosion is an explosion.
Re:Interesting! (Score:4, Funny)
That brings to mind an old favorite of mine: the Light Emitting EPROM. The power pins on EPROM chips are in opposite corners. Plug in the EPROM chip backwards and you've hooked the power up backwards. Result: A light emitting EPROM, though one with a very limited service life.
Re:Interesting! (Score:4, Interesting)
And honestly it's a pretty valid argument. This is definitely going to be informative, but I'm just as interested in how a particular SSD handles the flash blocks failing as when they fail. A SSD with flash that averages 1,000,000 writes before blocks start to fail but does it gracefully with little/no data loss could be better than one that averages 2,000,000 but goes out in a blaze of glory as soon as the first block fails.
Flash fails on write - if the write succeeds, you will be able to read it, barring catastrophic events like ESD exposure.
Re:Interesting! (Score:5, Informative)
In fact, they are read back. At the flash component level.
The flash cell is a charged gate. When programmed, the uC in the flash device compares the charge state with a reference voltage. Not enough? Add more charge. Still not enough? The cell is bad; mark it (at the block level, so you lose xx bits for one bad one) and move on.
This is fairly high level and not exactly how it works, but close enough.
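Roughly, in Python, with the physics reduced to a toy (pulse sizes, reference level, and retry limit all invented):

    import random

    MAX_PULSES = 5    # programming pulses before the controller gives up

    class Cell:
        """Toy floating-gate cell: each pulse injects some charge, with noise."""
        def __init__(self):
            self.charge = 0.0
        def apply_pulse(self):
            self.charge += random.uniform(0.2, 0.4)
        def reads_as_programmed(self, vref=1.0):
            return self.charge >= vref    # compare against the reference level

    class WeakCell(Cell):
        def apply_pulse(self):
            self.charge += random.uniform(0.05, 0.1)  # damaged oxide: poor injection

    def program_cell(cell):
        for _ in range(MAX_PULSES):
            cell.apply_pulse()            # not enough? add more charge
            if cell.reads_as_programmed():
                return True               # verified
        return False                      # still not enough: won't hold the level

    def program_block(cells):
        for i, cell in enumerate(cells):
            if not program_cell(cell):
                print(f"cell {i} failed verify; marking whole block bad")
                return False
        return True

    program_block([Cell() for _ in range(63)] + [WeakCell()])

The last (deliberately weak) cell never reaches the reference level within the retry budget, so the whole block gets retired - one bad cell, xx bits lost.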
Re: (Score:3, Informative)
Informative? How about wrong!
128 seconds * 1M operations = 128,000,000 seconds
Seconds in a day = 86400
128M/86400 = 1481.48 days
Or roughly 4 years.
For some reason, you divided 128M by the number of minutes in a day (1440) to arrive at your ludicrous 243 years.
Hence you are out by a factor of 60.
Re:Interesting! (Score:5, Informative)
Cause wear leveling only picks another sector to write to from among the unused sectors. Simplified, if your drive is 80% full, you write to the same sectors five times as often.
Especially because once blocks start failing, other blocks start failing too, at an accelerating rate, and they rapidly reach a state of being completely unusable.
That's a contradiction. If the wear-leveling algorithm was ineffective then you'd have a relatively constant rate of block failure. A good wear-leveling algorithm ensures you won't get a significant number of block failures until almost every block has been worn out. Then you get a bunch. So the behavior described is failing exactly as intended, and indicates the wear-leveling algorithm worked almost perfectly.
But you're right that a wear algorithm that only uses free space would be terrible. That's one reason no device uses one like that. The primary reason, though, is that the SSD has no idea which blocks are in use and which are free, unless it is told via the TRIM command (later-generation SSDs with newer OSes). The filesystem knows, but an SSD is filesystem-agnostic. Moving data is the cause of the performance drop-off when the drive runs out of unused/un-TRIM'd blocks.
Personally, I have the cheapest, buggiest SSD in common knowledge (the one that can get bogged down to 4 IOPS), and it has worked beautifully for me. Just checking a diagnostic tool, in the past two years I've power cycled it 5,666 times (which probably explains why I kill HDDs so quickly), the average block has been erased 7,333 times, and no block has been erased more than 7,442 times. I've got zero ECC failures. Honestly, I'm a little surprised I've written 234 TB of data to my poor 32 GB drive, but my usage is a bit heavy (~10 complete Gentoo compiles with countless updating, ~5 DISM'd Windows 7 installs, ~5 DISM'd Vista installs, ~30 Haiku installs, ~20 SVNs of 10 GB projects, and a good amount of downloading).
But, in my experience, the wear leveling algorithm is only ~3% away from being "perfect".
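The clustering is easy to see in a toy simulation (all numbers invented): give each block a slightly different endurance limit, then compare perfect leveling against hammering a hot subset:

    import random

    N, ENDURANCE = 100, 500
    limits = [random.gauss(ENDURANCE, ENDURANCE * 0.05) for _ in range(N)]
    total = int(N * max(limits)) + 1

    def first_and_last_failure(pick_block):
        wear, dead_at = [0] * N, {}
        for t in range(total):
            b = pick_block(wear)
            wear[b] += 1
            if b not in dead_at and wear[b] >= limits[b]:
                dead_at[b] = t
        times = sorted(dead_at.values())
        return times[0], times[-1]

    # Leveled: always write the least-worn block -> failures bunch up late.
    print(first_and_last_failure(lambda w: w.index(min(w))))
    # Unleveled: hammer 10% of the blocks -> first failure comes ~10x sooner.
    print(first_and_last_failure(lambda w: random.randrange(N // 10)))

With leveling, the first failure lands near the end of the device's total write budget; without it, the hammered blocks die an order of magnitude earlier.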
Re:Interesting! (Score:5, Informative)
Yeah, the title seems misleading, since they're writing and verifying data on an EEPROM, which is not what's used in solid state drives, last time I checked.
Re:Interesting! (Score:5, Informative)
Re: (Score:3, Informative)
Re:Interesting! (Score:4, Interesting)
One should not forget companies might have "chip lotteries", i.e. use chips that are less robust and cheaper to manufacture, without the majority of consumers knowing the difference.
They do this in the LCD monitor industry, where "panel lotteries" ship cheaper panels than what is advertised, counting on consumer ignorance. See the AnandTech thread about panel lotteries here:
http://forums.anandtech.com/showthread.php?t=39226 [anandtech.com]
Re:Interesting! (Score:4, Insightful)
I just hang around on the NCIX forums, and every day or two there's a person complaining about having to RMA their SSD because programs started crashing, and then finally they couldn't even boot it.
I saw lots of people replying in threads, saying theirs were still working fine. I started asking everyone how long they had owned theirs. Most with working SSDs were in the 8-15 months range, and most with serious problems were in the 12-24 months range.
I've noticed that SSD warranties from a lot of manufacturers have dropped from the original 5 years down to ~2. That's quite a drop. There must be a reason.
I suspect a heavy disk user like myself would burn through one well before the warranty is up.
Note: My sample is pretty small compared to the amount sold, but I do wonder how many die without the owners being vocal about it.
I'm wondering if, close to two years ago, manufacturers flipped to cheaper NAND to get prices down. Now prices are going back up, so maybe manufacturers realized their mistake? Even since January, SSD prices have gone up 20-30% on average. $89.99 SSDs are now $120+
http://www.newegg.com/Product/Product.aspx?item=N82E16820167025&Local=y [newegg.com]
Re:Interesting! (Score:5, Interesting)
I would like to see a comparison with a mechanical drive doing the same thing in parallel.
While solid state has a theoretically limited number of writes vs. the mechanical drive, it would be interesting to see what the real world has to offer.
Re: (Score:2)
I vaguely recall reading that the more writes flash has, the less likely it is to remember what is written to it over time - kind of like volatile storage, but with the length of time the data lasts being inversely related to the number of writes.
Given what I know about flash, I'm not quite sure how this could happen physically. I believe this was mentioned when I was looking into SSD caches for ZFS, where this type of failure would be insignificant. It could be completely incorrect, too.
If it is correct,
Re:Interesting! (Score:5, Informative)
Re:Interesting! (Score:5, Insightful)
Mechanical disks have lots of great failure modes. You can do seek tests until the arm breaks or voice coil fails, you can do write/read tests until you get enough bad sectors that they can't recover the data any more, or you can do start-stop of the drive motor until it dies. Another good one is to stop the motor for a while, then see if it starts up or has stiction (sic), but that test takes a long time. If the drive is not held rigidly enough, vibration will kill it, and if it isn't cooled properly, heat will kill it. Did I miss any?
Re: (Score:2)
Re:Interesting! (Score:5, Interesting)
I'm just curious, why use sic in your own posts? Wouldn't you just correct whatever you are sic-ing?
IMHO, this kind of use of [sic] is perfectly valid. It means "this is not a typo, it's really how it is spelled" (literally "thus"). In this case it refers to an unusual word that may look like a misspelling of a more common word. However, it can also refer to a genuine misspelling, when you are referring to what somebody else wrote.
Re: (Score:3, Interesting)
Mechanical disks have lots of great failure modes.
My favorites are the ones that make loud sounds during the failure event. When a piece of the head breaks off, for example.. that thing bounces around in there like crazy when the drive is spinning around thousands of times per minute.
Re:Interesting! (Score:5, Funny)
If you have any important data on that drive, urine trouble...
live stream (Score:5, Funny)
a live stream linked on slashdot.. ouch..
Re: (Score:3, Insightful)
They should have a bit torrent-like system for streams. Like, you just connect to the swarm and request a fairly recent image. Everyone keeps the past minute or so cached to send to new people in the swarm. Maybe a tiered system so that the people who have been connected longest are closest to the original stream.
Let's say I connect to Joe and Mary, who're connected to the original server. They send me frames two or three frames behind the server. Jack connects, and he's getting a bit lagged images too, rig
Re:live stream (Score:5, Informative)
Works pretty well actually.
Re:live stream (Score:5, Informative)
https://www.cisco.com/en/US/products/ps6552/products_ios_technology_home.html [cisco.com]
Re: (Score:3, Interesting)
And which works great for IPTV solutions. The end points subscribe to a channel (a multicast group), and then the upstream router decides if it needs to do the same, heading further back until it hits another router that's already got the channel subscribed.
It's similar when you leave the channel: once the router decides it's not got any clients for a given channel, it'll unsubscribe from it, and that will bubble back up.
Very elegant, imo.
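For the curious, the subscribe step on the receiving end is just a couple of socket options - joining the group is what emits the IGMP membership report the routers react to. A minimal Python receiver (group and port are arbitrary example values):

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5004     # example group; 239/8 is admin-scoped

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # "Subscribe to the channel": the kernel sends an IGMP join for GROUP,
    # and upstream routers graft this branch onto the distribution tree.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, src = sock.recvfrom(2048)
        print(len(data), "bytes from", src)

Dropping membership (IP_DROP_MEMBERSHIP, or simply closing the socket) is the leave that lets the routers prune the branch back, exactly as described above.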
Re: (Score:3, Interesting)
It is very nice. And it was around for a long, long time before people started using it for everyday television (IPTV). We used to call it the Mbone [wikipedia.org].
Re: (Score:2)
I thought that was what multicast was for. Ouch! Too early.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Doesn't multicast help any? Given a bunch of people who want to view the same exact stream, the server should be sending the same packets and letting the viewers' players deal with sync, starting at a key frame (and not in the middle of some crumbly diff frames), et cetera. With that, the server could just concentrate on the list of viewers' IPs, send packets far less often, and the /. arson fails.
Live streams, to me, seem easier than webpages because the viewer always wants the current frames of a live v
Re: (Score:2)
Die? (Score:2)
You would think after the write cycles were exceeded the chips would be more or less read-only instead of 'dead.'
Am I mistaken on this presumption?
Re: (Score:2)
You would think after the write cycles were exceeded the chips would be more or less read-only instead of 'dead.'
Am I mistaken on this presumption?
Yep. When it dies, you can still write - it's just that what you write won't be right. :) Hence the verify part of the test.
Re: (Score:3, Informative)
Depends - if the chips are using some sort of error correction, they may well just fail. I have USB-based flash die all the time and it DIES, as in not even presenting a usable device to the OS despite being "detected". The theory is that they fail nicely, but the chances are that any non-premium flash will just die a death. Why bother making the device fail gracefully if it's failed anyway?
Literally - I've never seen a flash device in such a "read-only" mode, even for a single bit, but I can't even begin
Re: (Score:3, Insightful)
Re: (Score:2)
It depends on the architecture of the flash cells, but yes, I would expect that the chips would fail into some mode where erase and program operations have no effect. (Being a software guy rather than a Flash memory guy, I wouldn't want to guess whether over-erased cells would be at logic 1, logic 0 or a mix of the two.)
Re:Die? (Score:4, Informative)
(Being a software guy rather than a Flash memory guy, I wouldn't want to guess whether over-erased cells would be at logic 1, logic 0 or a mix of the two.)
Well I'm not an expert on flash, but I know a little about how they work. In NOR flash the data line is pulled up to one, so that's the default state for any bit. There's a transistor connected to ground, and if the floating gate has a charge in it and the transistor is on, then it pulls the data line down to 0. "Erasing" a NOR flash sets all the bits to 1, and programming it sets certain bits to 0.
The most common failure mode as I understand it is that electrons get trapped in the floating gate even after erase cycles such that it's very close to or over Vt for the transistor, so that bit would be stuck in the "programmed" state of logic 0.
NAND memory is the opposite: the erased state is 0 and the programmed state is 1, so a permanently charged floating gate should result in a stuck-at-1 fault.
Which, relating to the OP's question, means either way the memory wouldn't be good for much of anything. Your NAND SSD is going to fail during an erase-program (aka "write") cycle, and except in the extremely unlikely case that the pattern you were writing did not involve changing any previously stored 1s to 0s on stuck bits, then the result is going to be wrong. You could read it, but you'd be reading the wrong data.
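A toy model of that stuck-bit behavior, using the NOR convention from the post above (erase sets 1s; programming can only pull bits to 0; the mask and patterns are made up):

    class NorByte:
        def __init__(self, stuck_at_zero_mask=0):
            self.stuck = stuck_at_zero_mask  # bits that can never return to 1
            self.value = 0xFF                # erased state: all ones

        def erase(self):
            self.value = 0xFF & ~self.stuck  # stuck bits stay "programmed" (0)

        def program(self, pattern):
            # Programming can only clear bits (1 -> 0), never set them.
            self.value &= pattern & ~self.stuck & 0xFF

        def write_verified(self, pattern):
            self.erase()
            self.program(pattern)
            return self.value == pattern     # fails once a stuck bit matters

    b = NorByte(stuck_at_zero_mask=0b00001000)
    print(b.write_verified(0b11110111))      # True: that bit wanted a 0 anyway
    print(b.write_verified(0b11111111))      # False: can't raise the stuck bit

As the post says, the failure surfaces on a write: the data still reads back fine, it's just not the data you asked for.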
Re: (Score:3, Informative)
Your description is a bit backwards, at least for the NOR flash I work on. When the floating gate has charge (electrons), it turns the transistor off. The negative charge on the FG cancels out the positive voltage on the control gate. The bit is read via a current sense -- no current is a zero, lots of current is a one.
The main failure mechanism (that I know of) is oxide damage due to high energy electrons. Program and erase (technically, Fowler-Nordheim tunneling) take high voltages, which gives electrons
Huh? (Score:2)
"Read-only" refers to storage which contains useful information, in that it was written once with the desired data, even if it can't be again (ROM or PROM). So even though it's read-only, it still fulfills its intended purpose.
In any case, read-only = useful, dead = not useful; worn out flash
Re: (Score:2)
Right, but failing gracefully into a "no more writes" state is far better than an "I'm dead and I took your data with me" scenario.
I honestly don't know which is more common or if it varies amongst various flash storage devices, hence why I raised the question.
People in general should but don't have backups of their data so this distinction is pretty important.
Re: (Score:2)
Re:Huh? (Score:5, Insightful)
Graceful as in data not related to your recent failed writes is still readable, so it can be backed up and migrated to a new drive. Not sure why that concept is so difficult. I consider something dead as "completely unreadable, ALL your data has been destroyed - have a nice day."
No longer reliable but still semi recoverable isn't quite "dead."
Maybe I'm just using a stricter interpretation of the word dead than you are?
Let's use a marker-on-a-whiteboard analogy. If I was storing all my data on a suitably large whiteboard using a marker and I completely exhausted my marker's supply of ink, I'd be pissed if this resulted in a blank whiteboard, wouldn't you? On that same note, if I wiped a small section of my whiteboard with the intent of writing something new in that area and only then realized that my marker was no longer suitably supplied with ink and my write failed, I would find the blank void in that section alone acceptable.
Does that clarify things?
Subject here (Score:4, Funny)
Flash! Aa-aaahhh!!
Re:Subject here (Score:5, Funny)
Now do that a million more times and we'll see if you wear out. Don't forget to include the live video feed.
Re: (Score:3, Informative)
for the guy (Score:3, Insightful)
Re:for the guy (Score:4, Funny)
Link to thread in question...
http://ask.slashdot.org/story/10/05/23/1547202/Scientific-RampD-At-Home
Re: (Score:2)
Here, take mine, I don't need it anymore, apparently. I recognized the reference too.
Just kidding. I don't have one either.
Die Flash, DIE! (Score:5, Funny)
Wait, which flash are we talking about here?
Re: (Score:3, Funny)
> Wait, which flash are we talking about here?
We're talking about flash photography [funnychill.com], of course.
Re: (Score:2)
Re: (Score:2)
The one that'll save every one of us, of course! [youtube.com]
Re:Die Flash, DIE! (Score:4, Funny)
We need more of this (Score:2)
Excellent work! Given that the chance that the manufacturers will provide this data approaches zero, this is the only way we're going to get realistic figures for the longevity of flash chips. Hopefully, this will encourage more independent hardware testing in other fields.
dull (Score:5, Funny)
I was expecting something cool, like storing a picture, displaying it, and then constantly XORing each pixel with some random number twice, repeatedly, and watching the image decay over time. Although it would appear that it'd need quite a lot of time.
Ha! (Score:3, Funny)
This project was inspired by the inevitable comments about flash longevity on every Slashdot SSD story.
Take that, every 'dotter who says bitching on this website doesn't get anything done!
/removestonguefromcheek
SSD's? no. (Score:5, Informative)
The article says: We used a Microchip 24AA01-I/P 128-byte I2C EEPROM (IC2), rated for 1 million write cycles.
Um, SSDs don't use anything like this part as their storage.
Re: (Score:2)
Yeah, that's what I was wondering too the moment I saw the 1 million cycles... what I heard was that SLC is usually rated for ~100k writes and MLC for ~10k writes, so this is a completely different type of chip. So I'm not sure what this data will be relevant for, but it's not SSDs... what's this for, BIOS chips or something?
Re:SSD's? no. (Score:5, Informative)
More importantly, the test pattern does not resemble normal SSD usage. Complete writes are very unusual for an SSD, and a cycle is not completed nearly as quickly as a cycle on this EEPROM (400 cycles per minute). When an SSD is written to in normal usage, a wear leveling algorithm distributes the data and avoids writing to the same physical blocks again and again. The German computer magazine c't has run continuous write tests with USB sticks and never managed to destroy even a single visible block on a stick that way. The first test (4 years ago) wrote the same block more than 16 million times before they gave up. The second test (2 years ago) wrote the full capacity over and over again. The 2GB stick did not show any signs of wear after more than 23TB written to it.
Re: (Score:2)
Oddly, the NAND I deal with (MLC and SLC) tends to have ~1M writes for SLC, and at least 100k writes for MLC. The 10k flash chips I used were high-capacity Intel StrataFlash (MLC, but NOR), which
Re: (Score:3, Informative)
Re: (Score:2, Funny)
+1 Informative :)
Re: (Score:2)
Re: (Score:2)
You mean, like reptiles?
Re: (Score:3, Informative)
Re:SSD's? no. (Score:5, Informative)
Okay, I'll bite. Let me introduce you to this thing called "functional equivalence". You do realize that even though they are all "nonvolatile storage," there is a difference between EEPROM and Flash, and that there are many different kinds of low- and high-density Flash and they all have different proprietary silicon designs with different characteristics?
Microchip EEPROMs are specifically designed for low-density, high-reliability applications, and are totally different at the transistor level from high-density MLC Flash used in solid state disks.
The more of you that watch, the faster it dies (Score:3, Funny)
I bet the server's IP address is untraceable.
Re: (Score:3, Informative)
Looking back on it, that was a pretty bad movie.
The display only goes to 9,999,999! (Score:3, Insightful)
The display only goes to 9,999,999! I think that won't be enuf... should be 100M or 1G.
Re: (Score:2)
I know (Score:5, Funny)
Re: (Score:3, Funny)
But it will be *over nine thousand!!!*
Re: (Score:2)
When I read his post I assumed he meant 1 Googol. Needless to say, I was awed by his optimism.
Myth Busters (Score:5, Funny)
Now, to see how much explosives it takes to MAKE it fail!
This is my favorite part! :-)
Is this like (Score:2)
That Castrol commercial with 50 engines running on engine stands with no oil in them?
But how much data does it write? (Score:4, Insightful)
Most modern flash memories have their controllers check which blocks are dying or dead and re-route write and read requests to good blocks. So while your flash may seem to be working perfectly well, various blocks inside it may be dying and its storage size may be progressively decreasing.
So I hope they are rewriting the entire flash in their test. Otherwise it is not representative.
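The remapping itself is simple enough to sketch in Python (hypothetical structure; real flash translation layers are far more elaborate) - the logical capacity stays constant while the spare pool silently drains:

    class Remapper:
        def __init__(self, n_blocks, n_spares):
            self.map = list(range(n_blocks))           # logical -> physical
            self.spares = list(range(n_blocks, n_blocks + n_spares))

        def on_write_failure(self, logical):
            # Redirect the logical block to a spare; the host notices nothing
            # until the spare pool is exhausted.
            if not self.spares:
                raise IOError("out of spares: failures are now visible")
            bad, self.map[logical] = self.map[logical], self.spares.pop()
            return bad                                 # retired physical block

Which is why a test has to rewrite the whole device each cycle - otherwise it only ever measures the blocks (and spares) its small working set happens to land on.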
Re: (Score:2)
In other words, it will probably take about a month to intentionally brute-force a full-bandwidth kill of a 32GB SSD. Larger SSDs would take proportionally longer.
Re: (Score:3, Insightful)
Nonsense, it's completely representative of normal use. That's exactly the point. Until data loss occurs, or there are no more free blocks to use, the flash memory is objectively perfectly good.
Re:But how much data does it write? (Score:4, Informative)
They're testing an EEPROM: it is bit addressable and it does not contain any wear leveling algorithm.
Apples and hippos (Score:5, Informative)
They're testing an EEPROM: while the underlying physics of storing data in an EEPROM and flash RAM are the same - floating gate transistors - EEPROMs use best-of-breed implementations (single-bit addressable floating gates), while the flash RAM found in SSDs is the least enduring MLC NAND: the cheapest per bit, with a write cycle endurance two to three orders of magnitude lower than EEPROMs.
SSDs do not contain EEPROMs. They don't even contain SLC (NOR or NAND). In fact, SSDs don't even contain NOR MLCs. Only the cheapest will do, for SSDs.
Re: (Score:3, Informative)
Some, usually the more expensive models, will use SLC NAND. No SSD uses NOR for data storage due to a total lack of density on that technology. They may for storing firmware/FPGA data, however.
Re: (Score:3, Informative)
Intel's Extreme line, for one. The X25-E [intel.com] goes up to 64GB. It's a 2.5" form factor, but it's a SATA drive and you can use a 3.5" bay with mounting rails to put it in a desktop.
GP is right about being expensive... expect to pay over $600 for the 64GB model.
Re:Apples and hippos (Score:4, Interesting)
All of these ones: http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=2010150636%201749646482&name=SLC [newegg.com]
"Flash Destroyer"? (Score:2)
Never heard of him. Is he a Marvel Universe character?
I wonder... (Score:4, Funny)
3700 overwrite Cycles (Score:4, Informative)
This is what I got from a 2GB Kingston flash key. After that, there were errors in almost all overwrites. However, the real kicker is that while the key read back wrong data, there was never any error reported. Since doing that at the beginning of 2009, I do not trust USB flash anymore.
Set-up: Linux, 1MB of random data replicated to fill the chip, then read back to compare. Repeat with new random data. I had one isolated faulty read-back around 3500 cycles, and then from around 3700 cycles, 90% (and pretty soon 100%) faulty read-backs. Language was Python; no errors for the device on STDERR or in the system logs. And I looked carefully.
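A rough reconstruction of a test like that (not the poster's actual script - the device path, sizes, and structure are guesses) looks something like:

    import os

    DEV = "/dev/sdX"            # the USB stick - triple-check before running!
    CHUNK = 1 << 20             # 1MB random pattern, replicated to fill the device
    SIZE = 2 * (1 << 30)        # 2GB stick

    cycle = 0
    while True:
        cycle += 1
        pattern = os.urandom(CHUNK)
        with open(DEV, "wb", buffering=0) as f:
            for _ in range(SIZE // CHUNK):
                f.write(pattern)
            os.fsync(f.fileno())                # force it out to the device
        with open(DEV, "rb") as f:
            bad = sum(f.read(CHUNK) != pattern for _ in range(SIZE // CHUNK))
        print(f"cycle {cycle}: {bad} bad chunks")

One caveat: the read-back pass should really bypass the page cache (O_DIRECT, or drop caches between passes), or you risk verifying RAM instead of the flash.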
This is a bad test (Score:5, Informative)
I am working on flash write/erase cycling right now in my day job and I can tell you that this is not a very good test. Temperature affects cycling endurance (and this is reflected in the spec), so if your SSD is 20-30C higher than room temp it's going to make a difference. Fowler-Nordheim tunneling (which NAND flash uses for program and erase) is hardest at cold temperatures, so the first operation after powerup might be the worst case in a PC. (Yes, I know they're not using an SSD here, but they are doing their cycling at room temp.)
Another thing to keep in mind is that continuous cycling is not realistic. The wear-out mechanism here is charge trap-up, where electrons get stuck in the floating gate oxide and repel other electrons, slowing down program and erase. Over time, thermal energy lets the electrons detrap. So irregular usage in a hot PC should actually be a nicer environment for endurance.
A final factor is process variation, which can only be covered by using a large sample size (>100) and/or using units from separate lots with known characteristics, none of which an end user will likely have access to. Even that doesn't tell you anything about the defect rate.
There are really two types of tests that people are talking about here. The first is a spec compliance test, which uses the extreme conditions I mentioned above to guarantee that all units will have the spec endurance under all spec conditions. This should be done by the manufacturer. The second is a real world usage test, which will only give realistic results if done under actual use conditions. The number you get from the article's test probably won't tell you much.
[Disclaimer: I work on embedded NOR flash, not NAND, but the bits are the same and the article's talking about EEPROM so I figure I can butt in.]
Re: (Score:2)
They're writing and verifying a pattern to a 128 byte storage chip.
This is vaguely similar to what you might get without wear-leveling.