Long-Term Performance Analysis of Intel SSDs
Vigile writes "When the Intel X25-M series of solid state drives hit the market last year, there was little debate that they were easily the best performing MLC (multi-level cell) offerings to date. The one area in which they blew away the competition was with write speeds — initial reviews showed consistent 80MB/s results. However, a new article over at PC Perspective that looks at Intel X25-M performance over a period of time shows that write speeds are dramatically reduced from everyday usage patterns. Average write speeds are shown to drop to half (40MB/s) or less in the worst cases, though the author does describe ways that users can recover some of the original drive speed using standard HDD testing tools."
Reader MojoKid contributes related SSD news that researchers from the University of Tokyo have developed a new power supply system which will significantly reduce power consumption for NAND Flash memory.
Re:Why? (Score:5, Informative)
A simple change to one byte means a read of the entire 64 KiB block that byte is in, a change of the data, and then a write of the full 64 KiB.
If the filesystem isn't flash-aware, you can suffer a theoretical worst-case penalty of 65536x, since a one-byte write turns into a 64 KiB one.
So what you really need is a filesystem that stores files in 64 KiB blocks and groups reads and writes to the same block together as one operation.
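The read-modify-write penalty described above can be sketched in a few lines of Python. This is a toy model, not any real controller's behavior; the 64 KiB block size is taken from the comment and real devices vary:

```python
# Toy model of flash read-modify-write amplification: changing one
# byte forces a rewrite of the whole erase block containing it.

BLOCK_SIZE = 64 * 1024  # 64 KiB erase block (assumed, per the comment)

def write_one_byte(flash, offset, value):
    """Change a single byte by rewriting its entire erase block."""
    block_start = (offset // BLOCK_SIZE) * BLOCK_SIZE
    # 1. Read the whole block containing the byte.
    block = bytearray(flash[block_start:block_start + BLOCK_SIZE])
    # 2. Modify the one byte in RAM.
    block[offset - block_start] = value
    # 3. Erase and rewrite the full block.
    flash[block_start:block_start + BLOCK_SIZE] = block
    return BLOCK_SIZE  # bytes physically written for a 1-byte change

flash = bytearray(128 * 1024)  # two erase blocks of simulated flash
written = write_one_byte(flash, 70000, 0xAB)
print(written)  # 65536 bytes written to change a single byte
```

The 65536:1 write amplification in the worst case is exactly the "65536 times slower" figure above.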
I am waiting for these SSDs (Score:4, Informative)
I am patiently waiting for these SSDs and plan to test them on a MythTV distro box. I will get a fully compatible Linux SSD notebook onto which a MythTV distro will be installed.
Then with 3 TV cards, I will see how these SSDs measure up on reading/writing/transcoding etc. My intention is to work the SSD for about a week. Watch this space for results.
I do not think that Intel will deliver the "golden" SSD. I think Samsung's SSD [samsung.com] effort will bear results faster. Those videos say a lot.
Re:Why? (Score:5, Informative)
Essentially, what the Intel write-combining technology is doing is combining multiple small (4 KB) writes into a single block, and letting the old block become fragmented (left with a bunch of 4 KB holes in it).
The scenario in a nutshell:
You have a 1 MB file and a program which modifies a single 4 KB chunk of it. Intel's technology marks the original 4 KB chunk within its original "block" as erased, and then allocates a new block (using the wear-leveling algorithm) to hold the new version of the 4 KB chunk, combining it with any other small write operations that have recently occurred or are about to occur. Up to 128 such 4 KB writes can be combined into a single block write.
After this is done many hundreds of thousands of times, however, the drive ends up in a state where nearly every "block" is only partially used. The write combiner itself is stuck with whatever the wear-leveling algorithm handed it, which is now a partially used block instead of a fully virgin block. It can no longer combine 128 small 4 KB writes together; maybe it only has space to combine 10 of them, or in the worst-case scenario, just one.
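A toy model (not Intel's actual firmware, whose details aren't public) makes the degradation above concrete: a fresh block absorbs 128 combined writes, while a fragmented one absorbs far fewer:

```python
# Toy model of write combining: up to 128 four-KiB writes can share
# one block, but a partially used block has fewer free slots.

PAGES_PER_BLOCK = 128  # 4 KiB write slots per block (per the comment)

class Block:
    def __init__(self, used_pages=0):
        self.free = PAGES_PER_BLOCK - used_pages

def combine_writes(block, pending_writes):
    """Pack as many pending 4 KiB writes as fit into the block."""
    absorbed = min(block.free, pending_writes)
    block.free -= absorbed
    return absorbed

fresh = Block(used_pages=0)    # virgin block: full combining
worn = Block(used_pages=118)   # fragmented block: only 10 slots left

print(combine_writes(fresh, 128))  # 128 writes combined into 1 block
print(combine_writes(worn, 128))   # only 10 combined; the rest spill
```

Once most blocks look like `worn`, every burst of small writes spills across many blocks, which is where the measured write-speed drop comes from.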
Re:Why? (Score:3, Informative)
I thought it could flip a bit to 1 at will without having to rewrite the whole block, but if you want to write a 0, it needs to read the whole block, wipe it, and rewrite it with the same data but with the bit flipped to 0. But I wouldn't really know; I'll use SSDs when they cost about the same as hard drives.
Re:Bullshit (Score:1, Informative)
Access time != bandwidth. But you're right, RAM still has much quicker access times. Still, both seem instantaneous to humans; is a 0.2 ms access time really so bad for most applications?
Re:TL:DR (Score:4, Informative)
As far as I can tell from some quick googling and checking on Wikipedia, jffs2 isn't much of a competitor at this point, e.g., it's apparently not really usable on flash chips bigger than 512 Mb. Maybe UBIFS or LogFS? None of them seem to be really mature.
Re:Why? (Score:5, Informative)
Um... no.
When cells age, they take longer to erase. This happens over 5,000 to 10,000 cycles or more. It's not dramatic, and eventually the cells fail in a way too severe to be corrected by the ECC.
Because there is a (software) process to bring full speed back to the drive, we can safely conclude that none of the slowdown is related to cell aging or other cell-level issues. It's more of an organization and fragmentation issue.
Re:Why? (Score:5, Informative)
Actually, NAND flash comes in 2 block sizes - small block (16 KiB/block, 512 bytes/page, 32 pages/block) and large block (128 KiB/block, 2048 bytes/page, 64 pages/block).
Also, in NAND flash, a "write" operation can turn a "1" bit into a "0" bit; an "erase" operation turns a "0" bit into a "1" bit. Writes can work at the bit level, erases only at the block level. (Large block NAND can NOT be partial-page programmed, though, so you must write 2048 bytes at once - but you can read all 2048 bytes, flip one bit, then write it all back.) This characteristic is used by the flash management routines to manage the flash block: marking pages as "discard" or "ready for erase" is done by flipping a 1 bit to 0, since that's easy. You can write a block partially, so you don't always have to incur a huge 128 KiB write.
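Those bit rules are easy to demonstrate: a program operation can only clear bits, so its effect is a bitwise AND, and only an erase restores 1s. A minimal sketch (one byte stands in for a page):

```python
# Minimal model of NAND bit semantics: programming can only flip
# 1 -> 0; only a block erase returns bits to 1.

def program(cell, new):
    # A write can clear bits but never set them: effect is AND.
    return cell & new

def erase():
    return 0xFF  # erase sets every bit back to 1

cell = erase()                    # 0b11111111
cell = program(cell, 0b10101010)  # legal: only 1 -> 0 transitions
print(bin(cell))                  # 0b10101010
# "Writing" 1s over 0s silently does nothing without an erase:
cell = program(cell, 0b11111111)
print(bin(cell))                  # still 0b10101010
```

This is why marking a page "discard" (flipping a 1 to 0) is cheap, while un-marking anything requires a full block erase.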
Given this, it's a block device, so you can't write 1 byte anyhow - you must write the sector size, which is emulated as 512 bytes. What normally happens is that the SSD will mark a page as "dirty" to indicate it's not to be used, and remap that page's contents onto a new page elsewhere, thus only performing a 2048 byte write (plus 64 out of band bytes).
Now, what happens when all the blocks are used? The flash routines have to erase a block, but before erasing a block, it has to make sure all the pages within it are "dirty". If there are non-dirty pages, they're copied to another block, and when all non-dirty pages are copied, that block is erased. If your access pattern is such that all the blocks have non-dirty pages, it takes a little while to actually move all the data around to get blocks that can be erased. Do enough random I/O, and this can happen quite easily.
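The reclamation step described above can be sketched as follows. This is an illustrative model, not any real controller's garbage collector; page contents and the dirty flag are simplified to tuples:

```python
# Sketch of flash garbage collection: before a block can be erased,
# its still-valid (non-dirty) pages must be migrated elsewhere.

def reclaim(block, spare_block):
    """Move valid pages out of `block`, then erase it.
    Each page is (data, dirty). Returns pages physically copied."""
    copied = 0
    for data, dirty in block:
        if not dirty:
            spare_block.append((data, False))  # migrate valid page
            copied += 1
    block.clear()  # whole-block erase frees it for reuse
    return copied

victim = [(b"a", True), (b"b", False), (b"c", True), (b"d", False)]
spare = []
moved = reclaim(victim, spare)
print(moved)        # 2 valid pages had to be copied before the erase
print(len(victim))  # 0 -- block is now erased and reusable
```

When every block holds a mix of dirty and valid pages, each erase pays this copy cost, which is exactly the random-I/O slowdown the comment describes.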
Closer, but.. no. (Score:5, Informative)
NAND blocks are *erased* in large blocks, probably 128KB or larger in this case.
However, the read and write operations occur at a *page* level, not block. NAND pages today are typically 2 KB or 4 KB in size.
So you can read and write in smaller units than 128KB.
However, to erase any byte of the NAND, you have to relocate the preserved data and erase a whole block.
Because these drives operate on huge aggregate arrays of NAND, their block structure may be much larger, or they may have very sophisticated algorithms to remap and write new data while deferring erases until much later.
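One way such remapping can work (an assumed sketch, not any vendor's design) is a logical-to-physical mapping table: new data always lands on a fresh page, and the old copy is just marked stale for later cleanup:

```python
# Hypothetical flash translation layer (FTL) sketch: rewrites go to
# fresh pages and old copies become stale garbage, deferring erases.

class RemapFTL:
    def __init__(self, num_pages):
        self.mapping = {}              # logical page -> physical page
        self.free = list(range(num_pages))
        self.stale = set()             # pages awaiting a block erase

    def write(self, logical, _data):
        old = self.mapping.get(logical)
        if old is not None:
            self.stale.add(old)        # old copy becomes garbage
        phys = self.free.pop(0)        # new data lands on a fresh page
        self.mapping[logical] = phys
        return phys

ftl = RemapFTL(num_pages=4)
print(ftl.write(0, b"v1"))  # 0
print(ftl.write(0, b"v2"))  # 1 -- the rewrite goes to a new page
print(sorted(ftl.stale))    # [0] -- stale page erased later, in bulk
```

The erase cost is paid later, in bulk, instead of on every rewrite; the drive only slows down once it runs out of fresh pages to hand out.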
Re:SLC vs MLC (Score:2, Informative)
Once someone releases an SSD that solves ALL of those sticky points, and ideally delivers enough random-access throughput to saturate the 300MB/s SATA line (or whatever bus is mainstream by then), that's when I'll jump on board.
Well, like myself, you will be waiting for a non-flash based SSD then.
Inevitably, something like PRAM [wikipedia.org] will displace Flash, and it can't happen soon enough. Until then, I would much rather see some of that fab capacity reclaimed for DRAM production.
Still better than the alternative (Score:2, Informative)
Looking at the big picture, I'd rather have a slow SSD than keep dealing with the data losses of (criminally unreliable) HDs.
Re:Why? (Score:3, Informative)
Older flash devices allowed multiple writes to one page, but new ones do not.
The higher-density MLC devices do not allow you to read a page, flip a bit to 0, and overwrite it. They require that pages be written just once, and in order.
This is causing no end of frustration for the Microsoft mobile filesystems, which frequently overwrite pages to flag them.
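The write-once, in-order constraint can be modeled with a simple guard (names here are illustrative, not from any real driver):

```python
# Toy model of the modern MLC rule: each page in a block may be
# programmed exactly once, and pages must be written in order.

class StrictBlock:
    def __init__(self, pages=64):
        self.next_page = 0  # only this index may be programmed next
        self.pages = pages

    def program_page(self, index, data):
        if index != self.next_page:
            raise ValueError("pages must be written once, in order")
        self.next_page += 1
        return data

blk = StrictBlock()
blk.program_page(0, b"first")
blk.program_page(1, b"second")
try:
    blk.program_page(1, b"overwrite")  # the old flag-rewrite trick
except ValueError as e:
    print(e)  # pages must be written once, in order
```

A filesystem that flags pages by overwriting them in place, as described above, trips this check on every flag update.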
Re:Why? (Score:2, Informative)
There was no difference in how long it took to fragment. If we wrote a nasty enough mix of smaller file sizes to the drive, performance would drop right at the point where all flash was written to at least once (i.e. just over the 80GB mark).
After running HDDErase on the drive, it hit the *exact* same 80 MB/sec write speed each and every time. Additionally, running successive software secure erases (writing 0's across all 80 GB) showed zero drop in speed even after 10 passes.
In testing several different SSD brands / types, I have yet to see a slowdown that would suggest block erasures take longer over time. I suspect the block erase timing is based on flash that is at or near its end of life.
Al Malventano
PCPer Editor
Re:SLC vs MLC (Score:5, Informative)