All Solid State Drives Suffer Performance Drop-off
Lucas123 writes "The recent revelation that Intel's consumer X25-M solid state drive had a firmware bug that drastically affected its performance led Computerworld to question whether all SSDs can suffer performance degradation due to fragmentation issues. Vendors are well aware that the specifications listed on drive packaging represent burst speeds achieved only with purely sequential writes; after real-world use, performance drops markedly over time. Drives with better controllers tend to level out, but others can keep degrading. Benchmarking standards expected later this year from several industry organizations are still not fully baked; they should eventually compel manufacturers to list actual performance for sequential and random reads and writes, as well as the drive's expected lifespan under typical conditions."
Just a small dip in performance (Score:5, Informative)
And this seems like the sort of issue that will be resolved in the next generation, anyway.
Not a bug. (Score:1, Informative)
"Drastically affected its performance" (Score:5, Informative)
"Drastically effected its performance"
This is patently false. What's really happening is that SUSTAINED WRITE PERFORMANCE decreases by about 20% on a full drive as compared to a fresh drive. You might say 20% is too much, and I'd probably agree with you, except that ONLY sustained write performance is affected.
Your read speed will not decrease. Your read latency will not increase. Unless you're using your SSD as the scratch drive for high-definition video work (and why the hell would you use one for that? Platter drives are far better suited to that task, given their sequential write speed and total storage space), you have nothing to worry about.
This happens on all drives, as the article title correctly states. The fix is a new command, TRIM, that tells the drive which blocks are no longer in use so they can be erased ahead of time; that lowers the odds that the drive has to erase-then-write in the write path as you go along. Win7 knows how to do this.
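The idea can be sketched with a toy model (illustrative only, not any vendor's actual firmware): while a pool of pre-erased blocks is available, writes go straight to clean flash; once the pool is exhausted, every write pays for an inline erase.

```python
# Toy model of why pre-erasing (TRIM-style hinting) helps:
# count how many writes have to wait for an inline erase,
# with and without a pool of pre-erased blocks.

def writes_with_erase_in_path(num_writes, pre_erased_pool):
    """Return how many writes stalled on an erase-then-write."""
    stalls = 0
    for _ in range(num_writes):
        if pre_erased_pool > 0:
            pre_erased_pool -= 1   # write lands on a clean block
        else:
            stalls += 1            # must erase inline before writing
    return stalls

# Fresh drive (or TRIM keeping the pool topped up): no stalls.
print(writes_with_erase_in_path(100, pre_erased_pool=100))  # 0
# Well-used drive with no TRIM: every write pays for an erase.
print(writes_with_erase_in_path(100, pre_erased_pool=0))    # 100
```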
Nonetheless, it is totally overblown and your SSD will perform better than any platter based drive even when totally full.
good article on the reasons behind this (Score:5, Informative)
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=4 [anandtech.com]
Anandtech has a very detailed article that explains all about this and some ways to recover the lost speed (sometimes).
Re:HMMMMMMM (Score:5, Informative)
Think I'll stick with the tried and true IDE/SATA tech.
Psst: SSD drives connect via SATA.
Re:All? (Score:2, Informative)
Can a fail be insightful?
NAND is the culprit (Score:5, Informative)
The fundamental problem with current solid-state drives is that they use NAND flash memory--the same stuff that you find in USB flash drives, media cards, etc.
The advantage of NAND is that it is both ubiquitous and cheap. There are scads of vendors who already make flash-memory products, and all they need to do to make an SSD is slap together a PCB with some NAND chips, a SATA 3Gb/s interface, a controller (usually incorporating some sort of wear-leveling algorithm), and a bit of cache.
The disadvantages of NAND include limited program/erase cycles (typically ~10K for multi-level-cell flash) and the fact that writing new data into a used block involves copying the whole block to cache, modifying it in cache, erasing the block, and rewriting it.
This isn't a problem if you're writing to blank sectors. But if you're writing, say, 4KB of data to a 512KB block that previously contained part of a larger file, you have to copy the whole 512KB block to cache, edit it to include the 4KB of data, erase the block, and rewrite it from cache. Multiply this by a large sequence of random writes, and of course you'll see some slowdown.
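The write amplification falls straight out of the arithmetic. A quick sketch using the sizes above (illustrative sizes, not tied to any particular drive):

```python
# Sketch of the NAND read-modify-write cost: updating 4KB inside
# an already-programmed 512KB erase block forces the controller
# to move the whole block. Sizes are the illustrative ones above.

ERASE_BLOCK = 512 * 1024    # bytes per erase block (assumed)
UPDATE      = 4 * 1024      # bytes the filesystem actually wrote

# Block already holds data: read all of it into cache, merge the
# 4KB in, erase the block, then program the full block back.
bytes_read       = ERASE_BLOCK
bytes_programmed = ERASE_BLOCK
write_amplification = bytes_programmed / UPDATE

print(write_amplification)  # 128.0 -- 512KB programmed for 4KB of new data
```

Multiply that factor across a long run of small random writes and the slowdown in TFA stops being surprising.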
SSDs will always have this problem to some degree as long as they use the same NAND flash architecture as any other flash media. For SSDs to really effectively compete with magnetic media they need to start from scratch.
Of course, then we wouldn't have the SSD explosion we see today, which is made possible by the low cost and high availability of NAND flash chips.
Re:Wait just a minute... (Score:5, Informative)
...you mean to tell me that fragmentation *reduces* the performance of storage???
Fragmentation on hard disks reduces performance because of the time it takes to physically move the disk heads around. There are no heads to move in an SSD, so it's perfectly reasonable to assume that particular mechanism won't hurt SSDs, and therefore that it's a non-issue. I did a small test [googlepages.com] years ago on the effects of flash-memory fragmentation in a PDA, and I, along with most people I discussed it with, was quite surprised by the results at the time. I never got a good technical explanation of why the performance hit was so large. I doubt it's the same mechanism at work as in modern SSDs, but it's sort of relevant anyway.
Re:Wait just a minute... (Score:5, Informative)
The reason the performance hit is large is that writes to SSDs happen in whole blocks. Fragmentation causes partially-filled blocks to be used. When that happens, the SSD must read the block, merge the already-present data with the data being written, and write the block back, rather than simply overwriting it. That's slow.
Re:Just a small dip in performance (Score:5, Informative)
Totally different issue. The problem here is inherent in all flash drives, but can be mitigated by clever controller design. AnandTech made an extensive report on this issue [anandtech.com].
Re:NAND is the culprit (Score:4, Informative)
Samsung has begun manufacture of their PRAM which promises to be a replacement for NAND:
http://www.engadget.com/2009/05/05/samsungs-pram-chips-go-into-mass-production-in-june/ [engadget.com]
Wikipedia writeup on PRAM:
http://en.wikipedia.org/wiki/Phase-change_memory [wikipedia.org]
This type of "flash" memory will make much better SSD drives in the near future.
Re:NAND is the culprit (Score:3, Informative)
SSDs will always have this problem to some degree as long as they use the same NAND flash architecture as any other flash media. For SSDs to really effectively compete with magnetic media they need to start from scratch.
Of course, then we wouldn't have the SSD explosion we see today, which is made possible by the low cost and high availability of NAND flash chips.
Or...I dunno, maybe they could create a filesystem specifically for NAND flash [wikipedia.org].
http://en.wikipedia.org/wiki/JFFS2 [wikipedia.org]
Re:Just a small dip in performance (Score:5, Informative)
*Not all* flash drives have the issue. What really happens is that your small-write IOPS will be on the low side, and your sequential writes will always run at full speed *unless* you implement some form of write combining. Write combining cheats a bit by taking small random writes and committing them to the flash itself in a more sequential fashion.
The catch is that when you later pass over that now-fragmented area, the controller has to play musical chairs with the data while trying to service the write originally requested by the OS. End result: slower write speed.
Some well-behaved controllers (Intel, Samsung) will take a little extra time to defragment the block while servicing the sequential write. Optimized controllers (Intel's M series) will now rarely fall below their advertised write speed of 80 MB/sec.
Other, more immature controllers leave the data fragmented and simply move the whole block elsewhere. This results in compounded fragmentation, which can eventually drop write speed to 1/3 to 1/5 of its speed when new.
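The write-combining idea described above can be sketched roughly as follows. This is a deliberately simplified model with made-up names; real controllers also juggle erase blocks, garbage collection, and wear leveling.

```python
# Simplified sketch of controller-side write combining: scattered
# logical writes are appended sequentially to flash, and a remap
# table records where each logical sector actually lives.
# (Illustrative only -- real flash translation layers are far
# more involved than this.)

class CombiningController:
    def __init__(self):
        self.flash = []    # flash is written strictly sequentially
        self.remap = {}    # logical sector -> physical index

    def write(self, logical_sector, data):
        self.remap[logical_sector] = len(self.flash)
        self.flash.append(data)        # always a sequential append

    def read(self, logical_sector):
        return self.flash[self.remap[logical_sector]]

ctl = CombiningController()
for sector in (900, 3, 512, 7):        # scattered logical writes...
    ctl.write(sector, f"data-{sector}")

print(ctl.read(3))   # data-3
print(ctl.remap)     # ...land at sequential physical indices 0,1,2,3
```

The cost shows up later: every overwrite of a logical sector leaves a stale copy behind in flash, and consolidating those stale copies while servicing new requests is the "musical chairs" that drags write speed down.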
I authored the original articles on the matter:
http://www.pcper.com/article.php?aid=669 [pcper.com]
http://www.pcper.com/article.php?aid=691 [pcper.com]
Allyn Malventano
Storage Editor, PC Perspective
Re:Just a small dip in performance (Score:5, Informative)
To elaborate on the issue, think about how regular disks have sectors that tend to be around 512 bytes. This expectation is "baked" into a lot of filesystems, because of how ubiquitous it is.
This has to do with fragmentation of device blocks when those blocks don't conform to filesystem expectations. The filesystem writes out 512-byte sectors, but the flash drive can only commit 128KB slices at a time, and each of those slices holds 256 sectors. If the filesystem, or a controller trying to compensate for the filesystem's writes, handles that poorly, the 128KB slices soon become fragmented: logically contiguous 512-byte sectors can no longer be stored contiguously within the slices. You end up with a few sectors here, a few there, a larger chunk somewhere else, and maybe the occasional whole 128KB slice that still maps to a single linear run of filesystem sectors.
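The arithmetic behind that mismatch, using the sizes above:

```python
# Sector-to-slice arithmetic for the sizes discussed above
# (512-byte filesystem sectors, 128KB flash slices).

SECTOR = 512            # bytes per filesystem sector
SLICE  = 128 * 1024     # bytes per flash slice (erase unit)

sectors_per_slice = SLICE // SECTOR
print(sectors_per_slice)   # 256 sectors share one erasable unit

# Which slice does a given logical sector land in?
def slice_of(sector_number):
    return sector_number // sectors_per_slice

print(slice_of(255), slice_of(256))  # 0 1 -- sector 256 starts slice 1
```

So a single 512-byte update can dirty an erase unit shared with 255 neighboring sectors.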
All filesystems that write out blocks smaller than 128KB (or whatever size the flash drive uses) have this problem. The problem is twofold. First, filesystems were written with hard-disk-drive specifications in mind, and those specifications were allowed to become so ubiquitous, with so much momentum behind them, that writing for a different class of hardware is now difficult. Second, because of that inertia, with both filesystem writers and hardware manufacturers expecting every SATA device to be a hard disk, there is no alternative or more dynamic method of addressing a disk.
Everything, to the filesystem, is a spinning disk. If you made a spinning disk that actually had 128KB sectors, you would probably hit the same problem these flash drives have. The frustrating realization now, a decade or more after many of these standards were drafted, is that they permitted very little, if any, deviation. Disks, filesystems, and operating systems have all assumed spinning media, and particular numerical definitions for things like sector size, for so long that no one thought to write a standard that would let filesystem authors deal sensibly with new kinds of devices.
And because the SATA layer is so simplified and dumbed down for spinning media, there's no way in the SATA command set to manually defragment a block. SATA/SAS are apparently so set in stone that Intel and every other SSD manufacturer has to emulate a spinning-media device; there's no other good way to do it.
The filesystem cannot fix this problem, and neither can the OS, because at that level the drive looks like just another dumb spinning disk with 512-byte sectors. You can't "see" where those sectors are physically located, let alone defragment them, from that level. The only way is to issue a SATA command that wipes the entire drive, which lets the controller flush its internal sector mapping.
Re:"Drastically affected its performance" (Score:5, Informative)
20% is too little. I've seen drives, even SLC drives, drop by more than 50%. Only some drives bounce back properly. Others rely on TRIM to clean up their fragmentation mess.
A more important note is that some initial TRIM implementations have been poorly implemented, resulting in severe data corruption and loss:
http://www.ocztechnologyforum.com/forum/showthread.php?t=54770 [ocztechnologyforum.com]
I posted elsewhere regarding the fragmentation issue here:
http://hardware.slashdot.org/comments.pl?sid=1227271&cid=27883769 [slashdot.org]
Allyn Malventano
Storage Editor, PC Perspective
Re:Not News (Score:5, Informative)
Intel has solved theirs about 95%, but they are helped by their write speeds being limited to 80 MB/sec. With the new firmware, it is *very* hard to get an X25-E to drop below its rated write speed.
http://www.pcper.com/article.php?aid=691&type=expert&pid=5 [pcper.com]
OCZ has not yet solved it. They currently rely on TRIM, and in my testing that alone is not sufficient to correct the fragmentation buildup. IOPS falls off in this condition as well.
Allyn Malventano
Storage Editor, PC Perspective
Re:To test (Score:1, Informative)
Uh, not really. You need to test real usage conditions. The most common hard-drive (ab)using applications are going to be databases, development (edit, save, compile, rebuild; uses lots of disk access), and work-horse applications like VMware, Photoshop, Video editing software, 3D rendering, etc.
Especially things like SQLite which really tends to hammer the drives and is embedded in lots of applications.
I have some experience with SSDs on development machines and database servers. I speak from experience when I say they are much less reliable than the old spinning hard drives when used in those environments. In fact, I have never seen a single one last more than a year.
Not *ALL* (Score:3, Informative)
Looks like it. They're all borked. Every single one of them. I said so in the title, and I only bother reading the title in Slashdot stories these days.
http://4onlineshop.stores.yahoo.net/an5insax1ram.html [yahoo.net]
The ANS9010 and 9010B suffer no such issues since they are ram-based. They also have a CF backup slot in addition to a backup battery. Very slick and a better solution for a boot drive than a typical SSD if you absolutely must have maximum speed. Pricing with RAM is comparable to an enterprise-level SSD, just roughly 1/2 to 1/4 the capacity is all.
Re:Just a small dip in performance (Score:5, Informative)
Yeah, but the SSD's wear leveling sits at a level below RAID. When you write to a specific area of the SSD, the wear leveling can remap that to wherever it needs.
Re:Not a bug. (Score:2, Informative)
Actually, SSDs don't currently support TRIM. The last I read, the manufacturers can't agree on a common way to do it (there are two competing ideas about how to implement it). They are actually trying to do the right thing by working out their differences and implementing it the same way, rather than each doing their own thing. Until they come to an agreement, they can't release any firmware supporting TRIM. For the same reason, Windows 7 doesn't support TRIM either; it is expected to ship without TRIM support, with support coming shortly after release.
Re:Not a bug. (Score:5, Informative)
Re:Not News (Score:3, Informative)
Correction to my last. I was speaking of X25-M, not E.