Data Storage Hardware

Solid State Drives Tested With TRIM Support 196

Posted by samzenpus
from the try-them-out dept.
Vigile writes "Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues. The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files. Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers. A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy."
This discussion has been archived. No new comments can be posted.

  • Re:High failure rate (Score:4, Informative)

    by Darkness404 (1287218) on Wednesday June 17, 2009 @08:05PM (#28368017)
    What in the world are you talking about? The nice thing about SSDs is that yes, they do fail, but they fail (or are supposed to) in a predictable, non-catastrophic way that leaves the data readable, just not writable. I have had two SSDs and haven't had either fail despite heavy usage. And I don't think you could patent SSDs: the technology is everywhere because it is flash memory, and even if it were patented, more companies than just one make them.
  • Re:fragmentation? (Score:5, Informative)

    by Vigile (99919) * on Wednesday June 17, 2009 @08:06PM (#28368019)

    This older Slashdot post linked in the story links to a story that covers that topic very well: http://www.pcper.com/article.php?aid=669 [pcper.com]

  • by vadim_t (324782) on Wednesday June 17, 2009 @08:22PM (#28368151) Homepage

    That's because JFFS and such are intended to be used on top of a raw flash device.

    SSDs do wear levelling internally already, so a filesystem that tries to do it as well is redundant.

  • Re:fragmentation? (Score:5, Informative)

    by cbhacking (979169) <been_out_cruising-slashdot@yahoo. c o m> on Wednesday June 17, 2009 @08:27PM (#28368199) Homepage Journal

    Disclaimer: I am not an SSD firmware author, although I've spoken to a few.*

    As best I can understand it, the problem is that writes are scattered across the physical media by wear-leveling firmware on the disk. In order to do this, the firmware must have a "free list" of sorts that allows it to find an un-worn area for the next write. Of course, this unworn area also needs to not currently be storing any relevant data.

    Now, consider an SSD in use. Initially, the whole disk is free, and writes can go anywhere at all. They do, too - you end up with meaningful (at some point) data covering the entirety of the physical memory cells pretty quickly (consider things like logfiles, pagefiles, hibernation data, temporary data, and so forth). Obviously, most of that data doesn't mean anything anymore - to the filesystem, only perhaps 20% of the SSD is actually used after 6 months. However, the SSD's firmware thinks that every single part has now been used.

    Obviously, the firmware needs to be able to detect when data on disk becomes obsolete and can safely be deleted. One problem is that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings. The other problem is that these tables get *huge* - a typical home system might have between 100K and 1M files on it after a few months of usage, but probably creates and deletes many thousands per day (consider web site cookies, for example - each time they get updated, the wear leveling will write that data to a new portion of the physical storage).

    Maintaining the tables themselves is possible, and when a logical block gets overwritten to a new physical location, the old location can be freed. The problem is that this freeing happens at the same moment the SSD needs to find a new location to write to, and the only physical blocks it knows can safely be overwritten are those whose logical blocks have already been rewritten (to a different physical location). Since the table of active blocks has to be indexed by logical block, it can be hard to locate the oldest "free" physical blocks. This can lead to searches that, even with near-instant IO, cause noticeable slowdowns.

    Enter the TRIM command, whereby an OS can tell the SSD that a given range of logical blocks (which haven't been overwritten yet) are now able to be recycled. This command allows the SSD to identify physical blocks which can safely be overwritten, and place them in its physical write queue, before the next write command comes down from the disk controller. It's unlikely to be a magic bullet, but should improve things substantially.

    * As stated above, I don't personally write this stuff, so I may be mis-remembering or mis-interpreting. If anybody can explain it better, please do.
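The bookkeeping described above - a logical-to-physical mapping plus a free list, with TRIM returning physical blocks to the free list before the next write arrives - can be sketched in a toy model. Everything here (the ToyFTL class, the block count, the FIFO free-list policy) is an illustrative assumption, not any vendor's actual firmware:

```python
# Toy flash translation layer (FTL): logical blocks map to arbitrary physical
# blocks, and TRIM returns a physical block to the free list without needing
# a new overwrite of the logical block. Purely illustrative.

class ToyFTL:
    def __init__(self, physical_blocks):
        self.mapping = {}                         # logical block -> physical block
        self.free = list(range(physical_blocks))  # recyclable physical blocks

    def write(self, logical):
        if not self.free:
            raise RuntimeError("no free physical blocks; slow search/GC needed")
        new_phys = self.free.pop(0)      # wear leveling: always pick a fresh block
        old_phys = self.mapping.get(logical)
        self.mapping[logical] = new_phys
        if old_phys is not None:
            self.free.append(old_phys)   # old copy is now stale, reclaim it
        return new_phys

    def trim(self, logical):
        # TRIM: the OS says this logical block no longer holds live data,
        # so its physical block can be recycled *before* the next write.
        phys = self.mapping.pop(logical, None)
        if phys is not None:
            self.free.append(phys)

ftl = ToyFTL(physical_blocks=4)
ftl.write(0); ftl.write(1); ftl.write(2); ftl.write(3)  # drive now "full"
ftl.trim(1)                 # file deleted: its block returns to the free list
assert len(ftl.free) == 1   # the next write proceeds without any searching
```

The point of the sketch is the last two lines: after a TRIM, the drive already knows a recyclable physical block, so the next write needs no expensive search through the mapping tables.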

  • by blitzkrieg3 (995849) on Wednesday June 17, 2009 @08:42PM (#28368283)
    You beat me to it, but in the spirit of adding value, there's a good article here [linux-mag.com]. Another benefit of nilfs2 is that you can easily snapshot and undelete files, giving it a sort of built-in "time machine" technology (to use Apple's terminology).

    I'm just surprised that none of the Linux distros are talking about it yet. You would think with the Apple and IBM laptops using SSDs today that there would be some option somewhere. I think everyone is distracted by btrfs.
  • by rm999 (775449) on Wednesday June 17, 2009 @08:54PM (#28368347)

    Actually, magnetic disks have exponentially increased in capacity since the 50s. In fact, the rate of increase has been higher than the growth of transistor count.

    See: http://www.scientificamerican.com/article.cfm?id=kryders-law [scientificamerican.com]

  • Re:fragmentation? (Score:5, Informative)

    by aztektum (170569) on Wednesday June 17, 2009 @09:02PM (#28368399)

    For a thorough (read: long) primer on SSDs and long-term performance woes, Anand's overview [anandtech.com] is a must-read.

  • Re:fragmentation? (Score:3, Informative)

    by 42forty-two42 (532340) <bdonlan@@@gmail...com> on Wednesday June 17, 2009 @11:12PM (#28369185) Homepage Journal
    The problem isn't scanning metadata - the problem is relocating data prior to an erase. Flash memory is built from erase blocks that are quite large - 64 KiB to 128 KiB is typical. You can write to smaller regions, but to reset them for another write you have to pave over the whole neighborhood. The OS, however, sends writes at 512-byte sector granularity. So the drive essentially has to mark the old location of the data as obsolete and place it somewhere else.

    When the drive has been used enough, however, it may have trouble finding an empty, erased sector to write to. So it has to erase some erase block. But if all erase blocks still hold good data (e.g., each is half live, important data and half obsolete, overwritten data), the drive first has to relocate some of that data elsewhere.

    What the TRIM command does is tell the drive that it need not preserve the data of a given sector - otherwise, if you deleted a file, the drive would still have to preserve its data each time one of these relocation operations occurs, since it knows nothing about the filesystem's allocation maps. With TRIM, the drive knows which data is deleted and can discard it when it's time to erase blocks. TRIM also increases the percentage of truly unused flash sectors, raising the probability that a write can go through without waiting for a relocation.

    Note that this is completely independent from filesystem fragmentation - indeed, a defrag can even make things worse, by making the flash drive think both old and new locations for some data need preserving.
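A back-of-the-envelope way to see why TRIM reduces that relocation work, assuming a simplified drive with 128 KiB erase blocks of 256 512-byte sectors (all sizes and the half-live/half-deleted split below are made-up for illustration, not measurements):

```python
# Erase-block garbage collection cost sketch: the drive must copy live
# sectors out before erasing a block; TRIM shrinks the live set, so less
# data has to be relocated. Sizes and the example split are assumptions.

SECTORS_PER_ERASE_BLOCK = 256  # 128 KiB / 512 bytes

def relocation_cost(live_sectors, trimmed_sectors):
    """Sectors that must be copied elsewhere before this block can be erased."""
    return len(live_sectors - trimmed_sectors)

live = set(range(128))          # half the block holds data the drive thinks is live
deleted_by_fs = set(range(64))  # ...but the filesystem already deleted half of that

# Without TRIM the drive can't tell deleted data from live data:
cost_without_trim = relocation_cost(live, trimmed_sectors=set())
# With TRIM the drive knows those sectors are dead:
cost_with_trim = relocation_cost(live, trimmed_sectors=deleted_by_fs)

print(cost_without_trim, cost_with_trim)  # 128 64
```

Without TRIM the drive must copy all 128 sectors it believes are live; with TRIM it copies only the 64 that really are, halving the relocation work in this example.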
  • by Courageous (228506) on Wednesday June 17, 2009 @11:43PM (#28369359)

    Flash drives have a longer MTBF than spinning media... so they last longer. A less well-known fact, however, is that flash drives typically have an unrecoverable read error (URE) rate 10-100X worse than today's spinning media. It's getting fixed, but the fellow you're replying to is basically wrong.

    C//

  • Re:fragmentation? (Score:5, Informative)

    by 7 digits (986730) on Wednesday June 17, 2009 @11:51PM (#28369397)

    Once upon a time, a technical subject on /. gave insightful and informative responses that were modded up. Times change, I guess.

    The "fragmentation" that SSDs suffer from doesn't really come from wear leveling, or from having to find someplace to write things, but from the following properties:

    * Filesystems read and write 4 KiB pages.
    * An SSD can read 4 KiB pages many times FAST and can write a 4 KiB page ONCE FAST, but can only erase whole 512 KiB blocks SLOWLY.

    When the drive is mostly empty, the SSD has no trouble finding blank areas to store the 4 KiB writes from the OS (it can even cheat with wear leveling to relocate 4 KiB pages to blank spaces when the OS rewrites the same block). After some usage, ALL OF THE DRIVE HAS BEEN WRITTEN TO ONCE. From the point of view of the SSD, the whole disk is full. From the point of view of the filesystem, there is unallocated space (for instance, space occupied by files that have been deleted).

    At this point, when the OS sends a write command to a specific page, the SSD is forced to do the following:

    * read the 512 KiB block that contains the page
    * erase the block (SLOW)
    * modify the page
    * write back the 512 KiB block

    Of course, various kludges/caches are used to limit the issue, but the end result stands: writes get slow, and small writes get very slow.

    The TRIM command tells the SSD that a given 4 KiB page can be safely erased (because it contains data from a deleted file, for instance), and the SSD stores a map of the TRIM status of each page.

    Then the SSD can do one of the following two things:

    * If all the pages of a block are TRIMed, it can asynchronously erase the block. Then the next 4 KiB write can be relocated to that block's free space, and so can the next 127 4 KiB writes.
    * If a write request comes in and there is no space left to write to, the drive can READ/ERASE/MODIFY/WRITE the block with the most TRIMed space, which will speed up the next few writes.
    (Of course, you can have more complex algorithms that pre-erase at the cost of additional wear.)
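The two choices above can be sketched as a victim-selection routine over a per-block TRIM map. The geometry matches the numbers in the comment (128 x 4 KiB pages per 512 KiB block), but the data structures and the selection policy are illustrative assumptions, not a real controller's algorithm:

```python
# Victim selection over a per-block TRIM map: fully TRIMed blocks can be
# erased asynchronously; otherwise the best READ/ERASE/MODIFY/WRITE victim
# is the block with the most TRIMed pages. Illustrative sketch only.

PAGES_PER_BLOCK = 128  # 512 KiB / 4 KiB

def classify_blocks(trim_map):
    """trim_map: {block_id: set of TRIMed page indices within the block}."""
    fully_trimmed = [b for b, pages in trim_map.items()
                     if len(pages) == PAGES_PER_BLOCK]    # erase in the background
    # Among partially TRIMed blocks, erasing the one with the most TRIMed
    # pages frees the most space per (slow) erase operation.
    partial = {b: pages for b, pages in trim_map.items()
               if 0 < len(pages) < PAGES_PER_BLOCK}
    best_victim = max(partial, key=lambda b: len(partial[b])) if partial else None
    return fully_trimmed, best_victim

trim_map = {
    0: set(range(PAGES_PER_BLOCK)),  # every page TRIMed -> background erase
    1: set(range(10)),
    2: set(range(40)),               # most TRIMed pages among partial blocks
}
print(classify_blocks(trim_map))  # ([0], 2)
```

Block 0 gets erased asynchronously, and if a write still arrives with no free space, block 2 is the cheapest block to recycle.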

  • by setagllib (753300) on Thursday June 18, 2009 @12:20AM (#28369557)

    If you can afford an SSD, why would you waste it on swap? Why not just buy more RAM? If you ever actually need swap, you are doing something wrong.

  • Re:fragmentation? (Score:2, Informative)

    by Anonymous Coward on Thursday June 18, 2009 @12:41AM (#28369717)

    browser.cache.disk.parent_directory

  • by SanityInAnarchy (655584) <ninja@slaphack.com> on Thursday June 18, 2009 @01:33AM (#28369999) Journal

    Not true at all -- that's why I mentioned a BIOS.

    I'm pretty sure even Windows is smart enough to just use the BIOS-provided access, if it doesn't have a driver. If it does, provide it in a driver.

    It would likely also require a different filesystem.

    Nope. You can do exactly the same pretend-it's-a-hard-drive approach, until additional filesystems are developed. And there's nothing preventing a third party from developing a filesystem for Windows.

    Again, see the BIOS approach. In fact, look at nVidia's fakeraid -- software RAID done with BIOS support and a Windows driver. This is neither a particularly new idea, nor a particularly difficult one, especially if you're preinstalling.

    Hardware manufacturers are not going to wait for Microsoft to catch up

    Indeed. But I don't want to wait for both Microsoft and the hardware manufacturers to catch up.

  • by CyberDragon777 (1573387) <[cyberdragon777] [at] [gmail.com]> on Thursday June 18, 2009 @07:19AM (#28371875)

    FYI Windows 7 disables defragmenting on SSD drives.

    The automatic scheduling of defragmentation will exclude partitions on devices that declare themselves as SSDs. Additionally, if the system disk has random read performance characteristics above the threshold of 8 MB/sec, then it too will be excluded. The threshold was determined by internal analysis.

    Support and Q&A for Solid-State Drives [msdn.com]
