Solid State Drives Tested With TRIM Support

Vigile writes "Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues. The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files. Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers. A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy."
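In rough terms, a TRIM notification is just a list of logical block ranges the filesystem no longer cares about. Here is a toy Python sketch of that idea (the names and structure are purely illustrative, not any real kernel or ATA API):

```python
# A sketch of what a TRIM notification carries (illustrative names only):
# when a file is deleted, the filesystem can translate the file's extents
# into logical-block ranges the drive is free to discard.
from dataclasses import dataclass

@dataclass
class TrimRange:
    start_lba: int   # first logical block of the dead range
    count: int       # number of blocks in the range

def ranges_for_deleted_file(extents):
    """Map filesystem extents, given as (start_lba, length) pairs, to TRIM ranges."""
    return [TrimRange(start, length) for start, length in extents]
```

A real implementation would batch such ranges into an ATA DATA SET MANAGEMENT command; the point is only that the information flows from filesystem metadata down to the drive.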
  • by earthforce_1 ( 454968 ) on Wednesday June 17, 2009 @07:48PM (#28367891) Journal

    Which Linux filesystem works best with SSDs? I don't intend to touch Win7.

  • by Darkness404 ( 1287218 ) on Wednesday June 17, 2009 @08:10PM (#28368053)
    Gamers, gamers, gamers and gamers. Seriously, the early adopters of any technology that is supposed to be faster on the consumer level will be gamers. Considering that most games are Windows-only it makes sense.
  • by mrmeval ( 662166 ) on Wednesday June 17, 2009 @08:13PM (#28368075) Journal

    Because someone got paid to do it. You don't think /. editors work for free do you?

  • by Anonymous Coward on Wednesday June 17, 2009 @08:28PM (#28368209)

    I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.

    Well damn, I'll just have to tell our customer that has something like a 30 petabyte TAPE archive that's growing by about a terabyte or more each and every day that they're spending money on something you say is, umm, outdated and these newfangled devices that the next power surge will totally fry are the wave of the future.

    Guess what? There's a whole lot more money spent on proven, rock-solid technology by large organizations than you apparently realize.

    Tape and hard drives are going NOWHERE. For a long, long time to come.

  • Re:fragmentation? (Score:3, Insightful)

    by Bigjeff5 ( 1143585 ) on Wednesday June 17, 2009 @08:28PM (#28368211)

    In very simple terms (because I'm no expert), it's because of the way SSDs deal with wear leveling, and the fact that a single write is non-sequential. When the drive writes data, it writes to multiple segments across multiple chips. It is very fast to do it this way; in fact, the linear alternative creates heavy wear and is significantly slower (think single-chip USB flash drives) than even spinning-disk tech, so this non-sequential write is essential.

    Now, to achieve this, each chip is broken down into segments, those segments are broken down into smaller segments, which are broken down into bytes, which are then broken down into bits. When the SSD writes, it writes to the next available bit in the next available segment on each of the chips in the drive. Because it keeps track of exactly where it left off, this process is extremely fast, as all new data goes to the next place in line.

    The problem comes when you fill up the drive and then delete data. When you delete data, you are deleting little bits spread all over the physical drive. Unless it is a tiny file, every chip will have a little bit of the file. What's worse, unless it was a massive file, you probably won't be clearing whole sequential segments on the drive. To add to that even further, the OS doesn't actually delete anything, it just flags it! So what this means is that after you've cleared a bunch of room on your drive, your SSD is still massively fragmented, and to write new data the drive has to find free bits and clear them first. Think worst-case scenario for spinning-disk fragmentation and that's what you have, and you will get it every single time you fill up an SSD. You can actually reformat the drive and it won't necessarily fix the fragmentation problem, because formatting won't reset the segments on the chip to factory state and update the internal drive index in such a way that it maximizes speed again.

    Now, because the SSD is sort of like a very large RAID array with very tiny disks, even in this state it is still faster than a conventional spinning-disk hard drive. But it is nowhere near as fast as it was when it was clean and new.

    Thus the TRIM functions that have been mentioned. Basically these go through and do a defrag of the data, which requires consolidating the free space at the "back" of each chip, then resetting those free segments to the factory state. Depending on how much data needs to be moved, this can raise wear concerns, so you don't really want to do it all the time. The idea with SSDs is to fill them all the way up, then clear out as much room as you possibly can before trimming the drive. Once trimmed, the drive should be back to pre-fragmentation speeds, but you have also just written many more times to some bits on the drive than others, which raises wear concerns if the process has to be repeated too many times.
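    To make the erase-before-write behavior concrete, here is a toy Python model (purely illustrative; all names are made up, and real controllers are far more sophisticated). The rules: pages can only be written into erased blocks, erasure happens a whole block at a time, and TRIM marks a block's contents stale so it can be erased without first copying live data elsewhere.

```python
# Toy model of SSD block states (names invented for illustration only).
ERASED, LIVE, STALE = "erased", "live", "stale"

class ToySSD:
    def __init__(self, blocks=8, pages_per_block=4):
        self.blocks = [[ERASED] * pages_per_block for _ in range(blocks)]
        self.erase_count = 0

    def write_page(self):
        """Write to the next erased page, garbage-collecting if none is free."""
        for block in self.blocks:
            for i, state in enumerate(block):
                if state == ERASED:
                    block[i] = LIVE
                    return
        self.garbage_collect()
        self.write_page()

    def trim(self, block_idx):
        # The OS declares this block's contents dead: mark its pages stale
        # so the block can be erased without copying anything out first.
        self.blocks[block_idx] = [STALE] * len(self.blocks[block_idx])

    def garbage_collect(self):
        # Erase the first block holding no live data (whole-block erase).
        for block in self.blocks:
            if LIVE not in block:
                block[:] = [ERASED] * len(block)
                self.erase_count += 1
                return
        raise RuntimeError("no block free of live data; would need to relocate")
```

    Without the trim() call, every block would still hold "live" pages after a full fill, and garbage collection would have to relocate data before erasing anything, which is exactly the slowdown TRIM avoids.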

  • by vadim_t ( 324782 ) on Wednesday June 17, 2009 @08:29PM (#28368215) Homepage

    That's a statistic that doesn't make any sense.

    20% under what conditions, and in what timeframe? Over a long enough time period everything has a 100% failure rate.

    Normal hard disks also will eventually fail, due to physical wear.

    Also if it lasts long enough, at some point, reliability will stop being important. Even if it still works, very few people will want to use a 100MB hard disk from 15 years ago.

  • by SanityInAnarchy ( 655584 ) on Wednesday June 17, 2009 @08:38PM (#28368267) Journal

    That's my biggest complaint about them, actually -- these "teething problems" people mention are pretty much directly a result of OSes treating SSDs as though they were spinning magnetic disks.

    No, the OS should be able to do its own wear leveling. If you need to pretend it's a hard drive, do it in the BIOS and/or the drivers, not in the silicon -- at least that way, you can upgrade it later when things like this come out.

  • by Bigjeff5 ( 1143585 ) on Wednesday June 17, 2009 @08:51PM (#28368337)

    I've never heard of a 20% failure rate for SSDs. I've heard of wear concerns, as each little bit on the drive can only be written a set number of times (it's 10,000 or so, if I remember correctly). However, thanks to the magic of wear leveling and the large number of separate chips in an SSD, you can fill up your drive completely and have written to each bit exactly once. That means you could theoretically fill your SSD up 10,000 times before you would expect failure. Reality is a bit lower than that, maybe 3,000-5,000 times due to having to TRIM to rearrange the bits, but it's still significant.
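    That endurance arithmetic is easy to sanity-check in Python (the drive size, cycle count, and workload figures below are assumptions for illustration, not measurements):

```python
# Back-of-envelope flash endurance, assuming ~10,000 program/erase
# cycles per cell and perfect wear leveling (both idealizations).
capacity_gb = 64          # hypothetical drive size
pe_cycles = 10_000        # assumed write cycles per cell
daily_writes_gb = 20      # assumed daily write volume

# Total data writable before expected wear-out, in TB:
total_writes_tb = capacity_gb * pe_cycles / 1000   # 640.0 TB
# Years until wear-out at the assumed workload:
years_to_wearout = (capacity_gb * pe_cycles) / daily_writes_gb / 365
```

    Even cutting the cycle count to a third, as the comment suggests, the assumed drive outlives any realistic desktop workload.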

    Of course, even with the performance hit TFA talks about after filling your SSD (which is fixed with the TRIM function TFA also talks about) the fastest spinning disks are still much much slower than all but the very worst SSDs out there.

    Anyway, the 20% failure rate may have applied to one specific SSD manufacturer; there are already some really shitty ones out there.


    Also, doesn't one of the hardware manufacturers (Samsung, I think) have a patent on SSDs, so no one else can make the drives anyway? Proprietary == Dead

    You may need to get some more education about how patents work, because if that were true IBM would not have the fastest SSD on the market. See, they do this thing called licensing, which basically means company Y purchases an agreement from company X to use its technology to manufacture a product. It creates an incentive for company X to allow other manufacturers to use its technology, flooding the market with both quality and crap, but ultimately lowering prices and speeding up innovation at the high end (and improving the quality of the cheap stuff; it usually works both ways).

    It's actually the reason patents exist. We only get in a fuss when people patent stuff that either a) should never have needed a patent in the first place (which still lets the patent holder sue for infringement damages), or b) some company goes around buying patents from legitimate inventors for the sole purpose of hoping said patents are infringed upon by an unwitting third party. The former is a failure of the patent system, and the latter is patent trolling, which is an unethical and disgusting abuse of the process.

  • by macraig ( 621737 ) on Wednesday June 17, 2009 @09:13PM (#28368449)

    Just a small tangential nitpick: we were already more than a factor of ten past that HDD capacity fifteen years ago. The 1GB barrier was broken very early in the Nineties. I still have an HP 1GB SCSI drive from about '91 or '92, IIRC.

    As far as failure rates go, I still have all of my disk drives from the last 15-20 years (minus one or two that outright failed), and every single one of the rest still functions at least nominally. I'm still more trusting of magnetic media than I am of either rewritable optical or Flash-based media.

  • by Phishcast ( 673016 ) on Wednesday June 17, 2009 @10:40PM (#28368975)
    How many multi-petabyte enterprise data centers have you seen running SSDs as their primary storage? None. Yeah, that's what I thought.

    Agreed that SSDs have a long way to go on price to compete, but it's simply not true that they're not yet ready for the enterprise datacenter. All the larger enterprise storage array vendors (EMC, HDS, IBM, NetApp) say they're ready, and most are shipping them with decent sales. Despite their price and the "fact" you've so eloquently stated, you'll find them in many Fortune 500 datacenters simply because they outperform spinning disks by such a factor that they're cheaper per IO. I believe today the vast majority of vendors providing enterprise-class SSD drives are sourcing them from STEC. They play some tricks to work around write limits, but they've got ~5 year MTBF ratings.
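    The cheaper-per-IO claim is simple arithmetic. With made-up but era-plausible figures (these are assumptions for illustration, not quotes from any vendor):

```python
# Cost per random IOPS: one enterprise SSD vs. one 15k RPM disk.
# All prices and IOPS figures below are assumed, not measured.
ssd_price, ssd_iops = 700.0, 5000   # hypothetical enterprise SSD
hdd_price, hdd_iops = 200.0, 180    # hypothetical 15k RPM FC drive

ssd_cost_per_iops = ssd_price / ssd_iops   # $0.14 per IOPS
hdd_cost_per_iops = hdd_price / hdd_iops   # ~$1.11 per IOPS
```

    On those assumptions, the SSD is several times cheaper per IO even at many times the price per gigabyte, which is the trade the array vendors are selling.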

  • by gad_zuki! ( 70830 ) on Wednesday June 17, 2009 @10:47PM (#28369019)

    No way, let's have the firmware do this. The problem with your approach is that the OS won't understand the drive as well as the manufacturer does, so it will always be a sub-optimal solution. Don't tie the hands of the manufacturer to put intelligence in its drives. For instance, the best way to wipe a disk is via an ATA command, not through multiple wipe passes: the manufacturer knows where the heads are and how the drive writes. The SSD situation is somewhat similar.

  • by steveha ( 103154 ) on Wednesday June 17, 2009 @10:56PM (#28369095) Homepage

    Something as simple as deleting the wrong partition becomes an irreversible operation if you do it using a tool that supports TRIM on TRIM-enabled hardware.

    This seems needlessly verbose. Let me shorten it for you:

    Deleting a partition should always be considered an irreversible operation.

    Hmmm, even shorter:

    Don't delete a partition unless you want it to go away forever.

    Even if you restore the partition table from a backup, you will likely suffer silent file system corruption, which may even not be apparent until it's too late.
    If TRIM support is actually implemented on the device, the device is free to 'lose' data on TRIMmed blocks until they are written at least once.

    If I understand you correctly, you are suggesting that a disk partitioning tool will use TRIM to not only wipe the partition table itself, but also nuke the partition data from orbit. And you then point out that it would not be adequate to rewrite just the sectors of the partition table.

    If so, then the answer is: you don't just restore the partition table, you restore the whole partition (including data) from backup.

    I for one consider much-faster write speeds to be a bigger advantage than possibly being able to reverse a partition deletion.


  • by geekboy642 ( 799087 ) on Wednesday June 17, 2009 @10:59PM (#28369115) Journal

    I can buy a terabyte hard drive for around $100. For the same hundred dollars, the best SSD I can find is 32GB. On my computer, Steam's cache folder is bigger than 32GB. My music player has a 120GB drive, my DVR has a 350GB drive, and my backup server has a 1.5TB raid. Just because expensive mobile gadgets use expensive solid-state drives does not mean hard drives are dead, dying, or even decaying.
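    Using the poster's own figures, the price gap per gigabyte works out like this (a quick sketch, nothing more):

```python
# Price per gigabyte, using the numbers quoted in the comment above.
hdd_price, hdd_gb = 100.0, 1000    # $100 for a 1 TB hard drive
ssd_price, ssd_gb = 100.0, 32      # $100 for a 32 GB SSD

hdd_per_gb = hdd_price / hdd_gb    # $0.10/GB
ssd_per_gb = ssd_price / ssd_gb    # ~$3.13/GB
premium = ssd_per_gb / hdd_per_gb  # SSD costs ~31x more per gigabyte
```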

  • by j-turkey ( 187775 ) on Wednesday June 17, 2009 @11:23PM (#28369233) Homepage

    ...and unlike hard drives, flash storage isn't rapidly becoming less reliable as the density increases....

    I can see the logic behind the argument that hard drives should become more failure prone as the platter density increases, but I've yet to see any data substantiating this point. Your claim that hard drives are rapidly becoming more unreliable makes your statement come off as even more dubious to me.

    I don't mean to attack you or come off as a complete dickhole, but do you know of any data to back this up? I'm legitimately curious, as in my (completely anecdotal) experience, magnetic hard drives seem to be getting more and more reliable.

    (Mind you, I'm seriously knocking on wood... I know that I'm going to eat my words when I wake up to multiple simultaneous drive failures just for opening my big fat mouth about my good fortune with magnetic data.)

  • by rcw-home ( 122017 ) on Wednesday June 17, 2009 @11:30PM (#28369289)

    Yeahhh... give me the one that costs 36 times more, takes up 4 times more space, requires 8 times more controllers and is guaranteed to wear out in a few years. If your I/O patterns are so messed up that today's horrendous SSDs actually lower your cost per I/O, you need to rethink your information architecture.

    There are two schools of thought regarding SSDs:

    1. Those who talk shit about them
    2. Those who have used them
  • by morgan_greywolf ( 835522 ) on Thursday June 18, 2009 @08:17AM (#28372263) Homepage Journal

    I think he was pointing to the "reviews". Here's the thing: none of those reviews were from enterprise-class users.

    Once you start getting into 10 drive RAID arrays (and up), speed of each drive is no longer your limiting factor, provided you're using some kind of striping. That's the reason SATA RAID arrays have started to become popular in the enterprise for less critical systems -- there's almost no performance difference at all. You need to go fibre channel before you see any marked difference in performance.
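    The striping point can be sketched with trivial arithmetic (the per-drive IOPS figures below are rough assumptions for drives of this era, not benchmarks):

```python
# Rough aggregate random-IOPS model for a striped array: with striping
# and no hot spots, throughput scales roughly with spindle count.
def array_iops(drive_count, iops_per_drive):
    return drive_count * iops_per_drive

sata_array = array_iops(10, 80)   # ten 7200 RPM SATA drives -> 800 IOPS
fc_array = array_iops(4, 180)     # four 15k RPM FC drives   -> 720 IOPS
```

    On those assumptions, enough cheap spindles match or beat a smaller set of fast ones, which is why per-drive speed stops being the limiting factor.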

  • Re:fragmentation? (Score:3, Insightful)

    by fimbulvetr ( 598306 ) on Thursday June 18, 2009 @10:44AM (#28373977)

    It's extremely unfair to link to the print version of that article. Anand put an immense amount of time into that (and everything before it!) and burned quite a few bridges to bring it to light for his readers; there are very, very few reviewers out there who would do that for their reader base. The least you could do is offer him and his site _some_ respect.
