Data Storage Hardware

Solid State Drives Tested With TRIM Support

Vigile writes "Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues. The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files. Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers. A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy."
  • But it's the future (Score:5, Interesting)

    by telchine ( 719345 ) * on Wednesday June 17, 2009 @07:44PM (#28367857)

    I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but to my mind there is no future market for mechanical drives except maybe as cheap, low-speed devices for storing non-critical information, in much the same way tape drives were used a few years ago.

  • by Robotbeat ( 461248 ) on Wednesday June 17, 2009 @08:21PM (#28368147) Journal

    Even the best consumer-level SSDs like the Intel X25-M/E use a volatile RAM cache to speed up writes. In fact, with the cache disabled, random write IOPS drops to about 1200, which is only about three or four times as good as a 15k 2.5" drive. The more expensive, truly enterprise SSDs that don't need a volatile write cache cost at LEAST $20/GB, so the $/(safe random write IOP) ratio is actually still pretty close, and cheap SATA drives may actually be even with the fast enterprise SSDs on that metric. Granted, this shouldn't be the case in a year, but that's where it is right now. (Also, the performance per slot is a lot higher for SSDs, which can translate into real dollar, power, and space savings.)
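
    For a sense of scale, here's a quick back-of-envelope $/IOPS comparison in Python. Every capacity, price, and IOPS figure in it is an illustrative assumption for this sketch, not a quote or a benchmark:

        # Rough $/IOPS comparison between a 15k SAS drive and an enterprise SSD.
        # All capacities, prices and IOPS numbers are assumed for illustration.
        drives = {
            # name: (capacity_gb, price_usd, random_write_iops)
            "15k 2.5in SAS":  (146, 180.0, 300),
            "enterprise SSD": (64, 64 * 20.0, 1200),   # ~$20/GB, cache-less writes
        }

        for name, (cap_gb, price, iops) in drives.items():
            print(f"{name:15s}  ${price / cap_gb:5.2f}/GB   ${price / iops:5.2f} per random write IOP")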

  • Re:fragmentation? (Score:4, Interesting)

    by sexconker ( 1179573 ) on Wednesday June 17, 2009 @08:27PM (#28368197)

    Because, basically, flash drives are laid out in pages and blocks.

    When you delete, you simply mark the logical space as free.

    If you go to use that free space later, you find that area and drop your data into it. The drive deals in chunks of memory, call it a 32 KB page, give or take. If the page is already full of "deleted" data, to the point where your new stuff doesn't fit, you can't just scribble over it; you first have to wipe out those deleted files, then write your actual data.

    If the logical space is full of good files with deleted files interspersed, the drive needs to read the page out to memory, reorder the live data, drop the deleted data, and write the full page back. (A rough simulation of this follows the notebook example below.)

    Think of it as having a notebook.
    You can only write one page at a time.

    Page 1 write

    Page 2 write

    Page 3 write

    Page 2 delete

    Page 2 write (still space)

    Page 2 write (not enough space, write to page 4 instead)

    Page 2 delete

    Page 2 write (not enough space, no more blank pages, read page 2 and copy non-deleted shit to scratch paper, add new shit to scratch paper, cover page 2 in white out, copy scratch paper to whited-out page 2)
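
    Here is the same idea as a tiny Python toy (block size, page count, and names are made up for illustration; real drives use much larger erase blocks and hide all of this inside the flash translation layer):

        # Toy model of one flash erase block: blank pages can be written cheaply,
        # but reclaiming "deleted" pages means erasing and rewriting the whole block.
        BLOCK_PAGES = 4

        class FlashBlock:
            def __init__(self):
                self.pages = [None] * BLOCK_PAGES   # None = blank (erased) page
                self.dead = set()                   # indexes holding deleted data

            def delete(self, index):
                self.dead.add(index)                # just mark it; nothing is erased yet

            def write(self, data):
                for i, page in enumerate(self.pages):
                    if page is None:                # blank page available: cheap write
                        self.pages[i] = data
                        return "fast write"
                # Block is full: read it out, drop dead pages, erase, write it back.
                live = [p for i, p in enumerate(self.pages) if i not in self.dead]
                self.pages = [None] * BLOCK_PAGES   # the expensive erase
                self.dead.clear()
                for i, p in enumerate(live + [data]):
                    self.pages[i] = p
                return "slow read/erase/rewrite"

        blk = FlashBlock()
        print([blk.write(d) for d in "abcd"])       # four fast writes fill the block
        blk.delete(1)                               # "delete" page 1
        print(blk.write("e"))                       # no blank pages left: slow path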

  • by Anonymous Coward on Wednesday June 17, 2009 @08:30PM (#28368217)
    The mechanicals may be able to stay ahead in capacity for a long long time, even though they obviously have no hope of competing in the performance arena ever again.

    I disagree with this. With mechanical drives, each iteration adds 250-500 GB. With SSDs, capacity doubles with each iteration. Considering they're basically at 1/8 the capacity of mechanical drives, it'll only be another couple of years before they surpass mechanical drives.
  • by quazee ( 816569 ) on Wednesday June 17, 2009 @08:48PM (#28368319)
    Something as simple as deleting the wrong partition becomes an irreversible operation if you do it using a tool that supports TRIM on TRIM-enabled hardware.
    Even if you restore the partition table from a backup, you will likely suffer silent file system corruption, which may not even become apparent until it's too late.
    If TRIM support is actually implemented on the device, the device is free to 'lose' data on TRIMmed blocks until they are written at least once.
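
    A minimal sketch of why restoring the partition table doesn't help (this models one possible firmware behaviour; the mapping structure and the zero-fill on read are assumptions, and real drives may return anything for trimmed blocks):

        # Toy flash translation layer: TRIM drops the logical-to-physical mapping,
        # so later reads of trimmed blocks no longer return the old data.
        class ToyFTL:
            def __init__(self):
                self.mapping = {}                    # logical block -> stored data

            def write(self, lba, data):
                self.mapping[lba] = data

            def read(self, lba):
                # Trimmed or never-written blocks return zeros in this model.
                return self.mapping.get(lba, b"\x00" * 4)

            def trim(self, lbas):
                for lba in lbas:
                    self.mapping.pop(lba, None)      # the old data is now unreachable

        ftl = ToyFTL()
        ftl.write(100, b"file")
        ftl.trim([100])             # e.g. the partition was deleted by a TRIM-aware tool
        print(ftl.read(100))        # b'\x00\x00\x00\x00', not b'file': nothing to recover
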
  • by MeatBag PussRocket ( 1475317 ) on Wednesday June 17, 2009 @08:54PM (#28368349)

    If by "proven rock-solid" you mean horrid fidelity and media degradation rates, I'd say you are correct about tapes. If your client has a 30 PB tape archive there is probably some horrible inefficiency going on. (I'm sure you probably have little control over the situation; I have similar clients.) But if they have 30 PB of data on tape that they access regularly, they're wasting a LOT of time just retrieving data. You should really consider a SAN, NAS, or similar. HDD storage is very cheap these days and LTO-4 tapes are pretty pricey, and we all know they have shoddy storage quality to boot. If they don't access it regularly then it's probably a real waste of money to own, record, and store 30 PB of data. Either way, just the physical storage of that many tapes probably takes about the same square footage as a rack or two (or three) of blade servers with the same storage capacity.

  • Re:fragmentation? (Score:4, Interesting)

    by ls671 ( 1122017 ) * on Wednesday June 17, 2009 @09:29PM (#28368551) Homepage

    Very interesting. I assumed the problem was similar to fragmentation and wondered why nobody compared it as such.

    Your explanation makes things much clearer: the overall problem is amplified by the additional issue you described.

    Now, would implementing the logic to control the SSD entirely at the OS/FS level be much slower than implementing it in silicon in the SSD itself?

    As you said, I now understand that the OS/FS would have to be aware of the underlying media ;-)

  • by dgatwood ( 11270 ) on Wednesday June 17, 2009 @09:54PM (#28368677) Homepage Journal

    Things have changed a lot in four years. Since 2005, hard drives have only increased from 500 GB to 2 TB, a factor of 4. In that same time, CompactFlash cards increased from 8 GB to 128 GB, a factor of 16. Flash density increases are severely outpacing hard drive density increases, and unlike hard drives, flash storage isn't rapidly becoming less reliable as the density increases...

  • Re:fragmentation? (Score:3, Interesting)

    by sootman ( 158191 ) on Wednesday June 17, 2009 @09:59PM (#28368699) Homepage Journal

    Obviously, the firmware needs to be able to detect when data on disk becomes obsolete and can safely be erased. The problem with this is that it leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings.

    Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore? I know it's not an ideal solution but rsync or Time Machine would make it pretty painless.

    Also, if I had an SSD and was browsing a lot I could see making a ramdisk for things like browser cache files. Too bad Safari and Firefox don't seem to let you specify where you want your cache to be anymore, like old browsers used to. I guess you could make a symlink or something but then you'd HAVE to have that drive mounted.

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Wednesday June 17, 2009 @11:02PM (#28369137) Homepage

    All the larger enterprise storage vendors are full of shit. They say the SSD is "ready" because it's the hottest buzzword in the industry, which always commands huge profit margins.

    On the one hand, I can use cheap, fast 2.0 TB SATA drives for 11 cents a gig, or I can go the SSD route with 256 GB drives at $4.00 a gig. That's OEM cost, which means EMC and friends will triple that number to convince your boss these drives are "special".

    Yeahhh... give me the one that costs 36 times more, takes up 4 times more space, requires 8 times more controllers and is guaranteed to wear out in a few years. If your I/O patterns are so messed up that today's horrendous SSDs actually lower your cost per I/O, you need to rethink your information architecture.
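
    The arithmetic behind those ratios, for the record (the per-GB prices come from the comment above; the 10 TB array size is just an assumed example):

        # Cost comparison using the $/GB figures quoted above.
        sata_per_gb = 0.11        # $/GB for 2.0 TB SATA
        ssd_per_gb = 4.00         # $/GB for 256 GB SSD (OEM)
        capacity_gb = 10 * 1000   # hypothetical 10 TB of raw storage

        print(f"SSD is {ssd_per_gb / sata_per_gb:.0f}x the price per GB")   # ~36x
        print(f"10 TB raw: SATA ${capacity_gb * sata_per_gb:,.0f} "
              f"vs SSD ${capacity_gb * ssd_per_gb:,.0f}")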

  • by rtfa-troll ( 1340807 ) on Thursday June 18, 2009 @01:21AM (#28369935)

    How about hibernate to disk? If you have plenty of good SSD capacity, that should be very fast, shouldn't it?

  • by Anonymous Coward on Thursday June 18, 2009 @07:40AM (#28372001)

    If by "proven rock-solid" you mean horrid fidelity and media degradation rates, I'd say you are correct about tapes.

    [citation needed]

    Tapes probably have a better unrecoverable error rate than drives and don't have bits flip randomly while data is at rest, as hard drives have been found to do. See the talk entitled "No Terabyte Left Behind" given by Andrew Hume at LISA '07 (Wed. 4-6pm):

    http://www.usenix.org/events/lisa07/tech/

    they're wasting a LOT of time just retrieving data.

    High-speed tape drives can go from shelf to data in a maximum of about 60 seconds.

    You should really consider a SAN, NAS, or similar.

    Tape has the highest density when it comes to TB per kW and TB per square foot of data centre floor space. LTO-4 stores up to 800 GB native in a tiny little package that takes no power on the shelf, and LTO-5, currently in draft, will be 1.6 TB native (add compression for fun). With LTO-4 and later (and other tape standards as well) there's also a standardized way to encrypt the data (AES-256), so if it goes offsite you don't have to worry about data loss.

    Tape may not be for everyone, but there are certain things for which there is no replacement. CERN is using tape to archive the 15 PB/year of data that's going to be generated by the LHC: do you want to know how much power it would take to keep 15 PB on a SAN / NAS? Then take that power and multiply it by 2 or 3 to account for cooling.
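
    A hedged back-of-envelope version of that power question (every figure below is an assumption chosen for illustration, not a measurement):

        # Rough estimate of the power needed to keep 15 PB spinning on disk.
        data_pb = 15
        drive_tb = 2.0          # assumed drive capacity (large 2009-era SATA drive)
        drive_watts = 8.0       # assumed power draw per spinning drive
        raid_overhead = 1.3     # assumed ~30% extra drives for redundancy
        cooling_factor = 2.5    # the comment suggests multiplying by 2-3 for cooling

        drives = data_pb * 1000 / drive_tb * raid_overhead
        disk_kw = drives * drive_watts / 1000
        print(f"~{drives:,.0f} drives drawing ~{disk_kw:,.0f} kW for the disks alone")
        print(f"~{disk_kw * cooling_factor:,.0f} kW including cooling overhead")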
