The Curious Case of SSD Performance In OS X

mr_sifter writes "As we've seen from previous coverage, TRIM support is vital to help SSDs maintain performance over extended periods of time — while Microsoft and the SSD manufacturers have publicized its inclusion in Windows 7, Apple has been silent on whether OS X will support it. bit-tech decided to see how SSD performance in OS X is affected by extended use — and the results, at least with the MacBook Air, are startling. The drive doesn't seem to suffer very much at all, even after huge amounts of data have been written to it."
  • by mrsteveman1 ( 1010381 ) on Sunday July 04, 2010 @07:44PM (#32794660)

    I've got a Vertex 60 in a white unibody MacBook and it works fine; boot is 7-9 seconds, and apps load almost instantly even if I start 10 at the same time.

    I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.

  • This is possible. (Score:5, Informative)

    by anethema ( 99553 ) on Sunday July 04, 2010 @07:51PM (#32794696) Homepage
    It depends a lot on how the drive works.

    Intel drives actually use the whole drive as scratch space until a sector is written to; after that, without TRIM, the controller only has its tiny bit of extra scratch space to work with. That's why Intel drives degrade so badly without TRIM.

    Indilinx Barefoot controllers on the other hand ONLY use their scratch space, they never use the normal writing space of the drive as scratch space.

    See here.

    While it does show the synthetic tests degrading from lack of TRIM, even more than the Intel drives, the real-world tests show they suffer almost no loss in performance.

    Depending on which controller the drive is using, TRIM could make almost no difference or a world of difference.

    Anand explains it best:

    "Only the Indilinx drives lose an appreciable amount of performance in the sequential write test, but they are the only drives to not lose any performance in the more real-world PCMark Vantage HDD suite. Although not displayed here, the overall PCMark Vantage score takes an even smaller hit on Indilinx drives. This could mean that in the real world, Indilinx drives stand to gain the least from TRIM support. This is possibly due to Indilinx using a largely static LBA mapping scheme; the only spare area is then the 6.25% outside of user space regardless of how used the drive is."
  • Flawed article (Score:5, Informative)

    by ShooterNeo ( 555040 ) on Sunday July 04, 2010 @08:05PM (#32794760)

    The article's authors made several mistakes that render their results meaningless.

    1. They didn't secure erase the drive, which is what actually puts a drive back into a virgin state. Instead they wrote zeroes to every sector, which means the drive controller probably thinks those zeroed-out sectors are still in use.

    2. The Samsung drive controller has a form of self cleanup that greatly reduces the need for TRIM.

    3. Regardless, the SSD they used was slow as a dog and barely worth using over an HDD.

  • by joe_bruin ( 266648 ) on Sunday July 04, 2010 @08:12PM (#32794780) Homepage Journal

    The impact of the TRIM command is vastly overrated. It is effective on "naive" devices that don't allocate a reserve block pool and therefore have to erase before doing every write. On a modern SSD, the disk controller reserves 5-10% of the physical blocks (beyond those that the host can see) as an extended block pool. These blocks are always known to be free (since they're outside the scope of the OS) and are therefore preemptively erased. So, when your OS overwrites a previously written data block, one of these pre-erased blocks is actually written to, and the old block is put in the reserve pool for erasing later at the device's leisure.

    The one case where this isn't true is if you're constantly writing gigs of data to an empty drive. With TRIM commands, most of your drive may have been pre-erased, whereas without it you may overrun the reserve pool's size and then will be waiting on block erase. For normal desktop users this is a pathological case; for servers and people who do heavy video editing it may matter a lot more.
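    The reserve-pool behaviour described above can be sketched as a toy model. All numbers here are made up for illustration (no real controller uses these parameters), but it shows why bursty desktop workloads rarely stall while sustained writes can exhaust the pre-erased pool:

```python
# Toy model of an SSD's pre-erased reserve pool (illustrative numbers only).
# Writes consume pre-erased blocks; the controller erases stale blocks
# during idle time. Under sustained writes the pool can run dry, and each
# further write then stalls on a (slow) erase-before-write.

class ToySSD:
    def __init__(self, reserve_blocks=100):
        self.pre_erased = reserve_blocks  # blocks erased ahead of time
        self.dirty = 0                    # stale blocks awaiting erase
        self.stalled_writes = 0           # writes that had to wait on erase

    def overwrite_block(self):
        if self.pre_erased > 0:
            self.pre_erased -= 1          # fast path: use a pre-erased block
        else:
            self.stalled_writes += 1      # slow path: erase-before-write
        self.dirty += 1                   # the old copy becomes stale

    def idle(self, erases=10):
        # Background cleanup: erase stale blocks, refilling the pool.
        n = min(erases, self.dirty)
        self.dirty -= n
        self.pre_erased += n

# Desktop-like load: short bursts with idle time in between -> no stalls.
desktop = ToySSD()
for _ in range(50):
    for _ in range(5):
        desktop.overwrite_block()
    desktop.idle()

# Server-like load: one long sustained burst -> pool runs dry, writes stall.
server = ToySSD()
for _ in range(250):
    server.overwrite_block()

print(desktop.stalled_writes)  # 0
print(server.stalled_writes)   # 150
```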

  • by broken_chaos ( 1188549 ) on Sunday July 04, 2010 @08:40PM (#32794922)

    if you read it you'll either get random data, or zeros (probably the latter)

    If you read a TRIMmed block directly, most drives will return zeroes. You can do this with hdparm -- particularly useful as a way to test whether TRIM works (and it even uncovered a bug in ext4's TRIM implementation in data=writeback mode, where TRIM only works on metadata). Run hdparm -I on an SSD, and it'll actually say something along the lines of "Deterministic read ZEROs after TRIM" for most drives.

    In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.

    Very true. There are only two methods I know of to 'clean slate' a full drive -- either TRIM the entire thing (with a tool like hdparm -- this is tricky to get right) or run an ATA Secure Erase command. Most SSDs take the secure erase command and just blank every NAND chip they have (taking ~2 minutes compared to the multiple hours that rotational drives take for the Secure Erase command) -- I've done this on my X25-M and it works brilliantly.

    Unless Apple's Disk Utility actually issues a Secure Erase command (which is very unlikely), their testing methodology is entirely flawed, and their 'resetting' of the drive instead made it behave as if it were entirely, completely, 100% filled to the brim.

  • by Jane Q. Public ( 1010737 ) on Sunday July 04, 2010 @09:09PM (#32795024)
    That's not the problem with their antenna. As it turns out, the algorithm they used to show signal strength ("bars") was incorrect... generally showing more signal strength than there really was. So what was happening was this: people thought they had a good signal when they really didn't. Then the slight loss from bridging the antennas pretty much took their connection down. It SHOWED as a big difference, but the real difference was very small.

    The same tests done in an area where there is actually a good signal get completely different results. There is almost no difference whether the antennas are bridged or not.
  • Re:Bad Summary (Score:5, Informative)

    by mrsteveman1 ( 1010381 ) on Sunday July 04, 2010 @09:49PM (#32795132)

    Actually, TRIM ensures there are free blocks of flash ready to be written quickly; otherwise they have to be erased first, even if the OS thinks that particular logical address wasn't in use, which carries a substantial performance penalty.

  • by onefriedrice ( 1171917 ) on Sunday July 04, 2010 @09:49PM (#32795134)

    I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.

    It probably is taking care of itself. Some OCZ drives, including the Vertex series, can have firmware which forgoes TRIM support in favor of some form of garbage collection.

  • Re:Bad Summary (Score:1, Informative)

    by Nimey ( 114278 ) on Sunday July 04, 2010 @09:55PM (#32795168) Homepage Journal

    Mod parent overrated. TRIM is to help keep the drive running at a good pace once it's been written to over a period.

    The problem with SSDs is that they can read/write only a page at a time; they do not address individual bits or sectors. Over time, this means that a given page (I forget the size, maybe 64KB) will have parts of multiple files in it. Therefore, writing to one of the files in that page means that the drive has to re-write each file, and if one of the affected files is spread across several pages, then those pages have to get re-written, and so on. Write performance, and to a lesser extent read performance, goes into the toilet.

    Enter TRIM, which keeps files to a minimum number of pages and is typically run in the background as a garbage-collection measure.

  • by dfghjk ( 711126 ) on Sunday July 04, 2010 @10:24PM (#32795276)

    "If you put a good SSD (e.g. Intel) in your Mac the behavior will be completely different."

    Completely different from what?

    I have put an Intel SSD in a Mac, in fact 2 in a RAID 0 configuration, and it doesn't behave like you are insinuating it does.

    The performance of the Samsung drives does suck but it isn't because they are "pre-degraded".

  • Re:Bad Summary (Score:5, Informative)

    by scdeimos ( 632778 ) on Sunday July 04, 2010 @10:34PM (#32795312)
    You both need to read up on TRIM (and I'll leave it at that)...
  • by wvmarle ( 1070040 ) on Sunday July 04, 2010 @11:52PM (#32795780)

    TFA states this is actually a follow-up to tests done on Windows, where they found TRIM to have a large effect on the drive's performance. That included a drive they suspect is technically similar to the one in the Mac (same manufacturer and age) -- though unfortunately there was no direct comparison of the actual hardware.

  • Re:Bad Summary (Score:1, Informative)

    by Anonymous Coward on Monday July 05, 2010 @01:19AM (#32796340)

    No, no, no. TRIM is not a SSD memory management command, and has nothing to do with fragmentation. It is also important for storage technologies other than SSD.

    Of course blocks/sectors on any device will have multiple files on each sector. SSDs are no different in that regard. What gets presented to the filesystem is still a block device.

    It probably does not help NTFS much for use on SSDs that it has a 4K block size by default (whereas HFS+ has a 16KB block size).

    An SSD's erase block size is generally 128KB, and a properly configured filesystem will therefore be 128K-aligned. So the "perfect" allocation cluster size for optimal SSD performance would be a 128K block size, but alas, few filesystems other than ZFS will do so.

    The ATA TRIM command is equivalent to the SCSI UNMAP command.

    It is basically a way for the OS to tell the disk subsystem that certain blocks of the disk are now NOT USED; in other words, the disk layer can throw them away.

    This is referred to as a block-layer discard: it lets the disk know that a bunch of blocks are no longer used by the filesystem and can be discarded.

    If the block device is an SSD, then once a whole 128K erase block has been TRIMmed, it generally makes sense for the device to schedule the underlying flash pages to be erased and made ready for new data.

    Unlike with a standard hard drive, when you are dealing with flash memory, a page that has held any data must be completely erased before new data can be written to it.

    Write performance goes to hell if the SSD runs out of pre-erased pages to store data to, since a used page then has to be erased on the spot before the block can be stored.

    When an existing block gets overwritten, new (unused) flash memory has to be consumed as if it was an entirely new block.

    Therefore, if there ARE pre-erased pages available, overwriting a non-TRIMmed block will likely consume one of the free pre-erased pages (since there is not enough time to quickly erase a page to implement 'overwriting').
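    The mapping behaviour described in this comment can be sketched as a minimal flash-translation-layer model. This is a deliberate simplification (real FTLs track erase blocks, wear leveling, and much more), but it shows how an overwrite consumes a fresh page and how TRIM makes a dead page reclaimable:

```python
# Minimal flash-translation-layer sketch: logical blocks map to physical
# pages, an overwrite consumes a fresh erased page, and TRIM tells the
# device a logical block is unused so its page can be reclaimed.

class ToyFTL:
    def __init__(self, total_pages=8):
        self.erased = set(range(total_pages))  # pre-erased, writable pages
        self.mapping = {}                      # logical block -> physical page
        self.stale = set()                     # pages holding superseded data

    def write(self, lba):
        if not self.erased:
            self.garbage_collect()             # forced erase: the slow path
        page = self.erased.pop()
        old = self.mapping.get(lba)
        if old is not None:
            self.stale.add(old)                # the old copy is now dead data
        self.mapping[lba] = page

    def trim(self, lba):
        # The OS says this logical block is unused: its page becomes
        # reclaimable now, instead of lingering as live data forever.
        page = self.mapping.pop(lba, None)
        if page is not None:
            self.stale.add(page)

    def garbage_collect(self):
        # Erase stale pages so they can be written again.
        self.erased |= self.stale
        self.stale = set()

ftl = ToyFTL(total_pages=8)
for lba in range(8):
    ftl.write(lba)          # drive is now "full" from the FTL's view
ftl.trim(3)                 # filesystem deleted block 3 and told the drive
ftl.garbage_collect()       # idle-time cleanup reclaims the trimmed page
print(len(ftl.erased))      # 1 -> one page ready for an instant write
```

    Without the trim() call, the page behind block 3 would stay mapped as live data and garbage collection would have nothing to reclaim.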

    by billcopc ( 196330 ) on Monday July 05, 2010 @01:24AM (#32796358) Homepage

    TRIM performance is directly tied to an SSD's erase performance. All TRIM does is tell the SSD which blocks are free to erase in its spare time. EVERY write to a NAND cell requires an erase, so if the SSD can perform that step while the system is idle, the next write to that cell can be performed immediately.

    TRIM allows SSDs to "cheat" by doing some of the work ahead of time. The dirty performance is the normal, sustainable speed you should expect from your SSD. Any gains from TRIM are the result of pre-erasing cells when you're idling... if your system never idles (e.g. a busy database), TRIM won't help at all.

  • by m.dillon ( 147925 ) on Monday July 05, 2010 @02:46AM (#32796836) Homepage

    Strange reasoning. In any case, the Intel SSDs appear to use a combination of static and dynamic wear leveling, and it seems to do a really good job. A really, really good job. I have over half a dozen of the 40G drives and have not noticed any reduction in read or write performance.

    There seem to be dozens of different write-combining and wear-leveling implementations across vendors. Dozens and dozens. Variations between vendors are significant, and even variations between revisions from the same vendor can be significant. Drives sold just two years ago are likely to have primitive wear leveling and write combining versus drives sold today. Vendor technology can be years apart.

    I guess you can thank MLC flash for the radical improvement in wear leveling and write combining algorithms over the last few years. Vendors can't really cheat when they use MLC flash... the algorithms have to work properly or the device has an early death due to the limited cell durability.

    Personally speaking, I am very confident about Intel's technology. OCZ seems to be pretty good too but it is also full of hacks and does not properly support SATA NCQ. I'm sure there are some other good vendor technologies out there but there are also definitely some very bad ones that are years behind Intel. In the SSD space, the quality of the software matters a lot.


  • by Mathinker ( 909784 ) * on Monday July 05, 2010 @03:18AM (#32797084) Journal

    Yes, and as we know, solid state disks lose performance when files are fragmented, because, when the disk spins, err, i mean the electrons, the heat goes around, ah, fuck it.

    Your reply was witty, but all of the EEs here on Slashdot will tell you that trying to write to random addresses in, say, DDR2 memory when you could have written the same data to contiguous addresses is a very bad idea. The only reason programmers don't feel this difference quite so much is that the CPU's cache hierarchy is babying them.

  • by makomk ( 752139 ) on Monday July 05, 2010 @05:54AM (#32797760) Journal

    An inaccurate pro-Mac story too, by the looks of it. For the Mac test, they didn't properly erase the SSD to its initial state - instead they used a tool that filled the disk with zeros, marking the entire drive as in-use. It's no surprise that they failed to see any performance degradation as the drive filled up: the performance was already maximally degraded from the start!

  • by jabbathewocket ( 1601791 ) on Monday July 05, 2010 @10:39AM (#32799288)

    The point is that it would allow them to isolate whether the *drive* in the MacBook Air is somehow immune to the issue, or whether something in the operating system/filesystem prevents the issue from occurring.

    I.e., do the tests in OS X; re-zero the drive, install Windows XP/Vista, and run them again (on NTFS on the Mac's SSD); then finally install Windows 7, again to a clean drive with NTFS, and run the tests a final time (this last test would have TRIM enabled, of course).

    Chances are that OS X has an artificial speed limit (for whatever reason) masking the problem. If that theory is correct, installing Windows 7 to a freshly zeroed drive on the Air should result in significantly faster initial performance than measured under OS X, with a drop-off over time to roughly the levels measured under OS X.
