The Curious Case of SSD Performance In OS X

Posted by timothy
from the and-mysteriouser dept.
mr_sifter writes "As we've seen from previous coverage, TRIM support is vital to help SSDs maintain performance over extended periods of time — while Microsoft and the SSD manufacturers have publicized its inclusion in Windows 7, Apple has been silent on whether OS X will support it. bit-tech decided to see how SSD performance in OS X is affected by extended use — and the results, at least with the Macbook Air, are startling. The drive doesn't seem to suffer very much at all, even after huge amounts of data have been written to it."
  • by Anonymous Coward on Sunday July 04, 2010 @06:35PM (#32794614)

    That is startling!

    • by Trogre (513942)

      Yes, as opposed to a Mac Pro story which would be, well, less so.

    • by makomk (752139) on Monday July 05, 2010 @04:54AM (#32797760) Journal

      An inaccurate pro-Mac story too, by the looks of it. For the Mac test, they didn't properly erase the SSD to its initial state - instead they used a tool that filled the disk with zeros, marking the entire drive as in-use. It's no surprise that they failed to see any performance degradation as the drive filled up: the performance was already maximally degraded from the start!

    • by kestasjk (933987) *
      We would prefer you say "stunning" ("breathtaking" would also have been fine)
    • by PRMan (959735)
      The number one thing I got out of the article is that if you buy a Mac, you can pay through the nose for a 2-year-old below-average SSD that performs the same as a modern HDD.
  • by Wesley Felter (138342) <wesley@felter.org> on Sunday July 04, 2010 @06:42PM (#32794654) Homepage

    You can't really draw any conclusions from these results. In one particular Mac, Apple ships a particular Samsung SSD that doesn't degrade, probably because its "clean" performance is already terrible (you might think of it as "pre-degraded"). In other Macs Apple ships Toshiba SSDs that may have completely different behavior. If you put a good SSD (e.g. Intel) in your Mac the behavior will be completely different.

    • by iotaborg (167569) <exaNO@SPAMsofthome.net> on Sunday July 04, 2010 @08:55PM (#32795166) Homepage

      FWIW, I've been using an Intel X-25M 160 GB SSD on my Mac Pro for over half a year, and my read/write speeds are essentially unchanged from when I got it... this is using xbench to check.

    • by dfghjk (711126) on Sunday July 04, 2010 @09:24PM (#32795276)

      "If you put a good SSD (e.g. Intel) in your Mac the behavior will be completely different."

      Completely different from what?

      I have put an Intel SSD in a Mac, in fact 2 in a RAID 0 configuration, and it doesn't behave like you are insinuating it does.

      The performance of the Samsung drives does suck but it isn't because they are "pre-degraded".

    • by fermion (181285)
      I would say looking for this problem in an operating system that has been running SSDs for 30 months is the silly thing. Apple is a system designer. When something goes wrong, we don't get a runaround with the OEM blaming MS and MS blaming the OEM; it is pretty clear it is Apple's fault. So if there were a problem, we would have heard about it.

      OTOH, MS Windows is designed to be flexible enough to run on whatever hardware is thrown at it. The downside is that a driver has to be written for every single piece of hardware

    • Re: (Score:3, Informative)

      by billcopc (196330)

      TRIM performance is directly tied to an SSD's erase performance. All TRIM does is tell the SSD which blocks are free to erase in its spare time. EVERY write to a NAND cell requires an erase, so if the SSD can perform that step while the system is idle, the next write to that cell can be performed immediately.

      TRIM allows SSDs to "cheat" by doing some of the work ahead of time. The dirty performance is the normal, sustainable speed you should expect from your SSD. Any gains from TRIM are the result of pre

      • by beelsebob (529313)

        You're misunderstanding.

        The SSD doesn't sit there erasing blocks. It merely marks them as erased. The problem is that a write to a block requires an entire cell to be written. If there's data in that cell, the SSD has to read the data out, assemble a new layout for the cell with the new data in it, and then write that back. TRIM allows the SSD to see that there's no data in the cell, and hence skip the "read the data out" and "reconstruct the contents of the cell" steps.
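The read-modify-write cycle described above can be sketched as a toy Python model. The cost constants below are invented purely for illustration; they are not measurements from any real drive:

```python
# Toy model of writing a page into a NAND cell. Costs are arbitrary
# units (my assumption), chosen only to show the relative difference.
READ, WRITE, ERASE = 1, 2, 10

def write_page(cell_has_live_data: bool) -> int:
    """Cost of one page write, per the read-modify-write description above."""
    if cell_has_live_data:
        # read the cell out, erase it, reassemble, write back
        return READ + ERASE + WRITE
    # TRIM told the drive the cell holds nothing: skip the read/reassemble
    return ERASE + WRITE

print(write_page(True))   # 13: dirty cell needs the full cycle
print(write_page(False))  # 12: trimmed cell skips the read step
```

With pre-erased spare blocks the erase itself also moves off the write path, which is the bigger win in practice.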

        • by Tacvek (948259)

          Whoa! It can go either way: an erased flash block is all ones or all zeros, depending on convention. Assuming the (less common) all-zeros convention, an erased block can have bits set one at a time, but any time a bit must be unset, the block must be erased. The fastest write performance occurs when the block is already erased.
          The next fastest is when it can be erased and then just the new data written. The slowest is to read, erase, and then write the whole flash block.

          It is entirely permissible and even sen

    • by spongman (182339)

      yeah, they stated that they wiped the drive with zeros using DiskUtility. given that OSX doesn't support TRIM, that would mean that EVERY page is dirty, thus EVERY write requires a read-merge-write cycle, and thus the performance would never degrade since it's already as shitty as it can possibly get. maybe if they'd TRIMmed the whole drive with a decent OS and started from there without zeroing they'd have gotten significantly different results.

      • by dfghjk (711126)

        "maybe if they'd TRIMmed the whole drive with a decent OS and started from there without zeroing they'd have gotten significantly different results."

        maybe. maybe not. I own a Mac with a Samsung drive. Its performance sucks. It sucks no less than any benchmark I've seen published on it, though.

    • You're saying the "Samsung SSD doesn't degrade because its initial performance is already so terrible"?!

      That makes no sense at all. Think about it. Even if the drive were slow enough, out of the box, that a read operation took 10 seconds to complete, it should take more like 20 or 30 seconds once the drive is all fragmented up.

      Unless a drive had zero performance (never returned a result when you did a read or write), it should be possible to measure it degrading in performance from a clean state

  • by mrsteveman1 (1010381) on Sunday July 04, 2010 @06:44PM (#32794660)

    I've got a Vertex 60 in a white unibody MacBook and it works fine; boot is 7-9 seconds, and apps load almost instantly even if I start 10 at the same time.

    I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.

    • by joe_bruin (266648) on Sunday July 04, 2010 @07:12PM (#32794780) Homepage Journal

      The impact of the TRIM command is vastly overrated. It is effective on "naive" devices that don't allocate a reserve block pool and therefore have to erase before doing every write. On a modern SSD, the disk controller reserves 5-10% of the physical blocks (beyond those that the host can see) as an extended block pool. These blocks are always known to be free (since they're out of the scope of that OS) and are therefore preemptively erased. So, when your OS overwrites a previously written data block, one of these pre-erased blocks is actually written to and the old block is put in the reserve pool for erasing later at the device's leisure.

      The one case where this isn't true is if you're constantly writing gigs of data to an empty drive. With TRIM commands, most of your drive may have been pre-erased, whereas without it you may overrun the reserve pool's size and then will be waiting on block erase. For normal desktop users, this is a pathological case. In servers and people who do a lot of heavy video editing it may matter a lot more.
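The reserve-pool behavior described above can be simulated with a toy model. The block counts and ~7% reserve below are made-up numbers for illustration, not any vendor's actual firmware logic:

```python
from collections import deque

class ToySSD:
    """Toy drive with a small pool of pre-erased spare blocks."""
    def __init__(self, visible_blocks=100, reserve_blocks=7):
        # spare blocks invisible to the host, pre-erased at idle
        self.reserve = deque(range(visible_blocks, visible_blocks + reserve_blocks))
        self.dirty = deque()            # old copies awaiting background erase
        self.stalls = 0                 # erases the host had to wait for

    def overwrite(self, block):
        if self.reserve:
            self.reserve.popleft()      # fast path: write to a pre-erased spare
        else:
            self.stalls += 1            # slow path: erase inline before writing
        self.dirty.append(block)        # old copy queued for idle-time erase

    def idle(self):
        while self.dirty:               # background GC returns blocks to the pool
            self.reserve.append(self.dirty.popleft())

ssd = ToySSD()
for b in range(20):                     # a burst of overwrites with no idle time
    ssd.overwrite(b)
print(ssd.stalls)                       # 13: the 7-block pool ran dry
ssd.idle()
ssd.overwrite(0)
print(ssd.stalls)                       # 13: after idle GC the fast path is back
```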

      • by PopeRatzo (965947) *

        It is effective on "naive" devices

        When you say "naive devices" are you referring to the controller? (sorry I'm not as well informed about storage as I'd like)

      • by Jeff- (95113) on Sunday July 04, 2010 @08:37PM (#32795082) Homepage

        You're missing something.

        Erase blocks and data blocks are not the same size. The block size is the smallest atomic unit the operating system can write to. The erase block size is the smallest atomic unit the SSD can erase. Erase blocks typically contain hundreds of data blocks. They must be relatively large so they can be electrically isolated. The SSD maintains a map from a linear block address space to physical block addresses. The SSD may also maintain a map of which blocks within an erase block are valid, and fills them as new writes come in.

        Without TRIM, once written, the constituent blocks within an erase block are always considered valid. When one block in the erase block is overwritten, the whole thing must be RMW'd to a new place. With TRIM the drive controller can be smarter and only relocate those blocks that still maintain a valid mapping. This can drastically reduce the overhead on a well used drive.
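The saving described above shows up directly in the number of pages garbage collection must copy. A small illustration (the 128-pages-per-erase-block figure is an assumed round number, not from any datasheet):

```python
PAGES_PER_ERASE_BLOCK = 128  # assumed size, for illustration only

def gc_copy_cost(live_pages: int, trim_supported: bool) -> int:
    """Pages that must be relocated to reclaim one erase block."""
    if trim_supported:
        return live_pages              # only pages still mapped as valid
    return PAGES_PER_ERASE_BLOCK       # without TRIM, every page looks valid

# A well-used erase block where the filesystem deleted most of its contents:
print(gc_copy_cost(10, trim_supported=True))   # 10 copies
print(gc_copy_cost(10, trim_supported=False))  # 128 copies
```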

        • by Rockoon (1252108)

          Without TRIM, once written, the constituent blocks within an erase block are always considered valid.

          I think that many folks are confusing logical sectors with physical sectors.

          The device is not reading a block, erasing it, making some changes to some of its sectors, and then writing back to the same now-erased block. It is always writing to some different block.

          For simplicity's sake, let's say that each block contains 4 sectors: A, B, C, and D. When the file system writes to sector A, the SSD reads the entire ABCD block, updates A, and then grabs a virgin block to write ABCD to. The original ABCD block i

          • by Tacvek (948259)

            Perhaps the best way to do this is to define some terms.

            One term I will define is the logical sector; many interfaces label this a block. This is the minimum amount of addressable data. It is what the kernel sees as the block size of the disk drive, and it is also used in the drive communication protocols. File systems also often use it as a sector size, but that is completely independent of the size of a logical block in disk signaling, which is what most kernels view the device in terms of. From this point forward I

    • Re: (Score:3, Informative)

      by onefriedrice (1171917)

      I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.

      It probably is taking care of itself. Some OCZ drives, including the Vertex series, can have firmware which forgoes TRIM support in favor of some form of garbage collection.

    • You missed out the important detail of how long you've been using it.
  • by Anonymous Coward on Sunday July 04, 2010 @06:44PM (#32794662)

    Maybe they should take the drive over to a Windows XP and a Windows 7 box and see if it's the drive hardware being resilient, or the OS. The G1 Intel drives don't drop a ton after use, and they don't support TRIM. It looks like the Intel G1 flash is artificially capped by the controller. It could be similar here.

    • Re: (Score:3, Informative)

      by wvmarle (1070040)

      TFA stated this is actually a follow-up test on tests done on Windows, where they found TRIM to have a large effect on the drive's performance. This included a drive they suspect to be technically similar to the one in the Mac (same manufacturer, age) - though unfortunately no direct comparison of actual hardware.

  • This is possible. (Score:5, Informative)

    by anethema (99553) on Sunday July 04, 2010 @06:51PM (#32794696) Homepage
    It depends a lot on how the drive works.

    Intel drives actually use the whole drive for scratch space until a sector is written to. Then, without TRIM, the drive only has its tiny bit of extra scratch space to work with. That's why Intel drives degrade so badly without TRIM.

    Indilinx Barefoot controllers on the other hand ONLY use their scratch space, they never use the normal writing space of the drive as scratch space.

    See here.

    http://www.anandtech.com/show/2829/9

    While it does show the synthetic tests degrading without TRIM, even more than the Intel drives, the real-world use tests show they suffer almost no loss in performance.

    Depending on which controller the drive is using, TRIM could make almost no difference or a world of difference.

    Anand explains it best:

    "Only the Indilinx drives lose an appreciable amount of performance in the sequential write test, but they are the only drives to not lose any performance in the more real-world PCMark Vantage HDD suite. Although not displayed here, the overall PCMark Vantage score takes an even smaller hit on Indilinx drives. This could mean that in the real world, Indilinx drives stand to gain the least from TRIM support. This is possibly due to Indilinx using a largely static LBA mapping scheme; the only spare area is then the 6.25% outside of user space regardless of how used the drive is."
    • by Joce640k (829181) on Sunday July 04, 2010 @08:29PM (#32795066) Homepage

      FTA: "We simulated this by copying 100GB of mixed files (OS files, multiple game installs, MP3s and larger video files ) to the SSD, deleting them, and then repeating the process ten times,"

      Surely you should be deleting half the files - every other file - then rewriting them. If you copy a bunch of files then delete them all, you're leaving the drive in pretty much the same state as it was at the start; the only difference between passes will be due to the wear-levelling algorithms inside the drive. Overall performance at the end will mostly be a result of the initial condition of the drive, not what happened during the test.
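The difference between the two workloads can be sketched by counting how interleaved live and deleted files end up on a toy linear layout (a simplification that ignores the drive's own remapping):

```python
def transitions(live: set, n_files: int) -> int:
    """Boundaries between live and deleted files: a rough proxy for how
    scattered the surviving data (and the free space) is."""
    return sum(1 for i in range(1, n_files)
               if (i in live) != (i - 1 in live))

N = 100
copy_all_delete_all = set()               # everything written, then all deleted
delete_every_other = set(range(1, N, 2))  # odd-numbered files survive

print(transitions(copy_all_delete_all, N))  # 0: free space is one big run
print(transitions(delete_every_other, N))   # 99: maximally interleaved
```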

      • by TheLink (130905)

        No. When an OS deletes stuff, most drives do not know you've deleted stuff on the drive. All they know is the OS has said: "Write this on to that block".

        The drives do not know that the newly written block means that a huge bunch of other blocks are no longer in use.

      To take the extreme case: say you write to the entire drive and then delete the partition. The drive doesn't know anything about partitions - it just knows you've overwritten a few blocks; it doesn't know that almost all the blocks on the entire driv

  • Two quotes stick out (Score:5, Interesting)

    by izomiac (815208) on Sunday July 04, 2010 @07:02PM (#32794746) Homepage
    While skimming the article two parts really stood out. First:

    Apple's description of the zeroing format method we used fits the description of what we wanted in terms of resetting the SSD to a clean state

    Zeroing is not the same operation as TRIM. TRIM marks a block as unused, and if you read it you'll either get random data or zeros (probably the latter). Zeroing marks it as in-use, and if you read it you'll get zeros. The SSD's wear management algorithm will move the latter around as though it were real data, whereas it knows the former is "empty" so it won't bother (so the SSD will be faster). In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.

    Secondly, the SSD in the Macbook Air really isn't very fast at all

    Which strengthens the hypothesis that they were comparing one "full" state with another. Pop out the drive, TRIM the whole disk in another OS, and run the benchmark again. It'll probably be a lot faster. It wouldn't surprise me if installing the Mac OS at the factory caused every block on the SSD to be used at least once (e.g. a whole disk image was written), which would mean you'd already be at the worst possible performance degradation.
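The zeroing-versus-TRIM distinction drawn above can be sketched with a dict standing in for the drive's logical-to-physical map (my simplified model, not any real controller's):

```python
class MappedDrive:
    """Toy drive: a block is 'live' exactly when it appears in the map."""
    def __init__(self):
        self.mapping = {}

    def write(self, lba, data):
        self.mapping[lba] = data       # any write, even all zeros, is live data

    def trim(self, lba):
        self.mapping.pop(lba, None)    # TRIM: the drive forgets the block

drive = MappedDrive()
for lba in range(4):
    drive.write(lba, b"\x00" * 512)    # "zeroing" the drive
print(len(drive.mapping))              # 4: every block still looks in use

for lba in range(4):
    drive.trim(lba)
print(len(drive.mapping))              # 0: a genuinely clean state
```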

    • by broken_chaos (1188549) on Sunday July 04, 2010 @07:40PM (#32794922)

      if you read it you'll either get random data or zeros (probably the latter)

      If you read a TRIMmed block directly, most drives will kick back zeroes. You can do this with hdparm -- particularly useful as a method to test if TRIM works (and it even uncovered a bug in ext4's TRIM implementation in data=writeback mode, where TRIM only works on metadata). Run hdparm -I on a SSD, and it'll actually say something along the lines of "Deterministic read ZEROs after TRIM" for most drives.

      In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.

      Very true. There are only two methods I know of to 'clean slate' a full drive -- either TRIM the entire thing (with a tool like hdparm -- this is tricky to get right) or run an ATA Secure Erase command. Most SSDs take the secure erase command and just blank every NAND chip they have (taking ~2 minutes compared to the multiple hours that rotational drives take for the Secure Erase command) -- I've done this on my X-25M and it works brilliantly.

      Unless Apple's Disk Utility actually does a Secure Erase command (which is very unlikely), then their testing methodology is entirely flawed, and their 'resetting' of the drive instead made it behave as if it was entirely, completely, 100% filled to the brim.

    • by AllynM (600515) * on Sunday July 04, 2010 @09:26PM (#32795282) Journal

      Apple's description of the zeroing format method we used fits the description of what we wanted in terms of resetting the SSD to a clean state

      Zeroing is not the same operation as TRIM. TRIM marks a block as unused, and if you read it you'll either get random data or zeros (probably the latter). Zeroing marks it as in-use, and if you read it you'll get zeros. The SSD's wear management algorithm will move the latter around as though it were real data, whereas it knows the former is "empty" so it won't bother (so the SSD will be faster). In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.

      Not only that, but writing to all free space of many SSDs will *drop* their IOPS performance, since the drive now has to track *all* sectors in the LBA remap table. This is especially true of Intel drives (even the 2nd gen). Additionally, without TRIM, most drives will then continue to track all LBAs for as long as the drive is used in that same Mac.

      Secondly, the SSD in the Macbook Air really isn't very fast at all

      A Macbook Air is just about the worst test of SSD performance, as its SATA and other bus work is run in a much-reduced power mode, meaning the bottleneck is not the SSD at all. A worst-case degraded SSD in an Air will still be faster than the other bottlenecks in that system.

      Allyn Malventano, CTNC, USN
      Storage Editor, PC Perspective

    • It wouldn't surprise me if installing the Mac OS at the factory caused every block on the SSD to be used at least once (e.g. a whole disk image was written), which would mean you'd already be at the worst possible performance degradation.

      Disk images on the mac made by hdiutil (which is what Disk Utility is largely a front-end to) are almost always a copy based off files or just in-use blocks to a new image- the image is copied bit-by-bit back (for speed) and then the filesystem is expanded afterwards.

  • Flawed article (Score:5, Informative)

    by ShooterNeo (555040) on Sunday July 04, 2010 @07:05PM (#32794760)

    The article writers made 2 major mistakes that cause their results to be meaningless.

    1. They didn't secure erase the drive, which is what actually puts a drive back into a virgin state. They instead wrote zeroes to every sector, which means the drive controller probably still thinks those zeroed-out sectors are in use.

    2. The Samsung drive controller has a form of self cleanup that greatly reduces the need for TRIM.

    3. Regardless, the SSD they used was slow as a dog and barely worth using over a HDD.

    • by AHuxley (892839)
      Yes, it was a pain to read. I just wonder how good the OS X clean-up is, and how long the Samsung hardware clean-up lasts?
      Seems SandForce and Intel are the current neat SSDs until we get super-sized and next-gen controllers :)
      I wonder what's holding TRIM back on OS X? Can TRIM in Windows 7 nuke a drive?
      How's TRIM support in the Linux branches?
      • by Rockoon (1252108)

        Yes, it was a pain to read. I just wonder how good the OS X clean-up is, and how long the Samsung hardware clean-up lasts?

        The answer, of course, is that the OS X cleanup doesn't do jack-shit for the SSD, although it may still help OS X itself. The SSD doesn't see a "cleanup operation" .. it sees "writing data that must be saved".

    • Re: (Score:2, Interesting)

      by ekran (79740) *

      I think that the biggest mistake in the test was that they wrote zeros to the drive first, which means that the blocks got allocated (dirty) and had to be read/rewritten with new data when the next phase of the test started. So, basically both tests are the same and it's no wonder they got about the same test results.

    • by Sycraft-fu (314770)

      They didn't test it on a high performance platform. The Air is, unsurprisingly, optimized as a low battery usage mobile device. This implies a number of things, none of them good for trying to do high speed SSD testing. A more appropriate platform would be a Mac Pro, and perhaps look at adding on a high speed SATA card for good measure.

      Now I can appreciate testing on a non-performance system too, but not if your objective is to test TRIM. In that case you need to be on a high end system so that the system i

  • by whoever57 (658626) on Sunday July 04, 2010 @07:23PM (#32794844) Journal
    Consider this statement:

    Consider the Vertex: without TRIM, and when used, its sequential read speed for 1,024KB files is 137MB/sec; the Macbook Air manages 105MB/sec. With TRIM, the Vertex manages 258MB/sec in this same test.

    According to their tests, TRIM has a big impact on read speeds, yet according to their explanation, TRIM should only have a significant effect on write speeds.

  • The HD in my Macbook Pro was failing, and when I was shopping for parts I noticed that PowerbookMedic (normally I'd just go buy a hard drive locally, but I needed to replace the DVD drive as well, so I figured why not get it all in one go) had an SSD available at a reasonable price. I purchased it on the theory that whatever they were shipping was a decent fit for the Mac - they didn't have any maker info on the page, but I figured that the only real difference between SSDs would be max bandwidth and an

  • by AHuxley (892839) on Sunday July 04, 2010 @09:15PM (#32795244) Homepage Journal
    How are Mac users with Mercury Extreme SSDs or the Mushkin etc. units doing?
    They might be a bit different from the units Apple found at a set price point to max the profits.
    Most of the Apple user sites seem to like the idea of an approximate TRIM via a removal of all data, zeroing, and a copy back of the OS. Thanks
  • Trim is a hack (Score:2, Interesting)

    by KonoWatakushi (910213)

    The problem is that a Flash based SSD needs to have a pool of unused blocks to work around the block-erase stupidity. However, trim only "solves" the problem when there is a good deal of free space on the drive anyway; when the drive nears full, it is useless. At the current pricing, people don't buy SSDs to keep them empty, and one would not expect an SSD to perform badly when full, as with rotating rust.

    The solution is to provision enough extra blocks on the drive beyond the advertised capacity. While

  • by redelm (54142) on Sunday July 04, 2010 @10:37PM (#32795666) Homepage
    TRIM just seems to be yet another abstruse abstraction layer. Why not just allow filesystems (HPFS, UFS, ext[234]) to have large blocksizes?

    Yes, I know the Intel MMU pages are either 4k or 4M. And people like "saving disk," since on average half a blocksize is wasted per file. But 4k is a tiny blocksize, set IIRC for news spools that few use. It only wastes 10 MB on a 5,000 file/dir system. That is not enough waste to matter!

    A 128 kB blocksize to match the hardware bs would "waste" a more reasonable 320 MB. Only 1% of the minimum 32 GB SSD.
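The waste figures above roughly check out under the usual assumption that each file wastes half a block on average:

```python
def expected_waste_mb(n_files: int, block_size_kb: int) -> float:
    """Average slack space: half a block per file, converted KB -> MB."""
    return n_files * (block_size_kb / 2) / 1024

print(expected_waste_mb(5000, 4))    # 9.765625  (the "10 MB" above)
print(expected_waste_mb(5000, 128))  # 312.5     (roughly the "320 MB" above)
```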

    • by putaro (235078)

      I think it's reasonable but I'm sure there's a lot of code down in the FS layers that would break. No one has had to deal with a non-512 byte sector disk for a while.

    • I thought so, too, but the block size would have to be 512 kB... Not sure if it's worth it.

  • TRIM equivalent (Score:4, Interesting)

    by m.dillon (147925) on Monday July 05, 2010 @01:22AM (#32796690) Homepage

    All SSDs have a bit more storage than their rating. Partitioning a little less space on a vendor-fresh drive can double or triple the extra storage available to the SSD's internal wear leveling algorithms. For all intents and purposes this gives you the equivalent of TRIM without having to rely on the OS and filesystem supporting it. In fact, it could conceivably give you better performance than TRIM, because you don't really know how efficient the TRIM implementation is in either the OS or the SSD, and because TRIM is a serialized command it cannot be run concurrently with read or write IOs. There are a lot of moving parts when it comes to using TRIM properly. Systems are probably better off not using TRIM at all, frankly.

    In case people haven't figured it out, this is one reason why Intel chose multiples of 40G for their low-end SSDs. Their 40G SSD competes against 32G SSDs from other vendors. Their 80G SSD competes against 64G SSDs from other vendors. You can choose nominal performance by utilizing 100% of the advertised space or you can choose to improve upon the already excellent Intel wear leveling algorithms simply by partitioning it for (e.g.) 32G instead of 40G.

    We're already seeing well over 200TB in endurance from Intel's 40G drives partitioned for 32G. Intel lists the endurance for their 40G drives at 35TB. I'm afraid I don't have comparative numbers for when all 40G is used, but I am already very impressed when 32G is partitioned for use out of the 40G available.

    Unfortunately it is nearly impossible to stress test a SSD and get results that are even remotely related to the real world, since saturated write bandwidth eventually causes erase stalls when the firmware can no longer catch up. In real-world operation write bandwidth is not pegged 100% of the time and the drive can pre-erase space. Testing this stuff takes months and months.

    Also, please nobody try to compare USB sticks against real (SATA) SSDs. SSDs have real wear leveling algorithms and enough ram cache to do fairly efficient write combining. USB sticks have minimal wear leveling and basically no ram cache to deal with write combining.

    -Matt
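The arithmetic behind the partitioning trick above is simple (a straight percentage of advertised capacity; any raw NAND the drive has beyond its rating varies by model and is not counted here):

```python
def extra_spare_pct(advertised_gb: float, partitioned_gb: float) -> float:
    """Share of the advertised capacity handed back to wear leveling."""
    return 100 * (advertised_gb - partitioned_gb) / advertised_gb

print(extra_spare_pct(40, 32))  # 20.0: a fifth of the drive left as spare area
print(extra_spare_pct(40, 40))  # 0.0: only the built-in reserve remains
```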

  • They used the "write zero" disk erase method, which in fact un-erases every NAND block of the disk, which in turn forces the disk to erase each block again as it writes. That's why they see such consistency of results: they are measuring the worst possible case, where the disk is forced onto the slow path for every block.

    To erase NAND, you have to erase it a block at a time, and the resulting block is full of 1's. Writing to NAND is a matter of writing zeros in places; you can't write 1's to NAND unless you erase it first.

    So
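The program-versus-erase asymmetry described above can be demonstrated at the bit level (assuming the common convention that an erased byte reads as all ones):

```python
ERASED = 0xFF  # an erased NAND byte: all bits set to 1

def program(current: int, desired: int) -> int:
    """Programming is effectively an AND: bits can go 1 -> 0, never 0 -> 1."""
    return current & desired

def needs_erase(current: int, desired: int) -> bool:
    """True if the desired value would raise a bit that is currently 0."""
    return (desired & ~current & 0xFF) != 0

print(hex(program(ERASED, 0x5A)))  # 0x5a: writing into an erased byte works
print(needs_erase(0x5A, 0xFF))     # True: raising bits back to 1 needs an erase
print(needs_erase(0x5A, 0x50))     # False: only clearing bits, no erase needed
```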

"If truth is beauty, how come no one has their hair done in the library?" -- Lily Tomlin

Working...