
The Curious Case of SSD Performance In OS X

mr_sifter writes "As we've seen from previous coverage, TRIM support is vital to help SSDs maintain performance over extended periods of time — while Microsoft and the SSD manufacturers have publicized its inclusion in Windows 7, Apple has been silent on whether OS X will support it. bit-tech decided to see how SSD performance in OS X is affected by extended use — and the results, at least with the MacBook Air, are startling. The drive doesn't seem to suffer very much at all, even after huge amounts of data have been written to it."
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Sunday July 04, 2010 @07:42PM (#32794654) Homepage

    You can't really draw any conclusions from these results. In one particular Mac, Apple ships a particular Samsung SSD that doesn't degrade, probably because its "clean" performance is already terrible (you might think of it as "pre-degraded"). In other Macs Apple ships Toshiba SSDs that may have completely different behavior. If you put a good SSD (e.g. Intel) in your Mac the behavior will be completely different.

  • by Anonymous Coward on Sunday July 04, 2010 @07:44PM (#32794662)

    Maybe they should take the drive over to a Windows XP and Windows 7 box and see if it's the drive hardware being resilient or the OS. The G1 Intel drives don't drop a ton after use, and they don't support TRIM. It looks like the Intel G1 flash is artificially capped by the controller. It could be similar here.

  • Two quotes stick out (Score:5, Interesting)

    by izomiac ( 815208 ) on Sunday July 04, 2010 @08:02PM (#32794746) Homepage
    While skimming the article two parts really stood out. First:

    Apple's description of the zeroing format method we used fits the description of what we wanted in terms of resetting the SSD to a clean state

    Zeroing is not the same operation as TRIM. TRIM marks a block as unused, and if you read it you'll either get random data or zeros (probably the latter). Zeroing marks it as in-use, and if you read it you'll get zeros. The SSD's wear management algorithm will move the latter around as though it were real data, whereas it knows the former is "empty" so it won't bother (so the SSD will be faster). In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.

    Secondly, the SSD in the MacBook Air really isn't very fast at all

    Which strengthens the hypothesis that they were comparing one "full" state with another. Pop out the drive, TRIM the whole disk in another OS, and run the benchmark again. It'll probably be a lot faster. It wouldn't surprise me if installing the Mac OS at the factory caused every block on the SSD to be used at least once (e.g. a whole disk image was written), which would mean you'd already be at the worst possible performance degradation.
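
    A minimal sketch of that whole-disk TRIM step, assuming the drive is attached to a Linux box and that util-linux's blkdiscard is available; the device path is a placeholder, and blkdiscard destroys everything on the target:

        #!/usr/bin/env python3
        # Sketch: issue a discard (TRIM) over an entire SSD before re-running the
        # benchmark. Requires root; the drive and controller must support TRIM.
        import subprocess
        import sys

        device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"  # placeholder node

        # blkdiscard with no range arguments discards the whole device.
        subprocess.run(["blkdiscard", device], check=True)
        print(f"Discarded all blocks on {device}")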

  • by izomiac ( 815208 ) on Sunday July 04, 2010 @08:27PM (#32794870) Homepage
    Well, to be honest I did look a bit more at that part before criticizing it. On page two of the article they elaborate:

    The MacBook Air we received from Apple had previously been sent to other reviewers, so we first needed to get the SSD into a "clean" state. While we have an established way of doing this with Windows systems - using HDDerase to perform a low level format - we needed to find a way of doing this with OS X. The OS X installer actually allows you to load an app called Disk Utility, which can partition and format the drive - and one of the options is for a format which zeroes all the data.

    According to Apple, "the 'Zero all data' option... takes the erasure process to the next level by converting all binary in the empty portion of the disk to zeros, a state that might be described as digitally blank." The next level? Count. Us. In.

    And the link goes to an Apple site that explains what zeroing the data does and how to do it.

    Once we'd done this with the clean SSD, we then proceeded to give it a damn good thrashing. As with our Windows SSD testing, we filled the SSD with around 112GB of files from a USB hard disk - the files included OS files, game installs and media. We then deleted these files, then copied them across again, repeating the process ten times, so that we'd written over 1TB of data to the SSD.

    So... they wrote over the whole disk with a format, then copied and deleted a bunch of files in the hope of getting full coverage of the disk (i.e. exactly what the zeroing format did). That's about three levels of abstraction above what they're trying to test.
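
    For reference, the fill-and-delete thrashing the reviewers describe boils down to a loop like the one below; the paths, and the use of a script at all, are assumptions, since the article doesn't say how the copies were made:

        # Repeatedly copy a large set of mixed files onto the SSD, then delete them,
        # so that far more data than the drive's capacity is written over the run.
        import shutil
        from pathlib import Path

        SOURCE_DIR = Path("/Volumes/USB-HDD/testfiles")    # ~112GB of mixed files (assumed path)
        TARGET_DIR = Path("/Volumes/Macintosh HD/thrash")  # directory on the SSD under test
        PASSES = 10

        for i in range(PASSES):
            TARGET_DIR.mkdir(parents=True, exist_ok=True)
            for src in SOURCE_DIR.iterdir():
                dst = TARGET_DIR / src.name
                if src.is_dir():
                    shutil.copytree(src, dst)
                else:
                    shutil.copy2(src, dst)
            shutil.rmtree(TARGET_DIR)   # delete the lot, then repeat
            print(f"pass {i + 1} of {PASSES} done")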

  • by Joce640k ( 829181 ) on Sunday July 04, 2010 @09:29PM (#32795066) Homepage

    FTA: "We simulated this by copying 100GB of mixed files (OS files, multiple game installs, MP3s and larger video files ) to the SSD, deleting them, and then repeating the process ten times,"

    Surely you should be deleting half the files - every other file - then rewriting them. If you copy a bunch of files and then delete them all, you're leaving the drive in pretty much the same state as it was at the start; the only difference between passes will be due to the wear levelling algorithms inside the drive. Overall performance at the end will mostly be a result of the initial condition of the drive, not what happened during the test.
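
    A sketch of that alternating pattern, reusing the same hypothetical test directory as in the earlier sketch - delete every other file so the free space ends up fragmented, then write fresh data into the gaps:

        from pathlib import Path

        TARGET_DIR = Path("/Volumes/Macintosh HD/thrash")  # hypothetical test directory

        # Delete every second file so free space is scattered across the drive...
        files = sorted(p for p in TARGET_DIR.iterdir() if p.is_file())
        for i, f in enumerate(files):
            if i % 2 == 0:
                f.unlink()
        # ...then copy a fresh batch of files in, forcing writes into the fragmented
        # gaps instead of returning the drive to its starting state each pass.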

  • by jedidiah ( 1196 ) on Sunday July 04, 2010 @09:38PM (#32795090) Homepage

    > I've been running Windows XP since beta2, and it really kicks ass. I don't
    > have to recompile my kernel when I want to install an ethernet card, it

            You didn't have to do this with Slackware in 1994.

    > automatically detects it and installs the drivers no matter who the
    > manufacturer is. Dual monitors? No chore with windows, get two video cards,

            Windows 7 doesn't even do this with hardware slightly older than it.

  • Re:Bad Summary (Score:5, Interesting)

    by DJRumpy ( 1345787 ) on Sunday July 04, 2010 @09:53PM (#32795154)

    I have to wonder why they didn't boot into Windows on the same PC and repeat the tests. That would have identified whether it was a hardware issue or a software (filesystem) issue that caused the irregularity.

  • by iotaborg ( 167569 ) <exa@sof t h o m e.net> on Sunday July 04, 2010 @09:55PM (#32795166) Homepage

    FWIW, I've been using an Intel X25-M 160 GB SSD on my Mac Pro for over half a year, and my read/write speeds are essentially unchanged from when I got it... this is using Xbench to check.
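
    For a rough sanity check along the same lines, a crude sequential test like the one below will do; it is not Xbench, the file path is an assumption, and it does not bypass the filesystem cache on the read pass:

        import os
        import time

        TEST_FILE = "/Volumes/Macintosh HD/ssd_speed_test.bin"  # assumed path on the SSD
        SIZE_MB = 1024
        CHUNK = b"\0" * (1024 * 1024)

        start = time.time()
        with open(TEST_FILE, "wb") as f:
            for _ in range(SIZE_MB):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())          # make sure the data actually hits the drive
        print(f"write: {SIZE_MB / (time.time() - start):.1f} MB/s")

        start = time.time()
        with open(TEST_FILE, "rb") as f:
            while f.read(1024 * 1024):
                pass
        print(f"read:  {SIZE_MB / (time.time() - start):.1f} MB/s")

        os.remove(TEST_FILE)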

  • by AllynM ( 600515 ) * on Sunday July 04, 2010 @10:26PM (#32795282) Journal

    Apple's description of the zeroing format method we used fits the description of what we wanted in terms of resetting the SSD to a clean state

    Zeroing is not the same operation as TRIM. TRIM marks a block as unused, and if you read it you'll either get random data or zeros (probably the latter). Zeroing marks it as in-use, and if you read it you'll get zeros. The SSD's wear management algorithm will move the latter around as though it were real data, whereas it knows the former is "empty" so it won't bother (so the SSD will be faster). In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.

    Not only that, but writing to all free space of many SSDs will *drop* their IOPS performance, since the drive now has to track *all* sectors in the LBA remap table. This is especially true with Intel drives (even the 2nd gen). Additionally, without TRIM, most drives will then continue to track all LBAs as in use for as long as the drive is used in that same Mac.

    Secondly, the SSD in the MacBook Air really isn't very fast at all

    A MacBook Air is just about the worst test of SSD performance, as its SATA link and other bus hardware run in a much reduced power mode, meaning the bottleneck is not the SSD at all. A worst-case degraded SSD in an Air will still be faster than the other bottleneck in that system.

    Allyn Malventano, CTNC, USN
    Storage Editor, PC Perspective

  • by scdeimos ( 632778 ) on Sunday July 04, 2010 @11:02PM (#32795412)

    I read the Apple link (http://support.apple.com/kb/HT1820) and I don't see how it would be equivalent to TRIM on an SSD device.

    Zeroing a device, as per the link, just writes zeroes to every block on it. From an SSD's point of view this means that every block is in use and just happens to contain zeros. Next time a request is made to write to a block it requires a (comparatively slow) read-modify-erase-write cycle. It won't make an SSD faster, but it may serve to securely erase the data on it (though with wear levelling in play, old data can survive in remapped blocks).

    TRIM on the other hand is used by the file system driver to say "I've deleted all data in this block, you can disregard it now." The device will erase it at its leisure. Next time a write is made to that block it's a simple (and fast) write cycle, without the read-modify and costly erase. Saving the erase cycle from interfering with a write is how TRIM-enabled drives gain their performance boost.

    TFA's tests were invalid because the drive was always in a degraded state (assuming, of course, that the 2008-era Samsung drive even supported TRIM).
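
    The distinction can be put in a toy model - purely illustrative, no real controller works this simply: a zero-filled block still counts as live data, while a trimmed block goes back to the free pool and can be erased ahead of time.

        class ToySSD:
            def __init__(self, num_blocks):
                self.in_use = set()           # blocks the controller must preserve
                self.num_blocks = num_blocks

            def write(self, block):
                # Overwriting a live block forces a read-modify-erase-write cycle.
                slow = block in self.in_use
                self.in_use.add(block)
                return "slow read-modify-erase-write" if slow else "fast write"

            def zero_fill(self):
                for b in range(self.num_blocks):
                    self.write(b)             # every block now looks like live data

            def trim(self, block):
                self.in_use.discard(block)    # controller may erase it at its leisure

        ssd = ToySSD(4)
        ssd.zero_fill()
        print(ssd.write(0))   # slow: the zeroed block is still "in use"
        ssd.trim(1)
        print(ssd.write(1))   # fast: the trimmed block was free to pre-erase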

  • Trim is a hack (Score:2, Interesting)

    by KonoWatakushi ( 910213 ) on Sunday July 04, 2010 @11:09PM (#32795466)

    The problem is that a Flash-based SSD needs to have a pool of unused blocks to work around the block-erase stupidity. However, TRIM only "solves" the problem when there is a good deal of free space on the drive anyway; as the drive nears full, it is useless. At current pricing, people don't buy SSDs to keep them empty, and one would not expect an SSD to perform badly when full, as with rotating rust.

    The solution is to provision enough extra blocks on the drive beyond the advertised capacity. While the vendors refuse to do this, you may do so yourself. Simply create an empty partition on the drive--just make sure it isn't ever written to or zeroed.

    Anyway, the vendors' motivation to cut corners here should be perfectly clear.

  • Re:Bad Summary (Score:3, Interesting)

    by mysidia ( 191772 ) on Monday July 05, 2010 @01:33AM (#32796402)

    I have the feeling that if you match the block size of files written to the block size of the SSD (optionally padding with zeros to fill it up) you could get quite a performance boost - and only start using the unused, zeroed-out fragments from the blocks once the drive begins to fill up.

    With SSDs it's erase block size that matters most. When a page of memory on a SSD device is being overwritten, the page has to be erased first, and then its contents rewritten.

    Of course, if your OS implements SCSI PUNCH or ATA TRIM, then as soon as a block is freed your SSD can have the memory erased ahead of time, ready to use with minimal write latency.

    Most SSDs use an erase block size of 128K, so you would want your entire filesystem's allocation blocks 128K-aligned. In the NTFS world, this would mean you need to align the disk partition, at least prior to Vista/Windows 2008.

    I believe MacOS automatically aligns partitions. This issue alone could explain why things might get so bad with Windows, let alone the ridiculously small 4K block size, or fragmentation issues that plague NTFS filesystems.

    And the ideal allocation cluster size is 128K (the default NTFS cluster size is 4K, which is not suitable; for best performance in such a scenario, change it to 64K).

    You can think of the performance issue that can occur as something like RAID stripe crossing.

    If you have two 4K blocks side by side and you need to overwrite just those two blocks: if they are on different 128K erase blocks, your SSD will have to write 256K of memory to change 8K of data.

    If those two NTFS blocks are on the same SSD erase block, your SSD only flushes and writes 128K of data.

    This is before we start thinking about wear-levelling algorithms, or the fact that an efficient SSD will likely keep a pool of pre-erased blocks to satisfy write requests against.

    As long as there are blocks left that were erased ahead of time - freed either by TRIM or by being overwritten and remapped - there is no reason to require an extra in-line erase (which would increase write latency).

    So the pool of pre-erased pages is important, and the critical elements affecting write performance are (1) the time to update header blocks to re-map disk blocks to unused flash pages, and (2) the time to copy the erase block, with its changes, to the newly mapped location.
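
    The 256K-versus-128K point can be checked with a little arithmetic, using the erase-block and cluster sizes assumed above:

        ERASE_BLOCK = 128 * 1024   # typical erase block size
        CLUSTER = 4 * 1024         # default NTFS cluster size

        def erase_blocks_touched(offset, length):
            """How many erase blocks a write of `length` bytes at `offset` spans."""
            first = offset // ERASE_BLOCK
            last = (offset + length - 1) // ERASE_BLOCK
            return last - first + 1

        # Two adjacent 4K clusters at an aligned offset: one erase block rewritten.
        print(erase_blocks_touched(0, 2 * CLUSTER))                      # -> 1
        # The same 8K write straddling an erase-block boundary: two erase blocks,
        # i.e. up to 256K rewritten to change 8K of data.
        print(erase_blocks_touched(ERASE_BLOCK - CLUSTER, 2 * CLUSTER))  # -> 2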

  • TRIM equivalent (Score:4, Interesting)

    by m.dillon ( 147925 ) on Monday July 05, 2010 @02:22AM (#32796690) Homepage

    All SSDs have a bit more storage than their rating. Partitioning a little less space on a vendor-fresh drive can double or triple the extra storage available to the SSD's internal wear leveling algorithms. For all intents and purposes this gives you the equivalent of TRIM without having to rely on the OS and filesystem supporting it. In fact, it could conceivably give you better performance than TRIM, because you don't really know how efficient the TRIM implementation is in either the OS or the SSD, and because TRIM is a serialized command it cannot be run concurrently with read or write IOs. There are a lot of moving parts when it comes to using TRIM properly. Systems are probably better off not using TRIM at all, frankly.

    In case people haven't figured it out, this is one reason why Intel chose multiples of 40G for their low-end SSDs. Their 40G SSD competes against 32G SSDs from other vendors. Their 80G SSD competes against 64G SSDs from other vendors. You can choose nominal performance by utilizing 100% of the advertised space or you can choose to improve upon the already excellent Intel wear leveling algorithms simply by partitioning it for (e.g.) 32G instead of 40G.

    We're already seeing well over 200TB in endurance from Intel's 40G drives partitioned for 32G. Intel lists the endurance for their 40G drives at 35TB. I'm afraid I don't have comparative numbers for when all 40G is used, but I am already very impressed when 32G is partitioned for use out of the 40G available.

    Unfortunately it is nearly impossible to stress test an SSD and get results that are even remotely related to the real world, since saturated write bandwidth eventually causes erase stalls when the firmware can no longer catch up. In real-world operation write bandwidth is not pegged 100% of the time and the drive can pre-erase space. Testing this stuff takes months and months.

    Also, please nobody try to compare USB sticks against real (SATA) SSDs. SSDs have real wear leveling algorithms and enough RAM cache to do fairly efficient write combining. USB sticks have minimal wear leveling and basically no RAM cache to deal with write combining.

    -Matt
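
    A minimal sketch of that under-partitioning approach; the device path, the choice of GNU parted on Linux, and the exact sizes are assumptions, and like any repartitioning this wipes the drive:

        import subprocess

        DEVICE = "/dev/sdX"   # placeholder SSD device node
        ADVERTISED_GB = 40    # e.g. a 40GB drive...
        PARTITION_GB = 32     # ...partitioned as if it were 32GB

        # Create a single partition covering only part of the drive; the remaining
        # space is never written, so the controller keeps it as extra spare area.
        subprocess.run(
            ["parted", "--script", DEVICE,
             "mklabel", "gpt",
             "mkpart", "primary", "1MiB", f"{PARTITION_GB}GiB"],
            check=True,
        )
        print(f"Partitioned {PARTITION_GB}GB of the advertised {ADVERTISED_GB}GB; "
              f"the rest stays unpartitioned as spare area.")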

  • Re:Flawed article (Score:2, Interesting)

    by ekran ( 79740 ) * on Monday July 05, 2010 @03:54AM (#32797272) Homepage

    I think that the biggest mistake in the test was that they wrote zeros to the drive first, which means that the blocks got allocated (dirty) and had to be read/rewritten with new data when the next phase of the test started. So, basically both tests are the same and it's no wonder they got about the same test results.

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...