
The Curious Case of SSD Performance In OS X 205

Posted by timothy
from the and-mysteriouser dept.
mr_sifter writes "As we've seen from previous coverage, TRIM support is vital to help SSDs maintain performance over extended periods of time — while Microsoft and the SSD manufacturers have publicized its inclusion in Windows 7, Apple has been silent on whether OS X will support it. bit-tech decided to see how SSD performance in OS X is affected by extended use — and the results, at least with the Macbook Air, are startling. The drive doesn't seem to suffer very much at all, even after huge amounts of data have been written to it."


  • by whoever57 (658626) on Sunday July 04, 2010 @07:23PM (#32794844) Journal
    Consider this statement:

    Consider the Vertex: without TRIM, and when used, its sequential read speed for 1,024KB files is 137MB/sec; the Macbook Air manages 105MB/sec. With TRIM, the Vertex manages 258MB/sec in this same test.

    According to their tests, TRIM has a big impact on read speeds, yet according to their explanation, TRIM should only have a significant effect on write speeds.

  • by fuzzyfuzzyfungus (1223518) on Sunday July 04, 2010 @07:29PM (#32794882) Journal
    The complicating factor, when trying to make any useful statements about "What Operating System Whatever does on SSDs" is that SSDs are totally free to do whatever crazy stuff internally, so long as they can present a coherent block device abstraction over the IDE/SATA/etc. bus.

    TRIM happens to make the design problem easier, because it allows you to throw data on the floor when the OS says it is no longer needed, rather than having to treat everything that isn't explicitly deleted as still good. However, before TRIM was available, manufacturers tended to do their best to design around its absence. This generally had a cost: monetary (lots of reserve flash driving up the price), in performance (the drive either starting fast and taking a nosedive, or just never being all that fast), or in predictability (a given write operation could take milliseconds or literal seconds, depending on the drive's state).
  • by Jeff- (95113) on Sunday July 04, 2010 @08:37PM (#32795082) Homepage

    You're missing something.

    Erase blocks and data blocks are not the same size. The block size is the smallest atomic unit the operating system can write to. The erase block size is the smallest atomic unit the SSD can erase. Erase blocks typically contain hundreds of data blocks; they must be relatively large so they can be electrically isolated. The SSD maintains a map from a linear block address space to physical block addresses. The SSD may also maintain a map of which blocks within an erase block are valid, and fills them as new writes come in.

    Without TRIM, once written, the constituent blocks within an erase block are always considered valid. When one block in the erase block is overwritten, the whole thing must be RMW'd to a new place. With TRIM the drive controller can be smarter and only relocate those blocks that still maintain a valid mapping. This can drastically reduce the overhead on a well used drive.
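    The difference the parent describes can be sketched in a few lines. This is a hypothetical model (the page counts and the `pages_copied_on_rmw` helper are illustrative, not any real controller's logic): an erase block holds many pages with a valid-page bitmap, and rewriting one page forces the controller to relocate everything it still considers valid.

    ```python
    # Hypothetical model of erase-block relocation, with and without TRIM.
    # Numbers are illustrative; real drives often have 64-256 pages per block.

    def pages_copied_on_rmw(written, trimmed):
        """Pages the controller must relocate when rewriting one page.

        written -- set of page indices ever written in the erase block
        trimmed -- set of page indices the OS has TRIMmed (marked deleted)
        """
        # Without TRIM, trimmed is empty: everything ever written is "valid"
        # and must be read-modify-written to a fresh erase block.
        return len(written - trimmed)

    block = set(range(128))            # erase block completely full
    deleted = set(range(100))          # OS deleted files covering 100 pages

    print(pages_copied_on_rmw(block, set()))      # no TRIM: all 128 pages moved
    print(pages_copied_on_rmw(block, deleted))    # with TRIM: only 28 pages moved
    ```

    With TRIM, the controller copies a fraction of the pages and can pre-erase the rest, which is exactly the "drastically reduced overhead" the parent mentions.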

  • by Nerdfest (867930) on Sunday July 04, 2010 @09:11PM (#32795228)

    I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.

    That's because like the iPad, OS X is magical.

  • by AHuxley (892839) on Sunday July 04, 2010 @09:15PM (#32795244) Homepage Journal
    How are Mac users with Mercury Extreme SSDs or the Mushkin etc. units doing?
    They might be a bit different from the units Apple found at a set price point to maximize profits.
    Most of the Apple user sites seem to like the idea of a manual ~TRIM: removing all the data, zeroing the drive, and copying the OS back. Thanks
  • by redelm (54142) on Sunday July 04, 2010 @10:37PM (#32795666) Homepage
    TRIM just seems to be yet another abstruse abstraction layer. Why not just allow filesystems (HPFS, UFS, ext[234]) to have large blocksizes?

    Yes, I know the Intel MMU pages are either 4k or 4M. And people like "saving disk" since on average half a blocksize is wasted per file. But 4k is a tiny blocksize, set IIRC for newspools that few use. It only wastes 10 MB on a 5,000 file/dir system. That is negligible!

    A 128 kB blocksize to match the hardware bs would "waste" a more reasonable 320 MB. Only 1% of the minimum 32 GB SSD.
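    The parent's figures check out if you assume half a block wasted per file on average (the `wasted_mb` helper here is just for illustration):

    ```python
    # Average internal fragmentation: half a block per file.
    def wasted_mb(n_files, block_kb):
        return n_files * (block_kb / 2) / 1024

    print(wasted_mb(5000, 4))     # 4 kB blocks:   ~9.8 MB ("10 MB")
    print(wasted_mb(5000, 128))   # 128 kB blocks: 312.5 MB (rounded up to "320 MB")
    ```

    312.5 MB out of a 32 GB drive is indeed about 1%.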

  • Re:Bad Summary (Score:3, Insightful)

    by billcopc (196330) <vrillco@yahoo.com> on Monday July 05, 2010 @12:17AM (#32796322) Homepage

    Yes and no. Wear-leveling happens with or without TRIM, but what TRIM does is tell the SSD controller that a block can be treated as "virgin", meaning it can be overwritten in a single pass. This is an optimization as normally one must read the contents of an entire block, combine it with the incoming data, erase the block on-disk and finally write the newly-merged data.

    Contrary to popular ignorance, the slowdown is not caused by "fragmentation" — that's backwards: when the drive is clean, it is cheating by skipping part of the write cycle. In other words, an SSD's dirty performance is its normal, sustainable speed. When it is clean, it can go faster because it is being LAZY, short-cutting the Read-Erase-Update cycle. For obvious reasons, most SSD vendors, particularly in the consumer segment, advertise the maximum speed, when really the honest thing would be to advertise the minimum speed.
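    The fast path versus the full cycle can be summed up with a toy timing model. The latencies below are made up for illustration, not taken from any datasheet; the point is only the shape of the cost:

    ```python
    # Illustrative (invented) per-operation latencies in microseconds.
    READ_US, ERASE_US, PROGRAM_US = 50, 2000, 900

    def write_cost_us(block_is_virgin):
        if block_is_virgin:
            return PROGRAM_US                     # clean block: just program it
        return READ_US + ERASE_US + PROGRAM_US    # dirty block: full Read-Erase-Update

    print(write_cost_us(True))    # 900
    print(write_cost_us(False))   # 2950
    ```

    Whatever the real numbers are for a given drive, the erase step dominates, which is why the "dirty" speed is the sustainable one and the "virgin" speed is the lazy shortcut.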
