The Curious Case of SSD Performance In OS X
mr_sifter writes "As we've seen from previous coverage, TRIM support is vital to help SSDs maintain performance over extended periods of time — while Microsoft and the SSD manufacturers have publicized its inclusion in Windows 7, Apple has been silent on whether OS X will support it. bit-tech decided to see how SSD performance in OS X is affected by extended use — and the results, at least with the MacBook Air, are startling. The drive doesn't seem to suffer very much at all, even after huge amounts of data have been written to it."
Wow, a pro Mac story (Score:5, Funny)
That is startling!
Re: (Score:2)
Yes, as opposed to a Mac Pro story which would be, well, less so.
Re:Wow, a pro Mac story (Score:5, Informative)
An inaccurate pro-Mac story too, by the looks of it. For the Mac test, they didn't properly erase the SSD to its initial state - instead they used a tool that filled the disk with zeros, marking the entire drive as in-use. It's no surprise that they failed to see any performance degradation as the drive filled up: the performance was already maximally degraded from the start!
Re:Bad Summary (Score:5, Informative)
Actually TRIM ensures there are free blocks of flash ready to be written quickly, otherwise they have to be erased first even if the OS thinks that particular logical address wasn't being used, which has a substantial performance penalty.
Re:Bad Summary (Score:5, Funny)
Yes, and as we know, solid state disks lose performance when files are fragmented, because, when the disk spins, err, i mean the electrons, the heat goes around, ah, fuck it.
Funny but not totally correct (Score:4, Informative)
Yes, and as we know, solid state disks lose performance when files are fragmented, because, when the disk spins, err, i mean the electrons, the heat goes around, ah, fuck it.
Your reply was witty, but all of the EEs here on Slashdot will tell you that trying to write to random addresses in, for example, DDR2 memory when you could have written the same data to contiguous addresses is a very bad idea. The only reason programmers don't feel this difference quite so much is that the CPU's cache hierarchy is babying them.
Re: (Score:2)
Clockwise in the northern hemisphere, and anti-clockwise in the southern hemisphere (or is that backwards, I can never remember).
Re:Bad Summary (Score:5, Interesting)
I have to wonder why they didn't boot into Windows on the same PC and repeat the tests. That would have identified whether it was a hardware issue or a software (filesystem) issue that caused the irregularity.
Re: (Score:2)
No, using a Windows partition on the Mac and NTFS as the file system via Boot Camp. It would eliminate hardware differences and allow an Apples-to-Apples (no pun intended) comparison between OSes. It's trivial to set up such a configuration on a Mac.
Re: (Score:3, Informative)
The point is that it would allow them to isolate whether the *drive* itself in the MacBook Air is somehow immune to the issue, or whether it's something about the operating system/file system that prevents the issue from occurring.
I.e., you do the tests in OS X, re-zero the drive, install Windows XP/Vista and run them again (on NTFS on the Mac's SSD), then finally install Windows 7, also to a clean drive under NTFS, and run the tests a final time (this last test would have TRIM enabled, of course).
Chances are that what i
Re: (Score:2)
It should have a negligible effect on an SSD, as each OS is in its own logical container. Things like the outside edge vs. inside edge of a platter affecting seek speed are meaningless for solid state drives.
I also don't buy that the drives in the MacBook Air mentioned in the article may be such poor performers that they would skew the results; although they were definitely not top of their class, they were far above drives that are actively experiencing this problem, meaning the drives performing at th
Re: (Score:3, Insightful)
Yes and no. Wear-leveling happens with or without TRIM, but what TRIM does is tell the SSD controller that a block can be treated as "virgin", meaning it can be overwritten in a single pass. This is an optimization as normally one must read the contents of an entire block, combine it with the incoming data, erase the block on-disk and finally write the newly-merged data.
Contrary to popular ignorance, the slowdown is not caused by "fragmentation" - that's backwards: when the drive is clean it is cheating b
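To make the cost difference concrete, here's a toy model of the write path described above. The timings and the pages-per-erase-block ratio are made up for illustration; no vendor's firmware works exactly like this:

```python
# Toy model: overwriting a page in a TRIMmed ("virgin") erase block is a single
# program, while a dirty block forces read-merge-erase-rewrite.
ERASE_BLOCK_PAGES = 128                        # assumed pages per erase block
T_READ, T_PROGRAM, T_ERASE = 0.05, 0.2, 2.0    # ms per operation (illustrative)

def overwrite_cost_ms(block_is_virgin: bool) -> float:
    """Cost of overwriting one page inside an erase block."""
    if block_is_virgin:
        return T_PROGRAM                       # single-pass program
    # Dirty block: read old contents, merge in RAM, erase, rewrite everything.
    return (ERASE_BLOCK_PAGES * T_READ
            + T_ERASE
            + ERASE_BLOCK_PAGES * T_PROGRAM)

print(f"virgin block: {overwrite_cost_ms(True):.2f} ms")   # 0.20
print(f"dirty block:  {overwrite_cost_ms(False):.2f} ms")  # 34.00
```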
Re:Bad Summary (Score:5, Informative)
Re: (Score:2)
So basically, if I understand it well, this is in effect an anti-fragmentation measure. When files get fragmented, performance starts to degrade. In a way I have the feeling that if you match the block size of files written to the block size of the SSD (optionally padding with zeros to fill it up) you could get quite a performance boost, and only start using the unused, zeroed-out fragments from the blocks when the drive starts to fill up.
Or would it be possible use the anti-fragmentation measures built in
Re: (Score:2)
The latter is correct. Filesystems that Mac and Linux use don't have fragmentation issues anywhere near as extreme as NTFS does.
Re: (Score:2)
That gives me the idea that an SSD should be tested with different file systems as well to see the effects: ext2 (no journaling), ext3/4 (ext2 with journaling, often seen as an issue because of the many writes), Apple's HFS+, and NTFS, and then as far as possible with and without TRIM enabled. Could be interesting. Preferably identical hardware, each file system with its native OS.
That said, I don't recall having seen any story here about such tests: which file system is best for your SSD (performance
Re: (Score:2)
I did add noatime to ext4 mount options for my SSD.
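For reference, a hypothetical /etc/fstab line along those lines; the device name is an example only, and "discard" (online TRIM) needs both kernel and drive support:

```
# hypothetical entry -- adjust the device to your system
/dev/sda1   /   ext4   defaults,noatime,discard   0   1
```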
Re: (Score:3, Interesting)
I have the feeling that if you match the block size of files written to the block size of the SSD (optionally padding with zeros to fill it up) you could get quite a performance boost, and only start using the unused, zeroed-out fragments from the blocks when the drive starts to fill up.
With SSDs it's erase block size that matters most. When a page of memory on a SSD device is being overwritten, the page has to be erased first, and then its contents rewritten.
Of course if your OS implements SCSI PU
Re: (Score:2)
When files get fragmented performance starts to degrade
Nope, fragmented files are not a problem at all for flash. The problem is fragmented free space. Flash is partitioned into cells. Once a cell is blank (all ones), you can write to it once, then you have to erase it. Imagine that your cell size is 64KB and you write 32KB files all over it, then delete every alternate one. Now, before you can write any files, you have to erase a cell. If you want to write a 1KB file, you have to read a 32KB file into a buffer somewhere, erase a 64KB block, rewrite the 3
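The parent's worked example, spelled out as a toy simulation (the 64KB/32KB numbers come from the comment, not any real drive):

```python
# 64KB erase cells filled with 32KB files, then every alternate file deleted:
# half the space is free, yet no cell is blank, so every new write needs an erase.
CELL, FILE = 64 * 1024, 32 * 1024

cells = [[f"file{2*i}", f"file{2*i+1}"] for i in range(4)]  # each cell: two files
for cell in cells:
    del cell[1]                         # delete every alternate file

live = sum(len(c) * FILE for c in cells)
blank_cells = sum(1 for c in cells if not c)
print(f"free space: {(len(cells) * CELL - live) // 1024} KB, blank cells: {blank_cells}")
# free space: 128 KB, blank cells: 0 -- so even a 1KB write must first read the
# surviving 32KB file out, erase a 64KB cell, then rewrite 32KB plus the new 1KB.
```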
Re: (Score:2)
When files get fragmented performance starts to degrade
Nope, fragmented files are not a problem at all for flash.
By "fragmented" in this case I meant fragmented over many partial cells instead of using only complete cells except for the last one; not fragmented in the platter sense, where it means the file is scattered all over the disk.
Re: (Score:2)
Block Size vs. Storage Efficiency Problem (Score:2)
Definitely you can do that - the problem you run into is the tradeoff between block size and storage efficiency, because you lose space at the end of a block of data. And you care about it more on small expensive disks than on big cheap ones. Some file systems deal with this by having two different block sizes, the smaller one used for frags, but even so, the bigger the big block size, the more you lose.
However, I'd expect Apple to do a better job of tuning their file systems to the media type than Windo
Re: (Score:2)
Euh, the SSD doesn't know anything about the filesystem layout, and thus about files.
Re: (Score:2)
Therefore, writing to one of the files in that page means that the drive has to re-write each file, and if one of the affected files is spread across several pages, then those pages have to get re-written, and so on.
What? Why? Yes, you can only erase an SSD a block at a time, but you don't have to re-write any other blocks. You just read the affected block into memory, erase the block, update the copy in memory with the changed data, and write it back. The only other data you have to change is the block lookup table used for wear leveling, which maps a logical block ID to a physical block of memory.
I wonder if they've done something non-standard but clever to mimic the functionality of TRIM? Like explicitly overwriti
Mod parent up (Score:2)
There's a lot of misinformation in these comments, and parent actually seems to know what they're talking about...
Re: (Score:2)
Actually, you can clear bits to 0 at any time, bit by bit if desired. You can only set bits back to 1 sector by sector. That erase process is not fast. Because of that, we would like to queue up sectors for erase as soon as they're no longer needed. However, most filesystems just mark a block as free and assume it can be freely overwritten without penalty.
TRIM informs the SSD that a given block of data (which is likely smaller than the erase sector) will not be needed again. That way, when the erase sector
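A minimal sketch of that program/erase asymmetry (a tiny toy sector; real erase units are hundreds of KB):

```python
# Programming can clear bits (1 -> 0) at any time, bit by bit; setting bits
# back to 1 requires erasing a whole sector, which is the slow operation.
SECTOR_BITS = 16   # toy size for demonstration

def erase(sector):
    sector[:] = [1] * len(sector)              # slow on real flash

def program(sector, new_bits):
    """Succeeds only if no bit needs to go 0 -> 1."""
    if any(new > old for old, new in zip(sector, new_bits)):
        return False                           # would require an erase first
    sector[:] = new_bits
    return True

s = [1] * SECTOR_BITS
assert program(s, [0] * SECTOR_BITS)           # 1 -> 0: always allowed
assert not program(s, [1] * SECTOR_BITS)       # 0 -> 1: refused without erase
erase(s)
assert program(s, [1, 0] * (SECTOR_BITS // 2)) # fresh sector programs fine
```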
OS X has nothing to do with it (Score:3, Interesting)
You can't really draw any conclusions from these results. In one particular Mac, Apple ships a particular Samsung SSD that doesn't degrade, probably because its "clean" performance is already terrible (you might think of it as "pre-degraded"). In other Macs Apple ships Toshiba SSDs that may have completely different behavior. If you put a good SSD (e.g. Intel) in your Mac the behavior will be completely different.
Re:OS X has nothing to do with it (Score:4, Interesting)
FWIW, I've been using an Intel X-25M 160 GB SSD on my Mac Pro for over half a year, and my read/write speeds are essentially unchanged from when I got it... this is using xbench to check.
Re:OS X has nothing to do with it (Score:4, Informative)
Strange reasoning. In any case, the Intel SSDs appear to use a combination of static and dynamic wear leveling, and it seems to do a really good job. A really, really good job. I have over a half dozen of the 40G drives and have not noticed any reduction in read or write performance.
There seem to be dozens of different write combining and wear leveling implementations across vendors. Dozens and dozens. Variations between vendors are significant, and even variations between revisions from the same vendor can be significant. Drives sold just two years ago are likely to have primitive wear leveling and write combining versus drives sold today. Vendor technology can be years apart.
I guess you can thank MLC flash for the radical improvement in wear leveling and write combining algorithms over the last few years. Vendors can't really cheat when they use MLC flash... the algorithms have to work properly or the device has an early death due to the limited cell durability.
Personally speaking, I am very confident about Intel's technology. OCZ seems to be pretty good too but it is also full of hacks and does not properly support SATA NCQ. I'm sure there are some other good vendor technologies out there but there are also definitely some very bad ones that are years behind Intel. In the SSD space, the quality of the software matters a lot.
-Matt
Re:OS X has nothing to do with it (Score:5, Informative)
"If you put a good SSD (e.g. Intel) in your Mac the behavior will be completely different."
Completely different from what?
I have put an Intel SSD in a Mac, in fact 2 in a RAID 0 configuration, and it doesn't behave like you are insinuating it does.
The performance of the Samsung drives does suck but it isn't because they are "pre-degraded".
Re: (Score:2)
OTOH, MS Windows is designed to be flexible enough to run whatever hardware is thrown at it. The downside is that a driver has to be written for every single piece of ha
Re: (Score:3, Informative)
TRIM performance is directly tied to an SSD's erase performance. All TRIM does is tell the SSD which blocks are free to erase in its spare time. EVERY write to a NAND cell requires an erase, so if the SSD can perform that step while the system is idle, the next write to that cell can be performed immediately.
TRIM allows SSDs to "cheat" by doing some of the work ahead of time. The dirty performance is the normal, sustainable speed you should expect from your SSD. Any gains from TRIM are the result of pre
Re: (Score:2)
You're misunderstanding.
The SSD doesn't sit there erasing blocks. It merely marks them as erased. The problem is that a write to a block requires an entire cell to be written. If there's data in that cell, the SSD has to read the data out, assemble a new layout for the cell with the new data in it, and then write that back. TRIM allows the SSD to see that there's no data in the cell, and hence skip the "read the data out" and "reconstruct the contents of the cell" steps.
Re: (Score:2)
Whoa! It can go either way: an erased flash block is all ones or all zeros, depending on convention. Assuming the (less common) all-zeros convention, an erased block can have bits set one at a time, but any time a bit must be unset, the block must be erased. The fastest write performance occurs when the block is already erased.
The next fastest is when it can be erased and then just the new data written. The slowest is to read, erase, and then write the whole flash block.
It is entirely permissible and even sen
Re: (Score:2)
Yeah, they stated that they wiped the drive with zeros using Disk Utility. Given that OS X doesn't support TRIM, that means EVERY page is dirty, thus EVERY write requires a read-merge-write cycle, and thus the performance would never degrade since it's already as shitty as it can possibly get. Maybe if they'd TRIMmed the whole drive with a decent OS and started from there without zeroing, they'd have gotten significantly different results.
Re: (Score:2)
"maybe if they'd TRIMmed the whole drive with a decent OS and started from there without zeroing they'd have gotten significantly different results."
Maybe. Maybe not. I own a Mac with a Samsung drive. Its performance sucks. It sucks no less than any benchmark I've seen published on it, though.
Huh? What kind of "logic" is this?? (Score:2)
You're saying the "Samsung SSD doesn't degrade because its initial performance is already so terrible"?!
That makes no sense at all. Think about it. Even if the drive was slow enough, out of the box, that a read operation took 10 seconds to complete, that should result in it being more like 20 or 30 seconds when the drive is all fragmented up.
Unless a drive had 0 performance (never returned a result when you did a read or write), it should be possible to measure it degrading in performance from a clean set
Re: (Score:2)
Perhaps you should have posted that comment in the right place.
I've never seen a problem (Score:5, Informative)
I've got a Vertex 60 in a white unibody MacBook and it works fine; boot is 7-9 seconds, and apps load almost instantly even if I start 10 at the same time.
I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.
Re:I've never seen a problem (Score:5, Informative)
The impact of the TRIM command is vastly overrated. It is effective on "naive" devices that don't allocate a reserve block pool and therefore have to erase before doing every write. On a modern SSD, the disk controller reserves 5-10% of the physical blocks (beyond those that the host can see) as an extended block pool. These blocks are always known to be free (since they're outside the scope of the OS) and are therefore preemptively erased. So, when your OS overwrites a previously written data block, one of these pre-erased blocks is actually written to, and the old block is put in the reserve pool for erasing later at the device's leisure.
The one case where this isn't true is if you're constantly writing gigs of data to an empty drive. With TRIM commands, most of your drive may have been pre-erased, whereas without it you may overrun the reserve pool's size and then will be waiting on block erase. For normal desktop users, this is a pathological case. In servers and people who do a lot of heavy video editing it may matter a lot more.
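A rough sketch of the reserve-pool behaviour described above (sizes and structure are illustrative, not any real controller's design):

```python
# Toy SSD: overwrites are steered to pre-erased spare blocks; the old block is
# queued for background erase. The pool can be overrun by sustained writes.
from collections import deque

class ToySSD:
    def __init__(self, visible_blocks=100, spare_fraction=0.07):
        spare = int(visible_blocks * spare_fraction)
        self.map = {lba: lba for lba in range(visible_blocks)}  # LBA -> physical
        self.pre_erased = deque(range(visible_blocks, visible_blocks + spare))
        self.dirty = deque()          # retired blocks awaiting background erase

    def overwrite(self, lba: int) -> str:
        if self.pre_erased:                        # fast path: reserve pool
            old, new = self.map[lba], self.pre_erased.popleft()
            self.map[lba] = new
            self.dirty.append(old)
            return "fast (pre-erased spare used)"
        return "slow (stalled on erase)"           # pool overrun under heavy writes

    def idle_gc(self):
        while self.dirty:                          # erase retired blocks at leisure
            self.pre_erased.append(self.dirty.popleft())

ssd = ToySSD()
print(ssd.overwrite(3))                            # fast while spares last
```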
Re: (Score:2)
When you say "naive devices" are you referring to the controller? (sorry I'm not as well informed about storage as I'd like)
Re:I've never seen a problem (Score:5, Insightful)
You're missing something.
Erase blocks and data blocks are not the same size. The block size is the smallest atomic unit the operating system can write to. The erase block size is the smallest atomic unit the SSD can erase. Erase blocks typically contain hundreds of data blocks. They must be relatively larger so they can be electrically isolated. The SSD maintains a map from a linear logical block address space to physical block addresses. The SSD may also maintain a map of which blocks within an erase block are valid, and fills them as new writes come in.
Without TRIM, once written, the constituent blocks within an erase block are always considered valid. When one block in the erase block is overwritten, the whole thing must be RMW'd to a new place. With TRIM the drive controller can be smarter and only relocate those blocks that still maintain a valid mapping. This can drastically reduce the overhead on a well used drive.
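To put numbers on that relocation overhead, a sketch (the 256 blocks-per-erase-block ratio is assumed purely for illustration):

```python
# Reclaiming an erase block means copying every block still marked valid.
# Without TRIM, everything ever written stays "valid" and must be relocated.
BLOCKS_PER_ERASE_BLOCK = 256          # assumed; real ratios vary by drive

def relocation_copies(valid_map: list) -> int:
    """Number of data blocks that must be rewritten to reclaim one erase block."""
    return sum(valid_map)

written = [True] * BLOCKS_PER_ERASE_BLOCK

# Without TRIM: deleted files still look valid -> copy all 256 blocks.
print(relocation_copies(written))                 # 256

# With TRIM: say the OS trimmed 200 of them -> copy only the 56 live blocks.
trimmed = written[:]
for i in range(200):
    trimmed[i] = False
print(relocation_copies(trimmed))                 # 56
```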
Re: (Score:2)
Without TRIM, once written, the constituent blocks within an erase block are always considered valid.
I think that many folks are confusing logical sectors with physical sectors.
The device is not reading a block, erasing it, making some changes to some of its sectors, and then writing back to the same now-erased block. It is always writing to some different block.
For simplicity's sake, let's say that each block contains 4 sectors: A, B, C, and D. When the file system writes to sector A, the SSD reads the entire ABCD block, updates A, and then grabs a virgin block to write ABCD to. The original ABCD block i
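The ABCD walk-through as a toy sketch: remap-on-write, with the old block merely queued for erase, never rewritten in place (the block/sector sizes are the comment's toy numbers):

```python
# Logical write to sector A: read whole block, patch A in RAM, program a fresh
# physical block, update the logical-to-physical map, retire the old block.
flash = {0: ["A0", "B0", "C0", "D0"], 1: None}   # physical block -> sectors (None = virgin)
l2p = {"ABCD": 0}                                 # logical block -> physical block
erase_queue = []

def write_sector(logical: str, index: int, data: str) -> None:
    old_phys = l2p[logical]
    block = list(flash[old_phys])       # read the entire block
    block[index] = data                 # update the one sector in RAM
    new_phys = next(p for p, v in flash.items() if v is None)  # grab a virgin block
    flash[new_phys] = block             # single-pass program
    l2p[logical] = new_phys             # remap
    erase_queue.append(old_phys)        # old block erased later, in the background

write_sector("ABCD", 0, "A1")
print(l2p, erase_queue)                 # {'ABCD': 1} [0]
```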
Re: (Score:2)
Perhaps the best way to do this is to define some terms.
One term I will define is the logical block. Many interfaces label this simply a block: it is the minimum amount of addressable data. It is what the kernel sees as the block size of the disk drive, and it is also used in the drive communication protocols. File systems also often use it as a sector size, but that is completely independent of the size of a logical block in disk signaling, which is what most kernels view the device in terms of. From this point forward I
Re: (Score:3, Informative)
I know TRIM doesn't work yet in OS X but the drive seems to take care of itself just fine.
It probably is taking care of itself. Some OCZ drives, including the Vertex series, can have firmware which forgoes TRIM support in favor of some form of garbage collection.
Lazy reporting, try different OSes (Score:4, Interesting)
Maybe they should take the drive over to a Windows XP and Windows 7 box and see if it's the drive hardware being resilient or the OS. The G1 Intel drives don't drop a ton after use, and they don't support TRIM. It looks like the Intel G1 flash is artificially capped by the controller. It could be similar here.
Re: (Score:3, Informative)
TFA stated this is actually a follow-up test on tests done on Windows, where they found TRIM to have a large effect on the drive's performance. This included a drive they suspect to be technically similar to the one in the Mac (same manufacturer, age) - though unfortunately no direct comparison of actual hardware.
This is possible. (Score:5, Informative)
Intel drives actually use the whole drive for scratch space, until a sector is written to; then, without TRIM, the drive has only its tiny bit of extra scratch space to work with. That's why Intel drives degrade so badly without TRIM.
Indilinx Barefoot controllers on the other hand ONLY use their scratch space, they never use the normal writing space of the drive as scratch space.
See here.
http://www.anandtech.com/show/2829/9
While it does show the synthetic tests degrading with lack of trim, even more than the intel drives, the real world use tests show they suffer almost 0% loss in performance.
Depending on which controller the drive is using, TRIM could make almost no difference or a world of difference.
Anand explains it best:
"Only the Indilinx drives lose an appreciable amount of performance in the sequential write test, but they are the only drives to not lose any performance in the more real-world PCMark Vantage HDD suite. Although not displayed here, the overall PCMark Vantage score takes an even smaller hit on Indilinx drives. This could mean that in the real world, Indilinx drives stand to gain the least from TRIM support. This is possibly due to Indilinx using a largely static LBA mapping scheme; the only spare area is then the 6.25% outside of user space regardless of how used the drive is."
I didn't understand the 'benchmark' (Score:4, Interesting)
FTA: "We simulated this by copying 100GB of mixed files (OS files, multiple game installs, MP3s and larger video files ) to the SSD, deleting them, and then repeating the process ten times,"
Surely you should be deleting half the files - every other file - then rewriting them. If you copy a bunch of files then delete them all, you're leaving the drive in pretty much the same state as it was at the start; the only difference between passes will be due to the wear levelling algorithms inside the drive. Overall performance at the end will mostly be a result of the initial condition of the drive, not what happened during the test.
Re: (Score:2)
No. When an OS deletes stuff, most drives do not know you've deleted stuff on the drive. All they know is the OS has said: "Write this on to that block".
The drives do not know that the newly written block means that a huge bunch of other blocks are no longer in use.
To take the extreme case, say you write to the entire drive and deleted the partition, the drive doesn't know anything about partitions - it just knows you've overwritten a few blocks, it doesn't know that almost all the blocks in the entire driv
Re: (Score:2, Interesting)
> I've been running Windows XP since beta2, and it really kicks ass. I don't
> have to recompile my kernel when I want to install an ethernet card, it
You didn't have to do this with Slackware in 1994.
> automatically detects it and installs the drivers no matter who the
> manufacturer is. Dual monitors? No chore with windows, get two video cards,
Windows 7 doesn't even do this with hardware slightly older than it.
Re: (Score:2)
You didn't have to do this with Slackware in 1994.
Yes you did. Linux didn't get kernel modules until around 1995 (and they weren't in common usage until years after that).
Windows 7 doesn't even do this with hardware slightly older than it.
Windows 7 installs perfectly on my old Dell Precision M60 laptop, released in 2003. Not only that, but it actually works properly with my docking station and external monitors - something none of the popular Linux distributions manage to do.
Two quotes stick out (Score:5, Interesting)
Apple's description of the zeroing format method we used fits the description of what we wanted in terms of resetting the SSD to a clean state
Zeroing is not the same operation as TRIM. TRIM marks a block as unused, and if you read it you'll either get random data, or zeros (probably the latter). Zeroing marks it as in-use, and if you read it you'll get zeros. The SSD's wear management algorithm will move the latter around as though it were real data, whereas it knows the former is "empty" so it won't bother (so the SSD will be faster). In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.
Secondly, the SSD in the Macbook Air really isn't very fast at all
Which strengthens the hypothesis that they were comparing one "full" state with another. Pop out the drive, TRIM the whole disk in another OS, and run the benchmark again. It'll probably be a lot faster. It wouldn't surprise me if installing the Mac OS at the factory caused every block on the SSD to be used at least once (e.g. a whole disk image was written), which would mean you'd already be at the worst possible performance degradation.
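The distinction in a nutshell, as a toy model of the drive's bookkeeping (the structure is illustrative; real FTL state is far more involved):

```python
# After zeroing, every LBA is *mapped and live*: it holds real zero data the
# wear-leveler must preserve. After TRIM, LBAs are unmapped and reads can be
# synthesized as zeros without anything being stored.
DRIVE_LBAS = 8

def after_zero_fill():
    # every LBA holds data the drive must track and relocate
    return {lba: {"mapped": True, "data": b"\x00" * 512} for lba in range(DRIVE_LBAS)}

def after_full_trim():
    # no LBA is mapped; writes land on virgin flash
    return {lba: {"mapped": False} for lba in range(DRIVE_LBAS)}

zeroed, trimmed = after_zero_fill(), after_full_trim()
print(sum(e["mapped"] for e in zeroed.values()), "LBAs live after zeroing")   # 8
print(sum(e["mapped"] for e in trimmed.values()), "LBAs live after TRIM")     # 0
```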
Re:Two quotes stick out (Score:5, Informative)
if you read it you'll either get random data, or zeros (probably the latter)
If you read a TRIMmed block directly, most drives will kick back zeroes. You can do this with hdparm -- particularly useful as a method to test if TRIM works (and it even uncovered a bug in ext4's TRIM implementation in data=writeback mode, where TRIM only works on metadata). Run hdparm -I on a SSD, and it'll actually say something along the lines of "Deterministic read ZEROs after TRIM" for most drives.
In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.
Very true. There are only two methods I know of to 'clean slate' a full drive -- either TRIM the entire thing (with a tool like hdparm -- this is tricky to get right) or run an ATA Secure Erase command. Most SSDs take the secure erase command and just blank every NAND chip they have (taking ~2 minutes compared to the multiple hours that rotational drives take for the Secure Erase command) -- I've done this on my X-25M and it works brilliantly.
Unless Apple's Disk Utility actually does a Secure Erase command (which is very unlikely), then their testing methodology is entirely flawed, and their 'resetting' of the drive instead made it behave as if it was entirely, completely, 100% filled to the brim.
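For the record, a sketch of that Secure Erase sequence driven from Python via hdparm on Linux. The device name and throwaway password are placeholders; the drive must support ATA Security and must not be "frozen" (check the Security section of `hdparm -I` output first), and this irrevocably destroys all data:

```python
# Issue an ATA Secure Erase through hdparm. Point DEV at the right drive!
import subprocess

DEV, PASS = "/dev/sdX", "erase-me"   # placeholders, not real values

def run(*args: str) -> None:
    subprocess.run(["hdparm", *args, DEV], check=True)

run("-I")                                               # inspect Security state
run("--user-master", "u", "--security-set-pass", PASS)  # set a temporary password
run("--user-master", "u", "--security-erase", PASS)     # issue SECURITY ERASE UNIT
```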
Re: (Score:2)
I haven't done it on a Mac, only on a PC under Linux (more or less following this documentation [kernel.org]). You may have to find an alternate tool to send the ATA Security commands to the drive, if hdparm isn't working.
Mind you, the drive has to support ATA Security commands (some may not) and has to be in an 'unfrozen' state (many BIOSes/EFI firmware freezes the drive at boot). This may mean you'd need to power cycle (disconnect/connect) the drive while the computer is running to unfreeze it (which, as long as the d
Re:Two quotes stick out (Score:4, Interesting)
Apple's description of the zeroing format method we used fits the description of what we wanted in terms of resetting the SSD to a clean state
Zeroing is not the same operation as TRIM. TRIM marks a block as unused, and if you read it you'll either get random data, or zeros (probably the latter). Zeroing marks it as in-use, and if you read it you'll get zeros. The SSD's wear management algorithm will move the latter around as though it were real data, whereas it knows the former is "empty" so it won't bother (so the SSD will be faster). In other words, they don't seem to be using a "clean state" at all, which would explain why there's no difference.
Not only that, but writing to all the free space of many SSDs will *drop* their IOPS performance, since the drive now has to track *all* sectors in the LBA remap table. This is especially true with Intel drives (even the 2nd gen). Additionally, without TRIM, most drives will then continue to track all LBAs for as long as they're used in that same Mac.
Secondly, the SSD in the Macbook Air really isn't very fast at all
A MacBook Air is just about the worst test of SSD performance, as its SATA link and other bus hardware are run in a much-reduced power mode, meaning the bottleneck is not the SSD at all. A worst-case degraded SSD in an Air will still be faster than the other bottleneck in that system.
Allyn Malventano, CTNC, USN
Storage Editor, PC Perspective
disk images are only of in-use blocks (Score:2)
It wouldn't surprise me if installing the Mac OS at the factory caused every block on the SSD to be used at least once (e.g. a whole disk image was written), which would mean you'd already be at the worst possible performance degradation.
Disk images on the Mac made by hdiutil (which is what Disk Utility is largely a front-end to) almost always copy only files or just the in-use blocks into a new image; the image is copied back bit-by-bit (for speed) and then the filesystem is expanded afterwards.
Re:Two quotes stick out (Score:5, Interesting)
The Macbook Air we received from Apple had previously been sent to other reviewers, so we first needed to get the SSD into a "clean" state. While we have an established way of doing this with Windows systems - using HDDerase to perform a low level format - we needed to find a way of doing this with OS X. The OS X installer actually allows you to load an app called Disk Utility, which can partition and format the drive - and one of the options is for a format which zeroes all the data.
According to Apple, "the "Zero all data" option... takes the erasure process to the next level by converting all binary in the empty portion of the disk to zeros, a state that might be described as digitally blank." The next level? Count. Us. In.
And the link goes to an Apple site that explains what zeroing the data does and how to do it.
Once we'd done this with the clean SSD, we then proceeded to give it a damn good thrashing. As with our Windows SSD testing, we filled the SSD with around 112GB of files from a USB hard disk - the files included OS files, game installs and media. We then deleted these files, then copied them across again, repeating the process ten times, so that we'd written over 1TB of data to the SSD.
So... they wrote over the whole disk with a format, then copied and deleted a bunch of files in hopes of getting full coverage of the disk (i.e. exactly what the zeroing format did). That's about three levels of abstraction above what they're trying to test.
Re: (Score:3, Interesting)
I read the Apple link (http://support.apple.com/kb/HT1820) and I don't see how it would be equivalent to TRIM on an SSD device.
Zeroing a device, as per the link, just writes zeroes to every block on it. From an SSD's point of view this means that every block is in use and just happens to contain zeros. Next time a request is made to write to a block it requires a (comparatively slow) read-modify-erase-write cycle. It won't make an SSD faster, but it may serve to securely erase the data on it (depending on i
Flawed article (Score:5, Informative)
The article writers made 2 major mistakes that cause their results to be meaningless.
1. They didn't secure erase the drive, which is what actually puts a drive back into a virgin state. They instead wrote zeroes to every sector, which means that the drive controller probably still thinks those zeroed out sectors are still in use.
2. The Samsung drive controller has a form of self cleanup that greatly reduces the need for TRIM.
3. Regardless, the SSD they used was slow as a dog and barely worth using over a HDD.
Re: (Score:2)
Seems SandForce and Intel are the current neat SSDs, until we get super-sized and next-gen controllers.
I wonder what's holding TRIM back on OS X? Can TRIM in Windows 7 nuke a drive?
How are the Linux TRIM-supporting branches doing?
Re: (Score:2)
Yes, it was a pain to read. I just wonder how good the OS X cleanup is and how long the Samsung hardware cleanup lasts?
The answer, of course, is that the OS X cleanup doesn't do jack-shit for the SSD, although it still may help out OS X. The SSD doesn't see a "cleanup operation" .. it sees "writing data that must be saved".
Re: (Score:2, Interesting)
I think that the biggest mistake in the test was that they wrote zeros to the drive first, which means that the blocks got allocated (dirty) and had to be read/rewritten with new data when the next phase of the test started. So, basically both tests are the same and it's no wonder they got about the same test results.
4. (Score:2)
They didn't test it on a high performance platform. The Air is, unsurprisingly, optimized as a low battery usage mobile device. This implies a number of things, none of them good for trying to do high speed SSD testing. A more appropriate platform would be a Mac Pro, and perhaps look at adding on a high speed SATA card for good measure.
Now I can appreciate testing on a non-performance system too, but not if your objective is to test TRIM. In that case you need to be on a high end system so that the system i
Something wrong with testing methodology? (Score:5, Insightful)
According to their tests, TRIM has a big impact on read speeds, yet according to their explanation, TRIM should only have a significant effect on write speeds.
My experience with SSD & a Mac (Score:2)
The HD in my Macbook Pro was failing and when I was shopping for parts, I noticed that PowerbookMedic (normally I'd just go buy a hard drive locally but I needed to replace the DVD drive as well so I figured why not just get it all in one go) had an SSD available at a reasonable price so I purchased it on the theory that whatever they were shipping was a decent fit for the Mac - they didn't have any maker info on the page but I figured that the only real difference between SSDs would be max bandwidth and an
Re: (Score:2)
Hmmm...I thought I had seen a real reference somewhere but I looked and all I can find are anecdotes. It *should* work that way :-). Manufacturers are not releasing much information on this internal fragmentation issue, especially the ones that have a real problem with it.
TRIM is a hack as well. It would probably be easier to just up the block size to match the native cell size but that would break a lot of existing filesystems since everybody standardized on 512 byte disk sectors ages ago.
Re: (Score:2)
The SSD is a level below the file system so if it marks something as "unused" that won't really affect how long the filesystem thinks the file is.
If you write a block of zeros to it and it gives you a block of zeros back, then you still have the zeros "in" your file. If internally it just checks a flag first to see if the block is marked unused and returns all zeros if it is, you can't tell the difference.
Re: (Score:2)
It could erase the page and still mark it as being in use. Thus the page isn't used for wear-levelling by the drive, but if you ever rewrite to that page, it'll be a lot faster since the whole block containing the page won't have to be rewritten. Of course that still means the drive runs out of unused blocks after a while, but it's still an improvement over a drive with no TRIM support at all (or a drive used in an OS without TRIM, e.g. Ubuntu 10.04).
How are the sandforce units going? (Score:3, Insightful)
They might be a bit different from the units Apple sourced at a set price point to max the profits.
Most of the Apple user sites seem to like the idea of an approximate TRIM via a removal of all data, zeroing, and a copy of the OS back. Thanks
Trim is a hack (Score:2, Interesting)
The problem is that a Flash based SSD needs to have a pool of unused blocks to work around the block-erase stupidity. However, trim only "solves" the problem when there is a good deal of free space on the drive anyway; when the drive nears full, it is useless. At the current pricing, people don't buy SSDs to keep them empty, and one would not expect an SSD to perform badly when full, as with rotating rust.
The solution is to provision enough extra blocks on the drive beyond the advertised capacity. While
Why not just open fs.blocksize to 64-256k? (Score:3, Insightful)
Yes, I know the Intel MMU pages are either 4k or 4M. And people like "saving disk", since on average half a blocksize is wasted per file. But 4k is a tiny blocksize, set IIRC for news spools that few use. It only wastes 10 MB on a 5,000 file/dir system. That is not enough to matter!
A 128 kB blocksize to match the hardware block size would "waste" a more reasonable 320 MB. Only 1% of the minimum 32 GB SSD.
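Checking the arithmetic (average waste is half a block per file):

```python
# Expected internal-fragmentation waste for 5,000 files/dirs at two block sizes.
files = 5_000
for bs_kb in (4, 128):
    waste_mb = files * (bs_kb / 2) / 1024
    print(f"{bs_kb:>3} kB blocks -> ~{waste_mb:.0f} MB wasted")
# 4 kB blocks   -> ~10 MB (the comment's figure)
# 128 kB blocks -> ~313 MB, close to the comment's 320 MB and ~1% of a 32 GB SSD
```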
Re: (Score:2)
I think it's reasonable but I'm sure there's a lot of code down in the FS layers that would break. No one has had to deal with a non-512 byte sector disk for a while.
Re: (Score:2)
I thought so, too, but the block size would have to be 512 kB... Not sure if it's worth it.
Re: (Score:2)
Trim has both purposes. If you trim a large file, it may have several physical erasable blocks become blank, thus allowing the drive to erase those whole physical blocks when idle, allowing fast writing to those physical blocks. It also tells it which of the logical blocks in a physical block are considered garbage, allowing the drive to skip them when reading that block, when rewriting that group of logical blocks.
The second purpose becomes redundant if the file system uses larger sector sizes, and the OS
TRIM equivalent (Score:4, Interesting)
All SSDs have a bit more storage than their rating. Partitioning a little less space on a vendor-fresh drive can double or triple the extra storage available to the SSD's internal wear leveling algorithms. For all intents and purposes this gives you the equivalent of TRIM without having to rely on the OS and filesystem supporting it. In fact, it could conceivably give you better performance than TRIM, because you don't really know how efficient the TRIM implementation is in either the OS or the SSD, and because TRIM is a serialized command that cannot be run concurrently with read or write IOs. There are a lot of moving parts when it comes to using TRIM properly. Systems are probably better off not using TRIM at all, frankly.
In case people haven't figured it out, this is one reason why Intel chose multiples of 40G for their low-end SSDs. Their 40G SSD competes against 32G SSDs from other vendors. Their 80G SSD competes against 64G SSDs from other vendors. You can choose nominal performance by utilizing 100% of the advertised space or you can choose to improve upon the already excellent Intel wear leveling algorithms simply by partitioning it for (e.g.) 32G instead of 40G.
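The spare-area arithmetic, roughly; this counts only the user-visible slice handed back, since the drive's built-in reserve isn't published:

```python
# Extra over-provisioning gained by partitioning a 40G drive down to 32G.
advertised_gb, partitioned_gb = 40, 32
extra_op = (advertised_gb - partitioned_gb) / advertised_gb
print(f"extra over-provisioning from the smaller partition: {extra_op:.0%}")  # 20%
```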
We're already seeing well over 200TB in endurance from Intel's 40G drives partitioned for 32G. Intel lists the endurance for their 40G drives at 35TB. I'm afraid I don't have comparative numbers for when all 40G is used, but I am already very impressed when 32G is partitioned for use out of the 40G available.
Unfortunately it is nearly impossible to stress test a SSD and get results that are even remotely related to the real world, since saturated write bandwidth eventually causes erase stalls when the firmware can no longer catch up. In real-world operation write bandwidth is not pegged 100% of the time and the drive can pre-erase space. Testing this stuff takes months and months.
Also, please nobody try to compare USB sticks against real (SATA) SSDs. SSDs have real wear leveling algorithms and enough ram cache to do fairly efficient write combining. USB sticks have minimal wear leveling and basically no ram cache to deal with write combining.
-Matt
Silly testing procedure (Score:2)
They used the "write zero" disk erase method, which in fact un-erases every NAND block of the disk, which in turn forces the disk to erase each block again as it writes. That's why they see such consistency of results: they are measuring the worst possible case, where the disk is forced onto the slow path for each block.
To erase NAND, you need to erase it by block, and the resulting block is full of 1's. Writing to NAND is a matter of writing zeros in places; you can't write 1's to NAND unless you erase it.
So
Re: (Score:2)
I think you give credit a bit generously here. And in any case, how do you explain 1) the linearity and consistency of the results and 2) the fact that it's consistently slower than Windows with TRIM support?
If the blocks were truly erased, at the very least the peak write would be significantly faster, but it's not.
The only other wacky possibility would be for the OS to be the bottleneck.
Re: (Score:3, Insightful)
TRIM happens to make the design problem easier; because it allows you to throw data on the floor when the OS says that they are no longer needed, rather than having to treat everything that isn't explicitly deleted as still good.
How does it work is a mystery to me (Score:2)
First of all, the Apple HFS format family has been criticized for using the exact same areas of the disk (for B-trees and the journal), eventually wearing them out. I didn't buy the claims until I noticed an external USB drive had a couple of bad sectors on exactly the "metadata"/B-tree area. Another USB drive got wasted (actually converted to FAT32) and I finally agreed with that claim. And that's on magnetic drives; on an SSD things can really get ugly.
They could move the B-trees and journal; actually advanced disk optimizers like
Re: (Score:2)
BTW, obvious, but nobody should "defrag" an SSD drive.
That's not necessarily the case and it also depends on the defragmenter strategy.
With SSDs typically having 128KB or 512KB erase blocks and their client file systems typically having 4KB-64KB clusters, it can actually make sense to defragment a TRIM-enabled SSD every now and again. Gathering all of the directory/file blocks together allows the file system to TRIM as many blocks as possible and improve performance - otherwise you'll have all these partially-filled erase blocks hanging around that may only cont
Re:cheese penis (Score:5, Funny)
2.5 million B.C.: OOG the Open Source Caveman develops the axe and releases it under the GPL. The axe quickly gains popularity as a means of crushing moderators' heads.
100,000 B.C.: Man domesticates the AIBO.
10,000 B.C.: Civilization begins when early farmers first learn to cultivate hot grits.
3000 B.C.: Sumerians develop a primitive cuneiform perl script.
2920 B.C.: A legendary flood sweeps Slashdot, filling up a Borland / Inprise story with hundreds of offtopic posts.
1750 B.C.: Hammurabi, a Mesopotamian king, codifies the first EULA.
490 B.C.: Greek city-states unite to defeat the Persians. ESR triumphantly proclaims that the Greeks "get it".
399 B.C.: Socrates is convicted of impiety. Despite the efforts of freesocrates.com, he is forced to kill himself by drinking hemlock.
336 B.C.: Fat-Time Charlie becomes King of Macedonia and conquers Persia.
4 B.C.: Following the Star (as in hot young actress) of Bethlehem, wise men travel from far away to troll for baby Jesus.
A.D. 476: The Roman Empire BSODs.
A.D. 610: The Glorious MEEPT!! founds Islam after receiving a revelation from God. Following his disappearance from Slashdot in 632, a succession dispute results in the emergence of two troll factions: the Pythonni and the Perliites.
A.D. 800: Charlemagne conquers nearly all of Germany, only to be acquired by andover.net.
A.D. 874: Linus the Red discovers Iceland.
A.D. 1000: The epic of the Beowulf Cluster is written down. It is the first English epic poem.
A.D. 1095: Pope Bruce II calls for a crusade against the Turks when it is revealed they are violating the GPL. Later investigation reveals that Pope Bruce II had not yet contacted the Turks before calling for the crusade.
A.D. 1215: Bowing to pressure to open-source the British government, King John signs the Magna Carta, limiting the British monarchy's power. ESR triumphantly proclaims that the British monarchy "gets it".
A.D. 1348: The ILOVEYOU virus kills over half the population of Europe. (The other half was not using Outlook.)
A.D. 1420: Johann Gutenberg invents the printing press. He is immediately sued by monks claiming that the technology will promote the copying of hand-transcribed books, thus violating the church's intellectual property.
A.D. 1429: Natalie Portman of Arc gathers an army of Slashdot trolls to do battle with the moderators. She is eventually tried as a heretic and stoned (as in petrified).
A.D. 1478: The Catholic Church partners with doubleclick.net to launch the Spanish Inquisition.
A.D. 1492: Christopher Columbus arrives in what he believes to be "India", but which RMS informs him is actually "GNU/India".
A.D. 1508-12: Michelangelo attempts to paint the Sistine Chapel ceiling with ASCII art, only to have his plan thwarted by the "Lameness Filter."
A.D. 1517: Martin Luther nails his 95 Theses to the church door and is promptly moderated down to (- 1, Flamebait).
A.D. 1553: "Bloody" Mary ascends the throne of England and begins an infamous crusade against Protestants. ESR eats his words.
A.D. 1588: The "IF I EVER MEET YOU, I WILL KICK YOUR ASS" guy meets the Spanish Armada.
A.D. 1603: Tokugawa Ieyasu unites the feuding pancake-eating ninjas of Japan.
A.D. 1611: Mattel adds Galileo Galilei to its CyberPatrol block list for proposing that the Earth revolves around the sun.
A.D. 1688: In the so-called "Glorious Revolution", King James II is bloodlessly forced out of power and flees to France. ESR again triumphantly proclaims that the British monarchy "gets it".
A.D. 1692: Anti-GIF hysteria in the New World comes to a head in the infamous "Salem GIF Trials", in which 20 alleged GIFs are burned at the stake. Later investigation reveals that many of the supposed GIFs were actually PNGs.
A.D. 1769: James Watt patents the one-click steam engine.
A.D. 1776: Trolls, angered by CmdrTaco's passage of the Moderation Act, rebel. After a several-year flame war, the trol
Re: (Score:2)
Can I get this as a Civilization IV mod?
Re: (Score:2)
The funny part is that you think the caveman was dumb enough to release his new invention under GPL and give up every advantage he had.
aw come on... (Score:2)
A good joke all the way to the end, yet you missed a perfect opportunity to add an AYB reference, however old and dusty and outdated it might be.
A.D. 2101: War was beginning. Japanese-to-English translators found to be in particularly high demand.
Re: (Score:2, Informative)
The same tests done in an area where there is actually a goo
Re: (Score:2)
All cell phone vendors goose the signal strength meter. All that happened was that Apple goosed it so much they got caught red-handed and were forced to admit it. It certainly was NOT a software bug or a mistake. There is no way it could have been anything but intentional (before they got caught) IMHO.
-Matt
Re: (Score:2)
Are you being sarcastic or just proving my point?
24dB loss is FAR from a slight loss. It means the range at which you can get an equivalent signal has just been divided by 8!
Re: (Score:2)
By way of comparison, they also found that the way you hold an Android Nexus can attenuate your signal up to 18dB... even without bridging any antennas. So it's not like the iPhone 4 has a totally unique problem. It's just a little worse than some.
Re: (Score:2)
One phone is not "some phones".. it's one phone, and it was criticized as well when it was released. The iPhone 4 has 24dB loss. :p
This is considerable.
Every 3dB halves the power, so 24dB means you keep only 1/256th of the signal, which translates to something like an 8x cut in range (18dB, roughly 6x).
So, unless you're right next to the antenna you actually don't get a signal at all; that's why so many are affected!
Think about it:
When you have the best reception, you are at around -50dBm.
When you are at the worst reception you are at around -110dBm (which is pretty good reception capabilities b
Re: (Score:2)
One phone is not "some phones".. it's one phone, and it was criticized as well when it was released. The iPhone 4 has 24dB loss. This is considerable :p
You are generalizing too much. They also listed other phones that lose considerable signal depending on how you hold them. I just gave one example. It really isn't a unique problem to the iPhone.
Frankly, I'll take AnandTech's tests over your theory any day. Theory and practice should be the same... in theory. In practice, they rarely are.
-110dBm is pretty sensitive, I'll grant. But I did not say that 24dB was little at all. Please do not put words in my mouth. I made two claims only: (1) that things
Re: (Score:2)
Well, you see, I don't put words into your mouth :P
The Nexus One is the only other phone I know of that really has a problem with this; other phones have much lower attenuation.
All phones have the "issue", because that's how radio waves work; however, the iPhone 4 has a real design flaw. It's a shame, since the receiver is actually good otherwise.
My "theory" is not really one; it's just data you can look up, and it actually agrees with AnandTech.
Simply, many people think it's really a software issue and 24dB is not mu