Data Storage

Four SSDs Compared — OCZ, Super Talent, Mtron 206

MojoKid writes "Solid State Drive technology is set to turn the storage industry on its ear, eventually; it's just a matter of time. When you consider the intrinsic benefits of anything built on solid-state technology versus anything mechanical, it doesn't take a degree in physics to understand the obvious advantages. However, as with any new technology, things take time to mature, and the current batch of SSDs on the market has some caveats and shortcomings, especially when it comes to write performance. This full performance review and showcase of four different Solid State Disks, two MLC-based and two SLC-based, gives a good perspective on where SSDs are currently strong and where they're not. OCZ, Mtron and Super Talent drives are tested here, but Intel's much-anticipated offering hasn't yet arrived on the market."
  • Re:1+1+1 != 4 (Score:2, Informative)

    by Anonymous Coward on Friday September 05, 2008 @10:50AM (#24888301)
    One manufacturer makes both an SLC and MLC drive. RTFM.
  • Re:1+1+1 != 4 (Score:4, Informative)

    by Zymergy ( 803632 ) * on Friday September 05, 2008 @10:50AM (#24888303)
    They tested two (2) different OCZ SSD models, one with SLC NAND Flash memory chips, and the other with MLC NAND Flash memory chips. 2+1+1=4
    I know, I RTA...
  • by tepples ( 727027 ) <tepples.gmail@com> on Friday September 05, 2008 @11:13AM (#24888609) Homepage Journal

    For instance, MLC NAND memory has between 1,000 and 10,000 write cycles per cell, SLC memory about 100,000. Some applications will be more write intensive, so they'll wear out the memory faster.

    That's why modern CF, SD, and SSD controllers spread writes to a single logical sector over multiple physical sectors [wikipedia.org]. They also dedicate 5 to 7 percent of their space to spare sectors in case one wears out; this accounts for the difference between a GB and a GiB. For example, a half-full 16 GB SSD with blocks of 128 KiB has over 60,000 free blocks. If your app makes 864,000 writes per day (10 writes per second 24/7), then the wear leveling circuitry would go through the entire free memory just under 15 times a day. If your SLC is guaranteed for 100,000 erases per block, then it should still last over 18 years.
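
    A quick sanity check of that arithmetic, sketched in Python using only the figures quoted above:

        # Wear-leveling lifetime estimate, using the figures above
        free_bytes = 8 * 10**9                  # half of a 16 GB drive is free
        block_size = 128 * 1024                 # 128 KiB erase blocks
        free_blocks = free_bytes // block_size  # "over 60,000 free blocks"
        writes_per_day = 10 * 86400             # 10 writes/s, 24/7 = 864,000
        cycles_per_day = writes_per_day / free_blocks  # ~14.2 erases per block
        endurance = 100_000                     # guaranteed SLC erases per block
        years = endurance / cycles_per_day / 365
        print(free_blocks, round(years, 1))     # 61035 blocks, ~19.3 years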

  • Re:Oh For God's Sake (Score:2, Informative)

    by _bug_ ( 112702 ) on Friday September 05, 2008 @11:21AM (#24888703) Journal

    The whole point of SSDs is that they have no moving parts, so they don't have the seek time and rotational latency of spinning disks.

    Indeed, but it's nice to have some hard numbers to back that claim up. And it's nice to see HOW much faster they are versus a traditional drive.

    So what do they measure? Sequential transfer rates.

    Actually they measured the drives' performance against each other. They show us that not all SSDs are created equal, and they tell us which SSDs they think are worth the money.

    What they measure, then, is value for money. And that's always nice to know before buying something.

  • by mstahl ( 701501 ) <marrrrrk@@@gmail...com> on Friday September 05, 2008 @11:39AM (#24888939) Homepage Journal

    You can set up your machine this way right now if you want. Just put /home on a traditional disk and have the kernel and maybe a couple more trees of system files on an SSD. This way your SSD doesn't wear out as fast and you have super-quick read access to the kernel and settings.

    If you're running something other than Linux, I'm sure there's a less transparent way of doing this. Mac OS doesn't really let you set mount points with Disk Utility, but it won't freak out if you put in your own (MacFUSE does this). You may have to do so in a script, though, since I believe Mac OS ignores the contents of /etc/fstab. Someone out there probably knows for sure.
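
    For the Linux case, a minimal /etc/fstab sketch (the device names and filesystems are assumptions; substitute your own, and noatime cuts down on incidental writes to the SSD):

        # /dev/sda = SSD holding the OS, /dev/sdb = spinning disk (hypothetical names)
        /dev/sda1   /       ext3   defaults,noatime   0  1
        /dev/sdb1   /home   ext3   defaults           0  2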

  • by amdpox ( 1308283 ) on Friday September 05, 2008 @11:45AM (#24889023)
    Lots. For a high-speed SLC (i.e. something that will equal a cheap 7200 rpm spinning platter), you'll pay $400+ for 64 GB and $700+ for 128 GB at this point. Basically, they're completely economically infeasible at anything larger than the 4 or 8 GB you see being used to store the OS and apps in netbooks, unless you have a critical need to access a lot of data at high speed while driving a truck over a small post-apocalyptic wasteland.
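
    Putting those quoted prices in $/GB terms (a quick sketch; the spinning-disk figure is an assumed ballpark for 2008, not from the article):

        # Cost per gigabyte at the quoted SLC prices
        for price, gb in ((400, 64), (700, 128)):
            print(f"{gb} GB: ${price / gb:.2f}/GB")   # $6.25/GB, $5.47/GB
        # vs. very roughly $0.15/GB for a 7200 rpm disk (assumed figure)
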
  • by gnasher719 ( 869701 ) on Friday September 05, 2008 @11:46AM (#24889043)

    I noticed they claim 1,000,000+ h MTBF, but the warranty covers less than 10,000 h (or 20,000 in some cases). Which makes you wonder why they have so little faith in their product (or in their own reliability estimate).

    You need not wonder. The disks have a limited lifetime: like the brakes or the tyres on your car, they will wear out eventually, and then you have to replace them. Nothing you can do about that. But that is not the same as a "failure". A failure happens when your tyre blows after only 10,000 miles of normal use. Say a tyre is worn out after 800 hours of normal use, and one in a thousand tyres fails before it is worn out; then you have an 800,000-hour MTBF but only an 800-hour lifetime.
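
    The tyre arithmetic, spelled out (a sketch using the numbers above):

        # MTBF vs. service life for the hypothetical tyre population
        population = 1000        # tyres observed
        life_hours = 800         # hours until normal wear-out
        failures = 1             # premature failures before wear-out
        mtbf = population * life_hours / failures
        print(mtbf, life_hours)  # 800000.0 h MTBF vs. only 800 h of life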

  • by Courageous ( 228506 ) on Friday September 05, 2008 @12:05PM (#24889301)

    SSDs will reach $/GB parity with enterprise disks within 2 years. They already beat them on $/IOPS, and soon will on $/MB/s.

    A reasonable projection for SATA is 6-7 years. However, if you know technology, that's like talking about what's going to happen in a thousand years. One just cannot know. The cross-industry pressure is definitely going to incentivize the spinning media makers to work on areal density.

    In spite of that, I feel pretty sure that SSDs are going to wipe out Tier 1 entirely. Tier 1 is an IOPS-centric thing. The real formula is something like $/IOPS/GB or some weighted mutation. When that hits, 15K drives are DEAD.

    And I doubt very much you will EVER see a 20K drive. Power scales with something like the cube of the RPM, so such a drive would be dead on arrival.
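
    A quick check of what that cube law implies (a sketch; the exponent is the comment's own rough assumption, not a measured figure):

        # If spindle power ~ RPM^3, a 20K drive vs. a 15K baseline:
        ratio = (20_000 / 15_000) ** 3
        print(f"{ratio:.2f}x the power")   # ~2.37x for a 33% speed bump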

    C//

  • Re:1+1+1 != 4 (Score:2, Informative)

    by AllynM ( 600515 ) * on Friday September 05, 2008 @12:09PM (#24889367) Journal

    These guys are idiots. A few points:

    - They 'cheated' on ATTO, only configuring it to start at 8 KB. Last I checked, the default sector size is 512 B. Regular day-to-day apps, such as Outlook, perform random sector-level access to the PST when downloading mail.

    - If you're going to do an SSD roundup, how about at least grabbing a few drives off of the SSD top 10? Specifically, Memoright (#1 on that list) makes an SLC drive that competes with the other SLC drives on price, yet outperforms them all: http://www.storagesearch.com/ssd-top10.html [storagesearch.com]

    - Disclaimer: I own a Memoright drive. I don't claim to be a fanboy, I just did my research beforehand (along with trying out a few other drives), and found the best thing going at the time.

    - The Intel drives, expected to come out this month, are likely to bury everything in that review.

  • by lewiscr ( 3314 ) on Friday September 05, 2008 @01:31PM (#24890507) Homepage

    I don't find a benefit or obvious advantage in a device that requires wear-leveling to keep from wearing itself out. The fact that it degrades its storage capacity gracefully instead of all at once doesn't offset the fact that swap files can really work over mass storage devices, and the first bad sectors have been known to start showing up after only weeks of use in some cases.

    Magnetic media does this too, just not as intelligently. Magnetic media waits until a sector is nearing failure, then reads the data (hopefully) and moves it to a new sector.

    You can query your magnetic drive to get a list of bad sectors. The list grows over time.
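
    For example, with smartmontools on Linux (assuming the drive is /dev/sda; smartctl -A prints the SMART attribute table):

        # SMART attribute 5, Reallocated_Sector_Ct, counts remapped sectors
        smartctl -A /dev/sda | grep -i reallocated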

  • by billstewart ( 78916 ) on Friday September 05, 2008 @03:45PM (#24892639) Journal

    The big win with SSDs is low latency read access - you don't have to wait for rotation or seek time to start fetching your data. That's really useful for many kinds of data applications, speeding up transactions in databases, etc. If you RTFA, and look at some of the benchmarks like Windows Startup, they totally smoke rotating disks - and if you're trying to run servers in a datacenter, you've got less downtime if you ever have to reboot the things.
    They also consume less power, which is good for some kinds of applications, though they cost enough you're not going to save any money.

    Battery-backed-RAM-based SSDs are a different game entirely, because they also give you very fast write speed, and that's where a lot of the whoop comes from; according to this article, the flash SSDs' writes were a bit slower than a 10K rpm disk drive's, so that doesn't apply here. The RAM type are really useful for database commits, where you need to get the journal saved to stable storage so you can go on to the next transaction. But even there, the low read latency of the flash-based disks is going to help a lot, especially for multi-user applications.

    There's also the perception of reliability - I've certainly had lots of disk drive failures on mechanical disks.
