Data Storage

Four SSDs Compared — OCZ, Super Talent, Mtron 206

Posted by kdawson
from the be-vewwy-vewwy-quiet dept.
MojoKid writes "Solid State Drive technology is set to turn the storage industry on its ear — eventually. It's just a matter of time. When you consider the intrinsic benefits of anything built on solid-state technology versus anything mechanical, it doesn't take a degree in physics to understand the obvious advantages. However, as with any new technology, things take time to mature, and the current batch of SSDs on the market does have some caveats and shortcomings, especially when it comes to write performance. This full performance review and showcase of four different Solid State Disks, two MLC-based and two SLC-based, gives a good perspective of where SSDs are currently strong and where they're not. OCZ, Mtron and Super Talent drives are tested here, but Intel's much-anticipated offering hasn't arrived on the market just yet."
  • Ultimately I think we're going to see systems with the OS essentially in ROM on a solid state disk, with room for application installation. Data will end up being stored on a traditional disk. I sincerely hope that the developers of next gen Windows, Linux, MacOS, and others, are taking this scenario and building an OS that is optimized for it. I think Linux certainly has a head start.
    • Re: (Score:3, Insightful)

      I had a 286 laptop with MS-DOS in ROM.

      • I had a 286 laptop with MS-DOS in ROM.

        I had a Sinclair ZX-81 with OS and BASIC language in ROM. It was the first in a series of machines I owned in which the OS and BASIC were either included in onboard ROM or came on a ROM cartridge that plugged in.

        Man, I just realized how old I'm getting ...

      • by nog_lorp (896553) *

        I have a nametag with a BASIC interpreter in EEPROM.

    • Re: (Score:3, Interesting)

      by DaveWick79 (939388)
      My point being, they spent so much time measuring performance with sequential data transfer and write speed, when at least in the short term (next 5-10 years) these are pretty much just going to be OS drives where those benchmarks are inconsequential. Let's test system performance in the setup I mentioned. Test Autocad performance with the app on the SSD. Test Crysis performance with the game data on the SSD. Run PCmark or similar benchmark utility installed on the SSD and compare it to the typical 7200
    • Re: (Score:3, Informative)

      by mstahl (701501)

      You can set up your machine this way right now if you want. Just put /home on a traditional disk and have the kernel and maybe a couple more trees of system files on an SSD. This way your SSD doesn't wear out as fast and you have super-quick read access to the kernel and settings.

      If you're running something other than linux I'm sure there's a less transparent way of doing this. Mac OS doesn't really let you set mountpoints with Disk Utility but it won't freak out if you put in your own (MacFUSE does this).

      • Re: (Score:3, Funny)

        by apoc.famine (621563)
        I set my Asus EEE up this way. The SSD has the OS on it, only. I added in an SD card to hold the temp, var, swap, and home directories. While it's not super speedy, it saves the SSD from major use. And should I ever need to boot it under duress at the border, given a few seconds warning, the camera won't have any pictures on its SD card, and the laptop won't boot due to the pictures on its card.
    • by drsmithy (35869)

      Ultimately I think we're going to see systems with the OS essentially in ROM on a solid state disk, with room for application installation. Data will end up being stored on a traditional disk. I sincerely hope that the developers of next gen Windows, Linux, MacOS, and others, are taking this scenario and building an OS that is optimized for it. I think Linux certainly has a head start.

      Maybe it's just me, but putting the OS data (least performance sensitive and most easily replaced) on the SSD (most reliab

    • For server systems, you can build a fully functional RAM-based server image which will run in less than 200 MB of RAM.

      Just add 4-128 GB of memory as required.

      Course, all that goes completely out of the window the instant the words "Gnome" or "Java" are mentioned. You are welcome to your rotating-metal-disk levels of performance there.

    • Ultimately I think we're going to see systems with the OS essentially in ROM on a solid state disk, with room for application installation.

      There's really no need for a ROM. You can just make a read-only partition for files of this nature. You also shouldn't have to worry about this to begin with if you've properly partitioned your OS during the install.

      Oh wait, are you talking about Windows?

  • You know, we keep talking about solid state as it's better because there are no moving parts and less wear, but chips and circuits have plenty of moving electrons and go through a lot of thermal stress. I know that for a lot of applications a circuit can seem to be more reliable, but do we really have sufficient experience to make such a sweeping statement that in fact solid state is more -reliable- than a mechanical system? There are some steam trains out there that are running and are over 100 years o

    • Re: (Score:3, Insightful)

      by ShieldW0lf (601553)
      They gave him a bunch of free drives to play with. Therefore, they are better. Don't you understand how these reviews work?
      • Don't know why this was modded flamebait - there's often more than just a bit of truth in this statement.
    • by Yvan256 (722131) on Friday September 05, 2008 @09:56AM (#24888381) Homepage Journal

      Well, let's see:
      - Magnetic hard drive = solid state (ICs, buffers, etc) + magnetic platter + mechanical (rotating platter(s) + moving heads)
      - SSD = solid state

      As soon as the price per GB of SSDs is at parity with the magnetic drives, I'm switching. It probably puts out less heat and requires less power, meaning quieter drives too.

      • by ericspinder (146776) on Friday September 05, 2008 @10:29AM (#24888817) Journal

        As soon as the price per GB of SSDs is at parity with the magnetic drives, I'm switching.

        Actual price parity will likely only occur once the older technology becomes a rarity, and I suspect that for the next decade, magnetic drives will continue to be the cheapest mass storage out there. That being said, for me, I'll buy an SSD when I can get a decently rated 120 gig drive for less than $150.

      • by Blice (1208832)
        You're right, except for one thing.

        You're going to use more power. Yes, SSDs use less power, that much is obvious. But what you're forgetting is that the CPU does a lot of idling while waiting for hard drives to give it the data it needs to process... With an SSD, the data comes faster and the CPU spends less time idling and more time working, and in turn ends up using more power.

        Seriously, go replace your laptop's HDD with an SSD and watch your battery life actually go down. It's because your CPU
        • by MrNaz (730548)

          Yes, but if your CPU loads data faster you'll be able to finish your work sooner which means you'll be able to turn the computer off sooner.

          For the reasons above I conclude that SSD are directly related to the number of pirates in the world, and thus have a huge effect on global warming.

        • by EMeta (860558)
          More power for time in use? Yes. More power per computer task done? No.

          I've been debating on which one is more important to me though. (/sarcasm)
        • The problem is, that most modern OSes do journalling on filesystem reads, which causes a lot of write actions... To really get the most from SSDs, OS installs need a light-write option, where writes for filesystem journaling, and logging is queued... even a moderate write-cache, and enough capacitor power to handle it would go a long way towards performance... Not to mention, that the constant writes tend to have the devices powered on more than desirable. Also, sequential read/write becomes less necessa
      • I know, let's hit that old dead horse again.

        But Microsoft is, in a way, just symbolic for the software developers in general. We've had growing SSDs for quite some time, now (let's think thumbdrive, CF, Ramdisk, and others).

        The problem with this is that as RAM becomes cheaper, the software developers deliberately bloat their software, and thus make SSDs again impractical for the latest software. Since government purchases and requirements drive everyone to the new level, SSDs have remained impractic

    • by eln (21727) on Friday September 05, 2008 @10:05AM (#24888513) Homepage

      There are some steam trains out there that are running and are over 100 years old... do we really think that a CPU or a RAM or a motherboard can live that long?

      I agree completely. I, too, am dismayed at the lack of development in steam-powered computing.

    • by mapsjanhere (1130359) on Friday September 05, 2008 @10:10AM (#24888579)
      I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases). Makes you wonder why they have so little faith in their product (or in their own reliability estimate).
      • I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases). Makes you wonder why they have so little faith in their product (or in their own reliability estimate).

        Makes you wonder why they're permitted to claim 1,000,000+ h MTBF in their literature when they don't give any assurance. Seems kind of like the sort of scummy propaganda that ought to be illegal. Saturate the media with consistent but unsubstantiated claims, and you make bullshit into
      • Re: (Score:3, Informative)

        by gnasher719 (869701)

        I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases). Makes you wonder why they have so little faith in their product (or in their own reliability estimate).

        You need not wonder. The disks have a limited lifetime - like the brakes on your car, or the tyres, they will wear out eventually, and then you have to replace them. Nothing you can do about that. But that is not the same as "failures". A "failure" happens when your tyre blows after only 10,000 miles of normal use. Let's say a tyre is worn out after 800 hours of normal use, and one in a thousand tyres fails before it is worn out: then you have an 800,000-hour MTBF but only an 800-hour lifetime.
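        The distinction can be made concrete with the numbers from the tyre analogy (an 800-hour wear life and a 1-in-1000 early-failure rate, both illustrative figures, not drive specs):

```python
# MTBF vs. service life, using the tyre analogy's illustrative numbers.
service_life_h = 800           # hours until an item is simply worn out
early_failure_rate = 1 / 1000  # fraction that fail before wearing out

# Across a large fleet, each early failure "consumes" roughly
# 1000 items' worth of service life, so:
mtbf_h = service_life_h / early_failure_rate
print(mtbf_h)  # 800000.0 hours MTBF, despite an 800-hour service life
```

The same arithmetic explains how a drive warrantied for 10,000 hours can honestly quote a seven-figure MTBF.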

      • by Kjella (173770)

        I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases).

        Which, if you do the math, is well over a year of 24/7 operation; 20,000 is almost 2.5 years. Last I checked you didn't normally get more than three years of warranty (in many cases less) on consumer HDDs either, so draw whatever conclusion you like, but I don't think they're very different from other sellers.

      • 1,000,000 hours is over 114 years. Your estate would be making those warranty claims.

        Tony.

          That was exactly my point. To get to these estimates (and some parts there actually carry numbers in excess of 200 years) you'd need to run some very extensive tests, since you have to get a certain number of devices to fail to show you're on some form of normal distribution curve peaking at that age. You shouldn't see any failures in the first 10 years to begin with if you're following some bell-shaped curve, unless you have a huge number of test subjects.
          So the number is presumably "sim
      • I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases). Makes you wonder why they have so little faith in their product (or in their own reliability estimate).

        Because those are completely different things. One is "this drive has an expected lifespan of 10,000 hours", the other is "if you are using 1,000,000 drives that haven't reached the end of their expected lifespan, you can expect approximately one to fail every hour" (a way to measure how likely a drive is to fail before the end of its expected lifespan). Remember the "bathtub curve"? MTBF is how high the low middle part is, lifespan is where the rise at the end is. Of course, this assumes that the "bathtub
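        That fleet-level reading of MTBF can be illustrated with a quick calculation (the fleet size below is hypothetical, chosen to match the 1,000,000 h figure):

```python
mtbf_h = 1_000_000  # manufacturer's claimed MTBF, in hours
fleet = 1_000_000   # hypothetical number of drives in service

# On the flat part of the bathtub curve, the expected number of
# failures per hour across the whole fleet is fleet / MTBF.
failures_per_hour = fleet / mtbf_h
print(failures_per_hour)  # 1.0: about one failure per hour, fleet-wide

# This says nothing about wear-out; each individual drive may still
# only be rated for ~10,000 hours of service.
```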

    • by xZgf6xHx2uhoAj9D (1160707) on Friday September 05, 2008 @10:16AM (#24888643)
      It's a good point. SSDs are so new that we can't really say empirically that they'll last for a lot of years. If nothing else, though, they'll be relatively safe against dropping your laptop on the floor.
    • by Indras (515472)

      There are some steam trains out there that are running and are over 100 years old...

      I hope you also realize how much continual maintenance the average train requires, or any large piece of machinery for that matter. I should know, I am a maintenance technician for a plastics factory. Even our most dependable and reliable machines require at least annual maintenance. Tear down and check for part wear, replace or weld up worn parts, change belts, fluids, etc.

      If a hard disk could be maintained (replace worn motor, lubricate bearings, etc) then I would agree with you. But they are disposab

    • by maxume (22995) on Friday September 05, 2008 @10:24AM (#24888751)

      Are those steam trains really running with 100-year-old parts?

      Or do you regularly go in and maintain the various components of your hard drives?

      • by wattrlz (1162603)
        I have read about, but can't for the life of me locate a link to, a steam engine that ran continuously for over a century. The train in question probably goes in for regular maintenance when it needs it, but it's possible that the drivetrain hasn't been modified in all that time.
    • by larkost (79011)

      But do you think that the steam trains are still working with all-original parts? Or do you think that those steam trains have been disassembled and replacement parts swapped in repeatedly over the course of that 100 years? If you were (able) to treat hard drives the same way, then you could expect longer lives out of them.

      And we don't expect computer components to last that long (at least not in production use) for much the same reason none of those steam trains are in regular production use (novelty use is

    • by lewiscr (3314)

      There are some steam trains out there that are running and are over 100 years old... do we really think that a CPU or a RAM or a motherboard can live that long?

      I assume that these steam engines have been taken apart and rebuilt so many times that none of the original parts are still present.

      A train is more analogous to a computer. Swap out the motherboard when it fails, swap out a drive in the RAID1 array when it fails, etc. And you *can* keep a computer running that long... if you can find the parts.

  • Oh For God's Sake (Score:5, Insightful)

    by SQL Error (16383) on Friday September 05, 2008 @09:53AM (#24888335)

    Yet another SSD review by clueless PC dweebs.

    The whole point of SSDs is that they have no moving parts, so they don't have the seek time and rotational latency of spinning disks. That translates into faster random access. As the review says:

    What was absolutely impressive however, were the random access and seek times, along with the benefits that come with them and Solid State Storage in general.

    So what do they measure? Sequential transfer rates.

    Gah.

    • Re: (Score:2, Informative)

      by _bug_ (112702)

      The whole point of SSDs is that they have no moving parts, so they don't have the seek time and rotational latency of spinning disks.

      Indeed, but it's nice to have some hard numbers to back that claim up. And it's nice to see HOW much faster they are versus a traditional drive.

      So what do they measure? Sequential transfer rates.

      Actually they measured the performances against each other. They show us that not all SSDs are created equal, and they tell us which SSDs they think are worth the money.

      What they

    • Makes sense to me... We already know the random access on an SSD blows standard HDDs out of the water (three of the drives have 1 ms access times).

      The main things people are looking at now are price and performance comparisons with HDDs.

      One of the places that SSDs have been shown to be lackluster (and also perform at various levels) is in sequential transfer rates, so it makes sense to be focusing on that.

      As it stands... When the prices drop a bit more, I'm seriously considering RAIDing a couple of these

  • >> When you consider the intrinsic benefits of anything built on solid-state technology versus anything mechanical

    As far as I can see there really aren't any, at least for conventional desktop PC use. The most obvious one would be performance, except, surprisingly, when compared with the fastest of today's mechanical drives there's not much if any performance advantage. In some cases SSDs are actually worse.

    There's still a lot of other disadvantages to SSDs, like a more limited number of write operations,

    • Re:Disagree (Score:5, Insightful)

      by Ngarrang (1023425) on Friday September 05, 2008 @10:08AM (#24888541) Journal

      Someone wake me up when there's a 1TB SSD for $250 that can do unlimited rewrite ops.

      Um, even mechanical hard drives cannot promise unlimited rewrite ops. Maybe you want to lower your sights just a tad?

    • Re:Disagree (Score:5, Insightful)

      by Gewalt (1200451) on Friday September 05, 2008 @10:43AM (#24888993)

      Someone wake me up when there's a 1TB SSD for $250 that can do unlimited rewrite ops.

      Let me guess, you want a car's drivetrain to promise "unlimited mileage" and your home's A/C refrigerant to promise "unlimited compression/decompression cycles".
       
      I hate to be the one to break it to you, but words like "unlimited" are marketing words only. EVERYTHING is limited and finite. In this case, consumer protection laws state that 7 years of normal usage is long enough to be considered "lifetime" or "infinite" or "unlimited" and all sorts of other key words and tricky phrases.

      Those mechanical drives you are comparing SSDs to? They don't offer "unlimited rewrites" except in the marketing sense. 7 years of normal usage. In that same sense, SSDs are already offering unlimited rewrites as they have enough rewrite cycles to last 7 years of normal usage. Just like the mechanical drives.

    • Re: (Score:2, Funny)

      Someone wake me up when there's a 1TB SSD for $250 that can do unlimited rewrite ops.

      Ah, finally! We've waited for you to go away forever, and here you offer it to us on a silver platter!

  • I was actually surprised to see the capacities and prices. As someone who's never had a hard drive bigger than 80GB (and even then only used half of it), the capacities of SSDs are starting to look pretty decent. The prices are still an order of magnitude away from what they'd need to be to get me to switch, but hopefully that's only a year or two away?

    The big thing for me is the durability to shock. I don't own a desktop; I'm a purely notebook guy. Just recently I toasted a (mechanical) hard drive by drop

  • by erroneus (253617) on Friday September 05, 2008 @10:31AM (#24888849) Homepage

    I am not an expert by any stretch, but it seems to me that write speed issues, at least when it comes to relatively small amounts of writing, could easily be mitigated with a very large on-board RAM buffer controlled by the drive... and by very large, I mean like 1GB at least. And to keep it stable, a capacitor should be enough to keep it alive when power drops, to commit any changes in buffer to the SSD storage. Maybe what I speak of is impossible or ridiculously expensive, but I don't think either is the case.
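    A toy sketch of that idea: coalesce many small writes in RAM and commit them to the backing store as one large burst. The buffer size and flush policy below are invented for illustration; a real drive-level cache would live in firmware, with the capacitor-backed flush-on-power-loss the comment describes handled in hardware.

```python
class WriteBuffer:
    """Coalesce small writes in RAM, flushing to backing storage in bulk."""

    def __init__(self, backing, capacity=1 << 20):  # 1 MiB toy buffer
        self.backing = backing  # any object with a write(bytes) method
        self.capacity = capacity
        self.pending = []
        self.size = 0

    def write(self, data: bytes):
        self.pending.append(data)
        self.size += len(data)
        if self.size >= self.capacity:
            self.flush()

    def flush(self):
        # One large sequential write instead of many small ones.
        self.backing.write(b"".join(self.pending))
        self.pending.clear()
        self.size = 0
```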

    • by Spatial (1235392)
      That sounds like a pretty good idea actually. RAM is cheap as dirt [newegg.com] nowadays.
    • by Courageous (228506) on Friday September 05, 2008 @11:00AM (#24889243)

      You're not the first person to think of such a thing. Problem is: it's pretty risky.

      Most high end RAID controllers do this already, if you set them to write-back. But they also have big batteries attached to them, and even then, you have something like 24 hours to power back on or total system corruption can occur. This means that such systems must be actively managed.

      Can you imagine what a hassle this would be for the HD makers, particularly in the notebook use case? It would be a never ending chain of angry users blaming the HD maker for their data loss...

      I think the right place to do this is way up in the OS, with a file system that is aware of the issues of small page commits to these devices, and therefore doing some kind of page-coalescence thing. Sun's ZFS can do this. Now we just need something over in consumer space.

      C//

    • by m.dillon (147925)

      Write staging is virtually irrelevant with a modern filesystem. Any log-based, undo-based, or recursive-update (ZFS) based filesystem has virtually no synchronous writing constraints. HAMMER for example can issue a ridiculous amount of write I/O asynchronously even when it is flushing or fsync()ing.

      So there are two things going on: First, the OS is caching the write data and disconnecting it from the application. Second, a modern filesystem is further disconnecting writes by reducing write-write dependa

  • I have magnetic media drives that are 15 years old and still work fine. Would I be able to say the same about any SSD "drive"? I doubt it. Unlike magnetic media, flash RAM technology is known to have a finite, limited, and unpredictable lifespan; it will only tolerate so many rewrites and then begin to fail, and you'll never know exactly when that will be. That sad day will come sooner than 15 years down the road, and you'll have paid a premium for it. Same for phase-change optical media, really. That

    • by m.dillon (147925)

      Actually that isn't correct. Magnetic media has thermal bleeding and magnetic cells will deteriorate over time. However, many of the issues surrounding magnetic media are directly related to reading and writing, versus just sitting on a shelf, and if you really needed the data you would still be able to recover it many years down the line, even after the bearing lubrication turned to mush.

      With flash you are dead in the water once a cell dies. There is no way to recover the data. And flash cells will leak

  • The usefulness of SSDs is undeniable in small devices, but there isn't much of a point in actually using one as a high-performance write-heavy device, given the limited life of its flash cells. Any heavy write use will quickly wear the device out, and this has already pretty much been demonstrated by people trying to use them as generic HD replacements on Microsoft systems with swap enabled.

    The amount of storage is also very tiny compared to a modern hard drive in the same form factor. Combine those two together

    • by bbn (172659)

      You are not very informed. The Mtron device tested has a write endurance of 140 years at 50 GB/day writing. Yes, you will probably claim that you write more than that (unlikely), but you can write terabytes per day on average, and still not have a problem.

      It will be less than a year before every top of the line system has SSD for performance reasons. The only downside is the price, but that has never bothered the people buying top of the line.
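      For a sense of where a figure like "140 years at 50 GB/day" can come from, here is the usual back-of-the-envelope endurance formula with assumed inputs (the capacity and cycle count below are illustrative, not Mtron's published specs):

```python
capacity_gb = 32        # assumed drive capacity
erase_cycles = 100_000  # typical SLC endurance rating (assumed)
daily_writes_gb = 50    # the 50 GB/day workload from the claim

# With ideal wear-levelling, total writable data is capacity * cycles.
total_writable_gb = capacity_gb * erase_cycles
years = total_writable_gb / daily_writes_gb / 365
print(round(years))  # 175 years under these assumptions
```

Different capacity or cycle-count assumptions move the result up or down, which is how vendors arrive at numbers in the 100+ year range.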

      • by DDumitru (692803)

        Watch out for MFG figures on writes.

        140 years at 50 GB/day is for linear writes. If you are doing random writes you have to scale this based on your average write size.

        If your average random write size is 12 KB, then the usable amount of writes will be 12 KB/2 MB of the above figure, or about 0.58% as much data. This is about 170x worse.

        Now this is an SLC drive, so you are probably still OK.
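        The 170x figure follows directly from the ratio of the average write size to the flash erase-block size (the 2 MB erase block is the comment's premise):

```python
avg_write = 12 * 1024          # 12 KB average random write
erase_block = 2 * 1024 * 1024  # 2 MB flash erase block

# In the worst case each small random write rewrites a whole erase
# block, so only this fraction of the linear-write endurance is usable:
efficiency = avg_write / erase_block
print(f"{efficiency:.2%}")    # 0.59%
print(round(1 / efficiency))  # 171, i.e. roughly 170x worse than linear
```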

      • by m.dillon (147925)

        Don't take manufacturer claims at face value, they never tell the whole story. 50 GBytes/day worth of block I/O operations is not the same as adding 50 GBytes/day in new files or data. It isn't even close.

        A typical consumer machine running typical office software, or a game, or a typical laptop being used by a typical consumer might get in under the limit and such a claim might be reasonable in that context. Turn on paging to swap, though, and do any sort of moderate workload and you will blow out that 50

        • by bbn (172659)

          All those laptops sold today with SSD do have swap enabled. And they do not seem to fail within weeks.

          "The SSD cannot tolerate writes or swap" is no longer true. End of story.

          Besides, my machine has 4 GB of swap. The OS has chosen to use 0 bytes of it. Why? Because before you spend a fortune on an SSD, you will buy enough RAM that you do not need to swap. Swap is SLOW, if you did not notice.

  • When you consider the intrinsic benefits of anything built on solid-state technology versus anything mechanical, it doesn't take a degree in physics to understand the obvious advantages.

    I don't find a benefit or obvious advantage in a device that requires wear-leveling to keep from wearing itself out. The fact that it degrades its storage capacity gracefully instead of all at once doesn't offset that swap files can really work over mass storage devices and the first bad sectors have been known to start showing up after only weeks of use in some cases.

    • Re: (Score:3, Informative)

      by lewiscr (3314)

      I don't find a benefit or obvious advantage in a device that requires wear-leveling to keep from wearing itself out. The fact that it degrades its storage capacity gracefully instead of all at once doesn't offset that swap files can really work over mass storage devices and the first bad sectors have been known to start showing up after only weeks of use in some cases.

      Magnetic media does this too, just not as intelligently. Magnetic media waits until a sector is nearing failure, then reads the data (hopefully) and moves it to a new sector.

      You can query your magnetic drive to get a list of bad sectors. The list grows over time.

  • by tezza (539307) on Friday September 05, 2008 @11:48AM (#24889847)
    I got a Core 64GB. I build large java projects. This is for my workstation, not a laptop. Power and quiet were not the reasons for my experimental purchase.

    I aimed to slash my build time for complex scenarios.
    I thought the Compile -> Jar -> War -> Deploy -> Expand -> Launch cycle would be greatly sped up as the files would be accessed quickly.

    I hoped effectively for a much more targeted and capacious file cache/ RAM disk.

    Unfortunately, the hype does not turn out to be true.

    The enormous time cost of writing files smaller than 8MB (!) [see footnotes] completely counters any read speed increase. Building a project means making thousands of 2KiB files: one of the most pathological cases for these drives.

    So is it slow? No, it's just as quick as a sluggish 7K250, but then again I just coughed up £179 for the privilege of the same speed.

    So I'm ebaying mine to someone who wants it for a light and quiet laptop, perfect.

    -----------------
    Some "Terrible small write performance" links I found during research:

    * http://www.xbitlabs.com/articles/storage/display/ssd-iram_6.html [xbitlabs.com]
    * http://www.alternativerecursion.info/?p=106 [alternativ...rsion.info]
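    The small-file build workload described above is easy to reproduce as a micro-benchmark. The sketch below writes a thousand 2 KiB files, fsyncing each one so the drive rather than the page cache absorbs the writes (the file names and counts are made up to mirror the comment, not taken from any real build tool):

```python
import os
import tempfile
import time

def small_file_workload(n_files=1000, size=2048):
    """Write n_files files of `size` bytes each, fsyncing every one."""
    payload = b"\0" * size
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n_files):
            path = os.path.join(d, f"class_{i}.bin")
            with open(path, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())  # push the write to the device
        return time.perf_counter() - start

print(f"{small_file_workload():.2f}s for 1000 x 2 KiB files")
```

Running it on an SSD and on a 7200 rpm disk side by side would show whether small synchronous writes really erase the SSD's advantage.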
  • "Four SSIDs Compared - Jeffz2Wire, Belkin_N_Wireless_8882D7, Linksys, and THOMPSON_HOME"
  • Since there are a lot fewer moving parts and no thick metal parts like hard drives have, are SSDs more green-friendly in the manufacturing phase and at end-of-life to recycle? Since there's been a lot of consideration in the industry about environmental friendliness, I thought this might be something that should be brought up.
