
Performance Showdown - SSDs vs. HDDs 259

Lucas123 writes "Computerworld compared four disks, two popular solid state drives and two Seagate mechanical drives, for read/write performance, bootup speed, CPU utilization and other metrics. The question asked by the reviewer is whether it's worth spending an additional $550 for an SSD in your PC or laptop, or plunking down an extra $1,300 for an SSD-equipped MacBook Air. The answer is a resounding no. From the story: "Neither of the SSDs fared very well when having data copied to them. Crucial (SSD) needed 243 seconds and Ridata (SSD) took 264.5 seconds. The Momentus and Barracuda hard drives shaved nearly a full minute from those times at 185 seconds. In the other direction, copying the data from the drives, Crucial sprinted ahead at 130.7 seconds, but the mechanical Momentus drive wasn't far behind at 144.7 seconds."
This discussion has been archived. No new comments can be posted.

  • Stupid Test (Score:5, Informative)

    by phantomcircuit ( 938963 ) on Tuesday April 29, 2008 @12:11PM (#23239496) Homepage
    They only tested burst speeds, there was no random access testing.

    SSD works best when accessing files randomly.
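
The distinction the parent draws can be seen with a quick micro-benchmark. This is only a rough sketch (the file size and block count are arbitrary, and the OS page cache will inflate both numbers unless you use direct I/O or a file far larger than RAM):

```python
import os
import random
import tempfile
import time

BLOCK = 4096          # 4 KB, a typical file-system block
BLOCKS = 2048         # 8 MB scratch file; real tests need far larger files

# Build a scratch file to stand in for the device under test.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

def bench(offsets):
    """Time reading one BLOCK at each offset, in order."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

seq = bench([i * BLOCK for i in range(BLOCKS)])                         # sequential
rnd = bench([random.randrange(BLOCKS) * BLOCK for _ in range(BLOCKS)])  # random
print(f"sequential: {seq:.4f}s  random: {rnd:.4f}s")
os.unlink(path)
```

On a mechanical drive the random pass is dominated by seek time; on an SSD the two passes come out much closer, which is exactly the advantage the burst tests in the article never measure.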
  • by alan_dershowitz ( 586542 ) on Tuesday April 29, 2008 @12:15PM (#23239608)
    Two things. First, booting is ideally going to be largely sequential reads, because OS X caches files used in the boot process in order to speed up booting by removing random access. SSDs have an advantage over hard drives in random reads because there is comparatively no seek time, so I wouldn't expect to see a huge advantage here. Second, I'm not going to be using my MacBook Air's tiny SSD for analog video capture or anything like that, so high write speed really isn't relevant to me. On the other hand, the machine is supposed to be light and use little battery, so the SSD seems to win for exactly the reasons it was used. Also, the tests bear out a higher average read speed, which is what I would have expected. I don't see anything surprising here.
  • Re:Why a "drive"? (Score:4, Informative)

    by Alioth ( 221270 ) <no@spam> on Tuesday April 29, 2008 @12:24PM (#23239768) Journal
    Well, the IDE bus isn't mechanically oriented anyway: we don't actually use cylinders, heads and sectors (and haven't for years); we use logical block addressing, and the drive electronics figure out how to move the mechanics. Block addressing isn't all that far off from addressing an individual byte in memory, except that you address a whole block rather than a single byte (and for mass storage, whether mechanical or flash, you want to do it that way so you don't need an absurdly wide address bus). Parallel ATA uses a 16-bit-wide data bus.
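
For illustration, the old CHS-to-LBA translation is just arithmetic. The 16-head, 63-sectors-per-track geometry below is the classic translated default, not anything specific to a particular drive:

```python
def chs_to_lba(cylinder, head, sector, heads=16, sectors_per_track=63):
    """Classic CHS -> LBA translation; sectors are 1-based in CHS."""
    return (cylinder * heads + head) * sectors_per_track + (sector - 1)

print(chs_to_lba(0, 0, 1))   # first sector of the disk -> LBA 0
print(chs_to_lba(1, 0, 1))   # first sector of cylinder 1 -> LBA 1008
```

The drive is free to put LBA 1008 anywhere it likes, which is the whole point: the host addresses blocks, and the drive electronics (or a flash translation layer) decide what that means physically.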
  • by Overzeetop ( 214511 ) on Tuesday April 29, 2008 @12:41PM (#23240060) Journal
    That's true, but it is almost a technicality with today's processors and video cards. With anything but the slowest ultra-portables, having a hard drive running just doesn't suck up much juice. A Seagate Momentus (5400 rpm) draws between 1.9 and 2.3 W when reading/writing/seeking, and only 0.8 W when idle (not standby; that's 0.2 W). Given a typical laptop with a 50 to 80 Wh battery and a 2 to 3 hour charge life, your HD accounts for about 3% of the average draw at idle, and about 7-8% at full tilt, for those of you running active SQL servers on your lappies. Assuming the drive is non-idle 30% of the time, call it about 4%. That's more power than an SSD at 0.2 W, but it works out to only about 4 to 6 minutes of extra time on a charge.
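
The parent's arithmetic roughly checks out. Here is the same estimate spelled out (the battery size, runtime, and 30% duty cycle are the mid-range assumptions from the comment, not measurements):

```python
BATTERY_WH = 65.0   # mid-range of the 50-80 Wh quoted above
RUNTIME_H = 2.5     # mid-range of the 2-3 hour charge life
avg_draw_w = BATTERY_WH / RUNTIME_H            # ~26 W total system draw

HDD_ACTIVE_W, HDD_IDLE_W, SSD_W = 2.1, 0.8, 0.2
duty = 0.30                                    # fraction of time non-idle
hdd_avg_w = duty * HDD_ACTIVE_W + (1 - duty) * HDD_IDLE_W   # ~1.19 W

saved_w = hdd_avg_w - SSD_W                    # ~1.0 W saved by an SSD
extra_min = (BATTERY_WH / (avg_draw_w - saved_w) - RUNTIME_H) * 60

print(f"HDD share of draw: {hdd_avg_w / avg_draw_w:.1%}")  # ~4.6%
print(f"extra runtime on SSD: {extra_min:.0f} min")        # ~6 min
```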
  • Re:Combination? (Score:3, Informative)

    by T-Bone-T ( 1048702 ) on Tuesday April 29, 2008 @12:59PM (#23240324)
    There are already drives that have platters and flash. They cache frequently used files in flash and bootup files when you shut down.
  • by AlexCV ( 261412 ) on Tuesday April 29, 2008 @01:01PM (#23240352)
    Not only does wear leveling on very large (>100 GB) drives completely moot the point *even* with a 100,000-cycle life, but modern high-capacity flash has cycle lives in the millions. These drives have extra capacity set aside to deal with cell failures, they have error correction/detection, and they have wear leveling. We're a long way from using FAT16 on bare 128-megabit flash chips...

    I don't know why everyone keeps repeating flash "problems" from the mid/late 90s. We're in 2008, flash has been widely used in huge quantities for well over a decade with thousands of engineers applying well understood solutions to the problems.
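
A dynamic wear leveler is conceptually tiny. This toy sketch (hypothetical, and ignoring real-world details like static leveling, ECC, and bad-block retirement) shows why hammering one logical block no longer wears out one physical cell:

```python
class WearLeveler:
    """Toy dynamic wear leveler: each logical write lands on the
    least-worn free physical block, and the old block is recycled."""

    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks
        self.mapping = {}                    # logical block -> physical block
        self.free = set(range(physical_blocks))

    def write(self, logical):
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.discard(target)
        old = self.mapping.get(logical)
        if old is not None:
            self.erase_counts[old] += 1      # erase the stale copy for reuse
            self.free.add(old)
        self.mapping[logical] = target

wl = WearLeveler(physical_blocks=8)
for _ in range(1000):
    wl.write(0)                              # hammer a single logical block
spread = max(wl.erase_counts) - min(wl.erase_counts)
print(spread)                                # wear stays within 1 erase of even
```

With 100,000-cycle (let alone million-cycle) parts and gigabytes of blocks to rotate through, the endurance arithmetic gets very comfortable very quickly.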
  • Re:bad test (Score:1, Informative)

    by Anonymous Coward on Tuesday April 29, 2008 @01:24PM (#23240686)
    A few details:

    In fact MLC NAND (erase) block size is more likely 256 kilobytes.

    JFFS2 does not scale above about 1 gigabyte, and Linux (MTD) does not currently support more than 4 gigabytes of flash anyway.

    LogFS is not main line and is not ready yet.

    There is also UBIFS.

    Conventional wisdom is that SSDs are better than flash file systems because they allow the use of existing tools (block structured file systems), and are cheaper because the technology (think USB sticks) is already used so much. However it is not clear that proposition has ever really been put to the test.

  • FUD (Score:3, Informative)

    by MushMouth ( 5650 ) on Tuesday April 29, 2008 @01:33PM (#23240864) Homepage
    Wrong; this myth simply will not go away. All modern drives have wear-leveling technology built in. Also, unlike a mechanical drive, which generally fails on read, SSDs fail on write, which allows the drive itself to trap all failures and redirect the bytes to another unused sector. Anyone who cares about performance turns off atime anyway.
  • by Anonymous Coward on Tuesday April 29, 2008 @01:35PM (#23240886)
    CCP, the company behind EVE Online, was one of the first major companies to install RAMSAN disks for the EVE database (a huge database, with a crazy number of transactions per second).

    It improved certain aspects of the game 100x. It was an impressive improvement, even considering the money spent to do it.
  • My apologies for a long post. There will be some adverts embedded, but I will try to keep things informative.

    The reason that Flash SSDs act "weird" in benchmarks is that they have asymmetric performance patterns when reading and writing. Particularly with random operations, this asymmetry is huge. Here are a couple of example "drives":

    * Mtron 7000 series: >14,000 4K random reads/sec; ~130 4K random writes/sec.
    * SanDisk 5000 series: ~7,000 4K random reads/sec; ~13 4K random writes/sec.
    * Cheap CF card or USB stick: ~2,500 4K random reads/sec; ~3.3 4K random writes/sec.

    This is a 100:1 performance deficit for random writes versus random reads, and it has some really weird impacts on system performance. For example, if you run Outlook and tell it to "index" your system, it will build a 1-4 GB index file in place with 100% random writes. If you do this on a hard disk, the job takes a long time and drags down your laptop, but the operation is still pretty smooth. Do the same thing on an SSD and the system slows to molasses. One of our customers described it as "totally unusable," with 2+ minutes to bring up Task Manager. What happens is that the fast reads let the application dirty write buffers faster than the drive can flush them; this swamps system RAM, you get a 100+ deep write queue (at 13 writes/sec), and you want to throw the machine off a bridge.

    The fix, as some have described it, is not some magic new controller glue or putting the flash closer to the CPU. It is organizing the write patterns to more closely match what the Flash chips are good at. Numerous embedded file systems like JFFS do this, but they are really designed for very small devices and are more concerned with wear and lifespan issues than with performance.

    Now here comes the advert (flames welcome). A little over 2 years ago, I wrote a "block translation" layer for use with Flash storage devices. It is somewhat similar to a LogFS, but it is not really a file system and it does not play by all of the rules of a LogFS. It does, however, remap blocks and linearize writes, so it plays well with Flash. It also appears to be an "invention," and thus my patent lawyer is well paid.

    The working name of the driver layer itself is "Fast Block Device" (fbd), and the marketing name is "Managed Flash Technology" (MFT). What it does is transparently map one block device into another view. You can then put whatever file system you want into the mix.

    In terms of performance, it is all about bandwidth. Build a little RAID-5 array with 4 Mtron drives and you will get over 200 MB/sec of sustained write throughput. With MFT in place, this translates directly into 50,000 4K random writes/sec. Even better, you tend to end up with something much closer to symmetric random read/write performance.

    MFT is in production on Linux (it has actually been shipping since last summer) and is in beta test on Windows. It works with single drives as well as small to medium-sized arrays. It does work with large arrays, but the controllers don't tend to keep up with the drives, so large arrays are useful for capacity but don't really help performance much. Once you get to 50,000 IOPS it is hard for the controllers to go much faster.

    Consumer testing with MFT tends to produce some laughable results. We ran PCMark05's disk test on it and produced numbers in the 250K range. This was with a single Mtron 3025. Our code is fast, but we fooled the benchmark in this case.

    There are several white papers on MFT posted in the news link of our website: []

    My apologies for the advert, but I see a lot of talk about SSDs without actually knowing what is going on inside.

    I am happy to answer any questions, online or off.

    Doug Dumitru
    EasyCo LLC
    610 237-2000 x43 [] [] []
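
For readers wondering what "remap blocks and linearize writes" means mechanically, here is a toy log-structured remapping layer in the same spirit. This is entirely hypothetical code, not EasyCo's MFT; a real implementation also needs garbage collection, a persistent map, and crash safety:

```python
class LogRemapper:
    """Toy block translation layer: random logical writes are appended
    sequentially to a log, and a map records where each block now lives."""

    def __init__(self):
        self.log = []        # the physical medium, written strictly in order
        self.mapping = {}    # logical block number -> index in self.log

    def write(self, logical_block, data):
        self.mapping[logical_block] = len(self.log)  # always append
        self.log.append(data)                        # stale copies await GC

    def read(self, logical_block):
        return self.log[self.mapping[logical_block]]

r = LogRemapper()
for n, lb in enumerate((7, 3, 7, 1)):    # a "random" write pattern
    r.write(lb, f"v{n}")
print(r.read(7))   # "v2": the latest copy of block 7 wins
```

Since every write becomes an append, the drive only ever sees sequential I/O, and 200 MB/sec of sequential bandwidth divided by 4 KB blocks is about 51,200 writes/sec, which lines up with the ~50,000 IOPS figure above.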

  • Re:bad test (Score:1, Informative)

    by Anonymous Coward on Tuesday April 29, 2008 @02:15PM (#23241584)
    > Not only Flashes write relatively slow, but if file system has e.g. cluster size of 8K, every write to it in worst case would also (re)write redundantly 64K-8K=56K.

    Dude, this isn't how they work at all. If you write 8K, you write 8K. Flash translation layers would carve up the larger sectors into smaller ones. -1 for the people who modded this up.
  • by CommanderData ( 782739 ) <kevinhi&yahoo,com> on Tuesday April 29, 2008 @02:48PM (#23242080)
    Well, I can supply my own experiences for you, after using a 32GB Samsung SSD for a year, and a 64GB Samsung SSD for several months...

    1) Mine have been formatted NTFS, running Windows XP (and additionally Apple HFS Journaled recently when experimenting with OS X). I do not defragment the SSD, there is no point. Read speeds have always been better than write speed, but I see no difference in performance over time.
    2) Both of the drives I have are fully functional, even though I abused the 32GB one mercilessly. That laptop has only 1GB of RAM and I would run so many programs that things were swapping constantly for the past year.
    3) The 32GB SSD has been through airport scanners approximately 50 times now, no problems. The 64GB is too new, only travelled a few times so far.
    4) My laptops are always on the go, brought into many factories as a consultant. While in my bag, the laptop has taken falls down flights of stairs. The laptop itself (a Fujitsu P1610) has been dropped from a height of 3.5 to 4 feet onto a metal catwalk while running, with no adverse effects (other than a few scuffs and dents on the corners).
    5) Not sure how well they stand up to static, but it has stood up well to a variety of high EM fields, and high/low temperatures. No data loss. I have had regular hard disks die from working next to large transformers (and their magnetic fields) for an afternoon.

    Hope that helps you. For my line of work, they have been incredible. I used to go through 3 or 4 laptop hard disks per year due to various issues. Now the only reason I bought the 64GB SSD is increased storage capacity.
  • by v(*_*)vvvv ( 233078 ) on Tuesday April 29, 2008 @04:21PM (#23243456)
    All the brand-name notebooks with SSD options use first-generation SSDs. These have the blistering access speeds, high durability, silence, and power-efficiency benefits, but read/write throughput is still mediocre.

    The second generation SSDs would cost you more than a whole notebook, but have significant performance improvements:

    Memoright GT vs Mtron vs Raptor vs Seagate []

    Memoright nails it. It is easily twice as fast as what Apple puts in its notebooks.

    If you *really* want an SSD, buy one separately and install it yourself. You will not be disappointed.

    BTW, the file indexing that slows SSDs slows HDDs as well. Many people have reported unbearable slowdowns, and that is with HDDs. I am sure anything slower than that would make you want to return the whole thing, but this can be fixed. Most people will tell you to just turn it off []. Google has also complained about Microsoft pre-installing an indexing system that sucks [].

  • Re:Apples to Oranges (Score:1, Informative)

    by Anonymous Coward on Tuesday April 29, 2008 @07:52PM (#23246174)

    Huh. I've always thought that the cache on hard drives was amazingly small. 16 MB? Heck, give me a drive with at least a gigabyte of cache. When I boot up my computer, it should just start reading any sectors that have been used frequently.
    That is, at least theoretically, what Vista does by using your main system RAM.
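
The "preload what's used frequently" idea the parent describes (and which Vista approximates in system RAM) can be sketched in a few lines. This is a hypothetical toy, not how any shipping prefetcher is actually implemented:

```python
from collections import Counter

class Prefetcher:
    """Toy frequency-based prefetcher: remember the most-read sectors,
    then preload them into a RAM cache at boot."""

    def __init__(self, cache_sectors):
        self.cache_sectors = cache_sectors   # how many sectors fit in cache
        self.counts = Counter()
        self.cache = set()

    def record_read(self, sector):
        self.counts[sector] += 1

    def boot(self):
        hot = [s for s, _ in self.counts.most_common(self.cache_sectors)]
        self.cache = set(hot)   # in reality, one big sequential read pass

    def is_cached(self, sector):
        return sector in self.cache

p = Prefetcher(cache_sectors=2)
for s in [5, 5, 5, 9, 9, 1]:   # simulated read history
    p.record_read(s)
p.boot()
print(sorted(p.cache))   # [5, 9]: the two hottest sectors get preloaded
```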
