Performance Showdown - SSDs vs. HDDs

Lucas123 writes "Computerworld compared four disks, two popular solid state drives and two Seagate mechanical drives, for read/write performance, bootup speed, CPU utilization and other metrics. The question asked by the reviewer is whether it's worth spending an additional $550 for an SSD in your PC/laptop, or plunking down the extra $1,300 for an SSD-equipped MacBook Air. The answer is a resounding no. From the story: 'Neither of the SSDs fared very well when having data copied to them. Crucial (SSD) needed 243 seconds and Ridata (SSD) took 264.5 seconds. The Momentus and Barracuda hard drives shaved nearly a full minute from those times at 185 seconds. In the other direction, copying the data from the drives, Crucial sprinted ahead at 130.7 seconds, but the mechanical Momentus drive wasn't far behind at 144.7 seconds.'"
  • bad test (Score:5, Insightful)

    by Werrismys ( 764601 ) on Tuesday April 29, 2008 @12:05PM (#23239362)
    In typical use most of the time is spent seeking, not just reading or writing sequential blocks. The Windows XP disk I/O is especially brain-damaged in this regard (it does not even try to order or prioritize disk I/O). Copying DVD images from one drive to another is not a typical use case.
    • Re:bad test (Score:5, Interesting)

      by SanityInAnarchy ( 655584 ) <> on Tuesday April 29, 2008 @12:17PM (#23239642) Journal
      Consider, also, that when you're doing anything other than the contrived "copy from one device to another"... HD-DVD has a minimum guaranteed throughput of something like 30 Mbit/s; Blu-Ray needs 50. It looks like the worst numbers on the solid state devices were still at least some 30 megabytes per second, meaning you could play five Blu-Ray movies at once.

      Skimming the article, it seems very likely that the person responsible has read just enough to be dangerous (they know the physics of why seeking is slow), but not enough to have a clue what kind of behavior would trigger seeking. The one measure was boot time, during which they acknowledge that Vista does a bunch of background stuff after boot, but don't measure it.

      He did get one thing right, though -- they are not exactly living up to their potential. For one thing, there are filesystems explicitly designed for flash media, but you need to actually access it as flash (and the filesystem does its own wear leveling) -- these things pretend to be a hard disk, and are running filesystems optimized for a hard disk, so the results are not going to be at all what they could be.
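      The Blu-Ray throughput claim above roughly checks out. A minimal sketch of the arithmetic, using the worst-case 30 MB/s SSD read figure and the 50 Mbit/s Blu-Ray minimum quoted in the parent post:

```python
# Rough check of the claim that a ~30 MB/s SSD could feed
# several Blu-Ray streams at once (figures from the post above).

ssd_read_mb_per_s = 30   # worst-case SSD read speed cited above
bluray_mbit_per_s = 50   # Blu-Ray guaranteed-throughput figure cited above

ssd_mbit_per_s = ssd_read_mb_per_s * 8        # 30 MB/s -> 240 Mbit/s
streams = ssd_mbit_per_s / bluray_mbit_per_s  # simultaneous streams

print(f"{ssd_mbit_per_s} Mbit/s / {bluray_mbit_per_s} Mbit/s "
      f"= {streams:.1f} simultaneous Blu-Ray streams")
```

      4.8 streams rounds to the "five Blu-Ray movies at once" in the post.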
      • Re: (Score:3, Funny)

        by Dirtside ( 91468 )

        meaning you could play five Blu-Ray movies at once
        I think someone's just invented a new metric unit for measuring bandwidth!
    • by peipas ( 809350 )
      Anecdotally, I have a 32 GB SSD in my Dell M1330. I got stuck with Vista with this machine, but in its "User Experience" rating I get a 5.8 for hard drive. The scale is based on 5.0 being the fastest available at Vista's release. I assume "fastest" refers to consumer machines, but have conventional hard drives somehow become that much more efficient all of a sudden that they meet or exceed this performance?
    • Re:bad test (Score:5, Interesting)

      by ThePhilips ( 752041 ) on Tuesday April 29, 2008 @12:48PM (#23240162) Homepage Journal

      XP IO subsystem is pretty OK.

      The problem with SSDs is that flash-based storage has a much, much larger block size.

      While conventional HDDs have a block size of 512 bytes, current SSDs have a block size of 64 kilobytes.

      Not only are flash writes relatively slow, but if the file system has e.g. a cluster size of 8K, every write to it would in the worst case also redundantly (re)write 64K-8K=56K.

      The test is realistic - if you want to see how bad most applications can be with SSDs. But that's going to change with SSDs becoming more and more commonplace.

      If they really wanted to test SSD performance they would have taken Linux with jffs2 or newer logfs, though these two have their own problems.
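      The block-size mismatch described above can be sketched numerically, assuming the 64 KB erase block and 8 KB filesystem cluster figures from this post:

```python
# Worst-case write amplification when an 8 KB filesystem cluster
# update forces a 64 KB flash erase block to be rewritten.

erase_block = 64 * 1024  # bytes the SSD must erase/rewrite at once
cluster     = 8 * 1024   # bytes the filesystem actually changed

redundant = erase_block - cluster      # bytes rewritten needlessly
amplification = erase_block / cluster  # total written / useful data

print(f"redundant bytes per write: {redundant} ({redundant // 1024} KB)")
print(f"write amplification: {amplification:.0f}x")
```

      This is the 56K of redundant rewriting mentioned above, i.e. an 8x worst-case write amplification.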

      • If they really wanted to test SSD performance they would have taken Linux with jffs2 or newer logfs.

        Does anybody have a decent solution for using a flash drive to boost performance of a regular drive?

        I just ordered a new laptop, and it has an ExpressCard slot into which I could drop 4 or 8 GB of solid-state disk at a reasonable price. That could serve as a giant cache, one that unlike RAM could be safely used as a write cache.

        It seems like there would be a clever way to treat the SSD plus the regular hard drive as one unit so that the hard drive could be spun down for hours of normal working situations,

  • by smitty97 ( 995791 ) on Tuesday April 29, 2008 @12:06PM (#23239400)
    Unfortunately there's no comparisons of battery life and speed tests with fragmented files.
    • by Ethanol-fueled ( 1125189 ) * on Tuesday April 29, 2008 @12:10PM (#23239484) Homepage Journal
      ...And the picture won't be complete until we have real-world failure data for the solid-state drives.
      • ...And the survivability of mechanical drives in the ultra-portable form factor (more likely to be dropped or tossed, more concentrated heat problems, etc.)

        Although some data from the Palm LifeDrive (featuring a mechanical Microdrive CF module) could answer the drop-survivability question in the small form factor.

        So, in short, they managed to produce only a single data point, i.e. bulk speed (well, not exactly. They also mentioned random access from a synthetic test, but no actual real-world application) when users would need ab
    • Unfortunately there's no comparisons of battery life and speed tests with fragmented files.

      Is file fragmentation really that big of a problem?

      I know at one time I used to defragment a lot, but the difference has always been negligible for me. I only did it with the thought of keeping it "in tune", but even once a year doesn't make much apparent difference in computer performance.
  • Noise? Heat? (Score:3, Interesting)

    by pipatron ( 966506 ) <> on Tuesday April 29, 2008 @12:06PM (#23239402) Homepage
    Dunno about the author of this article, but I got an "SSD" (hello buzzword) to get rid of the noise, the heat, and the annoying spin-up delay. A CompactFlash card doesn't cost eleventy billion dollars either.
    • Not to mention shock-insensitivity and power consumption. Write speed to me is fairly irrelevant by now.
      • Write speed may be irrelevant to the applications you happen to run. But it's pretty relevant to your OS [].
        • Unless you disable virtual memory... contrary to popular myth, doing this on Windows does not have any negative effects if you're running applications written in the last 10 years or so. It actually speeds up performance noticeably, since Windows does a horrible job at managing swap space.
        • I haven't used any swapspace for years on my desktops, memory is so cheap now that there's no point. On my servers, of course, but then again it's 99.9% unused.

          For example, this thinkpad has 1.25GB RAM, and I've seen at most 300MB used. Then again, I don't run Vista.

          • Re: (Score:3, Funny)

            by winkydink ( 650484 ) *

            I haven't used any swapspace for years on my desktops, memory is so cheap now that there's no point. On my servers, of course, but then again it's 99.9% unused.

            For example, this thinkpad has 1.25GB RAM, and I've seen at most 300MB used. Then again, I don't run Vista.

            You don't run Firefox either then..

  • by esocid ( 946821 )
    It's nice to know all that buzz is worth ignoring since I just bought a fancy new 750gig sata hdd. Even 16mb caches beat them solidly, I wonder how 8 and 32 would compare. It's worth noting they didn't mention seek times, although I'm not sure how that would transfer into ssd terms.
  • by MrKevvy ( 85565 ) on Tuesday April 29, 2008 @12:08PM (#23239446)
    Computerworld compared four disks, two popular solid state drives and two Seagate mechanical drives, for read/write performance, bootup speed, CPU utilization and other metrics.

    But of course not the metrics that really matter, which SSD's vastly excel at and make them worth the price for many people: MTBF, power consumption, ruggedness and noise level.
    • by Mordok-DestroyerOfWo ( 1000167 ) on Tuesday April 29, 2008 @12:20PM (#23239682)
      If I remember correctly the first LCD monitors were exorbitantly expensive and couldn't hold a candle to their CRT brothers. But since they saved so much space and energy, within a few years those problems vanished. I'd say it's still too early to close the books on SSDs.

      I know it's not a car analogy, I humbly beg the forgiveness of the /. community.
      • I know it's not a car analogy, I humbly beg the forgiveness of the /. community.
        SSDs are just heated mirrors in a fancy 2.5" form factor.
      • LCD monitors still don't match up to a decent Trinitron. The only thing that comes close, in my opinion, is the massive old Samsung SyncMaster 240T that I've been using at work. It's 24" widescreen, does 1920x1200, and has a power brick that is actually pretty close to brick size. It's a tank. It would have been something like $5,000 back when it was new... and I salvaged two of them from the re-app pile.
    • by Isao ( 153092 )
      ...MTBF, power consumption, ruggedness and noise level.

      Similar story over at StorageMojo [] and Robin draws a similar conclusion.

      MTBF - Infant failures about the same as discs, return rates higher
      Power - Flash already near the bottom of the power curve, drives appear to have room to drop
      Ruggedness - No moving parts a plus, perhaps countered by whole-block rewrites on write. Not enough data here
      Noise - Flash wins, no contest

      Bottom line? Not enough improvement to justify the cost, except in certain

    • I can't hear the hd on my laptop, and I rarely hear the fan. The newest 2.5" drives are super-quiet.

  • Power Consumption (Score:4, Insightful)

    by Ironsides ( 739422 ) on Tuesday April 29, 2008 @12:08PM (#23239448) Homepage Journal
    Too bad he didn't include power consumption. If I'm going to use an SSD anytime soon, it will be in a laptop, where power is my key concern. Performance is more of a desktop/high-end issue right now.
  • by avdp ( 22065 ) * on Tuesday April 29, 2008 @12:10PM (#23239480)
    IMHO, performance is not the critical factor regarding SSDs. Power usage and, mostly, no moving parts (quiet and rugged) are why you want an SSD in your laptop.

    But on the performance front, they compared with 7200RPM hard drives; last time I checked (admittedly a while ago) most laptops are outfitted with 5400RPM drives.
    • by Sancho ( 17056 ) * on Tuesday April 29, 2008 @12:21PM (#23239716) Homepage
      [] indicates that the battery usage (at least compared to the HDD shipped with the Macbook Air) is negligible. No moving parts is nice, though manufacturers have addressed some of the ruggedness issues by including drop sensors. Actual, real world wear hasn't had a chance to surface yet--I'll definitely be curious to find out if SSDs live up to the speculation.
      • by avdp ( 22065 ) *
        And since this is a 4200RPM drive, it seems it's impossible to get a good picture of all the metrics side by side.

        Another concern, which I forgot from my original post is heat. And I am sure heat is key concern with a laptop like the Mac Air
    • by esocid ( 946821 )
      Most probably were, but the two they compared them to are laptop hdds. Since the comparison is talking about the MacBook Air, I looked at the specs:

      Apple MacBook Air - 1.6GHz OS X 10.5.1 Leopard; Intel Core 2 Duo 1.6GHz; 2,048MB DDR2 SDRAM 667MHz; 144MB Intel GMA X3100; 80GB Samsung 4,200rpm

      The stock hdd is 4200rpm, so even that 5400 figure you had was above the stock drive speed. So they should have compared those two options as well as what they did to get a good idea. As well as including drives with 8mb and

  • Why a "drive"? (Score:4, Interesting)

    by Ossifer ( 703813 ) on Tuesday April 29, 2008 @12:10PM (#23239490)
    Am I the only one questioning why these devices are implemented using a mechanical drive interface? Maybe it's a negligible cost, but to me it would seem that a memory bus optimized for flash memory would be a better way to go than trying to piggy-back on a mechanical drive's bus. How much faster could these be if their existence was planned into, say, Intel's chipsets?
    • We'll find out soon, since Intel is adding a flash controller to its chipsets.
    • Re:Why a "drive"? (Score:4, Informative)

      by Alioth ( 221270 ) <no@spam> on Tuesday April 29, 2008 @12:24PM (#23239768) Journal
      Well, the IDE bus isn't mechanically oriented anyway - we don't actually use cylinders, heads and sectors (and haven't for years), we use block addressing and the drive electronics has figured out how to move the mechanics. Block addressing isn't all that far off from addressing an individual byte in memory anyway - except you're addressing a whole block rather than a single byte (and for mass storage, whether it's mechanical or flash, you're going to want to do it that way so you don't have an absurdly wide address bus). Parallel ATA uses a 16 bit wide data bus.
      • Re: (Score:3, Interesting)

        by Ossifer ( 703813 )
        Thanks for the information and insight, but I wonder, why wouldn't we want a (maybe not "absurdly") wide address bus? A 16-bit wide bus seems a bit underscaled, considering core memory buses are 128 bit, and with block addressing we're obviously reading/writing much more than that. The core memory bus is already 16 times bigger than the smallest addressable unit. Granted, with say a 512-byte block, I'm not suggesting a 64k bit wide bus (16 * 512 * 8), but it would seem that 16 bit is simply not a good ch
    • SSDs are not even close to maxing out the drive interface anyway, so is it really even a relevant consideration?
  • Stupid Test (Score:5, Informative)

    by phantomcircuit ( 938963 ) on Tuesday April 29, 2008 @12:11PM (#23239496) Homepage
    They only tested burst speeds, there was no random access testing.

    SSD works best when accessing files randomly.
    • [Channel-surfing]
      Rex: Go back, go back, you missed it.
      Hamm: Too late, I'm already on the 40's, gotta go around the horn, it's faster.
    • Re: (Score:3, Insightful)

      by Sleepy ( 4551 )

      This is like a hybrid vehicle vs normal gas shootout, with each vehicle towing something. It's irrelevant.

      He boiled down all the variables and performance profiles into just one - the one that favors traditional drives. There is NO WAY this should have been published as-is.

      I can't attribute this to malice, but basically Bill O'Brien of Computerworld DOESN'T KNOW WHAT HE'S DOING, and neither does his editor for letting this slide. This was probably a case of a traditional drive maker whispering in his ear
  • by jskline ( 301574 ) on Tuesday April 29, 2008 @12:12PM (#23239536) Homepage
    You really have to look deep into the advertising sometimes. Only a trained person willing to do the math on these would be able to see the differences. Clearly, these devices have a legitimate purpose and place, but at this point in time, it's not in the client computer. The speeds need to come up to be really practical.

    Now a good purpose for these might be in desktop-bound short-stack storage arrays instead of that large terabyte drive array. They're just quick enough for data retention backups off of the mechanical drives in the client PC.

    Another use is small-scale server apps that usually are bound into hardware in some form of internet controllable appliance. Speed isn't really a major factor here for this and these would potentially work well.

    Just my opinion. Subject to change.
  • by ncw ( 59013 ) on Tuesday April 29, 2008 @12:13PM (#23239540) Homepage
    As any sysadmin knows, on a busy server what creams the disk isn't Megabytes per second, it is IO transactions per second.

    According to the article the Crucial SSD has an access time of 0.4 ms which equates to 2500 IOs/s as compared to the Barracuda HDD with 13.4 ms access time which equates to a mere 75 IOs/s.

    So for servers SSDs are 33 times better!

    Bring them on ;-)
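    The IOPS figures above follow from the quoted access times. A sketch using the simplified 1/latency model of the parent post (it ignores transfer time and queuing):

```python
# Convert the article's quoted access times into a rough IOPS
# ceiling (1 / latency), as in the parent post.

def iops_from_access_time(access_ms: float) -> float:
    """Upper bound on random I/O operations per second."""
    return 1000.0 / access_ms

crucial_ssd   = iops_from_access_time(0.4)    # ~2500 IOPS
barracuda_hdd = iops_from_access_time(13.4)   # ~75 IOPS

print(f"SSD:   {crucial_ssd:.0f} IOPS")
print(f"HDD:   {barracuda_hdd:.0f} IOPS")
print(f"ratio: {crucial_ssd / barracuda_hdd:.1f}x")
```

    The exact ratio is 13.4/0.4 = 33.5, matching the "33 times better" above.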
    • Exactly. I guess the point of this article was to examine whether or not it made sense in the case of a laptop, as many are now starting to offer one as an option. But it would have been nice to point out the real awesome potential they have for servers.
    • Have fun changing out the drives every year as you've surpassed the maximum number of writes.
      • Re: (Score:2, Interesting)

        by hardburn ( 141468 )

        If your filesystem is designed to distribute the writes properly, the failure time is comparable to the MTBF of hard drives.

        Though personally, I think the way to go on servers is to use 64GB of RAM and put most of it as a RAM disk. Depending on your application, you can either have a shell script copy the data back to a hard drive for persistent data, or use that kernel driver to mirror the data to a hard drive. Software RAID 1 would work, too.

        • If your filesystem is designed to distribute the writes properly, the failure time is comparable to the MTBF of hard drives.
          Sure if we were talking about normal desktop usage. He was implying that he would be trying to do thousands of IO operations per second on these SSDs which is going to wear out the drive much much faster.
    • Well, they would be if they had unlimited read-write cycles. But flash is rather more limited in that regard, some estimates are as low as 100,000 cycles.
      If your 2500 IOs/s hit the same sector, a 100,000-cycle SSD is fried in 40 seconds, and even a 1,000,000-cycle part lasts only about 7 minutes. SSDs are distinctly NOT server suitable if you have a lot of write cycles (probably less of an issue if it's just answering read requests).
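      A sketch of that worst-case arithmetic (no wear leveling, every write hitting one sector; note that the 100,000-cycle estimate gives only 40 seconds, while the roughly 7 minutes quoted in this thread corresponds to a 1,000,000-cycle part):

```python
# Worst-case wear-out of a single flash sector that absorbs every
# write, for the two cycle ratings discussed in this thread.

writes_per_s = 2_500  # sustained random-write rate from the IOPS subthread

for cycle_limit in (100_000, 1_000_000):
    seconds = cycle_limit / writes_per_s
    print(f"{cycle_limit:>9,} cycles: dead in {seconds:.0f} s "
          f"(~{seconds / 60:.1f} min)")
```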
      • by vux984 ( 928602 )
        If your 2500 IOs/s hit the same sector, your server SSD is fried in 7 min.

        One would think that would actually be an ideal scenario. Your cache hits would be through the roof. Even if it wrote the sector back to the flash drive once every 2 seconds, that would be 5000 IO's worth of updates in one write op.

        Factor in drive wear leveling (so that it moves the data sector around on the empty space on the physical disk rather than in the same physical place each time), and the disk would probably last nearly for
        • Re: (Score:2, Informative)

          by AlexCV ( 261412 )
          Not only does wear leveling on very large (> 100 GB) drives completely moot the point *even* with a 100,000-cycle life, but modern high-capacity flash has a cycle life in the millions of cycles; they have extra capacity set aside to deal with cell failures, they have error correction/detection, they have wear leveling. We're a long way off from using FAT16 on straight 128-megabit flash chips...

          I don't know why everyone keeps repeating flash "problems" from the mid/late 90s. We're in 2008, flash has been widely used in h
          • by Yvan256 ( 722131 )
            Is there any way to know if a certain device has built-in wear leveling capabilities?

            More specifically, I'm talking about CompactFlash cards, since they can be used as an IDE drive with a simple $10 adapter.

        • I'm aware that you can get around this issue, I was just trying to point out that the raw "possible IO" number is not all it's cracked up to be.
          Similar for using it as a specialty device for read heavy applications; it was the general "ideal server device" that I had a problem with.
          When I first read about SSD it sounded like the second coming of sliced bread, it was the "devil in the detail" that soured me, especially the write limitations that seem to be a physical limitation, not something you can eng
      • by LWATCDR ( 28044 )
        That would really depend on the server.
        A good example where an SSD might be a good solution is one of the database servers at my office.
        A record gets created and then updated maybe twice. It then may get read a few hundred thousand times.
        So yes for some servers it might be a really good thing. Lots of databases are very very read heavy and write light.
        • Re: (Score:2, Insightful)

          by AlexCV ( 261412 )

          A modern SSD is able to handle write-intensive database applications with reliability on par with HDDs. The SSD logic spreads the writes around the disk to prevent premature wear, so a record updated a million times might well never be written twice over any given flash cells. And even at 512 bytes each, there are 195 million of them on a 100GB SSD. Each of those has about a 1 million cycle life, and there are normally spare cells to handle failures. I'll take that over a SCSI disk.
      • But in the case for read requests it is better. Obviously HDs are currently better for some applications, but likewise SSDs are also better for some applications. Yet again, people seem to be herding each other into camps and throwing rocks at each other rather than just learning the merits of each other's viewpoints and using the best tool for the job.
    • by cameldrv ( 53081 )
      This is only true for reads. For random writes, Flash based SSDs are much slower than hard drives, and that's why they didn't do well in this test. The problems are surmountable, but the first step is for tests like this to be widely publicized, so that users start looking more closely at the specs of drives.
  • That's the (potentially) biggest benefit of using SSDs over HDDs. No moving parts == less power used == longer battery life.
    • Re: (Score:3, Informative)

      by Overzeetop ( 214511 )
      That's true, but it is almost a technicality with today's processors and video cards. With anything but the slowest ultra-portables, having a hd running just doesn't suck up much juice. A Seagate Momentus (5400rpm) takes between 1.9 and 2.3W when reading/writing/seeking, and only 0.8 watts when idle (not standby - that's .2W). Given a typical laptop with between 50 and 80 Wh batteries and a 2 to 3 hour charge life, your HD comprises about 3% of the average draw at idle, and about 7-8% at full tilt - for th
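      Those percentages can be reproduced from the quoted wattages. A sketch, taking midpoints of the ranges given above:

```python
# Reproduce the parent's estimate of the hard drive's share of a
# laptop's power budget, using the figures quoted above.

battery_wh = (50 + 80) / 2           # typical laptop battery, midpoint
runtime_h  = 2.5                     # typical charge life, midpoint
avg_draw_w = battery_wh / runtime_h  # average whole-system draw

hd_idle_w   = 0.8  # Momentus 5400rpm idle
hd_active_w = 2.1  # midpoint of 1.9-2.3 W while reading/writing/seeking

print(f"average system draw: {avg_draw_w:.0f} W")
print(f"HD share at idle:    {hd_idle_w / avg_draw_w:.0%}")
print(f"HD share active:     {hd_active_w / avg_draw_w:.0%}")
```

      That works out to roughly 3% at idle and 8% at full tilt, as claimed.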
      • by Nexus7 ( 2919 )
        While true, consider also the case where I'm browsing the web... a lot of small files keep getting written to the disk as soon as I finish reading one web page and go to the next. So the HDD keeps spinning up. So I increase the idle timeout. Now the disk just keeps spinning, and the palm-rest above the disk gets hot, decreasing the disk's lifetime. For some reason, even putting Firefox's cache into /dev/shm doesn't seem to help; the disk spins up frequently. Things like ndiswrapper like to write to d
  • SSD's performance boost is in battery life due to its lower power consumption from zero moving parts. Flash-based storage has always had a problem with writing; don't forget about the fact that it can only be written to ~1000 times.

    Furthermore, SSD is just temporary relief for batteries; I envision a laptop with both SSD and HDD that almost never writes to the SSD; on Windows, C:\WINDOWS and C:\Program Files would live in SSD while C:\Documents & Settings would live on HDD and C:\WINDOWS\Temp (or whe

    • by Alioth ( 221270 )
      That's not correct: even NOR flash (what you use for ROM, rather than mass storage) has been rated at 10,000 erase/write cycles for years - per sector (rather than the whole device). The typical flash mass storage is up to 100K erase/writes.

      Swap is the main concern here - the solution is to give the machine enough RAM that you can turn swap off.
      • I don't think there's any reason to believe that even swap would become an issue unless the drive has an extremely bad wear-levelling system. Why shouldn't any decent wear-levelling system keep track of block use and swap heavily used blocks with those that have a read-only characteristic? With a wanted lifespan of 10 years, the number of writes per year you'll get should be 10000 * the number of blocks on the drive (using the conservative 100,000 write cycle limit).

        With a 64 gigabyte drive with a block size of 256 ki
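        The arithmetic this post starts can be finished as a sketch. It assumes ideal wear leveling, and the 64 GB capacity and 256 KiB block size are the figures the truncated line above begins to give:

```python
# Finish the wear-leveling lifespan arithmetic sketched above: with
# perfect leveling, total writes = cycles-per-block * block count.

cycle_limit = 100_000        # conservative per-block write cycle limit
lifespan_yr = 10             # desired drive lifetime
drive_bytes = 64 * 1024**3   # 64 GiB drive (from the post)
block_bytes = 256 * 1024     # 256 KiB block (from the post)

blocks = drive_bytes // block_bytes
writes_per_year = (cycle_limit // lifespan_yr) * blocks
writes_per_sec = writes_per_year / (365 * 24 * 3600)

print(f"{blocks} blocks; {writes_per_year:.2e} block writes/year")
print(f"~{writes_per_sec:.0f} sustained block writes/s for {lifespan_yr} years")
```

        In other words, roughly 83 block writes per second, around the clock, for a decade before hitting the cycle limit.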
    • Flash-based storage has always had a problem with writing; don't forget about the fact that it can only be written to ~1000 times.
      You're a few orders of magnitude off. It's around 400-500 thousand writes for average flash drives. The more expensive, high-performance stuff can max out at a few million writes.
  • by alan_dershowitz ( 586542 ) on Tuesday April 29, 2008 @12:15PM (#23239608)
    Two things: first, booting is ideally going to be largely sequential reads because OS X caches files used in the boot process in order to speed up the boot by removing random access. SSD's have an advantage over hard drives in random reads because there's comparatively no seek time. So I wouldn't expect to see a huge advantage. Secondly, I'm not going to be using my macbook air's tiny SSD drive for analog video capture or something anyway, so high write speed is really not that relevant to me. On the other hand the thing is supposed to be light and use little battery, so SSD seems like it wins for the reasons it was used. Also, the tests bear out a higher average read speed, which is also what I would have expected. I don't see anything surprising here.
  • I would have thought that in a laptop, solid state drives would have a noticeable advantage in terms of power consumption leading to increased battery life.

    Admittedly the article described itself as a performance showdown, but I'm disappointed that the reviewer made no attempt to compare power consumption and battery life.

    If nothing else, I would have thought a solid state drive would eliminate that annoying pause when a hard drive awakes from sleep and spins up, and that this would feel like a worthwhile "
    • However, the flash drives are only 32 GB. How small and how low-power could you make a drive that only needed to be 32 GB? You could probably go for a much smaller form factor. Which would mean smaller platters, which would take less energy to spin. It would also mean that the read/write heads wouldn't have to move as far to reach the data. Comparing a 32 GB SSD to a 3.5 inch, 250 GB HDD is not a very good comparison.
    • From a quick googling, SSDs seem to use about 1/4 of the power of HDDs (1 watt compared to about 4 watts), but when you put that alongside the screen (~4 watts), speakers (maybe 20 watts in a laptop? Not that you'd always have them up that loud :P ), wifi (~1 watt idle, up to 4 watts while in use) and such, I'd think that you aren't gaining much. It is a step in the right direction of course. [] is the link I used for a few of those figures.
  • by pancrace ( 243587 ) on Tuesday April 29, 2008 @12:17PM (#23239644) Homepage
    We installed one of these for processing millions of small, read-only database transactions. The database only gets written once a day, but is too big for efficient caching. Even with a U320 15k drive we were still suffering, only being able to run about 700/min. With a flash drive, we're running over 25,000/min, peaking at 50,000/min. But the weekly copy of the database takes about 20 minutes, vs the 3 or 4 minutes it used to take.

    - p
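    The transaction rates reported above imply a rough speedup factor; a trivial check using the parent's numbers:

```python
# Speedup implied by the read-only transaction rates reported above.

hdd_txn_per_min  = 700     # U320 15k SCSI drive
ssd_txn_per_min  = 25_000  # flash drive, typical
ssd_peak_per_min = 50_000  # flash drive, peak

print(f"typical speedup: {ssd_txn_per_min / hdd_txn_per_min:.0f}x")
print(f"peak speedup:    {ssd_peak_per_min / hdd_txn_per_min:.0f}x")
```

    A roughly 36x typical gain, consistent with the IOPS advantage discussed elsewhere in this thread.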
  • That just seems silly. I'd like to see performance tests on a system where the disk's performance affects the end result, rather than all of the results being homogenized by the operating system's poor I/O capability. Given Vista's adoption, it's not even a test of what disk performance will be like "in the real world."

  • So to sum up what the reviewer did wrong:
    • Only tested sustained write speeds.
    • Has the impression that performance is copying multigigabyte files around all day.
    • Ignored the silence advantage.
    • Didn't consider power savings.
    • Didn't test seek speeds.
      When asked about ignoring the 20:1 advantage SSDs have in seek speed, responded:

      But keep in mind that it's only one component of the overall operation. These were all freshly formatted drives so fragmentation shouldn't be an issue and the longer the operation under that condition, the less it tends to matter.

      SSDs might even slow down slightly because some are built intelligently enough to not write to the same location each time (and thus prematurely "wear out" segments of memory which are, after all, limited use within context).

    • Comparing 3.5 inch 7200 rpm drives to 2.5 inch SSDs

    Anything else to add?

  • HD Tach test:

                    Burst Speed   Average Read   Random Access   CPU Utilization
      Crucial SSD   137.3MB/sec   120.7MB/sec    0.4ms           4%
      Barracuda HDD 135.0MB/sec   55.0MB/sec     13.4ms          4%

                    Cold Boot   Restart
      Crucial SSD   39.9        78.4
      Barracuda HDD 39.9        59.9

    Yeah, I know synthetic tests are problematic, but the two tests give contrary results.
    Is it because an MS Vista boot and reboot doesn't involve much random R/W and therefore doesn't show the apparent strength of SSDs? Or is it because an extremely lo

  • SSDs have greatly improved, and typically utilize wear leveling methods to more evenly distribute writes across memory cells.

    However, in real-world situations, do SSD write limitations ever pose a problem or is it a total non-issue these days?

  • Flash media, like compact flash cards, are supposed to be very shock resistant compared to hard drives. That would give these SSD drives a big advantage in machines designed to be very rugged.
  • Completely missed the point. SSD's are not about extremely fast sequential access, they're designed for near-instantaneous random access. No seeking means faster random access, which also means MUCH improved performance when multiple processes are hitting the disk at once.

    Just think back to when you moved to a dual-core CPU how much more responsive it was. Now take that same jump to I/O, which is always the performance bottleneck. We're leaving the age of simple increases in horsepower - Mhz, RPM, and throu
  • It is amazing to me that even other geeks have fallen for the corporate hype machine. This current gen of "SSD" has little to do with the actual promise of a solid state drive. Have you all forgotten the original point? We were getting tired of the slow incremental increase in speed that magnetic platter hard drive technology was giving us. Hard drives were and still are typically the bottleneck in many applications. They are what is holding us back from instant response times.

    These flash based drives are l
  • In other tests I've seen, the only time the SSD drives come out on top is when configured in performance RAID style, so that writes are parallelized across two or more SSD units.

    If someone could put together a convenient RAID-type package, the extra cost might actually result in extra, noticeable speed improvements, even for writes. And two 64GB SSD units arranged in a performance RAID package would give a more usable 128GB "hard disk" to store things on anyway.

  • by weave ( 48069 ) *

    All I care about is MTBF. I am so sick and tired of trying to get data off of crashed drives and restoring computers for family members (and myself). Even with current backups, it's a hassle, and disks fail at the most inconvenient time.

    My wife wanted a laptop recently and I made her spend the extra money for an SSD.

    • All I care about is MTBF. I am so sick and tired of trying to get data off of crashed drives and restoring computers for family members (and myself). Even with current backups, it's a hassle, and disks fail at the most inconvenient time.

      I have had much more frequent and less predictable failures from flash drives than from hard drives. Granted, it is not a completely fair comparison, since the flash drives were being used in portable applications (i.e. cameras) while the hard drives were sitting comfortably in my computer case. However, I am less than impressed with what I have seen of flash drive reliability. I have been using the same Seagate hard drive as a system drive for the last 10 years. Can any frequently used flash drive ever hope t

  • As others have said, using these things with streaming I/O doesn't make much sense.

    I recently built myself a new system. The new processor (Xeon E3110, aka Core 2 Duo E8400) certainly did make boot time somewhat faster, but not dramatically so. Likewise for initial login -- the KDE desktop came up somewhat faster, but it wasn't overwhelming.

    Then it occurred to me to move my root and home directory partitions from an older 250 GB 1.5 Gb/sec SATA drive to my newer 500 GB 3.0 Gb/sec compatible drive. There
  • 1. Drop both drives from a height of 3 meters.
    2. Do the test again.
    3. Repeat until one disk has performance problems.
  • It would be nice if the SSDs they tested were actual high-performance models instead of crappy cut-rate ones. They should do the same test with Samsung, SanDisk, and Mtron drives. Even so, these crappy disks destroyed the mechanical disks in IOPS.
  • Right now, using SSD is not about saving money or having a faster drive (unless your other choice is an 1.8" drive, like the MacBook Air).

    SSD is supposed to be about power savings, which should be one of your top priorities when designing a portable device (see also Nintendo Game Boy vs. Sega Game Gear).

  • For years, we pilots have been looking for a viable way of taking data to altitude. Typical hard drives fail at altitudes above ~15,000 feet. SSDs have basically eliminated that problem altogether.

    Using SSDs on our portables, we can now go to extreme altitudes in unpressurized enclosures without the fear of drive failure.

  • My apologies for a long post. There will be some adverts embedded, but I will try to keep things informative.

    The reason that Flash SSDs act "weird" in benchmarks is that they have asymmetric performance patterns when reading and writing. Particularly with random operations, this asymmetry is huge. Here are a couple of example "drives":

    * Mtron 7000 series: >14,000 4K random reads/sec; ~130 4K random writes/sec.
    * SanDisk 5000 series: ~7,000 4K random reads/sec; 13 4K random writes/sec.
    * Cheap CF card or USB stick: ~2,500 4K random reads/sec; 3.3 4K random writes/sec.

    This is a 100:1 or worse performance deficit for random writes versus random reads, and it has some really weird impacts on system performance. For example, if you run Outlook and tell it to "index" your system, it will build a 1-4 GB index file in-place with 100% random writes. If you do this on a hard disk, the job takes a long time and drags down your laptop, but the operation is still pretty smooth. Do the same thing on an SSD and the system slows to molasses. One of our customers described it as "totally unusable," with 2+ minutes to bring up Task Manager. What happens is that the fast reads allow the application to dirty write buffers faster than the drive can flush them; this swamps system RAM, you get a 100+ deep write queue (at 13/sec), and you want to throw the machine off of a bridge.
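    To put rough numbers on that (my own back-of-the-envelope arithmetic, not from the parent; the hard-disk figure of ~100 random writes/sec is an assumption):

```python
# Why a big in-place indexing job buries a first-gen SSD:
# count the 4K random writes needed and divide by each drive's rate.
# All figures are illustrative ballpark numbers.

index_size_bytes = 2 * 1024**3       # a 2 GB Outlook-style index file
write_size_bytes = 4 * 1024          # written as 4K random writes
writes_needed = index_size_bytes // write_size_bytes   # 524,288 writes

hdd_writes_per_sec = 100             # assumed laptop hard disk rate
ssd_writes_per_sec = 13              # SanDisk 5000-series figure above

print(f"HDD: ~{writes_needed / hdd_writes_per_sec / 3600:.1f} hours")
print(f"SSD: ~{writes_needed / ssd_writes_per_sec / 3600:.1f} hours")
```

    At 13 writes/sec the job takes the better part of a day, which is plenty of time for the write queue to swamp system RAM.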

    The fix is not, as some have described it, some magic new controller glue or putting the flash closer to the CPU. It is organizing the write patterns to more closely match what the flash chips are good at. Numerous embedded file systems like JFFS do this, but they are really designed for very small devices and are more concerned with wear and lifespan issues than with performance.

    Now here comes the advert (flames welcome). A little over 2 years ago, I wrote a "block translation" layer for use with Flash storage devices. It is somewhat similar to a LogFS, but it is not really a file system and it does not play by all of the rules of a LogFS. It does, however, remap blocks and linearize writes. Thus it plays well with Flash. It also appears to be an "invention", and thus my patent lawyer is well paid.

    The working name of the driver layer itself is "Fast Block Device" (fbd) and the marketing name is "Managed Flash Technology" (MFT). What it does is transparently map one block device into another view. You can then put whatever file system you want on top.
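    For the curious, the core "remap blocks and linearize writes" idea can be sketched in a few lines. This toy is mine, not the actual fbd/MFT internals, and it omits everything hard (wear leveling, garbage collection, crash recovery):

```python
class LogStructuredRemapper:
    """Toy block-translation layer: random logical writes become
    sequential physical appends, and a table tracks where the latest
    copy of each logical block lives. Illustrative only."""

    def __init__(self, num_physical_blocks):
        self.storage = [None] * num_physical_blocks  # stands in for the flash
        self.mapping = {}    # logical block number -> physical block number
        self.next_free = 0   # log head: all writes land here, sequentially

    def write(self, logical_block, data):
        phys = self.next_free
        self.storage[phys] = data            # sequential append, flash-friendly
        self.mapping[logical_block] = phys   # any old copy becomes garbage
        self.next_free += 1

    def read(self, logical_block):
        phys = self.mapping.get(logical_block)
        return None if phys is None else self.storage[phys]

dev = LogStructuredRemapper(16)
dev.write(7, b"aaaa")
dev.write(3, b"bbbb")
dev.write(7, b"cccc")    # a rewrite is appended, never overwritten in place
print(dev.read(7))       # prints b'cccc' -- the latest copy
```

    The payoff is that the flash only ever sees sequential writes, which is the pattern it handles at full speed.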

    In terms of performance, it is all about bandwidth. Build a little RAID-5 array with 4 Mtron drives and you will get over 200 MB/sec of sustained write throughput. With MFT in place, this directly translates into 50,000 4K random writes/sec. Even better, you tend to end up with something that is much closer to symmetric random read/write performance.
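    That bandwidth-to-IOPS translation is just division (a sanity check of the quoted numbers, nothing more):

```python
# If writes are fully linearized, random-write IOPS is limited only by
# sequential bandwidth: IOPS ~= bandwidth / write size.
bandwidth_bytes_per_sec = 200 * 10**6   # ~200 MB/sec from the 4-drive array
write_size_bytes = 4 * 1024             # 4K random writes
print(bandwidth_bytes_per_sec // write_size_bytes)   # prints 48828, i.e. ~50,000/sec
```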

    MFT has been in production on Linux since last summer and is in beta test on Windows. It works with single drives as well as small to medium-sized arrays. It also works with large arrays, but the controllers don't tend to keep up with the drives, so large arrays are useful for capacity but don't really help performance much. Once you get to 50,000 IOPS it is hard for the controllers to go much faster.

    Consumer testing with MFT tends to produce some laughable results. We ran PCMark05's disk test on it and produced numbers in the 250K range. This was with a single Mtron 3025. Our code is fast, but we fooled the benchmark in this case.

    There are several white papers on MFT posted in the news link of our website.

    My apologies for the advert, but I see a lot of talk about SSDs without actually knowing what is going on inside.

    I am happy to answer any questions on-line of off.

    Doug Dumitru
    EasyCo LLC
    610 237-2000 x43

  • by v(*_*)vvvv ( 233078 ) on Tuesday April 29, 2008 @04:21PM (#23243456)
    All the brand-name notebooks with SSD options use first-generation SSDs. These have the blazing access speeds, high durability, no noise, and power efficiency of flash, but read/write throughput is still mediocre.

    The second generation SSDs would cost you more than a whole notebook, but have significant performance improvements:

    Memoright GT vs Mtron vs Raptor vs Seagate

    Memoright nails it. It is easily twice as fast as what Apple puts in its notebooks.

    If you *really* want an SSD, buy one separately and install it yourself. You will not be disappointed.

    BTW, the file indexing that causes SSDs to slow down causes HDDs to slow down as well. Many people have reported unbearable slowdowns, and that is with HDDs. I am sure anything slower than that would make you want to return the whole thing, but this can be fixed. Most people will tell you to just turn it off. Google has also complained about Microsoft pre-installing an indexing system that sucks.

"You can have my Unix system when you pry it from my cold, dead fingers." -- Cal Keegan