Data Storage

All Solid State Drives Suffer Performance Drop-off

Lucas123 writes "The recent revelation that Intel's consumer X25-M solid-state drive had a firmware bug that drastically affected its performance led Computerworld to ask whether all SSDs can suffer performance degradation due to fragmentation. Vendors are well aware that the specifications listed on drive packaging represent burst speeds achieved only during sequential writes; with use, performance drops markedly over time. Drives with better controllers tend to level out, while others continue to suffer performance problems. Still not fully baked are benchmarking standards, expected later this year from several industry organizations, that should eventually compel manufacturers to list actual performance for sequential and random reads and writes, as well as a drive's expected lifespan under typical conditions."
  • To test (Score:5, Interesting)

    by Fri13 ( 963421 ) on Friday May 08, 2009 @04:51PM (#27882681)

    Just put SSDs into Usenet or torrent servers and use them as /var/log mount points... you'll soon see real-world results for how well they hold up compared to old-fashioned hard drives!

    • Re: (Score:2, Interesting)

      by AHuxley ( 892839 )
      I would pack a drive to within about 8% of full.
      Fill it with applications, OSes (Mac, Windows, Linux across three drives), MP3s, lots of JPEGs, text files, and short and long movie files (2 to 650 MB).
      Get the RAM down to 1-2 GB, let the OSes thrash as they page in and out, and watch the three computers over a few weeks.
      Automate some disk-intensive tasks on the small amount of space left and let them run 24/7.
      Hopefully the Mac and Linux boxes will keep files in different ways and use the little remaining space in strange ways too. We can hope OS X and t
  • by Bellegante ( 1519683 ) on Friday May 08, 2009 @04:51PM (#27882685)
    Even the article itself says that it isn't much of a big deal, once you get past the headline, of course.

    And this seems like the sort of issue that will be resolved in the next generation, anyway.
  • ... these things aren't going to be a big deal in the long run; I mean, who wasn't expecting some amount of technological immaturity? We shouldn't forget, though, that even with its immaturity an SSD is still much faster than a hard disk drive, though the SATA interface controller was not designed to handle such high speeds, not to mention that much software is neither geared toward nor optimized for SSD usage.

    Still, prices have come down considerably on many SSDs over the last six months. I was thinking about picking up an X25-M for a

  • by Cowclops ( 630818 ) on Friday May 08, 2009 @04:57PM (#27882783)

    "Drastically effected its performance"

    This is patently false. Whats really happening is that SUSTAINED WRITE PERFORMANCE decreases by about 20% on a full drive as compared to a fresh drive. You might say 20% is too much, and I'd probably agree with you, except that ONLY sustained write performance is being affected.

    Your read speed will not decrease. Your read latency will not increase. Unless you're using your SSDs as the temp drive for a high definition video operation (And why the hell would you for that? Platter drives are far better suited to that task between sequential write speed and total storage space) then you have nothing to worry about.

    This happens on all drives, as the article title correctly states. The solution is a new command that lets the drive pre-erase blocks you're no longer using, so the odds that it has to erase-then-write as you go along are decreased. Windows 7 knows how to do this (TRIM).

    Nonetheless, it is totally overblown and your SSD will perform better than any platter based drive even when totally full.
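
    A rough back-of-the-envelope sketch of why a pool of pre-erased blocks matters for sustained writes; the timings and block counts below are invented for illustration, not measured from any real drive:

      WRITE_US = 200    # time to program an already-erased block (made-up, microseconds)
      ERASE_US = 1500   # time to erase a dirty block before programming it (made-up)

      def sustained_write_us(blocks_to_write, pre_erased_blocks):
          # Blocks that are already erased only need a program pass;
          # the rest must be erased before they can be programmed.
          clean = min(pre_erased_blocks, blocks_to_write)
          dirty = blocks_to_write - clean
          return clean * WRITE_US + dirty * (ERASE_US + WRITE_US)

      fresh = sustained_write_us(1000, pre_erased_blocks=1000)  # new or fully trimmed drive
      aged = sustained_write_us(1000, pre_erased_blocks=0)      # well-used drive, no TRIM
      print(f"fresh: {fresh / 1e6:.2f}s  aged: {aged / 1e6:.2f}s  ({aged / fresh:.1f}x slower)")

    The exact ratio is meaningless with made-up constants, but the shape of the problem is real: once the pool of pre-erased blocks runs dry, every write pays for an erase first.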

    • by Kjella ( 173770 )

      Unless you're using your SSD as the temp drive for a high-definition video operation (and why the hell would you use it for that? Platter drives are far better suited to that task, given their sequential write speed and total storage space)

      Not if you use torrents; they're very much non-sequential, since the client downloads tons of pieces all over the file. When I had a regular HDD as my OS disk it was big, so I'd download torrents to it - that always slowed the machine down. It was better to use a different disk, but leaving hundreds of GBs unused didn't seem to make sense either. With an SSD you hardly notice the torrents running; I usually download to the SSD, watch it, then move it to the file server. Everyone should get an SSD really, it's the greatest revolution sin
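
      If you want to see the sequential-versus-random gap for yourself, here's a rough sketch; the file name, file size, and write size are arbitrary, and on a platter drive the random pass will crawl while an SSD barely notices:

        import os, random, time

        PATH, SIZE, BLOCK = "testfile.bin", 256 * 2**20, 16 * 2**10  # 256 MiB file, 16 KiB writes

        def fill(offsets):
            # O_DSYNC so each write actually hits the drive instead of just the page cache
            fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
            buf = os.urandom(BLOCK)
            t0 = time.time()
            for off in offsets:
                os.pwrite(fd, buf, off)
            os.close(fd)
            return SIZE / 2**20 / (time.time() - t0)  # MiB/s

        offsets = list(range(0, SIZE, BLOCK))
        print("sequential: %.1f MiB/s" % fill(offsets))
        random.shuffle(offsets)
        print("random:     %.1f MiB/s" % fill(offsets))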

    • by AllynM ( 600515 ) * on Friday May 08, 2009 @06:58PM (#27883799) Journal

      20% is too little. I've seen drives, even SLC drives, drop by more than 50%. Only some drives bounce back properly. Others rely on TRIM to clean up their fragmentation mess.

      More importantly, some initial TRIM implementations have been botched, resulting in severe data corruption and loss:
      http://www.ocztechnologyforum.com/forum/showthread.php?t=54770 [ocztechnologyforum.com]

      I posted elsewhere regarding the fragmentation issue here:
      http://hardware.slashdot.org/comments.pl?sid=1227271&cid=27883769 [slashdot.org]

      Allyn Malventano
      Storage Editor, PC Perspective

    • by Lennie ( 16154 )

      "your SSD will perform better than any platter based drive even when totally full"

      I suggest you first read up on that:

      http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8 [anandtech.com]

      Not just any SSD: some stutter, some degrade in very bad ways. I would say "if you choose wisely, your SSD will perform better than any platter based drive - but you won't be buying the cheapest SSD," or something of that nature.

      Good SSDs are very expensive in comparison to HDDs.

  • by LodCrappo ( 705968 ) on Friday May 08, 2009 @04:58PM (#27882795)

    http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=4 [anandtech.com]

    Anandtech has a very detailed article that explains all about this and some ways to recover the lost speed (sometimes).

  • by rascher ( 1069376 ) on Friday May 08, 2009 @05:00PM (#27882817)

    ...you mean to tell me that fragmentation *reduces* the performance of storage???

    • by Tx ( 96709 ) on Friday May 08, 2009 @05:17PM (#27882997) Journal

      ...you mean to tell me that fragmentation *reduces* the performance of storage???

      Fragmentation on hard disks reduces performance because of the time it takes to physically move the disk heads around. There are no physical heads to move in an SSD, so it's perfectly reasonable to assume that that particular mechanism of performance loss won't occur, and therefore that it's a non-issue. I did a small test [googlepages.com] years ago on the effects of flash-memory fragmentation in a PDA, and I, along with most people I discussed it with, was quite surprised by the results at the time. I never got a good technical explanation of why the performance hit was so large. I doubt it's the same mechanism at work as in modern SSDs, but it's sort of relevant anyway.

      • by beelsebob ( 529313 ) on Friday May 08, 2009 @05:28PM (#27883105)

        The reason the performance hit is large is that writing to SSDs is done in blocks. Fragmentation causes partially used blocks. When that happens, the SSD must read the block, combine the already-present data with the data being written, and write the block back, rather than simply overwriting it. That's slow.

        • Your explanation of the cause of the slowdown is correct, but it has very little if anything to do with fragmentation. They are two separate issues. If anything in the block needs to be re-written, regardless of whether it is contiguous or not, then the whole block will be re-written. There is no getting around it.

          Since the memory controllers in SSDs deliberately distribute your data across the flash memory, "fragmentation" in its usual sense is pretty meaningless.
          • But if a certain write needs to modify 100 blocks instead of 10 due to fragmentation, fragmentation is a major performance factor.

      • by AHuxley ( 892839 )
        Memory controllers cost real cash to develop for Windows, Linux, and Macs.
        It's a real skill needing real support. The cheaper units are in a race to the bottom with whatever they can buy off the shelf.
        Other firms try to separate the high end from pro desktop users.
        If you want a memory controller that works, you will pay.
        No brand is ready to upset that mix at this time.
        There's old stock to sell before they can hire professionals at the low end.
        At the top end, why end a good thing?
  • I purchased a Lenovo X301 with a 120 GB flash drive last September and have been nothing but pleased with the performance of the drive. I boot Vista and also run openSUSE in a vm. The drive speed is high and consistent. The drive in the X301 is supposed to have better controllers than some, and it certainly does better than a USB stick. Any theoretical problems with write speed don't appear to me to affect typical real world use.
    • I've used it as a desktop drive for four months so far, and, using hdparm -T as a benchmark (I know, I know, but it's on a desktop!) it has the same throughput as it ever did. I download torrents to it.

      It would copy completed torrents to a platter-drive at 60MB/sec when new and will still do 60MB/sec now.

      I don't see a problem. Opera/Firefox open in less than one second (by wristwatch!) instead of ten on a platter drive (Pentium-M, 1.6 GHz). The whole computer seems more responsive -- even modern in terms of

      • by Lennie ( 16154 )

        hdparm -T? How is that an indication of your SSD's performance?

        From the manual page:

        "This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test."

    • I purchased a Lenovo X301 with a 120 GB flash drive last September and have been nothing but pleased with the performance of the drive. I boot Vista and also run openSUSE in a vm. The drive speed is high and consistent. The drive in the X301 is supposed to have better controllers than some, and it certainly does better than a USB stick.

      Any theoretical problems with write speed don't appear to me to affect typical real world use.

      I also have an SSD in a MacBook Air, and one thing I am very pleased with is the consistent speed of the SSD (less so the Air itself, with its prevalent heating issues never fixed by Apple). I assume the entire issue is way overblown: there might be some degradation, but given that it occurs only during continuous writes, which is a rare situation, you won't notice it in real-world use. In fact, normal operation is usually a mixture of reads, random writes, and computation cycles, and the advantage over normal HDs really is huge!

  • by Anonymous Coward on Friday May 08, 2009 @05:13PM (#27882927)

    One that can relocate MFTs, most used files and swap to the chips on the outer edge of the circuit board, where the throughput is faster.

    • Re: (Score:3, Funny)

      by TheRaven64 ( 641858 )
      This technique requires you to spin your flash chips very fast, which is a feature only supported on enterprise-grade SSDs.
  • NAND is the culprit (Score:5, Informative)

    by thewesterly ( 953211 ) on Friday May 08, 2009 @05:14PM (#27882941)

    The fundamental problem with NAND-based solid-state drives is that they use NAND flash memory--the same stuff that you find in USB flash drives, media cards, etc.

    The advantage of NAND is that it is both ubiquitous and cheap. There are scads of vendors who already make flash-memory products, and all they need to do to make an SSD is slap together a PCB with some NAND chips, a SATA 3Gb/s interface, a controller (usually incorporating some sort of wear-leveling algorithm), and a bit of cache.

    The disadvantages of NAND include limited read/write cycles (typically ~10K for multi-level cell drives) and the fact that writing new data to a block involves copying the whole block to cache, erasing it, modifying it in cache, and rewriting it.

    This isn't a problem if you're writing to blank sectors. But if you're writing, say, 4KB of data to a 512KB block that previously contained part of a larger file, you have to copy the whole 512KB block to cache, edit it to include the 4KB of data, erase the block, and rewrite it from cache. Multiply this by a large sequence of random writes, and of course you'll see some slowdown.
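
    To put numbers on that worst case, using the 4KB/512KB figures above (real drives vary, and good controllers avoid much of this by remapping writes elsewhere):

      BLOCK_KB = 512   # erase-block size from the example above
      WRITE_KB = 4     # size of the incoming host write

      # Worst-case read-modify-write: read the whole block out, erase it,
      # then program the whole block back with the 4 KB merged in.
      flash_traffic_kb = BLOCK_KB + BLOCK_KB        # read + reprogram
      amplification = flash_traffic_kb / WRITE_KB
      print(f"{WRITE_KB} KB from the host -> {flash_traffic_kb} KB of flash work "
            f"(~{amplification:.0f}x write amplification, plus an erase cycle)")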

    SSDs will always have this problem to some degree as long as they use the same NAND flash architecture as any other flash media. For SSDs to really effectively compete with magnetic media they need to start from scratch.

    Of course, then we wouldn't have the SSD explosion we see today, which is made possible by the low cost and high availability of NAND flash chips.

    • by bbn ( 172659 )

      A smart device might do things a bit differently. It will not do your described cycle of read-block/change-data/erase/write-same-block. Instead it will buffer up changes until it has a full block and then write it to a _different_ block - one that is already pre-erased. There is no need to store sectors in the original order; just keep a table of sector locations.

      A small capacitor makes it safe to delay writing, by storing enough power to do an emergency flush during a power loss.

      I am sure makers of these
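
      A minimal sketch of that remapping idea - a toy flash translation layer. The block/page sizes and the structure are invented for illustration and aren't how any particular controller actually works:

        PAGES_PER_BLOCK = 128

        class ToyFTL:
            def __init__(self, num_blocks):
                self.mapping = {}                      # logical sector -> (block, page)
                self.erased = list(range(num_blocks))  # pool of pre-erased blocks
                self.open_block, self.next_page = self.erased.pop(), 0

            def write_sector(self, sector):
                # Always append into a pre-erased block; never rewrite in place.
                self.mapping[sector] = (self.open_block, self.next_page)
                self.next_page += 1
                if self.next_page == PAGES_PER_BLOCK:  # block full: grab a fresh one
                    self.open_block, self.next_page = self.erased.pop(), 0
                # (a real FTL also garbage-collects blocks full of stale pages)

        ftl = ToyFTL(num_blocks=16)
        for s in (7, 3, 7, 42):                        # rewriting sector 7 just remaps it
            ftl.write_sector(s)
        print(ftl.mapping)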

    • by eyepeepackets ( 33477 ) on Friday May 08, 2009 @05:44PM (#27883225)

      Samsung has begun manufacturing its PRAM, which promises to be a replacement for NAND:

      http://www.engadget.com/2009/05/05/samsungs-pram-chips-go-into-mass-production-in-june/ [engadget.com]

      Wikipedia writeup on PRAM:

      http://en.wikipedia.org/wiki/Phase-change_memory [wikipedia.org]

      This type of "flash" memory will make much better SSD drives in the near future.

      • by KonoWatakushi ( 910213 ) on Friday May 08, 2009 @06:45PM (#27883703)

        This is excellent news. As you allude, PRAM will finally make good on the promise of solid state storage. It will allow for both higher reliability and deterministic performance, without the ludicrous internal complexity of Flash based devices.

        I can't help but cringe every time I hear the terms Flash and SSD used interchangeably. If anything, the limitations inherent to Flash devices described by the GP mean they have more in common with a hard disk, as they also have an inherent physical "geometry" which must be considered.

        PRAM will basically look like a simple linear byte array, without all the nonsense associated with Flash. Even if Flash retains a (temporary) advantage in density, it will never compete with hard disks on value for bulk storage, nor will it ever compete with a proper SSD on a performance basis. It makes for a half-assed "SSD", and I can't wait for it to disappear.

    • Power management can turn off sections of the flash memory. This is good, of course, for reducing battery consumption in laptops and netbooks. But the process of turning one section's power off and then another section's power on can slow down access. With very random access, expect that to happen a lot. So random hopping around the storage, while not as slow as on a mechanical hard drive, will be slower than sequential access.

      Add wear leveling into the picture and you have a layer of memory transla

    • Re: (Score:3, Informative)

      by BikeHelmet ( 1437881 )

      SSDs will always have this problem to some degree as long as they use the same NAND flash architecture as any other flash media. For SSDs to really effectively compete with magnetic media they need to start from scratch.

      Of course, then we wouldn't have the SSD explosion we see today, which is made possible by the low cost and high availability of NAND flash chips.

      Or...I dunno, maybe they could create a filesystem specifically for NAND flash [wikipedia.org].

      http://en.wikipedia.org/wiki/JFFS2 [wikipedia.org]

      • by pyite ( 140350 )

        Or...I dunno, maybe they could create a filesystem specifically for NAND flash.

        It makes much more sense for existing filesystems to include awareness of SSD and use them accordingly. ZFS is doing this; eventually others will, too.

    • So according to what you're saying, and what the Anandtech article said, the headline is just plain Wrong!

      http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=1 [anandtech.com]

      The slowdown is particular to NAND flash. DRAM-based solid-state drives (the Gigabyte i-RAM and ACARD ANS-9010) don't suffer from this phenomenon, yet they are definitely also solid state drives.

    • How about using smaller blocks?
    • Re: (Score:3, Interesting)

      by Lord Ender ( 156273 )

      Is there a fundamental reason why they can't just shrink the block size?

      • by svirre ( 39068 )

        It increases the cost of the flash dies, or it reduces performance (by interleaving across fewer dies).

  • Not News (Score:3, Interesting)

    by sexconker ( 1179573 ) on Friday May 08, 2009 @05:28PM (#27883099)

    This is old news, and both the Intel drives and the OCZ Vertex have updated firmwares/controllers that remedy (but do not completely solve) the issue.

    When we get support for TRIM, it will be even less of an issue, even on cheapo drives with crappy controllers/firmware.

    The issue will never be completely solved, because of how SSDs arrange flash memory and because flash memory can't really be overwritten in a single pass.

    See anandtech's write up if you want details.
    http://www.anandtech.com/printarticle.aspx?i=3531 [anandtech.com]

    • Meh, if you look at the random-write data rate of the OCZ drive (which uses a controller chip from a third-party company), those SSDs totally obliterate the other drives in write speed, save the expensive Intel ones.

      We'll just wait a bit and buy either an OCZ drive or one from another party that uses that controller with stable firmware. Currently you have to be on the lookout for bad/old firmware from OCZ, or buy a drive that messes up write performance, or a darn expensive one from Intel.

      Of course, if you mostly start

      • OCZ Vertex drives have new controllers with good firmware.
        Get your facts right.

          • The Anandtech article is dated 18 March of this year. They'd just received some new firmware from OCZ. Are you saying that all the drives in the channel are already flashed with this firmware? Otherwise, isn't it more prudent to say that OCZ drives may have new controllers with good firmware? Although with the current run on Vertex drives, the probability of getting one with the correct firmware is increasing. My retailer, however, does not list the BIOS version of the thing.

          Personally I'll just wait a bit longer until

          • IMPORTANT NOTE: To continually improve and optimize the Vertex SSD for the latest platforms OCZ will constantly release new firmware updates. Detailed firmware information can be found on our support forums and a step-by-step flashing guide is available here

            All VERTEX drives contain the good controller.
            Just upgrade the firmware when you get it if you're worried. Either way, the good controller with the "bad" firmware is still awesome.

            The OTHER OCZ drives are using the older crappy controller all other cons

          • Found it.

            As far as I know, this is one of the only reviews (if not the only) at the time of publication that's using the new Vertex firmware. Everything else is based on the old firmware which did not make it to production. Keep that in mind if you're looking to compare numbers or wondering why the drives behave differently across reviews. The old firmware never shipped thanks to OCZ's quick acting, so if you own one of these drives - you have a fixed version.

        • Rereading my post it is a bit strong though. I'll have some coffee and let my mood and writing skills get up to par again.

          What I was trying to say is that you may not get a drive with the latest firmware if you are shopping at your local store. Even if you can flash the new firmware right away, it *will* cost you time and inconvenience, and the chances of another bug popping up are much higher than when you buy, for instance, a hard drive.

          Sorry if I've offended you in any way.

    • Re:Not News (Score:5, Informative)

      by AllynM ( 600515 ) * on Friday May 08, 2009 @07:05PM (#27883835) Journal

      Intel has solved theirs about 95%, but they are helped by their write speeds being limited to 80 MB/sec. With the new firmware, it is *very* hard to get an X25-E to drop below its rated write speed.

      http://www.pcper.com/article.php?aid=691&type=expert&pid=5 [pcper.com]

      OCZ has not yet solved it. They currently rely on TRIM, and in my testing that alone is not sufficient to correct the fragmentation buildup. IOPS falls off in this condition as well.

      Allyn Malventano
      Storage Editor, PC Perspective

      • Re: (Score:3, Informative)

        by AllynM ( 600515 ) *

        Correction to my last. I was speaking of X25-M, not E.

      • http://www.pcper.com/article.php?aid=691&type=expert&pid=5 [pcper.com]

        OCZ has not yet solved it. They currently rely on TRIM, and in my testing that alone is not sufficient to correct the fragmentation buildup. IOPS falls off in this condition as well.

        Sorry, but that article does not say anything about Vertex drives or about firmware updates for them, and I've seen nothing in the Anandtech article requiring you to use TRIM commands to get the more balanced performance.

        As a storage editor, I would like you to point to an article refuting Anand's claims about the Vertex. Mods, someone just chiming in as an editor of a PC mag should not get free mod points, even if they provide a link. Although the part about the Intel drives is in

    • I agree that it is not news (the topic, including performance reviews, was thoroughly covered by Tom's Hardware about two months ago), but the OCZ Vertex did not in fact "update" the controller. The new top-of-the-line Intel SSD does indeed have a new controller that helps to minimize (but not eliminate) the issue. OCZ, on the other hand, just threw in another of the old controllers and divided the memory between them to increase average throughput. The problem is still there, but it is somewhat less notice
      • What I understood is that OCZ relies on a (single) controller which is more like a regular CPU underneath and can be updated through firmware. The latest firmware should be able to solve the problem, but you need to back up and restore all your data for it to work. See the Anandtech article for more details.

        If I remember correctly, the dual-controller approach applies to other SSDs using the Micron controller, not the Vertex.

  • Tom's Hardware (Score:4, Insightful)

    by Jane Q. Public ( 1010737 ) on Friday May 08, 2009 @09:54PM (#27884899)
    wrote a thorough review of SSDs, including the X25, complete with a full technical explanation of exactly what causes the performance degradation.

    About 2 months ago.
  • However...

    As long as they can get read, write, random-read, and random-write performance to be substantially better than a hard disk across the board, I don't care too much.
    Example: many, many years ago, on my 286, I unzipped 1,800 .ico files (754 bytes each?) from a zip file on a floppy disk.
    That took about an hour to do.
    I then learned about 'smartdrv.sys' (or was it .EXE?).
    The time to do it went from an hour to about 30 to 60 seconds.

    The way FAT16 worked on my machine with a 20 MB drive and a 286 CPU

  • Once again I'm shocked by how terrible Slashdot is for anything hardware related. Just as has been said every time anyone has mentioned these pathetic articles from magazines like Computerworld (!), THIS ISN'T NEWS - ANANDTECH EXPLAINED IT VERY CLEARLY MONTHS AGO.

    http://www.anandtech.com/storage/showdoc.aspx?i=3531 [anandtech.com]

    Even without reading an article, I'm surprised this isn't intuitively obvious to most Slashdot users. I'm also surprised that the majority of hardware articles posted here come from jokes like C

"The following is not for the weak of heart or Fundamentalists." -- Dave Barry

Working...