Data Storage

All Solid State Drives Suffer Performance Drop-off

Lucas123 writes "The recent revelation that Intel's consumer X25-M solid state drive had a firmware bug that drastically affected its performance led Computerworld to question whether all SSDs can suffer performance degradation due to fragmentation issues. It seems vendors are well aware that the specifications listed on drive packaging represent burst speeds achieved only during sequential writes; after use, performance drops markedly over time. Drives with better controllers tend to level out, but others continue to suffer performance problems. Still not fully baked are benchmarking standards, expected out later this year from several industry organizations, that will eventually compel manufacturers to list actual performance for sequential and random reads and writes, as well as the drive's expected lifespan under typical conditions."
  • To test (Score:5, Interesting)

    by Fri13 ( 963421 ) on Friday May 08, 2009 @05:51PM (#27882681)

    Just put SSDs into Usenet or torrent servers and use them as /var/log mount points... you'll soon see real-world tests of how well they work compared to old-fashioned hard drives!
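
    A minimal sketch of that kind of workload (hypothetical mount point and record size; assumes the drive under test is already mounted and writable) — keep appending syslog-sized records and watch whether sustained throughput decays over time:

        # log_stress.py - hypothetical /var/log-style append workload for an SSD mount
        import os
        import time

        MOUNT = "/mnt/ssd-under-test"    # assumed test mount point
        RECORD = b"x" * 512              # small, log-sized record
        PASS_BYTES = 256 * 1024 * 1024   # report throughput every 256 MB written

        path = os.path.join(MOUNT, "stress.log")
        with open(path, "ab", buffering=0) as f:
            while True:
                start, written = time.time(), 0
                while written < PASS_BYTES:
                    f.write(RECORD)
                    os.fsync(f.fileno())   # force each record to the device, like a busy syslog
                    written += len(RECORD)
                mb_s = (PASS_BYTES / (1024 * 1024)) / (time.time() - start)
                print(f"{time.ctime()}: {mb_s:.1f} MB/s sustained")   # decay here = degradation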

  • by blahplusplus ( 757119 ) on Friday May 08, 2009 @05:55PM (#27882745)

    ... these things aren't going to be a big deal in the long run; I mean, who wasn't expecting some amount of technological immaturity? We shouldn't forget, though, that even with its immaturity it's still much faster than hard disk drives. But the SATA interface controller was not designed to handle such high speeds, not to mention that much software is neither geared nor optimized for SSD usage.

    Still, prices have come down considerably on many SSDs over the last 6 months. I was thinking about picking up an X25-M for a mere $350; it would be worth it (to me) just to reduce load times in games like Empire: Total War, IMHO.

  • by bzzfzz ( 1542813 ) on Friday May 08, 2009 @06:10PM (#27882897)
    I purchased a Lenovo X301 with a 120 GB flash drive last September and have been nothing but pleased with the performance of the drive. I boot Vista and also run openSUSE in a VM. The drive speed is high and consistent. The drive in the X301 is supposed to have a better controller than some, and it certainly does better than a USB stick. Any theoretical problems with write speed don't appear to me to affect typical real-world use.
  • Not News (Score:3, Interesting)

    by sexconker ( 1179573 ) on Friday May 08, 2009 @06:28PM (#27883099)

    This is old news, and both the Intel drives and the OCZ Vertex have updated firmwares/controllers that remedy (but do not completely solve) the issue.

    When we get support for TRIM, it will be even less of an issue, even on cheapo drives with crappy controllers/firmware.

    The issue won't ever be completely solved, because of how SSDs arrange flash memory and because flash memory can't really be overwritten in a single pass.

    See AnandTech's write-up if you want details.
    http://www.anandtech.com/printarticle.aspx?i=3531 [anandtech.com]
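
    For the impatient, here's a toy model (my own simplification, not code from the article) of the single-pass problem: flash programs in small pages but erases only in whole blocks, so changing one already-written page forces a read-erase-reprogram of everything live in its block unless pre-erased blocks are available:

        # Toy model: flash pages program individually; erase only works per block.
        PAGES_PER_BLOCK = 64

        class Block:
            def __init__(self):
                self.pages = [None] * PAGES_PER_BLOCK   # None = erased, programmable

            def overwrite_page(self, idx, data):
                """Return how many physical page programs one logical write costs."""
                if self.pages[idx] is None:
                    self.pages[idx] = data
                    return 1                        # erased page: one cheap program
                saved = list(self.pages)            # read out every live page...
                saved[idx] = data                   # ...apply the change in RAM...
                self.pages = [None] * PAGES_PER_BLOCK   # ...erase the whole block...
                programs = 0
                for i, page in enumerate(saved):    # ...and reprogram the survivors
                    if page is not None:
                        self.pages[i] = page
                        programs += 1
                return programs

        blk = Block()
        print(blk.overwrite_page(0, "v1"))          # 1: fresh block, no penalty
        for i in range(1, PAGES_PER_BLOCK):
            blk.overwrite_page(i, "data")           # fill the rest of the block
        print(blk.overwrite_page(0, "v2"))          # 64: one logical write, 64 programs

    TRIM helps because it tells the controller which pages are dead, so the block erase can happen ahead of time instead of on the critical path of a write.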

  • by KonoWatakushi ( 910213 ) on Friday May 08, 2009 @07:45PM (#27883703)

    This is excellent news. As you allude, PRAM will finally make good on the promise of solid state storage. It will allow for both higher reliability and deterministic performance, without the ludicrous internal complexity of Flash based devices.

    I can't help but cringe every time I hear the terms Flash and SSD used interchangeably. If anything, the limitations inherent to Flash devices described by the GP mean they have more in common with a hard disk, as they also have an inherent physical "geometry" which must be considered.

    PRAM will basically look like a simple linear byte array, without all the nonsense associated with Flash. Even if Flash retains a (temporary) advantage in density, it will never compete with hard disks on value for bulk storage, nor will it ever compete with a proper SSD on a performance basis. It makes for a half-assed "SSD", and I can't wait for it to disappear.

  • Re:To test (Score:2, Interesting)

    by AHuxley ( 892839 ) on Friday May 08, 2009 @08:32PM (#27884019) Journal
    I would pack a drive until only about 8% of it is free:
    Fill it with applications, an OS (Mac, Win, Linux across 3 drives), mp3s, lots of JPEGs, text files, and short and long movie files (2 MB to 650 MB).
    Get the RAM down to 1-2 GB, let the OSes thrash as they page in/out, and watch the 3 computers over a few weeks.
    Automate some HD-intensive tasks on the small amount of space left, and let them run 24/7.
    The hope is that Mac and Linux will store files in different ways and use the little remaining space in strange ways too. We can hope OS X and the lack of large contiguous chunks of free space will stress the OS and SSD in fun ways.
    Windows will do what is expected.
    Then run the HD 'tests' again.
    Something should show up on a graph across the 3 different OSes.
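
    A sketch of what the automated, HD-intensive 24/7 task could look like (hypothetical directory and sizes; it churns the little remaining free space and prints rewrite times to graph later):

        # churn_and_measure.py - hammer leftover free space, log write times 24/7
        import os
        import random
        import time

        WORKDIR = "/mnt/ssd-under-test/churn"      # assumed dir on the nearly-full drive
        FILE_COUNT, FILE_SIZE = 200, 1024 * 1024   # 200 x 1 MB churn files

        os.makedirs(WORKDIR, exist_ok=True)
        paths = [os.path.join(WORKDIR, f"f{i:03d}.bin") for i in range(FILE_COUNT)]
        for p in paths:                            # initial fill
            with open(p, "wb") as f:
                f.write(os.urandom(FILE_SIZE))

        while True:                                # run for weeks; graph the output
            start = time.time()
            for p in random.sample(paths, 50):     # rewrite 50 random files per pass
                with open(p, "wb") as f:
                    f.write(os.urandom(FILE_SIZE))
                    f.flush()
                    os.fsync(f.fileno())
            print(f"{time.ctime()}: 50 MB rewritten in {time.time() - start:.2f} s")
            time.sleep(60)                         # one pass per minute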
  • by Lord Ender ( 156273 ) on Friday May 08, 2009 @08:53PM (#27884173) Homepage

    Is there a fundamental reason why they can't just shrink the block size?

  • by dgatwood ( 11270 ) on Friday May 08, 2009 @09:48PM (#27884491) Homepage Journal

    Come again? I'm not aware of SSDs doing any mapping at that level of granularity; to do so would mean that a 256 GB hard drive would require something like 2 GB of storage (assuming an even 32-bit LBA) just to hold the sector mapping table, and that would have to be NOR flash or single-level NAND flash, making it really freakishly expensive.

    All SSDs that I'm aware of work like this: you have a logical block address (sector number). These are grouped (contiguously) into pages (those 128 KB "slices" you referred to). If you have a 128 KB page size, then sectors 0-255 would be logical page 0, sectors 256-511 would be logical page 1, and so on.

    These logical pages are then arbitrarily mapped to physical pages. This results in a mapping table that is a much more plausible 8 megabytes in size for a 256 GB flash drive.

    Thus, any fragmentation within a flash page is caused entirely by the filesystem being fragmented, not by anything happening in the flash controller. The performance degradation from full flash has nothing to do with fragmentation. It is caused by the flash controller having to erase an already-used page before it writes to it. The only way to avoid that is to erase unmapped spare pages ahead of time.
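
    To put rough numbers on that (my arithmetic, using the parent's assumptions of 512-byte sectors, 128 KB pages, and 4-byte table entries):

        # Mapping-table size: per-sector vs. per-page mapping on a 256 GB SSD
        SECTOR = 512
        PAGE = 128 * 1024
        DRIVE = 256 * 1024**3
        ENTRY = 4                          # bytes per entry (32-bit LBA assumption)

        sectors = DRIVE // SECTOR          # 536,870,912 sectors
        pages = DRIVE // PAGE              # 2,097,152 pages
        print(sectors * ENTRY / 1024**3, "GB")   # 2.0 GB: per-sector table, impractical
        print(pages * ENTRY / 1024**2, "MB")     # 8.0 MB: per-page table, plausible

        def logical_page(sector_number):
            """Contiguous grouping: sectors 0-255 -> page 0, 256-511 -> page 1, ..."""
            return sector_number // (PAGE // SECTOR)   # 256 sectors per 128 KB page

        print(logical_page(255), logical_page(256))    # 0 1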

  • Re:Not *ALL* (Score:1, Interesting)

    by Anonymous Coward on Saturday May 09, 2009 @08:01PM (#27892601)
    I was curious and got through TFA, but instead of an analysis I found a lot of dissertation about theories and one single freaking test of one single drive, the Intel X25-M, the only drive for which this issue was reported. And even that test of performance degradation over time was run just a couple of times, without any workload in between.

    worst. article. ever.

"If it ain't broke, don't fix it." - Bert Lantz

Working...