
New Middleware Promises Dramatically Higher Speeds, Lower Power Draw For SSDs

mrspoonsi (2955715) writes "A breakthrough in SSD technology could mean drastic performance increases by overcoming one of the major issues with the memory type. Currently, data cannot be directly overwritten on the NAND chips used in these devices: files must be written to a clean area of the drive while the old area is erased. This eventually fragments data and degrades the drive's performance and lifespan over time. Now a Japanese team at Chuo University has overcome an issue as old as the technology itself. Officially unveiled at the 2014 IEEE International Memory Workshop in Taipei, their new middleware for the drives controls how data is written to and stored on the device. It uses what they call a 'logical block address scrambler,' which prevents data from being written to a new 'page' on the device unless absolutely required; instead, the data is placed in a block scheduled to be erased and consolidated in the next sweep. This means significantly less behind-the-scenes file copying, which translates into higher performance and lower power draw."
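The summary describes the idea only at a high level. As a rough sketch of the bookkeeping involved, here is a toy log-structured flash translation layer (FTL) that shows where the "behind-the-scenes file copying" comes from. Everything in it — the `ToyFTL` class, its greedy garbage-collection policy, and the page/block sizes — is an illustrative assumption of mine, not the Chuo University middleware:

```python
# Toy model of why out-of-place NAND writes cause background copying.
# All names, sizes, and policies here are illustrative assumptions,
# not the researchers' actual middleware.

BLOCK_PAGES = 4  # pages per erase block (real SSDs use far more)

class ToyFTL:
    """Minimal log-structured flash translation layer (FTL)."""

    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        # Every physical page (block, page) starts out free.
        self.free = [(b, p) for b in range(num_blocks) for p in range(BLOCK_PAGES)]
        self.map = {}          # logical page -> current physical page
        self.live = {}         # physical page -> logical page it holds
        self.host_writes = 0   # writes requested by the host
        self.flash_writes = 0  # pages actually programmed (incl. GC copies)

    def write(self, lpn):
        self.host_writes += 1
        self._program(lpn)

    def _program(self, lpn):
        if not self.free:
            self._collect_garbage()
        ppn = self.free.pop(0)
        old = self.map.get(lpn)
        if old in self.live:   # the previous copy becomes stale
            del self.live[old]
        self.map[lpn] = ppn
        self.live[ppn] = lpn
        self.flash_writes += 1

    def _collect_garbage(self):
        # Greedy policy: erase the block with the fewest live pages.
        live_count = {b: 0 for b in range(self.num_blocks)}
        for (b, _p) in self.live:
            live_count[b] += 1
        victim = min(live_count, key=live_count.get)
        survivors = [ppn for ppn in self.live if ppn[0] == victim]
        # "Erase" the victim block: all of its pages become free again.
        self.free.extend((victim, p) for p in range(BLOCK_PAGES))
        # Relocate still-live pages -- this is the hidden copying that
        # inflates write amplification and burns power.
        for ppn in survivors:
            self._program(self.live.pop(ppn))

# Repeatedly overwrite a 6-page working set on a 16-page device.
ftl = ToyFTL(num_blocks=4)
for i in range(100):
    ftl.write(i % 6)
wa = ftl.flash_writes / ftl.host_writes
print(f"host writes:  {ftl.host_writes}")
print(f"flash writes: {ftl.flash_writes}")
print(f"write amplification: {wa:.2f}")
```

Every garbage-collection pass here must copy the victim block's live pages before erasing it. A scheme that steers soon-to-be-stale data into blocks already slated for erase — which is my reading of what the article's "logical block address scrambler" does — would shrink that `survivors` list and, with it, the extra flash writes.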
  • Compared To What? (Score:5, Insightful)

    by rsmith-mac ( 639075 ) on Saturday May 24, 2014 @08:00AM (#47082403)

    I don't doubt that the researchers have hit on something interesting, but it's hard to make heads or tails of this article without knowing what algorithms they're comparing it to. The major SSD controller makers - Intel, SandForce/LSI, and Samsung - already use some incredibly complex scheduling algorithms to collate writes and handle garbage collection. At first glance this does not sound significantly different from what is already being done. So it would be useful to know just how the researchers' algorithm compares to modern SSD algorithms in both design and performance. TFA as it stands is incredibly vague.
