Garbage Collection Algorithms Coming For SSDs
MojoKid writes "A common concern with the current crop of Solid State Drives is the performance penalty associated with block-rewriting. Flash memory is composed of cells that usually contain 4KB pages, arranged in blocks of 512KB. When a cell is unused, data can be written to it relatively quickly. But if a cell already contains some data, even if it fills only a single page in the block, the entire block must be re-written. This means that whatever data is already present in the block must be read, then combined with or replaced by the new data, and the entire block re-written. This process takes much longer than simply writing data straight to an empty block. This isn't a concern on fresh, new SSDs, but over time, as files are written, moved, deleted, or replaced, many blocks are left holding what is essentially orphaned or garbage data, and long-term performance degrades because of it. To mitigate this problem, virtually all SSD manufacturers have incorporated, or soon will incorporate, garbage collection schemes into their SSD firmware which actively seek out and remove the garbage data. OCZ, in combination with Indilinx, is poised to release new firmware for their entire line-up of Vertex Series SSDs that performs active garbage collection while the drives are idle, in order to restore performance to like-new condition, even on a severely 'dirtied' drive."
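The read-modify-erase-rewrite cycle the summary describes can be sketched in a few lines. This is a toy model, not any vendor's firmware; the sizes and the list-of-pages representation are purely illustrative:

```python
# Hypothetical sketch of why rewriting a partially-used flash block is slow.
# A block is modeled as a list of pages; None marks an unwritten page.

PAGE_SIZE = 4 * 1024          # 4KB pages
PAGES_PER_BLOCK = 128         # 512KB block = 128 pages

def write_page(block, page_index, data):
    """Write one page, emulating flash's read-merge-erase-rewrite penalty."""
    ops = []
    if any(p is not None for p in block):      # block already holds data
        ops.append("read-whole-block")          # read the existing 512KB
        block[page_index] = data                # merge the new page in
        ops.append("erase-block")               # erase is the slow step
        ops.append("program-whole-block")       # rewrite all 512KB
    else:
        block[page_index] = data                # empty block: program directly
        ops.append("program-page")
    return ops

empty = [None] * PAGES_PER_BLOCK
print(write_page(empty, 0, b"x"))   # fast path: ['program-page']
print(write_page(empty, 1, b"y"))   # slow path: read, erase, rewrite the block
```

The second write pays the full penalty even though only one page changed, which is exactly the degradation the garbage collection firmware is meant to avoid.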
The logical next step... (Score:4, Insightful)
A weakness was found in first generation drives, the second generation drives fixed it.
Film at 11.
Re: (Score:1, Informative)
This is the third generation, the second was to fix speed degradation through fragmentation.
Re:The logical next step... (Score:5, Funny)
This is the third generation, the second was to fix speed degradation through fragmentation.
And the fourth generation will fix SSDs' short lifespans caused by massive GC activity.
Re: (Score:2)
Good point, I'm worried about that too. SSDs already have poor life; I'm not so sure about the garbage collection idea.
Re: (Score:3, Insightful)
How would this make a difference? The blocks would have to be wiped out next time they are written to anyway, the only difference here is that the blocks are cleared during idle time so you don't have to wait for it.
Re: (Score:2)
Block A is full on day 100.
10 cells are invalid (i.e. ready to be deleted), but 99% of the blocks still have space.
Those 10 cells get wiped clean, which allows for 10 new writes. The block fills up again on day 120 with 10 cells invalid...
By day 200 there aren't any free blocks left.
Block A has 40 cells that are invalid.
They all get deleted. Since there are now 40 free cells, this lasts until day 280. Say on day 300 data gets deleted and 40 cells are free again...
So in one situation you are flipping every 20 days, in the other every 100.
Re:The logical next step... (Score:4, Interesting)
Anything measured in rewrites over hours or larger time spans is not (or shouldn't be) that much of a problem for modern flash. Someone calculated that you'd have to be reflashing a particular device every 15 minutes for 5 years to reach the flash's rewrite limit. That was several years ago. (It may have been 5 minutes rather than 15, but I'll give the more conservative number. This number appears to be from 2000 or 2001, as the device was the Agenda VR3, dating from about then.)
Assuming it's as good as the flash from that example, rewriting every hour results in 20 years. I don't know about you, but I don't have many hard drives from 20 years ago.
Now, if it's rewriting all the time, that could go down drastically, and quality might be different, but every 20 days shouldn't be a problem unless you've got really really crappy flash, by the standards of 9 years ago.
Re: (Score:2)
Where are you getting that from? To the best of my knowledge 10,000 rewrites is pretty standard except for the Intel X25e which is 68k.
There are 10,000 minutes in a week (roughly). You can burn through it very quickly. Or if you were to do it every 15 minutes you would burn through the rewrites in under 4 months and every 5 minutes in a bit over a month.
You do have a good point, though, that I hadn't thought about. If I expect the drive to last 5 years, I can do 5 rewrites a day and still come in under 10,000.
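The back-of-envelope math in this subthread is easy to check. The 10,000-cycle figure is the commenter's assumption; real endurance ratings vary widely, and wear leveling spreads writes across many blocks:

```python
# Endurance arithmetic using the thread's assumed 10,000 rewrite cycles.

CYCLES = 10_000

def lifetime_days(rewrites_per_day):
    """Days until a single block hits the cycle limit at a given rewrite rate."""
    return CYCLES / rewrites_per_day

# Rewriting one block every 15 minutes = 96 rewrites/day
print(round(lifetime_days(96)))    # ~104 days, i.e. under 4 months
# 5 rewrites/day stretches the same budget much further
print(round(lifetime_days(5)))     # 2000 days, about 5.5 years
```

This matches the thread's claims: every-15-minutes burns the budget in under 4 months, while 5 rewrites a day comfortably outlasts a 5-year drive lifetime.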
Re: (Score:2)
I looked this up and 10k is out of date. It looks like 100k - 2m is the norm for new drives. That's probably fine for all but large-scale data servers if you include wear leveling. People just don't flip enough data often enough.
Re: (Score:2)
Assuming it's as good as the flash from that example, rewriting every hour results in 20 years. I don't know about you, but I don't have many hard drives from 20 years ago.
It isn't as good. Back then the only kind of flash was SLC which gets roughly 1 million writes before death. Most consumer-grade flash drives today use MLC which is generally only good for an order of magnitude less writes.
Re: (Score:2)
Why not use indirection? (Score:2)
You'd need a separate memory to keep a table of pointers, just like an object table in some OO language implementations. This way, you wouldn't need Garbage Collection so much as just plain old compaction. The extra level of indirection wouldn't be so bad, because it's Flash -- random low-latency access is what it's good at.
(The separate memory would have a granularity of just 64 bits, so rewriting just one pointer will have less overhead.)
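The indirection idea can be sketched as a tiny logical-to-physical map. This is a hypothetical toy (the class and method names are made up), but it shows why an "overwrite" through a remap table never needs an in-place erase:

```python
# Minimal sketch of the pointer-table idea: logical sectors map through a
# table to physical flash slots, so an overwrite just writes to a fresh
# slot and updates one small pointer. Compaction would reclaim old slots.

class RemapFlash:
    def __init__(self, n_physical):
        self.table = {}                      # logical sector -> physical slot
        self.free = list(range(n_physical))  # pre-erased physical slots
        self.data = {}                       # physical slot -> payload

    def write(self, logical, payload):
        phys = self.free.pop(0)              # grab an erased slot
        if logical in self.table:            # old copy becomes garbage;
            old = self.table[logical]        # compaction reclaims it later
            self.data.pop(old, None)
        self.table[logical] = phys           # one small pointer update
        self.data[phys] = payload

    def read(self, logical):
        return self.data[self.table[logical]]

d = RemapFlash(8)
d.write(0, "v1")
d.write(0, "v2")          # overwrite: no erase needed, just remap
print(d.read(0))          # v2
```

The cost, as the comment notes, is the separate pointer memory and the eventual need to compact the orphaned physical slots.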
Re: (Score:2)
Re: (Score:2)
Wiping nearly-clean blocks still creates the same sort of situation: the block gets wiped more frequently as it moves toward being more empty.
Re: (Score:2, Interesting)
Not to be cynical, but these new algorithms, if implemented poorly, have the potential to run down the limited number of write cycles on the cells. Not that this could be strategically manipulated in any way...
Re:The logical next step... (Score:5, Informative)
Re: (Score:2)
Wouldn't the number of erases drastically drop? Imagine that every block on the SSD has been written to once. If you want to write anything now, you'd have to read a page, erase, and write it back with the change. So if one assumes that on the filesystem this page was mostly empty, a whole lot of garbage was just written unnecessarily. If this page had been preemptively erased, you'd only have to write the new stuff and the rest of the page would still be free.
Re: (Score:2)
...cause if I buy an SSD and it wanders around eliminating random files cause I haven't accessed them in 2 months...
Surely you're suggesting this in jest?
Re: (Score:2)
especially when the point is that you don't exist to write pointless wall-of-text-comments
bravo, sir
Re:The logical next step... (Score:4, Informative)
Taking from your list of actions: Pick a random block:
1. GC comes along, swoops up block, eliminates junk by flashing the entire block to 1's (a while later)
2. OS requires write, swoops up block, writes only the 0's from the file leaving everything else untouched.
In this manner each step does half of the writing amounting to one write when combined. This is exactly how all SSDs work. The major difference announced in the article is that they are separating the two steps.
Normally this is impossible because the SSD doesn't know if something can be cleared until the OS is trying to overwrite it. This makes writes take longer. The new firmware hopes to make writes faster by moving the first step into the idle time of the drive (by figuring out when an overwritten block is unused), sort of like how you can set up a download to only run when you're not using the internet connection. It allows for more efficient use of time that the drive would otherwise spend doing nothing.
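The two-step split described above can be modeled with a couple of queues. This is an illustrative sketch only (the block names and functions are invented), but it shows why idle-time erasing makes later writes fast:

```python
# Sketch of "split the erase out of the write path": blocks known to be
# stale get erased during idle time, so later writes find pre-erased blocks.

from collections import deque

stale = deque(["blkA", "blkC"])   # blocks the firmware knows hold only garbage
erased = deque()                  # blocks already wiped, ready for fast writes

def idle_tick():
    """Runs while the drive is otherwise idle: erase one stale block."""
    if stale:
        erased.append(stale.popleft())    # the slow erase happens here

def write(data):
    if erased:
        return ("fast-write", erased.popleft())   # pre-erased: program directly
    return ("slow-write-with-erase", None)        # must erase inline, blocking

idle_tick()
print(write("x"))   # ('fast-write', 'blkA')
print(write("y"))   # ('slow-write-with-erase', None) -- only one block was pre-erased
```

The write path itself is unchanged; the win comes purely from when the erase is scheduled.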
Re: (Score:3, Informative)
I *think* you're misunderstanding how this works, actually.
When a block is written to, the entire block (512KiB) has to be wiped and rewritten from a blank state. When a block is emptied entirely, it does not get touched - just marked as empty. When new data is written to it, the 'empty' block has to actually be wiped, and then the new data written on the just-blanked block.
What this seems to be proposing is to, periodically, actually wipe the blocks marked as empty, when the SSD is otherwise idle - meaning
Re:The logical next step... (Score:5, Informative)
I think you're somewhat close, but there are some inaccuracies...
Block devices (typically HD's) have two operations (read and write). These operations are what most modern operating system use. Flash SSD's emulate a block device, but the underlying flash memory uses three operations (read, write, and erase). The main difference, therefore, between the block device (what the OS references) and the underlying flash itself is the extra erase operation.
To write to a flash drive, assuming a cell has already been erased, all a user must do is a write operation. This operation is typically fast and does not affect the lifespan of the flash. A write can change any or all of the bits in a block from 1 to 0. Once this is complete, the requested data is written. However, if a user wants to overwrite or change existing data, they must first perform a block erase. This sets every bit in the block back to 1, and is typically very slow (compared to a write). In addition, this is what wears out the flash block, so we really want to avoid these operations.
Since flash blocks each have their own lifespan, we want to spread the erase operations around the disk. This is called wear leveling. To do this, the flash device appears like a block device to the operating system, but it remaps where the data is actually located at the physical flash layer with a remap table. For instance, let's say you overwrite a block in Linux. If there is an available free flash block, it may not even overwrite that block--it may allocate a new block for the file and write it there (updating the remap table). This avoids an erase command. Furthermore, there are a few files on a filesystem which change frequently, and if we did not move their location around the physical flash, we would wear out one cell in flash extremely quickly, even though the remainder of the cells had plenty of life left.
The garbage collection comes in due to this remapping. Typical block sizes for most OS filesystems are around 4k, but flash blocks are typically 512KB in flash devices. This means that if you send data to a SSD, it may or may not take up an entire page, as you may only be using 4k of actual data. Eventually, as writes are leveled around the drive and often fragmented (as we may not be occupying the entire 512KB block), future writes begin taking one (or more erase cycles). For instance, if you request that 512KB of data be written to the drive, but all the cells in the flash are physically occupied by a small amount of data, then data from multiple cells must be combined into one cell (multiple reads+erase+write), and then the destination cell that you are writing to must also be erased and written. This is what causes flash SSD's performance to significantly degrade over time.
By performing this recombining in the background (as a garbage collection), this should allow flash SSD's to maintain like-new performance even when containing a lot of data. In essence, they are performing background defragmentation on the SSD. As a sidenote, NEVER defragment an SSD from the Operating System, as this defragments the filesystem, but performs a ton of erase+write operations to the flash. At best, on new SSDs (Intel, Indilinx), this will wear out the drive sooner. On old SSD's, this will also increase fragmentation at the flash remap layer, causing further performance loss.
So to address your initial comment, rewrites would also see a performance increase by this garbage collection, as "rewriting" data in flash is virtually equivalent to a new write, since the remap table essentially moves the data anyway.
Source:
http://en.wikipedia.org/wiki/Flash_memory#Block_erasure [wikipedia.org]
Re: (Score:2)
Thanks for your exposition, I found it clear and interesting.
Specifically, I am very interested by the 512KB claim (I assumed, here, you meant KiB - i.e. 512*(2^10 bytes)... though that's not what interests me...)
I've read various claims from extremely non-authoritative sources about 'block sizes' for flash devices - and I've read about everything from 512 bytes up-to 64KiB - and 512KiB is the largest I've heard about.
It strikes me that, in the context of applications that store data on flash, where the app
Re: (Score:2)
Shic:
Yeah, I suppose I meant KiB, not technically KB. In general, which one KB refers to is unambiguous, and so I don't go through the trouble of looking up which specific term I need. :) If I ever write a hardware spec, though, I'll be sure to specify which one I mean!
Good questions, unfortunately I'm afraid I may not have most of the information you're looking for...
1. The answer to this would be that each manufacturer may or may not document this, at their whim. It would certainly be easy for them to
Re: (Score:2)
Thanks for your reply. I'm actually considering writing some software that needs non-volatile storage of blocks of data - where constant-time random access (for read, and especially write) is very desirable. I am free (at this stage) to choose an arbitrary block size (or collection of supported block sizes) if it will make any difference. I'm expecting that it will make a difference (if I get the block size right) since RAM-IO has a bandwidth significantly higher than Flash-IO... making flash IO the bottle
Re: (Score:2)
"I imagine rewrites would stay comparatively slow, though."
That's the extra beauty of this method: a rewrite doesn't have to actually rewrite the same block. It can read the block, merge, and write to an "empty & wiped" block, then go back and mark the source block "empty" and let GC come back later and wipe it. That means your rewrites would speed up as well, since the drive wouldn't have to wipe the source block after reading it.
Re: (Score:2, Funny)
The garbage man can, Marge, the garbage man can!
Do cleanup in the OS (Score:2)
It seems like this function should be performed in the operating system. The firmware should just make available the info and commands an OS needs to do the right thing.
Re:Do cleanup in the OS (Score:4, Informative)
Re: (Score:2)
Um...How 'bout NO? This kind of thing absolutely should NOT be handled by the operating system. It should be entirely platform independent.
Re: (Score:2)
Re: (Score:2)
Will the firmware in the drive be able to do this without understanding the filesystem?
Just off the top of my head I can see where the onboard controller would have a big advantage. If we simplify the case and say the drive uses 2k blocks and the file system can't be modified to use 2k blocks, (lame!) then the onboard controller should watch for situations where a cell (of four 512 byte blocks) is frequently being reflashed because a single one of the four is being changed. Then if it could take a look at
Re: (Score:3, Informative)
How does the firmware know what sectors are empty if it doesn't understand this stuff?
I am curious how it works, if it doesn't need knowledge of the filesystem. FAT, NTFS, UFS, EXT2/3/4, ZFS, etc are all very different.
The filesystem tells the SSD "LBA's x to y are now not in use" using the ATA trim command.
http://www.theregister.co.uk/2009/05/06/win_7_ssd/ [theregister.co.uk]
Over-provisioned SSDs have ready-deleted blocks, which are used to store bursts of incoming writes and so avoid the need for erase cycles. Another tactic is to wait until files are to be deleted before committing the random writes to the SSD. This can be accomplished with a Trim operation. There is a Trim aspect of the ATA protocol's Data Set Management command, and SSDs can tell Windows 7 that they support this Trim attribute. In that case the NTFS file system will tell the ATA driver to erase pages (blocks) when a file using them is deleted.
The SSD controller can then accumulate blocks of deleted SSD cells ready to be used for writes. Hopefully this erase on file delete will ensure a large enough supply of erase blocks to let random writes take place without a preliminary erase cycle.
Actually I used to work on an embedded system that used M Systems' TrueFFS. There the flash translation layer actually understood FAT enough to work out when a cluster was freed. I.e. it knew where the FAT was and when it was written it would check for clusters being marked free at which point it would mark them as garbage internally.
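The Trim flow described in this comment can be sketched as a tiny model: the OS tells the drive which LBAs are dead, and the drive banks pre-erased capacity for future writes. The class and counters here are hypothetical, not the actual ATA semantics:

```python
# Rough sketch of how a TRIM notification lets the controller pre-erase:
# once the OS reports LBAs as unused, background GC can wipe the freed
# blocks before the next write needs them.

class TrimAwareDrive:
    def __init__(self):
        self.live = set()        # LBAs holding valid data
        self.pre_erased = 0      # blocks wiped and ready for instant writes

    def write(self, lba):
        self.live.add(lba)

    def trim(self, lbas):
        """OS says these LBAs are no longer in use (e.g. a file was deleted)."""
        for lba in lbas:
            self.live.discard(lba)
        self.pre_erased += 1     # background GC can now erase the freed block

d = TrimAwareDrive()
d.write(100); d.write(101)
d.trim([100, 101])               # file deleted: the filesystem issues TRIM
print(d.live, d.pre_erased)      # set() 1
```

Without the trim() call, the drive would have to assume those LBAs still held live data and preserve them through every GC pass.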
Re: (Score:2)
I was wondering if anyone was going to mention the TRIM command.
If the OS and the SSD both support the TRIM command, the problem is solved. Next news item please.
Re:Do cleanup in the OS (Score:4, Interesting)
Why? It's low-level, but it doesn't affect the filesystem above it.
On the list of reasons why it SHOULD be done by the OS, not the firmware:
*OS has a better clue about idleness
*OS can create idleness by holding unimportant writes for a while (ext4 style) and using this time to do GC
*OS can decide to save power by not doing this while on battery power
On the list AGAINST, I only have:
*jtownatpunk.net thinks it should be platform independent and thinks this can't be achieved without doing it in firmware
Put the essence of the driver out in the public domain and code a version for Windows/Mac if required; that way all OSes will use the same logic even if they have completely different drivers.
Re: (Score:3, Interesting)
Re: (Score:2, Insightful)
Re: (Score:2)
Personally, I will never buy an SSD until crap like wear leveling is unnecessary and random write performance doesn't suck.
Oh, so you bought a SSD 9 months ago?
Re: (Score:2)
If you want the OS to handle it, the option already exists [linuxjournal.com]. MTD, on the linux side(I assume WinCE has an equivalent, I've no idea what; the NT series OSes don't) is a mechanism for
Who had to creative/hates "defragmentation"? (Score:5, Insightful)
"Garbage collection" already has quite a different usage in CS. And while what has to be done to these SSDs isn't technically the same as defragmentation on HDDs, it is still "performing drive maintenance to combat the performance-degrading results of prolonged usage and deletion of files".
Re:Who had to creative/hates "defragmentation"? (Score:5, Informative)
Re: (Score:2)
I wonder if it can do that while a block has data in some of its cells, or whether it has to move, while idle, chunks of data from cells all around the disk to fill some other block and then restore the now-empty "garbage blocks" to a pristine state... which again has some similarities to defragmentation.
But really, I wasn't going so much into technical details, more into language conventions/familiarity. From the point of view of...almost everybody this new SSD mechanism is practically synonymous with defrag
Re: (Score:2, Informative)
Right, but recall that SSD can only be erased in large blocks, though it can be written to in smaller ones. Erases are what eventually kill a block.
So if I take a block that has only 25% garbage and I want to wipe it, I have to copy the good data over to another block somewhere before I can do that. So I've written 3/4 of a wipable sector's worth of data to a new sector to get rid of the 25% of garbage. Do that a lot, and you do a lot of unnecessary erases and the drive dies faster.
If, instead, you take
Re: (Score:2)
It's my understanding that the balance SSD manufacturers like Intel have struck is a drive with excellent performance that is expected to live through at least 3 or 4 years of heavy usage. Being extra conservative with erases at the expense of performance would increase the lifespan of the device, but in most circumstances, 3 or 4 years is good enough. Most people will have upgraded to a new drive by that point anyway. And quite a few hard drives die within that span as well.
Re: (Score:2)
"performing drive maintenance to combat performance-degrading results of prolonged usage, deletion of files"
This is what defragmentation does from the user's point of view, but not what it means. Defragmentation, as the word implies, is the process of reducing fragmentation, i.e. make files contiguous. This is achieved by moving chunks of data around, and the performance benefit comes from reducing the number of HD seeks required to read the data. Since a SSD is random-access, it doesn't have to seek so def
Re: (Score:2)
Well, it's a bit of a semantics issue (like my whole point ;p ), but... no, garbage collection isn't that. It is reclaiming memory that IS allocated, but no longer needed. In the case of SSDs, it's NOT putting anything back into the usable pool, just making memory that's already unused a bit healthier.
Furthermore, the mechanism by which SSD degrades is very similar to fragmentation.
Re: (Score:2)
Sure it does. Consider the following; Flash has 512K blocks that must be erased as a unit. Hard drives write in 4K blocks. A 512K block has been filled, and the OS then re-writes over some of those blocks. Now they can't be overwritten in place, they are instead remapped to another empty block. So now the first filled block has an empty hole in it that the SSD knows about, but can't use until the rest of the 512K block has been relocated.
Sounds a bit like generational garbage collection to me.
when drives are "idle" ? (Score:2)
If you have an app that needs SSD, when will the drives ever be idle ?
Re: (Score:2)
They will be Idle whenever Slashdot needs something stupid to post.
Re: (Score:2)
All PCs can benefit from SSDs, and they are often idle. Technology isn't just for those who "need" it.
Re: (Score:2)
The indilinx guys are definitely a step above jmicron, who just suck, but tend to com
Re: (Score:2, Informative)
Re: (Score:2)
The drives don't have to be idle, just the portion being garbage collected.
It depends on how the drive controller maps the sector numbers. If the upper bits of the sector number select a chip, you have the equivalent of a JBOD array, and you can optimize the first 1/4 of the drive while the OS is doing stuff on the rest. But if the lower bits of the sector number select a chip, you have striping as in RAID 0, and no chip will be "idle" for any reasonable amount of time unless the whole array is idle. And then there are the tiny-form-factor (CompactFlash size) SSDs that may only ha
Re: (Score:2)
If you have an app that needs SSD, when will the drives ever be idle ?
When the battery on the device that runs the app is charging.
The cops'll love it. (Score:4, Interesting)
So what does this do when forensics are being done on one of these drives? Is the firmware just doing a better job of marking a dirty block available or do the dirty blocks have to be zeroed at some point. Even if the blocks are just marked will they output zeros if 'dd'ed by an OS?
Re: (Score:2)
I imagine they would still contain the data, but once you write to anywhere in that block the garbaged data would be zeroed out because it isn't being read into the "combine-read-data-with-new-to-be-written-data" buffer.
Privacy advocates will love it.
Re: (Score:2)
A page is the minimum amount you can write, say 4KB
A block is the minimum you can delete, and it is made up of multiple pages, say 5.
If a block contains pages to keep and pages to delete, the circuitry has to delete the whole block, then rewrite the valid pages just to clear the deletable pages. This takes more time.
Current SDDs when told to delete a file just update the file table and leave the time-consuming delete task to when something
Re: (Score:2)
If you want a SSD wiped, best to use the manufacturer's low level format tools. There's literally gigabytes of extra space that might not be erased. If you wanted something kept safe it's generally better to keep it encrypted from the start anyway. I'm guessing forensic firms will do well on recovering from SSDs...
Re: (Score:2)
Lifetime of drives (Score:1, Redundant)
This will significantly increase writes. I'm sure it's still worth it, but we ought to know what kind of effect this will have on the time before one hits max writes on the flash device.
Re: (Score:2)
Not necessarily. An SSD has to collect garbage sometime; whether it GCs proactively or lazily causes the same wear.
Leave the garbage (ie my porn) alone!!! (Score:5, Funny)
I don't want my porn garbage collected thank you very much. Who died and made you king of deciding what's garbage.
Filesystem info (Score:4, Interesting)
Wouldn't the drive benefit from a real understanding of the filesystem for this sort of thing? If it knew a sector was unallocated on a filesystem level, it would know that sectors were empty/unneeded, even if they had been written to nicely. Or should computers now have a way of tagging a sector as "empty" on the drive?
Either way, it looks like an OS interaction would be very helpful here.
Or are modern systems already doing this, and I'm just behind the times?
Re:Filesystem info (Score:5, Informative)
There is an extension that was recently added to ATA: the TRIM command. TRIM allows an OS to specify that a block's data is no longer useful and the drive should dispose of it. No production firmwares support it yet, but several betas do. There are also patches for the Linux kernel that add support to the block layer, along with appropriate support in most filesystems. Windows 7 also has support for it.
There is a lot of confusion about this on the OCZ boards, with people thinking GC somehow magically obviates the need for TRIM. As you pointed out, the GC doesn't know what is data and what is not with respect to files deleted in the FS. I wrote a blog post [blogspot.com] (with pictures and everything) explaining this just a few days ago.
Re: (Score:2)
It's worth noting that while SSDs are just now trying to implement this, Linux and Windows 7 have supported it for at least 6 months, and its use has essentially been finalized for almost two years (draft in the specification). Adding this feature to SSDs is long overdue.
Re:Filesystem info (Score:4, Informative)
You're about two months ahead of the times. The ATA TRIM command will allow the filesystem to tell the SSD which sectors are used and which are unused. The SSD won't have to preserve any data in unused sectors.
Re: (Score:2)
There is no need as a standard ATA TRIM command exists by which the OS can tell the device when a block is no longer in use. LWN [lwn.net] wrote about this almost a year ago.
Re: (Score:2)
I have a radical concept: let the filesystem manage the drive, rather than making the drive do it. Isn't that the point of file systems, to provide optimal use of the media? Some filesystem
Re: (Score:2)
Isn't that the point of file systems, to provide optimal use of the media?
They want to sell these to people who run Windows.
At what cost? (Score:4, Interesting)
Re:At what cost? (Score:5, Insightful)
Possibly shorter drive life. If each cell can be rewritten 100,000 times (don't remember exactly) then - for exactly the same reason you're doing this in the first place (rewriting an entire cell on every write) you'll be wearing out the cell.
Probably a net gain, though. This and wear-leveling algorithms probably will make drives last longer.
Don't be quite so cynical. Usually I'd agree with you - but SSD (not flash) is so new that improvements can be made for free by just changing some techniques.
Re: (Score:2)
You obviously don't understand the technologies behind flash. There is no extra write.
In either case, if you wrote to the non-GC'ed sectors, you need to do a sector clear, then write (because flash can only write to cleared sectors -- essentially). Although writes are really free, it's the clear that has a limited number of uses per sector. Reads are free, writes are free, clears are limited. You can't write to a non-clear sector, so for simplicity -- people say flash has a limited number of write cycle
Re: (Score:2)
That should have read "eliminates the need for a clear is so small"
Re: (Score:2)
The cost would be that it would have to keep track of which blocks of data are in use, so it would have to have a small bit of the SSD storage set aside for this purpose.
There is no performance or lifespan penalty since this only affects what happens when data is written -- currently, the block is always read, combined, and then written. If the block is marked as not in use, the first two steps can be skipped. If the block is in use, we're just doing old behavior, no loss (except we needed to look up to s
Re:At what cost? (Score:5, Informative)
Simple. Well, not really, but...
SSD's can be written to in small increments, but can only be erased in larger increments. So, you've got a really tiny pencil lead that can write data or scribble an "X" in an area to say the data is no longer valid, but a huge eraser that can only erase good-sized areas at a time, but you can't re-write on an area until it's been erased. There's a good explanation for this that involves addressing and pinouts of flash chips, but I'm going to skip it to keep the explanation simple. Little pencil lead, big eraser.
Let's call the small increment (what you can write to) a "block" and the larger increment (what you can erase) a "chunk". There are, say, 512 "blocks" to a "chunk".
So when a small amount of data is changed, the drive writes the changed data to a new block, then marks the old block as "unused". When all the blocks in a chunk are unused, the entire chunk can then be safely wiped clean. Until that happens, if you erase a chunk, you lose some data. So as time goes on, each chunk will tend to be a mix of current data, obsolete data, and empty blocks that can still be written to. Eventually, you'll end up with all obsolete data in each chunk, and you can wipe it.
However, it's going to be rare that ALL the blocks in a chunk get marked as unused. For the most part, there will be some more static data (beginnings of files, OS files, etc) that changes less, and some dynamic data (endings of files, swap/temp files, frequently-edited stuff) that changes more. You can't reasonably predict which parts are which, even if the OS was aware of the architecture of the disc, because a lot of things change on drives. So you end up with a bunch of chunks that have some good data and some obsolete data. The blocks are clearly marked, but you can't write on an obsolete block without erasing it, and you can't erase a single block - you have to erase the whole chunk.
To fix this, SSD drives take all the "good" (current) data out of a bunch of partly-used chunks and write it to a new chunk or set of chunks, then marks the originals as obsolete. The data is safe, and it's been consolidated so there are fewer unusable blocks on the drive. Nifty, except...
You can only erase each chunk a certain number of times before it dies. Flash memory tolerates reads VERY well. Erases, not so much.
So if you spend all of your time optimizing the drive, you're moving data around unnecessarily and doing a LOT of extra erases, shortening the hard drive's life.
But if you wait until you are running low on free blocks before you start freeing up space (which maximizes the lifespan of the drive), you'll run into severe slowdowns where the drive has to make room for the data you want to write, even if the drive is sitting there almost empty from the user's perspective.
So, SSD design has to balance between keeping the drive as clean and fast as possible at a cost of drive life, or making the drive last as long as possible but not performing at peak all the time.
There are certain things you can do to benefit both, such as putting really static data into complete chunks where it's less likely to be mixed with extremely dynamic data. But overall, the designer has to choose somewhere on the continuum of "lasts a long time" and "runs really fast".
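The consolidation step this comment describes (and its endurance cost) can be sketched with the same block/chunk vocabulary. This is a toy model under the comment's own assumptions, not real firmware logic:

```python
# Toy model of the consolidation trade-off: copy the live pages out of
# partly-dead "chunks" into a fresh chunk, then erase the old ones.
# Every erase spends endurance, which is the tension described above.

def consolidate(chunks):
    """chunks: list of lists of pages; None marks an obsolete page."""
    live = [p for chunk in chunks for p in chunk if p is not None]
    erases = len(chunks)              # every source chunk gets erased
    fresh = [live]                    # live data packed into a new chunk
    return fresh, erases

old = [["a", None, "b"], [None, None, "c"]]
packed, cost = consolidate(old)
print(packed)   # [['a', 'b', 'c']]  -- 3 live pages packed together
print(cost)     # 2 erases spent reclaiming the dead space
```

Run eagerly, this keeps the drive fast but burns erases; run only when free blocks run out, it preserves endurance but stalls writes, which is exactly the continuum the parent describes.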
you're not moving the data unnecessarily (Score:2)
Next time you write into that block, this operation will be performed anyway. This is why some SSDs have huge delays on writes, because they delay your write until the data merging is done. Also, not every block needs merging anyway, an area of a file that spans 512K (128 pages) is written in one chunk anyway and never needs re-merging.
To be honest, the data retention time on NAND (where the data just drains out like DRAM) is becoming as big a factor as write wear anyway. You're going to have to move the da
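The merge cost the comment above describes is easy to quantify with the figures from the summary (4 KB pages, 512 KB blocks). This is back-of-the-envelope arithmetic, not a measurement; the `write_cost` helper is a made-up name for illustration:

```python
PAGE_KB, CHUNK_KB = 4, 512

def write_cost(new_kb, valid_kb_in_chunk):
    """Data actually read and written for an in-place update landing in a
    chunk that already holds valid_kb_in_chunk of live data. Classic
    read-modify-write: read the live data out, erase the chunk, then
    write everything (live + new) back."""
    read_kb = valid_kb_in_chunk
    written_kb = valid_kb_in_chunk + new_kb
    amplification = written_kb / new_kb
    return read_kb, written_kb, amplification

# Updating a single 4 KB page in an otherwise-full 512 KB chunk:
r, w, amp = write_cost(PAGE_KB, CHUNK_KB - PAGE_KB)
# reads 508 KB and writes 512 KB, a 128x write amplification for 4 KB of new data
```

That 128x worst case is why deferring the merge to idle time (as the OCZ firmware in the summary does) pays off so visibly.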
Re: (Score:2)
This sounds like something you could improve with something analogous to "generational GC"?
Damn it! (Score:1, Funny)
The new drive deleted all my %INSERT_POPSTAR% songs!
Captcha is "tragedy", how fitting...
800lb Gorilla (Score:1)
Close the blinds. Put your dresses on. Apply lipstick.
Read about Java gc. You'll find the section on generational gc interesting.
Take out the "Tash"? (Score:2)
Re: (Score:2)
What I'm inferring from the context within the sentence is that "take-out-the-tash" means he wishes to assassinate Rico Smith.
Check out NILFS (Score:2)
No, not MILFs.
http://www.linux-mag.com/cache/7345/1.html [linux-mag.com]
This GC stuff is only needed as long as the FS uses small blocks. File systems should be able to use arbitrarily sized blocks.
Hopefully btrfs will be able to use large blocks efficiently too.
Re: (Score:2)
GC: Garbage Collection
FS: File System
BTRFS: B-tree File System
NILFS: New Implementation of a Log structured File System
(I had to look NILFS up though...)
Re: (Score:2)
IMHO, GC on SSDs is not going to be good enough; 512 kB blocks are way too large. Instead, they should build SSDs that can erase smaller blocks.
There's a good reason most filesystems don't use extremely large block sizes:
1) Using large block sizes means you reduce the maximum IO operations/second (something SSDs are supposed to be good at). On traditional hard drives, you won't notice this until you go past 64 kB, because seek latency takes the largest part of such a transfer. On SSD's however you m
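The IOPS point in the comment above can be made concrete with a rough model. The bandwidth and latency figures below are illustrative assumptions (a ~100 MB/s hard drive with ~8 ms of seek+rotation per op, a ~200 MB/s SSD with near-zero per-op latency), not specs of any real drive:

```python
def max_iops(transfer_kb, bandwidth_mb_s, fixed_latency_ms=0.0):
    """Upper bound on IO operations/second when each op moves transfer_kb
    of data and pays fixed_latency_ms of per-op latency (seek + rotation
    on a hard drive, roughly zero on an SSD)."""
    seconds_per_op = fixed_latency_ms / 1000 + (transfer_kb / 1024) / bandwidth_mb_s
    return 1 / seconds_per_op

# Hard drive (~100 MB/s, 8 ms per op): block size barely matters,
# because the seek dominates the transfer anyway.
hdd_small = max_iops(4, 100, fixed_latency_ms=8)    # ~124 IOPS
hdd_large = max_iops(512, 100, fixed_latency_ms=8)  # ~77 IOPS

# SSD (~200 MB/s, negligible latency): going from 4 kB to 512 kB ops
# cuts the IOPS ceiling by a factor of 128.
ssd_small = max_iops(4, 200)    # 51200 IOPS
ssd_large = max_iops(512, 200)  # 400 IOPS
```

So on a seek-bound hard drive, large blocks cost you well under half your IOPS, while on a bandwidth-bound SSD the penalty scales linearly with block size, which is the commenter's point.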
OCZ already released the GC tool, but for Win only (Score:2, Interesting)
I see OCZ already released some sort of garbage collection tool, but it only works on Windows. Kind of annoying since I bought their "Mac Edition" drive for my MacBook. Hopefully they'll put this in a firmware update, too, and hopefully I won't have to boot DOS on my Mac to update the firmware with a utility that blows over my partition table this time. That was a lot of fun going from version 1.10 to 1.30 firmware.
Re:OCZ already released the GC tool, but for Win o (Score:3, Informative)
No, OCZ released wiper, which is a trim tool. Trim and GC are different; in particular, GC requires no tools or OS support.
Wrong data in article? (Score:3, Informative)
In the article it says
But if a cell already contains some data--no matter how little, even if it fills only a single page in the block--the entire block must be re-written
Is this correct?
From what I read in AnandTech [anandtech.com], it looked like we need not rewrite the entire block unless the available data is less than total - (obsolete + valid) data.
Also, the article is light on details. How are they doing the GC? Do they wait till the overall performance of the disk falls below a threshold and rewrite everything in a single stretch, or do they rewrite based on local optima? If the former, what sort of algorithms are used (and which are best for it)?
Re: (Score:3, Informative)
No, the actual situation is that a block consists of some number of pages (on the flash currently used in SSDs, it tends to be 128). The pages can be written individually, but only sequentially (so: write page 1, then page 2, then page 3), and the pages cannot be erased individually; you have to erase the whole block.
The consequence of this is that when the FS says "Write this data to LBA 1000", the SSD cannot overwrite the page where it is currently stored without erasing its block, so instead it finds somew
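The remapping scheme this comment describes can be sketched as a minimal flash translation layer. This is a simplified model under the comment's assumptions (sequential page writes, whole-block erases); the class and field names are invented for illustration:

```python
PAGES_PER_BLOCK = 128

class TinyFTL:
    """Minimal flash-translation-layer sketch: logical writes are never
    done in place. Each write lands on the next free page of the open
    block, and the LBA's previous physical page is just marked stale
    (garbage for a later collection pass to reclaim)."""
    def __init__(self):
        self.map = {}             # LBA -> (block_no, page_no)
        self.stale = set()        # physical pages holding obsolete data
        self.block, self.page = 0, 0

    def write(self, lba):
        if lba in self.map:
            self.stale.add(self.map[lba])   # old copy becomes garbage
        self.map[lba] = (self.block, self.page)
        self.page += 1                      # pages written strictly in order
        if self.page == PAGES_PER_BLOCK:    # block full: open the next one
            self.block, self.page = self.block + 1, 0
```

Rewriting LBA 1000 twice shows the mechanism: the second write goes to a fresh page and the first page joins the stale set, which is exactly the "orphaned data" the firmware's garbage collector later cleans up.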
Which department? (Score:2)
from the take-out-the-tash dept.
Is "tash" a play on words regarding SSD's, or does the taking-out-the-TRASH department have a job opening for a grammar nazi?
Oh Wow (Score:2)
Re:Oh Wow (Score:5, Informative)
You need to read up much, much more on the state of SSDs before making such sweeping, and incorrect, generalizations.
There are algorithms in existence, such as clever "garbage collection" (a somewhat misleading name when applied to SSDs, since the process is only loosely related to the memory-management technique known from languages like Java) combined with wear-levelling algorithms, plus extra capacity not reported to the OS that serves as a pool of "always ready to write to" blocks, that can keep SSD performance excellent in 90% of use cases and very good in most of the remaining 10%. The point being that for the majority of use cases, SSD performance is excellent almost all of the time.
Intel seems to have done the best job of implementing these smart algorithms in its drive controller, and its SSDs perform at or near the top of benchmarks when compared against all other SSDs. They have been shown to retain extremely good performance as the drive is used. Not "fresh from the factory" performance; there is some noticeable slowdown over time, but it's like going from 100% of incredibly awesome performance to 85% of incredibly awesome performance: still awesome, just not quite as awesome as brand new. Except for some initial teething pains caused by flaws in their algorithms (since corrected by a firmware update), everything I have read about them, and I have done *a lot* of research on SSDs, indicates that they will always be faster than any hard drive in almost every benchmark, regardless of how much the drive is used. And they have good wear levelling, so they should last longer than the typical hard drive as well (not forever, of course, but hard drives don't last forever either).
Indilinx controllers (which are used in newer drives from OCZ, Patriot, etc) seem to be second best, about 75% as good as the Intel controllers.
Samsung controllers are in third place, either ahead, behind, or equal to Indilinx depending on the benchmark and usage pattern, but overall, and especially in the places where it counts the most (random write performance), a bit behind Indilinx.
There are other controllers that aren't benchmarked as often and so it's not clear to me where they sit (Mtron, Silicon Motion, etc) in the standings.
Finally, there's JMicron in a very, very distant last place. JMicron's controllers were so bad that they single-handedly gave the entire early-generation SSD market a collective black eye. The one piece of advice that can be unequivocally stated for SSDs is: don't buy a drive based on a JMicron controller unless you have specific usage patterns (like rarely doing writes, or only doing sequential writes) that you can guarantee for the lifetime of the drive.
I've read many, many articles about SSDs in the past few months because I am really interested in them. Early on in the process I bought an Mtron MOBI 32 GB SLC drive (I went with SLC because, although it's more than 2x as expensive as MLC, I was concerned about the performance and reliability of MLC). In the intervening time, many new controllers, and drives based on them, have come out that have proven that very high performance drives can be made using cheaper MLC flash, as long as the algorithms used by the drive controller are sophisticated enough.
Bottom line: I would not hesitate for one second to buy an Intel SSD drive. The performance is phenomenal, and there is nothing to suggest that the estimated drive lifetime that Intel has specified is inaccurate. I would also happily buy Indilinx-based drives (OCZ Vertex or Patriot Torx), although I don't feel quite as confident in those products as I do in the Intel ones; in any case they all meet or exceed my expectations for hard drives. I've already decided that I'm never buying a spinning platter hard drive again. Ever. I have the good fortune of not being a movie/music/software pirate so I rarely use more than a couple dozen gigs on any of my systems anyway, so the smal
Re: (Score:2)
You must have had a faulty setup in some way.
I put an OCZ 256GB Vertex into my laptop, and it got so fast it's just unbelievable. I did have to upgrade the firmware, the stock firmware was both buggy and slow. Since then, several major revisions of the firmware have been released. I'll try this new GC firmware when I upgrade to Windows 7.
Still, I have more confidence in the long term reliability of the Intel drives. A friend of mine purchased a Vertex drive as well, and it died a week after he got it, and m
Re: (Score:2)
Hmm... one thing that might be going on is that when you upgrade to an SSD, suddenly everything becomes CPU limited.
I noticed that before my SSD, the CPU would only rarely go above 30% for any length of time, but now it hits 100% regularly.
The Intel Core i7 in your desktop is at least 3x the speed of your laptop CPU: it has double the cores, and each core is more efficient.
I saw a similar thing recently: I have a Vertex drive in a 2.66 GHz C2D laptop (4GB, running Windows 2008 x64), and a co-worker put an i
Re: (Score:2)
everyone who jumped to it early because it was the new bright and shiny thing should consider being a bit more cautious next time.
The only regret I have about my X25-M is that I didn't get one when they first came out but waited till 6 months ago. The only comparable speed increase in Linux I have ever experienced was when I upgraded my parents' 486 from 8MB to 20MB RAM.
Re: (Score:2)
Even with none of the proposed improvements, quality SSD's already slaughter mechanical drives' performance. It's the biggest single advance in mass storage since the PC was invented.
Misconceptions / errors in parent article (Score:5, Informative)
I have been working closely with OCZ on this new firmware and wanted to clear things up a bit. This new firmware *does not*, *in any way at all*, remove or eliminate orphaned data, deleted files, or anything of the like. It does not reach into the partition $bitmap and figure out what clusters are unused (like newer Samsung firmwares). It does not even use Windows 7 TRIM to purge unused LBA remap table entries upon file deletions.
What it *does* do is re-arrange in-place data that was previously write-combined (i.e. by earlier small random writes taking place). If data was written to every LBA of the drive, then all files were subsequently deleted, all data would remain associated with those LBAs. This actually puts OCZ above most of the pack, because their algorithm restores performance without needing to reclaim unused flash blocks, and does so completely independent of the data / partition type used. This is particularly useful for those concerned with data recovery of deleted files, since the data is never purged or TRIMmed.
Slashdot-specific Translation: This firmware will enable an OCZ Vertex to maintain full speed (~160 MB/sec) sequential writes and good IOPS performance when used under Mac and Linux.
Hardware-nut Translation: This firmware will enable OCZ Vertex to maintain full performance when used in RAID configurations.
I'll have my full evaluation of this firmware up at PC Perspective later today. Once available, it will appear at this link:
http://www.pcper.com/article.php?aid=760 [pcper.com]
Regards,
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:2)
Is there any information about when TRIM support will be added? It's nice to be able to undelete files, but most of us are fine with the consequences.
You fail, Slashdot (Score:2)
Re: (Score:2)
Imagine that. Integrated obsolescence disguised as a feature.
What ever will they think of next?
Re: (Score:2)
What ever will they think of next?
Why, they'll think of disguising integrated obsolescence as a feature next, but they'll call it a Data Relocation Mechanism (DRM).