Solid State Drives Tested With TRIM Support 196
Vigile writes "Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues. The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files. Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers. A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy."
But it's the future (Score:5, Interesting)
I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.
Re: (Score:2)
Re: (Score:2)
Not true, I don't think. SSDs already beat HDDs in terms of data density, as can be seen from the fact that you can buy 512GB 2.5" SSDs (for silly money) but only 500GB 2.5" HDDs. The only factor HDDs win on now is price, and I don't expect it to take long before that changes.
Re:But it's the future (Score:5, Informative)
Actually, magnetic disks have exponentially increased in capacity since the 50s. In fact, the rate of increase has been higher than the growth of transistor count.
See: http://www.scientificamerican.com/article.cfm?id=kryders-law [scientificamerican.com]
Re: (Score:3, Interesting)
Things have changed a lot in four years. Since 2005, hard drives have only increased from 500 GB to 2 TB, a factor of 4. In that same time, CompactFlash cards increased from 8 GB to 128 GB, a factor of 16. Flash density increases are severely outpacing hard drive density increases, and unlike hard drives, flash storage isn't rapidly becoming less reliable as the density increases....
Re: (Score:3, Insightful)
...and unlike hard drives, flash storage isn't rapidly becoming less reliable as the density increases....
I can see the logic behind the argument that hard drives should become more failure prone as the platter density increases, but I've yet to see any data substantiating this point. Your claim that hard drives are rapidly becoming more unreliable makes your statement come off as even more dubious to me.
I don't mean to attack you or come off as a complete dickhole, but do you know of any data to back this up? I'm legitimately curious, as in my (completely anecdotal) experience, magnetic hard drives seem to
Re:But it's the future (Score:4, Informative)
Flash drives have a longer MTBF than spinning media... so they last longer. However, a less well-known fact is that flash drives typically have a URE (unrecoverable read error) rate 10-100X worse than today's spinning media. It's getting fixed, but the fellow you're replying to is basically wrong.
C//
Re: (Score:2)
Re:It is yesterday's future ... (Score:5, Insightful)
I can buy a terabyte hard drive for around $100. For the same hundred dollars, the best SSD I can find is 32GB. On my computer, Steam's cache folder is bigger than 32GB. My music player has a 120GB drive, my DVR has a 350GB drive, and my backup server has a 1.5TB raid. Just because expensive mobile gadgets use expensive solid-state drives does not mean hard drives are dead, dying, or even decaying.
Re: (Score:2)
I can buy a terabyte hard drive for around $100. For the same hundred dollars, the best SSD I can find is 32GB. On my computer, Steam's cache folder is bigger than 32GB. My music player has a 120GB drive, my DVR has a 350GB drive, and my backup server has a 1.5TB raid. Just because expensive mobile gadgets use expensive solid-state drives does not mean hard drives are dead, dying, or even decaying.
I totally agree; the fat lady hasn't sung when it comes to magnetic hard drives. It does seem like SSDs will soon find their place in performance-oriented systems, though. I'm looking forward to having them sorted out enough that my next desktop will have an SSD for the OS, swap, and perhaps applications (which all tend to be hindered by the slow random access of magnetic media) - and a big honkin' magnetic drive for storage.
Re: (Score:3, Informative)
If you can afford an SSD, why would you waste it on swap? Why not just buy more RAM? If you ever actually need swap, you are doing something wrong.
Re: (Score:3, Interesting)
How about hibernate-to-disk? If you have a lot of fast SSD capacity, that should be very quick, shouldn't it?
Re: (Score:3, Funny)
Think of it as a luxury expense from the cash we save building our own systems.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It will be more than 6 years before SSDs surpass the commodity SATA segment in $/GB, and $/GB is definitively what drives Tier 2 storage. So while the enterprise 10K/15K drives' years are numbered (I expect their total destruction no later than 2011), SATA will be around for a while.
C//
Re: (Score:2)
I can see how you would draw that conclusion from the numbers alone, but I think you failed to take into account that SSDs are selling at much faster rates than HDDs did at a comparable stage, and those rates are accelerating. Since volume is ramping up considerably, manufacturers have insane incentives to increase their economies of scale at commensurate rates, thus lowering costs.
If you don't believe it, use your same reasoning for SIMMs vs. DIMMs. Then try it against DIMMs vs. DDR1, then DDR2. I
Re: (Score:2)
Your allegation is true *IF* both continue to grow at the same rate. There is no basis to believe that this is a fact.
In my opinion, nothing can be said about the future.
Re: (Score:3, Insightful)
I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.
Well damn, I'll just have to tell our customer that has something like a 30 petabyte TAPE archive that's growing by about a terabyte or more each and every day that they're spending money on something you say is, umm, outdated and these newfangled devices that the next power surge will totally fry are the wave of the future.
Guess what? There's a whole lot more money spent on proven rock-solid technology by large organizations than you apparently know.
Tape and hard drives are going NOWHERE. For a long, lon
Re:But it's the future (Score:4, Interesting)
if by "proven rock-solid" you mean horrid fidelity and media degradation rates, i'd say you are correct about tapes. if you're client has a 30 petabyte tape archive there is probably some horrible inefficiency goin on. (i'm sure you probably have little control ofer the situation, i have similar clients) but if they have 30Pb of data on tape that they access regularly, they're wasting a LOT of time just retrieving data. you should really consider a SAN NAS or similar. HDD storage is very cheap these days and LTO4 tapes are pretty pricey. we all know they have shoddy storage quality to boot. if they dont access it regulary then its probably a real waste of money to own, record and store 30Pb of data. either way, just the physical storage of that many tapes is probably about equivelant to the sq. footage needed for a rack or 2 (or 3) of blade servers with the same storage capacity.
Re: (Score:3, Insightful)
Agreed that SSDs have a long way to go on price to compete, but it's simply not true that they're not yet ready for the enterprise datacenter. All the larger enterprise storage array vendors (EMC, HDS, IBM, NetApp) say they're ready, and most are shipping them with decent sales. Despite their price and the "fact" you've so eloquently stated, you'll find them in many Fo
Re: (Score:3, Interesting)
All the larger enterprise storage vendors are full of shit. They say the SSD is "ready" because it's the hottest buzzword in the industry, which always commands huge profit margins.
On one hand, I can use cheap fast 2.0TB SATA drives for 11 cents a gig, or I can go the SSD route with 256GB drives at $4.00 a gig. That's OEM cost, which means EMC and friends will triple that number to convince your boss these drives are "special".
Yeahhh... give me the one that costs 36 times more, takes up 4 times more spac
Re: (Score:2, Insightful)
There are two schools of thought regarding SSDs:
Re: (Score:2, Insightful)
I think he was pointing to the "reviews". Here's the thing: none of those reviews were from enterprise-class users.
Once you start getting into 10 drive RAID arrays (and up), speed of each drive is no longer your limiting factor, provided you're using some kind of striping. That's the reason SATA RAID arrays have started to become popular in the enterprise for less critical systems -- there's almost no performance difference at all. You need to go fibre channel before you see any marked difference in per
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Hint: peak oil doesn't mean "oil has run out". It could just mean that the price of oil has had a long term decline and people are no longer trying as hard to get oil out and never will. You need to show a bit more evidence than "you don't have a fucking clue" before we can believe you are right. To be honest, his statement is probably more a hopeful prediction about the level of usage of wind power and other renewable energy sources than about oil reserves.
Re: (Score:2)
Artificial restrictions by OPEC that dwindle supply b
Re: (Score:2)
Eventually they will virtually disappear as paper tape, cards and, more recently, floppies have -- but it will take a lot longer than most expect.
Now get off my lawn.
What I really want to know (Score:3, Insightful)
Which Linux filesystem works best with SSDs? I don't intend to touch Win7.
Re:What I really want to know (Score:4, Informative)
NILFS - http://www.linux-mag.com/id/7345/ [linux-mag.com]
ReiserFS 3 (Score:2)
Re: (Score:2)
In Linux, TRIM commands are issued by ext4 and btrfs. Btrfs also has two SSD modes for its allocator, but it is not meant for production use yet. There are probably other Linux filesystems issuing TRIM, as support was implemented a few kernel releases ago.
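Not from TFA, but here's a rough sketch of my own (Linux-only, read-only) for checking which of your mounted filesystems were mounted with the 'discard' option -- that's the flag that makes ext4/btrfs actually send TRIM as files are deleted. It just parses /proc/mounts:

```python
# Sketch: list mounted filesystems whose mount options include "discard",
# i.e. the ones that will issue TRIM on delete. Linux-only; read-only check.
def mounts_with_discard(proc_mounts="/proc/mounts"):
    trimming = []
    with open(proc_mounts) as f:
        for line in f:
            device, mountpoint, fstype, options = line.split()[:4]
            if "discard" in options.split(","):
                trimming.append((device, mountpoint, fstype))
    return trimming

if __name__ == "__main__":
    found = mounts_with_discard()
    if not found:
        print("no filesystems mounted with the discard option")
    for device, mountpoint, fstype in found:
        print(f"{device} on {mountpoint} ({fstype}) sends TRIM on delete")
```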
Re: (Score:3, Informative)
That's because JFFS and such are intended to be used on top of a raw flash device.
SSDs do wear levelling internally already, so a filesystem that tries to do it as well is redundant.
Re:What I really want to know (Score:4, Insightful)
That's my biggest complaint about them, actually -- these "teething problems" people mention are pretty much directly a result of OSes treating SSDs as though they were spinning magnetic disks.
No, the OS should be able to do its own wear leveling. If you need to pretend it's a hard drive, do it in the BIOS and/or the drivers, not in the silicon -- at least that way, you can upgrade it later when things like this come out.
Re:What I really want to know (Score:4, Insightful)
No way, let's have the firmware do this. The problem with your approach is that the OS won't understand the drive as well as the manufacturer does, so it will always be a sub-optimal solution. Don't tie the hands of the manufacturer to put intelligence in his drives. For instance, the best way to wipe a disk is via an ATA command [zdnet.com], and not through multiple wipe passes. The manufacturer knows where the heads are and how the drive writes. The SSD situation is somewhat similar.
Re: (Score:2)
The problem with your approach is that the OS won't understand the drive as well as the manufacturer does, so it will always be a sub-optimal solution.
The problem is, we currently have a sub-optimal solution, for just that reason -- the manufacturers make assumptions, even regarding things like FAT size and location, when tuning the performance of these drives.
We're adding hack upon hack upon hack -- the OS will have some sort of defragmenter, and will attempt to avoid fragmentation, just as the device is deliberately scattering writes throughout the disk.
If we want to let the drive provide some sort of BIOS-like scheme -- wherein some code (appropriately
Re: (Score:2)
Yes and No.
The Linux kernel's recent UBIFS [wikipedia.org] flash support is, I believe, separated into two distinct layers: a layer for logical-to-physical address translation with wear-leveling and free-space tracking (UBI), and a separate layer for organising the storage of the filesystem within those used blocks, keeping stored data in block sizes that match the underlying physical media and re-writing a whole block at once.
I think that kind of abstraction is useful enough for the OS, potentially with the
Re: (Score:2)
It is all NTFS's fault. Impossible to turn off journaling, the OS doesn't move the journal, etc.
On HFS+, the journal is also an ordinary file and backwards compatible. In fact, theoretically, OS X could even journal FAT if it wanted to.
So, if you turn off journaling, half (or more) of the potential problem is gone. First, there won't be a journal in some area being written over and over. Second, OS X won't enable the "hot band" function, which puts the most-accessed files (hot files) at the beginning of the disk, a specific area in m
Re: (Score:2)
I would say, disabling journaling is a bad idea. You want a wandering log, or just a log-based filesystem, or soft updates.
You don't want to go back to the days of having to fsck after crashes.
Re: (Score:2)
Well, I am assuming these things hate data being written over and over to the same spot. It is obvious that the journal moves from time to time, but I don't think that is enough; I don't know the exact reason for the "journal pointers have been reset!" message either.
Trust me, for a consumer laptop which is good enough to consider SSD, fsck_hfs doesn't take that long. Of course, with journaling it would be a lot better but they are adopting early, not my fault :) In fact, an updated OS X without root level running hacks or outd
Re: (Score:3, Informative)
Not true at all -- that's why I mentioned a BIOS.
I'm pretty sure even Windows is smart enough to just use the BIOS-provided access, if it doesn't have a driver. If it does, provide it in a driver.
It would likely also require a different filesystem.
Nope. You can do exactly the same pretend-it's-a-hard-drive approach, until additional filesystems are developed. And there's nothing preventing a third party from developing a filesystem for Windows.
Again, see the BIOS approach. In fact, look at nVidia's fakeraid -- software RAID done with BIOS support and a Windo
Re: (Score:2)
Re: (Score:2)
Again, see the BIOS approach. In fact, look at nVidia's fakeraid -- software RAID done with BIOS support and a Windows driver. This is neither a particularly new idea, nor a particularly difficult one, especially if you're preinstalling.
No OS has actually used the BIOS for I/O for a very longgggggg time. It's just too slow and too crappy to contemplate.
The BIOS part of the fakeraid goes away once the OS loads. The driver does all the work. In Linux, because it already has a well tested RAID driver, the objective with fakeraids is to make it look like a JBOD.
That's why it's being done in the drive firmware. It's the drive's job to make itself look like a contiguous array of blocks identified by a single index value. The TRIM command is an
Re: (Score:2)
I tested Windows 7 RC on a Mac Mini just 2 days ago, for 24 hours.
Microsoft is a really, really interesting company. While the entire industry talks about SSDs, and even the (ethical) makers of disk defragmenters tell their customers "Do not defragment SSD drives," Windows 7 defaults to a weekly defrag without even asking the user.
It is a release candidate, coded and packaged in 2009. It is not "Windows 2000" or something. Their users already experience problems because of NTFS, and they decide to weekly defrag, on
Re: (Score:2, Informative)
FYI Windows 7 disables defragmenting on SSD drives.
The automatic scheduling of defragmentation will exclude partitions on devices that declare themselves as SSDs. Additionally, if the system disk has random read performance characteristics above the threshold of 8 MB/sec, then it too will be excluded. The threshold was determined by internal analysis.
Support and Q&A for Solid-State Drives [msdn.com]
Re: (Score:2)
Way to completely miss the point.
Yes, wear leveling is done by the SSD itself, which is a total waste. The OS already has to do allocation; there's no reason a filesystem can't do wear leveling, too, and several Linux filesystems do.
There's also the block size itself, and the fact that the OS is optimized to place things sequentially, which is completely pointless -- especially if it tries to defrag -- when the device is an SSD.
And no, it won't put repair software out of business. If anything, it'll generat
Re:What I really want to know (Score:5, Informative)
I'm just surprised that none of the Linux distros are talking about it yet. You would think that with the Apple and IBM laptops using SSDs today there would be some option somewhere. I think everyone is distracted by btrfs.
Re: (Score:2)
By the way, it's true what they say: An SSD is the one component that will provide you with the most notic
Re: (Score:2)
Re: (Score:2)
Lots of RAM and an SSD will make a box fly.
Re: (Score:2)
Since when has ext been the best choice for anything? ext has always been about balance, and I doubt it's the best choice for SSDs; I'd put my money on a log-structured filesystem [wikipedia.org]. In other words, you couldn't be more wrong and the GP is correct, because NILFS2 will write to used blocks much less often than conventional filesystems. Of course ext will be better than FAT, because the file-allocation-table block is going to be a problem, and it turns out ext4 with COW will also be good (but not as good as a log-structured system, and the journal itself will be a problem).
fragmentation? (Score:2)
Can someone explain why fragmentation in the mapping between logical blocks and physical addresses causes performance degradation?
Is it an issue with logically sequential reads being spread across multiple pages? A multi-level lookup to perform the mapping?
Re:fragmentation? (Score:5, Informative)
This older Slashdot post, linked in the story, points to an article that covers that topic very well: http://www.pcper.com/article.php?aid=669 [pcper.com]
Re: (Score:3, Insightful)
It's extremely unfair to link to the print version of that article. Anand put an immense amount of time into that (and everything before it!) and burned quite a few bridges to bring it to light for his readers - there are very, very few reviewers out there who would do that for their readership. The least you could do is offer him and his site _some_ respect.
Re:fragmentation? (Score:4, Interesting)
Because, basically, flash drives are laid out in large pages.
When you delete, you simply mark the logical space as free.
If you go to use that free space later, you find that area, and drop shit into it. It's, I dunno, a 32KB block of memory called a page. If the page is full (to the point where you can't fit your new shit) of "deleted" files, you first need to erase those deleted files, then write your actual data.
If the logical space is full of good but fragmented files (with deleted files interspersed), you need to read the page out to memory, reorder the live data and drop the deleted data, then write the full page back.
Think of it as having a notebook.
You can write to 1 page at a time, only.
Page 1 write
Page 2 write
Page 3 write
Page 2 delete
Page 2 write (still space)
Page 2 write (not enough space, write to page 4 instead)
Page 2 delete
Page 2 write (not enough space, no more blank pages, read page 2 and copy non-deleted shit to scratch paper, add new shit to scratch paper, cover page 2 in white out, copy scratch paper to whited-out page 2)
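If you like the notebook analogy, here's a toy version of it in code (my own sketch, nothing like real firmware): writes go into blank slots, deletes only cross things out, and once the blanks run out every new write costs a slow whole-page erase.

```python
# Toy notebook: each page has a few slots that can be "live", "dead" (crossed
# out), or None (blank). Space is only reclaimed by erasing a whole page.
PAGE_SIZE = 4   # slots per page, deliberately tiny

class Notebook:
    def __init__(self, num_pages):
        self.pages = [[None] * PAGE_SIZE for _ in range(num_pages)]
        self.erases = 0

    def write(self):
        # prefer any blank slot
        for page in self.pages:
            if None in page:
                page[page.index(None)] = "live"
                return
        # no blank slots: erase-and-rewrite the page with the most crossed-out slots
        victim = max(self.pages, key=lambda p: p.count("dead"))
        if victim.count("dead") == 0:
            raise RuntimeError("notebook completely full of live data")
        live = victim.count("live")
        self.erases += 1                        # the slow white-out step
        victim[:] = ["live"] * (live + 1) + [None] * (PAGE_SIZE - live - 1)

    def delete_one(self):
        # cross out one live entry; the space is NOT immediately reusable
        for page in self.pages:
            if "live" in page:
                page[page.index("live")] = "dead"
                return

nb = Notebook(num_pages=3)
for _ in range(12):   # fill every blank slot
    nb.write()
for _ in range(6):    # delete half the entries (just crossed out)
    nb.delete_one()
for _ in range(6):    # new writes now trigger slow erase cycles
    nb.write()
print("whole-page erases needed:", nb.erases)
```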
Re:fragmentation? (Score:5, Funny)
If you go to use that free space later, you find that area, and drop shit into it.
Knock it off with all the fancy jargon!
Re:fragmentation? (Score:5, Informative)
Disclaimer: I am not a SSD firmware author, although I've spoken to a few.*
As best I can understand it, the problem is that writes are scattered across the physical media by wear-leveling firmware on the disk. In order to do this, the firmware must have a "free list" of sorts that allows it to find an un-worn area for the next write. Of course, this unworn area also needs to not currently be storing any relevant data.
Now, consider an SSD in use. Initially, the whole disk is free, and writes can go anywhere at all. They do, too - you end up with meaningful (at some point) data covering the entirety of the physical memory cells pretty quickly (consider things like logfiles, pagefiles, hibernation data, temporary data, and so forth). Obviously, most of that data doesn't mean anything anymore - to the filesystem, only perhaps 20% of the SSD is actually used after 6 months. However, the SSD's firmware thinks that every single part has now been used.
Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted. The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings. The other problem is that these tables get *huge* - a typical home system might have between 100K and 1M files on it after a few months of usage, but probably generates and deletes many thousands per day (consider web site cookies, for example - each time they get updated, the wear leveling will write that data to a new portion of the physical storage).
Maintaining the tables themselves is possible, and when a logical block gets overwritten to a new physical location, the old location can be freed. The problem is that this freeing comes at the same time that the SSD needs to find a new location to write to, and the only knowledge it has about physical blocks which can safely be overwritten is ones where the logical block has been overwritten already (to a different physical location). Obviously, the lookup into the table of active blocks has to be indexed by logical block, which may make it difficult to locate the oldest "free" physical blocks. This could lead to searches that, even with near-instant IO, result in noticeable slowdowns.
Enter the TRIM command, whereby an OS can tell the SSD that a given range of logical blocks (which haven't been overwritten yet) are now able to be recycled. This command allows the SSD to identify physical blocks which can safely be overwritten, and place them in its physical write queue, before the next write command comes down from the disk controller. It's unlikely to be a magic bullet, but should improve things substantially.
* As stated above, I don't personally write this stuff, so I may be mis-remembering or mis-interpreting. If anybody can explain it better, please do.
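To put the parent's description in (very rough) code -- this is my own toy sketch, not anyone's actual firmware -- the drive keeps a logical-to-physical map plus a free list, and without TRIM it only learns that a physical block is garbage when the logical block it backed gets rewritten:

```python
# Toy flash-translation-layer bookkeeping: logical->physical map, a free list,
# and a "stale" set of physical blocks holding obsolete data awaiting erase.
class SimpleFTL:
    def __init__(self, physical_blocks):
        self.l2p = {}                                # logical block -> physical block
        self.free = list(range(physical_blocks))     # never written, or already reclaimed
        self.stale = set()                           # hold obsolete data, not yet erased

    def write(self, logical):
        if not self.free:
            if not self.stale:
                raise RuntimeError("no free and no known-stale blocks")
            # this is the expensive stall described above: the drive must
            # reclaim (erase) a stale block before it can service the write
            self.free.append(self.stale.pop())
        new_phys = self.free.pop(0)
        old_phys = self.l2p.get(logical)
        if old_phys is not None:
            self.stale.add(old_phys)   # old copy is garbage -- the one case the drive sees itself
        self.l2p[logical] = new_phys

    def trim(self, logical):
        # the OS tells the drive this logical block no longer holds useful data,
        # so its physical block becomes reclaimable *before* any new write arrives
        phys = self.l2p.pop(logical, None)
        if phys is not None:
            self.stale.add(phys)

ftl = SimpleFTL(physical_blocks=8)
for lba in range(8):
    ftl.write(lba)        # every physical block now looks "used" to the drive
ftl.trim(3)               # file deleted: with TRIM, block 3's old home is freed early
ftl.write(42)             # ...so this write doesn't have to stall hunting for space
print("known-stale blocks awaiting erase:", len(ftl.stale))
```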
Re:fragmentation? (Score:5, Informative)
For a thorough (read: long) primer on SSDs and long-term performance woes, Anand's overview [anandtech.com] is a must-read.
Re: (Score:3, Interesting)
Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted. The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings.
Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore? I know it's not an ideal solution but rsync or Time
Re: (Score:2)
It might ease the problem, but it wouldn't solve it. The controller on the drive needs to clear the pages and re-set everything back to zero, it doesn't do that with just a format as far as I am aware. You'd need a format -and- a trim to get it back to like-new speeds.
I like your idea for ramdisks and cache though.
Re: (Score:2, Informative)
browser.cache.disk.parent_directory
Re: (Score:2)
Yes this would work very well.
BUT you MUST tell the drive you've done this; with previous drives, the only way is to use the drive's secure-erase command to wipe the drive.
With these new drives you could just TRIM all the free space occasionally.
Re: (Score:3, Informative)
When the drive has been used enough, however, it m
Re: (Score:3, Insightful)
In very simple terms (because I'm no expert), it's because of the way SSDs deal with wear leveling and the fact that a single write is non-sequential. When it writes data, it is writing to multiple segments across multiple chips. It is very fast to do it this way, in fact the linear alternative creates heavy wear and is significantly slower (think single chip usb flash drives) than even spinning disk tech, and so this non-sequential write is essential.
Now, to achieve this, each chip is broken down into se
Re: (Score:2)
When you delete data, you are deleting little bits spread all over the physical drive.
The biggest problem is that a delete in most filesystems simply marks the space in the index on the device as free. However most filesystems leave the deleted data in place without writing anything over the top until that space is re-allocated. Hard disks don't typically need to know which sectors of the physical storage are actually in use. If you tell an SSD that this block is no longer required it can start erasing the physical chips and add them to the internal free list ready for the next data to be wr
Re:fragmentation? (Score:4, Interesting)
Very interesting. I assumed the problem was similar to fragmentation and wondered why nobody compared it as such.
Your explanation makes things much clearer: the global problem is amplified by the additional problem you described.
Now, would implementing the logic to control the SSD entirely at the OS/FS level be much slower than implementing it in silicon in the SSD itself?
As you said, I now understand that the OS/FS would have to be aware of the underlying media ;-)
Re: (Score:2)
Probably not, but neither OS vendors nor SSD manufacturers would want that.
There's no backwards compatibility, SSD vendors would have to write drivers for every OS, and so on.
Re: (Score:2)
Well, my understanding is too fresh to judge the case, but based on experience, standard interfaces usually emerge 10 to 20 years after the raw technology with proprietary drivers appears.
Please correct me if I am wrong, but wasn't this the case for hard drives?
Of course, the technology has to last that long for standards to emerge. ;-))
Re:fragmentation? (Score:5, Informative)
Once upon a time, a technical subject on /. would get insightful and informative responses that were modded up. Times change, I guess.
The "fragmentation" that SSD drives suffer from doesn't really come from wear leveling, or from having to find someplace to write things, but from the following properties:
* Filesystems read and write 4KiB pages.
* SSDs can read 4KiB pages FAST, many times over; can write a 4KiB page FAST, but only once; and can only erase whole 512KiB blocks, SLOWLY.
When the drive is mostly empty, the SSD has no trouble finding blank areas to store the 4KiB writes from the OS (it can even cheat with wear leveling to relocate 4KiB pages to blank space when the OS rewrites the same block). After some usage, THE WHOLE DRIVE HAS BEEN WRITTEN TO AT LEAST ONCE. From the point of view of the SSD, the entire disk is full. From the point of view of the filesystem, there is unallocated space (for instance, space occupied by files that have been deleted).
At this point, when the OS sends a write command to a specific page, the SSD is forced to do the following:
* read the 512KiB block that contain the page
* erase the block (SLOW)
* modify the page
* write back the 512KiB block
Of course, various kludges/caches are used to limit the issue, but the end result is the same: writes get slow, and small writes get very slow.
The TRIM command tells the SSD that a given 4KiB page can be safely erased (because it contains data from a deleted file, for instance), and the SSD keeps a map of the TRIM status of each page.
Then the SSD can do one of two things:
* If all the pages of a block are TRIMed, it can asynchronously erase the block, so the next 4KiB write can be relocated there - and so can the next 127 4KiB writes.
* If a write request comes in and there is no space left to write to, the drive can READ/ERASE/MODIFY/WRITE the block with the most TRIMed space, which will speed up the next few writes.
(of course, you can have more complex algorithms to pre-erase at the cost of additional wear)
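A quick back-of-the-envelope version of that penalty (my timing numbers are made up, purely to show the order of magnitude of the cliff described above):

```python
# Sketch: cost of one 4KiB write when a pre-erased page is available, versus
# when the drive must read/erase/modify/write a whole 512KiB block.
PAGE_KIB = 4
BLOCK_KIB = 512
PAGES_PER_BLOCK = BLOCK_KIB // PAGE_KIB          # 128 pages per erase block

# hypothetical per-operation timings in milliseconds (assumptions, not specs)
T_READ_PAGE = 0.025
T_PROGRAM_PAGE = 0.2
T_ERASE_BLOCK = 2.0

def write_to_blank_page():
    # fresh drive, or TRIM has kept pre-erased blocks available
    return T_PROGRAM_PAGE

def read_erase_modify_write():
    # read every page of the block, erase the block, program every page back
    return (PAGES_PER_BLOCK * T_READ_PAGE
            + T_ERASE_BLOCK
            + PAGES_PER_BLOCK * T_PROGRAM_PAGE)

fast = write_to_blank_page()
slow = read_erase_modify_write()
print(f"write into pre-erased space : {fast:.2f} ms")
print(f"read/erase/modify/write     : {slow:.2f} ms")
print(f"slowdown for that one write : {slow / fast:.0f}x")
```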
A tip for people reading "fragmentation" (Score:2)
Coriolis Systems (who make iDefrag) jokingly referred to that issue on their blog:
"Ironically even SSDs, where you would expect the uniform access time to render fragmentation a problem of the past, still have various problems caused by exactly the same issue [1]"
of course, they add:
[1] For avoidance of doubt, we strongly recommend that you don't try to defragment your SSD-based volumes. The fragmentation issue on SSDs is internal to their implementation, and defragmenting the filesystem would only make m
Re: (Score:2)
In a nutshell, the page size of the flash is larger than the logical sector size, but flash can only erase whole pages at a time (and erase isn't a fast operation). So when blocks get re-written, the old content doesn't go away by default. Instead, the logical-to-physical mapping is changed to point to an already-blank area and the old content is marked for reclamation.
When the last logical block in a page is invalidated, then the page can be scheduled to be erased and returned to the available list.
The ca
Potential data recovery problems (Score:3, Interesting)
Even if you restore the partition table from a backup, you will likely suffer silent file system corruption, which may even not be apparent until it's too late.
If TRIM support is actually implemented on the device, the device is free to 'lose' data on TRIMmed blocks until they are written at least once.
Re: (Score:2)
I would never, ever trust a filesystem after an event like this. Ever. Do your backups.
Re:Potential data recovery problems (Score:4, Insightful)
Something as simple as deleting the wrong partition becomes an irreversible operation if you do it using a tool that supports TRIM on TRIM-enabled hardware.
This seems needlessly verbose. Let me shorten it for you:
Deleting a partition should always be considered an irreversible operation.
Hmmm, even shorter:
Don't delete a partition unless you want it to go away forever.
Even if you restore the partition table from a backup, you will likely suffer silent file system corruption, which may even not be apparent until it's too late.
If TRIM support is actually implemented on the device, the device is free to 'lose' data on TRIMmed blocks until they are written at least once.
If I understand you correctly, you are suggesting that a disk partitioning tool would use TRIM to not only wipe the partition table itself, but also nuke the partition data from orbit. And you then point out that it would not be adequate to rewrite just the sectors of the partition table.
If so, then the answer is: you don't just restore the partition table, you restore the whole partition (including data) from backup.
I for one consider much-faster write speeds to be a bigger advantage than possibly being able to reverse a partition deletion.
steveha
Re: (Score:2)
There will NOT be silent file corruption. If the OS TRIMs the entire partition, it will appear to have been wiped by the drive. The physical blocks will be moved back onto the drive's free list and the logical blocks will be mapped to the 'null block'.
End result: TRIM looks like a fast wipe, and the data will be completely erased at the ATA level, but it won't be erased at the flash level until later.
Re: (Score:2)
Or, even if the drive didn't erase physical Flash cells yet, it could already mangle the mapping between the logical and physical blocks.
In fact, I have a cheap CompactFlash card that does exactly that when you yank power from it while writing - the drive appears completely scrambled (with blocks reordered) when you restore power to it.
SSDs?! (Score:2)
Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues.
Who cares how they perform. All they have to do is sit there and scare away enemy fleets.
Ooh yeah! (Score:2)
I'd love to get me some trim! [urbandictionary.com]
TRIM needs a driver, a windows driver? (Score:2)
The most important property of hard disks is that they are amazingly multi-platform. I didn't like the sound of needing a "Windows driver" or "OS support" to perform nicely.
SSD makers really had better stick to the standards and never, ever do anything requiring a "driver" on the host OS. For example, there are G4 Mac owners who happily upgrade their "old tech" magnetic drives to 500 GB or even 1TB. Who will write a driver for them? Apple? The SSD vendor? I don't think so.
In fact, HD vendors really better stay away from writing anything except
Revision 1571? (Score:2)
What about hybrids? (Score:2)
Where are all the wonderful SSD/HD hybrid drives that were supposed to come out and prevent many of the problems SSDs have?
My dad bought me an SSD for my birthday, and it was one of those models that has horrible stuttering issues (which, from what I'm reading, covers most SSDs). That was a lot of money for a drive that let me install drivers about 20 times slower than a hard drive would, and caused my machine to freeze for 20-40 seconds at a time while just surfing the web. That was after disabling all t
Re: (Score:2)
Never mind about the mixing of flash types. I did my research.
I still want to know where the hybrids are.
Re:High failure rate (Score:4, Informative)
Re:High failure rate (Score:5, Insightful)
That's a statistic that doesn't make any sense.
20% under what conditions, and in what timeframe? Over a long enough time period everything has a 100% failure rate.
Normal hard disks also will eventually fail, due to physical wear.
Also if it lasts long enough, at some point, reliability will stop being important. Even if it still works, very few people will want to use a 100MB hard disk from 15 years ago.
Re:High failure rate (Score:5, Insightful)
Just a small tangential nitpick: we were already more than a factor of ten past that HDD capacity fifteen years ago. The 1GB barrier was broken very early in the Nineties. I still have an HP 1GB SCSI drive from about '91 or '92, IIRC.
As far as failure rates go, I still have ALL of my disk drives from the last 15-20 years (minus one or two that outright failed), and every single one of them still functions at least nominally. I'm still more trusting of magnetic media than I am of either rewritable optical or Flash-based media.
Re: (Score:3, Insightful)
I've never heard of a 20% failure rate for SSDs. I've heard of wear concerns, as each little bit on the drive can only be written a set number of times (it's at 10,000 or so, if I remember correctly). However, thanks to the magic of wear leveling and the large number of separate chips in an SSD, you can fill up your drive completely and you will have only written to each bit exactly once. That means you could theoretically fill your SSD up 10,000 times before you would expect failure. Reality is a bi
Re: (Score:2)
I've heard that the failure rate on SSD's can be as high as 20%.
As Heinlein put it wonderfully in 'Tunnel in the Sky':
The death rate is the same for us as for anybody ... one person, one death, sooner or later. - Cpt. Helen Walker
Re: (Score:2)
I've heard that the failure rate on SSD's can be as high as 20%.
As Heinlein put it wonderfully in 'Tunnel in the Sky':
The death rate is the same for us as for anybody ... one person, one death, sooner or later. - Cpt. Helen Walker
Except Lazarus Long of course.
Re: (Score:3, Insightful)
Re: (Score:2)
Meh. Just stick in 50GB worth of RAM in there. No one's filling a Blu-Ray disk with 3d environment data yet, are they?
Why should a game even hit the disk except when saving, these days?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
most ram does yes, but there are always exceptions [wikipedia.org] to the rule.
Re: (Score:2)
Compress the data into a tarball (or .zip file, whatever) and write it to the hard drive when you are finished working/playing, reverse the process when setting up.
Real speed junkies use RAMdisks!
Re: (Score:2)
> Gamers, gamers, gamers and gamers.
Steve, is that you ?
Re: (Score:3, Insightful)
Because someone got paid to do it. You don't think /. editors work for free do you?
Re:Why Windows 7 in the summary? (Score:4, Interesting)
Even the best consumer-level SSDs like the Intel X25-M/E use a volatile RAM cache to speed up writes. In fact, with the cache disabled, random write IOPS drops to about 1200, which is only about three or four times as good as a 15k 2.5" drive. The more expensive truly-enterprise SSDs, which don't need a volatile write cache, cost at LEAST $20/GB, so the $/(safe random write IOP) ratio is actually still pretty close, and cheap SATA drives may actually be even on that metric with the fast enterprise SSDs. Granted, this shouldn't be the case in a year, but that's where it is right now. (Also, the performance-per-slot is a lot higher for SSDs, which can translate into $, power, and space savings.)
Re: (Score:2)
Re: (Score:2)
Yet another happy Microsoft customer vents his wrath on those wise enough to use something else.