Enterprise SSDs, Powered Off, Potentially Lose Data In a Week
New submitter Mal-2 writes with a selection from IB Times of special interest for anyone replacing hard disks with solid state drives:
The standards body for the microelectronics industry has found that Solid State Drives (SSD) can start to lose their data and become corrupted if they are left without power for as little as a week. ... According to a recent presentation (PDF) by Seagate's Alvin Cox, who is also chairman of the Joint Electron Device Engineering Council (JEDEC), the period of time that data will be retained on an SSD is halved for every 5 degrees Celsius (9 degrees Fahrenheit) rise in temperature in the area where the SSD is stored. If you have switched to SSD for either personal or business use, do you follow the recommendation here that spinning-disk media be used as backup as well?
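To put that halving rule in perspective, here is a back-of-the-envelope sketch (the baseline retention figure and temperature below are illustrative assumptions, not numbers from the JEDEC presentation):

```python
# Back-of-the-envelope for the "retention halves every 5 C" rule of thumb.
# baseline_weeks and baseline_temp_c are illustrative assumptions, not
# figures taken from the presentation.
def retention_weeks(storage_temp_c, baseline_weeks=52.0, baseline_temp_c=30.0):
    """Retention halves for every 5 C of storage temperature above the baseline."""
    return baseline_weeks / 2 ** ((storage_temp_c - baseline_temp_c) / 5.0)

for t in (30, 40, 55):
    print(f"{t} C -> {retention_weeks(t):.1f} weeks")
# 30 C -> 52.0 weeks, 40 C -> 13.0 weeks, 55 C -> 1.6 weeks
```

Which is how a drive that would hold data for a year on a cool shelf ends up in "a week" territory once the storage temperature climbs toward 55C.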
I call BS (Score:3, Insightful)
Re: (Score:3, Insightful)
Re:I call BS (Score:5, Informative)
The statements are actually completely accurate, but a bit misleading. First, this is about what JEDEC requires, not what actual SSDs deliver. Second, this is for SSDs stored idle at 55C. And third, the JEDEC requirements for minimum powered-off data retention are only 3 months at 40C for enterprise-grade SSDs and only 12 months at 30C for consumer SSDs. These are kind of on the low side, although I have lost some OCZ drives that were off for just about a year. (Never buying their trash again...)
That said, anybody conversant with SSD technology knows that SSDs are unsuitable for offline data storage, as their data potentially has a far shorter lifetime than on magnetic disks, which in turn have a far shorter data lifetime than archival-grade tape. There is absolutely no surprise here for anybody who bothered to find out the facts. Of course, there are always those who expect every storage tech to keep data forever, and those dumb enough to have no backups, or unverified backups, often on media not suitable for long-term storage. Ignoring reality comes at a price.
My personal solution is mixed HDD/SSD Raid1 and, of course, regular backups.
Re: (Score:2)
My personal solution is mixed HDD/SSD Raid1
Uhh... doesn't that mean that the RAID controller has to wait for the HDD on every read/write to verify it's the same as on the SSD, so effectively you get HDD performance?
Re:I call BS (Score:5, Informative)
Every write, not every read. Reads are satisfied as soon as either drive returns the data. And if the raid controller has a battery or supercap so it can cache writes, you'll almost never notice the difference.
Re: (Score:3)
"Reads are satisfied as soon as either drive returns the data. And if the raid controller has a battery or supercap so it can cache writes, you'll almost never notice the difference."
RAID controllers do not launch reads on all involved drives. That would be stupid.
Implementing battery backed write back cache on an array that uses SSD would be similarly stupid.
RAID 1 with mixed SSD/HDD is the worst of both worlds further complicated by people who don't understand it.
Re: (Score:3)
RAID controllers do not launch reads on all involved drives. That would be stupid.
?
For a RAID1, most RAID controllers (and software RAID implementations) will absolutely read from all devices so as to service the read ASAP.
For distributed parity forms of RAID, you inherently have to read from all devices.
For dedicated parity disk forms of RAID, you have to read from all devices except the parity device.
I've never tried a mixed RAID1 of SSD and magnetic disk, but with a large enough write cache the theory se
Re: (Score:2)
For dedicated parity disk forms of RAID, you have to read from all devices except the parity device.
I think the idea is to make a dedicated parity disk RAID with one data SSD and one parity HDD.
Re: (Score:3)
For a RAID1, most RAID controllers (and software RAID implementations) will absolutely read from all devices so as to service the read ASAP.
For distributed parity forms of RAID, you inherently have to read from all devices.
The problem is guaranteed with distributed parity raid; the controller will have to wait for the slowest disk to complete the read. Both reads and writes will be limited to mechanical disk performance levels.
With a RAID1 mirror set, you can get a performance improvement on reads since the SSD would presumably service all of them. Writes will still be delayed by the mechanical drive(s).
In addition, most RAID controllers do not support mixing drive types. Most of them don't even recommend mixing drive speeds
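To make that read/write asymmetry concrete, a toy latency model (the numbers are made up, purely for illustration):

```python
# Toy latency model for a mixed SSD/HDD RAID1 mirror. Latencies in ms are
# made-up illustrative values, not measurements of any real drives.
SSD_MS, HDD_MS = 0.1, 8.0

# A read can be satisfied by whichever member the controller picks; steering
# reads to the SSD gives SSD-class latency.
read_latency = min(SSD_MS, HDD_MS)

# A write is not complete until every member has it, so the slow member
# dominates unless a battery/supercap-backed cache hides the wait.
write_latency = max(SSD_MS, HDD_MS)

print(f"read ~{read_latency} ms, write ~{write_latency} ms")
```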
Re: (Score:3)
Implementing battery backed write back cache on an array that uses SSD would be similarly stupid.
How do you figure? Write to ram is a whole lot faster than write to flash, especially if the flash block has to erase first.
Re: (Score:3)
RAID controllers do not launch reads on all involved drives. That would be stupid.
I think you mean that they do not launch a read request for the same chunk of data on all drives in a RAID mirror. That would be accurate. However, they usually will read from both drives (read chunk 1 from drive A, read chunk 2 from drive B... doing so in parallel can significantly increase read performance using a mirror).
RAID 1 with mixed SSD/HDD is the worst of both worlds further complicated by people who don't understand it.
Do you mean people like you?
Look up "md raid write-mostly", or try this page (one of many found): http://tansi.info/hybrid/ [tansi.info]
That setup is for a linux software RAID 1 mirror with one side
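For anyone who wants to try the write-mostly trick the parent is pointing at, a minimal sketch (assumes Linux with mdadm installed and root privileges; the device paths are hypothetical examples):

```python
# Minimal sketch of a Linux md RAID1 with the HDD flagged write-mostly, so
# reads are steered to the SSD member. Device paths are hypothetical.
import subprocess

subprocess.run([
    "mdadm", "--create", "/dev/md0",
    "--level=1", "--raid-devices=2",
    "/dev/nvme0n1p1",               # SSD member: services reads
    "--write-mostly", "/dev/sda1",  # HDD member: written to, rarely read
], check=True)
```

The HDD still receives every write, so it remains a full mirror; it just stops being on the read path.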
Re: (Score:2)
Every write, not every read. Reads are satisfied as soon as either drive returns the data. And if the raid controller has a battery or supercap so it can cache writes, you'll almost never notice the difference.
Ah, I thought RAID1 would warn you somehow of bit flips which I assume would be the way heat-deteriorated storage would show up. Guess it won't, you'll need ZFS or something like that.
Re: (Score:2)
That's very dependent on whose implementation of RAID 1 it is. I've seen everything from reading from one drive, to striping reads, to reading from both and comparing. Linux will actually let you choose from among some of those options.
ZFS and btrfs add a CRC for a group of blocks and can detect which drive has the bad data, correct it, and track that it happened.
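Conceptually, that arbitration looks something like the sketch below (checksums per block group; data and checksums are made up for illustration, and the real ZFS/btrfs on-disk formats are far more involved):

```python
# Sketch of checksum-based mirror arbitration: the checksum recorded at write
# time tells you WHICH copy rotted, not merely that the two copies differ.
import zlib

def pick_good_copy(copy_a: bytes, copy_b: bytes, stored_crc: int) -> bytes:
    if zlib.crc32(copy_a) == stored_crc:
        return copy_a                      # copy A is intact; rewrite B from it
    if zlib.crc32(copy_b) == stored_crc:
        return copy_b                      # copy B is intact; rewrite A from it
    raise IOError("both mirror copies fail the checksum")

block = b"important data"
crc_at_write_time = zlib.crc32(block)
rotted = b"importanT data"                 # simulated bit rot on one drive
assert pick_good_copy(rotted, block, crc_at_write_time) == block
```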
Re: I call BS (Score:2)
use ZFS l2arc/zil or flashcache/dm-cache to get a happy medium.
Re: (Score:3)
I've been thinking about getting a bunch of cheap USB sticks, building a ZFS pool out of them with some redundancy, and then using it as a Usenet spool just to see what can go wrong. If anything can abuse a disk, it's Usenet.
Re: (Score:2)
The modern day floppy-raid. [wired.com]
Re: (Score:3)
I've done this, experimentally, using not-super-cheap 128 GiB Patriot and HyperX USB 3 sticks.
For a USENET load, performance will depend on whether your incoming feed is predominantly batched, streaming, or effectively random -- small writes bother these devices individually, and aggregating them into a pool works best if you can maximize write size. One way to do that is to use a wide stripe (e.g. zpool create foo raidz stick0 stick1 stick2 stick3 stick4 ...), which works well if your load is mainly batch
Re: (Score:2)
Ah, cannot ETA, so: decent USB 3 sticks make *excellent* l2arc devices in front of spinny disk pools. They can often deliver a thousand or two IOPS each with a warm l2arc, and that would otherwise mean disk seeks. I use them regularly.
(The downside is that in almost all cases l2arc is non-persistent across reboots, crashes and pool import/export, and can take a long time to heat up so that a good chunk of seeks are being absorbed by them, so you're limited by the IOPS of the storage vdevs and the size
Re:I call BS (Score:5, Informative)
I might be wrong but isn't it also when the SSD is stored at 55C AFTER having been stress tested at 55C to their endurance rating in terabytes written (page 39) under a given workload?
And even then, the cherry-picked value was in example data submitted by Intel for unknown hardware, very likely extrapolated, and quite possibly meaningless because it wasn't part of the chart targeted for the standard.
The article seems to have totally misrepresented the presentation's purpose: which is to lay out endurance testing methodology/standards.
The only important values were on page 26, where they set the minimum requirements: 40C active 8 hr/day with 30C 1-year retention for consumer drives (with a higher allowed error ratio), and 55C active 24 hr/day with 40C 3-month retention for enterprise drives (with a lower allowed error ratio).
And it looks like they haven't actually worked out the consumer workload for testing yet.
Re: (Score:3)
Cherry-picked for "a week" yes, but still disturbing. It's not an issue for datacenters, but for offices.
Imagine an office PC set next to the radiator - oh, the employees are free to set up their desks as they like, and they really don't care about stuff like that. Imagine the employee going away for a month-long holiday break, taking the family on a skiing trip, and the PC experiencing 50C on a regular basis. That's quite enough to cause data loss.
Yes, in a responsible company there will be backups - or the data will
Re: (Score:2)
I don't know about that. At every office that I've ever done work at, even the women refuse to sit next to the radiators. That includes in super cold areas where it hits -40C to -50C in the late fall/winter. And in the cases where the 'rad' is pumping out 55C temps, there is already 30-100 cm of space around it simply to prevent possible burns, which gives you plenty of space to normalize the air temperature. Most places are now on forced air from the ceiling.
Re: (Score:2)
Yes, it would be insane to use flash memory for archival purposes as well, but it still should easily retain its contents for at least a decade.
Nope. It's nowhere near that long--more like 1 year, not 10. And that time shrinks further as the flash wears out from being written.
Re: (Score:2)
You must be joking. We would be in deep trouble if flash memory held its information for only about a year.
Nope, not joking. It also depends on the process size. Nice big (i.e. low-capacity, large die size) SLC flash cells hold data for quite a long time. The higher-density they get (and the fewer electrons per bit used to store data), the worse it gets.
So a lot of device firmwares and BIOSes, which generally use nice big chunky flash cells, will last 10-20 years. High-capacity flash storage, not so much.
That's just the way it is. Flash is currently the least worst solid state storage solution we have
How powered off is "powered off"? (Score:2)
There's full shutdown, there's power left on at the wall, there's hibernate with wall power left on, there's sleep. Lots of laptops (Toshiba, for example) come with "sleep-and-charge", where they will supply current to USB ports while asleep. I rarely do a full shutdown on laptops or desktops; would this be enough to avoid the problem?
Or would there need to be a BIOS feature to ensure current supply to SSDs as well as USB ports?
Re: (Score:2)
Or would there need to be a BIOS feature to ensure current supply to SSDs as well as USB ports?
There would need to be a hardware feature. The power supply doesn't send power to the drives when the computer is not turned on; they are unpowered even in some sleep modes.
Re: (Score:2)
This isn't even the same kind of SSD that's in your PC; it's probably cap based. Flash memory based can stay powered off forever and not lose data.
Non-enterprise drives are rated in the presentation to retain data for a year.
Re:How powered off is "powered off"? (Score:5, Informative)
Re: (Score:3, Interesting)
Whenever something is "Enterprise" class, it means it is vastly inferior to other solutions on the market but costs 3x as much.
For some reason this isn't a buzzword that sends shivers down every IT worker's spine yet.
Re:How powered off is "powered off"? (Score:5, Interesting)
Re: (Score:2)
For example, I do not know of any consumer SSD which has power-loss capacitors (the Intel 320 is not produced anymore).
The Intel 730 series have these capacitors.
Re: (Score:2)
You may be right in the case of other equipment, but enterprise-grade drives really are better.
BackBlaze disagrees with you:
https://www.backblaze.com/blog... [backblaze.com]
Overall, I argue that the enterprise drives we have are treated as well as the consumer drives. And the enterprise drives are failing more.... Enterprise drives do have one advantage: longer warranties. That’s a benefit only if the higher price you pay for the longer warranty is less than what you expect to spend on replacing the drive. This leads to an obvious conclusion: If you’re OK with buying the replacements yourself after the warranty is up, then buy the cheaper consumer drives.
Re: (Score:2)
It really is unsurprising. IIRC, enterprise drives and consumer drives are usually the same basic drive mechanism, with the main differences being the lack of a park ramp in enterprise drives and firmware differences, such as parking the heads less frequently, spinning down less frequently, etc. And while those firmware differences could make a difference if they happen to tickle some serious design flaw, for the most part, I'd expect the differences to be lost in the noise.
There are some exceptions, of
Re: (Score:2)
Still happens, just over a longer lifetime than it used to. Lubricant is also lost over time whether the drive is spinning or not, so at some point (probably well over ten years now) that old drive in storage is not going to spin up.
Re: (Score:2)
Re: (Score:3)
reason being that you shouldn't be buying Enterprise grade through a brick retailer. You should be leasing it via a support contract: the premium is with a tech on the other end of the phone who's out in a couple hours to replace a dud drive and have your RAID rebuilt before the day ends, rather than you running to the nearest PC World for a TB Seagate pocket spinny. Like I've just had to. If I'd had a support contract (hence Enterprise grade drives as they generally insist on anyway since they're easier fo
Does anyone still return faulty drives? (Score:3)
You should be leasing it via a support contract: the premium is with a tech on the other end of the phone who's out in a couple hours to replace a dud drive and have your RAID rebuilt before the day ends
I always think the idea of giving back a storage device that has had real data on it under some long-term warranty or rapid on-site service agreement is mostly marketing spin.
Every company I work with has a simple policy on this, for basic security/privacy reasons: a drive that is DOA can go back, but anything that has ever been touched by real data is written off and securely destroyed. Any warranty longer than a few days is therefore worthless to us, as is any rapid service support contract if all it's go
Re: (Score:2)
All my HP FC SAN arrays have DMR support for the warranty period and again when under maintenance. On a 25-drive chassis with 900GB 2.5" SAS drives, it's about 5% of the total cost. All have dual controllers and power supplies, of course.
The disks are arranged as 4x 6-disk RAID6 sets, then presented to VMware as 4x 3.6TB VMFS disks.
The 25th disk? It goes on the shelf and is the cold spare for that array. When a disk fails, it goes in straight away and a phone call to HP sees a new disk is sent out overnigh
Re: (Score:2)
You DO know that the big vendors have options that let you keep the drive instead of requiring you to return it?
Re: (Score:2)
Yes, I do. Did you actually read my post to the end before replying?
Re: (Score:2)
Decay within a week is pretty aggressive for anything you'd have the nerve to sell; but all flash memory can be expected to lose its contents over time.
Re: (Score:3)
Yes, conceptually all flash data has to decay when powered off. But implementation tradeoffs vary widely. A dirt-cheap Microchip PIC18F2580 [microchip.com] microcontroller has flash data retention without refresh "conservatively estimated" at 40 YEARS MINIMUM and 100 years typical (page 3, 10, 435). The number of previous erase/program cycles that retention is predicated on is not given, but is probably a single one, or a few, out of a typical endurance of 100,000 cycles and a minimum of 10,000 from -40 to +85 C. AFAIK t
Re: (Score:2)
Untrue. Very, very untrue. Even industrial-grade long-term-storage flash often has only 10-year data retention. And those are selected SLC cells, and you pay roughly 1000x per unit of capacity what a regular SSD costs. "Cap based" storage is called "DRAM" and loses data after seconds to minutes.
Seriously, do a bit of research before claiming complete nonsense.
"Cap based" storage (Score:2)
You can make reliable, fast-access NVM using DRAM plus a battery- or cap-based backup to run the refresh engine during power-off. So, not complete nonsense.
Re: (Score:2)
Complete nonsense for almost all scenarios then. You can buy these, but they are extremely expensive and usually come as a 19" unit. And complete nonsense because these are not what is called "SSD" today.
Seriously, you can nit-pick all day, the fact of the matter is that the "contribution" by luther349 was of negative value and I pointed that out.
Re: (Score:2)
There's confusion here between various storage devices that happen to have a capacitor. There's DRAM, of course, and there are RAID controllers that keep enough power to "finish a write" during power loss, to greatly speed write caching. But there are also SSDs, and SSDs have the same problem with "finishing a write" during power loss. Most consumer SSDs risk losing data in a power hit, while the better ones have a capacitor just to avoid that problem. I think that's what they're talking about here: the
Re: (Score:2)
By definition, all drives lose data when power disappears suddenly. You can't guarantee that a filesystem metadata change won't be halfway complete unless the OS is in control over when the drive powers down.
Consumer drives shouldn't be at any higher risk of data loss than enterprise drives in this regard. Either data is on disk or it isn't. If a file gets halfway written, it will still be halfway written whether the drive stays up for an extra few hundred milliseconds to flush its buffer or not. More
Re: (Score:2)
You can't guarantee that a filesystem metadata change won't be halfway complete unless the OS is in control over when the drive powers down.
The filesystem needs to be able to trust that a drive has completed its write when the I/O completes. This is a difference between SSDs and spinning rust - most SSDs lie, as they're often still shuffling things around when they report I/O completion on the bus. What's more, SSDs tend to have far more internal metadata, and you can actually lose the whole volume on some of them due to sudden power loss. Finishing up the write internally fixes that.
It's the same old problem with RAID controllers - N
Re: (Score:2)
Actually, as the Linux FS folks found out some time ago, most HDDs do lie too. Makes them look better in benchmarks.
If you really need a reliable 2-phase commit, use a UPS that forces an orderly shutdown well before it runs out of power, or put the journal on a device with a reliable flush. You may also want a redundant PSU. This is a solved problem. It is just that for consumer-grade and cheaper server-grade hardware it does not make sense to implement the solutions and, in addition, people have unrea
Re: (Score:2)
There is also a grace period with a good-quality PSU: the SSD detects when the power is going down (like all disks do) and, if the gradient is flat enough, has quite some time to get into a safe state. The OS also has at least 20ms with a good PSU, as it will keep voltages stable for that time or longer after power_good gets de-asserted. With a sane firmware design, that should be enough for the SSD to make sure no data that was already written gets corrupted (due to large sectors). Of course, with a catastr
Re: (Score:2)
You're way optimistic with that "forever". Flash memory is based on stored electric charge, which has the nasty property of leaking.
I tried some MicroSD cards that had been sitting in my drawer for the past 4 years. All needed to be reformatted. SSDs may retain data longer, but it's by no means 'eternal'.
Re:How powered off is "powered off"? (Score:5, Informative)
Your stuff's cargo container was not heated during shipping. If it was stacked below the deck line, where it was not exposed to the sun, it didn't reach 55C during the journey.
no problem (Score:5, Funny)
Scotty will have the power back just in time.
Re: (Score:2)
But first he'll tell you it's impossible.
Re: no problem (Score:2)
What it really says... (Score:5, Interesting)
The relevant table is on page 27.
In short: if you use the SSD in a cold environment AND store it in a hot environment, then you may lose data quite quickly. Quicker than in two weeks.
Client drives are also affected, but the data loss occurs slightly later. I guess the reason for the difference is that enterprise drives assume a higher operating temperature.
So the advice is: if you use the SSD in your air-conditioned basement in a good case, then do not store it in the sun for extended periods.
And no, I do not use spinning media as a backup. I use tapes. Using spinning media for proper backups is almost impossible. See http://www.taobackup.com/ [taobackup.com]
Comment removed (Score:5, Interesting)
Re: (Score:2)
If the step 2 of your failure scenario is "hurricane", you...
1)- Have plenty of possible mitigation. Hurricanes are trackable and reasonably predictable, and you could load your things into an evac van, or back them up, or have a generator. If you don't have a generator, it's super possible to acquire one - those things are sold by the side of the road after a real storm.
2)- Have a pretty good plan. A hurricane hitting your data center, house, or anything at all is certainly able to destroy it anyway, w
Re: (Score:2)
Re: (Score:2)
Definitely lost a house in a hurricane and lived in hurricane country for years, so yes, I know what I'm talking about.
> Essentially, yes, you get some warnings.
"Some warnings" = Paper talks about it, all over the internet, all over the news, weather station tracks it a huge percent of the time, government issues statements.
You also have a huge window to evacuate in. This is not some sky-is-falling thing - the power of the storms is well known, their trajectory is iffy when they are mid-Atlantic but w
Re: (Score:2)
Also, you'd have to get really unlucky to have 100 degree days right after a hurricane. It can happen, but it's by no means likely.
Re: (Score:2)
The master paused for one minute, then suddenly produced an axe and smashed the novice's disk drive to pieces.
I'll bet the master is also the one who infected the novice's computer with a virus, and set fire to the novice's building.
Re: (Score:2)
Using spinning media for proper backups is almost impossible. See http://www.taobackup.com/ [taobackup.com]
There is nothing in that story to suggest that HDDs are considered inappropriate for backup media. What is your theory? I've used HDDs for deduplicating daily snapshots for the last 15+ years and found them to be every bit as reliable as tapes, and far far easier to use.
Re: (Score:2)
And no, I do not use spinning media as a backup. I use tapes. Using spinning media for proper backups is almost impossible. See http://www.taobackup.com/ [taobackup.com]
Your link doesn't really seem to explain how using "spinning media for proper backups is almost impossible", so you'll either need to point to exactly where it says that, provide some other reference, or expand on that on your own.
Backup won't help you (Score:5, Insightful)
FTFS:
If you have switched to SSD for either personal or business use, do you follow the recommendation here that spinning-disk media be used as backup as well?
So how do backups help you? Except for ZFS and btrfs (?), no file systems check for data integrity. You're not going to detect the bitrot taking place, and you'll happily send that rotten data to your backup until the corruption is noticed in some other way.
Re:Backup won't help you (Score:5, Insightful)
The bit rot happens when the drives have been powered off for an extended period. The backups are taken before the power is removed.
Re: (Score:2)
Backup to disk is not much use (TM).
Re: (Score:2)
I read it multiple times. Still can't see the point.
While the system is running you are making backups and no data is being lost. If your SSD is powered off for an extended period it starts to lose data. If you have any sort of reasonable data management you would now assume the data on the SSD to be unreliable and restore the backups before it is used. What, exactly, is the problem? Or do you think 'bring it online and wait til someone complains before restoring backups' is a reasonable data managem
Re: (Score:2)
Real enterprise has not gone to SSD (Score:2)
Re: (Score:3)
Re: (Score:2)
Which is why you don't keep your data centre in the basement; in fact, nothing of importance should be kept in the basement. Sandy and Fukushima are prime examples of that.
On this site there was a report after Sandy from a guy who literally had his team carry fuel up flights of stairs to the generators - the data centre and the generators were out of harm's way, but the fuel was in the basement, along with the pumps. I'm guessing that storing fuel halfway up a building is frowned upon. I don't know the feasi
My strategy (Score:2)
Scenario (Score:2)
Bring a laptop with an SSD to Death Valley, leave it in the car parked in the sun, and go hiking. How long until your data is in trouble? However, I just looked at the specs for the Samsung 840 EVO, since it was the first to pop up:
Temperature
Operating: 0C to 70C
Non-Operating: -55C to 95C
I would assume the 95C is with data? It would be a rather small caveat if the drive survived but your data was fried.
Real world use cases needed (Score:3)
Backups (Score:3)
If you have switched to SSD for either personal or business use, do you follow the recommendation here that spinning-disk media be used as backup as well?
First, anything stored on any kind of drive should be backed up if you care about it.
Second, if you do backup, who backs up to SSD? You don't need frequent fast random-access on backups, and SSD is about the most expensive storage technology around per-GB. Anybody doing bulk storage is going to be doing it on either hard disks, tape, or something optical.
So, if you're backing your data up, you'll be backing it up to something safe most likely.
Of course, this does bring up the need for the ability to verify the integrity of your data at rest, and right now I'd say ZFS/btrfs are the best way of accomplishing this. You could also do hashing above the filesystem layer, but that requires a lot of overhead if your files change frequently. If your files don't change much, then something like tripwire would be fine. You'd want to run that more often than you rotate your backup media so that you don't discard the last-known-good version of anything.
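A rough sketch of what "hashing above the filesystem layer" could look like (the paths and manifest format are arbitrary choices for illustration, not any particular tool's behaviour):

```python
# Record a SHA-256 per file, then re-verify before rotating backup media so a
# silently rotted file never displaces the last known-good copy.
import hashlib
import pathlib

def hash_file(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: str) -> dict:
    return {str(p): hash_file(p)
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

def find_changed(root: str, manifest: dict) -> list:
    """Files whose current hash no longer matches the recorded one."""
    return [p for p, digest in manifest.items()
            if hash_file(pathlib.Path(p)) != digest]
```

Anything that turns up in find_changed() without having been deliberately edited is a candidate for bit rot and should be restored from backup before the next media rotation.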
Re: (Score:2)
I suggest that the vast majority of backups by home users today are on thumb drives and/or flash cards. Not the greatest method by a long shot, but you can't expect someone with only a couple gigs of photos and music to invest in a real backup solution.
Re: (Score:2)
No, it's a stupid recommendation. Spinning rust doesn't last very long on a shelf. It will rapidly go bad mechanically if you keep switching between shelf and active. SSDs are far superior, and data retention is going to remain very high until they really dig into their durability. If you still care, there's no reason why you can't just leave them disconnected from a computer but still powered... they eat no real current compared to a hard drive. SSD-based data retention should be 30+ years if left power
Now I'm not as excited about Uber as I used to be. (Score:2)
Slide 10:
Unrecoverable Bit Error Ratio = ( number of data errors ) / ( number of bits read )
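Worked example of that formula (the numbers are illustrative, not taken from the slide):

```python
# Unrecoverable Bit Error Ratio: one unrecoverable error after reading ~1 PB.
data_errors = 1
bits_read = 8e15              # roughly 1 PB expressed in bits
uber = data_errors / bits_read
print(f"UBER = {uber:.2e}")   # 1.25e-16
```

That lands around the order of magnitude usually quoted for JEDEC targets: roughly 10^-15 for client drives and 10^-16 for enterprise drives.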
A Data Point (Score:2)
Charge trap vs floating gate NAND... (Score:5, Interesting)
Newer 3D NAND uses a charge trap design, which basically solves the electron leakage issue found with the older floating gate NAND...
Also, the move to the newer 3D NAND brings us back up to 40nm processes vs the 10nm gates we are currently working with, allowing for much better reliability.
Disclaimer: I've been selling enterprise flash storage for the last 6 years.
Re: (Score:2)
There's no news here. (Score:2)
These tests explicitly state that the SSD is rewritten until it reaches its endurance rating before the retention test is done. At that point the flash in a consumer drive would not be expected to retain data unpowered for more than 1 year.
If you write your data to a fresh SSD once, multiply the number by at least 10.
-Matt
NOT consumer drives (Score:2)
From TFA - consumer SSDs can expect 2 years, which is better than lots of HDs which probably won't spin up. Enterprise SSDs are faster but more ephemeral.
On the other hand I wouldn't count on this - cell drift is what causes the Samsung 840 slowdown after just a month.
And yes, I back all my stuff up constantly since I don't want to lose it. To platter drives just because it's much cheaper and speed doesn't matter.
Re: (Score:2)
Re: (Score:2)
Read again - higher temperature means less time. So if there's no lower limit, an SSD without power would be best stored at 0 kelvin.
No, I read the article. It's related to electron mobility at the time the data is written. If you heat up the SSD after it's turned off, electron mobility increases and you'll get more leakage. But if you wrote the data when it was hot, you have a better signal-to-noise ratio.
Re: (Score:2)
Re:Thumb Drives (Score:5, Insightful)
Tape is something of a myth.
The only safe media is that which you keep copying before it deteriorates. Not HDDs, not SSDs, not CDs, not thumb drives, and not tape. Any media you leave untouched past its data retention period will lose data.
What you need is to check every copy of your data for any sign of degradation, and replace it with a fresh copy as soon as, or before, it begins to fail. Tape may give you the most time between checks, but it doesn't change the fact that data you forget about is data you will lose.
Re: (Score:2)
I looked into this because I once wondered why no one uses thumb drives for backup. Thumb drives are reasonably safe for a couple of years, but after that many can degrade. I saw many sites indicating that flash is not a safe medium. This caused me to wonder what they did differently in SSD technology. The only safe medium that I know of is tape.
Tape is something of a myth.
The only safe media is that which you keep copying before it deteriorates. Not HDDs, not SSDs, not CDs, not thumb drives, and not tape. Any media you leave untouched past its data retention period will lose data.
What you need is to check every copy of your data for any sign of degradation, and replace it with a fresh copy as soon as, or before, it begins to fail. Tape may give you the most time between checks, but it doesn't change the fact that data you forget about is data you will lose.
You're talking about archives, I think, and the previous guy was talking about long-term backups.
An archive might be the only copy of your data, and everything you said applies. But backups only have to live as long as your retention period, so once you meet that requirement, you're set.
The biggest myth is that backups or archives are simple :\
archival optical (Score:2)
There are some pretty fancy optical discs which are supposed to last for decades if kept in a reasonable environment, and which are written in ordinary drives. They're a cheap intermediate step before tape.
Re: (Score:3)
Re:I image my SSD regularly (Score:4, Insightful)
Re: (Score:2)
And I remember when Al Shugart took that claim from Shugart Corporation with his newer company.
Re: (Score:2)
or drill a hole in it
Re: (Score:2)
Re: (Score:2)
one hole per package of NAND will be sufficient
Re: (Score:2)
They're too expensive and inconvenient to back up any serious amount of data to.
If you have some personal data, photos, whatever you want to save, they're fine for that, but it just takes too many discs to back up a goodly chunk of things.
(I use plain BD-R, not M-disc, but when I wanted to back up some things just-in-case before working on my backup drive, it took me all day to write about 30 discs. If I wasn't doing something else at the time and just swapping as needed, it would have been horribly frustra
Re: (Score:2)
The chance is the same as the AFR of the rest of the product, but yes, it's very small.
Your worst case is that you cycle your SSD to 100% of its capability (which basically no user does anyway) inside a freezer, then put it on your dashboard as you park your black-on-black sports car in Death Valley for a 6-month hiking trip.
If you're not doing all 3 of those things simultaneously I wouldn't worry.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
More charge gets transferred to each transistor? The insulating layer doesn't break down as much? Some weird physics effect?