
Ask Slashdot: Best File System For the Ages?

New submitter Kormoran writes: After many, many years of internet, I have accumulated terabyte HDDs full of software, photos, videos, eBooks, articles, PDFs, music, etc. that I'd like to save forever. The problem is, my HDDs are fine, but some files are corrupting. Some videos show missing keyframes and some photos are ill-colored. RAID systems can protect online data (to a degree), but what about offline storage? Is there a software solution, like a file system or a file format, specifically tailored to avoid this kind of bit rot?
  • by Anonymous Coward on Monday March 06, 2017 @05:41PM (#53988011)

    I prefer to chisel the 0s and 1s into a stone tablet. Very secure, no bit rot.

  • bit rot (Score:5, Informative)

    by Anonymous Coward on Monday March 06, 2017 @05:43PM (#53988025)

    zfs

    • Re:bit rot (Score:5, Insightful)

      by Narcocide ( 102829 ) on Monday March 06, 2017 @05:52PM (#53988127) Homepage

      It's pretty sad that in this day and age, only one person has highlighted the relevance of ZFS here, and they're an AC. Someone mod parent up. RAID is borderline necessary if you don't have multiple backups (to recover from in the event of random corruption caused by gamma rays from outer space or a butterfly flapping its wings on another continent or whatever), but so far as I know, only ZFS has built-in checksumming to detect/prevent the data corruption in the first place.

      • Re:bit rot (Score:5, Informative)

        by tlhIngan ( 30335 ) <slashdot&worf,net> on Monday March 06, 2017 @06:20PM (#53988419)

        It's pretty sad that in this day and age, only one person has highlighted the relevance of ZFS here, and they're an AC. Someone mod parent up. RAID is borderline necessary if you don't have multiple backups (to recover from in the event of random corruption caused by gamma rays from outer space or a butterfly flapping its wings on another continent or whatever), but so far as I know, only ZFS has built-in checksumming to detect/prevent the data corruption in the first place.

        No, RAID is not sufficient to prevent bit-rot. In fact, RAID can accelerate it. You see, using a redundant mode like RAID 1, 5 or 6, most controllers (software and hardware) will only read enough disks to get the data: 1 drive in the case of RAID1, N-1 for RAID5 and N-2 for RAID6 (the non-parity ones, to save a parity calculation). But the drives can return bit errors - it's rare, but it does happen (there's an undetectable fault error rate, something along the lines of 1 in 10^20 bytes read or so will have an undetected error). And the RAID controller will happily return this to you, since it didn't check the redundant drives to verify correctness. And it's possible the bad data then gets written back, making the corruption permanent.

        You really need something like ZFS which puts a checksum on every file and verifies it, so if it does get an error it can resolve it.

        • Re:bit rot (Score:5, Informative)

          by __aaclcg7560 ( 824291 ) on Monday March 06, 2017 @06:23PM (#53988453)

          You really need something like ZFS which puts a checksum on every file and verifies it, so if it does get an error it can resolve it.

          ZFS also has its own flavors of RAID 1/5/6.

        • Re:bit rot (Score:5, Insightful)

          by nmb3000 ( 741169 ) on Monday March 06, 2017 @08:35PM (#53989483) Journal

          (there's an undetectable fault error rate, something along the lines of 1 in 10^20 bytes read or so will have an undetected error)

          I just want to call this out because it's so important. That number, 10^20, sounds big, but considering the size of modern drives it's really not.

          Randomly picking the WD 8TB Red NAS drive (WD80EFRX), which is designed for consumer RAID, as an example:

          The spec sheet [wdc.com] says the URE (unrecoverable read error) rate is at worst 1 per 10^14 bits read. However, that drive holds 8 x 10^12 bytes! If you were to read every single byte, you would expect about 0.64 bit errors, i.e. roughly even odds that at least 1 bit is read incorrectly.

          (8 x 10^12 bytes x 8 bits/byte) / (1 x 10^14) = 64,000,000,000,000 / 100,000,000,000,000 = 0.64

          Correct my math if I'm wrong, but this should make anyone think twice about using any kind of RAID as a "backup" solution. If you have a disk fail you have a better than 50/50 chance of introducing corrupt data during the rebuild process!
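
          A quick back-of-the-envelope check of that arithmetic in Python, assuming the quoted worst-case spec of one URE per 10^14 bits and an 8 TB (8 x 10^12 byte) drive. Strictly speaking, 0.64 is the expected number of bad bits per full-drive read, which works out to roughly even odds of hitting at least one; a rebuild that reads several drives end to end fares correspondingly worse.

            import math

            drive_bytes = 8e12           # 8 TB drive, decimal capacity
            bits_read = drive_bytes * 8  # one full end-to-end read
            ure_rate = 1e-14             # worst-case spec: 1 URE per 1e14 bits read

            expected_errors = bits_read * ure_rate           # ~0.64 expected bad bits
            p_at_least_one = 1 - math.exp(-expected_errors)  # Poisson approximation, ~47%

            print(f"expected UREs per full read: {expected_errors:.2f}")
            print(f"chance of at least one URE:  {p_at_least_one:.0%}")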

          Frankly, ZFS-style checksumming is the future of file systems. It has to be for any data you care about.

          • Re:bit rot (Score:5, Funny)

            by grcumb ( 781340 ) on Monday March 06, 2017 @09:38PM (#53989817) Homepage Journal

            (there's an undetectable fault error rate, something along the lines of 1 in 10^20 bytes read or so will have an undetected error)

            I just want to call this out because it's so important. That number, 10^20, sounds big, but considering the size of modern drives it's really not.

            Vhrist, you guys. Why so p[aranoid? FAT has been workking just fine since day one, and there's not reason to beliveve it won't keep workingn that way for

      • Re:bit rot (Score:5, Informative)

        by MightyMartian ( 840721 ) on Monday March 06, 2017 @06:26PM (#53988483) Journal

        Who's to say ZFS will be around in a few decades?

        The real solution here is relatively frequent backups, multiple copies in different filesystems and physical formats (i.e. flash, hard drive, optical). Over time you just keep moving your file store to new media. I have files that are over twenty-five years old now, some of them coming from DOS and Windows 3.1, others from my old original Slackware 3 installs. Along the way some of those files have been on CD-Rs, DVDs, early USB thumb drives, and various hard drives running everything from FAT, FAT32, ReiserFS, HPFS, NTFS, ext2 and ext3. And I'll keep on doing that until I drop dead, and I'll leave it up to my family to decide whether they want to keep any of the documents, pictures, music files, videos and so on that I've been collecting.

        At no point do I ever assume a mere file system sitting on one physical and/or logical volume is ever going to do the job of keeping my files available over the long haul. RAID and file systems in all their glory are not intended for that. Multiple physical copies at multiple locations on multiple types of media, that's the only real way to assure your files remain accessible and safe over time.

        • Who's to say ZFS will be around in a few decades?

          Why wouldn't it be? The only things that could wipe out all implementations of a widely used format like ZFS would be nuclear war or an ELE asteroid strike. In either event, reading disk drives would be the least of your problems.

        • Multiple copies may be one solution, but they introduce another problem that doesn't have an elegant solution: you need a tool that can verify the integrity of your data across the multiple copies (see the manifest sketch below). How do you choose which one is "correct" when you migrate and copy to a new system? In addition, how can you be sure that any given copy is actually complete? What if you want to permanently delete a file from your archive?

          I mitigated some of these problems for my photo library by using version control softwar
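
          A minimal sketch of the kind of tool the parent is asking for: build a SHA-256 manifest from one copy, then check every other copy against the same manifest. A file that fails on only one copy tells you which copy is "correct"; a file listed in the manifest but missing on disk tells you a copy is incomplete. The paths and the two-argument command line are placeholders, not an existing tool.

            import hashlib
            import sys
            from pathlib import Path

            def sha256(path, chunk=1 << 20):
                """Stream a file through SHA-256 so huge files need not fit in RAM."""
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    while block := f.read(chunk):
                        h.update(block)
                return h.hexdigest()

            def build(root, manifest):
                """Write 'digest  relative/path' for every file under root."""
                with open(manifest, "w") as out:
                    for p in sorted(Path(root).rglob("*")):
                        if p.is_file():
                            out.write(f"{sha256(p)}  {p.relative_to(root)}\n")

            def check(root, manifest):
                """Re-hash every listed file; report anything missing or corrupted."""
                bad = 0
                for line in open(manifest):
                    digest, rel = line.rstrip("\n").split("  ", 1)
                    p = Path(root) / rel
                    if not p.is_file():
                        print("MISSING", rel)
                        bad += 1
                    elif sha256(p) != digest:
                        print("CORRUPT", rel)
                        bad += 1
                return bad

            if __name__ == "__main__":
                cmd, root, manifest = sys.argv[1:4]   # e.g. build /mnt/copyA manifest.txt
                if cmd == "build":
                    build(root, manifest)
                else:
                    sys.exit(1 if check(root, manifest) else 0)

          Build the manifest from the copy you trust most, then run the check against every other copy, and again after every migration to new media.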

      • I tried ZFS once on a backup server and my system crumbled under the load. I did the same config with ext4 and it has been running fine since.
        • by higuita ( 129722 )

          ZFS was built to run on big servers, with lots of RAM and battery-protected RAID controllers. On whatever OS it runs, it tries to use a lot more resources than other filesystems, especially if you enable dedupe and compression. It is a good filesystem, but a lot of people think it is a good general filesystem for all users, which it is not. It can be used that way, but it is not lightweight; it works best in dedicated fileservers.

      • by Vairon ( 17314 )

        Btrfs and ZFS have metadata and data checksum support.
        XFS has only metadata checksum support.

        • by AaronW ( 33736 )

          I have been using XFS for many years and have found it to be quite reliable and have been able to recover data when the underlying data store got corrupted. It's also quite mature in Linux and relatively fast. My last experience with BTRFS was a failure (several years ago) due to it being incredibly slow when there were thousands of small files in directories. Once ZFS is stable in the Linux kernel I'll give it a try.

    • by Noryungi ( 70322 )

      zfs

      ZFS is a pretty good solution. Multiple NAS ZFS systems [freenas.org] with snapshots and replication are even better.

      I personally like XFS in production (including LVM), but ZFS is hard to beat if bitrot is your #1 concern.

    • by Calibax ( 151875 ) *

      Like all hardware, disk drives have two states - failed and going to fail. Bitrot will also occur with long term storage, whether you notice or not.

      A self-healing file system with substantial redundancy capabilities like ZFS is the obvious answer.

      However, there are many ways to configure ZFS, and some configurations have better redundancy than others. A misconfigured system would be worse than useless because of the false sense of security. Exactly how many terabytes of data you have also matters for cre

    • That would've been my vote a few years ago. But since Oracle demonstrated they're willing to sue people who use software Sun formerly released as open source, ZFS is dead. Nobody is going to touch it with a 10 ft pole, and Oracle has shown little interest in continuing to develop it.

      We're just gonna have to wait for the great features in ZFS to be re-implemented in some other filesystem, free of Oracle's clutches.
    • Re:bit rot (Score:5, Informative)

      by Spazmania ( 174582 ) on Monday March 06, 2017 @07:28PM (#53989017) Homepage

      He said a filesystem for the ages. While it has wonderful features, ZFS isn't even a filesystem for this age, let alone ages to come. FAT32 and ISOFS are your best bets for being readable 20 years from now.

      Bear in mind that your hard disk checksums each block and returns an error if the block is uncorrectable upon read, rather than giving you bad data. So, if you're getting bit rot at all then you have a hardware problem.

      With or without a hardware problem you want to be able to recover your data. The answer is PAR2, via tools such as parchive or QuickPar. PAR2 uses a Reed-Solomon code to take a set of source files and produce a set of recovery files such that the original files can be checked for correctness and up to N original files can be corrected, where N is the number of recovery files created.

      And that's your answer: a filesystem like FAT32 or ISOFS that's likely to still be implemented in future OSes, plus recovery files which let you rebuild anything that suffers from bit rot.

  • by DogDude ( 805747 ) on Monday March 06, 2017 @05:43PM (#53988031)
    I've got somewhere between 20-30 TB that has been accumulating for more than 20 years on NTFS, and I've never seen any examples of "bit rot". My files today are identical to what they were 20+ years ago. I have to wonder what kind of filesystem the poster is using.
    • Not a single example in 30TB over 20 years? I think you should check again.

    • I tried downloading an old attachment (from 6-7 years ago) from my gmail account, but the attachment is corrupted. No matter how many times I download it or to what computer, it's corrupted. I wonder what Google is using?

      • I tried downloading an old attachment (from 6-7 years ago) from my gmail account, but the attachment is corrupted. No matter how many times I download it or to what computer, it's corrupted. I wonder what Google is using?

        What type of file is it? It might be a media format the player software no longer recognises (find an older player). Or if it is an exe it might be a 16 or 32 bit exe that won't run in a 64 bit environment. (find an older operating system). If it's not confidential, could you post a link so we can try it?

    • ...'ve got somewhere between 20-30 TB that has been accumulating for more than 20 years on NTFS...

      Given what appears to be Microsoft's strategy of slowly morphing away from [consumer] OSes, I'd be reluctant to rely on Microsoft for anything long-term.

  • The only historically tried and proven method of storing information for the long term.
  • by archer, the ( 887288 ) on Monday March 06, 2017 @05:46PM (#53988069)

    If the bits on your drive are changing while the drive is offline, that isn't a filesystem issue. A filesystem issue would be if your OS wrote the wrong information to the drive, but that can't happen with an offline drive.

  • RAID (Score:4, Informative)

    by buchner.johannes ( 1139593 ) on Monday March 06, 2017 @05:46PM (#53988079) Homepage Journal

    RAID systems can protect online data (to a degree), but what about offline storage?

    Still, RAID is a good choice for redundancy.

    Or paper: http://ollydbg.de/Paperbak/#1 [ollydbg.de]

    • RAID is not a backup. RAID checksums are only evaluated when you read data and are only calculated when writing. If the data is just sitting there for years without any kind of access, you can guarantee that it's going to die from bitrot.
    • Not all RAIDs are equal. If you want your data to be safe, use RAID 1 with the second volume in a remote location (a.k.a. an offsite backup).
  • Joking...but not really. From today's Reddit Science AMA with Yaniv Erlich: https://www.reddit.com/r/scien... [reddit.com]
  • by raymorris ( 2726007 ) on Monday March 06, 2017 @05:50PM (#53988105) Journal

    The magic phrase to Google is "error correction codes" (ECC).

    PAR2 uses Reed-Solomon error correction. Parchive is the ECC file format specification; for Linux you will want PyPar or par2tbb, and on Windows you can use a GUI called QuickPar.

    Btrfs can be set to use ECC on a single disk.

    You can slice a single disk into partitions and then use RAID1 or LVM mirroring, or RAID5 or RAID6. LVM can also be useful to divide (and combine) any number of drives into any number of volumes, then you can RAID across the volumes.

    If you Google "ecc disk", "ecc backup", or "ecc archive" you'll find other options, with details about each option.
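
    If you go the PAR2 route, the command-line tool scripts easily. The following is only a hedged sketch around par2cmdline (the "par2" binary most Linux distros package); the 10% redundancy figure, the archive name and the flat directory layout are illustrative, not recommendations.

      import subprocess
      from pathlib import Path

      def protect(directory, redundancy_percent=10):
          """Create PAR2 recovery files covering every regular file in the directory."""
          files = [p.name for p in Path(directory).iterdir()
                   if p.is_file() and p.suffix != ".par2"]
          subprocess.run(["par2", "create", f"-r{redundancy_percent}", "archive.par2", *files],
                         cwd=directory, check=True)

      def verify(directory):
          """True if every protected file still matches its recorded checksums."""
          return subprocess.run(["par2", "verify", "archive.par2"],
                                cwd=directory).returncode == 0

      def repair(directory):
          """Rebuild damaged files from the recovery blocks, if enough of them survive."""
          subprocess.run(["par2", "repair", "archive.par2"], cwd=directory, check=True)

    Run verify() from cron and you have a poor man's scrub for plain offline disks, independent of the filesystem underneath.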

    • by heypete ( 60671 ) <pete@heypete.com> on Monday March 06, 2017 @07:29PM (#53989025) Homepage

      QuickPar on Windows is long-obsolete. MultiPar [vector.co.jp] is the more modern variant.

    • by Kjella ( 173770 )

      I agree on PAR2, simply because it's a file you can easily copy around, take backups of and so on. From a 1GB file I have ~3000 source blocks and ~30 recovery blocks, so I can recover from a lot of bit flips or failed 4kb sectors for a 1% size overhead. If it's a photo set I usually make sure I can recover at least one completely missing photo. The nice thing is that it's sufficiently overkill you can probably go through several hardware generations without checking/repairing before you accumulate an unrecovera

  • ext4 is journaled and prevents loss in case of some file-corruption-prone events (like a sudden shutdown).
  • by Chewbacon ( 797801 ) on Monday March 06, 2017 @05:52PM (#53988123)

    ZFS will guard against bit rot. That's not enough. RAID isn't enough. You need redundancy outside your home or office. Cloud may be expensive for the amount of data you have, but Amazon S3 may be the most affordable in that range. You could get S3 for maybe $15-20 a month if you have a terabyte of data. If that's cost prohibitive, rotate external drives regularly and keep one at work. You'll lose very little data since you're archiving things.

    • by hawguy ( 1600213 )

      ZFS will guard against bit rot. That's not enough. RAID isn't enough. You need redundancy outside your home or office. Cloud may be expensive for the amount of data you have, but Amazon S3 may be the most affordable in that range. You could get S3 for maybe $15-20 a month if you have a terabyte of data. If that's cost prohibitive, rotate external drives regularly and keep one at work. You'll lose very little data since you're archiving things.

      AWS S3 pricing is $0.023/GB or $23/TB/month.

      But for infrequently accessed data, AWS Glacier offers the same durability of S3 for only $0.004/GB or $4/TB/month. There's an infrequent access tier in between those two for $12.50/TB/month.

      Volume discounts kick in above 50TB.
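
      Rough storage-only arithmetic with those per-GB rates (requests, retrieval and egress are extra, and they are what makes Glacier painful), just to put numbers on a multi-terabyte archive:

        # Per-GB monthly rates quoted above (March 2017, us-east-1); storage only.
        RATES_PER_GB = {
            "S3 Standard":          0.023,
            "S3 Infrequent Access": 0.0125,
            "Glacier":              0.004,
        }

        def monthly_cost(terabytes):
            """Storage-only monthly cost in USD for each tier."""
            return {tier: rate * terabytes * 1000 for tier, rate in RATES_PER_GB.items()}

        for tier, cost in monthly_cost(terabytes=4).items():
            print(f"{tier:22} ${cost:7.2f}/month")   # 4 TB: $92.00, $50.00, $16.00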

      • by heypete ( 60671 )

        But for infrequently accessed data, AWS Glacier offers the same durability of S3 for only $0.004/GB or $4/TB/month. There's an infrequent access tier in between those two for $12.50/TB/month.

        Volume discounts kick in above 50TB.

        Online.net's C14 service [online.net] is even cheaper, at EUR 0.002/GB/month plus EUR 0.01/GB for "operations" (such as creating an archive from the temporary staging area, manually verifying archives on demand, or recovering an archive), and offers the same 99.999999999% durability as Glacier. No bandwidth costs and no complicated retrieval speed costs like Glacier, and you can use rsync to upload to the staging area. Naturally, they perform behind-the-scenes error checking and repair, but the manually-selected verifi

  • Any Linux FS (Score:3, Interesting)

    by MouseR ( 3264 ) on Monday March 06, 2017 @05:52PM (#53988125) Homepage

    I'd go for any Linux file system because Linux is the platform that evolves the least. It's still in the 90s so in 2037 it will still be current.

    (Watch out for the hater storm! Here they come!)

    But it's kinda true if you omit the snideness of the first statement. Because it's maintained by the user base, it's less likely to "devolve" into something incompatible due to market pressure. I, myself, would go for an Apple file system, but Apple isn't so keen on keeping the Mac current and that doesn't bode well for the future. There might be a great change on the horizon.

  • by hcs_$reboot ( 1536101 ) on Monday March 06, 2017 @05:53PM (#53988129)
    That's a well-known problem for photographers: photo colors are affected over time. Keep the photo negatives in a safe place!
    • That's a well-known problem for photographers: photo colors are affected over time. Keep the photo negatives in a safe place!

      That struck me as odd too. If the colours in digital photos or movies don't look right, I would try to display them with different software. It's more likely that the display software is reading and interpreting the file format differently; it's unlikely that bit rot would only affect the colour palette without making the whole file unreadable.

      • by erice ( 13380 )

        That struck me as odd too. If the colours in digital photos or movies don't look right, I would try to display them with different software. It's more likely that the display software is reading and interpreting the file format differently; it's unlikely that bit rot would only affect the colour palette without making the whole file unreadable.

        Or the OP is using a different monitor. It doesn't matter if the new monitor is better or worse than the old one. If it is different and the photos were adjusted for the old monitor, they will look "off".

  • Backblaze published a report on which SMART stats they see indicating imminent drive failure: https://www.backblaze.com/blog... [backblaze.com]

  • No media is perfect. There are just varying error rates over time, depending on the quality of the media. Without knowing ahead of time whether a specific piece of media is going to fail, the question needs to change from "How do I keep it from getting corrupted?" to "How do I mitigate eventual corruption?"

    And the question basically boils down to one answer: redundancy.

    Off the top of my head, I can think of three things you can do, and these are not mutually exclusive.
    1. Multiple copies of dat

  • "Is there a software solution, like a file system or a file format, specifically tailored to avoid this kind of bit rot?"

    Yes, ZFS is specifically tailored for this. Configure a zpool running RAID-Z2 with a hot spare or RAID-Z3. Half a dozen 6TB or 8TB disks should suffice.

    Set it to auto-scrub regularly. Send logs and warnings to your email, and pay attention to them. (This is the hard part). Especially pay attention if they stop arriving. (This is even harder).

    I have used Nexenta for some time, but the free
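
    Something like the following, dropped into cron, covers the "scrub regularly and email me" part. It is only a sketch: the pool name, addresses and SMTP host are placeholders, and it relies on nothing beyond stock "zpool scrub" / "zpool status -x" plus a local mail relay.

      import smtplib
      import subprocess
      from email.message import EmailMessage

      POOL = "tank"                    # placeholder pool name
      MAIL_FROM = "zfs@example.org"    # placeholder addresses and relay
      MAIL_TO = "you@example.org"
      SMTP_HOST = "localhost"

      def main():
          # Kick off a scrub; it runs in the background, so the health check
          # below mostly reflects the previous pass plus any errors found so far.
          subprocess.run(["zpool", "scrub", POOL], check=False)

          # 'zpool status -x' reports the pool as healthy when there is nothing to fix.
          status = subprocess.run(["zpool", "status", "-x", POOL],
                                  capture_output=True, text=True)
          if "healthy" in status.stdout:
              return

          msg = EmailMessage()
          msg["Subject"] = f"ZFS pool {POOL} needs attention"
          msg["From"], msg["To"] = MAIL_FROM, MAIL_TO
          msg.set_content(status.stdout or status.stderr)
          with smtplib.SMTP(SMTP_HOST) as smtp:
              smtp.send_message(msg)

      if __name__ == "__main__":
          main()

    The harder problem mentioned above, noticing when the mails stop arriving, still needs something watching the watcher, e.g. a periodic "all clear" mail or an external monitor.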

  • 'Forever' is a long time.

    'Offline' is difficult to deal with long-term (I am thinking decades to centuries); such is the nature of technology and the lack of any real history we have of digital data management.
      Personally I would say the best bet is keeping your data 'live' online to some extent; it is the only real way to monitor and control the inevitable decay.
      Basically your data's lifespan is related to how long you can convince someone to care for it for you.

  • Pick your poison:

    - Tape: inexpensive and slow, requires frequent testing (backups we do; it's restoration that's the problem!), often unreadable after 6 to 12 months or less (that's in production, people).

    - WORM: more expensive than tape and just as slow, works well in the medium term (meaning 10 years tops).

    - XFS NAS: faster than the above, requires good hardware and a bit more work than either tape or WORM. Don't forget to set up replication to multiple systems. May suffer from bitrot in the long term (checksumming/h

    • by Nutria ( 679911 )

      usually unreadable after 6 to 12 months or less

      What kind of crappy tapes do you use? We've restored DLT tapes after 7 years in Iron Mountain.

  • If bits were randomly changing you'd have corruption issues, not faded images and videos missing keyframes. This is ridiculous.
  • HDDs will die. If you want something that will last for many decades or even centuries without getting corrupted then you need to stop using a volatile filesystem. The best option is to go with write once media. The best option I know is M-DISC.

    M-DISC's design is intended to provide greater archival media longevity.[3][4] Millenniata claims that properly stored M-DISC DVD recordings will last 1000 years.[5] While the exact properties of M-DISC are a trade secret,[6] the patents protecting the M-DISC technology assert that the data layer is a "glassy carbon" and that the material is substantially inert to oxidation and has a melting point between 200 and 1000 C.[7][8] -- Wikipedia

    • HDDs will die. If you want something that will last for many decades or even centuries without getting corrupted then you need to stop using a volatile filesystem. The best option is to go with write once media. The best option I know is M-DISC.

      M-DISC's design is intended to provide greater archival media longevity.[3][4] Millenniata claims that properly stored M-DISC DVD recordings will last 1000 years.[5] While the exact properties of M-DISC are a trade secret,[6] the patents protecting the M-DISC technology assert that the data layer is a "glassy carbon" and that the material is substantially inert to oxidation and has a melting point between 200 and 1000 C.[7][8] -- Wikipedia

      Did you even bother reading the wiki you linked to, or did you just copy and paste the first paragraph?

      "However, according to the French National Laboratory of Metrology and Testing at 90 C and 85% humidity the DVD+R with inorganic recording layer such as M-DISC show no longer lifetimes than conventional DVD±R.[11]"

  • by silas_moeckel ( 234313 ) <silas AT dsminc-corp DOT com> on Monday March 06, 2017 @06:15PM (#53988377) Homepage

    ZFS is nice and I use it, but it makes assumptions about sane gear that are not safe on desktop-grade hardware. BTRFS, which I also use, works great. But for your specific use case, SnapRAID is the thing to use. By that use case I mean things that never change: a big pile of files you keep adding to. Mind you, you're going to have to replace drives over time.

  • An archival optical format. M-DISC DVDs and Blu-ray are theoretically able to retain data for 1000 years. And DVD uses some error correcting codes already, Reed-Solomon I believe.
    An SSD is a bad choice for archival; in some cases MLC flash can decay and accumulate errors in 3 months while unpowered [extremetech.com].
    For a file system that is likely to be understood in the distant future, ISO 9660 with no file larger than 2 GiB should do the trick (see the splitting sketch below).
    Packing your data into a custom archive file format that has more sophisticated
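
    As a companion to the 2 GiB rule above, here is a small, self-contained splitter sketch; the .000/.001 naming is arbitrary, and "cat file.* > file" rejoins the parts.

      import os

      LIMIT = 2 * 1024**3 - 1    # stay just under the 2 GiB ISO 9660 file-size limit
      BUF = 64 * 1024**2         # copy in 64 MiB pieces so RAM use stays small

      def split_for_iso(path):
          """Split an oversized file into path.000, path.001, ... parts that each fit."""
          if os.path.getsize(path) <= LIMIT:
              return []
          parts = []
          with open(path, "rb") as src:
              part = 0
              while True:
                  written = 0
                  name = f"{path}.{part:03d}"
                  with open(name, "wb") as dst:
                      while written < LIMIT:
                          block = src.read(min(BUF, LIMIT - written))
                          if not block:
                              break
                          dst.write(block)
                          written += len(block)
                  if written == 0:
                      os.remove(name)   # the source ended exactly on a part boundary
                      break
                  parts.append(name)
                  part += 1
          return parts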

  • 1. Add lots of redundancy in the form of PAR2 files.
    2. Store the whole lot in tar format, dumped to the drive as a raw block device (see the sketch after this list). This format is so simple that a future programmer will have no trouble reverse-engineering it, even if all documentation has somehow been lost, and there are no key structures whose loss will render the whole thing impossible to read. Just to be sure, the first thing going in there is a copy of the tar format specification.
    3. Include also a copy of the par2 software for sever
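
    A sketch of step 2 with Python's tarfile module: a plain ustar archive whose first member is the saved format spec, as suggested above. The file names are placeholders; the resulting archive.tar would then be written to the raw device and covered with PAR2 files per step 1.

      import tarfile
      from pathlib import Path

      ARCHIVE = "archive.tar"              # placeholder names
      SPEC = Path("tar-format-spec.txt")   # your saved copy of the ustar/pax specification
      DATA = Path("data")                  # the directory tree being archived

      # Plain ustar is the oldest, simplest tar variant (mind its 8 GiB per-file
      # and tight path-length limits), which maximises the odds of future readability.
      with tarfile.open(ARCHIVE, "w", format=tarfile.USTAR_FORMAT) as tar:
          tar.add(SPEC, arcname=SPEC.name)  # the spec goes in first, so it is read first
          tar.add(DATA, arcname=DATA.name)  # then the actual payload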

  • Delete shit (Score:2, Insightful)

    by Anonymous Coward

    Seriously, minimalism is underrated. There is such a thing as too much useless data. It's hard to catalog, it's hard to track, and if you sat down and sorted out what you actually could still use, most of it is probably worthless or you'd never find the time to use ever again. You might ask "well it's still worth storing IN CASE I ever find a use for it", but that's a typical data-hoarder sentiment that is unsustainable. You can't just keep buying media to store everything and never delete, it's a managemen

  • Just RAID it (preferably mirroring) and store multiple redundant copies, physically separated. Either use a checksumming filesystem (i.e. ZFS) or make your own checksums so you can recognize bitrot.

    But you'll never know when things have degraded beyond recovery.

    Unless you're prepared to regularly validate that the data is still readable, you'd be better off storing the data at any major cloud vendor and let *them* verify integrity over time. Or better, mirror the data across multiple cloud providers.

    My most im

  • https://www.backblaze.com/blog... [backblaze.com] There is also rsbep, see https://www.thanassis.space/rs... [www.thanassis.space]
  • by swb ( 14022 ) on Monday March 06, 2017 @06:29PM (#53988501)

    You've got terabytes of information you will never access again. How about just getting rid of most of it? Pick some subset you want to keep and then buy 3 HDDs and create triple copies of it. Repeat this every year and you'll probably not lose any of the information.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I completely agree. Someday you will die. Maybe .01% of what you have stored will be valuable to your posterity (including photos and videos). After two generations - nobody will care except for a picture or two.

      Too bad you have now dumped terabytes of uncurated junk on them that they will now have to spend days of time checking for anything useful. Do your posterity a favor and save the useful stuff and junk the rest.

      By the way, if you don't, this will repeat with your children, and your children's children.

      • by swb ( 14022 )

        It's worse than that. A friend of mine is in the estate sale business. He and his partner have been doing it, along with sidelines in collectible art and furniture, for close to 30 years, and cater to a who's-who list of local old money families.

        Unless you are an extremely serious collector of high value objects, about half your stuff will sell for 10 cents on the dollar and the rest will go to the landfill. Vintage silver service? Valued at the melt value of the silver.

        I helped him move stuff to the du

  • BTRFS another option (Score:2, Informative)

    by Anonymous Coward

    In addition to ZFS, BTRFS also handles bitrot. I'm running a 4-disk BTRFS RAID 10 in my closet, mounted on a development machine on my desk via NFS. It's been working fine for about a year, and I schedule a scrub a couple of times a month whose purpose is exactly this: to catch and correct bitrot. It does so by using a CRC32 check, and if it detects a problem on one slice it overwrites that slice from the data on the good slice.

    Also I have offline and offsite backups of very important items.

    When using BTRFS

  • by williamyf ( 227051 ) on Monday March 06, 2017 @06:47PM (#53988683)

    That's a job for the Linear Tape File System (LTFS).

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    Tape is (still) the best medium for long-term storage. Over the years tape (or more likely, the engineers) has aggressively incorporated into the standards things like FEC codes (from Reed-Solomon to more exotic ones nowadays).

    And since 2010, with LTFS, you can access the files with the convenience of a normal filesystem (but bear in mind, access is slow as hell).

    Back up your data to tape (more than one set), and send it to specialized offline storage facilities (climate controlled, i.e. temperature/humidity/dust/light control) from different providers, in different geographical areas.

    Since there is now only one true tape standard (LTO-7, released in 2015; the tape business has been shrinking, so the proliferation of standards seems to be over), if you use it today, chances are you will still find equipment to read it 50 years from now. Nonetheless, keep a few (as in two or more) SYSTEMS (computer + drive + software) set up so that you can re-read. A cheapo micro form-factor mobo with an Atom processor (but NOT the Atom C2000 series, PLEASE), Linux, a 1Gbps NIC and a tape drive should be more than enough.

    Now, for online storage, as other posters have said, ZFS WITH ECC memory (and therefore a very expensive Xeon, or an AMD server-type mobo) and JBOD will do the trick.

  • It's not the only solution of its type, but it is IMO the best:

    http://www.snapraid.it/ [snapraid.it]

    It is perfect for your kind of situation: long-term, reliable, efficient storage of lots of data that seldom changes. Think of it as offline RAID backup: it works like RAID, but it computes parity during your backup operations, "offline".

    The beauty of it, IMO, is that it is not file system dependent. It works with NTFS, EXT2, HFS, whatever. It works on Linux, Windows, Macs, whatever. You don't need special controllers, and

  • Paper Tape - As long as you don't damage it, it will never suffer data loss.

  • by GuB-42 ( 2483988 ) on Monday March 06, 2017 @07:47PM (#53989173)

    A lot of bit rot is actually caused by faulty RAM.
    When data is moved around, it has to go through RAM, and even smart filesystems like ZFS may not help you there. Servers usually have ECC memory for that reason and ZFS explicitly recommends it.

  • Make copies of things you care about occasionally on new media. If you don't care about something, let it rot. It's very liberating, kind of like burning your desk.
