Ask Slashdot: What's the Ultimate Backup System? Cloud? Local? Sync? Dupes? Tape...? (bejoijo.com) 289

Long-time Slashdot reader shanen noticed a strange sound in one of their old machines, prompting them to ponder: what is the ultimate backup system? I've researched this topic a number of times in the past and never found a good answer...

I think the ultimate backup would be cloud-based, though I can imagine a local solution running on a smart storage device — not too expensive, and with my control over where the data is actually stored... Low overhead on the clients with the file systems that are being backed up. I'd prefer most of the work to be done on the server side, actually. That work would include identifying dupes while maintaining archival images of the original file systems, especially for my searches that might be based on the original folder hierarchies or on related files that I can recall being created around the same time or on the same machine...

How about a mail-in service to read old CDs and floppies and extract any recoverable data? I'm pretty sure I spotted an old box of floppies a few months ago. Not so much interested in the commercial stuff (though I do feel like I still own what I paid for) as I'm interested in old personal files — but that might call for access to the ancient programs that created those files.

Or maybe you want to share a bit about how you handle your backups? Or your version of the ultimate backup system...?

Slashdot reader BAReFO0t recommends "three disks running ZFS mirroring with scrubbing and regular snapshots, and two other locations running the same setup, but with a completely independent implementation. Different system, different PSU, different CPU manufacturer, different disks, different OS, different file system, different backup software, different building construction style, different form of government, etc."

shanen then added "with minimal time and effort" to the original question — but leave your own thoughts and suggestions in the comments.

What's your ultimate backup solution?
  • by Tablizer ( 95088 ) on Sunday November 15, 2020 @11:36PM (#60728890) Journal

    Slashdot favors dupes.

  • by SuperKendall ( 25149 ) on Sunday November 15, 2020 @11:47PM (#60728908)

    Cloud is great for reliability, but it can't be the only answer, because in the end you should never put 100% trust in something so totally out of your control.

    So you really need to add something offsite; for me it's a hard disk I keep in a different city (with a relative). I used to try to update it once a month, but now it's more like once a year. Lazy, but it's still better than nothing.

    Add to that, you need a local copy as well, of at least some things, so you can access a backup more immediately... for some people, perhaps the cloud solution could serve in that role.

    One other reason local copies are still good to have is so that you can verify the offsite backups from time to time; you don't want to be doing that from the cloud version often, if at all.

    • I convert the files to QR codes and print them on organically sourced parchment. The sales rep says it's guaranteed to outlast me.

      Seriously though, consumer backup systems are universally crap. Acronis, Norton, Macrium, Windows Backup... all crap. Every single one of them only starts after a whole bunch of incomprehensible option dialogs, and every single one of them fails to manage the space on the destination disk, meaning people call me all the time saying "the backup failed!".

      Why can't I just mark a USB

      • by Bert64 ( 520050 )

        Apple's Time Machine backup behaves the way you describe...
        Turn it on and give it a disk to use; whenever that disk is available, automatic incremental backups occur hourly, with older backups automatically removed once space runs short. It's pretty good for a consumer backup system.

        I've also seen lots of enterprise backup systems which failed catastrophically. Everything looks like it's working, but when you actually come to restore something there's a major problem and nothing works. The most recent one

          I've also seen lots of enterprise backup systems which failed catastrophically. Everything looks like it's working, but when you actually come to restore something there's a major problem and nothing works.

          I really don't get why doing test restores isn't basically a required part of everyone's backup task flow... but, from what I've seen, it's uncommon. I've seen, first-hand, someone get burned because it turned out recovering stuff from their expensive DLT tape backups didn't work. Frankly, the guy was lucky he didn't lose his job.

          • by shanen ( 462549 )

            This gave me a weird idea. How about if the backup system sometimes intervenes when you look at an old file? Instead of getting the latest local file, it would substitute the backup copy so you can start screaming bloody murder if there is something wrong with the backup. Of course if there's no problem, then the backup system has to make sure the local copy gets updated if you actually change the file (but the older the file the less likely that you'll be changing it (in general)).

            One aspect I left out of

        • by AmiMoJo ( 196126 )

          Sounds like you need to be very careful if it deletes old copies when space fills up. You don't want to ever allow the drive to get more than 50% full because if you get hit with ransomware it will save all the encrypted files and purge the good copies.

      • But you CAN!

        At least on Linux, you can add a trigger to /etc/udev/rules.d/ that runs a program whenever you connect something that matches your rules.
        Have that program trigger your dialog (e.g. using kdialog), and run your backup if the dialog returns the right value.
        (Note that you cannot just show a dialog from some script not started from X, as it won't know that X is running and won't have the authorization to access it either. There are various solutions for doing that though. E.g. look at other software
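A minimal sketch of that udev-triggered approach, for the curious. The rule file name, device label, script path, and user paths below are hypothetical, and it assumes a systemd-based distro (udev kills long-running RUN+= processes, so the actual backup is handed off via systemd-run):

```sh
# /etc/udev/rules.d/99-backup-disk.rules  (hypothetical file and label names)
# When a partition labelled "BACKUP" appears, hand the job to systemd rather
# than running the backup directly from udev.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="BACKUP", RUN+="/usr/bin/systemd-run --no-block /usr/local/bin/usb-backup.sh $env{DEVNAME}"
```

```sh
#!/bin/sh
# /usr/local/bin/usb-backup.sh (hypothetical): mount the drive, ask, back up.
DEV="$1"
MNT=/mnt/backup
mount "$DEV" "$MNT" || exit 1
# kdialog only works if the script can reach the user's graphical session
# (DISPLAY/XAUTHORITY) -- the caveat mentioned in the comment above.
if DISPLAY=:0 XAUTHORITY=/home/me/.Xauthority kdialog --yesno "Backup drive detected. Run backup now?"; then
    rsync -a --delete /home/ "$MNT/home/"
fi
umount "$MNT"
```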

        • Well, the main question is, and you seem to support it with your suggestion: why the heck would anyone have a Y/N dialog popping up when they connect a dedicated backup volume?
          This is 1980s bad user interface design!

      • Apple Time Machine works under Windows, too.
        Just get one.

    • by fermion ( 181285 ) on Monday November 16, 2020 @12:26AM (#60729012) Homepage Journal
      Back when we had to do local backups, it was tapes swapped out every day. One went home, one stayed in the drive.

      If I was dealing with large data that I had to keep accessible, it would be RAID: not a backup per se, but if you are trying to get back to work quickly, you can't beat 50 terabytes that can be recovered just by putting in a new drive.

      For better or worse, unless you are paying your own staff, outsourcing data storage is the wave of the now. It is insecure (another two huge breaches were just reported today), but in 20 years of increasing use of online backups, I have not lost any real data. I do have data on some 1990s machines, though, that I say I am going to boot up and recover, but never do.

      • RAID is in no way an alternative to a backup. Saying that you prefer RAID over backup is like saying that you prefer your house's foundation over its walls.

      • by shanen ( 462549 )

        This one reminded me of a funny story about backups. I was "tentatively" hired by a kind of weird company to take care of their computers and networks. I should have gotten suspicious since I knew the previous guy, knew he was pretty competent and even friendly, and also knew that he was already gone. I was basically walking into a vacuum. (Though I knew him from some computer club or something, I think he had interviewed me a few years earlier as an assistant, but I had decided against joining the company

    • by Kisai ( 213879 )

      Cloud is not a reliable backup solution. It relies on network connectivity. It's a solution for "I need my data everywhere", not "I need redundancy"

      There is, at present, no suitable permanent backup medium. Basically we're stuck between LTO tapes and BD/AD

      M-disc approached this, but it apparently doesn't work as advertised. We are now on "archival disc", which has a target capacity of 1TB. At present, write-once discs sold as an "alternative to tape" are 3.3TB (ODA format) and cost like $125 each. It's chea

      • I just had a motherboard fail in such a way that it trashed two hard drives.

        Fortunately, I have rsync copying my data to another machine, and both machines have LTO tape drives.

        I do not back up my OS - I back up /home, /etc and some things in /var. Most of /var is stuff which is recreated from dumps, not backed up (PostgreSQL dump and WAL logging). Web sites are created from svn repos; repos are dumped, but hot copies are also maintained. LTO tapes with Grandfather/Father/Son rotation - with "Father" tape rota

    • by AmiMoJo ( 196126 )

      Duplicati is great for this. It supports multiple cloud services as well as stuff like SFTP for your own servers. It's cross platform too.

      For off-site backups if you know someone who you can do a swap with then you can host each other's data. It's all encrypted of course. Sync overnight.

  • I've been thinking about this for about 22 years, ever since I tried to restore from floppy backups and they were corrupted.

    Maybe this will finally be the year to create one?

    One thing I do know: ZFS is for morons who are not in fact sitting around running command line programs on their filesystem every week, which would be required to make it better than ext4.

    • What is your alternative? I've run ext3 & 4 for years on md raid arrays, but data still fails on reads. What can do error reporting on reads along with writes, snapshotting, and deduping?

      • Re: (Score:3, Insightful)

        by Aighearach ( 97333 )

        ZFS is a filesystem, the idea that it has something to do with creating backups is just a moron flag on the speaker.

        If you're not a hardcore filesystem nerd who already checks their filesystem stats by hand on the CLI frequently, then it is just another filesystem, and believing it makes your data safer makes your data less safe. A false sense of security is not harmless.

        If you have backups, then you restore the corrupted file. It is simple. And even if you were a filesystem nerd, faster than fucking with

        • by BAReFO0t ( 6240524 ) on Monday November 16, 2020 @01:14AM (#60729110)

          You know, I used to believe that "RAID is not a backup solution".
          And it is not wrong. It is just that, by that standard, no other backup solution is a backup solution either.
          Correctly used, RAID can be part of a backup solution!

          And ZFS is by far the best one here. Because, let's be honest, triplicating and checksumming everything makes far more sense than the other weird RAID schemes, and the fact that scrubbing is not part of every file system is messed up. And triplicating is what you should do as part of a backup solution too!

          It's just that of course you need to have the other properties of a backup solution too! Namely, versioning, and off-site storage. Snapshots, used correctly, provide versioning. And off-site can be done by having the ZFS-using system be separate from your normal system. Any other safety can be provided by 1. adding more copies, and 2. using more different implementations, as monocultures are risky.

          Looking at the emotional personal attack you started your comment with, you seem never to have thought about that one, and so just knee-jerked an often-parroted meme of, frankly, dangerous half-knowledge. Hey, I did that very thing too, not more than a few months ago... So I hope you don't feel too offended to accept what I said too...

          • Correctly used RAID can be part of a backup solution!

            No. Correctly used RAID is a resiliency/uptime solution. It's not part of the backup solution. It can be used to minimise the frequency of accessing backups, but it isn't a backup.

          • by dfghjk ( 711126 )

            Correctly used RAID can be part of a backup solution!

            Especially when you get to define what "backup solution" means. RAID is online storage, by definition it is an availability solution.

            And ZFS is by far the best one here.

            Especially when you get to define what "RAID" means. RAID is a collection of block-based redundancy schemes that operate underneath a filesystem. ZFS is a filesystem.

            Because, let's be honest, triplicating and checksumming everything makes far more sense...

            That's not honest, just naive. RAID relies on block devices implementing ECC codes. Checksumming on top of that only makes potential sense, not even "far more sense", when you suspect those ECC mechanisms are

        • I've never used ZFS, but here is my predicament:

          Had some files on a working disk. Backed that up to an ext RAID array, then backed that up to an external device for a 3rd backup.

          Problem is that I keep sending backups to both locations, yet the data was corrupted. Later reading from either source contained the corruption.

          Another time the backup was corrupt, but the original file was legit.

          How is this simple to solve?

          • Problem is that I keep sending backups to both locations, yet the data was corrupted. Later reading from either source contained the corruption.

            If the source started out as being corrupt, then a backup won't ever help. If the source got corrupted after the backups started, then the backup system needs to version the changes, allowing you to roll back to a previously good version. Versioned backups increase storage requirements, so they usually have rolling time windows (e.g. an hourly backup, a daily backup, a weekly backup, etc). You'd want some kind of scrubbing system to detect corruption, so you know when you might need to restore from backup.

            Another time the backup was corrupt, but the original file legit.

            Chec
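One low-tech way to get the "scrubbing" part described above without ZFS is to keep a checksum manifest alongside the data and re-verify it periodically. A minimal sketch, with hypothetical paths:

```sh
# Build (or refresh) a checksum manifest for everything under /data.
cd /data && find . -type f -print0 | xargs -0 sha256sum > /var/backups/data.sha256

# Later, e.g. weekly from cron: re-read every file and compare to the manifest.
# Any file reported as FAILED has silently changed and should be restored
# from a known-good, versioned backup.
cd /data && sha256sum --quiet -c /var/backups/data.sha256
```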

    • Are you seriously so clueless that you say "command line programs" like it's a bad thing?

      We call people like you militant Luddites around here. I do not believe name-calling improves anything, and I just hope you get over your triggers and can start filling that hole in your education, for a better life through more power.

    • Lord... restore from floppy backups.

      I couldn't reliably get a floppy to carry homework back to school the next day. When I needed that was around the same time we got our first DSL connection, and that became the reason for me to set up my first web server. It was something called Savant that ran on Windows. Don't worry, I got into Linux soon after.

      • by shanen ( 462549 )

        I feel like I need to clarify that I didn't mean to suggest that I want to restore from floppies. I just want to recover the old data, but I see that as related to recovering old data files. Many of the ancient files have quite possibly migrated along with me over the many years... (Actually I rather doubt that many of the old floppies could still be read, though I did manage to pull a bunch of data off of some old floppies a few years ago. It was a "target of opportunity" when an old employer let me have o

  • The ultimate backup system is of course more than one backup, preferably using different methods.

    Our computers are currently all Macs, so we have two separate time machine disks in our house. On top of that, each computer also uses Arq to do encrypted backups onto Backblaze's servers.

    The reason we have two different time machine disks at home is because one of them is called "offsite backup" but hasn't actually made it offsite yet.

    FWIW I used to back up every file on a machine, but now I exclude most of the system stuff (although

  • by beheaderaswp ( 549877 ) * on Sunday November 15, 2020 @11:57PM (#60728942)

    I'm not going to name the system I use. Never want to endorse anything.

    For my company I've got 60TB of backup storage onsite. We image everything for speedy recovery of workstations.

    Servers are mostly on Hyper-V. We can do an instant recovery of most assets. If everything went down it might take a few hours to recover. Straggler servers would be WSUS and some other non-critical stuff (which run on hardware).

    We offsite the stuff we cannot afford to lose, mostly because moving 50TB of backup data isn't really possible in our rural circumstances.

    There is no "ultimate". There is only "degrees of survivable". We have priorities:

    1. Engineering and production file services (might take 15 minutes to bring back up in a diminished performance capacity)

    2. Everything else.

    Keep the cash register running. Engineering and production.

    Other companies have other priorities... there's no "best".

    • This question relates to personal machines.

      Obviously, for business systems an administrator will consider numerous factors and there is no "ultimate" solution.

    • by Bert64 ( 520050 )

      That "instant recovery" assumes that the hyper-v infrastructure is intact, if you lose that how easy is it to restore to bare metal?
      I've seen lots of backup systems which require lots of support infrastructure to be running, which is great when you want to restore one or two specific things but falls quite badly when what you need to restore is part of the support infrastructure that the backup software requires.

      You want your backups in as simple, non proprietary and easily recoverable form as possible so y

      • If our Hyper-V infrastructure went down, I'd assume three hours for fresh installs.

        We do image the Hyper-V server's boot drives. But in the case of a compromise I'd airgap the network and reinstall.

  • by fgouget ( 925644 ) on Monday November 16, 2020 @12:00AM (#60728956)
    Borg backup [readthedocs.io]: networked through ssh, incremental backups, deduplication, local encryption and compression. So daily backups are fast, and each backup is independent, so you can prune them to keep only daily backups for a few weeks, then weekly backups, etc. And finally you can save them offsite: just rsync the backup repository to another computer, or rclone it [rclone.org] to the cloud (remember, it's already encrypted so it's fine).

    The only things that are missing are multithreaded compression and support for multiple simultaneous backup processes.
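A minimal borg workflow along those lines. The host name, repository path, and retention numbers are hypothetical examples, not fgouget's actual setup:

```sh
# One-time: create an encrypted, deduplicating repository on a backup server over ssh.
borg init --encryption=repokey ssh://backup@server/srv/borg/laptop

# Daily: make an incremental, compressed archive of /home.
borg create --compression zstd --stats \
    ssh://backup@server/srv/borg/laptop::'{hostname}-{now}' /home

# Prune old archives on a rolling window: keep 14 daily, 8 weekly, 12 monthly.
borg prune --keep-daily 14 --keep-weekly 8 --keep-monthly 12 \
    ssh://backup@server/srv/borg/laptop

# On the backup server: push the (already encrypted) repository off-site,
# either to another machine with rsync or to a configured rclone remote.
rsync -a /srv/borg/laptop/ othersite:/srv/borg/laptop/
rclone sync /srv/borg/laptop remote:laptop-borg
```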

  • The key is redundancy, and that variety tends to lead to (and this may be your hesitation) paralysis by analysis. I use Acronis TrueImage, which backs up disk images to my NAS. It also backs up files and folders, albeit more frequently. These backups are incremental and go back about 2-3 months depending on their size. I copy the backups from the NAS to an external drive weekly, and I physically store the external drive in my fire safe.

    Now as far as irreplaceable data (mostly pics and videos of my kids, there

  • by Nkwe ( 604125 ) on Monday November 16, 2020 @12:14AM (#60728988)
    It's only a backup if it's stored offline and has been used in a restore test. If disaster recovery is a goal, it also needs to be offsite. It's okay to have some online (local or cloud) backups as long as you *also* have an offline copy. If your backup is online (meaning that your computer or someone with access to your credentials can erase it), then you are just a user error, ransomware attack, or software bug away from losing your data.
  • ... the sheer length of time you have to wait to restore a couple of terabytes of data.

    But it's still the best backup system money can buy, IMO.

  • by dfn5 ( 524972 ) on Monday November 16, 2020 @12:17AM (#60728996) Journal
    Everyone thinks backing up to the cloud is great. You don't need to pay for extra equipment. You don't need to pay for people who know how the backups work. And everything seems great right up until you need to actually do a restore. I saw one person who lost a whole system and needed to do a complete restore, and the estimated time to completion was on the order of a week. I'm like: this is what you asked for when you got rid of on-premises backups.

    Many people tell me that tape is dead. I personally feel that tape has good archival qualities and I feel pretty good about a tape sitting on a shelf, in a safe, or offsite.

    Moral of the story is whatever you go with, actually test a restore to see how long it will take. You don't want to be surprised in a disaster.

    • Oh, you do pay! Because THEY still have to pay for that hardware.
      Sure, you can save money by sharing parts of the hardware. But then cooperate with others. Having a third party with third-party interests do it adds a whole host of problems.
      So one way or another, you are paying.
      Turns out stupidity hurts after all. It's just that one can't tell. Because one is stupid. ;)

    • If you back up into a cloud, your limit is your bandwidth.
      Depending on what order "files" are restored in, you may not even have a system that can boot after hours.

    • by Dixie_Flatline ( 5077 ) <.vincent.jan.goh. .at. .gmail.com.> on Monday November 16, 2020 @09:08AM (#60729866) Homepage

      Backblaze will courier you a hard drive (or several) with your data on it. If you want to keep the drive, you pay for it; otherwise you can ship it back to them. Obviously, you're paying for the shipping costs, but if it's time-critical, then having a drive overnighted to you is probably worth it.

      I have a time machine backup on a RAID-1 on my desk. The whole machine is backed up to backblaze as well. Important files are also backed up to a small external drive (which I used to keep at the office when I went to the office), and on iCloud. Every file that's important should be in 3 distinct places for it to be considered 'backed up', and at least one of those places has to be off-site. This is a fairly low-cost consumer solution. Backblaze plans are pretty cheap, a big storage drive doing Time Machine backups is cheap, and a small SSD for extra important files is cheap, but all together, they're still quite effective.

  • Some say cloud limits control. But, I believe "control" is more of an emotional thing. One should consider the probability that something will go wrong, not whose fault it is. If you know you are truly diligent with things like backups, then go ahead and run your own show. But, if you are a bit absent minded, then perhaps cloud is a better bet, probability-wise. Be honest with yourself.

    This applies on the corporate level also. Many orgs are slop-heads when it comes to IT prevention and planning. Cloud is pr

    • 1. Control is the very thing that makes you a person! An individual! It is literally what distinguishes you from a tool or inanimate object.
      Hell yeah it is emotional! You say that as if it was a bad thing. But hey, I'm open to alternative forms of existence. If you are happy being an ant or Borg, then I'm happy for you. (Not even being sarcastic here. Because after all, you made *that* choice. I think it's the ultimate laziness: To sell literally your existence for convenience and simplicity. But hey, maybe

  • The answer?

    Copy to a backup drive on another system.
    Copy from that to a cloud (or other offsite location).

    And use Time Machine to another disk, because backups don't protect against corruption (unless each backup is a snapshot).
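A bare-bones version of that two-hop scheme, assuming a reachable second machine and an already-configured rclone remote (all names are placeholders):

```sh
# Hop 1: copy to a backup drive on another system on the LAN.
rsync -a --delete /home/ backuphost:/backup/home/

# Hop 2: from that system, push onward to cloud or other offsite storage.
rclone sync /backup/home remote:home-backup
```

Note that a plain mirror like this propagates deletions and corruption downstream, which is exactly why the poster adds the snapshot/Time Machine layer on top.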

  • This really depends on the amount of data and what you want from your backup. In my case, JBOD and manual backups are great: I need cold storage, can tolerate slow r/w access, and want something ultra-cheap without ongoing costs (like the cloud). One USB HDD docking station, a stack of hard drives and some time. It works for my specific needs. But I doubt my needs are typical. They may even be on the fringe. Which just illustrates that the original question needs more specification before one could answer it

  • A backup solution must enforce the 3-copies rule for complete protection. Copy No. 1 is always online and onsite by definition; it is the copy accessed by the users. Copy No. 2 must be offsite, to protect against physical threats. Copy No. 3 must be offline, to protect against logical threats. Also, all backups are broken until they are proven functional by a complete restore test. For that reason, at least once a year, a full restore must be achieved to confirm everything is in order. Miss any of the 3 copies or
  • by BAReFO0t ( 6240524 ) on Monday November 16, 2020 @12:43AM (#60729044)

    1. Don't trust any third party. They do not care for your data, other than to leak it and use it against you.
    2. Triple ALL the things. Backups, disks, systems, locations (as in: buildings/cities/countries), restoration hardware, you name it. (This is because I follow the scientific principle of trust through statistical reliability. Ideally it would be more than three, but three is a healthy cut-off point, unless you've got crazy mission-critical stuff, swim in money, and want to go full six sigma. ;)
    3. Version management for ALL the things. (Like file system snapshots.) Because what good are backups if you back up creeping corruption or user errors too?
    4. Assume that everything that can fail *will* fail. Even your heart. Prepare accordingly. (NASA has this principle.) (I've got the most important stuff on special paper in special containers at locations I will not disclose. Don't even try, it's encrypted with one-time pads.)
    5. Verify as much as you back up. Backup is useless without restoration.

    I myself recommend three disks running ZFS mirroring with scrubbing and regular snapshots, and two other locations running the same setup, but with a completely independent implementation. Different system, different PSU, different CPU manufacturer, different disks, different OS, different file system, different backup software, different building construction style, different form of government, etc.
    I wrote one backup solution myself, had one written by a friend who thinks quite differently from me, and use Bacula on the third because lol.

    Yes, I have been hurt before.

    Hey, you asked for the ultimate approach! :)
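For the ZFS leg of a setup like that, the basic commands look roughly like this. Pool and dataset names, devices, and schedules are hypothetical examples, not part of the comment above:

```sh
# Three-way mirror: survives the loss of any two of the three disks.
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc
zfs create tank/home

# Periodic scrub (from cron or a systemd timer): re-reads every block,
# verifies it against its checksum, and repairs it from a good copy.
zpool scrub tank
zpool status tank            # check scrub results and per-disk error counts

# Regular snapshots provide the versioning part.
zfs snapshot tank/home@$(date +%F)
zfs list -t snapshot
```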

  • Do nothing (Score:4, Funny)

    by hcs_$reboot ( 1536101 ) on Monday November 16, 2020 @12:47AM (#60729048)
    Just ask Google for your data when the time comes.
  • The expression "Set in Stone" is surprisingly valid.

  • The ultimate backup is the one that works for you. There is no single right answer for this; it depends on your needs, what you are trying to protect against, and also how much bandwidth you have. No point in having cloud backup if you are making many GB/TB of changes a day but only have a sub-100-megabit pipe. For me, I have a 2-tier backup system: my NAS has a copy of everything, and then within that I sync certain critical folders to cloud storage.
  • I have an external HDD that I back up all my disks to on a regular basis. I have too much data (and too slow a network connection, and not enough money) to do cloud backups (encrypted or otherwise), and I don't have anywhere to store an offsite external disk unless I spent a chunk of money paying someone to store it for me (e.g. a bank deposit box)

    I have considered backing specific things onto a set of DVDs but the things that would be the hardest to replace are also things that change often enough that a set

    • by Bert64 ( 520050 )

      Emails inherently have a backup in the form of client and server, assuming you use a traditional desktop email client and your mail server is hosted elsewhere.

      For source code you can take the Linus Torvalds approach - release it openly and let everyone else back it up for you; keep copies on GitLab/GitHub/SourceForge etc. too.

      Things like notes tend to be small, so synchronising them across several devices isn't too hard even with a slow connection.

      I also keep a NAS in an outbuilding separate from the house (

  • The particular products can be swapped out, but FWIW this is what I do for my personal files....

    First of all....I'm a Mac user (dodges brick thrown at head).

    In my home network, I have a TrueNAS server, which has one primary 4-disk "data" pool that uses raidz1 and that is shared on the local network using SMB (yes, I know that isn't a "backup"). The TrueNAS server also has several single-disk pools residing on external HDDs that I use for local backups. Most of my long-term data resides on this "data" po
  • by bb_matt ( 5705262 ) on Monday November 16, 2020 @01:06AM (#60729100)

    For personal data backups, before even considering what backup solution you need, how well organised is your data?

    Is your file system a disorganised mess? - I'd be willing to bet most people are in this situation.
    Clearly you need a local backup first and foremost - a mirror of everything you have, if you are that disorganised.
    Then you'll need remote backup - and because your data is a mess, you have to backup damn well everything.
    Something like back blaze?

    You are then in a position of at least having your data available if a catastrophic failure happens - but you are backing up huge amounts of redundancy, which costs and wastes time.

    On the flip side, if you have decent organisation skills, your backups are going to be much more refined.
    You never have duplicate data. If you store photographs, you have already removed duplicates or been brutal and just kept the best.
    "Will I ever need that photo of my shoes in the rain? - probably not. Delete."

    Do you really need to remote-backup your collection of movies or music?
    Is your curation so good that you can identify what you would really miss, and remote-backup only those?

    It all starts with data organisation.

    From that point, local backup for *everything* - may as well, it's cheap enough.
    Remote backup only for the data you would need if your house burned down, or if you have your computer stolen.

    The messier your data, the harder and more expensive this exercise will be - it really is that simple.

    The ultimate system, is then:
    Organisation of data, complete local backup, limited remote backup.

    But the one most people will need to use is:
    Complete local backup, complete remote backup.

    And the one that most people use currently is:
    No backup, wing and a prayer.

  • If it's life critical, more than one. Personally, life critical stuff gets put on a home Synology, backed up to Backblaze B2 (from Synology) and mirrored (with Resilio) to my office and then separately encrypted and copied up to Dropbox (with Hyper Backup). So at any given time, I've got like 3-4 copies of data. Computer catches fire/stolen. I'm good. House burns down. I'm good. Earthquake takes out PNW, still good. Less critical stuff, I just mirror off site. Sure, wou
  • If you have multiple copies of your data available at a moment, you may have high-availability... but you probably don't have disaster recovery.

    The "ultimate backup" is a WORM system in triplicate (or more) on three different continents. Assuming you don't have a true hardware WORM system (note: paper and microfiche count as WORM), you need to be damn sure that you have incremental off-line backups. If someone can press a button somewhere and delete all your backups, you do not have backups. That's probably

    • That's why tape is still heavily used. Ransomware cannot encrypt what it cannot access, and WORM tapes are a thing.

  • by imidan ( 559239 ) on Monday November 16, 2020 @01:27AM (#60729140)

    All my work, any file that I create with any value to me, is committed to an offsite svn repo. A cron job on that VM makes a nightly tarball of /var/svn and copies it to a third site. I have all the svn repos checked out on my local linux box and svn update them regularly. If I lose a local file, I still have the remote repo. If I lose a local file and the remote repo, I still have the repo backups. If I lose all of that, I still have a checked out, recently updated version on another machine.

    In addition, I have a batch file that copies the meaningful parts of my Windows hard drive to a remote disk using robocopy. Robocopy, unlike most of the other built-in copy utilities in Windows, is robust to network interruption and latency, and will continue to retry a file copy even if there's a bit of a connectivity hiccup.

    I've been meaning to adjust my Windows backups to be pull-only from a separate machine to make the backups safe from ransomware, but I haven't gotten there, yet. I don't worry about my data surviving some kind of seismic apocalypse that destroys the entire west coast. If that comes about, losing my python script for scraping the covers of Rolling Stone and graphing the most commonly appearing people is going to be the least of my concerns.
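A sketch of the nightly-tarball half of this setup. The script name, paths, cron schedule, and destination host are hypothetical, not imidan's actual scripts:

```sh
#!/bin/sh
# Hypothetical /usr/local/bin/svn-backup.sh, run nightly on the svn VM from cron,
# e.g. with a crontab line like:  30 2 * * * /usr/local/bin/svn-backup.sh
STAMP=$(date +%Y-%m-%d)
tar -czf "/var/tmp/svn-$STAMP.tar.gz" /var/svn
scp "/var/tmp/svn-$STAMP.tar.gz" thirdsite:/srv/svn-backups/
rm -f "/var/tmp/svn-$STAMP.tar.gz"
```

On the Windows side, robocopy's restartable mode (/Z) and its retry/wait options (/R and /W) are presumably what provide the resilience to connectivity hiccups described above.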

  • It used to be hard, but now you can pretty easily run Nextcloud as your data store locally, and sync the data to a blob in the cloud for off-site (if you don't have a friend or family member you can use). It might not be perfect, but it sure beats tape plus versioning.
  • by shess ( 31691 ) on Monday November 16, 2020 @02:03AM (#60729190) Homepage

    I was elbow deep in a computer's guts and pulled out a molex connector and the system powered down. Oops, I think I found the most dangerous item likely to be near my computer! And, unfortunately, my elaborate RAID setup was NOT going to protect me from that item. So I acquired bits to build out an rsnapshot server, so that I could have a backup on an entirely distinct system. I originally intended it to have no incoming connectivity, including ssh, but I decided that was too much.

    Then I got 2 external drives to act as mirrors. The external drives do not stay plugged in, because when they're plugged in they are susceptible to accidental destruction (or ransomware). On the first of each month, I plug one in and a script runs to update the mirror of the rsnapshot volume. Then I take it offsite and swap for the other external mirror, which gets plugged in to update. Then I unplug it. So this means I have the primary data, the rsnapshot mirror in case the primary gets a power surge, an external copy in case all my computers get a power surge, and an offsite copy in case my house burns down. But the rsnapshot mirror is most useful. This all happens once a month because that's an overhead I can commit to successfully.

    A few years back I added an offsite mirror because it didn't seem like a loss. Also I added comparison and scrubbing scripts to make sure things don't bitrot in place.

    An important thing about my scheme is that each copy has a reason for existence which is easy to nail down, and if something happens I know exactly where to find the next mirror in the system. I don't have to inspect them to figure out if they are current. Each copy is complete, so I don't have to worry about having duplicates of half my data and two freshly-scrubbed drives which contained the other half (oops!).

    I couldn't care less about whether I can quickly re-create a bootable system off of this data, because if the machine had a hard hardware error, I'd probably have to replace it with new equipment anyhow, and for that a fresh install is probably safest (I don't want to spend three weeks debugging an OS install which is running on different hardware than it is installed on). This aspect is VERY different from corporate backup, where you just pull a clone system out of stock and reimage it.

    One bit which isn't obvious is that you should distinguish between data you want to backup because it is irreplaceable and data which is just inconvenient to restore. Your 20TB of ripped media probably doesn't need to be backed up, but probably does need to be on a reliable RAID device. If you put that 20TB of media in with your 500GB of photos and tax info and the like, you're making things more complex, and more complex means more risk. If you can get things under 500GB, it's pretty reasonable to just commit to having full redundant copies on easily-available drive units.

    • by shess ( 31691 )

      Hmm, and I'll add: I use rsnapshot in spite of there being whizzier systems available because with rsnapshot, I can drop to the command line and inspect that my backups are good. This is because rsnapshot is just a structure over top of rsync. So I don't have to rely on a tool to provide access to the data, or worry about a vendor bug scribbling all over everything. Worst case, all I need is one of my external drives and a download of a live ISO and I'm ready to rebuild any of my Linux systems. For Wind
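Because rsnapshot's snapshots are just directory trees of hard links built with rsync, "inspecting that the backups are good" is plain filesystem navigation. A rough illustration, with hypothetical paths (rsnapshot's snapshot root and backup-point names vary per config):

```sh
# Each interval is a complete-looking tree; unchanged files are hard links,
# so daily.0, daily.1, ... cost very little extra space.
ls /srv/rsnapshot/
#   daily.0/  daily.1/  daily.2/  weekly.0/  monthly.0/

# Restoring is just copying out of whichever snapshot you want:
cp -a /srv/rsnapshot/daily.1/localhost/home/me/thesis.tex ~/thesis.tex

# Spot-checking a backup against the live system is ordinary diff:
diff -r /srv/rsnapshot/daily.0/localhost/etc /etc | head
```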

  • Let the internet community back up your data at no expense to you. Data can be restored swiftly as long as there are enough seeders around.

    • I forgot to mention P2P options for backup, though I've speculated about offering some of my excess disk space to other people for their backups in exchange for their hosting my backups. Preferably with some kind of scattered distribution, so that none of them can actually read the files, but I can restore as long as I can contact enough of them.

  • by LordHighExecutioner ( 4245243 ) on Monday November 16, 2020 @02:29AM (#60729226)
    The Rosetta Stone is about 2,200 years old and can still be read without problems. Just dump your data on a stone slab using three different encoding methods, and some guy in the future will have no problem decoding it. As an added bonus, your data will be put on display in a big museum!
  • That said, at the very least it should be offline and stored off-site, i.e. cloud, sync, RAID, NAS, etc. all do not even qualify as backup.

    Best all around except for cost is professional tape. Best for a home-user is probably USB disks. You can store a backup disk in a locker you have at work or in your gym or the like. Remember to use at the very least 3 independent media sets. Encrypt if there is a risk backup media may get stolen.

  • Slashdot reader BAReFO0t recommends "three disks running ZFS mirroring with scrubbing and regular snapshots, and two other locations running the same setup, but with a completely independent implementation. Different system, different PSU, different CPU manufacturer, different disks, different OS, different file system, different backup software, different building construction style, different form of government, etc."

    You can't run different file systems and still use ZFS.

  • Easy: encode all your data in porn videos using steganography and upload them to Pornhub! You're guaranteed they'll be safely accessible for the next hundred years...

  • Up until 2017, Crashplan had a service where in addition to local and cloud backups, you could do a peer-to-peer backup via the internet. So you could set up a NAS, do the initial backup, bring it to a friend's house and do incrementals to that NAS.

    This struck me as ideal: offsite backups that don't depend on regular exchange of physical media. I haven't found a replacement for it.

  • It would actually make a lot of sense to protect against crypto-ransomware. Have two sets of backup disks. Before a backup, turn on the power to one set of disks using whatever solution you want; it can be a timer, or a power outlet connected to the network. Wait for the disks to start. Do an incremental backup. Turn off the power to the disks. For the next backup, the other set of disks should be used. If shit hits the fan, either from crypto or just a malfunction, you are only one backup off. Or you can use the secon
  • How about just copying files manually to a USB hard drive and flash drive on a monthly reminder schedule? You can still have an automatic backup as well, but doing it manually and checking the result gives more confidence.
  • lots of scripts (Score:5, Interesting)

    by Orgasmatron ( 8103 ) on Monday November 16, 2020 @04:53AM (#60729462)

    rsync has a feature where it compares tree A to tree B, and if a file matches, it makes a hard link in tree C to tree B, and if they don't match it makes a copy from tree A in tree C.

    So, I've got a shell script that cron runs every 5 minutes. It goes through my list and if that tree's last copy is more than INTERVAL seconds old, it makes a new copy. The script that does the actual rsync exists in several versions - one for local LVM filesystems (uses snapshots), one for SMB shares (girlfriend's laptop), etc. Some of them use mysqldump if they have related databases.

    Another script prunes them in a slightly complicated way, keeping one annual forever, one monthly for 10 years, one weekly for 1 year, one daily for 2 months, and everything for 2 weeks.

    Another set of scripts scans those trees, hashing all of the files, and then stores the files by hash. Purely ephemeral and frequently changing files are excluded - I don't need to archive every snapshot of every file from /var/log/

    Files with new hashes are synced to another computer off-site and written to rewritable DVD. Once enough files are queued up, it also writes two copies to M-disc BD-R, which get stored in boxes off-site, and then the DVD-RWs get recycled.

    I also run LTO tapes. LTFS sucks, so I just use tar. (It is possible that LTFS may suck less with a paid enterprise Linux.) Lots of things get appended to the current tape, which gets stored off-site when full.

    I'm pretty sure that I could more-or-less recover from a proper disaster with this setup, but it would be painful. I've done tests like "If the files in [random folder] got corrupted two months ago, could I get them back? Could I put it back together entirely from optical media?" While the tests have been successful so far, I really need to write some scripts to assist with that second part.

    I may add a tape rotation to my setup, with the most current copies dumped weekly and the last few weeks stored in my office.
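The rsync feature described at the top of this comment is --link-dest. A stripped-down sketch of that kind of snapshot script (paths, naming, and the assumption of a separate pruning pass are hypothetical, not Orgasmatron's actual scripts):

```sh
#!/bin/sh
# Make a new timestamped copy of SRC under DEST, hard-linking every file that
# is unchanged relative to the previous copy (rsync's --link-dest feature).
SRC=/home/
DEST=/backup/home
NEW="$DEST/$(date +%Y-%m-%dT%H%M)"
PREV=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1)

rsync -a --delete ${PREV:+--link-dest="$PREV"} "$SRC" "$NEW"
# A separate pruning script (as described above) then thins out old copies.
```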

  • by getuid() ( 1305889 ) on Monday November 16, 2020 @04:55AM (#60729466)

    Pull backups within your local setup onto a dedicated machine; the important part is pull, not push. Keep that machine as blank as possible and don't enable ssh. You will need to grab a chair and sit in front of it for administration, but it's pretty good protection against encryption malware. Of course, this comes at the price of your backup machine needing to know all the passwords it needs to access your company data, so it had better be in a safe room.

    Use that machine to push data off-site, e.g. to a cloud service, or to a 2nd site.

    On the local machine use a deduplicating storage method -- take ZFS if you have to, but you can go for borg if the data is below 1 TB, or restic otherwise.
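A minimal "pull, not push" arrangement along those lines, run from cron on the backup machine. Host names and paths are placeholders; the backup box holds the ssh keys, the clients hold none:

```sh
#!/bin/sh
# Runs ON the backup machine. It reaches out and pulls data from each client;
# the clients have no credentials for the backup box, so malware on a client
# cannot reach in and encrypt or delete the backups.
for host in workstation1 workstation2 fileserver; do
    mkdir -p "/srv/pull-backups/$host"
    rsync -a --delete "backup@$host:/home/" "/srv/pull-backups/$host/home/"
done

# The backup machine can then deduplicate and push off-site, e.g.:
# borg create /srv/borg/pool::'{now}' /srv/pull-backups
```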

  • I have a Synology DS918+ with 55TB of disk in RAID that I back up to. It is configured to back up to a connected USB drive and also to the cloud, giving me a cascade of backups with multiple layers.

    Having the Synology locally shortens my local backup and restore times, so I appreciate having it close. And, in case we need to evacuate, we can unplug the Synology and carry it out with us.

    I thought the Synology was enough on its own (no backup of the backups), until of course I had my first Synology crash and nearl

  • I use a system from https://cloudbox.ull.at/ [cloudbox.ull.at]. It uses a RAID 1 system with ownCloud and an officebox in another location. On the main system there are also snapshots. All data is encrypted. Protection against ransomware trojans. From my perspective it's the best system. No need to hand your data to some cloud provider; all the data is in your hands.
  • by dark.nebulae ( 3950923 ) on Monday November 16, 2020 @06:52AM (#60729594)

    A backup is not a backup unless you have tested the restore process.

    Many system and network engineers, home enthusiasts, etc will talk about their great backup process with layers and tiers, etc.

    But they forget the first rule of backups - testing the restore process.

    If you don't test your restore process, you have no idea if what you are pushing to a local drive or a NAS or to tape or to the cloud will actually work or not.

    And I don't just mean can you restore a file and see it again; I'm talking full-on disaster recovery testing.

    If you have a system backup, that sounds great, but have you tried restoring the system to see that it will boot and run successfully after the restore? If not, you don't really know if the system can be restored at all.

    If you have a backup of a critical application, have you tried restoring and launching the app on a clean infrastructure? What if you're missing a database or some configuration files or environmental properties, registry keys, license files, etc? If you haven't tried, you don't know if you can actually restore the application or not.

    Even a simple file restore; until you've tried it, you don't know if the backup software you're using is capturing it all correctly, storing it correctly and restoring it back to the original form.

    Without testing, you really won't know if you have a backup at all...

  • by ArchieBunker ( 132337 ) on Monday November 16, 2020 @08:38AM (#60729764)

    The best bang for your buck right now is tape drives, LTO-4 to be specific. Both drives and new tapes are inexpensive.
