I recently put together a backup workflow for myself. I heavily use restic for desktop backups and for full system backups of my local server. It works amazingly well: I always have a versioned backup without a lot of redundant data, and it is fast, encrypted and compressed.
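
For the curious, the core of that workflow boils down to something like the sketch below (the repository path, password file and exclude pattern are placeholders, not my real setup):

#!/usr/bin/env python
# Sketch only: repository path, password file and exclude pattern are
# placeholders for whatever your setup uses.
import os
import subprocess

REPO = '/srv/restic-repo'
ENV = {**os.environ, 'RESTIC_PASSWORD_FILE': '/root/.restic-password'}

def restic(*args):
    subprocess.run(('restic', '-r', REPO) + args, env=ENV, check=True)

# Versioned, deduplicated, encrypted, and (with repository format v2)
# compressed backup of /home.
restic('backup', '/home', '--exclude', '/home/*/.cache')

# Thin out old snapshots but keep a useful history, then drop unused data.
restic('forget', '--keep-daily', '7', '--keep-weekly', '5',
       '--keep-monthly', '12', '--prune')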

But I wondered, how do you guys do your backups? What software do you use? How often do you do them and what workflow do you use for it?

  • @heythatsprettygood@feddit.uk
    2 days ago

    I use Pika Backup (a GUI that uses Borg Backup on the backend) to back up my desktop to my home server daily; then overnight that server does a daily Borg backup to a Hetzner Storage Box. It’s easy to set and forget (other than verifying the backups every once in a while), and having that off-site backup gives me peace of mind.
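
    Reduced to a sketch, the overnight Borg job amounts to roughly this (the Storage Box user, host and paths here are placeholders, not my actual box):

    #!/usr/bin/env python
    # Sketch only: the Storage Box user, host and paths are placeholders.
    import subprocess

    REPO = 'ssh://u123456@u123456.your-storagebox.de:23/./backups/server'

    def borg(*args):
        subprocess.run(('borg',) + args, check=True)

    # One deduplicated, compressed archive per night, named host-timestamp.
    borg('create', '--stats', '--compression', 'zstd',
         REPO + '::{hostname}-{now}', '/home', '/etc')

    # Keep a rolling window of archives, then free the space they used.
    borg('prune', '--keep-daily=7', '--keep-weekly=4',
         '--keep-monthly=6', REPO)
    borg('compact', REPO)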

  • @bitcrafter@programming.dev
    3 days ago

    I created a script that I dropped into /etc/cron.hourly which does the following:

    1. Use rsync to mirror my root partition to a btrfs partition on another hard drive (which only updates modified files).
    2. Use btrfs subvolume snapshot to create a snapshot of that mirror (which only uses additional storage for modified files).
    3. Move “old” snapshots into a trash directory so I can delete them later if I want to save space.

    It is as follows:

    #!/usr/bin/env python
    from datetime import datetime, timedelta
    import os
    import pathlib
    import shutil
    import subprocess
    import sys
    
    import portalocker
    
    DATETIME_FORMAT = '%Y-%m-%d-%H%M'
    BACKUP_DIRECTORY = pathlib.Path('/backups/internal')
    MIRROR_DIRECTORY = BACKUP_DIRECTORY / 'mirror'
    SNAPSHOT_DIRECTORY = BACKUP_DIRECTORY / 'snapshots'
    TRASH_DIRECTORY = BACKUP_DIRECTORY / 'trash'
    
    EXCLUDED = [
        '/backups',
        '/dev',
        '/media',
        '/lost+found',
        '/mnt',
        '/nix',
        '/proc',
        '/run',
        '/sys',
        '/tmp',
        '/var',
    
        '/home/*/.cache',
        '/home/*/.local/share/flatpak',
        '/home/*/.local/share/Trash',
        '/home/*/.steam',
        '/home/*/Downloads',
        '/home/*/Trash',
    ]
    
    OPTIONS = [
        '-avAXH',
        '--delete',
        '--delete-excluded',
        '--numeric-ids',
        '--relative',
        '--progress',
    ]
    
    def execute(command, *options):
        print('>', command, *options)
        subprocess.run((command,) + options).check_returncode()
    
    # The backup filesystem normally stays mounted read-only; remount it
    # read-write for the duration of the run (undone in the finally block).
    execute(
        '/usr/bin/mount',
        '-o', 'rw,remount',
        BACKUP_DIRECTORY,
    )
    
    try:
        # fail_when_locked makes the Lock raise AlreadyLocked right away
        # if a previous run is still holding the lock file.
        with portalocker.Lock(BACKUP_DIRECTORY / 'lock', fail_when_locked=True):
            # Step 1: mirror the root filesystem onto the btrfs partition.
            execute(
                '/usr/bin/rsync',
                '/',
                MIRROR_DIRECTORY,
                *(
                    OPTIONS
                    +
                    [f'--exclude={excluded_path}' for excluded_path in EXCLUDED]
                )
            )
    
            # Step 2: freeze the mirror as a read-only, timestamped snapshot.
            execute(
                '/usr/bin/btrfs',
                'subvolume',
                'snapshot',
                '-r',
                MIRROR_DIRECTORY,
                SNAPSHOT_DIRECTORY / datetime.now().strftime(DATETIME_FORMAT),
            )
    
            # Parse the snapshot names back into datetimes, oldest first.
            snapshot_datetimes = sorted(
                (
                    datetime.strptime(filename, DATETIME_FORMAT)
                    for filename in os.listdir(SNAPSHOT_DIRECTORY)
                ),
            )
    
            # Keep the last 24 hours of snapshot_datetimes
            one_day_ago = datetime.now() - timedelta(days=1)
            while snapshot_datetimes and snapshot_datetimes[-1] >= one_day_ago:
                snapshot_datetimes.pop()
    
            # Helper that keeps the newest snapshot in the current group (as
            # computed by get_metric, e.g. day/week/month) and trashes the rest
            def prune_all_with(get_metric):
                this = get_metric(snapshot_datetimes[-1])
                snapshot_datetimes.pop()
                while snapshot_datetimes and get_metric(snapshot_datetimes[-1]) == this:
                    snapshot = SNAPSHOT_DIRECTORY / snapshot_datetimes[-1].strftime(DATETIME_FORMAT)
                    snapshot_datetimes.pop()
                    # Snapshots are created read-only; flip the flag so the
                    # subvolume can be moved into the trash directory.
                    execute('/usr/bin/btrfs', 'property', 'set', '-ts', snapshot, 'ro', 'false')
                    shutil.move(snapshot, TRASH_DIRECTORY)
    
            # Keep daily snapshot_datetimes for the last month
            last_daily_to_keep = datetime.now().date() - timedelta(days=30)
            while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_daily_to_keep:
                prune_all_with(lambda x: x.date())
    
            # Keep weekly snapshot_datetimes for the last three months
            last_weekly_to_keep = datetime.now().date() - timedelta(days=90)
            while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_weekly_to_keep:
                prune_all_with(lambda x: x.date().isocalendar().week)
    
            # Keep monthly snapshot_datetimes forever
            while snapshot_datetimes:
                prune_all_with(lambda x: x.date().month)
    except portalocker.AlreadyLocked:
        sys.exit('Backup already in progress.')
    finally:
        execute(
            '/usr/bin/mount',
            '-o', 'ro,remount',
            BACKUP_DIRECTORY,
        )
    
  • @zeca@lemmy.eco.br
    4 days ago

    I do backups of my home folder with Vorta, which uses borg on the backend. I never tried restic, but borg is the first incremental backup utility I tried that doesn’t increase the backup size when I move or rename a file. I was using backintime before to back up 500 GB onto a 750 GB drive, and if I moved 300 GB to a different folder, it would try to copy those 300 GB onto the backup drive again and fail for lack of storage, while borg handles it beautifully.

    As an offsite solution, I use Syncthing to mirror my files to a PC at my father’s house that is turned on just once in a while, to save power and disk longevity.
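
    You can see that move/rename behaviour for yourself with something like this sketch (the repo and data paths are made up):

    #!/usr/bin/env python
    # Sketch only: the repo and data paths are made up.
    import os
    import subprocess

    REPO = '/backup/borg'

    def archive(name):
        # --stats prints the new archive's deduplicated size; after a rename
        # it stays tiny because chunks are matched by content, not by path.
        subprocess.run(('borg', 'create', '--stats',
                        f'{REPO}::{name}', 'data'), check=True)

    archive('before-move')
    os.rename('data/big-folder', 'data/renamed-folder')  # simulate the move
    archive('after-move')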

  • Strit
    4 days ago

    My systems are all on btrfs, so I make use of subvolumes and use btrbk to back up snapshots to other locations.
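
    For anyone unfamiliar with btrbk, the core of what it automates looks roughly like this sketch (paths are made up, and a real run needs more care):

    #!/usr/bin/env python
    # Sketch only: paths are made up; btrbk adds retention, locking, etc.
    import subprocess
    from datetime import datetime

    stamp = datetime.now().strftime('%Y%m%d-%H%M')
    snapshot = f'/mnt/pool/.snapshots/home.{stamp}'

    # Snapshots must be read-only (-r) to serve as a send source.
    subprocess.run(('btrfs', 'subvolume', 'snapshot', '-r',
                    '/mnt/pool/home', snapshot), check=True)

    # Replicate to another btrfs filesystem; btrbk would pass the previous
    # snapshot via `btrfs send -p` so that only the delta travels.
    send = subprocess.Popen(('btrfs', 'send', snapshot),
                            stdout=subprocess.PIPE)
    subprocess.run(('btrfs', 'receive', '/mnt/backup/.snapshots'),
                   stdin=send.stdout, check=True)
    send.wait()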

  • @tasankovasara@sopuli.xyz
    4 days ago
    • daily important stuff (job stuff, the Documents folder, Renoise mods) is kept synced between laptop, desktop and home server via Syncthing. A vimwiki additionally syncs with the phone. Sync happens only on the home network.

    • the rest of the laptop and desktop I’ll roll into a tar backup every now and then with a quick bash alias. The tar files also get synced onto the home server’s big file system (a 2 TB SSD) via Syncthing. The home server backs itself up on its own once a week.

    • the clever thing is that the 2 TB SSD replaced an old 2 TB spinning disk. I kept the old disk and set up a systemd timer that keeps it spun down, but starts and mounts it once a week, rsyncs the changes from the SSD over, then unmounts it so that it sleeps again for a week (roughly the sketch below). That old drive is likely to serve for years still with this frugal use.
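
    As a sketch (device label, mount point and source path are made up):

    #!/usr/bin/env python
    # Sketch only: device label, mount point and source path are made up;
    # a systemd timer fires this once a week.
    import subprocess

    DISK = '/dev/disk/by-label/old2tb'
    MOUNTPOINT = '/mnt/old2tb'

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run('mount', DISK, MOUNTPOINT)
    try:
        # Mirror the SSD's contents onto the old spinning disk.
        run('rsync', '-aAX', '--delete', '/srv/bigfs/', MOUNTPOINT + '/')
    finally:
        run('umount', MOUNTPOINT)
        # Send the drive back to sleep until next week.
        run('hdparm', '-Y', DISK)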

  • Hamburger
    3 days ago
    • Offline backup on 2 separate HDDs/SSDs
    • Backup on an HDD inside my desktop PC
    • Off-site backup with restic to a Hetzner Storage Box
  • @ColdWater@lemmy.ca
    3 days ago

    I use an external drive for my important data, and if my system gets borked (which has never happened to me) I just reinstall the OS.

    • @floquant@lemmy.dbzer0.com
      3 days ago

      External drives are more prone to damage and failure, both because they’re more likely to be dropped, bumped, spilled on, etc., and because of generally cheaper construction compared to internal drives. For SSDs the difference might be negligible, but I suggest you at least make a copy on another “cold” external drive if the data is actually important.

  • Radioactive Butthole
    4 days ago

    I have a server with a RAID 1 array that makes daily, weekly, and monthly read-only btrfs snapshots. The whole thing (sans snapshots) is synced with Syncthing to two Raspberry Pis in two different geographic locations.

    I know neither RAID nor Syncthing is a “real” backup solution, but with so many copies of the files living in so many locations (in addition to my phone, laptop, etc.) I’m reasonably confident it’s a decent solution.

  • @tankplanker@lemmy.world
    3 days ago

    Borg daily to the local drive, then copied across to a USB drive, then weekly to cloud storage. The script is triggered by daily runs of topgrade, before I do any updates.

  • @haque@lemm.ee
    3 days ago

    I use Duplicacy to back up to my TrueNAS server. Crucial data like documents is backed up a second time to my GDrive, also using Duplicacy. Sadly it’s a paid solution, but it works great for me.

  • @blade_barrier@lemmy.ml
    4 days ago

    Since most of the machines I need to back up are VMs, I do it by means of the hypervisor. For physical ones I’d use borg scheduled in crontab.

  • hallettj
    4 days ago

    My conclusion after researching this a while ago is that the good options are Borg and Restic. Both give you incremental backups with cheap point-in-time snapshots. They are quite similar to each other, and I don’t know of a compelling reason to pick one over the other.

    • ZenlixOP
      4 days ago

      As far as I know, restic is by definition not purely incremental; it is a mix of full and incremental backup.

  • @poinck@lemm.ee
    4 days ago

    This looks a bit like borgbackup. It is also versioned, stores everything deduplicated, supports encryption, and can be mounted using FUSE.

    • ZenlixOP
      4 days ago

      Thanks for the pointer to borgbackup.

      After reading the Borg Backup quick start, they look very similar. But as far as I can tell, encryption and compression are optional in borg, while in restic they are always on. You can mount your backups in restic too. It also seems that restic supports more repository locations, such as several cloud storage providers and a special HTTP server.
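
      For illustration, those two restic features look like this in a sketch (the host and paths are made up):

      #!/usr/bin/env python
      # Sketch only: the host and paths are made up.
      import subprocess

      # The -r string picks the backend: a local path, sftp:, s3:, b2:,
      # or rest: for restic's own HTTP server (rest-server).
      REPO = 'rest:https://backup.example.com:8000/'

      subprocess.run(('restic', '-r', REPO, 'backup', '/home'), check=True)

      # Browse every snapshot as a filesystem until the process is stopped.
      subprocess.run(('restic', '-r', REPO, 'mount', '/mnt/restic'),
                     check=True)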

      I also noticed that borg is mainly written in Python while restic is written in Go. Based on the language, I would assume restic is a bit faster (I have not tested that).

      • @drspod@lemmy.ml
        4 days ago

        It was a while ago that I compared them, so this may have changed, but one of the main differences I saw was that borg had to back up over SSH, while restic had storage backends for many different storage methods and APIs.

      • @ferric_carcinization@lemmy.ml
        4 days ago

        I haven’t used either, much less benchmarked them, but the performance difference should be negligible given the IO-bound nature of the work. Even with compression and encryption, it’s likely that either the language is fast enough or that the hot paths are implemented in a fast language.

        Still, I wouldn’t call the choice of language insignificant. IIRC, Go is statically typed while Python isn’t. Even if type errors are rare, I would rather trust software written to be immune to them. (Same with memory safety, but both languages use garbage collection, so it’s not really relevant in this case.)

        Of course, I could be wrong. Maybe one of the tools cannot fully utilize the network or disk. Perhaps one of them uses multithreaded compression while the other doesn’t. Architectural decisions made early on could also cause performance problems. I’d just rather not assume any noticeable performance difference caused by the programming language in this case.

        Sorry for the rant, this ended up being a little longer than I expected.

        Also, Rust rewrite when? :P