(Or: Backblaze B2 cloud backups from a Proxmox Virtual Environment)
Backups are one of those things that have a tendency to become unexpectedly expensive – at least through the eyes of a non-techie: Not only do you need enough space to store several generations of data, but you want at least twice that, since you want to protect your information not only from accidental deletion or corruption, but also from the kind of accidents that can render both the production data and the backup unreadable. Ultimately, you’ll also want to spend the resources to automate as much of the process as possible, because anything that requires manual work will be forgotten at some point, and by some perverse law of the Universe, that’s when it would have been needed.
In this post I’ll describe how I’ve solved it for full VM/container backups in my lab/home environment. It’s trivial to adapt the information in this post to regular file system backups. Since I’m using a cloud service to store my backups, I’m applying a zero trust policy to them at the cost of increased storage (and network) requirements, but my primary dataset is small enough that this doesn’t really worry me.
Backblaze currently offers 10 GB of B2 object storage for free. This doesn’t sound like a lot today, but it will comfortably fit several compressed and encrypted copies of my reverse proxy, and my mail and web servers. That’s Linux containers for you.
First of all, we’ll need an account at Backblaze. Save your Master Application Key in your password manager! We’ll need it soon. Then we’ll want to create a Storage Bucket. In my case I gave it the wonderfully inventive name “pvebackup”.
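If you prefer the command line, Backblaze’s own b2 CLI (installable with pip, for example) can do the same thing. Treat the commands below as a sketch using the older subcommand names; newer releases group them differently, and the key ID and key are of course placeholders for your own credentials:

b2 authorize-account <keyID> <applicationKey>
b2 create-bucket pvebackup allPrivate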
Next, we shall install a program called rclone on our Proxmox server. The version in the apt repository as I write this seems to have a bug vis-à-vis B2 that requires us to use the Master Application Key rather than a more limited Application Key created specifically for this bucket. Since we’re encrypting our cloud data anyway, I feel pretty OK with this compromise for home use.
EDIT 2018-10-30: Downloading the current .deb package of rclone directly from the project site solved this bug. In other words, it is possible, and preferable, to create a separate Application Key with access only to the backup bucket, at least if the B2 account will be used for other storage too.
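For reference, fetching the upstream package looks roughly like this; the “current” filename is a moving target, so check the rclone downloads page for the exact name:

wget https://downloads.rclone.org/rclone-current-linux-amd64.deb
dpkg -i rclone-current-linux-amd64.deb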
Now we’ll configure the program:
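rclone config

This starts an interactive wizard that walks through the questions below.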
Type n to create a new remote configuration. Name it b2, and select the appropriate number for Backblaze B2 storage from the list; in my case it was number 3.
The Account ID can be viewed in the Backblaze portal, and the Application Key is the master key we saved in our password manager earlier. Leave the endpoint blank and save your settings. Then we’ll just secure the file:
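Depending on the rclone version, the credentials end up in ~/.config/rclone/rclone.conf or the older ~/.rclone.conf; restricting the file to its owner is enough here:

chmod 600 ~/.config/rclone/rclone.conf

For reference, the saved section should look roughly like this (values redacted):

[b2]
type = b2
account = <Account ID or application key ID>
key = <Master Application Key>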
We’ll want to encrypt the backup files before sending them to an online location. For this we’ll use gpg, for which the default settings should be enough. The command to generate a key is gpg --gen-key, and I created a key in the name of “proxmox” with the mail address I’m using for notification mails from my PVE instance. Don’t forget to store the passphrase in your password manager, or your backups will be utterly worthless.
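This key will later be used to encrypt each archive before it leaves the server, so it’s worth doing a quick round trip to confirm everything works; “somefile” below is just a placeholder:

gpg --list-keys proxmox
gpg --batch --yes --recipient proxmox --encrypt somefile
gpg --decrypt somefile.gpg > somefile.restored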
Next, we’ll shamelessly steal and modify a script to be used for hooking into the Proxmox VE backup process (I took it from this github repository and repurposed it for my needs).
Edit 2018-10-30: I added the --b2-hard-delete option to the job-end phase of deleting old backups, since the regular delete command just hides files in the B2 storage, adding to the cumulative storage used.
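In outline, the hook does something like the sketch below (the real thing lives in the repository linked above); the remote b2, bucket pvebackup and gpg key proxmox are the ones we set up earlier, and the 30-day retention is just an example:

#!/usr/bin/perl -w
# Minimal sketch of a vzdump hook script: encrypt finished dumps and
# push them to B2 with rclone, then prune old copies at the end of the job.
use strict;

my $remote = 'b2:pvebackup';   # rclone remote and bucket as configured above
my $phase  = shift || '';      # vzdump passes the phase name as the first argument

if ($phase eq 'backup-end') {
    # One guest has just been dumped; the archive path is exported as
    # TARGET on newer PVE releases and TARFILE on older ones.
    my $archive = $ENV{TARGET} || $ENV{TARFILE};
    die "no archive path in environment\n" unless $archive;

    # Encrypt for the "proxmox" key created earlier, then upload.
    system("gpg --batch --yes --recipient proxmox --encrypt '$archive'") == 0
        or die "gpg failed: $?\n";
    system("rclone copy '$archive.gpg' '$remote'") == 0
        or die "rclone upload failed: $?\n";
    unlink "$archive.gpg";
} elsif ($phase eq 'job-end') {
    # Prune old cloud copies; --b2-hard-delete really deletes the files
    # instead of merely hiding them. 30 days is an arbitrary retention.
    system("rclone delete --b2-hard-delete --min-age 30d '$remote'") == 0
        or warn "rclone prune failed: $?\n";
}

exit(0);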
Store this script in /usr/local/bin/vzclouddump.pl and make it executable:
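That boils down to:

chmod +x /usr/local/bin/vzclouddump.pl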
The last bit of CLI magic for today will be to ensure that Proxmox VE actually makes use of our fancy script:
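One way to do that is to point the script option in /etc/vzdump.conf at the hook, so every backup job picks it up:

echo "script: /usr/local/bin/vzclouddump.pl" >> /etc/vzdump.conf

(Individual jobs can also be given a hook with vzdump’s --script option instead of the global setting.)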
To try it out, select a VM or container in the PVE web interface and choose Backup -> Backup now. I use Snapshot as my backup method and GZIP as my compression method. Hopefully you’ll see no errors in the log, and the B2 console will display a new file with a name corresponding to the current timestamp and the machine ID.
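The same test can be run from a shell on the PVE host if you prefer; the VMID is of course just an example:

vzdump 100 --mode snapshot --compress gzip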
Conclusion
The tradeoffs with this solution compared to, for example, an enterprise product from Veeam are obvious, but so is the difference in cost. For a small business or a home lab, this solution should cover the need to keep the most important data recoverable even if something bad happens to the server location.