Install Proxmox Backup Server: step-by-step tutorial 2026
Step-by-step PBS 4.2 install in 2026: prerequisites, ISO, ZFS partitioning, datastore, first backup and post-install tuning.
When we deploy a new Proxmox Backup Server on one of our managed servers, the ISO install takes a few minutes; ZFS tuning, datastore creation and a verified first backup take one to two hours. This tutorial follows the sequence we apply internally, adapted for someone installing their first PBS.
The goal is that by the end you have a working PBS 4.2, a properly tuned ZFS datastore, a Proxmox VE host pushing its backups to it, and a restore test that proves the chain works. That is the minimum to call a PBS “production-ready”. All commands have been verified on PVE 9 and PBS 4.2, the reference versions in 2026.
Hardware and network prerequisites
PBS is light on CPU and RAM. On disks, much less so. We start there because it is the line item people get wrong most often.
CPU and RAM. For a small-fleet PBS (up to 30-40 VMs, 5-10 TB of useful data), 4 cores and 16 GB of RAM are enough. Above that, plan roughly 0.5 GB of RAM per TB of raw datastore for ZFS ARC, plus 4-8 GB for the system and PBS jobs. Our typical sizing for a dedicated PBS handling 50-100 VMs sits at 8 cores and 32-64 GB.
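Applied as a sanity check: a 48 TB raw datastore works out to roughly 24 GB of ARC plus 4-8 GB for the system and jobs, which is exactly why our 50-100 VM sizing lands in the 32-64 GB range.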
Disks. Three roles to separate:
- System: 2x SSDs in a ZFS mirror. 240 GB is plenty.
- Main datastore: HDD or NVMe array depending on budget. RAIDZ2 if double-disk resilience matters most, mirror if random-IO performance matters most.
- ZFS special device (recommended for HDD pools): 2x NVMe SSDs in mirror, sized to about 0.3% of the pool size. That is what turns a 6-hour PBS garbage collection into a 40-minute one on HDD pools.
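As a sizing check for the special device: on a 60 TB raw pool, 0.3% comes to about 180 GB, so a mirrored pair of 240 GB NVMe drives leaves comfortable headroom.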
Network. 2x 10 Gb interfaces minimum if you target backup throughput above 500 MB/s. At 1 Gb you saturate at 110 MB/s, which works for 5-10 TB of data but becomes limiting beyond that. Plan LACP or failover across the two links.
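For reference, a minimal LACP bond on PBS (Debian's ifupdown2 stack) looks like the sketch below. The interface names and addresses are placeholders to adapt to your hardware and LAN, and 802.3ad mode requires a matching LACP configuration on the switch:

# /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 10.0.10.5/24
    gateway 10.0.10.1
    bond-slaves ens18 ens19          # the two 10 Gb interfaces
    bond-mode 802.3ad                # LACP; switch ports must be configured to match
    bond-xmit-hash-policy layer3+4
    bond-miimon 100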
OS and version. PBS 4.2 is based on Debian 13. x86_64 only, ARM is not officially supported. We do not recommend stacking PBS on an existing PVE: technically possible, but it mixes responsibilities, complicates reboots, and creates dependencies you regret on incident day.
Install: ISO, boot, partitioning
The PBS ISO is downloaded from the Proxmox site (Downloads section, Proxmox Backup Server, ISO Installer). Verify the SHA-256 checksum next to the link. The PBS 4.2 ISO weighs about 1.3 GB.
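From a Linux shell the check is one command; the filename below matches the 4.2-1 ISO, adjust it to the version you downloaded:

sha256sum proxmox-backup-server_4.2-1.iso
# compare the printed hash with the checksum published next to the download link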
Two boot options depending on your infrastructure:
- Physical server: burn to USB with dd if=proxmox-backup-server_4.2-1.iso of=/dev/sdX bs=1M status=progress conv=fsync, or via IPMI/iDRAC virtual media.
- VM: mount the ISO as a virtual CD. Under PVE this works well in UEFI mode with a virtio disk.
The graphical installer walks you through seven screens:
- EULA acceptance.
- System disk choice. Pick your two SSDs as zfs (RAID1) via the Options button. Keep the compress=on, checksum=on and ashift=12 defaults.
- Localisation: country, timezone, keymap. The default us keymap is fine if you SSH from a US-layout terminal. Pick consciously.
- Root password and admin email. The admin email receives ZFS alerts, job notifications, prune reports. Put an actually-watched address.
- Network configuration. Static IP recommended, FQDN that resolves in your DNS, gateway and DNS pointing to your LAN. If you plan a Let’s Encrypt certificate, the FQDN must be reachable via HTTP-01 from the Internet or solvable via DNS-01.
- Summary and install kick-off. Allow 6-10 minutes depending on SSD speed.
- Reboot. Eject the virtual media or USB stick, restart.
After reboot you land on a PBS console showing the URL https://<ip>:8007. That is the web UI. SSH listens in parallel on port 22 with the root user created during install.
First login and ZFS datastore
Log into https://<ip>:8007 as root@pam with the install password. The browser will complain about the self-signed certificate. Normal, we fix that later.
Update first. Refresh the package list and apply pending updates from the Updates panel of the web UI, or over SSH as sketched below.
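From the shell the equivalent is standard Debian tooling; this assumes you have already pointed APT at the pbs-no-subscription or enterprise repository:

apt update && apt dist-upgrade -y   # pull and apply all pending PBS updates
proxmox-backup-manager versions     # confirm the running version afterwards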
Before creating the PBS datastore, we prepare the ZFS pool on the data array, straight from the web UI.
Open Administration > Disks. The view lists every disk the system has detected, with size, model, serial number and usage state. Check that the disks destined for the datastore are all present and unused. If one of them shows residual usage from an older install, select it and click Wipe Disk to clear it.
In Administration > Disks > ZFS, click Create: ZFS. The form that opens is where the structural decisions of the datastore are made:
- Name: tank (or any short identifier that makes sense to your ops team).
- RAID Level: RAIDZ2 for a six-HDD pool with double-disk resilience. Mirror if the priority is random-IO performance on two disks.
- Compression: zstd. Available since OpenZFS 2.0, it offers a better ratio than lz4 for negligible CPU cost on modern processors.
- ashift: 12 (4 KB block size, the right value for almost any modern disk).
- Add Storage: leave unchecked. PBS manages the datastore itself, we do not want it registered as a PVE storage.
- Devices: tick the HDDs to include in the pool.
Confirm with Create. PBS builds the pool in a few seconds. Then verify in the ZFS list that the pool shows up with online status.
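The same checks from the shell, for those who prefer to confirm outside the UI:

zpool status tank            # every vdev should report ONLINE with zero errors
zpool get ashift tank        # expect 12
zfs get compression tank     # expect zstd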
To add an SSD special device to the pool (recommended for HDD pools, see the hardware prerequisites), go back to Administration > Disks > ZFS, select the tank pool and use the Add: Special Device action to attach the two NVMe drives in mirror. This is non-destructive and can be done after the fact, without service interruption. The special device receives metadata and small files, drastically speeding up PBS garbage collection.
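If you prefer the shell, the same attachment is a single zpool command; the by-id paths below are placeholders for your two NVMe drives:

zpool add tank special mirror /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B
zpool status tank   # the pool now lists a "special" mirror vdev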
Once the pool is ready, create the PBS datastore proper: Datastore > Add Datastore. Backing path = /tank/datastore (or the exact mountpoint of the pool), Name = a short identifier (ds01). Tick Mount on boot. PBS creates the chunks and namespaces structure.
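The CLI equivalent, useful when scripting the commissioning:

proxmox-backup-manager datastore create ds01 /tank/datastore
proxmox-backup-manager datastore list   # ds01 should appear with its backing path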
While you are at it, set up a proper certificate. If the PBS is on a public FQDN, request a Let’s Encrypt from the UI (Certificates > ACME). If it is internal, use your PKI or a wildcard. The backups themselves are encrypted client-side, but the web API deserves a trusted cert so admins do not get used to ignoring warnings.
Connect a Proxmox VE host and run a test backup
On PBS, create an API token rather than using the root password. Datacenter > Permissions > API Tokens > Add. User = root@pam, Token name = pve-prod-01. Uncheck Privilege Separation for this first test. Note the token value, it will not be displayed again.
Grab the SHA-256 fingerprint of the PBS certificate: Datacenter > Certificates, copy the fingerprint value. PVE uses it to verify it is talking to the right server.
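The fingerprint is also available from the PBS shell (output formatting may vary slightly between versions):

proxmox-backup-manager cert info | grep -i fingerprint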
On PVE, add PBS as storage: Datacenter > Storage > Add > Proxmox Backup Server.
- ID: pbs01
- Server: PBS FQDN or IP
- Username: root@pam!pve-prod-01 (the !name suffix indicates a token)
- API token: the value copied above
- Datastore: ds01
- Fingerprint: paste the SHA-256 value copied earlier
Validate. PVE tests the connection, the storage icon turns green if all is well. You can now trigger a manual backup: pick a VM, Backup > Backup now > Storage = pbs01. For a first test, take a 20-50 GB VM, it goes fast and fills the datastore enough to validate the full chain.
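The same storage definition and test backup can be done from the PVE shell. A sketch with placeholder values (FQDN, token secret, fingerprint, VMID) to substitute with your own:

pvesm add pbs pbs01 \
    --server pbs.example.com \
    --datastore ds01 \
    --username 'root@pam!pve-prod-01' \
    --password 'xxxx-token-secret-xxxx' \
    --fingerprint 'aa:bb:cc:...'
pvesm status --storage pbs01                 # should report active
vzdump 101 --storage pbs01 --mode snapshot   # manual backup of VM 101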
Once the backup is done, go back to the PBS UI, Datastore > ds01 > Content. Your snapshot is listed with hash, raw size, compressed size, dedup ratio. Run the restore test immediately, otherwise you will never run it: pick the snapshot, Restore, target a new VMID. PVE restores into the chosen storage. On a 30 GB Linux VM, allow 3-5 minutes on a 10 Gb LAN.
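From the PVE shell, listing and restoring look like this; the snapshot volume ID below is illustrative, copy the real one from the pvesm list output:

pvesm list pbs01   # lists snapshots as backup volumes
qmrestore pbs01:backup/vm/101/2026-01-15T03:00:00Z 9101 --storage local-zfs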
Post-install tuning we always apply in production
Three things we configure systematically before calling a PBS “ready”.
Cap the ZFS ARC if needed. By default ZFS uses up to 50% of RAM for its cache. On a dedicated PBS that is fine. If you want predictable memory consumption or if the machine hosts something else:
echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf # 8 GB cap
update-initramfs -u
Reboot to apply. Check with arc_summary | grep 'ARC Size'.
Garbage collection and prune schedule. Datacenter > Datastore > ds01 > Prune & GC. We configure:
- Prune: keep-daily=7, keep-weekly=4, keep-monthly=6 to start. Tune later for your retention policy (a dry-run check is sketched after this list).
- Garbage collection: once daily at an off-peak hour (typically 3 a.m.). GC frees the space of unreferenced chunks. Without GC, the datastore never reclaims the space of expired backups.
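To preview what a retention policy would keep before committing to it, proxmox-backup-client has a dry-run mode; the repository string and backup group below are examples to adapt:

proxmox-backup-client prune vm/101 \
    --repository 'root@pam@pbs.example.com:ds01' \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    --dry-run   # prints which snapshots would be kept or removed, deletes nothing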
Email notifications. Datacenter > Notifications. Configure SMTP (internal relay or local Postfix + DKIM if deliverability matters), and turn on notifications for failed jobs, failed ZFS scrubs, prune errors. Without that, you find out about a dead disk three months after the fact.
A last tip many people skip: schedule a monthly ZFS scrub. On Debian/PBS, the zfs-scrub-monthly@<pool>.timer is shipped but disabled. systemctl enable --now zfs-scrub-monthly@tank.timer settles it.
Going further
With a working PBS, a tuned ZFS datastore, and a PVE host pushing to it, you have the base. The next topics to tackle, in order:
- 3-2-1 copy. The PBS we just installed is a single site. To meet the 3-2-1-1-0 rule, add a remote target (sync to a second PBS, or S3 export via the PBS 4 native support). The risk that ransomware reaches the local PBS and encrypts the backups is not theoretical.
- Client-side encryption. PVE can encrypt chunks before sending them with a key PBS never sees. One extra notch of security with no perceivable performance cost.
- Multi-tenancy via namespaces. If several teams share the same PBS, isolate their backups in distinct namespaces with fine-grained ACLs. That stops a junior sysadmin from listing or deleting another team’s backups by accident.
- Formal restore tests. An untested backup is not a backup. NIS2 and DORA require documented tests with measured restore times.
If you would rather hand the install, the tuning and the day-to-day operations over to a dedicated operator than handle them in-house, our Cloud-PBS managed PBS offering covers every step described in this tutorial, from initial commissioning to 24/7 monitoring. To dig into the why behind all this (RTO, RPO, the 3-2-1-1-0 rule, DORA audit), our complete Proxmox backup guide for 2026 covers the strategy upstream. And if you are coming from a Veeam environment and weighing the switch, the PBS vs Veeam 2026 comparison walks through the arbitration.