PBS seed transfer time calculator

Estimate how long your first full Proxmox backup will take to upload, and when a physical seed drive becomes the faster option.

The initial full backup to any new Proxmox Backup Server target, whether self-hosted or Cloud-PBS, must transfer every chunk at least once. Transfer time is bounded by the slower of the client uplink and the server link, times an efficiency factor that accounts for protocol overhead and packet loss. The calculator applies that model and flags the point where shipping a physical seed drive becomes the faster path.
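The model above can be sketched in a few lines of Python. This is an illustrative reconstruction of the calculation, not code from any PBS tool; the function name and the decimal-GB convention are assumptions.

```python
# Sketch of the calculator's model: time = size / (min(links) * efficiency).
# Names are illustrative; GB is decimal (10^9 bytes), links are in Gbps.

def seed_transfer_hours(dataset_gb: float,
                        client_gbps: float,
                        server_gbps: float,
                        efficiency: float = 0.85) -> float:
    """Hours to push the first full backup over the wire."""
    bottleneck_gbps = min(client_gbps, server_gbps)  # slower link sets the ceiling
    effective_gbps = bottleneck_gbps * efficiency    # protocol overhead, contention
    bits = dataset_gb * 8e9                          # GB -> bits
    return bits / (effective_gbps * 1e9) / 3600

# Defaults shown on this page: 1000 GB over a 1 Gbps uplink to a 10 Gbps target.
hours = seed_transfer_hours(1000, client_gbps=1.0, server_gbps=10.0)
print(f"{int(hours)} h {round(hours % 1 * 60)} min")  # -> 2 h 37 min
```

With the defaults this reproduces the 2 h 37 min shown in the results panel.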

Your seed

1000 GB

Client uplink: nominal speed of the internet link from the site that pushes the backup.

Server link: nominal speed of the link on the PBS target side. Cloud-PBS datacenters expose 10 Gbps at the rack.

Advanced assumptions
85%

Share of the bottleneck bandwidth actually usable after TCP overhead, contention and PBS protocol framing. Clean fiber links deliver 80 to 90%. Shared home connections during business hours sit closer to 50 to 70%.

Estimated transfer time

Bottleneck

1 Gbps

The slower of the two links sets the ceiling.

Effective throughput

850 Mbps

Transfer time

2 h 37 min

Methodology

  1. Transfer time assumes a sustained link with no retries. Packet loss, path MTU issues and provider shaping can slow real-world transfers by 10 to 30%.
  2. Effective throughput defaults to 85% of the bottleneck. This matches what we measure on clean fiber links between typical PVE hosts and Cloud-PBS datacenters.
  3. Recommendations change at 3 days and 10 days of continuous transfer. Under 3 days, over the wire almost always wins on operational simplicity. Over 10 days, a physical seed is practically mandatory.
  4. The dataset size here is the first full backup. PBS content-defined chunking will deduplicate incrementals from day two onward, so steady-state bandwidth is much lower.
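The 3-day and 10-day thresholds from point 3 can be expressed as a small decision rule. The thresholds come from the methodology above; the function name and labels are illustrative, not part of any PBS tooling.

```python
# Hypothetical sketch of the recommendation rule described in the methodology:
# under 3 days of transfer, push over the wire; over 10 days, ship a drive.

def seed_recommendation(transfer_hours: float) -> str:
    days = transfer_hours / 24
    if days < 3:
        return "over the wire"          # operational simplicity wins
    if days <= 10:
        return "consider a seed drive"  # grey zone between 3 and 10 days
    return "physical seed"              # shipping is practically mandatory

print(seed_recommendation(24 * 12))  # -> physical seed
```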

About seed transfers

Does Cloud-PBS actually offer physical seed drives?
Yes, on request. We ship a pre-formatted drive by registered post. You copy the dataset locally using the PBS client or rsync, and ship it back. We import it into your datastore, preserving deduplication state, so incrementals from day one go straight into the existing chunk set. Contact us for pricing and timelines.
What bandwidth does Cloud-PBS expose on the server side?
Cloud-PBS datastores are reachable at 10 Gbps from the rack on every managed plan. The real ceiling in a seed is your own uplink. Select 1 Gbps or 10 Gbps on the client side of this calculator to see the difference.
What protocol does PBS use for the seed?
PBS uses its own HTTP/2-based protocol over TLS. Overhead is low by modern standards but non-zero. On a clean fiber link expect 80 to 90% of nominal bandwidth. On a shared home link during business hours, closer to 50 to 70%.
Can I throttle the seed so it does not saturate my link?
Yes. The Proxmox Backup Client respects standard Linux tc rate limits, and PBS 3.x supports per-job bandwidth caps directly. For a production site you would typically cap the seed at 50 to 70% of uplink during business hours and lift the cap overnight.
How much longer will my daily incrementals take after the seed?
Typical production fleets ship 2 to 5% of the dataset per day after the initial seed, thanks to PBS content-defined chunking. For a 5 TB seed that means about 100 to 250 GB of actual new chunks per day, which a 1 Gbps link moves in under an hour.
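A quick back-of-envelope check of those incremental figures, using the calculator's default 85% efficiency on a 1 Gbps link (the helper name is illustrative):

```python
# Daily churn of 2-5% on a 5 TB dataset, pushed over 1 Gbps at 85% efficiency.

def transfer_minutes(gb: float, link_gbps: float = 1.0, efficiency: float = 0.85) -> float:
    return gb * 8 / (link_gbps * efficiency) / 60  # GB -> Gb, seconds -> minutes

for churn in (0.02, 0.05):
    daily_gb = 5000 * churn
    print(f"{daily_gb:.0f} GB -> {transfer_minutes(daily_gb):.0f} min")
# -> 100 GB -> 16 min
# -> 250 GB -> 39 min
```

Even the worst case, 250 GB of new chunks, fits comfortably inside an hour, consistent with the figure above.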

Need a physical seed?

We ship seed drives on request, ingest them on arrival, and your incrementals go straight into the deduplicated datastore with no format conversion.