PVE storage replication with different storage names

Replicating a VM on Proxmox is still a demanding task, with several prerequisites:

  • VM storage must use ZFS.
  • The ZFS storage name must be identical on both nodes.
  • Nodes must be part of a cluster.

After setting up my new nodes in a hyper-converged configuration, I was unable to maintain consistent naming across the board. Consequently, replicating my VMs through the GUI became impossible, leading me to create the tutorial below.

It's important to note that this feature was first requested in 2017 and, as of this writing (March 17, 2024), Proxmox has yet to implement it. The request is tracked here: Proxmox Bug Tracker Issue #2087

Prerequisites for Replicating Your VMs on Proxmox Using My Script

  • ZFS storage
  • SSH keys between nodes for passwordless authentication
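Passwordless SSH can be set up with the standard OpenSSH tools. A minimal sketch, assuming you run the script as root and that the destination node is `h2` (the hostname used later in the script; adapt it to your own nodes):

```shell
# On the source node: generate a key pair if one doesn't exist yet
# (empty passphrase so the script can run unattended)
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ''

# Copy the public key to the destination node
ssh-copy-id -i /root/.ssh/id_ed25519.pub root@h2

# Verify that no password is prompted
ssh root@h2 true
```

Note that nodes already joined in a Proxmox cluster may have root SSH trust configured; in that case you can skip this step after verifying with the last command.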

Script for VM Replication

Below is the Bash script I use for VM replication:

#!/bin/bash

# VM IDs to replicate
ids=(100 101)

# Destination host (reachable via passwordless SSH)
destinationHost="h2"

# Destination ZFS pool/dataset
destinationPool="rpool/Replica"

# Loop through the array of IDs
for id in "${ids[@]}"; do
    # Read the VM's disk entries from "qm config": keep scsiN/virtioN
    # lines, skip CD-ROM drives, extract the "storage:disk" value and
    # turn it into a dataset path ("storage/disk")
    while IFS= read -r line; do
        # Recreate the snapshot; ignore the error on the first run,
        # when no previous snapshot exists yet
        zfs destroy "${line}@Replication" 2>/dev/null
        zfs snapshot "${line}@Replication"
        # Keep the same disk name as on the local storage
        diskName=$(echo "${line}" | awk -F'/' '{print $2}')
        # Full send of the snapshot to the destination pool
        zfs send "${line}@Replication" | ssh "${destinationHost}" zfs receive -F "${destinationPool}/${diskName}"
    done < <(qm config "$id" | grep -E '^(scsi|virtio)[0-9]+:' | grep -v 'media=cdrom' | awk -F': |,' '{print $2}' | tr : /)
done

The ids variable contains the IDs of the VMs I wish to replicate. The destinationHost variable specifies the replication host, and the destinationPool variable contains the destination ZFS pool (which can be created using zfs create rpool/<pool name>).

This script retrieves the configuration of each VM, lists its ZFS-backed disks, creates a snapshot of each one, and then sends it to the destination host. It's a simple script that replicates storage much as Proxmox would.
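To illustrate how the disk-listing pipeline works, here is the same chain run against a sample `qm config` output (the config lines below are made up for the example):

```shell
# Hypothetical "qm config" output: one SCSI disk, one CD-ROM drive
# that must be ignored, and one VirtIO disk
sampleConfig='boot: order=scsi0
scsi0: local-zfs:vm-100-disk-0,size=32G
scsi2: local:iso/debian.iso,media=cdrom
virtio1: tank:vm-100-disk-1,size=8G'

# Same pipeline as in the replication script
echo "$sampleConfig" \
    | grep -E '^(scsi|virtio)[0-9]+:' \
    | grep -v 'media=cdrom' \
    | awk -F': |,' '{print $2}' \
    | tr : /
# Prints:
#   local-zfs/vm-100-disk-0
#   tank/vm-100-disk-1
```

Each line that reaches the loop is therefore a `storage/disk` path, ready to be used as a ZFS dataset name.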

To mount your VM on the replication host, you can either:

  • Copy the VM's configuration from the source host and add its disks.
  • Recreate the configuration and import the replicated disks.

I opted for the second option because it is simpler and scales better over time. In the event of losing the source host, it takes at most 5 minutes per VM to recreate the configuration and import the disks.
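As a sketch of that second option, assuming VM 100 was replicated into rpool/Replica and that the replication host has a ZFS storage named `local-zfs` backed by `rpool/data` (both names are assumptions about your setup, adjust them):

```shell
# Create an empty VM shell on the replication host
# (adjust CPU, RAM and network to match the original VM)
qm create 100 --name restored-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0

# Move the replicated zvol into the dataset backing the "local-zfs" storage
zfs rename rpool/Replica/vm-100-disk-0 rpool/data/vm-100-disk-0

# Attach the existing disk to the VM and make it bootable
qm set 100 --scsi0 local-zfs:vm-100-disk-0
qm set 100 --boot order=scsi0
```

These commands must run on a Proxmox host; the disk name `vm-100-disk-0` is the one produced by the replication script above.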

Note: This script destroys and recreates the snapshot on every run to save space. The trade-off is that each run performs a full send to the destination server, generating significant network and disk traffic. In my case this isn't an issue, thanks to 20GB/s symmetric networking and NVMe storage.
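If the full send ever becomes a problem, the snapshot rotation can be replaced with an incremental scheme: keep the previous snapshot on both sides and send only the delta with zfs send -i. A minimal sketch for one dataset, reusing the loop variables from the script above (the snapshot names and rotation logic here are my own, not part of the original script):

```shell
# $line, $destinationHost, $destinationPool and $diskName come from the main loop
if zfs list -t snapshot "${line}@Replication" >/dev/null 2>&1; then
    # A previous snapshot exists: take a new one and send only the delta
    zfs snapshot "${line}@Replication-new"
    zfs send -i "${line}@Replication" "${line}@Replication-new" \
        | ssh "${destinationHost}" zfs receive -F "${destinationPool}/${diskName}"
    # Rotate: drop the old base and promote the new snapshot on both sides
    zfs destroy "${line}@Replication"
    zfs rename "${line}@Replication-new" "${line}@Replication"
    ssh "${destinationHost}" zfs destroy "${destinationPool}/${diskName}@Replication"
    ssh "${destinationHost}" zfs rename \
        "${destinationPool}/${diskName}@Replication-new" \
        "${destinationPool}/${diskName}@Replication"
else
    # First run: full send
    zfs snapshot "${line}@Replication"
    zfs send "${line}@Replication" \
        | ssh "${destinationHost}" zfs receive -F "${destinationPool}/${diskName}"
fi
```

This keeps one extra snapshot per disk on each side, trading a little space for much less traffic on every run after the first.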