Script to automate interface renaming on OPNsense after VM restore

Started by deajan, Today at 04:50:27 PM

I've written a (quick and dirty) script to handle a specific scenario for my OPNSense VM.

I'm hosting OPNsense on a KVM hypervisor with 10Gb NICs (named ixl0..9).
I back up that VM every night and restore it on a second, much smaller hypervisor which has 1Gb NICs (named igb0..9).
I do this because I don't want to handle a high-availability scenario, which is quite an insane amount of work (every interface needs a VRRP address, and some of the plugins aren't HA-aware, meaning even more work).

Anyway, once I've restored the VM on my spare device, it won't work unless I rename my interfaces from ixl to igb, which makes sense.
To make things smoother, I came up with a (perhaps bad) idea:

1. Mount restored disk image locally on backup KVM hypervisor
2. Replace interface names in config.xml
3. Unmount disk image
4. Profit!

So far, I've come up with a script to handle this.
The script assumes that the host hypervisor has a working ZFS implementation and can mount QEMU disk images via qemu-nbd.
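The core of the idea boils down to something like the following sketch (placeholder image path and mount point, and it assumes the guest uses the default ZFS pool name "zroot" with the usual zroot/ROOT/default boot environment; the full script linked below adds the error handling and cleanup):

modprobe nbd max_part=16
qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/opnsense-restored.qcow2
zpool import -d /dev -f -N -R /mnt/opnsense zroot   # import the guest pool under an altroot, nothing mounted yet
zfs mount zroot/ROOT/default                        # mount the root boot environment under /mnt/opnsense
sed -i -E 's/\bixl([0-9]+)/igb\1/g' /mnt/opnsense/conf/config.xml
zpool export zroot                                  # unmounts everything and releases the pool
qemu-nbd --disconnect /dev/nbd0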

I'm pretty sure there are better solutions; perhaps you could tell me how you manage cross-hypervisor OPNsense backup/restore scenarios?

Anyway, please find the script at https://github.com/deajan/linuxscripts/blob/master/virsh/offline_rename_opnsense_interfaces.sh

Any feedback is appreciated.

Yes, there is a better solution: eliminate the hardware dependencies completely by using virtio interfaces, as described here: https://forum.opnsense.org/index.php?topic=44159.0

With that approach, you do not even have to save and back up your configuration separately; you can just back up and restore the whole OPNsense VM.
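If you want to check what a VM currently uses before switching, something along these lines works (a rough sketch; "opnsense" is just a placeholder domain name):

virsh dumpxml opnsense | grep -A3 '<interface'   # shows the NIC model of each interface
virsh edit opnsense                              # change every <model type='...'/> to <model type='virtio'/>

Inside the guest, the NICs then show up as vtnet0, vtnet1, ... on any KVM host, so the interface assignments in config.xml survive a restore on different hardware unchanged.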

I do that on cloud-hosted KVM hosts by using a script that clones the main OPNsense VM in case I destroy it via some bone-headed manoeuvre. The clone is kept with autostart disabled and otherwise has the same machine configuration (including MACs).
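One way to script such a clone looks roughly like this (only a sketch; domain name, disk paths and so on are placeholders):

virsh shutdown opnsense                      # wait for the guest to be powered off before copying the disk
virsh dumpxml opnsense > /tmp/opnsense-spare.xml
# give the copy its own name and disk path, but keep the NIC MACs as-is
sed -i -e 's#<name>opnsense</name>#<name>opnsense-spare</name>#' \
       -e 's#opnsense\.qcow2#opnsense-spare.qcow2#' /tmp/opnsense-spare.xml
sed -i '/<uuid>/d' /tmp/opnsense-spare.xml   # libvirt generates a fresh UUID on define
qemu-img convert -O qcow2 /var/lib/libvirt/images/opnsense.qcow2 \
                          /var/lib/libvirt/images/opnsense-spare.qcow2
virsh define /tmp/opnsense-spare.xml
virsh autostart --disable opnsense-spare     # the spare stays defined but never starts on its own
virsh start opnsense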
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Indeed, but with virtio I previously saw a huge throughput drop (around -300%) once I enabled Suricata, hence the reason I pass through physical interfaces.
Perhaps this has been solved more recently?

Probably the CPU emulation type?
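If that's the cause, one thing worth trying (just a sketch, with a placeholder domain name) is exposing the host CPU to the guest instead of the default emulated model:

virsh shutdown opnsense
virsh edit opnsense      # set <cpu mode='host-passthrough'/> in the domain XML
virsh start opnsense     # the guest needs a full power cycle to pick up the new CPU model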
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+