I've written a (quick and dirty) script to handle a specific scenario for my OPNsense VM.
I'm hosting OPNSense on a KVM hypervisor with 10Gb NICs (named ixl0..9).
I back up that VM every night and restore it on a second, much smaller hypervisor which has 1Gb NICs (named igb0..9).
I do this because I don't want to run a full high-availability setup, which is an insane amount of work (every interface needs a VRRP address, and some of the plugins aren't HA-aware, meaning even more work).
Anyway, once I've restored the VM on my spare device, it won't work unless I rename my interfaces from ixl to igb, which makes sense.
To make things smoother, I came up with a (perhaps bad) idea:
1. Mount restored disk image locally on backup KVM hypervisor
2. Replace interface names in config.xml
3. Unmount disk image
4. Profit !
So far, I've come up with a script to handle this.
The script assumes that the host hypervisor has a working ZFS implementation and can attach qemu disk images via qemu-nbd.
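Very roughly, the idea looks like this (this is only a minimal sketch, not the script linked below; the image path, mount point, pool name and dataset layout are assumptions for an OPNsense guest installed on ZFS):

```
#!/usr/bin/env bash
# Rough sketch only. Assumes a Linux host with ZFS-on-Linux, a free
# /dev/nbd0, and an OPNsense guest installed on ZFS with the default
# pool/dataset names (zroot, zroot/ROOT/default).
set -euo pipefail

IMAGE="/var/lib/libvirt/images/opnsense.qcow2"   # hypothetical image path
ALTROOT="/mnt/opnsense"                          # temporary alternate root

modprobe nbd max_part=16
qemu-nbd --connect=/dev/nbd0 "$IMAGE"
partprobe /dev/nbd0

# Import the guest pool under an alternate root so it cannot collide with
# the host's own datasets, then mount the root dataset explicitly.
zpool import -d /dev -f -N -R "$ALTROOT" zroot
zfs mount zroot/ROOT/default

# Rename the 10Gb interfaces (ixl0..9) to their 1Gb equivalents (igb0..9).
sed -i 's/\bixl\([0-9]\)\b/igb\1/g' "$ALTROOT/conf/config.xml"

# Clean up: unmount, export the pool, detach the image.
zfs unmount zroot/ROOT/default
zpool export zroot
qemu-nbd --disconnect /dev/nbd0
```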
I'm pretty sure there are better solutions; perhaps you could tell me how you manage cross-hypervisor OPNsense backup/restore scenarios?
Anyway, please find the script at https://github.com/deajan/linuxscripts/blob/master/virsh/offline_rename_opnsense_interfaces.sh
Any feedback is appreciated.
Yes, there is a better solution: eliminate the hardware dependency completely by using virtio interfaces, as described here: https://forum.opnsense.org/index.php?topic=44159.0
With that approach, you do not even have to save and restore the configuration separately; you can just back up and restore the whole OPNsense VM.
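For illustration, the NICs can be attached as virtio devices along these lines (domain name, bridge and MAC address below are made up):

```
# Hypothetical example: attach a NIC as a virtio device, so OPNsense always
# sees vtnet0..n whether the underlying hardware is ixl or igb.
# Keep the MAC addresses identical on both hypervisors so the interface
# assignments survive a restore.
virsh attach-interface opnsense bridge br0 \
  --model virtio \
  --mac 52:54:00:aa:bb:01 \
  --config
```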
I do that on cloud-hosted KVM hosts with a script that clones the main OPNsense VM, in case I destroy it via some bone-headed manoeuvre. The clone is kept with the startup option set to no and otherwise has the same machine configuration (including MACs).
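A minimal sketch of that idea, with placeholder names (not the actual script):

```
# Rough sketch of the cold-clone idea (domain names, image path and MAC are
# placeholders). The source VM must be shut off or paused while cloning.
virt-clone --original opnsense --name opnsense-spare \
  --file /var/lib/libvirt/images/opnsense-spare.qcow2 \
  --mac 52:54:00:aa:bb:01
# Make sure the spare never starts on its own.
virsh autostart --disable opnsense-spare
```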
Indeed, but with virtio I previously saw throughput drop to roughly a third once I enabled Suricata, hence the reason I pass through physical interfaces.
Perhaps this has been solved more recently?
Probably the CPU emulation type?
Honestly, I've fiddled around with just about every possible solution, and tried QEMU v8 and v9 with various CPU models.
Finally, I settled on adding the following to `/boot/loader.conf` to get good performance.
```
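# "X" stands for the vtnet unit number, e.g. hw.vtnet.0.tso_disable for vtnet0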
hw.vtnet.X.tso_disable="1"
hw.vtnet.tso_disable="1"
hw.vtnet.lro_disable="1"
hw.vtnet.X.lro_disable="1"
hw.vtnet.csum_disable="1"
hw.vtnet.X.csum_disable="1"
```
Took the solution from https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059
Even with those settings, speed is good but far from what it should be (other people report the same: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059#c44).
I hadn't checked that bug report in a while, and noticed that vtnet offload improvements landed in FreeBSD only a couple of days ago.
Perhaps things will be better now; I'll definitely need to do some testing.
In the meantime, I've made that basic script to "cold" modify / inject data into an offline OPNsense VM, which solves an issue that may be gone soon.
Disabling all types of offloading is recommended in the official docs anyway. You can do this globally or for individual NICs, or, for some drivers, via sysctl parameters. You neither have to do that manually in /boot/loader.conf, nor should you, because system tunables could overwrite such settings.
The recommendation to disable hardware checksumming is explicitly noted in the Proxmox guide as well.
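As an illustration of the per-NIC variant (interface name assumed, and this is not persistent across reboots; the settings/tunables remain the proper place for it):

```
# Disable checksum, TSO and LRO offloading on a single NIC at runtime.
ifconfig vtnet0 -rxcsum -txcsum -tso -lro
```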
Yes, the situation will hopefully get better once offloading is implemented properly by default.
However, as also explained in the Proxmox guide, for very high speeds (> 1 Gbit/s) passthrough will give you speed gains.