opnsense : 25.7.2
proxmox : 8.4.10
CPU : 24 Cores / Xeon(R) CPU L5640 (yes old ones)
RAM : 128GB
Hello
I am testing LAN network speed with iperf3 to and from an OPNsense 25.7 installation running as a VM on Proxmox. All devices involved in the test are connected to the same Proxmox virtual network bridge as VMs, or via a 10Gbps physical switch. The setup is as follows:
OPNSense VM on proxmox : br00 , 8 cores, 16 GB RAM
proxmox node host : br00
VM1 ubuntu on proxmox : br00
VM2 ubuntu on proxmox : br00
Dev1 ubuntu : br00 <-> 10Gbps physical switch
- Using iperf3 between the Proxmox node, VM1 and VM2 (in all server/client combinations), the speed is about 15-20Gbps. The speed to Dev1 on the attached physical switch is, as expected, about 9.8Gbps. Those speeds are fine.
- Using iperf3 as a client on OPNsense towards the other VMs and/or the Proxmox host, the speed is only about 0.9-1.2Gbps. The same is true if OPNsense is used as the iperf3 server. During the speed test, the OPNsense CPU core load is at most 50% and RAM usage is below 25% (per the OPNsense dashboard). The total Proxmox CPU load stays under 50% during the test.
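For reference, the tests above were run roughly like this (hostnames and addresses are placeholders, not the actual lab values); this is a command sketch, not meant to run outside the lab:

```shell
# On the target (e.g. VM1), start an iperf3 server:
iperf3 -s

# On the source (e.g. the Proxmox host, VM2, or OPNsense), run the
# client against the server's address (10.0.0.11 is a placeholder)
# for 30 seconds:
iperf3 -c 10.0.0.11 -t 30
```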
The OPNsense VM's network interface attached to the Proxmox virtual bridge br00 is of type VirtIO (like all interfaces of the other VMs) and shows up as 10Gbase-T full duplex, so that seems to be fine.
So my question is: why is the throughput of the OPNsense VM limited to about 0.9-1.2Gbps, even when it acts as the iperf3 client? Operating as an iperf3 client towards the attached LAN under "normal" circumstances, the firewall performance should not be the limiting factor (?).
Many thanks in advance for advice.
You need multithreading support and RSS, see https://forum.opnsense.org/index.php?topic=44159.0 and https://forum.opnsense.org/index.php?topic=42985.0, #10.
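The RSS-related tunables discussed in those threads are set under System > Settings > Tunables on OPNsense. A commonly suggested starting point looks like the following (the values are suggestions to adjust to your core count, not a verified configuration for this box; a reboot is required for them to take effect):

```shell
# Set via System > Settings > Tunables in the OPNsense GUI:
net.isr.bindthreads=1     # bind netisr threads to CPU cores
net.isr.maxthreads=-1     # one netisr thread per core
net.inet.rss.enabled=1    # enable receive-side scaling
net.inet.rss.bits=2       # 2^2 = 4 RSS buckets; scale with core count
```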
That being said, you should not test with OpnSense as the source or target of an iperf test, because that puts stress on the CPU; you cannot determine routing performance that way. If you do it at all, use "-P n" to run more than one TCP stream.
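A routed test along those lines, with the iperf3 endpoints on opposite sides of the firewall so the traffic passes *through* OPNsense instead of terminating on it, might look like this (the addresses are hypothetical):

```shell
# VM1 on the LAN side acts as the server:
iperf3 -s

# Dev1 (or a VM on another OPNsense interface) acts as the client,
# so OPNsense only routes the packets; -P 4 opens four parallel
# TCP streams:
iperf3 -c 192.168.1.10 -P 4 -t 30
```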
Hello
Many thanks for your advice. I read your HOWTO about virtualization (proxmox) before.
Meanwhile, I did some other tests and configuration changes:
- changed on the OPNsense interface "disable HW VLAN offload" and "disable HW CRC offload":
> resulting in a slightly better speed. "disable HW TSO" and "disable HW LRO" worsened the speed (from 1.1Gbps to 0.2Gbps).
- changed the OPNsense VM guest hardware NIC queues to the number of cores (8):
> resulting in the same speed limitation.
- changed the OPNsense VM guest hardware NIC driver from vtnet (VirtIO) to vmxnet3:
> resulting in a speed increase of about factor 4 (from 1.1Gbps to ca. 5Gbps). Unfortunately, the "disable HW offload" settings worked differently on vmxnet3 than on VirtIO vtnet interfaces and created anomalies (speed increases and strong speed limitations at the same time) with other VMs using VirtIO interfaces (it seems vmxnet3 and VirtIO NICs cannot coexist on the same Proxmox bridge?).
- installed a fresh OPNsense VM (25.7.2) on a second, identical Proxmox node with the same VM characteristics (8 cores, 16 GB RAM, VirtIO NIC with 8 queues) and tested with iperf3 against an Ubuntu VM on the same node:
> resulting in the same speed limitation as on the original Proxmox node. Disabling the firewall functionality on OPNsense gave only a small (10%) speed increase.
- as the VirtIO vtnet drivers on FreeBSD are said to be not fully compatible or problematic (in a Proxmox environment), installed a plain FreeBSD VM guest (14.3, June 2025) on the same Proxmox node, with a VirtIO NIC on the same virtual bridge as the OPNsense installation:
> resulting in iperf3 speeds of about 10Gbps with the FreeBSD VM as source or target towards another Ubuntu VM on the same node, just as between the Ubuntu VMs on that node.
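The NIC queue change mentioned above can be applied on the Proxmox side from the CLI as well as the GUI. A sketch, assuming VM ID 100 and the bridge name from this setup (adjust both to your environment):

```shell
# Give the VirtIO NIC of VM 100 eight queues, matching its 8 vCPUs:
qm set 100 --net0 virtio,bridge=br00,queues=8
```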
So some general speed limitation is imposed by the old host hardware running Proxmox (OPNsense performs perfectly on a VMware ESXi 6.5 host with a Xeon Silver 4210 CPU). But OPNsense appears to behave specially on Proxmox regarding network speed, especially considering that a plain FreeBSD VM on the same node and the same virtual bridge behaves like the other Linux VMs at 10Gbps. My experience seems to correlate with the issues reported in this topic: https://forum.opnsense.org/index.php?topic=45870
Generally speaking, best practice is to never use any HW offloading on OpnSense. Though theoretically you could gain speed when real hardware is involved, it can have idiosyncrasies. I tested on bare metal when I first got into OpnSense, but after a while I noticed that even on well-supported hardware, under certain circumstances, it can fail.
In a virtualized environment, "hardware" acceleration obviously cannot give better results, and it sometimes fails in a big way (e.g. the missing checksum implementation for virtio under FreeBSD).
My point being: Do not even try to use HW offloading of anything under OpnSense, even less so under virtualisation.
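For completeness: the offload features in question correspond to standard FreeBSD interface flags. On OPNsense they should be disabled via Interfaces > Settings rather than by hand, but the effect is roughly equivalent to this sketch (the interface name vtnet0 is an assumption):

```shell
# Disable checksum, TSO and LRO offloading on the VirtIO NIC:
ifconfig vtnet0 -rxcsum -txcsum -tso -lro

# Verify: the options line should no longer list
# RXCSUM/TXCSUM/TSO4/LRO:
ifconfig vtnet0
```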