Hello, I run OPNsense as a VM with Unraid as the host.
All interfaces are virtualized using VIRTIO (also tested VIRTIO-NET and E1000).
In the web UI they are shown as "10Gbase-T <full-duplex>".
When copying files (and testing with iperf3) I only get about 1 Gbit/s (iperf3 shows 850 Mbits/sec).
Other VMs on the same host (Ubuntu, FreeBSD 13) run well above 10 Gbit/s (iperf3 shows 18 Gbits/sec).
Settings in OPNsense: the hardware offloads (CRC, TSO, LRO) are disabled.
Tested on a fresh install without the traffic shaper.
My network runs at 10 Gbit, 2.5 Gbit, and 1 Gbit, and OPNsense is the only limiting element.
Please give me some hints on what I could try to get full speed.
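For reference, I measured roughly like this (a sketch; the server IP is a placeholder for my Unraid host's bridge address):

    # On the Unraid host (or another VM): start an iperf3 server
    iperf3 -s
    # In the VM under test: run against that server, in both directions
    iperf3 -c 192.168.1.10 -t 30
    iperf3 -c 192.168.1.10 -t 30 -R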
You could try the speedtest plugin from this repository to measure your outgoing speed directly from OPNsense to give you another data point: https://www.routerperformance.net/opnsense-repo/
I don't need another method to test network speed.
I need a way to get the speed up to the level that all other VMs (Ubuntu, FreeBSD 13) reach "out of the box" and OPNsense does not.
What speed is your CPU on the host?
I greatly increased my throughput by passing the PCI device directly through to OPNsense. Speed improved tremendously. Is this an option for you?
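In case it helps: on a KVM-based host like Unraid, the first step is usually to find the NIC's PCI address so the device can be bound to VFIO and attached to the VM (a sketch; the address and IDs in the comment are illustrative):

    # On the Unraid host: list Ethernet controllers with their PCI addresses
    lspci -nn | grep -i ethernet
    # Example output: 02:00.0 Ethernet controller [0200]: Intel ... [8086:10fb]

In recent Unraid versions the device can then be bound to VFIO under Tools > System Devices and selected in the OPNsense VM's settings.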
You are right, CPU speed does matter.
I've tested the following:
My Unraid server (host system) runs on an older 4-core Xeon at 3.3 GHz (Xeon CPU E3-1230 V2),
with the following results testing VM-to-host network speed via iperf3:
- OPNsense: only 850 Mbit/s
- Other VMs on the same host (Ubuntu, FreeBSD 13): 18 Gbit/s
- Other VMs on a newer test system (Ryzen 7 5750G): 25 Gbit/s
=> So CPU speed is not the reason for such poor values on OPNsense (hmm... maybe OPNsense adds some kind of huge overhead that makes it so much slower than other operating systems?); see the checks sketched below.
=> Passing PCIe devices through is not an option for me, because I use bridges on the host for other VMs and Docker containers.
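A sketch of how I would check for that overhead from the OPNsense shell, using stock FreeBSD tools (the tunable names are the standard vtnet(4)/netisr ones):

    # vtnet driver tunables (offload and multiqueue switches live here)
    sysctl hw.vtnet
    # Is the interface actually using multiple interrupt queues?
    vmstat -i | grep vtnet
    # netisr dispatch policy, which affects packet-processing throughput
    sysctl net.isr.dispatch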
Since OPNsense 22.1 is based on FreeBSD 13, I would first use ifconfig to compare the interface settings of a generic FreeBSD VM that, as you wrote, performs well with those of the OPNsense VM.
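For example (assuming the interface is named vtnet0 in both VMs; the flag names are the standard FreeBSD ones):

    # Run on both the stock FreeBSD 13 VM and the OPNsense VM,
    # then compare the "options=" line of the output
    ifconfig vtnet0
    # A fast stock FreeBSD guest typically shows offload flags such as
    #   options=...<TXCSUM,RXCSUM,TSO4,TSO6,LRO,...>
    # To test their impact temporarily on OPNsense (benchmarking only;
    # a reconfigure may revert them):
    ifconfig vtnet0 txcsum rxcsum tso4 tso6 lro

Note that TSO/LRO are off by default on OPNsense on purpose, since they can cause problems with forwarded traffic, so treat this as a diagnostic step rather than a recommended permanent setting.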