OPNsense Forum

Archive => 24.1, 24.4 Legacy Series => Topic started by: oleg on March 11, 2024, 07:30:38 PM

Title: Poor speed in virtual environment.
Post by: oleg on March 11, 2024, 07:30:38 PM
Environment

Host:
Proxmox Virtual Environment 8.1.4

VM:
OPNsense 24.1.3_1-amd64
FreeBSD 13.2-RELEASE-p10
OpenSSL 3.0.13



For the OPNsense VM, several network cards are passed through as real PCIe devices. They are combined in a bridge with a virtual interface (VirtIO paravirtualized) coming from the host machine.
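For reference, the relevant part of the VM config on the Proxmox side looks roughly like this (the PCI addresses and the MAC are placeholders, not the actual values):

    # /etc/pve/qemu-server/<vmid>.conf (sketch)
    hostpci0: 0000:01:00.0                         # passed-through physical NIC
    hostpci1: 0000:02:00.0                         # another passed-through NIC
    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0    # paravirtualized interface from the host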
I also disabled hardware offloading in `Interfaces -> Settings`. The test results:

- localhost to localhost gives ~40Gbit/s
- Linux VM to host shows about 20Gbit/s in a simple iperf3 test with default parameters.
- The OPNsense VM (to the same host) shows only about 1.3-1.5Gbit/s.
- The same test against the loopback interface inside OPNsense gives about 16-17Gbit/s.
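All of the numbers above come from plain iperf3 runs along these lines (the IP is a placeholder):

    # receiving side
    iperf3 -s
    # sending side, default parameters
    iperf3 -c 192.168.1.10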

Could you tell me where to look?


Title: Re: Poor speed in virtual environment.
Post by: mrElement on March 11, 2024, 10:48:41 PM
Hi,

I am neither a PVE nor an OPNsense expert. However, in your original post there seems (to my knowledge) to be a mix-up of the terms 'passthrough' and 'Linux bridge'.

As I understand it, for a single NIC it's either one path or the other. If you try to implement both, you will most likely cause conflicts at best, or a complete lack of operation. This means that for a single NIC:

If you take, for example, NIC_A and pass it through as a PCI device to a VM, then you "shouldn't" also create a Linux bridge on it and hand that bridge to the same VM, or to any other VM for that matter. I don't know how Proxmox deals with this, but normally it shouldn't let you.

Vice versa, if you created a Linux bridge on NIC_A, let's say vmbr0, and you use vmbr0 for various VMs, then you shouldn't pass NIC_A through to a VM, as that would potentially "block" your vmbr0.
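In Proxmox VM-config terms (as far as I understand it) that means one of these two per NIC, never both at once; the PCI address and MAC are just examples:

    # either: NIC_A goes directly to one VM
    hostpci0: 0000:01:00.0
    # or: NIC_A backs a host bridge and VMs get virtual NICs on that bridge
    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0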

In addition to all that, I've found that when using Linux bridge interfaces, the emulated NIC model plays a role (for me) in the speeds I achieve. For example, with VirtIO (paravirtualized) I get the full speed of my ISP on my WAN interface, but when I switch to E1000 I automatically lose ~50Mbit for some reason.
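The model is just the prefix of the netX line in the VM config, e.g. (one line or the other, MAC again a placeholder):

    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0    # paravirtualized: full ISP speed for me
    net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0     # emulated Intel NIC: ~50Mbit slower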

Hope that helps somehow. Again, I could be wrong regarding the Proxmox stuff, but that's where I've ended up through my research.

Good luck!
Title: Re: Poor speed in virtual environment.
Post by: oleg on March 12, 2024, 09:05:01 AM
mrElement, thanks for the reply.
I described everything correctly, as it is: two types of NICs. Some of them are passed through, and the others are just bridged virtual interfaces.

But it doesn't matter. Even with one virtual interface the speed is quite low.

upd.
I know this is a very common problem.
The simple advice (disable hardware offloading) helps only within quite narrow limits: not 250Mbit/s, but 1.5Gbit/s. It is not a game changer, except maybe in combination with something else.
I also increased the maximum buffer sizes for the kernel and for TCP specifically, up to 8MiB.
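For reference, that was roughly these tunables, set via System -> Settings -> Tunables (the exact values are approximate, and giving kern.ipc.maxsockbuf extra headroom above the TCP limits is my assumption):

    kern.ipc.maxsockbuf=16777216        # overall socket buffer cap, with headroom
    net.inet.tcp.sendbuf_max=8388608    # 8MiB maximum TCP send buffer
    net.inet.tcp.recvbuf_max=8388608    # 8MiB maximum TCP receive buffer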
But the difference is still 15x!! Instead of about 20Gbit/s, I get only 1.3Gbit/s.
Title: Re: Poor speed in virtual environment.
Post by: oleg on March 13, 2024, 11:37:28 AM
I conducted several tests after changing values in the tunables.
I increased the maximum window sizes (sendbuf/recvbuf), increased the initial sendspace/recvspace, disabled net.inet.tcp.tso, disabled the security mitigations (vm.pmap.pti, hw.ibrs_disable), and enabled and tuned RSS.
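Concretely, the set looked something like this (values are illustrative; the exact numbers varied between runs):

    net.inet.tcp.sendbuf_max=8388608    # maximum send window
    net.inet.tcp.recvbuf_max=8388608    # maximum receive window
    net.inet.tcp.sendspace=262144       # initial send buffer
    net.inet.tcp.recvspace=262144       # initial receive buffer
    net.inet.tcp.tso=0                  # disable TCP segmentation offload
    vm.pmap.pti=0                       # disable Meltdown page-table isolation (reboot)
    hw.ibrs_disable=1                   # disable Spectre IBRS mitigation
    net.inet.rss.enabled=1              # enable RSS (reboot)
    net.inet.rss.bits=2                 # RSS buckets sized for 4 cores
    net.isr.bindthreads=1               # bind netisr threads to CPUs
    net.isr.maxthreads=-1               # one netisr thread per core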

I've reached the maximum speed of 1Gbit/s for the real network cards (they are passed through as real PCIe devices).

But the speed between the host and the OPNsense VM is still low.
The inner (loopback) speed test shows about 18Gbit/s (almost what I expected).
The Host <-> OPNsense test shows 1.5-1.7Gbit/s (but it should also be about 17-18Gbit/s).


The question is for the OPNsense developers: is this a limit of the FreeBSD virtio (vtnet) driver, or are there other options I can change in order to get closer to the expected speed?
(I've already applied all the hints from your documentation at https://docs.opnsense.org/troubleshooting/performance.html)
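From reading the vtnet(4) man page, the driver also exposes a few loader tunables that look related; I can't say they help, I only list them as candidates to experiment with:

    hw.vtnet.csum_disable=0    # checksum offload in the driver
    hw.vtnet.tso_disable=0     # TSO in the driver
    hw.vtnet.lro_disable=0     # LRO in the driver
    hw.vtnet.mq_disable=0      # multiqueue on/off
    hw.vtnet.mq_max_pairs=4    # queue pairs, e.g. one per vCPU ('queues=' on the Proxmox side)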

And one more observation:
In all tests, the transfer speed from any outside machine (real or virtual) to OPNsense is higher than in the reverse direction, i.e. when data is transferred from OPNsense to any other machine.
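The direction comparison is easy to reproduce by flipping iperf3 into reverse mode from the same client (the IP is a placeholder):

    iperf3 -c 192.168.1.1        # client sends, OPNsense receives: the faster case
    iperf3 -c 192.168.1.1 -R     # OPNsense sends back to the client: the slower case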
Title: Re: Poor speed in virtual environment.
Post by: tkost on July 18, 2024, 10:11:50 PM
Were you able to find a solution? I have the same problem.
Debian VM > Host is about 20 Gbps
Debian VM > another Debian VM is about 20 Gbps
OPNsense > Host is about 1.5 Gbps
OPNsense > Debian VM is about 1.5 Gbps.