Running OPNsense (22.7) On Proxmox - Half Bandwidth Issue

Started by jarodmerle, July 29, 2022, 05:32:42 AM

To preface:
I am new to both OPNsense and Proxmox, having just recently switched my Omada setup to use OPNsense for routing/firewall after being frustrated with the lack of features and capabilities in Omada. I am approaching this from the perspective that the most likely problem is my own lack of knowledge.

The problem:
I have a 1 Gbps symmetrical fiber connection through AT&T, but I'm only able to get roughly half of that bandwidth down, and about 350 Mbps up, using the community speedtest plugin directly from the OPNsense instance (less than that from connected devices). I had no issues getting full bandwidth (at least 900/900) with either the Omada router or my previous Asus router setup.

Things I've tried:
- I have seen all the posts (like this one) about needing to assign a parent interface to the WAN, but I don't think that applies to me because my WAN interface uses the actual virtual network adapter passed from Proxmox rather than a VLAN off of it.
- I've seen a few posts about various tunable options (hw.ibrs_disable and vm.pmap.pti) and tried those to no noticeable effect (see the note after this list).
- I've tried various CPU type options in Proxmox (host, the emulated QEMU CPU with multiple cores, etc.), but CPU usage never gets much above 50% when running a speedtest, and is generally in the single digits.
- All the "hardware offload" options are disabled as suggested in several older posts.
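For reference, those two tunables live under System > Settings > Tunables in the OPNsense GUI; from a shell they can be checked and toggled roughly like this (a minimal sketch, not a recommendation; note that vm.pmap.pti is a boot-time loader tunable, so changing it only takes effect after a reboot):

# check the current values from an OPNsense shell
sysctl hw.ibrs_disable vm.pmap.pti

# hw.ibrs_disable can be toggled at runtime as root; vm.pmap.pti cannot
sysctl hw.ibrs_disable=1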

My setup: 
- HUNSN NRJ02 Mini Firewall PC (Intel Celeron N5105, 4x Intel 2.5G LAN), 32 GB RAM, 500 GB NVMe SSD.
- Proxmox 7.2-7, with OPNsense as the only VM currently. I've assigned it the host CPU type and 8 GB RAM at the moment (a rough config excerpt follows this list).
- OPNsense is at the latest version, 22.7, but the same thing was occurring on 22.1 before I upgraded on the off chance it might help.
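For reference, a VM along these lines is described by /etc/pve/qemu-server/<vmid>.conf on the Proxmox host, roughly as below (a sketch only; all values are placeholders, and the prefix on the net lines reflects whatever model was picked in the Hardware > Network Device dialog):

cores: 4
cpu: host
memory: 8192
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr1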

Wrapping up:
What else should I try at this point? I'm really enjoying learning OPNsense (and Proxmox), and for the most part things work "well enough", but I hate to think that I'm leaving half or more of my bandwidth unused. I thought for sure this little firewall PC would have more than enough horsepower for my fairly simple home setup, and would allow me to run another server VM or two as well, but was I just wrong to think that?

Any help or guidance from the experts here would certainly be appreciated, and I'll be glad to provide any more insight I can to help solve this.

Curious if you are passing through either the WAN or LAN NIC to OPNsense with IOMMU/SR-IOV, or using a standard Proxmox Linux bridge with VirtIO?

Could be that Proxmox (if using VirtIO) or FreeBSD (if using passthrough) has an issue with the Intel 2.5GbE I225-V NIC drivers.

Hello, I run OPNsense 22.7_4-amd64 on Proxmox 7.2-7 with three Intel NICs passed through. If I add "iommu=pt" to my boot command line, it drops my NICs from 1000baseT to 100baseTX.

I only add "quiet intel_iommu=on" to the kernel command line (sketched below, after the module list).

/etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
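
For anyone following along, on a default Debian/GRUB Proxmox install that kernel option goes into /etc/default/grub, roughly like this (a sketch only; hosts booted via systemd-boot use /etc/kernel/cmdline and proxmox-boot-tool refresh instead):

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Then on the host:
update-grub
reboot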

I don't know if this helps you, but it works for me. No idea why.
I played around with VyOS and had the same problem, so I don't think it's an OPNsense problem. For me, anyway.
4 x Intel(R) Celeron(R) N5105 @ 2.00GHz

@jarodmerle Thank you for posting the issue. I have also experienced an issue with OPNsense 22.7 running on the latest Proxmox 7.2 on top of Intel-based hardware: on a symmetrical 1 Gbps fiber connection I get 930+ Mbps download but less than 1 (one) Mbps upload with VirtIO NICs attached, and about 250-300 Mbps download/upload with e1000 NICs instead. Notably, on the same software/hardware OPNsense 21.7 demonstrates a pretty stable 930+ Mbps download/upload with VirtIO.

Since the older version doesn't seem to be affected, my guess is that it's either a FreeBSD 13.1 kernel issue or there is something wrong inside OPNsense 22.7.

Update: Fixed the issue with the poor upload performance thanks to this Reddit advice.

My setup has a few VirtIO NICs, one of which has several assigned VLANs. In 21.7 I used to leave the parent interface (vtnet0) unassigned and everything worked great in terms of download/upload bandwidth. For 22.7 to work properly I had to assign this parent interface and enable it with no IPv4/IPv6 configured.

I also played around with the VLAN Hardware Filtering selector (set to disabled by default) and the "Disable hardware CRC/TSO/LRO" checkboxes (checked by default), but these options didn't have any effect in my tests. Thus, it is purely the parent interface assignment that matters.

As for me, I've decided to stick with OPNsense 21.7 for a while and continue testing 22.7 against 21.7.

@Vesalius - I am just using the standard "Linux Bridge" devices. I'm not familiar with the other option you mention. The options I have for creating network devices are: Linux Bridge, Linux Bond, Linux VLAN, OVS Bridge, OVS Bond, and OVS IntPort. I haven't really dug into what the differences are, but maybe one of these equates to what you're talking about.
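
For context, a standard Proxmox Linux bridge is just a stanza in /etc/network/interfaces on the host, roughly like the sketch below (interface name and address are placeholders); the VM's virtual NICs then attach to vmbr0:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0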

@MCMLIX - On the surface this doesn't seem to be what I'm experiencing, because I can get well over 100 Mbps at least, but I will look into what you're mentioning and see if anything stands out.

@vnxme - Yes, the issue and solution you've landed on for the 21.7 > 22.x upgrade is definitely the one I've seen mentioned many times online (and linked to a similar discussion of in my original post), but it doesn't seem to apply to me. My WAN interface is linked directly to the network device I am passing in, without any VLANs involved. I have three VLANs on the LAN interface, but the parent LAN interface is certainly assigned too. I have no unassigned interfaces at all on the "Assign Interfaces" page, for what it's worth.

Quote from: Vesalius on July 29, 2022, 03:08:25 PM
Curious if you are passing through either the WAN or LAN NIC to OPNsense with IOMMU/SR-IOV, or using a standard Proxmox Linux bridge with VirtIO?

Could be that Proxmox (if using VirtIO) or FreeBSD (if using passthrough) has an issue with the Intel 2.5GbE I225-V NIC drivers.

Just to follow up: this reply from @Vesalius got me researching, and I realized that I was not, in fact, actually using VirtIO network adapters; I was instead using the "Intel E1000" emulation option, which I subsequently learned is not as performant and uses more CPU to boot. I switched that over (which unfortunately meant reconfiguring some things and upset my family temporarily while the internet was down), but now I'm consistently getting over 900/900 Mbps, which is about what I would expect, and CPU usage is also quite a bit lower (spiking to around 40% instead of 60-70%).
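
For anyone hitting the same thing, the model can be changed in the GUI (VM > Hardware > Network Device > Model) or from the Proxmox shell, roughly as below (the VM ID and bridge name are placeholders; OPNsense will see the device change from em0 to vtnet0, so interface assignments have to be redone):

# replace the VM's first NIC definition with a VirtIO one
# (a new MAC is generated unless one is specified)
qm set 100 --net0 virtio,bridge=vmbr0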

Thanks to all for the suggestions!

Glad you got it working.

I'm really happy with Proxmox and OPNsense using VirtIO and Linux bridges.
With two hosts in a PVE cluster you can live-migrate OPNsense with no downtime.

Don't forget to add the QEMU guest agent plugin within OPNsense to get better integration with Proxmox.
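
Roughly, that means enabling the agent device on the Proxmox side and installing the plugin inside OPNsense (a sketch; the VM ID is a placeholder, and the plugin can also be installed via System > Firmware > Plugins):

# on the Proxmox host: expose the guest agent device to the VM
qm set 100 --agent enabled=1

# inside OPNsense: install the guest agent plugin
pkg install os-qemu-guest-agent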