Hardware and Performance / Poor Performance with OpnSense 23.1 and Hyper-V 2019
« on: February 16, 2023, 11:13:18 pm »
Apologies if I've missed an existing solution somewhere... I did search on this and found a thread back around the 22.1 RC timeframe that 'might' be related, but it did not seem to offer a conclusive remediation and might not be the same issue I'm experiencing...
Background: I have been running OpnSense as a VPN ("only") gateway for the past couple years on a single NIC Intel NUC so everything other than the interface assigned to LAN is a tagged vLAN... It has multiple WAN links (1gb, 300mb, LTE) and multiple VPN links (one for each WAN) all handled by an L2 vLAN switch... The performance has been excellent with full gigabit throughput from a physical PC on the LAN to internet hosts with consistent speed test results on 21.1->23.1
I am now building out a Hyper-V VM with a slightly different configuration (1 WAN link, 2 VPN tunnels, LAN + several additional vLANs that will be firewalled and have limited or no access between them)... The WAN and LAN are on separate virtual NICs defined on the VM at the HV level and I have a Win 10 VM with a single vNIC on the LAN side on the same vSwitch as well as a physical PC on the LAN side to test from...
Upload speeds are fine but download speed is about 25-30% of what I would expect...
Details:
* Win 10 VM is on the same virtual "10 gb" switch as all the OpnSense vNICs/vLANs
* vSwitch tied to a physical 3 NIC "team" (LAG) between the host server and L2 vLAN switch in the server rack
* Rack switch has 1gb uplink to main L2 vLAN Switch near ISP router
* ISP router has 1gb connection to main L2 vLAN Switch
* both physical switches have plenty of backplane bandwidth and are not handling excessive traffic
Thoughts:
* no bottleneck on 10gb vSwitch
* no bottleneck on 3gb LAG
* data transfer between PC on main switch and other (Win Server 2019) VMs on the same vSwitch/Physical switch are fast
* some "potential" limitations of the 1gb fiber link between switches, but that should not limit download to 25-30% of normal
My gut says there is something about OpnSense or FreeBSD that isn't working well with my Hyper-V setup, as I've done many other things with this host and set of switches (even using multiple vLANs and other virtual router configs) -- I have not done a lot of deep granular tweaking of Hyper-V network settings other than turning off VMQ on the physical NICs (they are Broadcom, and turning that off has long been recommended on these NICs), and I'm not very familiar with low-level settings on OpnSense or the underlying networking of HardenedBSD...
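For reference, this is roughly how I verified the VMQ state on the host (adapter and VM names below are placeholders; substitute your own):

```shell
# Show VMQ status for each physical NIC in the team
Get-NetAdapterVmq

# Disable VMQ on a team member NIC (repeat per NIC; name is an example)
Disable-NetAdapterVmq -Name "Ethernet 1"

# Confirm the vNIC attached to the OpnSense VM (VM name is an example)
Get-VMNetworkAdapter -VMName "OpnSense" | Format-List Name, VmqWeight
```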
Hoping someone else has already experienced this and has a fix for me or that this does in fact relate to whatever changed (and caused issues) in 22.1 RC and there is a remedy via tweaks on OpnSense or HV (or both)...
Please advise!
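In case it helps anyone replying: one thing I have seen suggested for asymmetric download throughput with BSD guests on Server 2019 is the software RSC that the vSwitch enables by default, plus hardware offloads on the guest side. A sketch of what checking/disabling those might look like (vSwitch and interface names are assumptions, not my exact config):

```shell
# On the Hyper-V 2019 host: check whether software RSC is on for the vSwitch
Get-VMSwitch | Select-Object Name, SoftwareRscEnabled

# Disable software RSC on the switch the OpnSense vNICs use (name is an example)
Set-VMSwitch -Name "10gb-vSwitch" -EnableSoftwareRsc $false
```

```shell
# In the OpnSense shell (FreeBSD hn(4) synthetic NIC; hn0 is an example name):
# disable checksum offload, TSO, and LRO on the guest interface for testing
ifconfig hn0 -txcsum -rxcsum -tso4 -tso6 -lro
```

The equivalent OpnSense GUI toggles are under Interfaces → Settings (hardware CRC/TSO/LRO), which would make the change persistent rather than per-boot.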