Hardware and Performance / 10GB LAN Performance
« on: December 03, 2021, 11:21:39 am »
Hi All,
I've got a LAN performance issue that I'm having problems isolating and I could really use some help.
A simplified version of the infrastructure is set out in the diagram:
OPNsense 21.7
Dell R620, Dual Xeon E5-2680 v2 @ 2.80GHz CPUs
Dual Chelsio T520-CR 10Gb NICs
Stacked Dell Force10 S4810s
OPNsense and Proxmox/Windows servers LACP bond to the S4810s
It all works, but here is what I'm finding:
If I run speedtest-cli from OPNsense itself I get throughput between 5 and 8 Gbps depending on the time of day. All good.
If I run a speedtest from a Proxmox or Windows server connected through OPNsense, the throughput ranges from 850Mbps to 1900Mbps, i.e. roughly 10% of the WAN throughput.
If I run iperf3 as a server on the OPNsense LAN interface and hit it with a Proxmox or Windows server client, I get the same result: max throughput a bit over 1Gbps.
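For reference, the iperf3 runs look roughly like this (the OPNsense LAN address here is just a placeholder for my setup):

```shell
# On OPNsense: start an iperf3 server listening on the LAN interface
iperf3 -s

# On the Proxmox or Windows client: single stream, 30 seconds
iperf3 -c 192.168.1.1 -t 30

# Also tried multiple parallel streams to rule out a single-flow limit
iperf3 -c 192.168.1.1 -t 30 -P 4

# And the reverse direction (OPNsense sending to the client)
iperf3 -c 192.168.1.1 -t 30 -R
```

Parallel streams and reverse mode made no real difference either way.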
This is only a problem with connections to OPNsense. I have LAGGs on other internal networks that are getting nearly line speed with an identical configuration.
I've checked and rechecked the switch and server configurations:
The switchports comprising the LAGGs all show as connected at 10Gbps
The OPNsense and Proxmox/Windows server LAGGs on the switch show as connected at 20Gbps
The LAGGs are all configured correctly and the partners are all bundled
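To double-check the bundling on both ends I've been looking at the LAGG status directly, roughly as follows (the interface names lagg0/bond0 are from my setup and may differ on yours):

```shell
# On OPNsense (FreeBSD): confirm laggproto is lacp and that every member
# port shows ACTIVE,COLLECTING,DISTRIBUTING
ifconfig lagg0

# On Proxmox (Linux kernel bonding): confirm 802.3ad mode and that both
# slaves are up under the same aggregator ID
cat /proc/net/bonding/bond0
```

Both ends report fully bundled partners, which matches what the switch shows.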
I've tried using jumbo frames and tweaking kernel settings on the Proxmox server, but it doesn't make much difference.
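The jumbo frame and kernel tweaks on the Proxmox side were roughly the following (the sysctl values are just the commonly suggested 10GbE tuning numbers, nothing definitive):

```shell
# Raise MTU on the bond (switch ports set to match)
ip link set dev bond0 mtu 9000

# Bump TCP socket buffer limits for high-bandwidth paths
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```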
I just don't get it. If the performance hit were associated with packet filtering I would expect to see some load on the OPNsense CPUs, but the dashboard has them at barely 20% during testing. In any case, I get the same result with packet filtering completely disabled on OPNsense.
Pulling my hair out on this. Any tips or pointers would be greatly appreciated.
johno

