Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: August 15, 2022, 10:22:46 pm »
Update: I don't know if others have made the same mistake, but run a traceroute from your iperf client to your iperf server and make sure the path looks right, and run netstat -rn on your OPNsense box to make sure the routing table looks sane. In my testing I put the WAN side on my normal network and the LAN side on an isolated Proxmox bridge with no physical port attached. For some reason, OPNsense was routing the traffic all the way out to the WAN's upstream gateway, a physical 1 Gbit router outside my Proxmox environment. I'm not sure why yet, but here's what's happening:
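To check this yourself it's basically two commands (addresses are from my layout below, adjust for yours):

# from the iperf client
traceroute 10.0.0.100

# from a shell on the OPNsense box
netstat -rn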
vtnet0 (WAN): 10.0.0.1 -> WAN gateway (physical router): 10.0.0.254
vtnet1 (LAN): 10.0.1.1
iperf client on the LAN side: 10.0.1.1
iperf server on the WAN network: 10.0.0.100
Traceroute from the iperf client to the iperf server (through OPNsense):
1  10.0.1.1
2  10.0.0.254
3  10.0.0.100

Traceroute from the iperf client to the iperf server (through pfSense):
1  10.0.1.1
2  10.0.0.100
I deleted the route entry from System->Routes->Status and it now works as expected, but how did that entry get there in the first place? I have a second OPNsense test instance that did the same thing.
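If you'd rather do the same check and cleanup from a shell, the stock FreeBSD route tool works too (the delete target below is only an illustration, use whatever stray entry netstat -rn actually shows):

# which routing table entry does traffic to the iperf server match?
route -n get 10.0.0.100

# remove a stray entry by its destination (illustrative destination only)
route delete 10.0.0.100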
Original post:
Anyone have any updates on this? Is this now considered a known bug? I saw a link earlier in the thread to a GitHub PR that was merged, and it looks like it's already included in 22.7. I set up two identical VMs in Proxmox, one pfSense 2.6.0 and one OPNsense 22.7. Each VM has 12 E5-2620 cores (VM CPU type set to "host"), 4 GB of RAM, and two VirtIO NICs. Nothing was changed other than setting a static LAN IP on each instance. Traffic was tested as follows, with all VMs (including the iperf client and server) on the same Proxmox host; the iperf commands themselves are sketched below the results:
iperf client -> iperf server: 10 Gbit/s
iperf client -> pfSense LAN -> pfSense WAN -> iperf server: 2.5 Gbit/s
iperf client -> OPNsense LAN -> OPNsense WAN -> iperf server: 0.743 Gbit/s
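For reference, the tests were plain iperf with default settings, something like this (iperf3 syntax shown, addresses as in the layout above):

# on the iperf server (10.0.0.100)
iperf3 -s

# on the iperf client, single stream
iperf3 -c 10.0.0.100

# parallel-stream variant mentioned below
iperf3 -c 10.0.0.100 -P 20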
I then set hw.ibrs_disable=1 (note: if the CPU type is left at the default kvm64, this isn't needed and performance is the same):
iperf client -> OPNsense LAN -> OPNsense WAN -> iperf server: 0.933 Gbit/s
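For anyone who wants to try the same tunable: it can be applied live from a root shell as shown below, and made persistent under System->Settings->Tunables (the live setting alone doesn't survive a reboot):

# apply immediately
sysctl hw.ibrs_disable=1

# verify the current value
sysctl hw.ibrs_disable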
I also tested with multiple parallel iperf streams (-P 20) and got the same speeds.
CPU usage was high during testing, but once I enabled multiqueue on the Proxmox NICs (six queues on each NIC), CPU usage dropped to basically nothing and throughput topped out right at 940 Mbit/s, exactly the maximum TCP throughput on a gigabit link. I find that pretty suspicious; it makes me think something in the chain is being limited to gigabit Ethernet. The UI does show the NICs as "10gbaseT <full duplex>", and again, my iperf client and server VMs both have 10G interfaces that pull a full 10 Gbit/s when connected directly to each other.
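For anyone replicating the multiqueue change: it's the Multiqueue field on the VM's network device in the Proxmox GUI, or the queues= option on the net line from the CLI. The VM ID, MAC, and bridge below are placeholders, and note that qm set --net0 rewrites the whole net0 entry, so repeat your existing model/MAC/bridge:

# on the Proxmox host, six queues on VM 100's first NIC (placeholder values)
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=6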