[code]
iperf3 -c 192.168.178.8 -R -P 3 -t 30

[SUM]   0.00-30.00  sec  10.5 GBytes  3.00 Gbits/sec  3117  sender
[SUM]   0.00-30.00  sec  10.5 GBytes  3.00 Gbits/sec        receiver

[SUM]   0.00-30.00  sec  23.8 GBytes  6.82 Gbits/sec   514  sender
[SUM]   0.00-30.00  sec  23.8 GBytes  6.82 Gbits/sec        receiver

[SUM]   0.00-30.00  sec  29.3 GBytes  8.40 Gbits/sec     0  sender
[SUM]   0.00-30.00  sec  29.3 GBytes  8.40 Gbits/sec        receiver
[/code]
@athurdent
Do you think SR-IOV also helps if the host (the virtualization platform) uses vSwitches? I work with ESXi hosts where a NIC goes directly to a vSwitch, so the NIC does not seem to be "sliced" for the VM guests.
Thanks for the benchmarks, btw.
T.
Quote from: testo_cz on September 28, 2021, 09:24:52 pm
@athurdent
Do you think SR-IOV also helps if the host (the virtualization platform) uses vSwitches? I work with ESXi hosts where a NIC goes directly to a vSwitch, so the NIC does not seem to be "sliced" for the VM guests.
Thanks for the benchmarks, btw.
T.

Hi, I'm not sure about the ESXi implementation, but they do seem to have documentation on it: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-CC021803-30EA-444D-BCBE-618E0D836B9F.html

The card itself definitely has integrated switching capabilities. If I use a VLAN only on the card for two VMs to communicate (the VLAN is not configured or allowed on the hardware switch the card is connected to), I get around 18 Gbit/s of throughput, which is switched entirely inside the card.
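For anyone who wants to reproduce that card-internal switching test, a rough sketch using iproute2 on a Linux hypervisor could look like this. The PF/VF names, the VLAN ID 99 and the addresses are illustrative assumptions, not the exact setup behind the 18 Gbit/s number above.

[code]
# On the hypervisor: tag two VFs of the physical function with a VLAN that is
# NOT configured/allowed on the upstream hardware switch, so traffic between
# the two VMs can only be forwarded by the NIC's embedded switch.
# (PF name "enp1s0f0", the VF numbers and VLAN 99 are example values.)
ip link set enp1s0f0 vf 0 vlan 99     # VF 0 is passed through to VM A
ip link set enp1s0f0 vf 1 vlan 99     # VF 1 is passed through to VM B

# Inside VM A (the VF shows up as a normal, untagged NIC; "eth1" is a placeholder):
ip addr add 10.99.0.1/24 dev eth1
iperf3 -s

# Inside VM B:
ip addr add 10.99.0.2/24 dev eth1
iperf3 -c 10.99.0.1 -P 3 -t 30
[/code]

If the upstream switch really does not carry that VLAN, whatever throughput this measures is purely the NIC's internal switching path.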
When IPsec is active - even if the relevant traffic is not part of the IPsec policy - throughput is decreased by nearly 1/3. This seems like a real performance issue / bug in the FreeBSD/HardenedBSD kernel. I will need to try with VTI based IPsec routing to see if the in-kernel policy matching is a problem.
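One way to narrow that down on a FreeBSD-based firewall is to confirm that security policies are actually installed (and therefore evaluated for every forwarded packet) while benchmarking traffic that is not supposed to match them. A rough sketch, assuming setkey from the base system and the iperf3 target used earlier:

[code]
# Dump the IPsec Security Policy Database; any entries listed here are
# consulted for forwarded packets even when the test traffic itself does
# not match them.
setkey -DP

# Then compare plain forwarding throughput with the SPD populated vs. empty,
# e.g. by temporarily disabling the IPsec tunnels between runs.
iperf3 -c 192.168.178.8 -R -P 3 -t 30
[/code]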
I'm chiming in to say I have seen similar issues. Running on Proxmox, I can only route about 600 Mbit/s in OPNsense using virtio/vtnet. A related kernel process in OPNsense shows 100% CPU usage, and the underlying vhost process on the Proxmox host is pegged as well.
[code]
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  97.0 MBytes   814 Mbits/sec
[  5]   1.00-2.00   sec   109 MBytes   911 Mbits/sec
[  5]   2.00-3.00   sec   111 MBytes   934 Mbits/sec
[  5]   3.00-4.00   sec   103 MBytes   867 Mbits/sec
[  5]   4.00-5.00   sec   100 MBytes   843 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   937 Mbits/sec
[  5]   6.00-7.00   sec   109 MBytes   911 Mbits/sec
[  5]   7.00-8.00   sec  75.7 MBytes   635 Mbits/sec
[  5]   8.00-9.00   sec  68.9 MBytes   578 Mbits/sec
[  5]   9.00-10.00  sec  96.6 MBytes   810 Mbits/sec
[  5]  10.00-11.00  sec   112 MBytes   936 Mbits/sec
[/code]
[code]
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   12 root        -92    -     0B   400K CPU0     0  21:42  94.37% [intr{irq29: virtio_pci1}]
51666 root          4    0    17M  6600K RUN      1   0:18  68.65% iperf3 -s
   11 root        155 ki31     0B    32K RUN      1  20.4H  13.40% [idle{idle: cpu1}]
   11 root        155 ki31     0B    32K RUN      0  20.5H   3.61% [idle{idle: cpu0}]
[/code]
[code]
[  5] 166.00-167.00 sec   112 MBytes   941 Mbits/sec
[  5] 167.00-168.00 sec   112 MBytes   941 Mbits/sec
[  5] 168.00-169.00 sec   112 MBytes   941 Mbits/sec
[  5] 169.00-170.00 sec   112 MBytes   941 Mbits/sec
[/code]

And NIC processing load dropped to just 25% or so:

[code]
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root        155 ki31     0B    32K RUN      1   3:14  77.39% [idle{idle: cpu1}]
   11 root        155 ki31     0B    32K RUN      0   3:06  71.26% [idle{idle: cpu0}]
   12 root        -92    -     0B   400K WAIT     0   0:55  28.35% [intr{irq29: virtio_pci1}]
91430 root          4    0    17M  6008K RUN      0   0:43  21.94% iperf3 -s
[/code]
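One knob that is often worth checking in this situation is virtio multiqueue: with a single queue, all packet processing for the NIC funnels through one interrupt thread, which matches the pegged irq29 thread above. A hedged sketch for Proxmox follows; the VM ID 100, the bridge name and the MAC address are example values, and the guest has to be restarted to pick up the extra queues.

[code]
# On the Proxmox host: give the VM's virtio NIC several queues.
# Re-specify the existing MAC address of net0 here, otherwise Proxmox
# generates a new one when the net0 line is rewritten.
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4

# Inside the OPNsense guest, the interrupt load should then be spread over
# several virtio queue interrupts instead of a single one:
vmstat -i | grep virtio
[/code]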
https://www.mayrhofer.eu.org/post/firewall-throughput-opnsense-openwrt/

Quote
When IPsec is active - even if the relevant traffic is not part of the IPsec policy - throughput is decreased by nearly 1/3. This seems like a real performance issue / bug in the FreeBSD/HardenedBSD kernel. I will need to try with VTI based IPsec routing to see if the in-kernel policy matching is a problem.
[code]
dev.ix.2.queue3.rx_packets: 2959840
dev.ix.2.queue2.rx_packets: 2158082
dev.ix.2.queue1.rx_packets: 9861
dev.ix.2.queue0.rx_packets: 4387
dev.ix.2.queue3.tx_packets: 2967255
dev.ix.2.queue2.tx_packets: 2160888
dev.ix.2.queue1.tx_packets: 15955
dev.ix.2.queue0.tx_packets: 8725
[/code]
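Those per-queue counters come straight from the ix(4) sysctl tree, so it is easy to watch how evenly RSS spreads the load while a test is running (assuming the same interface, ix2):

[code]
# Snapshot the per-queue packet counters of ix2; large differences between
# queues mean the traffic is hashing onto only a few RSS queues, as in the
# output above where queue0 and queue1 see almost nothing.
sysctl dev.ix.2 | grep -E 'queue[0-9]+\.(rx|tx)_packets'
[/code]

With only a handful of parallel iperf3 streams this is expected to some degree, since RSS hashes per flow, not per packet.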