23.7 Legacy Series / Re: [Tutorial/Call for Testing] Enabling Receive Side Scaling on OPNsense
« on: March 24, 2022, 10:04:17 am »
Hi,
I've been trying to get this working on ESXi with an OPNsense VM that has 8 vCPUs. For some reason, when I run iperf with a single TCP stream I only get about 600 Mbit/s, and watching top -P on OPNsense I can see one of the 8 cores go to 0% idle. I'm using the VMXNET3 adapter. Here is the current netstat -Q and vmstat -i output:
Code:
root@OPNsense:~ # netstat -Q
Configuration:
Setting Current Limit
Thread count 8 8
Default queue limit 256 10240
Dispatch policy hybrid n/a
Threads bound to CPUs enabled n/a
Protocols:
Name Proto QLimit Policy Dispatch Flags
ip 1 1000 cpu hybrid C--
igmp 2 256 source default ---
rtsock 3 256 source default ---
arp 4 256 source default ---
ether 5 256 cpu direct C--
ip6 6 1000 cpu hybrid C--
ip_direct 9 256 cpu hybrid C--
ip6_direct 10 256 cpu hybrid C--
Workstreams:
WSID CPU Name Len WMark Disp'd HDisp'd QDrops Queued Handled
0 0 ip 0 0 0 1097010 0 0 1097010
0 0 igmp 0 0 0 0 0 0 0
0 0 rtsock 0 0 0 0 0 0 0
0 0 arp 0 0 0 0 0 0 0
0 0 ether 0 0 1173731 0 0 0 1173731
0 0 ip6 0 0 0 63446 0 0 63446
0 0 ip_direct 0 0 0 0 0 0 0
0 0 ip6_direct 0 0 0 0 0 0 0
1 1 ip 0 34 0 475830 0 38 475868
1 1 igmp 0 0 0 0 0 0 0
1 1 rtsock 0 0 0 0 0 0 0
1 1 arp 0 1 0 0 0 12712 12712
1 1 ether 0 0 539495 0 0 0 539495
1 1 ip6 0 2 0 63626 0 4216 67842
1 1 ip_direct 0 0 0 0 0 0 0
1 1 ip6_direct 0 0 0 0 0 0 0
2 2 ip 0 0 0 412891 0 0 412891
2 2 igmp 0 0 0 0 0 0 0
2 2 rtsock 0 0 0 0 0 0 0
2 2 arp 0 1 0 0 0 510 510
2 2 ether 0 0 420304 0 0 0 420304
2 2 ip6 0 1 0 7412 0 1 7413
2 2 ip_direct 0 0 0 0 0 0 0
2 2 ip6_direct 0 0 0 0 0 0 0
3 3 ip 0 0 0 653430 0 0 653430
3 3 igmp 0 0 0 0 0 0 0
3 3 rtsock 0 0 0 0 0 0 0
3 3 arp 0 1 0 0 0 53 53
3 3 ether 0 0 676969 0 0 0 676969
3 3 ip6 0 0 0 23539 0 0 23539
3 3 ip_direct 0 0 0 0 0 0 0
3 3 ip6_direct 0 0 0 0 0 0 0
4 4 ip 0 23 0 354980 0 11847 366827
4 4 igmp 0 0 0 0 0 0 0
4 4 rtsock 0 0 0 0 0 0 0
4 4 arp 0 0 0 0 0 0 0
4 4 ether 0 0 358176 0 0 0 358176
4 4 ip6 0 1 0 3074 0 1 3075
4 4 ip_direct 0 0 0 0 0 0 0
4 4 ip6_direct 0 0 0 0 0 0 0
5 5 ip 0 1 0 855737 0 2 855739
5 5 igmp 0 0 0 0 0 0 0
5 5 rtsock 0 3 0 0 0 4717 4717
5 5 arp 0 0 0 0 0 0 0
5 5 ether 0 0 859020 0 0 0 859020
5 5 ip6 0 1 0 3281 0 159 3440
5 5 ip_direct 0 0 0 0 0 0 0
5 5 ip6_direct 0 0 0 0 0 0 0
6 6 ip 0 0 0 1513336 0 0 1513336
6 6 igmp 0 0 0 0 0 0 0
6 6 rtsock 0 0 0 0 0 0 0
6 6 arp 0 0 0 0 0 0 0
6 6 ether 0 0 1517246 0 0 0 1517246
6 6 ip6 0 1 0 3910 0 1 3911
6 6 ip_direct 0 0 0 0 0 0 0
6 6 ip6_direct 0 0 0 0 0 0 0
7 7 ip 0 0 0 335859 0 0 335859
7 7 igmp 0 0 0 0 0 0 0
7 7 rtsock 0 0 0 0 0 0 0
7 7 arp 0 0 0 0 0 0 0
7 7 ether 0 0 341939 0 0 0 341939
7 7 ip6 0 0 0 6080 0 0 6080
7 7 ip_direct 0 0 0 0 0 0 0
7 7 ip6_direct 0 0 0 0 0 0 0
root@OPNsense:~ # vmstat -i
interrupt total rate
irq1: atkbd0 2 0
irq17: mpt0 314344 5
irq18: uhci0 110225 2
cpu0:timer 1335718 21
cpu1:timer 569148 9
cpu2:timer 597637 9
cpu3:timer 592890 9
cpu4:timer 590653 9
cpu5:timer 593208 9
cpu6:timer 592515 9
cpu7:timer 609112 10
irq24: ahci0 41838 1
irq26: vmx0:rxq0 188954 3
irq27: vmx0:rxq1 138552 2
irq28: vmx0:rxq2 71792 1
irq29: vmx0:rxq3 162662 3
irq30: vmx0:rxq4 109552 2
irq31: vmx0:rxq5 166029 3
irq32: vmx0:rxq6 317057 5
irq33: vmx0:rxq7 63136 1
irq43: vmx1:rxq0 1759 0
irq44: vmx1:rxq1 2393 0
irq45: vmx1:rxq2 4260 0
irq46: vmx1:rxq3 557 0
irq47: vmx1:rxq4 1137 0
irq48: vmx1:rxq5 3461 0
irq49: vmx1:rxq6 4689 0
irq50: vmx1:rxq7 1468 0
irq60: vmx2:rxq0 73391 1
irq61: vmx2:rxq1 153881 2
irq62: vmx2:rxq2 54965 1
irq63: vmx2:rxq3 75044 1
irq64: vmx2:rxq4 98827 2
irq65: vmx2:rxq5 277362 4
irq66: vmx2:rxq6 63113 1
irq67: vmx2:rxq7 69899 1
Total 8051230 127
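(For the per-core observation above: I'm watching the CPUs during the iperf run with FreeBSD's top, roughly like this; -P shows per-CPU usage, -H includes kernel threads, -S shows system processes, -s sets the refresh interval.)
Code:
# per-CPU usage, kernel/netisr threads included, refreshing every second while iperf runs
top -PHS -s 1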
My current settings are below. I've played with these a lot, but haven't gotten anything above 800 Mbit/s:
Code:
vmx0: Using MSI-X interrupts with 9 vectors
vmx1: Using MSI-X interrupts with 9 vectors
vmx2: Using MSI-X interrupts with 9 vectors
vmx0: Using MSI-X interrupts with 9 vectors
vmx1: Using MSI-X interrupts with 9 vectors
vmx2: Using MSI-X interrupts with 9 vectors
vmx0: Using MSI-X interrupts with 9 vectors
vmx1: Using MSI-X interrupts with 9 vectors
vmx2: Using MSI-X interrupts with 9 vectors
net.inet.rss.bucket_mapping: 0:0 1:1 2:2 3:3 4:4 5:5 6:6 7:7
net.inet.rss.enabled: 1
net.inet.rss.debug: 0
net.inet.rss.basecpu: 0
net.inet.rss.buckets: 8
net.inet.rss.maxcpus: 64
net.inet.rss.ncpus: 8
net.inet.rss.maxbits: 7
net.inet.rss.mask: 7
net.inet.rss.bits: 3
net.inet.rss.hashalgo: 2
net.isr.numthreads: 8
net.isr.maxprot: 16
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 1
net.isr.maxthreads: 8
net.isr.dispatch: hybrid
hw.vmd.max_msix: 3
hw.vmd.max_msi: 1
hw.sdhci.enable_msi: 1
hw.puc.msi_disable: 0
hw.pci.honor_msi_blacklist: 1
hw.pci.msix_rewrite_table: 0
hw.pci.enable_msix: 1
hw.pci.enable_msi: 1
hw.mfi.msi: 1
hw.malo.pci.msi_disable: 0
hw.ix.enable_msix: 1
hw.bce.msi_enable: 1
hw.aac.enable_msi: 1
machdep.disable_msix_migration: 0
machdep.num_msi_irqs: 2048
machdep.first_msi_irq: 24
dev.vmx.2.iflib.disable_msix: 0
dev.vmx.1.iflib.disable_msix: 0
dev.vmx.0.iflib.disable_msix: 0
iperf3 -c 10.0.0.4 -p 4999 -P 1
Connecting to host 10.0.0.4, port 4999
[ 5] local 192.168.2.100 port 57384 connected to 10.0.0.4 port 4999
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 72.6 MBytes 609 Mbits/sec 20 618 KBytes
[ 5] 1.00-2.00 sec 83.8 MBytes 703 Mbits/sec 0 714 KBytes
[ 5] 2.00-3.00 sec 83.8 MBytes 703 Mbits/sec 9 584 KBytes
[ 5] 3.00-4.00 sec 83.8 MBytes 703 Mbits/sec 0 684 KBytes
[ 5] 4.00-5.00 sec 83.8 MBytes 703 Mbits/sec 1 556 KBytes
[ 5] 5.00-6.00 sec 80.0 MBytes 671 Mbits/sec 0 659 KBytes
[ 5] 6.00-7.00 sec 85.0 MBytes 713 Mbits/sec 1 525 KBytes
[ 5] 7.00-8.00 sec 83.8 MBytes 703 Mbits/sec 0 636 KBytes
[ 5] 8.00-9.00 sec 83.8 MBytes 703 Mbits/sec 0 732 KBytes
[ 5] 9.00-10.00 sec 85.0 MBytes 713 Mbits/sec 5 611 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 825 MBytes 692 Mbits/sec 36 sender
[ 5] 0.00-10.00 sec 821 MBytes 689 Mbits/sec receiver
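In case it helps: the non-default RSS/netisr values above were set as boot-time tunables (System > Settings > Tunables); expressed loader.conf-style they amount to roughly the following, with the values simply mirroring the sysctl output above.
Code:
# boot-time tunables behind the sysctl values shown above
net.inet.rss.enabled="1"    # enable Receive Side Scaling
net.inet.rss.bits="3"       # 2^3 = 8 RSS buckets, one per vCPU
net.isr.maxthreads="8"      # one netisr thread per core
net.isr.bindthreads="1"     # pin netisr threads to their CPUs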
I have also played with the VM advanced settings on the ESXi side and tried these two parameters, which I found on a Palo Alto site:
Code:
ethernet1.pnicFeatures = "4"
ethernet2.pnicFeatures = "4"
ethernet3.pnicFeatures = "4"
ethernet1.ctxPerDev = "1"
ethernet2.ctxPerDev = "1"
ethernet3.ctxPerDev = "1"
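(For the record, these are VM advanced configuration parameters, so they end up in the VM's .vmx file and get edited with the VM powered off. A quick sanity check that they actually landed; the path below is just a placeholder.)
Code:
# verify the entries in the VM's .vmx on the ESXi host; adjust the datastore/VM path
grep -E "pnicFeatures|ctxPerDev" /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmx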
But that didn't help. The driver inside ESXi is:
Code:
esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: Auto, 100BaseT/Full, 1000BaseT/Full, 10000BaseT/Full
Auto Negotiation: true
Cable Type: Twisted Pair
Current Message Level: 0
Driver Info:
Bus Info: 0000:03:00:0
Driver: ixgben
Firmware Version: 0x8000038e
Version: 1.8.7
Link Detected: true
Link Status: Up
Name: vmnic0
PHYAddress: 0
Pause Autonegotiate: false
Pause RX: false
Pause TX: false
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver:
Virtual Address: 00:50:56:50:0d:98
Wakeon: MagicPacket(tm)
esxcli system module parameters list -m ixgben
Name Type Value Description
------- ------------ ----- --------------------------------------------------------------------------------------------------------------------------------
DRSS array of int DefQueue RSS state: 0 = disable, 1 = enable (default = 0; 4 queues if DRSS is enabled)
DevRSS array of int Device RSS state: 0 = disable, 1 = enable (default = 0; 16 queues but all virtualization features disabled if DevRSS is enabled)
QPair array of int Pair Rx & Tx Queue Interrupt: 0 = disable, 1 = enable (default)
RSS array of int 1,1 NetQueue RSS state: 0 = disable, 1 = enable (default = 1; 4 queues if RSS is enabled)
RxITR array of int Default RX interrupt interval: 0 = disable, 1 = dynamic throttling, 2-1000 in microseconds (default = 50)
TxITR array of int Default TX interrupt interval: 0 = disable, 1 = dynamic throttling, 2-1000 in microseconds (default = 100)
VMDQ array of int Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default = 8)
max_vfs array of int Maximum number of VFs to be enabled (0..63)
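The RSS=1,1 value on the ixgben module was set with the usual module-parameter command, roughly as below (it only takes effect after a host reboot; shown here just for completeness, since the Value column above already reflects it).
Code:
# enable NetQueue RSS on both ixgben uplinks, then reboot the host and re-check
esxcli system module parameters set -m ixgben -p "RSS=1,1"
esxcli system module parameters list -m ixgben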
I've also been running a Palo Alto firewall and I don't seem to have these problems with it. I want to switch from Palo Alto to OPNsense, but I'd like to figure out this problem first. Any ideas what to try next? The OPNsense version is 22.1.3-amd64.
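For comparison: since RSS hashes each TCP connection to a single bucket/CPU, a single stream can only ever land on one core, so an aggregate test with parallel streams is just the same iperf3 command with a different -P value:
Code:
# same test as above, but with 4 parallel TCP streams so flows spread across RSS buckets
iperf3 -c 10.0.0.4 -p 4999 -P 4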