Call for testing: official netmap kernel

Started by mb, September 16, 2020, 06:53:51 PM

Hi @FullyBorked,

It's on our agenda now. It seems to be related to the ordering of bpf(4) processing in iflib(4). One of the first issues we'll have a look following 20.7.4 release.

Quote from: mb on October 21, 2020, 01:06:01 AM
Hi @FullyBorked,

It's on our agenda now. It seems to be related to the ordering of bpf(4) processing in iflib(4). One of the first issues we'll have a look following 20.7.4 release.

Awesome, thanks for the update and the hard work.   8)

I updated to 20.7.4, which includes the modified netmap, but I still have the bandwidth problem. If I enable Sensei, my connection drops from 850 Mb to about 420 Mb. I'm using the em driver on VMware 6.7.

Hi @RickyTR, can you share the hardware details of the hypervisor? Did you have a chance to try with vmx driver?

Hi @mb. The hardware is 2 CPUs x Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz with 8 GB RAM. I don't think the problem is related to hardware performance, because with Sensei active the CPU is around 5% and never goes over 60%. I tried with vmx with the same result.

Hi RickyTR,

Make sure you track CPU usage via "top -SP" and see if any CPU core is fully utilized.
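For reference, something along these lines from a shell session works; the point is to watch per-core load while the speed test is running:

# -S includes system (kernel) threads, -P breaks the load out per CPU core
top -SP
# If one core sits near 100% (e.g. a netmap/Sensei worker or an interrupt thread)
# while the others stay idle, the bottleneck is a single core, not total CPU.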

And one more question: are you testing the speed to the internet or between two local LANs?


Hi mb,

top -SP is not showing anything strange; no single CPU is at 100%. I tested the speed to the internet.

Hi @RickyTR, thanks for the additional information. Your CPU looks decent. You should be able to saturate a 1 Gig WAN connection.

With a dual-core Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz on OPNsense 20.7.4, I am able to do 930-940 Mbps (with both Sensei and Suricata on):

https://www.speedtest.net/result/10321829371

Virtualization introduces a new variable which might change things. Any chance you can reach out to support? Let us run a few tests on your system.

Hi @Rickytr,

I had similar problems in ESXi with the vmx adapters. I found that increasing the interface's transmit and receive descriptors provided a big improvement. You could also try increasing the number of transmit and receive queues.

The hw.pci.honor_msi_blacklist="0" option is required to allow the number of transmit and receive queues to be overridden. Also, the number of queues needs to be less than or equal to the number of CPU cores assigned to your VM.
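If you're not sure how many cores your VM sees, a quick check from the OPNsense/FreeBSD shell is:

# Number of CPU cores the VM sees; keep the ntxqs/nrxqs overrides at or below this value
sysctl hw.ncpu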

These values override the transmit (ntxqs) and receive (nrxqs) queues:
dev.vmx.0.iflib.override_ntxqs="4"
dev.vmx.0.iflib.override_nrxqs="4"

These values override the transmit (ntxds) and receive (nrxds) descriptors:
dev.vmx.0.iflib.override_ntxds="0,2048"
dev.vmx.0.iflib.override_nrxds="0,1024,0"

Since I have 2 vmx interfaces I specify which interface these overrides are applied to:
1st vmx interface (dev.vmx.0) - LAN
2nd vmx interface (dev.vmx.1) - WAN
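If you need to double-check which vmx unit corresponds to which interface on your own system, the boot messages and interface assignments make it easy to confirm (a generic check, not specific to my setup):

# Shows the probe order of the vmxnet3 adapters and their PCI locations
dmesg | grep -i vmx
# Confirms which vmx unit carries which address/assignment
ifconfig vmx0
ifconfig vmx1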

Create /boot/loader.conf.local if it does not already exist, add the following, and change the ntxqs/nrxqs queue values to match the number of your CPU cores. You can also adjust the ntxds/nrxds values as needed, but keep in mind that the vmx interface has a maximum of 4096 transmit descriptors and a maximum of 2048 receive descriptors.

# VMware tunables for vmx interfaces
hw.pci.honor_msi_blacklist="0"
dev.vmx.0.iflib.override_ntxqs="4"
dev.vmx.0.iflib.override_nrxqs="4"
dev.vmx.0.iflib.override_ntxds="0,2048"
dev.vmx.0.iflib.override_nrxds="0,1024,0"
dev.vmx.1.iflib.override_ntxqs="4"
dev.vmx.1.iflib.override_nrxqs="4"
dev.vmx.1.iflib.override_ntxds="0,2048"
dev.vmx.1.iflib.override_nrxds="0,1024,0"


Remember to take a snapshot of your VM before making any changes, so you have an easy way to recover just in case.
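After rebooting, you can sanity-check that the overrides were picked up; a rough way to do it (the loader tunables show up as read-only sysctls, and exact interrupt names will differ between systems) is:

# Read back the queue overrides after boot
sysctl dev.vmx.0.iflib.override_ntxqs dev.vmx.0.iflib.override_nrxqs
# Each active queue gets its own MSI-X interrupt, so counting the vmx interrupt
# lines is another way to see how many queues are actually in use
vmstat -i | grep vmx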

There is a good explanation of these tunables here:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237166

The tunables listed in the link below have no effect; however, the information on the queues and descriptors is still valid:
https://www.freebsd.org/cgi/man.cgi?query=vmx&sektion=4

I use ESXi 7.0.1 (VMs upgraded to the same version) and recently upgraded from 20.1.9 to 20.7.4. ESXi runs on a Supermicro board with Intel NICs (and vmxnet3 in the VMs). Sensei is active on the VLANs' parent interface, with no Suricata running.
Inside my LAN I'm able to saturate my 1 Gb network:

Connecting to host 172.16.1.1, port 59242
[  5] local 172.17.0.2 port 36110 connected to 172.16.1.1 port 59242
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   111 MBytes   932 Mbits/sec   15    704 KBytes       
[  5]   1.00-2.00   sec   109 MBytes   913 Mbits/sec    0    816 KBytes       
[  5]   2.00-3.00   sec   109 MBytes   912 Mbits/sec    0    912 KBytes       
[  5]   3.00-4.00   sec   109 MBytes   912 Mbits/sec    0   1000 KBytes       
[  5]   4.00-5.00   sec   110 MBytes   923 Mbits/sec    0   1.06 MBytes       
[  5]   5.00-6.00   sec   106 MBytes   891 Mbits/sec  235    594 KBytes       
[  5]   6.00-7.00   sec   108 MBytes   902 Mbits/sec   31    554 KBytes       
[  5]   7.00-8.00   sec   109 MBytes   912 Mbits/sec    0    690 KBytes       
[  5]   8.00-9.00   sec   109 MBytes   912 Mbits/sec    0    802 KBytes       
[  5]   9.00-10.00  sec   108 MBytes   902 Mbits/sec    0    899 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.06 GBytes   911 Mbits/sec  281             sender
[  5]   0.00-10.01  sec  1.06 GBytes   908 Mbits/sec                  receiver


iperf Done.
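For anyone who wants to run the same kind of LAN-side check, the above is just a plain iperf3 TCP test between two hosts on either side of the firewall; the address and port below are examples, not a recommendation:

# On a host behind one interface, start the server (optionally on a custom port)
iperf3 -s -p 59242
# On a host behind the other interface, run a 10-second TCP test through the firewall
iperf3 -c 172.16.1.1 -p 59242 -t 10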

And I've never had a problem nearly saturating my 300 Mb/s up/down internet connection.
But I still use the "vmxnet3.netmap_native" tunable, as per "Re: Call for testing: New netmap enabled kernel". Shall I remove it?
OPNsense on:
Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (4 cores)
8 GB RAM
50 GB HDD
and plenty of VLANs ;-)

For me, nothing helped except switching to e1000 AND changing the IPS algorithm to the new keen-style variant.

Note: at least in my case, this only seems to work with 20.7.4 and nothing earlier.


Sent from iPhone using Tapatalk

Hello all,

I am trying to install Zenarmor on my firewall. I have several ports (Intel I350 NICs) in a LAG, and Zenarmor is saying the driver for the interface (the LAG?) is incompatible with netmap. I am trying to run Zenarmor in passive mode for now. How can I resolve this? I am running OPNsense 22.1.5.

Thanks,
Steve