iperf3 -c 10.0.3.1 -t 60 -i 10
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.2 port 35558 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  3.03 GBytes  2.60 Gbits/sec    0    252 KBytes
[  5]  10.00-20.00  sec  2.99 GBytes  2.57 Gbits/sec    0    246 KBytes
[  5]  20.00-30.00  sec  2.98 GBytes  2.56 Gbits/sec    0    243 KBytes
[  5]  30.00-40.00  sec  2.96 GBytes  2.54 Gbits/sec    0    209 KBytes
[  5]  40.00-50.00  sec  2.93 GBytes  2.52 Gbits/sec    0    277 KBytes
[  5]  50.00-60.00  sec  2.97 GBytes  2.55 Gbits/sec    0    260 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  17.9 GBytes  2.56 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  17.9 GBytes  2.56 Gbits/sec                  receiver

iperf Done.

iperf3 -c 10.0.3.1 -t 60 -i 10 -R
Connecting to host 10.0.3.1, port 5201
Reverse mode, remote host 10.0.3.1 is sending
[  5] local 10.0.3.2 port 36642 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  3.82 GBytes  3.28 Gbits/sec
[  5]  10.00-20.00  sec  3.89 GBytes  3.35 Gbits/sec
[  5]  20.00-30.00  sec  3.82 GBytes  3.28 Gbits/sec
[  5]  30.00-40.00  sec  3.75 GBytes  3.22 Gbits/sec
[  5]  40.00-50.00  sec  3.60 GBytes  3.09 Gbits/sec
[  5]  50.00-60.00  sec  3.76 GBytes  3.23 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  22.6 GBytes  3.24 Gbits/sec  8384             sender
[  5]   0.00-60.00  sec  22.6 GBytes  3.24 Gbits/sec                   receiver

iperf Done.
iperf3 -c 10.0.3.1 -t 60 -i 10
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.2 port 37868 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  2.80 GBytes  2.40 Gbits/sec    0   5.66 KBytes
[  5]  10.00-20.00  sec  2.81 GBytes  2.42 Gbits/sec    0    272 KBytes
[  5]  20.00-30.00  sec  2.78 GBytes  2.38 Gbits/sec    0    223 KBytes
[  5]  30.00-40.00  sec  2.79 GBytes  2.40 Gbits/sec    0    240 KBytes
[  5]  40.00-50.00  sec  1.53 GBytes  1.32 Gbits/sec    4   1.41 KBytes
[  5]  50.00-60.01  sec  0.00 Bytes   0.00 bits/sec     2   1.41 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.01  sec  12.7 GBytes  1.82 Gbits/sec    6             sender
[  5]   0.00-61.65  sec  12.7 GBytes  1.77 Gbits/sec                  receiver

iperf Done.

iperf3 -c 10.0.3.1 -t 60 -i 10 -R
Connecting to host 10.0.3.1, port 5201
Reverse mode, remote host 10.0.3.1 is sending
[  5] local 10.0.3.2 port 38420 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.40 GBytes  1.21 Gbits/sec
[  5]  10.00-20.00  sec  1.37 GBytes  1.17 Gbits/sec
[  5]  20.00-30.00  sec  1.40 GBytes  1.20 Gbits/sec
[  5]  30.00-40.00  sec  1.39 GBytes  1.19 Gbits/sec
[  5]  40.00-50.00  sec  1.40 GBytes  1.20 Gbits/sec
[  5]  50.00-60.00  sec  1.41 GBytes  1.21 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  8.37 GBytes  1.20 Gbits/sec   18             sender
[  5]   0.00-60.00  sec  8.37 GBytes  1.20 Gbits/sec                  receiver

iperf Done.
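For reference, both client runs above assume an iperf3 server was already started on the remote host (10.0.3.1 in this thread) in the standard way:

    iperf3 -s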
Is it possible that only Xen-virtualized OPNsense instances are affected?
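One quick way to check what you are actually running on: FreeBSD reports the detected hypervisor through the kern.vm_guest sysctl, and since OPNsense is FreeBSD-based this should work there as well:

    sysctl kern.vm_guest

This should print "xen" on a Xen guest and "none" on bare metal.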
I haven't seen performance issues on my setup between these versions, but since there have been quite a few fixes around netmap in different kernels, it's often a good idea to check whether it's Suricata or netmap itself causing the issues.

There is a simple "bridge" tool for netmap available in the kernel source directory; if people want to check netmap behaviour on their hardware, they can always build the tool and create a bridge between the card and the host stack to rule out certain driver issues. To build it, you need the kernel source directory in place (/usr/src); you can use the build tools (https://github.com/opnsense/tools) to check out all sources on your machine.

When the sources are in place, you can build the tool using the following commands (on amd64):

    cd /usr/src/tools/tools/netmap/
    make bridge

Next, make sure netmap isn't in use (no Suricata or Sensei) and create a bridge between the physical connection and the host stack. Assuming the interface in question is called vmx2, the command would look like:

    /usr/obj/usr/src/amd64.amd64/tools/tools/netmap/bridge -i netmap:vmx2 -i netmap:vmx2

(when the same interface is given twice, the tool attaches one endpoint to the host stack rings, which is what gives you the NIC-to-host-stack bridge). Wait a few seconds and start the test again with OPNsense in between. When netmap isn't interfering, the test with or without the bridge should show roughly the same numbers.

Best regards,
Ad
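To make the comparison concrete, a full test sequence could look roughly like the sketch below. This assumes vmx2 is the interface under test and Suricata (the IDS/IPS service) is the only netmap consumer; "configctl ids stop" is, as far as I know, how the IDS service can be stopped from the OPNsense shell, but adjust for your own setup:

    # stop all netmap consumers first (IDS/IPS, Sensei)
    configctl ids stop

    # bridge the NIC and its host stack; passing the same interface twice
    # makes the bridge tool attach one endpoint to the host stack rings
    /usr/obj/usr/src/amd64.amd64/tools/tools/netmap/bridge -i netmap:vmx2 -i netmap:vmx2

    # then, from the client machine, repeat the measurements in both directions
    iperf3 -c 10.0.3.1 -t 60 -i 10
    iperf3 -c 10.0.3.1 -t 60 -i 10 -R

If the numbers with the bridge in place roughly match the plain routing numbers, netmap itself is probably fine on that driver and the slowdown is more likely in the IPS engine.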
@klamath I'm not sure why you're quoting my message, but as said, if it's purely about netmap, one could try to use the bridge tool to pinpoint issues if they exist for a given setup. It's definitely not the case that there are issues in all releases; we test the hardware we provide on a periodic basis and haven't seen a lot of (major) issues ourselves. Quite a few performance reports come down to overly optimistic assumptions (e.g. expecting 1 Gbps IPS on an APU board) or drivers which aren't very well supported (we ship what's offered upstream; if netmap support for a driver isn't great in FreeBSD, it very likely isn't great on our end either). IPS needs quite a lot of computing power and isn't comparable to normal routing/firewall functions at all in terms of requirements.

When it comes to testing, we tend to offer test kernels and release candidates on a periodic basis. To help catch issues up front, please do test, document the behaviour when experiencing issues, and try to trace them back to FreeBSD bug reports if those exist. When fixes are available upstream, we often assess whether we can backport them into our system. Quite a few fixes have been merged in the last versions for various drivers (with quite some help from the Sensei people, as Franco mentioned). I haven't seen side effects in terms of performance myself, but that doesn't mean they don't exist for some drivers.

Best regards,
Ad
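As a quick sanity check for the driver-support angle: drivers with native netmap support on FreeBSD announce it at attach time, so grepping the boot messages should show whether the NIC in question has native netmap rings at all (the interface name and queue/slot numbers below are just examples, not reference values):

    dmesg | grep -i netmap
    # expect something like "vmx2: netmap queues/slots: TX 1/512, RX 1/512"
    # if no such line shows up, netmap will fall back to emulated mode,
    # which performs considerably worse than native support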