Did a clean install today and decided to give 22.1-RC2 a try (and I like it ;-)). Along the way I noticed a few things and have some questions; I hope someone can help...
*** Intel QAT
I was able to load the new FreeBSD 13 qat driver manually, but I couldn't select it in the "Settings: Miscellaneous" crypto hardware acceleration dropdown. Is there any way to use the qat device already, or is this something for a future release?
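For reference, this is roughly how I loaded it by hand (a sketch from memory; kldload/kldstat/dmesg are the stock FreeBSD tools, and the firmware module has to match your chipset, c62x is just an example):
# kldload qat                  (core QAT driver)
# kldload qat_c62xfw           (firmware module; pick the one matching your device)
# kldstat | grep qat           (verify both modules are loaded)
# dmesg | grep -i qat          (check that the driver attached to the device)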
*** WebGUI Theme – Carrier Detect
I'm a big fan of the "cicada" WebGUI theme and have been using it for quite some time now. Clicking through the "default" theme after the install, I noticed the "Interfaces: Assignments" page doesn't show an active carrier on interfaces. With "cicada" you get nice red and green icons for interfaces that are (dis)connected; is this a "cicada" feature or a default theme bug/regression?
*** Suricata, Intel IX & Jumbo Frames
I'm still struggling to configure Suricata for IX interfaces and/or jumbo frames. I'm using IX interfaces in both "bare" and LACP lagg configurations; both fail with jumbo frames enabled.
Following this post (https://forum.opnsense.org/index.php?topic=24843.0) I'm running Suricata directly on the VLAN (VLAN->LAGG->IX) with promiscuous mode DISABLED. With jumbo frames (MTU 9000) I get all kinds of buffer and NS_MOREFRAG feature errors. Increasing the netmap buffer size "dev.netmap.buf_size" to higher and/or insane values (65000) doesn't seem to help; see the sketch after the log below. With the standard MTU (1500) and buffer size (2048) this VLAN setup actually works.
688.547884 [ 849] iflib_netmap_config txr 12 rxr 12 txd 2048 rxd 2048 rbufsz 4096
688.555391 [ 849] iflib_netmap_config txr 12 rxr 12 txd 2048 rxd 2048 rbufsz 4096
688.563094 [ 849] iflib_netmap_config txr 12 rxr 12 txd 2048 rxd 2048 rbufsz 4096
688.595215 [ 849] iflib_netmap_config txr 12 rxr 12 txd 2048 rxd 2048 rbufsz 4096
688.602700 [2222] netmap_buf_size_validate error: using NS_MOREFRAG on ix0 requires netmap buf size >= 4096
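For completeness, this is roughly what I tried (a sketch; dev.netmap.buf_size is the stock netmap sysctl, and Suricata needs a restart afterwards so netmap reopens the ports):
# sysctl dev.netmap.buf_size          (default 2048; the error above wants >= 4096)
# sysctl dev.netmap.buf_size=4096     (what the NS_MOREFRAG message asks for)
# sysctl dev.netmap.buf_size=65000    (the "insane" value; still fails here)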
*** RSS
What's the best way to configure/tune a 12-core system? The RSS tutorial covers the "net.inet.rss.bits" tunable for 4/8/16-core machines. Should I use 3 (the 8-core value) or 4 (the 16-core value) for the best results? I'm using 4 now, but I don't know if the 4 extra "buckets" can do any harm; netstat looks fine though (my current tunables are sketched after the netstat output below)...
net.inet.rss.bucket_mapping: 0:0 1:1 2:2 3:3 4:4 5:5 6:6 7:7 8:8 9:9 10:10 11:11 12:0 13:1 14:2 15:3
# netstat -Q
Configuration:
Setting Current Limit
Thread count 12 12
Default queue limit 256 10240
Dispatch policy deferred n/a
Threads bound to CPUs enabled n/a
Protocols:
Name Proto QLimit Policy Dispatch Flags
ip 1 3000 cpu hybrid C--
igmp 2 256 source default ---
rtsock 3 256 source default ---
arp 4 256 source default ---
ether 5 256 cpu direct C--
ip6 6 1000 cpu hybrid C--
ip_direct 9 256 cpu hybrid C--
ip6_direct 10 256 cpu hybrid C--
Workstreams:
WSID CPU Name Len WMark Disp'd HDisp'd QDrops Queued Handled
0 0 ip 0 48 0 1186 0 127492 128678
0 0 igmp 0 0 0 0 0 0 0
0 0 rtsock 0 0 0 0 0 0 0
0 0 arp 0 0 0 0 0 0 0
0 0 ether 0 0 70140 0 0 0 70140
0 0 ip6 0 1 0 0 0 203 203
0 0 ip_direct 0 0 0 0 0 0 0
0 0 ip6_direct 0 0 0 0 0 0 0
1 1 ip 0 193 0 107 0 694180 694287
1 1 igmp 0 0 0 0 0 0 0
1 1 rtsock 0 0 0 0 0 0 0
1 1 arp 0 2 0 0 0 288 288
1 1 ether 0 0 14552 0 0 0 14552
1 1 ip6 0 1 0 0 0 207 207
1 1 ip_direct 0 0 0 0 0 0 0
1 1 ip6_direct 0 0 0 0 0 0 0
2 2 ip 0 32 0 80 0 205258 205338
2 2 igmp 0 0 0 0 0 0 0
2 2 rtsock 0 2 0 0 0 175 175
2 2 arp 0 1 0 0 0 84 84
2 2 ether 0 0 142672 0 0 0 142672
2 2 ip6 0 1 0 0 0 74 74
2 2 ip_direct 0 0 0 0 0 0 0
2 2 ip6_direct 0 0 0 0 0 0 0
3 3 ip 0 463 0 69 0 380352 380421
3 3 igmp 0 0 0 0 0 0 0
3 3 rtsock 0 0 0 0 0 0 0
3 3 arp 0 0 0 0 0 0 0
3 3 ether 0 0 135986 0 0 0 135986
3 3 ip6 0 1 0 0 0 145 145
3 3 ip_direct 0 0 0 0 0 0 0
3 3 ip6_direct 0 0 0 0 0 0 0
4 4 ip 0 11 0 0 0 177655 177655
4 4 igmp 0 0 0 0 0 0 0
4 4 rtsock 0 0 0 0 0 0 0
4 4 arp 0 0 0 0 0 0 0
4 4 ether 0 0 128748 0 0 0 128748
4 4 ip6 0 2 0 0 0 48 48
4 4 ip_direct 0 0 0 0 0 0 0
4 4 ip6_direct 0 0 0 0 0 0 0
5 5 ip 0 9 0 9 0 73864 73873
5 5 igmp 0 0 0 0 0 0 0
5 5 rtsock 0 0 0 0 0 0 0
5 5 arp 0 0 0 0 0 0 0
5 5 ether 0 0 165365 0 0 0 165365
5 5 ip6 0 1 0 0 0 14 14
5 5 ip_direct 0 0 0 0 0 0 0
5 5 ip6_direct 0 0 0 0 0 0 0
6 6 ip 0 26 0 0 0 306593 306593
6 6 igmp 0 0 0 0 0 0 0
6 6 rtsock 0 0 0 0 0 0 0
6 6 arp 0 0 0 0 0 0 0
6 6 ether 0 0 71276 0 0 0 71276
6 6 ip6 0 2 0 0 0 141 141
6 6 ip_direct 0 0 0 0 0 0 0
6 6 ip6_direct 0 0 0 0 0 0 0
7 7 ip 0 475 0 0 0 169312 169312
7 7 igmp 0 0 0 0 0 0 0
7 7 rtsock 0 0 0 0 0 0 0
7 7 arp 0 1 0 0 0 34 34
7 7 ether 0 0 309366 0 0 0 309366
7 7 ip6 0 2 0 1 0 199 200
7 7 ip_direct 0 0 0 0 0 0 0
7 7 ip6_direct 0 0 0 0 0 0 0
8 8 ip 0 34 0 64406 0 75441 139847
8 8 igmp 0 0 0 0 0 0 0
8 8 rtsock 0 0 0 0 0 0 0
8 8 arp 0 2 0 0 0 1002 1002
8 8 ether 0 0 3487916 0 0 0 3487916
8 8 ip6 0 1 0 23 0 4 27
8 8 ip_direct 0 0 0 0 0 0 0
8 8 ip6_direct 0 0 0 0 0 0 0
9 9 ip 0 13 0 80 0 408429 408509
9 9 igmp 0 0 0 0 0 0 0
9 9 rtsock 0 0 0 0 0 0 0
9 9 arp 0 3 0 0 0 499 499
9 9 ether 0 0 15999 0 0 0 15999
9 9 ip6 0 4 0 0 0 268 268
9 9 ip_direct 0 0 0 0 0 0 0
9 9 ip6_direct 0 0 0 0 0 0 0
10 10 ip 0 970 0 61 0 167884 167945
10 10 igmp 0 0 0 0 0 0 0
10 10 rtsock 0 0 0 0 0 0 0
10 10 arp 0 0 0 0 0 0 0
10 10 ether 0 0 74687 0 0 0 74687
10 10 ip6 0 1 0 0 0 43 43
10 10 ip_direct 0 0 0 0 0 0 0
10 10 ip6_direct 0 0 0 0 0 0 0
11 11 ip 0 450 0 104 0 170014 170118
11 11 igmp 0 0 0 0 0 0 0
11 11 rtsock 0 0 0 0 0 0 0
11 11 arp 0 2 0 0 0 2262 2262
11 11 ether 0 0 140111 0 0 0 140111
11 11 ip6 0 1 0 1 0 37 38
11 11 ip_direct 0 0 0 0 0 0 0
11 11 ip6_direct 0 0 0 0 0 0 0
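For reference, these are the tunables I'm setting (a sketch; the names are the stock FreeBSD ones from the RSS tutorial, entered under System: Settings: Tunables):
net.isr.bindthreads="1"     # bind the netisr threads to CPU cores
net.isr.maxthreads="-1"     # one netisr thread per core
net.inet.rss.enabled="1"    # enable RSS
net.inet.rss.bits="4"       # 2^4 = 16 buckets, wrapped onto 12 cores (see the mapping above)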
For QAT please raise a ticket on GitHub so we can discuss inclusion. I'm not sure the drop-down is suitable, since AESNI is not mutually exclusive with it.
https://github.com/opnsense/core/issues/new/choose
Plugin theme requests should go below. I haven't heard from the Cicada maintainer in a while.
https://github.com/opnsense/plugins/issues/new/choose
Using IPS mode on LAGG/VLAN is still risky overall, for the simple reason that operating system support is mostly tailored to direct use of hardware interfaces.
For RSS please refer to the other thread:
https://forum.opnsense.org/index.php?topic=24409.0
Thanks,
Franco
Thanks for sharing this. I posted about this in March 2021: https://forum.opnsense.org/index.php?topic=22233.msg105419#msg105419
I was waiting for the 22.1 release to finally make use of the QAT driver, since pfSense only allows this on their own hardware and it "would" come to the CE edition, but I'm not waiting on that anymore. I decided a while ago to switch to OPNsense as soon as QAT becomes available, even if only via a tunable. Good to know it isn't possible yet; I was already planning the migration weekend ;-) So I'm holding my horses for now.
Could you update your post as soon as it is possible to activate the QAT hardware in OPNsense?
Don't wait to make the switch to 22.1; with or without QAT, you'll love it anyway ;-)
Reading your post from March, I guess you have to lower your expectations for QAT a bit. For most (small to medium) workloads AESNI will probably outperform QAT. You need to feed QAT enough data before _any_ hardware acceleration kicks in, and only with large buffers do you get the real benefits.
Although it's for another OS, the link below has some quick performance tests:
https://forum.openwrt.org/t/intel-quick-assist-v1-5-drivers-and-openssl-1-1-1e-acceleration-engine-for-19-07-2/58692/
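If you want to get a feel for the numbers on your own box, something like this should work (a sketch; it assumes your OpenSSL build includes the devcrypto engine, which routes through /dev/crypto to whatever kernel crypto driver is attached):
# openssl speed -elapsed -evp aes-256-gcm                     (userland path; uses AESNI when available)
# kldload cryptodev                                           (expose kernel crypto to userland)
# openssl speed -elapsed -evp aes-256-gcm -engine devcrypto   (kernel path, i.e. QAT once it attaches)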
We are not intentionally hampering QAT, so if you want to use it on 22.1 you very likely can. The kernel modules already ship with the system and the manual page should give some hints on what to do; if you like, report back with the activation steps that worked for you and what your usage results are (a boot-time sketch follows the man page link below).
% ls /boot/kernel/qat*
/boot/kernel/qat.ko /boot/kernel/qat_c3xxxfw.ko /boot/kernel/qat_d15xxfw.ko
/boot/kernel/qat_c2xxxfw.ko /boot/kernel/qat_c62xfw.ko /boot/kernel/qat_dh895xccfw.ko
https://www.freebsd.org/cgi/man.cgi?query=qat&apropos=0&sektion=0&manpath=FreeBSD+13.0-current&arch=default&format=html
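To load the modules at boot, the usual FreeBSD loader convention should work (a sketch, untested; pick the firmware module that matches your device from the list above and put the lines in /boot/loader.conf.local):
qat_load="YES"            # core QAT driver
qat_c62xfw_load="YES"     # firmware, e.g. for C62x-based devices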
That would be extremely useful for getting it into OPNsense as opposed to waiting longer for someone else to do it. ;)
Cheers,
Franco
Did you get QAT running? I got an 8970 card and just updated to 22.1 today, so I am very curious whether anyone managed to get it going.