Hi all,
I'm using OPNsense 23.1_6-amd64 and my hardware is an IPU405 system (https://www.ipu-system.de/produkte/ipu450.html).
Technical specs:
- CPU: Intel Core i3-5005U Broadwell-U Dual Core (4 Threads) 2.0 GHz, 15W TDP
- Cache: 64 KByte L1 Instruction, 64 KByte L1 Data, 512 KByte L2, 3 MByte L3
- Features: AES-NI, Hyper-Threading, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, FMA3, Enhanced Intel SpeedStep Technology (EIST), Intel 64, XD bit, Intel VT-x, Intel VT-d, Smart Cache
- 4 x 10/100/1000 MBit/s Intel i211-AT
I now tried to configure some tunables as described here --> https://docs.opnsense.org/troubleshooting/performance.html
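For reference, these are boot-time tunables, set under System > Settings > Tunables in the GUI and applied after a reboot. A minimal sketch of the kind of entries that guide discusses (illustrative values only, not an exact copy of my configuration):
net.isr.maxthreads=-1
net.isr.bindthreads=1
net.isr.dispatch=deferred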
When setting
net.isr.maxthreads = 1
everything works fine. When setting it to any value other than "1", e.g. to "-1" or something higher than "1", I encounter a strange issue on the firewall host itself.
In the GUI, when checking the firmware status, I never get a response from the remote server. On the SSH console on the firewall I tried connecting with curl to http://pkg.FreeBSD.org/ and it shows the following response:
curl -vv http://pkg.FreeBSD.org/
* Trying 147.28.184.43:80...
* Trying [2604:1380:4091:a001::50:2]:80...
* Immediate connect fail for 2604:1380:4091:a001::50:2: No route to host
Sometimes I get a correct response, but roughly 9 out of 10 times the result is like the above.
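To put a rough number on that ratio, a quick loop like the one below can be run from the firewall shell; this is just my ad-hoc test sketch (it forces IPv4 with -4 so the missing IPv6 route doesn't skew the count):
#!/bin/sh
# try 20 IPv4-only requests with a 5-second timeout and count the successes
ok=0
for i in $(seq 1 20); do
    curl -4 -s -o /dev/null --max-time 5 http://pkg.FreeBSD.org/ && ok=$((ok + 1))
done
echo "$ok of 20 attempts succeeded"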
As soon as I set the value of net.isr.maxthreads back to "1", it works again (reboot required, of course):
curl -vv http://pkg.FreeBSD.org/
* Trying 147.28.184.43:80...
* Connected to pkg.FreeBSD.org (147.28.184.43) port 80 (#0)
> GET / HTTP/1.1
> Host: pkg.FreeBSD.org
> User-Agent: curl/7.87.0
> Accept: */*
The clients in the local network don't have any issues, even with net.isr.maxthreads not equal to "1"; it seems only the firewall host itself is affected.
Does anyone have an idea what the reason could be?
I would have thought that whether the tunable is in use or not wouldn't make a difference to the behaviour you see. "No route to host" means there is no route, which isn't directly related to a tunable. Just a thought for now.
Check the tunable is available on your version. I'm on 22.7.X:
[penguin@OPNsense ~]$ uname -a
FreeBSD OPNsense.moomooland 13.1-RELEASE-p3 FreeBSD 13.1-RELEASE-p3 stable/22.7-n250262-83840459d88 SMP amd64
[penguin@OPNsense ~]$ sysctl -a | grep net.isr
net.isr.numthreads: 2
net.isr.maxprot: 16
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 1
net.isr.maxthreads: 2
net.isr.dispatch: direct
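If the grep output gets noisy, the relevant OIDs can also be queried directly; net.isr.numthreads shows what is actually in effect, while net.isr.maxthreads is the cap applied at boot:
sysctl net.isr.maxthreads net.isr.numthreads net.isr.bindthreads net.isr.dispatch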
I actually think so, too... But I can reproduce it every time 100%...
As soon as I set net.isr.maxthreads to "-1" or any value higher than "1", the issue occurs (nothing else changed, just this setting!).
The tunables are available on my system:
sysctl -a | grep net.isr
net.isr.numthreads: 1
net.isr.maxprot: 16
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 1
net.isr.maxthreads: 1
net.isr.dispatch: direct
Also weird: curl seems to fall back to an IPv6 address after the IPv4 attempt:
* Trying 147.28.184.43:80...
* Trying [2604:1380:4091:a001::50:2]:80...
* Immediate connect fail for 2604:1380:4091:a001::50:2: No route to host
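To take one stack out of the equation, curl can be pinned to a single address family with the standard -4 (IPv4 only) and -6 (IPv6 only) options, e.g.:
curl -4 -vv http://pkg.FreeBSD.org/
If the IPv4-only attempt still stalls most of the time, the IPv6 "No route to host" is probably just fallback noise rather than the real problem.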
I currently really have no clue and I'm totally confused...
https://github.com/opnsense/core/issues/5415
Bottom line is, do you even need to change from defaults?
Actually, no, I don't need it (anymore). My other tunings were successful and I gained much more performance.
Anyhow, I wanted to post it and ask if anyone has an idea what the reason for this strange behaviour could be.
From my point of view, I don't have an issue with it at the moment.
Just to add to this, on 23.7 (just upgraded from an older 22.1), I see the same - but only when RSS is enabled:
net.isr.maxthreads=-1
net.isr.defaultqlimit=4096
net.isr.bindthreads=1
#net.inet.rss.enabled=1
#net.inet.rss.bits=2
- Could not check for updates
- NTP would fail to sync
- Telnet-ing to an external port would work less than 5% of the time
... when running a telnet test, I could see the SYN/ACK coming back and being retransmitted by the remote end but not 'seen' by the OS (only by the NIC in tcpdump).
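For anyone who wants to reproduce that, a capture along these lines shows it (interface name and filter are placeholders, not my exact setup):
tcpdump -ni igb0 'tcp port 80 and host 147.28.184.43'
With RSS enabled, the remote side keeps retransmitting the SYN/ACK while the local stack never completes the handshake.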
I have 2 other boxes with slightly different hardware; RSS works on them without issue.
EDIT: There seem to be some suggestions that RSS (even though it appears to be supported according to the Intel spec sheet) has some problems on the 82574L.
EDIT 2: One of my other boxes also has an 82574L and works with RSS just fine. Weird.
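When comparing boxes, it may help to confirm whether RSS actually took effect on each one. Assuming the RSS sysctl nodes exist on the kernel in use, something like this shows the effective state:
sysctl net.inet.rss.enabled net.inet.rss.bits net.isr.numthreads
If the values differ (or the OIDs are missing) between the working and the failing box, that would at least point towards driver/NIC behaviour rather than the tunables themselves.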