RSS Error on Intel E810-XXV: ice_add_rss_cfg on VSI 0 could not configure every requested hash type

Started by pikachu937, June 01, 2025, 06:49:09 PM

Hello,

I'm encountering an issue on OPNsense 25.1.7_4 (FreeBSD 14.2-RELEASE-p3) with an Intel E810-XXV network adapter. The following error appears in logs for both ice0 and ice1 interfaces:

ice0: ice_add_rss_cfg on VSI 0 could not configure every requested hash type
ice1: ice_add_rss_cfg on VSI 0 could not configure every requested hash type

Configuration:
OPNsense: 25.1.7_4 (FreeBSD 14.2-RELEASE-p3)
Network adapter: Intel E810-XXV
Driver: ICE 1.43.2-k (dev.ice.0.iflib.driver_version: 1.43.2-k)
Firmware: NVM 4.80 (dev.ice.0.fw_version: fw 7.8.2 api 1.7 nvm 4.80 etid 8002053c netlist 4.4.5000-1.16.0.fb344039 oem 1.3805.0)
DDP: ICE OS Default Package 1.3.41.0 (dev.ice.0.ddp_version: ICE OS Default Package version 1.3.41.0, track id 0xc0000001)
Settings: 32 Rx/Tx queues (dev.ice.0.iflib.override_nrxqs=32, dev.ice.0.iflib.override_ntxqs=32), IPv6 disabled on interfaces.
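
For reference, the queue overrides are set like this (a sketch of my /boot/loader.conf.local; the dev.ice.1 entries are assumed to mirror dev.ice.0 for the second port):

# /boot/loader.conf.local
dev.ice.0.iflib.override_nrxqs=32
dev.ice.0.iflib.override_ntxqs=32
dev.ice.1.iflib.override_nrxqs=32
dev.ice.1.iflib.override_ntxqs=32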

Traffic: 99% is UDP (RTP/RTCP).

Issue: The RSS error prevents even distribution of network queues across CPU cores, reducing performance. The issue affects both ice0 and ice1 interfaces. Since 99% of traffic is UDP (RTP/RTCP), filtering UDP is not an option.
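
One avenue I am looking at (not yet tested, so only a sketch) is enabling the kernel's software RSS, so that flow-to-CPU distribution does not depend solely on what the NIC's RSS can hash. These are the standard FreeBSD/OPNsense tunables; net.inet.rss.bits=3 assumes an 8-core machine and should be the log2 of your core count:

# /boot/loader.conf.local (or System > Settings > Tunables)
net.isr.maxthreads=-1    # one netisr thread per core
net.isr.bindthreads=1    # pin netisr threads to their cores
net.inet.rss.enabled=1   # enable kernel software RSS
net.inet.rss.bits=3      # 2^3 = 8 buckets; adjust to your core count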

Steps Taken:
Attempted to load DDP 1.3.53.0 by placing ice.pkg in /lib/firmware/intel/ice/ddp/ and adding hw.ice.ddp_override="1" to /boot/loader.conf.local. However, DDP 1.3.53.0 does not load; the system keeps using 1.3.41.0 (log: ice1: DDP package already present on device). See the note after this list.
Tried updating the NVM firmware using the Intel NVM Update Utility, but the version remains 4.80.
Disabled IPv6 on the interfaces via ifconfig ice0 inet6 -accept_rtadv and the OPNsense web interface.
Tested reducing the queues to 16 (override_nrxqs=16, override_ntxqs=16), but the error persists.
Tried filtering UDP/SCTP via firewall rules, with no effect; in any case, filtering UDP is not viable, since UDP (RTP/RTCP) constitutes 99% of the traffic.
Compiling a newer driver is not possible, since OPNsense does not ship the kernel sources.
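
On the DDP point above: as far as I understand, FreeBSD's ice(4) obtains the DDP package from the ice_ddp firmware kernel module, not from /lib/firmware/intel/ice/ddp/ (that is the Linux path), which may explain why the file there is ignored. A quick sanity check (sketch):

kldstat | grep ice_ddp        # is the DDP firmware module loaded?
sysctl dev.ice.0.ddp_version  # package the NIC is actually running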

dmesg | grep DDP
ice0: The DDP package was successfully loaded: ICE OS Default Package version 1.3.41.0, track id 0xc0000001.
ice1: DDP package already present on device: ICE OS Default Package version 1.3.41.0, track id 0xc0000001.

dmesg | grep ice | grep rss
ice0: ice_add_rss_cfg on VSI 0 could not configure every requested hash type
ice1: ice_add_rss_cfg on VSI 0 could not configure every requested hash type

Questions:
How can I resolve the RSS error, given that 99% of traffic is UDP (RTP/RTCP)? Is it related to the driver or to DDP 1.3.41.0?
Why does DDP 1.3.53.0 fail to load despite hw.ice.ddp_override="1"?
Is there a way to configure the RSS hash types for UDP, given that there is no dev.ice.0.rss_hash_config sysctl? (See the sketch below.)

Could upgrading OPNsense resolve the issue?
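
For the third question, this is how I have been checking which RSS knobs the driver actually exposes, rather than assuming a sysctl name (sketch):

sysctl -a | grep -i rss           # enumerate all RSS-related sysctls
sysctl dev.ice.0 | grep -i hash   # any per-device hash knobs? (may return nothing)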

Any suggestions or insights would be greatly appreciated! I can provide additional logs if needed.

I think I might have the same problem, but I'm not sure at this stage.

I do see the same RSS errors in dmesg.

Recently my internet speed dropped from 25 Gbit/s to 8-9 Gbit/s. I run an Intel E810-XXV dual-port adapter on OPNsense 25.1.10.
I used to get 25 Gbit/s throughput through OPNsense, as verified via iperf3 tests (both IPv4 and IPv6) from a LAN client to an Internet iperf3 server.

What puzzles me is that I still easily get 25 Gbit/s in iperf3 by running it against the LAN interface of OPNsense from a LAN client.
Also, when I look at top during that test, I see all CPUs busy doing something:

last pid: 12095;  load averages:  0.95,  0.45,  0.28                                                                                                                                      up 0+01:48:02  21:13:50
83 processes:  1 running, 82 sleeping
CPU 0:  0.0% user,  0.0% nice, 56.3% system,  0.4% interrupt, 43.4% idle
CPU 1:  0.4% user,  0.0% nice, 23.0% system,  2.7% interrupt, 73.8% idle
CPU 2:  3.5% user,  0.0% nice,  2.7% system, 16.0% interrupt, 77.7% idle
CPU 3:  0.4% user,  0.0% nice, 26.2% system,  1.6% interrupt, 71.9% idle
CPU 4:  0.4% user,  0.0% nice, 68.4% system,  5.1% interrupt, 26.2% idle
CPU 5:  0.0% user,  0.0% nice, 57.4% system,  1.2% interrupt, 41.4% idle
CPU 6:  0.4% user,  0.0% nice, 57.4% system,  0.4% interrupt, 41.8% idle
CPU 7:  0.0% user,  0.0% nice, 69.5% system,  0.0% interrupt, 30.5% idle
Mem: 142M Active, 313M Inact, 763M Wired, 305M Buf, 6643M Free


So, to recap:

LAN client -> ice1 LAN OPNsense (iperf3 -s): 25 Gbit/s easily
LAN client -> ice1 -> ice0 (WAN) -> Internet iperf3 server: dropped from 24-25 to 8-9 Gbit/s
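
One thing I still need to rule out: RSS hashes each flow to a single queue/core, so a single-stream iperf3 run can bottleneck on one core even when RSS is healthy. Comparing single- vs multi-stream runs should show whether that is the limit here (sketch; <server> is a placeholder for the Internet iperf3 server):

iperf3 -c <server> -t 30         # single flow, lands on one queue/core
iperf3 -c <server> -t 30 -P 8    # 8 parallel flows, should spread across cores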


At this stage, I'm not sure if this is OPNsense or my ISP...