Messages - bugacha

#1
Quote from: pfry on December 29, 2025, 04:21:30 PM
They're not paired. The driver will work fine (and not complain) with a "later-than-recommended" NVM. I'd always go for the latest NVM, but the E810 has been around for long enough (2019?) that the major bugs should have been killed by now. I'd have to look at the release notes to be sure. At any rate, I'll update if convenient or necessary (I experienced the latter with some old X710s).

What issue were you having with the update? Your link is for Windows; I don't know what the package includes. (I use the EFI updater.)

Nah, the driver (well, really the DDP package, which is what I care about) won't work if it doesn't match the firmware. I learned that the hard way:

[1] ice0: <Intel(R) Ethernet Network Adapter E810-XXV-2 - 1.43.3-k> mem 0x380000000000-0x380001ffffff,0x380002000000-0x38000200ffff irq 16 at device 0.0 on pci1
[1] ice0: Loading the iflib ice driver
[1] ice0: Error configuring transmit balancing: ICE_ERR_AQ_ERROR
[1] ice0: An unknown error occurred when loading the DDP package.  Entering Safe Mode.
[1] ice0: fw 7.10.1 api 1.7 nvm 4.91 etid 800214ab netlist 4.4.5000-1.18.0.db8365cf oem 1.3909.0
[1] ice0: Using 1 Tx and Rx queues
[1] ice0: Using MSI-X interrupts with 2 vectors
[1] ice0: Using 1024 TX descriptors and 1024 RX descriptors
[1] ice0: Ethernet address: 50:7c:6f:79:ca:e8
[1] ice0: PCI Express Bus: Speed 16.0GT/s Width x8
[1] ice0: ice_init_dcb_setup: No DCB support
[1] ice0: link state changed to UP
[1] ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
[1] ice0: netmap queues/slots: TX 1/1024, RX 1/1024
[1] ice1: <Intel(R) Ethernet Network Adapter E810-XXV-2 - 1.43.3-k> mem 0x380800000000-0x380801ffffff,0x380802000000-0x38080200ffff irq 16 at device 0.0 on pci2
[1] ice1: Loading the iflib ice driver
[1] ice0: link state changed to DOWN
[1] ice1: Error configuring transmit balancing: ICE_ERR_AQ_ERROR
[1] ice1: An unknown error occurred when loading the DDP package.  Entering Safe Mode.
[1] ice1: fw 7.10.1 api 1.7 nvm 4.91 etid 800214ab netlist 4.4.5000-1.18.0.db8365cf oem 1.3909.0
[1] ice1: Using 1 Tx and Rx queues
[1] ice1: Using MSI-X interrupts with 2 vectors
[1] ice1: Using 1024 TX descriptors and 1024 RX descriptors
[1] ice1: Ethernet address: 50:7c:6f:79:ca:e9
[1] ice1: PCI Express Bus: Speed 16.0GT/s Width x8
[1] ice1: ice_init_dcb_setup: No DCB support
[1] ice1: link state changed to UP
[1] ice1: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: FC-FEC/BASE-R, Autoneg: False, Flow Control: None
[1] ice1: netmap queues/slots: TX 1/1024, RX 1/1024
[1] ice1: link state changed to DOWN
[9] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[10] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[11] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[12] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[14] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[15] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[19] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[20] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[21] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[22] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[24] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[25] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[26] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[27] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[28] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[28] ice1: Unable to configure the main VSI for Tx: ENODEV
[29] ice1: Failed to add VLAN filters:
[29] ice1: - vlan 2, status -105
[29] ice1: Failure adding VLAN 2 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[30] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[30] ice1: Unable to configure the main VSI for Tx: ENODEV
[31] ice1: Failed to add VLAN filters:
[31] ice1: - vlan 2, status -105
[31] ice1: Failure adding VLAN 2 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[32] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[32] ice1: Unable to configure the main VSI for Tx: ENODEV
[34] ice1: Failed to add VLAN filters:
[34] ice1: - vlan 20, status -105
[34] ice1: Failure adding VLAN 20 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[35] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[35] ice1: Unable to configure the main VSI for Tx: ENODEV
[36] ice1: Failed to add VLAN filters:
[36] ice1: - vlan 20, status -105
[36] ice1: Failure adding VLAN 20 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[37] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[37] ice1: Unable to configure the main VSI for Tx: ENODEV
[38] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[39] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[40] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[41] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[42] ice1: Could not add new MAC filters, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[42] ice1: Failed to synchronize multicast filter list: EIO

Both ports would be offline; only reverting to NVM 4.50 helped. 4.60 didn't work either.

I didn't try to compile Intel's FreeBSD driver, which is much newer than what 14.3-p5 comes with.
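
For reference, the version strings I'm comparing before and after a flash all come from the ice(4) sysctls. A minimal check, assuming the first port shows up as dev.ice.0:

sysctl dev.ice.0.iflib.driver_version    # loaded ice driver version
sysctl dev.ice.0.fw_version              # fw / api / nvm / netlist versions
sysctl dev.ice.0.ddp_version             # DDP package the driver actually loaded
dmesg | grep -i -e "DDP" -e "Safe Mode"  # did the DDP load, or did the driver fall back to Safe Mode?

Capturing these before and after the NVM update makes it obvious when the DDP package refuses to load against the new firmware.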
#2
Quote from: pikachu937 on June 01, 2025, 06:49:09 PM
Hello,

I'm encountering an issue on OPNsense 25.1.7_4 (FreeBSD 14.2-RELEASE-p3) with an Intel E810-XXV network adapter. The following error appears in logs for both ice0 and ice1 interfaces:

ice0: ice_add_rss_cfg on VSI 0 could not configure every requested hash type
ice1: ice_add_rss_cfg on VSI 0 could not configure every requested hash type

Configuration:
OPNsense: 25.1.7_4 (FreeBSD 14.2-RELEASE-p3)
Network adapter: Intel E810-XXV
Driver: ICE 1.43.2-k (dev.ice.0.iflib.driver_version: 1.43.2-k)
Firmware: NVM 4.80 (dev.ice.0.fw_version: fw 7.8.2 api 1.7 nvm 4.80 etid 8002053c netlist 4.4.5000-1.16.0.fb344039 oem 1.3805.0)
DDP: ICE OS Default Package 1.3.41.0 (dev.ice.0.ddp_version: ICE OS Default Package version 1.3.41.0, track id 0xc0000001)
Settings: 32 Rx/Tx queues (dev.ice.0.iflib.override_nrxqs=32, dev.ice.0.iflib.override_ntxqs=32), IPv6 disabled on interfaces.

Traffic: 99% is UDP (RTP/RTCP).

Issue: The RSS error prevents even distribution of network queues across CPU cores, reducing performance. The issue affects both ice0 and ice1 interfaces. Since 99% of traffic is UDP (RTP/RTCP), filtering UDP is not an option.

Steps Taken:
Attempted to load DDP 1.3.53.0 by placing ice.pkg in /lib/firmware/intel/ice/ddp/ and adding hw.ice.ddp_override="1" to /boot/loader.conf.local. However, DDP 1.3.53.0 does not load; the system uses 1.3.41.0 (log: ice1: DDP package already present on device).
Tried updating NVM firmware using Intel NVM Update Utility, but the version remains 4.80.
Disabled IPv6 on interfaces via ifconfig ice0 inet6 -accept_rtadv and OPNsense web interface.
Tested reducing queues to 16 (override_nrxqs=16, override_ntxqs=16), but the error persists.
Attempted filtering UDP/SCTP via firewall rules, with no effect, as UDP (RTP/RTCP) constitutes 99% of traffic.
Compiling a new driver is not possible due to missing kernel source in OPNsense.

dmesg | grep DDP
ice0: The DDP package was successfully loaded: ICE OS Default Package version 1.3.41.0, track id 0xc0000001.
ice1: DDP package already present on device: ICE OS Default Package version 1.3.41.0, track id 0xc0000001.

dmesg | grep ice | grep rss
ice0: ice_add_rss_cfg on VSI 0 could not configure every requested hash type
ice1: ice_add_rss_cfg on VSI 0 could not configure every requested hash type

Questions:
How can I resolve the RSS error, given that 99% of traffic is UDP (RTP/RTCP)? Is it related to the driver or DDP 1.3.41.0?
Why does DDP 1.3.53.0 fail to load despite hw.ice.ddp_override="1"?
Is there a way to configure RSS hash functions for UDP without sysctl dev.ice.0.rss_hash_config?

Could upgrading OPNsense resolve the issue?

Any suggestions or insights would be greatly appreciated! I can provide additional logs if needed.



Your issue is the firmware version:

Firmware: NVM 4.80 (dev.ice.0.fw_version: fw 7.8.2 api 1.7 nvm 4.80 etid 8002053c netlist 4.4.5000-1.16.0.fb344039 oem 1.3805.0)


4.80 isn't supported by the FreeBSD 1.43.3-k driver.
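
If you want to confirm that this is what you're hitting before reflashing, both the firmware value and the RSS symptom are visible from the shell (same sysctl name you already posted; the grep pattern is just the message from your log):

sysctl dev.ice.0.fw_version     # the "nvm 4.80" part is what the driver objects to
dmesg | grep ice_add_rss_cfg    # the RSS hash-type error from your report

If the firmware really is the culprit, that RSS message should disappear after the NVM update.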

#3
E810 here

Tried to update to 4.91 from https://www.intel.com/content/www/us/en/download/19625/non-volatile-memory-nvm-update-utility-for-intel-ethernet-network-adapters-e810-series-windows.html


And it is such a PITA; I wasted 3 hours trying to make it work.


Long story short, the latest firmware that the OPNsense 25.7.8 ice driver supports is NVM 4.50:


[1] ice0: fw 7.5.4 api 1.7 nvm 4.50 etid 8001d8ba netlist 4.3.5000-1.14.0.99840ef4 oem 1.3597.0


#4
Upgrade okay; Init7 25 Gbps on iperf3 with E810 + DDP works well.
#5
Quote from: MoonbeamFrame on June 23, 2025, 12:29:21 PM
XGS-PON is becoming available from my service providers and I have started to look for hardware to facilitate migration to these services.

While I can find hardware with combinations of 2.5Gbit/s copper with SFP+, I have yet to find much in the SOHO market with 10Gbit/s copper with SFP+.

Does anyone have any recommendations for hardware that will run OPNsense?

I run 25 Gbps internet on an MS-A2 (9955HX). In the past I used a 14700, non-K version.

An important consideration is the network card you will use. The E810 and ConnectX-4 Lx both work great for me.

I get 25 Gbps routing performance out of OPNsense easily on 8 cores.
#6
I want to say I might have the same problem, but I'm not sure at this stage.

I do see the same RSS errors in dmesg.

Recently my internet speed dropped from 25 Gbps to 8-9 Gbps. I run an Intel E810-XXV dual-port adapter on OPNsense 25.1.10.
I used to get 25 Gbps throughput through OPNsense, verified via iperf3 tests (both IPv4 and IPv6) from a LAN client to an Internet iperf3 server.

What puzzles me is that I easily get 25 Gbps in iperf3 by running it against the LAN interface of OPNsense from a LAN client.
Also, when I look at top during that test, I see all CPUs busy doing something:

last pid: 12095;  load averages:  0.95,  0.45,  0.28                                                                                                                                      up 0+01:48:02  21:13:50
83 processes:  1 running, 82 sleeping
CPU 0:  0.0% user,  0.0% nice, 56.3% system,  0.4% interrupt, 43.4% idle
CPU 1:  0.4% user,  0.0% nice, 23.0% system,  2.7% interrupt, 73.8% idle
CPU 2:  3.5% user,  0.0% nice,  2.7% system, 16.0% interrupt, 77.7% idle
CPU 3:  0.4% user,  0.0% nice, 26.2% system,  1.6% interrupt, 71.9% idle
CPU 4:  0.4% user,  0.0% nice, 68.4% system,  5.1% interrupt, 26.2% idle
CPU 5:  0.0% user,  0.0% nice, 57.4% system,  1.2% interrupt, 41.4% idle
CPU 6:  0.4% user,  0.0% nice, 57.4% system,  0.4% interrupt, 41.8% idle
CPU 7:  0.0% user,  0.0% nice, 69.5% system,  0.0% interrupt, 30.5% idle
Mem: 142M Active, 313M Inact, 763M Wired, 305M Buf, 6643M Free


So to recap (rough commands for both tests below):

LAN client -> ice1 LAN OPNsense (iperf3 -s) -> 25 Gbps easily
LAN client -> ice1 -> ice0 (WAN) -> Internet iperf3 server -> dropped from 24-25 Gbps to 8-9 Gbps
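
For what it's worth, these are roughly the commands behind those two numbers; the addresses are placeholders, the real ones being my OPNsense LAN address and whatever public iperf3 server I test against:

# on OPNsense (LAN-side test)
iperf3 -s

# on the LAN client, against the OPNsense LAN address (placeholder IP)
iperf3 -c 192.168.1.1 -P 4

# on the LAN client, against an Internet iperf3 server routed through OPNsense (placeholder hostname)
iperf3 -c iperf.example.net -P 4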


At this stage, I'm not sure if this is OPNsense or my ISP...
#7
Quote from: Monviech (Cedrik) on May 08, 2025, 05:48:40 PM
Quote from: bugacha on May 08, 2025, 05:42:49 PM
I read this thread and I still don't understand a few things.


I use Unbound as DNS and don't want to change to Dnsmasq.

I'm all in favor to drop ISC DHCP and migrate to Kea but I need Router Advertisement support for IPV6.

What are my options ?


That is easy, you use:
- Services/Unbound DNS
- Services/Kea DHCPv4
- Services/Router Advertisements

Apologies, I also use DHCPv6 and RA runs in Assisted mode today.

It's a standard setup: I get an IPv6 prefix from my ISP and use DHCPv6 to assign addresses from one of the subnets.
#8
I read this thread and I still don't understand a few things.


I use Unbound as my DNS server and don't want to change to Dnsmasq.

I'm all in favor of dropping ISC DHCP and migrating to Kea, but I need Router Advertisement support for IPv6.

What are my options?
#9
I added a few backends to an Upstream, then used them in SNI-based routing. After I finished testing, I removed the SNI-based routing but am unable to remove the unused Upstream.

I just get an "Item in use by" message:

Nginx - {Nginx.sni_hostname_upstream_map_item.5dbfd85e-f1aa-43ca-bfa2-313a684a199c}
The item is 100% not referenced anywhere; there is nothing at all in nginx.conf.

Tried manually grepping files in /etc and /usr/local/etc - no matches at all.
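
The only other place I can think to check is the OPNsense config store itself; assuming the stale map entry lives in /conf/config.xml like the rest of the model data, something like this should show where the UUID is still referenced:

grep -n "5dbfd85e-f1aa-43ca-bfa2-313a684a199c" /conf/config.xml                    # OPNsense configuration store
grep -rn "5dbfd85e-f1aa-43ca-bfa2-313a684a199c" /usr/local/etc/nginx 2>/dev/null   # generated nginx config, if present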

An Nginx restart or a reboot didn't help at all.

Any ideas on how to fix this?
#10
i7-14700
Proxmox
Opnsense 25.1.1
Mellanox ConnectX-4 Lx
1 port passed through as WAN
1 port bridged as LAN into OPNsense and TrueNAS


speedtest from TrueNAS

# bin/speedtest -s 43030

   Speedtest by Ookla

      Server: Init7 AG - Winterthur (id: 43030)
         ISP: Init7
Idle Latency:     1.44 ms   (jitter: 0.08ms, low: 1.42ms, high: 1.70ms)
    Download: 23453.57 Mbps (data used: 23.0 GB)
                  3.30 ms   (jitter: 3.67ms, low: 1.13ms, high: 26.84ms)
      Upload: 22000.78 Mbps (data used: 22.7 GB)
                  1.25 ms   (jitter: 0.11ms, low: 1.08ms, high: 1.94ms)
 Packet Loss:     0.0%

https://www.speedtest.net/result/c/db97cbad-a4d3-4d23-af27-980535ffbe23
#11
Quote from: Netfloh on April 24, 2024, 10:36:00 PM
To finish this up ...

The Intel E810-XXVDA2 needs some developer love to work with OPNsense. I changed to a Broadcom P225P and that card works out of the box, without compiling a kernel or ports.

Thanks Netnut for your help!

Did you try 25.1?

The ice_ddp driver has been upgraded to the latest version.
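
A quick way to check what your install actually ships (kldstat and the ice_ddp module name are stock FreeBSD; the path assumes the module sits in the default kernel directory):

kldstat | grep ice_ddp           # is the DDP module loaded right now?
ls -l /boot/kernel/ice_ddp.ko    # is the module present on disk?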
#12
So I'm on 25.1 and I get the same error for ice_ddp.ko in dmesg:

# dmesg | grep ice_ddp
ice0: The DDP package module (ice_ddp) failed to load or could not be found. Entering Safe Mode.
ice0: The DDP package module cannot be automatically loaded while booting. You may want to specify ice_ddp_load="YES" in your loader.conf
ice1: The DDP package module (ice_ddp) failed to load or could not be found. Entering Safe Mode.
ice1: The DDP package module cannot be automatically loaded while booting. You may want to specify ice_ddp_load="YES" in your loader.conf

EDIT: Actually, never mind. I added both of these in Tunables:
ice_ddp_load = YES and if_ice_load = YES

and by the looks of it, everything works now:

ice0: <Intel(R) Ethernet Network Adapter E810-XXV-2 - 1.43.2-k> mem 0x380000000000-0x380001ffffff,0x380002000000-0x38000200ffff irq 16 at device 0.0 on pci1
ice0: Loading the iflib ice driver
ice0: DDP package already present on device: ICE OS Default Package version 1.3.41.0, track id 0xc0000001.
ice0: fw 7.3.4 api 1.7 nvm 4.30 etid 8001b891 netlist 4.2.5000-1.14.0.2b9b23c0 oem 1.3415.0
ice0: Using 8 Tx and Rx queues
ice0: Reserving 8 MSI-X interrupts for iRDMA
ice0: Using MSI-X interrupts with 17 vectors
ice0: Using 1024 TX descriptors and 1024 RX descriptors
ice0: Ethernet address: 50:7c:6f:79:ca:e8
ice0: ice_add_rss_cfg on VSI 0 could not configure every requested hash type
ice0: PCI Express Bus: Speed 16.0GT/s Width x8
ice0: Firmware LLDP agent disabled
ice0: Link is up, 10 Gbps Full Duplex, Requested FEC: None, Negotiated FEC: None, Autoneg: False, Flow Control: None
ice0: link state changed to UP
ice0: netmap queues/slots: TX 8/1024, RX 8/1024
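
For reference, the equivalent of those two Tunables as plain loader entries (a sketch using the tunable names from the driver hint above; they would go into /boot/loader.conf.local):

if_ice_load="YES"     # load the ice(4) driver at boot
ice_ddp_load="YES"    # load the DDP package module so the driver doesn't enter Safe Mode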
#13
Does 25.1 now fully support the ice_ddp driver?