ice driver (ddp) / latest NVM firmware (4.91)

Started by bugacha, December 29, 2025, 12:15:22 PM

E810 here

Tried to update to 4.91 from https://www.intel.com/content/www/us/en/download/19625/non-volatile-memory-nvm-update-utility-for-intel-ethernet-network-adapters-e810-series-windows.html


And it is such a PITA; I wasted 3 hrs trying to make it work.


Long story short, the latest firmware that the OPNsense 25.7.8 ice driver supports is 4.50:


[1] ice0: fw 7.5.4 api 1.7 nvm 4.50 etid 8001d8ba netlist 4.3.5000-1.14.0.99840ef4 oem 1.3597.0
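(For anyone wanting to check their own card: that line comes straight from the ice(4) attach messages, so something like the following pulls the reported firmware/NVM/netlist versions and the driver build out of a running box. Just a quick sketch; it assumes the ports attach as ice0/ice1 like above.)

# firmware / NVM / netlist versions as reported at attach
dmesg | grep -E '^ice[0-9]+: fw '

# driver version string (the %desc node exists for any attached device)
sysctl dev.ice.0.%desc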



December 29, 2025, 04:21:30 PM #1

They're not paired. The driver will work fine (and not complain) with a "later-than-recommended" NVM. I'd always go for the latest NVM, but the E810 has been around for long enough (2019?) that the major bugs should have been killed by now. I'd have to look at the release notes to be sure. At any rate, I'll update if convenient or necessary (I experienced the latter with some old X710s).

What issue were you having with the update? Your link is for Windows; I don't know what the package includes. (I use the EFI updater.)

December 29, 2025, 07:50:02 PM #2 Last Edit: December 29, 2025, 07:51:56 PM by bugacha
Quote from: pfry on December 29, 2025, 04:21:30 PMThey're not paired. The driver will work fine (and not complain) with a "later-than-recommended" NVM. [...]

Nah, the driver (I guess by driver I mean the DDP package, which is what I care about) won't work if it doesn't match the firmware. I learned that the hard way:

[1] ice0: <Intel(R) Ethernet Network Adapter E810-XXV-2 - 1.43.3-k> mem 0x380000000000-0x380001ffffff,0x380002000000-0x38000200ffff irq 16 at device 0.0 on pci1
[1] ice0: Loading the iflib ice driver
[1] ice0: Error configuring transmit balancing: ICE_ERR_AQ_ERROR
[1] ice0: An unknown error occurred when loading the DDP package.  Entering Safe Mode.
[1] ice0: fw 7.10.1 api 1.7 nvm 4.91 etid 800214ab netlist 4.4.5000-1.18.0.db8365cf oem 1.3909.0
[1] ice0: Using 1 Tx and Rx queues
[1] ice0: Using MSI-X interrupts with 2 vectors
[1] ice0: Using 1024 TX descriptors and 1024 RX descriptors
[1] ice0: Ethernet address: 50:7c:6f:79:ca:e8
[1] ice0: PCI Express Bus: Speed 16.0GT/s Width x8
[1] ice0: ice_init_dcb_setup: No DCB support
[1] ice0: link state changed to UP
[1] ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
[1] ice0: netmap queues/slots: TX 1/1024, RX 1/1024
[1] ice1: <Intel(R) Ethernet Network Adapter E810-XXV-2 - 1.43.3-k> mem 0x380800000000-0x380801ffffff,0x380802000000-0x38080200ffff irq 16 at device 0.0 on pci2
[1] ice1: Loading the iflib ice driver
[1] ice0: link state changed to DOWN
[1] ice1: Error configuring transmit balancing: ICE_ERR_AQ_ERROR
[1] ice1: An unknown error occurred when loading the DDP package.  Entering Safe Mode.
[1] ice1: fw 7.10.1 api 1.7 nvm 4.91 etid 800214ab netlist 4.4.5000-1.18.0.db8365cf oem 1.3909.0
[1] ice1: Using 1 Tx and Rx queues
[1] ice1: Using MSI-X interrupts with 2 vectors
[1] ice1: Using 1024 TX descriptors and 1024 RX descriptors
[1] ice1: Ethernet address: 50:7c:6f:79:ca:e9
[1] ice1: PCI Express Bus: Speed 16.0GT/s Width x8
[1] ice1: ice_init_dcb_setup: No DCB support
[1] ice1: link state changed to UP
[1] ice1: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: FC-FEC/BASE-R, Autoneg: False, Flow Control: None
[1] ice1: netmap queues/slots: TX 1/1024, RX 1/1024
[1] ice1: link state changed to DOWN
[9] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[10] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[11] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[12] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[14] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[15] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[19] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[20] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[21] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[22] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[24] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[25] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[26] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[27] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[28] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[28] ice1: Unable to configure the main VSI for Tx: ENODEV
[29] ice1: Failed to add VLAN filters:
[29] ice1: - vlan 2, status -105
[29] ice1: Failure adding VLAN 2 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[30] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[30] ice1: Unable to configure the main VSI for Tx: ENODEV
[31] ice1: Failed to add VLAN filters:
[31] ice1: - vlan 2, status -105
[31] ice1: Failure adding VLAN 2 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[32] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[32] ice1: Unable to configure the main VSI for Tx: ENODEV
[34] ice1: Failed to add VLAN filters:
[34] ice1: - vlan 20, status -105
[34] ice1: Failure adding VLAN 20 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[35] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[35] ice1: Unable to configure the main VSI for Tx: ENODEV
[36] ice1: Failed to add VLAN filters:
[36] ice1: - vlan 20, status -105
[36] ice1: Failure adding VLAN 20 to main VSI, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[37] ice1: Failed to set LAN Tx queue 0 (TC 0, handle 0) context, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[37] ice1: Unable to configure the main VSI for Tx: ENODEV
[38] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[39] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[40] ice0: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_TIMEOUT aq_err OK
[41] ice1: ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[42] ice1: Could not add new MAC filters, err ICE_ERR_AQ_FW_CRITICAL aq_err OK
[42] ice1: Failed to synchronize multicast filter list: EIO

Both ports would be offline; only reverting to the 4.50 NVM helped. 4.60 didn't work either.
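For anyone hitting the same thing, a quick way to confirm the DDP was rejected and the NIC dropped into Safe Mode (a rough sketch; it assumes the driver and DDP package are loaded as modules, which may not match a stock OPNsense kernel):

# DDP / Safe Mode complaints from the boot log
dmesg | grep -E 'ice[0-9]+: .*(DDP|Safe Mode)'

# is the DDP firmware module (ice_ddp.ko) actually loaded?
kldstat | grep -i ice

# load it by hand if it is missing (normally done with ice_ddp_load="YES" in loader.conf)
kldload ice_ddp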

I didn't try to compile Intel's FreeBSD driver, which is much newer than the one that ships with FreeBSD 14.3-p5.
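If anyone does want to try Intel's out-of-tree driver, the build is presumably the usual FreeBSD module dance, roughly like the sketch below. I haven't done it: the tarball name, directory layout and module name are placeholders, and you would need matching kernel sources installed, which OPNsense doesn't ship by default.

# after downloading Intel's FreeBSD ice driver source tarball (name below is a placeholder)
tar xzf ice-x.y.z.tar.gz
cd ice-x.y.z/src            # assumed layout: a src/ directory with the module Makefile
make && make install        # builds against the installed kernel sources; usually installs under /boot/modules
kldload if_ice              # module name assumed -- check what the build actually produces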

Quote from: bugacha on December 29, 2025, 07:50:02 PMNah, the driver (I guess by driver I mean the DDP package, which is what I care about) won't work if it doesn't match the firmware. I learned that the hard way [...]

Huh! I have to say, I only use the E810 under FreeBSD (not OPNsense); my soon-to-be-wiped machine has a slightly older driver (1.43.2-k), and I use the default DDP (loader.conf: if_ice_load="YES", ice_ddp_load="YES"). (I don't believe I'll update from NVM 4.90 to 4.91 as there don't appear to be any updates relevant to my installation.)
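For reference, that is just the stock ice(4) arrangement in /boot/loader.conf (or a file under /boot/loader.conf.d/), nothing exotic:

# /boot/loader.conf
if_ice_load="YES"      # load the ice(4) driver module at boot (unnecessary if it is built into the kernel)
ice_ddp_load="YES"     # load the default DDP package (ice_ddp.ko) so the NIC does not fall back to Safe Mode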