Client IPv6 temporary addresses not regenerating after some time

Started by OPNenthu, December 04, 2024, 06:55:59 AM

As I said, IPv6 privacy extensions do not involve RS. Those obviously only get sent when a client needs a prefix and router (i.e. on startup), not on renewal of temporary addresses. So you should not "expect" any RS at these times.

RAs are sent periodically or when explicitly requested by an RS.

So, your clients should have everything they need to generate their new temporary addresses. As I wrote, the only thing you could potentially (I haven't tried) expect to see at prolongation time are neighbor discovery packets to avoid duplicate addresses. If those are getting blocked, there could be problems.

All of that being said, I see my clients get new temporary IPv6 addresses for days on end, and they go through their lifecycles just as expected:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether d6:35:77:88:44:00 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    altname ens18
    inet 192.168.10.3/24 metric 100 brd 192.168.10.255 scope global dynamic eth0
       valid_lft 15274sec preferred_lft 15274sec
    inet6 2001:a61:5555:c210:950e:5002:8710:8753/64 scope global temporary dynamic
       valid_lft 74539sec preferred_lft 2426sec
    inet6 2001:a61:5555:c210:dbcf:584:836c:c541/64 scope global temporary deprecated dynamic
       valid_lft 60254sec preferred_lft 0sec
    inet6 2001:a61:5555:c210:7752:80f8:e451:1cb9/64 scope global temporary deprecated dynamic
       valid_lft 45966sec preferred_lft 0sec
    inet6 2001:a61:5555:c210:19f0:484c:537b:be19/64 scope global temporary deprecated dynamic
       valid_lft 31680sec preferred_lft 0sec
    inet6 2001:a61:5555:c210:3506:86f5:9ed7:1496/64 scope global temporary deprecated dynamic
       valid_lft 17393sec preferred_lft 0sec
    inet6 2001:a61:5555:c210:9be0:e5f3:8892:c32f/64 scope global temporary deprecated dynamic
       valid_lft 3106sec preferred_lft 0sec
    inet6 2001:a61:5555:c210:d435:77ff:fe88:2299/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 85900sec preferred_lft 13900sec
    inet6 fe80::d435:77ff:fe88:4400/64 scope link
       valid_lft forever preferred_lft forever

I am at a loss as to why the clients stop renewing their privacy IPs. It cannot be RS, since those are never generated at renewal time; it cannot be RAs, either; so it must be something that keeps the clients from assuming their new IPs, like DAD.
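
One way to check this on a client is to watch the neighbor discovery traffic directly; a minimal sketch, assuming eth0 is the client interface (DAD solicitations are the ones sourced from the unspecified address ::):

# Show Neighbor Solicitations (ICMPv6 type 135) and Advertisements (type 136);
# ip6[40] is the ICMPv6 type byte when no extension headers are present.
tcpdump -i eth0 -n 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'
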
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on December 14, 2024, 01:25:50 PM[...] so it must be something that keeps the clients from assuming their new IPs, like DAD.

As I was researching DAD to learn how to trace it, I got the idea to cross-check my Wireshark capture against the 'pf' logs in OPNsense:

2024-12-14 03:53:34.651149    fe80::xxxx:xxxx:xxxx:c2e    ff02::1:ff01:bb57    ICMPv6    86    Neighbor Solicitation for 26xx:xx:xxxx:xxx3:xxxx:xxxx:xxxx:bb57 from 64:xx:xx:xx:xx:2e

2024-12-14T03:53:32-05:00    Informational    filterlog     36,,,acdbb900b50d8fb4ae21ddfdc609ecf8,vlan0.30,match,pass,out,6,0x00,0x00000,255,ipv6-icmp,58,32,fe80::xxxx:xxxx:xxxx:c2e,ff02::1:ff01:bb57,datalength=32

So apparently, I have a >2s time skew between my Windows box and OPNsense.  Confirmed by comparing the system dates.

I don't know if this would impact the Linux clients, as they don't seem to have the same skew, but I will be kicking myself very hard if this was the cause all along. Will know in a couple of days, I guess.

Now the stumper question: who has the correct time?

- OPNsense is using its own NTP service with [0-3].opnsense.pool.ntp.org.
- Windows is set to 'time.nist.gov'
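
For cross-checking, each side's NTP state can be queried directly; a quick sketch (the first command assumes OPNsense is running the stock ntpd):

# On OPNsense: list NTP peers and their measured offsets
ntpq -p

# On Windows: show the sync source and offset, then sample NIST directly
w32tm /query /status
w32tm /stripchart /computer:time.nist.gov /samples:3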

It's time for a break...



UPDATE:

It's DNS - I was getting SERVFAIL for time.nist.gov (in fact, for most *.gov domains).  Must be a DNSSEC issue.
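
For anyone hitting the same thing, a quick way to tell a DNSSEC validation failure apart from an upstream outage (a sketch; substitute your own resolver address for 192.168.1.1):

# Query through the validating resolver -- fails with SERVFAIL
dig time.nist.gov @192.168.1.1

# Same query with DNSSEC checking disabled (+cd); if this one
# succeeds, validation is what is failing
dig +cd time.nist.gov @192.168.1.1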



"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

The clock skew was not the issue.  I'm glad for that because I realized after my last post that this would have been a vulnerability in the protocol if it were the case.

I prompted ChatGPT to act as a Ubiquiti support engineer, and together we walked through my configuration, checking everything from routing and NDP tables to host firewalls to multicast settings on the switch.  It also wanted me to check any ACLs on the switch, but there are none that I can see (no interface option in the UniFi Network controller for ACLs).

I'll need a couple days to see if there is success, but I'll summarize the changes here.  Are these settings generally correct / appropriate, or have I opened up a security or performance hole?

In UniFi controller:
- Enabled "IoT Auto-Discovery / mDNS" for the IoT network only.
- Enabled "Multicast Filtering" and "IGMP Snooping" on all networks / VLANs.
- Left disabled "Forward Unknown Multicast Traffic"
- Enabled "Fast Leave" option (all networks).
- Set the multicast Querier as the UniFi switch itself (all networks).
- Configured the trunk port carrying all tagged VLANs to OPNsense as a "Multicast Router Port" in port profiles
- Disabled "DHCP Guarding" / "DHCP Snooping" functions temporarily

In FreshTomato (the firmware I used to convert my old Asus WiFi router into an access point):
  - Enabled "IGMP Snooping"
  - Enabled IGMPv2

In OPNsense:
  - Added a rule for IGMP packets on all VLAN interfaces, as these started appearing and were getting default denied
      Action: pass
      Interface: HomeSubnets (all VLANs group)
      Direction: In
      Protocol: IPv4 IGMP
      Src: 0.0.0.1/32
      Dest: 224.0.0.1/32
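
For reference, the raw pf equivalent of that GUI rule would look roughly like this (a sketch, not copied from my config). One detail worth knowing: IGMP packets carry the IP Router Alert option, which pf drops by default, so the rule needs allow-opts -- in the OPNsense GUI that corresponds to ticking "Allow options" under the rule's advanced features:

# Hypothetical pf form of the IGMP pass rule above
# (HomeSubnets is the interface group)
pass in quick on HomeSubnets proto igmp from 0.0.0.1 to 224.0.0.1 allow-opts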


The idea behind these changes is to make sure that both the switch and the AP (with 4 internal bridge interfaces) pass multicast traffic properly, so that NDP is not affected.  Even though much of this relates to IPv4 traffic, ChatGPT was convinced that it could have negative impacts on IPv6 as well.

I left everything pertaining to IGMP Proxy off, as I want OPNsense to manage the inter-VLAN routing of multicast.  For now I have not added any specific rules in OPNsense for these, so I'm expecting that the default ICMPv6 rules are enough for NDP functions.

I also enabled the built-in Windows 10 firewall rules for ICMPv6 Echo, as they were disabled and not passing.  Though NDP shouldn't depend on this, it was causing the Linux boxes to fail to discover Windows as a neighbor.
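
For anyone looking for those rules, they can be toggled from an elevated prompt; a sketch, assuming the stock en-US rule name (it may differ by locale or edition):

netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv6-In)" new enable=Yes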


"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

One more strange thing observed after changing these settings is that my Chromecast devices on the IoT network started making DNS queries to "192.168.0.1".  I'm used to them constantly trying to reach 8.8.8.8, but this IP is new.  There is no 192.168.0.x network anywhere, so I am wondering if this is something the Chromecasts are doing internally (setting up their own network somehow?).



Anyway, they are firewalled to the IoT VLAN and I am catching and redirecting all DNS that isn't destined to the firewall, so the 'rdr' rules are just sending this traffic to Unbound.
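
For completeness, the redirect amounts to something like this in pf terms (a sketch with a hypothetical IoT interface name; in the GUI this is a port forward with the destination inverted against "This Firewall"):

# Redirect any DNS not aimed at the firewall itself to local Unbound
rdr on vlan0.40 proto { tcp udp } from any to ! (vlan0.40) port 53 -> 127.0.0.1 port 53
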
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

Progress!

Enabling the aforementioned settings on the UniFi switch and creating the IGMP 'pass' rule in OPNsense seem to have resolved the issue on the clients which are directly connected to the switch.

Note the Windows client had been rebooted a couple of days ago, so it has fewer deprecated temporaries than the Linux client, but so far both are generating reliably.

~ $ ip -6 a
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 26xx:xx:xxxx:xx83:6869:3745:906b:c26d/64 scope global temporary dynamic
       valid_lft 86388sec preferred_lft 14388sec
    inet6 26xx:xx:xxxx:xx83:8613:51e3:6d0a:e60a/64 scope global temporary deprecated dynamic
       valid_lft 86388sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xx83:ab4e:d1cb:2cf4:7862/64 scope global temporary deprecated dynamic
       valid_lft 86388sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xx83:8158:8786:aea:cde7/64 scope global temporary deprecated dynamic
       valid_lft 86388sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xx83:b779:f3de:d005:9f6c/64 scope global temporary deprecated dynamic
       valid_lft 86388sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xx83:f551:xxxx:xxxx:xx2f/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86388sec preferred_lft 14388sec
    inet6 fe80::ec8a:1bb1:304d:b712/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

>netsh interface ipv6 show addresses
[...]
Addr Type  DAD State   Valid Life Pref. Life Address
---------  ----------- ---------- ---------- ------------------------
Public     Preferred    23h59m57s   3h59m57s 26xx:xx:xxxx:xx83:2bbd:xxxx:xxxx:xxx57
Temporary  Deprecated   23h59m57s         0s 26xx:xx:xxxx:xx83:9d4b:705:a168:abd2
Temporary  Deprecated   23h59m57s         0s 26xx:xx:xxxx:xx83:c0fc:4705:e096:a9ef
Temporary  Preferred    23h59m57s   3h59m57s 26xx:xx:xxxx:xx83:c14d:7f8f:4b45:20ff
Other      Preferred     infinite   infinite fe80::d968:a93c:3f8a:521f%14

The wireless client is still getting stuck, so I think this is a clue that multicast is still not configured correctly on the AP:

~$ ip -6 a
[...]
3: wlxa842a105d67b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 26xx:xx:xxxx:xx83:d62c:d808:1157:6926/64 scope global temporary deprecated dynamic
       valid_lft 86362sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xx83:952a:xxxx:xxxx:xx0b/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86362sec preferred_lft 14362sec
    inet6 fe80::c7:5d08:e1c0:cb9b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

I had posted a screenshot in my last post with the relevant AP settings in FreshTomato.  IGMP Snooping is currently enabled there, and was previously disabled.  Neither setting has worked reliably so far for the WiFi-connected Linux client.  What is the technically correct way to configure multicast on an access point?
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

It seems that Multicast Listener Discovery (MLD) for IPv6 is tied together with IGMP Snooping for IPv4 in UniFi switches.  Per Ubiquiti's automated support tool:

Quote from: UniFi GPT
Steps to Control Multicast Listener Discovery (MLD) on a UniFi Switch

To control Multicast Listener Discovery (MLD) on your UniFi switch, you can manage it through the IGMP Snooping settings, as MLD Snooping is typically integrated with IGMP Snooping in network switches. Here's how you can enable and configure these settings:

    [...]

    Select the Network/VLAN:
        Choose the network or VLAN for which you want to configure MLD/IGMP Snooping.

    Enable IGMP Snooping:
        Scroll down to the Multicast Management section.
        Enable IGMP Snooping. This setting will also enable MLD Snooping for IPv6 multicast traffic.

    Configure Querier:
        If necessary, configure a specific switch as the Querier. This ensures that there is a designated device to manage multicast group memberships.
[...]

By following these steps, you should be able to control MLD on your UniFi switch. If you need further assistance, please click here to contact support.

I found a couple other reports, such as this one, of Ubiquiti users not having functional IPv6 until these settings were enabled.

It's not clear to me why these functions are needed on a small home network with a single switch, but enabling them made a difference.  It could be a quirk of UniFi at the time of this writing that interferes with IPv6 multicast messaging in the default (off) state.

What I need from this community now is recommendations for how to set up the OPNsense rules properly now that IGMP functions are enabled on the switch.  As I mentioned earlier, once I enabled these I started seeing IGMP packets on all my VLAN interfaces with src 0.0.0.1 and dest 224.0.0.1.  These were getting dropped by the "Default deny / state violation" rule.  I added a very narrow 'pass' rule for these to my VLAN interface group ("HomeSubnets"), but I suspect this is not best practice.


Some clarifying questions:

1) I'm assuming that no specific IPv6 MLD rules are needed from me, as OPNsense already has default rules for ICMPv6, and everything needed for NDP and Privacy Extensions to work is already included?

2) For IPv4 IGMP, do I need to widen my pass rule?  Is it advisable to allow all IGMP traffic, or all IGMP with dest 224.0.0.0/24 instead of just 224.0.0.1/32?  These are the only IGMP-related packets I am seeing, which is why I started with the narrow rule.

3) How do I allow devices connected to the Guest VLAN to participate in multicast traffic originating from that subnet only?  I have an employer-issued laptop on the Guest network that might break if I disallow multicast (not sure how their IT department has things set up, but I don't want to risk it).  At the same time, I don't want hosts on the Guest network to see multicast from my other (private) networks.  Is this the default behavior of multicast traffic without IGMP proxy or mDNS repeater?  Or do I need explicit rules to do this filtering?


As for the WiFi AP, I have disabled the IGMP functions on it entirely and rebooted the AP.  Now my WiFi client has gotten a fresh temporary IPv6 address.  Will monitor to see if it continues to refresh daily.
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

The wireless client is also generating temporaries reliably now...

$ ip -6 a
[...]
3: wlxa842a105d67b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 26xx:xx:xxxx:xxx3:cde6:b967:ddde:2c42/64 scope global temporary dynamic
       valid_lft 86097sec preferred_lft 14097sec
    inet6 26xx:xx:xxxx:xxx3:685:919b:e58b:48fe/64 scope global temporary deprecated dynamic
       valid_lft 86097sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xxx3:471e:a816:3f83:e635/64 scope global temporary deprecated dynamic
       valid_lft 86097sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xxx3:8e68:c66c:b515:720c/64 scope global temporary deprecated dynamic
       valid_lft 86097sec preferred_lft 0sec
    inet6 26xx:xx:xxxx:xxx3:xxxx:xxxx:xxxx:xx0b/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86097sec preferred_lft 14097sec
    inet6 fe80::c7:5d08:e1c0:cb9b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

The multicast configurations remain a bit of an enigma to me, but I am marking the topic as solved for now.  Thank you!
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

Quoted from another thread:
Quote from: OPNenthu on February 12, 2025, 01:39:28 PMThe only problem that I'm still facing is that my SLAAC temporary addresses fail to renew once they get deprecated.  That old thread of mine is not actually resolved, sadly, and I'm really starting to blame UniFi for it
I haven't read everything, and I am also not capable of understanding everything, but what I am somewhat sure about is that it is a *Sense problem, maybe a FreeBSD problem, because I have it too, on the other Sense. I have an ISP which changes prefixes on a daily basis (1&1). When using the Fritzbox, there is no problem for my Windows clients, so it must be the router. *Sense is not invalidating the old prefixes, as far as I have read about this topic, so machines will not generate new IP addresses.
But the other Sense is mostly used and developed in the US, where IPv6 adoption is low and most people have cable internet with addresses that change rarely, so there is not enough pressure to work on this problem. So I guess it is on the OPNsense team to fix it for all of us, and they don't or can't.

Now, what I did to somewhat mitigate the problem: my prefix changes at 4 o'clock every morning, so I use WiFi plugs (with timers) on all my switches to cut power for a minimal time. This makes the NICs go down and up on all my connected client machines, and with that the old prefix is discarded and the new one is used. I know, it is not pretty.
Btw., I am using FreshTomato too, but with an ARM chip, not MIPS, and only for WiFi. I see the problems on hardwired machines too; again, all with the other Sense.

I have no problem with renewals of IPv6 assignments via RA. I actually have a UniFi switch, but IGMP snooping was enabled the whole time, so probably that was the vital difference from OPNenthu's setup.

I suspect that your problem is different:

From memory, I would say that I observed old prefixes not being removed via an RA with lifetime=0 from OPNsense when the prefix changes, e.g. when the connection drops and is re-established or when the ISP changes it. That does not happen because "DeprecatePrefix on" is not set in OPNsense's radvd.conf.
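
For illustration, this is the kind of radvd stanza that would announce a withdrawn prefix with zeroed lifetimes (a sketch with a placeholder interface and a documentation prefix, not OPNsense's actual generated config):

interface vlan0.30 {
    AdvSendAdvert on;
    MinRtrAdvInterval 200;
    MaxRtrAdvInterval 600;
    prefix 2001:db8:0:10::/64 {
        AdvOnLink on;
        AdvAutonomous on;
        # When this prefix disappears from the interface, advertise it
        # once more with zero lifetimes so clients drop it immediately
        DeprecatePrefix on;
    };
};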

I am not in that situation any more, since my new ISP hands out "arbitrary" IPv6 prefixes, but neither changes them on disconnect nor during a running connection.

But I think that can be reduced to only a temporary problem when you use short RA lifetimes like 200 seconds, since then the old prefix will be thrown away after that time and the new prefix prevails. That is, iff you use SLAAC only and not DHCPv6; I explained that (in German) here.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

(Sorry for the cross-post confusion. This was mentioned in this thread recently.)

Just a couple updates for the community:

I posted this on the Ubiquiti forum (no replies):

https://community.ui.com/questions/Unreliable-IPv6-temporary-address-generation/64ae65cb-f7d7-4a79-8bfc-c97efdc0005d

To date, I've only found a few reports of others having various OS-specific issues in either Windows or Linux, and all of those threads are several years old with no real follow-up.  I don't know if all the underlying OS issues on desktop platforms have been resolved or not (the situation is probably better on mobile OSes).  All I can say is that because @meyergru has a working setup in Windows 11, Microsoft has probably at least ironed it out in later versions of Windows.

In order to observe the issue, one has to have a client PC running with zero downtime (no reboots, no shutdowns) over several days.  Temporary address generation works; it's only the automatic re-generation of deprecated addresses that gets gummed up, and I personally am seeing it on both Windows and Linux.
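
A low-effort way to catch it without babysitting the machine is to log the count of preferred temporaries over time; a sketch for a Linux client, assuming eth0 (deprecated addresses show as "temporary deprecated dynamic", so they don't match the grep):

#!/bin/bash
# Log the number of still-preferred temporary addresses once per hour;
# a healthy client should never sit at 0 for long.
while :; do
  n=$(ip -6 addr show dev eth0 | grep -c 'temporary dynamic')
  echo "$(date -Is) preferred temporaries: $n"
  sleep 3600
done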

Here are some interesting threads, but they are probably red herrings; as I said, my Windows 10 box is having the issue too.

https://github.com/systemd/systemd/issues/13218
https://github.com/systemd/systemd/issues/19838
https://bugzilla.redhat.com/show_bug.cgi?id=1419461

Quote from: Bob.Dig on February 13, 2025, 10:56:41 AMBut the other Sense is mostly used and developed in the US, where IPv6 adoption is low and most people have cable internet with addresses that change rarely, so there is not enough pressure to work on this problem.

I am in the US, and my IPv6 prefix delegation from the cable ISP has been quite sticky across several months and modem resets.  Next time I have the opportunity to speak to them, I will try asking whether it's static or dynamic and how often it changes.  Because it's apparently not changing, though, the issue of unreliable temporary address generation becomes even more important: I now have a situation where my public IPv4 changes periodically (at least on modem reboot) but my IPv6 does not.

Quote from: meyergru on February 13, 2025, 11:49:44 AMI have no problem with renewals of IPv6 assignments via RA. I actually have a UniFi switch, but IGMP snooping was enabled the whole time, so probably that was the vital difference from OPNenthu's setup.

I did also enable IGMP snooping a few weeks back, although it had no effect on this issue.

I also enabled Flow Control in UniFi Network, as without it I was getting terrible iperf3 performance (only 300-400 Mbps in some instances over 2.5GbE links, up to 1.9 Gbps in others, seemingly at random) and a very large number of retransmits in all cases.  With Flow Control I am getting consistently high transfers (~2.3 Gbps) and near-zero retransmits.  I don't know why, but Flow Control seems to be critically necessary in UniFi, even though it's off by default.
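
For anyone wanting to compare numbers, this is the kind of test involved (a sketch; the server address is a placeholder):

# On one box, run the iperf3 server
iperf3 -s

# On the other: a 30-second run, then the same in reverse (-R);
# the Retr column shows TCP retransmits
iperf3 -c 192.168.10.3 -t 30
iperf3 -c 192.168.10.3 -t 30 -R
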
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

I now have Windows 11, but until four months ago it was Windows 10, and there were no IPv6 problems either. The lifetimes of the addresses are quite long, however, and the Windows boxes are shut down overnight, so I would not notice a problem if it occurred only after more than a day.

But I also have several Linux hosts that run 24/7 and they work just fine. Most machines use RFC 4941, so they actually use temporary addresses.

None of these machines are on wireless, though (my wireless APs are UniFi as well). But I think we can rule that out, because your clients obviously receive the RAs, so it is not something blocking those.

I have a pseudo-static IPv6 prefix like you, and my IPv6 configuration is explained here.

Reading most of this thread again, I see one misunderstanding: by 200 and 600 seconds, I meant the minimum and maximum intervals for RAs configured on OPNsense, not lifetimes on the client.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on February 14, 2025, 12:08:40 AMThe lifetimes of the addresses are quite long, however, and the Windows boxes are shut down overnight, so I would not notice a problem if it occurred only after more than a day.

Yeah, I think that's one reason why more people aren't seeing / complaining about this.  Clients get powered down during off-hours and servers don't use privacy addresses.

We need some more brave souls willing to use a little energy at night for the sake of improving the IPv6 experience!  Come on, there must be some night owls and insomniacs here :)
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

There may be another explanation for your observations if they popped up under the 14.2 kernel only:

https://forum.opnsense.org/index.php?msg=229494
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Thanks, I'll follow that thread although I'm still on OPNsense 24.7.12_4 (14.1-RELEASE-p6).

For what it's worth, I tried your test and am not able to reproduce the ND timeout.  This is from my Raspberry Pi to the OPNsense LAN interface on the same VLAN.

#!/bin/bash
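# Repeatedly solicit the router's link-local address on eth0:
#   -m    show all responses
#   -n    numeric output (no reverse DNS)
#   -r 1  send a single solicitation per run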

while :
do
  ndisc6 -m -n -r 1 fe80::xxxx:xxxx:xxxx:c30 eth0
done

~ $ ./ndisc6-test.sh
Soliciting fe80::xxxx:xxxx:xxxx:c30 (fe80::xxxx:xxxx:xxxx:c30) on eth0...
Target link-layer address: 64:xx:xx:xx:xx:30
 from fe80::xxxx:xxxx:xxxx:c30
Soliciting fe80::xxxx:xxxx:xxxx:c30 (fe80::xxxx:xxxx:xxxx:c30) on eth0...
Target link-layer address: 64:xx:xx:xx:xx:30
 from fe80::xxxx:xxxx:xxxx:c30
Soliciting fe80::xxxx:xxxx:xxxx:c30 (fe80::xxxx:xxxx:xxxx:c30) on eth0...
Target link-layer address: 64:xx:xx:xx:xx:30
 from fe80::xxxx:xxxx:xxxx:c30
Soliciting fe80::xxxx:xxxx:xxxx:c30 (fe80::xxxx:xxxx:xxxx:c30) on eth0...
Target link-layer address: 64:xx:xx:xx:xx:30
 from fe80::xxxx:xxxx:xxxx:c30
Soliciting fe80::xxxx:xxxx:xxxx:c30 (fe80::xxxx:xxxx:xxxx:c30) on eth0...
Target link-layer address: 64:xx:xx:xx:xx:30
 from fe80::xxxx:xxxx:xxxx:c30
Soliciting fe80::xxxx:xxxx:xxxx:c30 (fe80::xxxx:xxxx:xxxx:c30) on eth0...
Target link-layer address: 64:xx:xx:xx:xx:30
...

Same result from another (wireless) client on a different VLAN.

IIRC, the temporary addresses may have been working on an earlier release (24.7) around the time I first migrated to OPNsense, although I'm not certain.  I just remember noticing the issue after 24.7.
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE

I have now tested several variants and found all of them to work fine with ND solicitations. My initial test of 25.1.1 was wrong. Sorry for the noise.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+