Messages - shadowlaw

#1
Quote from: muchacha_grande on May 22, 2025, 01:47:08 PM
Hi shadowlaw, could you open an issue on github to report it?

https://github.com/opnsense/core/issues
I was thinking maybe the admins could reopen the existing one because that already has some context:

https://github.com/opnsense/core/issues/6247

But if that is not possible, I'll happily open a new one.
#2
I have had this issue for a really long time, and originally created this issue for it. TL;DR: IPv6 MLD packets erroneously get sent out on the pppoe interface, whereas they should have been sent out on the local LAN interface. I had kind of given up on it and considered it user error, until another user reported running into the same issue.

So, I took a journey through the FreeBSD kernel, pf, the mpd PPP daemon, and this is what I found:

The kernel correctly sends out the MLD report on the LAN interface. However, OPNsense has a default firewall rule present that forces packets that have the source address of a gateway interface to also go out over said gateway. By itself this seems totally fine - why would these MLD packets on the LAN have the same source address as the pppoe interface? Well, it turns out that the mpd5 ppp daemon just picks a random interface to determine its own link-local address, and in my case that happened to be the LAN interface. So:

root@opnsense:/tmp # ifconfig vtnet1_vlan100 | grep fe80
        inet6 fe80::9ca3:3dff:fea4:9380%vtnet1_vlan100 prefixlen 64 scopeid 0x12
root@opnsense:/tmp # ifconfig pppoe0 | grep fe80
        inet6 fe80::9ca3:3dff:fea4:9380%pppoe0 prefixlen 64 scopeid 0x18

Both pppoe and vtnet1_vlan100 (the LAN interface) have the same IPv6 link-local address. By itself, that seems fine, too - these are separate links and therefore having the same link-local address doesn't really matter. But, in combination with the firewall rule I mentioned earlier, these MLD packets, which originate from fe80::9ca3:3dff:fea4:9380, now wrongly get sent out over the pppoe0 interface instead.
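
For anyone who wants to check this on their own box: as far as I can tell, these automatic "force gateway" rules show up in the loaded pf ruleset as route-to rules, so something like the following should reveal them (the grep patterns are just a rough filter, not the exact rule text):

# pfctl -sr | grep route-to
# pfctl -sr | grep pppoe0

If the duplicated link-local address shows up as the source match in one of those route-to rules, that is the interaction I'm describing.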

I found that enabling 'Firewall -> Settings -> Advanced -> Disable automatic rules which force local services to use the assigned interface gateway' removes these rules, and indeed that fixes the issue, both for me and the other user that reported it.
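
In case anyone wants to verify this on their own setup, I simply watched ICMPv6 on both interfaces while toggling that setting (the interface names are specific to my box):

# tcpdump -ni pppoe0 icmp6
# tcpdump -ni vtnet1_vlan100 icmp6

With the automatic rules disabled, the listener reports stay on vtnet1_vlan100, like in the capture I took with pppoe disconnected.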

I'm not a network expert, so I'm not sure what the real proper fix actually is. My intuition is that it is fine for OPNsense to give out the same link-local address to two different interfaces, but if it does, it should not add firewall rules that act on that ambiguity.

Can we reopen the issue I linked to track a solution?
#3
Quote from: franco on April 16, 2024, 02:37:44 PM
Ok keep in mind to do this on both master and backup and check if your prefix is really static (unless you are sure by contract that it is).
Yeah it's one of those 'you get your prefix and it really should never change, but we cannot promise anything'. So far it has been stable, though.
#4
Thanks a lot, this seems to work!
#5
Quote from: franco on April 16, 2024, 12:39:22 PM
IMO this only works relatively well with static IPv6 prefix and manual IPv6 assignments on the LAN side.
Thanks! I know how to set static IPv6 assignments on the LAN side, but how can I configure the static IPv6 prefix? Would I need to enable DHCPv6 on the LAN?
#6
Hi,

I have a CARP setup with a single public IPv4 address and a /48 prefix, received over PPPoE. Everything on the v4 side is working well, including failover. On the IPv6 side, I have created a VIP interface using an fe80:: address, and I am advertising that address as the source for router advertisements - as recommended in the CARP IPv6 setup documentation.

My problem, however, is that only the master will include my IPv6 prefix in its router advertisements; the backup presumably does not announce it because I only have a single IPv4 address, and so the backup itself does not have an active IPv6 connection (or a prefix) until it takes over as master. This means that IPv6 clients tend to get confused when the last RA came from the backup, since it does not contain any prefix. When the next RA is sent by the master, clients work correctly again.
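
For reference, this is how I'm watching the RAs from the master and the backup on the LAN (the ip6[40] == 134 filter matches the ICMPv6 router advertisement type and only works because RAs carry no extension headers; -v makes tcpdump print the prefix information options, and vtnet1_vlan100 is my LAN-side interface):

# tcpdump -vni vtnet1_vlan100 'icmp6 && ip6[40] == 134'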

Since my prefix is (in theory) static, is there a way to manually configure radvd to just always announce this prefix, regardless of the state of the WAN interface? Or are there better ways of solving this?
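
To make the question a bit more concrete, what I have in mind is a radvd stanza along these lines, with the prefix hard-coded (2001:db8:abcd::/64 is just a placeholder for my real prefix, and vtnet1_vlan100 is my LAN interface). I realize OPNsense generates radvd.conf itself, so hand-editing it would presumably get overwritten - this is only to illustrate the intent:

interface vtnet1_vlan100 {
        AdvSendAdvert on;
        prefix 2001:db8:abcd::/64 {
                AdvOnLink on;
                AdvAutonomous on;
        };
};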
#7
Hello,

I'm running opnsense virtualized inside proxmox; LAN/WAN are both virtio interfaces, which are each mapped to a bridge in proxmox, and the bridges map to two physical interfaces. My network has several unifi switches, and I have IGMP snooping configured on several VLANs on the internal LAN. This results in one of the switches sending out MLD queries for IPv6, to determine which multicast addresses a host/port is interested in. I confirmed that these multicast queries arrive correctly at the bridge in proxmox, and in turn at the virtio interface in opnsense. This is captured on the vtnet1_vlan100 interface, which corresponds to vlan100 on the LAN:


9:47:15.748139 IP6 fe80::f692:bfff:fe81:337c > ff02::1: HBH ICMP6, multicast listener querymax resp delay: 10000 addr: ::, length 24


This interface has IPv6 configured; my ISP gives out a prefix over pppoe, and I use "track interface" with WAN as upstream.

Interestingly, when pppoe is disconnected, everything looks fine, and the MLD listener reports are correctly sent back (:9380 is the vtnet1_vlan100 interface):


19:47:18.462474 IP6 fe80::9ca3:3dff:fea4:9380 > ff05::1:3: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff05::1:3, length 24
19:47:19.071261 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::2:ff49:d8be: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::2:ff49:d8be, length 24
19:47:19.271584 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::1:ffa4:9380: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ffa4:9380, length 24
19:47:19.683804 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::2:49d8:bec6: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::2:49d8:bec6, length 24
19:47:22.090143 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::1:2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:2, length 24


But when I set up a connection to my ISP over pppoe, the queries still arrive on vtnet1_vlan100, yet suddenly the reports go out over the pppoe0 interface instead of the vtnet1_vlan100 interface.


# tcpdump -i pppoe0 | grep -i multi
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pppoe0, link-type NULL (BSD loopback), capture size 262144 bytes
19:54:20.570018 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::2:49d8:bec6: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::2:49d8:bec6, length 24
19:54:21.370026 IP6 fe80::9ca3:3dff:fea4:9380 > ff05::1:3: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff05::1:3, length 24
19:54:21.576229 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::1:ffa4:9380: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ffa4:9380, length 24
...


I think this probably happens because, as soon as PPPoE is connected, a default route via the pppoe0 interface is added to the routing table, and the MLD report goes out over the default route. That seems wrong though - I think the report should at least go back over the interface where the query came in (i.e., vtnet1_vlan100 in this case).
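
To illustrate what I mean, this is roughly how I check the v6 routing table once pppoe comes up; I'm only interested in whether the default entry points at pppoe0 (output omitted here):

# netstat -rn -f inet6 | grep -i default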

Anyway, the result of this is that the linux bridge doesn't know that opnsense is interested in the "all routers" multicast address at ff02::2, and so when a client on the LAN sends an IPv6 router solicitation, that doesn't end up at my opnsense box, and things break.
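
As a side note, the multicast group memberships the kernel has actually joined per interface can be dumped with ifmcstat, which might help anyone comparing their setup to mine (I believe the -i/-f flags below are standard FreeBSD ifmcstat options; the interface name is specific to my box):

# ifmcstat -i vtnet1_vlan100 -f inet6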

I'm not sure if this is an opnsense problem, a freebsd problem, or user error, but any help would be much appreciated!