os-ndproxy is part of OPNsense Community Edition 24.7.9 as a plugin.
The goal is to map a single IPv6 prefix from WAN to LAN on an OPNsense connected to a provider that only offers a single /64 prefix.
https://man.freebsd.org/cgi/man.cgi?query=ndproxy
https://docs.opnsense.org/manual/ndproxy.html
Thanks Cedrik.
I'm guessing this could be particularly helpful in IPv4-only networks where the ISP cannot be bothered to offer IPv6 and the user sets up a Hurricane Electric IPv6 tunnel.
I don't understand. From what I see HE delegates a /48 prefix in their tunnel broker service. You can easily route and split that into multiple /64 nets.
ndproxy is only needed if you /only/ get a single /64 prefix, which you cannot split into smaller networks, since SLAAC breaks on prefixes longer than /64.
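To illustrate the difference (a minimal sketch using Python's standard ipaddress module; 2001:db8::/48 is the documentation prefix, not an actual HE allocation):

```python
import ipaddress

# A delegated /48 (e.g. from HE's tunnel broker) splits into
# 65536 /64 networks, one per LAN segment:
prefix = ipaddress.IPv6Network("2001:db8::/48")
lans = list(prefix.subnets(new_prefix=64))
print(len(lans))         # 65536
print(lans[0], lans[1])  # 2001:db8::/64 2001:db8:0:1::/64

# Subdividing a single /64 only yields longer prefixes (/65 and up),
# which SLAAC cannot use because it needs a 64-bit interface ID:
only64 = ipaddress.IPv6Network("2001:db8::/64")
print(next(only64.subnets()))  # 2001:db8::/65
```

So with a routed /48, each LAN segment gets its own /64 and ndproxy is unnecessary; with a bare /64 there is nothing left to route downstream.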
An additional use case is if the upstream router does not have a route to your downstream router but resolves everything via neighbor discovery.
It's a fix for broken IPv6 implementations by ISPs in mobile 4G/5G networks, and it's also quite helpful for cloud provider setups like VPSes. So it fits the typical environments you find with home users and self-hosting hobbyists.
It took way too long to figure this out (finally 0.0% packet loss/dup to Cloudflare), so I thought I'd mention it here:
If you're getting dropped and duplicate packets (shown as DUP! by ping on Linux; Windows doesn't show them), it's likely the "Downlink MAC Address": in my case I actually had to set it to the uplink (WAN) MAC address. It could be mislabeled, or it could be something specific to my setup (the PE is not very reliable; XS-2426G-B).
Also, if you always get an IPv6 /64 address on WAN even with "Request Prefix only" enabled: find the gateway address in the settings and save it; set WAN to static IPv6 with the same address but a /128 prefix; then manually re-create the IPv6 gateway using the previously saved gateway address. I assume this happens when the PE doesn't really support prefix delegation at all.
Thanks @Monviech for adding an ndproxy to OPNsense! Very useful.
Let me explain some things:
If the provider offers DHCPv6 prefix delegation (even if it's just a single /64), an ndproxy is not required. During prefix delegation, the PE creates a route for the delegated prefix, pointing to the CPE's WAN address.
An ndproxy is required if neither prefix delegation nor static routes are available on the PE. Typical examples are SLAAC-only networks (e.g. mobile broadband routers / "modems", tethered phones) or datacenters.
@fdevd is correct: Downlink MAC Address is the MAC address of the selected Uplink Interface. Promiscuous mode is not required because we use the interface's actual address. The reason why the MAC address needs to be specified at all is that ndproxy can also operate on a dedicated machine instead of on the CPE itself. If the CPE doesn't have an ndproxy, you can run it on a separate machine which then handles NDP on behalf of the CPE. Promiscuous mode is only required for this rare use case.
The name "proxy" might be somewhat misleading. Ndproxy does not actually proxy packets between the CPE's WAN and LAN interfaces. It operates exclusively on the WAN interface.
Cheers
Maurice
Thank you both for testing and clarification.
If there's time eventually, can somebody look at the documentation for the plugin and point out the spots where it's wrong?
What I wrote there actually works too; it can be verified that a /64 prefix delegation does not automatically make traffic on LAN work. The ndproxy needs to be there for it to work.
I wonder why, if it should not be needed.
https://github.com/opnsense/docs/blob/master/source/manual/ndproxy.rst?plain=1
https://docs.opnsense.org/manual/ndproxy.html
The second point about the MAC address seems to be right:
https://man.freebsd.org/cgi/man.cgi?query=ndproxy
When looking at the network design again, the MAC address should be the same as the WAN MAC on OPNsense. Though it's the label that needs adjusting then, since specifying the MAC address is a hard requirement for the kernel module to be loaded.
The label for the downlink MAC address is correct; the issue is that the description suggests using the LAN interface, but it should be the WAN interface.
The ndproxy manpage uses "uplink" and "downlink" from the point of view of the link between PE and CPE. Meaning: if we think about the wires between the ISP (PE) and the customer router (CPE), the "uplink" is the ISP interface and the "downlink" is the WAN side of OPNsense.
The only one that is a bit weird is the label "uplink_interface". Looking at that manpage's network example, there is a switch between the PE and CPE, and a BSD host on a third leg of that switched network. The "uplink interface" is from the BSD host's point of view in that network. What a wild example; only when I think of the network layout like this does the manpage make sense.
Anyways, right under the network diagram it says "the BSD host and the CPE router can be the same node" as well. That means in this case, the "uplink interface" is now the WAN interface on the CPE... further adding to the confusion! Unless you have read and analyzed this manpage, "uplink interface" seems to contravene the naming convention of "uplink" and "downlink" for the PE and CPE interfaces, so it's no wonder the "downlink" interface description incorrectly suggests using the LAN interface!
I feel like the person who wrote the kernel module and the manpage was dealing with the frustration of a super crappy ISP and had to get real creative. I showed this manpage to a friend and he informed me that there are absolutely crap ISPs out there that will force low-grade equipment on you and won't let you bring your own, so creative types have done things like this network diagram to re-route traffic to other routers while leaving the garbage equipment in place to answer authentication queries.
Thanks for the feedback: https://github.com/opnsense/plugins/pull/4553
Small docs update follows soon: https://github.com/opnsense/docs/pull/672
I am currently using NDProxy on our setup.
I am getting packet loss and high pings during a ping test. I am curious how I should debug this issue.
Ping statistics for 2607:f8b0:400a:807::200e:
Packets: Sent = 100, Received = 73, Lost = 27 (27% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 265ms, Average = 12ms
No issue when pinging on the WAN side.
Thanks
I finally got it working... I had to disable "Promiscuous mode" on WAN for the NDProxy setup.
Ping statistics for 2607:f8b0:400a:806::200e:
Packets: Sent = 1000, Received = 1000, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 170ms, Average = 2ms
Only one ping at 170ms, so all is well.
The issue is that the rc.d script forces it when the service starts:
https://github.com/opnsense/ports/blob/b3aa544a28e4946383801d88ef926d853fcb2fd8/net/ndproxy/files/ndproxy.in#L50
This is hardcoded from upstream.
Quote from: Monviech (Cedrik) on February 28, 2025, 08:35:51 PM
The issue is that the rc.d script forces it when the service starts:
https://github.com/opnsense/ports/blob/b3aa544a28e4946383801d88ef926d853fcb2fd8/net/ndproxy/files/ndproxy.in#L50
This is hardcoded from upstream.
This should be noted on the plugin page. This is a pretty big deal and is doing things to networking that end users may not be aware of.
I would rather patch this out of the upstream port, so that it is configurable. The promisc mode is in there for the use case that's shown in the man page (CPE, PE, and proxy being different devices). I'll see what I can do.
The promisc mode thing will be patched out soon.
https://github.com/opnsense/plugins/pull/4676
I am a little perplexed about promiscuous mode though, so I retested this with a SLAAC-only setup:
WAN configuration:
IPv6 Configuration Type: SLAAC
igc1: flags=1808a43<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,PALLMULTI,LOWER_UP> metric 0 mtu 1500
description: igc1_WAN (wan)
options=4802028<VLAN_MTU,JUMBO_MTU,WOL_MAGIC,HWSTATS,MEXTPG>
ether f4:90:ea:01:3d:b3
inet6 fe80::f690:eaff:fe01:3db3%igc1 prefixlen 64 scopeid 0x2
inet6 2003:a:1704:XXXX:XXXX:eaff:fe01:3db3 prefixlen 64 autoconf pltime 86400 vltime 86400
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
LAN configuration:
Static IPv6: 2003:a:1704:xxxx:xxxx:fe01:3db4/64 (address in the same prefix as the WAN SLAAC address, incremented by 1)
Router Advertisements set to Stateless
igc0: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1450
description: igc0_LAN (lan)
options=802028<VLAN_MTU,JUMBO_MTU,WOL_MAGIC,HWSTATS>
ether f4:90:ea:01:3d:b2
inet6 fe80::f690:eaff:fe01:3db2%igc0 prefixlen 64 scopeid 0x1
inet6 2003:a:1704:XXXX:XXXX:eaff:fe01:3db4 prefixlen 64
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=121<PERFORMNUD,AUTO_LINKLOCAL,NO_DAD>
NDproxy configuration:
net.inet6.ndproxycount: 86
net.inet6.ndproxyconf_uplink_ipv6_addresses: fe80::f690:eaff:fe00:d9f4 (uplink router)
net.inet6.ndproxyconf_exception_ipv6_addresses:
net.inet6.ndproxyconf_downlink_mac_address: f4:90:ea:01:3d:b3 (WAN address MAC)
net.inet6.ndproxyconf_uplink_interface: igc1 (WAN interface)
With this setup, a Windows client on the LAN interface pinged 2001:4860:4860::8888
-> and got no response.
I then calculated the solicited-node multicast group of the LAN client from its auto-generated GUA and joined it via the net/mcjoin port:
mcjoin -i igc1 ff02::1:ff27:c64e
-> The ping worked right away!
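The group calculation above can be sketched in Python (a hedged example; the client GUA below is made up, since the real one is redacted, but its low 24 bits match the mcjoin group):

```python
import ipaddress

def solicited_node_group(addr: str) -> str:
    """Solicited-node multicast group (RFC 4291): ff02::1:ffXX:XXXX,
    where XX:XXXX are the low-order 24 bits of the unicast address."""
    low24 = ipaddress.IPv6Address(addr).packed[-3:]
    # ff02:0:0:0:0:1:ff00::/104 prefix plus those 24 bits:
    group = bytes.fromhex("ff0200000000000000000001ff") + low24
    return str(ipaddress.IPv6Address(group))

# Placeholder GUA ending in ...27:c64e:
print(solicited_node_group("2003:a:1704:1:2:3:ff27:c64e"))
# -> ff02::1:ff27:c64e
```

mcjoin then just joins that group on the WAN interface, which lets the WAN NIC accept the uplink router's neighbor solicitations for that client.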
This means at least the relevant multicast groups must be joined, or all multicast must be allowed.
e.g., setting ALLMULTI on the WAN interface makes it work right away too.
ifconfig igc1 allmulti
Conclusion:
Promiscuous mode is overkill, but ALLMULTI or joining the calculated multicast group is needed for ndproxy to work.
If anybody has any more hints I would be happy.
I did build ndproxy 3.2.1402000_2 and os-ndproxy 1.1 and can't reproduce the behaviour. It just works, without enabling promiscuous mode, joining a multicast group or enabling promiscuous mode for multicast packets (allmulti).
Did you try a ping from OPNsense itself, setting the source address to the LAN interface address (2003:a:1704:XXXX:XXXX:eaff:fe01:3db4)?
Cheers
Maurice
hn0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: LAN (lan)
options=80018<VLAN_MTU,VLAN_HWTAGGING,LINKSTATE>
ether 00:15:5d:d2:76:3c
inet6 fe80::215:5dff:fed2:763c%hn0 prefixlen 64 scopeid 0x5
inet6 fd01:2345:6789:abcd::a prefixlen 64
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=121<PERFORMNUD,AUTO_LINKLOCAL,NO_DAD>
hn1: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: WAN (wan)
options=80018<VLAN_MTU,VLAN_HWTAGGING,LINKSTATE>
ether 00:15:5d:d2:76:87
inet6 fe80::215:5dff:fed2:7687%hn1 prefixlen 64 scopeid 0x6
inet6 fd01:2345:6789:abcd:215:5dff:fed2:7687 prefixlen 64 autoconf pltime 14400 vltime 86400
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
No, my tests always included a client in LAN pinging from its GUA or ULA to a destination on the Internet.
We did quite a bit of troubleshooting and checked the source code, and we also have an alternative setup now, which also requires promisc mode in our tests.
So there must either be a difference, or the tests influence the result (e.g., running tcpdump puts interfaces into promisc mode and ndproxy suddenly works).
Just unsure what the truth is.
https://github.com/opnsense/docs/pull/717
Thank you for getting back to me :)
Just to make sure it's actually a WAN issue (not a LAN issue), I'd try a ping test from OPNsense itself. Source address: LAN interface address, destination address: something on the Internet. This won't work without ndproxy, but doesn't depend on a client in the LAN.
I made sure the interfaces are not in promiscuous mode when testing (no packet capture running).
Are you only testing with physical Intel NICs? So far, I've done all my testing with VMs. Maybe the driver plays a role in this... ND offloading? Just a guess.
Yeah, so far I have only used physical Intel NICs with physical DEC750 machines, and the client also has a physical NIC.
I could also test in Hyper-V or Proxmox, though let's wait for other user reports now, since the scope of the issue is quite unclear.
If promisc is sometimes needed and sometimes not, that's also fine in the end, as the user now controls it without hidden automatism from the port.
Thanks for your feedback; especially the point that the MAC should be the WAN one was quite helpful in figuring this out.