https://docs.opnsense.org/manual/ndp-proxy-go.html
https://github.com/monviech/ndp-proxy-go
Thanks Cedrik.
I'm guessing this could be particularly helpful in IPv4-only networks where the ISP cannot be bothered to offer IPv6 and the user sets up a Hurricane Electric IPv6 tunnel.
I don't understand. From what I see, HE delegates a /48 prefix in their tunnel broker service. You can easily route that and split it into multiple /64 nets.
ndproxy is only needed if you get /only/ a single /64 prefix, which you cannot split into smaller networks, since splitting it would break SLAAC.
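The splitting point can be illustrated with Python's `ipaddress` module (the documentation prefix 2001:db8::/48 stands in for an HE delegation; this is just a sketch of the prefix arithmetic, not anything ndproxy does):

```python
import ipaddress

# A delegated /48 splits cleanly into 65536 SLAAC-capable /64 subnets:
delegated = ipaddress.IPv6Network("2001:db8::/48")
lans = list(delegated.subnets(new_prefix=64))
print(len(lans))           # 65536
print(lans[0], lans[1])    # 2001:db8::/64 2001:db8:0:1::/64

# A single /64 can only be split into longer prefixes (/65 ... /128);
# SLAAC (RFC 4862) requires 64-bit interface identifiers, so routers
# will not autoconfigure hosts on anything longer than a /64.
single = ipaddress.IPv6Network("2001:db8:0:1::/64")
print(all(n.prefixlen > 64 for n in single.subnets(new_prefix=72)))  # True
```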
An additional use case is when the upstream router does not have a route to your downstream router but resolves everything via neighbor discovery.
It's a fix for broken IPv6 implementations by ISPs, in mobile 4G/5G IPv6 networks, and it's also quite helpful for cloud provider setups like VPSes. So, the typical environments you find with home users and self-hosting hobbyists.
It took way too long to figure this out (finally 0.0% packet loss/dup to Cloudflare), so I thought I'd mention it here:
If you're getting dropped and duplicate packets (DUP! with ping on Linux; Windows doesn't show it), it's likely the "Downlink MAC Address". In my case I actually had to set it to the uplink (WAN) MAC address. It could be mislabeled, or it's something specific to my setup (the PE is not very reliable; XS-2426G-B).
Also, if you always get an IPv6 /64 address on WAN even with "Request Prefix only" enabled: find the gateway address in the settings (save it); set WAN to static IPv6 with the same address but /128. Then manually re-create the IPv6 gateway using the previously saved gateway address. I assume this happens when the PE doesn't really support prefix delegation at all.
Thanks @Monviech for adding an ndproxy to OPNsense! Very useful.
Let me explain some things:
If the provider offers DHCPv6 prefix delegation (even if it's just a single /64), an ndproxy is not required. During prefix delegation, the PE creates a route for the delegated prefix, pointing to the CPE's WAN address.
An ndproxy is required if neither prefix delegation nor static routes are available on the PE. Typical examples are SLAAC-only networks (e.g. mobile broadband routers / "modems", tethered phones) or datacenters.
@fdevd is correct: the Downlink MAC Address is the MAC address of the selected Uplink Interface. Promiscuous mode is not required because we use the interface's actual address. The reason the MAC address needs to be specified at all is that ndproxy can also operate on a dedicated machine instead of on the CPE itself: if the CPE doesn't have an ndproxy, you can run it on a separate machine which then handles NDP on behalf of the CPE. Promiscuous mode is only required for that rare use case.
The name "proxy" might be somewhat misleading. Ndproxy does not actually proxy packets between the CPE's WAN and LAN interfaces. It operates exclusively on the WAN interface.
Cheers
Maurice
Thank you both for testing and clarification.
If there's time eventually, can somebody look at the documentation for the plugin and point out the spots where it's wrong?
What I wrote there actually works too; it can be verified that a /64 prefix delegation does not automatically make traffic on LAN work. The ndproxy needs to be there for it to work.
I wonder why, if it shouldn't be needed.
https://github.com/opnsense/docs/blob/master/source/manual/ndproxy.rst?plain=1
https://docs.opnsense.org/manual/ndproxy.html
The second point about the MAC address seems to be right:
https://man.freebsd.org/cgi/man.cgi?query=ndproxy
When looking at the network design again, the MAC address should be the same as the WAN MAC on OPNsense. So it's the label that needs adjusting then, since the MAC address is a hard requirement for the kernel module to be loaded.
The label for the downlink mac address is correct, the issue is that the description suggests using the LAN interface, but it should be WAN interface.
The ndproxy manpage refers to "uplink" and "downlink" from the point of view of the PE–CPE link. Meaning, if we think about the wires between the ISP (PE) and the customer router (CPE), the "uplink" is the ISP interface and the "downlink" is the WAN side of OPNsense.
The only one that is a bit weird is the label "uplink_interface". Looking at that manpage's network example, there is a switch between the PE and CPE, and a BSD host on a third leg of that switched network. The "uplink interface" is from the BSD host's point of view in that network. What a wild example; only when I think of the network layout like this does the manpage make sense.
Anyway, right under the network diagram it says "the BSD host and the CPE router can be the same node" as well. That means in this case, the "uplink interface" is now the WAN interface on the CPE, further adding to the confusion! Unless you have read and analyzed this manpage, "uplink interface" seems to contradict the naming convention of "uplink" and "downlink" for the PE and CPE interfaces, so it's no wonder the "downlink" interface description incorrectly suggests using the LAN interface!
I feel like the person who wrote the kernel module and the manpage was dealing with the frustration of a super crappy ISP and had to get really creative. I showed this manpage to a friend and he told me there are absolutely crap ISPs out there that force low-grade equipment on you and won't let you bring your own, so creative types have done things like this network diagram to re-route traffic to other routers while leaving the garbage equipment in place to answer authentication queries.
Thanks for the feedback: https://github.com/opnsense/plugins/pull/4553
Small docs update follows soon: https://github.com/opnsense/docs/pull/672
I am currently using NDProxy in our setup.
I am getting packet loss and high pings during a ping test. I am curious how I should debug this issue.
Ping statistics for 2607:f8b0:400a:807::200e:
Packets: Sent = 100, Received = 73, Lost = 27 (27% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 265ms, Average = 12ms
No issue when pinging on the WAN side.
Thanks
I finally got it working... I had to disable "Promiscuous mode" on WAN for the NDProxy setup.
Ping statistics for 2607:f8b0:400a:806::200e:
Packets: Sent = 1000, Received = 1000, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 170ms, Average = 2ms
Only one ping at 170ms, so all is well.
The issue is that the rc.d script forces it when the service starts:
https://github.com/opnsense/ports/blob/b3aa544a28e4946383801d88ef926d853fcb2fd8/net/ndproxy/files/ndproxy.in#L50
This is hardcoded from upstream.
Quote from: Monviech (Cedrik) on February 28, 2025, 08:35:51 PM
The issue is that the rc.d script forces it when the service starts:
https://github.com/opnsense/ports/blob/b3aa544a28e4946383801d88ef926d853fcb2fd8/net/ndproxy/files/ndproxy.in#L50
This is hardcoded from upstream.
This should be noted on the plugin page. This is a pretty big deal and is doing things to networking that end users may not be aware of.
I would rather patch this out of the upstream port so that it is configurable. The promisc mode is in there for the use case that's shown in the man page (CPE, PE, and proxy being different devices). I'll see what I can do.
The promisc mode thing will be patched out soon.
https://github.com/opnsense/plugins/pull/4676
I am a little perplexed about promiscuous mode, though, so I have retested this with a SLAAC-only setup:
WAN configuration:
IPv6 Configuration Type: SLAAC
igc1: flags=1808a43<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,PALLMULTI,LOWER_UP> metric 0 mtu 1500
description: igc1_WAN (wan)
options=4802028<VLAN_MTU,JUMBO_MTU,WOL_MAGIC,HWSTATS,MEXTPG>
ether f4:90:ea:01:3d:b3
inet6 fe80::f690:eaff:fe01:3db3%igc1 prefixlen 64 scopeid 0x2
inet6 2003:a:1704:XXXX:XXXX:eaff:fe01:3db3 prefixlen 64 autoconf pltime 86400 vltime 86400
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
LAN configuration:
STATIC IPv6: 2003:a:1704:xxxx:xxxx:fe01:3db4/64 (Address is in same prefix as WAN SLAAC address incremented by 1)
Router Advertisements enabled to Stateless
igc0: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1450
description: igc0_LAN (lan)
options=802028<VLAN_MTU,JUMBO_MTU,WOL_MAGIC,HWSTATS>
ether f4:90:ea:01:3d:b2
inet6 fe80::f690:eaff:fe01:3db2%igc0 prefixlen 64 scopeid 0x1
inet6 2003:a:1704:XXXX:XXXX:eaff:fe01:3db4 prefixlen 64
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=121<PERFORMNUD,AUTO_LINKLOCAL,NO_DAD>
NDproxy configuration:
net.inet6.ndproxycount: 86
net.inet6.ndproxyconf_uplink_ipv6_addresses: fe80::f690:eaff:fe00:d9f4 (uplink router)
net.inet6.ndproxyconf_exception_ipv6_addresses:
net.inet6.ndproxyconf_downlink_mac_address: f4:90:ea:01:3d:b3 (WAN address MAC)
net.inet6.ndproxyconf_uplink_interface: igc1 (WAN interface)
With this setup, a Windows client on the LAN interface pinged 2001:4860:4860::8888
-> and got no response.
I then calculated the solicited-node multicast group of the LAN client from its auto-generated GUA and joined it via the port net/mcjoin:
mcjoin -i igc1 ff02::1:ff27:c64e
-> The ping worked right away!
This means at least the multicast groups must be joined, or all multicast traffic must be allowed.
E.g., setting ALLMULTI on the WAN interface makes it work right away, too:
ifconfig igc1 allmulti
Conclusion:
Promiscuous mode is overkill, but either ALLMULTI or joining the calculated solicited-node multicast group is needed for ndproxy to work.
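For reference, the solicited-node multicast group is `ff02::1:ff00:0/104` plus the low 24 bits of the unicast address (RFC 4291, section 2.7.1); a minimal Python sketch (the example address is made up so that it maps to the `mcjoin` group above):

```python
import ipaddress

def solicited_node_multicast(addr: str) -> str:
    """Derive the solicited-node multicast group (ff02::1:ffXX:XXXX)
    from the low-order 24 bits of an IPv6 address (RFC 4291, 2.7.1)."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

# Hypothetical GUA ending in ...27:c64e, matching the mcjoin call above:
print(solicited_node_multicast("2003:a:1704:1:2:3:27:c64e"))  # ff02::1:ff27:c64e
```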
If anybody has any more hints I would be happy.
I built ndproxy 3.2.1402000_2 and os-ndproxy 1.1 and can't reproduce the behaviour. It just works, without enabling promiscuous mode, joining a multicast group, or enabling promiscuous mode for multicast packets (ALLMULTI).
Did you try a ping from OPNsense itself, setting the source address to the LAN interface address (2003:a:1704:XXXX:XXXX:eaff:fe01:3db4)?
Cheers
Maurice
hn0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: LAN (lan)
options=80018<VLAN_MTU,VLAN_HWTAGGING,LINKSTATE>
ether 00:15:5d:d2:76:3c
inet6 fe80::215:5dff:fed2:763c%hn0 prefixlen 64 scopeid 0x5
inet6 fd01:2345:6789:abcd::a prefixlen 64
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=121<PERFORMNUD,AUTO_LINKLOCAL,NO_DAD>
hn1: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: WAN (wan)
options=80018<VLAN_MTU,VLAN_HWTAGGING,LINKSTATE>
ether 00:15:5d:d2:76:87
inet6 fe80::215:5dff:fed2:7687%hn1 prefixlen 64 scopeid 0x6
inet6 fd01:2345:6789:abcd:215:5dff:fed2:7687 prefixlen 64 autoconf pltime 14400 vltime 86400
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
No, my tests always included a client on the LAN pinging from its GUA or ULA to a destination on the Internet.
We did quite some troubleshooting and checked the source code, and we also have an alternative setup now which also requires promiscuous mode in our tests.
So there must either be a difference, or the tests influence the result (e.g. running tcpdump puts the interfaces into promiscuous mode and ndproxy suddenly works).
I'm just unsure what the truth is.
https://github.com/opnsense/docs/pull/717
Thank you for getting back to me :)
Just to make sure it's actually a WAN issue (not a LAN issue), I'd try a ping test from OPNsense itself. Source address: LAN interface address, destination address: something on the Internet. This won't work without ndproxy, but doesn't depend on a client in the LAN.
I made sure the interfaces are not in promiscuous mode when testing (no packet capture running).
Are you only testing with physical Intel NICs? So far, I've done all my testing with VMs. Maybe the driver plays a role in this... ND offloading? Just a guess.
Yeah, so far I've only used physical Intel NICs on physical DEC750 machines, and the client also has a physical NIC.
I could also test in Hyper-V or Proxmox, though let's wait for other user reports now, since the scope of the issue is quite unclear.
If promisc is sometimes needed and sometimes not, that's also fine in the end, as the user now controls it without hidden automatism by the port.
Thanks for your feedback; especially the hint that the MAC should be the WAN one was quite helpful in figuring this out.
The discussion in here has been superseded.
I have written my own ndp proxy in lang/go which circumvents all of the issues described here.
I use it myself to proxy my /64 to multiple internal interfaces, and @Maurice tested it as well.
It's now generally available in 24.7.8. Have fun with it :)
Quote
If you receive a DNS server from your ISP, but want the router to be the sole DNS server, use a Port Forward to force traffic destined to port 53 to the local running Unbound server instead.
I am very new to IPv6 and this is my hobby project, so please be gentle. I have already implemented this in IPv4 with a port forward to 127.0.0.1. How do I identify the IPv6 address of the locally running Unbound server and implement this for IPv6? My IPv6 stack is working well with this plugin, with LAN configured as link-local, so thanks for this.
That is a bit tricky: AFAIR, you cannot redirect to the loopback address on IPv6 because of RFC 4291 section 2.5.3 (https://datatracker.ietf.org/doc/html/rfc4291), which says that packets with a loopback address must never be routed outside the node, so "::1" is out of the question (as are link-local addresses, for obvious reasons). For IPv4, this works with a redirection to 127.0.0.1.
What I do is something like this:
[Screenshot: Port Forward rule under Firewall: NAT: Port Forward]
The redirect target IP is an alias, which is a dynamic IPv6 alias on any IPv6-enabled interface with the EUI-64 of that interface (which is the same as the EUI-64 of the link-local IPv6).
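The modified EUI-64 identifier mentioned here is derived from the interface's MAC by inserting ff:fe in the middle and flipping the universal/local bit (RFC 4291, Appendix A); a minimal Python sketch, using the WAN MAC from the ifconfig output earlier in the thread:

```python
def eui64_from_mac(mac: str) -> str:
    """Build the modified EUI-64 interface identifier (RFC 4291, Appendix A):
    flip the universal/local bit of the MAC and insert ff:fe in the middle."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02  # flip the U/L bit
    words = [b[0] << 8 | b[1], b[2] << 8 | 0xFF, 0xFE << 8 | b[3], b[4] << 8 | b[5]]
    return ":".join(f"{w:x}" for w in words)

# The WAN MAC f4:90:ea:01:3d:b3 yields the same identifier as its
# link-local address fe80::f690:eaff:fe01:3db3 shown above:
print(eui64_from_mac("f4:90:ea:01:3d:b3"))  # f690:eaff:fe01:3db3
```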
Also note that I have an exception for one host alias (BLOB_MAC), which is identified by its MAC address, because I cannot be sure whether that client uses IPv6 privacy extensions. I need this exception because that client uses ACME with DNS-01 verification, which Unbound cannot forward.
My general recommendation for setups which are a little more advanced is to bind services like DNS to loopback interfaces:
- Interfaces: Devices: Loopback, create a loopback interface, name it e.g. "Unbound".
- Assign the interface and configure it with static IP addresses (/128 ULA and /32 RFC1918 is fine).
- Services: Unbound DNS: General, set "Network Interfaces" to this loopback interface (only).
- In the DHCP / RA configuration, set the DNS server addresses to the loopback interface's addresses.
- Optional: If you want to force all DNS traffic to Unbound, forward port 53 to the loopback interface's addresses.
Cheers
Maurice
TWIMC: https://github.com/Monviech/ndp-proxy-go/issues/3
I got the proxy working now for PPPoE interfaces as well.
Also, I just tried the port forward and it works for me without any tricks:
it might not be RFC-conformant, but "hey, it works I guess xD"
EDIT: DOESN'T WORK!
Really? I just tried and it did not work for me like that.
I used ::1 as the redirect target and ran "nslookup -query=A www.google.de 2001:4860:4860::8888", and got a communications error on a Linux client. The same thing works when I use a routable IPv6 alias of OPNsense as the redirect target. Note that by using Google's IPv6 DNS explicitly, I force the IPv6 forwarding rule to be applied.
I recently had a discussion with Patrick about this, where he was surprised as well that it did not work.
His posting is here, and OPNsense seems to adhere to RFC 4291: https://forum.opnsense.org/index.php?msg=246585
Maybe you got an answer via a redundant DNS server over IPv4?
Correct. To pull that link to the FreeBSD source from that other thread so you don't need to go on a scavenger hunt:
https://cgit.freebsd.org/src/tree/sys/netinet6/ip6_input.c?h=releng/14.3#n765
FreeBSD *should* categorically refuse to send a packet with source ::1 to anything but the loopback interface itself if I read that code correctly.
Yeah, it seems like my assumption was wrong; it fell back after not getting an answer:
VLAN:
21:15:23.740664 IP6 2003:a:177f:8463:b40e:4343:1cc8:df32.54262 > 2003:180:2:7000::53.53: 54276+ AAAA? ipv6.google.com. (33)
21:15:25.741343 IP 172.16.1.150.52057 > 172.16.1.1.53: 54276+ AAAA? ipv6.google.com. (33)
21:15:25.751291 IP 172.16.1.1.53 > 172.16.1.150.52057: 54276 2/0/0 CNAME ipv6.l.google.com., AAAA 2a00:1450:4016:800::200e (92)
Loopback doesn't respond:
21:15:23.740672 IP6 2003:a:177f:8463:b40e:4343:1cc8:df32.54262 > ::1.53: 54276+ AAAA? ipv6.google.com. (33)
Good to know, sorry xD
N.P., Cedrik, this just made it to https://forum.opnsense.org/index.php?topic=42985.0, point 29.
Quote from: meyergru on November 29, 2025, 09:43:57 AM
The redirect target IP is an alias, which is a dynamic IPv6 alias on any IPv6-enabled interface with the EUI-64 of that interface (which is the same as the EUI-64 of the link-local IPv6).
I believe this is working well. I did a DNS leak test at https://browserleaks.com/dns, and the DNS servers reported are those in my Unbound DoT forwarding list, not the ISP's DNS server. Thanks again.
Experimental PPPoE support is in 25.7.9.
The last feature I added is PF table (firewall alias) support to help with the network segmentation for highly dynamic setups.
It will most likely hit 25.7.10.
With that, the proxy should be complete for now. I personally do not miss any feature when using it; it just works™ and is quite possibly the most complete implementation to fix IPv6 for many setups.
I would call it generic, since you can chain the proxy over multiple routers. You don't even need DHCPv6-PD anymore; this proxy handles dynamic IPv6 so gracefully that you won't believe it.
https://github.com/opnsense/docs/commit/5bb5fca5c67ac9162c8f76d6261ca6cc90f34076