Need help understanding NPTv6

Started by wallaby501, Today at 04:00:11 PM

Trying to make this work with my on-prem K8s ingress.

The scenario is
- k8s cluster that peers with opnsense via BGP (and FRR of course)
- service subnet advertisement is done with a ULA subnet (and ingress pinned to a static LB IP)
- All hosted on a separate VLAN
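
For context, the OPNsense side of the peering is conceptually this (ASNs and addresses below are placeholders, not my real config):

```
! Hypothetical OPNsense-side frr.conf fragment
router bgp 64512
 neighbor fd00:10::2 remote-as 64513     ! K8s BGP speaker on the K8s VLAN
 address-family ipv6 unicast
  neighbor fd00:10::2 activate           ! learns the ULA service subnet, e.g. fd00:53::/64
 exit-address-family
```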

Target setup
- Use NPTv6 to direct all external traffic to that specific ULA IPv6 address

Questions
- what are my settings for nptv6?

I've tried it, and I *think* it should listen on the K8s VLAN interface, since that technically has an IPv6 address (courtesy of track interface); then internal should be...the entire ULA prefix? And then lastly, track interface would be my K8s VLAN interface, yes?

When doing that it says "not listening on that interface," but selecting WAN as the interface works. (Maybe I misunderstand, but that feels wrong to me; then again, maybe it is the right choice, since WAN is what carries all the actual external traffic.)

Then lastly I'd set up dynamic DNS to query the IP for traffic on the K8s vlan interface (like the rest I do, just with listening to that interface instead.) Or should that also be on WAN?

Apologies. I've looked, but it seems there is not a lot of info out there via blogs/videos, and the OPNsense docs have me a bit confused (my lack of knowledge, not their lack of detail.)

NPTv6 is for mapping whole prefixes (e.g. the whole prefix 2003:0:0:0::/64 to fd01:0:0:0::/64; just a rough example).

If you want NAT overload (i.e. mapping a whole internal prefix to a single external IP address), just use the Outbound NAT menu and configure it as you would for IPv4.
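
The prefix-for-prefix idea can be sketched in a few lines of Python. This is a simplified illustration only (real NPTv6 per RFC 6296 also rewrites one 16-bit word so the translation stays checksum-neutral), and the prefixes are just examples:

```python
import ipaddress

def npt_map(addr: str, internal: str, external: str) -> str:
    """Swap the internal prefix for the external prefix, keeping the host bits.
    Simplified illustration; real NPTv6 (RFC 6296) is also checksum-neutral."""
    a = int(ipaddress.IPv6Address(addr))
    inet = ipaddress.IPv6Network(internal)
    enet = ipaddress.IPv6Network(external)
    assert inet.prefixlen == enet.prefixlen, "NPTv6 maps equal-length prefixes"
    host_mask = (1 << (128 - inet.prefixlen)) - 1
    return str(ipaddress.IPv6Address(int(enet.network_address) | (a & host_mask)))

# ULA host fd01::1 appears externally as 2003::1 (example prefixes only)
print(npt_map("fd01::1", "fd01::/64", "2003::/64"))  # -> 2003::1
```

The point is that every host in the internal /64 gets a 1:1 external counterpart; there is no many-to-one mapping, which is why NPTv6 is not the tool for steering all traffic to a single ULA address.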
Hardware:
DEC740

Quote from: Monviech (Cedrik) on Today at 04:08:48 PM
NPTv6 is for mapping whole prefixes (e.g. the whole prefix 2003:0:0:0::/64 to fd01:0:0:0::/64; just a rough example).

If you want to use NAT overload (aka map a whole internal prefix to a single external IP address), just use the Outbound NAT menu and configure it just as if you would IPv4.

Boy, you are quick, @monviech :)

I also just thought: does it make sense, then, to just have some kind of dynamic IPv6 host alias (under Firewall > Aliases) and a rule on that interface to direct all 80/443 traffic to the ULA?

And I might be misunderstanding, but I'm not trying to do Outbound NAT; I am trying to get it so external devices can reach my ingress (i.e. inbound, via some combo of firewall rules, to a certain pod.)

It does help, though, to read that NPTv6 is for mapping the entire prefix. I do not think that is what I want then (unless that, in combo with a firewall rule on the K8s VLAN interface, gets me the desired result, kind of like NAT port forwards.)

I just happened to see this one and thought it might be a misinterpretation.

I don't know enough about K8s clusters to really help more.

But I do know how routing works, so a network diagram of the IPv6 networks, the routers, and the routes might shed some light on this. No promises I can help, though; maybe somebody else will if I don't respond. (If I don't respond, it means I don't know. :)
Hardware:
DEC740

My diagrams.net expertise is sorely lacking but basically this.


I understand the IPv4 side well enough (essentially it's just a NAT port forward on the WAN to the service IP of the pod), and that works without issue; I'm just trying to go dual stack here. Essentially it's the same thing, only since IPv6 guarantees you a GUA, I assume I should only listen on my K8s VLAN GUA and then redirect that to the pod service IP. Which half makes me think that if I made a firewall rule directing anything hitting the K8s VLAN external IP to that pod IP, I might be fine.

Image might not show, so here it is: https://imgur.com/a/BcZuYwq

Today at 05:45:49 PM #5 Last Edit: Today at 05:48:38 PM by Monviech (Cedrik)
It might be better to skip NAT and use a layer 4 proxy (e.g. HAProxy or Caddy) that can stream from an IPv4/IPv6 frontend to an IPv4-only backend. That avoids a lot of pain with IPv6 NAT and GUA/ULA weirdness.
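
As a rough sketch of what I mean, a layer 4 HAProxy config could look like this (names and addresses are just examples, not a tested config):

```
# Hypothetical haproxy.cfg fragment: dual-stack TCP frontend
# streaming to an IPv4-only ingress backend
frontend ingress_https
    mode tcp
    bind :::443 v4v6          # listen on both IPv4 and IPv6
    default_backend k8s_ingress

backend k8s_ingress
    mode tcp
    server ingress1 192.0.2.10:443 check
```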

I think the Kubernetes folks call that an Ingress Controller and use Traefik most of the time instead of HAProxy or Caddy.

Otherwise I think I would try a simple port forward (destination NAT)
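
For illustration, the rule a port forward generates would be along these lines in pf syntax (interface and addresses are placeholders; in OPNsense you would configure this under Firewall > NAT > Port Forward, not by hand):

```
# Sketch: forward TCP 443 arriving at the WAN GUA to the ULA service IP
rdr on igc0 inet6 proto tcp from any to 2001:db8:1::100 port 443 -> fd00:53::10 port 443
```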
Hardware:
DEC740