25.1 NAT reflection not working properly

Started by pj97, February 06, 2025, 03:46:28 PM

That's what we can find out. If you have snapshots of before and after the update, we could also compare the pf ruleset.

Just store what's in /tmp/rules.debug before and after the update and diff it for obvious changes regarding rdr or nat.
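A minimal sketch of that comparison (the two files below are inline stand-ins; in practice the "before" file would be a copy of /tmp/rules.debug saved prior to the upgrade, e.g. `cp /tmp/rules.debug /root/rules.debug.before`):

```shell
# Stand-in for a ruleset saved before the upgrade
printf 'rdr pass on igb0 proto tcp to port 32400 -> 10.0.0.5\npass in quick all\n' > /tmp/rules.before
# Stand-in for the post-upgrade /tmp/rules.debug
printf 'rdr pass on igb0 proto tcp to port 32401 -> 10.0.0.5\npass in quick all\n' > /tmp/rules.after

# Diff the two rulesets and keep only NAT-related changes
diff -u /tmp/rules.before /tmp/rules.after | grep -Ei 'rdr|nat'
```

With real before/after copies, any changed rdr or nat line shows up prefixed with - or +.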
Hardware:
DEC740

Quote from: meyergru on February 07, 2025, 11:45:09 AM
Just asking for clarity here: you say that this worked before 25.1, including Plex? The reason I am asking is that while Plex can use a port other than 32400, it must know which IP to connect to. Since you cannot specify a DNS name in Plex, it is probably essential to use the same IP for inbound and outbound traffic, which is potentially (or by default) not the case.

The PIA approach seems to be that they provide you with a public IP and an arbitrary port, which you could use as a target for inbound by specifying it directly or via DNS. All of your normal outbound traffic would go over the external NATed IP your ISP provides. This is all that Plex can see, so it would try to connect back to the IP that was reaching out to them.

So, IMHO, you would also have to direct all outbound Plex traffic over your PIA IP; maybe that is the problem. IDK what magic the PIA script does, but potentially it has not been modified to work with 25.1 yet.


I'm routing all internet traffic on the interface Plex lives on through the PIA VPN tunnel, using a firewall rule that forces it through the tunnel.

The PIA server assigns a specific port to my VPN's internal IP address (for example, 10.10.8.2) and PIA routes to that via NAT. I then automatically update an alias on my firewall with this assigned port. This allows me to specify that port in Plex's remote access settings. A NAT rule is in place to forward any incoming traffic on that PIA-assigned port directly to my Plex server.
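For illustration, a port forward like that ends up in the generated pf ruleset as an rdr rule of roughly this shape (interface name, port, and addresses below are made up, not taken from the poster's config):

```
# Hypothetical pf rule: redirect the PIA-assigned port arriving on the
# WireGuard interface (wg1) to the Plex server's LAN address
rdr pass on wg1 inet proto tcp from any to (wg1) port 48213 -> 192.168.1.50 port 32400
```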

The key here is that PIA handles the port assignment. I've confirmed that I am receiving a port from them and that traffic on that port reaches my firewall. The problem occurs after the firewall: the connection is not successfully forwarded to Plex. I provided packet captures and firewall logs showing the connections were hitting the firewall in my previous posts.

This setup functioned perfectly on OPNsense 23.x and survived upgrades all the way to 24.7, but now with 25.1 there are problems. I have been using this setup for over a year and through many OPNsense upgrades. I even went as far as reinstalling 24.7 and restoring a backup, which resulted in 100% working NAT over WireGuard using the setup I have explained; after upgrading to 25.1, it broke again. I can pull any logs or whatever is needed from 24.7, so just let me know what I can do.

Quote from: Monviech (Cedrik) on February 07, 2025, 02:46:28 PM
That's what we can find out. If you have snapshots of before and after the update, we could also compare the pf ruleset.

Just store what's in /tmp/rules.debug before and after the update and diff it for obvious changes regarding rdr or nat.

I luckily back up my config every night. I did a quick compare on the XML, and the NAT section remained the same, no changes. The only differences were the UUIDs that were added. Other than that, it's pretty much the same.
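That kind of comparison can also be scripted by extracting just the NAT element from each backup and comparing the two. A sketch, assuming NAT rules live under a top-level <nat> element of config.xml (the inline documents below stand in for real backup files):

```python
# Sketch: compare the <nat> sections of two OPNsense config backups.
# The two inline XML strings stand in for real before/after backup files.
import xml.etree.ElementTree as ET

def nat_section(xml_text):
    """Return the serialized <nat> element of a config, or b'' if absent."""
    root = ET.fromstring(xml_text)
    nat = root.find("nat")
    return ET.tostring(nat) if nat is not None else b""

before = "<opnsense><nat><rule><target>10.0.0.5</target></rule></nat></opnsense>"
after  = "<opnsense><nat><rule><target>10.0.0.5</target></rule></nat></opnsense>"

print("NAT sections identical:", nat_section(before) == nat_section(after))
# prints: NAT sections identical: True
```

With real files you would read each backup's contents first; UUID attributes elsewhere in the config are ignored because only the <nat> subtree is compared.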

Good that the rules are the same; that means they can be ruled out (pun not intended xD)

If a VPN is involved, the next possible cause can be PMTU (Path MTU Discovery), since the VPN overhead reduces possible MSS sizes.

In 25.1 there have been some issues regarding that.
https://github.com/opnsense/src/issues/235

You could try to install the test kernel and see if that fixes your issues:
https://github.com/opnsense/src/issues/235#issuecomment-2636702333
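As a rough illustration of why the tunnel matters for MSS (overhead figures below are typical for WireGuard over IPv4, not measured from this setup):

```python
# Back-of-envelope MSS math for TCP inside a WireGuard tunnel (IPv4).
# Overhead values are typical defaults, not exact for every configuration.
ETH_MTU = 1500            # usual Ethernet MTU on the WAN
WG_OVERHEAD = 60          # outer IPv4 (20) + UDP (8) + WireGuard framing (32)
INNER_HEADERS = 40        # inner IPv4 (20) + TCP (20), no options

tunnel_mtu = ETH_MTU - WG_OVERHEAD     # 1440: typical WireGuard interface MTU
max_mss = tunnel_mtu - INNER_HEADERS   # 1400: largest safe TCP MSS in-tunnel

print(tunnel_mtu, max_mss)  # prints: 1440 1400
```

If PMTU discovery is broken, a peer that negotiated a full 1460-byte MSS can send segments that silently black-hole inside the tunnel, which matches the symptom of connections reaching the firewall but stalling afterwards.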

Quote from: Monviech (Cedrik) on February 07, 2025, 03:35:37 PM
Good that the rules are the same; that means they can be ruled out (pun not intended xD)

If a VPN is involved, the next possible cause can be PMTU (Path MTU Discovery), since the VPN overhead reduces possible MSS sizes.

In 25.1 there have been some issues regarding that.
https://github.com/opnsense/src/issues/235

You could try to install the test kernel and see if that fixes your issues:
https://github.com/opnsense/src/issues/235#issuecomment-2636702333


I can definitely try that, but the issue is happening within the network as well: no device on the LAN can reach my subdomains that aren't routed through the CF proxy.

Quote from: Monviech (Cedrik) on February 07, 2025, 03:35:37 PM
Good that the rules are the same; that means they can be ruled out (pun not intended xD)

If a VPN is involved, the next possible cause can be PMTU (Path MTU Discovery), since the VPN overhead reduces possible MSS sizes.

In 25.1 there have been some issues regarding that.
https://github.com/opnsense/src/issues/235

You could try to install the test kernel and see if that fixes your issues:
https://github.com/opnsense/src/issues/235#issuecomment-2636702333


This resolved my issue 100% thank you.

Quote from: Monviech (Cedrik) on February 07, 2025, 03:35:37 PM
Good that the rules are the same; that means they can be ruled out (pun not intended xD)

If a VPN is involved, the next possible cause can be PMTU (Path MTU Discovery), since the VPN overhead reduces possible MSS sizes.

In 25.1 there have been some issues regarding that.
https://github.com/opnsense/src/issues/235

You could try to install the test kernel and see if that fixes your issues:
https://github.com/opnsense/src/issues/235#issuecomment-2636702333



This also solved my issue

Working for me now :) Realized that I never tested the LAN, only the VPN. So at least my VPN is back up and running to access the domains :D

The kernel fix for the issue that some people here were having is going into 25.1.1 later this week.


Cheers,
Franco

I upgraded to 25.1 last night and also noticed issues with accessing my WireGuard server in OPNsense. After a few hours of digging around, checking logs, firewall rules, and various other settings, I found that a setting in firewall normalization for my "WireGuard (Group)" was misconfigured and not allowing any peer's handshake to go through.

What fixed it for me was:
Firewall -> Settings -> Normalization -> "WireGuard (Group)" [or whatever your instance name is] -> Edit.
Direction was set to "in" and needed to be set to "Any" according to the documentation.

Immediately after I changed this one setting, all of my WireGuard clients were able to connect again. I have no idea whether this was a bug in the update (I'm not able to compare the old configuration yet), or whether it was just working in the old version out of sheer luck and broke when updated.

Anyway, I hope this helps someone else with this issue.