Messages - motoridersd

#1
I removed the test rules that I had added for debug, and the pings started working while using the kernel posted a few replies above.

I then upgraded to the latest 24.7.3_1 and installed the 24.7.3 kernel and pings are still working.

So it seems the firewall rules I added to try to solve the problem ended up being the problem themselves once the kernel regression fixes were in place.
#2
Quote from: doktornotor on August 30, 2024, 04:43:27 PM
Try with this kernel as well.

https://github.com/opnsense/src/issues/218#issuecomment-2321096627

Just tried it and there's no change.

I use Hybrid Outbound NAT on my deployment, so I'm not sure whether that differs from the setups of those who aren't seeing the issue.
#3
Quote from: franco on August 30, 2024, 08:29:53 AM
Just to be sure: the 24.7 works for you in this regard?

Made the note here: https://github.com/opnsense/src/issues/218#issuecomment-2320210439

Might be worth inspecting your setup a bit closer. Do you use any explicit rules for that ping to pass? And it comes from where and goes to a public Internet server?


Cheers,
Franco

I never tested on 24.7.0; I went from 24.1.10 straight to 24.7.1, since I wanted to wait for a .1 release of 24.7 before upgrading.

I'm also not sure if this was working on 24.1.10 before I upgraded, because I don't often run ICMPv4 pings to the internet.

I added the specific rules so I could check the firewall logs and make sure the pings were being allowed, but before I started troubleshooting I didn't have specific ICMP rules configured (and the pings weren't working then either).
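In case it's useful to anyone checking the same thing from a shell, the rules and their hit counters can also be inspected with pf's own tooling (just a sketch; the exact rule text depends on your config):

pfctl -sr | grep -i icmp

lists the loaded ICMP rules, and

pfctl -vsr | grep -A 3 -i icmp

shows each matching rule with its evaluation/packet/state counters underneath.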
#4
I don't often use IPv4 pings to diagnose internet issues, so I don't know exactly when this broke, but I know for sure it used to work.

After the update to 24.7 I noticed that pings weren't working. I can't ping remote hosts (like 8.8.8.8 or 1.1.1.1) from my LAN, nor can I do it from the OPNsense console using the WAN interface as the source.

tcpdump captures on the console don't show the packets going out on the WAN interface, but I see the requests going out from the LAN interface.
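For reference, the checks I'm describing look roughly like this from the OPNsense shell (igb1/igb0 are only placeholders for my LAN/WAN interfaces):

tcpdump -ni igb1 icmp

to watch the echo requests arriving on the LAN side, and in a second session

tcpdump -ni igb0 icmp

to see whether anything actually leaves on the WAN side. For the console test I source the ping from the WAN address explicitly, something like:

ping -S <wan-address> 8.8.8.8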

I thought the issue with the FreeBSD kernel changes could be breaking this, so I updated to the test icmp2 kernel while on 24.7.1, with no change. I have since installed 24.7.2 and .3 and have noticed no change.

My firewall rules allow this traffic. I added specific ICMP rules to track this, and I can see the allow rule on the LAN side showing up in the Firewall Log, as well as the WAN rule.

Despite the allow rule log showing up, I don't see the request going out on the WAN interface when doing a tcpdump filtered by ICMP.

I'm using Hybrid Outbound NAT with the default rule translating everything to the WAN interface address. All other traffic works fine. Traceroutes work without an issue (FreeBSD's traceroute uses UDP probes by default, so that doesn't exercise ICMP echo).

I am also unable to ping the WAN IP of the firewall (my ISP assigns a public IPv4 address via DHCP). I can see the requests coming in on a tcpdump, as well as the allow entry in the firewall log, but a reply never goes out.

I don't know what else I could be missing; it just seems that the firewall is not actually sending the ICMP requests out to the internet, or not receiving the inbound requests in a way that lets it send a reply.
#5
Quote from: franco on August 16, 2024, 08:23:38 AM
Well, the obvious question would be did any kernel on 24.7 ever work on your end?


Cheers,
Franco

No, I upgraded from 24.1.10 directly to 24.7.1, and that's when I noticed pings weren't working. I don't use them daily on my network, but I do use them often and had never noticed them failing before. My first indication they were broken was seeing the Gateway as "down" in the new dashboard.
#6
Hmm, icmp2 is not working for me. Am I missing something?

opnsense-update -zkr 24.7.1-icmp2

and rebooted. I can see that I am running "pf_icmp-n267786-b4771b598e90", but pings and traceroute still don't work from the firewall or from behind it.
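(For anyone else trying the test kernels, the running kernel can be checked with a standard FreeBSD command:

uname -v

which, as far as I can tell, is where a tag like pf_icmp-n267786-b4771b598e90 shows up after a reboot.)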
#7
Quote from: franco on August 09, 2024, 09:58:27 AM
@doktornotor

# opnsense-update -zkr 24.7-xen3

I'll leave it there for a while longer then.


Cheers,
Franco

Is this supposed to revert to a kernel that has the expected ICMP behavior?
#8
Mmm, not exactly. What is the Security Policy Database (SPD) referred to in that link? It's also not 100% complete; it's missing the IP subnet for LAN Site B in the diagram.

I'm using WireGuard in my case, so there isn't a Virtual Net A and Virtual Net B; both nodes are part of the same tunnel /24, with a single address on each side.

What if there were no tunnel involved? How would you do a NAT between two LAN IPs? Say you want to access 192.168.1.1 using a different LAN IP of 172.0.0.3, and both subnets are connected behind the same interface. Or say the 192.168.1.1 device is connected to a different interface on the OPNsense firewall. It would be easiest if it could all be done on the same interface, with an alternate IP (say a 192.168.1.253/29 Virtual IP in this case) assigned to that interface so the firewall can reach the host at 192.168.1.1.
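Purely as a sketch of the idea in pf terms (not something I've confirmed on OPNsense, and igb1 is just a placeholder for that shared LAN interface), the same-interface case seems like it would need both a destination and a source translation so replies come back through the firewall:

rdr on igb1 from any to 172.0.0.3 -> 192.168.1.1
nat on igb1 from any to 192.168.1.1 -> 192.168.1.253

In the GUI I'd expect those to correspond to a Port Forward entry plus an Outbound NAT entry, but that's my assumption.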
#9
I'm trying to set up a NAT that will allow me to reach devices behind a local LAN interface using a Virtual IP (IP Alias) or a non-WAN IP. The goal is to be able to reach devices behind this NAT using a VPN tunnel.

Say I have a /29 subnet on the LAN side of Site B, and I want to reach local-only devices through a NAT, using a Virtual IP that is part of the /29.

Site A OPNSense 172.0.0.1/29
Site A Virtual IP#1 172.0.0.3 -> Site A local device 10.0.0.1
Site A Virtual IP#2 172.0.0.2 -> Site A local device 192.168.1.1

I can't route the Site A local device IPs across the tunnel, so I would like to reach them using an IP that is part of the /29, which is already routed across the tunnel. Getting to the local devices should be achieved either by adding a Virtual IP in the same subnet to the LAN interface, or by using a separate interface that lives on the same local subnet as the device.

The problem I'm running into is that most NAT guides and documentation are for a NAT on the WAN interface. Looking at the NAT rules with pfctl when trying different iterations doesn't seem to be showing me the flow I'm expecting.

In the end I want to be able to reach 10.0.0.1 using 172.0.0.3, and 192.168.1.1 using 172.0.0.4, across the tunnel. I can reach the OPNsense 172.0.0.1 IP across the tunnel, no problem. I can ping the Virtual IPs, but getting the NAT working is what's failing me. I can get to the Virtual IPs across the tunnel, but they are acting like extensions of the OPNsense LAN IP, i.e., I can open the OPNsense Web GUI on both Virtual IPs, which is not desired.

Is what I want to achieve doable? It should be. I know I can do this on a Fortigate or a Cisco ASA; I just can't seem to translate it into OPNsense.
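To make it concrete, the translation I think I'm after looks something like this in pf terms (wg1 being a placeholder for the tunnel interface; I'm assuming this maps to One-to-One NAT entries in the GUI):

binat on wg1 from 10.0.0.1 to any -> 172.0.0.3
binat on wg1 from 192.168.1.1 to any -> 172.0.0.4

and then checking what actually got loaded and matched with:

pfctl -sn
pfctl -ss | grep 172.0.0.3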

#10
22.1 Legacy Series / Re: AT&T and IPv6
June 21, 2022, 01:31:49 AM
Did you figure it out? I tried some quick configs and couldn't get IPv6 to work. I just got AT&T Fiber today.
#11
I ran into the same problem. It took me several reboots, factory resets, and turning off services to finally figure out what it was.

I have two WAN interfaces and the parent interface for my VLANs. This worked well before, but broke with 22.1. I will leave IPS/IDS disabled for now.
#12
I currently have 3 gateways configured, but this happened when I had two.

One of them is a 1000/35 cable connection with a data cap, the other is an LTE connection with no data cap, and the third is a WireGuard tunnel sent out over the LTE connection. Most traffic is configured to use the Cable WAN, and a few rules send other heavy traffic out the LTE connection.

The LTE performance varies throughout the day, being very good at night, but slow during the day when it is congested.

The issue I have is that if I configure gateway monitoring to use a distant host to determine packet loss (for example, pinging 1.1.1.1 over the LTE connection), then as the LTE link gets congested, the packet loss starts crossing the configured threshold and an alarm is generated in the Gateway log file. This is fine. The problem is that every time there is an alarm event, ALL connections on the network drop out briefly, including traffic that is being sent out the Cable gateway.

I created some Gateway Groups, with Cable as Tier 1 and LTE as Tier 5. At first I was using this group as the gateway for my main traffic, but even if the LTE gateway is having issues, the traffic going out the Tier 1 interface should not be interrupted.

Even when setting my default internet rule to use the Single Cable gateway, I was still seeing drops/connection issues when the LTE ping times to 1.1.1.1 went above the threshold.

The fix was to have the LTE connection ping the LTE modem instead of an external IP, but this unfortunately leaves me with no ability to switch traffic based on congestion.

When I added the WireGuard interface over LTE, its monitor pings the tunnel's default gateway, which sits on the other end and traverses the LTE network, so when the LTE connection starts congesting, the WireGuard tunnel monitoring starts seeing loss. I started seeing packet loss on my LTE+Cable group (even though the WG interface is not part of this group, the LTE gateway monitors a local IP, and there was no packet loss there).

Logs of the WG interface event

2021-09-05T07:57:11 dpinger[62885] GATEWAY ALARM: WAN_PIAWG_IPv4 (Addr: 10.9.128.1 Alarm: 0 RTT: 430812us RTTd: 291906us Loss: 8%)
2021-09-05T07:57:11 dpinger[2274] WAN_PIAWG_IPv4 10.9.128.1: Clear latency 430812us stddev 291906us loss 8%
2021-09-05T07:56:54 dpinger[46066] GATEWAY ALARM: WAN_PIAWG_IPv4 (Addr: 10.9.128.1 Alarm: 1 RTT: 510552us RTTd: 216970us Loss: 6%)
2021-09-05T07:56:54 dpinger[2274] WAN_PIAWG_IPv4 10.9.128.1: Alarm latency 510552us stddev 216970us loss 6%
2021-09-05T07:54:05 dpinger[69999] GATEWAY ALARM: WAN_PIAWG_IPv4 (Addr: 10.9.128.1 Alarm: 0 RTT: 432131us RTTd: 246902us Loss: 1%)
2021-09-05T07:54:05 dpinger[2274] WAN_PIAWG_IPv4 10.9.128.1: Clear latency 432131us stddev 246902us loss 1%
2021-09-05T07:53:53 dpinger[80288] GATEWAY ALARM: WAN_PIAWG_IPv4 (Addr: 10.9.128.1 Alarm: 1 RTT: 502068us RTTd: 187465us Loss: 1%)
2021-09-05T07:53:53 dpinger[2274] WAN_PIAWG_IPv4 10.9.128.1: Alarm latency 502068us stddev 187465us loss 1%


Screenshot of my Gateway config is attached. Cable does ping a remote gateway because lately my ISP has had a lot of issues with packet loss, and I want the system to fail over to the LTE connection when that happens.