Messages - d0shie

#1
As title suggests, in the past couple of days I've been noticing unknown IPs spamming connection attempts to certain game servers I've port forwarded via DNAT. I know that when Firewall rule is set to Pass, the packets will automatically be redirected to the destination host (my game servers in this case) regardless of the source IP(s).

Seeing as Rules (new) no longer offers Add associated rule, I tried the new Register rule in its place. Using Inspect mode on the WAN interface, I can see that the automatically generated rule gets placed below the block rule (with an alias containing said IPs) I've made. What was odd was that these automatic rules didn't have a sequence assigned, but I thought nothing of it at the time. So far, so good.

Then I went over to Log Files -> Live View to watch for packets coming from the offending IPs. For an hour, all I could see were logs with the "rdr" action, which is expected, since I did enable logging for the respective DNAT rules. But that was all I could see: the offending IPs went straight through and attempted connections to my game servers anyway, with no sign of packets being blocked.

To confirm my suspicion, I went back to DNAT, switched Firewall rule to Manual, and manually created a "linked" rule on the WAN interface. This time, Inspect shows that the newly created rule does get assigned a sequence. And what do you know: the packets were still redirected, but the offending IPs were blocked this time, and I saw no connection attempts in my game server console.

So my questions then become:
1. Is this the expected behavior?
2. If so, is there a better, proper way to do this?

I'm well aware I could just use Invert Source and put the alias in, but I'd like the observability if possible. This is, after all, a firewall appliance. I don't just want to see packets being redirected and accepted; I need to see what ISN'T getting through via a separate block rule.
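To illustrate why the missing sequence worries me, here's my mental model of the resulting ruleset, sketched as a hypothetical pf.conf fragment (the interface, port, and alias names are all made up; OPNsense generates the real rules itself):

```
# Translation happens first: rdr rewrites the destination, but it
# does not pass or block anything on its own.
rdr on wan0 proto tcp from any to (wan0) port 27015 -> 192.168.1.50

# The filter then sees the *translated* destination. Rules are
# evaluated in order and "quick" stops at the first match, so the
# block has to sort before the generated pass rule to take effect.
block in log quick on wan0 proto tcp from <abuse_ips> to 192.168.1.50 port 27015
pass in quick on wan0 proto tcp from any to 192.168.1.50 port 27015
```

If the auto-generated pass rule ends up ordered before the block (or outside the sequence entirely), the block never fires, which would match what I observed.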
#2
Applied the patches and got the same expected outputs: migration reported as complete with no errors, interface settings shown.
We'll see how it goes from here. Thanks for the hard work, Franco!
#3
26.1 Series / Re: UI lockout after 26.1 upgrade
January 30, 2026, 11:06:12 AM
Coming from this PPPoE connection timeout thread, I've tried the above commands: the first returned nothing, and the second indicated migration problems. Here's the (hopefully) relevant log output from when it happened:
2026-01-30T01:45:00 Notice kernel <118>[25] *** OPNsense\Interfaces\Settings migration failed from 0.0.0 to 1.0.0, check log for details
2026-01-30T01:45:00 Notice kernel <118>[25] Migrated OPNsense\Firewall\DNat
2026-01-30T01:45:00 Notice kernel <118>[25] Migrated OPNsense\IDS\IDS from 1.1.1 to 1.1.2
2026-01-30T01:45:00 Error config #2 {main} )
2026-01-30T01:45:00 Error config #1 /usr/local/opnsense/mvc/script/run_migrations.php(54): OPNsense\Base\BaseModel->runMigrations()
2026-01-30T01:45:00 Error config #0 /usr/local/opnsense/mvc/app/models/OPNsense/Base/BaseModel.php(939): OPNsense\Base\BaseModel->serializeToConfig()
2026-01-30T01:45:00 Error config Stack trace:
2026-01-30T01:45:00 Error config   in /usr/local/opnsense/mvc/app/models/OPNsense/Base/BaseModel.php:814
2026-01-30T01:45:00 Error config Model OPNsense\Interfaces\Settings can't be saved, skip ( OPNsense\Base\ValidationException: [OPNsense\Interfaces\Settings:dhcp6_norelease] Value should be a boolean (0,1).{yes}
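For context, the validation error reads to me like the migrated config carried a legacy truthy string where the new model wants a strict boolean. A hypothetical before/after of the config.xml entry (the element placement is my guess, only the field name and values come from the log):

```xml
<!-- what the migration apparently choked on: a legacy truthy string -->
<dhcp6_norelease>yes</dhcp6_norelease>

<!-- what the new OPNsense\Interfaces\Settings model validates: 0 or 1 -->
<dhcp6_norelease>1</dhcp6_norelease>
```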
#4
Allow me to restate the timeline on my part: I didn't have any issue like this with PPPoE either until 25.7.11-9. The hostwatch hiccups forced a reboot and a log wipe, so IMHO there's no real evidence that .11 (and the subsequent hotfixes) is at fault. I had also disabled hostwatch first thing, before the hotfixes even came out. I upgraded straight to 26.1 after the incident. That's it.
And yes, I went through exactly what OP described, down to the logging output. In my case, I even waited through over 50 PPPoE reconnect attempts and got nothing. Reloading services from the CLI didn't help either. Only a reboot fixed the problem, though my WAN has stayed up since 26.1, unlike OP's.
I'm only posting here because it does look like there's a real possibility the issue might resurface for me, so I'd like to follow up.
#5
If it means anything, I didn't have this issue on 25.7.11-2. I only upgraded to 25.7.11-9 today in preparation for test-driving 26.1. Then again, .11 had stalled my installation with hostwatch days before the hotfixes came in, which warranted a reboot, so it wasn't much of an uptime metric.
#6
I also hit this an hour ago (on 25.7.11-9), and yeah, only a reboot of the firewall would fix the problem. I didn't think much of it at the time since the connection stayed up, so I upgraded to 26.1 right after, because why not. It's a little concerning that this is happening to someone else: I did look around and found threads with this issue dating back years, but nothing very conclusive. And I'm not sure the landscape is the same, considering mpd5 is involved this time.
#7
26.1 Series / Re: MiniUPNPD
January 27, 2026, 06:22:03 AM
Quote from: nero355 on January 27, 2026, 12:08:12 AM
Why not just give them 1:1 Port Mapping and leave it at Moderate NAT level instead of fully Open NAT ?!
Because Moderate NAT can only talk to Moderate and Open NAT. Other console players on Strict NAT (more than you'd think) can only talk to Open NAT. With how prevalent the P2P matchmaking model is, Moderate NAT just won't do if you want the best chance of finding people to play with. There's also the effort of manually configuring mappings for every game service to factor in. The closer equivalent would be putting that console in a DMZ, but that means the ports stay open 24/7, and only for that console.
UPnP, on the other hand, provides the perfect middle ground while cleaning up after itself, so allowed devices can cycle between ports. I'd say consoles are one of the primary reasons UPnP is still in use these days.
#8
25.7, 25.10 Series / Re: [SOLVED] hostwatch at 100% CPU
January 20, 2026, 09:14:42 AM
Quote from: amarek on January 20, 2026, 08:14:11 AM
THX for this thread, this service was eating all my memory. after disabling it the usage was immediately at 28%, what a great solution to roll this out for all as fix implemented and started service............
I was away from home, and thankfully only the firewall's web UI became non-functional, so I could still SSH in remotely and diagnose the problem. For me, the new service silently ate up 52GB of space for logging alone in less than two days and somewhat stalled the system as a result. I even read the changelog and noticed it, but didn't think much of it at the time.
So yes, it's one of those blunders with an unexpectedly high impact, but it's rare. And they did push out hotfixes to remedy the issue on reasonably short notice.
#9
25.7, 25.10 Series / Re: CSRF Check
January 20, 2026, 04:25:07 AM
Hi Steve, I have a faint suspicion that you're having problems with the new Automatic Discovery feature in Interfaces -> Neighbors. This is unfortunately enabled by default on the latest update and known to cause excessive logging as well as high CPU usage. Log into the console and run
du -h / | sort -rh | head
to see if your disk has run out of space. I'll bet that it has, and that it's all taken up by /var/log/hostwatch.
Removing the logs and disabling the service should solve your issue.
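If it helps, here's a rough sketch of how I'd confirm and clean it up from the console. The clean_logs helper and its argument are just for illustration; on an affected box the directory would be /var/log/hostwatch:

```shell
# clean_logs: report how much space a log directory is using,
# empty it, then report again. Purely illustrative; point it at
# /var/log/hostwatch on an affected install.
clean_logs() {
  dir="$1"
  du -sh "$dir"      # usage before cleanup
  rm -rf "$dir"/*    # drop the accumulated log files, keep the dir
  du -sh "$dir"      # usage after cleanup
}

# Example: clean_logs /var/log/hostwatch
```

Disable the service afterwards, or it'll just fill the disk again.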
#10
23.7 Legacy Series / Re: PIA Wireguard Tunnel
November 25, 2023, 04:01:39 AM
I'm using os-wireguard-go instead, and 23.7.9 broke it for me too. The WireGuard adapters just wouldn't show up for assignment, most likely due to the new changes to interface assignment for WireGuard devices mentioned in the changelog. Reverting to 23.7.8_1 fixed everything for me. I even tried the kernel plugin and had the same problem as you.
So make of that what you will; I'd use the older plugin for now.