Recent posts

#81
Tutorials and FAQs / Re: [HOWTO] Sonos speaker in m...
Last post by fastboot - Today at 07:53:47 AM
"I'm guessing I need to add a 'Sonos pass' rule before the 'block access to private networks' rule?"

=> Indeed
#82
26.1 Series / Re: Upgrade to RC1 successful
Last post by OPNenthu - Today at 06:56:32 AM
@franco I upgraded 25.7.11_2 to 26.1.r2_2 today.  I haven't migrated my rules yet, but noticed a couple of interface-related things:

- After upgrade, the radvd service was enabled on two interfaces where it was not previously enabled.  I'm using Dnsmasq for RAs on all interfaces.

- After I manually switched all my interfaces from "Track Interface (legacy)" to "Identity Association," radvd then disabled itself everywhere.

I don't know why those two interfaces, specifically, were selected and auto-enabled and not the others.


- Finally, I uninstalled the 'os-isc-dhcp' plugin (I think it was labelled 'development' even though I had switched back to the Community branch after the upgrade).  This broke the UI and required a reboot.

I could not reboot from the UI itself because it was unresponsive; I had to log in over SSH and issue the reboot from the console menu.


All seems normal after the reboot.


#83
26.1 Series / Re: MiniUPNPD
Last post by d0shie - Today at 06:22:03 AM
Quote from: nero355 on Today at 12:08:12 AM
Why not just give them 1:1 Port Mapping and leave it at Moderate NAT level instead of fully Open NAT ?!
Because Moderate NAT can only talk to Moderate and Open NAT. Other console players who are on Strict NAT (more than you'd think) can only talk to Open NAT. With how prevalent the P2P matchmaking model is, Moderate NAT just won't do if you want the best chance at finding people to play with. There's also the effort of manually configuring mappings for every game service. The closer equivalent would be putting that console in a DMZ, but that would mean the ports stay open 24/7, and only for that one console.
UPnP, on the other hand, provides the perfect middle ground while cleaning up after itself, so allowed devices can cycle between ports. I'd say these days consoles are one of the primary reasons why UPnP is in use.
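For anyone curious what the console is actually doing on the LAN: the first step of a UPnP IGD session is SSDP discovery via multicast. Here's a minimal sketch in Python (stdlib only; the multicast address and search target are the standard SSDP/IGD values, but whether anything answers depends on your network and whether MiniUPnPd allows your host):

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast group/port
SEARCH_TARGET = "urn:schemas-upnp-org:device:InternetGatewayDevice:1"

def discover_igd(timeout=2.0):
    """Send an SSDP M-SEARCH and report whether a UPnP gateway answers."""
    msg = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        f"ST: {SEARCH_TARGET}",
        "", "",
    ]).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(msg, SSDP_ADDR)
        data, addr = sock.recvfrom(4096)  # first responder wins for this sketch
        return f"IGD response from {addr[0]}"
    except (socket.timeout, OSError):
        return "no UPnP gateway answered"
    finally:
        sock.close()

if __name__ == "__main__":
    print(discover_igd())
```

After discovery, the client fetches the gateway's device description URL from the response and uses SOAP calls like `AddPortMapping` with a lease duration, which is the "cleaning up after itself" part: mappings expire instead of staying open 24/7.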
#84
26.1 Series / Re: New rule system
Last post by OPNenthu - Today at 06:20:59 AM
@tessus sorry, also pay attention to this note in the latest release notes:

Quote
o Firewall: NAT: Port Forwarding is now called "Destination NAT".  Firewall rule associations are no longer supported, but the old associated firewall rules remain in place with their last known configuration and can now be edited to suit future needs.

This was discussed in another thread too.  If you have existing NAT-associated rules on your interfaces, they'll still be there after the upgrade, but they're now unlinked from the NAT rules, so you have to remember to manage them manually.
#85
25.7, 25.10 Series / Re: DuckDB-related DNS/DHCP ou...
Last post by mawa2559 - Today at 04:45:01 AM
of course, forgot to upload images. Attached here.
#86
25.7, 25.10 Series / DuckDB-related DNS/DHCP outage...
Last post by mawa2559 - Today at 04:32:27 AM
Hi all. First time poster and new OPNsense user here.

TL;DR: DNS/DHCP breaks once per day but appears to self-resolve when the DuckDB restore/cleanup task runs. How do I make this cycle stop?

Background:
I first set up OPNsense 25.7.11 about two weeks ago. I followed a pretty basic tutorial to set up my interfaces and a couple of VLANs, configured Dnsmasq for DHCP plus Unbound, added DNS over TLS, enabled IPv6, and started playing around with plugins like node_exporter and Tailscale, as well as adding blocklists. It's been a lot of fun and I was really enjoying the platform.

However, after a few days I started experiencing a once-daily DNS outage: first resolution becomes spotty, then it fails completely, of course resulting in failures all over my network. At first I definitely blamed myself and a bad config. I tried systematically removing IPv6, DoT, a wildcard override in Unbound, and the single blocklist I had added, and I removed all restrictive firewall rules and added new ones to ensure DNS ports were allowed, but no matter what, the once-daily DNS outage keeps occurring.

Through troubleshooting, I discovered that in addition to the DNS issues, all IPv4 addressing stops working during these outages: clients lose their IPv4 addresses (falling back to APIPA addressing) and OPNsense becomes unreachable over IPv4, though it remains accessible over IPv6. All services show as running and healthy on OPNsense, including Unbound and DHCP. The weirdest part is that OPNsense itself has no trouble resolving hostnames with the diagnostic tool during these outages.

Troubleshooting:
Two days ago I factory reset my ISP's router (which sits in front of OPNsense in bridge mode) and did a fresh install of OPNsense. My LAN firewall rules currently consist only of allowing IPv4 and IPv6 from LAN to all, pic attached. I again enabled Dnsmasq for DHCP plus Unbound and DoT, and am still running IPv6; the DNS issues continue once daily, with all of the same symptoms/behavior. Today, January 26th, the DNS issues began around 11:15 and ended around 13:30, as evidenced by Uptime Kuma DNS monitoring (image attached). I was not home, so I did nothing to mitigate it, and the issue self-resolved.


This time, I managed to catch a line in the Unbound log that coincided exactly with when the issue self-resolved:

2026-01-26T13:30:51-06:00 Notice unboundDatabase auto restore from /var/cache/unbound.duckdb for cleanup reasons in 2.59 seconds

Likely related: metrics collected via node_exporter and viewed in Grafana show free memory dropping from over 1 GiB to below 500 MiB at 13:31, essentially the same time that DuckDB restore/cleanup occurred (image attached).

I am assuming that this DB is becoming unhealthy/corrupted/oversized ahead of the (likely scheduled?) cleanup on a regular basis, and that this issue somehow affects DHCP. Forgive my ignorance of how DuckDB is used on the platform. My primary concern, predictably, is: how can I make this stop happening? Turn off Unbound reporting altogether? Initiate more frequent cleanups? Set a stricter DB size limit somewhere? I'm not quite sure how to proceed, and as you might expect, OPNsense is not passing the wife test so far (it keeps interrupting her shows).

I'm going to disable Unbound reporting right now and see if that helps, but I'm interested to hear if anybody has insight or suggestions! Happy to provide any other info as needed. Thanks in advance!

opnsense version: OPNsense 25.7.11_2-amd64
Hardware: Lenovo ThinkCentre M70q Gen1, 4 GB RAM, 12-core CPU, 500 GB SATA SSD, 1 GbE onboard NIC used for the WAN interface, 2.5 GbE Intel M.2-to-Ethernet adapter card for LAN
Environment: ISP modem in bridge mode > OPNsense box > 24-port USW Pro for LAN, including 1 WAP
#87
26.1 Series / Re: New rule system
Last post by tessus - Today at 04:05:12 AM
Thanks @OPNenthu

Quote from: OPNenthu on Today at 02:09:47 AM
nothing changes except for the ability to set Floating rules on a single, specific interface.

Yep, this might be bad for me. I actually use quite a few of those.

Of course I could move them to the specific interface, but I used the floating rules UI for a reason. It is easier and more convenient to have an overview, especially if you want to clone a rule for a new interface: you don't have to click through every interface to find the rule.
The workaround of creating groups with a single interface is a massive administrative overhead. Why not support a single interface instead?

Anyway, I am sure I will adapt. I just hope it's not too much work and that the result won't be less intuitive and convenient.
#88
25.7, 25.10 Series / Re: RAM usage
Last post by OPNenthu - Today at 03:57:42 AM
Take a look at this thread for virtualization recommendations: https://forum.opnsense.org/index.php?topic=44159.0

Also, the hardware sizing guide gives some info on RAM needs for various scenarios.

My firewall has 8 GB (also an N5105, but bare metal) and I'm not even using half of it currently.  Swap is not touched at all, and I have /var/log and /tmp both on a RAM disk to help preserve the SSD.  That's without IDS/IPS, and with ~1700 policies across 8 groups, ~500k table entries, 800k+ domains in Unbound blocklists, a few WireGuard tunnels, and WAN shaping for anti-bufferbloat.  Only a few users though, and normally fewer than 500 active firewall states.
#89
I appreciate the information offered about configuring Suricata with os-stunnel; it's a complicated but necessary topic for improving network security. I've previously struggled with comparable situations in which tiny configuration errors resulted in huge problems. It may be helpful to describe specific instances or issues encountered while monitoring traffic; these personal anecdotes can help highlight best practices and common pitfalls. I look forward to learning from everyone's experiences!
#90
26.1 Series / Re: New rule system
Last post by OPNenthu - Today at 03:05:13 AM
I added a feature request: https://github.com/opnsense/core/issues/9652

If this gets rejected, so be it.  I don't know what limitations or challenges there are to doing this with the new MVC approach.