Messages - OPNenthu

#1
I don't think so.  I'm able to go back and forth between 1M and 2M under "Firewall->Settings->Advanced->Firewall Maximum Table Entries" and that's being reflected in the Aliases UI.

(screenshot attached)

Are you seeing any errors logged and/or hitting a memory limitation?
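
If you have shell access, you can also double-check what pf itself has applied (a generic pf command, nothing OPNsense-specific):

# show pf memory limits, including the table-entries hard limit
pfctl -sm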

#2
Quote from: Maurice on February 11, 2026, 10:13:50 PM
A single /60 is just nasty and a reason to complain.

I hope a critical mass of such complaints comes forward, but unless more subscribers decide to ditch their WiFi routers and learn a bit about networking... I don't see it.

Maybe Google will force the issue :-)

Quote from: Patrick M. Hausen on February 11, 2026, 10:30:20 PM
I don't get it. It's not like IPv6 addresses were scarce ...

I came across one argument for this which claims that it mostly comes down to two things:

- Operational complexity for cable providers.  There's apparently some cost associated with mapping and tracking migratory prefixes on CMTS networks.

- Product differentiation so that business subscribers don't start complaining about why they pay more for a /48.

I can't verify these claims.  The second one is understandable.  The first one, if true, implies something about the cost of managing prefixes on cable networks at scale.  It's interesting to note that Verizon (one of the large fiber ISPs here) does provide a /56 on its residential FiOS plans.
#3
The sublinked article on the Android Developers Blog and RFC 9663 are interesting.

Does it apply to home internet subscribers, where the delegated prefix is often a /60 or /56?  I get a /60 from my ISP, so I only have 16 /64s to play with.  In what world would I want to delegate an entire /64 to a single Android device and its connected gadgets?
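
Spelling out the arithmetic on that, for anyone following along:

# a /60 delegation leaves 64 - 60 = 4 subnet bits -> 2^4 = 16 possible /64 networks
# a /56 would give 2^8 = 256 of them, and a /48 gives 2^16 = 65536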

(I'm sure I'm either misinterpreting or missing some context...)
#4
Someone (maybe @Monviech?) can speak to this authoritatively for OPNsense.  My interpretation is in line with @Mpegger's.

I think there's evidence for this in the Dnsmasq manual if you look at the syntax for --dhcp-option:

Quote
-O, --dhcp-option=[tag:<tag>,[tag:<tag>,]][encap:<opt>,][vi-encap:<enterprise>,][vendor:[<vendor-class>],][<opt>|option:<opt-name>|option6:<opt>|option6:<opt-name>],[<value>[,<value>]]

They use OR (|) to indicate that a single --dhcp-option line takes either an option: (DHCPv4) name or an option6: (DHCPv6) name, but not both.
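
To illustrate with a rough sketch (placeholder addresses, not what OPNsense actually writes):

# DHCPv4: advertise a DNS server using the IPv4 option namespace
dhcp-option=option:dns-server,192.168.1.1
# DHCPv6: same idea, but through the separate option6 namespace (IPv6 values are bracketed)
dhcp-option=option6:dns-server,[fd00::1]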

--

EDIT: actually, it looks like the OPNsense UI is explicit about it (screenshot) when you try to save :P

#5
Quote from: rolsch on February 10, 2026, 08:19:09 PM
Only the label "rdr rule" is shown

Yes, DNAT rules get logged with 'rdr rule'.  Some context on that: https://forum.opnsense.org/index.php?topic=45348.msg226752#msg226752

If you have a separate rule to pass the associated traffic and enable logging on that, then you'll see that label.  For example, this is what a couple of NTP redirects look like ('rdr' followed by 'pass'):

(screenshot attached)

Do you have associated pass rules for your DNAT rules?  I don't remember off the top of my head but I'd imagine that if you are using implicit pass on the NAT rule then you'll only see the 'rdr' logged.
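
For reference, the underlying pf rules look roughly like this (a sketch only; the interface name and addresses are made up, and OPNsense generates the real rules from the UI):

# the redirect itself -- this is the part the log labels 'rdr rule'
rdr on igc1 inet proto udp from any to any port 123 -> 192.168.1.1 port 123
# the associated pass rule with logging; filtering happens after translation,
# so it matches the translated destination
pass in log quick on igc1 inet proto udp from any to 192.168.1.1 port 123 keep state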
#6
26.1 Series / Re: hostwatch db grows rapidly
February 09, 2026, 09:20:14 PM
Ah, thanks.  Now I got it.  It's a database optimization: https://sqlite.org/lang_vacuum.html

I was thinking it was like the Kea process that periodically removes stale entries from its leases file.

So, the useful retention period for this data is TBD...
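
For the curious, VACUUM can also be run by hand against the hostwatch DB; a rough sketch, assuming the sqlite3 CLI is installed and the service name is as guessed here (stop it first so nothing is writing):

# stop the writer, rebuild/compact the database file, then restart (service name assumed)
service hostwatch stop
sqlite3 /var/db/hostwatch/hosts.db "VACUUM;"
service hostwatch start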
#7
Quote from: LucaS on February 09, 2026, 07:45:07 PM
Possibly it loses the reply-to state

It does.  There's a recently developed patch for that which isn't released yet, but you can apply it manually to try it out.  They're looking for feedback.

https://forum.opnsense.org/index.php?topic=50760.0
#8
26.1 Series / Re: hostwatch db grows rapidly
February 09, 2026, 07:19:01 PM
I've had hostwatch-1.0.11 running since at least the OPNsense 26.1.1 release (maybe even prior to that with a manual patch update, I don't recall), and I just patched up to 1.0.12.

It looks like the vacuuming fix maybe isn't working for me, as I have a bunch of accumulated IPv6 temporary addresses for the same IoT client going back to the prior month (screenshot attached).

For comparison, the second screenshot shows the filtered output from Diagnostics->NDP Table with just 3 current entries: a stable EUI-64 address, a temporary address, and a link-local address, all of which are also present in hostwatch.

I don't know for sure, but I'm wondering if the change to trigger vacuuming every 10k inserts isn't doing me any favors on a network as small as mine, since I may never hit that threshold.  In that case, maybe that value could be made configurable.

FWIW the db is not overgrown.  I only have 229 total hostwatch entries at present.

root@firewall:~ # ls -lah /var/db/hostwatch
total 17707
drwxr-xr-x   2 hostd hostd    5B Jan 27 12:29 .
drwxr-xr-x  24 root  wheel   33B Feb  9 12:19 ..
-rw-r--r--   1 hostd hostd  4.0M Feb  9 12:24 hosts.db
-rw-r--r--   1 hostd hostd  128K Feb  9 12:24 hosts.db-shm
-rw-r--r--   1 hostd hostd  128M Feb  9 13:07 hosts.db-wal
root@firewall:~ #
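
If anyone wants to check whether the 10k-insert vacuum is actually firing, something like this should show it (a sketch; assumes the sqlite3 CLI is installed and that read-only pragmas against the live DB are acceptable):

# total pages in the main DB vs. free pages waiting to be reclaimed by VACUUM
sqlite3 /var/db/hostwatch/hosts.db "PRAGMA page_count; PRAGMA freelist_count;"
# and watch whether the -wal file ever shrinks after a checkpoint
ls -lah /var/db/hostwatch/hosts.db-wal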
#9
26.1 Series / Re: zfs and sqlite
February 09, 2026, 07:00:16 PM
That errors out, but I got it working with '$ opnsense-revert -z hostwatch':

Fetching hostwatch.pkg: .... done
Verifying signature with trusted certificate pkg.opnsense.org.20260120... done
hostwatch-1.0.11: already unlocked
Installing hostwatch-1.0.12...
package hostwatch is already installed, forced install
===> Creating groups
Using existing group 'hostd'
===> Creating users
Using existing user 'hostd'
Extracting hostwatch-1.0.12: 100%

I'm not sure how to measure the I/O difference but I didn't notice much of a problem with the previous 1.0.11 version in any case.  FWIW, I do still see the hostwatch process popping in and out of the top spot in '$ top -S -m io -o total' every few seconds:

last pid: 92973;  load averages:  0.06,  0.08,  0.08                                                      up 1+23:29:30  12:52:12
104 processes: 2 running, 100 sleeping, 2 waiting
CPU:  0.8% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.2% idle
Mem: 300M Active, 1683M Inact, 2286M Wired, 3467M Free
ARC: 1419M Total, 993M MFU, 232M MRU, 12M Anon, 23M Header, 155M Other
     1092M Compressed, 2899M Uncompressed, 2.66:1 Ratio
Swap: 8192M Total, 8192M Free

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
71347 hostd         16      0      0     16      0     16 100.00% hostwatch
    1 root           0      0      0      0      0      0   0.00% init
   33 root           0      0      0      0      0      0   0.00% aiod1
80673 root           0      0      0      0      0      0   0.00% php-cgi
27553 root           0      0      0      0      0      0   0.00% dpinger
    2 root          12      0      0      0      0      0   0.00% clock
   34 root           0      0      0      0      0      0   0.00% aiod2
25122 root           0      0      0      0      0      0   0.00% php-cgi
43874 root           0      0      0      0      0      0   0.00% php-cgi
85890 root           0      0      0      0      0      0   0.00% php-cgi
    3 root           0      0      0      0      0      0   0.00% crypto
   35 root           0      0      0      0      0      0   0.00% aiod3
19235 nobody         2      3      0      0      0      0   0.00% dnsmasq
89379 root           0      0      0      0      0      0   0.00% php-cgi
50851 root           0      0      0      0      0      0   0.00% php-cgi
    4 root           0      0      0      0      0      0   0.00% cam
   36 root           0      0      0      0      0      0   0.00% aiod4
29156 root           4      1      0      0      0      0   0.00% dpinger
76612 root           0      0      0      0      0      0   0.00% sshd-session
 3812 root           0      0      0      0      0      0   0.00% php-cgi
    5 root           0      0      0      0      0      0   0.00% busdma
53349 root           0      0      0      0      0      0   0.00% rtsold

I think there's still an issue with the vacuuming process in 1.0.11, but I'll add my data point to an appropriate thread for that.
#10
26.1 Series / Re: Legacy Rules Migration
February 09, 2026, 07:09:02 AM
Looks like a patch is available.  @franco, does this apply retroactively to those with already migrated rules?  Or would we need to roll back, upgrade, apply the patch, then migrate?
#11
Quote from: meyergru on February 08, 2026, 10:35:33 PM
Quote from: Mpegger on February 08, 2026, 08:13:48 PM
What exactly do you mean "use the device's MAC instead"? Is it possible to use MAC addresses instead of IPs in OPNsense? Or are you talking about the LLA (fe80:) address?

Yes, via a MAC alias.

If the question was about System->Settings->General->DNS servers, then I think not.  AFAIK aliases are limited to firewall rules.

@Mpegger, you can use IPv4 there for your DNS server, but keep in mind this is a crutch because of the issue with dynamic IPv6 prefixes.  There are other cases where it's been suggested to use only IPv4 for DNS for the time being as well (e.g. the "Source Net(s)" field in Unbound->Blocklists).

I think these gaps will be filled in future OPNsense releases, as the developers have been keen to make things better for us residential IPv6 internet subscribers, but it'll take some work.  I think 'hostwatch' is one small step toward that, because with it we could in theory track dynamic IPv6 hosts (including their privacy addresses) for DNS purposes, which is something currently lacking even in Dnsmasq.
#12
26.1 Series / Re: Legacy Rules Migration
February 08, 2026, 07:58:22 PM
@SMiTTY - I'm guessing you ran into this: https://github.com/opnsense/core/issues/9761
#13
26.1 Series / Re: zfs and sqlite
February 08, 2026, 07:07:11 PM
Quote from: tessus on February 08, 2026, 12:47:51 PM
The issue is that when using WAL, writes to the DB stall until they basically timeout, which brings the app using it down.

Ah, thanks.  There have been a lot of posts recently about a high number of writes related to hostwatch (which uses SQLite), so I thought you were responding to that.  Appreciate the PM.
#14
Gotcha.  I think you would still need to specify the DNS server in DHCP if you really need to use GUAs, because unless you have a static prefix you can't use the System->General settings.

In Dnsmasq you can conveniently use constructors to track the interface.
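
For reference, the underlying Dnsmasq mechanism looks roughly like this (a sketch; the device name is made up and OPNsense writes the real config from the UI):

# derive the IPv6 range from whatever prefix is currently assigned to the LAN device
dhcp-range=::100,::1ff,constructor:igc1,ra-names,12h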
#15
It sounds like you have a configuration issue somewhere.  There shouldn't be any loops regardless of whether you use GUAs or ULAs, or both.

I edited my reply above to mention that I don't think it's a good idea to use an internal DNS server for OPNsense itself (it's fine for LAN clients).  I feel that OPNsense, being the head of the network, should not depend on anything downstream of it for its core functions.  Why do you need OPNsense itself to go through Pi-hole?

You're right that the ULA would not be preferred where a GUA or IPv4 address is available, but unless OPNsense knows about them, they wouldn't be used.  Hypothetically, you would just enter the ULA address in the OPNsense config and that's what it would use, but don't quote me on that.  Again, I don't think it's a good idea anyway.

Just my unqualified two cents...