I upgraded to 26.1 a couple of hours ago and suddenly got a warning that the disk is close to full.
It is caused by the hostwatch database growing rapidly:
root@xxx:~ # ll -h /var/db/hostwatch/
total 6392960
-rw-r--r-- 1 hostd hostd 4.0M Jan 30 10:25 hosts.db
-rw-r--r-- 1 hostd hostd 12M Jan 30 10:25 hosts.db-shm
-rw-r--r-- 1 hostd hostd 6.1G Jan 30 10:25 hosts.db-wal
I have rebooted, but the file size is still the same.
Disable hostwatch for the time being via Interfaces: Neighbors: Automatic discovery.
You can also try the latest test version as we already found the auto-vacuum doesn't always trigger:
# opnsense-revert -z hostwatch
# service hostwatch restart
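If disk space is already critical, a manual checkpoint can shrink the journal right away once hostwatch is stopped (a sketch, assuming the sqlite3 command-line tool is available on the box; it writes pending changes into hosts.db and truncates the -wal file):
# sqlite3 /var/db/hostwatch/hosts.db 'PRAGMA wal_checkpoint(TRUNCATE);'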
Cheers,
Franco
Same issue observed with 26.1. Resolved by the latest test version; here's the before and after:

Before:
root@www:~ # ll -h /var/db/hostwatch/
total 2286624
-rw-r--r-- 1 hostd hostd 4.2M Jan 30 04:47 hosts.db
-rw-r--r-- 1 hostd hostd 4.3M Jan 30 04:47 hosts.db-shm
-rw-r--r-- 1 hostd hostd 2.2G Jan 30 04:47 hosts.db-wal

After:
root@www:~ # ll -h /var/db/hostwatch/
total 139904
-rw-r--r-- 1 hostd hostd 4.2M Jan 30 04:47 hosts.db
-rw-r--r-- 1 hostd hostd 4.3M Jan 30 04:47 hosts.db-shm
-rw-r--r-- 1 hostd hostd 128M Jan 30 04:47 hosts.db-wal
Quote from: franco on January 30, 2026, 10:55:03 AM
You can also try the latest test version as we already found the auto-vacuum doesn't always trigger:
# opnsense-revert -z hostwatch
# service hostwatch restart
Cheers,
Franco
Thank you. The test version worked fine. Same sizes as above.
With the help of your feedback we decided to hotfix the .11 as well; it should land in 2-3 hours.
Cheers,
Franco
Thank you for this post. I had three instances that almost ran out of disk space. Thanks to this thread I was able to fix them in a couple of minutes.
The hotfix is available now for everyone. Make sure to restart hostwatch so the correct version is actually running, since the update does not reboot the system.
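For example, restart and then confirm which version is installed (pkg info is stock FreeBSD, nothing hostwatch-specific):
# service hostwatch restart
# pkg info hostwatch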
Cheers,
Franco
Hi there,
I even rebooted after going to OPNsense 26.1_4-amd64, but hosts.db-wal grows incredibly fast (for a homelab); see below:
root@OPNsense:/var/db/hostwatch # ls -lha
total 4015
drwxr-xr-x 2 hostd hostd 5B Feb 2 16:12 .
drwxr-xr-x 21 root wheel 28B Feb 2 16:11 ..
-rw-r----- 1 hostd hostd 4.0M Feb 2 16:12 hosts.db
-rw-r----- 1 hostd hostd 32K Feb 2 16:12 hosts.db-shm
-rw-r----- 1 hostd hostd 5.4M Feb 2 16:13 hosts.db-wal
root@OPNsense:/var/db/hostwatch # ls -lha
total 4503
drwxr-xr-x 2 hostd hostd 5B Feb 2 16:12 .
drwxr-xr-x 21 root wheel 28B Feb 2 16:11 ..
-rw-r----- 1 hostd hostd 4.0M Feb 2 16:12 hosts.db
-rw-r----- 1 hostd hostd 32K Feb 2 16:12 hosts.db-shm
-rw-r----- 1 hostd hostd 14M Feb 2 16:17 hosts.db-wal
Any idea what could be wrong? Thanks in advance!
KR
Harald
If it doesn't grow beyond tens of megabytes, it's OK.
We'll refine this further to minimise database writes in the near future, which should also make the journal smaller.
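If you want to keep an eye on it in the meantime, a plain sh loop is enough to watch the journal size (just an illustration, nothing hostwatch ships):
# while sleep 300; do ls -lah /var/db/hostwatch/hosts.db-wal; done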
Cheers,
Franco
It's currently 150 MB on my home router, mostly caused by IPv6 addresses which appear to change frequently over time. I also see log entries going back to the initial activation. Is there any expiry or cleanup mechanism in place for HostWatch data (database and/or logs)?
For the database, yes, although from 1.0.9 to 1.0.11 we had to turn the "auto" cleanup into a forced cleanup per interval because it wouldn't run on "auto".
Logs are rotated via syslog-ng and then garbage collected as per your retention policies.
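If you are curious what the database actually holds, you can inspect it read-only with the sqlite3 CLI (a sketch; it assumes sqlite3 is installed, and the table names are simply whatever .tables reports rather than anything documented here):
# sqlite3 -readonly /var/db/hostwatch/hosts.db '.tables'
# sqlite3 -readonly /var/db/hostwatch/hosts.db 'PRAGMA page_count; PRAGMA freelist_count;'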
Cheers,
Franco
I too see a massive hostwatch database, along with much higher CPU and RAM usage as it has been growing. I am running 26.1_4 and have rebooted several times. Any suggestions on how to shrink it or get it back to normal? Thanks!
drwxr-xr-x 2 hostd hostd uarch 5B Jan 29 08:20 ./
drwxr-xr-x 25 root wheel uarch 35B Feb 3 08:27 ../
-rw-r--r-- 1 hostd hostd uarch 4.1M Feb 3 08:28 hosts.db
-rw-r--r-- 1 hostd hostd uarch 1.0G Feb 3 08:38 hosts.db-shm
-rw-r--r-- 1 hostd hostd uarch 1.0T Feb 2 07:53 hosts.db-wal
Is hostwatch running and is 1.0.11 installed?
Cheers,
Franco
Quote from: franco on February 03, 2026, 03:03:04 PM
Is hostwatch running and is 1.0.11 installed?
Cheers,
Franco
It appears so, yes:
hostd 35125 99.8 1924.6 51174818388 155872832 - R 08:28 44:52.60 /usr/local/bin/hostwatch -p -c -S -P /var/run/hostwatch/hostwatch.pid -d /var/
/usr/local/bin/hostwatch
/usr/local/etc/rc.d/hostwatch
/usr/local/share/licenses/hostwatch-1.0.11/BSD2CLAUSE
/usr/local/share/licenses/hostwatch-1.0.11/LICENSE
/usr/local/share/licenses/hostwatch-1.0.11/catalog.mk
/usr/local/etc/inc/plugins.inc.d/hostwatch.inc
/usr/local/opnsense/scripts/interfaces/setup_hostwatch.sh
/usr/local/opnsense/service/conf/actions.d/actions_hostwatch.conf
/usr/local/opnsense/service/templates/OPNsense/Syslog/local/hostwatch.conf
Just to be sure, we're going to reinstall the correct hostwatch, restart it and check:
# pkg add -f https://pkg.opnsense.org/FreeBSD:14:amd64/26.1/MINT/26.1_4/latest/All/hostwatch-1.0.11.pkg
# service hostwatch restart
# ls -lah /var/db/hostwatch
Cheers,
Franco
For reference on my end:
# ls -lah /var/db/hostwatch
total 36949
drwxr-x--- 2 hostd hostd 5B Jan 26 10:28 .
drwxr-xr-x 23 root wheel 31B Feb 3 15:19 ..
-rw-r----- 1 hostd hostd 9.2M Feb 3 15:26 hosts.db
-rw-r----- 1 hostd hostd 320K Jan 31 15:59 hosts.db-shm
-rw-r----- 1 hostd hostd 128M Feb 3 15:37 hosts.db-wal
Quote
Just to be sure, we're going to reinstall the correct hostwatch, restart it and check:
# pkg add -f https://pkg.opnsense.org/FreeBSD:14:amd64/26.1/MINT/26.1_4/latest/All/hostwatch-1.0.11.pkg
# service hostwatch restart
# ls -lah /var/db/hostwatch
Cheers,
Franco
Thank you for the info. I ran all the commands with the results below. I'm not sure whether it will take a while to clean up the database or not.
root@firewall:# pkg add -f https://pkg.opnsense.org/FreeBSD:14:amd64/26.1/MINT/26.1_4/latest/All/hostwatch-1.0.11.pkg
Fetching hostwatch-1.0.11.pkg: 100% 1 MiB 1.4MB/s 00:01
Installing hostwatch-1.0.11...
package hostwatch is already installed, forced install
===> Creating groups
Using existing group 'hostd'
===> Creating users
Using existing user 'hostd'
Extracting hostwatch-1.0.11: 100%
root@firewall:# service hostwatch restart
hostwatch not running? (check /var/run/hostwatch/hostwatch.pid).
Starting hostwatch.
root@firewall: # ls -lah /var/db/hostwatch/
total 70972059
drwxr-xr-x 2 hostd hostd 5B Jan 29 08:20 .
drwxr-xr-x 25 root wheel 35B Feb 3 09:28 ..
-rw-r--r-- 1 hostd hostd 4.1M Feb 3 09:45 hosts.db
-rw-r--r-- 1 hostd hostd 40M Feb 3 09:46 hosts.db-shm
-rw-r--r-- 1 hostd hostd 1.0T Feb 2 07:53 hosts.db-wal
So the hostwatch pid file doesn't work, I guess.
# pkill hostwatch && service hostwatch start
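To see whether the pid file even matches a live process, a quick check with stock tools (cat/pgrep) helps:
# cat /var/run/hostwatch/hostwatch.pid
# pgrep -fl hostwatch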
Cheers,
Franco
Quote from: franco on February 03, 2026, 03:54:43 PM
So the hostwatch pid file doesn't work, I guess.
# pkill hostwatch && service hostwatch start
Cheers,
Franco
root@firewall:# pkill hostwatch && service hostwatch start
hostwatch already running? (pid=37932).
root@firewall:# ps aux | grep hostwatch
root 49557 0.0 0.0 13744 2012 0 S+ 10:01 0:00.00 grep hostwatch
root@firewall:# service hostwatch start
Starting hostwatch.
root@firewall:# ps aux | grep hostwatch
hostd 51113 39.4 0.2 70356 16352 - R 10:01 0:04.91 /usr/local/bin/hostwatch -p -c -S -P /var/run/hostwatch/hostwatch.pid -d /var/db/hostwatch
root@firewall:# ll -lah /var/db/hostwatch/
total 70972059
drwxr-xr-x 2 hostd hostd 5B Jan 29 08:20 ./
drwxr-xr-x 25 root wheel 35B Feb 3 09:28 ../
-rw-r--r-- 1 hostd hostd 4.1M Feb 3 10:01 hosts.db
-rw-r--r-- 1 hostd hostd 166M Feb 3 10:03 hosts.db-shm
-rw-r--r-- 1 hostd hostd 1.0T Feb 2 07:53 hosts.db-wal
When you have stopped it, try to remove the extra files named hosts.db-* and restart.
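Something along these lines (a sketch of the sequence; keep hosts.db itself and only remove the -shm/-wal sidecar files, and the pkill is just a belt-and-braces step in case the pid file is stale again):
# service hostwatch stop
# pkill hostwatch
# rm /var/db/hostwatch/hosts.db-shm /var/db/hostwatch/hosts.db-wal
# service hostwatch start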
Very odd case.
Cheers,
Franco
Quote from: franco on February 03, 2026, 04:14:46 PM
When you have stopped it, try to remove the extra files named hosts.db-* and restart.
Very odd case.
Cheers,
Franco
That seemed to do it, thank you very much.
root@firewall:/var/db/hostwatch # ls -lah
total 10631
drwxr-xr-x 2 hostd hostd 5B Feb 3 10:26 .
drwxr-xr-x 25 root wheel 35B Feb 3 09:28 ..
-rw-r--r-- 1 hostd hostd 4.1M Feb 3 10:27 hosts.db
-rw-r--r-- 1 hostd hostd 256K Feb 3 10:27 hosts.db-shm
-rw-r--r-- 1 hostd hostd 119M Feb 3 10:27 hosts.db-wal
I had hostwatch-1.0.11 running since at least the OPNsense-26.1.1 release (maybe even prior to that with a manual patch update, I don't recall) and I just patched up to 1.0.12 (https://forum.opnsense.org/index.php?topic=50786.msg259797#msg259797).
It looks like the vacuuming fix (https://github.com/opnsense/hostwatch/commit/5f35418a15e88c1cd61caef955284fdf58d5c605) may not be working for me, as I have a bunch of accumulated IPv6 temporary addresses for the same IoT client going back to the prior month (screenshot attached).
In the second screenshot, for comparison, the filtered output from Diagnostics->NDP Table shows just 3 current entries: a stable EUI-64 address, a temporary address, and a link-local address, all of which are also present in hostwatch.
I don't know for sure, but I'm wondering whether the change to trigger vacuuming every 10k inserts is doing me any favors at my small network size if I never hit that threshold. In that case, maybe that value could be configurable.
FWIW the db is not overgrown. I only have 229 total hostwatch entries at present.
root@firewall:~ # ls -lah /var/db/hostwatch
total 17707
drwxr-xr-x 2 hostd hostd 5B Jan 27 12:29 .
drwxr-xr-x 24 root wheel 33B Feb 9 12:19 ..
-rw-r--r-- 1 hostd hostd 4.0M Feb 9 12:24 hosts.db
-rw-r--r-- 1 hostd hostd 128K Feb 9 12:24 hosts.db-shm
-rw-r--r-- 1 hostd hostd 128M Feb 9 13:07 hosts.db-wal
root@firewall:~ #
> It looks like the vacuuming fix may not be working for me, as I have a bunch of accumulated IPv6 temporary addresses for the same IoT client going back to the prior month (screenshot attached).
That's not part of vacuuming. We're considering cleansing the database, but we're not sure which intervals we want to enforce. This matters mostly for IPv6; in IPv4, historic information is valuable (and sparse).
> In the second screenshot, for comparison, the filtered output from Diagnostics->NDP Table shows just 3 current entries: a stable EUI-64 address, a temporary address, and a link-local address, all of which are also present in hostwatch.
The lifetime of NDP entries is only in the range of minutes, which creates other visibility issues (infamously, the ISC-DHCPv6 lease page showing clients as offline when they aren't).
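For reference, the same data the GUI shows under Diagnostics->NDP Table can be pulled from the console with the stock FreeBSD ndp tool (the grep pattern is only a placeholder for the client you are looking at):
# ndp -an | grep -i <mac-or-ipv6-prefix>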
Cheers,
Franco
Ah, thanks. Now I get it. It's a database optimization: https://sqlite.org/lang_vacuum.html
I was thinking it was like the Kea process that periodically removes stale entries from its leases file.
So, the useful retention period for this data is TBD...
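For completeness, the VACUUM from that link can also be run by hand against the hostwatch database, though only with the service stopped and purely as an illustration of what the statement does (assuming the sqlite3 CLI is present; this is not an official maintenance step):
# service hostwatch stop
# sqlite3 /var/db/hostwatch/hosts.db 'VACUUM;'
# service hostwatch start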