hostwatch db grows rapidly

Started by astrandb, January 30, 2026, 10:37:29 AM

Just to be sure we're going to reinstall the correct hostwatch, restart it and check:

# pkg add -f https://pkg.opnsense.org/FreeBSD:14:amd64/26.1/MINT/26.1_4/latest/All/hostwatch-1.0.11.pkg
# service hostwatch restart
# ls -lah /var/db/hostwatch
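
If in doubt, the installed version can be double-checked afterwards with a standard pkg query (just for verification):

# pkg info hostwatch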


Cheers,
Franco

For reference on my end:

# ls -lah /var/db/hostwatch
total 36949
drwxr-x---   2 hostd hostd    5B Jan 26 10:28 .
drwxr-xr-x  23 root  wheel   31B Feb  3 15:19 ..
-rw-r-----   1 hostd hostd  9.2M Feb  3 15:26 hosts.db
-rw-r-----   1 hostd hostd  320K Jan 31 15:59 hosts.db-shm
-rw-r-----   1 hostd hostd  128M Feb  3 15:37 hosts.db-wal

Quote
Just to be sure we're going to reinstall the correct hostwatch, restart it and check:

# pkg add -f https://pkg.opnsense.org/FreeBSD:14:amd64/26.1/MINT/26.1_4/latest/All/hostwatch-1.0.11.pkg
# service hostwatch restart
# ls -lah /var/db/hostwatch


Cheers,
Franco

Thank you for the info. I ran all the commands, with the results below. Not sure if it will take a while to clean up the database or not.

root@firewall:#  pkg add -f https://pkg.opnsense.org/FreeBSD:14:amd64/26.1/MINT/26.1_4/latest/All/hostwatch-1.0.11.pkg
Fetching hostwatch-1.0.11.pkg: 100%    1 MiB   1.4MB/s    00:01   
Installing hostwatch-1.0.11...
package hostwatch is already installed, forced install
===> Creating groups
Using existing group 'hostd'
===> Creating users
Using existing user 'hostd'
Extracting hostwatch-1.0.11: 100%


root@firewall:# service hostwatch restart
hostwatch not running? (check /var/run/hostwatch/hostwatch.pid).
Starting hostwatch.


root@firewall: # ls -lah /var/db/hostwatch/
total 70972059
drwxr-xr-x   2 hostd hostd    5B Jan 29 08:20 .
drwxr-xr-x  25 root  wheel   35B Feb  3 09:28 ..
-rw-r--r--   1 hostd hostd  4.1M Feb  3 09:45 hosts.db
-rw-r--r--   1 hostd hostd   40M Feb  3 09:46 hosts.db-shm
-rw-r--r--   1 hostd hostd  1.0T Feb  2 07:53 hosts.db-wal

So the hostwatch pid file doesn't work I guess.

# pkill hostwatch && service hostwatch start
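
If it happens again, a quick way to compare the recorded pid with what is actually running (plain FreeBSD tools, nothing hostwatch-specific):

# cat /var/run/hostwatch/hostwatch.pid
# pgrep -lf hostwatch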


Cheers,
Franco

Quote from: franco on February 03, 2026, 03:54:43 PM
So the hostwatch pid file doesn't work I guess.

# pkill hostwatch && service hostwatch start


Cheers,
Franco

root@firewall:# pkill hostwatch && service hostwatch start
hostwatch already running?  (pid=37932).

root@firewall:# ps aux | grep hostwatch
root     49557   0.0  0.0   13744  2012  0  S+   10:01     0:00.00 grep hostwatch

root@firewall:# service hostwatch start
Starting hostwatch.

root@firewall:# ps aux | grep hostwatch
hostd    51113  39.4  0.2   70356 16352  -  R    10:01     0:04.91 /usr/local/bin/hostwatch -p -c -S -P /var/run/hostwatch/hostwatch.pid -d /var/db/hostwatch

root@firewall:# ll -lah /var/db/hostwatch/
total 70972059
drwxr-xr-x   2 hostd hostd    5B Jan 29 08:20 ./
drwxr-xr-x  25 root  wheel   35B Feb  3 09:28 ../
-rw-r--r--   1 hostd hostd  4.1M Feb  3 10:01 hosts.db
-rw-r--r--   1 hostd hostd  166M Feb  3 10:03 hosts.db-shm
-rw-r--r--   1 hostd hostd  1.0T Feb  2 07:53 hosts.db-wal


When you have stopped it, try to remove the extra files named hosts.db-* and restart.
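
Something along these lines should do it, assuming the paths from your listing; only the -wal and -shm sidecar files go, hosts.db itself stays (use pkill hostwatch again if the pid file is stale):

# service hostwatch stop
# rm /var/db/hostwatch/hosts.db-wal /var/db/hostwatch/hosts.db-shm
# service hostwatch start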

Very odd case.


Cheers,
Franco

Quote from: franco on February 03, 2026, 04:14:46 PM
When you have stopped it, try to remove the extra files named hosts.db-* and restart.

Very odd case.


Cheers,
Franco

That seemed to do it, thank you very much.

root@firewall:/var/db/hostwatch # ls -lah
total 10631
drwxr-xr-x   2 hostd hostd    5B Feb  3 10:26 .
drwxr-xr-x  25 root  wheel   35B Feb  3 09:28 ..
-rw-r--r--   1 hostd hostd  4.1M Feb  3 10:27 hosts.db
-rw-r--r--   1 hostd hostd  256K Feb  3 10:27 hosts.db-shm
-rw-r--r--   1 hostd hostd  119M Feb  3 10:27 hosts.db-wal

February 09, 2026, 07:19:01 PM #22 Last Edit: February 09, 2026, 07:56:37 PM by OPNenthu
I had hostwatch-1.0.11 running since at least the OPNsense-26.1.1 release (maybe even prior to that with a manual patch update, I don't recall) and I just patched up to 1.0.12.

It looks like the vacuuming fix may not be working for me, as I have a bunch of accumulated IPv6 temporary addresses for the same IoT client going back to the previous month (screenshot attached).

In the second screenshot, for comparison, the filtered output from Diagnostics->NDP Table shows just 3 current entries: a stable EUI-64 address, a temporary address, and a link-local address, all of which are also present in hostwatch.

I don't know for sure, but I'm wondering whether the change to trigger vacuuming every 10k inserts is doing me any favors for my small network size if I never hit that threshold. In that case, maybe that value could be made configurable.

FWIW the db is not overgrown.  I only have 229 total hostwatch entries at present.

root@firewall:~ # ls -lah /var/db/hostwatch
total 17707
drwxr-xr-x   2 hostd hostd    5B Jan 27 12:29 .
drwxr-xr-x  24 root  wheel   33B Feb  9 12:19 ..
-rw-r--r--   1 hostd hostd  4.0M Feb  9 12:24 hosts.db
-rw-r--r--   1 hostd hostd  128K Feb  9 12:24 hosts.db-shm
-rw-r--r--   1 hostd hostd  128M Feb  9 13:07 hosts.db-wal
root@firewall:~ #

February 09, 2026, 08:07:09 PM #23 Last Edit: February 09, 2026, 08:08:49 PM by franco
> It looks like the vacuuming fix may not be working for me, as I have a bunch of accumulated IPv6 temporary addresses for the same IoT client going back to the previous month (screenshot attached).

That's not part of vacuuming. We're considering cleansing the database, but we're not sure which intervals we want to enforce. This matters mostly for IPv6; for IPv4, historic information is valuable (and sparse).
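
Purely as a hypothetical sketch of what an interval-based cleanup could look like (made-up table and column names, not the actual hostwatch schema, and only with hostwatch stopped):

# sqlite3 /var/db/hostwatch/hosts.db "DELETE FROM hosts WHERE last_seen < strftime('%s','now','-30 days'); -- hypothetical schema"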

> In the second screenshot, for comparison, the filtered output from Diagnostics->NDP Table shows just 3 current entries: a stable EUI-64 address, a temporary address, and a link-local address, all of which are also present in hostwatch.

The lifetime of NDP entries is only in the range of mere minutes, which creates other visibility issues (infamously, the ISC-DHCPv6 lease page showing clients as offline when they aren't).
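
For comparison on the console, the live NDP table can be dumped with the stock FreeBSD tool (the same data the Diagnostics page shows):

# ndp -an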


Cheers,
Franco

Ah, thanks.  Now I got it.  It's a database optimization: https://sqlite.org/lang_vacuum.html
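
For anyone curious, something similar can be run by hand with the sqlite3 command-line tool if it is installed, ideally with hostwatch stopped; these are standard SQLite statements, and the TRUNCATE checkpoint is what shrinks the -wal file:

# sqlite3 /var/db/hostwatch/hosts.db "PRAGMA wal_checkpoint(TRUNCATE); VACUUM;"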

I was thinking it was like the Kea process that periodically removes stale entries from its leases file.

So, the useful retention period for this data is TBD...