Messages - GreenMatter

#1
So, I upgraded to 26.1 and imported my firewall rules. Among them are now standalone firewall rules that were formerly associated with destination NAT rules.
In grid view, those rules show the same source and destination as the corresponding destination NAT policy rules; but after opening them in the edit window, the source and destination options are empty (nothing selected)...

Is it a bug or a feature? ;-)
#2
25.7, 25.10 Series / Re: Hostwatch - high disk writes
January 19, 2026, 12:01:24 PM
Updated to 25.7.11_2.
Unfortunately, there's only a slight difference in the I/O demand created by hostwatch.
top -S -m io -o total
last pid:  8428;  load averages:  0.39,  0.41,  0.37                                                                                                                           up 2+17:07:59  11:52:30
144 processes: 2 running, 140 sleeping, 2 waiting
CPU:  5.1% user,  0.0% nice,  2.9% system,  0.5% interrupt, 91.5% idle
Mem: 343M Active, 6483M Inact, 1559M Wired, 670M Buf, 3881M Free
Swap: 8192M Total, 8192M Free
  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
92104 hostd       2707     37      0   2640      0   2640  99.21% hostwatch
 7034 root          53     54      0     19      0     19   0.71% python3.11
   16 root         548      1      0      2      0      2   0.08% bufdaemon
    1 root           0      0      0      0      0      0   0.00% init
97153 unbound       47      1      0      0      0      0   0.00% unbound
    2 root          56      0      0      0      0      0   0.00% clock
 5314 root          12      6      0      0      0      0   0.00% ng_queue
 1474 squid          0      0      0      0      0      0   0.00% security_file_certg
    3 root           0      0      0      0      0      0   0.00% crypto
10115 root           0      0      0      0      0      0   0.00% ge


iostat -x 2
                        extended device statistics  
device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b  
da0            0     105      7.7   4048.1     0     1     0     1    0   0 
da1            0       0      0.0      0.0     0     0     0     0    0   0 



Plus, I wasn't able to start hostwatch when I hand-picked interfaces; it only works when "All" is selected. When trying to start it from the CLI:

service hostwatch restart

hostwatch not running? (check /var/run/hostwatch/hostwatch.pid).
Starting hostwatch.
thread 'main' (116664) panicked at src/main.rs:53:79:
called `Option::unwrap()` on a `None` value
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Abort trap
/usr/local/etc/rc.d/hostwatch: WARNING: failed to start hostwatch
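
For what it's worth, the panic above is the generic failure mode of calling `Option::unwrap()` on a value that turned out to be `None`. Below is only a minimal sketch of that pattern, assuming the missing value is the hand-picked interface; I don't have the hostwatch source, and the names (pick_interface, wanted) are made up for illustration:

// Hypothetical sketch of the failure mode: unwrapping a lookup that
// returned None aborts the whole process, just like the trace above.
fn pick_interface(available: &[String], wanted: Option<&str>) -> String {
    // Panics with "called `Option::unwrap()` on a `None` value" if the
    // hand-picked name is not in the list of available interfaces.
    available
        .iter()
        .find(|name| Some(name.as_str()) == wanted)
        .unwrap()
        .clone()
}

// A guarded variant would report the problem instead of aborting.
fn pick_interface_safe(available: &[String], wanted: Option<&str>) -> Result<String, String> {
    available
        .iter()
        .find(|name| Some(name.as_str()) == wanted)
        .cloned()
        .ok_or_else(|| format!("interface {:?} not found", wanted))
}

fn main() {
    let ifaces = vec!["vtnet0".to_string(), "vtnet1".to_string()];
    println!("{:?}", pick_interface_safe(&ifaces, Some("igb0"))); // prints Err(...)
    let _ = pick_interface(&ifaces, Some("igb0"));                // panics
}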



#3
I'm not sure when it started, but at least for the last couple of days I have been seeing this in IPv6 gateway monitoring:

RTT: 191.5 ms
RTTd: 123.9 ms
Loss: 3.0 %

RTT used to be 15-25 ms and RTTd around 10 ms.
The gif interface has the same MTU as on the tunnel broker's side: 1480. The gif MSS is also set to 1480, and the system automatically subtracts 60 bytes to reach 1420 (the arithmetic is sketched after the table below). It was checked with http://pmtud.enslaves.us :

Direction                Maximum Segment Size    Client Sent MSS    Notes
Server to Client IPv4    1460                    1460               OK
Client to Server IPv4    unlimited               n/a                OK
Server to Client IPv6    1420                    1420               OK
Client to Server IPv6    unlimited               n/a                OK
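
For reference, the 60-byte reduction mentioned above is just the IPv6 header (40 bytes) plus the TCP header (20 bytes); the IPv4 rows follow the same pattern with a 20-byte IP header. A minimal sketch of that arithmetic (the helper name and the 1500-byte IPv4 MTU are assumptions for illustration, not values taken from my config):

// Hypothetical helper: expected TCP MSS for a given MTU.
// IPv6 header = 40 bytes, IPv4 header = 20 bytes, TCP header = 20 bytes
// (no options). Not the tunnel's actual configuration logic.
fn expected_mss(mtu: u32, ipv6: bool) -> u32 {
    let ip_header = if ipv6 { 40 } else { 20 };
    let tcp_header = 20;
    mtu - ip_header - tcp_header
}

fn main() {
    assert_eq!(expected_mss(1480, true), 1420);  // gif tunnel, matches the table
    assert_eq!(expected_mss(1500, false), 1460); // plain IPv4, matches the table
    println!("MSS values match the pmtud results");
}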


So, is it something related to OPNsense (latest 25.7.11) or more on HE tunnel broker's side?


#4
25.7, 25.10 Series / Re: Hostwatch - high disk writes
January 17, 2026, 03:18:05 PM
Quote from: franco on January 17, 2026, 02:31:13 PM
These typical UFS issues mainly are from unclean shutdowns regardless of where the corruption occurs.

If you have a process that is writing while the power goes off there will be an error for it. And if the process is writing all the time the chances are pretty high it's going to catch that one.
It is a freshly installed system (UFS, because the VM's disk is on ZFS) with a restored config, updated to 25.7.11. It didn't experience any unclean shutdowns... BTW, I couldn't finalize the installation (it got stuck at "preparing target system") if I imported the config during installation. I had to install OPNsense, configure the LAN interface and restore the config using the web GUI...
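
Purely to illustrate the point above: a process that writes on every observed event is almost guaranteed to be mid-write when power is lost, while one that buffers in memory and flushes periodically both writes far less and shrinks that window. This is not how hostwatch actually works; the HostTable type, the flush path and /tmp/hosts.example below are made up for the sketch:

use std::collections::HashMap;
use std::fs;
use std::io;

// Hypothetical host table that buffers updates in memory and only
// touches disk on an explicit flush, instead of writing per event.
struct HostTable {
    entries: HashMap<String, String>, // MAC -> last seen address (illustrative)
    dirty: bool,
}

impl HostTable {
    fn new() -> Self {
        Self { entries: HashMap::new(), dirty: false }
    }

    // Per-event path: cheap, memory only.
    fn observe(&mut self, mac: &str, addr: &str) {
        self.entries.insert(mac.to_string(), addr.to_string());
        self.dirty = true;
    }

    // Periodic path: one write per interval, so the window in which a
    // power loss can catch an in-flight write is one flush, not every event.
    fn flush(&mut self, path: &str) -> io::Result<()> {
        if !self.dirty {
            return Ok(());
        }
        let mut out = String::new();
        for (mac, addr) in &self.entries {
            out.push_str(&format!("{mac} {addr}\n"));
        }
        fs::write(path, out)?;
        self.dirty = false;
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut table = HostTable::new();
    // Thousands of observations cost no disk I/O...
    for i in 0..10_000 {
        table.observe(&format!("00:11:22:33:44:{:02x}", i % 256), "192.0.2.1");
    }
    // ...and a single flush writes the current state once.
    table.flush("/tmp/hosts.example")?;
    Ok(())
}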
#5
25.7, 25.10 Series / Re: Hostwatch - high disk writes
January 17, 2026, 01:46:46 PM
I'll leave the following here:
top -S -m io -o total

last pid: 25126;  load averages:  0.49,  0.44,  0.46                                                                                                                           up 0+01:26:59  20:11:30
143 processes: 3 running, 138 sleeping, 2 waiting
CPU:  9.4% user,  0.0% nice,  4.7% system,  0.9% interrupt, 84.9% idle
Mem: 1154M Active, 1634M Inact, 1585M Wired, 880M Buf, 7477M Free
Swap: 8192M Total, 8192M Free

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
34081 hostd       2494     13      0   2496      0   2496  99.96% hostwatch
   16 root         500      0      0      1      0      1   0.04% bufdaemon
85376 root           0      0      0      0      0      0   0.00% php-cgi
53184 root          10      1      0      0      0      0   0.00% php
    1 root           0      0      0      0      0      0   0.00% init
74881 root           0      0      0      0      0      0   0.00% php-cgi
79105 root           0      0      0      0      0      0   0.00% csh
35073 root           0      0      0      0      0      0   0.00% php-cgi
68801 root           0      0      0      0      0      0   0.00% php-cgi
    2 root          33      0      0      0      0      0   0.00% clock
 5314 root           2      2      0      0      0      0   0.00% ng_queue
 1474 squid          0      0      0      0      0      0   0.00% security_file_certg

And also this:
fsck_ffs -n /dev/gpt/rootfs
** /dev/gpt/rootfs (NO WRITE)
** SU+J Recovering /dev/gpt/rootfs

USE JOURNAL? no

Skipping journal, falling through to full fsck
** Last Mounted on /mnt
** Root file system
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
UNALLOCATED  I=1122920  OWNER=hostd MODE=100644
SIZE=21032 MTIME=Jan 16 19:04 2026
FILE=/var/db/hostwatch/hosts.db-journal

UNEXPECTED SOFT UPDATE INCONSISTENCY

The second command's result worries me a bit, as I've just reinstalled OPNsense on a fresh VM disk (partitioned by the installer) and right away I'm getting this kind of error?



#6
25.7, 25.10 Series / Re: Hostwatch - high disk writes
January 17, 2026, 09:06:48 AM
That's what the increase in writes looks like (you can clearly see when hostwatch was running); my instance runs as a VM in Proxmox...
#7
25.7, 25.10 Series / Re: Hostwatch - high disk writes
January 16, 2026, 09:37:17 PM
Quote from: franco on January 16, 2026, 09:26:15 PM
Well, it's either enabled or not. There may be a bug that doesn't stop it but I haven't seen it. Worst case a reboot would take care of it (when properly disabled).


Cheers,
Franco
Is hostwatch supposed to create such disk writes?
#8
25.7, 25.10 Series / Re: Hostwatch - high disk writes
January 16, 2026, 09:10:27 PM
Quote from: franco on January 16, 2026, 09:02:37 PM
https://github.com/opnsense/changelog/blob/efe03ef435b5abfff641262fd69e02efd926be5a/community/25.7/25.7.11#L10-L12

Interfaces: Neighbors: Automatic Discovery.


Cheers,
Franco
Thanks, I've seen it. But it's still causing really high disk writes. For the time being, I've stopped the service...
#9
25.7, 25.10 Series / Hostwatch - high disk writes
January 16, 2026, 08:51:04 PM
After upgrading to 25.7.11, hostwatch (v. 1.0.2) causes high disk writes (60M) and increased CPU utilisation.
Is there any fix for it?
#10
25.7, 25.10 Series / Ntopng - high CPU utilization
November 28, 2025, 12:04:41 PM
I would like to keep using ntopng for a general overview of data flows, but it causes high CPU utilization (on an N100) when transfer speeds start reaching 400 Mb/s (OPNsense runs as a VM in Proxmox). That wasn't the case when I was using Zenarmor...
Is there a way to set ntopng to be less resource hungry?
#11
25.7, 25.10 Series / Re: netflow on 25.7
July 24, 2025, 12:14:02 PM
+1
#12
Quote from: franco on June 13, 2025, 12:20:00 PM
Filing a plugins bug report could help reach the maintainer. Not sure if this a general issue as I haven't seen a ticket and nothing really changed except the C-ICAP upstream version recently I think.
Thanks, I filed a bug report...
#13


Since version 25.1.7, C-ICAP doesn't start automatically and throws the following errors in the log:


2025-06-12T21:03:43  Critical  c-icap  main proc, Error opening/parsing config file
2025-06-12T21:03:43  Critical  c-icap  main proc, WARNING: Can not check the used c-icap release to build service clamd_mod.so
2025-06-12T21:03:43  Critical  c-icap  main proc,
2025-06-12T21:03:43  Critical  c-icap  main proc, WARNING: Can not check the used c-icap release to build service virus_scan.so
2025-06-12T21:03:43  Critical  c-icap  main proc, Warning, alias is the same as service_name, not adding
2025-06-12T21:03:43  Critical  c-icap  main proc, The line is: sys_logger.access !localserver
2025-06-12T21:03:43  Critical  c-icap  main proc, Fatal error while parsing config file: "/usr/local/etc/c-icap/c-icap.conf" line: 32
2025-06-12T21:03:43  Critical  c-icap  main proc, Error adding acl spec: !localserver.
2025-06-12T21:03:10  Critical  c-icap  main proc, Error opening/parsing config file
2025-06-12T21:03:10  Critical  c-icap  main proc, WARNING: Can not check the used c-icap release to build service clamd_mod.so
2025-06-12T21:03:10  Critical  c-icap  main proc,
2025-06-12T21:03:10  Critical  c-icap  main proc, WARNING: Can not check the used c-icap release to build service virus_scan.so
2025-06-12T21:03:10  Critical  c-icap  main proc, Warning, alias is the same as service_name, not adding
2025-06-12T21:03:10  Critical  c-icap  main proc, The line is: sys_logger.access !localserver
2025-06-12T21:03:10  Critical  c-icap  main proc, Fatal error while parsing config file: "/usr/local/etc/c-icap/c-icap.conf" line: 32
2025-06-12T21:03:10  Critical  c-icap  main proc, Error adding acl spec: !localserver.

And once the line:
sys_logger.access !localserver
is removed from the config file /usr/local/etc/c-icap/c-icap.conf, I'm able to start C-ICAP manually.

How can I fix it permanently?
#14
Is there any update related to this issue?
#15
I can see exactly the same error. Is there any solution?