OPNsense runs out of space

Started by m4rtin, November 22, 2023, 07:02:23 PM

Quote from: doktornotor on December 13, 2023, 10:55:40 AM
I mean, with ZFS in place and compression enabled, we are not even getting meaningful figures here. Consider:

# man du

     -A      Display the apparent size instead of the disk usage.  This can be
             helpful when operating on compressed volumes or sparse files.



# find /var/log/filter -type f -exec du -Ah {} + | sort -h
9.2M    /var/log/filter/filter_20231213.log
17M    /var/log/filter/filter_20231210.log
23M    /var/log/filter/filter_20231204.log
24M    /var/log/filter/filter_20231211.log
30M    /var/log/filter/filter_20231212.log
58M    /var/log/filter/filter_20231206.log
59M    /var/log/filter/filter_20231209.log
75M    /var/log/filter/filter_20231207.log
92M    /var/log/filter/filter_20231208.log
95M    /var/log/filter/filter_20231205.log


vs.


# find /var/log/filter -type f -exec du -h {} + | sort -h
1.4M    /var/log/filter/filter_20231213.log
1.9M    /var/log/filter/filter_20231210.log
3.1M    /var/log/filter/filter_20231204.log
3.1M    /var/log/filter/filter_20231211.log
4.2M    /var/log/filter/filter_20231212.log
8.1M    /var/log/filter/filter_20231209.log
8.2M    /var/log/filter/filter_20231206.log
11M    /var/log/filter/filter_20231207.log
13M    /var/log/filter/filter_20231205.log
13M    /var/log/filter/filter_20231208.log


So, e.g., those firewall log files you listed here are actually not half a gig each, but ~5G per day. :o


547M    /var/log/filter/filter_20231129.log
540M    /var/log/filter/filter_20231127.log
534M    /var/log/filter/filter_20231206.log
532M    /var/log/filter/filter_20231128.log
531M    /var/log/filter/filter_20231205.log
529M    /var/log/filter/filter_20231130.log
522M    /var/log/filter/filter_20231207.log
512M    /var/log/filter/filter_20231204.log
509M    /var/log/filter/filter_20231123.log
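
Incidentally, ZFS can report the compression ratio directly instead of comparing two du runs. A minimal check, assuming the default OPNsense layout where the logs live on the zroot/var/log dataset (adjust the name to your pool):

# zfs list -o name,used,logicalused,compressratio zroot/var/log

Here logicalused is the uncompressed (apparent) size and used is what actually sits on disk.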


It seems to be the same on my OPNsense; both commands report identical sizes:

root@OPNsense:~ # find /var/log/filter -type f -exec du -Ah {} + | sort -h
211M    /var/log/filter/filter_20231213.log
417M    /var/log/filter/filter_20231210.log
418M    /var/log/filter/filter_20231209.log
479M    /var/log/filter/filter_20231208.log
492M    /var/log/filter/filter_20231211.log
498M    /var/log/filter/filter_20231212.log
522M    /var/log/filter/filter_20231207.log
root@OPNsense:~ # find /var/log/filter -type f -exec du -h {} + | sort -h
211M    /var/log/filter/filter_20231213.log
417M    /var/log/filter/filter_20231210.log
418M    /var/log/filter/filter_20231209.log
479M    /var/log/filter/filter_20231208.log
493M    /var/log/filter/filter_20231211.log
498M    /var/log/filter/filter_20231212.log
522M    /var/log/filter/filter_20231207.log
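
For what it's worth, a quick way to confirm which filesystem the install is actually on (the identical figures above suggest no compression is in play here):

root@OPNsense:~ # mount | grep ' on / '

A ZFS install shows something like zroot/ROOT/default on / (zfs, ...), while a UFS install shows a /dev/... device with (ufs, ...).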

December 13, 2023, 11:15:41 AM #16 Last Edit: December 13, 2023, 11:17:41 AM by doktornotor
Well, if you are using UFS (yuck again), I'd suggest taking a configuration backup and doing a reinstall with ZFS. That will cut the storage space used by the logs alone about tenfold, assuming the same retention is in place (the filesystem is lz4-compressed by default, see the output I posted above). Plus, it does not suffer from unsolvable filesystem corruption issues.
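
The configuration backup can be taken via the GUI (System > Configuration > Backups), or the config file can simply be copied off the box before the reinstall, since the whole configuration lives in /conf/config.xml. Something along these lines (user@backuphost is just a placeholder for wherever you keep backups):

# scp /conf/config.xml user@backuphost:opnsense-config-backup.xml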

@m4rtin did you resolve this?

I'm having a similar issue, see https://forum.opnsense.org/index.php?topic=37633.0

I have since tried limiting log retention to 5 days and turned off local capture of Netflow.
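
To see what is actually eating the space, a per-directory breakdown of /var/log with apparent sizes (same du/sort trick as above) is handy:

# du -A -d 1 -h /var/log | sort -h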


Yes, I was able to solve it by reinstalling OPNsense with ZFS.

I think this was caused by the Sensei/Zenarmor plugin. Some months ago I had trouble with its database, which kept stopping again and again. I then did not uninstall Sensei correctly, reinstalled it later, and used MongoDB instead of Elasticsearch. Maybe that messed up the whole system.

Now I use ZFS, with Elasticsearch as the database in Sensei. That works so far.