Hi,
my OPNsense has been running out of space since about September. I don't know exactly what I changed, but since then the free space has been declining by about 16 GB every two weeks (see screenshot).
I once removed some log files, but that didn't fully solve the problem.
du -sh
doesn't seem to show all files (see screenshots).
Do you have any idea?
https://forum.opnsense.org/index.php?topic=34791.msg168539#msg168539
Quote from: Patrick M. Hausen on November 22, 2023, 07:08:56 PM
https://forum.opnsense.org/index.php?topic=34791.msg168539#msg168539
Why that set of options? I would think this would be easier:
du -h / | sort -rh | head
Or you could just skip the -r and head completely and look at the bottom.
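That is, for example:
du -h / | sort -h
and the largest entries end up at the bottom of the output.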
-s sum up the directories
-k report in kbytes
-x don't cross file system mount points
If you use -h, sort -n will sort 1G before 2M before 3k ...
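For the record, the two approaches can be combined; a rough sketch assuming FreeBSD's du and sort (the -d 1 depth limit is an addition here, not part of either variant above):
du -hxd 1 / | sort -rh | head
This stays on the root filesystem (-x), sums one directory level deep (-d 1) and still sorts the human-readable sizes correctly (sort -rh).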
Thanks for the hint! I found an unused MongoDB and some log files, about 20 GB in total. Now the web interface still shows 60 GB used while the shell shows 20 GB, but I can live with that as long as it doesn't grow further.
Don't forget to check System > Settings > Logging. The tooltips for the fields are useful. The most important setting, in my opinion, is the number of days to preserve, as it ensures log file rotation keeps storage at a reasonable level, depending on how much history you need or want.
Quote from: Patrick M. Hausen on November 22, 2023, 09:45:27 PM
-s sum up the directories
-k report in kbytes
-x don't cross file system mount points
If you use -h, sort -n will sort 1G before 2M before 3k ...
Not if you add -h to the sort. :)
I honestly did not know sort could do that. I have been using my variant for over 30 years now and did not see a reason to change it. :)
Quote from: Patrick M. Hausen on November 26, 2023, 06:35:03 PM
I honestly did not know sort could do that. I have been using my variant for over 30 years now and did not see a reason to change it. :)
;D
I have now changed the preserve days to 7, but the dashboard still shows 83 GB in use.
The command, however, only reports 29 GB in /:
root@OPNsense:/var/log # du -ah / | sort -rh | head -n 20
29G /
25G /var
23G /var/log
13G /var/log/flowd.log
9.3G /var/log/filter
4.2G /usr
2.0G /usr/swap0
1.7G /usr/local
1.1G /var/netflow
666M /var/log/suricata
607M /usr/local/lib
547M /var/log/filter/filter_20231129.log
540M /var/log/filter/filter_20231127.log
534M /var/log/filter/filter_20231206.log
532M /var/log/filter/filter_20231128.log
531M /var/log/filter/filter_20231205.log
529M /var/log/filter/filter_20231130.log
522M /var/log/filter/filter_20231207.log
512M /var/log/filter/filter_20231204.log
509M /var/log/filter/filter_20231123.log
Do you know where the other 54 GB might be hiding? :D
Try this as root/admin user:
find / -type f -exec du -h {} + | sort -h
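If other filesystems are mounted below /, a variant along these lines (assuming FreeBSD's find, whose -x option keeps it from crossing mount points) limits the scan to the root filesystem and shows only the largest files:
find -x / -type f -exec du -h {} + | sort -h | tail -n 30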
That's basically the same:
40M /usr/local/etc/suricata/rules/rules.sqlite
42M /usr/local/datastore/mongodb/mongod.log
43M /usr/local/lib/python3.9/site-packages/duckdb.cpython-39.so
43M /var/log/suricata/eve.json
43M /var/unbound/usr/local/lib/python3.9/site-packages/duckdb.cpython-39.so
47M /usr/local/bin/mongod
55M /usr/bin/ld.lld
56M /usr/local/zenarmor/bin/ipdrstreamer
58M /var/netflow/interface_000030.sqlite
64M /usr/local/zenarmor/db/GeoIP/GeoLite2-City.mmdb
79M /var/netflow/dst_port_003600.sqlite
81M /usr/bin/c++
83M /usr/bin/lldb
92M /var/log/suricata/eve.json.2
100M /usr/local/datastore/mongodb/journal/WiredTigerLog.0000000001
100M /usr/local/datastore/mongodb/journal/WiredTigerPreplog.0000000001
100M /usr/local/datastore/mongodb/journal/WiredTigerPreplog.0000000002
102M /var/log/suricata/eve.json.3
103M /var/log/suricata/eve.json.1
108M /var/log/suricata/eve.json.0
112M /var/netflow/dst_port_086400.sqlite
130M /var/netflow/dst_port_000300.sqlite
164M /var/log/filter/filter_20231213.log
264M /var/netflow/src_addr_086400.sqlite
417M /var/log/filter/filter_20231210.log
418M /var/log/filter/filter_20231209.log
447M /var/netflow/src_addr_details_086400.sqlite
479M /var/log/filter/filter_20231208.log
493M /var/log/filter/filter_20231211.log
498M /var/log/filter/filter_20231212.log
522M /var/log/filter/filter_20231207.log
2.0G /usr/swap0
13G /var/log/flowd.log
At the risk of stating the obvious, did you use a reliable method to check the disk space usage in the first place? I cannot even make sense of where the graph in the original post comes from.
# zpool list
# df -h
Some more notes:
- Those netflow DBs and logs can eat entire disk space easily. Get some decent storage before enabling it. If unable, disable and reset netflow data.
- You seem to be running the (absolutely horrible) MongoDB thing on your firewall. For what? Yuck.
- Collecting half a gig of firewall logs a day - what's the log retention set to?
Finally: have you ever rebooted the box after deleting those mongodb and whatnot files you mentioned earlier?
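If you'd rather check without rebooting first, something along these lines should list files that were deleted but are still held open by a process (lsof is not in the FreeBSD base system, so this assumes it has been installed from packages):
# pkg install lsof
# lsof +L1
Space used by such files is only released once the process holding them open exits or is restarted.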
I mean, with ZFS in place with compression enabled, we are not even getting meaningful figures here. Consider:
# man du
-A Display the apparent size instead of the disk usage. This can be
helpful when operating on compressed volumes or sparse files.
# find /var/log/filter -type f -exec du -Ah {} + | sort -h
9.2M /var/log/filter/filter_20231213.log
17M /var/log/filter/filter_20231210.log
23M /var/log/filter/filter_20231204.log
24M /var/log/filter/filter_20231211.log
30M /var/log/filter/filter_20231212.log
58M /var/log/filter/filter_20231206.log
59M /var/log/filter/filter_20231209.log
75M /var/log/filter/filter_20231207.log
92M /var/log/filter/filter_20231208.log
95M /var/log/filter/filter_20231205.log
vs.
# find /var/log/filter -type f -exec du -h {} + | sort -h
1.4M /var/log/filter/filter_20231213.log
1.9M /var/log/filter/filter_20231210.log
3.1M /var/log/filter/filter_20231204.log
3.1M /var/log/filter/filter_20231211.log
4.2M /var/log/filter/filter_20231212.log
8.1M /var/log/filter/filter_20231209.log
8.2M /var/log/filter/filter_20231206.log
11M /var/log/filter/filter_20231207.log
13M /var/log/filter/filter_20231205.log
13M /var/log/filter/filter_20231208.log
So, e.g., those firewall log files you listed are actually not half a gig, but ~5 GB per day. :o
547M /var/log/filter/filter_20231129.log
540M /var/log/filter/filter_20231127.log
534M /var/log/filter/filter_20231206.log
532M /var/log/filter/filter_20231128.log
531M /var/log/filter/filter_20231205.log
529M /var/log/filter/filter_20231130.log
522M /var/log/filter/filter_20231207.log
512M /var/log/filter/filter_20231204.log
509M /var/log/filter/filter_20231123.log
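As a side note, ZFS can also report the compression effect directly, without per-file arithmetic; a sketch assuming a default zroot dataset layout (the dataset name may differ on your installation):
# zfs get compression,compressratio zroot/var/log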
Quote from: doktornotor on December 13, 2023, 10:23:33 AM
At the risk of stating the obvious, did you use a reliable method to check the disk space usage in the first place? I cannot even make sense of where the graph in the original post comes from.
# zpool list
# df -h
Some more notes:
- Those netflow DBs and logs can eat entire disk space easily. Get some decent storage before enabling it. If unable, disable and reset netflow data.
- You seem to be running the (absolutely horrible) MongoDB thing on your firewall. For what? Yuck.
- Collecting half a gig of firewall logs a day - what's the log retention set to?
Finally: have you ever rebooted the box after deleting those mongodb and whatnot files you mentioned earlier?
Hi!
The graph is from Zabbix monitoring. I installed the plugin in OPNsense so I can monitor the disk space. I attached the used-space graph.
root@OPNsense:~ # zpool list
no pools available
root@OPNsense:~ # df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/gpt/rootfs 115G 80G 26G 76% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/gpt/efifs 256M 1.7M 254M 1% /boot/efi
devfs 1.0K 1.0K 0B 100% /var/dhcpd/dev
/dev/md43 48M 24K 44M 0% /usr/local/zenarmor/output/active/temp
devfs 1.0K 1.0K 0B 100% /var/unbound/dev
/usr/local/lib/python3.9 115G 80G 26G 76% /var/unbound/usr/local/lib/python3.9
The MongoDB comes from the Sensei plugin. In the meantime I have deleted Sensei and reinstalled it. I keep its data for 2 days.
Edit: retention days are set to 7.
Quote from: doktornotor on December 13, 2023, 10:55:40 AM
I mean, with ZFS in place with compression enabled, we are not even getting meaningful figures here. Consider:
# find /var/log/filter -type f -exec du -Ah {} + | sort -h
vs.
# find /var/log/filter -type f -exec du -h {} + | sort -h
So, e.g., those firewall log files you listed are actually not half a gig, but ~5 GB per day. :o
On my OPNsense both commands report the same figures:
root@OPNsense:~ # find /var/log/filter -type f -exec du -Ah {} + | sort -h
211M /var/log/filter/filter_20231213.log
417M /var/log/filter/filter_20231210.log
418M /var/log/filter/filter_20231209.log
479M /var/log/filter/filter_20231208.log
492M /var/log/filter/filter_20231211.log
498M /var/log/filter/filter_20231212.log
522M /var/log/filter/filter_20231207.log
root@OPNsense:~ # find /var/log/filter -type f -exec du -h {} + | sort -h
211M /var/log/filter/filter_20231213.log
417M /var/log/filter/filter_20231210.log
418M /var/log/filter/filter_20231209.log
479M /var/log/filter/filter_20231208.log
493M /var/log/filter/filter_20231211.log
498M /var/log/filter/filter_20231212.log
522M /var/log/filter/filter_20231207.log
Well, if you are using UFS (yuck again), I'd suggest taking a configuration backup and doing a reinstall with ZFS. That will cut the storage space used by logs alone roughly tenfold, assuming the same retention is in place (the filesystem is lz4-compressed by default; see the output I posted above). Plus, it does not suffer from unsolvable filesystem corruption issues.
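If you want to double-check which filesystem the box is currently on before going that route, the mount output shows the type in parentheses (df -T does the same, if your df supports it):
# mount | grep ' on / '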
@m4rtin, did you resolve this?
I'm having a similar issue, see https://forum.opnsense.org/index.php?topic=37633.0
I have since tried limiting logs to 5 days and turned off local capture of netflow.
Yes, I was able to solve it by reinstalling OPNsense with ZFS.
I think it was caused by the Sensei/Zenarmor plugin. Some months ago I had trouble with its database, which kept stopping. I then did not uninstall Sensei correctly, reinstalled it later, and used MongoDB instead of Elasticsearch. Maybe that messed up the whole system.
Now I use ZFS and Elasticsearch as the database in Sensei. That has worked so far.