du -sh
https://forum.opnsense.org/index.php?topic=34791.msg168539#msg168539
du -h / | sort -rh | head
-s    sum up the directories
-k    report in kbytes
-x    don't cross file system mount points

If you use -h, sort -n will sort 1G before 2M before 3k ...
I honestly did not know sort could do that. I have been using my variant for over 30 years now and did not see a reason to change it.
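To illustrate the point about sort, here is a quick demo with made-up size strings (not output from the box in this thread):

```shell
# sort -n compares only the leading number and ignores the unit suffix,
# so 1G sorts before 2M; sort -h (human-numeric) knows that k < M < G.
printf '1G\n2M\n3k\n' | sort -n    # 1G, 2M, 3k -- wrong order
printf '1G\n2M\n3k\n' | sort -h    # 3k, 2M, 1G -- correct order
```

This is why `du -h | sort -rh` works while `du -h | sort -rn` silently mis-orders the results.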
root@OPNsense:/var/log # du -ah / | sort -rh | head -n 20
 29G    /
 25G    /var
 23G    /var/log
 13G    /var/log/flowd.log
9.3G    /var/log/filter
4.2G    /usr
2.0G    /usr/swap0
1.7G    /usr/local
1.1G    /var/netflow
666M    /var/log/suricata
607M    /usr/local/lib
547M    /var/log/filter/filter_20231129.log
540M    /var/log/filter/filter_20231127.log
534M    /var/log/filter/filter_20231206.log
532M    /var/log/filter/filter_20231128.log
531M    /var/log/filter/filter_20231205.log
529M    /var/log/filter/filter_20231130.log
522M    /var/log/filter/filter_20231207.log
512M    /var/log/filter/filter_20231204.log
509M    /var/log/filter/filter_20231123.log
find / -type f -exec du -h {} + | sort -h
 40M    /usr/local/etc/suricata/rules/rules.sqlite
 42M    /usr/local/datastore/mongodb/mongod.log
 43M    /usr/local/lib/python3.9/site-packages/duckdb.cpython-39.so
 43M    /var/log/suricata/eve.json
 43M    /var/unbound/usr/local/lib/python3.9/site-packages/duckdb.cpython-39.so
 47M    /usr/local/bin/mongod
 55M    /usr/bin/ld.lld
 56M    /usr/local/zenarmor/bin/ipdrstreamer
 58M    /var/netflow/interface_000030.sqlite
 64M    /usr/local/zenarmor/db/GeoIP/GeoLite2-City.mmdb
 79M    /var/netflow/dst_port_003600.sqlite
 81M    /usr/bin/c++
 83M    /usr/bin/lldb
 92M    /var/log/suricata/eve.json.2
100M    /usr/local/datastore/mongodb/journal/WiredTigerLog.0000000001
100M    /usr/local/datastore/mongodb/journal/WiredTigerPreplog.0000000001
100M    /usr/local/datastore/mongodb/journal/WiredTigerPreplog.0000000002
102M    /var/log/suricata/eve.json.3
103M    /var/log/suricata/eve.json.1
108M    /var/log/suricata/eve.json.0
112M    /var/netflow/dst_port_086400.sqlite
130M    /var/netflow/dst_port_000300.sqlite
164M    /var/log/filter/filter_20231213.log
264M    /var/netflow/src_addr_086400.sqlite
417M    /var/log/filter/filter_20231210.log
418M    /var/log/filter/filter_20231209.log
447M    /var/netflow/src_addr_details_086400.sqlite
479M    /var/log/filter/filter_20231208.log
493M    /var/log/filter/filter_20231211.log
498M    /var/log/filter/filter_20231212.log
522M    /var/log/filter/filter_20231207.log
2.0G    /usr/swap0
 13G    /var/log/flowd.log
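A side note on the find invocation above: the `{} +` form appends many paths to a single du call (similar to xargs), which is far cheaper than spawning one du per file with `{} \;`. A small demo with throwaway files (paths are made up for illustration):

```shell
# Compare how "-exec ... {} +" and "-exec ... {} \;" invoke the command.
mkdir -p /tmp/demo_find && touch /tmp/demo_find/a /tmp/demo_find/b
find /tmp/demo_find -type f -exec echo {} +    # one echo, both paths on one line
find /tmp/demo_find -type f -exec echo {} \;   # one echo per path, one line each
rm -r /tmp/demo_find
```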
# zpool list
# df -h
-A Display the apparent size instead of the disk usage. This can be helpful when operating on compressed volumes or sparse files.
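A quick way to see the difference is a sparse file (a sketch with a made-up file; `-A` is the FreeBSD spelling, GNU du calls it `--apparent-size`):

```shell
# A 100 MB sparse file has almost no blocks allocated, so plain du reports
# near zero while the apparent-size variant reports the full logical size.
truncate -s 100M /tmp/sparse_demo
du -h /tmp/sparse_demo                         # disk usage: near zero
du -Ah /tmp/sparse_demo 2>/dev/null \
  || du -h --apparent-size /tmp/sparse_demo    # apparent size: 100M
rm /tmp/sparse_demo
```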
# find /var/log/filter -type f -exec du -Ah {} + | sort -h
9.2M    /var/log/filter/filter_20231213.log
 17M    /var/log/filter/filter_20231210.log
 23M    /var/log/filter/filter_20231204.log
 24M    /var/log/filter/filter_20231211.log
 30M    /var/log/filter/filter_20231212.log
 58M    /var/log/filter/filter_20231206.log
 59M    /var/log/filter/filter_20231209.log
 75M    /var/log/filter/filter_20231207.log
 92M    /var/log/filter/filter_20231208.log
 95M    /var/log/filter/filter_20231205.log
# find /var/log/filter -type f -exec du -h {} + | sort -h
1.4M    /var/log/filter/filter_20231213.log
1.9M    /var/log/filter/filter_20231210.log
3.1M    /var/log/filter/filter_20231204.log
3.1M    /var/log/filter/filter_20231211.log
4.2M    /var/log/filter/filter_20231212.log
8.1M    /var/log/filter/filter_20231209.log
8.2M    /var/log/filter/filter_20231206.log
 11M    /var/log/filter/filter_20231207.log
 13M    /var/log/filter/filter_20231205.log
 13M    /var/log/filter/filter_20231208.log
547M    /var/log/filter/filter_20231129.log
540M    /var/log/filter/filter_20231127.log
534M    /var/log/filter/filter_20231206.log
532M    /var/log/filter/filter_20231128.log
531M    /var/log/filter/filter_20231205.log
529M    /var/log/filter/filter_20231130.log
522M    /var/log/filter/filter_20231207.log
512M    /var/log/filter/filter_20231204.log
509M    /var/log/filter/filter_20231123.log
At the risk of stating the obvious, did you use some reliable method to check the disk space usage first? I cannot even make sense of where the graph in the original post comes from.

Code: [Select]
# zpool list
# df -h

Some more notes:
- Those netflow DBs and logs can eat the entire disk space easily. Get some decent storage before enabling it. If unable, disable and reset the netflow data.
- You seem to be running the (absolutely horrible) MongoDB thing on your firewall. For what? Yuck.
- Collecting half a gig of firewall logs a day - what is the log retention set to?

Finally: have you ever rebooted the box after deleting those mongodb and whatnot files you mentioned earlier?
root@OPNsense:~ # zpool list
no pools available
root@OPNsense:~ # df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs             115G     80G     26G    76%    /
devfs                       1.0K    1.0K      0B   100%    /dev
/dev/gpt/efifs              256M    1.7M    254M     1%    /boot/efi
devfs                       1.0K    1.0K      0B   100%    /var/dhcpd/dev
/dev/md43                    48M     24K     44M     0%    /usr/local/zenarmor/output/active/temp
devfs                       1.0K    1.0K      0B   100%    /var/unbound/dev
/usr/local/lib/python3.9    115G     80G     26G    76%    /var/unbound/usr/local/lib/python3.9
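On the reboot question: space from a deleted file is not returned while some process still holds it open, which is a common reason df and du disagree after cleanup. A minimal demo of the effect (made-up file and a tail process standing in for a daemon):

```shell
# Write 50 MB, keep the file open via tail -f, then delete it: the directory
# entry disappears, but the blocks stay allocated until the last descriptor
# closes (or, on a firewall, until the owning service is restarted/rebooted).
tmp=/tmp/held_open.log
dd if=/dev/zero of="$tmp" bs=1M count=50 2>/dev/null
tail -f "$tmp" > /dev/null &     # stand-in for a daemon holding the log open
holder=$!
rm "$tmp"                        # file is gone from ls, df is unchanged
kill "$holder"                   # now the 50 MB are actually freed
```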