24.1 Production Series / 62GB of query.csv in /var/cache/unbound.duckdb
« Last post by anicoletti on Today at 10:16:01 pm »

We received notification via Zabbix that one of our OPNsense firewalls was down to 10% free disk space. We attempted to connect to it, but the WebGUI was failing to load. We were able to access it via SSH, and upon running df we noticed the filesystem was completely full. I manually deleted a few log files and restarted the WebGUI to get logged in. We had this issue about two months ago at this location, but back then we actually rebuilt the firewall completely on new hardware, just restoring the original configuration.
After reviewing this issue further today, I went ahead and purged the rest of the logs, including the RRD and Netflow data, but there was still 62GB of unaccounted-for space in use.
I ended up hopping back into the shell and running the following command:
Code:
du -h / | grep '[0-9\.]\+G'
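For anyone retracing this, a slightly gentler way to narrow it down (a sketch assuming the stock FreeBSD userland on OPNsense; starting at /var is just where I would look first) is to walk one directory level at a time and sort the totals:
Code:
# Summarize first-level directories under /var, staying on one filesystem,
# and sort by human-readable size so the largest entry lands last.
du -xh -d 1 /var | sort -h
Repeating that with the biggest subdirectory as the new starting point drills down to /var/cache/unbound.duckdb without grepping the whole filesystem.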
The results showed that the 62G was under /var/cache/unbound.duckdb. Checking that folder, I found these files:
Code:
-rw-r--r-- 1 root unbound 2888 May 21 08:45 client.csv
-rw-r--r-- 1 unbound unbound 214 May 20 08:44 load.sql
-rw-r--r-- 1 unbound unbound 33267605145 May 20 08:44 query.csv
-rw-r--r-- 1 unbound unbound 1503 May 20 08:44 schema.sql
-rw-r--r-- 1 root unbound 33726476128 May 21 08:45 tmp_query.csv
Two query.csv files totalling 62GB seem a bit off to me. Any ideas on why these grew so large, and how to prevent this issue in the future?
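Until we add a dedicated Zabbix item for this path, we are thinking about a small watchdog so this can't silently fill the disk again. A minimal sketch, assuming the stock du/awk/mail tools are available; the 10GB threshold and the recipient address are placeholders, not anything OPNsense ships with:
Code:
#!/bin/sh
# Warn when the Unbound reporting cache grows past a threshold.
CACHE_DIR="/var/cache/unbound.duckdb"
THRESHOLD_KB=10485760   # 10 GB in 1K blocks; pick whatever fits your disk

# du -sk prints the directory's total size in kilobytes.
used_kb=$(du -sk "$CACHE_DIR" | awk '{print $1}')

if [ "$used_kb" -gt "$THRESHOLD_KB" ]; then
    echo "$CACHE_DIR is using ${used_kb} KB" | \
        mail -s "unbound.duckdb cache growing" admin@example.com
fi
Run from cron, that at least turns the next occurrence into an email instead of a dead WebGUI. I would still like to understand why query.csv grew this large in the first place.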