Wanted to grow the root partition from 16GB to 32GB, so I did:
- Shut down OPNsense
- In Proxmox: Hard Disk -> Resize, +16G (CLI equivalent sketched below the list)
- Reboot OPNsense
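For reference, the same grow can be done from the Proxmox host shell; the VM id (100) and disk name (scsi0) below are just placeholders for whatever the OPNsense VM actually uses:

Code:
# on the Proxmox host: add 16G to the VM's disk (hypothetical VM id / disk name)
qm resize 100 scsi0 +16G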
Output of gpart shows:
Code:
root@opnsense:~ # gpart show
=>      40  33554352  da0  GPT  (32G) [CORRUPT]
        40      1024    1  freebsd-boot  (512K)
      1064  33553328    2  freebsd-ufs  (16G)
Usage:
Code:
root@opnsense:~ # df -h
Filesystem                   Size    Used   Avail Capacity  Mounted on
/dev/da0p2                    15G     14G    705M    95%    /
devfs                        1.0K      0B    1.0K     0%    /dev
tmpfs                        611M    6.3M    604M     1%    /var/log
tmpfs                        1.8G    4.4M    1.8G     0%    /tmp
tmpfs                        1.8G    120K    1.8G     0%    /var/lib/php/tmp
devfs                        1.0K      0B    1.0K     0%    /var/dhcpd/dev
devfs                        1.0K      0B    1.0K     0%    /var/unbound/dev
/usr/local/lib/python3.11     15G     14G    705M    95%    /var/unbound/usr/local/lib/python3.11
/lib                          15G     14G    705M    95%    /var/unbound/lib
/dev/md43                    145M     72K    133M     0%    /usr/local/zenarmor/output/active/temp
tmpfs                        100M     12K    100M     0%    /usr/local/zenarmor/run/tracefs
Details:
Code:
root@opnsense:~ # du -hs /*
8.0K /COPYRIGHT
1.4M /bin
312M /boot
12M /conf
4.0K /dev
4.0K /entropy
2.1M /etc
4.0K /home
17M /lib
164K /libexec
4.0K /media
4.0K /mnt
4.0K /net
4.0K /proc
4.0K /rescue
76K /root
4.9M /sbin
0B /sys
39M /tmp
5.1G /usr
8.5G /var
root@opnsense:~ # du -hs /var/*
4.0K /var/account
12K /var/at
12K /var/audit
4.0K /var/authpf
20M /var/backups
47M /var/cache
8.0K /var/crash
16K /var/cron
7.8G /var/db
104K /var/dhcpd
4.0K /var/empty
60K /var/etc
4.0K /var/games
4.0K /var/heimdal
277K /var/lib
15M /var/log
4.0K /var/mail
4.0K /var/msgs
844K /var/netflow
4.0K /var/preserve
164K /var/run
4.0K /var/rwho
148K /var/spool
12K /var/tmp
696M /var/unbound
4.0K /var/yp
Tried this and rebooted, but it did not do anything:
Code:
touch /.probe.for.growfs.nano
fsck gave lots of weird errors:
Code:
** /dev/da0p2 (NO WRITE)
** Last Mounted on /mnt
** Root file system
** Phase 1 - Check Blocks and Sizes
INCORRECT BLOCK COUNT I=160265 (31872 should be 28672)
CORRECT? no
INCORRECT BLOCK COUNT I=1602731 (8 should be 0)
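Note the (NO WRITE) and the "no" answers: that fsck ran without write access, so it only reported the bad block counts and repaired nothing. To actually let it fix them, it would have to be run writable against the unmounted (or read-only mounted) filesystem, e.g. from single-user mode:

Code:
# single-user mode; answer yes to all repair prompts
fsck -y /dev/da0p2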
Then tried:
Code:
root@opnsense:~ # gpart resize -i 2 da0
gpart: table 'da0' is corrupt: Operation not permitted
- Booted into single-user mode, tried everything again; nothing helped.
- Restored a backup, tried again, same problem.
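For what it's worth, the resize is refused because gpart won't modify a table it has flagged as corrupt, and after enlarging the virtual disk the backup GPT header is no longer at the last sector, which is what the [CORRUPT] above means. A manual fix (just a sketch, assuming the same da0/da0p2 layout) would be to recover the table first and then grow the partition and the filesystem:

Code:
gpart recover da0       # rewrite the backup GPT header at the new end of the disk
gpart resize -i 2 da0   # now the resize is no longer refused
growfs /dev/da0p2       # grow the UFS filesystem into the enlarged partition

Which appears to be what the growfs rc script in the next step does in one go.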
Found this:
Code:
root@opnsense:~ # service growfs onestart
Growing root partition to fill device
da0 recovered
da0p2 resized
And now solved:
Code:
root@opnsense:~ # gpart show
=>      40  67108784  da0  GPT  (32G)
        40      1024    1  freebsd-boot  (512K)
      1064  67107760    2  freebsd-ufs  (32G)
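Side note: on stock FreeBSD the same one-shot grow can apparently also be armed to run automatically at the next boot via the rc knob for that script (no idea whether OPNsense wires this up differently):

Code:
sysrc growfs_enable="YES"   # let /etc/rc.d/growfs run at boot instead of 'onestart'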
But WTF!?
Code:
root@opnsense:~ # df -h
Filesystem                   Size    Used   Avail Capacity  Mounted on
/dev/da0p2                    31G    5.8G     23G    20%    /
devfs                        1.0K      0B    1.0K     0%    /dev
tmpfs                        611M    7.9M    603M     1%    /var/log
tmpfs                        1.8G    584K    1.8G     0%    /tmp
tmpfs                        1.8G    120K    1.8G     0%    /var/lib/php/tmp
devfs                        1.0K      0B    1.0K     0%    /var/dhcpd/dev
devfs                        1.0K      0B    1.0K     0%    /var/unbound/dev
/usr/local/lib/python3.11     31G    5.8G     23G    20%    /var/unbound/usr/local/lib/python3.11
/lib                          31G    5.8G     23G    20%    /var/unbound/lib
/dev/md43                    145M     12K    133M     0%    /usr/local/zenarmor/output/active/temp
tmpfs                        100M     32K    100M     0%    /usr/local/zenarmor/run/tracefs
Now only 5.8G is used? Before the grow it was 14G ...
Why was /var/db so big?
Code:
root@opnsense:~ # du -hs /var/*
4.0K /var/account
12K /var/at
12K /var/audit
4.0K /var/authpf
20M /var/backups
156M /var/cache
8.0K /var/crash
16K /var/cron
44M /var/db
100K /var/dhcpd
4.0K /var/empty
64K /var/etc
4.0K /var/games
4.0K /var/heimdal
133K /var/lib
849K /var/log
4.0K /var/mail
4.0K /var/msgs
844K /var/netflow
4.0K /var/preserve
148K /var/run
4.0K /var/rwho
148K /var/spool
12K /var/tmp
698M /var/unbound
4.0K /var/yp
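If it ever balloons again, drilling one level into /var/db before rebooting should show which subdirectory is eating the space (-x keeps du from descending into the tmpfs/devfs mounts):

Code:
du -hxd 1 /var/db | sort -h   # per-subdirectory usage, largest last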
"