This old topic helped me out: https://forum.opnsense.org/index.php?topic=19250
I thought I would create a new topic here for me to refer to in future and to help anyone else out in the same position.
I have OPNsense running on Proxmox. The setup is rock solid, no complaints.
I found that Zenarmor was consuming more disk space once I got a faster internet connection.
This prompted me to increase the disk space on my OPNsense VM.
Here is how I did it.
Find the disk of your VM in Proxmox.
[Screenshot: locating the VM's disk in the Proxmox UI]
Step 1: resize the virtual disk on Proxmox/Debian (the VM host)
ssh root@proxmox
root@proxmox:~# qm resize 100 scsi1 +120G
Size of logical volume pve/vm-100-disk-1 changed from 30.00 GiB (7680 extents) to 150.00 GiB (38400 extents).
Logical volume pve/vm-100-disk-1 successfully resized.
# originally wanted 150G but later decided I wanted 200G
root@proxmox:~# qm resize 100 scsi1 +50G
Size of logical volume pve/vm-100-disk-1 changed from 150.00 GiB (38400 extents) to 200.00 GiB (51200 extents).
Logical volume pve/vm-100-disk-1 successfully resized.
# checking size of the VM disk after resize
root@proxmox:~# fdisk /dev/mapper/pve-vm--100--disk--1 -l
Disk /dev/mapper/pve-vm--100--disk--1: 200 GiB, 214748364800 bytes, 419430400 sectors
Device Start End Sectors Size Type
/dev/mapper/pve-vm--100--disk--1-part1 40 532519 532480 260M EFI System
/dev/mapper/pve-vm--100--disk--1-part2 532520 533543 1024 512K FreeBSD boot
/dev/mapper/pve-vm--100--disk--1-part3 533544 419430359 418896816 199.7G FreeBSD UFS
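If you want to cross-check on the LVM layer instead of fdisk, something like this should work too (assuming the default pve volume group and the disk name shown above):
root@proxmox:~ # lvs pve/vm-100-disk-1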
Step 2: resize the partition and then grow the file system on OPNsense/FreeBSD (the VM guest)
ssh root@OPNsense
root@OPNsense:~ # gpart show
=> 40 419430320 da0 GPT (200G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 62380976 3 freebsd-ufs (30G)
62914520 356515840 - free - (170G)
root@OPNsense:~ # gpart resize -i 3 da0
da0p3 resized
root@OPNsense:~ # gpart show
=> 40 419430320 da0 GPT (200G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 418896816 3 freebsd-ufs (200G)
root@OPNsense:~ # df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/gpt/rootfs 29G 12G 15G 44% /
root@OPNsense:~ # growfs /
Device is mounted read-write; resizing will result in temporary write suspension for /.
It's strongly recommended to make a backup before growing the file system.
OK to grow filesystem on /dev/gpt/rootfs, mounted on /, from 30GB to 200GB? [yes/no] yes
root@OPNsense:~ # df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/gpt/rootfs 193G 12G 166G 7% /
You should change the title to include "with a UFS install" - I think you need different steps (probably no steps at all) inside the VM for ZFS installs.
Quote from: meyergru on January 19, 2026, 10:35:55 AM
I think you need different steps (probably no steps at all) inside the VM for ZFS installs.
The GPT partition resizing is exactly the same. After that I am not quite sure off the top of my head whether the vdev is expanded automatically today or whether you still need "zpool online -e <pool> <partition>".
I just wonder why @del13r's instructions do not show gpart complaining about the missing backup partition table and the necessity to run "gpart recover" before "gpart resize".
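For what it's worth, on a default OPNsense ZFS install the in-VM part would presumably look roughly like this (the pool name "zroot" and partition index 4 are assumptions - check gpart show and zpool status on your system first):
root@OPNsense:~ # gpart recover da0             # only needed if the GPT is flagged [CORRUPT]
root@OPNsense:~ # gpart resize -i 4 da0         # grow the freebsd-zfs partition
root@OPNsense:~ # zpool online -e zroot da0p4   # expand the vdev into the new space
Alternatively, "zpool set autoexpand=on zroot" beforehand should let the pool grow on its own once the device is expanded.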
OPNsense does the partition and file system adjustments automatically, for both UFS and ZFS. You trigger this by creating a "magic file":
- touch /.probe.for.growfs
- Shut down OPNsense and expand the disk image (qemu-img resize / Resize-VHD / qm resize / ...)
- There is no step 3. When OPNsense boots, the rc script performs its magic.
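Put together on Proxmox, a minimal sketch of the whole procedure might look like this (VM ID 100 and disk scsi1 are taken from the example above; adjust for your setup):
root@OPNsense:~ # touch /.probe.for.growfs
root@OPNsense:~ # shutdown -p now
root@proxmox:~ # qm resize 100 scsi1 +50G
root@proxmox:~ # qm start 100
# on the next boot /etc/rc grows the partition and file system, then removes the marker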
Cheers
Maurice
Wow! 🤯
Quote from: meyergru on January 19, 2026, 10:35:55 AM
You should change the title to include "with a UFS install" - I think you need different steps (probably no steps at all) inside the VM for ZFS installs.
Thanks, I have updated the title to specify the UFS file system.
Quote from: Maurice on January 19, 2026, 06:36:45 PM
Shut down OPNsense
Thanks for the tip.
When I did my steps above, I did not have to shut down or restart OPNsense.
@del13r Good point. If uptime is critical, the manual approach might be worth it.
Cheers
Maurice
Quote from: Patrick M. Hausen on January 19, 2026, 10:57:36 AM
I am not quite sure off the top of my head whether the vdev is expanded automatically today or whether you still need "zpool online -e <pool> <partition>".
When you are talking about a regular ZFS NAS setup with, let's say, 5 HDDs configured as RAIDZ2, and you then replace all 5 of them with larger HDDs, ZFS has been able to expand the vdev automatically for some years now :)
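Whether this happens on its own depends on the pool's autoexpand property; a quick way to check and enable it, with "tank" as a placeholder pool name:
zpool get autoexpand tank
zpool set autoexpand=on tank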
Quote from: Maurice on January 19, 2026, 06:36:45 PM
OPNsense does the partition and file system adjustments automatically, for both UFS and ZFS. You trigger this by creating a "magic file":
- touch /.probe.for.growfs
- Shut down OPNsense and expand the disk image (qemu-img resize / Resize-VHD / qm resize / ...)
- There is no step 3. When OPNsense boots, the rc script performs its magic.
So the file you create with the touch command has the special name that triggers the expansion?
Quote from: nero355 on January 20, 2026, 12:16:53 AM
So the file you create with the touch command has the special name that triggers the expansion?
Correct. The rc script (https://github.com/opnsense/core/blob/master/src/etc/rc) checks whether this file exists. If it does, the partition and file system modifications are executed and the file is deleted, so this happens only once:
GROWFS_MARKER=/.probe.for.growfs
[...]
if [ -f ${GROWFS_MARKER} ]; then
    if [ -n "${ROOT_IS_UFS}" ]; then
        grow_partition ${ROOT_IS_UFS}
        growfs -y "/"
    elif [ -n "${ROOT_IS_ZFS}" ]; then
        zpool list -Hv ${ROOT_IS_ZFS} | while read NAME MORE; do
            if [ "${NAME}" != "${ROOT_IS_ZFS}" ]; then
                grow_partition ${NAME}
                zpool online -e ${ROOT_IS_ZFS} ${NAME}
            fi
        done
    fi
fi
[...]
rm -f ${GROWFS_MARKER}
/.probe.for.growfs exists on nano, vm and arm images so they fill all available disk space on first boot. But you can create this file any time on any OPNsense installation.
Cheers
Maurice
Just made it into the HOWTO (https://forum.opnsense.org/index.php?topic=44159.0), thanks @Maurice!
Quote from: Maurice on January 20, 2026, 12:28:03 AM
Correct. The rc script (https://github.com/opnsense/core/blob/master/src/etc/rc) checks whether this file exists. [...] But you can create this file any time on any OPNsense installation.
VERY COOL !!!
Thanks for the TIP! :)
Quote from: Patrick M. Hausen on January 19, 2026, 06:49:16 PM
Wow! 🤯
Hehe! It's been used for the Nano images in particular on first boot for a long time, but it's indeed reusable and works for VMs just as well.
Cheers,
Franco
Actually, it would also work for disk cloning on physical installs, which often becomes necessary with cheap SSDs or when replacing them with larger ones. IMHO it should probably be the default. I can only imagine exotic cases where it would not work as intended, like when somebody wants to leave room on the disk for another OS. There was a case just these days (https://forum.opnsense.org/index.php?topic=50440), but I wonder who needs OPNsense other than as a 24/7 appliance?
Not sure that is a good idea. We try to avoid disk manipulation if not strictly necessary. We've also never messed with /etc/fstab for that reason. Except for missing bootloader updates (which FreeBSD is also working towards), I think this is a good policy.
Cheers,
Franco
Quote from: meyergru on January 20, 2026, 12:18:00 PM
Actually, it would also work for disk cloning on physical installs, which often becomes necessary with cheap SSDs or when replacing them with larger ones.
IMHO it should probably be the default.
Considering that SSDs are very sensitive to poor partition alignment, I am not a big fan of cloning them like we did with HDDs in the past, to be honest :)
Quote
but I wonder who needs OPNsense other than as a 24/7 appliance?
Totally agree!
This was a timely nugget of information.
I'm trying to reproduce a multi-site wireguard site-to-site issue and am using VMs to mimic the environments. My VM template disk was too small, but with `touch /.probe.for.growfs` I was back up and running in minutes...
And also a big thanks to @Maurice for the aarch64 images!
Wanted to grow the root partition from 16GB to 32GB, so I did:
- Shut down OPNsense
- In Proxmox: Hard Disk -> Resize, +16G
- Reboot OPNsense
Output of gpart shows:
root@opnsense:~ # gpart show
=> 40 33554352 da0 GPT (32G) [CORRUPT]
40 1024 1 freebsd-boot (512K)
1064 33553328 2 freebsd-ufs (16G)
Usage:
root@opnsense:~ # df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/da0p2 15G 14G 705M 95% /
devfs 1.0K 0B 1.0K 0% /dev
tmpfs 611M 6.3M 604M 1% /var/log
tmpfs 1.8G 4.4M 1.8G 0% /tmp
tmpfs 1.8G 120K 1.8G 0% /var/lib/php/tmp
devfs 1.0K 0B 1.0K 0% /var/dhcpd/dev
devfs 1.0K 0B 1.0K 0% /var/unbound/dev
/usr/local/lib/python3.11 15G 14G 705M 95% /var/unbound/usr/local/lib/python3.11
/lib 15G 14G 705M 95% /var/unbound/lib
/dev/md43 145M 72K 133M 0% /usr/local/zenarmor/output/active/temp
tmpfs 100M 12K 100M 0% /usr/local/zenarmor/run/tracefs
Details:
root@opnsense:~ # du -hs /*
8.0K /COPYRIGHT
1.4M /bin
312M /boot
12M /conf
4.0K /dev
4.0K /entropy
2.1M /etc
4.0K /home
17M /lib
164K /libexec
4.0K /media
4.0K /mnt
4.0K /net
4.0K /proc
4.0K /rescue
76K /root
4.9M /sbin
0B /sys
39M /tmp
5.1G /usr
8.5G /var
root@opnsense:~ # du -hs /var/*
4.0K /var/account
12K /var/at
12K /var/audit
4.0K /var/authpf
20M /var/backups
47M /var/cache
8.0K /var/crash
16K /var/cron
7.8G /var/db
104K /var/dhcpd
4.0K /var/empty
60K /var/etc
4.0K /var/games
4.0K /var/heimdal
277K /var/lib
15M /var/log
4.0K /var/mail
4.0K /var/msgs
844K /var/netflow
4.0K /var/preserve
164K /var/run
4.0K /var/rwho
148K /var/spool
12K /var/tmp
696M /var/unbound
4.0K /var/yp
Tried this, rebooted, but it did not do anything:
touch /.probe.for.growfs.nano
fsck gave lots of weird errors:
** /dev/da0p2 (NO WRITE)
** Last Mounted on /mnt
** Root file system
** Phase 1 - Check Blocks and Sizes
INCORRECT BLOCK COUNT I=160265 (31872 should be 28672)
CORRECT? no
INCORRECT BLOCK COUNT I=1602731 (8 should be 0)
Tried:
root@opnsense:~ # gpart resize -i 2 da0
gpart: table 'da0' is corrupt: Operation not permitted
- Booted in single user mode, tried everything again, nothing helped.
- Restored backup, tried again, same problem.
Found this:
root@opnsense:~ # service growfs onestart
Growing root partition to fill device
da0 recovered
da0p2 resized
And now solved:
root@opnsense:~ # gpart show
=> 40 67108784 da0 GPT (32G)
40 1024 1 freebsd-boot (512K)
1064 67107760 2 freebsd-ufs (32G)
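For reference, the growfs rc service apparently did in one go what the manual steps earlier in this thread do; a rough manual equivalent (assuming da0 and partition index 2 as in the output above) would be:
root@opnsense:~ # gpart recover da0       # rewrite the backup GPT at the new end of the disk
root@opnsense:~ # gpart resize -i 2 da0   # grow the freebsd-ufs partition
root@opnsense:~ # growfs -y /             # grow the UFS file system into the new space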
But WTF!?
root@opnsense:~ # df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/da0p2 31G 5.8G 23G 20% /
devfs 1.0K 0B 1.0K 0% /dev
tmpfs 611M 7.9M 603M 1% /var/log
tmpfs 1.8G 584K 1.8G 0% /tmp
tmpfs 1.8G 120K 1.8G 0% /var/lib/php/tmp
devfs 1.0K 0B 1.0K 0% /var/dhcpd/dev
devfs 1.0K 0B 1.0K 0% /var/unbound/dev
/usr/local/lib/python3.11 31G 5.8G 23G 20% /var/unbound/usr/local/lib/python3.11
/lib 31G 5.8G 23G 20% /var/unbound/lib
/dev/md43 145M 12K 133M 0% /usr/local/zenarmor/output/active/temp
tmpfs 100M 32K 100M 0% /usr/local/zenarmor/run/tracefs
Now only 5.8G is used? Before the grow it was 14G ...
Why was /var/db so big?
root@opnsense:~ # du -hs /var/*
4.0K /var/account
12K /var/at
12K /var/audit
4.0K /var/authpf
20M /var/backups
156M /var/cache
8.0K /var/crash
16K /var/cron
44M /var/db
100K /var/dhcpd
4.0K /var/empty
64K /var/etc
4.0K /var/games
4.0K /var/heimdal
133K /var/lib
849K /var/log
4.0K /var/mail
4.0K /var/msgs
844K /var/netflow
4.0K /var/preserve
148K /var/run
4.0K /var/rwho
148K /var/spool
12K /var/tmp
698M /var/unbound
4.0K /var/yp
Quote from: teclab on January 30, 2026, 02:55:08 PM
Now only 5.8G is used? Before the grow it was 14G ...
Why was /var/db so big?
Maybe this was the reason: https://forum.opnsense.org/index.php?topic=50590.0
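A quick way to see what is (or was) eating the space in /var/db would be something like:
root@opnsense:~ # du -hd 1 /var/db | sort -h | tail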
Quote from: teclab on January 30, 2026, 02:55:08 PM
Tried this, rebooted, but it did not do anything:
touch /.probe.for.growfs.nano
Wrong file name.
Cheers
Maurice