How to increase a proxmox VM disk and have opnsense/freebsd use the extra space

Started by del13r, Today at 04:04:03 AM

This old topic helped me out: https://forum.opnsense.org/index.php?topic=19250
I thought I would create a new topic here for me to refer to in future and to help anyone else out in the same position.

I have OPNsense running on Proxmox. The setup is rock solid, no complaints.
I found that Zenarmor was consuming more disk space once I got a faster internet connection.
This prompted me to increase the disk space on my OPNsense VM.

Here is how I did it.

First, find the disk of your VM in the Proxmox web UI (VM -> Hardware -> Hard Disk) and note its bus/device ID (scsi1 in my case).
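If you prefer the CLI, the disk can also be identified from the Proxmox shell. This is a minimal sketch assuming VM ID 100, matching the resize commands below; adjust the ID to your VM:

```shell
# List the virtual disks attached to VM 100 from the Proxmox host shell.
# Filters the VM config down to disk entries on the common buses.
qm config 100 | grep -E '^(scsi|virtio|sata|ide)[0-9]+:'
```

The matching line (e.g. `scsi1: ...,size=30G`) tells you which device ID to pass to `qm resize`.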

Step 1: resize the virtual disk on Proxmox/Debian

ssh root@proxmox

root@proxmox:~# qm resize 100 scsi1 +120G
  Size of logical volume pve/vm-100-disk-1 changed from 30.00 GiB (7680 extents) to 150.00 GiB (38400 extents).
  Logical volume pve/vm-100-disk-1 successfully resized.

# I originally wanted 150G but later decided I wanted 200G, so I added another 50G

root@proxmox:~# qm resize 100 scsi1 +50G
  Size of logical volume pve/vm-100-disk-1 changed from 150.00 GiB (38400 extents) to 200.00 GiB (51200 extents).
  Logical volume pve/vm-100-disk-1 successfully resized.

# checking size of the VM disk after resize

root@proxmox:~# fdisk /dev/mapper/pve-vm--100--disk--1 -l
Disk /dev/mapper/pve-vm--100--disk--1: 200 GiB, 214748364800 bytes, 419430400 sectors

Device                                  Start       End   Sectors   Size Type
/dev/mapper/pve-vm--100--disk--1-part1     40    532519    532480   260M EFI System
/dev/mapper/pve-vm--100--disk--1-part2 532520    533543      1024   512K FreeBSD boot
/dev/mapper/pve-vm--100--disk--1-part3 533544 419430359 418896816 199.7G FreeBSD UFS

Step 2: resize the partition and then grow the file system on OPNsense/FreeBSD

ssh root@OPNsense

root@OPNsense:~ # gpart show
=>      40  419430320  da0  GPT  (200G)
        40    532480    1  efi  (260M)
    532520      1024    2  freebsd-boot  (512K)
    533544  62380976    3  freebsd-ufs  (30G)
  62914520  356515840      - free -  (170G)

root@OPNsense:~ # gpart resize -i 3 da0
da0p3 resized

root@OPNsense:~ # gpart show
=>       40  419430320  da0  GPT  (200G)
         40     532480    1  efi  (260M)
     532520       1024    2  freebsd-boot  (512K)
     533544  418896816    3  freebsd-ufs  (200G)

root@OPNsense:~ # df -h
Filesystem                  Size    Used  Avail Capacity  Mounted on
/dev/gpt/rootfs              29G    12G    15G    44%    /

root@OPNsense:~ # growfs /
Device is mounted read-write; resizing will result in temporary write suspension for /.
It's strongly recommended to make a backup before growing the file system.
OK to grow filesystem on /dev/gpt/rootfs, mounted on /, from 30GB to 200GB? [yes/no] yes

root@OPNsense:~ # df -h
Filesystem                  Size    Used  Avail Capacity  Mounted on
/dev/gpt/rootfs              193G    12G    166G    7%    /


You should change the title to include "with a UFS install"; I think you need different steps (probably no steps at all) inside the VM for ZFS installs.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on Today at 10:35:55 AM
I think you need different steps (probably no steps at all) inside the VM for ZFS installs.

The GPT partition resizing is exactly the same. After that, I am not quite sure off the top of my head whether the vdev is expanded automatically these days or whether you still need "zpool online -e <pool> <partition>".
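For the ZFS case, the expansion can be sketched like this. The pool name (zroot) and partition (da0p4) are assumptions for illustration; check yours with `zpool status`:

```shell
# Run as root on OPNsense. Assumed pool name zroot and partition da0p4;
# verify both with `zpool status` before running.
gpart resize -i 4 da0              # grow the GPT partition, same as for UFS
zpool online -e zroot /dev/da0p4   # expand the vdev into the new space
zpool list zroot                   # SIZE should now reflect the larger disk
```

If the pool has the `autoexpand=on` property set, the `zpool online -e` step may happen automatically after the partition grows.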

I just wonder why @del13r's instructions do not show gpart complaining about the missing backup partition table and the need to run "gpart recover" before "gpart resize".
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)