I was able to do this, although, not being an expert, my procedure was not perfect: some space would not grow and I can't really explain why.
What I did was the following.
I had a 10GB disk that I wanted to grow to 20GB. In KVM (or your hypervisor of choice) I increased the disk allocation to 20GB with OPNsense powered off.
After booting OPNsense I typed "gpart show".
This displayed two areas for me: an MBR scheme on the disk (vtbd0) and a BSD label inside the slice (vtbd0s1), both 10GB in size.
Under vtbd0 I had one partition labelled '1', 10GB in size, sitting right before the free space.
Similarly, under the BSD section I had one partition labelled '1' right before the free space.
I could also see about 10GB of free, unallocated space in each.
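For reference, the layout looked roughly like this (reconstructed from memory, not an exact paste, so the sector counts are illustrative only):

```
=>      63  41942977  vtbd0  MBR  (20G)
        63  20964762      1  freebsd  [active]  (10G)
  20964825  20978215         - free -  (10G)

=>       0  20964762  vtbd0s1  BSD  (10G)
         0  20964746        1  freebsd-ufs  (10G)
```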
First I typed
"gpart resize -i 1 -s 19G vtbd0"
When I tried 20G it said there was not enough space, and I did not know how to extend it to that last GB, so I just used 19G. In my case I was using a dynamically allocated disk, so the hypervisor only grows the backing file as space is actually used anyway. I imagine you could spend more time here reclaiming every last byte, but I did not need to.
This grew the MBR slice to 19G. The -i 1 selects the partition number (I wanted partition 1 to grow), and -s 19G is the size to grow it to.
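As an aside, going by the gpart(8) man page (I have not tested this myself): if you omit -s entirely, gpart resizes the partition to consume all free space after it, which should avoid having to guess at the last gigabyte:

```shell
# Untested alternative: with no -s, gpart(8) grows partition 1
# to fill all available free space following it in the MBR.
gpart resize -i 1 vtbd0
```

This is the same no-size form used on vtbd0s1 in the next step, just applied to the MBR level as well.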
I then typed
"gpart resize -i 1 vtbd0s1"
This grew the BSD section to fill the space I had allocated within the MBR (I think).
'gpart show' now showed both sections grown to 19GB.
I then typed 'df -h' to list my mount points and their current sizes (not grown yet).
I then wanted my chosen mount point to take up the new space.
"growfs /dev/ufs/OPNsense"
This grew the filesystem (growfs works on the filesystem, not the partition). I got some warnings but all seemed fine (I would obviously recommend having a backup first).
'df -h' now showed the filesystem was bigger, although for some reason it listed the size as only 18GB.
I could not fully explain this discrepancy, though I suspect filesystem metadata overhead and df's rounding account for most of the missing gigabyte; in any case everything seemed to be working.
I then rebooted into single-user mode, ran fsck /dev/ufs/OPNsense and let it finish. It didn't find any issues.
I rebooted OPNsense into normal mode and it seemed fine, though as I mentioned it reports an 18GB filesystem despite my growing the partition to 19GB and allocating 20GB.
So the procedure was possible and I don't have any issues; all is working normally and I did get extra space, but I somehow couldn't allocate everything I wanted, losing 1GB at the resize step and another 1GB at the growfs step. This is no doubt down to my not knowing the perfect settings to use during the procedure.
In terms of stability and functionality all seems totally fine: no errors, no problems, and it has rebooted multiple times without issue. As I mentioned, it's a dynamically expanding virtual disk, so I'm not really worried that the last 2GB couldn't be grown, and I can probably just grow it again if I need to, but the procedure would need a bit of tweaking before it's perfect. This is what I managed on my own, just reading the FreeBSD man pages.
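Putting it all together, the whole procedure condenses to something like this (a sketch of what I ran, not a polished script; device names will differ on your system, and you should have a backup before touching the partition table):

```shell
# Grow the MBR slice (partition 1). I used -s 19G because 20G was
# rejected; omitting -s would instead use all available free space.
gpart resize -i 1 -s 19G vtbd0

# Grow the BSD partition inside the slice to fill it.
gpart resize -i 1 vtbd0s1

# Grow the UFS filesystem into the enlarged partition.
growfs /dev/ufs/OPNsense

# Verify the new size, then reboot to single-user mode and
# check the filesystem:
df -h
fsck /dev/ufs/OPNsense
```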
Pete