Thin disk / ZFS / Unmap?

Started by ThyOnlySandman, April 21, 2026, 12:28:46 AM

Previous topic - Next topic
April 21, 2026, 12:28:46 AM Last Edit: April 21, 2026, 12:57:19 AM by ThyOnlySandman
Last week I set up a new ESXi VM template to move from UFS to ZFS and upgrade to 26.1.
I ran several ZFS unmap tests, inflating the thin VMDK with a large ISO and then deleting it.

zpool set autotrim=on zroot

The system didn't appear to auto-trim/unmap within ~30 minutes.
But running a manual trim - zpool trim zroot - worked: the VMDK shrank to very close to the actual used space. Satisfied with that, I proceeded to swap over to the new ZFS VM template.
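For anyone following along, the trim state can be verified directly with standard OpenZFS commands (nothing here is OPNsense-specific):

```shell
# Confirm autotrim is actually enabled on the pool
zpool get autotrim zroot

# Show per-vdev TRIM state and progress (-t adds trim info to the status output)
zpool status -t zroot

# Kick off a manual full-device trim
zpool trim zroot
```

These need a live pool and root privileges, so they're shown here only as a reference for checking whether autotrim ever ran.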

It's only been a weekend since deployment, and reviewing it today the VMDK is 47GB, yet OPNsense reports only 5GB used.
I've since run a manual trim again, but it only shrank the VMDK by ~1GB. There is no way this much data has ever been written, other than by some internal ZFS function.

Is ZFS scrub or compression interfering with thin-provisioning unmap / zeroed space?
I'm at a loss as to what FreeBSD + ZFS + VMFS thin disk is doing - any suggestions appreciated.
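One way to narrow this down is to compare what the pool itself thinks is allocated against what df reports - snapshots, reservations, and freed-but-untrimmed blocks can all account for the gap. A few standard queries (dataset names taken from the df output below):

```shell
# Space the pool has actually allocated vs. free
zpool list -o name,size,alloc,free zroot

# Per-dataset breakdown, including snapshot usage and compression ratio
zfs list -o name,used,usedbysnapshots,refer,compressratio -r zroot

# Any snapshots holding freed blocks alive?
zfs list -t snapshot -r zroot
```

If zpool's alloc roughly matches df's 5GB, the extra VMDK growth is blocks that were written and freed but never unmapped back to VMFS.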

Thin disk is 80GB

# df -h
Filesystem                    Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default            69G    5.0G     64G     7%    /
devfs                        1.0K      0B    1.0K     0%    /dev
/dev/gpt/efiboot0            260M    1.3M    259M     1%    /boot/efi
zroot/var/mail                64G    160K     64G     0%    /var/mail
zroot/var/log                 64G     31M     64G     0%    /var/log
zroot/usr/src                 64G     96K     64G     0%    /usr/src
zroot/tmp                     64G    206M     64G     0%    /tmp
zroot                         64G     96K     64G     0%    /zroot
zroot/usr/ports               64G     96K     64G     0%    /usr/ports
zroot/var/audit               64G     96K     64G     0%    /var/audit
zroot/home                    64G     96K     64G     0%    /home
zroot/var/crash               64G     96K     64G     0%    /var/crash
zroot/var/tmp                 64G    388K     64G     0%    /var/tmp
devfs                        1.0K      0B    1.0K     0%    /var/unbound/dev
/usr/local/lib/python3.13     69G    5.0G     64G     7%    /var/unbound/usr/local/lib/python3.13
/lib                          69G    5.0G     64G     7%    /var/unbound/lib
/dev/md43                    484M     48K    445M     0%    /usr/local/zenarmor/output/active/temp
fdescfs                      1.0K      0B    1.0K     0%    /dev/fd
procfs                       8.0K      0B    8.0K     0%    /proc
tmpfs                        100M     24K    100M     0%    /usr/local/zenarmor/run/tracefs

# zpool status
  pool: zroot
 state: ONLINE
config:

   NAME        STATE     READ WRITE CKSUM
   zroot       ONLINE       0     0     0
     da0p4     ONLINE       0     0     0

errors: No known data errors

Edit: Reviewing backup logs, the fresh VM template VMDK was 14GB prior to the few GB of Zenarmor / ntopng data accumulated over the weekend.
The VMDK has grown ~29GB beyond what it should be in around 3 days.

Anybody using ZFS + ESXi thin disks?

I ran:
dd if=/dev/zero of=/zerofile bs=1M count=62000
rm /zerofile
zpool trim zroot

This shrank the VMDK from ~47GB down to ~30GB - still larger than the ~20GB it should be.
I think ZFS compression is a problem for ESXi unmap: with compression on, the zerofile doesn't actually consume space on disk.
I may try temporarily turning off compression on zroot/ROOT/default and running the zerofile test again.
I'm still unclear why the VMDK size exploded in just a few days.
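For the record, the compression-off variant I have in mind looks like this - a sketch only, since lz4 compresses a stream of zeros to almost nothing, so with compression on the zerofile never actually touches most of the free blocks. The count of 55000 is an arbitrary value chosen to leave some headroom; filling a live firewall's root pool close to capacity is risky, so this is not something to run unattended:

```shell
# Temporarily disable compression so the zeros really land on disk
zfs set compression=off zroot/ROOT/default

# Fill most of the free space with zeros, then remove the file
dd if=/dev/zero of=/zerofile bs=1M count=55000
sync
rm /zerofile

# Restore the inherited compression setting
zfs inherit compression zroot/ROOT/default

# Trim so the freed blocks are unmapped back to the thin VMDK
zpool trim zroot
```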

I'm not familiar with the intricacies of ZFS in a VM, but regarding autotrim: it only trims recently freed disk space, locally so to speak. It doesn't do full-disk passes periodically; you need zpool trim for that, as you've already found out.
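Given that, one workaround is simply scheduling a periodic full trim. A sketch of a root crontab entry (on OPNsense you'd normally add an equivalent job through the GUI cron page instead of editing crontab by hand):

```shell
# Example root crontab line: full trim every Sunday at 03:00
0 3 * * 0  /sbin/zpool trim zroot
```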