Apologies if it's not appropriate to reply to this thread, but it's one of the only ones that matches what I've been searching for, and I figured that if I find a resolution it can remain here for anyone looking in the future. I woke up to OPNsense stuck on the first boot screen, and switching to the previous kernel did not help as it once did in the past.
I attempted to access the ZFS pool from an Ubuntu live CD, but received the same kernel error as OP. I then tried to mount the pool from a FreeBSD live CD, but was unable to mount zroot/ROOT or zroot/ROOT/default, which Google searches suggested contain the config file. I thought it might be a physical disk problem, so I ran a Level 2 SpinRite 6.1 scan, which found no errors.
I put a different SSD in the machine, flashed 25.7 fresh, and uploaded the last config backup I could find, but it predates some extensive NAT/firewall/reverse-proxy changes I'd rather not rebuild. I download config backups any time I make big changes, so I have no idea how the most recent backup I can find is from August, but here I am.
Contents of 'zfs list' are as follows:
Code:
NAME                        USED  AVAIL  REFER  MOUNTPOINT
zroot                      4.04G   213G  1.89M  /Volumes/OPNsense/zroot
zroot/ROOT                 3.55G   213G  1.95M  /Volumes/OPNsense
zroot/ROOT/20250401030029     8K   213G  1.73G  /Volumes/OPNsense
zroot/ROOT/default         3.55G   213G  1.82G  /Volumes/OPNsense
zroot/home                 1.99M   213G  1.99M  /Volumes/OPNsense/home
zroot/tmp                  15.7M   213G  15.7M  /Volumes/OPNsense/tmp
zroot/usr                  3.92M   213G    96K  /Volumes/OPNsense/usr
zroot/usr/ports            1.93M   213G  1.93M  /Volumes/OPNsense/usr/ports
zroot/usr/src              1.90M   213G  1.90M  /Volumes/OPNsense/usr/src
zroot/var                   411M   213G    96K  /Volumes/OPNsense/var
zroot/var/audit            1.93M   213G  1.93M  /Volumes/OPNsense/var/audit
zroot/var/crash            3.81M   213G  3.81M  /Volumes/OPNsense/var/crash
zroot/var/log               401M   213G   401M  /Volumes/OPNsense/var/log
zroot/var/mail             1.94M   213G  1.94M  /Volumes/OPNsense/var/mail
zroot/var/tmp              1.94M   213G  1.94M  /Volumes/OPNsense/var/tmp
I received no errors importing/mounting (on that attempt), then ran 'zpool scrub zroot'; the 'zpool status -v' output is as follows:
Code:
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:08 with 0 errors on Thu Oct 9 23:15:21 2025
config:

        NAME     STATE     READ WRITE CKSUM
        zroot    ONLINE       0     0     0
          disk4  ONLINE       0     0     0

errors: No known data errors
I'm at a loss for why zroot/ROOT (mounted at /Volumes/OPNsense/) is empty other than the /home, /tmp, /usr, /var, and /zroot subfolders, most of which do have files in them.
'zfs list' shows 3.55G used in the dataset; would that be the case if someone gained access to my OPNsense machine and wiped out the files but not the directories in /?
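One thing I notice in my own listing: zroot/ROOT, zroot/ROOT/default, and zroot/ROOT/20250401030029 all claim the same mountpoint (/Volumes/OPNsense), so presumably only one of them can actually be mounted there at a time. My guess (and it is only a guess) is that the nearly empty parent zroot/ROOT, which only REFERs 1.95M, grabbed the mountpoint, while zroot/ROOT/default, which still REFERs 1.82G, never got mounted at all. Something like this should confirm which dataset actually owns that path:

Code:
# which of the overlapping datasets is actually mounted at /Volumes/OPNsense?
zfs list -r -o name,used,referenced,canmount,mounted,mountpoint zroot/ROOT
zfs get -r mounted,canmount,mountpoint zroot/ROOT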
All the import errors I was getting initially were along the lines of 'permission denied', so I'm hoping there's some unresolved permissions issue where I need to 'take ownership' of the datasets or something along those lines. It's also odd that the zroot mountpoint is a folder inside /Volumes/OPNsense alongside the files that should be in it, so maybe I need to import without mounting (-N), then set the mountpoints and mount the datasets one by one to make sure they're nested correctly? A rough sketch of what I have in mind is below.
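This is only a sketch of that idea, assuming the /Volumes/OPNsense paths in my listing come from the pool having been imported with that altroot (if they turn out to be the datasets' real recorded mountpoints, I'd drop the -R). The goal is to mount the boot environment zroot/ROOT/default first, since it's the dataset that REFERs 1.82G, and only then mount the children on top of it:

Code:
zpool export zroot                           # start from a clean slate
zpool import -N -R /Volumes/OPNsense zroot   # import without mounting anything
zfs get -r canmount,mountpoint zroot         # sanity-check the layout first
zfs mount zroot/ROOT/default                 # mount the actual root filesystem (canmount=noauto can still be mounted explicitly)
zfs mount -a                                 # then let home/tmp/usr/var nest inside it

If that works, the file I'm after should be conf/config.xml inside that mountpoint, and I believe OPNsense also keeps its own rotating backups under conf/backup/, which might be newer than anything I downloaded.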
I've run out of theories and realize I'm incredibly ignorant vis-à-vis ZFS, which I plan to work on since I intend to build a NAS in the future. I'm running as root (sudo -s) in the macOS Terminal, using OpenZFS installed via Homebrew today.
Any guidance would be appreciated; I'm running low on hope and ideas.