Hi there,
I recently set up an appliance to be used as a cold spare / backup in case the primary system dies:
- Installed OPNsense (24.7.x)
- Installed BEmanager
Everything working so far.
Then I exported the active BE (24.1.x) from the primary system to network storage using BEmanager and imported this BE onto the new backup system. After activating this BE and rebooting, I get the following error and the device is unable to boot:
Mounting from zfs:zroot/ROOT/restore-2024-10-24-061121 failed with error 45.
Loader variables:
vfs.root.mountfrom=zfs:zroot/ROOT/restore-2024-10-24-061121
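For context, what BEmanager does here should roughly correspond to the manual bectl workflow below (the mount path is just an example, I don't know the plugin internals); the activate step is shown further down:
(primary) bectl export 24.1.10 > /mnt/backup/24.1.10.be
(backup)  bectl import restore-2024-10-24-061121 < /mnt/backup/24.1.10.be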
I am pretty sure I've done everything the same way I did it several times before when creating backup devices or moving to new devices.
Primary device:
bectl list
BE Active Mountpoint Space Created
24.1.10 NR / 2.26G 2022-01-13 06:34
24.1.10_2 - - 2.91M 2024-10-24 07:50
This is the system I exported the active 24.1.10 BE from.
(I had to create the 24.1.10_2 BE since BEmanager will not recognize any BE for export when only one BE exists; this is the same way I've done it several times before.)
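For reference, such a placeholder BE can be created on the shell with a single command:
bectl create 24.1.10_2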
Backup device after importing BE:
bectl list
BE Active Mountpoint Space Created
default NR / 1.12G 2024-10-24 05:22
restore-2024-10-24-061121 - - 2.26G 2024-10-24 06:11
root@OPNsense:~ # bectl activate restore-2024-10-24-061121
Successfully activated boot environment restore-2024-10-24-061121
root@OPNsense:~ # bectl list
BE Active Mountpoint Space Created
default N / 1.12G 2024-10-24 05:22
restore-2024-10-24-061121 R - 2.26G 2024-10-24 06:11
root@OPNsense:~ # reboot
I am not really familiar with ZFS and BEs, so I have no idea where to start debugging,
but I have now found that the SSD of the backup system only has 8 GB whereas the primary system has 30 GB; accordingly, the ZFS partition is 5.2 GB on the backup system and 22 GB on the primary system. The swap size also differs.
Could this cause any problems? Any other ideas?
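In case it helps, this is roughly how I compared the layouts of the two disks (ada0 is only a guess for the device name):
gpart show ada0    # partition table: efi, freebsd-swap, freebsd-zfs sizes
swapinfo -h        # active swap size
zpool list zroot   # pool size and free space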
Cheers
Any BSD/ZFS aces? Patrick?
Sorry, not really, because ...
1. I never used BEmanager.
2. The "vfs.root.mountfrom" method to indicate the boot FS is deprecated for quite some time.
Instead the "bootfs" property of the zpool should be used:
root@office-ka:~ # zpool get bootfs zroot
NAME PROPERTY VALUE SOURCE
zroot bootfs zroot/ROOT/24.10 local
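bectl activate normally sets this for you; if it ever has to be set by hand, it is a one-liner (the BE name below is just an example):
zpool set bootfs=zroot/ROOT/restore-2024-10-24-061121 zroot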
HTH,
Patrick
Building on that insight, maybe a comparison between the two systems would be helpful:
# sysctl -a | grep vfs
# zpool get all zroot
The idea is to compare the outputs and see whether there are differences that could explain the problem before charting a way forward.
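Something along these lines on each system, then copy the files over and diff them (file names are arbitrary):
sysctl -a | grep vfs > /tmp/vfs-primary.txt
zpool get all zroot > /tmp/zpool-primary.txt
# same on the backup system with "-backup" names, then:
diff vfs-primary.txt vfs-backup.txt
diff zpool-primary.txt zpool-backup.txt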
Thank you both, I will give it a try when I am back in the office on Monday...
Got it working now.
First I replaced the disk with a larger one => same result.
Then I checked what you said:
'zpool get bootfs zroot' shows that bootfs is set on both systems.
'sysctl -a | grep vfs' and 'zpool get all zroot' showed tons of information. The key point was that the 24.7 destination system shows more options/features (see example screenshot), so I thought it could be helpful to initially set up the new system with 24.1 (same as the source system)...
And finally that did the trick!
I guess there were some changes to ZFS (?) between the two versions that made the exported BE incompatible.
I never had this issue in the past when setting up with different major versions.
So everyone using an exported BE as a backup should initially set up the new system with the same major version as the source.
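If in doubt, the ZFS versions and pool feature flags can be compared up front; I didn't capture this at the time, but something like the following should show the mismatch:
zfs version                          # OpenZFS userland and kernel module versions
zpool get all zroot | grep feature@  # per-pool feature flags (enabled/active/disabled)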
Thanks for the hints!
Yes, a fresh install of 24.7 comes with newer ZFS options/features.
We were on FreeBSD 13 for so long that this was probably not noticeable before, because all resulting systems behaved the same.
Cheers,
Franco