How to automount a second zpool?

Started by ajm, February 12, 2022, 03:14:11 PM

February 12, 2022, 03:14:11 PM Last Edit: February 12, 2022, 03:33:12 PM by ajm
I feel a bit ashamed having to ask this here but I've not found any references to the correct way under OPNsense to mount a second zpool at system start.

The system is already root-on-ZFS, and the second zpool was created without problems and mounted (see output below); however, after a reboot it's no longer mounted.

I'm unfamiliar with the OPNsense system startup, so I'm unsure of the 'correct' way to do this. Perhaps a 'syshook' script, or modifying /etc/fstab? Any advice gratefully received..


root@a-fw:~ # camcontrol devlist
<ULTIMATE CF CARD Ver7.01C>        at scbus0 target 0 lun 0 (pass0,ada0)
<CT2000MX500SSD1 M3CR043>          at scbus1 target 0 lun 0 (pass1,ada1)

root@a-fw:~ # gpart create -s GPT /dev/ada1
ada1 created

root@a-fw:~ # gpart add -t freebsd-zfs -a 4k /dev/ada1
ada1p1 added

root@a-fw:~ # gpart modify -l tank -i 1 /dev/ada1
ada1p1 modified

root@a-fw:~ # gpart show
=>      40  30408256  ada0  GPT  (14G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  26210304     3  freebsd-zfs  (12G)
  30406656      1640        - free -  (820K)

=>        40  3907029088  ada1  GPT  (1.8T)
          40  3907029088     1  freebsd-zfs  (1.8T)

root@a-fw:~ # zpool create -m /tank tank /dev/ada1p1
root@a-fw:~ # zfs set atime=off tank

root@a-fw:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank   1.81T   432K  1.81T        -         -     0%     0%  1.00x    ONLINE  -
zroot    12G   805M  11.2G        -         -     2%     6%  1.00x    ONLINE  -

root@a-fw:~ # mount
zroot/ROOT/base-setup on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
devfs on /var/dhcpd/dev (devfs)
devfs on /var/unbound/dev (devfs)
tank on /tank (zfs, local, noatime, nfsv4acls)


TBH I'm scratching my head a bit now..

None of the zfs or zpool commands, such as 'zpool list' or 'zpool status', return ANY info about the new pool.

The disk and partition appear to be available to the system, so I don't understand why ZFS isn't finding the new pool.


root@a-fw:~ # geom -t
Geom               Class      Provider
ada0               DISK       ada0
  ada0             PART       ada0p1
    ada0p1         DEV
    ada0p1         LABEL      gpt/gptboot0
      gpt/gptboot0 DEV
  ada0             PART       ada0p2
    ada0p2         DEV
    swap           SWAP
  ada0             PART       ada0p3
    ada0p3         DEV
    zfs::vdev      ZFS::VDEV
  ada0             DEV
ada1               DISK       ada1
  ada1             PART       ada1p1
    ada1p1         DEV
    ada1p1         LABEL      gpt/tank
      gpt/tank     DEV
  ada1             DEV

Aha! zpool import DOES find it, and reports errors. I will clean the disk off and try again..


root@a-fw:~ # zpool import
   pool: tank
     id: 17258291557216105619
  state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        tank        UNAVAIL  insufficient replicas
          ada1      UNAVAIL  invalid label

   pool: tank
     id: 8003397822133176640
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          ada1p1    ONLINE

February 12, 2022, 04:12:17 PM #3 Last Edit: February 12, 2022, 04:14:48 PM by ajm
OK, done that, now I'm back to where I started. The new pool doesn't mount at boot, but if I do a zpool import, it then mounts as expected.

February 12, 2022, 04:36:48 PM #4 Last Edit: February 12, 2022, 04:39:29 PM by ajm
Further checks look OK, I think?

It's listed in the cachefile and has 'canmount' set, so it should be mounted at boot, no?


root@a-fw:~ # zpool get cachefile
NAME   PROPERTY   VALUE      SOURCE
tank   cachefile  -          default
zroot  cachefile  -          default
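
For what it's worth, the pool configs recorded in the default cache file can be dumped directly; if I've got the zdb syntax right, both pools should show up in the output of:

root@a-fw:~ # zdb -C -U /etc/zfs/zpool.cache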

root@a-fw:~ # zfs get all tank
NAME  PROPERTY              VALUE                  SOURCE
tank  type                  filesystem             -
tank  creation              Sat Feb 12 15:04 2022  -
tank  used                  10.6M                  -
tank  available             1.76T                  -
tank  referenced            10.1M                  -
tank  compressratio         1.00x                  -
tank  mounted               yes                    -
tank  quota                 none                   default
tank  reservation           none                   default
tank  recordsize            128K                   default
tank  mountpoint            /tank                  local
tank  sharenfs              off                    default
tank  checksum              on                     default
tank  compression           off                    default
tank  atime                 off                    local
tank  devices               on                     default
tank  exec                  on                     default
tank  setuid                on                     default
tank  readonly              off                    default
tank  jailed                off                    default
tank  snapdir               hidden                 default
tank  aclmode               discard                default
tank  aclinherit            restricted             default
tank  createtxg             1                      -
tank  canmount              on                     default
tank  xattr                 on                     default
tank  copies                1                      default
tank  version               5                      -
tank  utf8only              off                    -
tank  normalization         none                   -
tank  casesensitivity       sensitive              -
tank  vscan                 off                    default
tank  nbmand                off                    default
tank  sharesmb              off                    default
tank  refquota              none                   default
tank  refreservation        none                   default
tank  guid                  6254459496930362475    -
tank  primarycache          all                    default
tank  secondarycache        all                    default
tank  usedbysnapshots       0B                     -
tank  usedbydataset         10.1M                  -
tank  usedbychildren        516K                   -
tank  usedbyrefreservation  0B                     -
tank  logbias               latency                default
tank  objsetid              54                     -
tank  dedup                 off                    default
tank  mlslabel              none                   default
tank  sync                  standard               default
tank  dnodesize             legacy                 default
tank  refcompressratio      1.00x                  -
tank  written               10.1M                  -
tank  logicalused           10.2M                  -
tank  logicalreferenced     10.0M                  -
tank  volmode               default                default
tank  filesystem_limit      none                   default
tank  snapshot_limit        none                   default
tank  filesystem_count      none                   default
tank  snapshot_count        none                   default
tank  snapdev               hidden                 default
tank  acltype               nfsv4                  default
tank  context               none                   default
tank  fscontext             none                   default
tank  defcontext            none                   default
tank  rootcontext           none                   default
tank  relatime              off                    default
tank  redundant_metadata    all                    default
tank  overlay               on                     default
tank  encryption            off                    default
tank  keylocation           none                   default
tank  keyformat             none                   default
tank  pbkdf2iters           0                      default
tank  special_small_blocks  0                      default


February 12, 2022, 07:38:55 PM #5 Last Edit: February 12, 2022, 08:09:40 PM by ajm

root@a-fw:~ # cat /usr/local/etc/rc.loader.d/20-zfs
# ZFS standard environment requirements
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
vfs.zfs.min_auto_ashift=12
opensolaris_load="YES"
zfs_load="YES"


Why are 'kern.geom.label.disk_ident.enable' and 'kern.geom.label.gptid.enable' disabled?

(Edit: I think the reason is to suppress the display of long GPTID strings.)
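
A quick way to see which label providers actually exist is 'glabel status'; with gptid and disk_ident disabled, only the gpt/... names should show up:

root@a-fw:~ # glabel status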

When (re)creating the zpool, I opted to use a GPT label instead of the partition number.

(Clutching at straws here.. :))
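
In other words, instead of /dev/ada1p1 the vdev was given as the gpt/tank label from the earlier gpart step, roughly like this:

root@a-fw:~ # zpool destroy tank
root@a-fw:~ # zpool create -m /tank tank gpt/tank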

Not sure how this works. We do mount using "mount -a" but maybe that historically ignores ZFS due to only looking at /etc/fstab? At least this file has the auto-mount flag.


Cheers,
Franco

Hrm, we also use "zfs mount -va". -a does mount all according to the manual page:

Mount all available ZFS file systems.  Invoked automatically as part of the boot process if configured
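
Presumably "available" only covers pools that are already imported, though, so something still has to do the import step first, e.g.:

zpool import -a -N
zfs mount -va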


Cheers,
Franco

February 12, 2022, 08:48:44 PM #8 Last Edit: February 12, 2022, 09:11:09 PM by ajm
Thanks.

On this 22.1 system, system boot does not auto-mount my second zpool 'tank'.

Manually executing 'zfs mount -va' doesn't either, only 'zpool import tank':


root@a-fw:~ # zfs mount -va
root@a-fw:~ # mount
zroot/ROOT/22.1-base on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
devfs on /var/dhcpd/dev (devfs)
devfs on /var/unbound/dev (devfs)

root@a-fw:~ # zpool import tank
root@a-fw:~ # mount
zroot/ROOT/22.1-base on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
devfs on /var/dhcpd/dev (devfs)
devfs on /var/unbound/dev (devfs)
tank on /tank (zfs, local, noatime, nfsv4acls)


I'll do some further testing on FreeBSD 13.0.

PS. Further on this: after a reboot, 'zfs mount -va' doesn't mount the 2nd zpool. If I then import it with 'zpool import tank', unmount it with 'umount /tank', and THEN run 'zfs mount -va', it IS mounted successfully.

@franco which version of zfs is used on 22.1?
I ask as I'm pretty sure the mounting requires the zfs 'canmount' property and a 'mountpoint' property to be set. I'll have a dig at the relevant man pages to confirm.
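
(Both can be checked in one go, e.g. with 'zfs get canmount,mountpoint tank'.)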

Sorry I was thinking of a dataset. The question was for a pool, my mistake.

I think the problem is that you have used an invalid character in the label. Normally, GPT labels are created so that they differ from the device names, i.e. not /dev/daX-style names, and that is the reason you have the error 'ada1  UNAVAIL  invalid label'.
I think if you recreate the label without the forward slash it will work as intended. I imagine there are dmesg logs of the attempt to mount it on boot.

February 13, 2022, 12:55:57 AM #12 Last Edit: February 13, 2022, 07:51:37 PM by ajm
Hi, thanks for the reply. Sorry for any confusion, but that error was resolved by clearing the disk and then re-creating the pool; see the following post.

The problem seems to be that ZFS does not identify the second pool attached to the system, so no attempt to mount it is made at system startup.

I've not yet been able to establish the mechanism by which ZFS gets this info, in order to debug further.

Do you have an OPNsense system there with 2+ pools mounted?

I created a FreeBSD 13.0-RELEASE-p7 boot disk for the machine, and did some comparative testing vs OPNsense 22.1.

I found that, as expected, after doing a 'zpool import tank' and rebooting, the 'tank' pool was mounted at boot under FreeBSD, but not under OPNsense.

I did various other tests, including destroying and recreating the pool and exporting/importing it under both OPNsense 22.1 and FreeBSD 13.0. The ZFS startup script under '/etc/rc.d' is the same, and the contents of '/etc/zfs/zpool.cache' were the same on both systems.

The only apparent difference between the two systems was the failure to auto-mount under OPNsense.

So that I could get on with my project, I've created a hacky syshook script to import (&mount) the pool, but it would be good to have this fixed properly.
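
The script itself is trivial; roughly this (the exact syshook directory and file name below are illustrative, so adjust to match the OPNsense syshook layout):

#!/bin/sh
# e.g. /usr/local/etc/rc.syshook.d/early/50-import-tank (name/location illustrative)
# import the data pool if it isn't already present; its datasets mount automatically on import
zpool list -H -o name tank >/dev/null 2>&1 || zpool import tank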

I suppose the next steps would be to get some more low-level debug info about ZFS startup, but I guess that would be done using a debug kernel?

For historic reasons and uncontrollable results we don't go through /etc/rc.d for our boot sequence.

/etc/rc.d/zfs zfs_start_main() seems to run zfs mount -va as expected and follows up with zfs share -a but I'm unsure if that would be the relevant difference (the manual page doesn't explain what "sharing" means).

If this is hidden in scripting I suspect FreeBSD does more than it should, or "zfs mount -a" doesn't do its job as documented?
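
For reference, stock FreeBSD also ships an /etc/rc.d/zpool script that runs before rc.d/zfs; from memory it does roughly the following at boot, which would explain where the missing import step comes from (paraphrased, so check the actual script):

for cachefile in /boot/zfs/zpool.cache /etc/zfs/zpool.cache; do
        if [ -r "$cachefile" ]; then
                # import every pool recorded in the cache file, without mounting;
                # rc.d/zfs then follows up with "zfs mount -va"
                zpool import -c "$cachefile" -a -N
                break
        fi
done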

About the version:

# zfs version
zfs-2.1.2-FreeBSD_gaf88d47f1
zfs-kmod-2.1.2-FreeBSD_gaf88d47f1


Cheers,
Franco