I have an existing (single-drive) ZFS-based install which I'd like to add an (identical) mirror drive to -- ideally without a re-install and config reload.
I feel the following should do it, but considering my experience with BSD and ZFS is limited, perhaps someone could confirm? :)
Output from "gpart show":
=> 40 1000215136 ada0 GPT (477G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 982902784 4 freebsd-zfs (469G)
1000214528 648 - free - (324K)
I'm thinking this should do it:
# copy the partition table
gpart backup ada0 | gpart restore -F ada1
# copy the EFI partition
dd if=/dev/ada0p1 of=/dev/ada1p1
# copy the boot loader partition
dd if=/dev/ada0p2 of=/dev/ada1p2
# attach the new ZFS partition to form a mirror
zpool attach zroot ada0p4 ada1p4
Thanks!
-James
That's exactly the procedure to do it. You could additionally create a GEOM mirror for swap.
Thanks! I'm going to give it a shot tonight.
I hadn't considered the swap partition, but it seems like that would be safest in case ada0 has a complete meltdown.
I'm a little curious whether OPNsense would normally create a GEOM mirror for swap when choosing ZFS mirroring during install, though -- as I generally like to keep as close to "stock" under the hood as possible. :)
In any case I took a stab at what I feel might be the procedure, so feel free to pick it apart if I'm way off!
gmirror load    # make sure the kernel module is loaded
swapoff -a
# (optional) zero out the old swap contents
dd if=/dev/zero of=/dev/ada0p3 bs=1m
dd if=/dev/zero of=/dev/ada1p3 bs=1m
gmirror label -v -b round-robin swap /dev/ada0p3 /dev/ada1p3
# Check if there is a mirror now?
gmirror status
ls /dev/mirror/swap
# If so, edit /etc/fstab to use:
/dev/mirror/swap none swap sw 0 0
swapon -a
Thanks again,
-James
Just some notes and answers for anyone heading down this same path, or future me. :)
If you do a single-drive ZFS install of OPNsense, the drive is actually added to the pool by GPT label (gpt/zfs0) rather than partition name (e.g. ada0p3). So you'll have to take that into account in the zpool attach command. (Curiously, if a mirror is configured during install the drives seem to be added by partition name.)
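For example (a sketch only -- gpt/zfs0 is what my install used; zfs1 is a hypothetical label you would first put on the new partition):
gpart modify -i 4 -l zfs1 ada1
zpool attach zroot gpt/zfs0 gpt/zfs1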
In the end I ran into trouble with the adapter I was going to use to add the second drive. To the point where I actually re-installed because of all the write errors I saw. :-\ So to answer my question above about swap, when OPNsense does a ZFS mirrored install it appears to add a separate swap partition on each drive and includes both in fstab.
All good now, but was hoping to avoid the re-install and config fire drill. Glad to see re-applying the backup config works well though! :)
Hello. I just used your instructions and everything seems to be working, but the redundancy I was going for was to still be able to boot if one drive dies. Since the boot partition isn't a mirror, and fstab only references /boot/efi on one of the drives, not both, how does that work if that's the one that dies? Do I need to add the second drive's boot partition to fstab as well?
My fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/ada1p1 /boot/efi msdosfs rw 2 2
/dev/ada1p3 none swap sw 0 0
/dev/ada0p3 none swap sw 0 0
My drives
# gpart show
=> 40 250069600 ada0 GPT (119G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 232757248 4 freebsd-zfs (111G)
250068992 648 - free - (324K)
=> 40 250069600 ada1 GPT (119G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 232757248 4 freebsd-zfs (111G)
250068992 648 - free - (324K)
The mountpoint of the EFI partition in FreeBSD is irrelevant for the system to boot. You need to copy the partition contents from the first to the second disk e.g. with dd. The command is in one of the earlier posts in this thread and I guess you already did that.
That's all. Your EFI BIOS must be able to pick up the boot loader on the second disk when the first fails.
The mount point exists to be able to change/update the contents of the EFI partition if necessary.
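If you ever need to update it by hand, a minimal sketch (device name per this thread, loader path assumed):
mount -t msdosfs /dev/ada1p1 /mnt
# inspect or replace /mnt/EFI/BOOT/BOOTX64.EFI here; and if the firmware
# does not pick up the second ESP on its own, something like this should
# register a boot entry for it:
efibootmgr -c -a -L "OPNsense disk2" -l /mnt/EFI/BOOT/BOOTX64.EFI
umount /mnt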
Great, thanks. I should be good then.
You should also use the commands in @Moonshine's post to turn your swap partitions into a mirror.
So I just ran the attach command (confusing because I started with ada1 as my original single-drive ZFS pool, and am adding ada0, which is the opposite of the OP).
The problem is that since the instructions specify to dd the partitions, it also seems to have copied over whatever indicator says the drive is part of the pool. So when I try to attach, it says ada0p4 is already part of the pool (which it's not; it just thinks it is because of the dd). Am I safe to pass -f to override?
root@firewall:/etc # zpool status
pool: zroot
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada1p4 ONLINE 0 0 0
errors: No known data errors
root@firewall:/etc # zpool attach zroot ada1p4 ada0p4
invalid vdev specification
use '-f' to override the following errors:
/dev/ada0p4 is part of active pool 'zroot'
Huh? The swap partitions are p3, not p4 ...
So if you dd'ed anything to p4, that broke your pool. Also you do not really need the dd to the swap partitions.
Do a `gpart show` and a `zpool status` again, please, and we will start from there ...
Sorry, I switched topics and didn't clarify. I wasn't trying to create the swap mirror, I was just trying to execute the attach command after completing the instructions in the OP's first post. I realized what I did though. Instead of just dd'ing p1 and p2, I also dd'd the ZFS partition (p4), so it copied over the ZFS metadata. That was my mistake. I wiped the first 512K and last 512K from p4 on the new drive (which is where the ZFS metadata lives) and tried attaching again. It worked fine.
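(For the record, I believe zpool labelclear is the purpose-built way to do that wipe -- something like zpool labelclear -f /dev/ada0p4 on the not-yet-attached partition -- rather than hand-counting dd offsets.)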
It only took 7 seconds to resilver. I'm hoping that was because the partitions were already almost identical, minus the first 512K and last 512K.
Thank you for taking the time to answer my questions.
This looks horrible from a ZFS perspective :-)
Does FreeBSD really set up partitions on disk instead of datasets in a ZFS pool?
I would have thought that basically the following should suffice:
zpool attach zroot gpt/existingdrivename gpt/whateveryournewdrivenameis
This is sufficient. FreeBSD uses a single partition of type freebsd-zfs to store pool data and of course uses datasets.
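(For example, zfs list -r zroot on a stock install shows the zroot/ROOT/default boot environment plus the usual usr and var datasets.)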
But in general you need
- an EFI boot partition or
- a legacy boot loader partition
- a swap partition
all outside of ZFS.
This thread is about how to create these if you build a mirror setup after installation. The zpool attach is easy, but what good is a second disk if there is no boot loader on it?
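For the legacy loader partition, by the way, an alternative to dd'ing it over is writing fresh boot code with gpart -- a sketch, using the partition index from the layouts above:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1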
Quote from: pmhausen on July 10, 2022, 05:12:17 PM
But in general you need
- an EFI boot partition or
- a legacy boot loader partition
- a swap partition
all outside of ZFS.
Thanks - I'm more familiar with OmniOS, which as far as I am aware does not use partitions and where you would just "install" the bootloader after "attaching" a second disk (https://omnios.org/info/migrate_rpool.html). That's Illumos and not *BSD, though.
There's a current discussion about whether FreeBSD should get a bootadm command. The partition layout is due to the PC architecture; your OmniOS example seems to be SPARC? At least the device names hint at that.
Quote from: pmhausen on July 12, 2022, 09:48:06 AM
, your OmniOS example seems to be SPARC? At least the device names hint at that.
Nope - OmniOS is pure x64. I think SPARC is only catered for by Tribblix.
Great thread, exactly what I was looking for.
One more question: Can I run the commands directly from OPNSense shell while booted or do I have to boot from an external drive?
All from the live system. For another complete walkthrough see
https://forum.opnsense.org/index.php?topic=32650.msg157910#msg157910
Thanks, worked nicely.
The only weird thing was that gmirror load gave the following error message: "gmirror: Command 'load' not available; try 'load' first.".
Don't know if it was because it is already loaded (however, gmirror unload gives a similar error).
In any case, I was able to complete the steps afterwards and gmirror status now shows me the swap mirror.
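(For anyone else: kldstat -q -m g_mirror && echo loaded should tell you whether the module is already in the kernel, which I assume was my case.)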
So I'm adding a second NVMe drive to my DEC850v2, and I wanted to check the commands since all the examples above seem to be for SATA drives.
output of gpart show on original drive:
gpart show nda0
=> 3 500118181 nda0 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
Output of zpool status:
zpool status
pool: zroot
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
nda0p3 ONLINE 0 0 0
What I think I should do:
#copy partition table
gpart backup nda0 | gpart restore -F nda1
#copy EFI
dd if=/dev/nda0p1 of=/dev/nda1p1
#copy bootloader
dd if=/dev/nda0p2 of=/dev/nda1p2
#mirror zfs
zpool attach zroot nda0p3 nda1p3
# turn swap partition into mirrored device
gmirror load
swapoff -a
gmirror label -b round-robin swap nda1p4
gmirror configure -a swap
gmirror insert swap nda0p4
Does this seem sound for how the NVMe drive is configured on the DEC850 v2?
Looks good.
Quote from: Patrick M. Hausen on December 30, 2024, 10:08:13 PM
Looks good.
So got around to doing this and ran into an issue.
First, although I didn't change the original NVMe's location on the motherboard, the assignments changed:
root@DEC850:~ # zpool status
pool: zroot
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
nda1p3 ONLINE 0 0 0
errors: No known data errors
root@lurch:~ # gpart show nda0
gpart: No such geom: nda0.
root@lurch:~ # gpart show nda1
=> 3 500118181 nda1 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
So I tried modifying the prior post instructions to be:
root@DEC850:~ # gpart backup nda1 | gpart restore -F nda0
but the result I got back was:
gpart: entries '4': Invalid argument
When I checked, nothing was transferred:
gpart: entries '4': Invalid argument
root@lurch:~ # gpart show nda1
=> 3 500118181 nda1 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
root@lurch:~ # gpart show nda0
gpart: No such geom: nda0.
Any suggestions?
camcontrol devlist
please.
First checking that both drives are there and aren't throwing errors:
root@DEC850:~ # smartctl -a /dev/nvme0
smartctl 7.4 2023-08-01 r5530 [FreeBSD 14.1-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: TS256GMTE712A-LNW
Serial Number: J026240050
Firmware Version: 82B2W2AM
PCI Vendor/Subsystem ID: 0x1d79
IEEE OUI Identifier: 0x48357c
Controller ID: 0
NVMe Version: 1.4
Number of Namespaces: 1
Namespace 1 Size/Capacity: 256,060,514,304 [256 GB]
Namespace 1 Utilization: 0
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 7c3548 52559c4832
Local Time is: Sun Feb 9 14:50:16 2025 CST
Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x0016): Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0f): S/H_per_NS Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size: 64 Pages
Warning Comp. Temp. Threshold: 110 Celsius
Critical Comp. Temp. Threshold: 115 Celsius
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 9.00W - - 0 0 0 0 0 0
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 23 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 5 [2.56 MB]
Data Units Written: 10 [5.12 MB]
Host Read Commands: 197
Host Write Commands: 105
Controller Busy Time: 0
Power Cycles: 5
Power On Hours: 0
Unsafe Shutdowns: 3
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 29 Celsius
Temperature Sensor 2: 23 Celsius
Temperature Sensor 3: 23 Celsius
Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged
Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
No Self-tests Logged
root@DEC850:~ # smartctl -a /dev/nvme1
smartctl 7.4 2023-08-01 r5530 [FreeBSD 14.1-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: TS256GMTE710T
Serial Number: I225250032
Firmware Version: 82B0U9MP
PCI Vendor/Subsystem ID: 0x1d79
IEEE OUI Identifier: 0x48357c
Controller ID: 0
NVMe Version: 1.4
Number of Namespaces: 1
Namespace 1 Size/Capacity: 256,060,514,304 [256 GB]
Namespace 1 Utilization: 81,589,567,488 [81.5 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 7c3548 5225de24f0
Local Time is: Sun Feb 9 14:50:18 2025 CST
Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0f): S/H_per_NS Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size: 64 Pages
Warning Comp. Temp. Threshold: 85 Celsius
Critical Comp. Temp. Threshold: 90 Celsius
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 9.00W - - 0 0 0 0 0 0
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 22 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 24%
Data Units Read: 1,349,946 [691 GB]
Data Units Written: 60,833,497 [31.1 TB]
Host Read Commands: 19,361,131
Host Write Commands: 593,589,915
Controller Busy Time: 2,688
Power Cycles: 13
Power On Hours: 5,031
Unsafe Shutdowns: 9
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 27 Celsius
Temperature Sensor 2: 22 Celsius
Temperature Sensor 3: 21 Celsius
Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged
Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
No Self-tests Logged
The output of camcontrol devlist
root@DEC850:~ # camcontrol devlist
<TS256GMTE712A-LNW 82B2W2AM> at scbus0 target 0 lun 1 (pass0,nda0)
<TS256GMTE710T 82B0U9MP> at scbus1 target 0 lun 1 (pass1,nda1)
OK. That's weird. So let's check if it's a problem with the backup of the current partition table or the restore operation that fails:
gpart backup nda1
I suspect it's the restore that fails, because the "3" for the start of the first partition looks strange. Commonly that is "40". My Deciso appliance looks like this:
groot@opnsense:~ # gpart show
=> 40 500118112 nda0 GPT (238G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 482805760 4 freebsd-zfs (230G)
500117504 648 - free - (324K)
You could try to create the partitions on nda0 manually and fix the odd sizes while going along. Then when you are finished creating the ZFS mirror, you could fix the odd layout on nda1.
gpart create -s gpt nda0
gpart add -s 532480 -t efi nda0
gpart add -s 1024 -t freebsd-boot nda0
gpart add -a 1m -s 482344960 -t freebsd-zfs nda0
gpart add -a 1m -t freebsd-swap nda0
As long as the ZFS partitions are the same size, you should be able to create the mirror. So if that works without a problem the next step would be:
zpool attach zroot nda1p3 nda0p3
HTH, please report back for the next steps.
*** Danger, Will Robinson ***
What I outlined in my last post will not damage your system in any way. But possibly you would want to turn your installation into the "official" partition layout - however you created your current one in the first place.
For that - compare to the output for my own disk - we would need to swap the ZFS and the swap partition. Not a problem with a new SSD to work with. But I need to get a calculator and rewrite the operations from my last post.
So tell me what you would prefer.
I can get to the "fix everything" post tomorrow.
Kind regards,
Patrick
Quote from: Patrick M. Hausen on February 09, 2025, 10:18:26 PM
*** Danger, Will Robinson ***
So tell me what you would prefer.
I'm aiming to keep it official as this is on a DEC850 and I figure supporting it will be easier if I keep it closer to stock.
I'm surprised that you're saying the DEC850 I got directly from Deciso in March of 2024 did not have the official layout?
Quote from: Patrick M. Hausen on February 09, 2025, 10:11:09 PM
OK. That's weird. So let's check if it's a problem with the backup of the current partition table or the restore operation that fails:
gpart backup nda1
So here are the results:
root@DEC850:~ # gpart backup nda1
GPT 4
1 efi 3 532480 efifs
2 freebsd-boot 532483 305 bootfs
3 freebsd-zfs 532788 482344960
4 freebsd-swap 482877748 17240436 swapfs
Following your steps resulted in:
root@DEC850:~ # gpart create -s gpt nda0
nda0 created
root@DEC850:~ # gpart add -s 532480 -t efi nda0
nda0p1 added
root@DEC850:~ # gpart add -s 1024 -t freebsd-boot nda0
nda0p2 added
root@DEC850:~ # gpart add -a 1m -s 482344960 -t freebsd-zfs nda0
nda0p3 added
root@DEC850:~ # gpart add -a 1m -t freebsd-swap nda0
nda0p4 added
root@DEC850:~ # gpart show
=> 3 500118181 nda1 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
=> 40 500118112 nda0 GPT (238G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 482344960 3 freebsd-zfs (230G)
482879488 17238016 4 freebsd-swap (8.2G)
500117504 648 - free - (324K)
Then added the partition to the zpool:
root@DEC850:~ # zpool attach zroot nda1p3 nda0p3
root@DEC850:~ # zpool status
pool: zroot
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sun Feb 9 15:31:08 2025
33.3G / 33.3G scanned, 747M / 33.3G issued at 187M/s
751M resilvered, 2.19% done, 00:02:58 to go
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
nda1p3 ONLINE 0 0 0
nda0p3 ONLINE 0 0 0 (resilvering)
errors: No known data errors
So that worked to make the new NVMe match your layout. It still feels a little weird that my factory DEC850 didn't come with the official layout.
I did go ahead and mirror the swap also:
root@DEC850:~ # gmirror load
root@DEC850:~ # swapoff -a
swapoff: removing /dev/gpt/swapfs as swap device
root@DEC850:~ # gmirror label -b round-robin swap nda0p4
GEOM_MIRROR: Device mirror/swap launched (1/1).
root@DEC850:~ # gmirror configure -a swap
root@DEC850:~ # gmirror insert swap nda1p4
GEOM_MIRROR: Device swap: rebuilding provider nda1p4.
root@DEC850:~ # GEOM_MIRROR: Device swap: rebuilding provider nda1p4 finished.
Would you like me to outline the steps to turn the layout into the official one?
Alrightee ... I'll bite :-)
# stop the mirror operation
zpool detach zroot nda0p3
# nuke the partition table on nda0
gpart destroy -F nda0
# recreate the partition table following the standard
gpart create -s gpt nda0
gpart add -s 532480 -t efi nda0
gpart add -s 1024 -t freebsd-boot nda0
gpart add -a 1m -s 8g -t freebsd-swap nda0
gpart add -a 1m -t freebsd-zfs nda0
You should then be able to add partition 4 - not 3 - of the new disk to the ZFS mirror:
zpool attach zroot nda1p3 nda0p4
Once that is completed, we can proceed. I might not be available for much longer today, but as long as we do not do anything destructive to nda1, you should be fine.
Or if you want to just proceed with ZFS and swap partitions reversed compared to the standard - fine. Just tell me how you wish to proceed.
Quote from: Patrick M. Hausen on February 09, 2025, 10:43:50 PM
Or if you want to just proceed with ZFS and swap partitions reversed compared to the standard - fine. Just tell me how you wish to proceed.
I'd like to get things to the official layout.
I started down your path but got a message:
root@DEC850:~ # zpool detach zroot nda0p3
root@DEC850:~ # gpart destroy -F nda0
gpart: Device busy
Could this be from my having mirrored the swap already?
Quote from: charles.adams on February 09, 2025, 10:57:34 PM
I started down your path but got a message:
root@DEC850:~ # zpool detach zroot nda0p3
root@DEC850:~ # gpart destroy -F nda0
gpart: Device busy
Could this be from my having mirrored the swap already?
Yep.
gmirror remove swap nda0p4
Quote from: Patrick M. Hausen on February 09, 2025, 11:02:37 PM
Yep.
gmirror remove swap nda0p4
That worked:
root@DEC850:~ # gmirror remove swap nda0p4
root@DEC850:~ # GEOM_MIRROR: Device swap: provider nda0p4 destroyed.
root@DEC850:~ # zpool detach zroot nda0p3
root@DEC850:~ # gpart destroy -F nda0
nda0 destroyed
root@DEC850:~ # gpart create -s gpt nda0
nda0 created
root@DEC850:~ # gpart add -s 532480 -t efi nda0
nda0p1 added
root@DEC850:~ # gpart add -s 1024 -t freebsd-boot nda0
nda0p2 added
root@DEC850:~ # gpart add -a 1m -s 8g -t freebsd-swap nda0
nda0p3 added
root@DEC850:~ # gpart add -a 1m -t freebsd-zfs nda0
nda0p4 added
root@DEC850:~ # zpool attach zroot nda1p3 nda0p4
root@DEC850:~ # zpool status
pool: zroot
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sun Feb 9 16:38:23 2025
33.4G / 33.4G scanned, 1.34G / 33.4G issued at 229M/s
1.36G resilvered, 4.03% done, 00:02:23 to go
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
nda1p3 ONLINE 0 0 0
nda0p4 ONLINE 0 0 0 (resilvering)
errors: No known data errors
I did get a strange error when I tried to add the new nda0p3 as swap but I'm rebooting right now and I'll try again after that.
I think the error when mirroring the swap is because the new swap size is smaller than the original partition?
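(diskinfo /dev/nda0p3 /dev/nda1p4 should confirm that by printing each one's media size in bytes.)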
root@DEC850:~ # gpart show
=> 3 500118181 nda1 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
=> 40 500118112 nda0 GPT (238G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 482805760 4 freebsd-zfs (230G)
500117504 648 - free - (324K)
Yes. We need to repartition the original drive and completely recreate the swap. It's definitely bedtime for me now, but we can continue tomorrow. Just let the resilver finish.
Quote from: Patrick M. Hausen on February 09, 2025, 11:56:32 PM
Yes. We need to repartition the original drive and completely recreate the swap. It's definitely bedtime for me now, but we can continue tomorrow. Just let the resilver finish.
So after the reboot it does start and I get this message on the serial console:
(https://i.ibb.co/WW7QS6SG/17391454592565417830511501807614.jpg) (https://ibb.co/N6fXwswt)
When I try to boot from it anyway, it gives me:
(https://i.ibb.co/F4trrQ3P/17391456084606102340370249961347.jpg) (https://ibb.co/GQj66Zph)
I'd give this as a paste but I only have a phone to get to the internet and I don't have hotspot.
Going to see if I can download to the phone, transfer to the desktop, and reinstall from serial before I fall asleep.
So I got the network back by using the OPNsense importer with a backup .xml I made before starting this.
Haven't figured this out on the NVMe (and left it as is), and Zenarmor seems missing from this live-disk import, but the family will be happy when they wake, and I can resume this after sleeping.
Oh dear. The EFI partition needs to be copied over before a reboot. That is not part of the zpool. Sorry, I should have made it clear that you cannot reboot until everything is finished.
Quote from: Patrick M. Hausen on February 10, 2025, 06:25:55 AM
Oh dear. The EFI partition needs to be copied over before a reboot. That is not part of the zpool. Sorry, I should have made it clear that you cannot reboot until everything is finished.
Live and learn, fortunately nothing is destroyed yet.
root@DEC850:~ # gpart show
=> 40 500118112 nda0 GPT (238G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 482805760 4 freebsd-zfs (230G)
500117504 648 - free - (324K)
=> 3 500118181 nda1 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
=> 63 60628929 da1 MBR (29G)
63 1985 - free - (993K)
2048 60626912 1 fat32lba [active] (29G)
60628960 32 - free - (16K)
=> 34 1953525101 da0 GPT (932G)
34 66584 1 efi (33M)
66618 122 2 freebsd-boot (61K)
66740 1024 3 freebsd-swap (512K)
67764 5082432 4 freebsd-ufs (2.4G)
5150196 1948374939 - free - (929G)
=> 34 1953525101 diskid/DISK-AAAABBBB0009 GPT (932G)
34 66584 1 efi (33M)
66618 122 2 freebsd-boot (61K)
66740 1024 3 freebsd-swap (512K)
67764 5082432 4 freebsd-ufs (2.4G)
5150196 1948374939 - free - (929G)
=> 63 60628929 diskid/DISK-071C2B5B141FDC30 MBR (29G)
63 1985 - free - (993K)
2048 60626912 1 fat32lba [active] (29G)
60628960 32 - free - (16K)
So, what's your current state and where do you want to go today? ;-)
Did you install with a ZFS mirror from the beginning?
EDIT: from your gpart output you did not. So please post the output of `zpool status` to check which SSD you are running from.
Quote from: Patrick M. Hausen on February 10, 2025, 04:10:52 PM
So, what's your current state and where do you want to go today? ;-)
Did you install with a ZFS mirror from the beginning?
EDIT: from your gpart output you did not. So please post the output of `zpool status` to check which SSD you are running from.
Right now it is running from a Live USB via the Importer. So no zfs.
Before the reboot I had gotten the zpool mirror up on nda1p3 and nda0p4, and had not gotten the swap mirrored, but swap was on nda0p3.
Probably all that is missing for now is the EFI partition content on nda0.
dd if=/dev/nda1p1 of=/dev/nda0p1
Then try a reboot after "office hours".
Quote from: Patrick M. Hausen on February 10, 2025, 04:44:21 PM
Probably all that is missing for now is the EFI partition content on nda0.
dd if=/dev/nda1p1 of=/dev/nda0p1
Then try a reboot after "office hours".
Yes, the original install (on nda1) is the factory one that came on my DEC850, so it was a ZFS stripe, as from the factory they only include one NVMe.
I've copied partition 1 across and that wasn't enough:
From the serial console during boot:
InsydeH2O Version : 05.22.01.0021.0013
BIOS Build Date : 09/06/2023
Processor Type : AMD EPYC 3201 8-Core Processor
System Memory Speed : 2133 MHz
CPUID : 800F12
Press Esc go to Setup Utility
Please set hw.efifb.address and hw.efifb.stride.
Consoles: EFI console
zio_read error: 5
zio_read error: 5
zio_read error: 5
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
Reading loader env vars from /efi/freebsd/loader.env
Setting currdev to disk0p1:
FreeBSD/amd64 EFI loader, Revision 1.1
Command line arguments: loader.efi
Image base: 0x740da000
EFI version: 2.50
EFI Firmware: INSYDE Corp. (rev 21024.4112)
Console: efi (0x1000)
Load Path: \EFI\BOOT\BOOTX64.EFI
Load Device: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(1,GPT,85
00F6A6,0x28,0x82000)
BootCurrent: 0003
BootOrder: 0003[*] 0001 0000 2001 2002 2003
BootInfo Path: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(1,GPT,
EA00F6A6,0x28,0x82000)
Ignoring Boot0003: Only one DP found
Trying ESP: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(1,GPT,85B5EA
A6,0x28,0x82000)
Setting currdev to disk0p1:
Trying: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(2,GPT,882B612B-E
x82028,0x400)
Setting currdev to disk0p2:
Trying: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(3,GPT,8C120998-E
x82800,0x1000000)
Setting currdev to
Trying: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(4,GPT,8E6BB550-E
x1082800,0x1CC70800)
Setting currdev to
Failed to find bootable partition
press any key to interrupt reboot in 1 seconds
I also tried going into the UEFI and switching from nda1 to nda0 and got the same:
Please set hw.efifb.address and hw.efifb.stride.
Consoles: EFI console
zio_read error: 5
zio_read error: 5
zio_read error: 5
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
Reading loader env vars from /efi/freebsd/loader.env
Setting currdev to disk0p1:
FreeBSD/amd64 EFI loader, Revision 1.1
Command line arguments: loader.efi
Image base: 0x740da000
EFI version: 2.50
EFI Firmware: INSYDE Corp. (rev 21024.4112)
Console: efi (0x1000)
Load Path: \EFI\BOOT\BOOTX64.EFI
Load Device: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(1,GPT,85B5EA4E-E736-11EF-9807-F490EA
00F6A6,0x28,0x82000)
BootCurrent: 0003
BootOrder: 0003[*] 0001 0000 2001 2002 2003
BootInfo Path: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(1,GPT,85B5EA4E-E736-11EF-9807-F490
EA00F6A6,0x28,0x82000)
Ignoring Boot0003: Only one DP found
Trying ESP: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(1,GPT,85B5EA4E-E736-11EF-9807-F490EA00F6
A6,0x28,0x82000)
Setting currdev to disk0p1:
Trying: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(2,GPT,882B612B-E736-11EF-9807-F490EA00F6A6,0
x82028,0x400)
Setting currdev to disk0p2:
Trying: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(3,GPT,8C120998-E736-11EF-9807-F490EA00F6A6,0
x82800,0x1000000)
Setting currdev to
Trying: PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/NVMe(0x1,32-48-9C-55-52-48-35-7C)/HD(4,GPT,8E6BB550-E736-11EF-9807-F490EA00F6A6,0
x1082800,0x1CC70800)
Setting currdev to
Failed to find bootable partition
press any key to interrupt reboot in 1 seconds
Just in case, I copied both the EFI and boot loader partitions from nda1 to nda0 via:
#copy EFI
dd if=/dev/nda1p1 of=/dev/nda0p1
#copy bootloader
dd if=/dev/nda1p2 of=/dev/nda0p2
And still no luck. Wondering if somehow the nda1 data got messed up?
I started down the path of the opnsense-installer while I was booted from the Live USB importer and saw something that makes me think the install is still there if we could just correct the EFI issue:
(https://i.ibb.co/wrFB1xNc/image.png) (https://ibb.co/2Y1WBf7Z)
nda0 and nvd0 are the new NVMe I was trying to expand onto.
nda1 and nvd1 are the original factory Deciso NVMe drive with the strange partitioning you IDed.
da1 is the USB stick on a hub that contains the original setup's config.xml.
da0 is the SATA SSD in a USB enclosure that has the OPNsense 24.10 Live USB installed with Rufus from the img provided when I purchased the DEC850 in March 2024.
md98 I'm not sure about -- is it the UFS from da0?
zroot I think would be the mirror we created on the two NVMe drives?
I would do a complete reinstall given that you have a saved config. That should fix any partition layout issues. You can select mirrored to both NVMe drives during installation.
So I'm trying to reinstall from a 24.10.1 img flashed to a SATA USB drive that I managed to get booted from the importer:
(https://i.ibb.co/MDHtcwrV/image.png) (https://ibb.co/3ybjp62N)
root@DEC850:~ # gpart show
=> 40 500118112 nda0 GPT (238G)
40 532480 1 efi (260M)
532520 1024 2 freebsd-boot (512K)
533544 984 - free - (492K)
534528 16777216 3 freebsd-swap (8.0G)
17311744 482805760 4 freebsd-zfs (230G)
500117504 648 - free - (324K)
=> 3 500118181 nda1 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
=> 34 1953525101 da0 GPT (932G)
34 66584 1 efi (33M)
66618 122 2 freebsd-boot (61K)
66740 1024 3 freebsd-swap (512K)
67764 5082432 4 freebsd-ufs (2.4G)
5150196 1948374939 - free - (929G)
=> 3 500118181 diskid/DISK-I225250032 GPT (238G)
3 532480 1 efi (260M)
532483 305 2 freebsd-boot (153K)
532788 482344960 3 freebsd-zfs (230G)
482877748 17240436 4 freebsd-swap (8.2G)
=> 34 1953525101 diskid/DISK-AAAABBBB0009 GPT (932G)
34 66584 1 efi (33M)
66618 122 2 freebsd-boot (61K)
66740 1024 3 freebsd-swap (512K)
67764 5082432 4 freebsd-ufs (2.4G)
5150196 1948374939 - free - (929G)
=> 63 60628929 da1 MBR (29G)
63 1985 - free - (993K)
2048 60626912 1 fat32lba (29G)
60628960 32 - free - (16K)
=> 63 60628929 diskid/DISK-071C2B5B141FDC30 MBR (29G)
63 1985 - free - (993K)
2048 60626912 1 fat32lba (29G)
60628960 32 - free - (16K)
However, when I select ZFS and go to mirror, it doesn't show both NVMe drives:
(https://i.ibb.co/gLF1mJBQ/image.png) (https://ibb.co/nN826j54)
I've never installed OPNsense from scratch, as we've always purchased a factory device from Deciso and have never had a problem before this, so I may be doing something wrong?
I'd contact Deciso about this. I stand by my claim that the partition layout on the former nda0, now nda1, does not look right. Maybe they know how that came to be. With a GPT-partitioned disk in FreeBSD the first partition always starts at sector 40 (512-byte sectors), for example.
Sorry about any extra work or confusion I might have caused.
Quote from: Patrick M. Hausen on February 12, 2025, 02:52:40 PM
I'd contact Deciso about this.
Thanks, I wish I'd been able to customize the original order with mirrored drives from the factory and avoided all this.
Going to try a striped install, but I do wish the new NVMe would show in the installer. Do you think I should open a GitHub ticket about that, or is there something I can try first?
Possibly wipe the new NVMe first. From your running installation as root:
dd if=/dev/zero of=/dev/nvd0 bs=1m
Make double sure nvd0 is the new one which we configured. ;-)
I would not open a GitHub ticket but contact their company headquarters. Even out of warranty and without an active support contract, they should be able and willing to provide some help for their "official" hardware.
Kind regards,
Patrick
Quote from: Patrick M. Hausen on February 12, 2025, 06:52:08 PM
Possibly wipe the new NVMe first. From your running installation as root:
dd if=/dev/zero of=/dev/nvd0 bs=1m
Make double sure nvd0 is the new one which we configured. ;-)
I would not open a github ticket but contact their company head quarters. Even out of warranty and without an active support contract they should be able and willing to provide some help for their "official" hardware.
Kind regards,
Patrick
nda0 is the new one per the partition layout. But the code in your quote should be corrected, as you have nvd0 in it.
It was purchased in March of 2024 so I hope it is still under warranty!
Quote from: Patrick M. Hausen on February 12, 2025, 02:52:40 PM
I'd contact Deciso about this.
So I didn't have to reach out to them, as sales contacted me to accuse me of running multiple machines without a volume license... I replied with a link to this thread and a request for help.
I verified the right disk and tried wiping the new NVMe and got another problem (< USB DISK 3.0 PMAP> is the config USB and <ASMT 2235 0> is the USB SATA drive I've got the installer on):
root@DEC850:~ # camcontrol devlist
<TS256GMTE712A-LNW 82B2W2AM> at scbus0 target 0 lun 1 (pass0,nda0)
<TS256GMTE710T 82B0U9MP> at scbus1 target 0 lun 1 (pass1,nda1)
<ASMT 2235 0> at scbus2 target 0 lun 0 (pass2,da0)
< USB DISK 3.0 PMAP> at scbus3 target 0 lun 0 (da1,pass3)
root@DEC850:~ # dd if=/dev/zero of=/dev/nda0 bs=1m
dd: /dev/nda0: Operation not permitted
So I noticed several places in the bootup scroll where it mentions 'zpool', even though the Live USB importer is using UFS. Do you think it could also be loading the ZFS pool, and that is why it won't permit this action?
Quote from: charles.adams on February 12, 2025, 02:21:59 PM
So I'm trying to reinstall from a 24.10.1 img flashed to a SATA USB drive ... when I select ZFS and go to mirror, it doesn't show both NVMe drives.
So I tried going into the opnsense-installer again, but this time going down the 'other' path:
(https://i.ibb.co/MyRLxg0N/image.png) (https://imgbb.com/)
(https://i.ibb.co/8g4HGgVs/image.png) (https://imgbb.com/)
(https://i.ibb.co/CZTPs0C/image.png) (https://imgbb.com/)
(https://i.ibb.co/SX2n21HY/image.png) (https://ibb.co/MDbVbjKw)
and the device info can see both NVMe drives, but when I try to select them in the mirror dialog it still doesn't show both to select as a mirror:
(https://i.ibb.co/KpFrZhpf/image.png) (https://imgbb.com/)
This is getting rather frustrating. Hopefully I can get Deciso to help?
Quote from: charles.adams on February 13, 2025, 02:54:58 AM
I verified the right disk and tried wiping the new NVMe and got another problem:
root@DEC850:~ # dd if=/dev/zero of=/dev/nda0 bs=1m
dd: /dev/nda0: Operation not permitted
and I tried
root@DEC850:~ # gpart destroy -F nda0
gpart: Device busy
Why is it busy when I'm not booted off it, nor am I running the installer, but am just in the shell via serial?
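(I would guess geom -t, which prints the full GEOM topology tree, should show whatever still has a hold on nda0.)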
I tried checking my hunch and it doesn't look like it is caused by zfs:
root@DEC850:~ # zpool status
no pools available
root@DEC850:~ # zfs list
no datasets available
I'm giving up for tonight and heading to bed before I try chasing down GEOM and the swap. I wish there was a way to use https://github.com/Emrion/uploaders on OPNsense, as I've used it to fix a NAS with boot issues.
I couldn't stay asleep.
So it isn't swap that is keeping the nda0 drive busy:
root@DEC850:~ # mdconfig -l -v
md98 vnode 2048M /usr/swap0
root@DEC850:~ # swapinfo -k
Device 1K-blocks Used Avail Capacity
/dev/md98 2097152 0 2097152 0%
root@DEC850:~ # tail /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/ufs/OPNsense_Install / ufs ro,noatime 1 1
tmpfs /tmp tmpfs rw,mode=01777 0 0
However, I remembered we used gmirror when making the swap so I tried:
root@DEC850:~ # gmirror status
Name Status Components
mirror/swap COMPLETE nda0p3 (ACTIVE)
Which I think was the blocking issue keeping me from clearing the new nda0.
So if I'm understanding the gmirror man page correctly (and this worked, so I think so):
root@DEC850:~ # gmirror stop swap
GEOM_MIRROR: Device swap: provider destroyed.
GEOM_MIRROR: Device swap destroyed.
GEOM_MIRROR: Device mirror/swap launched (1/1).
root@DEC850:~ # gmirror status
Name Status Components
mirror/swap COMPLETE gptid/8c120998-e736-11ef-9807-f490ea00f6a6 (ACTIVE)
root@DEC850:~ # gmirror destroy swap
GEOM_MIRROR: Device swap: provider destroyed.
GEOM_MIRROR: Device swap destroyed.
root@DEC850:~ # gpart destroy -F nda0
nda0 destroyed
and now when I go to opnsense-installer I can select both NVMe drives:
(https://i.ibb.co/HTD8sDvL/image.png) (https://ibb.co/3myZqyX5)
I will try sleeping again and see how this works later.
Quote from: charles.adams on February 13, 2025, 06:16:54 AM
... and now when I go to opnsense-installer I can select both NVMe drives.
So in case someone else turns this up when searching: one of the bits of advice I got from my Deciso email was to use
nvmecontrol format nvme0
to wipe the new NVMe drive instead of gpart destroy.
I'm going to give making the mirror from my original ZFS install another try, instead of wiping both and re-installing from the opnsense-installer. Since I have it booting from the original factory NVMe, with the new NVMe blank again, I'm going to first use the opnsense-installer to create a clone onto the USB SATA drive. If I understand the function of opnsense-installer from https://docs.opnsense.org/manual/install.html#opnsense-installer correctly, this should make it easier to restore if something else goes wrong.
To do that I'm wiping the da0 drive with
gpart destroy -F da0
and when I run the opnsense-installer I will select 'other modes', then 'guided root on zfs', then select stripe on da0, force 4k, pool name 'backup', with an 8 GB swap and GPT (BIOS+UEFI). This should give me a cloned backup of my current NVMe onto the USB SATA disk, and then if something goes pear-shaped I just boot from that while we work.
I'd like to complete what we started to validate the method for anyone following after us.
So here is the revised plan to try this again:
Create partitions on new nvme:
gpart create -s gpt nda0
gpart add -s 532480 -t efi nda0
gpart add -s 1024 -t freebsd-boot nda0
gpart add -a 1m -s 8g -t freebsd-swap nda0
gpart add -a 1m -t freebsd-zfs nda0
create the zfs mirror with the two freebsd-zfs partitions to mirror the OS:
zpool attach zroot nda1p3 nda0p4
copy the EFI partition to the new nvme:
dd if=/dev/nda1p1 of=/dev/nda0p1
copy the boot loader partition to the new nvme:
dd if=/dev/nda1p2 of=/dev/nda0p2
turn the swap partition into a mirrored device (labeling the smaller new partition first, since a component being inserted must be at least as large as the mirror):
gmirror load
swapoff -a
gmirror label -b round-robin swap nda0p3
gmirror configure -a swap
gmirror insert swap nda1p4
Wait for the resilvering to complete before proceeding with changing the old nvme to the standard partition format.
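(To watch it: zpool status zroot. And if the OpenZFS version on board supports it, zpool wait -t resilver zroot should simply block until the resilver is done.)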
then stop the mirror operation on the old nvme:
zpool detach zroot nda1p3
unmirror swap to stop activity on the old nvme:
gmirror remove swap nda1p4
then nuke the partition table on the old nvme:
nvmecontrol format nvme1
then recreate the partition table on the old nvme following the standard
gpart create -s gpt nda1
gpart add -s 532480 -t efi nda1
gpart add -s 1024 -t freebsd-boot nda1
gpart add -a 1m -s 8g -t freebsd-swap nda1
gpart add -a 1m -t freebsd-zfs nda1
then zfs mirror the OS partition:
zpool attach zroot nda0p4 nda1p4
now WAIT for resilvering to complete.
then copy the EFI and boot partitions to the old nvme:
dd if=/dev/nda0p1 of=/dev/nda1p1
dd if=/dev/nda0p2 of=/dev/nda1p2
then add the old nvme's new swap partition back into the still-running mirror:
gmirror insert swap nda1p3
And NOW finally reboot?
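Plus, going by the fstab discussion earlier in the thread, I assume I'd also need to point the swap entry in /etc/fstab at the mirror before that reboot:
/dev/mirror/swap none swap sw 0 0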
That does not look consistent to me. Did you perform a fresh install? To which of the two drives? Why not a mirrored install right from the installer?
You can wipe both drives with nvmecontrol and finally they should both show up.
Quote from: Patrick M. Hausen on February 15, 2025, 09:30:07 AM
That does not look consistent to me. Did you perform a fresh install? To which of the two drives? Why not a mirrored install right from the installer?
You can wipe both drives with nvmecontrol and finally they should both show up.
I did not wipe and reinstall, as I got it booting from a live CD, figured out what was blocking nda0, and wiped it. I now want to restart from the original post's situation (nda1 with the original factory install, nda0 blank) to prove the process for the next person who gets a factory Deciso unit and wants to add mirroring of the NVMes.
If the factory SSD comes with a non-standard partitioning you should IMHO reinstall with a standard one. I reinstall all factory-new units first thing after unpacking. No idea how these odd partition boundaries came to be.