Messages - ender526

#1
Sorry, I switched topics and didn't clarify. I wasn't trying to create the swap mirror; I was just trying to execute the attach command after completing the instructions in the OP's first post. I realized what I did, though: instead of dd'ing just p1 and p2, I also dd'd the ZFS partition (p4), so it copied over the ZFS metadata. That was my mistake. I wiped the first 512K and last 512K of p4 on the new drive (which is where the ZFS metadata lives) and tried attaching again. It worked fine.

It only took 7 seconds to resilver. I'm hoping that was because the partitions were already almost identical, minus the first 512K and last 512K.
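For anyone following along, the wipe described above can be sketched like this. This is a non-authoritative sketch: it uses a 4 MiB scratch file in /tmp as a stand-in for the real partition (on the actual system the target would be /dev/ada0p4), and it assumes the partition size is a multiple of 512 KiB.

```shell
# Wipe the first and last 512 KiB of a "partition", demonstrated on a
# scratch file instead of a real device (assumption: real target is
# /dev/ada0p4; adjust the path before running against hardware).
disk=/tmp/fake_p4
dd if=/dev/urandom of="$disk" bs=512k count=8 2>/dev/null   # 4 MiB stand-in

size=$(wc -c < "$disk")
blocks=$((size / 524288))                                   # number of 512 KiB blocks

# Zero the first 512 KiB (front ZFS labels)...
dd if=/dev/zero of="$disk" bs=512k count=1 conv=notrunc 2>/dev/null
# ...and the last 512 KiB (tail ZFS labels). conv=notrunc keeps the size intact.
dd if=/dev/zero of="$disk" bs=512k seek=$((blocks - 1)) count=1 conv=notrunc 2>/dev/null

echo "size unchanged: $(wc -c < "$disk") bytes"
```

The `conv=notrunc` flag matters: without it, dd would truncate the file (or do nothing useful on a device node) instead of overwriting in place.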

Thank you for taking the time to answer my questions.
#2
So I just ran the attach command (confusing, because I started with ada1 as my original single-drive ZFS pool and am adding ada0, which is the opposite of the OP).

The problem is that since the instructions specify to dd the partitions, the dd also seems to have copied over whatever marker says the drive is part of the pool. So when I try to attach, it says ada0p4 is already part of the pool (which it's not; it just thinks it is because of the dd). Am I safe to pass -f to override?

root@firewall:/etc # zpool status
  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada1p4    ONLINE       0     0     0

errors: No known data errors
root@firewall:/etc # zpool attach zroot ada1p4 ada0p4
invalid vdev specification
use '-f' to override the following errors:
/dev/ada0p4 is part of active pool 'zroot'
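For context, a background fact about the ZFS on-disk format (not something stated in this thread): ZFS writes four 256 KiB vdev labels per device, two at the front and two at the end, which is exactly the "first 512K and last 512K" mentioned above. Those stale copied labels are what make zpool report ada0p4 as part of zroot. A quick sanity check of where they sit, using ada0p4's size (232757248 sectors of 512 bytes, per gpart):

```shell
# ZFS keeps four 256 KiB vdev labels per device: two at the start,
# two at the end (512 KiB at each end). Compute their byte ranges
# for a partition of 232757248 x 512-byte sectors (ada0p4).
part_bytes=$((232757248 * 512))
label_kib=256
pair_bytes=$((2 * label_kib * 1024))          # 512 KiB per end

echo "partition size: $part_bytes bytes"
echo "front labels:   bytes 0-$((pair_bytes - 1))"
echo "tail labels:    start at byte $((part_bytes - pair_bytes))"
```

So wiping 512 KiB at each end removes all four labels, which is why the attach succeeded afterward.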
#3
Great, thanks.  I should be good then.
#4
Hello. I just used your instructions and everything seems to be working, but the redundancy I was going for was still being able to boot if one drive dies. Since the EFI partition isn't mirrored, and fstab only references /boot/efi on one of the drives, not both, what happens if that's the one that dies? Do I need to add the second drive's EFI partition to fstab as well?

My fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ada1p1             /boot/efi       msdosfs rw              2       2
/dev/ada1p3             none            swap    sw              0       0
/dev/ada0p3             none            swap    sw              0       0


My drives
# gpart show
=>       40  250069600  ada0  GPT  (119G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528   16777216     3  freebsd-swap  (8.0G)
   17311744  232757248     4  freebsd-zfs  (111G)
  250068992        648        - free -  (324K)

=>       40  250069600  ada1  GPT  (119G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528   16777216     3  freebsd-swap  (8.0G)
   17311744  232757248     4  freebsd-zfs  (111G)
  250068992        648        - free -  (324K)
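One common approach (an assumption on my part, not from the instructions in this thread) is to keep the second ESP populated with a copy of the loader and give it its own mountpoint in fstab so it's easy to update. The /boot/efi2 mountpoint and the noauto option here are illustrative choices, not anything the installer creates:

```
# Hypothetical /etc/fstab addition (mountpoint /boot/efi2 is made up):
/dev/ada0p1             /boot/efi2      msdosfs rw,noauto       0       0
```

With that in place you can mount /boot/efi2 on demand and copy the contents of /boot/efi over after loader updates. Whether the firmware actually falls back to the second disk's ESP if the first drive dies depends on the machine's UEFI boot order, so that part still needs to be checked in the firmware setup.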
#5
Hello,

I am considering reinstalling using a ZFS mirror, more for redundancy than data integrity. If I lose my firewall to a bad drive, I lose internal DNS and internet. So my goal would be that if I lost a drive, both drives would be bootable and I could stay up on a single drive until Amazon Prime delivered another. My question is: are both drives bootable by default when using the installer's mirror option, or do I need to mess with the partition tables?

Thanks!
#6
Thanks. It would be great if there were something in the docs or on the website about this. It's also possible there is and I missed it.
#7
Hello,

This may be a silly question related to my unfamiliarity with how cloud threat intelligence works in Zenarmor, but I can't find clarification in the docs. It appears that most of the functionality breaks when you turn off cloud threat intelligence. How does the intelligence feature work? Does it send every website address I visit to the cloud (even anonymized) to be checked? Or does it just use the cloud to update the local signature data? I'm used to other IDSes using local rule sets that can be pulled down and updated regularly. I prefer this method, as all the analysis happens locally and no private data is sent to the cloud.

So I guess my question is: is it sending my data to the cloud to perform this action, and if so, is there a local option, either existing or on a roadmap?

Thanks!