Messages - vivekmauli14

#1
It says:

root@OPNsense:~ # zpool export zroot
cannot unmount '/var/log': pool or dataset is busy
root@OPNsense:~ #

Every attempt ends with this device busy error.

Do I need to rename the default dataset for this from zroot to something else?
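
My current theory is that something (syslog is my guess) still holds /var/log open, so the dataset cannot unmount. A rough sketch of what I plan to try next:

service syslogd onestop 2>/dev/null    # assumption: syslog is what keeps /var/log busy
zfs unmount -f zroot/var/log           # force the busy dataset to unmount
zpool export -f zroot                  # then forcibly export the pool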
#2
Oh, alright. Here it is:

root@Marge:~ # mount
/dev/iso9660/OPNSENSE_INSTALL on / (cd9660, local, read-only)
devfs on /dev (devfs)
tmpfs on /tmp (tmpfs, local)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/home on /home (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/boot on /boot (unionfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/conf on /conf (unionfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/etc on /etc (unionfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/home on /home (unionfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/root on /root (unionfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/usr on /usr (unionfs, local, noatime, nosuid, nfsv4acls)
<above>:/tmp/.cdrom/var on /var (unionfs, local, noatime, nosuid, nfsv4acls)
devfs on /var/dhcpd/dev (devfs)
devfs on /var/unbound/dev (devfs)
/usr/local/lib/python3.11 on /var/unbound/usr/local/lib/python3.11 (nullfs, local, read-only)
/lib on /var/unbound/lib (nullfs, local, read-only)
root@Marge:~ #


#3
Hi Patrick,

This is the output for mount:
#4
Hi Patrick,

My zpool status returns this :

root@Marge:~ # zpool status
  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada0p4    ONLINE       0     0     0

errors: No known data errors
root@opnsense:~ #


Also, am I the only one facing this issue while reinstalling?
#5
On a previously installed OPNsense 25.7 box, when I booted into the 25.7 installer to reinstall, it would fail with errors like "No disks present". When I dropped into the live shell to wipe the disk manually, I hit a wall of errors:

gpart destroy failed with "Device busy"

zpool destroy -f zroot failed with "pool or dataset is busy"

Even the low-level dd command failed with "Operation not permitted"

I was completely locked out from touching the internal hard drive, even as root. This was new to me; I've reinstalled older versions like 24.7 on many devices before without ever seeing this kind of stubborn lock.

How do I smoothly reinstall the machine without needing another bootable GParted disk just for drive cleaning?
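
For reference, here is what I am considering trying next from the installer's live shell (just a sketch; the disk name ada0 and GEOM's write protection being the blocker are my assumptions):

sysctl kern.geom.debugflags=0x10               # allow writes to in-use disks
zpool export -f zroot 2>/dev/null              # detach the pool the installer auto-imported
gpart destroy -F ada0                          # force-remove the partition table
dd if=/dev/zero of=/dev/ada0 bs=1m count=16    # clear leftover metadata at the start of the disk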

Thanks
#6
Hi,

I have been working on a UI revamp for MVC using React and have successfully included the static build of the React app, hosting it with lighttpd, but that setup is not persistent. Also, I have to authenticate the APIs with a key and secret; how do I do that internally, so that I don't have to put the API pair into the React app?

My goal is to have my React UI use the same authentication as the default .volt UI.
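
For context, the only way I currently know to authenticate against the API is HTTP Basic auth with the key/secret pair, roughly like this, which is exactly what I don't want to embed client-side:

curl -k -u "$API_KEY:$API_SECRET" https://127.0.0.1/api/core/firmware/status

Ideally the React app would instead ride on the browser session that the .volt UI already establishes.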

I would greatly appreciate any insights, documentation pointers, or examples you could share.

Best,
VivekSP
#7
Hi,

I'm running OPNsense 24.7.1 and noticed that the latest package caddy-custom-2.10 includes the layer4 module. I tried to install this package on my 24.7 system, but it doesn't work, likely due to binary or ABI incompatibility with the current firmware.

Looks like I might need to upgrade to the latest version now.
#8
Hi Monviech,

I guess our version of Caddy does not have the layer4 module needed for my NAT46 workaround use case:


root@Marge:/usr/local/etc/caddy/caddy.d # caddy list-modules | grep layer4
root@Marge:/usr/local/etc/caddy/caddy.d #


Trying with relayd instead; will update here.
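
The relayd configuration I have in mind is roughly this (a sketch; the address and port are placeholders carried over from my Tayga test, not production values):

relay "nat46" {
        listen on 192.0.2.66 port 8082
        forward to fd00:abcd::10 port 8082
}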

Thanks

#9
Hi Maurice,

I tried this configuration, but it didn't work as expected. The goal was to translate traffic from an IPv4 address (192.0.2.66) to a real internal IPv6 address (fd00:abcd::10).

Tayga startup script:
#!/bin/sh
# /root/start-tayga46.sh

echo "[*] Cleaning up any previous NAT46 instance..."
ifconfig nat46 destroy 2>/dev/null
killall tayga 2>/dev/null

echo "[*] Starting TAYGA and waiting for it to create the nat46 interface..."
/usr/local/sbin/tayga -c /usr/local/etc/tayga46.conf &

# Poll for the interface to appear
INTERFACE_EXISTS=false
for i in $(seq 1 10); do
    if ifconfig nat46 >/dev/null 2>&1; then
        echo "[+] Interface nat46 created by Tayga."
        INTERFACE_EXISTS=true
        break
    fi
    sleep 0.5
done

if [ "$INTERFACE_EXISTS" = false ]; then
    echo "[!] TAYGA failed to create the interface. Aborting."
    exit 1
fi


# Configure the Proxmox interface with its IPv6 address.
ifconfig igc2 inet6 fd00:abcd::1/64 alias

# Configure the IPv4 side of the tunnel.
ifconfig nat46 inet 192.0.2.1 192.0.2.2 netmask 255.255.255.255 up

# Configure the IPv6 side of the tunnel using our new dedicated network.
ifconfig nat46 inet6 fd00:aabb::2 fd00:aabb::1 prefixlen 128 up

echo "[*] Adding NAT46 routes..."

# This route directs traffic for the fake IPv4 network to Tayga.
route add -inet 192.0.2.0/24 -iface nat46



and the Tayga config:

# /usr/local/etc/tayga46.conf
tun-device nat46
ipv4-addr 192.0.2.1
ipv6-addr fd00:aabb::1
map 192.0.2.66 fd00:abcd::10
data-dir /var/db/tayga46

When I run a packet capture, I can see that Tayga is translating the packets correctly! The initial packet from the client gets translated from IPv4 to IPv6, and the destination server sends back a [S.] (SYN-ACK) reply.

16:38:20.035896 IP 192.0.2.1.1612 > 192.0.2.66.8082: Flags [S], seq 729630911, win 65228, options [mss 1460,nop,wscale 7,sackOK,TS val 3783615226 ecr 0], length 0
16:38:20.035929 IP6 fd00:aabb::1.1612 > fd00:abcd::10.8082: Flags [S], seq 729630911, win 65228, options [mss 1460,nop,wscale 7,sackOK,TS val 3783615226 ecr 0], length 0
16:38:20.035955 IP6 fd00:abcd::10.8082 > fd00:aabb::1.1612: Flags [S.], seq 2602201702, ack 729630912, win 0, options [mss 1460], length 0

The connection stalls immediately. As you can see in the third packet, the IPv6 server (fd00:abcd::10) responds with a TCP window of 0 (win 0), the handshake never completes, and no data can be transferred.

What could be causing this win 0 response? Does it point to a problem in my Tayga/network configuration, or is it more likely that the destination server (fd00:abcd::10) is incompatible with the translated packets it receives? Or is it a compatibility issue between PF and Tayga's NAT46?
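
To rule PF out, my next test is to bypass filtering on the tun interface entirely. I know OPNsense manages pf.conf itself, so this is just the raw pf equivalent of what I would configure:

set skip on nat46    # disable all pf filtering (and state tracking) on the Tayga tun interface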

As Monviech suggested, I may try relayd or Caddy as an alternative, but I'd love to understand why this is failing.

Thank you all !
#10
Hi,
Thanks for getting back to me! The user wants:

IPv4 clients must be able to reach a specific IPv6-only service on my internal network. This means the firewall needs to perform a NAT46 translation on inbound traffic. An external IPv4 client would send a request to a public IPv4 address on the box, and the firewall would translate that to the correct IPv6 address of my internal service.
Looking forward to guidance on making this work reliably and securely within the environment.

Thanks again!

Best,
VivekSP
#11
Hi,

I have a unique NAT46 requirement at one of my user sites. I'm aware that Linux-based tools like Jool or TAYGA support NAT46 via TUN/TAP interfaces or netfilter hooks. I'm hoping to either replicate that behavior with a native tool or find a more flexible workaround than an L7 reverse proxy. Any insights, documentation links, or experimental solutions would be very helpful; happy to test patches or contribute to development if needed.

Thanks in advance!

Best,
VivekSP
#12
Hi,

I'm working on a requirement to bring VDOM-like functionality (Virtual Domains), inspired by how Fortinet enables multiple fully isolated firewall instances (tenants) on a single hardware appliance. Has any similar approach been explored before?

Are there thoughts on integrating bhyve or external orchestration in a more native way? Looking forward to your input on how this could be achieved.

Best,
VivekSP
#13
Actually, my requirement is that I have to configure a public IP (112.xxx.xxx.37) and a private IP (192.168.xx.xx) on the same interface, with a NAT policy.
#14
Hi,

Thank you so much. Could you detail a bit more how you suggest configuring this: first create an IP alias, then add the alias IP on the loopback address, and then NAT? Or will this also work without NAT?
Looking forward to hearing from you.

Thanks!
#15
Hi Patrick,

Thanks for the clarification; I understand that from the traditional BSD networking model. However, I've heard certain vendors (like Fortinet and Juniper) allow you to treat secondary IPs almost as if they were separate interfaces, using them in different routing instances, policies, or even NAT/firewall contexts. They often abstract this at the OS or control-plane level to allow for that kind of flexibility.

Out of curiosity, does OPNsense offer any feature that might allow similar behavior? For example:

Creating a virtual interface or group that binds to a specific IP alias

Using aliases in policy-based routing or as part of a ruleset that treats them as more than just an additional address

Assigning a loopback or dummy interface with an alias and routing through that (see the sketch below)

Or is the OPNsense implementation (being FreeBSD-based) bound strictly to the traditional interface model with no way to "promote" an alias to interface-like behavior?
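
For the loopback idea, this is roughly what I picture at the FreeBSD level (a sketch only; 192.0.2.37 is a placeholder for the real alias IP):

ifconfig lo1 create                      # a dedicated cloned loopback device
ifconfig lo1 inet 192.0.2.37/32 alias    # carry the alias on lo1 instead of the shared NIC

The idea would then be to assign lo1 in OPNsense and reference it in rules and routes as if it were its own interface.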

Appreciate any insights from others who've tried something similar or worked around this in creative ways.

Best regards,
VivekSP