Zenarmor & IPv6: Bad Combo (At least on ATT Fiber/US)

Started by lrosenman, January 10, 2022, 03:43:07 AM

I finally got to the bottom of my IPv6 suddenly not working from my LAN. If I turn OFF the Zenarmor packet engine, IPv6 works as it should; if I turn ON the packet engine, IPv6 stops working.

I filed a bug report from the UI, but wanted to post here as well.

Other than turning off Zenarmor, how did you verify it's actually caused by that? Do your clients have a global IPv6? Which NIC? Did you try passive operation or native/emulated netmap?
I never had a problem with my IPv6 tests a few weeks back. Held off on using it until DHCPv6 in OPNsense gets fixed though.

I have global IPv6 addresses from ATT, and with Zenarmor on, I can't get past the OPNsense router. A ping gets nothing. Turn off Zenarmor and it works fine.

em NICs,
using netmap, AFAIK (Protectli FW6B hardware).


em0@pci0:1:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82583V Gigabit Network Connection'
    class      = network
    subclass   = ethernet
em1@pci0:2:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82583V Gigabit Network Connection'
    class      = network
    subclass   = ethernet
em2@pci0:3:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82583V Gigabit Network Connection'
    class      = network
    subclass   = ethernet
em3@pci0:4:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82583V Gigabit Network Connection'
    class      = network
    subclass   = ethernet
em4@pci0:5:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82583V Gigabit Network Connection'
    class      = network
    subclass   = ethernet
em5@pci0:6:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82583V Gigabit Network Connection'
    class      = network
    subclass   = ethernet



root@home-fw:~ # dmesg|grep -i netmap
000.000054 [4344] netmap_init               netmap: loaded module
em0: netmap queues/slots: TX 1/1024, RX 1/1024
em1: netmap queues/slots: TX 1/1024, RX 1/1024
em2: netmap queues/slots: TX 1/1024, RX 1/1024
em3: netmap queues/slots: TX 1/1024, RX 1/1024
em4: netmap queues/slots: TX 1/1024, RX 1/1024
em5: netmap queues/slots: TX 1/1024, RX 1/1024
root@home-fw:~ #
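For reference, whether netmap attached to the em NICs natively or in emulated mode can usually be checked through its sysctls. A minimal sketch, assuming a FreeBSD-based OPNsense install where the netmap module exposes its standard knobs:

```shell
# dev.netmap.admode: 0 = prefer native, 1 = force native, 2 = force emulated
sysctl dev.netmap.admode
# Dump all netmap tunables for a fuller picture
sysctl dev.netmap
```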


What do Zenarmor's logs show for that IP you try to ping? Your policy configuration might simply block it.

January 10, 2022, 04:49:05 PM #4 Last Edit: January 10, 2022, 05:28:21 PM by lrosenman
Nothing in the logs.



Hi,

Looking into via ticket. I will also update here.

Hi,

The reason for the loss of connectivity is that when the Zenarmor packet engine opens the interface in netmap mode, netmap re-initializes that interface, causing a DOWN/UP link event.
Seeing an interface DOWN/UP event, OPNsense fires IPv4/IPv6 address re-configuration. For IPv4 this takes milliseconds, but for IPv6, due to auto-configuration, WAN tracking, etc., the process might take about 15-60 seconds, during which you might lose WAN connectivity.
After this, everything should be back to normal.
We're aware of this issue; however, the solution involves working with third parties (the netmap team, the OPNsense team, etc.).
For the final solution, several options are on the table and we're working on them.
I hope this helps clarify the situation.
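If you want to confirm the DOWN/UP cycle and time the IPv6 recovery yourself, a rough sketch (the interface name and the ping target are assumptions, not from this thread):

```shell
# Check the link status right after the packet engine attaches (em1 = WAN here, an assumption)
ifconfig em1 | grep status

# Then poll until IPv6 works again; each dot is ~5 s of downtime
while ! ping6 -c 1 2001:4860:4860::8888 > /dev/null 2>&1; do
    printf '.'
    sleep 5
done
echo ' IPv6 back'
```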

As I said in my ticket reply, even after connectivity recovers, IPv6 packets don't make it past OPNsense with the packet engine running.

What else can I provide?

Quote from: sy on January 11, 2022, 06:57:04 PM
Hi,

The reason for the loss of connectivity is that when the Zenarmor packet engine opens the interface in netmap mode, netmap re-initializes that interface, causing a DOWN/UP link event.
Seeing an interface DOWN/UP event, OPNsense fires IPv4/IPv6 address re-configuration. For IPv4 this takes milliseconds, but for IPv6, due to auto-configuration, WAN tracking, etc., the process might take about 15-60 seconds, during which you might lose WAN connectivity.
After this, everything should be back to normal.
We're aware of this issue; however, the solution involves working with third parties (the netmap team, the OPNsense team, etc.).
For the final solution, several options are on the table and we're working on them.
I hope this helps clarify the situation.

Awesome, thanks for the insights!

That's quite true: Zenarmor chokes on IPv6 if you have hardware checksum offload and/or hardware TCP segmentation offload enabled.
OPNsense HW:

Minisforum Venus series UN100C, 16 GB RAM, 512 GB SSD
T-bao N9N Pro, 16 GB RAM, 512 GB SSD

Even with all the offload options turned off, I still can't get IPv6 packets to traverse the firewall.

Quote from: almodovaris on January 13, 2022, 01:54:36 PM
That's quite true, Zenarmor chokes IPv6 if you have hardware checksum offload and/or hardware TCP segmentation offload enabled.
Then turn off hardware offload. HW offload is only useful on a system where the TCP connection terminates, which is practically never the case for a firewall unless you explicitly use proxies. As long as you push packets and filter with pf, the endpoints of TCP connections are always systems on either side of the firewall.
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)
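For completeness, the offload settings mentioned above can be toggled from the shell as well as from the GUI. A sketch using standard FreeBSD ifconfig flags (em0 is an assumed interface name; on OPNsense the persistent settings live under Interfaces > Settings):

```shell
# Disable checksum, TCP segmentation, and large-receive offload
ifconfig em0 -txcsum -rxcsum -txcsum6 -rxcsum6 -tso4 -tso6 -lro
# Verify: TXCSUM/TSO/LRO should no longer appear in the "options" line
ifconfig em0 | grep options
```

Note that changes made with ifconfig alone don't survive a reboot; use the GUI checkboxes for a permanent change.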

As I said, even with the offload options OFF, IPv6 still doesn't traverse the firewall.