Messages - DGentry

#1
Setting the tunable net.pf.share_forward6 to 0 does seem effective: the system has been stable for 24+ hours, where previously it would have panicked repeatedly.
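
Setting it looks something like this (assuming net.pf.share_forward6 is an ordinary runtime sysctl; on OPNsense the persistent place for it would be System > Settings > Tunables, which on plain FreeBSD corresponds to a line in /etc/sysctl.conf):

# apply the workaround immediately (does not survive a reboot on its own)
sysctl net.pf.share_forward6=0

# persist it on a plain FreeBSD box; on OPNsense the GUI tunable handles this
echo 'net.pf.share_forward6=0' >> /etc/sysctl.conf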

I note console messages appearing now which I have not seen before, and which occur at roughly the frequency at which the system previously panicked:

cannot forward src fe80:1::201:5cff:fea2:8846, dst 2602:248:7b4a:ff60:54bb:8c4c:a0f7:dd1a, nxt 58, rcvif igb0, outif igb2
cannot forward src fe80:1::201:5cff:fea2:8846, dst 2602:248:7b4a:ff60:54bb:8c4c:a0f7:dd1a, nxt 58, rcvif igb0, outif igb2
cannot forward from fe80:4::28a1:40ff:fe31:e546 to fe80:4::5c91:f6ff:fedc:25b6 nxt 58 received on igb2
cannot forward from fe80:4::28a1:40ff:fe31:e546 to fe80:4::5c91:f6ff:fedc:25b6 nxt 58 received on igb2
cannot forward from fe80:4::bc21:c3ff:fea4:9bc8 to fe80:4::7c83:26ff:fe48:f5ba nxt 58 received on igb2
cannot forward from fe80:4::bc21:c3ff:fea4:9bc8 to fe80:4::7c83:26ff:fe48:f5ba nxt 58 received on igb2
cannot forward from fe80:4::bc21:c3ff:fea4:9bc8 to fe80:4::7c83:26ff:fe48:f5ba nxt 58 received on igb2
cannot forward from fe80:4::682b:b5ff:fedb:5a10 to fe80:4::409f:1fff:fe95:c6d1 nxt 58 received on igb2
cannot forward from fe80:4::682b:b5ff:fedb:5a10 to fe80:4::409f:1fff:fe95:c6d1 nxt 58 received on igb2
cannot forward src fe80:1::201:5cff:fea2:8846, dst 2602:248:7b4a:ff60:54bb:8c4c:a0f7:dd1a, nxt 58, rcvif igb0, outif igb2
cannot forward src fe80:1::201:5cff:fea2:8846, dst 2602:248:7b4a:ff60:54bb:8c4c:a0f7:dd1a, nxt 58, rcvif igb0, outif igb2
cannot forward from fe80:4::682b:b5ff:fedb:5a10 to fe80:4::409f:1fff:fe95:c6d1 nxt 58 received on igb2
cannot forward from fe80:4::682b:b5ff:fedb:5a10 to fe80:4::409f:1fff:fe95:c6d1 nxt 58 received on igb2
cannot forward from fe80:4::682b:b5ff:fedb:5a10 to fe80:4::409f:1fff:fe95:c6d1 nxt 58 received on igb2

igb0 is one of the WAN interfaces; igb2 is the LAN.

2602:248:7b4a:ff60:: is the prefix I use within the home, which comes from the *other* WAN interface, igb1 (sonic.net). I use PAT on igb0 (Comcast) to rewrite 2602:248:7b4a:ff60:: on outgoing packets to the Comcast-supplied IPv6 prefix. At least, I think I do: it seems unexpected that packets are arriving from Comcast destined for 2602:248:7b4a:ff60::, especially with a link-local source address.
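
If that translation is in place it should show up in the loaded pf translation rules; something like the following is what I would expect to see (the exact rule form is a guess at how the prefix-translation setting gets rendered, and the Comcast-side prefix is omitted here):

# list the translation (nat/rdr/binat) rules pf currently has loaded
pfctl -s nat

# expected to include a binat line roughly like this (illustrative only;
# <comcast-prefix> stands in for the delegated prefix on igb0):
#   binat on igb0 inet6 from 2602:248:7b4a:ff60::/64 to any -> <comcast-prefix>/64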


igb0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
   description: WAN
   options=4800028<VLAN_MTU,JUMBO_MTU,NOMAP>
   ether __:__:__:__:__:__
   inet 24.4.201.__ netmask 0xfffffe00 broadcast 255.255.255.255
   inet6 fe80::a236:9fff:fe59:19b0%igb0 prefixlen 64 scopeid 0x1
   inet6 2001:558:6045:5c:54bb:8c4c:a0f7:dd1a prefixlen 128
   groups: AllWAN
   media: Ethernet autoselect (1000baseT <full-duplex>)
   status: active
   nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
igb1: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
   description: WAN2
   options=4800028<VLAN_MTU,JUMBO_MTU,NOMAP>
   ether __:__:__:__:__:__
   inet 135.180.175.__ netmask 0xfffffc00 broadcast 135.180.175.255
   groups: AllWAN
   media: Ethernet autoselect (1000baseT <full-duplex>)
   status: active
   nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
igb2: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
   description: LAN
   options=4800028<VLAN_MTU,JUMBO_MTU,NOMAP>
   ether __:__:__:__:__:__
   inet 10.1.10.1 netmask 0xffffff00 broadcast 10.1.10.255
   inet6 fe80::a236:9fff:fe59:19b2%igb2 prefixlen 64 scopeid 0x4
   inet6 2602:248:7b4a:ff60::1 prefixlen 64
   media: Ethernet autoselect (1000baseT <full-duplex>)
   status: active
   nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
opt1_stf: flags=4041<UP,RUNNING,LINK2> metric 0 mtu 1280
   inet6 2602:248:7b4a:ff60:: prefixlen 28
   groups: stf
   v4net 135.180.175.__/0 -> tv4br 184.23.144.1
   nd6 options=103<PERFORMNUD,ACCEPT_RTADV,NO_DAD>
#2
I believe I can reproduce this as well, with 22.1.8. I haven't noticed a particular pattern, but it happens at least once per hour after updating to 22.1.x.

In case it matters, my deployment has two WAN interfaces and uses IPv6 Prefix Translation to map the LAN prefix to whichever WAN is active.


Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x10
fault code      = supervisor read data, page not present
instruction pointer   = 0x20:0xffffffff80eb0b9d
stack pointer           = 0x28:0xfffffe00085a94b0
frame pointer           = 0x28:0xfffffe00085a95d0
code segment      = base 0x0, limit 0xfffff, type 0x1b
         = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags   = interrupt enabled, resume, IOPL = 0
current process      = 0 (if_io_tqg_1)
trap number      = 12
panic: page fault
cpuid = 1
time = 1653915865
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00085a9270
vpanic() at vpanic+0x17f/frame 0xfffffe00085a92c0
panic() at panic+0x43/frame 0xfffffe00085a9320
trap_fatal() at trap_fatal+0x385/frame 0xfffffe00085a9380
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe00085a93e0
calltrap() at calltrap+0x8/frame 0xfffffe00085a93e0
--- trap 0xc, rip = 0xffffffff80eb0b9d, rsp = 0xfffffe00085a94b0, rbp = 0xfffffe00085a95d0 ---
ip6_forward() at ip6_forward+0x62d/frame 0xfffffe00085a95d0
pf_refragment6() at pf_refragment6+0x164/frame 0xfffffe00085a9620
pf_test6() at pf_test6+0xfdb/frame 0xfffffe00085a9790
pf_check6_out() at pf_check6_out+0x40/frame 0xfffffe00085a97c0
pfil_run_hooks() at pfil_run_hooks+0x97/frame 0xfffffe00085a9800
ip6_tryforward() at ip6_tryforward+0x2ce/frame 0xfffffe00085a9880
ip6_input() at ip6_input+0x60f/frame 0xfffffe00085a9960
netisr_dispatch_src() at netisr_dispatch_src+0xb9/frame 0xfffffe00085a99b0
ether_demux() at ether_demux+0x138/frame 0xfffffe00085a99e0
ng_ether_rcv_upper() at ng_ether_rcv_upper+0x88/frame 0xfffffe00085a9a00
ng_apply_item() at ng_apply_item+0x2bd/frame 0xfffffe00085a9aa0
ng_snd_item() at ng_snd_item+0x28e/frame 0xfffffe00085a9ae0
ng_apply_item() at ng_apply_item+0x2bd/frame 0xfffffe00085a9b80
ng_snd_item() at ng_snd_item+0x28e/frame 0xfffffe00085a9bc0
ng_ether_input() at ng_ether_input+0x4c/frame 0xfffffe00085a9bf0
ether_nh_input() at ether_nh_input+0x1f1/frame 0xfffffe00085a9c50
netisr_dispatch_src() at netisr_dispatch_src+0xb9/frame 0xfffffe00085a9ca0
ether_input() at ether_input+0x69/frame 0xfffffe00085a9d00
iflib_rxeof() at iflib_rxeof+0xc27/frame 0xfffffe00085a9e00
_task_fn_rx() at _task_fn_rx+0x72/frame 0xfffffe00085a9e40
gtaskqueue_run_locked() at gtaskqueue_run_locked+0x15d/frame 0xfffffe00085a9ec0
gtaskqueue_thread_loop() at gtaskqueue_thread_loop+0xc2/frame 0xfffffe00085a9ef0
fork_exit() at fork_exit+0x7e/frame 0xfffffe00085a9f30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00085a9f30
--- trap 0, rip = 0xffffffff80c2b91f, rsp = 0, rbp = 0x3 ---
mi_startup() at mi_startup+0xdf/frame 0x3
KDB: enter: panic