Messages - iqt4

#1
Although the thread is quite old, here is what worked for me: I only activated multiqueue on the host side - that's it.
On Proxmox you can set this via the GUI (Network Device -> Advanced -> Multiqueue); a CLI sketch follows after the output below. The result with 4 queues configured on the Proxmox host, as seen in the OPNsense guest:

# dmesg
vtnet2: <VirtIO Networking Adapter> on virtio_pci5
vtnet2: Ethernet address: 02:00:00:02:01:02
vtnet2: netmap queues/slots: TX 4/256, RX 4/512

# sysctl
dev.vtnet.2.txq3.opackets: 3203078
dev.vtnet.2.rxq3.ipackets: 2405300
dev.vtnet.2.txq2.opackets: 3550761
dev.vtnet.2.rxq2.ipackets: 2472518
dev.vtnet.2.txq1.opackets: 53252329
dev.vtnet.2.rxq1.ipackets: 23579916
dev.vtnet.2.txq0.opackets: 34315060
dev.vtnet.2.rxq0.ipackets: 16892481
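
For completeness, the same setting can be applied from the Proxmox CLI instead of the GUI. A rough sketch - the VM ID (100), the bridge (vmbr0), the MAC and the device index (net2) are just placeholders from my setup, so adapt them to your own VM config:

# qm set 100 --net2 virtio=02:00:00:02:01:02,bridge=vmbr0,queues=4

The NIC is re-created with the new queue count, so the guest usually needs a restart before the extra queues show up in dmesg as above.
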
#2
22.7 Legacy Series / Re: Dual WAN Failover stuck
December 23, 2022, 11:41:57 AM
Dear all,

sorry for hijacking the thread, but I have exactly the same problem (OPNsense 22.7.10_2-amd64).

I tried to simulate the situation in GNS3 but could not reproduce the issue: failover and recovery both worked. The network traffic was rather low - just a few pings.

Then I set up a new (virtual) environment, connected a few clients, and the issue reappeared: failover works as designed, but recovery does not.

Here is my analysis from last night. The trigger appears to be /usr/local/etc/rc.syshook.d/monitor/10-dpinger, which logs the alarm and then kicks off a filter reload:

/usr/bin/logger -t dpinger "GATEWAY ALARM: ${GATEWAY} (Addr: ${2} Alarm: ${3} RTT: ${4}us RTTd: ${5}us Loss: ${6}%)"

echo -n "Reloading filter: "
/usr/local/bin/flock -n -E 0 -o /tmp/filter_reload_gateway.lock configctl filter reload skip_alias
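
As a side note, here is a quick standalone illustration of what the -n and -E 0 flags on that flock call do (hypothetical lock file, purely for illustration - I am not claiming this is the cause): if the lock is already held, the wrapped command is silently skipped and the exit code is still 0.

/usr/local/bin/flock -n -E 0 /tmp/demo.lock sleep 30 &    # first caller holds the lock for 30 s
sleep 1                                                   # give the background job time to grab it
/usr/local/bin/flock -n -E 0 /tmp/demo.lock echo "ran"    # -n: do not wait for the lock, so "ran" never appears
echo $?                                                   # prints 0 because of -E 0, despite the skipped run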


Gateway log:
<12>1 2022-12-22T22:55:47+01:00 OPNsense.localdomain dpinger 46446 - [meta sequenceId="1"] WAN_GWv4_1 37.209.40.1: Alarm latency 502128us stddev 312011us loss 0%
<13>1 2022-12-22T22:55:47+01:00 OPNsense.localdomain dpinger 14062 - [meta sequenceId="2"] GATEWAY ALARM: WAN_GWv4_1 (Addr: 37.209.40.1 Alarm: 1 RTT: 502128us RTTd: 312011us Loss: 0%)
<12>1 2022-12-22T22:56:11+01:00 OPNsense.localdomain dpinger 46446 - [meta sequenceId="3"] WAN_GWv4_1 37.209.40.1: Clear latency 411311us stddev 298042us loss 1%
<13>1 2022-12-22T22:56:11+01:00 OPNsense.localdomain dpinger 36717 - [meta sequenceId="4"] GATEWAY ALARM: WAN_GWv4_1 (Addr: 37.209.40.1 Alarm: 0 RTT: 411311us RTTd: 298042us Loss: 1%)


Config daemon log:
<13>1 2022-12-22T22:55:48+01:00 OPNsense.localdomain configd.py 196 - [meta sequenceId="1"] [de0c153b-b628-488d-9aca-6dbc676535d1] Reloading filter
<13>1 2022-12-22T22:55:48+01:00 OPNsense.localdomain configd.py 196 - [meta sequenceId="2"] [12ff4b1c-04a6-46ac-afda-1c78ac9be651] request pf current overall table record count and table-entries limit
<13>1 2022-12-22T22:56:11+01:00 OPNsense.localdomain configd.py 196 - [meta sequenceId="3"] [286febe5-4a77-4691-96b1-2e4c32f6d2d4] Reloading filter
<13>1 2022-12-22T22:56:12+01:00 OPNsense.localdomain configd.py 196 - [meta sequenceId="4"] [da34e743-0c02-45b8-bfe8-e03091f0cd9d] request pf current overall table record count and table-entries limit


According to the logs, the relevant commands did run. However, the pf rules did not change (192.168.0.2 is the failover GW):
pass in quick on vtnet0 route-to (vtnet1 192.168.0.2) inet from any to ! <private> flags S/SA keep state label "f5a781eeb65a44a79c529c6d7ba4cbb6"
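
For anyone who wants to check this on their own box, the loaded ruleset can be listed with pfctl; the grep pattern here is just an example:

pfctl -sr | grep route-to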

After triggering a filter reload manually, the gateway in that rule changes from 192.168.0.2 back to the primary GW 192.168.0.1 as expected:
pass in quick on vtnet0 route-to (vtnet1 192.168.0.1) inet from any to ! <private> flags S/SA keep state label "f5a781eeb65a44a79c529c6d7ba4cbb6"
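
(By "manually" I mean running roughly the same call the hook script makes, without the lock wrapper, and then re-checking the rule with pfctl as above:)

configctl filter reload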

Any other ideas on how to analyse this?

Best,
Dirk