Messages - ctr

#1
Have you checked your outbound connections while you observe the issue?
This symptom sounds familiar; it resembles the problem described here: https://forum.opnsense.org/index.php?topic=31431.0
#2
After upgrading to 22.7.9_3 I observe massive storms of retransmissions for IPv6 traffic like this:

16:44:53.107317 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107320 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107322 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107325 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107327 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107330 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107333 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107335 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107338 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107340 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107343 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107346 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107348 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107351 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107353 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107356 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107358 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107361 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107364 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107366 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107369 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107372 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432
16:44:53.107374 IP6 2a01:dead::beef.22483 > 2600:1901:1:c36::.443: Flags [.], seq 0:1432, ack 1, win 517, options [nop,nop,TS val 1734819756 ecr 4230049189], length 1432


This is going on hundreds of times per second, effectively creating a DoS for other traffic (I can't tell whether due to bandwidth or firewall resource exhaustion). The traffic in question seems to be generated by Squid; I can't observe the retransmissions on an internal leg of the firewall. The destinations belong to legitimate communication with several endpoints (Apple, Microsoft) and the traffic originates from several distinct client types (Linux workstations, iOS smartphones).

The state tables also look weird for those, with ESTABLISHED:FIN_WAIT_2 states for the destinations in question.
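For reference, this is roughly how I'm looking at those states (the grep pattern is just an example destination prefix):

# list the PF state table and filter for one of the affected destinations
pfctl -ss | grep 2600:1901
# show overall PF status, including the number of state table entries
pfctl -si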

This started after upgrading to 22.7.9_3, coming from 22.7.8.

Disabling Suricata, resetting states, rebooting and even switching the underlying VM host (this is OPNsense running as a KVM VM) did not help.
#3
I'm also running into this issue. Should be fixed in ports now - hope to see it in OPNsense soon:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260331
#4
I'm having the same problem and it started to manifest when switching from userspace to in-kernel WG.

My assumption is that, since both WG and IPsec now live in the kernel, packets don't reach PF/routing/NAT, so the source IP cannot be mangled. I'll try to switch my IPsec tunnel to routed mode to see if that makes a difference.
#5
21.7 Legacy Series / Re: nat64clat anyone?
December 17, 2021, 02:17:54 PM
Some more info: with direct_output=1 I can't see the outbound sessions in PF anymore, and as a result the return packets are rejected on the outside interface. With direct_output=0 (which is what I want and is the basis for the first post) I can see the IPv6 session in the PF table and see no rejects for the return traffic, but the return traffic is still not coming through.
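In case someone wants to reproduce this: the knob I'm toggling is this sysctl (assuming the stock FreeBSD ipfw NAT64 name):

sysctl net.inet.ip.fw.nat64_direct_output=0   # 0 = hand translated packets back to the IP stack, so PF sees them
sysctl net.inet.ip.fw.nat64_direct_output=1   # 1 = send them straight to the output interface, bypassing PF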
#6
21.7 Legacy Series / nat64clat anyone?
December 16, 2021, 01:44:40 PM
Has anyone played around with nat64clat (for 464XLAT) yet?
I know there is tayga, but that targets a different use case, runs in userspace and is not ideally suited as a simple CLAT.

The ipfw-integrated nat64clat, however, can simply prepend an IPv6 prefix (the default or a custom one) and perform stateless in-kernel NAT (provided by ipfw_nat64).

I used what I could find at https://forums.freebsd.org/threads/nat64-464xlat.73741/ and, apart from some prefix issues (the command doesn't really like every /96 prefix for some reason), it seems to work out of the box to a certain degree. I see CLATed traffic leaving the outbound interface and I can also see it on my (own) PLAT. There the IPv4 traffic leaves and receives a response, which results in the return packet being sent to my IPv6 address. However, I then receive an ICMP unreachable from the OPNsense outside interface. There is no deny/reject for this in the log and I can see the outbound session in the session table, which leads me to assume that the return traffic should match the existing sessions and be allowed as a result.
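For reference, this is roughly the setup I'm testing, loosely following the examples in ipfw(8); the prefixes and the IPv4 source network are placeholders:

kldload ipfw_nat64
# create a CLAT instance with a (placeholder) CLAT source prefix and the well-known PLAT prefix
ipfw nat64clat CLAT create clat_prefix 2001:db8:aaaa::/96 plat_prefix 64:ff9b::/96
# translate outgoing IPv4 from the LAN and the returning IPv6 traffic
ipfw add 100 nat64clat CLAT ip4 from 192.0.2.0/24 to any out
ipfw add 110 nat64clat CLAT ip6 from 64:ff9b::/96 to 2001:db8:aaaa::/96 in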

Any thoughts or suggestions?
#7
21.7 Legacy Series / add tun interface
December 11, 2021, 03:47:30 AM
It looks like in the past it was possible to bring up an arbitrary interface (e.g. tun) and it would then be configurable in the GUI (see https://forum.opnsense.org/index.php?topic=3168.msg10301#msg10301 ), but this doesn't seem to work anymore. I'm working with a tun interface that works perfectly fine on the BSD layer, but for some functionality I need to add it to the OPNsense configuration as well.

Any suggestions how to achieve this?
#8
I went ahead with the naive approach and it worked. Not sure if OPNsense would have renamed the interface itself, but since no one responded here I don't think so.

Still having problems with CARP and SR-IOV, though: it works on one node but not on the other. Could be a configuration problem. Will open another thread if the problem persists.

(A random MAC address is another issue: you have to assign a static MAC on the host node and also enable trust mode and disable spoof checking.)
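For completeness, these are the host-side settings I mean (a sketch; the PF device name and VF index are placeholders, standard iproute2 on the Proxmox/KVM host):

# on the hypervisor: pin a static MAC for the VF, trust it and disable spoof checking
ip link set dev enp1s0f0 vf 0 mac 02:00:00:00:00:01
ip link set dev enp1s0f0 vf 0 trust on
ip link set dev enp1s0f0 vf 0 spoofchk off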
#9
I'm running OPNsense 20.1 on a Proxmox cluster with NIC-passthrough and was always having issues due to the old ixlv driver in FreeBSD 11. With a test instance I could confirm the issues don't exist anymore (see https://forum.proxmox.com/threads/issues-with-sriov-based-nic-passthrough-to-firewall.66392/#post-307232 ).

Now I'm eagerly waiting for 20.7 to upgrade my prod instance and would like to double-check at which step of the upgrade (and how) I should update all interface names (all ixlv interfaces will be renamed to iavf!).

My naive approach would be:
- take a backup
- manually edit /conf/config.xml, replacing all occurrences of ixlv with iavf (see the sketch below)
- apply upgrade/reboot
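For the second step I'd use something along these lines (a sketch; run it against a copy first):

# keep a copy of the config, then replace all ixlv occurrences with iavf (FreeBSD sed syntax)
cp /conf/config.xml /conf/config.xml.pre-iavf
sed -i '' -e 's/ixlv/iavf/g' /conf/config.xml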


Any recommendations / suggestions?
#10
If you put those statements into an include dir, they also survive a reconfiguration in the GUI.
I'm using /usr/local/etc/squid/pre-auth/39-ipv6-bind.conf
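For reference, the kind of statements I mean in that file (a sketch; the source address is a placeholder, the acl and tcp_outgoing_address directives are standard Squid):

# /usr/local/etc/squid/pre-auth/39-ipv6-bind.conf
# bind outgoing IPv6 connections to a fixed source address
acl to_ipv6 dst ipv6
tcp_outgoing_address 2001:db8::53 to_ipv6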
#11
I'm having some problems implementing PBR for traffic originating from OPNsense itself.
My goal is to build two VPN tunnels (WireGuard) via two different links, but to the same destination IP. The criterion for deciding which path to choose shall be the (source or destination) port.
It already starts strangely: if I create two gateways (one for each path) and a static (host) route on each gateway, only one is inserted into the kernel, and strangely it is the one on the gateway with the *higher* priority, although the help text reads "lower means more important". I assume this is only the case for the default gateway, but how can I set the metric then?

When trying to divert traffic to a specific port (again, originating on the firewall itself) I can't find a working combination. In which firewall/NAT rule am I supposed to catch traffic that originates from the FW?
If I put it on the interface where it would leave as per the route, the policy routing works, but the traffic then goes out with the wrong source IP...
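To illustrate, this is the kind of rule I'm experimenting with (a pf.conf-style sketch; interfaces, gateway, peer and port are placeholders, and this is not a working solution):

# policy-route firewall-originated WireGuard traffic for one port out the second WAN
pass out quick on $wan1_if route-to ($wan2_if $wan2_gw) proto udp from self to $peer_ip port 51820 keep state
# without a matching outbound NAT on $wan2_if the packet keeps the wrong source IP
nat on $wan2_if proto udp from self to $peer_ip port 51820 -> ($wan2_if)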
#12
After upgrading to 20.1.2 I get a crash when I save the config (so far only tried in the WireGuard section), which results in a reboot, a corrupted filesystem and either an empty or garbage /conf/config.xml.

So far I could only recover by booting into single-user mode, running fsck, restoring from an older config and booting again.
Initially I suspected the new kernel, but this is not the case (I just happened to boot off a config.xml which wasn't broken when I tried kernel.old). So my prime candidates would be the web UI writing the config (it kernel panics immediately when I hit "save") or something in the backend reading it again / processing config templates...
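For anyone hitting the same issue, roughly the recovery steps I used (a sketch; the backup file name is just an example from /conf/backup/):

# from the single-user shell: check the filesystem, remount read-write, restore a known-good config
fsck -y
mount -u /
mount -a
cp /conf/backup/config-1583020800.xml /conf/config.xml
reboot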
#13
Well, for me there is a crash on every bootup when using the new kernel. Going back to kernel.old fixes the problem.
Also sent a crash report about this a few minutes ago.

Apparently *my* problem is not related to the kernel (but something overwriting the config.xml with binary data), so I'll drop out of this thread.
#14
Yes, I already set it to 10 because the error is so seldom that I may get away with it. However, at the same time I was wondering what the root cause is (as using tunables is only a last resort).
#15
I'm using an OPNsense 19.7 HA cluster on two identical ESXi systems. Everything generally runs fine (after the initial virtual switch configuration). However, after a few weeks I randomly get "kernel: carp: demoted by 240 to 240 (send error 55 on vmx0_vlan41)".

I found two references, but neither has a resolution:
https://forum.netgate.com/topic/78354/send-error-55-with-vtnet
https://forum.opnsense.org/index.php?topic=5476.0

Does "send error 55" really point to "No buffer space available" (as defined in sys/errno.h) and what could it mean in an opnsense installation?