Messages - Crazyachmed

#1
Looking into this further, I saw a case where it took 8:40h (!) for the client to close the connection after the server had closed it.

I experimented with setting tcp.closing to 3600s (originally 900s), which of course makes some of the connections work. But setting a very long timeout does not help, because by then the server has already closed its socket and only sends RSTs back.

Currently there is also no method of permanently modifying any session timeouts outside of the provided templates. For completeness' sake, this is the very temporary workaround:


  • Add "set timeout tcp.closing 3600" to /tmp/rules.debug
  • Apply using "pfctl -f /tmp/rules.debug"

A permanent option could only be implemented via a feature request, if anyone has a need for something like this.
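
To make the workaround concrete, here is a minimal sketch from a root shell, assuming a stock install (the grep filter and the placement note are mine):

# Show the active state timeouts; tcp.closing defaults to 900 seconds:
pfctl -s timeouts | grep tcp.closing

# Edit /tmp/rules.debug and put "set timeout tcp.closing 3600" next to
# the other "set" options near the top (pf expects options before rules).

# Reload the ruleset and confirm the new value took effect:
pfctl -f /tmp/rules.debug
pfctl -s timeouts | grep tcp.closing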
#3
Quote from: mimugmail on October 01, 2020, 07:37:28 PM
Haven't followed the thread, but async routing usually comes in with interfaces that have an upstream gateway set, and also a second device in this network

I only have the one gateway configured. A few client networks and one WAN-network.

Also, looking at a trace taken on the firewall itself, I do see all packets up until the client packet that gets dropped.
#4
Quote from: esquagga on September 30, 2020, 11:40:27 PM
I haven't sniffed the traffic yet, but I'm wondering if this is the same thing I am seeing.

Can you please add the following three floating rules at the bottom of the floating rules list, in this order?

  • Action: Block
  • Quick: Disabled
  • TCP/IP Version: IPv4+IPv6
  • Protocol: TCP
  • Log: Enable
  • Description: Stale TCP
  • Advanced, TCP flags, ACK: Both checkboxes


  • Action: Block
  • Quick: Disabled
  • TCP/IP Version: IPv4+IPv6
  • Protocol: TCP
  • Log: Enable
  • Description: Stale TCP FIN
  • Advanced, TCP flags, FIN: Both checkboxes


  • Action: Block
  • Quick: Disabled
  • TCP/IP Version: IPv4+IPv6
  • Protocol: TCP
  • Log: Enable
  • Description: Stale TCP RST
  • Advanced, TCP flags, RST: Both checkboxes

This way you can easily see what kind of packets hit the "Default deny rule" without opening each log entry's details. In my case I see a lot of hits for FIN.
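
For reference, the three rules above should compile to roughly the following pf syntax (a sketch only; the GUI generates the actual ruleset, and the labels simply reuse the descriptions):

# "Both checkboxes" for a flag means "match if <flag> is set, checking
# only <flag>", i.e. flags X/X in pf notation:
block log proto tcp flags A/A label "Stale TCP"
block log proto tcp flags F/F label "Stale TCP FIN"
block log proto tcp flags R/R label "Stale TCP RST"

Since the rules are not quick, pf's last-match semantics apply, so e.g. a FIN/ACK packet is counted by the FIN rule rather than the plain ACK rule.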

On another note, I completely disabled all HW offloading on the VM host using this blunt instrument:

#!/bin/bash

# Offload features to turn off on every interface.
OFFOPTS="rx tx tso ufo gso gro lro rxvlan txvlan rxhash ntuple"
# All interface names, taken from the "eth0: flags=..." lines of ifconfig.
INTF=$(/sbin/ifconfig | /bin/egrep '^\S' | /usr/bin/cut -d':' -f 1 | /usr/bin/tr '\n' ' ')

# Disable every offload option on the given interface, ignoring options
# the driver does not support.
function disoff {
  for OPTION in $OFFOPTS; do
    /sbin/ethtool --offload "$1" "$OPTION" off &>/dev/null || true
  done
}

for CUR in $INTF; do
  disoff "$CUR"
done


Sadly, there was no improvement.
#5
20.7 Legacy Series / Trouble with half-closed connections
September 28, 2020, 06:07:43 PM
I investigated an unusually high number of packets hitting my default deny rule. For the most part they had the TCP FPA (FIN/PSH/ACK) flags set. In a trace taken on the box hosting the firewall VM I looked at some affected connections, and the most unusual thing about them is that the client still sends traffic into a half-closed connection. Two examples:

1) Server sends FIN
2) Client ACKs the FIN
3) Client sends additional data -> Packet is blocked
4) Client sends FIN -> Packet is blocked
5) Client retransmits FIN a few times -> Packets are blocked

Second case:
1) Server sends FIN
2) Client ACKs packet before FIN
3) Server retransmits FIN
4) Client ACKs the FIN
5) Client sends FIN -> Packet is blocked
6) Client retransmits FIN a few times -> Packets are blocked

Both clients are Android devices talking to the cloud. One connection is IPv4, the other is IPv6. How can I troubleshoot this further? Why don't the packets match the stateful return path?
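
One way to dig further would be to watch the pf state entry for an affected connection while the teardown happens (192.0.2.10 is a placeholder, not one of my actual clients):

# List states involving the affected client:
pfctl -ss | grep 192.0.2.10
# While half-closed, the TCP state pair shows up as something like
# FIN_WAIT_2:ESTABLISHED; once the tcp.closing timeout expires the
# state is gone and any late client packets hit the default deny rule.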
#6
OK, so I managed to fix my issue. I needed to add a Client Specific Override and set the "iroute" option to point to the remote LAN.

OpenVPN only reveals this issue at very high verbosity levels. Basically, the kernel route is installed by the Remote LAN option under the server tab, but OpenVPN also needs an internal route to know which client owns that network.
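
As a sketch of the two pieces involved, with 192.168.100.0/24 standing in for the real remote LAN:

# Server tab, Remote LAN field -> generates the kernel route on the
# OPNsense side:
route 192.168.100.0 255.255.255.0

# Client Specific Override for the client's certificate CN -> gives
# OpenVPN the internal route so it knows which client owns that LAN:
iroute 192.168.100.0 255.255.255.0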
#7
Hi all!

I set up an OpenVPN tunnel from my OPNsense to a Raspberry Pi located in a remote network for a site-to-site setup. The tunnel itself is established successfully, and the correct routes are installed on both sides.

What works:
- Ping from both sides inside the VPN tunnel network
- Ping from the remote side to *any* of the firewall interfaces
- Ping from local LAN box to remote VPN tunnel address

What does not work:
- Ping from the firewall (or any local LAN box) to the remote LAN address of the Raspberry Pi
- everything else ;)

At first I thought this was due to ip_forward issues on the Raspberry Pi, but a trace shows that no VPN data packets egress from the firewall when pinging the remote LAN (they do when pinging the remote tunnel address), although the packets are visible when tracing the tun interface on the firewall.
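
For illustration, the two capture points can be compared with something like this (interface names and the OpenVPN port are examples, not my exact values):

# Ping to the remote LAN is visible on the tunnel interface...
tcpdump -ni tun0 icmp
# ...but no corresponding encrypted packets leave the WAN interface:
tcpdump -ni em0 'udp port 1194'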

Since pinging from the remote side to all firewall interfaces works, I suspect something in the rules is wrong (the remote ping presumably only works because the return path matches an established firewall state). I have enabled logging for all the default rules and for all IPv4 deny rules on the Floating, LAN and OpenVPN tabs. There is nothing in the logs, and I can even see the pass state created when pinging from the firewall itself.

I have noticed some oddities:

- Matching on "OpenVPN Network" in rules does not work; I've used an alias for now. Is this the expected behavior?
- Successful matching on the VPN interface IPs when using "This Firewall" seems to depend on some divine interference?
- I cannot edit my other OpenVPN connections, because it fails with "local port in use". I use two servers with the same port; one is IPv4, the other IPv6.

Does anyone have an idea why my box doesn't send the packets out to the remote LAN? Can I, or should I, assign the tunnel interfaces under Interfaces -> Assignments? This would allow me to dumb down my ruleset.

Cheers,
Flo