Messages - jahlives

#1
Virtual private networks / Re: Weird Wireguard issue
August 01, 2025, 10:48:03 AM
Mystery solved, thanks to Cedrik from OPNsense support :-)

We still had an active IPsec configuration from the very beginning. As it never worked with this remote, we switched to WireGuard but forgot about the IPsec config. Since IPsec phase 2 was never established, there was no route or interface visible, but it seems the kernel already "stole" the packets based on phase 1 and then just dropped them. The drop did not show up in any logfile or packet capture.

So in case of such "weird" issues: make sure you check the IPsec settings in the GUI, as the console shows no routes or interfaces for IPsec when phase 2 is not established. The only trace of this on the CLI was the output of
swanctl --list-sas
which finally put us on the right track.
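
As a hedged addition (not part of the original troubleshooting): strongSwan's swanctl can also list the loaded connection definitions, which would reveal a forgotten IPsec config even when no SA is active:

swanctl --list-sas    # active IKE/child SAs; a lingering phase 1 can show up here
swanctl --list-conns  # loaded connection definitions, including forgotten ones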
#2
About performance: it depends on a lot of factors, and very much on what and how you test. Generally WireGuard is way faster than OpenVPN and in many cases also faster than IPsec. For reliable testing you should use a tool like iperf(3) on both client and server, and always run the same test over a non-WireGuard connection to compare. On OPNsense iperf can be installed as well (from plugins/packages). It can be a good idea to play with the iperf parameters (e.g. parallel connections).
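
For illustration, a minimal iperf3 run could look like this (assuming iperf3 is installed on both ends; <server-ip> is a placeholder):

iperf3 -s                          # on the server side
iperf3 -c <server-ip> -P 4 -t 30   # on the client: 4 parallel streams, 30 seconds

Run the exact same client invocation once through the tunnel and once outside it to get comparable numbers.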
Quote... or at least seeing any CPU running hot due to the cryptography of the tunnel.
WireGuard is quite efficient in CPU usage, so even if you hit the maximum of the tunnel it does not necessarily mean that your CPUs are running at 100%. I have not found many tests with 10 Gb cards, but this Reddit thread has some numbers: https://www.reddit.com/r/linux/comments/9bnowo/wireguard_benchmark_between_two_servers_with_10/ (keep in mind that they used a huge MTU of 8.5k to achieve that speed). There are also some performance figures here, although only with a 1 Gb card: https://www.netgate.com/blog/wireguard-in-pfsense-2-5-performance
#3
QuoteI have captured logs and screenshots, but in short, after making the connection to the VPN using my Android phone...
what does the peer status page in OPNsense say? Device connected with a current handshake, or not?
QuoteI cannot ping any resources on the desired LAN I have made a VPN connection to.
if the above shows connected, have you added a firewall rule (on the WireGuard interface) to allow the traffic from your Android device to your LAN? And just to be sure: are you running your tests while the Android is connected to the WLAN (i.e. the same subnet as the OPNsense)? If yes, better try from a remote WLAN or via the mobile connection.
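
By the way, the same status is visible on the console; a hedged example, assuming the wg tool is available on the shell and the instance is named wg0:

wg show wg0

which prints the latest handshake and the transfer counters per peer.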
#4
QuoteI don't think so and am suggesting to look elsewhere.
that was the AI's conclusion, not mine ;-)

But actually I have no idea what else to check or test. It's not my first WireGuard setup, and until now I never had issues like this; usually it was a missing allowed_ip that interfered with routing. But in this case everything looks fine to me, setup/configuration-wise I mean. The fact that I never see the packets on the wg1 interface first led me to assume a routing issue, but as I cannot see the packets leaving on ANY interface, I think it's not routing related. Then it could be the firewall rules, but as even pfctl -d brings no success, I think the firewall and rules are off the table. The fact that the firewall itself can reach the remote also speaks against a routing issue. So the only thing I can think of is the different handling of outgoing traffic (when the firewall itself pings the remote) compared to forwarded traffic (when a client behind the firewall pings the remote). And as said: forwarded traffic is not an issue for other remote destinations via the same WG instance, it's just this particular remote subnet.

To my knowledge (which may be a bit limited for FreeBSD ;-)) the processing of traffic is kernel stuff, no? If it works for outbound packets but not for forwarded packets, where else could I look? I will happily check anything that is suggested here.
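
One more test that may help separate routing from source-address handling (a sketch using the addresses from this thread): FreeBSD's ping accepts -S to force the source address, so pinging the remote while sourcing from the LAN IP emulates the forwarded path more closely than a plain ping from the firewall:

ping -S 10.20.60.2 10.3.0.5   # source from the LAN address instead of the tunnel address

If this fails while a plain ping works, the problem is tied to the source address rather than to the route itself.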

Just saw that I forgot to mention the OPNsense version: we use OPNsense 25.1.10-amd64 on both sides of the tunnel.

Cheers

tobi
#5
I let AI summarize my tests from yesterday :-)

OPNsense WireGuard Forwarding Issue Summary

Problem Statement
We have an OPNsense firewall (116.203.251.18) with a WireGuard VPN tunnel (wg1) to a remote endpoint (217.20.196.67). The firewall itself can reach the remote network (10.3.0.0/16) via the WireGuard tunnel, but clients on the LAN (10.20.60.0/24) cannot reach this specific remote network. Interestingly, LAN clients can reach other networks behind other WireGuard peers without issues, and they can even ping the WireGuard IP of the problematic remote peer (10.230.0.254), but not any IPs in its 10.3.0.0/16 network.

Network Configuration
- OPNsense Firewall: 116.203.251.18
  - WireGuard interface (wg1): 10.230.0.1/16
  - LAN interface (vtnet1): 10.20.60.2/24
- Remote WireGuard peer: 217.20.196.67
  - WireGuard IP: 10.230.0.254
  - Local network: 10.3.0.0/16
- Remote peer AllowedIPs configuration includes 10.20.60.0/24
- Routing on OPNsense shows 10.3.0.0/16 correctly routed to wg1 interface

Diagnostic Steps and Results

1. **Packet Capture Tests**:
   - Traffic from LAN clients to 10.3.0.0/16 reaches the LAN interface (vtnet1)
   - No traffic appears on the WireGuard interface (wg1)
   - No traffic appears in pflog0 (firewall logs)

2. **Routing Verification**:
   - `netstat -rn | grep 10.3.0`: Confirms route exists (10.3.0.0/16 via wg1)
   - `route add 10.3.0.5 -interface wg1`: Added specific host route, still no traffic

3. **Firewall Rule Tests**:
   - Created floating rule with "sloppy state" tracking
   - Verified no blocking rules exist for this traffic
   - `pfctl -s all | grep "block drop"`: No relevant blocking rules

4. **NAT Configuration**:
   - Added specific outbound NAT rule for 10.20.60.0/24 to 10.3.0.0/16 via WireGuard
   - Enabled "static port" option

5. **WireGuard Configuration Check**:
   - Confirmed remote peer's AllowedIPs includes 10.20.60.0/24
   - OPNsense can ping remote network, confirming tunnel works
   - LAN clients can ping 10.230.0.254 (WireGuard IP) but not 10.3.0.0/16 networks

6. **Advanced Tests**:
   - Checked for asymmetric routing issues
   - Verified state tracking settings
   - Examined MTU settings (wg1: 1420)
   - Checked for system tunables conflicts (see the forwarding check sketched right after this list)
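
One console check that was not in the list above (a hedged addition): confirm that the kernel forwarding path is enabled at all:

sysctl net.inet.ip.forwarding

On a routing OPNsense box this should return 1; given that other forwarded destinations work it is expected to be 1 here, but checking it rules out a disabled forwarding path.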

Key Findings

1. **Critical Detail**: LAN clients can reach the WireGuard peer IP (10.230.0.254) but not the networks behind it (10.3.0.0/16)

2. **Mystery**: No packets from LAN to 10.3.0.0/16 ever appear on wg1 interface, despite:
   - Correct routing table entries
   - No visible blocking firewall rules
   - Functioning WireGuard tunnel (confirmed by firewall's ability to reach 10.3.0.0/16)
   - Specific host routes having no effect

3. **Most Likely Cause**: Bug in OPNsense's WireGuard implementation regarding how forwarded traffic to specific networks is handled.

What could be preventing forwarded traffic from LAN clients from reaching the WireGuard interface, when direct traffic from the firewall works fine, and when other WireGuard networks are accessible?
#6
QuoteInterface=LAN
Direction= out
destinations: IPTV List
gateway= Surfshark_GW
if your intention is to re-route all outgoing LAN traffic to the IPTV list via the VPN gateway, then the direction should be in, as the packets are incoming from the firewall's point of view. Possibly you'll also need an outbound NAT rule that replaces the original LAN IP with the firewall's Surfshark IP, to ensure proper symmetric routing.
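
For illustration, such an outbound NAT rule would look roughly like this in pf syntax (macro and table names are hypothetical; in the GUI this lives under Firewall: NAT: Outbound):

nat on $surfshark_if from $lan_net to <IPTV_List> -> ($surfshark_if)

i.e. rewrite the LAN source address to whatever address the Surfshark interface currently holds.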
#7
QuoteSo it's not true if you say that every untrustworthy LAN device can communicate with other sensible devices... is it?
not so sure ;-) What would prevent them from talking to the sensitive device via its LAN address? If sensitive and non-sensitive devices are in the same layer-2 subnet, they can always connect directly using the LAN address. Okay, you could firewall the sensitive boxes to allow sensitive traffic only on the WireGuard interface, but imho that is a lot of effort and quite error prone: you would have to maintain firewalls on all sensitive boxes. The only way out of that would be two segregated subnets, one for the sensitive and one for the non-sensitive boxes, connected via the OPNsense. Then you could firewall the traffic between the two subnets in one central place, although then there is imho no need for WireGuard :-)

Really the best way would be to invest in VLAN-aware switches and segregate trusted from untrusted via VLAN tags. And one other point (others may see this less strictly): a firewall on a virtualized host is imho (almost) never the way to go. It adds an unnecessary layer of complexity, and the firewall cannot protect you from a bug in the virtualization itself: if there is such a bug, the host system that runs your firewall can be compromised without any chance for the firewall to prevent it. I would really recommend spending some bucks on bare metal for the firewall plus VLAN-aware switches. Most provider routers can run in bridge mode; such a router can be connected to the WAN interface of the firewall, and then the router does not need to support VLANs.
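
For illustration, a tagged VLAN interface on FreeBSD boils down to something like this (tag, parent NIC and address are hypothetical; on OPNsense you would create it under Interfaces: Other Types: VLAN instead):

ifconfig vlan10 create vlan 10 vlandev igb0   # VLAN tag 10 on parent NIC igb0
ifconfig vlan10 inet 192.168.10.1/24 up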

Cheers

tobi
#8
Virtual private networks / Weird Wireguard issue
July 28, 2025, 12:12:13 PM
Hello forum

we're currently facing a weird WireGuard problem which makes zero sense to me. We have:

server side: LAN 10.20.60.0/24, wg IP 10.230.0.1, OPNsense local IP 10.20.60.2
peer: jv41jYNdMIt+OGZcgBFBjNJxYeZasPHOTm6axu1lWzw=
  endpoint: redacted:60554
  allowed ips: 10.3.0.0/16, 10.230.0.254/32
  latest handshake: 53 seconds ago
  transfer: 44.95 GiB received, 20.35 GiB sent
  persistent keepalive: every 30 seconds

client side: LAN 10.3.0.0/16, wg IP 10.230.0.254, OPNsense local IP 10.3.0.5
peer: 56tUzFXZ1QrweRqKCyJyG1OPeKYv0Fr9Ke/sBA/viR0=
  endpoint: redacted:51820
  allowed ips: 10.20.50.0/24, 10.11.0.0/16, 10.20.60.0/24, 10.231.0.0/16, 10.230.0.1/32
  latest handshake: 1 minute, 59 seconds ago
  transfer: 19.33 GiB received, 45.48 GiB sent
  persistent keepalive: every 30 seconds

The peer configs on both sides allow the remote local-net traffic.

When I try to ping from 10.20.60.8 to 10.3.0.81, I can see on the firewall that the traffic comes in on the LAN interface but is not shown on the wg1 interface. It is also not shown outgoing on ANY other interface. The route on the firewall (server side) looks okay:
route -n get 10.3.0.81
   route to: 10.3.0.81
destination: 10.3.0.0
       mask: 255.255.0.0
        fib: 0
  interface: wg1
      flags: <UP,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1420         1         0
The firewall itself can ping the 10.3.0.0/16 subnet without issues, and there are NO firewall rules that would block the traffic. The remote network (client side) can also reach the local network (server side). And when I ping from 10.20.60.8 to the WireGuard IP of the remote firewall (10.230.0.254) I get replies, but not when I ping the local IP of the remote firewall (10.3.0.5). We do not use outbound NAT rules, as the remote systems have routes for the local net via their local OPNsense, but for testing I tried an explicit outbound NAT rule: no success either.
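
For reference, the captures behind the statement above boil down to this pair (a sketch using the interface names from this setup):

tcpdump -ni vtnet1 icmp and host 10.3.0.81   # the echo requests from 10.20.60.8 show up here
tcpdump -ni wg1 icmp and host 10.3.0.81      # nothing ever appears here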

Are we hitting a bug? I have tried to debug the issue for quite a while now, but with absolutely no success. The fact that I cannot see traffic leaving on the server-side wg1 interface indicates to me that the packets are either misrouted (although I cannot see the outgoing traffic on any interface) or dropped (although even with pfctl -d it does not work).

Any ideas?

Kind regards

tobi

Also, this firewall has WG connections to many other endpoints and there is no problem reaching those remote networks. It's just this 10.3.0.0/16 destination network that causes trouble.
#9
Adding the floating IP as an alias to the OPNsense interface helped to survive the referer check :-)
Case solved
#10
omg, found it: MTU issue :-) After lowering the interface MTU to 1420 it worked. Now the last remaining problem is that the HTTP referer check fails
QuoteThe HTTP_REFERER "https://REDACTED/" does not match the predefined settings. You can disable this check if needed under System: Settings: Administration.
The problem is that REDACTED is the public floating IP, about which OPNsense itself has no clue.
Is it possible to disable this check via the CLI? Or should I add the floating IP as an alias to the OPNsense interface to pass the check?
#11
Hi

I have an imho very weird problem with a new OPNsense setup. The box is an OpenStack VM with only one interface (WAN). The WAN interface sits in a private subnet, and we use a public floating IP on the provider's side to connect to the outside world. That floating IP acts like a port forward: all traffic to the floating IP is forwarded to the internal IP of the box. The box can ping the outside world and can be pinged from outside. But when I try to access the GUI from outside, the browser ends in a timeout. I ran tcpdump on both sides (OPNsense and my client) and can see HTTPS packets going back and forth on both sides, but the browser cannot establish a connection. I already disabled packet filtering completely; no change.

Any idea what could be the cause? As said, tcpdump looks okay so far on both sides. Following is a tcpdump from the client's side:

09:01:12.569669 IP 192.168.0.22.52810 > REDACTED.https: Flags [S], seq 2124490507, win 32120, options [mss 1460,sackOK,TS val 2774329427 ecr 0,nop,wscale 7], length 0
09:01:12.574899 IP REDACTED.https > 192.168.0.22.52810: Flags [S.], seq 3692313351, ack 2124490508, win 65228, options [mss 1452,nop,wscale 7,sackOK,TS val 3147813325 ecr 2774329427], length 0
09:01:12.574937 IP 192.168.0.22.52810 > REDACTED.https: Flags [.], ack 1, win 251, options [nop,nop,TS val 2774329432 ecr 3147813325], length 0
09:01:12.584496 IP 192.168.0.22.52810 > REDACTED.https: Flags [P.], seq 1:640, ack 1, win 251, options [nop,nop,TS val 2774329441 ecr 3147813325], length 639
09:01:12.590093 IP REDACTED.https > 192.168.0.22.52810: Flags [.], ack 640, win 506, options [nop,nop,TS val 3147813340 ecr 2774329441], length 0
09:01:12.596557 IP REDACTED.https > 192.168.0.22.52810: Flags [P.], seq 1441:2632, ack 640, win 511, options [nop,nop,TS val 3147813342 ecr 2774329441], length 1191
09:01:12.596586 IP 192.168.0.22.52810 > REDACTED.https: Flags [.], ack 1, win 251, options [nop,nop,TS val 2774329454 ecr 3147813340,nop,nop,sack 1 {1441:2632}], length 0

REDACTED is the public floating IP of the OPNsense, always the same correct IP. I'm not a tcpdump pro, but to me it looks like answers are coming back to the client's requests.
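
One thing worth probing (a sketch; note that the SYN-ACK above advertises an MSS of 1452, which hints at a reduced path MTU, and that the server's segment 1:1441 never arrives, only 1441:2632): a don't-fragment ping with a large payload from the Linux client:

ping -M do -s 1420 REDACTED   # fails if the path MTU is below 1448 (1420 bytes payload + 28 bytes headers)

Lowering -s until replies come back reveals the usable MTU along the path.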

And here is a screenshot of the tcpdump on the OPNsense side (my client's public IP redacted): [screenshot not reproduced]

One question: is it possible to enable SSH without the GUI, directly from the command line? I would like to enable root SSH access (with password) to try reaching the GUI through an SSH tunnel, just to verify whether it works that way.

Thanks for any hints on how to debug further and narrow down the source of the problem.

tobi

#12
> A host must not have more than one interface configured by DHCP.

I think that depends on what exactly "host" means :-) If the host is your OPNsense (router), then I would agree: configure only one of its interfaces via DHCP (usually WAN/upstream) and all the others statically. But if host means any host inside your network, then I'd agree with the others, as static-only is not feasible nowadays. I have multiple hosts with 3 or more interfaces; configuring them manually would really be a pita. One just has to take care that multiple default routes are not pushed and that the interfaces do not share the same broadcast network. But apart from that, I see no reason not to configure all interfaces of a host via DHCP :-)
#13
Have you checked your BIOS settings? Usually it's named something like "console redirection" or similar.
#14
I'm unfamiliar with Terraform, but one thing struck me in your KVM config snippet: why forward mode nat on a bridged interface? Usually an interface bridged to the host does not require NAT, as the VM should be in the same network as the host. If it is NATed from the host to the VM, then one usually needs port-forward rules to access ports on the VM from outside.
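
For comparison, a plain bridged libvirt network definition looks roughly like this (a sketch; the bridge name br0 and the network name are hypothetical):

<network>
  <name>hostbridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

With forward mode bridge the VM sits directly on the host's layer-2 segment, so no NAT and no port-forward rules are involved.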
#15
First question that comes into my mind when seeing this:
Quoterule 5/0(match) block in on bridge0
do you have rules on said interface to allow the traffic? Also check the settings of the following two system tunables:

net.link.bridge.pfil_bridge
net.link.bridge.pfil_local_phys

in my bridged setup I have the first set to 1 and the second to 0, which enables filtering on the bridge interface instead of the underlying physical interfaces. Usually one wants to filter on the bridge and not the physical interfaces (at least in my case :)))
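
For illustration, checking or setting them from the shell would look like this (on OPNsense you would persist them under System: Settings: Tunables):

sysctl net.link.bridge.pfil_bridge=1      # filter on the bridgeX interface itself
sysctl net.link.bridge.pfil_local_phys=0  # do not additionally filter on the member NICs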