
Topics - Dark-Sider

#1
25.1, 25.4 Legacy Series / Wireguard issue(s)
January 20, 2026, 11:15:08 PM
Hi,

since WireGuard made its way into OPNsense it has worked OK-ish for me, however its "stability" is not comparable to OpenVPN. Still, I like the concept behind WireGuard, so I'm putting up with some issues and keep using it.

Last week I had to restart my OPNsense box (25.1.12) and, me being away on a business trip, WireGuard failed me (again) after the restart. I usually solve this by de- and reactivating my one and only wg0 instance via the web GUI. After restarting wg0 everything works as it is supposed to.

Since the issue caught me cold (again) I did some forum reading and found interesting threads regarding wg and DNS, stale connections etc:
https://forum.opnsense.org/index.php?topic=49432.0
https://forum.opnsense.org/index.php?topic=37905.0
https://forum.opnsense.org/index.php?topic=42648.0

Honestly, I didn't know about the quirks of wg and DNS: resolution issues after your dynamic IP refreshes, or wg only resolving an endpoint's hostname once on startup and never refreshing it. One might argue that using a static IP would solve such problems; however, static IPs on consumer lines are hard to get these days. Even IPv6 is dynamic with my ISP.
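
From what I read, my understanding is that the workaround boils down to periodically re-setting the peer's endpoint so the hostname gets resolved again - roughly something like this from the shell (public key and hostname are placeholders, and I'm not claiming this is exactly what the OPNsense cron job does):

# re-resolve the dynamic hostname of the site-to-site peer and push it to wg0 again
wg set wg0 peer 'PEER_PUBLIC_KEY=' endpoint remote.dynv6.example:51820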

While I think wg's behaviour here is a severe design oversight in the protocol / module (nothing related to OPNsense though), I appreciate that a cron job exists that is supposed to work around the issue.

I activated the cron job to run */5 * * * *, however my issue was not resolved. My mobile phone was not able to connect via IPv6 or IPv4 (both usually work) to my OPNsense box. I did a packet capture on 51820 and the packets from the phone arrived, but no response was sent back.

I then noticed there is another cron job called "restart wireguard service". I set this job up to run */7 * * * * as well, however after waiting 14 minutes my WireGuard log still showed that the service had last been started the week before - no other log entries.

While looking at the logs I found that the wg status page was quite empty, only showing wg0 with my local endpoint at port 33###. I didn't notice this at first, but my wg setup only uses port 51820. Also, no peers were shown at all on the status page.
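
For reference, the status page should be cross-checkable from the shell with something like this (assuming the instance really is wg0):

# actual listen port of the running instance (should be 51820 in my case)
wg show wg0 listen-port
# peers known to the kernel - empty output would match the empty status page
wg show wg0 peers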

I have 3 road warrior peers configured ("dial in only"): my phone, my laptop and a GL.iNet travel router. I also have a site-to-site connection configured to a remote network.

Only after I deactivate my instance and reactivate it are all 4 peers listed on the status page. Once the peers are listed, the connections start working again.

My OPNsense runs virtualized (yes, it could use a firmware update, which I will do later) and sits on a dial-up connection at a German ISP (M-Net) with both IPv4 and IPv6 connectivity via PPPoE. Luckily my ISP connection is hyper-stable, so reboots, disconnects and thus IP changes happen very rarely.

I still wonder why wg needs a kick in the... after my box boots up? And shouldn't that restart wg cron-job also fix my issue?

thanks,
Dark-Sider
#2
Hi,

I have a web application that up until now used a NAT port forward. However, I need URL-based filtering. As the application is "closed", my solution of choice was to set up an nginx reverse proxy in OPNsense and add some ACL-based filtering. It all works great, except for one small but important detail:

The web app displays a logon page. If I enter the correct username / password (while using nginx as reverse proxy) it displays a login error page. The web app's log shows:

[ERROR] 2022-02-24 13:14:01,144 [qtp142733894-87857] Unauthorized access detected
com.appName.AuthenticationException: Invalid CSRF token


If I then press "reload" on the browser, I'm magically logged in and everything works. Since the web-app is also accessed by external users, I would like to get it 100% working though :)

The reverse proxy configuration is very basic at this stage:
Upstream and Upstream Server are configured with the correct SSL certs.
I tried the Upstream configuration with Proxy Protocol enabled and disabled (no difference)

The Location configuration is as basic as it can get (just enforce HTTPS). I also tried enabling and disabling the response/request buffering (no idea what this actually does, though).

The HTTP-Server configuration is also very basic. It just listens on a specific virtual IP on specific ports. Location is set and SSL-Cert is set.
I also tried enabling proxy protocol within the HTTP-Server options and tried all options for the real IP source. Nothing worked (I restarted nginx after each configuration change).

I have not defined any security headers.
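
For what it's worth, my understanding is that CSRF checks often depend on the Host / X-Forwarded-Proto headers (and the cookies) the application sees behind the proxy, so I imagine the generated config needs directives roughly like these (a plain-nginx sketch, not the exact config OPNsense generates; the upstream name is a placeholder):

location / {
    proxy_pass https://webapp_upstream;           # placeholder upstream name
    proxy_set_header Host $host;                  # pass the original host header through
    proxy_set_header X-Forwarded-Proto $scheme;   # tell the app the client used HTTPS
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
}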

Any ideas what my configuration is missing?

regards
Dark-Sider
#3
Hi,

I'm running an AVM Fritz!Box router as a LAN client for my VOIP needs. It's a neat all-in-one PBX solution for internal ISDN, DECT and SIP phones. Internet is provided by the German ISP M-Net through FTTH/GPON. OPNsense connects via PPPoE to the Internet (MTU set to 1492 in the pppoe options).

To avoid NAT issues with SIP I configured all SIP accounts as IPv6 only. Under the firewall settings I have allowed sipgate's servers so they can talk to my AVM Fritz!Box.

I'm using both sipgate and my ISP's VoIP service. While my ISP's VoIP works fine, incoming sipgate calls cannot be answered (the caller still hears the phone ringing although the call was picked up).

To troubleshoot the problem I did some packet capturing. As it turns out, the SIP/SDP 200 OK packet that is sent to sipgate is "rejected" by OPNsense with an ICMPv6 "Packet too big" and therefore never reaches my VoIP provider. The ICMPv6 Packet Too Big contains the correct MTU of 1484 (the value displayed under calculated MTU for PPP in the pppoe options), while the rejected packet has a size of 1494 bytes.
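
For what it's worth, the path MTU can be double-checked from a Linux client behind the Fritz!Box with something like this (payload sizes chosen so that the 40-byte IPv6 header plus the 8-byte ICMPv6 header add up to the totals above; the hostname is a placeholder):

# 1436 + 48 = 1484 bytes on the wire -> should fit the PPPoE link
ping -6 -M do -s 1436 sip.example.org
# 1446 + 48 = 1494 bytes on the wire -> should trigger "Packet too big", like the SDP packet
ping -6 -M do -s 1446 sip.example.org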

What would be the expected behavior here? Is the Fritz!Box expected to retransmit the packet with a smaller packet size, or is OPNsense expected to re-fragment the packet to fit the MTU? If so, what setting am I missing?

regards,
Fabian
#4
20.7 Legacy Series / Need help with wireguard setup
August 27, 2020, 06:52:57 PM
Hey guys,

I'm running a wireguard VPN between two dial-up sites in Germany. One provider is M-Net (a local carrier in Munich) and the other provider is Deutsche Glasfaser (an FTTH provider, active in several parts of Germany).

For ease of reading I'll refer to the two sites as MNET and DG. Both sites have identical hardware installed; OPNsense is running in an ESXi 6.7 VM on a Qotom headless PC (Core i7). I have plenty of experience with that hardware and it gets the job done quite well.

Although both sites connect via FTTH (GPON), M-Net requires a PPPoE connection which features IPv4/IPv6 dual stack. The IPv6 prefix (a /56) is received via the IPv4 connectivity through the PPPoE connection. The PPPoE interface does not receive a public but only a link-local IPv6 address. DG just issues IPv4 and IPv6 addresses via DHCP (no PPPoE required). The WAN interface also gets a public /128 IPv6 address. Since DG only offers CGNAT for IPv4 connectivity, I'm using wireguard over IPv6 to connect the two sites.

As the IPv6 subnets assigned by both providers are not static, I use dynv6 to update my IPv6 addresses on both sites. Since M-Net does not provide a public v6 address on the pppoe/wan, I selected the LAN IPv6 addresses to be published on dynv6. I have checked this, and both boxes are pingable via their respective hostnames from the internet using IPv6.

The wireguard part of the configuration was quite straightforward:
- install wireguard
- configure the local part on both boxes
- configure the remote-endpoint on both boxes with the public key of the other box
- activate the endpoints within the respective local config.

For the allowed IP addresses I selected the corresponding remote network, and the remote tunnel address with /32:

MNET uses 172.20.0.0/16, 172.19.0.2/32
DG uses 192.168.0.0/24, 172.19.0.1/32
as allowed nets. The allowed net is the remote LAN subnet of the other box plus the other box's tunnel address.

I don't need IPv6 within the tunnel so it's only IPv4 for the tunnel.
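
For reference, the relevant peer sections as I understand them would look roughly like this in plain wg syntax (keys and hostnames are placeholders, the GUI generates the real thing):

# peer section on the MNET box
[Peer]
PublicKey = <DG box public key>
Endpoint = <DG dynv6 hostname>:51820
AllowedIPs = 172.20.0.0/16, 172.19.0.2/32
PersistentKeepalive = 60

# peer section on the DG box
[Peer]
PublicKey = <MNET box public key>
Endpoint = <MNET dynv6 hostname>:51820
AllowedIPs = 192.168.0.0/24, 172.19.0.1/32
PersistentKeepalive = 60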

For the sake of simplicity both sites have firewall rules on wg0 allowing IPv4+IPv6 any to any (protocol, source, dest, port ...).

Both boxes run the latest OPNsense 20.7 version.

Now the strange part: The tunnel sometimes works and sometimes doesn't work - although the IPv6 prefixes of both sites have not changed and DNS returns the correct IPv6 address. The DNS record also has only an AAAA record configured, to avoid IPv4 connections.

When I look at the wireguard "List Configuration" output on each site, I see something like this:


interface: wg0
  public key: XXX
  private key: (hidden)
  listening port: 51820

peer: XXX
  endpoint: [2a00:6020:1000:xxx]:51820
  allowed ips: 172.19.0.2/32, 172.20.0.0/16
  latest handshake: 19 minutes, 38 seconds ago
  transfer: 22.06 KiB received, 14.01 KiB sent
  persistent keepalive: every 1 minute


The other site displays similar information. If the connection is working, the received and sent counters go up as expected and traffic passes through the network. But I very often reach a state where both boxes just send packets and the other box won't receive any, or it receives the packets and even sends out an answer (checked with tcpdump) but the sending box won't receive any packets.

I also noticed that the IPv6 address shown in the "peer" section of the List Configuration view on the MNET box does not match the configured address in the endpoint dialogue or the resolved address from the hostname (I tried both), but keeps displaying the WAN address of the DG box instead of its LAN address. So it seems to me that wg tries to originate traffic from DG's WAN instead of its LAN IP. I updated the dynv6 configuration to match the WAN - but no luck there either.

WG runs on 51820 on both boxes and the ports are opened in the firewall (wan/pppoe) with "THIS FIREWALL" as target. To allow ICMP echo for both boxes I use the same rule, just with ICMP ...
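
For completeness, this is the kind of thing I would look at on both boxes when it is in the broken state (assuming the instance is wg0 on both sides):

# epoch timestamp of the last successful handshake per peer
wg show wg0 latest-handshakes
# which endpoint address/port wg is actually sending to right now
wg show wg0 endpoints
# rx/tx counters per peer - one side only counting tx matches what I describe above
wg show wg0 transfer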

any ideas? any fundamental flaws in my thinking?

the odd thing is that it sometimes works and sometimes doesn't, and I can't see why :(

regards,
Fabian
#5
Hi,

to set the general picture: I installed wireguard-devel on 19.1.x a couple of weeks ago. As I now had time to play around with my planned wg VPN tunnel, I finally did so yesterday.

I already have multiple IPsec and OpenVPN tunnels running on my OPNsense box - so I thought this would be an easy and straightforward task :-)

The purpose of my setup is to route some hosts of my network through a VPS in Canada. The VPS runs Ubuntu 18.04 with wireguard. I did manage to bring up the tunnel quite easily. The tunnel network is 192.168.4.0/24, with my wireguard box being .2 and the VPS .1

Without defining any other virtual interfaces (just setting the allow rule for the wireguard interface) I was able to ping the remote location - ping times and tcpdump also show that it actually is the VPS that answered.

My first try was to put 192.168.4.0/24 as "allowed ips" in the wireguard config on the opnsense box. Once the tunnel was established the routes were set and the tunnel worked.

The next step was to route actual internet upstream traffic through the VPN. I looked in the forum and found these threads:
https://forum.opnsense.org/index.php?topic=8998.0 (ovpn)
https://forum.opnsense.org/index.php?topic=4979.0 (ovpn)
https://forum.opnsense.org/index.php?topic=11737.0 (wg)

So I went ahead and:
- defined an alias for one host in my network,
- set up the NAT rules (not strictly needed, as NAT can be done on the VPS; it just makes the allowed IPs config easier),
- created the virtual VPN interface by assigning wg0 to it and setting it to IPv4 DHCP,
- created the firewall rule with the alias and the gateway of the virtual VPN interface.

It did not work as I had expected. tcpdump on the VPS shows that 192.168.4.0/24 traffic is sent through the tunnel to the VPS, but all other traffic just doesn't go through the tunnel - and it also doesn't exit my OPNsense on any other interface (ping shows a timeout).

I did a bit of thinking and replaced the allowed ips in the wg config with 0.0.0.0/0 as this might be the problem.
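
On the shell that change would, I think, be equivalent to something like this (the public key is a placeholder):

# 0.0.0.0/0 tells wg to accept/send any IPv4 destination through this peer
wg set wg0 peer 'VPS_PUBLIC_KEY=' allowed-ips 0.0.0.0/0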

After restarting the box my whole network was routed through the VPN, not just this one host. However, the traffic did actually go over the VPS in Canada. It took me some time to figure out that I probably had to check the "disable routes" checkbox in the wireguard config, as my default route was now set to 0.0.0.0/0 -> VPN.

Before doing so, however, I read that 19.7 had been released and ships wg as stable. So I reset my OPNsense VM to a config state from before my wireguard efforts and upgraded the box to 19.7.2 (uninstalled wg-devel and installed the stable wg package).

After setting up the wg tunnel again and verifying that the VPS is pingable from my LAN I again defined the virtual interface.

Then things changed.

The first thing I noticed is that there is no longer a gateway created automatically once you choose IPv4 DHCP for the virtual interface --- why?

I just stayed with IPv4 "none". I restarted the tunnel and the VPN interface got the correct IP 192.168.4.2 - a minor change, I thought.

I created the gateway manually, leaving all IP fields blank and just setting the interface to VPN. After pressing OK and applying, no gateway was added. I retried this a couple of times - no error message, no gateway.

I then created the gateway with the IP 192.168.4.1 and Interface VPN - which actually worked.

I then changed the allowed IPs again to 0.0.0.0/0 - and all traffic was again forced through the tunnel.

After that I ticked the "disable routes" option in the wg config and restarted the box (just to be safe). This is when things got really bad.

The default route was gone, but so was any route pointing 192.168.4.0/24 out through the VPN interface. Thus, 192.168.4.1 (my VPS) was no longer reachable. The routing table also shows only 192.168.4.2 (my opnsense box) on the VPN interface, without any netmask.

I then tried to add the route manually as described on the forums, however this is not (no longer?) possible. When adding the route I can't choose my VPN interface; I only have the choice of my PPPoE gateways for my dial-up connection.

This is where I hit a roadblock, as I was not able to
- set 0.0.0.0/0 as allowed IPs
- without getting a default route through the tunnel
- and I failed to add the 192.168.4.0/24 route manually (sketched below)
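
For reference, what I was trying to achieve with the manual route would, I assume, correspond to something like this on the FreeBSD shell (interface name wg0 assumed; I don't know whether OPNsense keeps such a route across reconfigures):

# point the tunnel network at the wg interface so the VPS stays reachable
route add -net 192.168.4.0/24 -interface wg0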

Were there changes in 19.7 to how this is done?
Is there an obvious mistake I just don't see?

Thanks for helping out!

bye
Fabian
#6
Hi,

I'm trying to use OPNsense with a German ISP called M-Net. My connection is dual-stack: IPv4 via PPPoE, and IPv6 via DHCPv6 over the IPv4 uplink.

My IPv6 configuration on the WAN interface is:
- DHCPv6
- use IPv4 connectivity
- send solicit
- send prefix hint
- request 56 bit prefix
- only request prefix

My IPv6 configuration on the LAN interface is:
- Track Interface (WAN)
- Prefix ID 0

The result:
- The LAN interface gets a public IPv6 Address and a link local address (fe80::1:1)
- The WAN-pppoe interface gets a link local IPv6 address and a link local gateway from my ISP.
- The LAN clients get proper public IPv6 addresses from the requested prefix and the default gateway is set to fe80::1:1

The problem:
LAN-Clients can ping the WAN's public and private IPv6 addresses
opnsense can ping public IPv6 Addresses on the internet
LAN-Clients can't ping public IPv6 Addresses on the internet (no reply)

A packet capture on the LAN side shows the ICMPv6 echo requests with the client's public IPv6 and the target's public IPv6; however, the echo requests never leave through the WAN interface.
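
For reference, from the OPNsense shell the same check would roughly be (the interface name pppoe0 is an assumption on my part, and I'm not sure the sysctl is even the relevant knob):

# is the kernel forwarding IPv6 at all? (1 = yes)
sysctl net.inet6.ip6.forwarding
# do the LAN clients' echo requests ever show up on the WAN side?
tcpdump -ni pppoe0 icmp6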

IPv6 is enabled in the firewall settings. The LAN interface has an IPv6 "LAN net to any" pass rule (otherwise opnsense's LAN IPv6 wouldn't be pingable from my LAN). The WAN interface has no special rules, except that I removed the block bogon / RFC networks options (for troubleshooting) and have a NAT forward for 2 IPv4 hosts.

I have rebooted opnsense several times :)


What's going wrong there? Am I missing some IPv6 forwarding setting in the options? I'm using a pfSense setup with the same ISP on a different line, and IPv6 routing works with the very same configuration without problems; the link-local gateway etc. look the same. Any hints on where I can start troubleshooting?

regards,
Fabian