Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - systeme

#1
Quote from: DEC740airp414user on April 22, 2026, 11:34:40 AM
"There's a "Dynamic gateway policy" checkbox on the interface, maybe that could be the solution to these errors?"

For my WireGuard instances I do exactly that.

It enables gateway monitoring. I also check "Disable routes" as well.


Hello,

I tried these settings on one instance and I get a different error:


I also tried setting the gateway on the instance with a /32 subnet: same error (the second one).
#2
Quote from: DEC740airp414user on April 21, 2026, 01:29:27 PM
"you add this under wireguard instance > advanced > gateway to resolve that error"

Indeed, the gateway field is empty on the WireGuard instances.

If I set the IP_WAN_GATEWAY under WireGuard instance > Advanced > Gateway, I have to disable route forwarding.



But since we use RIP for automatic route propagation on our WireGuard tunnels, won't that cause a problem?

We're following this documentation: https://docs.opnsense.org/manual/how-tos/wireguard-client.html

There's a "Dynamic gateway policy" checkbox on the interface, maybe that could be the solution to these errors?



Thanks for your help.
#3
Hello,

Since upgrading to version 26.1.2 (or 26.1.7), we've been seeing this error in the WireGuard logs at startup:

2026-04-21T07:59:30 Error wireguard /usr/local/opnsense/scripts/wireguard/wg-service-control.php: The command </sbin/route add -'inet' default 'IP_WAN_GATEWAY'> returned exit code 1 and the output was "add net default: gateway IP_WAN_GATEWAY fib 0: Invalid argument"

IP_WAN_GATEWAY configuration:



We changed the gateway priorities after the last changelog (26.1.2), but the same errors persist.



OPNsense is virtualized, so there are no groups to configure at the gateway level.

Do you have any suggestions on how to avoid this error in the future?

Thank you in advance,

Best regards,


#4
We rebooted OPNsense this weekend due to the latest update.

No ping to hosts on the remote network: 10.220.0.0/16.
It works again after a manual restart of the tunnel.

If anyone has any ideas, we're interested. The capture was made before the tunnel was restarted.
#5
Hello,

We are experiencing an IPsec tunnel connectivity problem with OPNsense that occurs specifically on system reboots.
One particular IPsec tunnel fails to reconnect automatically after an OPNsense reboot, while all other tunnels reconnect without issue. The blockage seems to be in phase 2.

Workaround:

Once manually restarted via the interface, the tunnel remains stable until the next system reboot.
The problem occurs systematically each time OPNsense is restarted.

Configuration details:


  • Tunnel Configuration: No special or unique configuration settings compared to working tunnels.
  • Rekey Settings: Reconnection time values are correctly matched and identical on both endpoints.
  • Symmetry: Both sides of the tunnel have consistent configuration settings.

Do you have any idea what might be causing this malfunction, or of a setting that might solve the problem?

Thank you in advance,

Best regards,
#6
Thanks for your feedback.

It confirms what we thought, which is why we only applied it where it caused problems.

We don't have NFS or SMB, etc., but the infrastructure may evolve. In any case, it's preferable not to create other problems.
#7
Do you think MSS Clamping should be applied across the entire network?
Would there be any problems if we activated this "normalization" everywhere?
#8
ICMP is filtered on the public IP on the Proxmox VE side and on the additional public IP of the server used by the OPNsense WAN interface.
Do you think that allowing it would change anything? Since PMTUD is a standard, shouldn't it be allowed natively?

Edit: I've tried allowing it, but it doesn't change anything.
#9
Thank you for your answer.

How do I get Path MTU Discovery (PMTUD) to work properly? Is it possible to do this other than with MSS Clamping?

Similar problem: https://community.spiceworks.com/t/network-mtu-problems/1112518/2
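
PMTUD works by sending packets with the DF (Don't Fragment) bit set and shrinking them whenever an ICMP "fragmentation needed" message comes back, which is why filtering ICMP can silently break it. The same idea can be sketched as a binary search over probe sizes; `probe` below is a hypothetical stand-in for a DF-bit ping, and the 1390-byte path limit is assumed purely for illustration:

```python
def find_path_mtu(probe, lo=576, hi=1500):
    """Binary-search the largest packet size for which probe(size) succeeds.

    `probe` stands in for sending a DF-bit ping of the given size and
    reporting whether it got through without fragmentation.
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias up so the search converges on the max
        if probe(mid):
            lo = mid  # mid fits on the path, try larger
        else:
            hi = mid - 1  # mid was too big, shrink
    return lo

# Simulated path limited to 1390 bytes (e.g. VXLAN plus WireGuard overhead).
path_limit = 1390
print(find_path_mtu(lambda size: size <= path_limit))  # → 1390
```

In real PMTUD the "probe failed" signal is the ICMP type 3 code 4 message, so if that message never arrives, the sender keeps using the too-large size and connections hang exactly as described above.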

#10
Hello,

We are experiencing TCP fragmentation issues on our network infrastructure, which have been resolved by implementing MSS Clamping (https://docs.opnsense.org/manual/firewall_scrub.html). Here are the details of our environment and situation:

Environment:

  • Proxmox Virtual Environment (PVE) with multi-site Software Defined Networking (SDN) using VXLAN zones
  • MTU set to 1450 on all VMs due to VXLAN encapsulation requiring 50 additional bytes (Proxmox documentation: https://pve.proxmox.com/pve-docs/chapter-pvesdn.html)
  • OPNsense virtualized on these PVE hosts


Current Configuration:

  • MSS Clamping enabled via "Firewall: Settings: Normalization"
  • Specific rules created for interfaces experiencing issues:
    • WireGuard VPN group interface (configured on OPNsense)
    • 2 LAN interfaces (one with VMs using WireGuard+OpenVPN, and another where we limited the source to the VM requiring long curl requests)
    • Max MSS value set to 1250
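
The clamp value can be sanity-checked with simple header arithmetic. The overhead figures below are the VXLAN value from the Proxmox documentation and WireGuard's typical IPv4 overhead; they are common defaults, not measurements from this setup:

```python
# Sanity-check an MSS clamp value against stacked encapsulation overheads.
IPV4_HEADER = 20
TCP_HEADER = 20
VXLAN_OVERHEAD = 50      # per the Proxmox SDN documentation
WIREGUARD_OVERHEAD = 60  # typical IPv4 outer header + UDP + WireGuard framing

physical_mtu = 1500
vm_mtu = physical_mtu - VXLAN_OVERHEAD          # 1450, as set on the VMs
plain_mss = vm_mtu - IPV4_HEADER - TCP_HEADER   # MSS with no VPN in the path
wg_mss = vm_mtu - WIREGUARD_OVERHEAD - IPV4_HEADER - TCP_HEADER  # inside WireGuard

print(vm_mtu, plain_mss, wg_mss)  # → 1450 1410 1350
```

A clamp of 1250 therefore sits comfortably below even the WireGuard-inside-VXLAN figure of 1350, leaving margin for TCP options and the OpenVPN layer mentioned above.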

Symptoms observed before correction (non-exhaustive list):

  • Timeouts on long requests (curl)
  • Access issues to Proxmox consoles
  • SSH connection difficulties

Note:
  • Currently, no negative impact observed on IPSec tunnels configured on OPNsense or other LAN interfaces

Our question concerns the optimal strategy for MSS Clamping implementation:

  • Should we apply it globally across the entire network?
  • Or is it better to maintain our current targeted approach, applying it only to interfaces experiencing issues?
  • Would there be a knock-on effect if we activated this "normalization" everywhere?

Thank you in advance,

Best regards,
#11
Virtual private networks / Re: OPENVPN NOT WORKING
July 11, 2024, 09:14:31 AM
Hello,

Have you found a solution?

https://forum.opnsense.org/index.php?topic=41486.0
#12
I have 2 OPNsense firewalls (primary and secondary) where this tunnel appears, but each with a different IP range for the "Virtual Network".
When I check the routes with the keyword "wg" (on the primary), the route 172.28.0.0/16 is not listed, but my secondary's route 172.29.0.0/16 is.

I can't explain why.

So I disabled the tunnel on the secondary and took its IP range 172.29.0.0/16 (Virtual Network) and put it on the primary.
The 172.29.0.0/16 range is indeed listed as "Allowed Address" on the WG side.

The ping from my WG peer comes out fine this time through the WG tunnel, so there's an improvement.
Since it goes through the tunnel, the ping appears in the Live View and is in "pass" status when I filter the Virtual IP of my OpenVPN client (172.29.0.6).
However, it doesn't reach the destination. I'm still looking for a solution.

Thanks for any help
#13
The ping from my WG peer doesn't go out through the WG tunnel; it goes through my local IP instead. If I force the ping onto the WG tunnel interface, it doesn't work either.
So no packet is received on my host running the OpenVPN client (checked with tcpdump).
However, I can ping the WG peer from the OpenVPN client...

Does anyone have any ideas? Thanks in advance.
#14
The WG tunnel IP address and the IPsec network are unique, and I have added the OpenVPN "Tunnel Network".
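
When several tunnel technologies coexist, overlapping ranges are a common cause of routing confusion, so it is worth verifying the networks really are disjoint. A quick check with Python's `ipaddress` module; the WireGuard and IPsec ranges come from this thread, while the OpenVPN range is a hypothetical example:

```python
import ipaddress
from itertools import combinations

# Tunnel networks to compare; replace with your actual ranges.
networks = {
    "wireguard": ipaddress.ip_network("172.29.0.0/16"),  # WG Virtual Network
    "openvpn":   ipaddress.ip_network("10.8.0.0/24"),    # hypothetical example
    "ipsec":     ipaddress.ip_network("10.220.0.0/16"),  # remote IPsec network
}

for (a, na), (b, nb) in combinations(networks.items(), 2):
    if na.overlaps(nb):
        print(f"OVERLAP: {a} {na} <-> {b} {nb}")
    else:
        print(f"ok: {a} and {b} are disjoint")
```

Any line printed as `OVERLAP` would mean two tunnels compete for the same routes, which produces exactly the "ping leaves by the wrong interface" behaviour described in the later posts.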
#15
Thank you for your reply. Unfortunately I had already tried that, and I've just tried it again, but it doesn't change anything.

For IPsec I had the same problem, which was solved by referencing my WG instance in the SPD section and creating an SNAT entry using the IP of my LAN-side gateway for the translation.

https://forum.opnsense.org/index.php?topic=41108.msg201474#msg201474