WireGuard service restarted on CARP BACKUP node after XMLRPC config sync

Started by mprenditore, February 16, 2026, 11:58:55 AM

Hi everyone,

I am running an HA setup with two OPNsense nodes and WireGuard. I am encountering an issue where the WireGuard service unexpectedly starts on the BACKUP node during configuration synchronization, leading to a split-brain VPN scenario.

The Setup:

Cluster: 2x OPNsense nodes in High Availability (Master/Backup), both on 25.7.11.

Service: WireGuard (kmod).

Config: The WireGuard Instance is configured with "Depend on (CARP)" set to my LAN VIP (VHID).

Sync: XMLRPC Sync is enabled for WireGuard and standard configuration details.

The Issue:
Under normal conditions, the WireGuard service correctly stays stopped on the BACKUP node. However, whenever a configuration change is made on the Master that triggers an XMLRPC Sync:

The configuration is replicated to the Backup node.

The system triggers a reload/restart of the WireGuard service on the Backup node.

The WireGuard service STARTS on the Backup node, completely ignoring the "Depend on (CARP)" status (which is definitely BACKUP).
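
For reference, this is roughly how I verify the state on the Backup node right after a sync (root shell; wg0 and the vhid are from my setup, adjust to yours):

  # CARP state of the VIP the instance depends on -- reports BACKUP here
  ifconfig | grep carp

  # WireGuard instance state -- despite BACKUP, the interface exists and wg is configured
  ifconfig wg0
  wg show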

Expected Behavior:
When the configuration syncs, the Backup node should evaluate the CARP status before starting the service. Since the node is in BACKUP state, the WireGuard service should remain stopped, even if the service is marked as "Enabled" in the config.

The Impact (Routing Blackhole):
Since the Backup node has no active WAN connection, the WireGuard tunnel cannot handshake with peers. However, because the wg0 interface is technically UP, the kernel installs connected routes for the VPN subnet.

When I try to manage the Backup node from a remote VPN site:

  • My request reaches the Backup node correctly (routed via the Master's LAN).
  • The Backup node tries to send the reply packet.
  • Instead of routing the reply back to the Default Gateway (the Master), it routes it into its own dead wg0 tunnel because it sees a direct route for the VPN subnet.
  • The packet is dropped, and the Backup node becomes unreachable from the VPN.
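
To see the blackhole on the routing side, this is what I check on the Backup node (the client address below is just a placeholder for one of my VPN peers):

  # connected route for the VPN subnet pointing at the dead wg0 interface
  netstat -rn -f inet | grep wg0

  # the reply path chosen for a VPN client -- it resolves to wg0 instead of the default gateway
  route -n get <vpn-client-ip>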

Question:
Is this a known limitation of the CARP dependency mechanism? Should it be expected to work across XMLRPC-triggered restarts, or does it only apply to CARP state transitions? If this is by design, is there a recommended approach to prevent services from starting on the BACKUP node after sync?
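
In case it helps the discussion: as an interim workaround I am considering a small cron-driven watchdog on the Backup node. This is only a sketch under my own assumptions (vhid 1, instance wg0, teardown by destroying the interface), not an official mechanism:

  #!/bin/sh
  # Stop the local WireGuard instance whenever this node is not CARP MASTER for the vhid it depends on.
  VHID=1
  # ifconfig prints e.g. "carp: BACKUP vhid 1 advbase 1 advskew 100" for each CARP address
  STATE=$(ifconfig | awk -v vhid="$VHID" '$1 == "carp:" && $4 == vhid {print $2}')
  if [ "$STATE" != "MASTER" ] && ifconfig wg0 > /dev/null 2>&1; then
      # destroying the cloned wg interface also removes its connected routes
      ifconfig wg0 destroy
  fi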

Any input appreciated. Thanks