Messages - davorin

#1
Haven't looked at it any further...

What's weird is that it's not the HA sync itself that causes this, but changing which services have to be synced in HA.
#2
Good morning

Running two OPNsense 26.1.4 instances here virtually for testing, with CARP on WAN and LAN.
I am therefore using Kea DHCP with HA, configured as per the official documentation.

Every now and then when I do an HA sync on the master, the Kea DHCP service doesn't start again on the backup, and sometimes the Kea control agent doesn't either...
After resyncing one or two more times the Kea services are running again on the backup FW.

The last log lines show that the DHCP server is started but then shut down again (or at least that's how I understand it ;o):

<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="81"] INFO  [kea-dhcp4.ha-hooks.0x2dbe8985c008] HA_SERVICE_STARTED fw2: started high availability service in hot-standby mode as standby server
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="82"] INFO  [kea-dhcp4.dhcpsrv.0x2dbe8985c008] DHCPSRV_CFGMGR_USE_ALLOCATOR using the iterative allocator for V4 leases in subnet 192.168.241.0/24
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="83"] INFO  [kea-dhcp4.dhcp4.0x2dbe8985c008] DHCP4_MULTI_THREADING_INFO enabled: yes, number of threads: 1, queue size: 64
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="84"] INFO  [kea-dhcp4.dhcp4.0x2dbe8985c008] DHCP4_STARTED Kea DHCPv4 server version 3.0.2 started
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="85"] INFO  [kea-dhcp4.commands.0x2dbe8985c008] COMMAND_RECEIVED Received command 'shutdown'
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-ctrl-agent 33012 - [meta sequenceId="86"] INFO  [kea-ctrl-agent.dctl.0x5794f6a71008] DCTL_SHUTDOWN Control-agent has shut down, pid: 33012, version: 3.0.2
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="87"] INFO  [kea-dhcp4.dhcp4.0x2dbe8985c008] DHCP4_SHUTDOWN server shutdown
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="88"] INFO  [kea-dhcp4.ha-hooks.0x2dbe8985c008] HA_DEINIT_OK unloading High Availability hooks library successful
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="89"] INFO  [kea-dhcp4.host-cmds-hooks.0x2dbe8985c008] HOST_CMDS_DEINIT_OK unloading Host Commands hooks library successful
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="90"] INFO  [kea-dhcp4.lease-cmds-hooks.0x2dbe8985c008] LEASE_CMDS_DEINIT_OK unloading Lease Commands hooks library successful
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="91"] INFO  [kea-dhcp4.hooks.0x2dbe8985c008] HOOKS_LIBRARY_CLOSED hooks library /usr/local/lib/kea/hooks/libdhcp_ha.so successfully closed
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="92"] INFO  [kea-dhcp4.hooks.0x2dbe8985c008] HOOKS_LIBRARY_CLOSED hooks library /usr/local/lib/kea/hooks/libdhcp_host_cmds.so successfully closed
<134>1 2026-03-18T09:00:05+01:00 fw2.internal kea-dhcp4 31801 - [meta sequenceId="93"] INFO  [kea-dhcp4.hooks.0x2dbe8985c008] HOOKS_LIBRARY_CLOSED hooks library /usr/local/lib/kea/hooks/libdhcp_lease_cmds.so successfully closed

Anyone else having this problem?

Seems there are no other options for having HA DHCP on the LAN side...
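
For anyone debugging the same thing: Kea's state can be queried straight through the control agent. A minimal sketch, assuming the agent listens on its default local port (adjust to your setup):

# ask the control agent whether the dhcp4 daemon is up
curl -s -X POST -H 'Content-Type: application/json' \
     -d '{ "command": "status-get", "service": [ "dhcp4" ] }' \
     http://127.0.0.1:8000/

# and check that both processes are actually running
ps ax | grep -E 'kea-(dhcp4|ctrl-agent)' | grep -v grep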

#3
Got a little further... the master FW was blocking multicast on the pfSync interface coming from the backup FW...
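
In case someone wants to verify the same thing: CARP is IP protocol 112 and pfsync is protocol 240, so watching the sync interface on the master shows whether the backup's packets arrive at all (interface name is from my setup):

tcpdump -ni vtnet2 'ip proto 112 or ip proto 240'

And a pf.conf-style rule along these lines has to exist on the sync interface:

pass quick on vtnet2 proto { carp, pfsync }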

But still, every time I change settings under System->HA, I see this in the WG logs on the backup FW:

2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:14    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:13    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:12    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:12    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:12    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:12    Notice    wireguard     Wireguard configure event instance Office-SiteA (wg2) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:12    Notice    wireguard     Wireguard configure event instance Office-SiteB (wg1) vhid: 50 carp: BACKUP interface: down
2026-03-09T15:31:12    Notice    wireguard     Wireguard configure event instance Office-SiteC (wg0) vhid: 50 carp: BACKUP interface: down

Is this the expected behaviour?

Because when I change HA settings, the WG tunnels are unusable for a few seconds...

This is the log from the backup FW while saving the HA settings:

<13>1 2026-03-10T07:47:28+00:00 fw2.internal kernel - - [meta sequenceId="24"] <6>[3281] carp: 10@vtnet1: BACKUP -> MASTER (preempting a slower master)
<13>1 2026-03-10T07:47:28+00:00 fw2.internal kernel - - [meta sequenceId="25"] <6>[3281] carp: 10@vtnet0: BACKUP -> MASTER (preempting a slower master)
<13>1 2026-03-10T07:47:28+00:00 fw2.internal kernel - - [meta sequenceId="26"] <6>[3281] arp: 192.168.1.1 moved from 00:00:5e:00:01:0a to 52:54:00:40:0c:64 on vtnet1
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 5030 - [meta sequenceId="27"] /usr/local/etc/rc.syshook.d/carp/20-openvpn: Carp cluster member " (192.168.1.1) (10@vtnet1)" has resumed the state "MASTER" for vhid 10
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 5999 - [meta sequenceId="28"] /usr/local/sbin/pluginctl: plugins_configure crl (1)
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 5999 - [meta sequenceId="29"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : core_trust_crl(1))
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 13189 - [meta sequenceId="30"] /usr/local/etc/rc.syshook.d/carp/20-openvpn: Carp cluster member " (192.168.122.10) (10@vtnet0)" has resumed the state "MASTER" for vhid 10
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 5999 - [meta sequenceId="31"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : openvpn_refresh_crls(1))
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 16795 - [meta sequenceId="32"] /usr/local/sbin/pluginctl: plugins_configure crl (1)
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 16795 - [meta sequenceId="33"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : core_trust_crl(1))
<13>1 2026-03-10T07:47:28+00:00 fw2.internal kernel - - [meta sequenceId="34"] <6>[3281] carp: 10@vtnet0: MASTER -> BACKUP (more frequent advertisement received)
<13>1 2026-03-10T07:47:28+00:00 fw2.internal kernel - - [meta sequenceId="35"] <6>[3281] wg0: link state changed to UP
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 20179 - [meta sequenceId="36"] /usr/local/etc/rc.syshook.d/carp/20-openvpn: Carp cluster member " (192.168.122.10) (10@vtnet0)" has resumed the state "BACKUP" for vhid 10
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 16795 - [meta sequenceId="37"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : openvpn_refresh_crls(1))
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 24338 - [meta sequenceId="38"] /usr/local/sbin/pluginctl: plugins_configure crl (1)
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 24338 - [meta sequenceId="39"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : core_trust_crl(1))
<13>1 2026-03-10T08:47:28+01:00 fw2.internal opnsense 24338 - [meta sequenceId="40"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : openvpn_refresh_crls(1))
<13>1 2026-03-10T08:47:29+01:00 fw2.internal opnsense 30806 - [meta sequenceId="41"] /usr/local/etc/rc.syshook.d/carp/20-openvpn: Carp cluster member " (192.168.1.1) (10@vtnet1)" has resumed the state "BACKUP" for vhid 10
<13>1 2026-03-10T08:47:29+01:00 fw2.internal opnsense 33152 - [meta sequenceId="42"] /usr/local/sbin/pluginctl: plugins_configure crl (1)
<13>1 2026-03-10T08:47:29+01:00 fw2.internal opnsense 33152 - [meta sequenceId="43"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : core_trust_crl(1))
<13>1 2026-03-10T07:47:29+00:00 fw2.internal kernel - - [meta sequenceId="44"] <6>[3282] carp: 10@vtnet1: MASTER -> BACKUP (more frequent advertisement received)
<13>1 2026-03-10T08:47:29+01:00 fw2.internal opnsense 33152 - [meta sequenceId="45"] /usr/local/sbin/pluginctl: plugins_configure crl (execute task : openvpn_refresh_crls(1))
<13>1 2026-03-10T07:47:29+00:00 fw2.internal kernel - - [meta sequenceId="46"] <6>[3282] wg0: link state changed to DOWN
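
For reference, the tunnel state during those seconds can be watched from a shell with the standard wireguard-tools (instance name as in the logs above):

wg show wg0 latest-handshakes    # does the peer still answer?
ifconfig wg0                     # link state of the instance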
#4
I have now done a virtualized test setup with a master/backup pair running CARP on WAN and LAN, and a WG tunnel to a third OPNsense installation...

The tunnel runs fine on the master FW, but as soon as I change the high availability settings to include WireGuard for syncing, the backup FW immediately takes over and becomes the master. After around 70 seconds the backup FW withdraws and all is fine again.

2026-03-09T12:23:30 Notice kernel <6>[2133] wg0: link state changed to DOWN
2026-03-09T12:23:30 Notice wireguard Wireguard configure event instance Test (wg0) vhid: 10 carp: BACKUP interface: down
2026-03-09T12:23:30 Notice wireguard wireguard instance Test (wg0) switching to DOWN
2026-03-09T12:23:30 Notice wireguard Wireguard configure event instance Test (wg0) vhid: 10 carp: BACKUP interface: up
2026-03-09T12:22:24 Notice wireguard Wireguard configure event instance Test (wg0) vhid: 10 carp: MASTER interface: up
2026-03-09T12:22:24 Notice kernel <6>[2068] wg0: link state changed to UP
2026-03-09T12:22:24 Notice wireguard wireguard instance Test (wg0) switching to UP
2026-03-09T12:22:24 Notice wireguard Wireguard configure event instance Test (wg0) vhid: 10 carp: MASTER interface: down
2026-03-09T12:21:30 Notice kernel <6>[2014] wg0: link state changed to DOWN
2026-03-09T12:21:30 Notice kernel <6>[2014] wg0: link state changed to UP
2026-03-09T12:21:30 Notice wireguard wireguard instance Test (wg0) started
2026-03-09T12:21:30 Notice wireguard /usr/local/opnsense/scripts/wireguard/wg-service-control.php: plugins_configure monitor (execute task : dpinger_configure_do(,[]))
2026-03-09T12:21:30 Notice wireguard /usr/local/opnsense/scripts/wireguard/wg-service-control.php: plugins_configure monitor (,[])
2026-03-09T12:21:30 Notice wireguard /usr/local/opnsense/scripts/wireguard/wg-service-control.php: ROUTING: entering configure using opt2
2026-03-09T12:21:30 Notice wireguard wireguard instance Test (wg0) stopped
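
A minimal sketch of how I watched the takeover, run in a second shell on both FWs during the settings change (interface name from my test setup):

sysctl net.inet.carp.demotion net.inet.carp.preempt
ifconfig vtnet0 | grep carp    # prints MASTER/BACKUP per vhid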
#5
Weird behaviour on our backup FW running 25.7.6, where the WireGuard tunnel ignores the WAN CARP state.

The master FW shows no log entries and always stays the master for the WireGuard tunnel.
The backup FW permanently shows WAN CARP state changes in the WireGuard logs and takes over the WireGuard tunnel, even though the state of the WAN interface is BACKUP.

The other side of the tunnel is also an HA setup, running 26.1.2, but there is no flapping of the tunnel on that backup FW.

Anyone else seeing this odd behaviour?

The problem is that I had to disable the WireGuard instances and the HA syncing of the WireGuard configuration.
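
To compare what the WireGuard hook reports against the real CARP state, I checked both by hand on the backup (interface names assumed):

ifconfig vtnet0 | grep carp    # actual WAN CARP state
wg show                        # which WG instances are live and handshaking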

#6
German - Deutsch / Re: WebGUI via WAN not possible
March 02, 2026, 02:31:51 PM
How does it look with master/backup and CARP, though?

Here at work we have quite a few OPNsense instances with only one WAN connection, and reply-to is not disabled on those.
#7
German - Deutsch / Re: WebGUI via WAN not possible
March 01, 2026, 09:27:42 AM
That was exactly it: disabling reply-to... I thought that was only necessary with multi-WAN.

Strange though: on another OPNsense installation on the same hardware it is not disabled and WAN-side access worked from the start. And that one has two WAN interfaces.
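
For anyone who finds this later: with reply-to active, replies to connections arriving on WAN are forced out via that interface's gateway instead of straight back to a host on the same segment, which is why local WAN-side access breaks. Roughly what the generated rule looks like, pf.conf style, with placeholder addresses:

pass in on igb0 reply-to (igb0 203.0.113.1) proto tcp from any to (igb0) port 443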

#8
German - Deutsch / WebGUI via WAN not possible
February 28, 2026, 01:16:30 PM
Good day everyone (o;

I have a small Intel appliance here with 2 * 2.5G and 2 * 10G ports and a fresh OPNsense 26.1.2 install with default settings.
WAN is on DHCP and LAN was left at 192.168.1.1/24.

From the LAN side everything works fine. But when I add a FW rule so that I can access the box WAN-side from the local LAN here, nothing happens, even though for testing I allowed everything WAN-side to the WAN address.

Nothing shows up in the FW logs either. Only when I explicitly allow e.g. just HTTP do I see log entries when I try to access via HTTPS.

Accessing the WAN IP from the LAN side works, so the OPNsense WebGUI is "listening" on the WAN side.


Does anyone have an idea what's going wrong here?
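
A quick way to see whether the requests even reach the WAN NIC (interface name assumed):

tcpdump -ni igb0 tcp port 443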



#9
Well, I just installed IPFire on my APU2D4 for testing... though I had to switch my home setup back to the SRX240B2, so I got my VPN back, and my full (but slow) 500 Mbps speed back.

Maybe I'll test how RouterOS performs... got one in the office for testing.

#10
Well, I know that of course... but setting it fixed to 1000TX causes flapping... 100TX doesn't...

Anyway... I have more problems, as IPsec site-to-site won't work due to socket errors... which worked flawlessly on Junos with just a few lines of config (o;

Having a look at Mikrotik RouterOS now to see if that runs on the APU2...
#11
Ah okay... seems that's not needed anymore... nor is the Phase 1 peer identification.

I thought IPsec would be much easier with OPNsense than with the Juniper SRX, but it isn't (o;

So far there is no way I can connect to a FritzBox, or connect to the box remotely with a macOS client...

#12
Good afternoon

As I am currently not successful in bringing up a VPN to a FritzBox (which could be set up easily with a Juniper SRX), I am now trying to follow this guide to set up a remote IPsec client:

https://wiki.opnsense.org/manual/how-tos/ipsec-road.html

There it says under user privileges to add the user to "User - VPN - IPsec xauth Dialin"... but this option is missing in 19.1.2... I only see:

GUI Status: IPsec
GUI Status: IPsec: Leasespage
GUI Status: IPsec: SAD
GUI Status: IPsec: SPD
GUI Status: System logs: IPsec VPN
GUI VPN: IPsec
GUI VPN: IPsec: Edit Phase 1
GUI VPN: IPsec: Edit Phase 2
GUI VPN: IPsec: Edit Pre-Shared Keys
GUI VPN: IPsec: Mobile
GUI VPN: IPsec: Pre-Shared Keys List


Is Xauth not allowed anymore in OPNsense?


thanks in advance
richard
#13
Hmm... I also see this in the logs when restarting IPsec:

Mar 3 13:32:32 ipsec_starter[98955]: charon (43576) started after 60 ms
Mar 3 13:32:32 ipsec_starter[42182]: no known IPsec stack detected, ignoring!
Mar 3 13:32:32 ipsec_starter[42182]: no KLIPS IPsec stack detected
Mar 3 13:32:32 ipsec_starter[42182]: no netkey IPsec stack detected
Mar 3 13:32:32 ipsec_starter[42182]: Starting strongSwan 5.7.2 IPsec [starter]...


Is there some package missing?
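
As far as I can tell, KLIPS and netkey are Linux kernel stacks, so those two lines may just be harmless probing noise on FreeBSD. A couple of stock strongSwan commands to check that charon itself is healthy:

ipsec version      # starter and charon version
ipsec statusall    # loaded configs and active SAs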
#14
Good day

I am trying to migrate a site-to-site VPN connection away from a FritzBox over to an SRX240H.

After adding the IPsec tunnel phase 1/2 and restarting IPsec, I see this in the logs of my 19.1.2 box:

Mar 3 13:10:14 charon: 04[NET] error writing to socket: Permission denied
Mar 3 13:10:14 charon: 16[NET] <con1|1> sending packet: from y.y.90.159[500] to x.x.53.70[500] (176 bytes)
Mar 3 13:10:14 charon: 16[IKE] <con1|1> sending retransmit 2 of request message ID 0, seq 1
Mar 3 13:10:06 charon: 04[NET] error writing to socket: Permission denied
Mar 3 13:10:06 charon: 16[NET] <con1|1> sending packet: from y.y.90.159[500] to x.x.53.70[500] (176 bytes)
Mar 3 13:10:06 charon: 16[IKE] <con1|1> sending retransmit 1 of request message ID 0, seq 1
Mar 3 13:10:02 charon: 04[NET] error writing to socket: Permission denied
Mar 3 13:10:02 charon: 05[NET] <con1|1> sending packet: from y.y.90.159[500] to x.x.53.70[500] (176 bytes)
Mar 3 13:10:02 charon: 05[ENC] <con1|1> generating ID_PROT request 0 [ SA V V V V V ]
Mar 3 13:10:02 charon: 05[IKE] <con1|1> initiating Main Mode IKE_SA con1[1] to x.x.53.70


Any FW rule I missed here?

I only have the basic IPsec rule and the allow-ESP rule towards WAN.
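
That "error writing to socket: Permission denied" looks like charon's own outbound IKE packets being dropped by the local packet filter. A minimal pf.conf-style sketch of what the tunnel needs on WAN, assuming igb0 is the WAN interface (the auto-generated VPN rules normally cover the inbound part):

pass out on igb0 proto udp from (igb0) to x.x.53.70 port { 500, 4500 }
pass out on igb0 proto esp from (igb0) to x.x.53.70
pass in on igb0 proto udp from x.x.53.70 to (igb0) port { 500, 4500 }
pass in on igb0 proto esp from x.x.53.70 to (igb0)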
#15
Hmmm... switching the WAN interface on my APU2D4 to fixed 100TX full duplex seems to solve this...
but then I won't have my 500 Mbps speed *sniff* (o;
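
The media change itself, for reference (stock FreeBSD ifconfig; igb0 assumed for the APU's Intel NICs):

ifconfig igb0 media 100baseTX mediaopt full-duplex
# and back to auto-negotiation:
ifconfig igb0 media autoselect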