Messages - Colani1200

#1
This would mean that it is possible to start/stop single phase2 SAs. Maybe this is part of my problem. In general, the tunnel was up after doing ipsec down con1; ipsec up con1 but some phase2 SAs were missing. Maybe I should specifically up and down them all one by one?
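
For reference, the stroke-based ipsec CLI does let you address individual child SAs; a rough sketch (connection name and unique IDs are just examples):

# list the IKE_SA and its CHILD_SAs together with their unique IDs
ipsec statusall con1
# restart the whole connection
ipsec down con1
ipsec up con1
# terminate a single CHILD_SA instance by its unique ID, or all instances of it
# (the quotes keep the shell away from the braces)
ipsec down 'con1{3}'
ipsec down 'con1{*}'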
#2
I really need to restart that specific tunnel only without causing interruptions on the others.

Quote from: zerwes on April 27, 2022, 10:49:24 AM
/usr/local/opnsense/scripts/ipsec/connect.py
/usr/local/opnsense/scripts/ipsec/disconnect.py

(these are IMHO called by the WebUI)

Thanks, this looks like what I need, will give that a try. These scripts should take e.g. con1 as argument, right?
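
In case anyone finds this later: my working assumption is that the connection name is the only argument, i.e. something like

/usr/local/opnsense/scripts/ipsec/disconnect.py con1
/usr/local/opnsense/scripts/ipsec/connect.py con1

(untested so far, so treat the argument format as a guess).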
#3
Hi all,

Before I start digging into the source code, can anybody tell me what exactly the "play/stop" buttons on the "VPN: IPsec: Status Overview" page trigger? I sometimes have problems with a specific connection and would like to restart it via monit and a script. I assumed that ipsec down con(x); ipsec up con(x) would work, but it seems that this is not enough to fully restart that specific tunnel. Apparently the buttons on the status page do more than that; they work fine for a tunnel restart.
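
For context, the kind of monit hook I have in mind is roughly this (raw monitrc notation; the check and restart scripts are placeholders I would still have to write):

# ipsec_con1 and both script paths are made-up names for illustration
check program ipsec_con1 with path "/usr/local/etc/check_ipsec_con1.sh"
    if status != 0 then exec "/usr/local/etc/restart_ipsec_con1.sh"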
#4
The box miraculously completed the upgrade at 0:00. Maybe because some services got restarted then?!
#5
I scheduled an upgrade from 21.7.3 to 21.7.7 and it is hanging at "Updating OPNsense repository catalogue...". In fact the "updates" tab in the GUI shows this twice:
***GOT REQUEST TO UPDATE***
Updating OPNsense repository catalogue...
OPNsense repository is up to date.
All repositories are up to date.
Updating OPNsense repository catalogue...

Any idea how to solve this? I don't want to risk bricking the box.
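
For the record, from a console shell plain pkg can at least show whether the package machinery is still responsive without committing to anything; this is generic FreeBSD pkg usage, not official OPNsense upgrade guidance:

pkg update -f     # force a refresh of the repository catalogue
pkg upgrade -n    # dry run, only lists what would be upgraded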
#6
After a lot of painful research and trial and error, I found the solution: set "disable dpd" and "vpn-idle-timeout none" on the Cisco side. I hope this will help anybody with the same problem.
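
For the record, on the ASA those two settings live roughly here (group-policy name and peer address are placeholders; exact syntax depends on the ASA version):

group-policy GP_OPNSENSE attributes
 vpn-idle-timeout none
!
tunnel-group 198.51.100.1 ipsec-attributes
 isakmp keepalive disable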
#7
I am currently migrating a bunch of IPsec tunnels from a different platform to OPNsense. I am having trouble with one particular tunnel to a customer running a Cisco ASA (current firmware 9.14.2-15). The tunnel uses IKEv2 with multiple Phase 2 entries. The symptoms look like this:

- After a fresh boot of OPNsense, the tunnel usually comes up fine with all phase 2 entries. Phase 2 entries disconnect after a while when there is no relevant traffic. In the log, it looks like this:
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[IKE] <con3|2> sending DELETE for ESP CHILD_SA with SPI c2e6e74f
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[ENC] <con3|2> generating INFORMATIONAL request 32 [ D ]
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[NET] <con3|2> sending packet: from xxx.xxx.xxx.xxx[500] to yyy.yyy.yyy.yyy[500] (80 bytes)
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[NET] <con3|2> received packet: from yyy.yyy.yyy.yyy[500] to xxx.xxx.xxx.xxx[500] (80 bytes)
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[ENC] <con3|2> parsed INFORMATIONAL response 32 [ D ]
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[IKE] <con3|2> received DELETE for ESP CHILD_SA with SPI cc1b7fcb
Jun 01 21:26:17 zzz.zzz.zzz.zzz charon[76496]: 09[IKE] <con3|2> CHILD_SA closed

Afterwards, it is possible to initiate the phase 2 in question from the OPNsense side by sending traffic, but not the other way round. Usually the OPNsense log stays completely silent, or you'll find something like this:
Jun 11 16:53:39 zzz.zzz.zzz.zzz charon[57826]: 15[IKE] <con9|1> traffic selectors aaa.aaa.aaa.aaa/32 bbb.bbb.bbb.bbb/24 === ccc.ccc.ccc.ccc/32 ddd.ddd.ddd.ddd/24 unacceptable
(Trying to ping from ccc.ccc.ccc.ccc/32 on the Cisco side to aaa.aaa.aaa.aaa/32 on the OPNsense side. bbb.bbb.bbb.bbb/24 is the left side and ddd.ddd.ddd.ddd/24 the right side in the phase 2 definition)

- When no traffic at all from either side is sent, the tunnel will disconnect completely. Afterwards it takes multiple tries to get it up again by sending traffic. During connection attempts, the log shows something like this:

Jun 16 09:17:26 zzz.zzz.zzz.zzz charon[69339]: 11[IKE] <con9|5> retransmit 1 of request with message ID 1
Jun 16 09:17:26 zzz.zzz.zzz.zzz charon[69339]: 11[NET] <con9|5> sending packet: from xxx.xxx.xxx.xxx[4500] to yyy.yyy.yyy.yyy[4500] (304 bytes)
Jun 16 09:17:26 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] <con9|5> checkin IKE_SA con9[5]
Jun 16 09:17:26 zzz.zzz.zzz.zzz charon[69339]: 03[NET] sending packet: from xxx.xxx.xxx.xxx[4500] to yyy.yyy.yyy.yyy[4500]
Jun 16 09:17:26 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] <con9|5> checkin of IKE_SA successful
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] checkout IKEv2 SA with SPIs d658a65316b4cd4a_i 1f37e0b747a28c00_r
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] IKE_SA con9[5] successfully checked out
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 11[IKE] <con9|5> retransmit 2 of request with message ID 1
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 11[NET] <con9|5> sending packet: from xxx.xxx.xxx.xxx[4500] to yyy.yyy.yyy.yyy[4500] (304 bytes)
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] <con9|5> checkin IKE_SA con9[5]
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 03[NET] sending packet: from xxx.xxx.xxx.xxx[4500] to yyy.yyy.yyy.yyy[4500]
Jun 16 09:17:33 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] <con9|5> checkin of IKE_SA successful
Jun 16 09:17:46 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] checkout IKEv2 SA with SPIs d658a65316b4cd4a_i 1f37e0b747a28c00_r
Jun 16 09:17:46 zzz.zzz.zzz.zzz charon[69339]: 11[MGR] IKE_SA con9[5] successfully checked out


Any idea how to get this going? Is there a way to force the tunnel to stay up even when there is no traffic (preferably without a ping workaround)? Once it is up, it seems to work fine...
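
For completeness, the generic strongSwan knobs for this behaviour are the following (ipsec.conf notation, assuming the legacy stroke backend; I have not verified which of these the OPNsense GUI actually exposes):

conn con9
    auto = start            # (re)initiate the connection when the daemon loads it
    dpdaction = restart     # re-initiate when dead peer detection declares the peer gone
    closeaction = restart   # re-initiate a CHILD_SA that the peer closes
    # dpddelay and keyingtries can be tuned as well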
#8
Are you using IKEv2?
#9
Guess what, yesterday I reconfigured the peer to use DynDNS again. Today the IP address has changed, but the tunnel endpoint entry of the manual SPD in the database still points to the old IP address. A restart of the IPsec service doesn't help, only a reboot. Looks like it is time for a bug report.
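
Side note for anyone debugging the same thing: the kernel SPD can also be dumped straight from a shell to see which endpoint each policy currently points at:

setkey -DP    # dump the security policy database; setkey -D dumps the SAs instead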
#10
Thanks for testing. Somehow my OPNsense was in a really messed up state.

- Restarted IPsec service: SPD entries still there.
- Stopped IPsec service: SPD entries still there.
- Deleted the whole IPsec tunnel and restarted the service again: SPD entries still there. What?  :o
- Rebooted:  SPD entries gone.
- Recreated the IPsec tunnel from scratch: Everything correct and working as expected!
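
In case it saves someone a reboot: the kernel tables can in principle also be flushed by hand (assuming nothing re-installs the stale policies right away):

setkey -F     # flush the security association database
setkey -FP    # flush the security policy database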

Not sure why it was in this state. Granted, I did do a lot of testing and configuration changes, and the peer was configured with a DNS entry (DynDNS) before, so that mysterious IP address might have been a leftover from that. But of course this should not happen; we're not talking about Windows here, are we...?

Anyway, thank you so much for your help so far! You pointed me in the right direction, so now I can continue the migration. This was only a test setup; the real one has multiple phase 2 entries, plus there are other, more complex tunnels. I'll see how far I can get without having to show up here again. If I encounter a situation like this again, I'll try to figure out if/how the problem can be reproduced.
#11
Quote from: goodomens42 on April 14, 2021, 06:30:41 PM

First thought:
Is it possible your firewall masquerades when forwarding to the OPNSense?

No, it doesn't, it's just simple routing. Masquerading to an IP from NETWORK_D would probably even make it work, because then the source would be in a network that is directly attached to the OPNsense. But that is a rather nasty workaround. I'd rather understand what's going on, otherwise this behaviour might become a showstopper in a new scenario.

Quote
Just for me to understand: The endpoint IP is correct when entering NETWORK_D as SPD and "mysterious" when entering NETWORK_A or did you enter both and are getting different endpoints ?
I added both NETWORK_D and NETWORK_A, comma separated, at the same time. They get different endpoints in the SPD database: the one for NETWORK_D is perfectly fine, while the one for NETWORK_A is something crazy. Maybe you can reproduce this on one of your installations? Try adding a fictional, not directly attached network as a manual SPD entry to an existing tunnel and check the related endpoint IP in the SPD database...
#12
That ping works. I did some more tests in that direction and I need to clarify things. To be honest, my setup is a bit more complex than I first described. I tried to simplify it because I thought the details were not relevant, but apparently they are. In fact, my setup looks like this:


  Client LAN                  OPNsense                                                                Customer site
 -------------              -------------         NAT         ---------------------       IPsec       -------------
 | Network A | --Firewall-> | Network D | ------------------> | IP from Network B | ----------------> | Network C |
 -------------              -------------                     ---------------------                   -------------


As you can see, the OPNsense does not directly reside in the client LAN (Network A), but in the DMZ of another firewall (network D).

What I tried now: I added network D as a manual SPD in phase 2 and added a firewall rule accordingly. I can ping network C from network D without problems and NAT is working properly. I also checked the tunnel endpoint IP in the Security Policy Database (SPD) and it is correct for Network D. Only the entry for network A has this mysterious tunnel endpoint IP.

To sum this up: It looks like the OPNsense has a problem with a manual SPD entry when that network is not directly connected to it.

Any ideas?
#13
Now this is interesting. The SPD entry is there, but the tunnel endpoint IP is totally wrong. I really have no idea where that IP is coming from; this is the only tunnel currently configured on the OPNsense.

The peer is behind NAT-T; could that cause confusion somewhere?
#14
Quote from: goodomens42 on April 13, 2021, 09:07:08 PM

Did you also check if there are any other routes pointing to NETWORK_C ?

I did, there aren't any.

Quote
It might also be an idea, to turn off automatic addition of routes under "VPN -> IPSEC -> Advanced Settings", this will enforce policy based routing.

Tried that right now, it didn't help. Still the only entry I have in the firewall log is an incoming allow on the LAN interface from (un-NATed) Network_A to Network_C.

Like I said in my first post, NAT does work when I just change the interface from IPsec to WAN in my NAT rule. To me this looks like the manual SPD entry doesn't get evaluated and traffic is not entering the tunnel.
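
One way I can think of to confirm that would be to watch the traffic while pinging; if the SPD entry were matched, the packets should turn up on the IPsec side rather than leaving the WAN in the clear. Interface names and addresses below are placeholders:

tcpdump -ni enc0 host 192.0.2.10             # traffic handed to the IPsec stack
tcpdump -ni em0 esp and host 203.0.113.5     # ESP towards the peer on the WAN side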
#15
Thanks for taking the time to look at it, goodomens42. My outbound NAT rule looked like this:

Interface:         IPsec
Source:            Network_A
Source port:       *
Destination:       NETWORK_C
Destination port:  *
NAT address:       VIRTUAL_IP_IN_NETWORK_B
NAT port:          *
Static port:       NO


I replaced "Network_A" with "any" as suggested, but it didn't help.

"Install policy" in phase1 is checked, I verified that.

I think a firewall rule on the IPsec interface should not be necessary because that is covered by an autogenerated rule (screenshot attached). Plus, I don't see any relevant traffic being blocked in the log. Nevertheless, I created the rule as you suggested, but no success.
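
For what it's worth, the rules pf actually loaded (including the autogenerated ones) can be listed on the console, which at least shows whether the outbound NAT rule is present and whether its counters move:

pfctl -s nat     # list the loaded NAT rules
pfctl -vs nat    # same, with evaluation/match counters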

Any other ideas?