Messages - Timmey22

#1
Hello everyone,

I have configured a route-based IPsec site-to-site tunnel between two OPNsense firewalls.
The whole thing runs over IPv6 using FQDNs, and the connection does come up.
When the tunnel is initially established, I see the correct IPv6 address as the remote host in the status overview, and the tunnel address of the remote side is reachable via ping (IPv4).
After about 5-10 pings, however, this stops abruptly, and the remote host changes from the peer's IPv6 address to the IPv4 address of the WAN interface (both private IP addresses in each case). Via the firewall rules I have blocked inbound IPv4 ISAKMP, NAT-T, and ESP on the WAN interface.
I have now restarted the tunnel, and the phenomenon occurs again, this time with the IPv4 addresses of the respective firewalls' peer-to-peer OpenVPN tunnel.
In addition, I cannot find any lines in the log that indicate communication over port 500, even though NAT-T is explicitly disabled.
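
A quick way to check which address family and UDP port the IKE traffic is actually using is to capture on the WAN interface; a minimal sketch, assuming em0 is the WAN interface:

# IKE normally uses UDP 500, or UDP 4500 once NAT traversal kicks in
tcpdump -ni em0 'udp port 500 or udp port 4500'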

An excerpt from the responder's config:
conn con3
  aggressive = no
  fragmentation = yes
  keyexchange = ikev2
  mobike = yes
  reauth = yes
  rekey = yes
  forceencaps = no
  installpolicy = no
  type = tunnel
  dpdaction = restart
  dpddelay = 10s
  dpdtimeout = 60s

  left = 2a00:6020:1000:[omitted]:2983:1356
  right = initiate.mydomain.de

  leftid = respondonly.mydomain.de
  ikelifetime = 86400s
  lifetime = 28800s
  ike = aes128gcm16-sha256-modp2048!
  leftauth = pubkey
  rightauth = pubkey
  leftcert = /usr/local/etc/ipsec.d/certs/cert-3.crt
  leftsendcert = always
  rightca = "/C=DE/ST=NRW/L=COE/O=mydomain/emailAddress=my@mail.com/CN=OPN-01-COE-CA/"
  rightid = initiate.mydomain.de
  reqid = 8
  rightsubnet = 0.0.0.0/0
  leftsubnet = 0.0.0.0/0
  esp = aes128gcm16!
  auto = add

Update: I was able to solve the problem myself. To do this, I had to disable the IKEv2 MOBIKE protocol in phase 1.
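
In the generated strongSwan config above this corresponds to flipping a single line; a sketch of the changed excerpt:

conn con3
  # disabling MOBIKE keeps strongSwan from migrating the IKE SA to the
  # (private) IPv4 WAN address after the tunnel is established
  mobike = no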
#2
TL;DR:
A route-based IPsec tunnel (IKEv2 over IPv6) does not work when using FQDNs or "::" as the remote gateway and/or the Dynamic gateway option. Is this behaviour expected?

Hi,

I am currently trying to set up an IPsec IKEv2 IPv6 site-to-site tunnel with a route-based phase 2 (IPv4) between two OPNsense firewalls.
I have already established an OpenVPN peer-to-peer tunnel via UDP6, which works fine, but I want to rely primarily on IPsec and only fall back to OpenVPN via BGP when the IPsec tunnel is not established.
As you can see in the attached image, both firewalls have an IPv6 address and a private or carrier-grade NAT IPv4 address configured on their WAN interface.
I have configured IPsec on both sides and experience the following problems when using DynDNS FQDNs on both ends:

Scenario 1:
Left side:
Connection method: start immediate
Remote gateway: FQDN of other peer
Dynamic gateway: unchecked

Right side:
Connection method: respond only
Remote gateway: FQDN of other peer
Dynamic gateway: unchecked

Result: The tunnel is up, and the remote tunnel IP is reachable via ICMP.

Scenario 2:
Left side:
Connection method: start immediate
Remote gateway: FQDN of other peer
Dynamic gateway: unchecked

Right side:
Connection method: respond only
Remote gateway: FQDN of other peer
Dynamic gateway: checked

Result: On both peers phase 2 is up, and I can see the entries in the security policy database. The tunnel IP of the right peer is not reachable via ICMP; on both peers I can see that bytes are transmitted from the left side to the right side, but the right side is not sending any bytes.

In the log of the right-side peer I see this message: <con3|2> querying policy 0.0.0.0/0 === 0.0.0.0/0 out failed, not found

Comparing the /usr/local/etc/ipsec.conf of scenarios 1 and 2, only the line "rightallowany = yes" was added on the right-side peer.
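
For reference, the security policy database mentioned above can be dumped directly on the FreeBSD-based shell; a minimal sketch:

# dump the security policy database (SPD); the tunnel should show up
# as 0.0.0.0/0 <-> 0.0.0.0/0 entries
setkey -DP
# dump the negotiated security associations (SAD) for phase 2
setkey -D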


Scenario 3:
Left side:
Connection method: start immediate
Remote gateway: FQDN of other peer
Dynamic gateway: unchecked

Right side:
Connection method: respond only
Remote gateway: ::
Dynamic gateway: checked

Result: On both peers phase 2 is up, but I can only see the entries in the security policy database on the left-side peer, not on the right side. The tunnel IP of the right peer is not reachable via ICMP; on both peers I can see that bytes are transmitted from the left side to the right side, but the right side is not sending any bytes.

Comparing the /usr/local/etc/ipsec.conf of scenarios 2 and 3, only the line "right = " was changed on the right-side peer from the FQDN to "::".

Scenario 4:
Left side:
Connection method: start immediate
Remote gateway: FQDN of other peer
Dynamic gateway: unchecked

Right side:
Connection method: respond only
Remote gateway: ::
Dynamic gateway: unchecked

Result: On both peers phase 2 is up, but I can only see the entries in the security policy database on the left-side peer, not on the right side. The tunnel IP of the right peer is not reachable via ICMP; on both peers I can see that bytes are transmitted from the left side to the right side, but the right side is not sending any bytes.

Comparing the /usr/local/etc/ipsec.conf of scenarios 3 and 4, only the line "rightallowany = yes" was removed on the right-side peer.

On the right-side peer I can see in the log: querying policy 0.0.0.0/0 === 0.0.0.0/0 in failed, not found
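
For reference, the config comparisons in these scenarios can be reproduced by snapshotting the generated file after each GUI change and diffing the snapshots; the file names here are illustrative:

cp /usr/local/etc/ipsec.conf /tmp/ipsec.conf.scenario3
# ... toggle the option in the GUI and apply ...
cp /usr/local/etc/ipsec.conf /tmp/ipsec.conf.scenario4
diff -u /tmp/ipsec.conf.scenario3 /tmp/ipsec.conf.scenario4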


Since the left side will receive a new IPv6 address every day, I would like to allow any address to connect to the right-side peer, but this is not working for me at the moment. Has anyone experienced this as well, or is this behaviour expected?
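
In plain strongSwan terms, accepting a peer from any address while still pinning its identity would look roughly like the excerpt below; this is a hand-written sketch, not what the OPNsense GUI generates:

conn con3
  # accept the initiator from any address, but require it to prove
  # the expected identity with its certificate
  right = %any
  rightid = initiate.mydomain.de
  rightauth = pubkey
  auto = add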
#3
The BGP neighborship is established between the tunnel IPs, so no eBGP multihop is used here. In comparison, for other Ethernet networks with other BGP peers I do not need any gateway, and still the whole network is listed in netstat.
I have defined a gateway with the BGP peer's address and used it in different rules and in the interface configuration, but so far I have not seen any effect in the netstat -rn4 output.
A little hint would be much appreciated ;)

Edit: I figured it out myself; I had not displayed the advanced options in WireGuard...
#4
Hi,

I recently set up two OPNsense 21.1.3 instances with a WireGuard site-to-site tunnel, and tunnel establishment works like a charm. Since I am currently just testing WireGuard and already have a connection between those two sites, I use BGP to exchange routes over all available paths.
I configured WireGuard on both ends with "Disable routes" enabled and 0.0.0.0/0 as the allowed networks for the endpoint (see the sketch after the ping output below). WireGuard successfully establishes the tunnel; however, the two sites cannot reach each other through it (for example via ping) and also cannot establish a routing neighborship over this connection.
I configured each interface's IP address in OPNsense based on the WireGuard configuration (in this case 172.31.32.1 and .2, /24).
After studying the routing table, I noticed that the tunnel subnet was not installed at all and that the ping to the remote tunnel IP was forwarded via the default route:

root@OPN-01:~ # netstat -rn4
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            192.168.10.1       UGS         em0
10.54.112.0/24     10.54.112.1        UGS      ovpnc1
10.54.112.1        link#12            UH       ovpnc1
10.54.112.46       link#12            UHS         lo0
172.31.31.2        link#11            UH          lo1
172.31.31.2/32     127.0.0.1          UGSB        lo0
172.31.32.2        link#13            UH          wg0

root@OPN-01:~ # ping 172.31.32.1
PING 172.31.32.1 (172.31.32.1): 56 data bytes
92 bytes from 192.168.10.1: Redirect Host(New addr: 192.168.10.254)
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
4  5  00 0054 91d7   0 0000  3f  01 52f5 192.168.10.20  172.31.32.1
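
For reference, in plain WireGuard terms the setup described above corresponds roughly to the excerpt below; this is a hand-written sketch, not the file OPNsense generates, and the keys, port, and endpoint are placeholders:

[Interface]
# the tunnel address (172.31.32.2/24) is assigned on the OPNsense interface
PrivateKey = <omitted>
ListenPort = 51820

[Peer]
PublicKey = <omitted>
Endpoint = <remote WAN address>:51820
# 0.0.0.0/0 lets any IPv4 source pass the cryptokey check, so BGP-learned
# routes can use the tunnel; with "Disable routes" enabled no kernel route
# is installed from this, which is why the tunnel subnet was missing above
AllowedIPs = 0.0.0.0/0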

After manually adding the route to this tunnel subnet on both appliances, I could reach the remote tunnel IP via ICMP, and the BGP session was established:

root@OPN-01:~ # route add 172.31.32.0/24 -iface wg0
add net 172.31.32.0: gateway wg0

root@OPN-01:~ # ping 172.31.32.1
PING 172.31.32.1 (172.31.32.1): 56 data bytes
64 bytes from 172.31.32.1: icmp_seq=0 ttl=64 time=66.344 ms
64 bytes from 172.31.32.1: icmp_seq=1 ttl=64 time=39.939 ms
64 bytes from 172.31.32.1: icmp_seq=2 ttl=64 time=40.095 ms
^C
--- 172.31.32.1 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 39.939/48.793/66.344/12.411 ms
root@OPN-01:~ # netstat -rn4
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            192.168.10.1       UGS         em0
10.54.112.0/24     10.54.112.1        UGS      ovpnc1
10.54.112.1        link#12            UH       ovpnc1
10.54.112.46       link#12            UHS         lo0
10.255.10.0/24     172.31.32.1        UG1         wg0
10.255.11.0/24     172.31.32.1        UG1         wg0
10.255.255.24/30   172.31.32.1        UG1         wg0
10.255.255.26/32   172.31.32.1        UG1         wg0
10.255.255.28/30   172.31.32.1        UG1         wg0
100.64.100.0/30    172.31.32.1        UG1         wg0
100.64.255.0/30    172.31.32.1        UG1         wg0
100.64.255.4/30    172.31.32.1        UG1         wg0
100.64.255.8/30    172.31.32.1        UG1         wg0
100.64.255.12/30   172.31.32.1        UG1         wg0
100.65.100.0/30    link#3             U          vmx1
100.65.100.2       link#3             UHS         lo0
100.65.200.0/30    link#10            U      vmx1_vla
100.65.200.2       link#10            UHS         lo0
127.0.0.1          link#5             UH          lo0
172.31.31.2        link#11            UH          lo1
172.31.31.2/32     127.0.0.1          UGSB        lo0
172.31.32.0/24     wg0                US          wg0
172.31.32.2        link#13            UH          wg0
192.168.1.0/24     172.31.32.1        UG1         wg0
192.168.10.0/24    link#1             U           em0
192.168.10.20      link#1             UHS         lo0
192.168.11.0/24    172.31.32.1        UG1         wg0
192.168.20.0/24    link#9             U      vmx0_vla
192.168.20.253     link#9             UHS         lo0
192.168.21.0/24    172.31.32.1        UG1         wg0
192.168.30.0/24    link#8             U      vmx0_vla
192.168.30.253     link#8             UHS         lo0
192.168.79.0/24    172.31.32.1        UG1         wg0
192.168.80.0/24    172.31.32.1        UG1         wg0
192.168.81.0/27    172.31.32.1        UG1         wg0
192.168.81.0/24    172.31.32.1        UG1         wg0
192.168.90.0/24    172.31.32.1        UG1         wg0
192.168.168.0/24   172.31.32.1        UG1         wg0
192.168.169.0/24   172.31.32.1        UG1         wg0
192.168.170.0/24   172.31.32.1        UG1         wg0
192.168.222.1/32   172.31.32.1        UG1         wg0
192.168.222.128/25 172.31.32.1        UG1         wg0
192.168.255.0/30   172.31.32.1        UG1         wg0
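
Since a manually added route does not survive a reboot, one option would be a small boot-time hook that re-adds it; a sketch, assuming OPNsense's rc.syshook.d start hooks are available (the path and file name are assumptions):

#!/bin/sh
# e.g. /usr/local/etc/rc.syshook.d/start/99-wg0route, made executable;
# (re-)adds the WireGuard tunnel route; fails harmlessly if it exists
route add -net 172.31.32.0/24 -iface wg0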

Since I have not found any other topic regarding this problem, I am curious whether anyone else has stumbled upon it too, or whether you are aware of it.