19.7 Legacy Series / AWS VPN connectivity issue
« on: October 02, 2019, 08:09:04 pm »
I have an OPNsense firewall (version 19.7) with two redundant ISPs in a gateway group, and an IPsec VPN connection from it to an AWS VPC using the AWS managed VPN. Phase 1 and Phase 2 establish with no issues. The problem is connectivity between hosts on either side of the VPN. NAT Traversal is disabled.
It appears that the firewall is handling inbound and outbound connection initiation differently, so I'm hoping to get some guidance on what to alter to fix the issue.
AWS subnet: 172.31.32.0/20
Local subnet: 192.168.22.0/24
AWS test host: 172.31.42.168
Local test host: 192.168.22.253
If I initiate a ping from the AWS test host -> Local test host, the ping succeeds.
> ping 192.168.22.253
Pinging 192.168.22.253 with 32 bytes of data:
Reply from 192.168.22.253: bytes=32 time=197ms TTL=127
Reply from 192.168.22.253: bytes=32 time=197ms TTL=127
State dump for that ping appears as follows:
Int Proto Source -> Router -> Destination State
all icmp 172.31.42.168:1 -> 192.168.22.253:1 0:0
all icmp 192.168.22.253:1 <- 172.31.42.168:1 0:0
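For reference, the same states can be inspected from a shell with pfctl, the standard pf control utility (the output columns differ slightly from the GUI state view):
# list current states, filtered to the AWS test host
pfctl -s state | grep 172.31.42.168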
If I immediately ping Local test host -> AWS test host before the states expire, the ping succeeds.
If I ping Local test host -> AWS test host with no existing state, the ping fails.
> ping 172.31.42.168
Pinging 172.31.42.168 with 32 bytes of data:
Request timed out.
Request timed out.
State dump for that ping appears as follows:
Int Proto Source -> Router -> Destination State
all icmp 172.31.42.168:1 <- 192.168.22.253:1 0:0
all icmp 58.xxx.xxx.xxx:48865 (192.168.22.253:1) -> 172.31.42.168:48865 0:0
The "58.xxx.xxx.xxx" field changes back and forth between the two different external IP addresses as they are load-shared.
I attempted to disable outbound NAT for those connections by changing the outbound NAT mode to hybrid, then creating a manual "No NAT" rule with a source of 192.168.22.0/24 and a destination of 172.31.32.0/20. That change made no difference.
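For what it's worth, if that no-NAT rule were taking effect, I would expect the generated ruleset (e.g. /tmp/rules.debug) to contain something along these lines; the em0 WAN interface name here is just a placeholder:
# expected generated no-NAT rule (interface name is a placeholder)
no nat on em0 from 192.168.22.0/24 to 172.31.32.0/20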
Where else can I look to resolve this?