Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - onnieoneone

#1
Yep, it's unchecked. I thought this should've helped me too.
#2
So, to answer my own question:

Time has moved on and it looks like whatever bug existed in OPNsense has been fixed.

Now that I'm running 26.1, I can connect OPNsense <-> VyOS nicely with the config I presented earlier.

On the USG-3 VyOS there are of course no updates or fixes, it being legacy gear. But for future archaeologists, or cheapskates like me: the answer was to configure a route-based IPsec tunnel, but also to add an IP address for the local tunnel endpoint to the VyOS vti interface, so it now looks like this:

# configure
# set interface vti vti0 address 10.111.0.2/30
# commit; save
# exit
# ip a s dev vti0
12: vti0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1436 qdisc noqueue state UNKNOWN
    link/ipip 192.168.188.20 peer A.A.A.A
    inet 10.111.0.2/30 brd 10.111.0.3 scope global vti0
       valid_lft forever preferred_lft forever

You need to be careful, as the vti0 name could be different (it seems to increment every time you use the UI to create a new VPN config), but that's better left for a VyOS forum.
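
For anyone replaying this later, a quick sanity check once both ends are numbered (addresses from the config above; the OPNsense end of the tunnel is 10.111.0.1 per my other post, so adjust to taste). From the VyOS side:

# ping -c 2 10.111.0.1

If that answers, the VTI itself is up; "show vpn ipsec sa" on the VyOS/EdgeOS side should then show the ESP byte/packet counters incrementing as traffic passes.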
#3
To answer my own question:

It looks to me like a problem with connection tracking.

The ICMP traffic works because there is a rule allowing it in either direction, so no state is needed.

The SSH outbound fails because it disappears into the IPsec SPD _before_ it hits enc0, and so gets tracked on lagg0_vlan1018. The returning traffic, which _does_ appear on enc0, therefore has no matching state and gets dropped unless I put some funky firewall rules in place.

Of course the inbound SSH gets tracked on the enc0 interface and so is allowed.

At least this is my theory. That is the theory that I have and which is mine and what it is, too.

The solution in the end was to use VTI which _does_ enter and exit a virtual interface and so the firewall rules are all good.
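
For completeness, the "funky" workaround rules were of roughly this shape (a sketch using this thread's subnets, reconstructed from memory, and not a recommendation): stateless matching with any TCP flags lets the asymmetric reply back in on enc0 without needing a state that was never created there.

pass in quick on enc0 inet proto tcp from 10.2.4.0/24 port 22 to 10.1.8.0/24 flags any no state

Note the "flags any": without it, pf's default "flags S/SA" would only match the initial SYN, so the returning SYN-ACK would still be dropped. This kind of rule works but defeats stateful filtering for the flow, which is why VTI is the nicer answer.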
#4
Did I find an OPNsense bug?

Or, more likely, am I misunderstanding something subtle about how pf and IPsec work together?

I'll lay out below, in all its gory detail, why I think pf is not tracking connections correctly for policy-based IPsec connections.

Here goes.

First off, I think I have set up the IPsec side correctly for a site-to-site VPN between my OPNsense router (bang up to date at 26.1.2) at site1 (IPv4 WAN address behind CPE NAT) and an old Unifi Security Gateway 3 (basically Vyatta) at site2 (also IPv4 behind NAT).

I see the Child SAs looking good, and even ICMP traffic passing from either side to the other and back.

The subnets I am tunnelling at the moment are 10.1.8.0/24 at site1 and 10.2.4.0/24 (and a few others) at site2; that is, all 10.1.X.Y hosts are at site1 and all 10.2.X.Y hosts are at site2.

I have a Floating firewall ICMP allow rule that creates this pf rule on the OPNsense at site1:

site1-opnsense # grep 'Allow RFC' /tmp/rules.debug
pass log quick inet proto icmp from $rfc1918_ipv4_nets to $rfc1918_ipv4_nets keep state label "f8d0246d9d3d9c9afd3f531459d75811" # Allow RFC1918 networks ICMP access
site1-opnsense # pfctl -t rfc1918_ipv4_nets -T show
   10.0.0.0/8
   172.16.0.0/12
   192.168.0.0/16

When I try to ping between a couple of hosts (10.1.8.27 at site1 and 10.2.4.43 at site2) everything looks good.

Firstly, a ping from site1 to site2:

site1-host # ping -c 1 -I 10.1.8.27 10.2.4.43
PING 10.2.4.43 (10.2.4.43) from 10.1.8.27 : 56(84) bytes of data.
64 bytes from 10.2.4.43: icmp_seq=1 ttl=62 time=296 ms

--- 10.2.4.43 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 296.325/296.325/296.325/0.000 ms

site1-opnsense # tail -f /var/log/filter/latest.log|grep 10.2.4.43
<134>1 2026-02-21T17:19:51+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="50416"] 214,,,f8d0246d9d3d9c9afd3f531459d75811,lagg0_vlan1018,match,pass,in,4,0x0,,64,30186,0,DF,1,icmp,84,10.1.8.27,10.2.4.43,datalength=64
<134>1 2026-02-21T17:19:51+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="50417"] 214,,,f8d0246d9d3d9c9afd3f531459d75811,igb3,match,pass,in,4,0x0,,63,24807,0,none,1,icmp,84,10.2.4.43,10.1.8.27,datalength=64


site1-opnsense # tcpdump -i enc0 -n -vvvv -tttt host 10.2.4.43
tcpdump: listening on enc0, link-type ENC (OpenBSD encapsulated IP), snapshot length 262144 bytes
2026-02-21 17:19:50.447230 (authentic,confidential): SPI 0xc90a3520: IP (tos 0x0, ttl 63, id 30186, offset 0, flags [DF], proto ICMP (1), length 84, bad cksum a476 (->a576)!)
    10.1.8.27 > 10.2.4.43: ICMP echo request, id 52438, seq 1, length 64
2026-02-21 17:19:50.742370 (authentic,confidential): SPI 0xc233a0d7: IP (tos 0x0, ttl 63, id 24807, offset 0, flags [none], proto ICMP (1), length 84)
    10.2.4.43 > 10.1.8.27: ICMP echo reply, id 52438, seq 1, length 64
^C
2 packets captured
2 packets received by filter
0 packets dropped by kernel

And again, a trial ping from site2 to site1:

site2-host # ping -c 1 -I 10.2.4.43 10.1.8.27
PING 10.1.8.27 (10.1.8.27) from 10.2.4.43 : 56(84) bytes of data.
64 bytes from 10.1.8.27: icmp_seq=1 ttl=62 time=298 ms

--- 10.1.8.27 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 298.207/298.207/298.207/0.000 ms

site1-opnsense # tail -f /var/log/filter/latest.log|grep 10.2.4.43
<134>1 2026-02-21T17:21:54+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="50646"] 214,,,f8d0246d9d3d9c9afd3f531459d75811,igb3,match,pass,in,4,0x0,,63,54800,0,DF,1,icmp,84,10.2.4.43,10.1.8.27,datalength=64
<134>1 2026-02-21T17:21:54+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="50647"] 214,,,f8d0246d9d3d9c9afd3f531459d75811,lagg0_vlan1018,match,pass,out,4,0x0,,62,54800,0,DF,1,icmp,84,10.2.4.43,10.1.8.27,datalength=64

site1-opnsense # tcpdump -i enc0 -n -vvvv -tttt host 10.2.4.43
tcpdump: listening on enc0, link-type ENC (OpenBSD encapsulated IP), snapshot length 262144 bytes
2026-02-21 17:21:53.976534 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 54800, offset 0, flags [DF], proto ICMP (1), length 84)
    10.2.4.43 > 10.1.8.27: ICMP echo request, id 10057, seq 1, length 64
2026-02-21 17:21:53.977528 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 43093, offset 0, flags [none], proto ICMP (1), length 84, bad cksum b20b (->b30b)!)
    10.1.8.27 > 10.2.4.43: ICMP echo reply, id 10057, seq 1, length 64
^C
2 packets captured
2 packets received by filter
0 packets dropped by kernel

So everything looks great: traffic passes and the firewall is working. This even works for my other child SA networks (10.2.1.0/24, 10.2.2.0/24, etc.).


The problem comes with some other traffic: when I try to SSH between the sites, it just doesn't work.

First, my SAs:
site1-opnsense # swanctl --list-sas
no files found matching '/usr/local/etc/strongswan.opnsense.d/*.conf'
ca24ede1-a222-4a6e-91dc-2c5720143fe6: #4, ESTABLISHED, <IKE-STUFF>
  local  '<LOCAL-UNNATTED-WAN-IP>' @ <LOCAL-NATTED-WAN-IP>[4500]
  remote '<REMOTE-NATTED-WAN-IP>' @ <REMOTE-UNNATTED-WAN-IP>[4500]
  <CRYPTO-CIPHERS-AND-STUFF>
  established 8811s ago, rekeying in 4253s
  77828303-0d3c-4568-8a06-7093234cd740: #15, reqid 1, INSTALLED, TUNNEL-in-UDP, ESP:<CRYPTO-CIPHERS-AND-STUFF>
    installed 731s ago, rekeying in 2610s, expires in 3230s
    in  c4e9cb8f,   1008 bytes,    12 packets,    55s ago
    out cbb6212d,   1824 bytes,    12 packets,    55s ago
    local  10.1.8.0/24
    remote 10.2.2.0/24
ca24ede1-a222-4a6e-91dc-2c5720143fe6: #6, ESTABLISHED, <IKE-STUFF>
  local  '<LOCAL-UNNATTED-WAN-IP>' @ <LOCAL-NATTED-WAN-IP>[4500]
  remote '<REMOTE-NATTED-WAN-IP>' @ <REMOTE-UNNATTED-WAN-IP>[4500]
  <CRYPTO-CIPHERS-AND-STUFF>
  established 8811s ago, rekeying in 5563s
  77828303-0d3c-4568-8a06-7093234cd740: #14, reqid 3, INSTALLED, TUNNEL-in-UDP, ESP:<CRYPTO-CIPHERS-AND-STUFF>
    installed 819s ago, rekeying in 2586s, expires in 3142s
    in  c68d83fa,   1092 bytes,    13 packets,    55s ago
    out c9f3d367,   1976 bytes,    13 packets,    56s ago
    local  10.1.8.0/24
    remote 10.2.0.0/24
  77828303-0d3c-4568-8a06-7093234cd740: #16, reqid 4, INSTALLED, TUNNEL-in-UDP, ESP:<CRYPTO-CIPHERS-AND-STUFF>
    installed 248s ago, rekeying in 3023s, expires in 3713s
    in  cfadb11d,    336 bytes,     4 packets,    55s ago
    out c0a8bbd4,    608 bytes,     4 packets,    56s ago
    local  10.1.8.0/24
    remote 10.2.8.0/24
ca24ede1-a222-4a6e-91dc-2c5720143fe6: #5, ESTABLISHED, <IKE-STUFF>
  local  '<LOCAL-UNNATTED-WAN-IP>' @ <LOCAL-NATTED-WAN-IP>[4500]
  remote '<REMOTE-NATTED-WAN-IP>' @ <REMOTE-UNNATTED-WAN-IP>[4500]
  <CRYPTO-CIPHERS-AND-STUFF>
  established 8811s ago, rekeying in 4473s
  77828303-0d3c-4568-8a06-7093234cd740: #13, reqid 2, INSTALLED, TUNNEL-in-UDP, ESP:<CRYPTO-CIPHERS-AND-STUFF>
    installed 830s ago, rekeying in 2457s, expires in 3131s
    in  c4bad24d,   3525 bytes,    25 packets,    55s ago
    out ce6bb79c,   4384 bytes,    22 packets,    55s ago
    local  10.1.8.0/24
    remote 10.2.4.0/24

So my SPIs for this child SA are c4bad24d and ce6bb79c, and in the packet captures it looks like the right traffic is hitting the right SPI.

I created some test firewall rules, one right below the other, in the (old) UI, and here's the resulting pf:

site1-opnsense # grep TEST /tmp/rules.debug
pass log quick inet proto tcp from $temp_10_1_8_27 to $temp_10_2_4_43 port {22} keep state label "04dcefb83dcccd488cdddb48c535eab9" # TEST IPSec site1 to site2 SSH access
pass log quick inet proto tcp from $temp_10_2_4_43 to $temp_10_1_8_27 port {22} keep state label "fc037937eb7b02d7efbd5244c7b9c70f" # TEST IPSec site2 to site1 SSH access

site1-opnsense # pfctl -t temp_10_1_8_27 -T show
   10.1.8.27
site1-opnsense # pfctl -t temp_10_2_4_43 -T show
   10.2.4.43

So here's a successful SSH from site2 to site1 (I just hit ctrl+c when it asks for fingerprint validation):

site2-host # ssh 10.1.8.27
The authenticity of host '10.1.8.27 (10.1.8.27)' can't be established.
ED25519 key fingerprint is SHA256:Zwag/SC/O5MPZm8A22/63FEN4YB+4c/zj/wFLs2cZrE.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? ^C

site1-opnsense # tail -f /var/log/filter/latest.log|grep 10.2.4.43
<134>1 2026-02-21T17:25:32+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="50972"] 207,,,fc037937eb7b02d7efbd5244c7b9c70f,igb3,match,pass,in,4,0x10,,63,59405,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,33100,22,0,S,3196047947,,64240,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:25:32+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="50973"] 207,,,fc037937eb7b02d7efbd5244c7b9c70f,lagg0_vlan1018,match,pass,out,4,0x10,,62,59405,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,33100,22,0,S,3196047947,,64240,,mss;sackOK;TS;nop;wscale
^C

site1-opnsense # tcpdump -i enc0 -n -vvvv -tttt host 10.2.4.43
tcpdump: listening on enc0, link-type ENC (OpenBSD encapsulated IP), snapshot length 262144 bytes
2026-02-21 17:25:31.544381 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59405, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [S], cksum 0xc0d8 (correct), seq 3196047947, win 64240, options [mss 1460,sackOK,TS val 274155881 ecr 0,nop,wscale 7], length 0
2026-02-21 17:25:31.545438 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 1a74 (->1b74)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [S.], cksum 0xf4ab (correct), seq 4163488344, ack 3196047948, win 65160, options [mss 1460,sackOK,TS val 2825870737 ecr 274155881,nop,wscale 7], length 0                                                                                       
2026-02-21 17:25:31.837392 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59406, offset 0, flags [DF], proto TCP (6), length 52)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [.], cksum 0x1ee6 (correct), seq 1, ack 1, win 502, options [nop,nop,TS val 274156174 ecr 2825870737], length 0
2026-02-21 17:25:31.843063 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59407, offset 0, flags [DF], proto TCP (6), length 85)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [P.], cksum 0x627a (correct), seq 1:34, ack 1, win 502, options [nop,nop,TS val 274156174 ecr 2825870737], length 33: SSH: SSH-2.0-OpenSSH_10.0p2 Debian-7
2026-02-21 17:25:31.843428 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34597, offset 0, flags [DF], proto TCP (6), length 52, bad cksum 9356 (->9456)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [.], cksum 0x1d94 (correct), seq 1, ack 34, win 509, options [nop,nop,TS val 2825871035 ecr 274156174], length 0
2026-02-21 17:25:31.848758 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34598, offset 0, flags [DF], proto TCP (6), length 85, bad cksum 9334 (->9434)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [P.], cksum 0x6123 (correct), seq 1:34, ack 34, win 509, options [nop,nop,TS val 2825871040 ecr 274156174], length 33: SSH: SSH-2.0-OpenSSH_10.0p2 Debian-7
2026-02-21 17:25:32.141973 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59408, offset 0, flags [DF], proto TCP (6), length 52)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [.], cksum 0x1c45 (correct), seq 34, ack 34, win 502, options [nop,nop,TS val 274156478 ecr 2825871040], length 0
2026-02-21 17:25:32.142833 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34599, offset 0, flags [DF], proto TCP (6), length 796, bad cksum 906c (->916c)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [P.], cksum 0x1607 (correct), seq 34:778, ack 34, win 509, options [nop,nop,TS val 2825871334 ecr 274156478], length 744                                                                                                                       
2026-02-21 17:25:32.148146 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59410, offset 0, flags [DF], proto TCP (6), length 172)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [P.], cksum 0x502c (correct), seq 1482:1602, ack 34, win 502, options [nop,nop,TS val 274156479 ecr 2825871040], length 120
2026-02-21 17:25:32.148208 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59411, offset 0, flags [DF], proto TCP (6), length 1422)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [.], cksum 0x766c (correct), seq 34:1404, ack 34, win 502, options [nop,nop,TS val 274156479 ecr 2825871040], length 1370
2026-02-21 17:25:32.148227 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59412, offset 0, flags [DF], proto TCP (6), length 250)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [P.], cksum 0xe306 (correct), seq 1404:1602, ack 34, win 502, options [nop,nop,TS val 274156479 ecr 2825871040], length 198
2026-02-21 17:25:32.149161 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34600, offset 0, flags [DF], proto TCP (6), length 64, bad cksum 9347 (->9447)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [.], cksum 0xb46e (correct), seq 778, ack 34, win 509, options [nop,nop,TS val 2825871341 ecr 274156478,nop,nop,sack 1 {1482:1602}], length 0
2026-02-21 17:25:32.149244 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34601, offset 0, flags [DF], proto TCP (6), length 64, bad cksum 9346 (->9446)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [.], cksum 0xae59 (correct), seq 778, ack 1602, win 497, options [nop,nop,TS val 2825871341 ecr 274156479,nop,nop,sack 1 {1482:1602}], length 0
2026-02-21 17:25:32.441484 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59413, offset 0, flags [DF], proto TCP (6), length 100)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [P.], cksum 0xdadc (correct), seq 1602:1650, ack 778, win 497, options [nop,nop,TS val 274156778 ecr 2825871334], length 48
2026-02-21 17:25:32.449304 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34602, offset 0, flags [DF], proto TCP (6), length 544, bad cksum 9165 (->9265)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [P.], cksum 0x6d4c (correct), seq 778:1270, ack 1650, win 497, options [nop,nop,TS val 2825871641 ecr 274156778], length 492
2026-02-21 17:25:32.782963 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59414, offset 0, flags [DF], proto TCP (6), length 52)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [.], cksum 0x0c4f (correct), seq 1650, ack 1270, win 494, options [nop,nop,TS val 274157119 ecr 2825871641], length 0
2026-02-21 17:25:35.391466 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59415, offset 0, flags [DF], proto TCP (6), length 52)                                                                                                                                     
    10.2.4.43.33100 > 10.1.8.27.22: Flags [F.], cksum 0x021d (correct), seq 1650, ack 1270, win 494, options [nop,nop,TS val 274159728 ecr 2825871641], length 0
2026-02-21 17:25:35.394564 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x0, ttl 63, id 34603, offset 0, flags [DF], proto TCP (6), length 52, bad cksum 9350 (->9450)!)
    10.1.8.27.22 > 10.2.4.43.33100: Flags [F.], cksum 0xf697 (correct), seq 1270, ack 1651, win 497, options [nop,nop,TS val 2825874586 ecr 274159728], length 0
2026-02-21 17:25:35.688799 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x10, ttl 63, id 59416, offset 0, flags [DF], proto TCP (6), length 52)
    10.2.4.43.33100 > 10.1.8.27.22: Flags [.], cksum 0xf573 (correct), seq 1651, ack 1271, win 494, options [nop,nop,TS val 274160023 ecr 2825874586], length 0
^C                                                                                                                                           
19 packets captured                                                                                                                         
19 packets received by filter                             
0 packets dropped by kernel 

Now the flow that doesn't work, site1 to site2 SSH:

site1-host # ssh 10.2.4.43
ssh: connect to host 10.2.4.43 port 22: Connection timed out

site1-opnsense # tail -f /var/log/filter/latest.log|grep 10.2.4.43
<134>1 2026-02-21T17:44:03+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52450"] 206,,,04dcefb83dcccd488cdddb48c535eab9,lagg0_vlan1018,match,pass,in,4,0x10,,64,18105,0,DF,6,tcp,60,10.1.8.27,10.2.4.43,36596,22,0,S,2269409707,,64240,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:03+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52451"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:05+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52458"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:06+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52460"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:07+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52462"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:08+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52464"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:10+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52466"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:15+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52472"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:23+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52480"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale
<134>1 2026-02-21T17:44:39+01:00 site1-opnsense filterlog 44466 - [meta sequenceId="52500"] 24,,,02f4bab031b57d1e30553ce08e0ec131,igb3,match,block,in,4,0x0,,63,0,0,DF,6,tcp,60,10.2.4.43,10.1.8.27,22,36596,0,SA,3655318959,2269409708,65160,,mss;sackOK;TS;nop;wscale


site1-opnsense # tcpdump -i enc0 -n -vvvv -tttt host 10.2.4.43
tcpdump: listening on enc0, link-type ENC (OpenBSD encapsulated IP), snapshot length 262144 bytes
2026-02-21 17:44:03.173792 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x10, ttl 63, id 18105, offset 0, flags [DF], proto TCP (6), length 60, bad cksum d3aa (->d4aa)!)
    10.1.8.27.36596 > 10.2.4.43.22: Flags [S], cksum 0xa86e (correct), seq 2269409707, win 64240, options [mss 1460,sackOK,TS val 2826982365 ecr 0,nop,wscale 7], length 0
2026-02-21 17:44:03.468275 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0xbbf2 (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275267803 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:04.200440 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x10, ttl 63, id 18106, offset 0, flags [DF], proto TCP (6), length 60, bad cksum d3a9 (->d4a9)!)
    10.1.8.27.36596 > 10.2.4.43.22: Flags [S], cksum 0xa46b (correct), seq 2269409707, win 64240, options [mss 1460,sackOK,TS val 2826983392 ecr 0,nop,wscale 7], length 0
2026-02-21 17:44:05.224458 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x10, ttl 63, id 18107, offset 0, flags [DF], proto TCP (6), length 60, bad cksum d3a8 (->d4a8)!)
    10.1.8.27.36596 > 10.2.4.43.22: Flags [S], cksum 0xa06b (correct), seq 2269409707, win 64240, options [mss 1460,sackOK,TS val 2826984416 ecr 0,nop,wscale 7], length 0
2026-02-21 17:44:05.521403 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0xb3f0 (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275269853 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:06.248612 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x10, ttl 63, id 18108, offset 0, flags [DF], proto TCP (6), length 60, bad cksum d3a7 (->d4a7)!)
    10.1.8.27.36596 > 10.2.4.43.22: Flags [S], cksum 0x9c6b (correct), seq 2269409707, win 64240, options [mss 1460,sackOK,TS val 2826985440 ecr 0,nop,wscale 7], length 0
2026-02-21 17:44:06.539606 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0xaff0 (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275270877 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:07.276362 (authentic,confidential): SPI 0xce6bb79c: IP (tos 0x10, ttl 63, id 18109, offset 0, flags [DF], proto TCP (6), length 60, bad cksum d3a6 (->d4a6)!)
    10.1.8.27.36596 > 10.2.4.43.22: Flags [S], cksum 0x9867 (correct), seq 2269409707, win 64240, options [mss 1460,sackOK,TS val 2826986468 ecr 0,nop,wscale 7], length 0
2026-02-21 17:44:07.567066 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0xabec (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275271905 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:08.594028 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0xa7ea (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275272931 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:10.609542 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0xa00a (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275274947 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:14.801735 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0x8faa (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275279139 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:22.994395 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0x6faa (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275287331 ecr 2826982365,nop,wscale 7], length 0
2026-02-21 17:44:39.122500 (authentic,confidential): SPI 0xc4bad24d: IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    10.2.4.43.22 > 10.1.8.27.36596: Flags [S.], cksum 0x30aa (correct), seq 3655318959, ack 2269409708, win 65160, options [mss 1460,sackOK,TS val 275303459 ecr 2826982365,nop,wscale 7], length 0
^C
14 packets captured
22 packets received by filter
0 packets dropped by kernel

So that's 9 returned packets for 9 firewall blocks. By the way, in my firewall they get blocked by the default rule:

site1-opnsense # grep 02f4bab031b57d1e30553ce08e0ec131 /tmp/rules.debug
block in log inet from {any} to {any} label "02f4bab031b57d1e30553ce08e0ec131" # Default deny / state violation rule
block in log inet6 from {any} to {any} label "02f4bab031b57d1e30553ce08e0ec131" # Default deny / state violation rule

The question is: why is there no "igb3,match,pass,out,.....,10.1.8.27,10.2.4.43,36596,22,....." on the second line of the firewall log? I guess that missing 'out' match is why the returning 'in' traffic is not treated as part of an established state and gets blocked?

All my rules are floating and should apply to the "IPsec" interface, and indeed I see them in the UI in the "Floating rules" drilldown list.

What am I misunderstanding about how pf works with enc0?
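
If anyone wants to check the state-table side of this on their own box, watching the states during the failing SSH attempt is telling (a diagnostic sketch; substitute your own addresses):

site1-opnsense # pfctl -ss | grep 10.2.4.43

The first column of each state entry shows the interface the state is bound to ("all" for floating states), so you can see exactly which states exist for the flow while the SYN-ACKs on enc0 are being dropped as state violations.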
#5
Hi, I am trying to set up a route-based (VTI) IPsec site-to-site tunnel between OPNsense (the A site) and VyOS (well, a Unifi USG-3P, the B site) using the "new, > 23.1" method from this guide: https://docs.opnsense.org/manual/how-tos/ipsec-s2s-conn-route.html

I previously had a policy-based IPsec VPN working since about 19.1, but I think the problem referenced here (https://forum.opnsense.org/index.php?topic=30525.0) hindered me: after 22.7, only one pair of subnets at each site could communicate at any one time, and I had to manually take CHILD_SAs up and down.

I've finally decided to move to a route-based setup in the hope that, with only one CHILD_SA for 0.0.0.0/0 -> 0.0.0.0/0 communication, it will work properly. I'd move to WireGuard, but there is no Unifi support for it on the USG-3P.

So, I have followed the guide for OPNsense for site A, and set up the other site B according to some VyOS/Unifi instructions.

I see what looks to me like a good SA setup (yes, both sides' WAN interfaces are behind NAT on a 192.168.* "DMZ" network, but IKE and ESP seem to work):

# swanctl --list-sas
no files found matching '/usr/local/etc/strongswan.opnsense.d/*.conf'
ca24ede1-a222-4a6e-91dc-2c5720143fe6: #1, ESTABLISHED, IKEv2, 7d53711e7fea6ef4_i* 62a0c1809bc25095_r
  local  'A.A.A.A' @ 192.168.178.20[4500]
  remote '192.168.188.20' @ B.B.B.B[4500]
  AES_CBC-256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
  established 2172s ago, rekeying in 11323s
  afbe2c6c-fb46-4a1d-a9f4-facecc5ff2f9: #1, reqid 10, INSTALLED, TUNNEL-in-UDP, ESP:AES_CBC-256/HMAC_SHA1_96
    installed 2172s ago, rekeying in 1169s, expires in 1788s
    in  cc44dc3d,      0 bytes,     0 packets
    out c033ca54,      0 bytes,     0 packets
    local  0.0.0.0/0
    remote 0.0.0.0/0

Here is my OPNsense ipsec10 interface (do I have my 'tunnel inet' and 'inet' addresses mixed up?):

# ifconfig ipsec10
ipsec10: flags=1008011<UP,POINTOPOINT,MULTICAST,LOWER_UP> metric 0 mtu 1400
        options=0
        tunnel inet A.A.A.A --> 192.168.188.20
        inet 10.111.0.1 --> 10.111.0.2 netmask 0xfffffffc
        inet6 fe80::dead:beef:dead:beef%ipsec10 prefixlen 64 tentative scopeid 0x1a
        groups: ipsec
        reqid: 10
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>

My A site's local networks are /24s contained in the 10.1.0.0/16 range, and the B site's are likewise in 10.2.0.0/16.

Here's my A site routing table:

# netstat -rn4
Routing tables

Internet:
Destination        Gateway            Flags         Netif Expire
default            192.168.178.1      UGS            igb3
10.1.2.0/24        link#12            U      lagg0_vlan10
10.1.2.1           link#5             UHS             lo0
10.1.4.0/24        link#14            U      lagg0_vlan10
10.1.4.1           link#5             UHS             lo0
10.1.5.0/24        link#15            U      lagg0_vlan10
10.1.5.1           link#5             UHS             lo0
10.1.6.0/24        link#16            U      lagg0_vlan10
10.1.6.1           link#5             UHS             lo0
10.1.8.0/24        link#18            U      lagg0_vlan10
10.1.8.1           link#5             UHS             lo0
10.1.9.0/24        link#19            U      lagg0_vlan10
10.1.9.1           link#5             UHS             lo0
10.1.46.1          link#5             UHS             lo0
10.1.64.0/24       link#27            US            nat64
10.1.64.1          link#27            UH            nat64
10.2.0.0/16        10.111.0.2         UGS         ipsec10
10.111.0.1         link#5             UHS             lo0
10.111.0.2         link#26            UH          ipsec10
127.0.0.1          link#5             UH              lo0
192.168.1.0/24     link#1             U              igb0
192.168.1.10       link#5             UHS             lo0
192.168.178.0/24   link#4             U              igb3
192.168.178.20     link#5             UHS             lo0

So it looks as though I have set up the gateway and route to send 10.2.0.0/16 to the far side of the ipsec10 tunnel.
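
A quick cross-check that the kernel's route lookup really selects the tunnel (FreeBSD's route command; addresses as above):

# route -n get 10.2.4.1

which, given the table above, should report "interface: ipsec10" and "gateway: 10.111.0.2".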

But it's not working.

I try a packet capture on ipsec10 and a ping to the far network (at the same time) and I get this:

$ ping -4 -S 10.1.8.1 10.2.4.1
PING 10.2.4.1 (10.2.4.1) from 10.1.8.1: 56 data bytes
ping: sendto: Network is down
ping: sendto: Network is down
ping: sendto: Network is down
^C
--- 10.2.4.1 ping statistics ---
3 packets transmitted, 0 packets received, 100.0% packet loss

# tcpdump -i ipsec10
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ipsec10, link-type NULL (BSD loopback), snapshot length 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel


I'm not sure how this should behave, but I get this:

# ping 10.111.0.1
PING 10.111.0.1 (10.111.0.1): 56 data bytes
64 bytes from 10.111.0.1: icmp_seq=0 ttl=64 time=0.329 ms
64 bytes from 10.111.0.1: icmp_seq=1 ttl=64 time=0.158 ms
^C
--- 10.111.0.1 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.158/0.243/0.329/0.086 ms
# ping 10.111.0.2
PING 10.111.0.2 (10.111.0.2): 56 data bytes
ping: sendto: Network is down
ping: sendto: Network is down
^C
--- 10.111.0.2 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss

With policy-based ipsec I used to be able to tcpdump on the enc0 interface and see the encapsulated ESP traffic, but here that also shows nothing.

Perhaps the far end needs to be configured with that tunnel address, but it looks like VyOS doesn't do it that way and its vti interface is simply unnumbered (see this old unanswered post https://forum.opnsense.org/index.php?topic=38062.0).
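For what it's worth, here is a sketch of what numbering both tunnel ends might look like, using the interface names and addresses from the outputs above. Both the FreeBSD ifconfig form and the VyOS command are my guesses from the respective command trees, so treat this as an assumption rather than tested syntax:

```
# OPNsense side (FreeBSD if_ipsec): set the inner point-to-point addresses
ifconfig ipsec10 inet 10.111.0.1 10.111.0.2

# VyOS side: give the otherwise unnumbered vti an address
set interface vti vti0 address 10.111.0.2/30
```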

What am I doing wrong?

Does anyone have such an OPNsense <-> VyOS setup working?
#6
Quote from: Patrick M. Hausen on October 28, 2024, 11:47:11 PM
Place hosts that share an outbound policy in a common network/VLAN and ignore host addresses. Filtering by address does not scale and is easily spoofed.

This seems to be the most straightforward way, thanks for confirming.
#7
Quote from: bimbar on October 29, 2024, 02:45:18 PM
Also, please do not use /57 as LAN networks, you must (almost) always use /64 for IPv6 networks.

My bad, it was late at night when I posted. I have a number of /64s in the /57 that's been delegated to my OPNsense router by my CPE router (that has the /48).
#8
Hi,

I am migrating to IPv6 only. I have a /48 from my ISP, from which I have created a number of /57 local prefixes in which I am hosting various VMs and physical machines.

I have set all these hosts to use SOII (the OpenBSD name for RFC 7217 addresses). In short, each host has a static listening address and fairly rapidly cycles through random(?) addresses in the /57 for outgoing traffic. I think Windows hosts do something similar, so what I am asking about here is, I guess, a fairly common use case.

Any incoming traffic through OPNsense is easy to allowlist in firewall rules, as the address is static, but the outgoing traffic is causing me issues.

I would like to create allowlists, and thus firewall rules, for specific outgoing traffic on a host-by-host basis. So far I have tried allowing by source MAC address (even though it was in an "IPv6" rule); this worked for a while but then started blocking the traffic some hours later*. I have settled on allowing the entire /57 (I basically have a single host in each /57 I have created so far), but this seems unsatisfactory and not a long-term solution.

Does anyone have any advice/war stories regarding the same? I thought I'd check here before I head upstream.

*I had a quick read around and filtering by MAC does seem a bad idea:
- Still true?: https://forum.opnsense.org/index.php?topic=2790.0
- Also seems like it could get bad performance: https://forums.freebsd.org/threads/filtering-by-mac-address.32841/
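As an aside, the reason the inbound side stays filterable is that the stable SOII address is a deterministic function of the prefix, the interface, and a secret key (RFC 7217), while the outgoing addresses are random temporary ones. A rough Python sketch of the stable-address side, using HMAC-SHA256 as a stand-in for the PRF F() that RFC 7217 deliberately leaves unspecified (interface name and key here are made up):

```python
import hashlib
import hmac
import ipaddress

def rfc7217_iid(prefix: str, net_iface: str, secret: bytes, dad_counter: int = 0) -> int:
    """Stable interface identifier in the spirit of RFC 7217's F(Prefix, Net_Iface, DAD_Counter, secret_key)."""
    msg = ipaddress.IPv6Network(prefix).network_address.packed
    msg += net_iface.encode() + dad_counter.to_bytes(4, "big")
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big")  # take 64 bits for the IID

# Same prefix and key give the same IID (filterable); a new prefix gives a new one.
a = rfc7217_iid("2001:db8:1::/64", "em0", b"secret-key")
b = rfc7217_iid("2001:db8:1::/64", "em0", b"secret-key")
c = rfc7217_iid("2001:db8:2::/64", "em0", b"secret-key")
print(a == b, a == c)  # True False
```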

#9
Hi,

I have dhcp active on all my subnets. This works well and, via the DNS nameserver option, points my dhcp clients at a couple of non-opnsense nameservers I use internally.

I am now configuring unbound to listen on just a single vlan/subnet.

I spotted this at the bottom of the plugin config page:
Quote
If the DNS Resolver is enabled, the DHCP service (if enabled) will automatically serve the LAN IP address as a DNS server to DHCP clients so they will use the DNS Resolver.

This is a pity, because it doesn't just advertise the resolver for the vlan I'm targeting (and it does so through ipv6 RAs too, not just dhcp as mentioned in the quote); it also overrides my custom dhcp dns nameserver settings for _all_ other scopes.

Is this really necessary? Is it possible to change this behaviour?
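Concretely, what I would like each scope to keep is its own per-subnet nameserver option, which in ISC dhcpd terms looks something like this (the subnet and resolver addresses here are made up for illustration):

```
subnet 10.1.6.0 netmask 255.255.255.0 {
    range 10.1.6.100 10.1.6.199;
    # my own internal resolvers, not the OPNsense interface address
    option domain-name-servers 10.1.6.53, 10.1.9.53;
}
```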

Thanks
#10
Hi,

I have ipv6 working well (I believe). I get a prefix on my opnsense host from my ISP (XS4ALL) and distribute it to downstream ipv6-only vlans using radvd.

I am trying to set up a non-transparent (explicit) proxy in each of these vlans using the built-in squid proxy in OPNsense. Here I have a problem.

I set the proxy to use the ipv6-only vlan interface as a "Proxy interface" but the squid.conf doesn't get generated correctly. I get:
# grep -E 'https_port|http_port' /usr/local/etc/squid/squid.conf
http_port 127.0.0.1:3128 intercept
http_port [::1]:3128 intercept
http_port 192.168.1.10:3128 
http_port 10.1.8.1:3128 
http_port 10.1.9.1:3128 
http_port 10.1.6.1:3128 
http_port 10.1.2.1:3128 
http_port 10.1.4.1:3128
http_port :3128   <--- problem line
http_port 10.1.5.1:3128 

You'll see some other ipv4-only vlan interface addresses there.

My ipv6-only vlan interface looks like this:

# ifconfig lagg0_vlan640
lagg0_vlan640: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 00:02:b0:1a:68:39
inet6 2001:dead:beef:a8:202:b0ff:fe1a:6839 prefixlen 64
inet6 fe80::1:1%lagg0_vlan640 prefixlen 64 scopeid 0x14
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
media: Ethernet autoselect
status: active
vlan: 640 vlanpcp: 0 parent interface: lagg0
groups: vlan


If I replace the 'problem line' above with that address, it seems to work correctly and clients in that vlan can use the proxy just fine:

# netstat -an|grep 3128
tcp4       0      0 10.1.2.1.3128          10.1.2.23.33264        ESTABLISHED
tcp4       0      0 10.1.5.1.3128          *.*                    LISTEN
tcp6       0      0 2001:dead:beef:a8.3128 *.*                    LISTEN
tcp4       0      0 10.1.4.1.3128          *.*                    LISTEN
tcp4       0      0 10.1.2.1.3128          *.*                    LISTEN
tcp4       0      0 10.1.6.1.3128          *.*                    LISTEN
tcp4       0      0 10.1.9.1.3128          *.*                    LISTEN
tcp4       0      0 10.1.8.1.3128          *.*                    LISTEN
tcp4       0      0 192.168.1.10.3128      *.*                    LISTEN
tcp6       0      0 ::1.3128               *.*                    LISTEN
tcp4       0      0 127.0.0.1.3128         *.*                    LISTEN
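For reference, the hand-edited replacement for the problem line looked like this (address taken from the ifconfig output above; note that squid wants IPv6 literals in brackets, as on the [::1] line it generates itself):

```
http_port [2001:dead:beef:a8:202:b0ff:fe1a:6839]:3128
```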


Could it be that the squid.conf generator just isn't inserting that address correctly? I use the unbound plugin too and it puts that ipv6 address in its config file just fine.

Thanks
#11
Excellent, looks like it will soon be all sorted. I will just make a bug report next time.

As you say, I had selected Basic with a prefix of /60 before I chose the Config File Override, so yes, I only got the choice of 0 to f. Selecting Basic and /56 and applying it prior to selecting Config File Override allowed me to "Enter a hexadecimal value between 0 and ff here," for the prefix id in the track interface section.

Thanks for the quick response.
#12
Hi,

I'm successfully running a WAN interface with a DHCPv6 Config File Override that gets a /57 prefix from an upstream router and configures some internal vlan interfaces.

This works nicely.

My override file, /var/etc/dhcp6c_wan.override.conf looks like this:

interface igb3 {
  send ia-na 0; # request stateful address
  send ia-pd 0; # request prefix delegation
  request domain-name-servers;
  request domain-name;
  script "/var/etc/dhcp6c_wan_script.sh"; # we'd like some nameservers please
};
id-assoc na 0 { };
id-assoc pd 0 {
  prefix ::/57 infinity;
  prefix-interface lagg0_vlan620 {
    sla-id 20;
    sla-len 7;
  };
  prefix-interface lagg0_vlan640 {
    sla-id 40;
    sla-len 7;
  };
  prefix-interface lagg0_vlan660 {
    sla-id 60;
    sla-len 7;
  };
  prefix-interface lagg0_vlan680 {
    sla-id 80;
    sla-len 7;
  };
};


This is basically an extension of what gets created by "Basic" mode, except that I have:
1. A /57 prefix. My upstream router gets a /48 from my ISP but will only hand out a /57 and nothing else. A /57 is not a choosable option in the "Basic" interface; the dropdown jumps from /56 to /60. Can this be fixed?
2. sla-ids larger than 0xf, which is the maximum "allowed" in the "Track Interface" part of each interface's config. Can this be fixed too?
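For anyone following along, the sla-id/sla-len arithmetic is simple: the delegated /57 plus sla-len 7 gives /64 sub-prefixes, with the sla-id filling the 7 bits in between. A small Python sketch of the derivation (using a documentation prefix, not my real one):

```python
import ipaddress

def sub_prefix(delegated: str, sla_id: int, sla_len: int) -> ipaddress.IPv6Network:
    """Sub-prefix dhcp6c would derive for a prefix-interface with the given sla-id/sla-len."""
    net = ipaddress.IPv6Network(delegated)
    new_plen = net.prefixlen + sla_len  # e.g. 57 + 7 = 64
    if sla_id >= (1 << sla_len):
        raise ValueError("sla-id does not fit in sla-len bits")
    shifted = sla_id << (128 - new_plen)  # place sla-id just below the delegated bits
    return ipaddress.IPv6Network((int(net.network_address) | shifted, new_plen))

print(sub_prefix("2001:db8:0:80::/57", 20, 7))  # 2001:db8:0:94::/64
```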

Thanks,
Ben
#13
So, I gave up and used the HAProxy plugin instead. Nice to have learned something new.
#14
Hi,

I'm having an intermittent problem with (I think) my Opnsense 17.7.7 router/firewall. Apologies up front if I've missed some other post describing exactly this problem.

So, I have this Opnsense firewall (192.168.178.20) and a rpi3 (192.168.178.21) sitting on my flat network behind my ISP's CPE (192.168.178.1). I then have my real home network behind the Opnsense firewall.

I'm trying to terminate a little bit of HTTPS traffic from the outside world on the rpi3 and send it through my firewall as plain HTTP, via a port forward from 192.168.178.20:1644/tcp to my Nextcloud server (10.1.6.44:80/tcp) in my real home network.

This works half the time. I'll show you what I mean.

Here's a successful wget from my docker instance on the rpi3:

root@b3f74cc5c974:/$ wget --header 'Host: myserver.com' http://192.168.178.20:1644/blah1.php
Connecting to 192.168.178.20:1644 (192.168.178.20:1644)
wget: server returned error: HTTP/1.1 404 Not Found
root@b3f74cc5c974:/$ netstat -tupa
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 localhost:9000          0.0.0.0:*               LISTEN      313/php-fpm.conf)
tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:http            0.0.0.0:*               LISTEN      309/nginx.conf
tcp        0      0 0.0.0.0:https           0.0.0.0:*               LISTEN      309/nginx.conf
tcp        0      0 b3f74cc5c974:37972      myopnsensebox.com:1644  TIME_WAIT   -
tcp        0      0 :::11211                :::*                    LISTEN      -
udp        0      0 0.0.0.0:11211           0.0.0.0:*                           -
udp        0      0 :::11211                :::*                                -


And the Nextcloud server's access.log:

myserver.com 192.168.178.21 - - [30/Oct/2017:19:46:43 +0100] "GET /blah1.php HTTP/1.1" 404 0

So the page isn't there, but at least they're on speaking terms.

The weird thing is that when I try again just a few seconds later, I get this:

root@b3f74cc5c974:/$ wget --header 'Host: myserver.com' http://192.168.178.20:1644/blah2.php
Connecting to 192.168.178.20:1644 (192.168.178.20:1644)
wget: can't connect to remote host (192.168.178.20): Connection refused


What's happening?

I did a traffic capture on the outside (192.168.178.20) and inside (10.1.6.1) interfaces of the firewall (simple text captures attached), but the crux of it is that the outside interface immediately resets the second connection, like so:

19:47:40.630906 IP 192.168.178.21.37974 > 192.168.178.20.1644: Flags [S], seq 4153658341, win 29200, options [mss 1460,sackOK,TS val 10306995 ecr 0,nop,wscale 7], length 0
19:47:40.631670 IP 192.168.178.20.1644 > 192.168.178.21.37974: Flags [R.], seq 0, ack 4153658342, win 0, length 0


This is not showing up in any logs I can see. Where can I look to find what is doing this?
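In case it helps anyone else, these are the places I know to look on the firewall itself (a diagnostic sketch; the log path and flags are from memory of the 17.x releases, so adjust as needed):

```
# live firewall log; blocked/reset packets show the matching rule
clog -f /var/log/filter.log

# current state table entries for the forwarded port
pfctl -ss | grep 1644

# loaded rules with per-rule counters, to see which rule is matching
pfctl -sr -vv
```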

Many thanks,
Ben

#15
Hi Franco,

I will look into it further when I can, but I think for now I will give in and buy a managed switch and do the plumbing downstream of my opnsense host.

I'm guessing there is no analogue of a Cisco BVI for OPNsense/FreeBSD, because bridging VLANs on two physical interfaces (say igb1_vlan1016 and igb2_vlan1016) gets the kernel involved in layer-2 switching, and that will never be very nice unless there's some hardware integration like on Cisco (and other) devices. Right?
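For completeness, the FreeBSD equivalent I had in mind is an if_bridge over the two VLAN interfaces, roughly like this (interface names as in my earlier post; all forwarding would happen in software in the kernel):

```
ifconfig bridge0 create
ifconfig bridge0 addm igb1_vlan1016 addm igb2_vlan1016 up
```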

Still, it's something that would be nice to have working in the end.

Many thanks for your time.