OPNsense Forum

Archive => 18.7 Legacy Series => Topic started by: chadwickthecrab on December 12, 2018, 05:44:23 pm

Title: Wireguard Handshake But No Traffic After Update
Post by: chadwickthecrab on December 12, 2018, 05:44:23 pm
Hello,

I had set up a functional WireGuard config in a "road warrior" scenario. It no longer works after the required reboot from today's update to 18.7.9. It looks like the handshake is successful, but I can't ping anything or resolve DNS.

Quote
interface: wg0
  public key: hCHSYE6ljF608lc58piqyhxdfRFl5Ydd2p0Umj1vHk4=
  private key: (hidden)
  listening port: 51820

peer: 6gmHy2Vg5BB6u6iUw3LAlPA7YNT8g0Ub2zPbyk5MUDc=
  endpoint: 174.192.0.178:8088
  allowed ips: 192.168.2.2/32
  latest handshake: 11 minutes, 39 seconds ago
  transfer: 4.64 KiB received, 96 B sent

peer: u5EqdHj1Ifdlbx5/PihyegJGuYA5R988yO/H/t0IEwA=
  allowed ips: 192.168.2.3/32


ifconfig:
Quote
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1492
options=c00b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,LINKSTATE>
ether 3e:b3:55:3d:41:28
hwaddr 3e:b3:55:3d:41:28
inet6 fe80::3cb3:55ff:fe3d:4128%vtnet0 prefixlen 64 scopeid 0x1
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
media: Ethernet 10Gbase-T <full-duplex>
status: active
vtnet1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=c00b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,LINKSTATE>
ether ea:34:e0:7a:43:84
hwaddr ea:34:e0:7a:43:84
inet 192.168.1.1 netmask 0xffffff00 broadcast 192.168.1.255
inet6 fe80::e834:e0ff:fe7a:4384%vtnet1 prefixlen 64 scopeid 0x2
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
media: Ethernet 10Gbase-T <full-duplex>
status: active
enc0: flags=0<> metric 0 mtu 1536
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
groups: enc
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
inet 127.0.0.1 netmask 0xff000000
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
groups: lo
pflog0: flags=100<PROMISC> metric 0 mtu 33160
groups: pflog
pfsync0: flags=0<> metric 0 mtu 1500
groups: pfsync
syncpeer: 0.0.0.0 maxupd: 128 defer: off
ovpnc1: flags=8010<POINTOPOINT,MULTICAST> metric 0 mtu 1500
options=80000<LINKSTATE>
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
groups: tun openvpn
pppoe1: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1484
inet6 fe80::86d:e850:f8da:454e%pppoe1 prefixlen 64 scopeid 0x8
inet 71.181.122.57 --> 10.10.10.10  netmask 0xffffffff
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
ztanv9hnl3qfnl8: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 5000 mtu 2800
options=80000<LINKSTATE>
ether aa:62:f1:6c:f7:d7
hwaddr 00:bd:4d:d9:f7:09
inet6 fe80::2bd:4dff:fed9:f709%ztanv9hnl3qfnl8 prefixlen 64 scopeid 0x9
inet 192.168.191.1 netmask 0xffffff00 broadcast 192.168.191.255
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
media: Ethernet autoselect
status: active
groups: tap
Opened by PID 84763
wg0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1420
options=80000<LINKSTATE>
inet 192.168.2.1 --> 192.168.2.1  netmask 0xffffff00
inet6 fe80::86d:e850:f8da:454e%wg0 prefixlen 64 scopeid 0xa
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
groups: tun
Opened by PID 9 
Title: Re: Wireguard Handshake But No Traffic After Update
Post by: mimugmail on December 12, 2018, 05:52:57 pm
Can you run this via the CLI:

ifconfig wg0 group wireguard
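For anyone following along: that command is run from an SSH or console shell on the firewall. A quick way to confirm the group assignment took effect (my addition, not part of the original reply) is to inspect the interface's `groups:` line afterwards:

```shell
# Put wg0 into the "wireguard" interface group (the plugin's firewall
# rules match on this group name)
ifconfig wg0 group wireguard

# Verify: the output should now include "wireguard" in the groups line,
# e.g. "groups: tun wireguard"
ifconfig wg0 | grep groups
```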
Title: Re: Wireguard Handshake But No Traffic After Update
Post by: chadwickthecrab on December 12, 2018, 06:17:47 pm
Ran that command, but there is still no traffic, only the successful handshake.

Edit: Actually, it looks like there is no handshake now either.
Title: Re: Wireguard Handshake But No Traffic After Update
Post by: mimugmail on December 12, 2018, 06:31:33 pm
opnsense-revert -r 18.7.8 os-wireguard-devel

Then reboot.
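Putting the two steps together as a single console session (the reboot command here is an assumption; any clean reboot of the firewall will do):

```shell
# Roll the WireGuard plugin package back to the 18.7.8 release set
opnsense-revert -r 18.7.8 os-wireguard-devel

# Reboot so the reverted plugin recreates the wg0 interface cleanly
shutdown -r now
```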
Title: Re: Wireguard Handshake But No Traffic After Update
Post by: chadwickthecrab on December 12, 2018, 06:40:24 pm
That worked, thanks!
Title: Re: Wireguard Handshake But No Traffic After Update
Post by: franco on December 12, 2018, 10:11:05 pm
We'll try to address this tomorrow in a hotfix.