OPNsense Forum

English Forums => Virtual private networks => Topic started by: Kieeps on January 17, 2021, 08:56:21 am

Title: wireguard kernel implementation
Post by: Kieeps on January 17, 2021, 08:56:21 am
Any news on the progress of the BSD kernel implementation? I read somewhere a while back that it was being pushed into the kernel; did it ever land?

And will the plugin currently in opnsense move from userspace to kernel when it gets implemented? :-)
Title: Re: wireguard kernel implementation
Post by: Greelan on January 17, 2021, 09:24:25 am
FreeBSD 13 now has the kernel module, so I guess it will come to OPNsense when 13 does (13-STABLE is tentatively scheduled for release by the end of March)
Title: Re: wireguard kernel implementation
Post by: Kieeps on January 17, 2021, 09:26:59 am
Good news :-) it'll be fun to see if it has any noticeable improvements over the current implementation :-D

Keep up the great work!
Title: Re: wireguard kernel implementation
Post by: Greelan on January 17, 2021, 09:29:22 am
I’ve got no connection to OPNsense, just keen to see the kernel module too. It will definitely have better performance
Title: Re: wireguard kernel implementation
Post by: Greelan on January 17, 2021, 09:31:02 am
I’m also not sure how long OPNsense typically lags behind FreeBSD/HardenedBSD releases, so it could be a while after 13 stable is released
Title: Re: wireguard kernel implementation
Post by: chemlud on January 17, 2021, 09:36:11 am
22.7 or 23.1?
Title: Re: wireguard kernel implementation
Post by: mimugmail on January 17, 2021, 11:24:50 am
FreeBSD 13 itself is not even stable yet
Title: Re: wireguard kernel implementation
Post by: franco on January 17, 2021, 04:29:33 pm
The wireguard module is a good candidate for stable/12. The iflib preparations for this module are already backported. With pfSense 2.5 almost out and being based on FreeBSD 12 it's very likely this is going to happen within this year since they always said the userspace implementation is bad and they would rather wait for the "real" thing.

https://redmine.pfsense.org/issues/8786

;)


Cheers,
Franco
Title: Re: wireguard kernel implementation
Post by: Greelan on January 17, 2021, 08:21:04 pm
Nice, thanks Franco!
Title: Re: wireguard kernel implementation
Post by: JeGr on January 20, 2021, 01:13:58 am
https://www.netgate.com/blog/wireguard-for-pfsense-software.html

With a new snapshot of pfSense 2.5pre incoming tomorrow, and as it's based on 12.2, I suspect the kernel module if_wg is already (or about to be) backported? So it shouldn't take long to bring it to HardenedBSD as well :)
Looking forward to a new kernel space implementation and read speed benchmarks :D
Title: Re: wireguard kernel implementation
Post by: mimugmail on January 20, 2021, 06:24:57 am
Nice!  8)
Title: Re: wireguard kernel implementation
Post by: chemlud on January 20, 2021, 08:54:28 am
Does the kernel implementation have better logging? I could not find anything in my OPNsense system logs, except for some errors when the peer is offline (e.g. for rebooting). Or does it need a higher log level?
Title: Re: wireguard kernel implementation
Post by: mimugmail on January 20, 2021, 08:56:26 am
I'd guess everything stays the same, since the userland tool wg-quick doesn't change; it's only the crypto that is offloaded to the kernel.
Title: Re: wireguard kernel implementation
Post by: JeGr on January 20, 2021, 02:45:26 pm
Couldn't find anything about more logging. Even the newly created docs for the pfSense snapshots tell you in the bullet points:

 * It operates completely in the kernel
 * Configuration is placed directly on the interfaces
 * It has no concept of connections or sessions
 * There is no “status” of the VPN (e.g. it isn’t considered up or down, it has no visible timers, etc.)
 * It has no facilities for user authentication
 * There is no service daemon to stop or start
 * There is only minimal logging from the kernel
 * It does not bind to a specific interface or address on the firewall, it accepts traffic to any address on the firewall on its specified port

As it is virtually "service- and status-less", logging would be hard to implement beyond errors/messages from the kernel (module) itself. But that's only my understanding, mimugmail may be deeper into that :D
Title: Re: wireguard kernel implementation
Post by: Patrick M. Hausen on January 20, 2021, 03:15:23 pm
Quote
* It has no concept of connections or sessions
 * There is no “status” of the VPN
That's a feature.

"Look, a packet matching my tunnel policy" --> Encrypt with peer's public key, encapsulate in UDP, send to peer instead. Is the peer alive? How the heck should I know?
"Look an encrypted packet" --> Does it come from a configured peer? Does it decrypt with my private key? OK, decrypt, de-encapsulate, throw into ip_input() again.

It's definitely more like GRE or IPIP than, say, OpenVPN. You can do something similar with IPsec: throw away all of ISAKMP/IKE and phase 1 and just statically configure phase 2 SAs and keys. Similar result.
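To illustrate, here is a minimal Python sketch of that outbound "look, a packet matching my tunnel policy" decision. The peer keys, endpoints and allowed-IPs ranges are made up, and the real wg module keeps this cryptokey routing table in the kernel; the point is just that there is no connection state anywhere:

```python
import ipaddress

# Hypothetical cryptokey routing table: allowed-IPs prefix -> (peer pubkey, UDP endpoint)
PEERS = {
    ipaddress.ip_network("10.0.0.0/24"): ("peer_pubkey_A", ("203.0.113.7", 51820)),
    ipaddress.ip_network("192.168.43.0/24"): ("peer_pubkey_B", ("198.51.100.2", 51820)),
}

def route_outbound(dst_ip: str):
    """Pick the peer whose allowed IPs contain the destination
    (longest prefix wins), or drop if no peer matches."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in PEERS if dst in net]
    if not matches:
        return None  # no peer configured for this destination: packet is dropped
    best = max(matches, key=lambda net: net.prefixlen)
    pubkey, endpoint = PEERS[best]
    # The real module would now encrypt for `pubkey`, encapsulate in UDP and
    # send to `endpoint`. Whether the peer is alive is simply not tracked here.
    return pubkey, endpoint
```

Which is exactly why there is no "up" or "down": the lookup either finds a peer and sends, or it doesn't.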
Title: Re: wireguard kernel implementation
Post by: JeGr on January 20, 2021, 03:37:26 pm
> That's a feature.

I assumed it is ;) But I'm not so deep into the implementation itself to say "more logging is possible" :)
Title: Re: wireguard kernel implementation
Post by: mimugmail on January 20, 2021, 03:43:33 pm
It's a killer for CARP ... enterprise integration will be low .. lower than low :)
Title: Re: wireguard kernel implementation
Post by: chemlud on January 20, 2021, 03:44:23 pm
yeah, considering the nature of the beast...

https://www.wireguard.com/#conceptual-overview

... only option might be to log all packets for the FW rule. Tried it with the allow rule for the tunnel IP on the WireGuard FW rules tab, but I see nothing in live view.

PS: now I enabled logging for the allow rule on the WireGuard FW rules tab for one of the remote nets and I see the traffic flow.

But where do I see these "keepalive = 10" packets? They should be going back and forth between the tunnel network IPs, shouldn't they?
Title: Re: wireguard kernel implementation
Post by: JeGr on January 20, 2021, 06:02:04 pm
It's a killer for CARP ... enterprise integration will be low .. lower than low :)

Ah because it binds on the interface instead of an actual IP? Makes sense, failover possibility is nonexistent that way.
Title: Re: wireguard kernel implementation
Post by: Patrick M. Hausen on January 20, 2021, 06:39:56 pm
I figure the problem @mimugmail is seeing with CARP is more due to the fact that presumably a WireGuard tunnel router cannot determine if the tunnel is alive or not.

But I may stand corrected with respect to my last statement. There is this "last handshake" information, so there must be a little bit of keepalive checking, nonetheless. Possibly one can use that amount of state to force a CARP failover if necessary.

The rest can be pretty simple, IMHO, just use two boxes on both sides and two tunnels. Clients on either side throw their packets at the active node that forwards it through *its* tunnel to the other side. Provided that a keepalive check and failover is possible, there is no need to have *one* redundant tunnel as long as there is redundant connectivity.
Title: Re: wireguard kernel implementation
Post by: mimugmail on January 20, 2021, 07:32:27 pm
The problem is, there is no daemon to bind to an interface. So when you send a packet to the client, you can't control whether the source is the interface IP or the CARP IP.

Title: Re: wireguard kernel implementation
Post by: Patrick M. Hausen on January 20, 2021, 07:37:39 pm
Ah ... I wasn't thinking road warrior, because without user authentication and IP address pool management this is not going to fly in a corporate environment, anyway.

My idea was
Code: [Select]
              Box 1a ------------ Tunnel 1 ------------ Box 1b
                |                                        |
                |                                        |
LAN a --- CARP Address                             CARP Address --- LAN b
                |                                        |
                |                                        |
              Box 2a ------------ Tunnel 2 ------------ Box 2b


But back to the road warrior scenario.

There is not really a "client" and a "server" in WG. So the client would accept that packet from any IP address as long as the keys match. In that case the problem lies with the NAT gateway which will be in front of the "client" in most cases.

But if on the "server" side incoming packets to the CARP address are taken care of, why not NAT WG packets originating from the interface address to the CARP address?
Title: Re: wireguard kernel implementation
Post by: JeGr on January 20, 2021, 10:30:45 pm
Ah ... I wasn't thinking road warrior, because without user authentication and IP address pool management this is not going to fly in a corporate environment, anyway.

My idea was
Code: [Select]
              Box 1a ------------ Tunnel 1 ------------ Box 1b
                |                                        |
                |                                        |
LAN a --- CARP Address                             CARP Address --- LAN b
                |                                        |
                |                                        |
              Box 2a ------------ Tunnel 2 ------------ Box 2b


But back to the road warrior scenario.

There is not really a "client" and a "server" in WG. So the client would accept that packet from any IP address as long as the keys match. In that case the problem lies with the NAT gateway which will be in front of the "client" in most cases.

But if on the "server" side incoming packets to the CARP address are taken care of, why not NAT WG packets originating from the interface address to the CARP address?

Or take the routes out of WG and use the wg interfaces in an FRR/OSPF setup, with both sides watching their "other side" gateway IP, so if one is down (because Box1a/b/WAN1 is down) FRR should route via #2. AFAIR that should be possible as long as one configures the WG interface as "non broadcasting" and adds a manual entry for the neighbor using the WireGuard interface address of the appropriate peer?
Title: Re: wireguard kernel implementation
Post by: mimugmail on January 21, 2021, 07:05:46 am
Yes, there were people reporting that the OPNsense implementation already works with dynamic routing, but that's not CARP. If you want a clean failover strategy, dynamic routing alone won't solve it, and it's hard to write generic documentation on how to set up WG with HA.

I already had some small success with outbound NAT, but no chance to test on an HA cluster due to home office :)
Title: Re: wireguard kernel implementation
Post by: chemlud on January 21, 2021, 08:31:40 am
There is not really a "client" and a "server" in WG.

Yes and no: the client is the one that has no allow rule on WAN, so it has to initiate the traffic in the outbound direction to make the tunnel happen

So the client would accept that packet from any IP address as long as the keys match.

No, the receiver checks both key and IP to identify legitimate traffic:

"Once decrypted, the plain-text packet is from 192.168.43.89. Is peer LMNOPQRS allowed to be sending us packets as 192.168.43.89?"

https://www.wireguard.com/#conceptual-overview
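As a rough Python sketch of that inbound check (hypothetical peer table; the real implementation does this in the kernel on the decrypted inner packet): after decryption succeeds with a given peer's key, the inner source address must also fall within that peer's allowed IPs, otherwise the packet is dropped:

```python
import ipaddress

# Hypothetical table: allowed-IPs ranges per peer, keyed by the peer's public key
ALLOWED = {
    "peer_LMNOPQRS": [ipaddress.ip_network("192.168.43.0/24")],
    "peer_ABCDEFGH": [ipaddress.ip_network("10.9.9.9/32")],
}

def accept_inbound(peer_pubkey: str, inner_src_ip: str) -> bool:
    """The packet already decrypted successfully with `peer_pubkey`'s session.
    'Is peer LMNOPQRS allowed to be sending us packets as 192.168.43.89?'"""
    src = ipaddress.ip_address(inner_src_ip)
    return any(src in net for net in ALLOWED.get(peer_pubkey, []))
```

So a valid key alone is not enough; the same allowed-IPs list acts as a routing table outbound and as an ACL inbound.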
Title: Re: wireguard kernel implementation
Post by: JeGr on February 04, 2021, 10:38:54 pm
The problem is, there is no daemon to bind to an interface. So when you send a packet to the client, you can't control whether the source is the interface IP or the CARP IP.

@mimugmail: Just a heads up - perhaps one could have a look at the patched kernel module in Netgate's Redmine thread.
-> https://redmine.pfsense.org/issues/11354
Seems the patched ko actually recognizes and respects the destination address, so incoming traffic e.g. on WAN1 will be responded to via WAN1, WAN2 will get answers via WAN2, and packets to the CARP IP seem to be answered via the CARP IP. The only open problem seems to be that a manually demoted CARP master (that's still active otherwise) is still sometimes getting packets, but the other failover scenarios look good. So when implementing the kernel module mode, that could also bring a fix for using CARP and MultiWAN - or let's say for most cases. :)
Title: Re: wireguard kernel implementation
Post by: mimugmail on February 04, 2021, 10:43:15 pm
Thx, I'll have a look once Franco has added the patches to the kernel :)