OPNsense Forum

English Forums => General Discussion => Topic started by: Jeroen1000 on June 20, 2018, 04:48:16 pm

Title: Openconnect throughput
Post by: Jeroen1000 on June 20, 2018, 04:48:16 pm
Dear community

I'm looking to set up OpenConnect in client mode. My main router will PBR traffic to the OPNsense router, which then encrypts it and sends it on its way to my VPN provider. I normally know how to get this done technically, but I do have a few questions.

I need about 70 Mbps of net throughput. However, I'm having trouble finding out whether this VPN flavour is HW-accelerated using AES-NI. I was looking at this board: https://www.pcengines.ch/apu2c4.htm. Is this a good choice, or should I be looking at more powerful HW?

Title: Re: Openconnect throughput
Post by: mimugmail on June 20, 2018, 05:00:41 pm
Only IPsec is HW-accelerated, and it is fastest with GCM encryption. OpenVPN and OpenConnect only do this in userspace (max 200-300 Mbit); no idea how much with the APU.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 20, 2018, 07:26:40 pm
At the risk of sounding dumb, could you explain a bit? I thought OpenSSL could make use of AES-NI. And if not, is the client multithreaded?

If any of this is the case, I'd better upgrade to Intel Atom hardware or even something more powerful
Title: Re: Openconnect throughput
Post by: mimugmail on June 20, 2018, 09:17:17 pm
It can use AES-NI for the encryption, but the packets are handled in userspace, not only in the kernel (as with IPsec).
Why not invest in a Qotom (around 250 €) with an i5? Then you should achieve 200 Mbit with OpenVPN.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 20, 2018, 10:15:36 pm
Thanks, that is an excellent suggestion. I almost pulled the trigger on an Atom in the Denverton series, but the Qotom is a fair bit cheaper.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 21, 2018, 11:30:16 am

One last small question: is there a limit on the number of simultaneous OpenConnect (not OpenVPN) VPN connections? I want to PBR traffic to different VPN tunnels depending on the type of traffic.

Title: Re: Openconnect throughput
Post by: mimugmail on June 21, 2018, 01:39:28 pm
Do you mean the OpenConnect plugin on the firewall itself, or OpenConnect clients behind the firewall in your LAN?
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 21, 2018, 01:59:04 pm
I mean the firewall itself acting as an OpenConnect client, setting up multiple VPN tunnels using this plug-in:
https://www.routerperformance.net/using-openconnect-with-newly-released-opnsense-18-1-1/

I'm doing the same with PPTP on Mikrotik gear: I have 4 PPTP tunnels active. I mangle (mark) traffic based on ports or subnets and send it to the desired PPTP VPN tunnel, in other words policy-based routing. It looks like this is possible for OpenVPN, but I haven't found anything about OpenConnect.

PS: I ordered a Qotom i5-5200U with 4 GB RAM. It's fast enough for anything I might want to throw at it.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 21, 2018, 02:00:10 pm
double post
Title: Re: Openconnect throughput
Post by: mimugmail on June 21, 2018, 03:07:54 pm
No, with the OpenConnect plugin only one instance is allowed ...
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 21, 2018, 03:15:38 pm
That's too bad. Maybe I can configure more via the CLI. We'll see :-)
Title: Re: Openconnect throughput
Post by: mimugmail on June 21, 2018, 03:37:05 pm
Sure, but then you'll have to remove the plugin and only use the package :)
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 21, 2018, 10:56:56 pm
Did you build this plugin? Is this the client that is used: http://www.infradead.org/openconnect/

I did some testing today (note: Linux knowledge: low; networking knowledge: high) using the client from the link above. It was quite easy to establish two tunnels by starting them from two different terminal windows.

1. Two tunnel interfaces (tun0 and tun1) appeared in the Linux routing table, both with a metric of 0 for 0.0.0.0/0.
2. I configured iptables to route only specific IPs to either tun0 or tun1 (policy-based routing in PREROUTING).
3. I changed the metric of the tunnels to be higher than the metric of my LAN gateway, so that regular traffic skips the tunnels.

This works as intended. Now I'm wondering: how hard would it be to expand the plugin for use with multiple tunnels with adjustable metrics? Two can already be handled by OPNsense if the second tunnel interface is visible to it.
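For reference, the three steps above can be sketched as shell commands. This is only a rough reconstruction of my test, and all addresses, marks and table numbers are placeholders; note that iptables alone only marks packets, so the actual steering needs an ip rule:

```shell
# Rough sketch of the Linux test above, assuming tun0/tun1 are already up;
# addresses, marks and table numbers are placeholders.

# Step 2: steer one LAN host into tun1 via a dedicated routing table
# (the iptables mark plus the ip rule together do the PBR).
iptables -t mangle -A PREROUTING -s 192.168.1.50 -j MARK --set-mark 2
ip rule add fwmark 2 table 102
ip route add default dev tun1 table 102

# Step 3: push the tunnels' default routes above the LAN gateway's
# metric so unmarked traffic keeps using the regular uplink.
ip route replace default dev tun0 metric 200
ip route replace default dev tun1 metric 201
```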

Title: Re: Openconnect throughput
Post by: mimugmail on June 22, 2018, 06:17:21 am
Yes, I built the plugin. Since I'm an AnyConnect user, where multi-instance is not supported, I didn't add this to the plugin. Also, I'm not sure how to handle routing with multiple VPNs.

Perhaps it's better if you try to set this up in OPNsense via the CLI, and once it's running the way you intend, we can see how to get this in.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on June 22, 2018, 11:22:25 pm
Here is an update after an evening of testing.
It's mainly a matter of calling openconnect with the correct parameters. It allows multiple VPN interfaces to be established, naming them tun0, tun1, etc. The name can be changed to whatever OPNsense wants with this option:
Code: [Select]
--interface=IFNAME

An example to set up a VPN tunnel:

Code: [Select]
echo "PASSWORD" | openconnect https://xx.xx.xx.xx:PORT --user=USERNAME --passwd-on-stdin --servercert sha256:SOMERANDOMSTUFF --background
Caveats:
It adds a default route to the tunnel with metric 0, so the vpnc-script needs to be adapted to allow setting a custom metric per interface (going to give that a try). You then have a few options:

1) You set the metric higher than your regular LAN GW and PBR traffic to a specific VPN tunnel.
2) You set the metric lower than your regular LAN GW (metric 0, to keep it simple). This will push all traffic over a VPN tunnel.
3) I don't know how you handle VPN providers with self-signed certs, but you need to use --servercert in that case. It can probably be automated, as openconnect literally tells you what to do:


Code: [Select]
certificate from VPN server "xxx.xxx.xxx.xxs" failed verification.
Reason: signer not found
To trust this server in future, perhaps add this to your command line:
    --servercert sha256:SOMERANDOMSTUFF
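Since openconnect prints the exact pin it wants, automating this could boil down to scraping it from that output. A rough, hypothetical sketch of the idea, using the error text quoted above as sample input:

```shell
# Hypothetical helper: extract the --servercert pin that openconnect
# suggests in its verification error, so it can be re-fed on reconnect.
msg='certificate from VPN server "xxx.xxx.xxx.xxs" failed verification.
Reason: signer not found
To trust this server in future, perhaps add this to your command line:
    --servercert sha256:SOMERANDOMSTUFF'

# Pull out the "sha256:..." token following --servercert.
pin=$(printf '%s\n' "$msg" | sed -n 's/.*--servercert \(sha256:[^ ]*\).*/\1/p')
echo "$pin"   # sha256:SOMERANDOMSTUFF
```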
So the plugin would need options to:

1) specify a metric per tunnel, with the ability to change it;
2) offer a checkbox "route ALL traffic over this connection" (which sets the metric to 0 for that specific tunnel; you would then lose the ability to override the metric from (1));
3) handle self-signed certificates, or provide an input box for the user to paste the hash.

I will also test this with OPNsense once my gear arrives, or maybe in a VM if I find the time. I hope this gives you an idea of how to do this?

Title: Re: Openconnect throughput
Post by: mimugmail on June 22, 2018, 11:50:02 pm
You really should test with OPNsense. FreeBSD doesn't support route metrics, and interface renaming in openconnect is only available since 0.12 ;) I'm also on the openconnect mailing list :)
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 14, 2018, 10:24:14 pm
I got my Qotom mini-PC today, so I'll be testing pretty soon, time permitting. I got it working the way I want in Ubuntu, but let's see what I can do with BSD :-)!
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 27, 2018, 03:18:29 pm
I'm back from holiday, so it is time for an update.

Step 1:
- I've installed Opnsense on my Qotom mini-pc.
- I can surf the Internet using a standard home NAT setup

Topology:

Code: [Select]
Qotom WAN (igb0 DHCP client) ---BRIDGED---> Mikrotik routerboard WAN ---> cable modem 
Qotom LAN (igb1 192.168.200.250/24)  ------> Netgear switch ----> LAN clients

The Mikrotik is invisible to the Qotom box because of the bridge, so the OPNsense WAN igb0 has a public IP on it.

Step 2: openconnect

- Plugin is installed
- VPN is up
Code: [Select]
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            10.65.X.X      UGS      ocvpn0
10.65.0.0/16   10.65.X.X       UGS      ocvpn0
10.65.X.X       link#9        UH       ocvpn0
- An outbound NAT rule to masquerade traffic to the VPN interface ocvpn0 has been created and is being hit:

Code: [Select]
root@OPNsense:~ # pfctl -v -s nat
No ALTQ support in kernel
ALTQ related functions disabled
nat log on ocvpn inet from 192.168.200.0/24 to any -> (ocvpn:0) port 1024:65535 round-robin
  [ Evaluations: 468       Packets: 709       Bytes: 106489      States: 0     ]
  [ Inserted: uid 0 pid 2448 State Creations: 62    ]

So far I can ping a random host from the OPNsense box itself, with the traffic going over the VPN as intended.
Hosts on the LAN, however, are unable to do this, so something is still iffy with either the routing or the firewall.
Once I find what's wrong, I'll focus on getting multiple VPN instances running and routing traffic over them based on source IP and/or ToS marking.


Title: Re: Openconnect throughput
Post by: mimugmail on July 27, 2018, 03:51:13 pm
Can you post a screenshot of your NAT rule?
For me this setup works fine ...
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 27, 2018, 05:19:30 pm
Screenshot attached. It appears ICMP is working fine, contrary to what I said in my previous post, but HTTP and HTTPS are not working, hmm. That doesn't make sense to me just yet.
Title: Re: Openconnect throughput
Post by: mimugmail on July 27, 2018, 07:06:36 pm
Transparent Proxy?
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 27, 2018, 10:47:32 pm
I'll probably have to sniff the traffic to see what is happening. This should not be a proxy issue: the VPN works fine if the tunnel is set up on a Linux box, so it has to be OPNsense-specific. But just to rule out my Mikrotik router, I'll plug the WAN port directly into the cable modem.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 27, 2018, 11:48:31 pm
Transparent Proxy?

I believe it might be the MTU. It is set to 1322 for the vpn interface.

Code: [Select]
ocvpn0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1322
So I started a tunnel myself via the shell

Code: [Select]
tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1322

It has the same MTU. The maximum ICMP payload I can get through is 1294 bytes (+ 20 for the IP header + 8 for ICMP makes 1322). Any higher and I see this error:

Code: [Select]
LZS decompression failed: File too large
On my Mac the MTU for the interface is 1340, so maybe that is what the server is using and I'm dealing with some MTU mismatch...
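The header budget above can be checked with a quick sum; 1322 is the tunnel MTU that ifconfig reports:

```shell
# MTU budget for the ping test: ICMP payload + ICMP header + IP header
# must fit inside the 1322-byte tunnel MTU.
payload=1294
icmp_hdr=8
ip_hdr=20
echo $((payload + icmp_hdr + ip_hdr))   # 1322 = tunnel MTU
```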
Title: Re: Openconnect throughput
Post by: mimugmail on July 28, 2018, 01:15:46 am
Then you have to lower the TCP MSS on the LAN, and regarding ICMP, just don't set the DF bit.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 28, 2018, 12:46:00 pm
But it is DTLS (UDP), so the TCP MSS probably does not apply?
I sniffed the connection in the meantime. The TCP three-way handshake completes with destination host 213.239.154.31, but then my source host 192.168.200.50 seems to receive a packet it was not expecting, so it sends a duplicate ACK to tell 213.239.154.31. This happens twice before 213.239.154.31 closes the connection.

So most likely something is causing packet loss (MTU still being my prime suspect). I did this test with the OPNsense box directly connected to the cable modem. All I need to do now is find the cause ::)

Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 28, 2018, 02:38:21 pm
Alright, I'm getting somewhere near the cause.

I manually loaded the NAT rule via the console and manually started a tunnel with this option:
Code: [Select]
--no-deflate
This at least made this error go away:
Code: [Select]
LZS decompression failed: File too large

Now I am able to browse the web, but the speed of the tunnel has taken quite a large hit because compression is now completely disabled.

MTU related debug info with compression disabled:
Code: [Select]
X-DTLS-CipherSuite: PSK-NEGOTIATE
X-CSTP-Base-MTU: 1406
X-CSTP-MTU: 1340
DTLS option X-DTLS-DPD : 90
DTLS option X-DTLS-Port : 22
DTLS option X-DTLS-Rekey-Time : 172838
DTLS option X-DTLS-Rekey-Method : ssl
DTLS MTU reduced to 1322
Established DTLS connection (using OpenSSL). Ciphersuite PSK-AES256-CBC-SHA.
Initiating IPv4 MTU detection (min=661, max=1322)
No change in MTU after detection (was 1322)

So the issue lies with the LZS decompression. Does this give you an idea as to how this can be fixed?
Title: Re: Openconnect throughput
Post by: mimugmail on July 28, 2018, 05:07:06 pm
If you find a config option to disable it, I can integrate it.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on July 29, 2018, 06:27:36 pm
The option I mentioned disables it, but it's not a good idea, as the loss of speed is very severe. I'll leave a line on the mailing list to ask what the cause might be.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 01, 2018, 12:36:04 pm
If you find a config option to disable I can integrate it

I can confirm now that there is an MTU mismatch: on OPNsense I have 1322 bytes, but the remote side has 1340. There are workarounds, but I still need to test them. This might also be a bug that needs fixing, which I'm asking about on the openconnect mailing list. I also believe 1322 is incorrect; I see no reason why it settles on that value.

May I ask what your interface MTU is when you connect to your server? It says what it negotiates too if you start it with -v for more debug info.
Title: Re: Openconnect throughput
Post by: mimugmail on August 01, 2018, 01:00:30 pm
ocvpn0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1406
        options=80000<LINKSTATE>
        inet6 fe80::a00:27ff:febf:2658%ocvpn0 prefixlen 64 scopeid 0x9
        inet 10.24.69.165 --> 10.24.69.165  netmask 0xffffffff
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: tun ocvpn
        Opened by PID 81777

This is the value my ASA assigns to the client ...
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 01, 2018, 01:16:48 pm
I was hoping you were connecting to an ocserv server. I'll re-download openconnect as a package and see whether it still behaves the same.
Title: Re: Openconnect throughput
Post by: mimugmail on August 01, 2018, 01:51:18 pm
No, that was the reason for creating this plugin: so I can connect to my company without having the client installed :)
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 01, 2018, 02:05:42 pm
Would you reckon it is much work to include the option to create multiple interfaces (as with OpenVPN) and to provide the option to pick a custom vpnc-script per interface? That way the routing can be custom-tailored by those who can edit the vpnc-script.

Title: Re: Openconnect throughput
Post by: mimugmail on August 01, 2018, 03:33:17 pm
Do you already have a running setup within OPNsense? I don't want to invest time only to find out it's something unsupported regarding the FreeBSD port ...
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 01, 2018, 03:57:35 pm
Yes, I've managed to test it already. I tested it via the CLI, adding the FW rules and NAT manually for running a 2nd interface. But I need to be able to use tweaked vpnc-scripts that don't add a default GW.

Basically, it comes down to passing more parameters, the way you would when invoking openconnect via the CLI.

Once the MTU issue is resolved (which I'm confident will happen, as openconnect is under active development), I can remove my temporary OpenVPN setup.
Title: Re: Openconnect throughput
Post by: mimugmail on August 01, 2018, 05:11:01 pm
When you post the whole setup with all processes, config files and vpnc-scripts, I can have a look. But only if it really works on OPNsense ...
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 02, 2018, 10:33:47 am
Will do. I need to incorporate a patch for the MTU issue first and recompile openconnect on BSD, so that will take some time, as I've never done that before.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 04, 2018, 11:27:04 am
Here is a status update:

MTU issue, 1322 (client) vs 1340 (server): SOLVED
The patch won't be required. I found out that the client is presenting these ciphers:
Cipher Suites (4 suites)
    Cipher Suite: TLS_PSK_WITH_AES_256_CBC_SHA (0x008d)
    Cipher Suite: TLS_PSK_WITH_AES_128_CBC_SHA (0x008c)
    Cipher Suite: TLS_PSK_WITH_3DES_EDE_CBC_SHA (0x008b)
    Cipher Suite: TLS_EMPTY_RENEGOTIATION_INFO_SCSV (0x00ff)

And the server is picking the first, while the server itself prefers GCM ciphers. This is the cause of the MTU mismatch: the GCM ciphers have lower overhead, so the server can get away with 1340 bytes, while the client can't, as the CBC ciphers have higher overhead. Is it a negotiation bug? I'd say yes. I've put it on the openconnect mailing list.

Anyway, the fix is to use the option:
Code: [Select]
--dtls-ciphers=OC-DTLS1_2-AES128-GCM
or:
Code: [Select]
--dtls-ciphers=OC-DTLS1_2-AES256-GCM

The vpnc-script: I'm going to edit the route table manually for now; that manual edit will be replaced by altering the script later. This has been done before for various purposes, so it isn't really a blocking factor.

Multiple tunnels + routing setup: the gist of the matter. I'm going to set this up, hopefully over the weekend, with a full test. If that works, it would be super if you could make multiple instances happen (no pressure intended!) ;D
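Putting the pieces from earlier in the thread together, the full invocation with the GCM override would look roughly like this (same placeholders as in my earlier example):

```shell
# Combined invocation: password on stdin, pinned server cert, and an
# explicit GCM DTLS cipher to work around the MTU mismatch.
echo "PASSWORD" | openconnect https://xx.xx.xx.xx:PORT \
    --user=USERNAME --passwd-on-stdin \
    --servercert sha256:SOMERANDOMSTUFF \
    --dtls-ciphers=OC-DTLS1_2-AES256-GCM \
    --background
```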



Title: Re: Openconnect throughput
Post by: mimugmail on August 04, 2018, 12:08:33 pm
When you send me a list of all ciphers I can add a dropdown list
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 04, 2018, 01:09:52 pm
It is best to let DTLS negotiate the cipher itself with PSK-NEGOTIATE. On Linux this behaves; on BSD, for some reason, it does not. The only valid strings I found in the RFC are these:

dtls-ciphers=OC-DTLS1_2-AES128-GCM
dtls-ciphers=OC-DTLS1_2-AES256-GCM
dtls-ciphers=PSK-NEGOTIATE (the default, which results in an MTU mismatch)

So the option is an override for when you want to use GCM explicitly or, as in this case, when the negotiation botches up.
You could go OpenVPN-style and just provide a free-text field where you can put parameters. Just an idea.

Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 15, 2018, 05:43:55 pm
When you send me a list of all ciphers I can add a dropdown list

I just completed testing with multiple instances after getting some alone time with the router :-). I'll send you a write-up as a PDF with all the steps I took this weekend. There is one thing that I don't quite grasp, so I'll spill the beans on that:

Code: [Select]
ocvpn0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1322
options=80000<LINKSTATE>
inet6 fe80::4262:31ff:fe00:c081%ocvpn0 prefixlen 64 scopeid 0xb
inet 10.23.244.84 --> 10.23.244.84  netmask 0xffffffff
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
groups: tun ocvpn
Opened by PID 89427

When you run openconnect with your plugin, it sets up the default route (which I obviously don't when using multiple instances).
When trying to figure out what the default gateway was, I did a netstat -rn. The default GW pointed to 10.23.244.84, which is also my interface IP, as you can see in the code fragment just above.

Apparently, 10.23.244.84 is not the real gateway: when doing a traceroute, the next hop was within the /16 that the vpnc-script had added to the routing table:

Code: [Select]
traceroute to google.be (172.217.13.163), 64 hops max, 52 byte packets

 1  10.65.0.1 (10.65.0.1)  107.129 ms  109.283 ms  107.764 ms
Code: [Select]
Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
10.65/16           10.23.244.84       UGSc            0        0   ocvpn0

So I don't understand why the real GW 10.65.0.1 is not in the routing table. But in the OPNsense GUI for setting up the far gateway, I had to fill in the real one, which I found using traceroute. So this is not a parameter for establishing the session, but it is essential to configure the real gateway manually in the OPNsense GUI :)

Title: Re: Openconnect throughput
Post by: mimugmail on August 15, 2018, 06:24:15 pm
It doesn't set up the default route; it's the server pushing you this route.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 15, 2018, 06:30:20 pm
Yes, I'm not questioning that :D. But it is the vpnc-script that installs the routes (the ones the server pushes) into the actual routing table. My point is that the real default gateway IP isn't in the routing table; something more is going on behind the scenes here.

I did not immediately see this default route in openconnect's verbose output, but I'll look a bit further.
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 17, 2018, 01:23:10 pm
It doesnt set up the default route, it's the server pushing you this route.

Okay, I got it. Final hurdle taken at last. If you do not allow openconnect to set up a default route (which I do not want, as I run multiple tunnels), you need to define a gateway in OPNsense that you can use in the PBR firewall rules. But that's just pointing out the obvious.

This is the undesired situation:
Code: [Select]
Internet:
Destination        Gateway            Flags     Netif Expire
default            10.65.244.84       UGS      ocvpn0
10.65.0.0/16       10.65.244.84       UGS      ocvpn0
10.65.244.84       link#9             UH       ocvpn0

If we remove the default route, and restore the one to the isp, this would be the new desired situation
Code: [Select]
Internet:
Destination        Gateway            Flags     Netif Expire
default            92.xxx.xx.1       UGS        igb0
10.65.0.0/16       10.65.244.84       UGS      ocvpn0
10.65.244.84       link#9             UH       ocvpn0

However, here is the caveat: if you define the GW in the GUI as 10.65.244.84, it won't work, as 10.65.244.84 points to the ocvpn0 interface's own address. That does not compute for OPNsense, so it just does not use it. We do know the actual GW (= the remote VPN server) is somewhere in the /16, so filling in any IP within the /16 other than 10.65.244.84 fixes the routing.

So it's just a trick to get packets to the ocvpn0 interface. The real gateway does not matter; it just can't be the interface address, as I stated above.
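Put as commands, the manual fix sketched above would look something like this on the OPNsense shell. The 92.0.2.1 ISP gateway is a placeholder (my real one is redacted above), and so is the 10.65.0.1 PBR gateway; any /16 address other than the interface's own would do:

```shell
# Rough sketch of the manual route fix (addresses are placeholders).

# Drop the default route the vpnc-script installed over the tunnel ...
route delete default
# ... and restore the ISP gateway on the WAN interface.
route add default 92.0.2.1

# In the GUI, the PBR gateway for ocvpn0 can then be any address inside
# 10.65.0.0/16 except the interface's own 10.65.244.84, e.g. 10.65.0.1.
```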

After lots of tinkering I can start on the how to now:-)


 
Title: Re: Openconnect throughput
Post by: Jeroen1000 on August 25, 2018, 03:36:56 pm
Just had the time to write up my overview; see the attachment! :-) I believe my manual actions can be scripted easily, but I'm not very good at scripts. At all.
Title: Re: Openconnect throughput
Post by: mimugmail on August 29, 2018, 08:05:47 am
Hi,

I had a look at your doc. Stupid question: why do you want to delete the default gateway? PBR rules have a higher priority than system routes. You just need to set up host routes to your multiple VPN servers and you are good. The interface-renaming stuff will come with openconnect 8.0, but I have no idea if it will be backported to FreeBSD 11.

I'll try to find out how this vpnc script stuff works ..


P.S.: WireGuard is way faster; I achieved 1.8 Gbit on server hardware :)
Title: Re: Openconnect throughput
Post by: Jeroen1000 on September 01, 2018, 05:40:22 pm
I like WireGuard too. A lot :-) It's still not widely supported, but I agree: for speed, it is the one to watch.

I had to delete the default route because each time you set up an OpenConnect VPN, it adds itself as the new default GW. This does not affect PBR, as you remark, but I don't want the 'regular' LAN hosts to go through a VPN either. That is why I delete the default route openconnect adds and re-add the one to my ISP.

A second reason is the renaming of the openconnect interface. The current vpnc-script does not restore everything back to normal, normal being no leftovers in the routing table after it disconnects. The cause of this is the renaming: the script should look for the new name instead of the old one.

My next step is playing with the vpnc-script too. It shouldn't be too hard to stop it from adding a default GW.
My next step is playing with the VPNC script too. It shouldn't be too hard to stop it from adding a default GW.