Openconnect throughput

Started by Jeroen1000, June 20, 2018, 04:48:16 PM

I was hoping you were connecting to an ocserv server. I'll redownload openconnect as a package and see whether it still behaves the same.

No, that was the reason for creating this plugin: so I can connect to my company without having the client installed :)

Would you reckon it is much work to include the option to create multiple interfaces (like with OpenVPN) and provide the option to pick a custom vpnc-script per interface? That way the routing can be custom-tailored by those who can edit the vpnc-script.


Do you already have a running setup within OPNsense? I don't want to invest time only to find out it's something unsupported regarding the FreeBSD port ...

Yes, I've managed to test it already. I tested it via the CLI, adding the firewall rules and NAT (manually) for running a 2nd interface. But I need to be able to use tweaked vpnc-scripts that don't add a default gateway.

Basically, passing more parameters, the way you would when invoking openconnect via the CLI.
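For illustration, a manual invocation with such extra parameters might look like this (server name, username and script path are placeholders, not my actual setup):

```shell
# Hypothetical CLI invocation: second tunnel on its own interface,
# with a custom vpnc-script instead of the stock one
openconnect \
    --background \
    --interface ocvpn1 \
    --script /usr/local/etc/vpnc-script-nodefroute \
    --user jdoe \
    vpn.example.com
```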

Once the MTU issue is resolved (which I'm confident will happen, as openconnect is under active development) I can remove my temporary OpenVPN setup.

When you post the whole setup with all processes, config files and vpnc-scripts I can have a look. But only if it really works on OPNsense ...

Will do. I need to incorporate a patch first for the MTU issue and recompile openconnect on BSD, so that will take some time as I've never done that before.

Here is a status update:

MTU-issue 1320 (client) vs 1340 (server): SOLVED
The patch won't be required. I found out that the client is presenting these ciphers:
Cipher Suites (4 suites)
    Cipher Suite: TLS_PSK_WITH_AES_256_CBC_SHA (0x008d)
    Cipher Suite: TLS_PSK_WITH_AES_128_CBC_SHA (0x008c)
    Cipher Suite: TLS_PSK_WITH_3DES_EDE_CBC_SHA (0x008b)
    Cipher Suite: TLS_EMPTY_RENEGOTIATION_INFO_SCSV (0x00ff)

And the server is picking the first of those, even though the server side itself prefers GCM ciphers. This is the cause of the MTU mismatch: GCM ciphers have lower overhead, so the server can get away with 1340 bytes, while the client can't, as the CBC ciphers have higher overhead. Is it a negotiation bug? I'd say yes. I've put it on the openconnect mailing list.

Anyway, the fix is to use the option --dtls-ciphers=OC-DTLS1_2-AES128-GCM or --dtls-ciphers=OC-DTLS1_2-AES256-GCM
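In other words, something like this on the command line (server name is a placeholder):

```shell
# Workaround for the MTU mismatch: force a DTLS GCM suite instead of
# letting the default PSK-NEGOTIATE end up on a CBC cipher
openconnect --dtls-ciphers=OC-DTLS1_2-AES128-GCM vpn.example.com
```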


The vpnc-script: I'm going to manually edit the routing table for now. That manual edit will be replaced by altering the script later. This has been done before for various purposes. This isn't really a blocking factor.
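As a sketch of that later script alteration: one known way to keep the default route untouched is a small wrapper that fakes a split-include before handing over to the stock vpnc-script, so only the VPN network gets routed through the tunnel. The /16 and the script path below are assumptions based on this thread, not the actual setup:

```shell
#!/bin/sh
# Hypothetical wrapper vpnc-script: pretend the server pushed a
# split-include for 10.65.0.0/16 only, so the stock script routes
# just that network and leaves the default route alone.
export CISCO_SPLIT_INC=1
export CISCO_SPLIT_INC_0_ADDR=10.65.0.0
export CISCO_SPLIT_INC_0_MASK=255.255.0.0
export CISCO_SPLIT_INC_0_MASKLEN=16
# Path to the stock script is an assumption; adjust for your install
exec /usr/local/sbin/vpnc-script "$@"
```

Pass it to openconnect via --script, one wrapper per tunnel.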

Multiple tunnels + routing setup: the gist of the matter. I'm going to set this up, hopefully over the weekend, with a full test. If that works it would be super if you could make multiple instances happen (no pressure intended!) ;D




When you send me a list of all ciphers I can add a dropdown list

August 04, 2018, 01:09:52 PM #39 Last Edit: August 04, 2018, 01:12:50 PM by Jeroen1000
It is best to let DTLS negotiate the cipher itself with PSK-NEGOTIATE. On Linux this behaves; on BSD it does not, for some reason. So the only valid strings I found in the RFC are these:

dtls-ciphers=OC-DTLS1_2-AES128-GCM
dtls-ciphers=OC-DTLS1_2-AES256-GCM
dtls-ciphers=PSK-NEGOTIATE (which is the default and results in an MTU mismatch)

So the option is an override if you either want to use GCM explicitly or, as in this case, when the negotiation botches up.
You could go OpenVPN-style and just provide a free-text window where you can put parameters. Just an idea.


August 15, 2018, 05:43:55 PM #40 Last Edit: August 15, 2018, 05:48:02 PM by Jeroen1000
Quote from: mimugmail on August 04, 2018, 12:08:33 PM
When you send me a list of all ciphers I can add a dropdown list

I just completed testing with multiple instances after getting some alone time with the router :-). I'll send you a write-up as a PDF with all the steps I took this weekend. There is one thing that I don't quite grasp, so I'll spill the beans on that:

ocvpn0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1322
options=80000<LINKSTATE>
inet6 fe80::4262:31ff:fe00:c081%ocvpn0 prefixlen 64 scopeid 0xb
inet 10.23.244.84 --> 10.23.244.84  netmask 0xffffffff
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
groups: tun ocvpn
Opened by PID 89427


When you run openconnect with your plugin it sets up the default route (which I obviously don't do when using multiple instances).
When trying to figure out what the default gateway was, I did a netstat -rn. The default gw pointed to 10.23.244.84, which is also my interface IP, as you can see in the output fragment just above.

Apparently, 10.23.244.84 is not the real gateway. When doing a traceroute, the next hop was within the /16 that the vpnc-script added to the routing table:

traceroute to google.be (172.217.13.163), 64 hops max, 52 byte packets

1  10.65.0.1 (10.65.0.1)  107.129 ms  109.283 ms  107.764 ms


Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
10.65/16           10.23.244.84       UGSc            0        0   ocvpn0


So I don't understand why the real gw 10.65.0.1 is not in the routing table. But in the OPNsense GUI for setting up the far gateway, I had to fill in the real one I found using traceroute. So it is not a parameter for establishing the session, but it is essential to configure the real gateway in the OPNsense GUI manually :)
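The diagnosis above boils down to two commands (the destination is just an example host):

```shell
# The routing table only shows the interface's own address as "gateway",
# so ask traceroute for the actual first hop through the tunnel
traceroute -n -m 1 google.be

# And compare with what the kernel has for the tunnel interface
netstat -rn -f inet | grep ocvpn0
```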


It doesn't set up the default route; it's the server pushing you this route.

Yes, I'm not questioning that :D. But it is the vpnc-script that sets up the routes in the actual routing table (the routes that the server pushes). My point is that the real default gateway IP isn't in the routing table. Something more is going on behind the scenes here.

I did not immediately see this default route in the verbose output of openconnect, but I'll look a bit further.

August 17, 2018, 01:23:10 PM #43 Last Edit: August 17, 2018, 01:25:16 PM by Jeroen1000
Quote from: mimugmail on August 15, 2018, 06:24:15 PM
It doesnt set up the default route, it's the server pushing you this route.

OK, I got it. Final hurdle taken at last. If you do not allow openconnect to set up a default route (which I do not want, as I run multiple tunnels), you need to define a gateway in OPNsense that you can use in the PBR firewall rules. But that's just pointing out the obvious.

This is the undesired situation:
Internet:
Destination        Gateway            Flags     Netif Expire
default            10.65.244.84       UGS      ocvpn0
10.65.0.0/16       10.65.244.84       UGS      ocvpn0
10.65.200.84       link#9             UH       ocvpn0


If we remove the default route and restore the one to the ISP, this would be the new, desired situation:
Internet:
Destination        Gateway            Flags     Netif Expire
default            92.xxx.xx.1       UGS        igb0
10.65.0.0/16       10.65.244.84       UGS      ocvpn0
10.65.200.84       link#9             UH       ocvpn0


However, here is the caveat: if you define the GW in the GUI as being 10.65.244.84 it won't work, as 10.65.244.84 points to its own ocvpn0 interface address. That does not compute for OPNsense, so it just does not use it. We do know the actual GW (= the remote VPN server) is somewhere in the /16. So filling in any IP other than 10.65.244.84 that is within the /16 fixes the routing.

So it's just a trick to get packets to the ocvpn0 interface. The real gateway does not matter; it just can't be the interface address, as I stated above.
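A sketch of that manual route fix from the shell (the ISP gateway here is a placeholder standing in for the redacted 92.x address above; substitute your own):

```shell
# Drop the default route that points into the tunnel...
route delete default
# ...and restore the one via the ISP gateway (placeholder address)
route add default 92.0.2.1

# The /16 towards the VPN stays routed via ocvpn0
netstat -rn -f inet | grep 10.65
```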

After lots of tinkering I can start on the how-to now :-)



Just had the time to write up my overview. See attachment! :-) I believe my manual actions can be scripted easily, but I'm not very good at scripts. At all.