OPNsense Forum

Archive => 16.7 Legacy Series => Topic started by: bb-mitch on October 13, 2016, 12:43:10 am

Title: PPTP / MPD performance issue
Post by: bb-mitch on October 13, 2016, 12:43:10 am
First... I know PPTP is not secure - believe me, we aren't counting on this for security, only for bonding public traffic. 8)

We are close IF this is possible, so bear with me - here are all the details.

Background / goal: We need to create a bonding solution to improve performance by bonding multiple similar links (or potentially dissimilar links) into a single higher-speed link. MLPPP might be ideal, except that the connections run over the Internet rather than bridged Ethernet, so as far as I understand that’s not possible. We settled on using the MPD plugin on a head-end box, configuring it to accept the PPTP traffic, bundle it, and NAT the traffic to the internet.

We tried this before, and when it didn’t work we assumed that minor differences in latency between the links were throwing off the traffic splitting. The new system has two connections as close to the same performance as we can practically manage and maintain. If we knew what we had to change, we could, though.
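
For what it's worth, this is roughly how we compare per-link latency to the head end from the client (the addresses are placeholders; -S only sets the source address, so it relies on outbound policy routing to send each ping out the matching WAN):

Code:
# on the client firewall: 20 pings to the MPD server, sourced from each WAN address in turn
ping -c 20 -S 198.51.100.10 203.0.113.1
ping -c 20 -S 198.51.100.20 203.0.113.1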

From my understanding, there are a few options...
-   PPTP / MPD (which we are trying)
-   MLPPP, which is PPPoE-based and requires bridged connectivity (which we can't always have)
-   OpenVPN (another idea, but it's user space and probably lower performance than MPD, from what I read)

If you have another suggestion based on open tool chains, we're willing to look. OPNsense is something we're comfortable with, so we're looking to leverage knowledge already at hand.

The good news is, it works – we can see the multiple connections, and the two WAN connections on the remote end seem to be used similarly (balanced). The bad news is that the performance isn’t great. It doesn’t make anything better – it makes things worse… And I’m not sure why or where the limit is – in our config or in our understanding. Nothing we can see in terms of bandwidth, CPU, etc. seems to be saturated on either end.

Network / Client side: The two connections are virtually identical VDSL services with 50Mb/s downstream and 10Mb/s upstream.

Hardware / Client side:
AMD G-T40E Processor (2 cores) / 4GB RAM
OPNsense 16.7.5-i386
FreeBSD 10.3-RELEASE-p9
OpenSSL 1.0.2j 26 Sep 2016

Network / Server side: The server side is connected by gigabit Ethernet and is capable of bursting to full line rate.

Hardware for the MPD Server:
AMD GX-412TC SOC (4 cores) / 4GB RAM
OPNsense 16.7.5-amd64
FreeBSD 10.3-RELEASE-p9
OpenSSL 1.0.2j 26 Sep 2016

Setup:

On the client, we created a PPTP interface and used the two existing WANs as link interfaces. We put the server gateway IP in both interface gateways and left the “Local IP”s blank. Then we added a LAN rule to route a PC out over this connection. When it browses the internet, the public IP of the MPD server is seen as its IP.
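
One thing we still want to rule out is fragmentation from the reduced tunnel MTU. A rough way to probe the path MTU from the routed PC (the host and packet size are just examples):

Code:
# from the PC routed over the tunnel: send with the Don't Fragment bit set and shrink the size until it gets through
# FreeBSD/macOS: ping -D -s <size>   Linux: ping -M do -s <size>   Windows: ping -f -l <size>
ping -D -s 1400 8.8.8.8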

On the server, we:
-   Installed the MPD/PPTP plugin and enabled the PPTP server (VPN->PPTP->Settings), setting:
-   No. of PPTP users (16)
-   Server address (server WAN IP)
-   Remote address range (private IP range for the tunnel endpoints)
-   PPTP DNS servers
-   Added a couple of users – we did not assign those users to specific IP addresses.
-   Started the PPTP service. (A sketch of what we think the resulting mpd configuration looks like is below.)
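
For context, this is roughly the shape of mpd5 configuration we believe those settings translate into; the addresses are placeholders and we have not verified that the plugin actually enables multilink on the link template, which is presumably what real bonding needs:

Code:
# sketch of an mpd5 PPTP server config with multilink - not the plugin's literal output
startup:

default:
        load pptp_server

pptp_server:
        set ippool add pool1 10.10.10.2 10.10.10.17   # "Remote address range"
        create bundle template B
        set iface enable tcpmssfix                     # clamp TCP MSS to the tunnel MTU
        set ipcp ranges 10.10.10.1/32 ippool pool1
        set ipcp dns 8.8.8.8                           # "PPTP DNS servers"
        create link template L pptp
        set link action bundle B
        set link enable multilink                      # MP must be negotiated on both ends to bond
        set link no pap eap
        set link enable chap
        set link keep-alive 10 60
        set pptp self 203.0.113.1                      # "Server address" (server WAN IP)
        set link enable incoming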

For the firewall rules on the server we:
-   Allow all GRE traffic to the WAN interface
-   Allow all TCP:1723 traffic to the WAN interface (a quick check of the loaded rules is below)
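
To make sure both rules actually made it into the running ruleset, the loaded pf rules can be listed from a shell on the server:

Code:
# on the MPD server: show the active pf rules and keep only the PPTP-related ones
pfctl -sr | grep -Ei 'gre|1723'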

The results:

Network / Performance: Speedtest.net on both links normally shows 50Mb/s or more down, and 10Mb/s or more up. Each link is about 12-15ms from the MPD server. The server is 1ms to 4ms from the speedtest.net test servers.
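
Before blaming the bundle itself, we also want to baseline each link on its own against the head end. Something like iperf3 would do it (not installed by default; the addresses below are placeholders, and binding a source address only picks the right WAN if outbound traffic is policy-routed by source):

Code:
# on the MPD server
iperf3 -s
# on the client, one WAN at a time; -P 4 runs four parallel streams, -R tests the download direction
iperf3 -c 203.0.113.1 -B 198.51.100.10 -t 20 -P 4
iperf3 -c 203.0.113.1 -B 198.51.100.10 -t 20 -P 4 -R
iperf3 -c 203.0.113.1 -B 198.51.100.20 -t 20 -P 4
iperf3 -c 203.0.113.1 -B 198.51.100.20 -t 20 -P 4 -R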

The CPU idle percentage, normally 85-90%, does not change during the tests, and top does not show any noticeable CPU use during the speed tests.

I ran iostat during the tests – the only noticeable change is a small increase in interrupt time (1 or 2%!)
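
In case one core or the interrupt path is maxed out without showing up in the aggregate numbers, a per-CPU and per-interface view during a test might tell more:

Code:
top -SHPz         # -P shows each CPU separately, -S/-H include kernel threads; look for a single pegged core
systat -ifstat 1  # live per-interface throughput, refreshed every second
vmstat -i         # cumulative interrupt counts per device; compare before and after a test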

The results are horrible though! We see a combined:
Download speed of 12 to 20 Mb/s
Upload speed of 6 to 9 Mb/s.


So why is it slow? Shouldn’t it be faster?

Using traffic graphs we can see the traffic is split into two streams and recombined. But it almost feels like we are tunneling over a TCP socket, waiting for ACKs for every packet... is that what's happening?
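
One way to confirm whether the payload is really riding GRE (rather than somehow going over the TCP 1723 control connection) would be a capture on the server's WAN during a test - the interface name is a placeholder:

Code:
# on the MPD server during a speed test
tcpdump -ni igb0 'tcp port 1723'   # PPTP control channel: should stay almost silent during a transfer
tcpdump -ni igb0 'ip proto 47'     # 47 = GRE; the tunnelled data should be the bulk of the traffic here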

If so, how do we fix it?

I'm sure we must have missed something?

Thanks in advance!!