1Gbit PPPoE, getting 600 down and 940 up speeds

Started by bvgoor, June 02, 2022, 08:00:04 AM

Hi,

Wondering if somebody is willing to point me in the right direction on how to troubleshoot why I'm getting a difference in speeds on a 1Gbit up/down connection.

The default KPN (Dutch) fiber router/modem is able to do full speed (930/930).

Running OPNsense 22.1.8, virtualized (ESXi, Intel NIC hardware passthrough for WAN)

Thanks

Bas

Hi Bas, is the firewall reduced to just a NAT router? Do you run the KPN device as a modem?

Hi, thanks for the quick reply! Not using any KPN equipment other than the fiber-to-Ethernet converter.

CPU? Number of vCPUs allocated to OPNsense? NIC? Is the NIC passed through, or are you using vmxnet3?

Hi

WAN interface: Intel Gbit server adapter (via ESXi passthrough); remaining interfaces (LAN, DMZ, etc.) are vmxnet3
8 vCPUs, 8 GB RAM, Intel Xeon D-1541 2.1 GHz

The upload is maxing out at 930 Mbit, so that seems fine; only the download, at around 600 Mbit, is the issue.

Bas

I think I'm running a similar setup, also on the latest stable version of OPNsense, but not in a virtual environment.

When using speedtest.net to KPN Amsterdam I get: ping 4 ms, download 937 Mbps and upload 938 Mbps.
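
If you want to take the speedtest server out of the equation, iperf3 in both directions against a host you control is a quick cross-check (<server> is a placeholder for your own endpoint):

  iperf3 -c <server>          # upload direction (this host -> server)
  iperf3 -c <server> -R       # reverse test: download direction (server -> this host)
  iperf3 -c <server> -R -P 4  # four parallel streams, in case a single flow caps out

If iperf3 shows the same ~600 Mbps down, the bottleneck is on the firewall/PPPoE side rather than the test server.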

Versions: OPNsense 22.1.8_1-amd64, FreeBSD 13.0-STABLE, OpenSSL 1.1.1o 3 May 2022
CPU type: 11th Gen Intel(R) Core(TM) i5-11400 @ 2.60GHz (6 cores, 12 threads)
NICs: 2x Intel 1Gbit/sec (Intel(R) PRO/1000 82576).

Best regards.

June 03, 2022, 08:36:59 PM #6 Last Edit: June 03, 2022, 08:41:53 PM by johndchch
Quote from: bvgoor on June 03, 2022, 03:12:41 PM
WAN interface: Intel Gbit server adapter (via ESXi passthrough); remaining interfaces (LAN, DMZ, etc.) are vmxnet3
8 vCPUs, 8 GB RAM, Intel Xeon D-1541 2.1 GHz

By 'Intel server adapter' do you mean an 82576? I tried one of those in passthrough (probably for the same reason as you - 82576 support was dropped from ESXi 7, so it's passthrough or nothing) and was getting some serious packet drops under load. I rapidly reverted to using an i210 (and from there to trunking WAN+LAN over a 10GbE link into an X540-T2).

All my testing (1Gbps PPPoE link, same as you) has shown that the impact of using vmxnet3 vs passthrough is basically zero - I'd try using your vmxnet3-backed adaptors for both WAN and LAN and see if the problem goes away.
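
A quick way to confirm which driver each interface actually ended up on, from the OPNsense shell (em*/igb* would be the Intel passthrough NICs, vmx* the vmxnet3 ones):

  ifconfig | grep -E '^(em|igb|vmx)'   # interface names reveal the driver in use
  pciconf -lv | grep -B 3 network      # PCI network devices and the driver that claimed them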


Actually it's an 82571; that was just the one I had lying around. No special reason, other than that it feels safer to have a dedicated NIC than a VLAN via switch/trunk.

I'll see about getting a different NIC to test. Strange that it only affects downstream and not upstream.

Quote from: bvgoor on June 05, 2022, 02:41:16 PM
Actually it's an 82571; that was just the one I had lying around. No special reason, other than that it feels safer to have a dedicated NIC than a VLAN via switch/trunk.

I'll see about getting a different NIC to test. Strange that it only affects downstream and not upstream.

There's a note in the 82571 errata docs about TX FIFO overruns causing drops - you should be able to check the number of drops via sysctl.
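
Something along these lines from the shell should show the counters (assuming the em(4) driver; exact sysctl names vary per driver and FreeBSD version):

  netstat -i                                # Ierrs / Idrop columns per interface
  sysctl dev.em.0 | grep -Ei 'drop|missed'  # per-NIC stats; adjust em.0 to your WAN unit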

Do you have RSS enabled? (Using multiple queues and interrupts should help.)
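
For reference, these are the tunables the OPNsense docs suggest for RSS (System > Settings > Tunables, reboot required):

  net.inet.rss.enabled = 1   # enable receive-side scaling
  net.inet.rss.bits = 2      # 2^2 = 4 buckets; scale this to your vCPU count
  net.isr.maxthreads = -1    # one netisr thread per core
  net.isr.bindthreads = 1    # pin those threads to their cores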

As for VLANs - let ESXi handle them via VLAN tagging at the port group level. That way OPNsense sees each VLAN as a separate vmxnet3 adaptor, which is much easier than doing it inside OPNsense and allows NIC sharing between VMs.
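
Something like this on the ESXi CLI, or via the 'VLAN ID' field of the port group in the vSphere UI (port group name and VLAN ID below are just examples):

  esxcfg-vswitch -p "WAN-PPPoE" -v 6 vSwitch0   # tag the "WAN-PPPoE" port group with VLAN 6
  esxcfg-vswitch -l                             # list vSwitches/port groups to verify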

Tried a few different options and nothing really solved it.

Now using a spare dedicated device, which works fine.

@all: thanks for the assistance