Wireguard performance 100% faster on pfSense than OPNsense

Started by pfop, February 19, 2024, 05:04:59 PM

Quote from: meyergru on December 11, 2024, 10:52:56 PM
Have you tried using more than one stream (iperf -P8)?
Yeah, same results. No matter what I tweak, the speed is capped somewhere and the test results come out identical. All systems are now up to date, and the problem persists.
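For reference, the multi-stream runs were along these lines (addresses are placeholders):

    iperf3 -s                        # on the remote end
    iperf3 -c 10.0.0.1 -P 8 -t 30    # 8 parallel streams through the tunnel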

I haven't checked any firewall rules yet; the next thing I'll probably try is disabling rules one by one.
Maybe I should try a very low MTU/MSS; I haven't gone below 1200.
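One quick way to probe what actually fits through the path, sketched for a FreeBSD/OPNsense shell (host and size are examples): set the don't-fragment bit and lower the payload until the errors stop.

    # 1392-byte payload + 28 bytes of ICMP/IP headers = 1420-byte packet
    ping -D -s 1392 192.0.2.1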

If nothing fixes the problem, I will probably have to migrate to OpenVPN or IPsec. I hope I will find some solution :D

Don't go below 1280. For one, it's useless; second, 1280 is the mandatory minimum MTU for IPv6. 1500 - 1280 leaves 220 bytes of possible encapsulation overhead to account for. You will never need that much.
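For the concrete numbers, a back-of-the-envelope assuming standard WireGuard framing (4 type + 4 receiver index + 8 counter + 16 auth tag = 32 bytes):

    IPv4 underlay: 20 (IP) + 8 (UDP) + 32 (WireGuard) = 60 bytes -> tunnel MTU 1500 - 60 = 1440
    IPv6 underlay: 40 (IP) + 8 (UDP) + 32 (WireGuard) = 80 bytes -> tunnel MTU 1500 - 80 = 1420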
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

Some comparison data:

I have OPNsense running bare metal on an N100, which is definitely slower than your 12th-gen i5.

When running iperf3 against my datacenter Proxmox OPNsense with vtnet adapters, I get ~400 Mbit/s both upstream and downstream with -P4, which is what I would expect from the N100. I run MTU 1500 on the WAN interfaces of my equipment and MTU 1400 on my WireGuard instances.

The Proxmox host in the datacenter runs a Core i5-13500, and I use the "host" CPU type to expose AES-NI, with 4 cores assigned.
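For anyone wanting the same via the Proxmox CLI, a sketch (the VM ID is a placeholder):

    # expose the host CPU, including AES-NI, to the guest
    qm set 100 --cpu host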

My uplink is somewhat faster than 400 Mbit/s, so the limit here is definitely WireGuard performance. Ping is 37 ms over WireGuard.

P.S.: With "--bidir", speed decreases substantially to ~250 Mbit/s both upstream and downstream with -P4. The tests were conducted on OPNsense itself, acting as both iperf server and client.
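For reproducibility, the bidirectional run maps to something like (peer address is a placeholder):

    iperf3 -c 203.0.113.5 -P 4 --bidir   # simultaneous up/down, 4 streams each way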
Intel N100, 4 x I226-V, 16 GByte, 256 GByte NVME, ZTE F6005

1100 down / 440 up, Bufferbloat A+

I've been monitoring this thread, and I've tried a few things.
My remote is a Fedora VPS with 2 cores.
I have an Alpine VM on my network and, of course, OPNsense.

When iperf'ing from OPNsense through a WireGuard tunnel to the VPS, I get between 200 and 350 Mbit/s.
When iperf'ing through another tunnel, directly from the Alpine VM to the VPS, I get between 750 Mbit/s and 1.2 Gbit/s.
When iperf'ing through a GRE tunnel between OPNsense and the VPS, I get > 1.2 Gbit/s.
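For context, the GRE leg can be reproduced on the OPNsense side from a FreeBSD shell roughly like this (addresses are placeholders; normally you would configure this in the GUI under Interfaces > Other Types > GRE):

    ifconfig gre0 create
    ifconfig gre0 tunnel 198.51.100.10 203.0.113.5   # local WAN IP, remote VPS IP
    ifconfig gre0 inet 10.9.0.1 10.9.0.2 up          # inner point-to-point addresses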

CPU usage never goes above 30-40% in any of these cases. So yeah, there's an issue with OPNsense. I never had any issue with pfSense, but I will never, ever go back.

I'm running OPNsense in a QEMU VM. I haven't fiddled with microcode or the QAT virtual function yet. I'll try and report back.
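If it helps, one way to confirm the guest actually sees AES-NI from a FreeBSD/OPNsense shell (a generic sketch, nothing setup-specific):

    # AESNI should appear among the CPU feature flags from boot
    grep AESNI /var/run/dmesg.boot
    # and the kernel crypto driver should load
    kldload aesni 2>/dev/null; kldstat | grep aesni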