I have a fresh install of OPNsense as a guest on Proxmox at OVH.
I am using an OVH virtual MAC (VMAC) and a /32 Additional IP.
/etc/network/interfaces
=======================
auto vmbr0
iface vmbr0 inet static
    address 1X.235.225.XXX/32
    gateway 100.64.0.1
    bridge-ports enp1s0f0np0
    bridge-stp off
    bridge-fd 0
OPNsense WAN
============
VM_WAN_MAC: <OVH Virtual MAC>
WAN IP: OVH_ADDON/32
GW: 100.64.0.1 (upstream / far gateway)
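For completeness, the OPNsense WAN NIC is a plain VirtIO device on vmbr0 with the OVH virtual MAC set on it; the relevant line in the VM config looks roughly like this (the MAC and VM ID are placeholders):
# /etc/pve/qemu-server/<VMID>.conf
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=0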
I am getting reasonable up/down bandwidth on the Proxmox host; the retry count could be better:
# iperf3 -c speedtest.sin1.sg.leaseweb.net -b 1G
Connecting to host speedtest.sin1.sg.leaseweb.net, port 5201
[ 5] local XXXX:1f00:XXXX:aa00:: port 48930 connected to 2402:a7c0:8100:a013::77 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 119 MBytes 999 Mbits/sec 1 653 KBytes
[ 5] 1.00-2.00 sec 119 MBytes 1.00 Gbits/sec 0 653 KBytes
[ 5] 2.00-3.00 sec 119 MBytes 1.00 Gbits/sec 0 653 KBytes
[ 5] 3.00-4.00 sec 113 MBytes 948 Mbits/sec 218 149 KBytes
[ 5] 4.00-5.00 sec 112 MBytes 941 Mbits/sec 124 148 KBytes
[ 5] 5.00-6.00 sec 112 MBytes 935 Mbits/sec 137 158 KBytes
[ 5] 6.00-7.00 sec 113 MBytes 946 Mbits/sec 145 142 KBytes
[ 5] 7.00-8.00 sec 112 MBytes 944 Mbits/sec 156 166 KBytes
[ 5] 8.00-9.00 sec 112 MBytes 936 Mbits/sec 124 142 KBytes
[ 5] 9.00-10.00 sec 112 MBytes 943 Mbits/sec 136 123 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.12 GBytes 959 Mbits/sec 1041 sender
[ 5] 0.00-10.03 sec 1.12 GBytes 956 Mbits/sec receiver
iperf Done.
===
# iperf3 -R -c speedtest.sin1.sg.leaseweb.net -b 1G
Connecting to host speedtest.sin1.sg.leaseweb.net, port 5201
Reverse mode, remote host speedtest.sin1.sg.leaseweb.net is sending
[ 5] local XXXX:1f00:XXXX:aa00:: port 39864 connected to 2402:a7c0:8100:a013::77 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 124 MBytes 1.04 Gbits/sec
[ 5] 1.00-2.00 sec 119 MBytes 1.00 Gbits/sec
[ 5] 2.00-3.00 sec 119 MBytes 1.00 Gbits/sec
[ 5] 3.00-4.00 sec 119 MBytes 1000 Mbits/sec
[ 5] 4.00-5.00 sec 119 MBytes 1.00 Gbits/sec
[ 5] 5.00-6.00 sec 119 MBytes 1.00 Gbits/sec
[ 5] 6.00-7.00 sec 119 MBytes 999 Mbits/sec
[ 5] 7.00-8.00 sec 119 MBytes 1.00 Gbits/sec
[ 5] 8.00-9.00 sec 119 MBytes 1.00 Gbits/sec
[ 5] 9.00-10.00 sec 119 MBytes 999 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.04 sec 1.17 GBytes 1.00 Gbits/sec 1 sender
[ 5] 0.00-10.00 sec 1.17 GBytes 1.00 Gbits/sec receiver
iperf Done.
But OPNsense performance is abysmal:
root@pma:~ # iperf3 -c speedtest.sin1.sg.leaseweb.net -b 1G
Connecting to host speedtest.sin1.sg.leaseweb.net, port 5201
[ 5] local XXX.99.84.XXX port 43261 connected to 23.108.99.54 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 119 MBytes 999 Mbits/sec 6 453 KBytes
[ 5] 1.00-2.00 sec 119 MBytes 1.00 Gbits/sec 6 312 KBytes
[ 5] 2.00-3.00 sec 119 MBytes 1.00 Gbits/sec 6 386 KBytes
[ 5] 3.00-4.00 sec 87.2 MBytes 732 Mbits/sec 20 93.2 KBytes
[ 5] 4.00-5.00 sec 105 MBytes 879 Mbits/sec 9 123 KBytes
[ 5] 5.00-6.00 sec 136 MBytes 1.14 Gbits/sec 55 97.8 KBytes
[ 5] 6.00-7.00 sec 92.9 MBytes 779 Mbits/sec 12 107 KBytes
[ 5] 7.00-8.00 sec 90.5 MBytes 759 Mbits/sec 23 64.5 KBytes
[ 5] 8.00-9.00 sec 75.9 MBytes 636 Mbits/sec 12 106 KBytes
[ 5] 9.00-10.00 sec 79.6 MBytes 668 Mbits/sec 6 121 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.00 GBytes 859 Mbits/sec 155 sender
[ 5] 0.00-10.00 sec 1.00 GBytes 859 Mbits/sec receiver
iperf Done.
# iperf3 -R -c speedtest.sin1.sg.leaseweb.net -b 1G
Connecting to host speedtest.sin1.sg.leaseweb.net, port 5201
Reverse mode, remote host speedtest.sin1.sg.leaseweb.net is sending
[ 5] local XXX.99.84.XXX port 23238 connected to 23.108.99.54 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.01 sec 768 KBytes 6.23 Mbits/sec
[ 5] 1.01-2.01 sec 896 KBytes 7.31 Mbits/sec
[ 5] 2.01-3.01 sec 1.00 MBytes 8.42 Mbits/sec
[ 5] 3.01-4.01 sec 896 KBytes 7.33 Mbits/sec
[ 5] 4.01-5.01 sec 896 KBytes 7.35 Mbits/sec
[ 5] 5.01-6.02 sec 1.00 MBytes 8.32 Mbits/sec
[ 5] 6.02-7.02 sec 896 KBytes 7.34 Mbits/sec
[ 5] 7.02-8.02 sec 1.00 MBytes 8.39 Mbits/sec
[ 5] 8.02-9.01 sec 1.00 MBytes 8.45 Mbits/sec
[ 5] 9.01-10.01 sec 896 KBytes 7.35 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 9.27 MBytes 7.77 Mbits/sec 1896 sender
[ 5] 0.00-10.01 sec 9.12 MBytes 7.65 Mbits/sec receiver
iperf Done.
And SCP transfer is even worse, at about 700 KB/s:
scp root@home.xxxxx.net:/usr/share/pac/testfile1gb .
1% 12MB 707.6KB/s 24:24 ETA
There's obviously something seriously wrong.
CPU usage during the transfer is under 6%; the VM has 2 cores and 2048 MB RAM. All of the recommended tunings have been added, with no noticeable difference in performance before and after.
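(For context, by "recommended tunings" I mean the usual FreeBSD/OPNsense-on-KVM suggestions from the forum threads, set under System > Settings > Tunables; listed here purely as an illustration, not as an endorsement:)
net.isr.maxthreads=-1      # use all cores for netisr processing
net.isr.bindthreads=1      # pin netisr threads to CPU cores
hw.ibrs_disable=1          # disable Spectre mitigation (security trade-off)
vm.pmap.pti=0              # disable Meltdown mitigation (security trade-off)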
ChatGPT has been spitting out commands, but none of them have worked.
Any suggestions?
Pass through the NIC?
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
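If you go that route, the host-side prerequisites from that wiki page boil down to roughly the following (Intel CPU assumed here; use amd_iommu on AMD, and vfio_virqfd is built into newer kernels):
# /etc/default/grub - enable the IOMMU, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# /etc/modules - load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
# verify after the reboot
dmesg | grep -e DMAR -e IOMMU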
@bart Thanks for helping.
Yes, I did consider that after seeing another post (after posting this one), but I have not tried it yet.
The upload speed is fine; it's only the download that runs at dial-up modem speed.
A vtnet interface in OPNsense? Are all hardware offloading functions disabled? A vtnet interface inside a KVM hypervisor can exhibit that behaviour if hardware offloading is active.
@patrik - Yes, I did have a vtnet, but I have now removed it and rebooted; no change.
All hardware offloading is disabled, as per the default settings.
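(A quick way to double-check from the OPNsense shell, substituting your WAN device for vtnet1: the options field should not list TXCSUM, RXCSUM, TSO4 or LRO when offloading is fully disabled.)
# ifconfig vtnet1 | grep options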
I tried adding the PCI device for enp1s0f1np1 (the second NIC) to the OPNsense VM and lost all network communication.
I found the PCI ID:
# ethtool -i enp1s0f1np1 | awk '/bus-info/ {print $2}'
0000:01:00.1
But in the web interface, when I selected 0000:01:00.1 it added the parent device instead:
hostpci0: 0000:01:00
Before doing that I had disabled the interface in /etc/network/interfaces:
# iface enp1s0f1np1 inet manual
# #NIC2
I presume that if I pass it through to the VM then I need to remove it from /etc/network/interfaces?
And of course, because it added the parent PCI device, as soon as I saved the VM config via the web console I lost all networking.
If I use the PCI resource mapping feature, will that allow me to map just 0000:01:00.1, or will that also map its parent and disable everything?
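(For what it's worth: whether a single function can be passed through on its own depends on its IOMMU group, and as far as I know a hostpci entry that includes the function, e.g. hostpci0: 0000:01:00.1, passes only that function, while one without it passes all functions. A quick way to check the grouping on the host:)
# list which PCI devices share an IOMMU group
for g in /sys/kernel/iommu_groups/*/devices/*; do
    echo "group $(basename $(dirname $(dirname $g))): $(basename $g)"
done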
I found the cause: the hardware offload settings being disabled. With hardware offloading re-enabled, download speed recovered:
# iperf3 -R -c speedtest.sin1.sg.leaseweb.net -b 5GB
Connecting to host speedtest.sin1.sg.leaseweb.net, port 5201
Reverse mode, remote host speedtest.sin1.sg.leaseweb.net is sending
[ 5] local XXX.99.84.XXX port 59066 connected to 23.108.99.54 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.01 sec 600 MBytes 4.99 Gbits/sec
[ 5] 1.01-2.02 sec 602 MBytes 5.00 Gbits/sec
[ 5] 2.02-3.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 3.02-4.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 4.02-5.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 5.02-6.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 6.02-7.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 7.02-8.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 8.02-9.02 sec 596 MBytes 5.00 Gbits/sec
[ 5] 9.02-10.02 sec 596 MBytes 5.00 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 5.83 GBytes 5.00 Gbits/sec 1181 sender
[ 5] 0.00-10.02 sec 5.83 GBytes 5.00 Gbits/sec receiver
That is roughly a 640x improvement (about 64,000%) over the 7.77 Mbit/s from before!
I spoke too soon.
That solves the download issue, but completely disables all of my NAT forwarding.
Is this a bug?
I can resolve the NAT problem.
See here: https://forum.opnsense.org/index.php?topic=44651.0
ifconfig vtnet1 rxcsum -txcsum rxcsum6 -txcsum6 -tso lro
That resolves the NAT blocking and also restores the download bandwidth; however, guests on the OPNsense LAN still experience the same slow speed they always have.
I have tried applying the same settings to the LAN interface, but it makes no difference to the guests' download speed.
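(In case it helps anyone: to make that ifconfig workaround survive reboots, one option - untested here, and the path is based on the OPNsense syshook mechanism, so verify it for your version - is a small executable start script:)
# /usr/local/etc/rc.syshook.d/start/99-offload  (make it executable)
#!/bin/sh
# re-enable rx checksum offload and LRO on the WAN, keep tx offloads and TSO off
/sbin/ifconfig vtnet1 rxcsum -txcsum rxcsum6 -txcsum6 -tso lro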
I believe this is the same issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235607
But what I don't understand is why nobody else is facing this problem; surely I'm not the only one?
Finally, after two weeks of testing just about every tunable possible, I found the solution (on the Proxmox host, in /etc/network/interfaces):
iface enp1s0f0np0 inet manual
    pre-up ethtool --offload enp1s0f0np0 generic-receive-offload off
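The same setting can be applied and verified at runtime (no reboot needed), which is handy for testing before committing it to the interfaces file:
# turn GRO off on the physical NIC and confirm it took effect
ethtool -K enp1s0f0np0 gro off
ethtool -k enp1s0f0np0 | grep generic-receive-offload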
Generic Receive Offload (GRO)
- GRO is a network optimization feature in which the kernel/driver combines multiple incoming packets into larger ones before passing them up the stack.
- This reduces CPU overhead by decreasing the number of packets the kernel processes.
- It is particularly useful in high-throughput environments.
GRO may cause issues in certain scenarios, such as:
1. Poor network performance due to packet reordering or handling issues in virtualized environments.
2. Debugging network traffic where unaltered packets are required (e.g., using `tcpdump` or `Wireshark`).
3. Compatibility issues with some software or specific network setups.
This is an OVH Advance server with a Broadcom BCM57502 NetXtreme-E NIC.
Hope this will save somebody else a lot of wasted time.
Hi,
Just wanted to say thank you for your effort in this research.
This saved me a lot of time.
Interesting. Seems like a NIC-specific problem. OVH now has that in their FAQs: https://help.ovhcloud.com/csm/en-dedicated-servers-proxmox-network-troubleshoot?id=kb_article_view&sysparm_article=KB0066095
This was reported even earlier (German article): https://www.thomas-krenn.com/de/wiki/Broadcom_P2100G_schlechte_Netzwerk_Performance_innerhalb_Docker