Hi,
I am getting very poor 10 Gbit routing performance in my LAN.
On an i9-9900K with 128 GB RAM running Proxmox, with an Intel X710 passed to OPNsense via SR-IOV, I get about 150 MB/s (1.5 Gbit/s).
On a dedicated machine, a Supermicro with a Xeon D-1520, 128 GB RAM and an integrated Intel X540, I get about 103 MB/s (1 Gbit/s).
I don't run any traffic inspection.
The CPU is nearly idle on both machines during transfers, and the NIC's PCIe slot provides full bandwidth on both hosts.
I have tried enabling and disabling NIC hardware offloading on both hosts; it makes no difference (example commands at the end of this post).
I also tried VyOS, but the performance was about the same.
When I set the MTU to 9000 on both the firewall and the client, I get about 190 MB/s.
If I transfer a big file via SMB multichannel from my NAS within the same subnet, I max out the 10 Gbit interface. (The NAS is a separate host from the firewall.)
I understand that I might not get the full 10 Gbit routed, but I think I should at least see better than 1.5 Gbit/s.
Any ideas what I can try?
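For reference, the offloading and MTU changes were made roughly like this (interface names ix0 and eth0 are illustrative only):
# on the OPNsense/FreeBSD side
ifconfig ix0 -txcsum -rxcsum -tso -lro   # disable checksum/TSO/LRO offloading
ifconfig ix0 mtu 9000                    # jumbo frames for the MTU test
# on the Linux client
ip link set dev eth0 mtu 9000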
This highly depends on how you measure: when you refer to "routing performance", I assume you measured between two different machines on separate networks, not from the OPNsense box itself, because IP throughput from OPNsense itself can be much slower than routed traffic.
The measurement tool also matters: a single stream does not give you the full performance figure. With iperf, for example, you need -P 8 to run parallel streams.
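Something along these lines, with a placeholder address for the target host:
iperf3 -s                          # on a host in the destination subnet
iperf3 -c 192.0.2.10 -P 8 -t 30    # on a host in the source subnet: 8 parallel streams, 30 seconds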
I would also expect a virtualized solution to be slower than bare metal on the same CPU, and your two setups do not even use the same CPU.
On bare metal you will get the full 10G, on ESXi around 7G; the combination of Proxmox and BSD does a poor job from what I have read so far.
I just tried it on my Hyper-V host with a 5 Gbit network. My routed SMB file transfer maxed out at around 3.5-3.7 Gbit/s.
So the ~7 Gbit/s limit of ESXi would seem to apply to Hyper-V as well.
EDIT: I made sure both source and target had SSDs. Maybe that could be a possible bottleneck?
I am running a DEC850 and see line-rate performance on 10G.
https://www.speedtest.net/result/c/59052567-48a8-470c-8dd0-9ead1e3f4034
This is with RSS and all hardware offloading enabled.
Can you please share which SFP/DAC modules you are using for both WAN and LAN, and which tunables you have configured?
My DEC3840 simply isn't getting above 1G...
Here you go:
Relevant Tunables:
dev.ax.0.iflib.override_nrxds = 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048
dev.ax.0.iflib.override_ntxds = 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048
dev.ax.0.rss_enabled = 1
dev.ax.1.iflib.override_nrxds = 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048
dev.ax.1.iflib.override_ntxds = 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048
dev.ax.1.rss_enabled = 1
hw.ibrs_disable = 1
net.isr.bindthreads = 1
net.isr.maxthreads = -1
vm.pmap.pti = 0
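You can check after a reboot whether these took effect from a shell on the box (the dev.ax.* nodes only exist on the AMD axgbe ports of the DEC units):
sysctl dev.ax.0.rss_enabled dev.ax.1.rss_enabled
sysctl net.isr.bindthreads net.isr.maxthreads
netstat -Q   # shows the netisr workstreams, CPU binding and dispatch policy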
I tested different SFP+ modules and it made no difference. Currently running:
LAN: 10Gtek SFP+ DAC twinax cable
WAN: Flexoptix BIDI module from my ISP
Are you getting 10G performance when you test from PC to PC with no router in the middle?
Also, do you have flow control disabled on your switches?
OPNsense virtualized?
Post the VM settings.
Quote from: guenti_r on August 04, 2023, 11:08:56 AM
OPNsense virtualized?
Post the VM settings.
# cat /etc/pve/qemu-server/100.conf
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=virtio0;ide2
cores: 16
cpu: host
efidisk0: encrypted:100/vm-100-disk-0.qcow2,efitype=4m,size=528K
hostpci0: 0000:07:02.2,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.0.2,ctime=1690644423
name: opnsense
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=130669cb-f0ca-46bb-9a6d-9a4ce2844dba
sockets: 1
virtio0: encrypted:100/vm-100-disk-1.qcow2,iothread=1,size=32G
vmgenid: 4130066d-8174-46ff-aa74-15ea23a91901
hostpci0 is an Intel X710 SR-IOV interface.
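For completeness, the VF was set up on the Proxmox host roughly as follows (the PF name enp7s0f0 and the VF count are illustrative; the VF address is the one used in hostpci0 above):
echo 2 > /sys/class/net/enp7s0f0/device/sriov_numvfs   # create VFs on the X710 PF
lspci | grep -i "virtual function"                     # find the VF PCI addresses
qm set 100 --hostpci0 0000:07:02.2,pcie=1              # pass the VF through to VM 100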
Quote from: meyergru on August 03, 2023, 09:40:44 AM
This highly depends on how you measure: when you refer to "routing performance", I assume you measured between two different machines on separate networks, [...]
Yes, I mean between two different hosts on different subnets, and no, I don't initiate the transfer from the firewall host or the hypervisor.
Since I can max out 10 Gbit without going through the firewall, I don't think it's a measurement issue.
EDIT:
After looking at my own VM config, I changed the cores from 16 to 8 (to get rid of the hyperthreading cores) and increased the RAM to 8 GB.
After that, I got about 500 MB/s (~4 Gbit/s), so I guess that is in line with what one can expect in a virtualized environment.
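For reference, that change can also be made from the Proxmox CLI, something like:
qm set 100 --cores 8 --memory 8192
qm config 100   # verify the new settings
The VM has to be restarted for the new core count to take effect.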