Slow throughput

Started by filip.koci, February 16, 2021, 12:48:23 AM

Hi everyone, I have a problem achieving 10Gb/s with OPNsense.

Can someone help me tune the OPNsense settings?

My setup:
hypervisor1 (h1) - Proxmox, 2x 10Gb/s SFP+ in LACP bond
    GW1 - 2 sockets (2x 8 cores), latest OPNsense, virtio network with multiqueue 8
    VM1 - Linux

hypervisor2 (h2) - Proxmox, 2x 10Gb/s SFP+ in LACP bond
    GW2 - 2 sockets (2x 8 cores), latest OPNsense, virtio network with multiqueue 8
    VM2 - Linux
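
For reference, a virtio NIC with multiqueue gets attached on Proxmox roughly like this (VM ID 100 and bridge vmbr0 are just placeholders, not the actual values from this setup):

    qm set 100 --net0 virtio,bridge=vmbr0,queues=8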

With everything at default settings (measured with iperf3):
GW -> VM throughput is ~3Gb/s (on same hypervisor)
VM -> GW throughput is ~1Gb/s (on same hypervisor)

GW1 <-> GW2 throughput is ~1Gb/s (via the optical bond)

h1 <-> h2  throughput is ~10Gb/s
VM1 <-> VM2  throughput is ~10Gb/s
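
The numbers above are from plain iperf3 runs, something like this (the address and the -P 4 parallel-stream count are just examples):

    # on the receiver
    iperf3 -s
    # on the sender, e.g. GW1 -> GW2
    iperf3 -c 10.0.0.2 -P 4 -t 30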

When Hardware CRC, TSO, and LRO are enabled:
GW1 <-> GW2 throughput is ~10Gb/s
GW <-> VM (on same hypervisor) throughput is ~20Gb/s
But NAT stops working.
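
(The NAT breakage is a known FreeBSD caveat: LRO coalesces incoming packets, and coalesced packets cannot be forwarded correctly, so LRO should stay off on a box that routes or NATs. For testing, the offloads can also be toggled per interface from the shell; vtnet0 is an assumed interface name here:)

    # show current offload flags
    ifconfig vtnet0 | grep options
    # disable TSO/LRO and checksum offload on the fly
    ifconfig vtnet0 -tso -lro -txcsum -rxcsum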

Also, I tried changing some tunables, but without any notable performance impact.
I tried editing:
net.inet.tcp.sendbuf_auto
net.inet.tcp.recvbuf_auto
hw.igb.rx_process_limit
hw.igb.tx_process_limit
legal.intel_igb.license_ack
compat.linuxkpi.mlx4_enable_sys_tune
net.link.ifqmaxlen
net.inet.tcp.soreceive_stream
net.inet.tcp.hostcache.cachelimit
compat.linuxkpi.mlx4_inline_thold
compat.linuxkpi.mlx4_log_num_mgm_entry_size
compat.linuxkpi.mlx4_high_rate_steer
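
These are set under System > Settings > Tunables in the OPNsense GUI; loader-time tunables can equivalently go into /boot/loader.conf.local and take effect after a reboot. A sketch with example values (the values are placeholders, not recommendations):

    # /boot/loader.conf.local
    net.inet.tcp.soreceive_stream="1"
    net.link.ifqmaxlen="2048"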

BSD Virtio support is just bad.

If you need more than 1Gb, PCIe-passthrough your NIC.
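
A minimal sketch of what that looks like on Proxmox (the PCI address and VM ID are placeholders; IOMMU/VT-d has to be enabled in the BIOS and on the kernel command line first):

    # find the NIC's PCI address on the Proxmox host
    lspci | grep -i ethernet
    # pass the device through to VM 100
    qm set 100 --hostpci0 0000:03:00.0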