OPNsense Forum » Archive » 21.1 Legacy Series » Slow throughput
Topic: Slow throughput (Read 2763 times)
filip.koci
Newbie, Posts: 1, Karma: 0
Slow throughput
« on: February 16, 2021, 12:48:23 am »
Hi everyone, I have a problem achieving 10Gb/s with OPNsense.
Can someone help me tune the OPNsense settings?
My setup:
hypervisor1 (h1) - proxmox, 2x 10Gb/s sfp+ in bond LACP
GW1 - 2x socket (2x 8 cores), latest opnsense, virtio network with multiqueue 8
VM1 - linux
hypervisor2 (h2) - proxmox, 2x 10Gb/s sfp+ in bond LACP
GW2 - 2x socket (2x 8 cores), latest opnsense, virtio network with multiqueue 8
VM2 - linux
With everything disabled (measured with iperf3):
GW -> VM throughput is ~3Gb/s (on same hypervisor)
VM -> GW throughput is ~1Gb/s (on same hypervisor)
GW1 <-> GW2 throughput is ~1Gb/s (via optic bond)
h1 <-> h2 throughput is ~10Gb/s
VM1 <-> VM2 throughput is ~10Gb/s
When hardware CRC, TSO, and LRO are enabled:
GW1 <-> GW2 throughput is ~10Gb/s
GW <-> VM (on same hypervisor) throughput is ~20Gb/s
But NAT stops working.
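For reference, on FreeBSD these offloads can be toggled per interface with ifconfig (in OPNsense the equivalent checkboxes live under Interfaces > Settings, and a reboot is recommended after changing them). A minimal sketch, assuming the virtio NIC shows up as vtnet0 (an assumption, check your interface name):

```shell
# inspect the current offload flags on the interface
ifconfig vtnet0 | grep options
# enable checksum offload, TSO and LRO at runtime (vtnet0 is assumed)
ifconfig vtnet0 rxcsum txcsum tso4 lro
# disable them again (the OPNsense default) if NAT/forwarding breaks
ifconfig vtnet0 -rxcsum -txcsum -tso4 -lro
```

LRO in particular is known to interfere with packet forwarding, which would match NAT breaking when it is switched on.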
I also tried changing tunables, but without notable performance impact.
I edited:
net.inet.tcp.sendbuf_auto
net.inet.tcp.recvbuf_auto
hw.igb.rx_process_limit
hw.igb.tx_process_limit
legal.intel_igb.license_ack
compat.linuxkpi.mlx4_enable_sys_tune
net.link.ifqmaxlen
net.inet.tcp.soreceive_stream
net.inet.tcp.hostcache.cachelimit
compat.linuxkpi.mlx4_inline_thold
compat.linuxkpi.mlx4_log_num_mgm_entry_size
compat.linuxkpi.mlx4_high_rate_steer
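These are FreeBSD sysctl tunables; runtime ones can be probed and set with sysctl(8), and made persistent in OPNsense under System > Settings > Tunables. Note that the hw.igb.* entries only affect Intel igb NICs and will have no effect on virtio interfaces. A sketch for two of the buffer-related ones above (values are illustrative, not recommendations):

```shell
# read the current values
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
# enable TCP send/receive buffer auto-tuning (1 = on)
sysctl net.inet.tcp.sendbuf_auto=1
sysctl net.inet.tcp.recvbuf_auto=1
# loader-time tunables (e.g. net.link.ifqmaxlen) go in /boot/loader.conf
# and take effect after a reboot:
#   net.link.ifqmaxlen="2048"
```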
Voodoo
Newbie, Posts: 49, Karma: 4
Re: Slow throughput
« Reply #1 on: February 17, 2021, 01:22:58 am »
BSD virtio support is just bad.
If you need more than 1Gb/s, PCIe-passthrough your NIC.
spi39492
Newbie, Posts: 24, Karma: 0
Re: Slow throughput
« Reply #2 on: February 17, 2021, 07:18:30 pm »
Check out
https://forum.opnsense.org/index.php?topic=18754.msg85807#msg85807