Topics - Nomsplease

23.7 Legacy Series / OPNSense on Proxmox, 10Gb network awful throughput
« on: October 24, 2023, 10:02:56 pm »
Bringing this thread over from Reddit (I am the OP, just under a different user name) in hopes of getting some more insight into what could be wrong here.

Reddit thread is here: https://www.reddit.com/r/opnsense/comments/17fjbbw/opnsense_on_proxmox_10gb_network_woes/

I have been having quite a lot of trouble getting a virtualized OPNsense setup to pass 10Gb traffic anywhere near line speed. This is a fresh setup, so there are no existing VLANs. I have been at this for the past couple of days without success, and I am currently running on a mini PC while I keep looking for a solution.

First, the hardware the Proxmox host is running on:

Board: Supermicro X11SSH-F
CPU: E3-1275 v5
RAM: 64GB DDR4 UDIMM
Storage: SSDs in ZFS mirror
NIC: X520-DA2

The VM has been set up with both machine types, i440fx and Q35, with no change from either. It has 4 cores and 8 GB of RAM. I have tried both passing the NIC through directly and bridging it through the host.

I have run multiqueue on the host with both 4 and 8 queues; neither made any difference. I have tried numerous tunables to reach line speed, also without success. I have even gone as far as installing OPNsense bare metal on this host, and that has not worked either.
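For reference, here is roughly how I set multiqueue on the Proxmox side; the VM ID (100) and bridge name (vmbr0) are placeholders for my actual setup:

```shell
# Enable 4 virtio queues on the VM's net0 device
# (VM ID 100 and bridge vmbr0 are placeholders)
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
```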


Tunables I have tried:
Code:
hw.ibrs_disable=1
net.isr.maxthreads=-1
net.isr.bindthreads=1
net.isr.dispatch=deferred
net.inet.rss.enabled=1
net.inet.rss.bits=6
kern.ipc.maxsockbuf=614400000
net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.recvspace=65536
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.sendspace=65536
net.inet.tcp.soreceive_stream=1
net.pf.source_nodes_hashsize=1048576
net.inet.tcp.mssdflt=1240
net.inet.tcp.abc_l_var=52
net.inet.tcp.minmss=536
kern.random.fortuna.minpoolsize=128
net.isr.defaultqlimit=2048
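As a sanity check on the buffer sizes above: the TCP window needed to fill a 10Gb/s pipe follows from the bandwidth-delay product. Assuming a ~1ms LAN RTT (my guess for this network, not a measured value):

```shell
# Bandwidth-delay product in bytes: rate_bps * rtt_s / 8
# 10 Gb/s at 1 ms RTT:
echo $((10000000000 / 8 / 1000))
# 1250000 bytes (~1.25 MB), comfortably under the 4 MB recvbuf_max above
```

So the buffer maximums should not be the bottleneck on a LAN, which is part of why I suspect something else.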

VM Setup: (screenshot attachment, not shown here)
The closest I have gotten to line speed was with all the tunables above applied and all vCPUs configured as sockets instead of cores. That run reached 9Gb/s, but only held it for about 20 seconds before falling off hard.

An iperf run with OPNsense installed bare metal on the same host hardware only reached 1.1Gb/s.
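For anyone wanting to reproduce the numbers, the tests look roughly like this (I'm assuming iperf3 syntax here; 10.0.0.1 is a placeholder for the server's address):

```shell
# On the far end of the 10G link:
iperf3 -s

# On the OPNsense/test side: 4 parallel streams, 30 s run, report every 5 s
iperf3 -c 10.0.0.1 -P 4 -t 30 -i 5
```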


I even went as far as moving it to another host where my TrueNAS VM lives. This tells me fairly clearly that something in OPNsense is either misconfigured or simply not working correctly with 10G hardware.

Does anyone have ideas that could point me in the right direction? I know plenty of people run OPNsense in VMs on Proxmox without issue; I think where I'm stuck is specifically the 10G side of things.

OPNsense is an OSS project © Deciso B.V. 2015 - 2024 All rights reserved