Speeds getting slower when I open the traffic dashboard

Started by Poli, December 14, 2021, 07:20:50 PM

Hey everyone,

I am playing around with my OPNsense instance before putting it into production, but I am running into some weird speed issues.

Here is how my infrastructure looks:

WAN -> Proxmox -> OPNsense -> Mikrotik 10 GbE switch -> Proxmox (the same host, just going through OPNsense on a different physical interface).

I am testing my network with iperf between two hosts at 1 GbE.
So it goes like this:

My external host -> WAN -> Proxmox -> OPNsense -> Mikrotik -> Proxmox (iperf on the Proxmox host)
My external host <- WAN <- Proxmox <- OPNsense <- Mikrotik <- Proxmox (iperf on the Proxmox host)
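
For reference, the test is roughly the following (a sketch, not my exact commands; iperf3 and the address 203.0.113.10 are stand-ins):

  # on the Proxmox host behind OPNsense: start the server
  iperf3 -s

  # on the external host, through WAN -> OPNsense -> Mikrotik
  iperf3 -c 203.0.113.10 -t 30       # upload direction
  iperf3 -c 203.0.113.10 -t 30 -R    # download direction (-R reverses the test)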


My iperf results are the following:

[screenshot: iperf results]
Now, whenever I open this dashboard (I have no IPS or any rules activated; it's a really basic setup):

[screenshot: traffic dashboard]
And my test starts to get really slow:

[screenshot: iperf results with the dashboard open]
Okay, at first I thought it was a CPU usage issue, but both host and client sit around 35% while the traffic overview is open; it's a 10-core Intel Xeon E5-2680 v2.

Do you have an idea of what this could be?

Note: I am using the latest OPNsense version, hosted in a virtual machine, running on Intel SFP+ NICs.
Locally, on the same switch, I get 10 Gbit/s between the hosts.


PS: I hope I posted this in the correct place on this forum.



December 14, 2021, 08:09:09 PM #1 Last Edit: December 14, 2021, 08:33:14 PM by bunchofreeds
Hi,

I was getting something similar on my setup.
https://forum.opnsense.org/index.php?topic=24932.msg119582#msg119582

My speedtest throughput would drop by ~50% when opening the Traffic Reporting view.
Testing was from a VM on the same host through OPNsense to WAN, and a physical device on the same LAN through OPNsense to WAN.

I initially saw this using an Intel dual-port 82576EB 1GbE card; I upgraded to an Intel X540 10GbE card but had the same experience.
I was not passing these adapters through, as I wanted to use live migration (which works extremely well with OPNsense on Proxmox).

I am running PPPoE on my WAN and thought it might be related to this, as PPPoE does have an issue with being limited to a single core.

Sorry, I never found an answer for this myself.

Thanks for your reply; sad to see that you didn't find any solution :'(.

I would love to fix this; OPNsense is a good option for me, but it's too big a bug for my usage...

Hi!
Does the behavior change if you add a traffic graph widget to the lobby (no queries for top hosts)?

Thanks for your reply!

First, I found out I am averaging 800 Mbit/s if I only show LAN.

And concerning your suggestion: no, I don't get the same slowdown with just the widget; it's as if nothing happens (the same as when I don't open the traffic monitor).

So it could be tied to the top talkers? But how so? Am I missing something on the hardware side?

It was just a guess: since iperf itself creates load on the interface and CPU, a "top hosts" request that calls the iftop command at the backend (and iftop, in turn, sniffs traffic) can logically affect network performance. I'm not sure if there is a way to avoid iftop's effect on network performance under high channel load.
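
To see the effect in isolation, you could run roughly what the backend does by hand while an iperf test is active; a sketch (the interface name igb1 is just a placeholder):

  # text-mode iftop on the LAN interface; -n/-N skip DNS and port lookups,
  # -t uses text output, -s 10 prints one report after 10 seconds and exits
  iftop -i igb1 -nN -t -s 10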

That's what I am finding weird.

I get no issues if I run 10 GbE between two Proxmox hosts directly (without going through OPNsense).

And this simple 1 GbE test gets slow when I open the traffic monitor.
iftop consumes a lot of CPU, but far from enough to make the machine slower, so I can't see where the bottleneck is at the moment.
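
So far I have only looked at overall CPU; a per-thread view on the OPNsense VM while the test runs might show more, something like:

  # FreeBSD top: -S shows system processes, -H shows kernel threads,
  # -P shows per-CPU usage; look for one core pinned by interrupt or
  # netisr threads while the dashboard is open
  top -SHP

  # interrupt counts per device, to spot an interrupt storm
  vmstat -i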

December 16, 2021, 07:43:17 PM #7 Last Edit: December 16, 2021, 08:44:15 PM by johndchch
Quote from: Poli on December 16, 2021, 06:46:54 PM
iftop consumes a lot of CPU, but far from enough to make the machine slower, so I can't see where the bottleneck is at the moment.

I think the fact that iftop is consuming a lot of CPU is another symptom of your issue. Running virtualised under ESXi here, monitoring both WAN and LAN interfaces with two instances of iftop, I'm seeing 3% CPU load per instance while testing the WAN speed (on a 1-gig fibre connection; zero drop in throughput observed with both instances of iftop running).

What are the underlying physical NICs you're using? Sounds to me like there are issues with either Proxmox or the NICs.

Unfortunately, I am not running OPNsense currently due to this.

I was seeing this with Proxmox using virtio drivers on Intel network adapters, both 1GbE and 10GbE varieties.
Initially I thought it might be related to me using a PPPoE internet connection, which forces only a single CPU to be used. But I was seeing this on the LAN interface also.

If I selected to monitor both WAN and LAN using the traffic graphs, then I would get two instances of iftop running that consumed ~30% CPU each. Choosing either WAN or LAN spawned a single iftop using ~30% CPU.

I put it down to iftop needing to sniff the packets and then place them back inline again. I first saw this on the 1GbE card, so I tried a 10GbE card hoping it had additional headroom. This was unsuccessful and gave the same result with iftop and performance.
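
One thing I never got around to trying, so treat it as an untested idea: Proxmox can enable virtio multiqueue on the guest NIC, which lets the guest spread packet processing over several vCPUs (100 and vmbr0 below are placeholders for the real VM ID and bridge):

  # on the Proxmox host; queues=4 enables 4 virtio queues on net0
  # (note: redefining net0 this way generates a new MAC unless you
  # also specify the existing one, e.g. virtio=AA:BB:CC:DD:EE:FF)
  qm set 100 --net0 virtio,bridge=vmbr0,queues=4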

To test, I only used the Ookla Speedtest CLI (not iperf), from both a virtual device on the same host and virtual network and a physical device. I would get ~900 Mbit down and ~400 Mbit up (the max of my plan) without graphs, then ~400 Mbit down and ~200 Mbit up with graphs on.
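
In case anyone wants to reproduce the numbers, the test was just the Ookla CLI run twice, roughly:

  # run once with the traffic graphs closed, once with them open,
  # and compare the reported down/up rates
  speedtest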

Other users with virtual OPNsense on Proxmox confirmed it worked OK for them. I don't think I confirmed whether they were passing through their adapters, though.

Quote from: bunchofreeds on December 16, 2021, 10:11:51 PM
Other users with virtual OPNsense on Proxmox confirmed it worked OK for them. I don't think I confirmed whether they were passing through their adapters, though.

As I said above, this is NOT an issue on ESXi, whether using vmxnet3 or PCI passthrough. Sounds like a Proxmox issue more than an OPNsense issue.

I would agree, in as much as it is more likely related to Proxmox, virtio, and FreeBSD rather than specifically to OPNsense.

Quote from: bunchofreeds on December 17, 2021, 01:42:58 AM
I would agree, in as much as it is more likely related to Proxmox, virtio, and FreeBSD rather than specifically to OPNsense.

Got a spare box you can spin ESXi up on? It's one of those site-specific things where the only way to properly test is with the opposing hypervisor in the same environment.

Or maybe try the same version of FreeBSD as used by OPNsense as a guest on the same Proxmox host, and see if the issue persists with iperf while iftop is running.
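
Something like this on a plain FreeBSD guest should be enough to tell (a sketch; vtnet0 and 192.168.1.10 are placeholders for the virtio interface and an iperf3 server on the LAN):

  # on the FreeBSD guest
  pkg install iperf3 iftop

  # terminal 1: sniff the interface the way the traffic view does
  iftop -i vtnet0 -nN

  # terminal 2: generate load and compare throughput with and
  # without iftop running
  iperf3 -c 192.168.1.10 -t 30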

I can't do either currently; I might be able to over the holidays.

Quote from: bunchofreeds on December 16, 2021, 10:11:51 PM
Other users with virtual OPNsense on Proxmox confirmed it worked OK for them. I don't think I confirmed whether they were passing through their adapters, though.
Just wanted to confirm that I'm also seeing this effect with OPNsense 21.7.6 running on Proxmox VE 7.0, with Intel I210 LAN and WAN NICs using PCI passthrough. I haven't done any more testing, however.

I am also using PCI passthrough, with an Intel X520-DA2 SFP+ card and FS.COM modules.

Tried with and without PCI passthrough; same issue.. :/