VM benchmark speed is marginally slow

Started by Sunshine, January 04, 2025, 09:18:42 PM

January 04, 2025, 09:18:42 PM Last Edit: January 04, 2025, 11:04:24 PM by Sunshine
I've been through most of the speed threads here and think I've tried everything mentioned, but having switched to OPNsense from Untangle, I'm getting marginally slower downloads on otherwise identical VMs.
My hardware is old, but relatively common, so I don't think it has any major quirks to work around. I'm running Proxmox on a Qotom PC with 4 Intel 1G NICs.
The VMs are identical, with virtio bridges and the i5 CPU passed as 'host'. It's been pretty solid for about 3-4 years on Untangle, but the licensing has changed and I'm shopping for alternatives now.

I'm running speed tests on a LAN machine. My ISP service is 1G down / 40M up.
I believe all hardware offload is disabled (quick shell check below). My OPNsense install is fresh, with nearly default settings.
During tests, the CPU graph on the OPNsense dashboard never peaks above 50%, and it mostly runs under 10%. Proxmox reports a bit higher, but nothing that looks concerning.
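
A quick way to confirm the offload state from the OPNsense shell (vtnet0 as the LAN virtio NIC is an assumption, adjust to your device):

    # offloads show up in the "options=" flags of the interface
    ifconfig vtnet0 | grep options
    # TXCSUM, RXCSUM, TSO4/TSO6 and LRO should all be absent if hardware
    # offloading is fully disabled under Interfaces > Settings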

With untangle I get:
  • directly on VM: untested
  • LAN client: 850-900 down

With OPNsense:
  • directly on VM: 920 down
  • LAN client /OPNsense no add ins: 630-690 down
  • LAN client w/ wireguard enabled: 530 down
  • LAN client w/ wireguard and ips: 430 down
  • LAN client w/ zenarmor: 380 down --edit, just added

It seems the LAN interface is the bottleneck, but I'm just a hobbyist so I don't want to jump to conclusions. So far the only change I've tried is setting "generic-receive-offload off" in Proxmox (ethtool sketch below), but I'm not seeing much change.
I can live with some performance hit on the bridges, but I plan to add more services like Zenarmor and am concerned that things will get more sluggish.
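
For what it's worth, the Proxmox-side offload toggle is a plain ethtool setting; a minimal sketch, assuming vmbr0 is the bridge and enp1s0 the physical NIC behind it (names will differ on your box):

    # on the Proxmox host: show, then disable GRO
    ethtool -k vmbr0 | grep generic-receive-offload
    ethtool -K vmbr0 gro off
    ethtool -K enp1s0 gro off
    # not persistent across reboots; add post-up lines to
    # /etc/network/interfaces if it turns out to help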

Any suggestions for things I've missed?

A bit more tinkering. I ended up passing through 2 of the interfaces, which was a bit of an adventure, but performance ended up the same.
Switching back to Untangle, I noticed it had slowed down and traced it to the interfaces falling back to half duplex. I corrected that and Untangle was back up to speed. That got me looking at the OPNsense interface settings.

It looks like the LAN is properly autodetecting 1G full. Changing the setting to 1000baseT-full makes no difference.
On the WAN side I see there is no place to set it in the GUI. OPNsense has it detected as:
  • Media   10Gbase-T <full-duplex>
  • Media (Raw)   Ethernet autoselect (10Gbase-T <full-duplex>)

But it's only a 1G Intel interface. Any idea if this could be causing trouble, or is it okay that it's over-spec'd?

Quote from: Sunshine
With OPNsense:

    directly on VM: 920 down
    LAN client /OPNsense no add ins: 630-690 down
    LAN client w/ wireguard enabled: 530 down
    LAN client w/ wireguard and ips: 430 down
    LAN client w/ zenarmor: 380 down --edit, just added

So you tested between a host and OPNsense directly? If yes, then:

A. LAN client / OPNsense no add-ins: 630-690 down -
iperf should be run through OPNsense, not towards it, meaning host to host.
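
In other words, something like this, with one endpoint on each side of the firewall so the traffic actually gets routed and filtered by OPNsense (the IPs are made up):

    # box on the WAN side of OPNsense
    iperf3 -s
    # box on the LAN side, pushing traffic through the firewall
    iperf3 -c 192.168.100.10 -P 4 -t 30
    # -P 4 = parallel streams, -t 30 = 30 seconds; add -R to measure
    # the download direction from the client's point of view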

B. LAN client w/ WireGuard enabled: 530 down -
WireGuard is CPU heavy and can limit you to such speeds, but since the base moved to FreeBSD 14.1 WG should be improved; there is a topic about this, try to look it up. Did you also configure the normalization as in the guide?
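
If you haven't, it is the MSS clamping rule from the guide, roughly like this (1380 assumes the default WireGuard MTU of 1420, adjust for yours):

    # Firewall > Settings > Normalization, add a rule:
    #   Interface: WireGuard (your wg instance)
    #   Max MSS:   1380    (tunnel MTU 1420 minus 40 bytes IPv4+TCP header)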

C. LAN client w/ WireGuard and IPS: 430 down
D. LAN client w/ Zenarmor: 380 down --edit, just added
I am not sure about Suricata, but ZA is limited to using only 1 core as it does not have multi-thread support, so it depends heavily on the performance and clock speed of a single core. Running WG + ZA + whatever else you have on OPN can result in lower throughput.
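
An easy way to see whether a single core is the ceiling is to watch per-CPU load during a test instead of the dashboard average:

    # on the OPNsense shell while a speed test runs
    top -P
    # if one core is pinned near 100% while the average stays low,
    # single-thread speed is the limit, not total CPU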

Did you try to enable RSS? I have seen improvement in performance when using ZA.
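
RSS is switched on via tunables (System > Settings > Tunables) plus a reboot; the usual set looks like this (rss.bits of 2 assumes 4 cores/queues, use 3 for 8):

    net.isr.bindthreads = 1
    net.isr.maxthreads = -1
    net.inet.rss.enabled = 1
    net.inet.rss.bits = 2      # 2^2 = 4 queues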

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

> But it's only a 1G Intel interface. Any idea if this could be causing trouble, or is it okay that it's over-spec'd?
That is just what happens when the interface is virtualised, and not a problem in itself. It is the speed of virtio between the VM and the host only; I believe (I haven't seen the code) it is simply coded that way.

Quote from: cookiemonster on January 06, 2025, 11:12:25 AM
> But it's only a 1G Intel interface. Any idea if this could be causing trouble, or is it okay that it's over-spec'd?
That is just what happens when the interface is virtualised, and not a problem in itself. It is the speed of virtio between the VM and the host only; I believe (I haven't seen the code) it is simply coded that way.

Yep, virtio interfaces are negotiated at 10G, because that is the speed at which the Proxmox bridge communicates with its VMs. This is entirely expected. In the past it showed 100G :D, but it was later fixed to 10G.

https://www.reddit.com/r/Proxmox/comments/lhe3vv/proxmox_show_10gb_setting_for_1gb_nic_adapter/

Regards,
S.