OPNsense Forum

English Forums => Hardware and Performance => Topic started by: sparticle on September 12, 2021, 01:30:27 am

Title: Poor network performance running opnsense on VirtualBox
Post by: sparticle on September 12, 2021, 01:30:27 am
I have struggled for a while troubleshooting what I thought were internet settings. In the end it turned out to be VirtualBox network performance.

I tried just about every combination of virtual NIC in VBox and its myriad of settings. Throughput was variable at best: one second 95% of the available ISP bandwidth, the next 20%, with no idea why.

I am not a networking expert.

The OPNsense setup is a VirtualBox VM running on a Dell R710 with 24 cores and 128 GB of memory, plus 1 x 4-port Intel gigabit NIC.

Two of the NIC ports are dedicated to the VM using bridged networking. I tried virtio but could not get it to run.

The fastest VBox emulated NIC was the Intel PRO/1000 MT Server; the rest were variously slower. The VM has 16 GB of memory and 4 cores.
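
For anyone repeating this, the emulated NIC type is switched from the host with VBoxManage while the VM is powered off. A rough sketch of what I mean; the VM name "opnsense" and the adapter numbers are placeholders for my own setup:

Code: [Select]
# VM must be powered off; "opnsense" is a placeholder VM name
VBoxManage modifyvm "opnsense" --nictype1 82545EM   # Intel PRO/1000 MT Server
VBoxManage modifyvm "opnsense" --nictype2 82545EM
# or switch an adapter to the paravirtualized NIC instead:
VBoxManage modifyvm "opnsense" --nictype1 virtio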

In the end, a ridiculously modest old Dell OptiPlex with an Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz (2 cores) and a PCIe x1 HP 2-port gigabit NIC massively outperformed the VM: full maximum ISP bandwidth all of the time. I use a dedicated RPi4 to monitor the internet connection 24x7, running this awesome little build: https://github.com/geerlingguy/internet-monitoring

The graph used to look like a bad picture of mountains, with massive variability in the bandwidth my network was seeing. It is now essentially a solid block running at the max profile BT provide us (40 Mb down, 10 Mb up), with a tiny amount of variability of a few tens of kb/s.

The reason I moved to a VM was to cut down on the number of machines I was running.

If someone knows the trick to getting full, or nearly full, performance out of a VBox virtual NIC then please post it, as I would love to virtualise OPNsense again.

iperf between the LAN side of OPNsense and my desktop was c. 80 Mbit/s using the virtual NIC. Using the physical Dell OptiPlex it's essentially gigabit, running at 980-ish Mbit/s. So sometimes much worse than a tenth of the potential.
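
For reference, the test is nothing fancier than iperf3 between my desktop and a host on the LAN side of OPNsense, roughly like this (the address is a placeholder for my desktop):

Code: [Select]
# on the desktop: run the server
iperf3 -s
# on a LAN host behind OPNsense: run the client against the desktop
iperf3 -c 192.168.1.10 -t 30        # 30-second TCP test
iperf3 -c 192.168.1.10 -t 30 -R     # same test, reverse direction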

Any advice appreciated.

Cheers
Spart :(
Title: Re: Poor network performance running opnsense on VirtualBox
Post by: sorano on September 12, 2021, 07:41:57 am
May I suggest changing hypervisor to ESXi or Hyper-V?
Title: Re: Poor network performance running opnsense on VirtualBox
Post by: sparticle on September 12, 2021, 09:16:07 pm
Non-trivial piece of work, as I have other VMs running and have been for many years without issues. But thanks for the suggestion.

I am now pretty certain this has nothing to do with VBox and everything to do with the OPNsense/BSD NIC drivers.

More testing. I tried an iperf test from other VMs on the same host server that runs my OPNsense router to other LAN clients, using the same PRO/1000 MT virtual NIC. I get essentially 1 Gbit/s across the LAN from any other VM I am running. Most are Ubuntu servers; I even fired up some older Ubuntu 16.04 servers and they all run across the network at a gigabit.

The VM NIC interface provided by VBox to the guest is based on the Intel 82545EM. The FreeBSD em driver has been around since FreeBSD 4.x according to the documentation, and Intel is recommended as the NIC of choice.

I can see it is loading the em driver at boot and assigning the two NICs as em0 and em1, but the throughput is terrible.
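
For what it's worth, this is just how I checked what the kernel attached at boot (output will obviously differ per machine):

Code: [Select]
# driver attach messages from the boot log
dmesg | grep -E '^em[0-9]'
# PCI devices the network drivers bound to
pciconf -lv | grep -B3 network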

I know it's not that scientific a test, but it is real world. I am getting a tiny fraction of the throughput into/out of the LAN interface in OPNsense.

Is BSD really that sh*t at networking? We are talking about a massive reduction in throughput via OPNsense. Do I need a different driver in OPNsense? All this despite the BSD documentation stating that the em driver supports the following:

The driver supports Transmit/Receive checksum offload and Jumbo Frames on all but 82542-based adapters.

Furthermore it supports TCP segmentation offload (TSO) on all adapters but those based on the 82543, 82544 and 82547 controller chips. The identification LEDs of the adapters supported by the em driver can be controlled via the led(4) API for localization purposes. For further hardware information, see the README included with the driver.
 
If I set the VBox VM to use virtio NICs, then I need the guest (OPNsense) to configure itself correctly on boot to use the virtio drivers.

UPDATE:

I set the LAN NIC to virtio and left the WAN NIC alone.

The drivers are loaded correctly, it seems.

Code: [Select]
pciconf -lv | grep -A1 -B3 network
    subclass   = VGA
virtio_pci0@pci0:0:3:0: class=0x020000 card=0x00011af4 chip=0x10001af4 rev=0x00 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio network device'
    class      = network
    subclass   = ethernet
--
em0@pci0:0:8:0: class=0x020000 card=0x075015ad chip=0x100f8086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82545EM Gigabit Ethernet Controller (Copper)'
    class      = network
    subclass   = ethernet

Throughput is a little better, up from around 10-20 Mbit/s to 80-100 Mbit/s, but still roughly 10x slower than gigabit.

Interesting flags on the vtnet0 interface.

Code: [Select]
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=c00b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,LINKSTATE>

which seems to suggest it supports TSO etc. Currently these offloads are disabled in the config.
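
For what it's worth, the offload flags can be toggled per interface from a shell to see whether they make any difference. A sketch only; OPNsense manages these from its interface settings, so anything done this way is not persistent:

Code: [Select]
# enable checksum offload and TSO on the virtio interface (not persistent)
ifconfig vtnet0 txcsum rxcsum tso
# and turn them back off again
ifconfig vtnet0 -txcsum -rxcsum -tso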


Any help appreciated.

Cheers
Spart
Title: Re: Poor network performance running opnsense on VirtualBox
Post by: sparticle on September 16, 2021, 03:21:17 pm
Talking to myself I know, but useful for me to refer back to and might help others not waste their time trying to optimise this kind of setup.

After much testing and many hours' work, installing and re-installing OPNsense on a VirtualBox VM, I have concluded it is a waste of time, as the BSD overhead makes network performance a joke.

I threw resources at it: 8 Xeon CPUs and 16 GB of memory in my Dell R710 test machine. I knew it would likely make no difference, and I was right. I think the issue boils down to crap driver emulation in BSD.

If I build, for instance, an IPCop firewall or an OpenWrt router on the same VM, I get gigabit speeds from the vNICs. Likewise with an Ubuntu server, etc. I have tried all combos of the virtual NICs VirtualBox provides, the most performant being VirtIO, no surprise there.


We are not talking about a small performance degradation; we are talking about 50 Mbit/s throughput with massive variability, seen as low as 6 Mbit/s.

Is this one of those things like 'the Emperor has no clothes': everyone knows about it but no one wants to talk about how abysmal it is?

VirtualBox has worked fine for years for us. Performance is fine for everything we have ever done apart from this!

Am I missing something?

Cheers
Spart
Title: Re: Poor network performance running opnsense on VirtualBox
Post by: testo_cz on September 28, 2021, 11:21:54 pm
Hi

I did some tests because I like to use Virtualbox too.

I used a PC with a Core i7 (Ivy Bridge, >3.5 GHz) and a dedicated dual-port Intel 82576 NIC. I have a testbed with two physical machines which normally saturates 1 GbE full-duplex through my physical OPNsense box.
The PC under this test runs Debian 10 and either VirtualBox 6.1 or VMware Player 16.
OPNsense 21.7 in a mostly default setup.
iperf3 testing from outside of OPNsense with two or ten TCP streams, straight and -R reverse tests. Deviation of the measurements was not so large, so averages only:
Code: [Select]
Virtualbox + vtnet: 80Mbits/s
Virtualbox + emulated 82545: 225 Mbits/s

VMplayer +  emulated 82545: 380 Mbits/s
VMplayer +  vmx: 530 Mbits/s
Then I disabled Hyper-Threading and VirtualBox performed better:
Code: [Select]
no HT , Virtualbox + vtnet:  100 (130 -R ) Mbits/s
no HT , Virtualbox + emulated 82545: 380 (430 -R ) Mbits/s
No HT with VMplayer didn't make much difference.

Clearly, a paravirtualized NIC should be better than an emulated one.

I paid attention to all of the TSO and LRO settings down the WAN/LAN path, and also tunables, but there was no significant effect.
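
By tunables I mean things like the vtnet(4) loader tunables; a sketch of the sort of thing, not an exact list of what I went through, set in /boot/loader.conf.local and only applied at the next boot:

Code: [Select]
# /boot/loader.conf.local -- vtnet(4) loader tunables (take effect after reboot)
hw.vtnet.csum_disable=1   # disable checksum offload
hw.vtnet.tso_disable=1    # disable TCP segmentation offload
hw.vtnet.lro_disable=1    # disable large receive offload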

Hard to say what to do. Maybe this is just not the best time for the VirtualBox + OPNsense combination. And surely, people post Proxmox or ESXi results here which are good at least, if not great.

T.