OPNsense Forum

English Forums => Intrusion Detection and Prevention => Topic started by: t0mc@ on September 09, 2021, 06:52:22 PM

Title: Performance tuning when running as KVM
Post by: t0mc@ on September 09, 2021, 06:52:22 PM
Hi everybody,

A few days ago I upgraded my ISP speed from 500 MBit/s to 1000 MBit/s. I noticed that when Suricata is enabled I get a maximum of ~600 MBit/s, whereas with Suricata disabled I get up to 980 MBit/s.

OPNsense is running as a KVM virtual machine on a Proxmox VE server (a Debian-based distro for running VMs).

Hardware is an HP Z420 workstation:
Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 16 Cores
64GB RAM

The OPNsense VM is configured with 6 cores in host CPU mode and 6 GB RAM, and it uses 3 VirtIO NICs, each bridged to its own physical NIC (1 onboard, 2 PCIe); see attached VMConfig.png.

During speed tests the OPNsense dashboard shows that CPU/RAM isn't the bottleneck, so I assume this has something to do with the network layer (see attached SpeedTest.png).
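
To rule out the speed test server itself, raw routed throughput can also be measured with iperf3 between a host on each side of the firewall. A minimal sketch, assuming iperf3 is installed on both hosts and 192.168.1.10 is a placeholder address:

# on a host behind the firewall:
iperf3 -s
# on a host on the other side, 4 parallel streams for 30 seconds:
iperf3 -c 192.168.1.10 -t 30 -P 4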

I just noticed this pinned thread: https://forum.opnsense.org/index.php?topic=6590.0

But many of the tunables mentioned there deal with physical NICs such as Intel ones (flow control, for example) and aren't available for VirtIO NICs using the vtnet driver:


# sysctl -a | grep dev.vtnet.0
dev.vtnet.0.txq0.rescheduled: 0
dev.vtnet.0.txq0.tso: 0
dev.vtnet.0.txq0.csum: 0
dev.vtnet.0.txq0.omcasts: 15
dev.vtnet.0.txq0.obytes: 516930
dev.vtnet.0.txq0.opackets: 4009
dev.vtnet.0.rxq0.rescheduled: 0
dev.vtnet.0.rxq0.csum_failed: 0
dev.vtnet.0.rxq0.csum: 5892
dev.vtnet.0.rxq0.ierrors: 0
dev.vtnet.0.rxq0.iqdrops: 0
dev.vtnet.0.rxq0.ibytes: 6938441
dev.vtnet.0.rxq0.ipackets: 7767
dev.vtnet.0.tx_task_rescheduled: 0
dev.vtnet.0.tx_tso_offloaded: 0
dev.vtnet.0.tx_csum_offloaded: 0
dev.vtnet.0.tx_defrag_failed: 0
dev.vtnet.0.tx_defragged: 0
dev.vtnet.0.tx_tso_not_tcp: 0
dev.vtnet.0.tx_tso_bad_ethtype: 0
dev.vtnet.0.tx_csum_bad_ethtype: 0
dev.vtnet.0.rx_task_rescheduled: 0
dev.vtnet.0.rx_csum_offloaded: 0
dev.vtnet.0.rx_csum_failed: 0
dev.vtnet.0.rx_csum_bad_proto: 0
dev.vtnet.0.rx_csum_bad_offset: 0
dev.vtnet.0.rx_csum_bad_ipproto: 0
dev.vtnet.0.rx_csum_bad_ethtype: 0
dev.vtnet.0.rx_mergeable_failed: 0
dev.vtnet.0.rx_enq_replacement_failed: 0
dev.vtnet.0.rx_frame_too_large: 0
dev.vtnet.0.mbuf_alloc_failed: 0
dev.vtnet.0.act_vq_pairs: 1
dev.vtnet.0.requested_vq_pairs: 0
dev.vtnet.0.max_vq_pairs: 1
dev.vtnet.0.%parent: virtio_pci2
dev.vtnet.0.%pnpinfo:
dev.vtnet.0.%location:
dev.vtnet.0.%driver: vtnet
dev.vtnet.0.%desc: VirtIO Networking Adapter



# sysctl -a | grep hw.vtnet
hw.vtnet.rx_process_limit: 512
hw.vtnet.mq_max_pairs: 8
hw.vtnet.mq_disable: 0
hw.vtnet.lro_disable: 1
hw.vtnet.tso_disable: 0
hw.vtnet.csum_disable: 0
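
One thing that stands out in the output above: dev.vtnet.0.act_vq_pairs and dev.vtnet.0.max_vq_pairs are both 1, so the virtual NIC only offers a single queue pair even though hw.vtnet.mq_max_pairs allows 8. Multiqueue can be enabled per NIC on the Proxmox side, which vtnet should then pick up after a VM restart. A sketch, where VM ID 100 and the MAC address are placeholders (keep the MAC from the existing VM config so the guest interface assignment doesn't change):

# on the Proxmox host, enable 6 queues on the VM's first NIC:
qm set 100 -net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=6

# verify in the guest after a restart:
sysctl dev.vtnet.0.act_vq_pairs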



I experimented a bit with other tunables from that thread, but so far I haven't found anything that improves performance.

Any hints on how to tune Suricata on OPNsense running as a KVM guest?

Thx in advance!
T0mc@
Title: Re: Performance tuning when running as KVM
Post by: simiki on September 10, 2021, 04:07:09 PM
FreeBSD performance on Proxmox (KVM virtualization) is (or was; I'm not on KVM anymore) a known old issue. It seems to be related to the VirtIO driver; some people switched it to e1000 with success.

https://forum.proxmox.com/threads/poor-virtio-network-performance-on-freebsd-guests.26289/
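
If you want to test that, the model swap is a one-liner on the Proxmox host. A sketch, where VM ID 100, the MAC address and bridge vmbr0 are placeholders (repeat for net1/net2):

# switch the VM's first NIC from VirtIO to e1000, keeping the existing MAC:
qm set 100 -net0 e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0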


You can also try virtualizing, as we do, with the Xen-based XCP-ng: https://xcp-ng.org/

Cheers