Dear all,
Recently, I had the opportunity to switch to fiber Internet connectivity. The speed is great: I get almost 1 Gbit/s symmetrical on a PHYSICAL computer connected directly to the box. ;D
What a disappointment when I got (after tests and optimizations) only half of that behind my firewall VM.
Virtualization is probably to blame, with all the added latency and overhead (drivers, hypervisor, etc.), so I don't know how to solve that.
But as a solution, I have an unused physical machine (a former hypervisor) with 32 GB of RAM and a Core i5, which should be just fine for that job (please comment if you think otherwise).
My idea is to go from a single VM (which currently offers a lot of flexibility and reliability thanks to backups and snapshots) to an HA cluster between the new physical machine and the VM.
A performance loss in case of problems/maintenance is 100% acceptable, but of course not a loss of config/connectivity, because I also have a VPN tunnel to another location and family members connect to my infrastructure (yes, it's a home lab! ;))
VM network config:
- 2 virtual NICs (VMXNET3): one for WAN, one for the LANs
- the LAN interface is configured with multiple VLANs/subnets
- all default GWs have IPs ending in .1
I'd like to keep .1 as the default GW on each subnet, so those addresses would have to move to the CARP virtual IPs; .2 and .3 are reserved for this project on all LAN subnets.
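To make the plan concrete, here is a rough Python sketch of the addressing scheme I have in mind: the .1 that clients already use as default gateway moves to the CARP VIP, while .2 and .3 become the per-node interface addresses. The subnet names and prefixes below are just placeholders, not my real VLANs.
[code]
# Hypothetical illustration of the planned CARP addressing per LAN subnet:
# .1 stays the default gateway (moved to the CARP VIP),
# .2 = node 1 (the VM), .3 = node 2 (the physical box).
import ipaddress

# Placeholder VLAN subnets -- not the real ones.
lan_subnets = {
    "LAN":   "192.168.10.0/24",
    "IOT":   "192.168.20.0/24",
    "GUEST": "192.168.30.0/24",
}

for name, cidr in lan_subnets.items():
    net = ipaddress.ip_network(cidr)
    base = net.network_address
    vip, node1, node2 = base + 1, base + 2, base + 3
    print(f"{name:6} CARP VIP/GW: {vip}  node 1: {node1}  node 2: {node2}")
[/code]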
I already know there are difficulties with drivers and such for a mixed physical/virtual setup (the docs mention the need to use LAGG), but knowing the above, where do I start?
Thanks in advance for your great help :)
Refs:
https://docs.opnsense.org/manual/hacarp.html
https://docs.opnsense.org/manual/how-tos/carp.html
Virtual NICs are CPU-bound; with most processors you only get ~5 Gbit/s.
You need a proper Intel/Mellanox SFP+ NIC and use PCIe passthrough.
Edit: I read that as 10G; 1G should be no problem.
Make sure to disable hardware offloading, which can create issues.
# Interfaces -> Settings: Check "Disable CRC, TSO and LRO hardware offload" and "Disable VLAN Hardware Filtering"
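If you want to double-check from a console that those offloads are really off at runtime, a quick sketch along these lines should work on FreeBSD/OPNsense (the interface names are just examples; with VMXNET3 they are usually vmx0/vmx1):
[code]
# Rough sanity check: report which offload-related flags are still enabled.
# Parses the "options=" line from FreeBSD's ifconfig output.
import re
import subprocess

OFFLOAD_FLAGS = {"TXCSUM", "RXCSUM", "TSO4", "TSO6", "LRO", "VLAN_HWFILTER"}

for ifname in ("vmx0", "vmx1"):   # adjust to your WAN/LAN interfaces
    out = subprocess.run(["ifconfig", ifname],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"options=\w+<([^>]*)>", out)
    enabled = set(match.group(1).split(",")) if match else set()
    still_on = sorted(enabled & OFFLOAD_FLAGS)
    print(f"{ifname}: offloads still enabled: {', '.join(still_on) or 'none'}")
[/code]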
Quote from: Voodoo on February 23, 2021, 01:27:08 PM
Make sure to disable hardware offloading, which can create issues.
# Interfaces -> Settings: Check "Disable CRC, TSO and LRO hardware offload" and "Disable VLAN Hardware Filtering"
Yeah, that's the thing, I've already disabled all of this. Also, IPS is completely off for the moment.
The VM runs on a PowerEdge R730 with dual E5-2620 CPUs and 128 GB of RAM (<50% used); storage is a Synology with SSDs (iSCSI) and the network is a Ubiquiti 10 Gbit/s switch.
The VM is configured with 4 vCPUs and 4 GB of RAM.
The current VM config seemed like a "sweet spot": more resources would not give more performance (you end up in "CPU wait"), but less was definitely worse! :-)
The benchmark was done on the same physical computer as the direct test; otherwise it wouldn't make sense.
And of course, I'm not against trying to tweak the VM... but given your statement, I'm wondering where the issue is now! ;D
Edit: I re-tested the connection speed across the switches I'm using for virtualization, so the same computer, not directly attached to the box like in the first test, but to its switch. I consistently got 800 Mbit/s symmetrical, or a little more.
So I'm really losing half of it through the FW. ???
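One way to take the ISP link out of the equation would be an iperf3 run between two hosts in different VLANs, routed through the firewall. A rough sketch of what I have in mind (the server address is a placeholder, and it assumes iperf3 is installed on both ends with "iperf3 -s" already running on the far side):
[code]
# Rough sketch: measure routed throughput through the firewall with iperf3,
# independent of the Internet uplink. The server sits in another VLAN so all
# traffic is forwarded by the FW.
import json
import subprocess

SERVER = "192.168.20.10"   # placeholder: iperf3 server behind the firewall

def measure(reverse: bool) -> float:
    cmd = ["iperf3", "-c", SERVER, "-P", "4", "-t", "10", "-J"]
    if reverse:
        cmd.append("-R")   # server sends, client receives
    result = json.loads(subprocess.run(cmd, capture_output=True,
                                       text=True, check=True).stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"client -> server: {measure(reverse=False):.2f} Gbit/s")
print(f"server -> client: {measure(reverse=True):.2f} Gbit/s")
[/code]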
Hi there,
So I ran some tests. I created some test VMs, so we have (same host, same client machine):
- Production FW: 2 VMXNET3 NICs, multiple VLAN interfaces, IPsec tunnel and OpenVPN for some VPN clients, 2 vCPUs, 4 GB RAM
- Test FW 1: 2 VMXNET3 NICs, single VLAN interface (no tagging), no VPN, 1 vCPU, 2 GB RAM
- Test FW 2: 2 E1000 NICs, single VLAN interface (no tagging), no VPN, 1 vCPU, 2 GB RAM
More or less, this is what I get (on average) from my test client on DSLreports:
- Prod FW: down: 500 Mbit/s / up: 450 Mbit/s
- Test FW 1: down: 700 Mbit/s / up: 650 Mbit/s
- Test FW 2: down: 780 Mbit/s / up: 850 Mbit/s
So the difference between VMXNET3 and E1000 is quite small on these test VMs, but if I compare both extremes, there is a huge speed difference...
Any help/advice before I try a physical machine? Thanks in advance ;)
Anyone? ;)
Hi there,
I made a few more tests, and the host can really make a difference: my newest one has a CPU frequency of 3.4 GHz (i7-6800) and, in a VM that has a direct connection to the Internet VLAN, I can reach speeds above 900 Mbit/s.
The same VM going through my firewall (both on the same host) gets its speed cut in half.
I've also tried adding a dedicated interface to my FW, without VLAN tagging: same (bad) result.
I'm out of ideas now. Any clue (please also check my previous post with the test VMs)? Thx ;)