VM to hardware migration and performance optimization

Started by EHRETic, April 15, 2021, 08:43:56 AM

Hi there,

After trying to get more info on creating an HA setup with a VM/physical server pair, I abandoned the idea of HA as it doesn't seem to be feasible. The only possibility would be to create a LAG so both sides have the same interfaces, but that is apparently not really possible in a VMware VM (more info here: https://forum.opnsense.org/index.php?topic=21696.msg102192)

So I have an unused i5-4570 with 32GB of RAM (previous ESX host in my homelab); storage is a 32GB USB stick.

2 main questions:
- what is the best way to migrate from one to the other?
I have 2 interfaces in the VM (a trunk with several VLANs, and the WAN), while on the physical server I have a 4x1Gb/s Broadcom card. How does interface assignment work after the configuration upload?

- what kind of optimizations/tweaks can I do to make the most of my hardware (esp. memory, which is overkill)?
You can suggest hardware modifications too...

PS: in the near future I'll have a dual Intel 10Gb/s NIC for this machine. Is it recommended to create an LACP LAG with "VLAN interfaces" for everything, or to dedicate one port to WAN and one to the LANs?
(I have a 1Gb/s Internet connection)

Thanks for your suggestions! ;)

If you're migrating from a machine with different interface IDs or drivers, you can edit the backup config .xml file and replace the old interface IDs with the new IDs on your destination firewall.

For instance, if your old VM firewall had the WAN interface on vmx0 and you were moving to a physical device where you want the WAN on bce0, you would edit the config file and swap in the new values.
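As a minimal sketch of that edit (the file name and the vmx0/bce0 device pairs are examples, not your actual values; the two-line `printf` just stands in for a real backup, which stores each assignment as an `<if>DEVICE</if>` element under `<wan>`, `<lan>`, `<optN>`, and so on):

```shell
# Stand-in for a real OPNsense backup export.
printf '<wan><if>vmx0</if></wan>\n<lan><if>vmx1</if></lan>\n' > config-backup.xml

# Replace the whole <if>...</if> element so vmx0 can't accidentally match
# inside some other string; write the result to a new file.
sed -e 's|<if>vmx0</if>|<if>bce0</if>|g' \
    -e 's|<if>vmx1</if>|<if>bce1</if>|g' \
    config-backup.xml > config-migrated.xml

grep '<if>' config-migrated.xml
```

You would then restore config-migrated.xml on the destination firewall.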

I don't have much feedback on the dual 10Gb question. Personally, I'd just use two interfaces, one for WAN and one for LAN, as it's less complicated and it lets me swap the switch if it fails without having to rely on duplicating the config on another physical device.

Thanks for that information, this would be very useful for migration! :)

I have a few small extra questions:
- Does the driver have any impact?
- So if I understood correctly, I just need to replace the string vmx0 with bge0 and vmx1 with bge1 in the config file, and it's done? That would be awesome.

Thanks again!

Yes, it's just an interface label. As long as you know the order of the interfaces on the new firewall, the config will import without issues.

It's also worth noting you could restore the backup config without making changes, and then just go back and manually reassign your chosen interfaces. It's a bit more clunky that way, but both approaches work.

Quote from: opnfwb on April 15, 2021, 03:54:44 PM
If you're migrating from a machine with different interface IDs or drivers, you can edit the backup config .xml file and replace the old interface IDs with the new IDs on your destination firewall.

opnfwb, I really have to thank you a lot, my migration was flawless and super easy! It was done in literally five minutes (the host was pre-staged and updated) :)

It was so easy that I was quite suspicious about it!!! ;D

I can now get 900Mb/s+ of Internet in both directions and approx. 850Mb/s with IPS/IDS enabled.
I'll retest when I switch the interfaces to the 10Gb/s cards to see if IDS/IPS performs better.

One other question: how can I make the most of my system? Memory is barely used.


Sorry for the late response I just saw your reply. I'm glad the config imported smoothly!

Regarding getting the most out of it, this comes down to load simulation and checking where the bottlenecks are. With IDS/IPS enabled, what is your CPU usage when pushing 1Gbit/s of traffic? Adding more cores can help throughput depending on the packages and whether those packages can use multiple threads (PPPoE, OpenVPN, etc. are single-threaded). In single-threaded cases, a faster core can be the only way to get more performance (for instance, going from 2.4GHz to 3.0GHz with a CPU swap).
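To put the clock-swap point in rough numbers (the 800 Mb/s baseline below is a made-up figure for illustration, not a measurement): single-threaded throughput tends to scale roughly linearly with clock speed, so a back-of-envelope estimate looks like this:

```shell
# Hypothetical: a single-threaded workload topping out at 800 Mb/s on a
# 2.4 GHz core would land around this on a 3.0 GHz core of the same family.
awk 'BEGIN { printf "%.0f Mb/s\n", 800 * 3.0 / 2.4 }'   # -> 1000 Mb/s
```

Real gains are usually somewhat lower, since not everything on the packet path is clock-bound.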

If you aren't using large IPS rulesets, memory usually remains pretty low, sub-4GB in most cases. However, this is highly dependent on the environment and use case.

For the Broadcom NICs, BSD doesn't have a lot of performance tweaking options. I would just make sure that MSI and MSI-X are enabled, so that if the NICs do support multiple queues per processor thread you can make the most of that.

At the console check that both of these return '1':
sysctl hw.pci.enable_msi
sysctl hw.pci.enable_msix
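If either of those returns 0, they are boot-time tunables rather than runtime sysctls, so they have to be set in the loader configuration; a sketch of the entries (in /boot/loader.conf.local, followed by a reboot) would be:

```
hw.pci.enable_msi="1"
hw.pci.enable_msix="1"
```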


Are you getting a faster internet connection in addition to the 10gbit NICs? Otherwise I'm not sure there would be much realized benefit here unless the ISP over-provisions beyond 1gbit?

Quote from: opnfwb on April 28, 2021, 10:11:50 PM
At the console check that both of these return '1':
sysctl hw.pci.enable_msi
sysctl hw.pci.enable_msix


Are you getting a faster internet connection in addition to the 10gbit NICs? Otherwise I'm not sure there would be much realized benefit here unless the ISP over-provisions beyond 1gbit?

I get 1 for both commands, so nothing more to do on that level. I have no plans to upgrade the CPU, but I'll have a look.

For the 10Gb/s: as I have several zones, and one in particular is the multimedia zone that communicates with my Plex server in another zone, it might be good to have something "strong". Even if I go 4K in the future, 1Gb/s will not saturate them, but the 10Gb/s card is also an Intel NIC, which should be able to offload some things.

But I need to purchase 2 extra Twinax cables first! ::)

For the future, I also plan to put in a proxy so the kids can have filtered Internet, but I'll try that in a VM lab first to see how it works and whether I can integrate it with AD auth.