OPNsense Forum

English Forums => 24.7, 24.10 Legacy Series => Topic started by: logi on September 18, 2024, 10:13:29 PM

Title: Best Guest System to virtualize OPNsense in Proxmox
Post by: logi on September 18, 2024, 10:13:29 PM
I will be doing some passthrough with PCI devices (Ethernet ports and the thermal agent from the Intel Core i7-10810U CPU).
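
For reference, roughly what I have in mind on the Proxmox side (a sketch only; VM ID 100 and the PCI address below are placeholders, not my actual devices):

  # On the Proxmox host, enable the IOMMU (Intel CPU) and reboot:
  # in /etc/default/grub set
  #   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
  # then run update-grub.

  # Find the PCI addresses of the devices to pass through:
  lspci -nn | grep -i -e ethernet -e thermal

  # Attach one of them to the VM (pcie=1 is part of my question,
  # since as far as I know it is only valid with q35):
  qm set 100 --hostpci0 0000:02:00.0,pcie=1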

- Should I pick i440fx or q35?
- Do I select QEMU agent?

Thanks
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: Taomyn on September 19, 2024, 08:02:17 AM
Personally I've used q35. You have to make sure Secure Boot is disabled, as it's not supported, so either use SeaBIOS or disable it in the UEFI BIOS. And yes, enable the QEMU agent, as there's support for that within OPNsense.
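
A minimal sketch of that on the Proxmox side (VM ID 100 is just an example):

  # q35 machine type with SeaBIOS, which sidesteps Secure Boot entirely:
  qm set 100 --machine q35 --bios seabios
  # (Alternatively, keep OVMF/UEFI and disable Secure Boot in the VM's
  # firmware setup screen.)

  # Expose the QEMU guest agent device to the VM:
  qm set 100 --agent enabled=1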


For now I have ballooning off, as when I installed last year it would give OPNsense no more than 2GB, but I believe this is solved now and is just something I need to get around to checking again.
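
If anyone wants to do the same, setting the balloon value to 0 disables the device for that VM (VM ID again just an example):

  qm set 100 --balloon 0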
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: cookiemonster on September 19, 2024, 03:26:26 PM
Unless things have changed recently, you have to use q35 for successful PCI passthrough, ballooning is still unusable for FreeBSD, and yes to the QEMU agent. It needs installation inside the VM too.
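
For the in-VM part, a quick sketch of what I mean (plugin name from memory, so verify against your OPNsense version):

  # Install the plugin from the GUI under System > Firmware > Plugins,
  # or from the OPNsense shell:
  pkg install os-qemu-guest-agent

  # Once enabled, the agent process should be running:
  ps ax | grep qemu-ga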
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: logi on September 19, 2024, 03:47:10 PM
Quote from: Taomyn on September 19, 2024, 08:02:17 AM
For now I have ballooning off, as when I installed last year it would give OPNsense no more than 2GB, but I believe this is solved now and is just something I need to get around to checking again.

I am not sure if I should use ballooning or not. When the guest is running, Proxmox reports around 90% RAM usage, when in reality it is no more than 20%. I have been playing around with ballooning ON/OFF, but it doesn't seem to solve the memory reporting issue in Proxmox. Are you experiencing the same problem? Thanks
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: Taomyn on September 20, 2024, 01:16:20 PM
Quote from: logi on September 19, 2024, 03:47:10 PM
I am not sure if I should use ballooning or not. When the guest is running, Proxmox reports around 90% RAM usage, when in reality it is no more than 20%. I have been playing around with ballooning ON/OFF, but it doesn't seem to solve the memory reporting issue in Proxmox. Are you experiencing the same problem? Thanks


This has been the $64 billion question on the Proxmox forum, with no end of reasons given, but none of them really explain it to me. For me the issue is worse for Windows, because it's Windows, but also if ZFS is involved then that can use RAM without the OS seeming to know about it as well. I just enable it when I can and monitor it from within the VMs, for which I use Zabbix.
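
For anyone who wants to see how much of the "used" RAM is ZFS cache inside a FreeBSD guest such as OPNsense, a rough check (sysctl name from memory, so treat it as a sketch):

  # Current ZFS ARC size in bytes:
  sysctl kstat.zfs.misc.arcstats.size

  # top also prints an ARC line in its memory summary:
  top -b | head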
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: cookiemonster on September 20, 2024, 01:31:06 PM
Quote from: cookiemonster on September 19, 2024, 03:26:26 PM
Unless things have changed recently, you have to use q35 for successful PCI passthrough, ballooning is still unusable for FreeBSD, and yes to the QEMU agent. It needs installation inside the VM too.
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: Taomyn on September 20, 2024, 01:38:51 PM
Not sure what the point of repeating yourself was. The man page at https://man.freebsd.org/cgi/man.cgi?query=virtio_balloon&sektion=4&manpath=FreeBSD+14.1-STABLE suggests that it should work, and someone else on the forum stated that it did for them: https://forum.opnsense.org/index.php?topic=41958.msg209102#msg209102
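
If anyone wants to confirm the driver is actually attached in their own guest, a quick check from the OPNsense shell (per that man page, the device attaches as vtballoon):

  dmesg | grep -i vtballoon
  # If attached, it also shows up in the device sysctl tree:
  sysctl dev.vtballoon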
Title: Re: Best Guest System to virtualize OPNsense in Proxmox
Post by: cookiemonster on September 20, 2024, 09:29:12 PM
The OP seems to have ignored it the first time; that is the point of reiterating.
Also, that other thread seems contradictory: although he says it works for him, there is always a discrepancy.
And that is one part of the problem. The problem still is, as I said, unless it has changed recently, that the FreeBSD driver wasn't updated in a long time to reflect the ballooning of memory in and out, and to account for the impact of the ARC.
Some choose to read only some replies, and that's fine. I'm out of the thread now.