24.1 Legacy Series / Re: Poor speed in virtual environment.
« on: March 11, 2024, 10:48:41 pm »
Hi,
I am neither a PVE nor an OPNsense expert. However, in your original post there seems (as far as I know) to be a mix-up of the terms 'passthrough' and 'Linux bridges'.
As I understand it, for a single NIC it's one path or the other. If you try to implement both, you will at best cause conflicts, and at worst the NIC will not work at all. This means that for a single NIC:
- You either create a Linux bridge for it (generally I think you want a 1:1 configuration, one vmbr per physical NIC ideally) and pick a model in the VM's network settings, virtio (paravirtualized) or E1000 (emulated),
- or you pass the NIC itself through, which means you permit your VM to access the raw hardware directly, in which case it takes (I think) full and exclusive control of that hardware.
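To make the two options concrete, here is a rough sketch of what each looks like on a PVE host. All identifiers are placeholders for illustration: VM ID 100, NIC name enp1s0, PCI address 0000:01:00.0 — substitute your own.

```shell
# Option 1: Linux bridge + a NIC model on the VM.
# Bridge stanza in /etc/network/interfaces on the PVE host:
#
#   auto vmbr0
#   iface vmbr0 inet manual
#       bridge-ports enp1s0
#       bridge-stp off
#       bridge-fd 0
#
# Then attach the VM's virtual NIC to the bridge, choosing a model:
qm set 100 --net0 virtio,bridge=vmbr0   # paravirtualized
# or:  qm set 100 --net0 e1000,bridge=vmbr0   # emulated

# Option 2: PCI passthrough of the physical NIC itself
# (requires IOMMU enabled; the VM takes exclusive control of the device):
qm set 100 --hostpci0 0000:01:00.0
```

You pick one path per physical NIC — a NIC listed under bridge-ports shouldn't also appear as a hostpci device.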
So if you've mixed the two, say you have NIC_A and passed it through as a PCI device to a VM, then you "shouldn't" also create a Linux bridge on it and attach that bridge to the same VM, or any other VM for that matter. I don't know exactly how Proxmox handles this, but normally it shouldn't let you.
Vice versa, if you have NIC_A and created a Linux bridge on it, say vmbr0, and you use vmbr0 for various VMs, then you shouldn't pass NIC_A through to a VM, as that would potentially "block" vmbr0.
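One quick way to check which path a NIC is currently on is to look at which kernel driver owns it (the PCI address below is a placeholder):

```shell
# A bridged NIC shows its normal driver (e.g. igb, e1000e) under
# "Kernel driver in use"; a passed-through NIC shows vfio-pci instead,
# meaning the host has given it up to a VM.
lspci -nnk -s 01:00.0
```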
In addition to all that, I've found that when using Linux bridge interfaces, the model I mentioned earlier plays a role (for me) in the speeds I achieve. For example, with VirtIO (paravirtualized) I get the full speed of my ISP on my WAN interface, but with E1000 I automatically lose ~50 Mbit for some reason.
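If you want to measure the difference yourself rather than rely on the ISP speed indicator, a simple comparison is to switch the model and run iperf3 each time (VM ID and server address are placeholders; an iperf3 server must be running on the target host):

```shell
# Test with the emulated model:
qm set 100 --net0 e1000,bridge=vmbr0
iperf3 -c 192.168.1.10 -t 30

# Then with the paravirtualized model:
qm set 100 --net0 virtio,bridge=vmbr0
iperf3 -c 192.168.1.10 -t 30
```

Note the VM usually needs a reboot (or NIC hot-replug) for the model change to take effect.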
Hope that helps somehow. Again, I could be wrong about the Proxmox details, but that's where my research has led me.
Good luck!