I have an OPNsense VM running on my Proxmox server that handles all of my networking (DHCP using Kea, DNS using Dnsmasq, port forwarding, NTP, etc.). I use an Intel X550-T2 to handle all of my physical connections, but within the Proxmox server itself I also need to share networking with my VMs.
My issue is that when I try to connect my VMs using the Virtio networking driver (I have also tried VMXNET3 with similar results), OPNsense does not seem to recognize the connection. I have verified that the firewall and all other network controls that Proxmox provides are turned off. OPNsense sees the network device, but when I look at the virtual interface the "Speed and Duplex" section is missing entirely.
I have searched the forums and seen people saying that this has not been an issue since 19.x, when the driver was incorporated into FreeBSD. I have managed to limp along just fine using the E1000 driver, which retains full functionality, but I need 10Gbps connections to files stored and managed by several VMs for my workflows.
I have tried clean installs, and even before I restore my settings no Virtio connections will work. As a sanity check I created a pfSense VM and it worked immediately at 10Gbps, so I know it's not a problem with Proxmox or FreeBSD itself.
Any help or advice would be appreciated.
I don't understand the problem.
Is it the VM that does not recognise the NIC, and that VM is not OPNsense, right?
You seem to be mixing two things (or I am).
Your VMs are not connected directly to OPNsense sitting next to it; they are independent of each other.
Traffic from a VM will go out of the host and back into OPNsense, so the driver used for the VM has nothing to do with OPNsense.
IDK why the virtio emulation does not work for you; however, even if you use E1000 emulation, that is just the presentation to the OPNsense VM. Theoretically it is not limited to 1000 Mbit/s, but rather by the emulation efficiency and the performance of the VM itself.
For Linux VMs, virtio is a tad faster than E1000, IDK about FreeBSD.
Most people use PCIe passthrough to speed things up. With 10G, this might be the way to go; however, you cannot use the same hardware NIC as both bridged and passthrough.
I knew when I was posting I would probably need to clarify, since it's really easy to have something you've set up and can look at, but a lot harder to properly communicate that setup to others. My main issue is OPNsense failing to interface with the Virtio driver at any speed (not even 100 or 10 Mbps).
For more context:
The OPNsense VM has an Intel X550-T2 passed through to it via IOMMU. That runs perfectly fine at full 10Gbps. The physical interface provides me with WAN on one port and LAN on the other port to my physical machines. All of my physical devices can communicate back to an intermediate Zyxel 10Gbps switch, and on to OPNsense, at 10Gbps (about 8+ Gbps tested with iperf3 and real-world use of LanCache).
Separate from this is a virtual interface I have added, OPT1, where I am trying to use the Virtio driver (the device shows up as vtnet0 in OPNsense) but am having to fall back to the E1000 driver because Virtio does not work correctly. All of my VMs use corresponding Virtio-based network adapters and none of them have a problem reporting their interfaces as capable of 10Gbps; the problem appears to be OPNsense not recognizing something about the Virtio connection. I understand that the VMs are separate from OPNsense, and using iperf3 I can benchmark around 8.2Gbps between them when I give them enough threads and RAM. There is no slowdown between my other VMs, and a connection between any two VMs should behave the same as a connection between any VM and OPNsense.
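For reference, the kind of test I run between two VMs is just a parallel-stream iperf3; the IP and stream count below are only examples from my setup:
iperf3 -s                            # on the first VM
iperf3 -c 192.168.1.50 -P 4 -t 30    # on the second VM, pointed at the first one's LAN IP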
As to using E1000: while I understand that this could, theoretically, be pushed faster, I don't really have the interest in diving into that lower driver/emulation layer (part of why I bought the Intel NIC I did was that driver support for it is native to FreeBSD, so I wouldn't have to deal with the stability and security issues of bodging drivers for the 4x1Gbps card I had previously been using back when IPFire was my router). When OPNsense views that connection it correctly interprets it as a 1Gbps link, and it can send traffic and assign IPs to my VMs at 1Gbps.
Clear now. Have you loaded the module in case it is not built in?
What module would that be? Looking in either packages or plugins within OPNsense the only thing I can see related to KVM or Proxmox is the qemu-guest-agent plugin.
My understanding is the virtio driver should be a part of the kernel. I guess that's the module you mean. I'll look into it.
Following the instructions in the FreeBSD manual (https://man.freebsd.org/cgi/man.cgi?virtio(4)), I edited /boot/loader.conf to ensure that virtio, virtio_pci, and virtio_net are loaded at boot (though the documentation I am seeing says Virtio should be part of the GENERIC kernel and load automatically).
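In case it helps anyone reading later, the loader.conf entries I added look like this (standard FreeBSD module syntax), and afterwards I checked whether anything actually attached; note that if virtio is compiled into the kernel rather than loaded as a module it won't show up in kldstat:
# /boot/loader.conf
virtio_load="YES"
virtio_pci_load="YES"
virtio_net_load="YES"

# after a reboot
kldstat | grep -i virtio    # only lists virtio if it was loaded as a module
dmesg | grep -i vtnet       # should show the vtnet device attaching either way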
Each time I set my OPT1 connection to my VMs to use vtnet0 I lose all connectivity.
As a bit more explanation, I use a bridge0 device to connect my physical and virtual interfaces. It makes it easier to run a single DHCP, DNS, etc. setup on the bridge device that then filters down to its "child" connections (the physical connector and the virtual connector to the VMs). From what I know this shouldn't ever result in a full loss of connectivity, but I thought I'd mention it in case it is relevant. Interfaces are added following step 2 of the guide in the OPNsense docs (https://docs.opnsense.org/manual/how-tos/lan_bridge.html).
Quote from: Crycaeon on August 04, 2024, 06:03:37 AM
Interfaces are added following step 2 of a guide in the OPNsense docs (https://docs.opnsense.org/manual/how-tos/lan_bridge.html).
Step 2 is definitely not enough. Frankly it sounds like you missed steps 5/6 and everything is blocked on the bridge member interfaces.
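If memory serves, the later steps of that how-to boil down to setting the bridge filtering tunables and then putting your firewall rules on the bridge interface itself; roughly:
# System > Settings > Tunables (shown here as one-off sysctl commands)
sysctl net.link.bridge.pfil_member=0    # do not filter on the member interfaces
sysctl net.link.bridge.pfil_bridge=1    # filter on the bridge interface instead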
Obviously, the topology is not yet fully described:
- If you use one physical passthrough for both WAN and LAN, do they have VLANs configured and the breakout is done externally?
- Since you also use one virtio adapter: to which bridge, and thus to which other NIC on the Proxmox host, is that connected? How is that connected to your switching topology (i.e. which VLAN)?
If you lose all connectivity by enabling an additional connection, the reason might be a network loop that in turn causes (R)STP to cut off your connections.
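One quick way to check for that (interface names are just examples):
# on OPNsense: list the bridge members and their state
ifconfig bridge0
# on the Proxmox host: see which tap devices sit on which Linux bridge
bridge link show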
Quote from: Crycaeon on August 04, 2024, 01:42:40 AM
What module would that be? Looking in either packages or plugins within OPNsense the only thing I can see related to KVM or Proxmox is the qemu-guest-agent plugin.
My understanding is the virtio driver should be a part of the kernel. I guess that's the module you mean. I'll look into it.
Yes, I meant the virtio driver. Virtio is built into the FreeBSD kernel, but OPNsense might not bring in each and every part of the GENERIC kernel.
What I don't know is what sort of speeds virtio net devices can reach between VM and hypervisor on FreeBSD. I expect you will be able to solve the connectivity problem; many people use virtio successfully. Whether you can reach 10Gbps with it is a different matter.
Quote: Step 2 is definitely not enough. Frankly it sounds like you missed steps 5/6 and everything is blocked on the bridge member interfaces.
Again, I should have been clearer. The bridge has already been set up and is already running fine using the em0 interface (i.e. the E1000 driver). I stepped through the entire guide to get the bridge running. For testing I created another interface, vtnet0, which I test by removing em0 as a member of the bridge0 interface and adding vtnet0 in its place. When I do this testing I am, as I understand it, fully turning off em0 by removing it as a member of bridge0 and then adding vtnet0 as a new member of bridge0.
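For what it's worth, the swap I do in the GUI should amount to roughly this at the FreeBSD level (interface names as in my setup):
ifconfig bridge0 deletem em0     # drop the E1000 member
ifconfig bridge0 addm vtnet0     # add the Virtio member
ifconfig bridge0                 # confirm the member list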
Quote: - If you use one physical passthrough for both WAN and LAN, do they have VLANs configured and the breakout is done externally?
The Intel X550-T2 has two physical interfaces (that's what the T2 stands for). I have passed through the full card and use one full physical interface each for WAN and LAN. I don't have any VLANs configured to separate WAN and LAN, or my LAN and my VMs.
Quote: - Since you also use one virtio adapter: to which bridge, and thus to which other NIC on the Proxmox host, is that connected? How is that connected to your switching topology (i.e. which VLAN)?
My bridge0 is used as an easy way to share services (DHCP, DNS, NTP) between my physical LAN connection to my physical machines and the virtual LAN connection to my other VMs on Proxmox. The only VLAN that could be involved, now that I think about it, would be within Proxmox, as all my VMs (and my existing em0 connection within OPNsense) are broken out on vmbr1 as opposed to the default vmbr0.
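For context, a VM-only bridge on Proxmox (no physical port attached) is defined in /etc/network/interfaces with a stanza roughly like the following; mine is vmbr1, and the exact options may differ on your install:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0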
Quote: Whether you can reach 10Gbps with it is a different matter.
Getting exactly 10Gbps is less of the goal; the actual goal is being able to reach the VMs from my physical systems at greater than 1Gbps.
I should note that before I ever used a bridge this problem existed when I was trying to run this connection as a standalone OPT1 connection with the virtio driver.
Quote: I expect you will be able to solve the connectivity problem; many people use virtio successfully.
That's why I ultimately made my post: I figure I'm just doing something stupid in my configuration. I am relatively savvy when it comes to getting around a Linux CLI (and for the most part the OPNsense GUI does a good job of exposing relevant logs), but FreeBSD seems just different enough that I am probably missing something when it comes to checking logs and finding the issue.
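In case someone can point out what I'm missing, the checks I know to run from the OPNsense shell are along these lines:
dmesg | grep -i vtnet         # did the virtio NIC attach at boot?
pciconf -lv | grep -i virtio  # is the device visible on the PCI bus inside the VM?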
Quote from: Crycaeon on August 04, 2024, 12:58:24 AM
...
All of my VMs use corresponding Virtio-based network adapters and none of them have a problem reporting their interfaces as capable of 10Gbps; the problem appears to be OPNsense not recognizing something about the Virtio connection.
...
OPNsense works out of the box with Virtio networking, so if there's a problem you should most definitely look at your host system / configuration (i.e. Proxmox), not OPNsense.
A fresh default OPNsense 24.7 install on current Debian Testing (kernel v6.9.12, libvirt v10.5.0, qemu v8.2.4) with two virtual networks: an "Isolated" LAN and a "NAT" WAN.
Guest OPNsense:
# ifconfig
vtnet0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: LAN (lan)
options=800a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
ether 52:54:00:54:29:36
inet 192.168.1.1 netmask 0xffffff00 broadcast 192.168.1.255
inet6 fe80::5054:ff:fe54:2936%vtnet0 prefixlen 64 scopeid 0x1
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vtnet1: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: WAN_1 (wan)
options=800a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
ether 52:54:00:51:5e:79
inet 192.168.111.168 netmask 0xffffff00 broadcast 192.168.111.255
inet6 fe80::5054:ff:fe51:5e79%vtnet1 prefixlen 64 scopeid 0x2
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
Host KVM: the LAN interface (vtnet0) on the hypervisor corresponds to vnet34.
# ip a s vnet34
51: vnet34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr2 state UNKNOWN group default qlen 1000
link/ether fe:54:00:54:29:36 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe54:2936/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
# ethtool -I vnet34
Settings for vnet34:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: off
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Current message level: 0x00000000 (0)
Link detected: yes
And the WAN interface (vtnet1)
# ip a s vnet35
52: vnet35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr1 state UNKNOWN group default qlen 1000
link/ether fe:54:00:51:5e:79 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe51:5e79/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
# ethtool -I vnet35
Settings for vnet35:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: off
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Current message level: 0x00000000 (0)
As stated earlier in this thread, Virtio networking isn't a physical Ethernet interface but an API between hypervisor and guest. It just throws packets through a (virtual) pipe as fast as your host CPU / memory can handle. Link speeds of virtual interfaces are "virtual"; they have no real use at all (only visual and/or compatibility).
Let's "upgrade" the 10Gb virtual LAN interface on the Host to 25Gb (or 40Gb, or 100Gb, or ...)
ethtool -s vnet34 speed 25000 duplex full autoneg on
Well, that's a cheap upgrade...
# ethtool -I vnet34
Settings for vnet34:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 25000Mb/s
Duplex: Full
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Current message level: 0x00000000 (0)
Link detected: yes
Again, it's virtual: despite setting autoneg to on, this is not how virtual interfaces work (there's no real autonegotiation), as you can confirm on OPNsense, which still shows a 10Gb connection (also with autoneg on):
# ifconfig vtnet0
vtnet0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
description: LAN (lan)
options=800a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
ether 52:54:00:54:29:36
inet 192.168.1.1 netmask 0xffffff00 broadcast 192.168.1.255
inet6 fe80::5054:ff:fe54:2936%vtnet0 prefixlen 64 scopeid 0x1
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
Ergo,
There's nothing wrong with OPNsense, it supports Virtio Networking and displays interface speeds by default. So fix, reconfigure, upgrade your Proxmox host, and OPNsense will follow...
Quote: There's nothing wrong with OPNsense, it supports Virtio Networking and displays interface speeds by default. So fix, reconfigure, upgrade your Proxmox host, and OPNsense will follow
So I am not averse to saying that Proxmox is the issue. However, I am not certain what could be causing it. I am on the latest version of Proxmox (6.8.8-4).
Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
proxmox-kernel-6.5.13-5-pve-signed proxmox-kernel-6.8.4-2-pve-signed
proxmox-kernel-6.8.4-3-pve-signed proxmox-kernel-6.8.8-1-pve-signed
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Your System is up-to-date
Where I am confused is that I can create a fresh install of OPNsense; it's easy enough to restore from the backup file and just assign my interfaces later. However, the fresh install still has this issue of being unable to work with Virtio, while a fresh install of pfSense does not have this problem and registers the Virtio interface correctly (though I do not want to switch to pfSense, as I have been burned by corporate product maintainers before and have heard similar things from others about Netgate).
To be clear, I am not asking for Proxmox help on the OPNsense forums; that wouldn't make any sense. I am just asking if there is any additional troubleshooting that I am missing. I will go back and reexamine my interfaces, on both my host and VMs, and see if there are any differences.
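My plan for that comparison is roughly the following (vmbr100 is the bridge from my VM config; the tap device name depends on the VM ID):
# on the Proxmox host
ip -d link show vmbr100    # bridge details as the host sees them
bridge link show           # which tap devices are enslaved to which bridge
# inside OPNsense
ifconfig vtnet0            # flags, media and status as the guest sees them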
Your Proxmox version is not the newest one:
https://www.proxmox.com/de/downloads/proxmox-virtual-environment
You have to upgrade as described by Proxmox.
If that doesn't help, I'm sure there is a misconfiguration of your bridge(s) in Proxmox. Have a look here:
https://homenetworkguy.com/how-to/virtualize-opnsense-on-proxmox-as-your-primary-router/
Quote: Your Proxmox version is not the newest one:
What Proxmox shows in the update command output is the kernel version; I mistakenly called that out as the version of Proxmox itself. Below is the proper command showing I am running the latest version of Proxmox.
root@X:~# pveversion
pve-manager/8.2.4/faa83925c9641325 (running kernel: 6.8.8-4-pve)
Appreciate the guide. I will take a look at it when I have the time after work to see if I've missed anything.
Perhaps you can post the VM config from Proxmox so we can see exactly how it's been configured?
I have a small multi-NIC Intel mini-PC running OPNsense on Proxmox really well, so hopefully it's just a misconfiguration somewhere. This is my config - I have 3 of the 4 NICs in passthrough, for performance and because I don't want anything else interfering with them, with the 4th one bridged for access to my LAN plus a couple of VLANs:
agent: 1
args: -vnc 0.0.0.0:10
balloon: 0
bios: seabios
boot: order=scsi0
cores: 4
cpu: host
hostpci0: 0000:01:00,pcie=1
hostpci1: 0000:03:00,pcie=1
hostpci2: 0000:04:00,pcie=1
hotplug: disk,network,usb,cpu
machine: q35,viommu=virtio
memory: 8192
meta: creation-qemu=8.1.2,ctime=1701086589
name: BART
net0: virtio=BC:REDACTED:D5,bridge=vmbr0
numa: 1
onboot: 1
ostype: l26
scsi0: local-zfs:vm-100-disk-1,discard=on,iothread=1,size=64G
scsihw: virtio-scsi-single
smbios1: uuid=fa-REDACTED-88
sockets: 1
startup: order=2
tags: linux;vm
vmgenid: 84-REDACTED-bf
When I first installed this back in September 2023, the only real issues I had were that I could not get OPNsense to boot if I tried to use UEFI (hence "seabios" for the BIOS), plus having to disable ballooning or I would be stuck at 2GB RAM. Otherwise it's all pretty standard.
This is on PvE v8.2.4 and OPNsense v24.1 as I haven't been brave enough to upgrade to v24.7 yet.
Just so I am clear on your issue.
At the OPNsense level you want to bridge a passed-through X550-T2 port (LAN), physically connected to a switch, with a Proxmox virtual-only Linux bridge using model virtio? So line speed to the physical LAN and paravirtualized speed to the Proxmox VMs through a single OPNsense bridge.
At the Proxmox level, I assume this Linux bridge does not have an assigned port/slave (so it is not physically connected to anything)?
This OPNsense bridge (passed-through X550-T2 port (LAN) + Proxmox net1) functions when the Proxmox network device defined in the OPNsense VM is model Intel E1000, but the bridge fails if the Proxmox network device defined in the OPNsense VM is model VirtIO?
AND this setup works correctly with a pfSense bridge (passed-through X550-T2 port (LAN) + Proxmox net1, model VirtIO)?
Have you tested to be sure that a Proxmox VirtIO network device functions at expected speeds on its own, outside this bridge?
I would retry using a q35-based OPNsense VM. The latest no-subscription Proxmox has also moved to QEMU 9, and that may require a cold restart to move the VM over.
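Something along these lines (VM ID 100 here is only an example), followed by a full stop/start rather than a reboot so the new QEMU binary is actually used:
qm set 100 --machine q35            # make sure the VM uses the q35 machine type
qm shutdown 100 && qm start 100     # cold restart so it picks up the new QEMU version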
Quote: Perhaps you can post the VM config from Proxmox so we can see exactly how it's been configured?
A great idea; I'm not sure why I didn't do that in the first place. I will note that the hostpci0 device I pass through is the base address of the network card used for my physical connections; within OPNsense it is further enumerated into its two component interfaces, which give me my physical WAN and LAN connections. I pass it through this way so that, at the lowest level, it is isolated from Proxmox.
agent: 0
balloon: 2048
boot: order=scsi0;ide2
cores: 2
cpu: host
hostpci0: 0000:0e:00
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=8.1.5,ctime=1712721471
name: OPNsense
net0: e1000e=[Redacted],bridge=vmbr100
net1: virtio=[Redacted],bridge=vmbr100
numa: 0
onboot: 1
ostype: l26
parent: After24_7_9
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=30G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=25c81be7-86b3-4923-8b40-3c4944d43b7a
sockets: 2
startup: order=1
tags: opnsense
vga: std
vmgenid: Redacted
I also was not able to get it working consistently with UEFI and had to stay with SeaBIOS, since sometimes the system failed to boot entirely. Not sure why that was, but I have simply stuck with SeaBIOS.
Quote: This is on PvE v8.2.4 and OPNsense v24.1 as I haven't been brave enough to upgrade to v24.7 yet.
I have been more aggressive than normal about updating OPNsense, since I have been hoping for a while now that an update might fix my issue. I tend to side with everyone who has said this is a Proxmox issue and not an OPNsense issue, since I have used OPNsense with Virtio at work before, but for the life of me I can't figure out why it would behave like this.
Quote: Just so I am clear on your issue.
Vesalius, you have characterized the situation perfectly in your breakdown.
Quote: Have you tested to be sure that a Proxmox VirtIO network device functions at expected speeds on its own, outside this bridge?
Yes, and even running by itself, the activated connection does not seem to register correctly. In the Interfaces menu I cannot see any speed or duplex settings. However, running ifconfig shows that the interface is active and running at the correct speed, yet it is not reachable on its own by anything more than a ping.
If I do it right I will have attached pictures comparing the vtnet0 (Virtio) and em0 (E1000) interfaces from within OPNsense GUI and CLI.
Out of interest, how much RAM is OPNsense reporting as installed on the dashboard?
I can see you have balloon set at 2048 with 8192 for the total, which is much like what I wanted, but that never worked for me: OPNsense only ever reported 2048 and would never go above it. I had to set balloon to 0, i.e. disable it, before it could see all 8GB. Again, not sure yet if this is something that 24.7 may have fixed with its newer OS - going to give the upgrade a go this coming weekend.
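For reference, disabling ballooning is a one-liner on the Proxmox side (the VM ID is just an example):
qm set 100 --balloon 0    # the guest then sees the full memory allocation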
Quote: Out of interest, how much RAM is OPNsense reporting as installed on the dashboard?
Dashboard shows 8155MB of memory; not sure why the discrepancy between that and 8192. Also, ballooning seems to have always worked for me.
I'm going to go ahead and mark this as closed, since I have worked around the issue by changing my setup. I ended up rerouting things to use my Proxmox management port as the LAN port for my network, since externally I'm bottlenecked to gigabit anyway. Considering my management port isn't out-of-band, I'm not really getting any great use out of a dedicated management port, so it might as well do double duty; a bit of extra CPU overhead won't hurt my system.
Not terribly happy I couldn't figure out what was going on but for now my VMs, local network, and WAN connection aren't bottlenecked so this works for my purposes. :-\