Messages - Crycaeon

#1
I'm gonna go ahead and mark this as closed since I have circumvented the issue by changing my setup. I ended up rerouting things so that my Proxmox management port now doubles as my LAN port, since externally I'm bottlenecked to gigabit anyway. My management port isn't out of band, so I wasn't getting much benefit from keeping it dedicated; it might as well do double duty, and a bit of extra CPU overhead won't hurt my system.
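For anyone wanting to replicate this, here's a minimal sketch of the kind of /etc/network/interfaces stanza I mean on the Proxmox side; the interface name and addresses are hypothetical placeholders, not my actual config:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24          # hypothetical management/LAN address
        gateway 192.168.1.1
        bridge-ports eno1               # eno1 = former dedicated management NIC, now also carrying LAN
        bridge-stp off
        bridge-fd 0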

Not terribly happy that I couldn't figure out what was going on, but for now my VMs, local network, and WAN connection aren't bottlenecked, so this works for my purposes.  :-\
#2
Quote: Out of interest, how much RAM is OPNsense reporting as installed on the dashboard?

The dashboard shows 8155MB of memory. Not sure why there's a discrepancy between that value and 8192. Also, ballooning seems to have always worked for me.
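The gap is presumably just memory the firmware/kernel reserves before FreeBSD counts it. For anyone wanting to check the same thing, FreeBSD exposes the raw numbers via sysctl; the values below are illustrative, not a verbatim capture:

root@OPNsense:~ # sysctl hw.realmem hw.physmem
hw.realmem: 8589934592     # the full 8192 MB presented by QEMU
hw.physmem: 8551186432     # what's left after firmware/kernel reservations (~8155 MB)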
#3
Quote: Perhaps you can post the VM config from Proxmox so we can see exactly how it's been configured?

A great idea; I'm not sure why I didn't do that in the first place. I will note that the hostpci0 device I pass through is the base address of the network card used for my physical connections; within OPNsense it is further enumerated into its 2 component interfaces, which give me my physical WAN and LAN connections. I pass it through this way so that, at the lowest level, it is isolated from Proxmox.

agent: 0
balloon: 2048
boot: order=scsi0;ide2
cores: 2
cpu: host
hostpci0: 0000:0e:00
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=8.1.5,ctime=1712721471
name: OPNsense
net0: e1000e=[Redacted],bridge=vmbr100
net1: virtio=[Redacted],bridge=vmbr100
numa: 0
onboot: 1
ostype: l26
parent: After24_7_9
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=30G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=25c81be7-86b3-4923-8b40-3c4944d43b7a
sockets: 2
startup: order=1
tags: opnsense
vga: std
vmgenid: [Redacted]
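For reference, since 0000:0e:00 passes the whole device rather than a single function, OPNsense enumerates both ports itself; from the OPNsense shell that looks something like this (output is illustrative, the guest-side PCI addresses will differ):

root@OPNsense:~ # pciconf -lv | grep -A2 '^ix'
ix0@pci0:14:0:0:  class=0x020000 vendor=0x8086 device=0x1563
    vendor     = 'Intel Corporation'
    device     = 'Ethernet Controller 10G X550T'
ix1@pci0:14:0:1:  class=0x020000 vendor=0x8086 device=0x1563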


I also was not able to get it working consistently with UEFI and had to stay with SeaBIOS, since the system sometimes failed to boot entirely. Not sure why that was, but I have simply stuck with SeaBIOS since.
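For anyone testing the same thing, switching firmware is a one-liner on the Proxmox host (VM ID 100 matches the config above; note OVMF additionally wants an EFI disk added):

qm set 100 --bios seabios    # or: qm set 100 --bios ovmf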

Quote: This is on PvE v8.2.4 and OPNsense v24.1 as I haven't been brave enough to upgrade to v24.7 yet.

I have been more aggressive than normal about updating OPNsense, since I have been hoping for a while now that an update might fix my issue. I lean toward agreeing with everyone who has said this is a Proxmox issue and not an OPNsense issue, since I have used OPNsense with Virtio at work before, but for the life of me I can't figure out why it would behave like this.

Quote: Just so I am clear on your issue.

Vesalius, you have characterized the situation perfectly in your breakdown.

Quote: Have you tested to be sure that a Proxmox Virtio network device in Proxmox functions at expected speeds alone, outside this bridge?

Yes. Even running by itself, the activated connection does not seem to register correctly. In the interfaces menu I cannot see any speed or duplex settings. However, ifconfig shows that the interface is active and running at the correct speed, yet it is not reachable by anything more than a ping.

If I've done it right, I will have attached pictures comparing the vtnet0 (Virtio) and em0 (E1000) interfaces from within the OPNsense GUI and CLI.
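In case the attachments don't come through, the CLI side of the comparison looks roughly like this (illustrative sketch, not a verbatim capture):

root@OPNsense:~ # ifconfig vtnet0
vtnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        media: Ethernet autoselect (10Gbase-T <full-duplex>)    # CLI reports a speed...
        status: active

...while in the GUI, em0 gets a Speed and Duplex selector and vtnet0 gets none.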
#4
Quote: Your Proxmox version is not the newest one:

What Proxmox shows in the update output is the kernel version; I mistakenly called that out as the version of Proxmox itself. Below is the proper command showing that I am running the latest version of Proxmox.

root@X:~# pveversion
pve-manager/8.2.4/faa83925c9641325 (running kernel: 6.8.8-4-pve)


Appreciate the guide. I will take a look at it when I have the time after work to see if I've missed anything.
#5
Quote: There's nothing wrong with OPNsense, it supports Virtio networking and displays interface speeds by default. So fix, reconfigure, upgrade your Proxmox host, and OPNsense will follow.

So I am not averse to saying that Proxmox is the issue. However, I am not certain what could be causing it. I am on the latest version of Proxmox (6.8.8-4).

Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
  proxmox-kernel-6.5.13-5-pve-signed proxmox-kernel-6.8.4-2-pve-signed
  proxmox-kernel-6.8.4-3-pve-signed proxmox-kernel-6.8.8-1-pve-signed
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Your System is up-to-date


Where I am confused is that I can create a fresh install of OPNsense: it's easy enough to restore from the backup file and just assign my interfaces later. However, the fresh install still has this issue of being unable to work with Virtio, while a fresh install of pfSense does not seem to have this problem and registers the Virtio interface correctly (though I do not want to switch to pfSense, as I have been burned by corporate product maintainers before and have heard similar things from others about Netgate).

To be clear, I am not asking for Proxmox help on the OPNsense forums; that wouldn't make any sense. I am just asking if there is any additional troubleshooting that I am missing. I will go back and reexamine my interfaces, on both my host and my VMs, and see if there are any differences.
#6
Quote: Step 2 is definitely not enough. Frankly sounds like you missed steps 5/6 and everything is blocked on the bridge member interfaces.

Again, I should've been clearer. The bridge has already been set up and runs fine using the em0 interface (i.e. the E1000 driver); I stepped through the entire guide to get it running. For testing I created another interface, vtnet0, and I test by removing em0 as a member of bridge0 and adding vtnet0 in its place. As I understand it, this fully turns em0 off as far as the bridge is concerned.
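In FreeBSD terms, the swap is equivalent to this (I actually do it through the GUI, but the CLI form makes it explicit):

ifconfig bridge0 deletem em0     # drop the E1000 member
ifconfig bridge0 addm vtnet0     # add the Virtio member in its place
ifconfig bridge0                 # confirm the new member list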

Quote: - If you use one physical passthrough for both WAN and LAN, do they have VLANs configured and the breakout is done externally?

The Intel X550-T2 has 2 physical interfaces (that's what the T2 stands for). I have passed through the full card and use one full physical interface each for WAN and LAN. I don't have any VLANs configured to separate WAN from LAN, or my LAN from my VMs.

Quote: - Since you also use one virtio adapter, to which bridge and thus, to which other NIC on the Proxmox host is that connected? How is that connected to your switching topology (i.e. which VLAN)?

My bridge0 is an easy way to share services (DHCP, DNS, NTP) between the physical LAN connection to my physical machines and the virtual LAN connection to my other VMs on Proxmox. The only VLAN that could be involved, now that I think about it, would be within Proxmox, as all my VMs (and my existing em0 connection within OPNsense) are broken out on vmbr1 as opposed to the default vmbr0.

Quote: As to reaching 10Gbps with it, that is a different matter.

Hitting exactly 10Gbps is not really the goal; the actual goal is being able to interface with the VMs from my physical systems at greater than 1Gbps.

I should note that this problem existed before I ever used a bridge, back when I was trying to run this connection as a standalone OPT1 connection with the Virtio driver.

Quote: I expect you will be able to solve the problem of connectivity at present. Many people use virtio successfully.

That's why I ultimately made my post; I figure I'm just doing something stupid when configuring my setup. I am relatively savvy at getting around a Linux CLI (and for the most part the OPNsense GUI does a good job of exposing relevant logs), but FreeBSD seems just different enough that I am probably missing something when it comes to checking logs and finding the issue.
#7
Following the virtio(4) man page (https://man.freebsd.org/cgi/man.cgi?virtio(4)), I edited /boot/loader.conf to ensure that virtio, virtio_pci, and virtio_net are loaded on my system (though the documentation I am seeing says Virtio should be part of the GENERIC kernel and loaded automatically).
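Concretely, the /boot/loader.conf lines are the standard loader tunables (likely redundant on recent FreeBSD, where these drivers are built into GENERIC):

virtio_load="YES"
virtio_pci_load="YES"
virtio_net_load="YES"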

Each time I set my OPT1 connection to my VMs to use vtnet0 I lose all connectivity.

As a bit more explanation, I use a bridge0 device to connect my physical and virtual interfaces. It makes it easier to run a single DHCP, DNS, etc. setup on the bridge device that then filters down to its "child" connections (the physical connector and the virtual connector to the VMs). From what I know this shouldn't ever cause a full loss of connectivity, but I thought I'd mention it in case it was relevant. Interfaces are added following step 2 of the LAN bridge guide in the OPNsense docs (https://docs.opnsense.org/manual/how-tos/lan_bridge.html).
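If I remember that how-to right, it also has you set these tunables (under System > Settings > Tunables) so filtering happens on the bridge rather than its members; worth double-checking against the doc:

net.link.bridge.pfil_member=0    # don't run the packet filter on the member interfaces
net.link.bridge.pfil_bridge=1    # run the packet filter on bridge0 itself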
#8
What module would that be? Looking in either packages or plugins within OPNsense, the only thing I can see related to KVM or Proxmox is the qemu-guest-agent plugin.

My understanding is that the virtio driver should be part of the kernel; I guess that's the module you mean. I'll look into it.
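A quick way to check from the OPNsense shell whether the driver is present, compiled in or loaded (a sketch; output obviously depends on the system):

root@OPNsense:~ # kldstat -v | grep -c virtio    # non-zero if the driver is present
root@OPNsense:~ # sysctl dev.vtnet               # only populated if a vtnet device has attached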
#9
I knew when I was posting that I would probably need to clarify, since it's really easy to have something you've set up and can look at, but a lot harder to properly communicate that setup to others. My main issue is OPNsense failing to interface with the Virtio driver at any speed (not even 100 or 10 Mbps).

For more context:
The OPNsense VM has an Intel X550-T2 passed through to it via IOMMU. That runs perfectly fine at the full 10Gbps. The physical card provides WAN on one port and LAN on the other port to my physical machines. All of my physical devices can communicate with an intermediate Zyxel 10Gbps switch, and with OPNsense, at 10Gbps (8+ Gbps tested with iperf3 and real-world demonstrations using LanCache).
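For reference, the iperf3 runs were along these lines (the address and flag values here are placeholders, not my actual hosts):

iperf3 -s                            # on a physical machine behind the X550
iperf3 -c 192.168.1.10 -P 4 -t 30    # from another host: 4 parallel streams for 30 seconds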

Separate from this is a virtual interface I have added, OPT1, where I am trying to use the Virtio driver (the device shows up as vtnet0 in OPNsense) but have had to fall back to the E1000 driver because Virtio does not work correctly. All of my VMs use corresponding Virtio-based network connectors, and none of them have a problem reading their interfaces as capable of 10Gbps; the problem appears to be OPNsense not recognizing something about the Virtio connection. I understand that the VMs are separate from OPNsense, and using iperf3 I can benchmark around 8.2Gbps between them when I give them enough threads and RAM. There is no slowdown between my other VMs, and a connection between any 2 VMs should be the same as between any VM and OPNsense.

As for using E1000: while I understand that it could, theoretically, be pushed faster, I don't really have the interest in diving into that lower driver/emulation layer. (Part of why I bought the Intel NIC I did is that its driver support is native to FreeBSD, so I wouldn't have to deal with the stability and security issues of bodging drivers for the 4x1Gbps card I had previously been using back when my router ran the Linux-based IPFire.) When OPNsense views the E1000 connection it properly interprets it as capable of 1Gbps, and it can send traffic and assign IPs to my VMs at that speed.

#10
I have an OPNsense VM running on my Proxmox server that handles all of my networking (DHCP using Kea, DNS using Dnsmasq, port forwarding, NTP, etc.). I use an Intel X550-T2 to handle all of my physical connections, but within the Proxmox server itself I also need to share networking with my VMs.

My issue is that when I try to connect my VMs using the Virtio networking driver (I have also tried VMXNET3, with similar results), OPNsense does not seem to recognize the connection. I have verified that the firewall and all other network controls Proxmox has are turned off. OPNsense sees the network device, but when I look at the virtual interface I am actually missing the "Speed and Duplex" section.
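For what it's worth, the per-NIC firewall toggle lives on the net lines of the VM config, so this is an easy sanity check on the host (VM ID and MACs below are placeholders); the absence of firewall=1 means the PVE firewall is off for that NIC:

root@X:~# qm config 100 | grep '^net'
net0: e1000e=XX:XX:XX:XX:XX:XX,bridge=vmbr100
net1: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr100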

I have searched the forums and seen people saying this has not been an issue since 19.X, when the driver was incorporated into FreeBSD. I have managed to limp along just fine using the E1000 driver, which retains full functionality, but my workflows need 10Gbps connections to files stored and managed by several VMs.

I have tried clean installs, and even before I restore my settings no Virtio connections will work. As a sanity check I created a pfSense VM and it worked immediately at 10Gbps, so I know it's not a problem with Proxmox or FreeBSD itself.

Any help or advice would be appreciated.