OPNsense on KVM (VirtIO)?

Started by park0kyung0won, July 25, 2019, 10:27:25 PM

Hello,
I've heard that BSD has had problems with the Linux KVM VirtIO network driver implementation.
Is this still a problem today?
Would it be okay if I just turned off the offload functionality in the OPNsense VM?
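Concretely, what I have in mind is something like this from the shell (vtnet0 as an example interface name; the same toggles exist in the GUI under Interfaces > Settings):

ifconfig vtnet0 -rxcsum -txcsum -tso -lro   # disable checksum offload, TSO and LRO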

19.7 should have fixed most of the VirtIO problems... but I haven't tested it yet.

I run OPNsense in an Unraid VM (Q35) with 8 virtual NICs (VirtIO).

On 19.1 only Q35-2.6 worked; otherwise no NICs were detected in OPNsense.
Since the update to 19.7, no NICs are detected at all anymore!!

For now I'm running a backup on 19.1 again...
HELP?? Thanks a lot!!

Can anyone confirm that OPNsense 19.7 is working on Proxmox/KVM with the VirtIO drivers? Or does E1000 need to be used?
I'm doing a complete migration from ESX to Proxmox later today and don't want to hit any unexpected issues, since the OPNsense VM will be the first one brought up in the new environment. Thanks.

Quote from: unraider on July 27, 2019, 05:17:46 AM
I run OPNsense in an Unraid VM (Q35) with 8 virtual NICs (VirtIO).

On 19.1 only Q35-2.6 worked; otherwise no NICs were detected in OPNsense.
Since the update to 19.7, no NICs are detected at all anymore!!

For now I'm running a backup on 19.1 again...
HELP?? Thanks a lot!!

Same here. I'm on Unraid, but I have two Intel i350 NICs passed through to the VM. The exact same configuration worked on 19.1, but after upgrading to 19.7 the NICs aren't detected anymore.

I see a lot of people here with a setup similar to mine.
If you have Intel i350 NICs you can use the SR-IOV function
and pass VFs to OPNsense instead.
SR-IOV + KVM works well with OPNsense,
but VLANs are somewhat tricky to set up.
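On the host side the rough idea is this (the interface name enp1s0f0 and VLAN 10 are just examples):

echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs   # create 4 VFs on the i350 PF
ip link set enp1s0f0 vf 0 vlan 10                      # tag VF 0 with VLAN 10 on the host
# then pass the VF (its own PCI device) through to the OPNsense VM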

I have OPNsense 19.7 firewalls running on Proxmox/KVM using Ceph disks. My settings, also sketched as qm commands after this list:

* I tried the "nano" image and it sort of worked, but I had some disk errors, so I decided to install from the ISO instead.

* SCSI Controller: VirtIO SCSI Single 

* Hard disk. Bus/Device: SCSI (NOT VirtIO Block, though that may work too). I also enable Writeback cache, Discard, and IO thread.

* Network. On the bare-metal Proxmox host I created two bridge interfaces, vmbr0 and vmbr1, which go to the WAN and LAN hardware; in OPNsense these become the vtnet0 and vtnet1 interfaces. Network card Model: VirtIO (paravirtualized).

* Processor: kvm64

* OS Type: Other  (not sure this is needed; Linux, Windows, and Solaris are the other options)

* Qemu Agent: Disabled (would be nice to enable, but I don't think there is a qemu-guest-agent for OPNsense).

* pfsync/HA works. :)
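For reference, the same settings as qm commands; the VM ID (100) and the storage/volume names are placeholders:

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback,discard=on,iothread=1
qm set 100 --net0 virtio,bridge=vmbr0   # WAN -> vtnet0
qm set 100 --net1 virtio,bridge=vmbr1   # LAN -> vtnet1
qm set 100 --cpu kvm64 --ostype other --agent 0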

Quote from: park0kyung0won on July 27, 2019, 09:43:17 PM
I see a lot of people here with a setup similar to mine.
If you have Intel i350 NICs you can use the SR-IOV function
and pass VFs to OPNsense instead.
SR-IOV + KVM works well with OPNsense,
but VLANs are somewhat tricky to set up.

As I said, I'm passing the NICs through directly to the VM, and it doesn't work.

I have OPNsense running in Proxmox with VirtIO and haven't seen any issues...

July 28, 2019, 10:28:21 AM #9 Last Edit: July 28, 2019, 10:41:57 AM by rantwolf
Hi all,

I'm running my OPNsense on a QOTOM Q555G6 inside Proxmox.
It runs very smoothly, now on 19.7, previously on 19.1.
Here is the short config:


bios: ovmf
bootdisk: virtio0
cores: 3
cpu: host,flags=+pcid;+spec-ctrl
efidisk0: local:200/vm-200-disk-1.qcow2,size=128K
machine: q35
memory: 3072
name: opnsense
net0: virtio=XX:76:3C:XX:XX:XX,bridge=vmbr1
net1: virtio=XX:DA:42:XX:XX:XX,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
parent: uefi
sata2: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=xxxxxxxx-84b4-4a47-xxxx-c3cdf22xxxxx
sockets: 1
startup: order=1
virtio0: local:200/vm-200-disk-0.qcow2,size=32G
vmgenid: xxxxxxxx-2676-48ff-xxxx-b14f644xxxxx
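
Proxmox keeps this per-VM config on the host in /etc/pve/qemu-server/200.conf (200 being the VM ID); you can also print it with:

qm config 200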


Quote from: l0rdraiden on July 28, 2019, 12:43:04 AM
Quote from: park0kyung0won on July 27, 2019, 09:43:17 PM
I see a lot of people here with a setup similar to mine.
If you have Intel i350 NICs you can use the SR-IOV function
and pass VFs to OPNsense instead.
SR-IOV + KVM works well with OPNsense,
but VLANs are somewhat tricky to set up.

As I said, I'm passing the NICs through directly to the VM, and it doesn't work.
If you want to pass a NIC through, you have to enable the IOMMU via a kernel boot parameter:

intel_iommu=on      # Intel only
iommu=pt iommu=1    # AMD only
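On a Proxmox host these go into GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, followed by update-grub and a reboot. A minimal Intel example:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"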

I am doing the passthrough correctly; it works on 19.1, but if I install 19.7 the NICs are not detected at all.

Q35 does not detect NICs (at least with 4.0+). You'll need to use i440fx, which works fine.

i440fx is for Windows machines.

Q35 with passthrough works with pfSense 2.4 and OPNsense 19.1, so I don't think it's a problem with my configuration.

Quote from: l0rdraiden on July 29, 2019, 08:29:21 AM
i440fx is for Windows machines.

Q35 with passthrough works with pfSense 2.4 and OPNsense 19.1, so I don't think it's a problem with my configuration.

"For windows machines" yet works fine with Opnsense 19.7.

I've never been able to get Q35 4.0+ working with OPNsense; it never detected the NICs. If you managed to (and it's not Q35 3.0), then congrats.

Unfortunately, this is still the current state today:

i440fx - VirtIO NICs working
Q35 - VirtIO NICs not working
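
So for now the workaround is to switch the machine type, e.g. (VM ID 200 as an example):

qm set 200 --machine pc           # i440fx, the Proxmox default
qm set 200 --machine pc-q35-2.6   # or pin the older Q35 that reportedly worked on 19.1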