Hyper-V and SR-IOV support in OPNsense?

Started by bandit8623, January 18, 2025, 10:40:13 PM

January 18, 2025, 10:40:13 PM Last Edit: January 19, 2025, 12:25:25 AM by bandit8623
Hello,

I have been tinkering with virtualizing my OPNsense setup. Using Hyper-V I can pass a NIC through directly to the VM and that works, but then I can't take snapshots and such.

So I tried to set up SR-IOV by creating two SR-IOV-enabled virtual switches and attaching them to my OPNsense VM. The VM sees the ports but doesn't load the proper driver, which in turn makes Hyper-V report SR-IOV as not operational.


In OPNsense, both Ethernet interfaces show up as:
dev.hn.0 Hyper-V Network Interface
dev.hn.1 Hyper-V Network Interface


If I look at the boot messages, I see this for both NICs:
pci0: <network, ethernet> at device 2.0 (no driver attached)
pci1: <network, ethernet> at device 2.0 (no driver attached)
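To see exactly which device is sitting there without a driver, pciconf should help (from memory, untested):

  # driverless devices show up as "noneN" in the first column
  pciconf -lv | grep -A4 ^none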

https://ibb.co/x556P5g

Does anyone have ideas on how to get this to work? Is the ixv driver included with OPNsense? Maybe it just needs to be enabled?

SR-IOV works fine with other operating systems that have driver support; in Hyper-V Manager, SR-IOV shows as operational once the driver inside the VM supports it.

https://www.intel.com/content/www/us/en/download/645984/intel-network-adapter-virtual-function-driver-for-pcie-10-gigabit-network-connections-under-freebsd.html
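For what it's worth, this is what I plan to try next; I have no idea yet whether the module even ships with OPNsense, so treat it as a sketch:

  # check whether the FreeBSD Intel VF driver module is present
  ls /boot/kernel/if_ixv.ko
  # try loading it manually
  kldload if_ixv
  # to load it at every boot, add a loader tunable
  # (System > Settings > Tunables, or /boot/loader.conf.local):
  if_ixv_load="YES"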

Thanks in advance.

SR-IOV should have the same impact on the VM functionality as passing through the card, so I'm not sure if that is going to help you.

Quote from: bimbar on January 21, 2025, 01:23:48 PM
SR-IOV should have the same impact on the VM functionality as passing through the card, so I'm not sure if that is going to help you.
Yes, apart from being able to snapshot; snapshots are not going to work with passthrough.

Hello bandit8623, did you find out how the ixv driver can be loaded on a Hyper-V virtualized OPNsense?
I'm having the same problem...

Quote from: pchealing on February 14, 2025, 04:11:44 PM
Hello bandit8623, did you find out how the ixv driver can be loaded on a Hyper-V virtualized OPNsense?
I'm having the same problem...
No, sorry, I'm passing through for now until I figure it out. Sorry for the slow response; I don't get email notifications on here for some reason.

Passthrough is working well, I just can't snapshot.

Quote from: pchealing on February 14, 2025, 04:11:44 PM
Hello bandit8623, did you find out how the ixv driver can be loaded on a Hyper-V virtualized OPNsense?
I'm having the same problem...
After a reboot, my VM in Hyper-V stopped booting today; it was after a Server 2025 update. All my other VMs boot fine though. Annoying.

Honestly, Microsoft has never been interested in making Hyper-V play well with FreeBSD. Hyper-V is fine for tinkering, but in my experience that is the reason it is hardly ever used in industry.
Don't bother with Hyper-V for anything but clicking around to learn some concepts. You might as well learn with something you can use and rely on: Proxmox, Xen, etc. Even VirtualBox would be better for this.

June 03, 2025, 06:23:04 AM #7 Last Edit: June 03, 2025, 06:25:53 AM by bandit8623
Quote from: cookiemonster on June 02, 2025, 11:04:13 AM
Honestly, Microsoft has never been interested in making Hyper-V play well with FreeBSD. Hyper-V is fine for tinkering, but in my experience that is the reason it is hardly ever used in industry.
Don't bother with Hyper-V for anything but clicking around to learn some concepts. You might as well learn with something you can use and rely on: Proxmox, Xen, etc. Even VirtualBox would be better for this.

Proxmox doesn't support hardware RAID, which is still faster. I just loaded up my second bare-metal system for OPNsense. BTW, Hyper-V was working fine for six months; it was most likely a Server 2025 update that broke it. That's what I get for using the latest and greatest :) Some things Proxmox is still not as good at, though.

To my knowledge, Proxmox uses Linux's LVM (which I understand, but personally don't like much). So you can have software RAID similar to ZFS, but in ZFS-land that means replacing RAID cards with HBAs; I don't know if it's the same with LVM. But maybe you mean something else when you say Proxmox doesn't support hardware RAID.

Proxmox does support hardware RAID if there is a driver for the controller in question. Virtual volumes created by a RAID controller are just regular SCSI disks to the OS.

Proxmox also supports ZFS.

Now if you meant whether both can be managed in the UI, then the answer is no. You need to use the built-in RAID controller setup tools and the command line for ZFS.

But it does work with both.
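As a rough sketch of the ZFS side (the pool name, disks, and storage ID are just examples):

  # create a mirrored pool on two spare disks
  zpool create tank mirror /dev/sdd /dev/sde
  # register it as VM storage in Proxmox; the storage ID is arbitrary
  pvesm add zfspool tank-zfs --pool tank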

I'd challenge the claim that "hardware RAID is faster". Also, there is always vendor lock-in: if the RAID controller fails years after deployment and you cannot get the same model, or at least the same brand/series, your data is toast. A ZFS pool can always be read by any current Linux, FreeBSD, or even macOS.
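Which makes migration as simple as something like (pool name assumed):

  zpool export tank    # on the old machine
  zpool import tank    # on the new machine, any OS with OpenZFS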

Proxmox can use many filesystem types as storage layers: LVM, ZFS, Ceph, and also XFS, BTRFS and EXT4. And you can put most, if not all, of those on a hardware RAID block device.

Actually, Proxmox is Debian-based, so anything that is supported there will do just fine. Been there, done that.

And Patrick is right: With the advent of ZFS, I finally replaced all of my HBA raid adapters.


Quote from: meyergru on June 04, 2025, 12:21:25 PM
Proxmox can use many filesystem types as storage layers: LVM, ZFS, Ceph, and also XFS, BTRFS and EXT4. And you can put most, if not all, of those on a hardware RAID block device.

Actually, Proxmox is Debian-based, so anything that is supported there will do just fine. Been there, done that.

And Patrick is right: With the advent of ZFS, I finally replaced all of my HBA raid adapters.



Don't confuse HBA and RAID; they're two different things.
For NVMe, I agree software RAID is really good.

MegaRAID for LSI/Broadcom is a giant mess in Proxmox. It looks like HBA mode is the only thing that works, which I don't want for my setup; otherwise I'd take another look.

https://forum.proxmox.com/threads/lsi-megaraid-9341-4i-controller-supported-in-proxmox-ve-8.138728/
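If anyone wants to check what the Proxmox kernel does with such a controller, something along these lines should tell (untested on my side):

  # show the controller and which kernel driver claimed it
  lspci -k | grep -i -A3 'raid'
  # megaraid_sas is the usual module for these cards; try loading it if absent
  modprobe megaraid_sas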


Quote from: Patrick M. Hausen on June 04, 2025, 12:17:15 PM
Proxmox does support hardware RAID if there is a driver for the controller in question. Virtual volumes created by a RAID controller are just regular SCSI disks to the OS.

Proxmox also supports ZFS.

Now if you meant whether both can be managed in the UI, then the answer is no. You need to use the built-in RAID controller setup tools and the command line for ZFS.

But it does work with both.

I'd challenge the claim that "hardware RAID is faster". Also, there is always vendor lock-in: if the RAID controller fails years after deployment and you cannot get the same model, or at least the same brand/series, your data is toast. A ZFS pool can always be read by any current Linux, FreeBSD, or even macOS.

There is no worry about replacing the card. LSI/Broadcom cards work with each other and see foreign configs, and there are hundreds of older cards still out in the wild. I've taken a RAID config from a SAS2 card and put it in a SAS3 card; it sees the config just fine and works.
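With storcli that import is roughly (controller 0 assumed):

  storcli /c0/fall show      # preview the foreign configuration
  storcli /c0/fall import    # adopt it on the new card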

I do not think I confused anything: I wrote "HBA raid adapters" (I should have written "raid HBAs") and meant Areca ARC1280 cards, which are in fact RAID host bus adapters. And I can assure you, you will have a hard time replacing those with software or other-brand adapters.

They worked fine in their time, but time moves on, and ZFS has merits way beyond RAID, namely compression, checksumming and snapshots, plus the ability to work across OSes and HBA brands.
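For example, something like (pool/dataset names are placeholders):

  zfs set compression=lz4 tank/data      # transparent compression
  zfs snapshot tank/data@before-upgrade  # instant, cheap snapshot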

The checksumming feature especially is really important: I once had an HDD that wrote defective data without reporting it. It seemed to have a defective RAM buffer. In case you did not know: classic RAID does not help with that, while ZFS does! It also helps with today's huge drives, where errors can statistically go unnoticed by the disk itself.
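That is exactly what a scrub is for; roughly (pool name assumed):

  zpool scrub tank      # read every block and verify checksums
  zpool status -v tank  # afterwards: shows detected/repaired errors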

We dropped hardware RAID between 2009 (FreeBSD 8) and 2012 (FreeBSD 9). Not looking back. I run >100 physical machines in 5 data centres, all with ZFS.