OPNsense Forum » Archive » 18.7 Legacy Series » Hyper-V NIC PCI Passthrough (Not VSwitch)
Topic: Hyper-V NIC PCI Passthrough (Not VSwitch) (Read 7215 times)
DanMc85 (Jr. Member, Posts: 68, Karma: 4)
« on: November 06, 2018, 03:14:01 am »
Hello all,
I was just wondering if anyone has attempted setting up OPNsense using real PCI Express passthrough of the NIC to bypass the software virtual switch in Hyper-V environments. This should let OPNsense behave as if it were running on a bare-metal box, instead of going through the host Windows OS and virtual switch. That would be great for things like VLANs, intrusion detection, and other plug-ins of that nature that are better suited to real NIC access.
I tried to do it this evening, but ran into an error that may be driver related. I am not entirely sure, though, so maybe someone can chime in with ideas.
FreeBSD 11.1 fully supports this type of PCI Passthrough/DDA on a Windows Server 2016+ Host OS w/Hyper-V:
https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/Supported-FreeBSD-virtual-machines-on-Hyper-V
"With Windows Server 2016 administrators can pass through PCI Express devices via the Discrete Device Assignment mechanism. Common devices are network cards, graphics cards, and special storage devices. The virtual machine will require the appropriate driver to use the exposed hardware. The hardware must be assigned to the virtual machine for it to be used."
I used this as a guide, so I don't take credit for the base script:
https://blogs.technet.microsoft.com/heyscriptingguy/2016/07/14/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/
Using that guide as a base and making modifications...
The Windows Server 2016 PowerShell commands I entered were the following:
$vmName = 'OPNSense Firewall'
$vm = Get-VM -Name $vmName
$dev = "PCI\VEN_8086&DEV_1521&SUBSYS_50018086&REV_01\
A0369#############
"
^^^ The device instance path can be found in Device Manager - Properties - Details - "Device instance path" property.
(# = digits omitted for privacy, just in case)
Disable-PnpDevice -InstanceId $dev -Confirm:$false
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev).Data[0]
Dismount-VmHostAssignableDevice -LocationPath $locationPath -Force -Verbose
Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
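In case anyone wants to undo the assignment and hand the NIC back to the host, the reverse of the steps above should look like this (I haven't verified these exact invocations on my box, so double-check the cmdlet names against your Hyper-V PowerShell module before relying on them):

# Detach the device from the VM, remount it on the host, then re-enable it
Remove-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
Mount-VMHostAssignableDevice -LocationPath $locationPath -Verbose
Enable-PnpDevice -InstanceId $dev -Confirm:$false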
Once this was done, the NIC appeared in the OPNsense console immediately and persisted across reboots. However, due to an issue somewhere in the NIC, FreeBSD, OPNsense, or the kernel driver, I was unable to use this Intel I350 NIC port as a direct PCI passthrough WAN port for my testing purposes.
I got the following console output errors on OPNsense:
igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> at device 0.0 on pci0
igb0: Unable to map MSIX table
igb0: Using an MSI interrupt
igb0: Setup of Shared code failed
device_attach: igb0 attach returned 6
Has anyone seen these igb0 errors, or have any information on resolving them for the Intel I350 (with the latest firmware)?
I'm not sure whether this is Hyper-V passthrough related or an OPNsense/FreeBSD compatibility/driver issue.
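One thing I may try next, since the log shows the MSI-X table mapping failing before the driver falls back to MSI: forcing the legacy igb driver off MSI-X entirely via a loader tunable. I'm assuming the old igb tunable still applies on the FreeBSD 11.1 base that OPNsense 18.7 uses (not verified, and it may not help if "Setup of Shared code failed" is a separate problem):

# /boot/loader.conf - disable MSI-X in the igb driver (untested idea)
hw.igb.enable_msix="0"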
FYI: Firmware being used on Intel I350 (Dell OEM) Version 18.5.18:
https://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=3XJH0
Thanks in advance for any assistance or ideas!
- Dan
« Last Edit: November 06, 2018, 03:48:13 am by DanMc85 »