Bhyve on OPNsense for virtualization in 2023

Started by pinako, March 04, 2023, 10:17:25 PM

March 04, 2023, 10:17:25 PM Last Edit: March 04, 2023, 10:26:38 PM by pinako
Greetings. Following in the footsteps of slipperyduck & co., I managed to convert my OPNsense router into a virtualization host, and I'd like to report my success. One could debate the merits of running OPNsense on bare metal to host virtual machines, versus running OPNsense itself as a virtual machine alongside other VMs, versus using separate physical machines. Suffice it to say that there may be as-yet-unknown security implications of using OPNsense as a virtualization host, because this is an uncommon setup.

Don't use this in production. Do try it in your lab so we can exercise this code path and work out the kinks ;)

Anyway, it all begins with installing the popular vm-bhyve manager and the required GRUB loader from the FreeBSD repository:

# keep OPNsense's own pkg from being replaced by the FreeBSD repo's version
pkg lock -y pkg
# temporarily enable the stock FreeBSD repository
sed -i '' 's/enabled: no/enabled: yes/' /usr/local/etc/pkg/repos/FreeBSD.conf
pkg install -y vm-bhyve grub2-bhyve
# disable the repository again and release the lock
sed -i '' 's/enabled: yes/enabled: no/' /usr/local/etc/pkg/repos/FreeBSD.conf
pkg unlock -y pkg


Next, we prepare vm-bhyve (I'm using ZFS here):

# dataset that will hold the VM images, ISOs and templates
zfs create zroot/vm
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vm"
vm init
# seed the sample guest templates shipped with vm-bhyve
cp /usr/local/share/examples/vm-bhyve/* /zroot/vm/.templates/


Here, we return to OPNsense's web UI to configure the network bridge. To make things easy, we can create a bridge for the LAN and then (later) use the same bridge for the VMs. My bridge is called bridge0 (the device name shown in the Device field).

In addition, I recommend doing packet filtering on the bridge interface instead of the individual ports, so we can connect any number of VMs without reconfiguring the firewall. In System > Settings > Tunables, set net.link.bridge.pfil_member=0 and net.link.bridge.pfil_bridge=1 (sysctl works too, but those settings won't survive a reboot).
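
For a quick test before the tunables are committed, the same knobs can be flipped at runtime:

sysctl net.link.bridge.pfil_member=0
sysctl net.link.bridge.pfil_bridge=1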

Back to the command line, we tell vm-bhyve about our bridge:

vm switch create -t manual -b bridge0 public
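
As a quick sanity check that the switch is tied to the right bridge, vm-bhyve can list its switches:

vm switch list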

Now let's choose an OS for our VM. I'm going lightweight with Alpine:

vm iso https://dl-cdn.alpinelinux.org/alpine/v3.17/releases/x86_64/alpine-standard-3.17.2-x86_64.iso

The Alpine template requires a little tweak, since Alpine renamed its "vanilla" kernel flavor to "lts" a few releases ago:

sed -i '' 's/vanilla/lts/g' /zroot/vm/.templates/alpine.conf

And now, the moment of truth...

vm create -t alpine -s 1G myfirstvm
vm install -f myfirstvm alpine-standard-3.17.2-x86_64.iso
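
If my understanding of vm-bhyve is correct, vm install boots the guest detached from your terminal, so attach to its serial console to run the installer:

vm console myfirstvm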


We should now be able to install the OS. I don't know how to exit the nmdm console during installation (~ ^D did not work for me here) short of disconnecting the SSH session; maybe y'all could help me out ;)

After installation, we should be able to use the vm start myfirstvm and vm stop myfirstvm commands to manage the VM. Running vm console myfirstvm connects to the console (and here, ~ ^D did work to disconnect).

If all works well, we can make this VM start upon host boot:

sysrc vm_list="myfirstvm"
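
If I'm reading the vm-bhyve docs right, vm_list takes a space-separated list, and vm_delay adds a pause (in seconds) between autostarts so the VMs don't all hammer the disk at once (mysecondvm is just a placeholder here):

sysrc vm_list="myfirstvm mysecondvm"
sysrc vm_delay="5"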

Protip: The bhyvectl tool can be used to manage VMs directly if vm-bhyve runs into issues. If you stop a VM with bhyvectl, remove the /zroot/vm/<name>/run.lock file afterwards to avoid confusing vm-bhyve.
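
For example, something along these lines (using the myfirstvm name from above):

bhyvectl --vm=myfirstvm --destroy
rm /zroot/vm/myfirstvm/run.lock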

Now all we need is a slick web UI ;)

@pinako thank you for the detailed instructions!
I was considering installing Proxmox with OPNsense on top of it, but decided to give the OPNsense+bhyve combination a try and use the OPNsense box as a hypervisor for a few small VMs.
The instructions are really good; the only thing that does not work for me is the last step
vm install -f myfirstvm alpine-standard-3.17.2-x86_64.iso
My VM does not get an IP over DHCP from the LAN interface. I can see the ARP requests hitting the bridge0 interface (with tcpdump), but OPNsense's dhcpd listening on the LAN interface (em0) does not answer the DHCP requests. I just need to fix this last thing; other than that, everything works OK.

The instructions create a manual switch. I think they may be missing a step: with a manual switch, the tap interface needs to be created and added to the bridge by hand.

When you start the VM, can you check ifconfig and see if you have a tap interface and whether it was added to the bridge? I suspect not, in which case you have to create the interface with ifconfig tap0 create and then add it to the bridge with ifconfig bridge0 addm tap0.

From there you might have to add it to the VM by running vm config <vmname> and setting network0_device=tap0, as in the sequence below.
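
Putting it together (with the myfirstvm name from the guide; substitute your own VM name):

ifconfig tap0 create
ifconfig bridge0 addm tap0
vm config myfirstvm   # opens the VM's conf in an editor; set network0_device=tap0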

How about FreeBSD as the hypervisor?

Also, does bhyve have a GUI like virt-manager for KVM on Linux?

@pinako and @slipperyduck thanks for the guides you have each written. I've had some success in following them, but not 100%; hoping you might have some insight as to where I have gone wrong.

I've been able to configure interfaces, create a VM, confirm that it receives an IP via DHCP, but the default firewall rules are kicking in and blocking traffic such as ICMP.

Looking at the firewall log I can see traffic being blocked which is originating from my VM's IP, on the tap0 interface. The first guide states:

Quote
Once it's back up we need to go to the OPNsense web interface and commission the tap interface:

[opnsense] [Interfaces] [Assignments]
Look for tap0 on the dropdown and click [+ADD]
  now click [SAVE]

And this is the only part I haven't had success with, because when I look at the Assignments page I don't have the option of selecting tap0, which in turn means I can't create firewall rules against it, and so on.

I have the tap0 interface, have used /usr/local/etc/rc.syshook.d/start/50-tapstart to add tap0 to bridge0 and so on, but can't get to the point where I would create an assignment for tap0.
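
In case it helps, my 50-tapstart is nothing fancy, roughly:

#!/bin/sh
# recreate the tap device at boot and attach it to the bridge
ifconfig tap0 create
ifconfig bridge0 addm tap0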

This is on OPNsense 23.7.9-amd64.

thanks!


root@OPNsense: # ifconfig bridge0
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
description: bridge0 (opt2)
ether 58:9c:fc:10:ff:81
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
        ifmaxaddr 0 port 13 priority 128 path cost 2000000
member: igb2 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
        ifmaxaddr 0 port 3 priority 128 path cost 55
groups: bridge vm-switch viid-4c918@
nd6 options=9<PERFORMNUD,IFDISABLED>

root@OPNsense:# ifconfig tap0
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
description: vmnet/windows2022/0/public
options=80000<LINKSTATE>
ether 58:9c:fc:10:ff:ad
inet6 fe80::5a9c:fcff:fe10:ffad%tap0 prefixlen 64 scopeid 0xd
groups: tap vm-port
media: Ethernet autoselect
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
Opened by PID 88991

Update: I found out what I needed to do to assign the tap interface in the UI, or at least why I couldn't initially. It seems that OPNsense filters some interfaces out of the assignment list. As per https://www.jafdip.com/renaming-ethernet-interfaces-under-freebsd/ I renamed tap0 to something else, and I was then able to assign it via the UI, add firewall rules, and get traffic routing working.

to rename in the current running session:

ifconfig tap0 name hyve0


then to make it permanent add it to /etc/rc.conf:

ifconfig_tap0_name="hyve0"

This is good info.

I have been running OPNsense as a guest under Proxmox on a small server that has one other VM on it (a basic Linux install for Pi-hole), but I have found that WireGuard requires WAY more CPU than I expected at gigabit speeds. So I am considering doing away with Proxmox, running OPNsense on bare metal, and moving the Pi-hole VM into bhyve on OPNsense instead, to make sure OPNsense can talk straight to the hardware and be more efficient.

In my config the one VM would not be externally exposed; instead it would get its own entirely virtual local network on the LAN side of the OPNsense firewall. So I am not terribly concerned about security, but I'll port scan it from the WAN side just to make sure.

I probably won't get around to this right away, but when I do I'll definitely post back here.

Thanks for sharing.
OPNsense running as a VM in KVM under Proxmox:
- Rocket Lake Xeon E-2314 in a Supermicro X12STL-F.
- IOMMU-forwarded i210 Ethernet for WAN and X520 for LAN.
- Pi-hole running as a separate LXC container on the same server.
- Lots of VLANs and tricky firewall rules.