[HOWTO] OpnSense under virtualisation (Proxmox et.al.)
These days, many people run OpnSense under a virtualisation host like Proxmox.
This configuration has its own pitfalls, which is why I wrote this guide. The first part covers the common settings needed; the second part will deal with a setup where the virtualisation host is deployed remotely (e.g. in a datacenter) and holds other VMs besides OpnSense.
Filesystem peculiarities
First off: when you create an OpnSense VM, which file system should you choose? If your host runs Proxmox, it will likely use ZFS itself, so the choice for OpnSense is between UFS and ZFS. Although it is often said that ZFS on top of ZFS adds overhead, I would use it regardless, simply because UFS fails far more often.
For the disk size, I would use 32 GBytes as a minimum.
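If you create the VM on the Proxmox command line instead of the GUI, a minimal sketch could look like this (the VM ID, storage name and ISO path are hypothetical examples, adjust them to your setup). Note the "discard=on" option, which is needed so that TRIM commands from the guest actually reach the underlying storage:

qm create 100 --name opnsense --memory 8192 --cores 2 \
  --scsihw virtio-scsi-pci --scsi0 local-zfs:32,discard=on,ssd=1 \
  --net0 virtio,bridge=vmbr0 \
  --ostype other --cdrom local:iso/OPNsense-dvd-amd64.iso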
After a while, you will notice that the space you have allocated for the OpnSense disk grows towards 100% usage on the host, even though the disk may be mostly unused within OpnSense. That is a side effect of the copy-on-write nature of ZFS: writing logs, RRD data and other statistics always allocates new blocks, and the old blocks never get released to the underlying (virtual) block device.
That is, unless the ZFS "autotrim" feature is enabled. You can either set it via the OpnSense CLI with "zpool set autotrim=on zroot" or, better, add a daily cron job to do this (System: Settings: Cron) with "zroot" as parameter.
You can also trim your zpool once manually via the CLI with "zpool trim zroot".
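To verify that trimming works, you can check the pool from the OpnSense shell. A small sketch, assuming the default pool name "zroot":

zpool get autotrim zroot   # should report "on" once the setting is active
zpool status -t zroot      # shows the per-device trim state and progress
zpool trim zroot           # starts a one-time manual trim run

On the Proxmox side, the referenced size of the OpnSense zvol (visible via "zfs list") should shrink after a trim run.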
That being said, you should avoid filling up the disk with verbose logging in the first place. If you do not need to keep your logs across reboots, you can also put them on a RAM disk (System: Settings: Miscellaneous).
Network "hardware"
With modern FreeBSD, there should no longer be any discussion about PCI pass-through vs. emulated VTNET adapters: the latter are often faster. This is because the Linux drivers on the host are often better optimized than the FreeBSD ones. There are exceptions to the rule, but not many.
In some situations, you basically have no choice but to use vtnet, e.g.:
If FreeBSD has no driver for your NIC hardware
If the adapter must be bridged, e.g. in a datacenter with a single NIC machine
With vtnet, you should make sure that hardware checksum offloading is disabled ("hw.vtnet.csum_disable=1", which is the default on new OpnSense installations anyway).
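You can verify the current value from the OpnSense shell, assuming the tunable is exposed via sysctl as on current FreeBSD versions; if it reads 0, set it under System: Settings: Tunables and reboot:

sysctl hw.vtnet.csum_disable   # should print: hw.vtnet.csum_disable: 1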
When you use bridging with vtnet, there is a known Linux bug with IPv6 multicasting that breaks IPv6 after a few minutes. It can be avoided by disabling multicast snooping in /etc/network/interfaces on the Proxmox host, like:
auto vmbr0
iface vmbr0 inet manual
bridge-ports eth0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
bridge-mcsnoop 0
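After editing /etc/network/interfaces, the change can be applied without a reboot. A minimal sketch, assuming the ifupdown2 tooling that current Proxmox releases ship:

ifreload -a                                          # re-apply the interfaces file
cat /sys/class/net/vmbr0/bridge/multicast_snooping   # should now print 0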
TL;DR
Use ZFS, dummy
Keep 20% free space
Add a trim job to your zpool
Use vtnet
Check if hardware checksumming is off on OpnSense
Disable multicast snooping
That is all for now, recommendations welcome!
Re: [HOWTO] OpnSense under virtualisation (Proxmox et.al.)
This is the placeholder for the second part...