OPNsense on ESXi 7.* - traffic setup

Started by SecCon, June 11, 2022, 12:30:28 PM

I am completely new to OPNsense and have a running instance on an ESXi host with 4 virtual Xeon CPU cores, 256 GB of disk space and 16 GB of DDR3 RAM. I used the ISO installation, and after the usual FreeBSD disk controller mayhem (switching to IDE) I have a perfectly working and updated OPNsense running in parallel with my regular router and current network, on 192.168.1.99. No other configuration has been done; I plan to move my current gateway to OPNsense once I sort out my questions about how to map and connect the LAN.

This is a small, private network with about 50 mixed devices; I am the sole admin and I use it for my home office, gaming, development and monitoring. The ISP connection is 100/100 and internal network speed is above 1 Gbit/s. I have patch panels, a core switch and all the cabling in place to configure this any way that would be needed. I am pondering getting an extra NIC for the server, though, to keep ports and traffic apart, if deemed necessary.

Now, sorry for mentioning this, but I have no clue how to set up the connections with ESXi virtual LANs or virtual switches and that side of things. Searching, I have found many ESXi-related hits on these forums and some more or less obscure tutorials on the web, but most of them cover old versions of ESXi, and I keep mine updated. I did read in the pfSense documentation that you should do this in a particular way, adding WAN and LAN port groups (ref: https://docs.netgate.com/pfsense/en/latest/recipes/virtualize-esxi.html#creating-port-groups ), so I wonder if that is the way to go and whether it applies in particular to the ESXi 7.0 U3 branch.

As mentioned, eventually my network will run exclusively over OPNsense, but I need to sort out these questions before that can happen.
CLI is the lack of UI!

June 11, 2022, 11:39:14 PM #1 Last Edit: June 12, 2022, 07:53:38 AM by johndchch
If you can get 3 NICs in your ESXi host, that's the easiest setup: keep one for ESXi management and any other VMs you have running, dedicate one to 'WAN' and one to 'LAN'. That means a vSwitch bound to each of the dedicated NICs (separate from the default vSwitch) and one port group per vSwitch (and then those two port groups set as the two virtual NICs in the VM).
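Roughly what that looks like from the ESXi shell, assuming vmnic1 is the physical WAN port and vmnic2 the LAN port (the vmnic numbers and vSwitch/port group names here are just examples; the same can be done in the host UI):

# check which vmnic is which first: esxcli network nic list
esxcli network vswitch standard add --vswitch-name=vSwitchWAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitchWAN --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchWAN --portgroup-name=WAN
esxcli network vswitch standard add --vswitch-name=vSwitchLAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitchLAN --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchLAN --portgroup-name=LAN

Then attach the WAN and LAN port groups as the two vmxnet3 adapters on the OPNsense VM.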

The other option is to use PCI passthrough on those two NICs instead, bypassing ESXi entirely and letting OPNsense use its own drivers - this of course means using NICs that are directly supported by OPNsense. On modern hardware the speed gain from doing this is trivial, but on older hardware it can help, and it can make chasing issues easier because you don't have to worry about whether a fault is inside OPNsense or in ESXi.
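If you go the passthrough route, you can find the ports' PCI addresses from the shell first; the pcipassthru command below is from memory for 7.0, so treat it as a sketch and verify the exact flags with --help (or just toggle passthrough under Manage > Hardware in the host UI):

# list PCI devices and note the addresses of the Ethernet controllers you want to hand to OPNsense
esxcli hardware pci list
# flags assumed from memory: enable passthrough on one port (address below is an example), then add it to the VM as a PCI device
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true -a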

If you can only manage two NICs in your host, use the default vSwitch/port group for LAN and dedicate the other port to WAN (either vmxnet3 or passthrough).

If you have high-speed internet (>1 Gbps) and don't want to tie up two 'valuable' high-speed NIC and switch ports, there is the option of using tagging at the switch level combined with vSwitch-level VLANs to present each VLAN as a totally separate vmxnet3 interface inside OPNsense, which avoids the pitfalls of handling the VLANs inside OPNsense itself.
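If you go that way, the vSwitch side is just one port group per VLAN ID on the vSwitch whose uplink carries the tagged trunk - something like this sketch (the vSwitch name, port group names and VLAN IDs are made up):

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=WAN
esxcli network vswitch standard portgroup set --portgroup-name=WAN --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=LAN
esxcli network vswitch standard portgroup set --portgroup-name=LAN --vlan-id=20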

Hi @johndchch, thanks a lot for your reply.

3 NICs - ah, you mean single-port ones?

I have one NIC on the mainboard with 2 ports right now, plus a third port for iLO (it's HPE hardware), and I have a couple of PCIe slots that I can use.

I was considering adding a dual-port NIC, but maybe a single-port one is better in regards to performance? I guess it comes down to brand and model.

As I understand it, I should use the physical interfaces like this:

NIC0 (with 2 ports), already onboard:

Physical WAN port > WAN port group on ESXi vSwitch >
OPNsense >
LAN port group on ESXi vSwitch > physical LAN port

The other NIC1 (to be added - one port is OK): other VMs.

I'm going to get that extra NIC.

Then I'll see if I can sort out the ESXi networking setup when I have it, but your info goes a long way.

CLI is the lack of UI!

Quote from: SecCon on June 12, 2022, 09:58:07 AM
3 NICs - ah, you mean single-port ones?

Multi-port cards present themselves to the OS as separate NICs, so it doesn't really matter whether it's separate cards or multiple ports on a single card (or on the motherboard) - inside ESXi each separate port is seen as a 'physical NIC'.

Go have a look at the 'Physical NICs' tab of the Networking page in ESXi - you should be able to see the two onboard NICs.
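If you prefer the shell, the same information comes from:

# lists every physical port ESXi sees, with driver, link state and speed
esxcli network nic list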

Next have a look at the 'Virtual switches' tab; you most likely have JUST the default vSwitch0.

Presuming you do, click it and check the right-hand side of the topology section - most likely it ONLY has one of the onboard physical NICs shown as an uplink and your 2nd onboard NIC is currently sitting there unused (the other possibility is that you've added it as a redundant/load-balancing uplink to vSwitch0).

If you do indeed have the 2nd onboard NIC sitting unused, you can just go ahead and add a vSwitch using it, then a port group on that vSwitch, and use that as either WAN or LAN (and then add an extra PCIe card for additional ports). Or you could instead add it as a redundant/load-balancing uplink to vSwitch0 and then add a multi-port card for WAN/LAN - ESXi doesn't care (the choice comes down to how many spare ports you have on your physical switch and how much bandwidth OTHER VMs are going to need).
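From the shell you can check vSwitch0's current uplinks and, if you go the redundant-uplink route, add the spare port to it (vmnic1 is just an example name - use whatever the NIC list shows for your unused port):

# show existing vSwitches and which uplinks they have
esxcli network vswitch standard list
# add the unused onboard port as a second uplink to vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1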

Quote
I was considering adding a dual-port NIC, but maybe a single-port one is better in regards to performance? I guess it comes down to brand and model.

As long as the slot has the required bandwidth for the card, there's ZERO impact in using a multi-port card.

But that is also what you need to watch out for - a single-port GbE card like the ubiquitous i210 will be a PCIe 2.0 x1 card (so it will work in any slot), while multi-port GbE cards like the i350-T2/T4 are PCIe 2.0 x4, so you need an open x4 (or wider) slot. High-speed multi-port cards will either be wider still (if they're PCIe 2.0, like the X520/X540 models, which are x8) or use faster lanes (the X550-T2 is PCIe 3.0 x4, so it gets the required extra bandwidth from faster lanes rather than more of them).
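Rough per-direction numbers behind that, if it helps (approximate figures):

# PCIe 2.0 ~ 500 MB/s per lane, PCIe 3.0 ~ 985 MB/s per lane
# 1x GbE    ~ 0.125 GB/s -> PCIe 2.0 x1 (~0.5 GB/s) is plenty
# 4x GbE    ~ 0.5 GB/s   -> PCIe 2.0 x4 (~2 GB/s) is plenty
# 2x 10GbE  ~ 2.5 GB/s   -> needs PCIe 2.0 x8 (~4 GB/s) or PCIe 3.0 x4 (~3.9 GB/s)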

As a result, card choice comes down to what slots you have open, how fast and how wide they are, and what the budget is (an i350-T2 is more than twice the price of two i210s, so if you don't MIND tying up two slots, a pair of i210s is generally cheaper).

Of course the other thing to watch for is card support in ESXi 7 - I've got a nice Intel 82576 'server' multi-port card here on the shelf because it dropped out of support in the ESXi 6 -> 7 upgrade (so it's ONLY usable in passthrough now). So before you buy a card, make SURE it is supported on ESXi 7 ( https://www.vmware.com/resources/compatibility/search.php ).


I checked on the server and there is room for a full-size PCIe x4 card.

Actually there is more room than that, but I would have to get another riser card...

Thanks for the links and what you posted, I'll make sure to check that out.


CLI is the lack of UI!

June 13, 2022, 09:47:15 PM #6 Last Edit: June 13, 2022, 10:36:33 PM by johndchch
Quote from: SecCon on June 13, 2022, 07:09:12 PM
I checked on the server and there is room for a full-size PCIe x4 card.

Actually there is more room than that, but I would have to get another riser card...

If you've got an x4 slot available but want to conserve slots, grab an i350-T4 - they're only marginally dearer than an i350-T2, and if you ever need more ports you're good to go (the i340-T4 is also OK and cheap used - the only difference is that the 340 doesn't support SR-IOV).

If the slot is PCIe 3.0 and you think you might end up with >1 Gbps internet, then an x4 dual-port 10GbE card would also make sense (the X550-T2 would be a good option - the X710-T2 is unfortunately x8, as well as being really dear).


June 14, 2022, 07:28:16 AM #7 Last Edit: June 14, 2022, 09:56:42 AM by SecCon
Since you got into the hardware...  ;)
I see a lot of cheaper HP cards with SFP+, like the NC550. Sure, I would have to get RJ45-to-SFP+ adapters, but is there any big downside to those cards?

Edit:
it should work according to: https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=17123&deviceCategory=io&details=1&partner=515&releases=578&deviceTypes=6&page=1&display_interval=500&sortColumn=Partner&sortOrder=Asc
CLI is the lack of UI!

Quote from: SecCon on June 14, 2022, 07:28:16 AM
Since you got into the hardware...  ;)
I see a lot of cheaper HP cards with SFP+, like the NC550. Sure, I would have to get RJ45-to-SFP+ adapters, but is there any big downside to those cards?

Edit:
it should work according to: https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=17123&deviceCategory=io&details=1&partner=515&releases=578&deviceTypes=6&page=1&display_interval=500&sortColumn=Partner&sortOrder=Asc

If it's compatible with 7.0 U3 you're all good - but with HP enterprise gear you'll need to watch out for SFP+ compatibility: it's likely to have firmware that blocks non-HP DACs and transceivers.

At work we still run a lot of X520-DA2s - but if you're going to use SFP+ 10GBASE-T modules, it ends up being no cheaper than just getting a card with 10GBASE-T ports on it (the cheapest SFP+ 10GBASE-T modules we trust are the Ubiquiti ones).

If you've got any x8 slots open, the Supermicro X540-T2 clone is very cheap new.

June 15, 2022, 08:18:45 AM #9 Last Edit: June 15, 2022, 08:20:52 AM by SecCon
I am kind of in the midst of prepping some connections to be SFP (between servers and storage), and some may remain RJ45.
Still lacking the SFP cabling, so for now it's going to have to be adapters. My NAS does not have SFP, so that's a slight bottleneck, but considering disk I/O it matters little.

So I'll buy SFP NICs for this.
CLI is the lack of UI!

If you can, stay away from copper/RJ45 SFP+. 10 gigabit over copper tends to run really hot, which means a lot of wasted energy. Don't be afraid of fiber if it's just inside your closet/rack/whatever. And it's not even more expensive anymore.
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

June 15, 2022, 01:05:08 PM #11 Last Edit: June 15, 2022, 01:08:23 PM by SecCon
Quote from: pmhausen on June 15, 2022, 09:32:15 AM
If you can, stay away from copper/RJ45 SFP+. 10 gigabit over copper tends to run really hot, which means a lot of wasted energy. Don't be afraid of fiber if it's just inside your closet/rack/whatever. And it's not even more expensive anymore.

I agree, but I don't have the wiring for fiber in place yet; I only finished Cat6 the other year. So I implement it where I can, between servers in the same cabinet. I'm not running 10 Gbit/s yet either, so while I don't know whether heat is a factor at those speeds, I should be OK. Nothing is really cramped or generating huge amounts of heat either. Yet.
CLI is the lack of UI!

Quote from: SecCon on June 15, 2022, 01:05:08 PM
I agree, but I don't have the wiring for fiber in place yet; I only finished Cat6 the other year. So I implement it where I can, between servers in the same cabinet. I'm not running 10 Gbit/s yet either, so while I don't know whether heat is a factor at those speeds, I should be OK. Nothing is really cramped or generating huge amounts of heat either. Yet.

10GBASE-T in a proper server is no big deal - the cards themselves have appropriate heatsinking and a proper server will have adequate airflow. It's more of an issue in consumer desktops, where cooling is marginal at best and people are noise-focused (so they object to fans ramping up to a level appropriate to deal with the heat).


June 27, 2022, 11:14:32 AM #13 Last Edit: June 27, 2022, 11:16:25 AM by SecCon
Got the extra NIC, and it was correctly identified by the affected systems, so that part is done. Waiting for some additional cabling. There might also be a firmware update for the chip.
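To check which driver and firmware version the new card came up with in ESXi (the vmnic number below is just an example):

# shows driver name, driver version and firmware version for that port
esxcli network nic get -n vmnic2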

On a side note, I also got the book Practical OPNsense: Building Enterprise Firewalls with Open Source ( https://www.amazon.com/gp/product/B09841K8HQ ) and am reading up a bit on that.

I have questions, but I will make a new thread for those.
CLI is the lack of UI!