OPNsense Forum

Archive => 18.1 Legacy Series => Topic started by: jcdick1 on March 08, 2018, 11:00:58 pm

Title: Multiple NICs and routing and such
Post by: jcdick1 on March 08, 2018, 11:00:58 pm
I have a couple of VM hosts with multiple NICs - 1GbE (LAN) and 10GbE (NFS/CIFS storage) - and the VM guests have virtual NICs on each of those networks.  On the switch, these connections are isolated in their own VLANs, and OPNsense has an interface on each VLAN.

How do I go about forcing the hosts (both hypervisor and guest) to use their 1GbE LAN connections instead of their 10GbE connections for getting out into the world?  A firewall rule?  Removing any gateways?  I'd like their 10GbE data network to be almost completely isolated.  I'd like to reach it from my LAN-connected workstation for reasons, but otherwise the hosts should really only see each other.

I'm not super familiar with all the ins and outs of IP networks as far as routing and subnets go.

Thanks for any help you can provide.
Title: Re: Multiple NICs and routing and such
Post by: Ciprian on March 09, 2018, 10:53:06 am
Hi!

Two cases:

1. If your 10 GbE devices share the same broadcast domain (i.e. all their IP addresses are in the same network segment):

Just don't add (or remove, if already present) the "Default allow 10 GbE LAN to any" rule in the OPNsense firewall. Your firewall rules for that interface should be blank: no rules at all.

PS The "Default allow LAN to any rule" already existing in OPNsense, for your 1 Gb LAN, from the setup/ wizard stage does permit session initiation from 1 Gb LAN network to 10 GB LAN network (meaning your 1 Gb LAN workstation will get to them no problem). As for the returning/ reply traffic from 10 Gb to your workstation in LAN, the OPNsense, being a stateful router, does this by default.

2. If you are further segmenting the network for the 10 GbE interfaces of each host or guest:

You should create and fine-tune firewall rules in OPNsense, on each corresponding interface, for every segment (maybe even every host) that you want to communicate in a particular way/direction/port/protocol.

PS: This is a more intricate situation, and further details would be needed (the networks involved, which hosts or guests are in each network, their IPs, what each one should be permitted to initiate connections toward, and so on).
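
To illustrate the kind of per-segment policy I mean, here is a rough Python sketch of an allow-list checked per initiating connection; it is purely conceptual, nothing OPNsense-specific, and the networks and ports are made up:

import ipaddress

# Each entry: (source network, destination network, protocol, destination port).
# Example policy: LAN workstations may reach the storage segment on NFS (2049/tcp),
# but the storage segment may not initiate anything toward the LAN.
allow_rules = [
    (ipaddress.ip_network("192.168.1.0/24"), ipaddress.ip_network("10.10.10.0/24"), "tcp", 2049),
]

def is_allowed(src, dst, proto, port):
    """Return True if an initiating connection matches an allow rule."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(src_ip in s and dst_ip in d and proto == p and port == dp
               for s, d, p, dp in allow_rules)

print(is_allowed("192.168.1.50", "10.10.10.11", "tcp", 2049))  # True: LAN -> storage NFS
print(is_allowed("10.10.10.11", "192.168.1.50", "tcp", 22))    # False: storage may not initiate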

Good luck, and cheers!
Title: Re: Multiple NICs and routing and such
Post by: jcdick1 on March 09, 2018, 03:20:54 pm
I have three different interfaces in OPNsense, corresponding to three different VLANs on the switch: my primary network for all of my devices and such (192.168.1.X/24), a "storage" VLAN (10.10.10.X/24), and a "management" VLAN (10.10.20.X/24) for the iLO/DRAC interfaces on the servers, etc.  Since I have OPNsense configured with an interface for each of those, I don't have any layer 3 (I think that's right) functions enabled for those VLANs on the switch itself.

I am finding that when I try to mount the NFS share from my storage server, which allows only clients in the 10.10.10.0/24 subnet, I continually get "permission denied by server," which makes me think that the servers are trying to mount via their 192.168.1.X interfaces rather than their 10.10.10.X 10GbE interfaces.
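
One quick way to check which local address a host will pick toward the storage server is a tiny Python snippet like this (10.10.10.5 is just a stand-in for my NFS server's storage-VLAN address):

import socket

NFS_SERVER = "10.10.10.5"  # stand-in address for the storage server on the storage VLAN

# Connecting a UDP socket sends no packets; it only asks the kernel which local
# source address its routing table would choose for this destination.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((NFS_SERVER, 2049))  # 2049 is the NFS port, but any port would do here
print("Source address toward the NFS server:", s.getsockname()[0])
s.close()

If that prints a 192.168.1.X address, the mount request really is leaving on the wrong interface.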

I will look into rules to block that traffic.

Thanks!
Title: Re: Multiple NICs and routing and such
Post by: jcdick1 on March 09, 2018, 05:25:09 pm
I've set rules and now one of my two hosts can mount the share just fine.  The other host has an issue that I am working with that community to resolve.  I appreciate the clue on the firewall.

Thanks!
Title: Re: Multiple NICs and routing and such
Post by: Ciprian on March 09, 2018, 05:25:57 pm
I am finding that when I try to mount the NFS share from my storage server, which allows only clients in the 10.10.10.0/24 subnet, I continually get "permission denied by server," which makes me think that the servers are trying to mount via their 192.168.1.X interfaces rather than their 10.10.10.X 10GbE interfaces.

If I understand correctly, you are trying to do something that is not recommended: having more than one IP address on any end device should be avoided by all means. Even more so, having more than one gateway address set on a single NIC/NIC team is strictly forbidden.

If it's a server, the rule can be bent: one IP + gateway for the access NIC, one IP for the iSCSI NIC (only if you have a NAS with iSCSI that you wish to connect your file/storage server to...), and one IP + gateway for the iLO NIC. That's it! Each and every connection to and from these NICs should be isolated using VLANs (layer 2 of the OSI model) and routed (layer 3) through trunked/LAGG interfaces (for capacity) on layer 3 managed switches or a router. And you never route an iSCSI network, neither in nor out, neither from nor towards.

If you have both 192... and 10... addresses on the NFS server, and on top of that you also have gateway addresses set on both NICs, it is very tricky to isolate and direct each packet onto the desired interface/network; you would have to use static routes on many (if not all) end devices (workstations, servers, NAS devices...). It's a hell of a topology. :)
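
To make the "direct each packet onto the desired interface" point more concrete, here is a simplified longest-prefix-match sketch in Python; the prefixes and interface names are invented, and a real routing table also carries gateways and metrics:

import ipaddress

# A simplified host routing table: (prefix, interface). This only shows which
# interface wins for a given destination, nothing more.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),      "eth0 (1 GbE, via gateway 192.168.1.1)"),  # default route
    (ipaddress.ip_network("192.168.1.0/24"), "eth0 (1 GbE, directly connected)"),
    (ipaddress.ip_network("10.10.10.0/24"),  "eth1 (10 GbE, directly connected)"),
]

def pick_route(dst):
    """Return the interface of the most specific (longest-prefix) matching route."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in routes if dst_ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_route("10.10.10.5"))  # storage traffic stays on the 10 GbE NIC
print(pick_route("8.8.8.8"))     # everything else leaves via the 1 GbE default gateway

The point is that traffic for a directly connected subnet never consults the default gateway, so only "the world" traffic uses the 1 GbE route; a second gateway on the 10 GbE NIC only muddies that decision.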
Title: Re: Multiple NICs and routing and such
Post by: jcdick1 on March 09, 2018, 06:21:12 pm
If it's a server, the rule can be bent: one IP + gateway for the access NIC, one IP for the iSCSI NIC (only if you have a NAS with iSCSI that you wish to connect your file/storage server to...), and one IP + gateway for the iLO NIC.

If you have both 192... and 10... addresses on the NFS server, and on top of that you also have gateway addresses set on both NICs, it is very tricky to isolate and direct each packet onto the desired interface/network; you would have to use static routes on many (if not all) end devices (workstations, servers, NAS devices...). It's a hell of a topology. :)

That's basically what I have for the servers: one IP + gateway (1GbE, 192.168.1.X) for web services and normal stuff, one IP for NFS (10GbE, 10.10.10.X), and one IP for iLO/DRAC/SNMP (1GbE, 10.10.20.X).  I'm not sure why it was doing what it was doing, because I set the gateway to "none" in the DHCP settings for the NFS subnet, and the servers are actually configured with static IPs.  But as soon as I put the blocking rules in for those hosts, I no longer had the "permission denied" issue.

Now to work on my port forwarding issue ... but that's for another post.