OPNsense Forum
Archive => 20.1 Legacy Series => Topic started by: ole on May 22, 2020, 05:48:23 pm
-
Hello,
I've read a bit about DMZs and would now like to set one up for my home LAN, but I want to have a plan first :)
Locally, I've opnsense on APU4C4 with LAN+WAN and a VLAN aware switch (zyxel GS1900).
The LAN is 192.168.1.0/24, so 2 of the 4 ports (counting WAN) are already taken. I'd like to add the DMZ with 2 clients on igb2 to spread the bandwidth. Furthermore, I want to add a Mgmt VLAN and a 'CableVLAN'.
The DMZ shall have the net 192.168.90.0/24, Mgmt 192.168.10.0/24, and CableVLAN 192.168.20.0/24. In my case the obvious choice is VLAN tag 10 for Mgmt and VLAN tag 20 for CableVLAN. But how should I realize the DMZ? As a 'physical' net 192.168.90.0/24, or as VLAN tag 90 carrying 192.168.90.0/24 with e.g. 192.168.2.0/24 as the underlying LAN segment?
One of the servers is a Docker host running the Unifi Controller software. As far as I've understood, the simplest solution is a rule allowing access from the Unifi AP / WLAN networks (e.g. VLANs 192.168.{30,40}.0/24 for WiFi and guests) to the inform URI in the LAN segment (PVID=1). Putting the inform network into the Mgmt network is possible, but for a home LAN that's overkill and may risk cutting off access to the AP (I've read).
Can I wire both LAN (igb0) and DMZ (igb2) from the opnsense box to the switch (ports tagged accordingly, of course) and plug both servers into the same switch too? The switch has a strong backplane that allows parallel traffic.
Maybe with the option to create a DMZ#2 with VLAN 91 later on?
Are my considerations useful/clear?
Thanks in advance
-
With VLANs you don't need separate ports. You simply assign all the VLANs to the same port on opnsense, and that port then becomes the trunk. You have one trunk connection to the switch; the switch can then break out the separate VLANs and/or pass the trunk on to the next switch, and so on.
-
thanks for your answer.
So, it's possible to wire all 3 (or even 4) ports of opnsense to the switch as a trunk (or even a link aggregation group (LAG)/bond), opnsense performs some kind of 'load balancing' (3/4x GBit), and the switch distributes everything to the 'clients' according to VLAN tags / different LAN networks?
-
So, it's possible to wire all 4 ports of opnsense to the switch as a trunk and opnsense performs some kind of 'load balancing' (4x GBit)?
That would be configured as a LAGG on both the switch and opnsense. You don't want that.
You could use your ports like
* untagged WAN
* several tagged OPTs and/or tagged LAN1 -> to managed switch
* untagged LAN2 (optional) -> for an unmanaged switch
* spare Port or management
The switch will tag/untag packets. You get a trunk port with tagged VLANs between opnsense and the switch.
E.g. one port on the switch carries VLAN 90 untagged with PVID 90 if the server's NIC is not VLAN-aware. Another server connects to a switch port with tagged VLAN 90 and VLAN 20; that server then has to be configured with a VLAN 90 and a VLAN 20 interface on this trunk. A client gets a port on the switch with untagged VLAN 1 / PVID 1.
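For the VLAN-aware server case, the sub-interfaces could be created on Linux roughly like this (a sketch using iproute2; the NIC name eth0 and the addresses are example assumptions, not from the thread):

```shell
# Create 802.1Q sub-interfaces on the trunk-facing NIC (example name: eth0)
ip link add link eth0 name eth0.90 type vlan id 90
ip link add link eth0 name eth0.20 type vlan id 20

# Example addresses in the respective VLAN subnets
ip addr add 192.168.90.11/24 dev eth0.90
ip addr add 192.168.20.11/24 dev eth0.20

# Bring everything up; the parent must be up for the sub-interfaces to carry traffic
ip link set eth0 up
ip link set eth0.90 up
ip link set eth0.20 up
```

These commands require root; a non-VLAN-aware server on an untagged/PVID port needs none of this and just configures a plain address on its NIC.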
-
I'm not sure what you hope to achieve by doing that. You have an APU4, and one port will be needed for WAN. So you have a maximum of three available ports for all LANs, whether they are DMZ or anything else. My point was that with VLANs you don't need to use all the ports.
-
That would be configured as a LAGG on both the switch and opnsense. You don't want that.
From a principle point of view - why not?
I'm not sure what you hope to achieve by doing that. You have an APU4, and one port will be needed for WAN. So you have a maximum of three available ports for all LANs, whether they are DMZ or anything else. My point was that with VLANs you don't need to use all the ports.
So here I'm back. What about:
[igb0] -> trunk VLAN {10,20} aka {Mgmt,LAN} -> SmartSwitch
-> Port: untagged {20} LAN Clients
-> Port: tagged {10,20} My/Daddy LAN Client
-> Port: untagged {10} Mgmt Failback
[igb1] -> Cable Modem -> WAN
[igb2] -> trunk VLAN {10,90} aka {Mgmt, DMZ} -> SmartSwitch
-> Port: tagged {10,90} Srv1: Docker/VM
-> Port: tagged {10,90} Srv2: NAS
[igb3] -> trunk Class C + VLAN {10,30,31} aka {Mgmt, WLAN(1), WLAN(Guest)} -> SmartSwitch
-> trunk Unifi AP
BTW, why is WAN on igb1 by default on opnsense, not igb0? I left the default in place to be on the safe side in case of ...
Even though the Unifi AP seems to be recommended, I'm not convinced anymore, due to the Unifi Controller software running on the DMZ Docker host and VLANs .... Unifi components seem to be closed and to work best on their own. I know the controller doesn't have to run all the time - only at configuration time.
My goal is also to avoid sharing bandwidth between the VLANs on the same physical layer.
-
My goal is also to avoid sharing bandwidth between the VLANs on the same physical layer.
Then you will need 10 Gbps switches and Cat 6 cabling for a 10 Gbps trunk, or individual LAN segments wired with their own switches - otherwise you are limited by the 1 Gbps trunk of a standard managed switch, no matter how you divide the inputs.
BTW, why is WAN on igb1 by default on opnsense, not igb0? I left the default in place to be on the safe side in case of ...
When you installed it, the installer asked which interfaces you wish to use for WAN, LAN and OPT. If you don't answer, it tries to auto-detect which should be which; in some cases it might make the wrong choice, but you are given the option. You can also change that at any time from the terminal shell, option 1.
-
OK, so I'm back with some results.
What I've realized so far:
LAN: 192.168.1.0/24
Mgmt VLAN10: 192.168.10.0/24
DMZ VLAN90: 192.168.90.0/24
[igb0] -> Cable Modem -> WAN
[igb1] -> LAN and VLAN {10} aka {Mgmt}, currently wired directly
Me/client 'tux' (192.168.1.100) with tagged {10}
[igb2] -> VLAN {10,90} aka {Mgmt, DMZ}, currently wired directly
'Srv1': Docker/VM
On the old pfsense/Alix network I prepared 'Srv1' beforehand; it shall go into the DMZ as:
clr1$ ip -d link show mgmt
4: mgmt@enp3s0:
...
vlan protocol 802.1Q id 10
clr1$ ip addr show dev mgmt
4: mgmt@enp3s0:
...
inet 192.168.10.11/24 brd 192.168.10.255 scope global mgmt
clr1$ ip -d link show dmz
5: dmz@enp3s0:
...
vlan protocol 802.1Q id 90
clr1$ ip addr show dev dmz
5: dmz@enp3s0:
...
inet 192.168.90.11/24 brd 192.168.90.255 scope global dmz
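For reference, interfaces like the ones shown above could have been created along these lines (a sketch; it assumes iproute2 and the parent NIC enp3s0 from the output, and the exact commands used on 'Srv1' are not in the thread):

```shell
# 802.1Q sub-interface for Mgmt (VLAN 10) on the parent NIC
ip link add link enp3s0 name mgmt type vlan id 10
ip addr add 192.168.10.11/24 dev mgmt
ip link set mgmt up

# 802.1Q sub-interface for the DMZ (VLAN 90)
ip link add link enp3s0 name dmz type vlan id 90
ip addr add 192.168.90.11/24 dev dmz
ip link set dmz up
```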
And my (LAN) client 'tux' as:
tux $ ip addr show dev enp5s0
2: enp5s0:
....
inet 192.168.1.100/24 brd 192.168.1.255 scope global dynamic noprefixroute enp5s0
tux $ ip -d link show enp5s0.10
11: enp5s0.10@enp5s0:
....
vlan protocol 802.1Q id 10
tux$ ip r show 192.168.10.0/24
192.168.10.0/24 dev enp5s0.10 proto kernel scope link src 192.168.10.100 metric 400
Attached the rules I applied with interface.
Now I can reach the DMZ IP:
tux$ ping -c 3 192.168.90.11
PING 192.168.90.11 (192.168.90.11) 56(84) bytes of data.
64 bytes from 192.168.90.11: icmp_seq=1 ttl=63 time=1.02 ms
...
tux$ ping -c 3 192.168.90.1
PING 192.168.90.1 (192.168.90.1) 56(84) bytes of data.
64 bytes from 192.168.90.1: icmp_seq=1 ttl=64 time=0.506 ms
...
but not the Mgmt Gateway and IP:
tux$ ping -c 3 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
^C
tux$ ping -c 3 192.168.10.11
PING 192.168.10.11 (192.168.10.11) 56(84) bytes of data.
^C
on opnsense (ssh):
admin@OPNsense:~ % ping -c3 192.168.10.1
PING 192.168.10.1 (192.168.10.1): 56 data bytes
64 bytes from 192.168.10.1: icmp_seq=0 ttl=64 time=0.233 ms
...
admin@OPNsense:~ % ping -c3 192.168.10.11
PING 192.168.10.11 (192.168.10.11): 56 data bytes
^C
So I'm missing some fundamentals :( What is missing? Any other hints?
-
According to your screenshot there is no VLAN 10 on igb2, thus you can't connect to VLAN 10 in the DMZ.
Is there no switch involved anymore?
Don't create a 2nd VLAN 10 on igb2; let the switch handle that single VLAN 10 on the igb1 port.
Also, I'd use a 'real' trunk between igb1 and the switch, with tagged LAN and tagged Mgmt, and let the switch hand out untagged LAN to the clients' ports.
-
According to your screenshot there is no VLAN 10 on igb2, thus you can't connect to VLAN 10 in the DMZ.
Is there no switch involved anymore?
No, it is directly wired to the APU4. You are right, the DMZ interface doesn't have the Mgmt net.
Don't create a 2nd VLAN 10 on igb2; let the switch handle that single VLAN 10 on the igb1 port.
I will try that, since I have a VLAN-capable switch. My old network still uses the bigger Zyxel GS1900 switch, with the same network 192.168.1.0/24 actively in use, so I can't cut off my family here :)
Fortunately I have my small Cisco SG200-8 smart switch to experiment with. For the configuration see the attachment.
So I wired and configured it, hopefully correctly:
-----------------+                 +-----------------
opnsense         |                 | SG200
igb              |                 |
 0/WAN           | -- WAN          |
 1/LAN,Mgmt      | -- LAN -------- | 1 (trunk [LAN, Mgmt=VLAN10])
 2/DMZ           | -- DMZ -------- | 2 (trunk [VLAN90])
 3               |                 | ...
-----------------+                 | ...
       <- Me ---- LAN ------------ | 6 (trunk [LAN, Mgmt=VLAN10, DMZ=VLAN90])
       <- Srv1 -- DMZ ------------ | 7 tagged (VLAN {10,90})
                                   | 8
                                   +-----------------
But I'm still not able to ping even the gateway 192.168.10.1.
Also, I'd use a 'real' trunk between igb1 and the switch, with tagged LAN and tagged Mgmt, and let the switch hand out untagged LAN to the clients' ports.
Of course; later, LAN will become VLAN20 untagged. At this time my 'working' box has only one Ethernet PHY, but two interfaces (VLAN10=Mgmt and the default LAN) to administrate it later on.
**Edit**:
After some tests, I'm not sure the 2nd interface on my Linux box is set up correctly:
tux$ ip route show
default via 192.168.1.1 dev enp5s0 proto dhcp metric 100
default via 192.168.10.1 dev enp5s0.10 proto static metric 20400
192.168.1.0/24 dev enp5s0 proto kernel scope link src 192.168.1.100 metric 100
192.168.10.0/24 dev enp5s0.10 proto kernel scope link src 192.168.10.100 metric 400
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
tux$ ip route show table mgmt
default via 192.168.10.1 dev enp5s0.10
tux$ ip -d link show enp5s0.10
3: enp5s0.10@enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:1f:d0:9d:e7:81 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 0 maxmtu 65535
vlan protocol 802.1Q id 10 <REORDER_HDR> addrgenmode none numtxqueues 1 numrxqueues 1 gso_max_size 64000 gso_max_segs 64
tux$ ping 192.168.10.100
PING 192.168.10.100 (192.168.10.100) 56(84) bytes of data.
64 bytes from 192.168.10.100: icmp_seq=1 ttl=64 time=0.049 ms
...
tux$ ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
^C
but from opnsense box to tux:
admin@OPNsense:~ % ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1): 56 data bytes
64 bytes from 192.168.10.1: icmp_seq=0 ttl=64 time=0.238 ms
...
admin@OPNsense:~ % ping 192.168.10.100
PING 192.168.10.100 (192.168.10.100): 56 data bytes
^C
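A one-sided ping pattern like this (each side reaches its own address but not the peer) can also be caused by Linux's strict reverse-path filter dropping replies that arrive on an interface other than the one the kernel would use to reach the sender. This is only a debugging sketch, not a confirmed diagnosis for this setup:

```shell
# Show reverse-path filter settings per interface (1 = strict, may drop asymmetric replies)
sysctl -a 2>/dev/null | grep 'rp_filter'

# Temporarily switch to loose mode for testing (requires root)
sysctl -w net.ipv4.conf.all.rp_filter=2
```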
Even after more tests, and after adding untagged VLAN 10 to port #5 of the Cisco switch with DHCP enabled on VLAN10=Mgmt, a client on this port doesn't get a DHCP lease :(
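To see whether the client's DHCP discover even reaches the wire, and whether any offer comes back, capturing the DHCP ports directly is often quicker than guessing (a sketch; assumes tcpdump is installed, and the interface name is an example for whichever NIC faces VLAN 10):

```shell
# Watch DHCP traffic (server port 67, client port 68); requires root
tcpdump -ni enp5s0.10 port 67 or port 68
```

If only discovers appear and no offers, the problem is between the switch port and the DHCP server; if nothing appears at all, the client side or the VLAN tagging is at fault.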
-
One day later ... I created VLAN20 for normal/trusted LAN use and renamed the default LAN to LAN1 (no VLAN tag on opnsense).
The (new) LAN/VLAN20 is on opnsense's igb1 (as is Mgmt VLAN10) with DHCP, see iface_LAN and dhcp_LAN. The Cisco switch's port 8 is configured as "Access", ID=20 untagged, see cisco_Ports. I also added a LAN rule 'IPv4+6 to everywhere' (not shown here). On the client I ran wireshark, see wireshark_eno1. Obviously something happens, but I can't interpret it. The same probably applies to my Mgmt VLAN10 ... There is no DHCP offer from the server. I assume there is a fundamental configuration error, but I can't find it.
-
OK, obviously there were wrong settings on the Cisco trunk port connected to the opnsense LAN/igb0 port.
Now DHCP works for clients on Cisco ports 5 and 8 (VLAN10, 20). The story continues ...
Edit:
The switch port where my Linux/working box is attached was wrong too. Now pinging the Mgmt gateway works.
-
My problems with VLANs/switch aside, what should the nominal rules be?
E.g. Mgmt LAN: obviously(?) only SSH into the DMZ, no WAN? In the DMZ I have Docker running the Unifi Controller, Portainer, and sys-logging (ElasticSearch, maybe once a day). Also, SSH/HTTP access to opnsense should only be from the Mgmt LAN - shouldn't it? How do I handle these cases? SSH into LAN clients only, not the reverse? And how, since I have only one NIC here with a VLAN interface attached?
Further:
- DMZ no local/private Nets.
- Guest Wifi ~
- Wifi Family with DMZ
- WAN nothing yet :) Later (probably only nextcloud)
- IoT only WAN
BTW, what about asymmetric routing on my box, which currently has a LAN1 and a Mgmt IP (later LAN and Mgmt)? Webpage loading takes more time on the APU4 than on my old Alix board! DNS resolution is as fast as before.
*Edit*: are these values confirmed by other APU4Cx users on Gigabit Ethernet? I'm testing the default LAN (aka LAN1 here) against opnsense (using the os-iperf plugin):
iperf3 -c 192.168.1.1 -p 44002
Connecting to host 192.168.1.1, port 44002
[ 5] local 192.168.1.102 port 41168 connected to 192.168.1.1 port 44002
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 25.4 MBytes 213 Mbits/sec 42 69.3 KBytes
[ 5] 1.00-2.00 sec 25.6 MBytes 215 Mbits/sec 36 65.0 KBytes
[ 5] 2.00-3.00 sec 24.9 MBytes 209 Mbits/sec 16 69.3 KBytes
[ 5] 3.00-4.00 sec 25.4 MBytes 213 Mbits/sec 13 67.9 KBytes
[ 5] 4.00-5.00 sec 25.8 MBytes 217 Mbits/sec 2 67.9 KBytes
[ 5] 5.00-6.00 sec 23.3 MBytes 196 Mbits/sec 72 65.0 KBytes
[ 5] 6.00-7.00 sec 25.4 MBytes 213 Mbits/sec 31 48.1 KBytes
[ 5] 7.00-8.00 sec 24.8 MBytes 208 Mbits/sec 41 65.0 KBytes
[ 5] 8.00-9.00 sec 25.8 MBytes 216 Mbits/sec 25 65.0 KBytes
[ 5] 9.00-10.00 sec 25.0 MBytes 209 Mbits/sec 18 65.0 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 251 MBytes 211 Mbits/sec 296 sender
[ 5] 0.00-10.00 sec 251 MBytes 211 Mbits/sec receiver
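One thing to keep in mind when reading these numbers: an iperf3 run against 192.168.1.1 makes the APU itself terminate the TCP stream, so it measures the firewall's own endpoint performance (CPU-bound), not its routing throughput. Testing between two hosts through the firewall is usually more representative (a sketch with example addresses from this thread):

```shell
# On a host in the DMZ (example address 192.168.90.11), start the server:
iperf3 -s

# On the LAN client, measure routed throughput through opnsense:
iperf3 -c 192.168.90.11 -t 10
```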
-
This is the current state, see attachment. Are the VLAN settings for the switch correct? The Cisco notation seems to be slightly different from Zyxel's, isn't it?
LAN1 (the default LAN) will be removed later on. The WiFi part isn't done yet, but by this design the Unifi controller is in the DMZ and must be reachable from the Unifi AP to get the inform URL. I'm unsure about that at this time.
I would assume that if the network settings are OK, I can continue with the rules ....
Also, I'm not sure about the two IPs on my box, since routing should go through the default gateway (192.168.1.1, resp. later 192.168.20.1), except for VLAN10 traffic, which should only go through 192.168.10.0/24.
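To keep Mgmt traffic on its own path regardless of the default gateway, source-based policy routing can help. The following is a sketch that assumes the routing table name mgmt is already declared in /etc/iproute2/rt_tables, matching the `ip route show table mgmt` output earlier; addresses and interface names are the ones from this thread:

```shell
# Send everything sourced from the Mgmt address through table 'mgmt'
ip rule add from 192.168.10.100/32 table mgmt priority 100

# The table needs the connected route plus its own default via the Mgmt gateway
ip route add 192.168.10.0/24 dev enp5s0.10 src 192.168.10.100 table mgmt
ip route add default via 192.168.10.1 dev enp5s0.10 table mgmt
```

With rules like these, replies sourced from 192.168.10.100 always leave via enp5s0.10 instead of the default route over 192.168.1.1, which avoids the asymmetric path.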