Hello o/ first post here. I'm looking for some insight on a network design. Let me first say that I am not a network, hardware, or systems engineer, or any of that, so please forgive any misuse of terminology. While this is not my forte, I do have the internet and I can read, so we work with what we have.
There is, however, a lot of conjecture on the internet. Since I have recently been more or less thrown into a position that requires me to assume the role of network administrator and engineer, what I am looking for is some input on how our network is set up and whether there is a better way to deal with it than what I am currently planning.
As of right now we have 2 separate networks, with 2 different firewalls: one for the DMZ and one for the office LAN. The DMZ plays host to about 6 different servers, some web servers, some other. All are hosted in VMware ESX across 3 different hosts, networked to the DMZ switch (the LAN switch is separate) and then through the firewall. All the servers on the DMZ are port forwarded through the firewall by way of their own external static IPs.
The part that is worrisome to me, and why I am attempting to change things and rock the boat, is that they are all on the same DMZ LAN, i.e. Server1 = 192.168.20.11, Server2 = 192.168.20.12, Server3 = 192.168.20.13, etc. All are on the same VLAN and can see each other, ping each other, etc. Nothing is stopping them from communicating amongst themselves; they currently don't, but they could.
So as of now here is a rough idea of how things go:
wtf1.shft.com (Static Public IP: 123.1.1.1) > [FIREWALL: Port forward 443] > Server1_192.168.20.11
wtf2.shft.com (Static Public IP: 123.1.1.2) > [FIREWALL: Port forward 443] > Server2_192.168.20.12
wtf3.shft.com (Static Public IP: 123.1.1.3) > [FIREWALL: Port forward 443] > Server3_192.168.20.13
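For what it's worth, here is a rough Python sketch of how I could sanity-check from outside that each forwarded name still answers on 443 (the names and addresses are just the placeholders above):

```python
import socket

# The names and targets below are just the placeholders from the description above.
FORWARDS = {
    "wtf1.shft.com": "192.168.20.11",
    "wtf2.shft.com": "192.168.20.12",
    "wtf3.shft.com": "192.168.20.13",
}

def answers_on(host, port=443, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for public_name, internal_ip in FORWARDS.items():
    state = "open" if answers_on(public_name) else "closed"
    print(f"{public_name} -> {internal_ip}: 443 {state}")
```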
I guess the real question I have is: would I be better off with two networks each with their own firewall, or two networks behind one firewall? I tried to draw up a simple version of what's in my head and what I planned on implementing, to try and get each server on its own network rather than all on the same VLAN and subnet.
While they would be separate either way, I planned on using something like DMZ-1 = 192.168.10.XXX and DMZ-2 = 192.168.20.XXX. I know they're separate regardless, but logically, in my head, it helps not to confuse them if they use different ranges.
I would greatly appreciate any input or suggestions. If you have any questions for me, and I'm sure some clarification will be needed, I will answer to the best of my abilities.
Thanks in advance. :D
https://pasteboard.co/I5yC2al.png
I'm not a professional network engineer either, but here are my 5c, take them for what they're worth. Some considerations are necessary here which may help drive the decision: do the servers in the DMZ need to talk to each other, or are they all completely stand-alone? What are they actually running in terms of OS and applications, and how secure can they be? How many servers? Bear in mind that a firewall such as OPNsense is itself a server running FreeBSD, although with a lot of focus on keeping it secure. It may be possible to harden your application servers themselves to a similar degree, depending.
A common practice for smaller setups would be to separate the DMZ from the LAN (as you have) and basically treat the DMZ almost as the public internet, i.e. assume that any server in there could be hacked (through what it exposes to the internet) and therefore harden all the servers themselves, with the absolute minimum number of ports and services open, etc.
Additional layering (network segments and firewalls) can bring additional isolation and security as well as scale but I wouldn't go there just for the sake of it before taking a hard look at the servers themselves first and how they interact with one another on the DMZ if at all.
Hi Nashmeira,
I am a technical architect :-)
You use internal firewalls for reasons of scale and performance. From a logical point of view it is just as secure to have a single firewall where DMZ hosts have a single interface as it is to have an external and an internal firewall where they have two.
Some general principles:
- Create a management network to access your firewall, hypervisors, network devices, etc. Separate switches if possible, separate VLAN's if not.
- Don't allow any traffic from/to your firewall on the production network. Only allow traffic *through* it. The firewall should make for a hole in the network. I've deployed (not OPNsense) firewalls that increment the TTL so that they don't even show up in traceroute. Management web interfaces, SSH access, etc. all listen on your management network only.
- Have as little database/directory/business information in the DMZ as possible. Database authentication for your websites, RADIUS for AD logins with RODC as a last resort. The full data assets stay on the internal network.
- DMZ servers should be stateless, ideally able to be deleted at the slightest whiff of suspicion, and quickly rebuilt through orchestration. Servers are cattle, not pets. Load balancers make this low impact to your clients.
- VLAN's are cheap. If webservers have fewer than a few dataflows between them (ideally zero) put them on separate DMZ's. Trunk your VLAN's through resilient physical interfaces to ESXi and OPNsense to reduce your cabling. Consider making OPNsense a virtual server to benefit from vSphere HA and vNIC's instead of VLAN's. A rough addressing sketch follows this list.
- IPS is a must. Mistrust your DMZ hosts with the greatest of paranoia. Even if you have restricted the traffic by firewall rules to what is allowed, you still need to make sure it follows normal patterns.
- Use a distributed vSwitch to centralise your port group management if you can afford the vSphere Enterprise+ licence, or if you need that for other features.
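To make the VLAN point concrete, here is a rough sketch of what one small DMZ per server could look like. The address ranges and VLAN IDs are made up for illustration, not a recommendation:

```python
import ipaddress

# Illustrative server names taken from the thread; adjust to your own inventory.
servers = ["web1", "web2", "nextcloud", "ftp", "proxy"]
base = ipaddress.ip_network("192.168.16.0/22")

# Carve a /28 per server; each one becomes its own VLAN/port group on OPNsense.
for vlan_id, (name, subnet) in enumerate(zip(servers, base.subnets(new_prefix=28)), start=110):
    hosts = list(subnet.hosts())
    print(f"VLAN {vlan_id}  {name:<10} net={subnet}  gw={hosts[0]}  server={hosts[1]}")
```

Each /28 then gets OPNsense as its gateway, with a rule set that only permits the traffic that particular server actually needs.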
Sorry for the long rant. Happy to discuss all and any of these.
Bart...
Quote from: rungekutta on March 16, 2019, 08:49:44 AM
I'm not a professional network engineer either, but here are my 5c, take them for what they're worth. Some considerations are necessary here which may help drive the decision: do the servers in the DMZ need to talk to each other, or are they all completely stand-alone? What are they actually running in terms of OS and applications and how secure......
Only one server has to talk to another, that being a file share server (Nextcloud) that talks with OnlyOffice, which is internal only and does not accept public traffic. It's now set up to only communicate with the Nextcloud server on the internal DMZ network. The other servers have no reason to talk with each other at all.
The kinds of servers in the DMZ are: 3x web servers (including Nextcloud), 1x FTP server, and 1x proxy server.
Thanks for your insight.
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- Create a management network to access your firewall, hypervisors, network devices, etc. Separate switches if possible, separate VLAN's if not.
There is a kind-of, not-really mgmt network in place, but it's on the same subnet as LAN1. What the person before me set up was, for example, 192.168.110.0/23 with all the users and VoIP on 192.168.110.x and all the servers and hardware (printers, displays, etc.) on 192.168.111.x, so uhh, yeah... I put that on my list but have it as a low priority at the moment. My biggest fear was the DMZ setup and what was in there. I'll get to the really scary thing in just a moment.
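Just to show why I say "not really": both halves sit inside the same /23, so nothing at layer 3 separates the users from the servers (the addresses below are made-up examples):

```python
import ipaddress

lan = ipaddress.ip_network("192.168.110.0/23")
user_phone = ipaddress.ip_address("192.168.110.50")    # made-up user/VoIP address
file_server = ipaddress.ip_address("192.168.111.50")   # made-up server/printer address

# Both land in the same /23, so they share one broadcast domain and
# no firewall ever sees the traffic between them.
print(user_phone in lan, file_server in lan)   # True True
```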
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- Don't allow any traffic from/to your firewall on the production network. Only allow traffic *through* it. The firewall should make for a hole in the network. I've deployed (not OPNsense) firewalls that increment the TTL so that they don't even show up in traceroute. Management web interfaces, SSH access, etc. all listen on your management network only.
I think this ties into the above, in that I need to see about getting all of the services that require management onto another network and off LAN1.
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- Have as little database/directory/business information in the DMZ as possible. Database authentication for your websites, RADIUS for AD logins with RODC as a last resort. The full data assets stay on the internal network.
All of the DBs on the DMZ are part of the individual web servers. They're all built on Ubuntu from what I have found; they're just LAMP stacks (am I saying that right?) and are currently on 16.04. Only a few ports are open, like 80, 443, and 22. SSH is set up with a key file (.pem), which for what it's worth I would think is better than a password.
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- DMZ servers should be stateless, ideally able to be deleted at the slightest whiff of suspicion, and quickly rebuilt through orchestration. Servers are cattle, not pets. Load balancers make this low impact to your clients.
The VMs are all backed up nightly, so should/if/when the unthinkable happens they can be restored pretty fast. Seven days' worth are kept before retention removes the oldest.
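For what it's worth, the retention side boils down to something like this sketch; the path and file naming are made up, and in reality the backup software handles it, so treat this as an illustration of the idea only:

```python
import time
from pathlib import Path

BACKUP_DIR = Path("/backups/vm-exports")   # hypothetical export directory
RETENTION_DAYS = 7
cutoff = time.time() - RETENTION_DAYS * 24 * 3600

# Delete any export whose modification time is older than the cutoff.
for backup in BACKUP_DIR.glob("*.ova"):
    if backup.stat().st_mtime < cutoff:
        print(f"removing {backup}")
        backup.unlink()
```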
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- VLAN's are cheap. If webservers have fewer than a few dataflows between them (ideally zero) put them on separate DMZ's. Trunk your VLAN's through resilient physical interfaces to ESXi and OPNsense to reduce your cabling. Consider making OPNsense a virtual server to benefit from vSphere HA and vNIC's instead of VLAN's.
Though my diagram doesn't show it, OPNsense and all of the systems other than those in the LAN1 bubble at the top of the diagram are VMs. The host QUARTA actually hosts OPNsense as well as a DC and an RODC. The DC here is not the AD DC that runs LAN1; it was a separate DC used just for the people that needed access to the websites and the Nextcloud setup. So all the user accounts for those users were on this DC. LAN1's AD DC is completely separate, and never the twain shall meet.
Here is the scary part. The RODC was not always there; before, DC1 and its backup DC2 were part of the DMZ. Again, I'm no network engineer/architect or what have you, but OMG, that scared the crap out of me. So the first thing I did was read up on what to do, and an RODC was my quickest solution. DC1 and DC2 are running a directory controller called Univention, and I kind of like it. I wish the community was larger so that if I had an issue I could google-fu it and find answers like I can for Active Directory, but we work with what we have. Again, DC1 and DC2 are no longer on the DMZ, and LAN1's ADDC1 and ADDC2 are not touching them.
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- IPS is a must. Mistrust your DMZ hosts with the greatest of paranoia. Even if you have restricted the traffic by firewall rules to what is allowed, you still need to make sure it follows normal patterns.
Way ahead of you on the paranoia; now I just need to get better at knowing what to look for as far as trouble goes. X3
Quote from: bartjsmit on March 16, 2019, 10:55:01 AM
- Use a distributed vSwitch to centralize your port group management if you can afford the vSphere Enterprise+ licence, or if you need that for other features.
I have not really read up on the distributed vSwitch functionality or how it differs from a normal vSwitch. I'm still learning a lot about VMware. Our vCenter is just Standard 😐 but better than no vCenter at all.
The only saving grace is that at home I use Xen a little for my NAS, VPN and dev server for my automation projects, so I have some experience with virtualization. Granted, Xen and VMware are different, but it helps; I am by no means a master of either.
I guess that's all I can think of, thanks for the insight and help.
Sounds like you're in pretty good shape.
Quote from: nashmeira on March 18, 2019, 08:21:16 PM
All of the DBs on the DMZ are part of the individual web servers. They're all built on Ubuntu from what I have found; they're just LAMP stacks (am I saying that right?) and are currently on 16.04. Only a few ports are open, like 80, 443, and 22. SSH is set up with a key file (.pem), which for what it's worth I would think is better than a password.
That's good, LAMP really lends itself to moving the database out of the DMZ. All the MySQL clients (Perl, PHP, etc.) can easily point to an external database server. Unless the database contains temporary data that can a) be wiped without impact and b) appear on the front page of your local newspaper without recriminations, it should not be in a DMZ.
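As a rough illustration of what that change looks like on the web tier (the hostname, credentials, and the pymysql dependency are all made up for the example, so treat it as a sketch rather than your actual stack):

```python
# Sketch only: the web tier connects to a database host on the internal network
# instead of a local MySQL instance on the DMZ box. Hostname, user, password and
# database name are placeholders; pymysql is an assumed client library.
import pymysql

conn = pymysql.connect(
    host="db01.internal.example",   # internal DB server, reachable only via a firewall rule
    user="webapp",
    password="change-me",
    database="site",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
finally:
    conn.close()
```

The firewall then only has to allow port 3306 from that one web server to that one database host on the internal network.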
Quote from: nashmeira on March 18, 2019, 08:21:16 PM
I have not really read up on the distributed vSwitch functionality or how it differs from a normal vSwitch.
dvSwitches allow you to trunk all your VLAN's through a couple of physical NIC's, both to spread the load and to offer resilience. They are expensive toys though. Their main attraction is that they are assigned to the cluster. If you add a host, it inherits all the port groups, VLAN's, etc. that are properties of the switch. Much less chance of errors (e.g. misspelled port group names).
The VMware Enterprise+ licence is assigned to the hosts, about $3,500 per socket. vCenter is licensed separately.
Bart...