VLAN DMZ for website server (Ubuntu Server) or any other way of doing it?

Started by flamur, October 25, 2025, 11:00:51 PM

Hi,

I am just starting on my lifetime goal of having a somewhat serious network at home and hosting my own websites.

I have just installed an OPNsense firewall as the first node after the fiber. I then connected a newly installed Ubuntu server to the firewall.

My plan, after some googling and reading, is to create a separate VLAN and DMZ for that server.

What makes it a bit tricky is that I have a TrueNAS Scale server with NGINX, proxied through Cloudflare. Before this, I just pointed Cloudflare to my public IP, and my Asus router would port-forward that to the TrueNAS Scale server, where NGINX directed the traffic to my website server.

Taking this into account, my NGINX would be on a separate VLAN, not the DMZ, to keep it "internal" and safer. But I am not sure how it would then be able to direct traffic to my new website server in the other VLAN with the DMZ setup.

My question is perhaps too broad, since I don't really know where to start. Does anyone have a guide on this specific thing, or can point me in the right direction?

Or would you recommend any other (more secure) setup for my website server?

Best regards,

Flamur

Usually, you would just use a reverse proxy like Caddy or HAProxy (there are how-tos for those in the tutorial section) to redirect requests to any web backend by name. Used this way, you do not need to know any ports, just the DNS names of the servers. The reverse proxy does the TLS termination and also fetches the certificates via ACME.sh (preferably via wildcard domains). You would open up ports 80 and 443 on your OpnSense, while the web UI is put on another arbitrary port.
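
As a minimal sketch of that idea with Caddy (the hostnames and backend addresses below are just placeholders, not anything from this thread):

# Caddy terminates TLS on 80/443 and fetches certificates via ACME automatically
www.example.com {
    reverse_proxy 192.168.20.10:8080    # website backend in the DMZ VLAN
}

cloud.example.com {
    reverse_proxy 192.168.20.11:80      # another backend, addressed by DNS name only
}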

By setting up a separate DMZ VLAN for the backend web server(s), you make sure that if one of them gets hacked, it cannot be used to get through to your valuable resources on the LAN. Since OpnSense has access to all VLANs, you can put the backends anywhere.

For this to work, you must (these are quite a few tasks):

1. Divert the OpnSense web UI to other ports.
2. Set up a working DMZ VLAN with separation from your LAN to put your web server into.
3. Configure the reverse proxy.
4. Set up certificate generation.
5. Configure DNS names to point to your OpnSense instance (potentially involving DDNS).

Cloudflare works differently, AFAIK. They use a reverse tunnel from your web service to Cloudflare, which works much like a VPN. This way, nobody using your web service ever gets to know your real IP or contacts it via ports 80/443. This has the advantage of working even if you are behind CG-NAT, where you cannot open a port from the outside in the first place. Your web backend can live in a separate DMZ VLAN in this scenario as well. Since the connection is made from you to Cloudflare and not the other way around, you also do not have to deal with (D)DNS or expose anything directly to the internet.

For this to work, you must set up:

1. A working separate DMZ VLAN which can access the internet. You place your web server in that DMZ, as well as the Cloudflare client.
2. Cloudflare reverse proxy with certificates.
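
To make step 2 a bit more concrete, the ingress configuration of the cloudflared client looks roughly like this (tunnel ID, hostname and backend IP are placeholders; see Cloudflare's documentation for the details):

# ~/.cloudflared/config.yml - sketch only
tunnel: <your-tunnel-id>
credentials-file: /root/.cloudflared/<your-tunnel-id>.json

ingress:
  - hostname: www.example.com
    service: http://192.168.20.10:80    # web server / nginx inside the DMZ VLAN
  - service: http_status:404            # mandatory catch-all rule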
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on October 26, 2025, 09:13:28 AM
[...] Cloudflare works differently, AFAIK. They use a reverse tunnel from your web service to Cloudflare, which works much like a VPN. [...] For this to work, you must set up:

1. A working separate DMZ VLAN which can access the internet. You place your web server in that DMZ.
2. Cloudflare reverse proxy with certificates.


I have been working on the server settings and OPNsense settings, and taking time to read and watch YouTube videos on this and other network topics to better understand what I am doing. I still feel lost, but have some things pinned down. I have started mapping everything in an Excel sheet to keep track of my network.

I want to use the last solution you describe. It is somewhat like what I had before on my normal home Asus router, where it was very simple to set up.

What I have done is put my server on a dedicated port on the firewall. I have made a subnet(?) for it to be on - 192.168.20.1 as the IP within the interface settings (DHCP server address might be the technical term? 🤷♂️).

Not sure if I need DHCP on that interface, since I will only have my server running there with a static IP - if possible.

I will also place my TrueNAS Scale server, which handles nginx, on a dedicated port on the firewall, with its own subnet(?), 192.168.10.1.

In my mind this would work like different "VLANs", but hardwired to the ports instead, to keep it simpler for me to handle in the beginning and also easier on the firewall ports for maximum speed.

What's next?
1) So in my mind I now have to figure out how to open ports for traffic to flow from WAN to my TrueNAS server, to be able to get traffic from Cloudflare as before (what was called port forwarding on the Asus router).

2) Then I need to open ports from my TrueNAS server to my hosting server for traffic to flow between them - so that nginx can handle the proxying.

Am I on to it/close or totally lost? 🤔😅

Questions
1) If the above is somewhat correct: where in all this do I configure the DMZ?
2) Do I need DHCP on the two server ports/interfaces, since there will only be one server on each dedicated port of the firewall?

With Cloudflare, there are no ports to be opened, since the whole Cloudflare connection goes inside-out - Cloudflare provides a client that connects to their servers and then uses this tunnel to direct traffic to your internal network and services. That is, they take on the part of terminating HTTP(S) traffic on their end (including certificates), doing the reverse proxying and directing the traffic through a "kind of VPN" tunnel to your network.
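
Roughly, the setup on your side boils down to running the cloudflared client somewhere inside the DMZ (the tunnel and host names below are just examples):

cloudflared tunnel login                              # authenticate against your Cloudflare account
cloudflared tunnel create home                        # "home" is an arbitrary tunnel name
cloudflared tunnel route dns home www.example.com     # create the DNS record pointing at the tunnel
cloudflared tunnel run home                           # run it (usually installed as a service)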
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on November 09, 2025, 05:07:11 PM
With Cloudflare, there are no ports to be opened, since the whole Cloudflare connection goes inside-out - Cloudflare provides a client that connects to their servers and then uses this tunnel to direct traffic to your internal network and services. That is, they take on the part of terminating HTTP(S) traffic on their end (including certificates), doing the reverse proxying and directing the traffic through a "kind of VPN" tunnel to your network.

Thanks for that explanation. I thought it was the other way around 🙈

Can I ask if I even need to think about a DMZ with my planned setup? I won't use VLANs, since I use two different dedicated ports on the firewall for my two servers.

I have rules that allow internet access for them, but no local connections (I followed the https://homenetworkguy.com guide). Is this a DMZ? 🤔

I use this rule on all my interfaces more or less as a standard:

(https://photos.app.goo.gl/HWNak1ELHYHaeqr59)

Would this rule be good practice for my TrueNAS Scale that hosts the nginx proxy, for example?

Well, I always use a DMZ for any openly accessible service. The main reason for this is that web applications (or complex applications in general, which excludes "more simple" SSH, file service and VPN endpoints) bear the risk of being exploited.

Imagine an SQL injection that bypasses the login, or any other OWASP-style exploit. If this were the case, an attacker could probably use the application as a starting point to gather intelligence or break into your network. By confining this application to a DMZ, it cannot be used to gain access to your LAN - correct firewalling implied.
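
For reference, the "correct firewalling" usually boils down to a rule set on the DMZ interface along these lines (the alias name is just an example, not taken from your setup):

Action   Source    Destination       Ports      Purpose
Pass     DMZ net   DMZ address       53 (DNS)   allow DNS against the firewall itself
Block    DMZ net   PrivateNetworks   any        no access to any internal subnet
Pass     DMZ net   any               any        internet access for the DMZ hosts

Here, PrivateNetworks would be an alias containing the RFC1918 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).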

A bad example for this would be Proxmox Backup Server: It has a limited API, yet its endpoint is the same as the web UI. Thus, if you just want to expose backup services, you have to expose the full web UI. Therefore, you must use a VPN on top, which would be dispensable if the API was separate.

The same reasoning applies for IoT-devices that use outbound connections to the cloud, because these connections can also be reversed. Heck, I even confine smartphones to a different VLAN for the same reasons. They need internet access, but no access to my LAN.

P.S.: You have to trust Cloudflare not to misuse their infrastructure; that should be clear by now. However, with the endpoint in a DMZ, this is also less of an issue, provided that their daemon runs there and not on the firewall itself.

With regard to your TrueNAS server: it would be better if you separated the file server (LAN) from the application server (DMZ). That way, you could confine the application (which might get hacked) to a subset of your data (i.e. the part that you give it access to). For this, you would need a firewall rule to allow file access and hope that the authorisation cannot be circumvented.
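
On top of the generic DMZ rules sketched above, that would be a single "pinhole" rule, placed above the block rule since rules are evaluated top-down (addresses and protocol are placeholders):

Pass   Source: app VM in DMZ (e.g. 192.168.20.11)   Destination: NAS on LAN (e.g. 192.168.10.10)   TCP 445 (SMB) or 2049 (NFS)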

You can imagine this like an onion, where one has to get through several layers in order to reach the core.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on November 10, 2025, 10:50:54 AM
With regard to your TrueNAS server: it would be better if you separated the file server (LAN) from the application server (DMZ). That way, you could confine the application (which might get hacked) to a subset of your data (i.e. the part that you give it access to). For this, you would need a firewall rule to allow file access and hope that the authorisation cannot be circumvented.

Can I separate my server if I only have one Ethernet port on the TrueNAS server? I thought I read that endpoints can't handle tagged VLANs. Or how would I do that? 🤔

You can run this single port as a trunk port with multiple tagged VLANs.

All my TrueNAS systems have a 2-port LACP link to my switch and all VLANs on top of that. Works with a single port, too.
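
For a rough idea of what a tagged VLAN interface means on the TrueNAS side (interface name, VLAN ID and address below are made-up examples; in TrueNAS SCALE you would configure this under Network -> Interfaces rather than on the shell):

ip link add link eno1 name vlan20 type vlan id 20    # VLAN 20 on the physical port eno1
ip addr add 192.168.20.10/24 dev vlan20              # address inside the DMZ subnet
ip link set vlan20 up

The OpnSense (or switch) port the NAS is plugged into then has to carry the same VLAN tagged, i.e. act as a trunk.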
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

IDK exactly if you could run nginx as a separate VM on TrueNAS Scale, which then would be connected to a VLAN or separate network adapter.

I do that with Proxmox, where it works. Patrick presumably uses TrueNAS core on FreeBSD, which might differ.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on November 10, 2025, 11:57:36 AM
IDK exactly if you could run nginx as a separate VM on TrueNAS Scale, which then would be connected to a VLAN or separate network adapter.

Of course you can, and I do. I run both TN CORE and TN CE (formerly SCALE) - as I wrote, all connected exclusively via VLANs. The TrueNAS CE machine runs 3 VMs:

- Windows 11
- ElastiFlow on Ubuntu
- Home Assistant

Currently my workloads are distributed like this:

Storage and jails: TrueNAS CORE
Docker "apps" and VMs: TrueNAS CE

HTH,
Patrick
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

I wonder if my novice level might be forgotten here.

I don't know how to trunk my ports. I will google it, though. But is this needed for my purpose? It feels like I am getting a bit in over my head atm.

Or can the firewall rules be set up so that they separate my apps from the local storage, instead of using VLANs? 🤔

I only have this on my TN server:
1) nginx to proxy traffic from Cloudflare, so that my website server works. It also proxies for Nextcloud (cloud storage) and the Plex server.
2) The Plex server app for movies
3) The Nextcloud app for cloud storage
4) Some locally shared storage as a central place for my data

I guess I need to expose 1-3 to the internet, and to do that as locked down as possible. That's why I use Cloudflare to handle my domain name and point traffic to my router, and only open the ports they need towards WAN.

No. 4, I understand, should run on the LAN, not exposed to the internet like the rest.

So do I need to put in more reading on the trunk solution to get VLANs up, or is it just as good to use firewall rules (if that is even possible)?

Also please note my TN server only has one NIC.

I liked this solution, but I think I might not understand what I am reading.

Quote
For this to work, you must set up:

1. A working separate DMZ VLAN which can access the internet. You place your web server in that DMZ.
2. Cloudflare reverse proxy with certificates.

1. This is done, I think. I have my web server on a separate interface with its own subnet, with locked-down firewall rules that only allow internet access and nothing local within my network.
2. I have a Cloudflare account for my domain name. There I previously set up Cloudflare to handle my domain and point to my own IP in the DNS records. I also enabled SSL with the strict setting, and took the certificate and put it into the nginx app.

But perhaps I don't need the nginx app on the TrueNAS Scale anymore, if OPNsense can direct the traffic locally instead? 🤔

Quote from: flamur on November 10, 2025, 02:10:24 PM
But perhaps I don't need the nginx app on the TrueNAS Scale anymore, if OPNsense can direct the traffic locally instead? 🤔

There are two parts to this:

I said in my first answer that to set up a connection from outside, you can follow either an OpnSense-only setup or a Cloudflare-based approach. Either one will take care of providing a connection to your internal services.

Cloudflare is easier, because the necessary steps to open up and encrypt an inbound connection and/or set up your own reverse proxy for that might be difficult for a beginner.


The second part concerns where to actually host your own service. You need a separate physical LAN or a VLAN to create an isolated subnet (DMZ), which should be the one your application endpoint (nginx) runs on. For this, you need to create a VM that is connected to your DMZ (either via a separate port/switch or via a VLAN). The Cloudflare daemon would then run on this VM, as well as your Nginx.
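
As a sketch of what would run on that VM (all names and addresses are placeholders): cloudflared hands the tunnel traffic to a local nginx, which then proxies to the backends, roughly like this:

# /etc/nginx/conf.d/www.conf - sketch only
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://192.168.20.10:8080;                        # website server in the DMZ
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}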

Maybe Patrick can tell you how to do that; as I said, I use Proxmox for this purpose.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: meyergru on November 10, 2025, 02:34:19 PM
Quote from: flamur on November 10, 2025, 02:10:24 PM
But perhaps I don't need the nginx app on the TrueNAS Scale anymore, if OPNsense can direct the traffic locally instead? 🤔

The Cloudflare daemon would then run on this VM, as well as your Nginx.


Many thanks for your patience and help. I read, but sometimes it flies over my head WHAT I read 😅

I see now that you mean Cloudflare in a different way than I had it set up before. I will google and see if I can get that up and running before going further.

I will also investigate how to get the VLAN solution to work on my TrueNAS Scale server.

I will re-plan my work and start with getting my TrueNAS Scale up and running on the new network. I thought I could leave it running on the Asus router while setting up everything else 😇

I think it's just a big overload of new concepts and stuff I have never heard of or done, so it takes a lot of time to process 😊

I figured as much, which is why I wrote:

Quote from: meyergru on October 26, 2025, 09:13:28 AM
For this to work, you must (these are quite a few tasks):
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+