Installing and configuring Tailscale for self-hosted services (split DNS)

Started by RaymondFFX, April 21, 2025, 11:05:11 PM

I spent quite a bit of time getting this to work so I figured I would share it here in case others have also struggled with this.

I have an OPNsense install working as my main firewall/router. On my home network I run a number of self-hosted services. They are all accessed through a public domain name so I can use Let's Encrypt for TLS certificates. Some of them, though, I want to be accessible only through Tailscale. This is where the problem comes in.

If you use a public domain name and are not at home, the traffic will, by default, be routed through your public IP address. The reverse proxy or service will see it as just a random outside IP and drop the connection. To access the services through Tailscale, you need to set up split DNS.
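
To illustrate, this is roughly the difference a remote client sees once split DNS is in place (the hostname and addresses below are made-up examples, not values from my setup):

  # Without split DNS, the public record is returned:
  dig +short website.example.com
  203.0.113.10      (WAN address - the proxy drops the connection)

  # With split DNS over Tailscale, Unbound on OPNsense answers:
  dig +short website.example.com
  192.168.1.10      (internal proxy address - the connection works)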

I'm going to list the steps I took. I can't think of a reason this wouldn't work with the old Tailscale ports install, but I did it with the new Tailscale plug-in, so I will list those steps too.

Optional - if Tailscale was already installed
  • Uninstall the Tailscale ports install with the following steps in the OPNsense shell:
    • service tailscaled stop
    • service tailscaled disable
    • opnsense-code ports
    • cd /usr/ports/security/tailscale
    • make deinstall
    • make clean

Installing and configuring Tailscale
  • Install the Tailscale plug-in from the OPNsense GUI - System > Firmware > Plugins
  • Add and enable the Tailscale interface
  • Create an allow rule on the Tailscale interface with Source and Destination as Any
    • Access is already authenticated through Tailscale so this should be fine
  • Add the subnet the webserver is in as an advertised route in the Tailscale plugin settings
  • Make sure DNS is enabled on the Tailscale interface
    • I'm using UnboundDNS so I go to Services > UnboundDNS > General and check the Tailscale interface under the Network Interfaces setting
  • Create overrides for your services (a sketch of the resulting Unbound configuration follows after this list)
    • For UnboundDNS I go to Services > UnboundDNS > Overrides
    • In the Hosts section I create an entry for the Nginx Proxy Manager reverse proxy I use, with its local IP, for example 192.168.1.10
      • If you do not use a reverse proxy, enter the local IP of the service you are trying to make accessible and skip the Aliases part below
    • In the Aliases section, create an entry per site that is proxied through NPM.
      • For Host Override, choose the proxy host entry created in the previous step
      • For a site like website.example.com, enter website in the Host field and example.com in the Domain field
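
For reference, the host override and aliases above should end up producing Unbound directives roughly like the following (a sketch using the example names and IP from this post - the hostname proxy.example.com is just a placeholder, and the exact config OPNsense generates may differ):

  server:
    local-data: "proxy.example.com. IN A 192.168.1.10"
    local-data: "website.example.com. IN A 192.168.1.10"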

The OPNsense part should now be done.
Now for the Tailscale portion.
The OPNsense machine should now be visible in the Tailscale admin dashboard.

  • Accept the shared subnet(s) under the machine menu > Edit route settings
  • Go to the DNS tab and click on Add nameserver > Custom...
  • Enter the Tailscale IP of your OPNsense machine
  • Check Restrict to domain and enter your domain name and save
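
To verify the split DNS part from a remote client, a lookup should now return the internal address (again using the example names and IP from above):

  # On a Tailscale-connected client, away from the home network:
  nslookup website.example.com
  # should return 192.168.1.10 (the internal proxy) instead of the public record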


You should now be able to access your hosted services, whether you are connected through Tailscale or not, and whether you have the Exit Node enabled or not.

As far as the webserver or proxy is concerned, the traffic originates from the OPNsense IP, so you can base access restrictions on that. In the firewall logs, DNS requests show up correctly with the Tailscale IP of the requester. The routing of the traffic itself is, I believe, handled internally by Tailscale and only appears as outgoing traffic from the firewall IP to the service, on the interface the service is connected to.
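
For example, with plain nginx behind the proxy, such a restriction could look roughly like this (the addresses are placeholders for the OPNsense IP and upstream service in your own setup; Nginx Proxy Manager exposes the same idea through its Access Lists):

  # Hypothetical snippet: only allow traffic arriving via the OPNsense gateway
  location / {
      allow 192.168.1.1;                    # OPNsense address as seen by the proxy
      deny  all;
      proxy_pass http://192.168.1.20:8080;  # placeholder upstream service
  }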

Hope this helps anyone running into this problem in the future! :)

Thanks Raymond,

that was really helpful. I've successfully managed to get it to work, except for the combination of internal (split) DNS (Active Directory domain DNS servers here) and running OPNsense as an exit node.
I have the following setup:
  • OPNsense running Tailscale, with routes to the internal network advertised (as you've described)
  • Headscale running as the coordination server (with MagicDNS enabled and advertising the internal (Active Directory) DNS servers)

I have the following findings (using Windows clients, connected over a mobile hotspot):
  • If the exit node is not enabled on a client, the VPN is working (access to internal and external addresses) and DNS is working (resolving internal and external names correctly). The public IP (www.whatismyipaddress.com) on the client is the mobile network IP (as expected).
  • If the exit node is enabled on a client, the VPN is working (access to internal and external addresses) but DNS only resolves public addresses (internal names don't resolve --> non-existent domain). The public IP (www.whatismyipaddress.com) on the client is the OPNsense WAN provider IP (as expected, so all traffic is actually tunneled through the exit node).

In both cases, nslookup uses the MagicDNS server (100.100.100.100) by default. Without an active exit node, DNS resolves internal names correctly; with an active exit node it does not. If I do "nslookup - <internaldns-server>", internal names do resolve correctly, even with an active exit node - so it is definitely not a firewall issue (the internal DNS servers are reachable from the tailnet)...

It seems like enabling the exit node on the (Windows) client breaks (split) DNS resolution...
I suspect I am missing some small bit - I've been chasing this for a few days now without success.
Any hints would be highly appreciated!

Thanks!