Hi all!
For several days I've been pulling my hair out over the following. I have three servers: DirectAdmin, Home Assistant and Nextcloud. The DirectAdmin server is used by external users, who add and remove domains all the time. The HA and Nextcloud servers are more static.
So far I've configured the DirectAdmin server with plain and simple port forwarding. What I want to achieve: if certain domains are requested (ha.example.org or nxt.example.org), the firewall forwards the request to the right internal server, while all other requests go to DirectAdmin. I've tried configuring this with HAProxy and got it working for Home Assistant, since that runs on a different port. But Nextcloud is not working (I believe because of the overlapping ports 80 and 443 with DirectAdmin). When I do a curl to nxt.example.org, it shows a certificate from DirectAdmin.
Does anyone have the golden idea on how to achieve this? And/or what do you need from me (configs, for example) to check what's wrong?
This is my HAProxy config:
#
# Automatically generated configuration.
# Do not edit this file manually.
#
global
uid 80
gid 80
chroot /var/haproxy
daemon
stats socket /var/run/haproxy.socket group proxy mode 775 level admin
nbthread 4
hard-stop-after 60s
no strict-limits
tune.ssl.ocsp-update.mindelay 300
tune.ssl.ocsp-update.maxdelay 3600
httpclient.resolvers.prefer ipv4
tune.ssl.default-dh-param 2048
spread-checks 2
tune.bufsize 16384
tune.lua.maxmem 0
log /var/run/log local0 info
lua-prepend-path /tmp/haproxy/lua/?.lua
defaults
log global
option redispatch -1
maxconn 5000
timeout client 30s
timeout connect 30s
timeout server 30s
retries 3
default-server init-addr last,libc
default-server maxconn 5000
# autogenerated entries for ACLs
# autogenerated entries for config in backends/frontends
# autogenerated entries for stats
# Frontend: HA (Home Assistant)
frontend HA
bind ha.example.org:8123 name ha.example.org:8123
mode tcp
default_backend homeassistant-pool
# logging options
option tcplog
# WARNING: pass through options below this line
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# Frontend: Nextcloud (Nextcloud)
frontend Nextcloud
bind nxt.example.org:443 name nxt.example.org:443
bind nxt.example.org:80 name nxt.example.org:80
mode tcp
default_backend nextcloudpool
# logging options
option tcplog
# WARNING: pass through options below this line
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# Backend: homeassistant-pool ()
backend homeassistant-pool
# health check: HA-Healthcheck
mode tcp
balance source
# stickiness
stick-table type ip size 50k expire 30m
stick on src
server homeassistant 192.168.1.88:8123 check inter 2s port 8123
# Backend: nextcloudpool ()
backend nextcloudpool
# health check: Nextcloud-Healthcheck
mode tcp
balance source
# stickiness
stick-table type ip size 50k expire 30m
stick on src
server qfeeds-office 192.168.1.35:443 check inter 5s port 443
listen local_statistics
bind 127.0.0.1:8822
mode http
stats uri /haproxy?stats
stats realm HAProxy\ statistics
stats admin if TRUE
listen remote_statistics
bind 192.168.1.1:8999
mode http
stats uri /haproxy?stats
stats hide-version
A frequent mistake with HAProxy is to define multiple frontend (external) instances. You should configure only one, and then use backends and rules to address the various backend servers.
I don't know all the details because I use the os-caddy plugin instead, which I can recommend.
HTH,
Patrick
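For reference, the single-frontend pattern described above typically looks like this in HAProxy (a sketch only: the pool names follow the config above, and the directadminpool backend is hypothetical):

frontend public-tls
bind :443
mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# route on the SNI in the TLS ClientHello, without terminating TLS
use_backend homeassistant-pool if { req.ssl_sni -i ha.example.org }
use_backend nextcloudpool if { req.ssl_sni -i nxt.example.org }
# everything else falls through to DirectAdmin
default_backend directadminpool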
Thank you for your quick reply! Ah, that might be the issue. Does Caddy have the same functionality, and could I achieve my goal with it?
It seems easier to maintain, so I might consider a shift to Caddy.
Caddy in OPNsense can only proxy web applications, but that should fit your use case.
Just installed Caddy and created a domain and handler as described in the documentation, and obviously disabled HAProxy. But when I try to connect to the configured domain, it connects to the DirectAdmin server again... :(
There are also a lot of errors about obtaining an SSL certificate in the log:
{"level":"error","ts":"2024-06-22T12:25:59Z","logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"blabla.example.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new order :: too many failed authorizations recently: see https://letsencrypt.org/docs/failed-validation-limit/"}
When I disable the NAT rule to the DirectAdmin server, Caddy works flawlessly. The question is: how can I make sure requests first go through Caddy, and are only forwarded to the DA server if no matching domain is found in the Caddy config?
You can do that by using a wildcard domain and subdomains.
When you put a handler on the wildcard domain itself, it sends all unmatched subdomains to that handler's upstream, e.g. the DirectAdmin server.
Handlers on specific subdomains match more specifically, and you can send those to other upstreams.
But for the wildcard domain cert you need a DNS provider, or a custom wildcard certificate imported into OPNsense.
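In Caddyfile terms, the wildcard-plus-subdomain idea looks roughly like this (a sketch only: the OPNsense plugin generates its config through the GUI, and the names, IPs and DNS provider here are placeholders):

*.example.org {
tls {
dns provider token # a wildcard cert requires the DNS-01 challenge
}
@nextcloud host nxt.example.org
handle @nextcloud {
# upstream certificate verification may need extra transport options
reverse_proxy https://192.168.1.35
}
# any other subdomain falls through to DirectAdmin
handle {
reverse_proxy https://192.168.10.102
}
}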
That's not really the desired situation. The DirectAdmin server should handle the certificates etc. Isn't there a type of proxy which simply proxies:
1.domain.org -> server 1
2.domain.org -> server 2
if none of the above -> server 3 (transparent)
There is, but this proxy must by necessity do the SSL termination.
Even with SSL termination, Caddy has a built-in mechanism to reverse proxy the ACME HTTP-01 challenge. So when a wildcard domain also has the DirectAdmin IP in the HTTP-01 redirection field, it can do ACME at the same time as Caddy.
https://docs.opnsense.org/manual/how-tos/caddy.html#redirect-acme-http-01-challenge
If none of this is an option for you, HAProxy is the way to go (with all of its hardships).
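In plain Caddyfile terms, the HTTP-01 passthrough mentioned above amounts to something like this (a sketch only; the plugin's HTTP-01 redirection field does this for you, and the IP is a placeholder):

http://*.example.org {
# forward ACME HTTP-01 challenges (which arrive over plain HTTP)
# to DirectAdmin so it can keep issuing its own certificates
handle /.well-known/acme-challenge/* {
reverse_proxy 192.168.10.102:80
}
}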
That could be the solution. But can I just use "*" as the domain? And what should the config look like?
If the DA host is SaaS and people log into it and create their own domains there, it's not really possible without HAProxy and a TCP stream on Layer 4.
I would suggest HAProxy.
For the Caddy implementation on OPNsense this really stretches its use case; I wouldn't go further here.
(Since getting a wildcard cert for *. is impossible)
Ah, that's what I thought, thank you for confirming.
Now the question is how to configure HAProxy ;D
I have it working for HA in HAProxy, but unfortunately the Nextcloud config keeps forwarding to the DA host. I've tried to set up just one listener as Patrick suggested, but then I'm not able to define a default pool, which breaks everything:
#
# Automatically generated configuration.
# Do not edit this file manually.
#
global
uid 80
gid 80
chroot /var/haproxy
daemon
stats socket /var/run/haproxy.socket group proxy mode 775 level admin
nbthread 4
hard-stop-after 60s
no strict-limits
tune.ssl.ocsp-update.mindelay 300
tune.ssl.ocsp-update.maxdelay 3600
httpclient.resolvers.prefer ipv4
tune.ssl.default-dh-param 2048
spread-checks 2
tune.bufsize 16384
tune.lua.maxmem 0
log /var/run/log local0 info
lua-prepend-path /tmp/haproxy/lua/?.lua
defaults
log global
option redispatch -1
maxconn 5000
timeout client 30s
timeout connect 30s
timeout server 30s
retries 3
default-server init-addr last,libc
default-server maxconn 5000
# autogenerated entries for ACLs
# autogenerated entries for config in backends/frontends
# autogenerated entries for stats
# Frontend: HA-listener (Public service)
frontend HA-listener
bind ha.example.org:8123 name ha.example.org:8123
bind nextcloud.example.com:443 name nextcloud.example.com:443
mode tcp
# logging options
option tcplog
# WARNING: pass through options below this line
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# Frontend (DISABLED): Nextcloud (Nextcloud)
# Backend: homeassistant-pool ()
backend homeassistant-pool
# health check: HA-Healthcheck
mode tcp
balance source
# stickiness
stick-table type ip size 50k expire 30m
stick on src
server homeassistant 192.168.1.88:8123 check inter 2s port 8123
# Backend: nextcloudpool ()
backend nextcloudpool
# health check: Nextcloud-Healthcheck
mode tcp
balance source
# stickiness
stick-table type ip size 50k expire 30m
stick on src
server nextcloud 192.168.1.35:443 check inter 5s port 443
listen local_statistics
bind 127.0.0.1:8822
mode http
stats uri /haproxy?stats
stats realm HAProxy\ statistics
stats admin if TRUE
listen remote_statistics
bind 192.168.1.1:8999
mode http
stats uri /haproxy?stats
stats hide-version
With multiple listeners configured, only the HA implementation works.
I've come to this setup:
#
# Automatically generated configuration.
# Do not edit this file manually.
#
global
uid 80
gid 80
chroot /var/haproxy
daemon
stats socket /var/run/haproxy.socket group proxy mode 775 level admin
nbthread 4
hard-stop-after 60s
no strict-limits
maxconn 10000
tune.ssl.ocsp-update.mindelay 300
tune.ssl.ocsp-update.maxdelay 3600
httpclient.resolvers.prefer ipv4
tune.ssl.default-dh-param 4096
spread-checks 2
tune.bufsize 16384
tune.lua.maxmem 0
log /var/run/log local0 info
lua-prepend-path /tmp/haproxy/lua/?.lua
defaults
log global
option redispatch -1
maxconn 5000
timeout client 30s
timeout connect 30s
timeout server 30s
retries 3
default-server init-addr last,libc
default-server maxconn 5000
# autogenerated entries for ACLs
# autogenerated entries for config in backends/frontends
# autogenerated entries for stats
# Frontend (DISABLED): SNI-listener (Public service)
# Frontend (DISABLED): HA-Listener (public)
# Frontend: Public-service-sni-listener ()
frontend Public-service-sni-listener
bind [::]:443 name [::]:443
bind [::]:80 name [::]:80
bind 0.0.0.0:443 name 0.0.0.0:443
bind 0.0.0.0:80 name 0.0.0.0:80
bind 0.0.0.0:8123 name 0.0.0.0:8123
bind [::]:8123 name [::]:8123
mode tcp
default_backend pool-all
# logging options
# WARNING: pass through options below this line
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# Backend (DISABLED): homeassistant-pool ()
# Backend (DISABLED): nextcloudpool ()
# Backend (DISABLED): directadminpool ()
# Backend: pool-all ()
backend pool-all
# health checking is DISABLED
mode tcp
balance source
# stickiness
stick-table type ip size 50k expire 30m
stick on src
# ACL: homeassistant_sni
acl acl_668517d7e34a26.66992240 req.ssl_sni -i app1.example1.org
# ACL: nextcloud_sni
acl acl_668517cca10095.43472848 req.ssl_sni -i app2.example2.org
# ACTION: ha_sni_rule
use-server homeassistant if acl_668517d7e34a26.66992240
# ACTION: nextcloud_sni_rule
use-server office if acl_668517cca10095.43472848
# ACTION: other_sni_rule
use-server directadmin unless acl_668517d7e34a26.66992240 acl_668517cca10095.43472848
server directadmin 192.168.10.102:443
server homeassistant 192.168.1.88:8123
server office 192.168.1.35:443
# statistics are DISABLED
Unfortunately it doesn't work. Does anyone have a suggestion?
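One detail worth noting in that backend (an observation, not a tested fix): in HAProxy, a space-separated list of ACL names in one condition is a logical AND, so "unless acl1 acl2" means "unless both match at the same time". The intended "neither matched" fallback would be written with negations, e.g. (ACL names shortened for readability, same SNIs as above):

acl ha_sni req.ssl_sni -i app1.example1.org
acl nxt_sni req.ssl_sni -i app2.example2.org
use-server homeassistant if ha_sni
use-server office if nxt_sni
use-server directadmin if !ha_sni !nxt_sni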
This will soon work in Caddy:
https://github.com/opnsense/plugins/pull/4112
You will be able to create a Layer 4 route that goes like this:
Domains: *.example.com *.opnsense.com
Matchers: not tls sni
Upstream Domain: Your hosting panel IP Address(es)
Upstream Port: 443
This will route "any" domain other than *.example.com and *.opnsense.com to the hosting panel on port 443, without terminating TLS.
The domains *.example.com and *.opnsense.com will fall through to the reverse proxy, which terminates TLS, and you can then route them via the normal HTTP handlers.
That's great news! I've now solved it with HAProxy. The next challenge was that only the IP address of the proxy was visible to the hosting panel. Logical behaviour, but I wasn't aware that customers were using fail2ban-like functionality on their websites. Luckily I've been able to activate the PROXY protocol on the hosting services to solve that.
I really like Caddy and its ease of use, but given the IP address challenge I still wouldn't be able to switch, I'm afraid. Anyway, my problem is solved for now. If someone needs help with a similar use case, feel free to DM me :)
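On the HAProxy side, enabling the PROXY protocol towards a backend is a single keyword on the server line (a sketch using the IPs from this thread; note the receiving service must be configured to accept the PROXY protocol, otherwise connections will break):

backend directadminpool
mode tcp
# send-proxy-v2 prepends a PROXY protocol v2 header carrying the
# real client IP, so fail2ban-like tooling on the panel keeps working
server directadmin 192.168.10.102:443 send-proxy-v2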
Oh thank you for your answer.
Now I know what the PROXY protocol is. Caddy supports that too:
https://caddyserver.com/docs/modules/layer4.handlers.proxy#proxy_protocol
You don't have to switch though; happy you got it working with HAProxy. But what you said gave me further insight: something else to add to the new functionality. ^^
Added: https://github.com/opnsense/plugins/pull/4112/commits/e766488355d2a4631a9653a7e6bb8802b9d13083