You can do exactly that with a single virtual server at any cloud provider: connect all your locations via e.g. WireGuard and you have a self-hosted, self-managed, transparent solution. You will have to pay about a fiver per month for that virtual server. I would never use a "VPN provider", because I care about my privacy.
I think Patrick means to use the tools that are already there: WireGuard. You rent a VPS at a VPS host you trust and let it be the WireGuard hub. All your other locations connect to this hub, and traffic is distributed as needed/configured.
[Interface]
Address = 192.168.254.1/24,2003:a:d59:3840::1/64
PrivateKey = *********
ListenPort = 51820

# Peer 1
[Peer]
PublicKey = *********
AllowedIPs = 192.168.254.2/32,2003:a:d59:3840::2/128

# Peer 2
[Peer]
PublicKey = *********
AllowedIPs = 192.168.254.254/32,2003:a:d59:3840::254/128

[...]
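A matching spoke-side configuration might look like the sketch below. The endpoint address, keys, and the decision to route everything through the hub are illustrative assumptions, not taken from the post:

```ini
; Hypothetical peer-side wg0.conf for "Peer 1" (keys and hub address are placeholders)
[Interface]
Address = 192.168.254.2/32,2003:a:d59:3840::2/128
PrivateKey = *********

[Peer]
PublicKey = <hub public key>
Endpoint = <hub public IP>:51820
; 0.0.0.0/0,::/0 routes all traffic via the hub; narrow this to
; specific networks if you only want site-to-site connectivity.
AllowedIPs = 0.0.0.0/0,::/0
; keeps the tunnel alive through NAT between the peer and the hub
PersistentKeepalive = 25
```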
nat on vtnet0 inet from 192.168.254.0/24 to any -> ww.xx.yy.zz
nat on vtnet0 inet6 from 2003:a:d59:3840::/64 to any -> dead:beef:dead:beef:dead:beef:dead:beef
pass all no state
Quote from: Gauss23 on September 17, 2024, 07:34:14 am
"I think Patrick means to use the tools that are already there: WireGuard. You rent a VPS at your trusted VPS host and let this be the WireGuard hub. All your other locations connect to this hub and traffic is distributed as needed/configured."

Exactly.

E.g. in my FreeBSD 14.1 VPS at vulture.com, /usr/local/etc/wireguard/wg0.conf:

[Interface]
Address = 192.168.254.1/24,2003:a:d59:3840::1/64
PrivateKey = *********
ListenPort = 51820

# Peer 1
[Peer]
PublicKey = *********
AllowedIPs = 192.168.254.2/32,2003:a:d59:3840::2/128

# Peer 2
[Peer]
PublicKey = *********
AllowedIPs = 192.168.254.254/32,2003:a:d59:3840::254/128

[...]

Connect as many peers as you like. If you want to route entire networks to specific peers, just add them to the "AllowedIPs" statements. Configure the peers in a matching fashion, done.

I don't know what one would need a "VPN service" for. Plus, I don't trust them.

To perform outbound NAT I use /etc/pf.conf:

nat on vtnet0 inet from 192.168.254.0/24 to any -> ww.xx.yy.zz
nat on vtnet0 inet6 from 2003:a:d59:3840::/64 to any -> dead:beef:dead:beef:dead:beef:dead:beef
pass all no state

You can add inbound port forwarding or e.g. a Caddy reverse proxy with Let's Encrypt as you like.
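For the hub to actually forward traffic between peers and apply the pf NAT rules, the FreeBSD VPS also needs routing and pf enabled at boot. A sketch of the relevant /etc/rc.conf lines, using standard FreeBSD knobs (not quoted from the post; the WireGuard rc knobs come with the wireguard-tools package):

```ini
# /etc/rc.conf fragment (sketch) for the WireGuard hub
gateway_enable="YES"        # net.inet.ip.forwarding=1
ipv6_gateway_enable="YES"   # net.inet6.ip6.forwarding=1
pf_enable="YES"             # load /etc/pf.conf at boot
wireguard_enable="YES"      # rc script from wireguard-tools
wireguard_interfaces="wg0"  # bring up wg0.conf on boot
```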
"I think Patrick means to use the tools that are already there: WireGuard. You rent a VPS at your trusted VPS host and let this be the WireGuard hub. All your other locations connect to this hub and traffic is distributed as needed/configured."

I have this currently in place. Some of my locations also have "old" IPsec tunnels between each other, but I want to get rid of those. I could use WireGuard, but then I stumbled across Netbird and I immediately became a huge fan of it.

It leverages the idea of Zero Trust, which I definitely prefer, as boundaries are vanishing more and more. In a hybrid environment with multi-cloud and multiple on-prem locations it gives you the best approach to connect everything with each other. And the best part is: the hub concept is only used when a direct connection is not possible. Otherwise the spokes connect directly to each other.

I don't understand why you didn't succeed in getting Netbird up and running. I'm using it with Authentik and used the script that was provided. No issues at all.
KISS with WireGuard only, or WireGuard + Netbird at the "price" of having a bigger ecosystem that can break more easily? Uhm ...
Connecting all the spokes in a peer-to-peer manner is another story once you have more than a few spokes: with 4 spokes that's already 6 spoke-to-spoke connections plus the hub links; with 5 spokes it's 10 connections plus the hub.
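The growth is quadratic: a full mesh of n spokes needs n(n-1)/2 tunnels, while hub-and-spoke needs only n. A quick illustration:

```shell
# Full-mesh vs hub-and-spoke link counts for n spokes
for n in 4 5 10 20; do
  mesh=$(( n * (n - 1) / 2 ))
  echo "$n spokes: full mesh = $mesh tunnels, hub-and-spoke = $n tunnels"
done
```

With 20 spokes a full mesh is already 190 tunnels to key and maintain, which is exactly the kind of bookkeeping a coordinator like Netbird automates.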
Quote from: luckylinux on September 17, 2024, 07:37:40 pm
"KISS with WireGuard only, or WireGuard + Netbird at the 'price' of having a bigger ecosystem that can break more easily? Uhm ..."

I just saw that I use Netbird with the default IdP Zitadel, not Authentik or Keycloak. I used the provided script and it ran out of the box.

Of course you add another service (at least if you self-host), but I think you gain a lot of features, like Zero Trust for your clients.

Configuring connections to one single hub is fairly easy. But if your central WireGuard hub goes down, you're lost, too.

Connecting all the spokes in a peer-to-peer manner is another story once you have more than a few spokes: with 4 spokes that's already 6 spoke-to-spoke connections plus the hub links; with 5 spokes it's 10 connections plus the hub.

With Netbird you're able to configure multiple routes to the same destination, if you want. I think OPNsense and Netbird are a perfect match here.
By the book that's a full mesh, not a hub and spoke topology. In the latter everything goes through the hub.
Good to know that's also a feature Netbird provides. If only it would work in my case.

As for Zitadel, that was the third attempt I made back then on my Hetzner VPS (after Authentik and Keycloak), and it would NOT work at all. Zitadel was such a memory hog that I believe it triggered the OOM killer due to excessive RAM usage. In any case, not an option on a low-CPU/low-RAM VPS. I have a dedicated server now with several KVM virtual machines, so I could try that again.

But I really liked Authentik; it's just an absolute PITA to interface with Netbird. And Netbird's debugging/troubleshooting capabilities are quite bad in my view: when something does not work (at all), it's not very clear (at least to me) why. And when it works, it's probably fine (until it breaks). I never managed to even get something to show up in the web GUI, so it's really frustrating, to be honest.
export NETBIRD_DOMAIN=netbird.example.com; curl -fsSL https://github.com/netbirdio/netbird/releases/latest/download/getting-started-with-zitadel.sh | bash
Granted, it could also be due to the reverse proxy (Traefik) setup and possibly some firewall rules (I added exceptions based on Netbird specifically mentioning Hetzner's stateless firewall, although that did NOT make any difference).

As to WireGuard breaking down ... I see that as a MUCH less likely risk. Yes, it might be more of a PITA to set up 100 instances of WireGuard manually. (Ironically, in my homelab, GitLab and Nextcloud kind of forced my hand on this one, since I HAVE to use NFS because their update scripts don't work with Samba/SSHFS permissions, and I don't have the time to set up a Kerberos server for NFS, so I just do NFSv3 over TCP inside WireGuard's UDP tunnel.)

But compare generating a keypair and writing one small config file for each point-to-point connection against a system that might very easily break between updates (either on the Netbird side or on the Authentik/Keycloak/Zitadel side). I'd say WireGuard is very reliable in that regard.

Netbird should start being consistent in their config files ... Depending on the guide you follow, some config/environment variables are NETBIRD_AUTH_XXXX and others are AUTH_XXXX, and it's not always clear which direction they are moving towards. (I kind of had to duplicate quite a few of them in order to suppress some warnings in the logs, although that did not solve my problems.)
My Netbird host is a Hetzner VPS, ARM64, 2 CPUs, 4 GB of RAM, of which only 1.2 GB are used, with Postgres as the database backend. I can't really reproduce the OOM problems you had.
# ln -s /usr/bin/podman /usr/bin/docker
# ln -s /usr/bin/podman-compose /usr/bin/docker-compose

Error: no container with name or ID "netbird-quickstart_zdb_1" found: no such container
Error: no container with name or ID "netbird-quickstart_zitadel_1" found: no such container
Error: no container with name or ID "netbird-quickstart_coturn_1" found: no such container
Error: no container with name or ID "netbird-quickstart_management_1" found: no such container
Error: no container with name or ID "netbird-quickstart_relay_1" found: no such container
Error: no container with name or ID "netbird-quickstart_signal_1" found: no such container
Error: no container with name or ID "netbird-quickstart_dashboard_1" found: no such container
netbird-quickstart_caddy_1
Error: no container with ID or name "netbird-quickstart_zitadel_1" found: no such container
Error: no container with ID or name "netbird-quickstart_zdb_1" found: no such container
Error: no container with ID or name "netbird-quickstart_coturn_1" found: no such container
Error: no container with ID or name "netbird-quickstart_management_1" found: no such container
Error: no container with ID or name "netbird-quickstart_relay_1" found: no such container
Error: no container with ID or name "netbird-quickstart_signal_1" found: no such container
Error: no container with ID or name "netbird-quickstart_dashboard_1" found: no such container
netbird-quickstart_caddy_1
537090513c345560782ef175c08e189493932b95de2544738b3c25be008ae775
Error: no container with name or ID "netbird-quickstart_relay_1" found: no such container
Error: no container with name or ID "netbird-quickstart_signal_1" found: no such container
Error: no container with name or ID "netbird-quickstart_coturn_1" found: no such container
Error: no container with name or ID "netbird-quickstart_zitadel_1" found: no such container
Error: no container with name or ID "netbird-quickstart_zdb_1" found: no such container
Error: no container with name or ID "netbird-quickstart_management_1" found: no such container
Error: no container with name or ID "netbird-quickstart_dashboard_1" found: no such container
Error: no container with name or ID "netbird-quickstart_caddy_1" found: no such container
Error: no container with ID or name "netbird-quickstart_zitadel_1" found: no such container
Error: no container with ID or name "netbird-quickstart_zdb_1" found: no such container
Error: no container with ID or name "netbird-quickstart_coturn_1" found: no such container
Error: no container with ID or name "netbird-quickstart_management_1" found: no such container
Error: no container with ID or name "netbird-quickstart_relay_1" found: no such container
Error: no container with ID or name "netbird-quickstart_signal_1" found: no such container
Error: no container with ID or name "netbird-quickstart_dashboard_1" found: no such container
Error: no container with ID or name "netbird-quickstart_caddy_1" found: no such container
Error: no pod with name or ID pod_netbird-quickstart found: no such pod
4834915b5ccc38ab944085421f75e24649a222ac10936b9934503424ae397811
94826dfeff09f96232cb57ee7d3e98bf13555f3126a7ce676d95004e85d1d100
netbird-quickstart_caddy_1
✔ docker.io/netbirdio/dashboard:latest
Trying to pull docker.io/netbirdio/dashboard:latest...
Getting image source signatures
Copying blob f7dab3ab2d6e skipped: already exists
Copying blob 25d8059c17de done
Copying blob ff09aab76d97 done
Copying blob e252bd70cdea done
Copying blob e9fb81678df7 done
Copying blob 78f3aa16cfa5 done
Copying blob b6c81a3e8178 done
Copying blob 932bd785729d done
Copying blob 217c556afd61 done
Copying blob f846d527a638 done
Copying blob cb7988d44772 done
Copying config 5aa906f022 done
Writing manifest to image destination
d800170607d2c90e26f03b84e5018a3f6d2510dc11d98dd9e01ddbc8bb590f6d
netbird-quickstart_dashboard_1
3bb334c8ed73d09b4b6d8dc5950116e58eb8bbdccb593979e957aa58334c7111
netbird-quickstart_signal_1
51555daff8ad139dfb116c240b779f41d67ccbbef9732bc670a5c1f5dafb9aa4
netbird-quickstart_relay_1
d4694f689c6e5e43804a9f99e175ae49ac0741cf9185ca797d762a647ae439d2
netbird-quickstart_management_1
9a642849aa9f33ed8836136e2bc9adef3abec013d7070a8e5ce8af4f98048612
netbird-quickstart_coturn_1
Traceback (most recent call last):
  File "/usr/bin/podman-compose", line 33, in <module>
    sys.exit(load_entry_point('podman-compose==1.2.0', 'console_scripts', 'podman-compose')())
  File "/usr/lib/python3.12/site-packages/podman_compose.py", line 3503, in main
    asyncio.run(async_main())
  File "/usr/lib64/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
  File "/usr/lib64/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "/usr/lib64/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
  File "/usr/lib/python3.12/site-packages/podman_compose.py", line 3499, in async_main
    await podman_compose.run()
  File "/usr/lib/python3.12/site-packages/podman_compose.py", line 1742, in run
    retcode = await cmd(self, args)
  File "/usr/lib/python3.12/site-packages/podman_compose.py", line 2499, in compose_up
    podman_args = await container_to_args(compose, cnt, detached=args.detach)
  File "/usr/lib/python3.12/site-packages/podman_compose.py", line 1204, in container_to_args
    raise ValueError("'CMD_SHELL' takes a single string after it")
ValueError: 'CMD_SHELL' takes a single string after it
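For what it's worth, that final ValueError looks like podman-compose choking on a healthcheck `test:` entry, which it requires to be either a plain string or a `CMD-SHELL` list followed by exactly one string. A hypothetical compose fragment illustrating the two forms it accepts (service name and command are made up, not from the Netbird quickstart files):

```yaml
# Hypothetical healthcheck; podman-compose 1.x rejects "CMD-SHELL"
# followed by more than one argument in list form.
services:
  example:
    healthcheck:
      # accepted: list form with a single shell string after CMD-SHELL
      test: ["CMD-SHELL", "curl -fsS http://localhost:8080/ || exit 1"]
      # also accepted: plain string, implicitly run through the shell
      # test: curl -fsS http://localhost:8080/ || exit 1
```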
OK, so podman seems to be the issue here. Is there anything that speaks against using Docker instead?