My setup is a bare-metal AMD Ryzen 7 with a single NIC carrying IP 1 for the public/internet connection, plus an additional IP 2 and MAC address for virtualisation. It is configured as a Debian 11 Linux server running Proxmox, with two Ubuntu 20.04 LTS VMs for Mail-in-a-Box LDAP (VM1) and Cloud-in-a-Box/Nextcloud (VM2), and an MX Linux workstation (VM3) for maintenance/testing.
The MIAB_LDAP VM uses and serves LDAPS over port 636 to mail users and is designed to provide the CIAB (Nextcloud) VM with the same LDAPS service for Nextcloud users, so that the two services can use the same credentials for both mail and collaboration/storage.
The 4th VM in this setup is an OPNsense server, version 20.19. It serves as the gateway and firewall, and once fully working it may also provide proxy and automated certificate services through Let's Encrypt.
I can access the internet from all VMs and the host, but LDAPS will not connect between VM1 and VM2.
I am now frustrated and tired after three days of failure. There seems to be no reason for it. The default layout is shown in the attachment below:
Help :'( :confused:
Presuming all your VMs are on the same subnet, have you probed LDAPS (636/tcp) on VM1 from VM2 or VM3 using netcat, nmap, etc. to see if it responds...? You should probably also check that your service is listening on 636/tcp using lsof, netstat, ss, etc.
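For example, from VM2 (substituting your VM1 address), something like:
nc -zvn <IP of VM1> 636
nmap -p 636 <IP of VM1>
...and on VM1 itself, to confirm slapd is bound to a reachable address:
sudo ss -tlnp | grep 636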
If your VMs are not all on the same subnet, and are using discrete or multiple shared interfaces on the OPNsense firewall, please share a little of that setup, including interfaces used, firewall rules and any NAT that you may have configured.
From your description, it seems possible that you are trying to do too much with one interface. I should mention that mixing WAN and LAN IP addresses on a single interface is unlikely to work and exposes an exploitable vulnerability. You might need to configure VLANs within Proxmox to provide some network demarcation. The problem you will encounter is that your Linux Bridge will need to be your WAN.
Practically, this means you will either need to tunnel with a VPN through to your LAN, or expose SSH on your WAN and port forward to your VM3 (as an example) - both of which require you to work from your WAN, i.e. the internet. Alternatively, you could trunk WAN and LAN VLANs through your one interface and break them out on a 802.1Q-capable switch. I note that mixing WAN and LAN VLANs in the one trunk also exposes an exploitable vulnerability.
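As a rough sketch only (the interface name and VLAN IDs would be whatever you choose), a VLAN-aware bridge in Proxmox's /etc/network/interfaces might look something like:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
You would then tag each VM's virtual NIC (and the corresponding OPNsense interfaces) with the appropriate VLAN ID and break those VLANs out on your switch.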
Having at least two physical network interfaces will really make your solution much more workable. You could then have separate bridges for your WAN and LAN. Place VM3 and your Proxmox box in LAN, plus any physical device you would use to access VM3 (or anything for that matter). You would then create virtualised networks for your servers with corresponding virtual interfaces on your OPNsense firewall. If any of your servers are physical you should add at least one more NIC to trunk those networks to an external switch.
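For illustration only (the interface names and LAN address here are just examples), the bridge layout with two NICs might look something like:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
#WAN bridge -> OPNsense WAN
auto vmbr1
iface vmbr1 inet static
        address 192.168.0.2/24
        bridge-ports enp36s0
        bridge-stp off
        bridge-fd 0
#LAN bridge -> OPNsense LAN, Proxmox management, VM3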
Unless you have reason to, you should not expose your Proxmox server to the internet. Are you publishing anything on it? If not, just bridge it on the LAN side.
Finally, you really should not publish your publicly routable IP addresses on this public forum. If you publish those addresses and potential design vulnerabilities, nefarious personalities might take advantage of that.
Hi
Thanks for the pointers.
When I issue lsof -i -P on the mail server with the LDAP database, I get three slapd entries:
slapd   4242   openldap    8u   IPv4   33726   0t0   TCP localhost.localdomain:389 (LISTEN)
slapd   4242   openldap    9u   IPv4   33728   0t0   TCP *:636 (LISTEN)
slapd   4242   openldap   10u   IPv4   33729   0t0   TCP *:636 (LISTEN)
So it seems the ldaps port is open and listening.
As far as the network is concerned, thanks for the warning about the IPs, but they are not my real ones.
I have an internal LAN in the range 192.168.0.0/24. Both the mail and Nextcloud VMs are on that LAN, and the ufw firewall on the mail/LDAPS VM is set to allow ldaps from anywhere the IP of the nextcloud/ldap client.
It is true that I have one network card, with a public IP set on vmbr0.
The VMs access the world through vmbr1, which uses the separate MAC address ordered for the second public IP bought for Proxmox virtualisation, in a routed configuration similar to:
iface lo inet loopback

iface enp35s0 inet manual
        up route add -net X.X.X.X/X netmask 255.255.255.X gw X.X.X.X dev enp35s0
# route X.X.X.X/X via X.X.X.X

auto vmbr0
iface vmbr0 inet static
        address <PUBLIC-IP/29>
        gateway <PUBLIC-GW>
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
        pointopoint <PUBLIC-GW>
        bridge_hello 2
        bridge_maxage 12

auto vmbr101
iface vmbr101 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#OPNSense_LAN1

auto vmbr100
iface vmbr100 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#OPNSense_LAN0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#OPNSense_WAN0 Assigned to MAC Address of 2nd PUBLIC-IP
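(For completeness, which bridge each VM's virtual NIC is attached to can be double-checked on the Proxmox host with, for example:
bridge link show
or brctl show if bridge-utils is installed.)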
I can't see any reason for this other than an incomplete setting in OPNsense.
Quote from: bernieo on November 07, 2021, 06:41:47 AM
So it seems the ldaps port is open and listening.
Your tests confirm that ldap is listening on 636/tcp, but not that it is open.
Quote from: bernieo on November 07, 2021, 06:41:47 AM
Both mail and nextcloud VMs are [on] that lan and the ufw firewall on the mail/ldaps VM is set allow ldaps from anywhere the ip of the nextcloud/ldap client.
I think your problem is here. The wording of your ufw rule seems a little off when you mention VM2. Moreover, if both your VM1 and VM2 systems are on the same network, OPNsense has nothing to do with filtering those packets.
I would try connecting from VM2 using netcat:
nc -zvn <IP of VM1> 636
If that doesn't work, briefly disable ufw on VM1:
sudo ufw disable
...and then check again from VM2 using netcat as before.
If that works, you need to correct your ufw rule (and enable ufw again).
I would suggest it should look something like the following:
<redacted>:~$ sudo ufw status
[sudo] password for <redacted>:
Status: active
To                         Action      From
--                         ------      ----
636/tcp                    ALLOW       Anywhere
...
Regarding that rule, if you have WAN clients trying to connect to LDAPS on your OPNsense WAN IP and you are not performing NAT on those connections at OPNsense, your ufw will have to permit any source to connect. You may want to reconsider publishing LDAP on your WAN; it will get hit and harvested for credentials. Perhaps consider using OPNsense's VPN server offerings and connecting your clients via VPN. Then you can configure your ufw to permit just the VPN subnet and VM2, like this:
<redacted>:~$ sudo ufw status
[sudo] password for <redacted>:
Status: active
To                         Action      From
--                         ------      ----
636/tcp                    ALLOW       <VPN Client Network/mask>
636/tcp                    ALLOW       <IP of VM2>
...
It's also worth mentioning that it is conceivable that the outbound connections permitted by ufw on VM2 are too restrictive. You may want to disable ufw on that host to perform your checks too.
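For instance, the default outbound policy is shown near the top of the verbose status output (the output below is illustrative only):
<redacted>:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
...
If that were to show deny (outgoing) on VM2, you would need explicit egress rules for 636/tcp.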
You can also launch the GUI if you find that more intuitive:
sudo gufw
Thank you Benyamin.
It's great to have you helping to confirm or clarify my thinking and steps in this troubleshoot.
Indeed the port is open, and netcat returns: Connection to 192.168.0.213 636 port [tcp/*] succeeded!
LDAP/S is not allowed on the WAN, and as both VMs are on the same LAN subnet, LDAPS on 192.168.0.213 is open and listening but restricted to receive only from 192.168.0.212.
The ufw rule is:
ufw allow proto tcp from 192.168.0.212 to any port ldaps
The ufw status output is shown in the attachment below.
I agree that as they are on the same network OPNsense shouldn't be interfering, and I can see no rules that could prevent it. I tried your suggestion to turn ufw off on both machines, but the result is the same, though I used
systemctl stop ufw. I then tried systemctl disable ufw, but again the result is no connection.
At this stage I'm beginning to suspect the LDAP script attempting to contact and register with the mail server's LDAP service may be resolving the external DNS FQDN to the public IP rather than the internal IP. As I don't want this to happen, I may need a way to change that.
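A quick way for me to check what the name resolves to from the Nextcloud VM would be something like (the hostname here is just a placeholder):
getent hosts mail.example.tld
If that returns the public IP rather than 192.168.0.213, that would explain it.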
No problem, Bernieo.
Just a couple of things:
The rule looks right. Your profile could be odd... Maybe ufw status verbose might be more revealing than ufw status numbered.
AFAIK, ufw is really just a management front-end for iptables. I'm pretty sure systemctl stop ufw won't disable the active rule-set, nor will systemctl disable ufw until after a reboot. It's possible that there's some outbound filtering happening so maybe try again after issuing ufw disable on both machines...
Having said that, your probe with netcat showed a successful connection. I presumed that was from 192.168.0.212. Is that correct? Perhaps try the above anyway.
The rule does look correct. You may want to add logging to see what happens to incoming connections:
ufw allow in log proto tcp from 192.168.0.212 to any port ldaps
IIRC, there is an OpenLDAP profile (which you can confirm by issuing ufw app list), so you could also use:
ufw allow in log proto tcp from 192.168.0.212 to any app "OpenLDAP LDAPS"
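If that profile does exist on your system, ufw app info should show exactly which ports it covers, e.g.:
sudo ufw app list
sudo ufw app info "OpenLDAP LDAPS"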
I don't think I have ever commissioned an LDAP server with port 389/tcp closed. Perhaps there is a dependency and you need both open (I think this unlikely but possible)...
Finally, in relation to your concerns regarding name resolution, you could use an entry at /etc/hosts while troubleshooting. Perhaps reboot afterwards to be sure no lingering resolver issues exist.
Quote
Just a couple of things:
The rule looks right. Your profile could be odd... Maybe ufw status verbose might be more revealing than ufw status numbered.
AFAIK, ufw is really just a management front-end for iptables. I'm pretty sure systemctl stop ufw won't disable the active rule-set, nor will systemctl disable ufw until after a reboot. It's possible that there's some outbound filtering happening so maybe try again after issuing ufw disable on both machines...
[/quote]
Yes systemctl disable ufw followed by systemctl stop ufw works.
Quote
The rule does look correct. You may want to add logging to see what happens to incoming connections:
ufw allow in log proto tcp from 192.168.0.212 to any port ldaps
[/quote]
Cheers. Done!
Quote
IIRC, there is an OpenLDAP profile (which you can confirm by issuing ufw app list), so you could also use:
ufw allow in log proto tcp from 192.168.0.212 to any app "OpenLDAP LDAPS"
[/quote]
Will look into that.
Quote
I don't think I have ever commissioned an LDAP server with port 389/tcp closed. Perhaps there is a dependency and you need both open (I think this unlikely but possible)...
I think port 389 is unsecured unless used with TLS and is mostly used for MS AD, but it seems to be enabled here on the mail/LDAPS server anyway. I think perhaps it's there for Outlook/AD compatibility.
Quote
Finally, in relation to your concerns regarding name resolution, you could use an entry at /etc/hosts while troubleshooting. Perhaps reboot afterwards to be sure no lingering resolver issues exist.
Do you mean an entry in the hosts file (like 192.168.0.212 cloud3) on the LDAPS host and vice versa?
Quote from: bernieo on November 08, 2021, 05:23:25 PM
Yes systemctl disable ufw followed by systemctl stop ufw works.
Best confirm that with iptables -L, methinks...
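If the ruleset really is gone, you shouldn't see any populated ufw chains, e.g.:
sudo iptables -L -n | grep -i ufw
...should return nothing while ufw is disabled.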
Quote from: bernieo on November 08, 2021, 05:23:25 PM
I think port 389 is unsecured unless used with TLS and is mostly used for MS AD, but it seems to be enabled here on the mail/LDAPS server anyway. I think perhaps it's there for Outlook/AD compatibility.
I was more concerned that there might be an internal application dependency on being able to communicate on the interface IP between TCP ports 389 and 636. It would depend on the application (slapd) implementation. Like I said: unlikely but possible. It's more possible that if this mechanism existed, it would be on the loopback interface.
Quote from: bernieo on November 08, 2021, 05:23:25 PM
Do you mean an entry in the hosts file (like 192.168.0.212 cloud3) on the LDAPS host and vice versa?
Yes, but also make an entry using the external DNS name you were concerned about, e.g. cloud3.mydomain.tld, to ensure it is resolving locally how you expect. Don't use the FQDN which has a trailing dot. Later you can add a local zone on OPNsense (presuming that's your DNS Server).
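As a sketch (the mail host's name below is a placeholder; substitute VM1's real hostname):
# on VM1 (mail/LDAPS host):
192.168.0.212   cloud3 cloud3.mydomain.tld
# on VM2 (cloud3):
192.168.0.213   mailhost mailhost.mydomain.tld
Afterwards, getent hosts cloud3.mydomain.tld (and the equivalent check on VM2) should return the LAN address rather than the public one. When you later move this to OPNsense, a host override in Unbound should achieve the same result.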