Messages - bernieo

#1
Quote
Just a couple of things:

The rule looks right. Your profile could be odd... Maybe ufw status verbose might be more revealing than ufw status numbered.

AFAIK, ufw is really just a management front-end for iptables. I'm pretty sure systemctl stop ufw won't disable the active rule-set, nor will systemctl disable ufw until after a reboot. It's possible that there's some outbound filtering happening so maybe try again after issuing ufw disable on both machines...

Yes, systemctl disable ufw followed by systemctl stop ufw works.
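
For anyone following along, a rough sketch of the difference between the two approaches, assuming a stock Ubuntu 20.04 ufw install:

ufw disable                   # flushes the ufw-managed iptables chains immediately
systemctl stop ufw            # stops the service; already-loaded rules may persist
systemctl disable ufw         # only stops the rule-set loading at the next boot
iptables -S | grep -c ufw     # count remaining ufw chains/rules to verify

After ufw disable the count should drop to zero, which is an easy way to confirm the firewall is really out of the picture while testing.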


Quote

The rule does look correct. You may want to add logging to see what happens to incoming connections:

ufw allow in log proto tcp from 192.168.0.212 to any port ldaps

Cheers. Done!
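
For the next reader, the resulting log lines can be watched like this (a sketch assuming the stock Ubuntu setup, where ufw entries land in /var/log/ufw.log and the kernel log):

tail -f /var/log/ufw.log           # rsyslog target on stock Ubuntu
dmesg --follow | grep '\[UFW'      # the same entries via the kernel ring buffer

An entry tagged [UFW ALLOW] with SRC=192.168.0.212 and DPT=636 would confirm the packets are at least reaching the host.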


Quote
IIRC, there is an OpenLDAP profile (which you can confirm by issuing ufw app list), so you could also use:

ufw allow in log proto tcp from 192.168.0.212 to any app "OpenLDAP LDAPS"

Will look into that.
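
For reference, the profile and the ports it actually opens can be checked first; a sketch, assuming the profile name matches what ufw app list reports:

ufw app list                       # enumerate the available application profiles
ufw app info "OpenLDAP LDAPS"      # show the ports/protocols the profile covers

If the profile covers 636/tcp, the app form of the rule should behave the same as the explicit port form above.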


Quote
I don't think I have ever commissioned an LDAP server with port 389/tcp closed. Perhaps there is a dependency and you need both open (I think this unlikely but possible)...

I think port 389 is unsecured unless used with TLS (STARTTLS) and is mostly used for MS AD, but it seems to be enabled here on the mail/ldaps server anyway. I think perhaps it's there for Outlook/AD compatibility.
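
A quick way to see what each port actually offers; a sketch using the LDAPS server address from this thread (the STARTTLS form needs OpenSSL 1.1.1 or newer):

openssl s_client -connect 192.168.0.213:636 </dev/null                   # implicit TLS (ldaps)
openssl s_client -connect 192.168.0.213:389 -starttls ldap </dev/null    # STARTTLS upgrade on plain LDAP

Note that since message #3 below shows 389 bound to localhost only, the second test would need to run on the mail host itself.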


Quote
Finally, in relation to your concerns regarding name resolution, you could use an entry at /etc/hosts while troubleshooting. Perhaps reboot afterwards to be sure no lingering resolver issues exist.

Do you mean an entry in the hosts file (like 192.168.0.212 cloud3) on the LDAPS host, and vice versa?
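
Presumably something like the following was meant; a sketch, assuming the hostnames cloud3 (from above) and mail3, where mail3 is a made-up name for the mail/LDAPS host:

# /etc/hosts on the mail/LDAPS host (192.168.0.213)
192.168.0.212   cloud3

# /etc/hosts on the Nextcloud host (192.168.0.212)
192.168.0.213   mail3

With these in place, each VM resolves the other's name to the LAN address regardless of what public DNS returns.
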
#2
Thank you Benyamin.

It's great to have you helping to confirm or clarify my thinking and steps in this troubleshooting.

Indeed the ports are open, and using netcat returns: connection to 192.168.0.213 636 port [tcp/*] succeeded!
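
For completeness, the probe that produces that message is presumably along these lines (a sketch; -z scans without sending data, -v reports the result):

nc -zv 192.168.0.213 636

Running the same probe against port 389 from the Nextcloud VM would also show whether plain LDAP is reachable across the LAN.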

LDAP/S is not allowed on the WAN, and as both VMs are on the same LAN subnet, LDAPS on 192.168.0.213 is open and listening but restricted to receive only from 192.168.0.212.
The ufw rule is:
ufw allow proto tcp from 192.168.0.212 to any port ldaps

The ufw status output is shown in the attachment below.
I agree that as they are on the same network OPNsense shouldn't be interfering, and I can see no rules that could prevent it. I tried your suggestion to turn ufw off on both machines but the result is the same, though I used systemctl stop ufw. I then tried systemctl disable ufw but again the result is no connection.

At this stage I'm beginning to suspect that the LDAP script attempting to contact and register with the mail server's LDAP service may be resolving the FQDN to the external (public) IP rather than the internal IP. As I don't want this to happen, I may need a way to change that.
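
An easy way to confirm that suspicion; a sketch, with mail.example.com standing in for the real FQDN, which isn't given here:

getent hosts mail.example.com      # what the system resolver (including /etc/hosts) returns
dig +short mail.example.com        # what DNS alone returns

If getent shows the public IP, an /etc/hosts entry pinning the name to the LAN address, as suggested above, should steer the script to 192.168.0.213.
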
#3
Hi
Thanks for the pointers.

When I issue ~# lsof -i -P on the mail server with the LDAP database, I get three slapd entries:

slapd 4242 openldap   8u IPv4 33726 0t0 TCP localhost.localdomain:389 (LISTEN)
slapd 4242 openldap   9u IPv4 33728 0t0 TCP *:636 (LISTEN)
slapd 4242 openldap 10u IPv4 33729 0t0 TCP *:636 (LISTEN)

So it seems the ldaps port is open and listening.
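
Worth noting from that output: 389 is bound to localhost only while 636 listens on all interfaces, so only LDAPS is reachable from the LAN. The same can be cross-checked with ss; a sketch:

ss -ltnp | grep slapd              # -l listening, -t tcp, -n numeric, -p owning process

A 127.0.0.1:389 line there means remote clients can only use port 636.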

As far as the network is concerned, thanks for the warning on the IPs, but they are not my real ones.
I have an internal LAN in the range 192.168.0.0/24. Both the mail and Nextcloud VMs are on that LAN, and the ufw firewall on the mail/LDAPS VM is set to allow LDAPS only from the IP of the Nextcloud/LDAP client.

It is true I have one network card with a public IP set on vmbr0.
The VMs access the world through vmbr1, using the separate MAC address ordered specifically for the second IP address bought for Proxmox virtualization, in a routed configuration similar to:

iface lo inet loopback

iface enp35s0 inet manual
        up route add -net X.X.X.X/X netmask 255.255.255.X gw X.X.X.X dev enp35s0
# route X.X.X.X/X via X.X.X.X

auto vmbr0
iface vmbr0 inet static
        address <PUBLIC-IP/29>
        gateway <TO PUBLIC-GW>
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
        pointopoint <PUBLIC-GW>
        bridge_hello 2
        bridge_maxage 12

auto vmbr101
iface vmbr101 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#OPNSense_LAN1

auto vmbr100
iface vmbr100 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#OPNSense_LAN0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#OPNSense_WAN0 Assigned to MAC Address of 2nd PUBLIC-IP
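
To sanity-check a layout like this from the Proxmox host, something along these lines (a sketch; the interface names match the config above):

ip -br addr show                   # one-line state and addresses per interface/bridge
ip route show                      # confirm the default route leaves via vmbr0
bridge link show                   # which ports are enslaved to which bridge

Since vmbr1, vmbr100 and vmbr101 have no physical ports, traffic between two VMs attached to the same one of these bridges never leaves the host.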


I can't see any reason for this other than an incomplete setting in OPNsense.
#4
My setup is a bare-metal AMD Ryzen 7 with a single NIC, with IP 1 for the public/internet connection plus an additional IP 2 and MAC address for virtualization. This is configured as a Debian 11 Linux server with Proxmox, two Ubuntu 20.04 LTS VMs for Mail-in-a-box_LDAP (VM1) and Cloud-in-a-Box (Nextcloud) (VM2), and an MX Linux workstation (VM3) for maintenance/testing.

The MIAB_LDAP VM serves LDAP over port 636 to mail users and is designed to provide the CIAB (Nextcloud) VM with the same LDAPS service for Nextcloud users, so that the two services can use the same credentials on both the mail and collaboration/storage services.
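
The end-to-end test for that design would be a query from the Nextcloud VM; a sketch, assuming the ldap-utils package is installed (the root-DSE query avoids needing the real base DN):

ldapsearch -H ldaps://192.168.0.213 -x -s base -b "" "(objectclass=*)"

A TLS/certificate error here points at certificates; a timeout points back at the network or firewall path.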

The 4th VM in this setup is an OPNsense server, version 20.19. It serves as the gateway and firewall, and once working fully may also provide proxy and automated certificate services through Let's Encrypt.
I can access the internet through all VMs and the host, but LDAPS will not connect between VM1 and VM2.

I am now frustrated and tired after 3 days of failure. There seems to be no reason for it. The default layout is shown in the attachment below:

Help :'( :confused: