IPsec blocked for multiple users on one connection

Started by ServerCat, March 21, 2024, 10:56:16 AM

Hello

I am new here and hope you are all doing well!

I am facing a problem with an IPsec implementation; it is probably due to my configuration. As a basis I followed this guide: https://docs.opnsense.org/manual/how-tos/ipsec-swanctl-rw-ikev2-eap-mschapv2.html

Implementation:
I created two connections and use the id value in the Remote Authentication to distinguish which connection is used. Authentication is done with EAP over RADIUS. The RADIUS server puts the group name into the Class attribute; I only have to create a group with the same name on the firewall and can then select it in the Groups field of the Remote Authentication.
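For illustration only, a minimal sketch of what the RADIUS side could look like, assuming a FreeRADIUS users file (user names and passwords are placeholders, not from my setup):

# users file (hypothetical entries): the Class reply attribute carries
# the group name, which must also exist as a group on the firewall
user_a1 Cleartext-Password := "changeme"
        Class = "GroupA"

user_b1 Cleartext-Password := "changeme"
        Class = "GroupB"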

Up to here everything works well. I can log in with a user in GroupA and have access to the networks defined in the Child of Connection A. The same applies to a user in GroupB.

The problem:
It can be reproduced as follows:

1. User_A1 connects to ConnectionA.
2. Everything works; User_A1 can access all the networks defined in the Child.
3. User_A2 connects to ConnectionA (now two users are connected).
4. User_A1 can no longer reach the networks. It looks as if their VPN is blocked, while User_A2 can work.
5. When User_A2 disconnects, User_A1 can work again.

It looks as if the connection is handed over from User_A1 to User_A2, and when the latter logs out again, it is returned to User_A1.
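Both users do show up separately on the firewall; the active SAs can be checked with the standard swanctl CLI (output omitted here):

swanctl --list-sas    # active IKE and CHILD SAs, including assigned virtual IPs
swanctl --list-conns  # connection definitions as loaded from swanctl.conf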

What I tried:
Maybe strongSwan treats every user as the same peer because %any is set as the EAP identity. But in the leases section of the OPNsense GUI every user gets their own IP address from the pool. In the docs (https://docs.strongswan.org/docs/5.9/swanctl/swanctlConf.html) I found: "If EAP or XAuth authentication is involved, the EAP-Identity or XAuth username is used to enforce the uniqueness policy instead." I tried different values for <conn>.unique, but nothing changed.
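For reference, the uniqueness policy sits at the connection level in swanctl.conf; these are the values I cycled through (block shortened to the relevant option):

connections {
    <conn> {
        # possible values: never | no | keep | replace
        unique = no
        ...
    }
}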

I also played around with start_action and dpd_action, but nothing changed.

The swanctl.conf:
# This file is automatically generated. Do not edit
connections {
    2de0136f-6cbc-421a-80aa-3729176f844e {
        proposals = aes256gcm16-sha256-modp2048,aes256gcm16-sha512-modp2048,aes256gcm16-sha512-x25519,aes256gcm16-sha512-x448,aes256gcm16-sha256-modp4096,aes256gcm16-sha256-modp6144,aes256gcm16-sha256-modp8192,aes256gcm16-sha512-modp4096,aes256gcm16-sha512-modp6144,aes256gcm16-sha512-modp8192
        unique = no
        aggressive = no
        version = 2
        mobike = no
        local_addrs = my.address.com
        encap = yes
        rekey_time = 600
        dpd_delay = 30
        pools = PoolA
        send_certreq = yes
        keyingtries = 0
        local-fc3a7fbe-732d-4ee4-890b-f725d40125e8 {
            round = 0
            auth = pubkey
            id = my.address.com
            certs = 654269e2e801b.crt
        }
        remote-544f43ac-e76a-4d3a-9db6-57ff389b5b0f {
            round = 0
            auth = eap-radius
            id = ConnectionA
            eap_id = %any
            groups = GroupA
        }
        children {
            cab66875-3b0a-456c-ab01-e5af7fd9a621 {
                esp_proposals = aes256-sha256-modp2048,aes256gcm16-modp2048,aes256gcm16-ecp521,aes256gcm16-x25519,aes256gcm16-x448,aes128gcm16-modp2048,aes128gcm16-ecp521,aes128gcm16-x25519,aes128gcm16-x448,aes256gcm16-sha256-x25519,aes256gcm16-sha256-x448
                sha256_96 = no
                start_action = trap|start
                close_action = trap
                dpd_action = clear
                mode = tunnel
                policies = yes
                local_ts = 192.168.10.0/24,192.168.100.0/24,192.168.50.0/24
                remote_ts = 10.30.150.0/24
                rekey_time = 600
                updown = /usr/local/opnsense/scripts/ipsec/updown_event.py --connection_child cab66875-3b0a-456c-ab01-e5af7fd9a621
            }
        }
    }
    2591fb58-5dad-43d9-a103-9d7b7c7da312 {
        proposals = aes256-sha256-modp2048,aes256gcm16-sha256-modp2048,aes256gcm16-sha256-x25519,aes256gcm16-sha512-x25519,aes256gcm16-sha256-x448,aes256gcm16-sha512-x448
        unique = replace
        aggressive = no
        version = 2
        mobike = no
        local_addrs = my.address.com
        encap = yes
        rekey_time = 2400
        dpd_delay = 30
        pools = PoolB
        send_certreq = yes
        local-5aaf9149-b04e-4a70-90cf-de79dec755c6 {
            round = 0
            auth = pubkey
            id = my.address.com
            certs = 654269e2e801b.crt
        }
        remote-c624fb9c-4af8-4942-97ee-2f2bcc66a161 {
            round = 0
            auth = eap-radius
            id = ConnectionB
            eap_id = %any
            groups = GroupB
        }
        children {
            d07262da-5444-4a2d-aaf9-63867367459d {
                esp_proposals = aes256-sha256-modp2048,aes256gcm16-x25519,aes256gcm16-x448,aes128gcm16-x25519,aes128gcm16-x448,aes256gcm16-sha256-modp4096,aes256gcm16-sha256-x25519,aes256gcm16-sha256-x448
                sha256_96 = no
                start_action = start
                close_action = none
                dpd_action = clear
                mode = tunnel
                policies = yes
                local_ts = 192.168.10.0/24,192.168.100.0/24
                remote_ts = 10.30.151.0/24
                rekey_time = 3500
                updown = /usr/local/opnsense/scripts/ipsec/updown_event.py --connection_child d07262da-5444-4a2d-aaf9-63867367459d
            }
        }
    }
}
pools {
    PoolA {
        addrs = 10.30.150.0/24
        dns = 192.168.10.1
    }
    PoolB {
        addrs = 10.30.151.0/24
        dns = 192.168.10.1
    }
}
secrets {
}
# Include config snippets
include conf.d/*.conf


Versions: OPNsense 24.1.4-amd64 and strongswan package 5.9.13_1

It seems I have solved the problem and I would like to share the solution.
The mistake was in the Child and the definition of remote_ts. Because I defined a network there, strongSwan created a policy for it:
192.168.10.0/24 192.168.100.0/24 192.168.50.0/24 === 10.30.150.0/24
When another client connected, the same policy was created again, so two policies with the same IP addresses existed. I think the IPsec server could then no longer distinguish to which client an answer, e.g. to 10.30.150.1, should be sent. I assume the first policy matching 10.30.150.0/24 is used, and that seems to be the last one created.
The solution is simply to clear the "Remote" field in the Child configuration. As a result, the remote_ts entry no longer appears in swanctl.conf:
        children {
            cab66875-3b0a-456c-ab01-e5af7fd9a621 {
                esp_proposals = aes256-sha256-modp2048,aes256gcm16-modp2048,aes256gcm16-ecp521,aes256gcm16-x25519,aes256gcm16-x448,aes128gcm16-modp2048,aes128gcm16-ecp521,aes128gcm16-x25519,aes128gcm16-x448,aes256gcm16-sha256-x25519,aes256gcm16-sha256-x448
                sha256_96 = no
                start_action = trap|start
                close_action = trap
                dpd_action = clear
                mode = tunnel
                policies = yes
                local_ts = 192.168.10.0/24,192.168.100.0/24,192.168.50.0/24

                rekey_time = 600
                updown = /usr/local/opnsense/scripts/ipsec/updown_event.py --connection_child cab66875-3b0a-456c-ab01-e5af7fd9a621
            }
        }

With this config, the policies look like this:
192.168.10.0/24 192.168.100.0/24 192.168.50.0/24 === 10.30.150.1/32
192.168.10.0/24 192.168.100.0/24 192.168.50.0/24 === 10.30.150.2/32
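To double-check, the installed policies and the negotiated traffic selectors can be inspected with the swanctl CLI:

swanctl --list-pols   # currently installed trap/drop/bypass policies
swanctl --list-sas    # active CHILD SAs with their (narrowed) traffic selectors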

And everything works.  ;D