Duplicate DHCP Lease Issues

Started by tarlennz, July 13, 2023, 01:38:01 PM

I've just set up OPNsense 23.1.11 and everything is mostly working, but for some reason it's issuing duplicate IP addresses to a number of my Proxmox hosts.

The trimmed-down logs show this:

2023-07-13T22:58:24   DHCPACK on 192.168.1.195 to ee:0e:e6:ab:41:75 (ark9) via vtnet0   
2023-07-13T22:58:24   DHCPREQUEST for 192.168.1.195 (192.168.1.1) from ee:0e:e6:ab:41:75 (ark9) via vtnet0   
2023-07-13T22:58:24   DHCPOFFER on 192.168.1.195 to ee:0e:e6:ab:41:75 (ark9) via vtnet0   
2023-07-13T22:58:23   DHCPACK on 192.168.1.195 to 3a:de:88:ad:67:dc (ark6) via vtnet0   
2023-07-13T22:58:23   DHCPREQUEST for 192.168.1.195 (192.168.1.1) from 3a:de:88:ad:67:dc (ark6) via vtnet0   
2023-07-13T22:58:23   DHCPOFFER on 192.168.1.195 to 3a:de:88:ad:67:dc (ark6) via vtnet0   
2023-07-13T22:58:21   DHCPACK on 192.168.1.195 to 66:33:1d:6e:14:b5 (ark12) via vtnet0   
2023-07-13T22:58:21   DHCPREQUEST for 192.168.1.195 (192.168.1.1) from 66:33:1d:6e:14:b5 (ark12) via vtnet0   
2023-07-13T22:58:21   DHCPOFFER on 192.168.1.195 to 66:33:1d:6e:14:b5 (ark12) via vtnet0   
2023-07-13T22:58:11   DHCPACK on 192.168.1.195 to 2e:56:8f:8e:5d:9a (ark8) via vtnet0   
2023-07-13T22:58:11   DHCPREQUEST for 192.168.1.195 (192.168.1.1) from 2e:56:8f:8e:5d:9a (ark8) via vtnet0   
2023-07-13T22:58:11   DHCPOFFER on 192.168.1.195 to 2e:56:8f:8e:5d:9a (ark8) via vtnet0   
2023-07-13T22:58:05   DHCPACK on 192.168.1.195 to 5e:ce:39:d2:d7:7c (val2) via vtnet0   
2023-07-13T22:58:05   DHCPREQUEST for 192.168.1.195 (192.168.1.1) from 5e:ce:39:d2:d7:7c (val2) via vtnet0   
2023-07-13T22:58:05   DHCPOFFER on 192.168.1.195 to 5e:ce:39:d2:d7:7c (val2) via vtnet0

You can see within a few seconds the same IP is being offered to 5 different VMs. This is understandably causing issues in the network. :)
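A quick way to quantify this from the logs, as a sketch only (the sample lines are copied from the excerpt above; on a live system you'd pipe in the real dhcpd log instead):

```shell
# Sample DHCPACK lines copied from the log excerpt above.
log='2023-07-13T22:58:24 DHCPACK on 192.168.1.195 to ee:0e:e6:ab:41:75 (ark9) via vtnet0
2023-07-13T22:58:23 DHCPACK on 192.168.1.195 to 3a:de:88:ad:67:dc (ark6) via vtnet0
2023-07-13T22:58:21 DHCPACK on 192.168.1.195 to 66:33:1d:6e:14:b5 (ark12) via vtnet0
2023-07-13T22:58:11 DHCPACK on 192.168.1.195 to 2e:56:8f:8e:5d:9a (ark8) via vtnet0
2023-07-13T22:58:05 DHCPACK on 192.168.1.195 to 5e:ce:39:d2:d7:7c (val2) via vtnet0'

# Field 4 is the IP, field 6 the MAC; count distinct MACs per ACKed IP.
# Anything greater than 1 means the same address went to multiple clients.
printf '%s\n' "$log" \
  | awk '/DHCPACK on/ { print $4, $6 }' \
  | sort -u \
  | awk '{ c[$1]++ } END { for (ip in c) print ip, c[ip] }'
# prints: 192.168.1.195 5
```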

Trying to get ahead of questions:
- There are no static mappings overlapping the allocation range.
- The address above is within the allocation range.
- Setting a static mapping for any of the MAC addresses listed does supply the correct statically assigned address.
- The range for this DHCP allocation is 192.168.1.100-192.168.1.200 (all static allocations are below .60).
- None of the servers in question had that IP before I set up OPNsense.
- These are Proxmox VMs that I believe are cloned from the same initial base (this is the only thing I can find linking them).
- Other Proxmox VMs on the same hosts are getting different addresses.
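Given the clone link above, one suspicion worth checking (an assumption on my part, not something confirmed here): ISC dhcpd keys leases on the DHCP client identifier when the client sends one, and only falls back to the MAC when it doesn't. Clients that derive that identifier from /etc/machine-id (systemd-networkd does this by default) will all present the same identity if the machine-id was cloned along with the disk. A sketch to check for that; the helper and the sample host/machine-id pairs are hypothetical, and in practice you'd collect the real values over ssh:

```shell
# Flag machine-ids shared by more than one host. Input: "host machine-id"
# lines, e.g. collected with:
#   for h in ark6 ark8 ark9 ark12 val2; do echo "$h $(ssh $h cat /etc/machine-id)"; done
report_dupes() {
    awk '{ ids[$2] = ids[$2] ? ids[$2] "," $1 : $1; n[$2]++ }
         END { for (id in ids) if (n[id] > 1) print id, "shared by:", ids[id] }'
}

# Hypothetical sample: ark6 and ark9 kept the machine-id of their clone source.
printf '%s\n' \
  'ark6 3f1c9f0e6a1b4d2c9e7f0a1b2c3d4e5f' \
  'ark9 3f1c9f0e6a1b4d2c9e7f0a1b2c3d4e5f' \
  'val2 77aa00bb11cc22dd33ee44ff55aa66bb' \
  | report_dupes
# prints: 3f1c9f0e6a1b4d2c9e7f0a1b2c3d4e5f shared by: ark6,ark9
```

If duplicates show up, regenerating the machine-id on each clone (or switching the DHCP client to a MAC-based identifier) is the usual fix.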

Anyone have any idea why this would be happening? It seems super weird to me that it's even possible for OPNsense to offer the same allocation to different MAC addresses in such a short time.

Now this is weird. Is there any VLAN setup involved? Is it remotely possible that a DHCP server listens on a physical port and erroneously serves requests from systems in tagged networks?

Not that that would explain this behaviour right away, but I know this is one of the edge cases where ISC dhcpd, at least historically, could behave strangely. That's why mixing tagged and untagged frames on a single interface is generally discouraged.

What is your lease duration set to?

Kind regards,
Patrick
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

Lease is set to 60 mins. There is a mixed tagged/untagged VLAN setup, and this is occurring in the untagged network.

The weird part is that it's happening with "related" VMs. I saw the same with other VMs: a different IP address, but again two related servers were given the same one. In that case it was my DNS servers, which was more of a problem. :D

Do you have enough interfaces to separate the tagged and untagged traffic? Or you could run everything tagged on the port from OPNsense to the switch. I'd give that a try.

Hmm. I'm slowly moving stuff to VLANs, but there is still quite a lot of untagged traffic so that could be tricky. :)

Thanks for the advice though - I'll see if I can speed up the migration. :D