Messages - IsaacFL

#1
Yes, I only have 2 physical interfaces; all VLANs are on one (OPT1), and LAN has no VLANs. Kea is active on all of them, IPv6 and IPv4.
#2
I'm using Kea for both IPv6 and IPv4 and am not seeing this. I am using raw sockets on the IPv4 side; I don't see the setting for that on the IPv6 side.
#3
I am using Kea as my DHCPv6 server now, and was curious whether the Router Advertisement Service is aware of that.

There is an option under "DNS options": "Use the DNS configuration of the DHCPv6 server".

I have unchecked it since I'm using the router's DNS, Unbound, which I believe is picked up automatically. But it could be clearer which DHCP server the setting pulls the DNS configuration from.
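For context, here is a minimal sketch of the RDNSS portion of a generated radvd.conf, assuming the Router Advertisement service is radvd-based; the interface name and address are only illustrative:

interface igc1 {
    AdvSendAdvert on;
    # RDNSS is the DNS server list clients learn from router advertisements.
    # With the "Use the DNS configuration of the DHCPv6 server" box unchecked,
    # this should end up being the router's own address (Unbound).
    RDNSS 2001:db8::1 {
    };
};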

#4
Quote from: cinergi on May 26, 2025, 12:28:30 AM
What if I want only stateful DHCPv6 without SLAAC, which corresponds to the "Managed" mode under Services > Router Advertisements?  None of the DNSmasq RA modes seem to do this.  Possible using DNSmasq?

RA Mode set to "Default" will be the same as "Managed" mode, I believe?
#5
Quote from: franco on May 16, 2025, 09:26:16 AM
Please push this request to GitHub.  Thanks!

Cheers,
Franco

Created issue #8677, "KEA DHCP6 Option to select for Random vs Iterative Allocation of ipv6 Addresses".

I noted that issue #8506 was the same request but for IPv4. The same "allocator": "random" solution also works for IPv4, so the change could possibly be incorporated in both DHCPv4 and DHCPv6.
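As an illustration of that point, a minimal Dhcp4 sketch based on the Kea docs (hand-written here, not an OPNsense-generated config; the subnet is a placeholder):

{
    "Dhcp4": {
        // global default; can also be overridden per subnet
        "allocator": "random",
        "subnet4": [
            {
                "id": 1,
                "subnet": "192.0.2.0/24"
            }
        ]
    }
}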
#6
I couldn't get the Dnsmasq/Unbound combination to work well in my situation, as I have a real domain with Cloudflare as my name servers.

So I have been using the new Kea DHCPv6 server, and it seems to work well. I noticed that by default it assigns IP addresses sequentially.

One option in Kea is to use "random" allocation instead. The "allocator": "random" option might be more performant, especially in an HA setup. It could be a selectable option per subnet or just the default.

From the documentation:

{
    "Dhcp6": {
        "allocator": "iterative",
        "pd-allocator": "random",
        "subnet6": [
            {
                "id": 1,
                "subnet": "2001:db8:1::/64",
                "allocator": "random"
            },
            {
                "id": 2,
                "subnet": "2001:db8:2::/64",
                "pd-allocator": "iterative"
            }
        ]
    }
}



From the Docs:
-----------------------------
9.21.2. Iterative Allocator

This is the default allocator used by the Kea DHCPv6 server. It remembers the last offered lease and offers the following sequential lease to the next client. For example, it may offer addresses in this order: 2001:db8:1::10, 2001:db8:1::11, 2001:db8:1::12, and so on. Similarly, it offers the next sequential delegated prefix after the previous one to the next client. The time to find and offer the next lease or delegated prefix is very short; thus, this is the most performant allocator when pool utilization is low and there is a high probability that the next selected lease is available.

The iterative allocation underperforms when multiple DHCP servers share a lease database or are connected to a cluster. The servers tend to offer and allocate the same blocks of addresses to different clients independently, which causes many allocation conflicts between the servers and retransmissions by clients. A random allocation addresses this issue by dispersing the allocation order.

9.21.3. Random Allocator

The random allocator uses a uniform randomization function to select offered addresses and delegated prefixes from subnet pools. It is suitable in deployments where multiple servers are connected to a shared database or a database cluster. By dispersing the offered leases, the servers minimize the risk of allocating the same lease to two different clients at the same or nearly the same time. In addition, it improves the server's resilience against attacks based on allocation predictability.

The random allocator is, however, slightly slower than the iterative allocator. Moreover, it increases the server's memory consumption because it must remember randomized leases to avoid offering them repeatedly. Memory consumption grows with the number of offered leases; in other words, larger pools and more clients increase memory consumption by random allocation.

-----------------------------


#7
I am trying it and it actually works well. The first error I found is:
firewall alias resolve error HOST_PRINTERS (no nameservers)

Looks like the firewall can't find the name server when Unbound isn't the resolver.
#8
The guide suggests having Unbound on port 53 acting as the DNS server and forwarding local queries to Dnsmasq on port 53053.

Has anybody tried to reverse it, so that Dnsmasq sits on port 53 and uses Unbound on port 5335 as its upstream resolver?

That is basically what Pi-hole does for its DNS/DHCP:
https://docs.pi-hole.net/guides/dns/unbound/
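Conceptually, the reversed layout would look something like this. This is a hand-written sketch following the Pi-hole guide; OPNsense generates these files itself, so it is not drop-in configuration:

# dnsmasq.conf: answer DHCP/local names directly, forward the rest upstream
port=53
no-resolv
server=127.0.0.1#5335

# unbound.conf: full recursive resolver on a non-standard local port
server:
    interface: 127.0.0.1
    port: 5335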

Currently I have some DNS looping issues, as I have a real custom domain name.

#9
Quote from: dMopp on May 09, 2025, 07:25:03 PM
But now a new issue: how do i get reverse lookups working with dnsmasq?

For reverse DNS, I added two additional forwards under Services: Unbound DNS: Query Forwarding.

I use 10.0.0.0/8 for my local addresses, so:
    <enabled>1</enabled>
    <domain>10.in-addr.arpa</domain>
    <server>127.0.0.1</server>
    <port>53053</port>

I did the same for my IPv6 prefix (redacted):
    <domain>x.x.x.x.x.x.x.x.x.x.x.3.0.6.2.ip6.arpa</domain>
    <server>127.0.0.1</server>
    <port>53053</port>
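If I read the generated config correctly, those entries should end up as Unbound forward-zone stanzas roughly like the following; this is my approximation, not copied from the box, and the ip6.arpa name is still redacted:

forward-zone:
    name: "10.in-addr.arpa."
    forward-addr: 127.0.0.1@53053

forward-zone:
    name: "x.x.x.x.x.x.x.x.x.x.x.3.0.6.2.ip6.arpa."
    forward-addr: 127.0.0.1@53053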

#10
I noticed after updating to 25.1.4_1 that this issue is still active.

Did the patch not make it into this update?

# opnsense-update -zkr 25.1.3-fixlog had fixed it previously.
#11
Quote from: IsaacFL on March 21, 2025, 07:33:22 PM
Quote from: franco on March 17, 2025, 05:25:30 PM
The question in FreeBSD came up why these state creations are rejected.  The other half of the story is the log does not actually record this at the moment either way. I wrote a small patch to diagnose: https://github.com/opnsense/src/commit/6f18c3b0164689d5cc83206499ade2f4f4016c6e

Would someone with the issue here try this kernel build and send in the plain filter log for the entries that cause spurious block log messages after boot?

# opnsense-update -zkr 25.1.3-reasonlog
(reboot)


Thanks in advance,
Franco

I'm back in town and will try this patch today.
Update: Installed, and I can see the unexpected log entries again. Let me know when you would like me to send the output of opnsense-log filter.

Franco, I emailed you a copy of the filter logs and my rules.debug

#12
Quote from: franco on March 17, 2025, 05:25:30 PM
The question in FreeBSD came up why these state creations are rejected.  The other half of the story is the log does not actually record this at the moment either way. I wrote a small patch to diagnose: https://github.com/opnsense/src/commit/6f18c3b0164689d5cc83206499ade2f4f4016c6e

Would someone with the issue here try this kernel build and send in the plain filter log for the entries that cause spurious block log messages after boot?

# opnsense-update -zkr 25.1.3-reasonlog
(reboot)


Thanks in advance,
Franco

I'm back in town and will try this patch today.
Update: Installed, and I can see the unexpected log entries again. Let me know when you would like me to send the output of opnsense-log filter.
#13
I'm out of town until Friday, so I can't try the patch yet.

The things that seemed to trigger it, from memory:
I had a NAT forward rule to forward DNS port 53 to my local Pi-holes. I could disable the port forward and the log entries would go away.

Another IPv4 log entry was a ping to 1.1.1.1 from the WAN address. I am mostly an IPv6 network and have the Tayga plugin. Some Apple clients like to ping 1.1.1.1 via CLAT, so those go through Tayga.
#14
I just applied # opnsense-update -zkr 25.1.3-fixlog with a reboot, and so far it seems to have fixed it.
#15
Quote from: gpb on March 11, 2025, 08:00:01 PM
Not sure who you're asking.  Here's my take...it seems like while these thousands of log messages are showing up (I have all logging disabled) I should still have connectivity and trying to hit a web page fails.  My VLANs can't talk to my LAN.  While some could be invalid states, as soon as we hit a certain elapsed time post-boot, suddenly all of them work and the logging stops (for the most part).  It's more like the interfaces in opnsense are failing to initialize or something in the core functionality (log messages aside).  So to be clear, I see two different symptoms.  Much delayed connectivity (2 to 3 minutes) and log messaging that is unwanted/unselected.  I haven't the first clue how to even try to debug this.  This is a home network, so the thousands of messages seem excessive even if they were non-valid states...?

My functionality has not been impacted, just items in the logs that should not be showing.