Recent posts

#1
26.1 Series / Re: OPNsense 26.1.4 VLAN odd b...
Last post by viragomann - Today at 09:15:17 PM
Well, a possible reason for this behavior could be that the network mask is set wrongly for the client VLAN interface on OPNsense.

("Client VLAN" is what you called the network in your initial post.)
#2
26.1 Series / Re: OPNsense 26.1.4 VLAN odd b...
Last post by Shoresy - Today at 09:09:16 PM
The VLANs are intentionally separate routed /24 networks. The problem is not that the client can't identify its own subnet; the problem is that DNS replies are visible arriving on the services VLAN but are not visible leaving back toward the client VLAN.
#3
26.1 Series / Re: KeaDHCP dynamic DHCP quest...
Last post by meyergru - Today at 09:05:37 PM
I may have found the answer - and the observed problem is most probably unrelated to the host discovery service.

As a matter of fact, Kea uses two files to register dynamic leases: kea-dhcp4.leases.csv and kea-dhcp4.leases.csv.2. One of them is a journal that is periodically flushed to the other file: https://kea.readthedocs.io/en/latest/arm/dhcp4-srv.html#why-is-lease-file-cleanup-necessary

Also note that, unlike ISC DHCP, Kea registers static reservations in these files. If you then change something in a reservation (say, you change a MAC because you exchanged one VM for another on your Proxmox host after an upgrade and want to keep its IP, or you simply move a client to another VLAN), the lease file will still hold the old entry. As a matter of fact, that is exactly what I did before the problem I described earlier occurred again.

When the newly created client with the static reservation then requests an IP via DHCP, there is a conflict, producing the message I showed above.

What is more, this seems to trigger a cascading effect in Kea, which looks like a bug and is also discussed here for the "other product":

https://forum.netgate.com/topic/198115/changing-the-mac-address-on-a-kea-static-lease-does-not-work/4

At least one of you who observed the same problem also uses Proxmox, so it is quite likely that something like this occurred.

Thus, for the time being, it seems advisable, whenever you change static reservations, to stop Kea, manually delete the conflicting (or all) leases from both lease files, and start Kea again.
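A minimal sketch of that cleanup, assuming the default lease-file location (/var/db/kea) and a placeholder MAC; the directory, service handling, and MAC are assumptions you should adjust. Stop Kea before running it and start Kea again afterwards:

```shell
# Placeholder values: adjust the directory and MAC for your system
LEASE_DIR="${LEASE_DIR:-/var/db/kea}"
BAD_MAC="00:11:22:33:44:55"   # the MAC of the reservation you changed

# Drop every lease whose hwaddr (2nd CSV column) matches the stale MAC,
# in both the main lease file and the journal copy; keep the header row
for f in "$LEASE_DIR/kea-dhcp4.leases.csv" "$LEASE_DIR/kea-dhcp4.leases.csv.2"; do
    if [ -f "$f" ]; then
        awk -F, -v mac="$BAD_MAC" 'NR == 1 || $2 != mac' "$f" > "$f.tmp"
        mv "$f.tmp" "$f"
    fi
done
```

Deleting both files outright (while Kea is stopped) also works, if you don't mind losing all dynamic leases.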
#4
High availability / Duplicated data flow
Last post by GreenMatter - Today at 08:42:34 PM
With your assistance in previous topics, I got HA in working condition, but...

To describe my setup:
  • 2x OPNsense instances in high-availability mode with CARP VIP interfaces on a single PVE host. I know it's not full HA, but I want software HA and also simply want to test it.
  • The VMs are connected through 3 bridges: one on the WAN side, one on the LAN side (with a further trunked physical link to the switch), and a pfsync bridge.
  • IGMP snooping and storm control are disabled on the (UniFi) switches.

In order to change the above configuration and (try to) test my issue, I created an additional LAN bridge for the backup instance; instead of having the two OPNsense VMs connected over a single Linux bridge within Proxmox, I connected them over a physical switch.
This of course requires a second downlink:
  • the master/regular LAN bridge remains connected as it is now
  • the backup/new LAN bridge is connected to the switch via an additional downlink

But the problem I'm facing is duplicated communication/data flow to and from both VMs; both instances show identical graphs in the Proxmox web GUI (network flow and also CPU). Although they don't change their master/backup status (no flapping in CARP status), I have something similar to a split-brain situation: for example, if I connect to the OPNsense web GUI or SSH on the CARP VIP, the reply comes from either one of the two and toggles every few seconds. If I ping them, the reply is duplicated ("DUP!"). Communication to other hosts and the WAN is OK. I have already set the MAC filter to "no" in the Proxmox VM's firewall options (the PVE firewall is disabled). I tried OVS and Linux bridges with the same results.

To me, it looks like something related to MAC addresses and the network switches; is it possible to set this up correctly?

#5
26.1 Series / Re: apcupsd LCK.. file
Last post by ohioyj - Today at 08:33:31 PM
Quote from: franco on Today at 02:01:05 PM
Can you add a plugin ticket on GitHub?

Will do, thank you for the link. I'll wait for it to happen again and see about getting a log. It was doing it regularly. Perhaps it's fixed (fingers crossed).
#6
Quote from: OPNenthu on February 20, 2026, 07:12:24 PM
Hostwatch can be enabled selectively on internal interfaces if WAN is the only issue.

I have the interfaces filtered to LAN on 26.1.4, but I'm still seeing WAN discovery happening.
#7
26.1 Series / Re: Problem installing OPNsens...
Last post by dirtyfreebooter - Today at 08:21:07 PM
Another option when installing OPNsense with ZFS on top of Proxmox with ZFS is to do these two things:

OPNsense uses a 128k record size, while Proxmox defaults to 8k. Make the OPNsense zvol ahead of time (don't let the wizard do it):
zfs create -V 64G -b 128k rpool/data/vm-100-disk-0
That minimizes the write amplification from the record-size mismatch.

To eliminate the double ARC, set the caching to metadata only:
zfs set primarycache=metadata rpool/data/vm-100-disk-0

For the level of IO OPNsense does, this pretty much eliminates ZFS-on-ZFS issues, imo. Enough so that trying to get UFS working seems like more effort than it's worth...
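If you want to confirm both settings took effect, a quick check (dataset name as in the example above, on a live pool) is:

```shell
# Show the block size and cache policy actually set on the zvol
zfs get -o property,value volblocksize,primarycache rpool/data/vm-100-disk-0
```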
#8
I'm on 26.1.4, and my LAN filter for hostwatch is still not working. I'm seeing 95% of entries from WAN. Is this expected?
#9
26.1 Series / Re: Can the GUI levels stay ex...
Last post by Greelan - Today at 08:14:42 PM
I realised when doing this everyone would have an opinion xD

I thought about the first option. It seemed cumbersome to me. To add a Favorite, you would need to click a button and then navigate a list and click another?

I also thought about the second option. The issue with that is that pages aren't laid out consistently enough to make adding an icon or setting there convenient. Probably the only realistic spot would be at the start or end of the heading on the page.

The nice part though of having the favorite icon in the menu is that multiple options can be clicked without having to open each page.

Maybe Franco or Ad will see this and can weigh in from a preferred UI perspective before I finalise a PR.
#10
26.1 Series / Re: KeaDHCP dynamic DHCP quest...
Last post by stauf - Today at 08:01:01 PM
Thanks, that makes total sense based on what I saw. I disabled Automatic Discovery and cleaned up the CSV files, but that did not completely fix my problem. Some of the erroneous entries were still showing up in the table. To be fair, once the problem was "mostly" solved, that was enough for me at the time and I didn't pay super close attention to every detail. All I can say is that the next day, all of these 86400-second-lifetime, MAC-less and hostname-less entries were totally gone from the KeaDHCP Leases table and, so far, have not come back.

Frank, are you saying you are getting a buildup of what appear to be erroneous entries in your KeaDHCP Leases table, and that you have verified that Automatic Discovery is disabled? I'm a bit confused by your statements. You say you aren't using v6, but have v6 addresses assigned? If you have KeaDHCPv6 disabled, these may just be auto-assigned v6 addresses from Proxmox or your containers running on Proxmox.

I have not observed my KeaDHCP pool getting used up while Automatic Discovery is disabled, nor was I scrutinizing the logs. If you have logs referring to Automatic Discovery, it sounds to me like you might still have it enabled.