Messages - gregg098

#1
Quote from: meyergru on May 20, 2025, 12:30:52 PM
As does using "opnsense-patch e69b02c" and reapplying the Dnsmasq host reservations, at least for that specific issue.


Stupid question time. How do you apply that patch?
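Edit: answering my own question for anyone else landing here. My understanding (from the docs, so double-check me) is that you run it from a shell on the firewall with the commit hash as the argument:

# shell on the firewall (console menu option 8, or SSH)
opnsense-patch e69b02c
# then edit/save the Dnsmasq reservations again so they get rewritten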
#2
Quote from: meyergru on May 19, 2025, 11:37:18 PM
I just updated the ticket title accordingly. We have discovered that in the meantime as well:

No static reservations work unless their DHCP registrations are active. This is because of a bugfix for IPv6 reservations with dynamic prefixes, which obviously cannot work before the actual IPv6 is known. This fix breaks all static IPv6 and IPv4 reservations that expect DNS resolution regardless of how the client obtains its address.

My preferred fix for this would be to write the reservations like before, leaving out only the affected "partial" IPv6s. However, a fix may take until 25.7 as of now.

When you say the fix may take until 25.7, does this mean all local DNS resolution is broken until then for all static leases?

I migrated to Dnsmasq + Unbound over the weekend and everything was running great. I upgraded to 25.1.7 earlier today, and now all local resolution of static reservations is broken. Dynamic reservations work just fine. I've confirmed I'm set up as the docs suggest; the queries just never resolve at the Dnsmasq level for static leases.
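For reference, here's roughly how I've been testing (drill ships with FreeBSD/OPNsense; the hostname is made up, and 53053 is just the Dnsmasq port the docs' split setup uses as an example, so adjust for yours):

# ask Unbound (on port 53 here) for a static reservation's name
drill somehost.internal.lan @192.168.10.1
# ask Dnsmasq directly on its own port
drill -p 53053 somehost.internal.lan @192.168.10.1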
#3
Any chance you could test with the same config, but virtual NICs (VirtIO, default options) in the OPNsense VM vs. passthrough? One run with and one without Zenarmor would be awesome. I'm getting a similar machine soon and have always run OPNsense with virtualized NICs. Thanks.
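For clarity, by virtual NICs I mean the stock VirtIO model, i.e. something like this on the Proxmox host (VM ID and bridge name are just examples):

# give VM 100 a paravirtualized NIC on bridge vmbr0
qm set 100 --net0 virtio,bridge=vmbr0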
#4
I guess I'm saying it would be nice to just allow DHCP to hand out the interface address by default rather than falling back to the System DNS servers just because Unbound is not on port 53. This is the way it always worked in the past. Maybe just a checkbox or something to allow this as an option, with the current (new) behavior as the default? Currently, I have to manually enter interface addresses in DHCPv4 for this to happen, or add additional firewall/NAT rules.

NextDNS CLI is a third-party install and unrelated to OPNsense, but at the same time, it's no different from any other third-party package that listens on port 53.

Thanks.
#5
Thanks. I actually started reading that thread earlier too and incorrectly assumed it wasn't related. It seems we need some kind of easier override here to get back to the old behavior.
#6
I just upgraded to 23.1.7_3 from an earlier 23.1.x release. Now, my clients get the DNS servers listed under System -> Settings -> General and not the interface IP (e.g., 192.168.10.1).

Under System -> Settings -> General, I've always had Cloudflare IPv4 and IPv6 servers listed for system use. For my main DNS, I have NextDNS CLI installed on port 53, with Unbound on port 5555. NextDNS CLI forwards all local domain lookups to Unbound. This works great and I've been doing it forever.

Under Services -> DHCPv4 -> VLAN ID, I always left the DNS fields blank. That has always worked well, handing out the interface IP to each client, and from the help text I understand this is the expected behavior. For example, VLAN 10 is 192.168.10.0/24, and it always handed out 192.168.10.1 as the DNS server.

Since the upgrade, all clients get the Cloudflare DNS servers from the System settings instead (with no ad blocking) unless I manually enter the interface IPs in each DHCPv4 server. This isn't a big deal, but I can't figure out why the behavior changed. Is it because I run Unbound on a port other than 53? Or is it something I missed in the changelogs?

I've experimented with various things (removing the System DNS servers, toggling various checkboxes, etc.), but nothing changes the behavior.
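One thing that did help me sanity-check the setup (though not the behavior change itself) was confirming from a shell which daemon actually owns each port, using FreeBSD's sockstat:

# list listening sockets and filter for my two DNS ports
sockstat -46 -l | grep -E ':(53|5555) '
# NextDNS CLI should show up on :53, Unbound on :5555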

Any thoughts on why this changed all of a sudden?

Thanks.
#7
I have an EAP670 that wouldn't pick up IPv6 addresses for a few minutes. I also had some issues SSHing to various devices on that Wi-Fi. It seems to have been identified as a hardware offloading issue or something along those lines.

There is a beta firmware that fixes the issues I'm aware of. Here's the link if anyone is interested:
https://static.tp-link.com/upload/beta/2022/202212/20221220/EAP670_v1_1.0.0_20221219(beta).zip

Note: the controller will tell you an update is available after you install it. Just ignore it. And always back up first.

Here's the relevant TP-Link forum thread too. It calls out the EAP650, but it applies to a number of newer models, including the EAP670.

https://community.tp-link.com/en/business/forum/topic/583348?page=1
#8
I had the same issue. There are a bunch of threads on other forums about instability with Proxmox and pfSense/OPNsense on these N6005 and N5105 units from Topton/Changwang/King Novy/etc. Proxmox ran just fine, but my OPNsense VM would just stop and report "internal-error." I gave up after trying every tweak in the book and moved on to something else. These machines are not reliable.

Some additional information: I originally migrated the VM from another machine that had run perfectly for a long time, and got tons of crashes. I finally did a complete reinstall with a few config changes, imported my config, and things were OK for a few days. Then the crashes returned.
#9
Running 22.1.10 as a VM in Proxmox on a new 4-port i225 mini PC, with a Linux bridge for WAN and a Linux bridge for LAN. No passthrough. I have Xfinity internet, a handful of VLANs, and I do not run IPS/IDS.

Over the last two weeks, every few mornings I wake up to no internet. When I connect to Proxmox, I see the OPNsense VM sitting with a yellow pause symbol saying "Internal Error." If I try to go to a console view, all I see is something similar to the screenshot below. The last few log entries before I restart it are all netflow maintenance items. Then nothing.
Quote
2022-07-26T00:00:30-07:00   Notice   flowd_aggregate.py   vacuum done
2022-07-26T00:00:30-07:00   Notice   flowd_aggregate.py   vacuum interface_086400.sqlite
2022-07-26T00:00:30-07:00   Notice   flowd_aggregate.py   vacuum interface_003600.sqlite

The weird part is that the error messages in the screenshot always show vlan50. This VLAN is identical in every way to the other VLANs I have set up. I've verified firewall rules, RAs, interface settings, etc.

If I restart OPNsense, everything comes back. I'm struggling to get more information on the crash.
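For reference, here's the kind of thing I've been poking at on the Proxmox host to get more detail (VM ID 100 is just an example):

# dump the paused VM's state
qm status 100 --verbose
# search the host journal for QEMU/KVM errors around the crash
journalctl -b | grep -iE 'qemu|kvm|internal-error'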

I *think* this started with the most recent release, but I'm not entirely sure. This setup had been running great for the last few months.

There are some other forum posts about flapping WAN connections in recent builds, but I'm not sure if they're related or not.

Anyone have any ideas?
#10
I got a Topton N6005 unit and set up OPNsense inside of Proxmox, using all Linux bridges, no passthrough. One NIC is connected to the cable modem; two others are connected to a 1-gigabit switch as a LAGG, which is assigned to another bridge, so OPNsense still sees just two interfaces. I get ~1325 Mbps down (on the host at least; most of the network is gigabit) and ~40 Mbps up. It runs like a champ this way.
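In case anyone wants to replicate it, the Proxmox side is roughly this in /etc/network/interfaces (NIC names and the LACP mode are from my box; your switch has to support 802.3ad for this bond mode):

# bond the two LAN NICs into a LAGG
auto bond0
iface bond0 inet manual
    bond-slaves enp2s0 enp3s0
    bond-miimon 100
    bond-mode 802.3ad

# LAN bridge on top of the bond; the OPNsense VM gets a vNIC on vmbr1
auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0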
#11
Assuming the CPU can handle both, I doubt you'd ever see any real-world difference in either scenario. You might see some responses like "doing it this way gives you 1 ms better resolution" or something, but 99% of the feedback here is anecdotal. Nothing wrong with it, just saying: don't treat everything here as fact.

I currently have a cheap little 2-NIC machine that I run Proxmox on, using Proxmox bridges to pass traffic to OPNsense. I'm waiting on a new unit with four 2.5G NICs. My plan is similar to what you're describing: one Proxmox bridge for WAN, then a LAGG (again, on Proxmox) for LAN, passing the resulting bridge to OPNsense. OPNsense will only ever see those two interfaces. To me, even if there is a very slight performance hit, the benefit is that I can basically just load a backup onto my other 2-NIC machine and it will just work.