Messages - zandrr

#1
I have actually come across the same issue.

It looks like you can specify the port in the config file on the host:
/usr/local/etc/namedb/named.conf
Example below, where manually adding the port on the primaries line did the trick for me.

zone "lan1" {
        type secondary;
        primaries { 10.1.1.1 port 53530; };
        file "/usr/local/etc/namedb/secondary/lan1.db";
        allow-transfer {
                dns_lan;
        };
        allow-query {
                dns_lan;
        };
};
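
For what it's worth, you can sanity-check that the primary actually answers on the non-standard port with dig (IP/port/zone taken from the example above, and assuming the primary permits transfers from wherever you run it):

dig @10.1.1.1 -p 53530 lan1 AXFR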

The issue with this approach is that any GUI edit regenerates the config and wipes out the adjustment.
Granted, I haven't yet explored a better/more permanent approach.

Hopefully we hear some developments from others who come across this thread. Otherwise it might be one for a GitHub feature request.
#2
Quote from: Moonshine on April 20, 2024, 12:27:39 AM
I was looking for this also.  Personally I'm waiting (hoping?) for it to be added, as the Kea integration seems pretty raw currently.  (Also doesn't seem to integrate reservation hostnames with DNS forward/reverse lookup?)

Anyway, if you're more ambitious there was info here for options in Kea:

https://kea.readthedocs.io/en/kea-2.2.0/arm/dhcp4-srv.html#custom-dhcpv4-options

And the config files in Opnsense seem to be in /usr/local/etc/kea , although I'm not sure if edits would persist through changes via the UI.

I'm holding out for DNS etc. options for static leases as well. I may migrate some subnets that don't use those options for now, though.
I've already replicated what I had in ISC via config export/import.
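
For anyone experimenting in the meantime: Kea's own config format does allow option-data inside a reservation, so in principle per-lease DNS could be hand-edited under /usr/local/etc/kea. A rough sketch (MAC/addresses are placeholders, and I haven't verified whether UI changes preserve hand edits):

"reservations": [
    {
        "hw-address": "aa:bb:cc:dd:ee:ff",
        "ip-address": "10.1.1.50",
        "option-data": [
            { "name": "domain-name-servers", "data": "10.1.1.53" }
        ]
    }
]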
#3
I for one had the same experience post-upgrade and also rolled back without too much investigative analysis. VM as well.
Don't really have anything to add, sorry; just mirroring your experience. This was back in 23.7.0 though, so the first release.

Will keep an eye on this thread for insights. I'm in no rush to upgrade home again, but would like to. Just waiting patiently.
#4
Thank you for the quick reply. However, that doesn't appear to be the case; the nullroute(s) are still shown as invalid and thus aren't candidates for advertisement. I attempted that just now with no change in outcome, with a reboot for good measure.

I'm theorising, but could it be due to whatever the difference is with the flags in the system table, i.e. the missing gateway flag?

I am also aware of redistributing kernel routes and disabling the network import check, but would like to refrain for now; though the former may not even matter if the route is seen as invalid.
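
For reference, the two knobs I mean, in FRR syntax (AS number is a placeholder):

router bgp 65001
 no bgp network import-check
 address-family ipv4 unicast
  redistribute kernel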
#5
Hi Team,

Just wondering if anyone else on 23.7 has static nullroutes and whether they're able to propagate those in BGP?

For reference, have tried static redistribution as well as no redistribution at all.
BGP network statements are there of course; this also works in 23.1 with no redistribution defined.

Under System > Routes > Configuration, 10.x.0.0/16 set with next-hop Null4 - 127.0.0.1
Under System > Routes > Status, the static null is present with next-hop 127.0.0.1 and flags USB (UGSB in 23.1)

Notably, per the flags table on the page below, the flag G for Gateway is missing.
https://docs.opnsense.org/manual/routes.html#flags

Under Routing > Diagnostics > BGP > IPv4 Routing Table, the above static null is present but shown as invalid.
Under Routing > Diagnostics > General, no route is present for the static null.
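
For completeness, the same views can be checked from a shell via vtysh:

vtysh -c "show ip bgp"
vtysh -c "show ip route 10.x.0.0/16"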

The subnet is to act as an aggregate for more specific subnets within the same domain, such as interface networks.
Those propagate properly as /24s.

Edit: Sorry, I read the table wrong; originally I said the 23.7 flags were UGS, but it's actually USB, missing G for Gateway. Modified above. 23.1 was correct as UGSB.
Also added IPv6 nullroutes to my 23.7 instance and the behaviour/flags are identical to IPv4. 23.1 has correct behaviour with both families.
#6
23.1 Legacy Series / Re: ACME LetsEncrypt + Cloudflare
August 19, 2023, 11:13:32 PM
Mine is set up similarly to the above; however, 'DNS Sleep Time' under Challenge Types I leave at 0 seconds, which should be the default.

In your settings (picture):

  • Revert DNS Sleep Time to 0
  • Under Global API Key: remove E-Mail and Key
  • Under Restricted API Token: remove CF Zone ID

I remember it also took a bit of fiddling to get it just right. Some fields were very particular, especially the Alt Names under Certificates.
For additional domains, I just added certificates.

So from the top, only the fields mentioned have inputs; the rest are left at defaults:

ACME Client > Accounts
Name: 'le-prod' (arbitrary) - I also have 'le-test' from testing against "Let's Encrypt Test CA"
E-Mail Address: Obvious
ACME CA: "Let's Encrypt (default)"


ACME Client > Challenge Types
Name: 'dns-challenge' (arbitrary)
Challenge Type: DNS-01
DNS Service: CloudFlare.com
CF Account ID: From CF portal in URL string
CF API Token: Generated from CF portal, needs DNS:Edit capability.


(optional) ACME Client > Automations
Name: 'restart-webui' (arbitrary)
Run command: Restart OPNsense Web UI


ACME Client > Certificates
Common Name: '*.example.com' (I use a wildcard)
ACME Account: Above
Challenge Type: Above
(optional) Automations: Above


To get more verbose logs:
ACME Client > Settings > Settings tab > Log Level: change to 'debug'
View under ACME Client > Log Files > ACME Log tab
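
For what it's worth, the plugin drives acme.sh underneath, so a rough shell equivalent of the above for testing (token/account values are placeholders) looks something like:

CF_Token="..." CF_Account_ID="..." acme.sh --issue --dns dns_cf -d '*.example.com' --debug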
#7
I had an unfortunate issue with DNS not working properly locally after 23.1.11_1 > 23.7.
Notably it's virtualised on Proxmox (kernel 6.2.16-6). OpenvSwitch bridging with VLAN interfaces on *sense.
ISP is static with public DNS upstreams (eg 1.0.0.1, 1.1.1.1, 8.8.8.8 and ip6 equivalents).

Had to roll back, so didn't take the time to troubleshoot, sorry. Just wondering if anyone else experienced something similar?

Cannot rule out config deviation of course, or some introduced quirk like MTU, but I played through Unbound a bit and the behaviour persisted (not that it appeared very rational in the first place). Also disabled ip6 entirely to rule out stack behaviour.
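
For anyone comparing notes, my checks were roughly along the lines of querying Unbound locally versus an upstream directly from the firewall shell:

drill opnsense.org @127.0.0.1
drill opnsense.org @1.1.1.1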

Have it running separately on a dedicated device with no issues to speak of... however that was fresh stock and not an upgrade install, so it's yet another variable. Might test again this weekend if no leads here.
#8
Hm, I think I've found my issue. Looks like something with my Unbound overrides, which impacts my alias parsing, which in turn impacts some firewall rules.

- If I use the upgrade feature (eg 21.7.3 -> 21.7.4) my aliases that are dependent on Unbound overrides don't appear to resolve.
- If I import my config on a later release, my Unbound overrides aren't imported, which makes my host aliases useless and breaks firewall rules.

A lot of my firewall rules use aliases that resolve DNS for hosts. I had a justification when I set them up that way eons ago (a lot of VM lab work). Can definitely change my approach now.
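
If anyone hits the same thing, a quick way to confirm whether a host alias actually resolved is to dump the pf table behind it (alias name is a placeholder):

pfctl -t MyHostAlias -T show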

Makes sense now that I step back, and it's somewhat embarrassing. No resolution means my FW rules aren't matched, which explains the lack of connections and torrent throughput...
I blame not tackling the issue 6+ months ago when I first encountered it, which clouded a methodical approach to troubleshooting.

Will re-evaluate my config since a lot of stuff is stale. Then will move on to a fresh 22.1 install.
Sorry to drag you through my journey. I am somewhat happy the issue doesn't appear to be something worse.
#9
Will be finishing the attempt on E1000 adapters tonight. I started setting it up late last night.
Edit: Sorry, I actually did test it. Went to continue and realised I had already drawn a conclusion. Long couple of days at work...

Tried a physical machine before that as well, with the latest 22... and same issue.
It could be the config restoration, so will attempt baremetal again with a known good version (eg 21.7.3) and then, depending on that outcome, focus on reconfiguring from scratch. I want to avoid that unless I'm more certain it would make a difference.
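
For reference, switching the NIC model on the PVE side is a one-liner (VMID and bridge are placeholders; note it will generate a new MAC unless one is specified):

qm set 101 --net0 e1000,bridge=vmbr0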

Can you elaborate on the virtio struggles you mention; do you have a source I can look into?
Been running this VM setup for a few years now. Granted, I've possibly been lucky enough to avoid obvious problems until now.
Had 1Gb fibre for longer than I have been running OPNsense.
#10
Thanks for the response. I will be progressively (but slowly) working on isolating the cause, if some obvious reason isn't provided by someone with insight.
Trying the path of least resistance while working through possibilities.

ISP is DHCP (port auth) and no protocol/port filtering. I work in my ISP's core network, so can say that with confidence.

The behaviour appears to be correlated with the aforementioned software, so related to some change present in 21.7.4 onward.
Could be policy, protocol timers or even driver interactions... a few variables to rule out.

A temporary physical router would be feasible, but I already have a baseline working setup in 21.7.1/21.7.3; do you have another reason to suggest this?
Personally I would probably first lean toward a baremetal system running OPN, to attempt to rule out the hypervisor or drivers.

Appreciate the feedback
#11
I have an issue with bad torrent performance (potentially related to concurrent connections) that I have today narrowed as being introduced in 21.7.4 ... apologies in advance as a wall of text ensues.

Been sitting on this issue of mine since I first observed it when 21.7.5 was released, many months ago.
Since then, I have periodically failed at attempts to upgrade from 21.7.1 to a later release, eventually rolling back and ignoring it for a long period.

Specs:
Proxmox 6.5 to 7.2. 1000/500M fibre internet.
OPNsense VM: virtio nics, openvswitch bridges, host cpu, 4G RAM, zfs/ufs storage as scsi.

Events:
- 21.7.5 was attempted initially at time of release, but rolled back due to torrent performance issue.
- 22.1 attempted at release also experienced the same problem, resulting in rollback.
- Tried 22.1.6 this weekend, and same issue apparent.
- Tried to settle on 21.7.8, but same issue apparent.
- Determined issue was at or before 21.7.5, when the issue first presented...

Went through versions incrementally, from 21.7.1 until performance obviously deteriorated, which was in 21.7.4.

Between 21.7.3 and 21.7.4 something broke that drastically affects torrent performance.
Any other issues that might be present aren't as obvious as the performance degradation on torrents.
Speedtests appear fine between known good and bad versions, so I theorise it's likely related to the volume of connections.

Torrent seeding:
- Reference constant in 21.7.3: ~230 peers, ~10000-20000 states
- Reference constant in 22.1.6: ~10 peers, ~4800-6000 states
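
(State counts above were eyeballed from the firewall; something like the below shows the current state entries if anyone wants to compare like-for-like.)

pfctl -si | grep -i entries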

The presentation of issue is immediately apparent if I boot a 'good' or 'bad' release.
- If I boot a good version, the load ramps quickly. Most obvious is HDD noise lol.
- If I boot a bad version, there is no significant load and connections remain low.

Reading the 21.7.4 notes, nothing seems particularly obvious to me. RSS might be the most relevant, though the notes suggest the capability is introduced but defaults to disabled. I would expect something in driver, kernel or system to be relevant.

Various configurations have been attempted across 3 PVE hosts with different PHY adapters (mostly Intel): I219-LM, X520-DA2, I225-LM. A Chelsio T520-CR has also been used.

I have not tested on a baremetal system, nor extensively with non-virtio adapters (though I have tried e1000 on an earlier occasion without a positive outcome).
Have not tried native Linux bridging on the testing systems, but have run it historically on non-afflicted versions with no distinct difference in performance.
Tried the tunable for RSS enablement in 21.7.4 (per the release notes) with no apparent success.
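
If memory serves, the tunables in question (set under System > Settings > Tunables) were along these lines, with the bits value depending on core count:

net.inet.rss.enabled = 1
net.inet.rss.bits = 2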

I have two VMs (21.7.3_3 and 22.1.6) on my PVE host that I have been booting between to compare outcomes.
Happy to perform any useful tests where time permits.


The reason I open with an apology is the length of time I have sat on the problem, and any potential shortfalls in providing enough useful information.
Hoping there is an issue that may either be obvious, or just overlooked while I was digging for information.

Thank you for your patience.

Also, much gratitude to OPNsense contributors. Have been using OPNsense since 19.1 and definitely want to continue doing so.