Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Apex

#1
Rebooted the firewall and was able to delete the client.

The GUI went into a non-responsive state when I attempted to delete it, so I forced a reboot via SSH and it came back up. The client was still there, but when I hit delete again it was removed.
#2
I'm attempting to delete a legacy client as I no longer need the VPN tunnel. I select it and click "Delete selected rules", which produces a pop-up: "Do you really want to delete the selected clients? No / Yes".

I click Yes, and it just sits there doing nothing. I've deleted the interface associated with it, the gateway group, and the outbound firewall rule. There is nothing associated with this client now, and for some reason it will not delete through the GUI.

Is there a way to do it via the shell?
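If the GUI keeps hanging, a last-resort path from the shell is to find the leftover entry in the config file and remove it by hand. A minimal sketch, assuming the legacy client lives in an `<openvpn-client>` node of `/conf/config.xml` (the element names here are assumptions based on how legacy OpenVPN entries are commonly stored; back the file up before touching it):

```python
# Locate legacy OpenVPN client entries in an OPNsense-style config.xml
# so the right node can be removed by hand. Element names are assumed.
import xml.etree.ElementTree as ET

def find_openvpn_clients(xml_text):
    """Return (vpnid, description) pairs for <openvpn-client> entries."""
    root = ET.fromstring(xml_text)
    return [(node.findtext("vpnid", ""), node.findtext("description", ""))
            for node in root.iter("openvpn-client")]

# Stand-in snippet shaped like a config file, for demonstration only
sample = """<opnsense><openvpn>
  <openvpn-client><vpnid>1</vpnid><description>legacy-tunnel</description></openvpn-client>
</openvpn></opnsense>"""
print(find_openvpn_clients(sample))  # [('1', 'legacy-tunnel')]
```

On the real box you would read `/conf/config.xml`, delete the matching node, and reboot; treat this purely as a way to locate the stale entry, not as a supported removal procedure.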
#3
I'm running Unbound on 24.1.6, and I can always tell something is wrong with the firewall because it starts dropping connections.

Unbound is consuming 4 cores and 4 threads at 100% CPU, choking the box until the service crashes.

I haven't increased the log level yet. I've read about other people experiencing issues with Unbound who migrated over to Dnsmasq and haven't had an issue since.

I prefer Unbound doing recursion; it's faster and much more versatile than Dnsmasq. But I can't have my firewall sporadically dropping traffic because of Unbound either.

During the issue, at the standard log level, this is what I see before the service just stops, and it's pages and pages of this:

2024-05-03T09:55:07-04:00   Informational   unbound   [46703:0] info: start of service (unbound 1.19.3).   
2024-05-03T09:55:07-04:00   Notice   unbound   [46703:0] notice: init module 2: iterator   
2024-05-03T09:55:07-04:00   Notice   unbound   [46703:0] notice: init module 1: validator   
2024-05-03T09:55:07-04:00   Notice   unbound   daemonize unbound dhcpd watcher.   
2024-05-03T09:55:07-04:00   Notice   unbound   [46703:0] notice: init module 0: python   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 3: requestlist max 0 avg 0 exceeded 0 jostled 0   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 3: 0 queries, 0 answers from cache, 0 recursions, 0 prefetch, 0 rejected by ip ratelimiting   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 2: requestlist max 0 avg 0 exceeded 0 jostled 0   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 2: 0 queries, 0 answers from cache, 0 recursions, 0 prefetch, 0 rejected by ip ratelimiting   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 1: requestlist max 0 avg 0 exceeded 0 jostled 0   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 1: 0 queries, 0 answers from cache, 0 recursions, 0 prefetch, 0 rejected by ip ratelimiting   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 0: requestlist max 0 avg 0 exceeded 0 jostled 0   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: server stats for thread 0: 0 queries, 0 answers from cache, 0 recursions, 0 prefetch, 0 rejected by ip ratelimiting   
2024-05-03T09:55:06-04:00   Informational   unbound   [87043:0] info: service stopped (unbound 1.19.3).

Under System > Logs > General, I see this for Unbound:

2024-05-03T17:44:37-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:44:37-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:44:34-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:44:34-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:44:30-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:44:30-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:44:24-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:44:24-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:44:20-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:44:20-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:44:17-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:44:17-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:43:58-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:43:58-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:43:55-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:43:55-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:43:52-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:43:52-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())   
2024-05-03T17:43:45-04:00   Error   opnsense   /usr/local/etc/rc.linkup: The command '/bin/kill -'TERM' '49860''(pid:/var/run/unbound.pid) returned exit code '1', the output was 'kill: 49860: No such process'   
2024-05-03T17:43:45-04:00   Notice   opnsense   /usr/local/etc/rc.linkup: plugins_configure dns (execute task : unbound_configure_do())
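The repeated `kill: 49860: No such process` errors suggest rc.linkup keeps signalling a PID that is already gone, i.e. a stale `/var/run/unbound.pid` left behind after the crash. A quick sketch of that check (signal 0 probes for existence without actually sending anything):

```python
# Check whether a pidfile still points at a live process.
import os

def pid_is_alive(pid):
    """True if a process with this PID exists; signal 0 only probes."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True   # exists, but owned by another user
    return True

def pidfile_is_stale(path="/var/run/unbound.pid"):
    """True when the pidfile is missing, unreadable, or names a dead PID."""
    try:
        with open(path) as fh:
            pid = int(fh.read().strip())
    except (OSError, ValueError):
        return True
    return not pid_is_alive(pid)
```

If the pidfile is stale, removing it and restarting Unbound from Services should stop the kill-loop; the path and behavior here are a sketch based on the log lines above, not a verified OPNsense procedure.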
#4
A relative I set up OPNsense for had similar issues. I had him try UPnP for his Xbox BUT also add the static port mapping rule.

He hasn't had any issues using his gaming console since that configuration change.

It might be worthwhile to try that and see if you still experience the behavior.
#5
That fixed it, I just needed to do a reboot.

Thank you!
#6
Thank you, I wasn't sure which logs I should be looking at.

I was able to log into the CLI and check the UniFi server log.

This is what I see:

[2024-04-19 14:30:27,467] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:31,572] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:31,573] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:35,680] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:35,680] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:39,789] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:39,790] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:43,899] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:43,899] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:48,006] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:48,006] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:52,120] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:52,120] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:30:56,227] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:30:56,227] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:31:00,382] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:31:00,382] <mongo-db> INFO  mongo  - Database process stopped, code=134
[2024-04-19 14:31:04,490] <mongo-db> WARN  mongo  - Stop listening to Mongo logs after process has exited
[2024-04-19 14:31:04,490] <mongo-db> INFO  mongo  - Database process stopped, code=134

Checking MongoDB's error code reference, it doesn't make a lot of sense to me:

https://www.mongodb.com/docs/manual/reference/error-codes/

134: ReadConcernMajorityNotAvailableYet
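One thing worth noting: the `code=134` in the UniFi log is the mongod *process exit status*, not an entry in MongoDB's server error-code table, so the ReadConcernMajorityNotAvailableYet row is a red herring. By shell convention, exit statuses above 128 mean the process was killed by signal (status − 128):

```python
# Decode a shell-style exit status: values above 128 encode a signal.
import signal

def decode_exit_status(status):
    """Map an exit status to a signal name when it is > 128."""
    if status > 128:
        return f"killed by {signal.Signals(status - 128).name}"
    return f"exited normally with code {status}"

print(decode_exit_status(134))  # killed by SIGABRT
```

SIGABRT from mongod usually means the database is aborting at startup, so the next place to look is mongod's own log rather than the controller's.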

#7
I used the plugin repository provided by mimugmail (repo URL: https://www.routerperformance.net/opnsense-repo/, reference: https://forum.opnsense.org/index.php?topic=20827.0).

I installed the latest UniFi package, which installed UniFi 8. It worked quite well for a little while, but after the last reboot I get the error message in the attachment.

I've attempted to uninstall and reinstall and get the same results.

Not sure how to troubleshoot or which logs I should look at, so I'm looking for a little direction to get this working again.
#8
I wanted to provide an update to this thread.

I set up another VPN with a different provider (trying the free option with ProtonVPN), just to see if there was an issue with my current VPN provider.

As of now all of my DNS traffic is routing over the ProtonVPN tunnel without an issue. So either NordVPN is blocking DNS traffic, or possibly the root DNS servers are blocking traffic from NordVPN? I wouldn't think so, but I don't have much to go on: everything BUT DNS worked, and when I point DNS traffic through a different provider it works flawlessly.
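One way to narrow down "provider blocks DNS" vs. "roots block provider" is to hand-roll the same 17-byte `. NS` probe Unbound sends and fire it at a root server from a host on each tunnel. A sketch (the default server IP is a.root-servers.net; `probe()` needs network access, while `build_query()` can be checked offline):

```python
# Build and send a minimal DNS query for ". NS IN" to test port-53 paths.
import socket
import struct

def build_query(qid=0x1234):
    """Minimal DNS query for '. NS IN': 12-byte header + 5-byte question."""
    header = struct.pack(">6H", qid, 0x0100, 1, 0, 0, 0)  # RD set, 1 question
    question = b"\x00" + struct.pack(">2H", 2, 1)         # root name, NS, IN
    return header + question

def probe(server="198.41.0.4", timeout=3.0):
    """Send the query and return the reply rcode (0 = NOERROR, 5 = REFUSED)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(), (server, 53))
        reply, _ = s.recvfrom(512)
    return struct.unpack(">H", reply[2:4])[0] & 0xF
```

Run `probe()` from a client routed over each provider's tunnel: a consistent rcode 5 (REFUSED) on one path and 0 on the other would put the blame squarely on that path rather than on the roots.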
#9
I'm a recent convert from pfSense to OPNsense. I ran this configuration for years in pfSense, and for about a month the same configuration was working in OPNsense.

I am using 2 Unbound servers external to OPNsense, located on my LAN. I have a firewall rule so that any-protocol traffic from the DNS servers (in a firewall alias) is directed to the VPN gateway group. The gateway group has 2 Tier 1 connections for load balancing the requests, and for everything else, including policy-based routing, it works just fine.

Unbound was working the same way up until 2 days ago. Then out of nowhere I started seeing DNS lookup failures with a status of SERVFAIL, and I'm not sure why.

Even for the DNS servers themselves, a traceroute shows the VPN gateway passing traffic, but according to the Unbound logs Unbound appears to be failing to retrieve a proper response.

Here are the logs. I performed a lookup for the domain "pickles.com" to make it easier to find in the Unbound logs.

[1709696403] unbound[1816:2] info: 1RDdc mod2 rep pickles.com. A IN
[1709696403] unbound[1816:2] debug: cache memory msg=132120 rrset=132120 infra=14229 val=132400 subnet=140552
[1709696403] unbound[1816:2] debug: svcd callbacks end
[1709696403] unbound[1816:2] debug: close of port 45447
[1709696403] unbound[1816:2] debug: close fd 33
[1709696403] unbound[1816:2] debug: answer cb
[1709696403] unbound[1816:2] debug: Incoming reply id = 9021
[1709696403] unbound[1816:2] debug: Incoming reply addr = ip4 192.5.5.241 port 53 (len 16)
[1709696403] unbound[1816:2] debug: lookup size is 1 entries
[1709696403] unbound[1816:2] debug: received udp reply.
[1709696403] unbound[1816:2] debug: udp message[28:0] 902180950001000000000001000002000100002904D0000080000000
[1709696403] unbound[1816:2] debug: outnet handle udp reply
[1709696403] unbound[1816:2] debug: measured roundtrip at 18 msec
[1709696403] unbound[1816:2] debug: svcd callbacks start
[1709696403] unbound[1816:2] debug: worker svcd callback for qstate 0x7f8bbc3311b0
[1709696403] unbound[1816:2] debug: mesh_run: start
[1709696403] unbound[1816:2] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
[1709696403] unbound[1816:2] info: iterator operate: query . NS IN
[1709696403] unbound[1816:2] debug: process_response: new external response event
[1709696403] unbound[1816:2] info: scrub for . NS IN
[1709696403] unbound[1816:2] info: response for . NS IN
[1709696403] unbound[1816:2] info: reply from <.> 192.5.5.241#53
[1709696403] unbound[1816:2] info: incoming scrubbed packet: ;; ->>HEADER<<- opcode: QUERY, rcode: REFUSED, id: 0
;; flags: qr ra ; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
.   IN   NS

;; ANSWER SECTION:

;; AUTHORITY SECTION:

;; ADDITIONAL SECTION:
;; MSG SIZE  rcvd: 17

[1709696403] unbound[1816:2] debug: iter_handle processing q with state QUERY RESPONSE STATE
[1709696403] unbound[1816:2] info: query response was THROWAWAY
[1709696403] unbound[1816:2] debug: iter_handle processing q with state QUERY TARGETS STATE
[1709696403] unbound[1816:2] info: processQueryTargets: . NS IN
[1709696403] unbound[1816:2] debug: processQueryTargets: targetqueries 0, currentqueries 0 sentcount 32
[1709696403] unbound[1816:2] info: DelegationPoint<.>: 13 names (0 missing), 26 addrs (25 result, 0 avail) parentNS
[1709696403] unbound[1816:2] info:   a.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   b.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   c.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   d.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   e.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   f.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   g.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   h.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   i.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   j.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   k.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   l.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] info:   m.root-servers.net. * A AAAA
[1709696403] unbound[1816:2] debug:    ip4 198.41.0.4 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:503:ba3e::2:30 port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 199.9.14.201 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:200::b port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 192.33.4.12 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:2::c port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 199.7.91.13 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:2d::d port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 192.203.230.10 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:a8::e port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 192.5.5.241 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:2f::f port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 192.112.36.4 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:12::d0d port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 198.97.190.53 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:1::53 port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 192.36.148.17 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:7fe::53 port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 192.58.128.30 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:503:c27::2:30 port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 193.0.14.129 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:7fd::1 port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 199.7.83.42 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:500:9f::42 port 53 (len 28)
[1709696403] unbound[1816:2] debug:    ip4 202.12.27.33 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    ip6 2001:dc3::35 port 53 (len 28)
[1709696403] unbound[1816:2] debug: servselect ip4 202.12.27.33 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=302
[1709696403] unbound[1816:2] debug: servselect ip4 199.7.83.42 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=170
[1709696403] unbound[1816:2] debug: servselect ip4 193.0.14.129 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=302
[1709696403] unbound[1816:2] debug: servselect ip4 192.58.128.30 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=248
[1709696403] unbound[1816:2] debug: servselect ip4 192.36.148.17 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=248
[1709696403] unbound[1816:2] debug: servselect ip4 192.112.36.4 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=248
[1709696403] unbound[1816:2] debug: servselect ip4 192.5.5.241 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=248
[1709696403] unbound[1816:2] debug: servselect ip4 192.203.230.10 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=205
[1709696403] unbound[1816:2] debug: servselect ip4 199.7.91.13 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=248
[1709696403] unbound[1816:2] debug: servselect ip4 192.33.4.12 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=205
[1709696403] unbound[1816:2] debug: servselect ip4 199.9.14.201 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=205
[1709696403] unbound[1816:2] debug: servselect ip4 198.41.0.4 port 53 (len 16)
[1709696403] unbound[1816:2] debug:    rtt=248
[1709696403] unbound[1816:2] debug: selrtt 170
[1709696403] unbound[1816:2] info: sending query: . NS IN
[1709696403] unbound[1816:2] debug: sending to target: <.> 192.203.230.10#53
[1709696403] unbound[1816:2] debug: dnssec status: expected
[1709696403] unbound[1816:2] debug: EDNS lookup known=1 vs=0
[1709696403] unbound[1816:2] debug: serviced query UDP timeout=205 msec
[1709696403] unbound[1816:2] debug: inserted new pending reply id=4c35
[1709696403] unbound[1816:2] debug: opened UDP if=0 port=13824
[1709696403] unbound[1816:2] debug: comm point start listening 33 (-1 msec)
[1709696403] unbound[1816:2] debug: mesh_run: iterator module exit state is module_wait_reply
[1709696403] unbound[1816:2] info: mesh_run: end 2 recursion states (1 with reply, 0 detached), 1 waiting replies, 0 recursion replies sent, 0 replies dropped, 0 states jostled out
[1709696403] unbound[1816:2] info: 0pvCD mod2  . NS IN
[1709696403] unbound[1816:2] info: 1RDdc mod2 rep pickles.com. A IN
[1709696403] unbound[1816:2] debug: cache memory msg=132120 rrset=132120 infra=14229 val=132400 subnet=140552
[1709696403] unbound[1816:2] debug: svcd callbacks end
[1709696403] unbound[1816:2] debug: close of port 19468
[1709696403] unbound[1816:2] debug: close fd 34
[1709696404] unbound[1816:2] debug: answer cb
[1709696404] unbound[1816:2] debug: Incoming reply id = 4c35
[1709696404] unbound[1816:2] debug: Incoming reply addr = ip4 192.203.230.10 port 53 (len 16)
[1709696404] unbound[1816:2] debug: lookup size is 1 entries
[1709696404] unbound[1816:2] debug: received udp reply.
[1709696404] unbound[1816:2] debug: udp message[28:0] 4C3580950001000000000001000002000100002904D0000080000000
[1709696404] unbound[1816:2] debug: outnet handle udp reply
[1709696404] unbound[1816:2] debug: measured roundtrip at 19 msec
[1709696404] unbound[1816:2] debug: svcd callbacks start
[1709696404] unbound[1816:2] debug: worker svcd callback for qstate 0x7f8bbc3311b0
[1709696404] unbound[1816:2] debug: mesh_run: start
[1709696404] unbound[1816:2] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
[1709696404] unbound[1816:2] info: iterator operate: query . NS IN
[1709696404] unbound[1816:2] debug: process_response: new external response event
[1709696404] unbound[1816:2] info: scrub for . NS IN
[1709696404] unbound[1816:2] info: response for . NS IN
[1709696404] unbound[1816:2] info: reply from <.> 192.203.230.10#53
[1709696404] unbound[1816:2] info: incoming scrubbed packet: ;; ->>HEADER<<- opcode: QUERY, rcode: REFUSED, id: 0
;; flags: qr ra ; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
.   IN   NS

;; ANSWER SECTION:

;; AUTHORITY SECTION:

;; ADDITIONAL SECTION:
;; MSG SIZE  rcvd: 17

[1709696404] unbound[1816:2] debug: iter_handle processing q with state QUERY RESPONSE STATE
[1709696404] unbound[1816:2] info: query response was THROWAWAY
[1709696404] unbound[1816:2] debug: iter_handle processing q with state QUERY TARGETS STATE
[1709696404] unbound[1816:2] info: processQueryTargets: . NS IN
[1709696404] unbound[1816:2] debug: processQueryTargets: targetqueries 0, currentqueries 0 sentcount 33
[1709696404] unbound[1816:2] debug: request has exceeded the maximum number of sends with 33
[1709696404] unbound[1816:2] debug: return error response SERVFAIL

[1709696404] unbound[1816:2] debug: mesh_run: iterator module exit state is module_finished
[1709696404] unbound[1816:2] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_moddone
[1709696404] unbound[1816:2] info: validator operate: query . NS IN
[1709696404] unbound[1816:2] debug: validator: nextmodule returned
[1709696404] unbound[1816:2] debug: not validating response, is valrec(validation recursion lookup)
[1709696404] unbound[1816:2] debug: mesh_run: validator module exit state is module_finished


I'm at a bit of a loss to explain this, as everything else works except DNS lookups over the VPN tunnel.
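The raw hex Unbound logs can be decoded directly, and it confirms what the scrubbed-packet dump already shows: every root reply over the tunnel comes back REFUSED with zero answers, so the iterator throws each one away until sentcount exceeds 32 and the query fails with SERVFAIL. A small decoder for the 12-byte header (the hex string below is copied from one of the `udp message[28:0]` lines above):

```python
# Decode the DNS header fields from a hex dump like Unbound's log line.
import struct

RCODES = {0: "NOERROR", 1: "FORMERR", 2: "SERVFAIL",
          3: "NXDOMAIN", 4: "NOTIMP", 5: "REFUSED"}

def decode_header(hex_msg):
    """Unpack ID, flags, and section counts from a DNS message hex dump."""
    raw = bytes.fromhex(hex_msg)
    msg_id, flags, qd, an, ns, ar = struct.unpack(">6H", raw[:12])
    return {"id": msg_id, "rcode": RCODES.get(flags & 0xF, "?"),
            "questions": qd, "answers": an,
            "authority": ns, "additional": ar}

hexdump = "902180950001000000000001000002000100002904D0000080000000"
print(decode_header(hexdump))  # rcode comes out REFUSED, answers 0
```

A REFUSED, rather than a timeout, from multiple independent root servers points at something in the path answering on their behalf, which lines up with the provider-side finding in the update above (post #8).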