Messages - Animosity

#1
It has something to do with AdGuard.

I retried AdGuard on a fresh install on port 53 and checked the box.

DHCP DNS for IPv6 stopped working.

Reinstalled fresh with no AdGuard, and DHCP DNS for IPv6 worked.

Seems easy to reproduce and test, but I really have no use for AdGuard as I just use the same 2 blocklists in Unbound.
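For what it's worth, here's a rough way to confirm the symptom from a Linux client; this is a sketch assuming systemd-resolved, and the firewall address is a placeholder:

```
# List the DNS servers learned per interface; when the bug hits, the
# DHCPv6-supplied IPv6 resolver should be missing from this list.
resolvectl dns

# Sanity-check that the firewall itself still answers DNS over IPv6
# (fd00:1::1 stands in for its LAN address).
dig AAAA opnsense.org @fd00:1::1
```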
#2
23.1 Legacy Series / Re: DNS issues since 23.1.6
April 30, 2023, 11:07:12 PM
You can't run two things on the same port.

You only need the checkbox if you are running AdGuard on 53.

If you are running AdGuard on port 53, you can't also be running Unbound or Dnsmasq on 53; you have to move one of them to another port.
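If you want to verify the conflict from the shell, something like this should show who owns the port (sockstat is stock FreeBSD; the grep pattern is just a convenience):

```
# Show listening sockets bound to port 53 on IPv4 and IPv6
sockstat -4 -6 -l | grep ':53 '
# If AdGuard Home owns :53, Unbound/Dnsmasq has to move, e.g. to :5335
```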

#3
23.1 Legacy Series / Re: DNS issues since 23.1.6
April 28, 2023, 02:42:16 PM
Yes, if you are using the default packages, you'll have no issues.

I just removed AdGuard, added the same 2 blocklists I had in AdGuard into Unbound DNSBL, and turned on reporting in Unbound. That really does everything I wanted anyway, without another package installed.
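Conceptually, each DNSBL entry just makes Unbound refuse to resolve the name; a hedged sketch of the equivalent raw unbound.conf directive (the domain is a placeholder, and OPNsense generates these entries itself):

```
server:
  # Answer NXDOMAIN for a blocklisted name
  local-zone: "ads.example.com." always_nxdomain
```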
#4
23.1 Legacy Series / Re: DNS issues since 23.1.6
April 27, 2023, 04:07:04 PM
No worries - thanks, as it's confusing me too. I'll probably do some more testing in the afternoon once I'm more awake as well :)

#5
23.1 Legacy Series / Re: DNS issues since 23.1.6
April 27, 2023, 03:24:42 PM
Sorry if I'm being dense, but that is what I am doing.

I thought the goal was to forward the inbound request to, say, 192.168.1.1:53 over to 127.0.0.1:5353 so it reaches my AdGuard Home instance.
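In raw pf terms, the intent is roughly the following; this is a hypothetical sketch, not the exact rule the GUI generates, and the interface name is a placeholder:

```
# Redirect LAN DNS queries aimed at the firewall to AdGuard Home on 5353
rdr on lan inet proto { udp tcp } from any to 192.168.1.1 port 53 -> 127.0.0.1 port 5353
```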
#6
Quote from: newsense on April 21, 2023, 02:14:27 PM
I'm with Patrick on this one, similar setup yet simpler:


- AdguardHome installed from Michael's repo and up to date - running on 5353
- Port forward NAT rules on all interfaces directing DNS queries to AdGuardHome
- AdGuardHome handles the DoH/DoT

Running without issues on multiple firewalls for more than 6 months and not affected by any updates so far.

How did you set up the port forward rules? I did a LAN 53 -> 5353, and when I query over IPv4 it tells me there's a source mismatch.

```
> test.com
;; reply from unexpected source: 192.168.1.1#5353, expected 192.168.1.1#53
;; reply from unexpected source: 192.168.1.1#5353, expected 192.168.1.1#53
;; reply from unexpected source: 192.168.1.1#5353, expected 192.168.1.1#53
```

I assume I'm missing something super easy.
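One way to narrow it down is to query AdGuard Home directly on its real port, bypassing the forward:

```
# Direct query: if this works, AdGuard is fine and the NAT rule is at fault
dig test.com @192.168.1.1 -p 5353

# Forwarded query: this is the one producing the source mismatch above
dig test.com @192.168.1.1
```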
#7
Generally, I find NAT reflection to be annoying so I just stay away from it.

If you have Unbound, just do a DNS override for your "outside" name so it resolves inside.

So app.domain.com might be 26.246.12.23 to the outside world, but inside I have an override pointing it at 192.168.1.30, so it works locally.

I assume you have plex.domain.com so just override it and be done with it.
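Under the hood, a host override boils down to a single Unbound directive, roughly like this (a sketch; the name and address mirror the example above):

```
server:
  # Inside answer for a name that resolves publicly to the WAN IP
  local-data: "plex.domain.com. IN A 192.168.1.30"
```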

You seem to have NAT reflection set up, so without seeing more details on what the error is (logs, etc.), I'm not sure how easy that will be to fix.
#8
I have extended stats turned on and the telegraf package installed, and I ship the Unbound stats over to my InfluxDB instance that Grafana reads from. I rarely use the actual OPNsense box for reporting, as I'd rather have it all in a single pane of glass in Grafana.
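The Telegraf side is only a few lines, something like this sketch, assuming Telegraf's stock unbound input plugin and an InfluxDB 1.x-style output; the URL and database name are placeholders:

```
# telegraf.conf (excerpt)
[[inputs.unbound]]
  # Collects output from `unbound-control stats_noreset`
  use_sudo = true

[[outputs.influxdb]]
  urls = ["http://influxdb.lan:8086"]
  database = "telegraf"
```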

#9
I have no issues with IPv6 on Verizon FIOS; it's been working quite well.

I have an open inbound rule for 443 on IPv6 that works as well.
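In pf terms, that inbound rule is essentially the following sketch; the interface name and target address are placeholders:

```
# Allow inbound HTTPS over IPv6 to an internal host
pass in on wan inet6 proto tcp from any to 2001:db8::30 port 443
```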
#10
Yeah, I don't think that's coming to FreeBSD anytime soon unfortunately.

I tried a few different things, and nothing with weighting seems to work in a way that really helps with bufferbloat either, which is unfortunate.

IPFire is testing out Cake currently, but that just doesn't work for me as it's too unstable a platform.

For my use case, I'm fine with shared bandwidth as it doesn't really matter to me, but it would be nice to get prioritized bandwidth, as that's what I thought I was getting.
#11
Thanks for sticking with this.

I think I found a pretty decent explanation and now have a better understanding of why the weights do nothing; it was a placebo effect, as my queues/rules were working, just not because of the weights.

This article had a very good explanation:

https://community.ui.com/questions/If-youre-looking-to-better-understand-what-fqcodel-HTB-and-BQL-is-/edbeb291-83d8-45e4-953f-c0d13ec5689f

So basically, with the defaults you get 1024 flows that are served 'fairly'. The whole concept of any active FQ_* scheduler is that it distributes bandwidth fairly across flows, so you can't prioritize one; that's contrary to the 'fair' part.

So if you have 2 large apps, they'll share the bandwidth, and other flows also get bandwidth to ensure they keep working.

I simplified my setup and just made 1 queue/1 in rule / 1 out rule since it doesn't matter.

I tested with iperf3 and validated that everything is shared as you described, which makes total sense now.
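A minimal version of that test, assuming an iperf3 server you control on the far side (the hostname is a placeholder, and the second instance listens on port 5202):

```
# Two parallel flows through the same fq_codel pipe; with a fair-queueing
# scheduler they converge to roughly equal shares regardless of weights
iperf3 -c iperf.example.net -t 30 &
iperf3 -c iperf.example.net -t 30 -p 5202 &
wait
```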

So you'd have to use a different scheduler if you wanted true priority, as FQ_Codel and FQ_PIE will split bandwidth among their flows.

#12
I was going by this:

https://docs.opnsense.org/manual/how-tos/shaper_prioritize_using_queues.html

and this:

https://docs.opnsense.org/manual/shaping.html

queue
A queue is an abstraction used to implement the WF2Q+ (Worst-case Fair Weighted Fair Queueing) policy, which is an efficient variant of the WFQ policy. The queue associates a weight and a reference pipe to each flow, and then all backlogged (i.e., with packets queued) flows linked to the same pipe share the pipe's bandwidth proportionally to their weights. Note that weights are not priorities; a flow with a lower weight is still guaranteed to get its fraction of the bandwidth even if a flow with a higher weight is permanently backlogged.
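Translated into raw dummynet commands, the docs' model looks roughly like this; a sketch using the classic ipfw pipe/queue syntax, with placeholder bandwidth, and OPNsense normally generates these itself:

```
# One pipe, two weighted queues sharing it under WF2Q+
ipfw pipe 1 config bw 900Mbit/s
ipfw queue 1 config pipe 1 weight 100   # "high"
ipfw queue 2 config pipe 1 weight 1     # "low"
# A backlogged low-weight flow still gets its 1/101 share; weight != priority
```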


In my testing, if I max out a low queue, the other ones suffer no loss and keep working, and I can see traffic flowing through my high/default queues.

It's very easy to test with iperf by using a particular IP in the rules.

Example of my rules/test with iperf3.

https://imgur.com/a/oeufGnN
#13
I use 3 queues each for upload and download, and rules to drop traffic into them.

I use weights of 100/50/1 on the queues respectively, and it works like a champ.

I have high / default / low queues, and I drop all my 'backup'-type traffic into the low queue and never hit any issues.

I thought the weighting with FQ_Codel was more about how many packets get sent rather than priority. I recall reading something about that, but the docs are pretty spotty.

My queues look like:

https://imgur.com/gallery/HQDLF77

Rules are like this, as I filter a specific IP out to my low queue. Rules are matched top-down with the most restrictive first, since I want my low matches before my highs, with a catch-all default at the end.

https://imgur.com/a/HjMhuxn

I have a gigabit FIOS link and always get an A+ on Waveform regardless of the load on my line or what's happening.
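To double-check that the live configuration matches the screenshots, the shell works too (output trimmed here):

```
# List dummynet pipes and queues with their weights from the OPNsense shell
ipfw pipe show
ipfw queue show
```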



#14
So what I ended up doing to solve the problem comes back to the comment about where the shaper lives in the packet flow.

In pfSense, you can't see LAN IPs going through the floating rules on the WAN.

In the OPNsense Shaper, you can see LAN IPs, so I made my in/out rules match the specific LAN IP I wanted to shape in each direction, and I can validate in the GUI that they match. I didn't use any of the normalization items, as marking packets there wasn't working; if you can't actually mark them, it probably shouldn't appear to work or be configurable in the GUI, but it is.

Needless to say, I got what I was after in my initial question by matching on LAN IPs, which was much easier and works well.

#15
Is there really not a single person with a use case for traffic shaping an internal IP outbound?

In pfSense, you do this by tagging LAN traffic and making a floating rule that catches the tag.

I'm just trying to see how this is replicated on OPNsense.