OPNsense Forum

English Forums => Tutorials and FAQs => Topic started by: OPNenthu on April 26, 2025, 12:48:44 PM

Title: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 26, 2025, 12:48:44 PM
EDIT: As explained in the thread below, this is not technically a work-around, as I originally thought.  It is an implementation of an IPv6 control plane (a valid technique) for ICMP traffic; an example of Multi-color Shaping.  Please ignore the references to "work-around."

------

This is a work-around for those of us who want to combat bufferbloat with FQ_CoDel and ECN as per the OPNsense guide (https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html), but who are seeing high packet loss on the IPv6 gateway (specifically on upload) with the shaping applied.  This issue is discussed here (https://github.com/opnsense/core/issues/7342) and here (https://github.com/opnsense/core/issues/6714), as well as in several forum posts.

packet_loss.png

(Note: some have experienced loss of IPv6 connectivity altogether, although it's not clear whether it has the same underlying cause.  In some cases the ISP may not support ECN, as observed by @meyergru.  This work-around won't help in those situations.)

I took the inspiration to try this from the comments in https://github.com/opnsense/core/issues/6714.  Thanks to GitHub user @aque for the hint.

Starting with the configuration from the OPNsense guide as the basis:

1. Under Firewall->Shaper->Pipes, add an additional upload pipe named something like "Upload-Control".  We'll be using it to separate ICMP and ICMPv6 traffic from the CoDel shaper.  You could name it more specifically, like "Upload-ICMP", but you may wish to use this pipe for additional control protocols (e.g. DHCP, NTP, DNS) in the future, so I went with a generic name.

I set the bandwidth for this pipe to 1 Mbit/s in my case, which seems more than enough for my home internet usage (your mileage may vary).  So, for example, if your existing upload pipe was 40 Mbit/s, you'll reduce it to 39 Mbit/s and give the 1 Mbit/s to the new pipe.

Leave everything else default.

I personally did not create a manual queue for this (it's working without one) so I will skip over Firewall->Shaper->Queues.

2. Under Firewall->Shaper->Rules, clone the existing Upload rule and make the following edits:

- Sequence: <upload rule sequence> - 2
- Protocol: icmp
- Target: Upload-Control (the pipe you created in step 1)

Save the rule with a descriptive name like "Upload-Rule-ICMP".  The sequence needs to be at least 1 less than that of the default Upload rule, and you may need to adjust the other rules' sequence values accordingly.

3. Repeat step 2 for the ICMPv6 rule:

- Sequence: <upload rule sequence> - 1
- Protocol: ipv6-icmp
- Target: Upload-Control

Save as "Upload-Rule-ICMPv6". 

Make sure "Direction" is "out" for both of these rules (under the advanced settings).

Now when you run a speed test, you should no longer see the high packet loss on the IPv6 gateway, and you should see the ICMP traffic being tallied under the respective rules in Firewall->Shaper->Status.
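(Optional, for CLI types: the Shaper is implemented on top of ipfw/dummynet, so on a stock install you should also be able to verify from a root shell.  These are standard FreeBSD ipfw commands; I'm listing them as a hint, adjust for your own setup:

ipfw pipe show    # dummynet pipes with their configured bandwidth
ipfw sched show   # schedulers and their (dynamic) queues
ipfw show         # rules with packet/byte counters

The counters on the ICMP/ICMPv6 rules should tick up while a speed test runs.)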

shaper_rules.png

upload_rule_status.png

Hope this helps.  Do let me know if I've done something stupid here.  I am not an expert.

(If you're curious about the TCP ACK rules in the screenshot, I followed the advice given by @Seimus in this post (https://forum.opnsense.org/index.php?topic=7423.msg222935#msg222935).)
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: dinguz on April 26, 2025, 03:08:31 PM
Nice work! I'm wondering though — is this fixing an actual problem in day-to-day use, or is it more about looking good in tests? Would love to hear a bit more about that.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: OPNenthu on April 26, 2025, 04:07:09 PM
The bufferbloat test result is not meant to give the impression of chasing numbers (apologies if it did).

I won't try to defend the use of shaping for everyone- I think it's a personal choice.  In my case I had to put my ISP gateway into bridge mode in order to run OPNsense, and by doing so I effectively disabled all the nice shaping that the ISP had included on their box.  I pay good money for a "premium" service here that is advertised heavily on TV for its low latency for gaming and video conferencing.  I might as well get what I pay for.

There is a significant difference with and without shaping: yes, in terms of the raw numbers, but more importantly in terms of consistency.  With shaping enabled the latency is consistent.  Without it, I've seen it jump around a wide range (low teens to several hundred ms).

As for routing ICMPv6 around it: purely a work-around.  OPNsense doesn't currently have a way to exclude that traffic from the shaping rule (it was requested in one of the GitHub tickets, but it doesn't look like it's being worked on).  I can't say whether the packet loss was having a real impact on latency, as I was still getting good numbers, but the gateway status going red all the time was uncomfortable.  If the loss got high enough, I worried that the gateway would go down.



Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: meyergru on April 26, 2025, 04:23:45 PM
I thought this was common knowledge:

Bufferbloat plays a role when you have a download or upload running (which might also be someone in your network streaming a video): latency increases in that case, which can result in lagging online games.  It can also cause noticeable interference in audio streams.

In extreme cases, you will notice slow page build-up with complex web pages that consist of dozens or hundreds of resources, because when your buffers are full and your network stack does not know it, the content only gets transferred on the next retry after packet losses.

This becomes especially noticeable with sites that are far away in terms of turnaround time.  To lessen the effects of the BDP (https://en.wikipedia.org/wiki/Bandwidth-delay_product), you would normally want a buffer as large as you can get, but that will only go as far as your ISP lets you.
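To put a rough number on it: the BDP is simply bandwidth multiplied by round-trip time.  A hypothetical example for a 40 Mbit/s uplink at 60 ms RTT:

BDP = 40 Mbit/s x 0.060 s = 2.4 Mbit = 300 KB, i.e. roughly 200 full-size (1500-byte) packets in flight

Buffers much beyond that point stop helping throughput and mostly just add queueing delay.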

Read more about it here (https://www.bufferbloat.net/projects/).

@OPNenthu : Nice work! This should probably be added to the traffic shaping guide (https://docs.opnsense.org/manual/shaping.html).
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: dinguz on April 26, 2025, 07:50:32 PM
Apologies for not wording my question more clearly. I'm fully on board with the bufferbloat issue in general; I was just wondering more specifically about the effects of ICMPv6 apparently being squashed by other traffic.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: meyergru on April 26, 2025, 10:13:43 PM
OPNenthu gave the links to the discussions of issues around this at the start of his post. Basically, using the traffic shaper breaks IPv6 connectivity under high load.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: Seimus on April 27, 2025, 01:28:45 PM
Nice write up,

When I wrote the bufferbloat guide I didn't have the possibility to test it on IPv6.
The latency/packet loss can increase for IPv4 pings as well; the reason behind it is basically starvation of BW and queues.  But it's not as prominent as it is for IPv6.

What you practically did is give dedicated BW to a specific traffic type, i.e. a BW reservation.  In a way we can look at this as creating a priority Pipe/Queue, or better said, a dedicated Pipe/Queue.

Speaking of excluding ICMP from the queues, there is perhaps a different possibility: instead of matching all IP, match all UDP & TCP.  By design, ICMP is associated with neither TCP nor UDP, so it would not be matched by the default any-rules; but without a pipe assigned, it could start to eat into the whole capacity where there is none.  Your approach is better, though: give ICMP, a specific traffic type, a specific dedicated configured chunk of the BW.
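To illustrate that alternative with hypothetical sequence numbers: instead of one catch-all upload rule with protocol "ip", there would be two clones of it, e.g.

- Sequence: 10, Protocol: tcp, Target: <your FQ_CoDel upload pipe>
- Sequence: 11, Protocol: udp, Target: <your FQ_CoDel upload pipe>

with no "ip" rule left, so ICMP/ICMPv6 would bypass the shaper entirely, unshaped and uncapped.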


There are other methods to mitigate this as well, like having more specific queues for traffic types, because in a congestion scenario either a flow queue is TAIL-dropping in the FQ_CoDel scheduler, or an IP match-any queue in Queues is TAIL-dropping once the flow queue is full.  BUT, if congestion is ongoing, the ICMP would be hit sooner rather than later anyway.  That's why I like your approach.



If this is a valid "solution" for the IPv6 problems, we can adjust the official bufferbloat guide to mention the need to create a specific pipe for ICMP.  Or create a separate page for IPv6: "IPv6: Fighting Bufferbloat with FQ_CoDel".

Regards,
S.


P.S. There is always a queue (10002.141074), even if you don't specify one ;). When you don't set a manual queue, dynamic ones are used: 2 by default, as specified in the Queue field of the pipe config.




Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: OPNenthu on April 27, 2025, 06:42:36 PM
Thanks all- I appreciate the review/feedback. @Seimus, the explanation about BW starvation as the cause is making a lot of sense now and I can appreciate why @dinguz may not be seeing the issue depending on the available upload bandwidth.

I happen to have remote access to a second physical OPNsense at my parents' house (as well as a mini PC there) that I can use as a control for testing.  The remote instance only has the Bufferbloat fix as per the official guide and does not have the ICMPv6 work-around. The other difference is that my dad's service plan is 300/300 symmetrical compared to mine at 800/40 asymmetrical (and different ISPs).

Here's what I observe.  When I run an online speedtest on the remote network, I see that the gateway is not showing packet loss.  The delay increases slightly, but the loss remains at 0.  This tells me that it will be much harder to observe the issue on that network since it has a sufficiently wide upload pipe.  It's hard to saturate a 300Mbps link in day-to-day browsing.  I'm attaching a screenshot of the remote gateway 'Quality' graph.

On my network, however, it's quite easy to start seeing the issue.  All I need to do is connect to a VPN provider and start a couple of video streams.  This creates a sustained load on the WAN upload, and because of my smaller overall pipe I see the packet loss creep up.  An online speed test is the best way to show it, though, as that puts on an immediate heavy load.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: Seimus on April 28, 2025, 01:06:57 AM
There are two components in networks/paths that directly impact performance, i.e. user experience:
1. BW capacity
2. Queue size

These two have a common relationship:

If BW capacity is saturated, it will cause back pressure on the queues, causing them to go full
> if a queue is full, depending on the queue management it will perform an action > dropping, be it TAIL or Early.

However, there are also traffic types that can saturate a queue while BW capacity is not saturated
> if a queue is full, depending on the queue management it will perform an action > dropping, be it TAIL or Early.

The latter is much harder to troubleshoot.
In day-to-day use, from the perspective of us users and homelabbers, we mostly experience the 1st scenario.  That one also matches what you describe above.

TIP: in FQ_CoDel you can set the size of the flow queues, but if set too high, too many packets fill the queue and cause unnecessary latency.  If BW saturation prevails, we may still TAIL-drop from a queue.
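As a hypothetical worked example of "too high": queueing delay at line rate is queue size divided by drain rate.  On a 40 Mbit/s upload pipe, a 1000-packet flow-queue limit of 1500-byte packets holds

1000 x 1500 B x 8 bit/B = 12 Mbit, and 12 Mbit / 40 Mbit/s = 0.3 s

so the limit alone could admit up to ~300 ms of queueing delay.  CoDel's early dropping normally keeps the standing queue far below that, but the limit still caps the worst case.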


------------

Also, I think you should not call this an "ICMPv6 work-around".

Because this is, by all means, how a control plane should be taken care of.

What I mean by that is: from the perspective of a packet, if you are not using a shaper, all traffic is treated as falling into a default any/any class: one queue, one pipe.  When you use a QoS/shaper, most of the time basic user needs require only one queue and one pipe as well.  And here comes the problem with the control plane in a congestion situation.

If we handle everything as one BIG queue and one BIG pipe, at a certain point what should not fail (the control plane) will fail, and with it the network will fail.

For example, when configuring BGP together with a QoS/shaper, we keep in mind to separate the BGP control plane from other kinds of traffic and handle it as a different color (Queue/Pipe).  We reserve a specific needed BW chunk for it, to guarantee operation and non-disruption of the network during congestion events.  By doing so we prevent BGP from going down and exclude it from BW and queue starvation.

This goes for any control plane.
When we plan QoS/shapers, we need to take the control plane into account as well.  Such as ICMPv6, as it's necessary for the proper functionality of IPv6, which makes it a control plane ;)


Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: OPNenthu on April 28, 2025, 02:42:48 AM
Noted- I'll change the description.  How about "IPv6 optimization for FQ_CoDel (anti-Bufferbloat) shaping" ?

Along the lines of a control plane, I am curious:

- Does it make sense to do this for the Download side as well?

- Is there a good way to measure the needed width of the control pipe rather than guessing at 1Mbit/s?  Does OPNsense have built in tools to measure ICMP flows?

(**EDIT: I found some netflow data under Reporting -> Insight, but it's only reporting packet and byte counts, not an average rate.  However, the total ICMP v4/v6 count is extremely small relative to my overall traffic (<1%, it seems), so probably even 0.5 Mbit/s would be OK.  I'll stick with 1 Mbit/s for now.)

- Are there other types of control traffic that make sense to go through this pipe as well?  I alluded earlier to DHCP, NTP, and possibly DNS (although I'm not noticing an issue with these).
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: Seimus on April 28, 2025, 10:27:11 AM
Quote from: OPNenthu on April 28, 2025, 02:42:48 AMNoted- I'll change the description.  How about "IPv6 optimization for FQ_CoDel (anti-Bufferbloat) shaping" ?

Sure, why not.  I would call it something like "IPv6 Control Plane with FQ_CoDel Shaping".  Or Multi-color Shaping, because that's basically what is achieved here.


----------------------------

Quote from: OPNenthu on April 28, 2025, 02:42:48 AM- Does it make sense to do this for the Download side as well?

Yes, it does.  If a communication is bidirectional (both ways), this needs to be specified both ways in the shaper.

Quote from: OPNenthu on April 28, 2025, 02:42:48 AM- Is there a good way to measure the needed width of the control pipe rather than guessing at 1Mbit/s?  Does OPNsense have built in tools to measure ICMP flows?

(**EDIT: I found some netflow data under Reporting -> Insight, but it's only reporting packet and byte counts, not an average rate.  However, the total ICMP v4/v6 count is extremely small relative to my overall traffic (<1%, it seems), so probably even 0.5 Mbit/s would be OK.  I'll stick with 1 Mbit/s for now.)

Built-in, only via netflow; otherwise you need to check the protocol specification.  But considering this is control plane traffic, it should not need much BW.
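(For a rough ad-hoc measurement without netflow, tcpdump on the WAN interface would work too; the interface name here is a placeholder for your WAN NIC:

tcpdump -i igc0 -n -q 'icmp or icmp6'   # print ICMP + ICMPv6 crossing the WAN; stop with Ctrl-C

Packets per second times a typical ~100-byte ICMP packet gives a generous upper bound; on a home link that should land far below 1 Mbit/s.)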

Quote from: OPNenthu on April 28, 2025, 02:42:48 AM- Are there other types of control traffic that make sense to go through this pipe as well?  I alluded earlier to DHCP, NTP, and possibly DNS (although I'm not noticing an issue with these).

This is a nice question.  When we are speaking about a control plane to "guarantee operation and non-disruption for the network during congestion events",
we are talking about a control plane that has a direct impact on the network's stability, i.e. L3 protocols.

So if you run, for example, a dynamic routing protocol towards an external device, you would need it.

DHCP, DNS and NTP are L7, so from purely this view they would not fall into this category.  There are situations where you need a separate class/Queue+Pipe with dedicated BW for these, but you should not mix them with the L3 control plane class/Queue+Pipe.  I think it's not necessary to do this; FQ_C should handle them fine.  However, you can create separate queues in the main FQ_C pipe for at least DNS; this is how I have it set up.

Look at this in the following way: if we have something critical or important, it's maybe worth considering a separate class/Queue+Pipe for it, to guarantee a minimum BW for operational purposes;
A. from the network view
  > most critical is always something that has a direct impact on network stability > control plane + service plane
B. from the client view
  > important, for example, DHCP, DNS, or the management plane (SSH)
C. from the user view
  > user-important applications, IPTV, RTP, etc. > data plane (user-defined apps)

A. always needs to be taken care of, always in its own dedicated way.

B. + C., with FQ_C in the equation, can be handled totally fine as-is; in certain edge scenarios, however, it is necessary to separate them, because FQ_C doesn't do any BW prioritization ~ it shares the BW equally.



Regards,
S.

P.S. Sorry for the lengthy replies, but we are touching on topics here that I think are a bit beyond "simple config and done", and rather need to be understood.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 28, 2025, 06:08:42 PM
Thanks, this is enlightening and good to have these explanations for posterity IMO.

I went ahead and replicated the Control pipes and ICMP rules for Download as well.  I wanted to scratch my curiosity, so I also added two manual queues and rules for DoT to/from Quad9 via the FQ_CoDel pipes.  So far everything seems to be working smoothly.  I will keep an eye on it for some time.

Screenshots of the updated solution are attached, although these go above and beyond the main topic here.  Just to reiterate, for those only needing to solve the IPv6 WAN packet loss with FQ_CoDel: you only need to add the Control pipes & rules in both directions.  Ignore all the ACK/DoT/Quad9 stuff (I'm too lazy to delete it at this point).

(P.S. it was tricky to match DoT by 5-tuple because it is neither a true TCP nor UDP protocol according to Wikipedia, and port 853 is not specific enough.  So instead I matched ip/853 to and from the Quad9 public IPs, as I have them configured in Unbound.)
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 29, 2025, 12:57:33 PM
Quote(P.S. it was tricky to match DoT by 5-tuple because it is neither a true TCP nor UDP protocol according to Wikipedia, and port 853 is not specific enough.  So instead I matched ip/853 to and from the Quad9 public IPs, as I have them configured in Unbound.)

You could just match port 53 (DNS) + 853 (DoT); these are reserved ports, so no other application should be using them.  However, if you use DoH, which runs over port 443, then you need to be more precise and specify Destinations as well.

-----------------------

I was trying to look up technical documentation for control plane QoS as used in enterprise solutions.  It looks like the default is always 1% of the used BW; in your case, 1% of 40 Mbit.  But this takes into account that there are control planes for multiple protocols.

As you run only IPv6, the 1 Mbit is enough, but in case you need to shape control planes for other protocols as well (BGP, for example), it's worth considering increasing the BW of the Control plane pipe.  And use weighted queues: as the default scheduler is WFQ, you can keep one pipe for the control plane and create classes/queues per specific protocol, allocating a proper BW reservation by the merit of queue weight.
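A hypothetical example of that weighted sharing: a 2 Mbit/s control pipe with a BGP queue at weight 70 and an ICMPv6 queue at weight 30 would, under contention, split as

BGP:    2 Mbit/s x 70 / (70 + 30) = 1.4 Mbit/s
ICMPv6: 2 Mbit/s x 30 / (70 + 30) = 0.6 Mbit/s

and since WFQ is work-conserving, an idle class lends its unused share to the other.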

-----------------------

As pointed out by @meyergru, it would be really beneficial to introduce the need for, and an understanding of, shaping for control plane traffic.
When I have time, I will create a PR touching on this topic in general, with an example for IPv6.

Of course @OPNenthu if you want you can do it and share the PR and I can just contribute to it ;)

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 29, 2025, 05:53:20 PM
Thanks for confirming the needed BW.  It aligns with my observations from netflow as well.

Quote from: Seimus on April 29, 2025, 12:57:33 PMAnd use weighted queues, as the default scheduler is WFQ, so basically this way you can keep one Pipe for control plane and creates classes/queues per specific protocol to allocate proper BW reservation by the merit queue weight.

Glad you touched on this.  I was debating whether FIFO might perform better for this purpose, assuming the pipe was only being used for ICMP-type traffic.  I briefly tried it but didn't notice any difference, and the default (WFQ) gives us more options, like you said.

QuoteOf course @OPNenthu if you want you can do it and share the PR and I can just contribute to it ;)

I appreciate it but I'm out of my depth on the topic. Happy to proofread or test.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 29, 2025, 07:27:32 PM
I just took a look at your bufferbloat submission for reference: https://github.com/opnsense/docs/pull/571

That doesn't seem too bad to try and follow.  Maybe I can install a reStructuredText editor in VSCode and get some initial content down as a starting point.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 29, 2025, 08:11:49 PM
Quote from: OPNenthu on April 29, 2025, 05:53:20 PMGlad you touched on this.  I was debating whether FIFO might perform better for this purpose, assuming the pipe was only being used for ICMP-type traffic.  I briefly tried it but wasn't noticing any difference, and the default (WFQ) gives us more options like you said.

If there is a better option, don't use FIFO; it should only be fine when you have one queue per pipe.
It's better to use WFQ or QFQ, the latter being a faster variant of WFQ with much faster processing time.
Btw, if you can, try QFQ on the control plane pipe for IPv6.

Quote from: OPNenthu on April 29, 2025, 07:27:32 PMI just took a look at your bufferbloat submission for reference: https://github.com/opnsense/docs/pull/571 (https://github.com/opnsense/docs/pull/571)

That doesn't seem to too bad to try and follow.  Maybe I can install a reStructuredText editor in VSCode and get some initial content down as a starting point.

It's nothing hard; reStructuredText is simple to understand and use.  The challenge, more or less, is to write the docs properly.  I already have a draft in my head of what the docs should contain and how to structure them.  Feel free to start; this is the benefit of open source (OPN docs as well), as we can co-create and collaborate ;)

But ultimately it depends on the OPN devs if they accept such addition to their docs :)

Regards,
S.


Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 30, 2025, 05:24:50 AM
I'm not sure how to test ICMPv6 throughput.

As a basic latency test, I tried to run 10 pings to Cloudflare DNS under load.  To generate the load I ran speedtest.net in a browser and initiated the pings during the upload portion of the speed test.  The results all seem within margin of error to me.

Of course, my gateway showed significant packet loss (up to 30%) during the baseline test with only FQ_CoDel present.  It did not do this when the Control pipe was active (either WFQ or QFQ).

Baseline - No control pipe
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=18ms
Reply from 2606:4700:4700::1111: time=17ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 18ms, Average = 14ms

Control (WFQ)
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=11ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 15ms, Average = 13ms

Control (QFQ)
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=16ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=12ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 16ms, Average = 13ms

I then repeated the speed tests while watching 'top' on the OPNsense, and I recorded the highest system CPU usages seen:

Baseline: Down: 22%, Up: 3.4%
WFQ: Down: 23%, Up: 3%
QFQ: Down: 23.4%, Up: 4.3%


I don't think my tests are very scientific :) and all I can say at the moment is that there appears to be no downside to using a Control pipe with either scheduler type.  I can't measure or perceive any difference between them, with the exception of the gateway status.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 30, 2025, 09:34:18 AM
As this is basically a test of IPv6 control plane stability, the way you tested it is okay.

----------------
1. Create a WFQ Pipe and Queue for IPV6 ICMP
2. Saturate your internet connection (e.g. with a speed test)
3. Observe ICMPv6 latency, jitter
4. Observe IPV6 for stability
5. Repeat above for QFQ
6. Compare results without Control plane Pipe and Queue and with WFQ and QFQ
----------------

If we would like to test more scientifically, there is a tool for this, for example Crusader, which can give precise measurements specifically for bufferbloat.  But we do not need this, as we have a proof of concept of a working solution.

And yes, I expected WFQ and QFQ to have similar results; a difference would be seen if there were multiple queues under the Control plane pipe.  The benefit of QFQ is that it should provide more consistent rates and tighter guarantees across multiple queues defined by the weight merit.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 30, 2025, 10:05:32 AM
I created a feature request with an explanation on the docs repo.  This will be used for the PR:

https://github.com/opnsense/docs/issues/705

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 30, 2025, 03:59:42 PM
PR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on April 30, 2025, 09:38:02 PM
Hmmm. I just followed the new instructions.  FWIW, it worked fine on one installation with 400/200 Mbit/s.  I then copied the <TrafficShaper> section of config.xml to another installation on the same ISP with a higher bandwidth (1000/500), and the machine went on/off like crazy.  It seemed like the old problem of breaking IPv6 connectivity kicked in again there.

Since the site is remote to me and I broke connectivity doing this once, I cannot thoroughly test it there.

However, when I used the instructions on my own rig (1100/800, other ISP), I found that the Waveform Bufferbloat test stalled after the first step, taking forever "warming up". I am sure that the Shaper is the culprit, because when I disabled all rules, the test went through.

The test also went fine when I reverted the config to the initial instructions by @OPNenthu, with just control rules for upstream icmp and ipv6-icmp, without intermediate queues (using only the pipes for this).  I modified them to also have a downstream control rule, and this works as well.

I wonder if the traffic shaper has problems with higher speeds, which is something I vaguely remember reading about.

My current working setup on my own rig looks like this:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>910</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue>2</queue>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>1160</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue>2</queue>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler>fq_pie</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>20</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>30</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3db79d81-b459-4558-b845-b2ba19efec31">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>fb829d32-e950-4026-a2ee-3663104a355b</target>
          <description>Upload-Control Rule ICMP</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>fb829d32-e950-4026-a2ee-3663104a355b</target>
          <description>Upload-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>883ed783-df03-4109-9364-a6c387f5954f</target>
          <description>Download-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3e5fe8fc-1b6a-4323-a95a-c24e664cd5b9">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>883ed783-df03-4109-9364-a6c387f5954f</target>
          <description>Download-Control Rule ICMP</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

I know that there are a few differences between @Seimus's instructions and what now works:

1. The control plane speeds are very low (1 Mbit/s).
2. I use masks on the pipes, as well as FQ_CoDel parameters and PIE.
3. I have rules for icmp in addition to ipv6-icmp.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: MagikMark on April 30, 2025, 10:30:50 PM
@meyergru

Do you happen to have a screenshot of your TS settings instead of the XML format?
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 01:08:00 AM
Quote from: meyergru on April 30, 2025, 09:38:02 PMI know that there are a few differences between @Seimus's instructions and what now works:

1. The control plane speeds are very low (1 Mbit/s).
2. I use masks on the pipes, as well as FQ_CoDel parameters and PIE.
3. I have rules for icmp in addition to ipv6-icmp.


Thanks for testing.  As I myself don't have an IPv6-capable connection, any testing of the config I created on git, and the results, help to fine-tune this.

This is interesting.
Were you able to observe any packet loss, as reported by other users when this problem with IPv6 occurs (health graph)?

ICMPv4 should not be needed for IPv6 functionality; at least, I didn't find much related to it.

I suspect that if there is still an issue present for you, e.g. loss and latency for IPv6, it could potentially be due to the BW capacity of the control plane pipes.  The rules basically match any ICMPv6, not only that originating from the OPN itself.

Looking at your working config, as you mentioned, you use masks on the pipes:


<pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>

The behavior of a mask on a pipe is different from when a mask is used on a queue.

QuoteThus, when dynamic pipes are used, each flow will get the same bandwidth as defined by the pipe, whereas when dynamic queues are used, each flow will share the parent's pipe bandwidth evenly with other flows generated by the same queue (note that other queues with different weights might be connected to the same pipe).

So, simply put:
When you use a mask on a pipe, each flow gets the BW set on the pipe.
When you use a mask on a queue, the total value of the pipe is shared.
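In raw dummynet terms, the difference looks roughly like this (illustrative numbers only, not the config OPNsense generates):

ipfw pipe 1 config bw 1Mbit/s mask src-ip 0xffffffff    # mask on the pipe: EVERY source IP gets its own 1 Mbit/s
ipfw pipe 2 config bw 1Mbit/s
ipfw queue 1 config pipe 2 weight 100 mask src-ip 0xffffffff   # mask on the queue: all source IPs SHARE pipe 2's 1 Mbit/s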

The config of the queues in the GitHub doc limits the total BW usage to the value of the pipe; this is the reason to use queues, besides the fact that we can use the Control plane pipe for other protocols' control planes.  But it does not share the BW equally amongst the flows in those queues; it's 1st come, 1st served, and the rest starve.  There is a chance a single ICMPv6 flow starved the rest of the flows.

This would explain why the Waveform test stalled, as well as the break of IPv6, if it did happen.

Can you maybe try the config from git again, in two config scenarios?
1. Leave everything as it is in the doc, but increase the Control pipe BW.
2. Set masks on the Control plane queues in their proper respective directions (DL - destination; UP - source).

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 01, 2025, 11:13:29 AM
1. With the setup as per the instructions, I had 10/10 Mbit/s on the control plane, not 1/1 as in my working setup; just as a note.
2. I tried both suggestions from the last posting, to no avail.  I even tried setting queue masks for both the control plane and the IP queues.

I used 900/600 and 100/100 Mbit/s for those tests.  I also tried setting queue masks and increased BW on the pipes.



For reference (and check), here is the non-working configuration snippet as per your last suggestions combined:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>600</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>900</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>100</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>100</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="55c03a93-8de7-4c45-a782-aaecdcc9cc72">
          <number>10002</number>
          <enabled>1</enabled>
          <pipe>883ed783-df03-4109-9364-a6c387f5954f</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Download</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="9aaccde6-b391-4330-b2d0-6e525d2a12ee">
          <number>10003</number>
          <enabled>1</enabled>
          <pipe>fb829d32-e950-4026-a2ee-3663104a355b</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Upload</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>9aaccde6-b391-4330-b2d0-6e525d2a12ee</target>
          <description>Control-plane-IPv6-Rule-Upload</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>55c03a93-8de7-4c45-a782-aaecdcc9cc72</target>
          <description>Control-plane-IPv6-Rule-Download</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

Afterwards, I even tried to shortcut the control plane rules directly to the pipes, as in my working setup; alas, to no avail.

Going back to my working config immediately restored the Waveform test to a working state.  The difference seems to be that I enable PIE on the IP pipes and have some FQ_CoDel params set.

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 11:38:26 AM
Quote from: Seimus on April 30, 2025, 03:59:42 PMPR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.


Thanks @Seimus.  Looks good to me overall.  I added one comment in the PR.

Also interested in the suggestion there re: pf rules vs. ipfw.  I'm willing to try it, but I'm not sure about the implementation in pf using the experimental shaping option.  Would we just need a single pass rule (direction in) on WAN for ICMPv6?  I believe pf rules are from the perspective of the firewall, so both upstream & downstream requests would be seen as 'in' from the WAN perspective.

I'm thinking something like this?

Action: Pass
Interface: WAN
Direction: in
TCP/IP Version: IPv6
Protocol: IPV6-ICMP
Source: Any
Destination: Any
Traffic Shaping (rule direction): Download-Control-Pipe
Traffic Shaping (reverse direction): Upload-Control-Pipe

(directionality for pipe assignment is unclear in this case)

My concern with this is that it would override the default/automatic rules in OPNsense regarding ICMPv6, which is not ideal.  There are security implications, as well as the possibility of taking down the IPv6 network.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 11:43:08 AM
Quote from: meyergru on April 30, 2025, 09:38:02 PMHowever, when I used the instructions on my own rig (1100/800, other ISP), I found that the Waveform Bufferbloat test stalled after the first step, taking forever "warming up". I am sure that the Shaper is the culprit, because when I disabled all rules, the test went through.

I experienced this once as well, when I was initially making changes.  I'm not sure what cleared it up precisely but I do recall rebooting both OPNsense and my ISP router box.  After some settling in, the Bufferbloat and speed tests were no longer stalling.

However, I did not try with manual queues.  In all my testing I always connected the ICMPv6 rules directly to the Control pipes w/ internal queues.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 11:44:38 AM
@meyergru

Many thanks for further testing!

But let me ask if I understood correctly
Quote from: meyergru on May 01, 2025, 11:13:29 AMThe difference seems to be that I enable PIE on the IP pipes and have some FQ_CoDel params set.

When you created the Control plane shaper per the GitHub instructions,
did you also change the configuration of your already working pipes?
Especially the tuned FQ_C and FQ_P parameters?

Because from how I interpret this, and seeing the config, it seems the answer is yes.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 12:17:11 PM
Quote from: OPNenthu on May 01, 2025, 11:38:26 AM
Quote from: Seimus on April 30, 2025, 03:59:42 PMPR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.


Thanks @Seimus.  Looks good to me overall.  I added one comment in the PR.

Also interested in the suggestion there re: pf rules vs. ipfw.  I'm willing to try it, but I'm not sure about the implementation in pf using the experimental shaping option.  Would we just need a single pass rule (direction in) on WAN for ICMPv6?  I believe pf rules are from the perspective of the firewall, so both upstream & downstream requests would be seen as 'in' from the WAN perspective.

I'm thinking something like this?

Action: Pass
Interface: WAN
Direction: in
TCP/IP Version: IPv6
Protocol: IPV6-ICMP
Source: Any
Destination: Any
Traffic Shaping (rule direction): Download-Control-Pipe
Traffic Shaping (reverse direction): Upload-Control-Pipe

(directionality for pipe assignment is unclear in this case)

My concern with this is that it would override the default/automatic rules in OPNsense regarding ICMPv6, which is not ideal.  There are security implications, as well as the possibility of taking down the IPv6 network.

I think it's a good idea; not only to mention it, but to create it as an optional approach within the docs.  The traffic shaper option in pf can bind to either a pipe or a queue as well.

You pose a good question, and that's something that's been drilling into my head too.  As stated in the docs:

https://docs.opnsense.org/manual/firewall.html#traffic-shaping-qos

QuoteTraffic shaping/rule direction > Force packets being matched by this rule into the configured queue or pipe

Traffic shaping/reverse direction > Force packets being matched in the opposite direction into the configured queue or pipe

Regarding overrides: the auto-rules are set within the floating section, which is above Interface or Group, so if those default rules are set to quick, they will always take precedence.  So depending on where you set it, it should not override; but the question there is, will it even be applicable?

In regards to security implications: ICMPv6 needs to be allowed for IPv6 functionality.  By design, the control plane for any protocol needs to be allowed both ways.  But to make such a rule tighter, the source or destination (depending on the rule direction) should be the FW/GW itself, because we are interested in the control plane of the network device itself.

I guess we should ask the devs.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 12:33:11 PM
Quote from: OPNenthu on May 01, 2025, 11:43:08 AMI experienced this once as well, when I was initially making changes.  I'm not sure what cleared it up precisely but I do recall rebooting both OPNsense and my ISP router box.  After some settling in, the Bufferbloat and speed tests were no longer stalling.

I had similar problems with FQ_C when I did tuning in the past; the results didn't make sense, and rebooting the OPN + cable modem usually fixed it... weird...

Quote from: OPNenthu on May 01, 2025, 11:43:08 AMHowever, I did not try with manual queues.  In all my testing I always connected the ICMPv6 rules directly to the Control pipes w/ internal queues.

Can you try it?

It would be good to have consistent results.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 01, 2025, 12:36:49 PM
Quote from: Seimus on May 01, 2025, 11:44:38 AM@meyergru

Many thanks for further testing!

But let me ask if I understood correctly
Quote from: meyergru on May 01, 2025, 11:13:29 AMThe difference seems to be that I enable PIE on the IP pipes and have some FQ_CoDel params set.

When you created the control plane shaper per the GitHub instructions, did you also change the configuration of your already-working pipes, especially the tuned FQ_C and FQ_P parameters?

Because that is how I read it, and the config seems to confirm it.

Regards,
S.

Yes. I cleared the respective parts. I am at a loss as to what difference is actually causing the problem. Maybe it is easier to try to break my working setup by changing it step by step towards your suggested setup, to find the root cause, assuming it is not that occasional glitch both you and @OPNenthu saw.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 12:38:54 PM
QuoteIn regards of security applications. ICMPv6 for IPv6 functionality needs to be allowed. By design any control for any protocol needs to be allowed in both ways.
Understood, but ICMPv6 has many types.  In the default OPNsense ruleset, is my router allowed to send/respond to RAs and NDs on the open internet?

EDIT: I forgot for a moment that IPv6 is meant to be globally routable, although I'm still not sure.

I missed this earlier, too.  Makes sense:
"[...] But to make such a rule tighter, the source or destination (depending on the rule direction) should be the FW/GW itself"


QuoteCan you try it?
Sure, will do some testing.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 01:25:00 PM
Quote from: meyergru on May 01, 2025, 12:36:49 PMYes. I cleared the respective parts. I am at a loss as to what difference is actually causing the problem. Maybe it is easier to try to break my working setup by changing it step by step towards your suggested setup, to find the root cause, assuming it is not that occasional glitch both you and @OPNenthu saw.

I must say it's a mystery to me why this is happening in your setup.
It could be the glitch; when changing or playing with the Shaper, sometimes it's just janky. Even though you can verify in the CLI (via ipfw) that the config is correct, which it is, the results may still not be as expected.

On paper, the control plane is an addition alongside an already existing pipe, so there should be no changes needed to an established pipe flow other than changing the BW (subtracting BW from the existing pipe to give to the new one).


Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 01:28:14 PM
Quote from: OPNenthu on May 01, 2025, 12:38:54 PMUnderstood, but ICMPv6 has many types.  In the default OPNsense ruleset, is my router allowed to send/respond to RAs and NDs on the open internet?

EDIT: I forgot for a moment that IPv6 is meant to be globally routable, although I'm still not sure.

I missed this earlier, too.  Makes sense:
"[...] But to make such a rule tighter, the source or destination (depending on the rule direction) should be the FW/GW itself"

That would have to be as specified in RFC 4890, similar to what the default rules do, I believe.
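
For illustration only (this is not the actual auto-generated ruleset; the pf type names are from pf.conf as I remember them), an RFC 4890-style rule would match on specific ICMPv6 types, e.g.:

    # Allow only the NDP/RA message types RFC 4890 marks as essential
    pass quick on igc0 inet6 proto ipv6-icmp icmp6-type { routersol, routeradv, neighbrsol, neighbradv }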

Quote from: OPNenthu on May 01, 2025, 12:38:54 PMSure, will do some testing.
Many thanks in advance!

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 01, 2025, 02:12:58 PM
So I tested further starting from my working setup.

1. Removed FQ-Codel parameters from the pipes. Test was O.K., but results are worse (B) (https://www.waveform.com/tools/bufferbloat?test-id=1973290a-eed6-4a1f-b660-b2e311de144b) than before (A+) (https://www.waveform.com/tools/bufferbloat?test-id=e5ef07de-3654-4c95-b35a-52666534aa8f). Switching back and forth broke my connection completely once.
2. Removing the masks from the pipes changed nothing, tests went O.K., still A+ grading.
3. Enlarging the bandwidth from 1 to 10 Mbit/s on the control plane pipes changed nothing.
4. Removing the masks from the up- and downstream queues changed nothing.
5. Disabling the icmp (v4) rules changed nothing.
6. Creating queues for the control plane and pointing the control plane rules for ipv6-icmp to them changed nothing.

So I arrived almost at the recommended setup, with the only differences being the FQ-CoDel parameters and PIE enabled on the pipes (I also tried without PIE, which changed nothing).

Then I reduced the up- and downstream bandwidth (the old values were optimized for attainable speed) to 900/600 to verify that the shaper actually had something to do at all. This worked as well and got this result (https://www.waveform.com/tools/bufferbloat?test-id=01365e7b-58f5-44df-b450-90e5e582f4bc). Note that there is no longer any latency increase on either upload or download.

For reference, this is the relevant config section now:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>600</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>900</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>10</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>10</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="6c535ef5-1aa5-4760-a94e-b6f72af55dd8">
          <number>10002</number>
          <enabled>1</enabled>
          <pipe>883ed783-df03-4109-9364-a6c387f5954f</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Download</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="a71074a0-e387-4ff6-8203-1f7e08ef7b32">
          <number>10003</number>
          <enabled>1</enabled>
          <pipe>fb829d32-e950-4026-a2ee-3663104a355b</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Upload</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>a71074a0-e387-4ff6-8203-1f7e08ef7b32</target>
          <description>Upload-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>6c535ef5-1aa5-4760-a94e-b6f72af55dd8</target>
          <description>Download-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

So maybe it really is a glitch where playing around with the params sometimes breaks things...

As for the bandwidth limits: there seems to be a tradeoff. Maximum attainable speed can be reached when you set the pipe about 5% above your maximum unshaped speed, at the expense of some increased latency, which still gives an A+ rating. If you want no latency increase at all, you will have to sacrifice some attainable speed.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 05:47:01 PM
Testing with intermediary queues seems just as good/stable as without.  Actually, the home internet is busy at the moment, and I'm still posting good pings under the additional load from a speed test:

C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=10ms
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=11ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 10ms, Maximum = 15ms, Average = 12ms

Bufferbloat (https://www.waveform.com/tools/bufferbloat?test-id=42796413-a069-4703-8889-6620a94b1fb8) remains A+.

speedtest.net result:

speedtest.png

This is with QFQ for control pipes and FQ_CoDel+ECN on default pipes.  No masks, PIE, or CoDel params.  All queue weights 100.  IPv4 icmp and other rules/queues disabled (only testing ipv6-icmp on control plane, everything else to default rules).

Actually I have a question about this: if I want to add back ipv4 icmp to the control plane, I will need to create 2 more queues.  What weights should they get?  Are they also 100, or do we split the difference (50-50) with the ipv6-icmp queues?
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 06:10:53 PM
@meyergru
Once again, many thanks. So basically this confirms that the config in the doc works as suggested.

Few comments from my side.

Quote from: meyergru on May 01, 2025, 02:12:58 PMSo I arrived almost at the recommended setup, with the only differences being the FQ-CoDel parameters and PIE enabled on the pipes (I also tried without PIE, which changed nothing).

Actually, I would not call that a difference; as mentioned previously, if you already have a pipe created it should remain as-is, and only BW should be subtracted. The main point of the control plane class is to allocate it its own BW and to take away the potential back-pressure caused by the sojourn time that FQ_C, for example, relies on.

The config you provided looks correct to me. In your original configuration you had masks and a queue (value) set in the pipe. Those actually don't do anything if you have a manually created queue connected to it; they only apply if you attach a rule directly to the pipe. ECN in a queue applies only to CoDel, not FQ_C.


Quote from: meyergru on May 01, 2025, 02:12:58 PMAs for the bandwidth limits: there seems to be a tradeoff. Maximum attainable speed can be reached when you set the pipe about 5% above your maximum unshaped speed, at the expense of some increased latency, which still gives an A+ rating. If you want no latency increase at all, you will have to sacrifice some attainable speed.

Believe it or not, this is expected :D. The reason behind it, I think, is the FQ that FQ_C and FQ_P use. Fair queueing has trouble providing a consistent or maximum rate, so it takes away roughly 3%-5% of the BW set on the pipe. DRR, for example, is better in this regard but can create insane latency due to the deficit calculation. QFQ should be capable as well, but the problem is that the FQ_C and FQ_P implementations are only available with FQ, not QFQ.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 06:16:27 PM
Quote from: meyergru on May 01, 2025, 02:12:58 PM3. Enlarging the bandwidth from 1 to 10 Mbit/s on the control plane pipes changed nothing.

I too am observing no benefit from increasing the download control pipe.

Maybe that's a good rule of thumb for servers?  As a home internet user with an asymmetrical data plan, should I reasonably expect proportionally higher control traffic on ingress than on egress?

Quote from: meyergru on May 01, 2025, 02:12:58 PMAs for the bandwidth limits: there seems to be a tradeoff. Maximum attainable speed can be reached when you set the pipe about 5% above your maximum unshaped speed, at the expense of some increased latency, which still gives an A+ rating

Interesting.  I have always been leaving some bandwidth on the table to optimize for latency (as per the Bufferbloat guide), but my speed without shaping measures above 900 Mbit/s (sometimes bursting up to 1 Gbps), even though it's advertised at only 800.

Setting the Download pipe to 910Mbit/s gets me this: https://www.waveform.com/tools/bufferbloat?test-id=477bf3a2-4b43-4f61-91bd-f9c0f44c668a

+5 ms on D/L latency, but still A+.

Though I wonder if that will start to break down during peak use when all the neighbors are online.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 06:26:49 PM
@OPNenthu many thanks for testing!

I am glad you could provide similar results as @meyergru did.

Quote from: OPNenthu on May 01, 2025, 05:47:01 PMThis is with QFQ for control pipes and FQ_CoDel+ECN on default pipes. 
Perfect! This is exactly how I wanted it tested.

QFQ overall should provide more consistent rates vs. WFQ, so it's always worth trying one or the other. But keep in mind, guys, this only affects the control plane and nothing else, as the rest of the traffic sits in different pipes with different schedulers.


In regards of your question
Quote from: OPNenthu on May 01, 2025, 05:47:01 PMActually I have a question about this: if I want to add back ipv4 icmp to the control plane, I will need to create 2 more queues.  What weights should they get?  Are they also 100, or do we split the difference (50-50) with the ipv6-icmp queues?
Yes. Keep in mind that we want to keep the control planes of different protocols separated, yet still utilize the BW dedicated to the control plane as a whole. The weights are up to you, or rather depend on the rate of each specific control plane and how much BW you want to give each of them.

So if you set the weights to 50 & 50, in theory each of them will get 500 Kbit/s during saturation if the pipe BW is 1 Mbit/s.
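
Written out (just the generic weighted-fair-share formula, nothing OPNsense-specific):

\[
\text{share}_i = \frac{w_i}{\sum_j w_j}\cdot BW_{\text{pipe}}
\qquad\Rightarrow\qquad
\frac{50}{50+50}\cdot 1\,\text{Mbit/s} = 500\,\text{Kbit/s}
\]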
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 06:42:27 PM
Quote from: OPNenthu on May 01, 2025, 06:16:27 PMI too am observing no benefit from increasing the download control pipe.

Maybe for servers this is a good rule of thumb?  As a home internet user with an asymmetrical data plan, should I reasonably expect to have proportionally higher control traffic on ingress than on egress?

Actually no; because control plane traffic usually runs at a consistent rate, it should be fine with very low BW values, set to the specification minimum.
We need to keep in mind that the control plane shaper rules do not only catch control plane traffic; they also catch pings. Thus 1 Mbit is optimal to a certain degree.

Quote from: OPNenthu on May 01, 2025, 06:16:27 PMInteresting.  I have always been leaving some bandwidth on the table to optimize for latency (as per the Bufferbloat guide), but my speed without shaping measures above 900 Mbit/s (sometimes bursting up to 1 Gbps), even though it's advertised at only 800.

Setting the Download pipe to 910Mbit/s gets me this: https://www.waveform.com/tools/bufferbloat?test-id=477bf3a2-4b43-4f61-91bd-f9c0f44c668a

+5 ms on D/L latency, but still A+.

Though I wonder if that will start to break down during peak use when all the neighbors are online.

What you see is actually correct, and it purely depends on how your ISP divides the BW among its customers. It's possible they overprovision. It's also possible that during peak hours, when more of that ISP's users are online, an aggregation point will stall and you may not reach those speeds, which could cause additional latency.

Also, if you over-provision the BW on one pipe, the moment that BW is not available it will start to eat into the other pipes, affecting the control plane.

It's always better to keep a BW buffer, because if you don't, you will be at the mercy of your ISP to handle the bufferbloat.

Regards,
S.

One fun fact: when using FQ_C, even if you set a higher BW than you really have, FQ_C due to its algorithm is still somewhat capable of managing the latency. It will not be as good as with a BW buffer in place, but it's 10x better than not having FQ_C at all.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 07:41:48 PM
Thanks @Seimus and @meyergru for all the inputs so far.  I'm learning a lot.

I went back to re-enable all my previous queues & rules for things like TCP-ACK and DNS.  In the process I renamed my objects to normalize them.  I am now again seeing the glitch we talked about, where the Bufferbloat test stalls.  Also, speedtest.net is showing reduced bandwidth.

So there really is truth to this.  There is some issue that crops up when you are adjusting the Shaper objects.

I won't reboot anything this time.  Will wait to see if it clears on its own.

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 08:19:25 PM
Adding to the previous post-

I observe that UDP traffic is impeded.  The screenshot here shows only outgoing, but it's actually in both directions.  TCP seems fine.

Not sure if these are queue drops or bad firewall state?

I've been connected to a VPN provider over UDP the whole time (on a different client/VM) and when I run speedtest these red lines appear in the FW log now.  The VPN endpoint is the destination.

Meanwhile, D/L speeds have degraded further.  The VPN remains connected.  Packet drops are only observed when running a speed test (I guess pointing to a queue issue).

Will post back when something changes; still holding off on reboot.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 02, 2025, 01:12:42 AM
Quote from: OPNenthu on May 01, 2025, 08:19:25 PMNot sure if these are queue drops or bad firewall state?

Queue drops would not generate those entries in the live log.
You can see whether a shaper is dropping via the CLI commands:

ipfw queue show
ipfw sched show
ipfw pipe show


However, if those live log entries are real, sessions are being blocked.

Also have a look at the interface drop counts, on the parent LAN and WAN interfaces.
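
From the shell that would be something like this (FreeBSD netstat; treat the flags as a sketch and double-check them):

    # Per-interface packet, error, and drop counters
    netstat -d -i
    # Live counters for one interface at 1-second intervals (igc0 is a placeholder)
    netstat -d -w 1 -I igc0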

The questions are:
- Is it the same source and destination, or only one specific pair?
- When you click the "i" on a blocked entry, what additional info does it give you?

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 02, 2025, 08:16:06 AM
Still happening hours later.  It may finally be time for a reboot.

Quote from: Seimus on May 02, 2025, 01:12:42 AMipfw queue show
ipfw sched show
ipfw pipe show

root@firewall:~ # ipfw sched show
10002:   1.000 Mbit/s    0 ms burst 0
 sched 10002 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10009 10007
10003:   1.000 Mbit/s    0 ms burst 0
 sched 10003 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10008 10006
10000: 849.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 10000 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10004 10002 10000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0       82     5431  0    0   0
10001:  39.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 10001 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10005 10003 10001
  0 ip           0.0.0.0/0             0.0.0.0/0     13301 19218727 23 34500  99

If I'm reading this correctly, there were 99 drops on the upstream here (39 Mbit/s CoDel).
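
A plain shell loop is enough to watch that counter live during a test (10001 being my upstream scheduler; adjust the number and interval to taste):

    # Print the upstream scheduler block once per second; the last column
    # of the flow line is the cumulative drop count (Drp)
    while :; do ipfw sched show | grep -A 5 '^10001:'; sleep 1; done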

root@firewall:~ # ipfw queue show
q10006  50 sl. 0 flows (1 buckets) sched 10003 weight 50 lmax 1500 pri 0 droptail
q10007  50 sl. 0 flows (1 buckets) sched 10002 weight 50 lmax 1500 pri 0 droptail
q10004  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10005  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
q10002  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10003  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
q10000  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10001  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
q10008  50 sl. 0 flows (1 buckets) sched 10003 weight 50 lmax 1500 pri 0 droptail
q10009  50 sl. 0 flows (1 buckets) sched 10002 weight 50 lmax 1500 pri 0 droptail

root@firewall:~ # ipfw pipe show
10002:   1.000 Mbit/s    0 ms burst 0
q141074  50 sl. 0 flows (1 buckets) sched 75538 weight 0 lmax 0 pri 0 droptail
 sched 75538 type FIFO flags 0x0 0 buckets 0 active
10003:   1.000 Mbit/s    0 ms burst 0
q141075  50 sl. 0 flows (1 buckets) sched 75539 weight 0 lmax 0 pri 0 droptail
 sched 75539 type FIFO flags 0x0 0 buckets 0 active
10000: 849.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 75536 type FIFO flags 0x0 0 buckets 0 active
10001:  39.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 75537 type FIFO flags 0x0 0 buckets 0 active
root@firewall:~ #

QuoteAlso have a look as well on Interface drop count, parent interface LAN and WAN.

LAN is on a LAGG group (2 x 2.5Gbps)

The 'Output Errors: 13' count was there from before.  I've noticed that for a long time on the LAGG IF.

WAN.png LAN.png

QuoteQuestion is 
is it the same Source and Destination? Or only one specific?

The speedtest is still abnormal and the Bufferbloat test is stalled as before, but the only blocks in the F/W log are for that specific src/dst address pair, which is the VPN connection.  Other traffic seems to be passed as normal.

Quotewhen you click the "i" on the blocked one what additional info does it tells you?

It pops up the rule info for the built-in 'force gw' rule:

force_gw.png
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 02, 2025, 09:40:36 AM
The router reboot did the trick, but there was some settling needed as well.  Immediately following the reboot the latencies were high on the bufferbloat test and the FW log was still showing blocked traffic.  Several minutes later that all cleared up and now the tests are back to normal.

I do see tail drops on the upload data pipe/scheduler, only during the upload portion of the speed tests.  I think this is expected, though.  This is probably CoDel/AQM doing its job.

I also rebooted the VM where the VPN was connected and made sure that was again active / working during the tests.

root@firewall:~ # ipfw sched show
10002:   1.000 Mbit/s    0 ms burst 0
 sched 10002 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10009 10007
10003:   1.000 Mbit/s    0 ms burst 0
 sched 10003 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10008 10006
10000: 849.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 10000 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10004 10002 10000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0       19     1140  0    0   0
10001:  39.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 10001 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10005 10003 10001
  0 ip           0.0.0.0/0             0.0.0.0/0     51935 74618246 26 38507 395

So, long story short, messing with shaping can in some instances cause initial instability that needs a reboot + settling time.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 02, 2025, 10:56:17 AM
Quote from: OPNenthu on May 02, 2025, 09:40:36 AMI do see tail drops on the upload data pipe/scheduler, only during the upload portion of the speed tests.  I think this is expected, though.  This is probably CoDel/AQM doing its job.

Yes, this is basically FQ_C taking care of packets that have been in the flow queue too long; FQ_C will drop packets "if their sojourn times exceed the target setting for longer than the interval". Sadly, because those flows are dynamic and live under the scheduler, we don't see specific flows, only the aggregate; that's why there is 0.0.0.0/0.

Quote from: OPNenthu on May 02, 2025, 09:40:36 AMSo, long story short, messing with shaping can in some instances cause initial instability that needs a reboot + settling time.

Agreed; several people have experienced this, and it's possible to replicate.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 02, 2025, 12:05:08 PM
Unless there is a less intrusive way of fixing this than a reboot, it should be pointed out as a caveat in the instructions. Would a fw state reset help?
As a matter of fact, for me this was unexpected, and I still cannot reliably reproduce it, nor are the effects consistent.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 02, 2025, 12:36:03 PM
Quote from: meyergru on May 02, 2025, 12:05:08 PMUnless there is a less intrusive way of fixing this than a reboot, it should be pointed out as a caveat in the instructions.

I agree, but thinking about it, in which section of the shaper docs should we point it out? This is not specific to the examples, but applies to the Shaper as a whole. If that's the case, I think it should go under the main Shaper section.

Quote from: meyergru on May 02, 2025, 12:05:08 PMWould a fw state reset help?
That would be worth a try.

@All
If somebody hits this problem, could you try resetting the fw states and let us know?
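
Either via the Diagnostics menu in the GUI, or from the shell (standard pfctl flush flags; note this briefly drops all existing connections):

    # Flush all pf states
    pfctl -F states
    # Optionally also flush source-tracking entries
    pfctl -F Sources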

Quote from: meyergru on May 02, 2025, 12:05:08 PMAs a matter of fact, for me this was unexpected, and I still cannot reliably reproduce it, nor are the effects consistent.
It's interesting this is happening at all. From the description one would assume the problem could be packets not being classified properly, but in that case no BW reduction would be visible, since the shaper would simply be bypassed.


Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 03, 2025, 12:03:46 AM
Quote from: meyergru on May 02, 2025, 12:05:08 PMWould a fw state reset help?

Probably not, IMO.  I tested with an OPNsense VM in a double-NAT setup (IPv4-only), so not exactly the same situation, but I did reproduce the issue.

I configured the control & data plane pipes, queues, and rules.  I set the Download pipe to 545 Mbit/s and the Upload to 34 Mbit/s, accounting for the VM/NAT overhead.

After applying the changes I observed a false start in the Bufferbloat test (hung on "Warming Up..."), followed by a semi-successful test (reduced performance on the Download), followed by a second false start.  See "semi_successful.png".

I then reset the F/W states from the Diagnostics menu and gave it a minute to re-establish and settle.  The next couple of Bufferbloat tests did not stall, but the Download performance was still subpar.  This was reproducible.  See "after_reset.png".

Finally I rebooted the VM and then only observed the full performance:  Result (https://www.waveform.com/tools/bufferbloat?test-id=0debaf24-d171-4b41-a97b-daa7af6dbfa6)
(Sorry, ran out of image quota on this post, so had to crop the second image and could not upload the final one).

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 03, 2025, 02:00:17 AM
Alright, it looks like the following observations can be made:

A. There really is a glitch or bug when configuring or changing the Shaper
B. The issue causes degraded performance, e.g. lower-than-expected throughput and/or application stalls (during congestion)
C. This is somewhat reproducible
D. It affects any traffic matched by the Shaper rules
E. Clearing states in pf doesn't fix the problem
F. A FW reboot does fix the problem

So there is either something wrong with OPNsense pushing the config into ipfw/dummynet or ipfw/dummynet itself.
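
One way to narrow that down (a sketch; the file paths are arbitrary, and /root survives reboots unlike /tmp):

    # Snapshot the ruleset and dummynet objects right after applying the changes
    ipfw list       > /root/ipfw.rules.apply
    ipfw pipe show  > /root/ipfw.pipes.apply
    ipfw sched show > /root/ipfw.sched.apply
    # ...take the same snapshots after a reboot (e.g. *.reboot), then compare
    diff /root/ipfw.rules.apply /root/ipfw.rules.reboot

If the configs are identical but the behavior differs, that would point at dummynet state rather than the config push.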

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: vik on May 09, 2025, 10:10:20 PM
This bug is impacting my setup; I opened a GitHub issue:

https://github.com/opnsense/core/issues/8649