OPNsense Forum

English Forums => Tutorials and FAQs => Topic started by: OPNenthu on April 26, 2025, 12:48:44 PM

Title: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 26, 2025, 12:48:44 PM
EDIT: As explained in the thread below, this is not technically a work-around as I originally thought.  It is an implementation of an IPv6 control plane (a valid technique) for ICMP traffic; an example of Multi-color Shaping.  Please ignore references to "work-around."

------

This is a work-around for those of us who want to combat bufferbloat with FQ-CoDel and ECN as per the OPNsense guide (https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html), but who are seeing high packet loss on the IPv6 gateway (specifically on upload) with the shaping applied.  This issue is discussed here (https://github.com/opnsense/core/issues/7342) and here (https://github.com/opnsense/core/issues/6714), as well as in several forum posts.

packet_loss.png

(Note: some have experienced loss of IPv6 connectivity altogether, although it's not clear whether it has the same underlying cause.  In some cases the ISP may not support ECN, as observed by @meyergru.  This won't help in those situations.)

I took the inspiration to try this from the comments in https://github.com/opnsense/core/issues/6714.  Thanks to GitHub user @aque for the hint.

Starting with the configuration from the OPNsense guide as the basis:

1. Under Firewall->Shaper->Pipes add an additional upload pipe named something like "Upload-Control".  We'll be using it to separate ICMP and ICMPv6 traffic from the CoDel shaper. You can name this more specifically like "Upload-ICMP" but you may wish to use this pipe for additional control protocols (e.g. DHCP, NTP, DNS) in the future so I went with a generic name.

I set the bandwidth for this pipe to 1 Mbit/s in my case, which seems more than enough for my home internet usage (your mileage may vary). So for example if your existing upload pipe was 40 Mbit/s, you'll reduce it to 39 Mbit/s and give the 1 Mbit/s to the new pipe.

Leave everything else default.

I personally did not create a manual queue for this (it's working without one) so I will skip over Firewall->Shaper->Queues.

2. Under Firewall->Shaper->Rules, clone the existing Upload rule and make the following edits:

- Sequence: <upload rule sequence> - 2
- Protocol: icmp
- Target: Upload-Control (the pipe you created in step 1)

Save the rule with a descriptive name like "Upload-Rule-ICMP".  The sequence needs to be at least 1 less than that of the default Upload rule, and you may need to adjust the other rules' sequence values accordingly.

3. Repeat step 2 for the ICMPv6 rule:

- Sequence: <upload rule sequence> - 1
- Protocol: ipv6-icmp
- Target: Upload-Control

Save as "Upload-Rule-ICMPv6". 

Make sure "Direction" is "out" for both of these rules (under the advanced settings).
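For clarity on why the sequence values matter: shaper rules are evaluated in ascending sequence order and the first match wins.  A minimal Python sketch of that first-match logic (sequences, names, and the matching model are simplified illustrations, not OPNsense internals):

```python
# Illustration of first-match rule evaluation: rules are checked in
# ascending sequence order and the first matching rule decides the pipe.
# Sequences, names and the matching model are simplified examples.
rules = [
    {"seq": 19, "proto": "icmp",      "target": "Upload-Control"},
    {"seq": 20, "proto": "ipv6-icmp", "target": "Upload-Control"},
    {"seq": 21, "proto": "ip",        "target": "Upload-Pipe"},  # default rule matches any protocol
]

def match_pipe(packet_proto):
    for rule in sorted(rules, key=lambda r: r["seq"]):
        if rule["proto"] in ("ip", packet_proto):
            return rule["target"]
    return "unshaped"

print(match_pipe("ipv6-icmp"))  # lands in Upload-Control, not the FQ_CoDel pipe
print(match_pipe("tcp"))        # falls through to the default Upload-Pipe
```

If the ICMP rules had a higher sequence than the default rule, the "ip" rule would match first and the control pipe would never see any traffic; that's why the sequence must be lower.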

Now when you run a speed test you should no longer see the high packet loss on the IPv6 gateway and you should see the ICMP traffic starting to get tallied under the respective rules in Firewall->Shaper->Status.

shaper_rules.png

upload_rule_status.png

Hope this helps.  Do let me know if I've done something stupid here.  I am not an expert.

(If you're curious about the TCP ACK rules in the screenshot, I followed the advice given by @Seimus in this post (https://forum.opnsense.org/index.php?topic=7423.msg222935#msg222935).)
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: dinguz on April 26, 2025, 03:08:31 PM
Nice work! I'm wondering though — is this fixing an actual problem in day-to-day use, or is it more about looking good in tests? Would love to hear a bit more about that.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: OPNenthu on April 26, 2025, 04:07:09 PM
The bufferbloat test result is not meant to give the impression of chasing numbers (apologies if it did).

I won't try and defend the use of shaping for everyone- I think it's a personal choice.  In my case I had to put my ISP gateway into bridge mode in order to run OPNsense and by doing so I've effectively disabled all the nice shaping that the ISP had included on their box.  I pay good money for a "premium" service here that is advertised heavily on TV for its low latency for gaming and video conferencing.  I might as well get what I pay for.

There is a significant difference with and without shaping, yes in terms of the raw numbers, but more importantly in terms of consistency.  With shaping enabled the latency is consistent.  Without it, I've seen it jump around a wide range (low teens to several hundred ms.)

As for routing ICMPv6 around it, purely a work-around.  OPNsense doesn't currently have a way to exclude that traffic from the shaping rule (it was requested in one of the GitHub tickets but doesn't look like it's being worked on).  I can't say whether the packet loss was having a real impact on latency as I was still getting good numbers, but the gateway status going red all the time was uncomfortable.  If it got high enough, I worried that the gateway would go down.



Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: meyergru on April 26, 2025, 04:23:45 PM
I thought this was common knowledge:

Bufferbloat plays a role when you have a download or upload running (which might also be someone on your network streaming a video): latency goes up in that case, which can result in lagging online games. It can also cause noticeable interference in audio streams.

In extreme cases, you will notice slow page buildup with complex web pages that consist of dozens or hundreds of resources, because when your buffers are full and your network stack does not know it, the content only gets transferred on the next retry after packet loss.

This becomes especially noticeable with sites that are far away in terms of turnaround time. To lessen the effects of BDP (https://en.wikipedia.org/wiki/Bandwidth-delay_product), you normally would want a buffer size as large as you can get, but this will only go so far as your ISP lets you.
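To put a rough number on that BDP (the link figures below are purely illustrative): the amount of data that must be in flight to keep a link busy is bandwidth times round-trip time.

```python
# Bandwidth-delay product: bytes in flight needed to keep a link busy.
# Mbit/s -> bytes/s is *1e6/8, ms -> s is /1e3; combined factor is 125.
def bdp_bytes(bandwidth_mbit, rtt_ms):
    return bandwidth_mbit * rtt_ms * 125.0

print(bdp_bytes(40, 50))   # 40 Mbit/s at 50 ms RTT: 250000.0 bytes (~250 KB)
print(bdp_bytes(40, 200))  # same link to a distant site at 200 ms: 1000000.0 bytes
```

The further away the site, the more buffered data is needed to fill the link, which is why distant, resource-heavy pages suffer first.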

Read more about it here (https://www.bufferbloat.net/projects/).

@OPNenthu : Nice work! This should probably be added to the traffic shaping guide (https://docs.opnsense.org/manual/shaping.html).
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: dinguz on April 26, 2025, 07:50:32 PM
Apologies for not wording my question more clearly. I'm fully on board with the bufferbloat issue in general; I was just wondering more specifically about the effects of ICMPv6 apparently being squashed by other traffic.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: meyergru on April 26, 2025, 10:13:43 PM
OPNenthu gave the links to the discussions of issues around this at the start of his post. Basically, using the traffic shaper breaks IPv6 connectivity under high load.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: Seimus on April 27, 2025, 01:28:45 PM
Nice write up,

When I wrote the bufferbloat guide I didn't have the possibility to test it on IPv6.
Latency/packet loss can increase for IPv4 pings as well; the reason behind it is basically starvation of BW and queues. But it's not as prominent as it is for IPv6.

What you practically did is give dedicated BW to a specific traffic type, i.e. a BW reservation. In a way we can look at this as creating a priority Pipe/Queue, or better said, a dedicated Pipe/Queue.

When speaking about excluding ICMP from the Queues, there is maybe a different possibility: instead of matching IP all, match TCP & UDP all. By design ICMP is associated with neither TCP nor UDP, so it would not be matched by the default any rules; but without specifying a pipe it could start to eat into the whole capacity where there is none. However, your approach is better: give ICMP, a specific traffic type, a specific dedicated configured chunk of the BW.


There are other methods to mitigate this as well, like having more specific queues for traffic types, because in a congestion scenario either a flow queue is TAIL dropping in the FQ_CoDel scheduler, or an IP match-any queue in Queues is TAIL dropping once the flow queue is full. BUT if congestion is ongoing, ICMP would be hit sooner rather than later anyway. That's why I like your approach.



If this is a valid "solution" for IPv6 problems, we can adjust the official bufferbloat guide to mention the need to create a specific Pipe for ICMP, or create a separate page for IPv6: "IPv6 Fighting Bufferbloat with FQ_CoDel".

Regards,
S.


P.S. there is always a queue (10002.141074) even if you don't specify one ;). When you don't set a manual queue, dynamic ones are used: 2 by default, as specified in the Queue field in the Pipe config.




Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: OPNenthu on April 27, 2025, 06:42:36 PM
Thanks all- I appreciate the review/feedback. @Seimus, the explanation about BW starvation as the cause is making a lot of sense now and I can appreciate why @dinguz may not be seeing the issue depending on the available upload bandwidth.

I happen to have remote access to a second physical OPNsense at my parents' house (as well as a mini PC there) that I can use as a control for testing.  The remote instance only has the Bufferbloat fix as per the official guide and does not have the ICMPv6 work-around. The other difference is that my dad's service plan is 300/300 symmetrical compared to mine at 800/40 asymmetrical (and different ISPs).

Here's what I observe.  When I run an online speedtest on the remote network, I see that the gateway is not showing packet loss.  The delay increases slightly, but the loss remains at 0.  This tells me that it will be much harder to observe the issue on that network since it has a sufficiently wide upload pipe.  It's hard to saturate a 300Mbps link in day-to-day browsing.  I'm attaching a screenshot of the remote gateway 'Quality' graph.

On my network however, it's quite easy to start seeing the issue.  All I need to do is be connected to a VPN provider and start a couple video streams.  This creates a sustained load on the WAN upload and because of my smaller overall pipe I see the packet loss creep up.  Online speedtest is the best way to show it though as that puts an immediate heavy load.
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: Seimus on April 28, 2025, 01:06:57 AM
There are two components in networks/paths that directly impact performance, i.e. user experience:
1. BW capacity
2. Queue size

These two have a common relationship where;

If BW capacity is saturated it will cause back pressure on the queues, causing them to go full
> if a Queue is full depending on the queue management it will perform an action > Dropping, be it TAIL or Early.

However, there are also traffic types that can saturate a queue while BW capacity is not saturated
> if a Queue is full depending on the queue management it will perform an action > Dropping, be it TAIL or Early.

The latter is much harder to troubleshoot.
In day-to-day use, from the perspective of us users and homelabbers, we mostly experience the first scenario. That also matches what you describe above.

TIP: in FQ_CoDel you can set the size of the flow queues, but if set too high, too many packets fill the queue and cause unnecessary latency. If BW saturation prevails we may still TAIL drop from a queue.


------------

Also, I think you should not call this an "ICMPv6 work-around".

Because this is by all means how a control plane should be taken care of.

What I mean by that is: from the perspective of a packet, if you are not using a shaper, all traffic is looked at as falling into the default class any/any: one queue, one pipe. When you use a QoS/Shaper, most of the time basic user needs require only one queue and one pipe. And here comes the problem with the control plane in a congestion situation.

If we handle it all as one BIG Queue and one BIG Pipe, at a certain point what should not fail (the control plane) will fail, and with it the network will fail.

For example, when configuring BGP and configuring a QoS/Shaper, we keep in mind to separate the BGP control plane from other kinds of traffic and handle it as a different color (Queue/Pipe). We reserve a specific needed BW chunk for it to guarantee operation and non-disruption of the network during congestion events. By doing so we prevent BGP from going down due to BW and queue starvation.

This goes for any control plane.
When we plan QoS/Shapers we need to take the control plane into account as well. Such as ICMPv6: it is necessary for the proper functioning of IPv6, which makes it a control plane ;)


Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: OPNenthu on April 28, 2025, 02:42:48 AM
Noted- I'll change the description.  How about "IPv6 optimization for FQ_CoDel (anti-Bufferbloat) shaping" ?

Along the lines of a control plane, I am curious:

- Does it make sense to do this for the Download side as well?

- Is there a good way to measure the needed width of the control pipe rather than guessing at 1Mbit/s?  Does OPNsense have built in tools to measure ICMP flows?

(**EDIT: I found some netflow data under Reporting -> Insight, but it's only reporting packet and byte counts.  Not giving me an average rate.  However, the total ICMP v4/v6 count is extremely small relative to my overall traffic (<1% it seems), so probably even 0.5Mbit/s is OK.  I'll stick with 1MBit/s for now.)

- Are there other types of control traffic that make sense to go through this pipe as well?  I alluded earlier to DHCP, NTP, and possibly DNS (although I'm not noticing an issue with these).
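As a rough way to turn Insight's byte counts into a rate for sizing the pipe (the byte count and interval below are made-up examples, not my real netflow data):

```python
# Convert a byte count accumulated over a known interval into an
# average rate in Mbit/s.  The count and interval are made-up examples.
def avg_rate_mbit(byte_count, interval_s):
    return byte_count * 8 / interval_s / 1e6

# e.g. 7.5 MB of ICMPv4+v6 counted over one hour:
print(f"{avg_rate_mbit(7_500_000, 3600):.3f} Mbit/s")  # well below a 1 Mbit/s pipe
```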
Title: Re: Bufferbloat fix (FQCoDel-ECN) with ICMPv6 work-around
Post by: Seimus on April 28, 2025, 10:27:11 AM
Quote from: OPNenthu on April 28, 2025, 02:42:48 AMNoted- I'll change the description.  How about "IPv6 optimization for FQ_CoDel (anti-Bufferbloat) shaping" ?

Sure, why not. I would call it something like "IPv6 Control Plane with FQ_CoDel Shaping", or Multi-color Shaping, because that is basically what is achieved here.


----------------------------

Quote from: OPNenthu on April 28, 2025, 02:42:48 AM- Does it make sense to do this for the Download side as well?

Yes, it does: if a communication is bidirectional (both ways), this needs to be specified in both directions for the shaper.

Quote from: OPNenthu on April 28, 2025, 02:42:48 AM- Is there a good way to measure the needed width of the control pipe rather than guessing at 1Mbit/s?  Does OPNsense have built in tools to measure ICMP flows?

(**EDIT: I found some netflow data under Reporting -> Insight, but it's only reporting packet and byte counts.  Not giving me an average rate.  However, the total ICMP v4/v6 count is extremely small relative to my overall traffic (<1% it seems), so probably even 0.5Mbit/s is OK.  I'll stick with 1MBit/s for now.)

Built-in, only via netflow; otherwise you need to check the protocol specification. But considering this is control plane traffic, it should not need much BW.

Quote from: OPNenthu on April 28, 2025, 02:42:48 AM- Are there other types of control traffic that make sense to go through this pipe as well?  I alluded earlier to DHCP, NTP, and possibly DNS (although I'm not noticing an issue with these).

This is a nice question. But when we speak about a control plane to "guarantee operation and non-disruption for the network during congestion events", we are talking about a control plane that has a direct impact on the network's stability, e.g. L3 protocols.

So if you run for example a dynamic routing protocol towards an external device, you would need it.

DHCP, DNS and NTP are L7, so purely from this view they would not match this category. There are situations where you need a separate class/Queue+Pipe with dedicated BW for these, but you should not mix them with the L3 control plane class/Queue+Pipe. I think it's not necessary to do this; FQ_C should handle them fine. However, you can create at least separate queues in the main FQ_C Pipe for at least DNS; this is how I have it set up.

Look at this in the following way. If we have something critical or important, it may be worth considering a separate class/Queue+Pipe for it, to guarantee a minimum BW for operational purposes:
A. from the network view
  > most critical is always something that has a direct impact on network stability > control plane + service plane
B. from the client view
  > important, for example DHCP, DNS > management plane (SSH)
C. from the user view
  > user-important applications, IPTV, RTP etc. > data plane (user-defined apps)

A. needs to be always taken care of, always in its own dedicated way.

B. + C., considering FQ_C in the equation, can be handled totally fine with it; in certain edge scenarios, however, it is necessary to separate them, because FQ_C doesn't do any BW prioritization ~ it shares the BW equally.



Regards,
S.

P.S. Sorry for the lengthy replies, but we are touching topics here that I think are a bit beyond simple config-and-done, and rather need to be understood
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 28, 2025, 06:08:42 PM
Thanks, this is enlightening and good to have these explanations for posterity IMO.

I went ahead and replicated the Control pipes and ICMP rules for Download as well.  To satisfy my curiosity I also added two manual queues and rules for DoT to/from Quad9 via the FQ_CoDel pipes.  So far everything seems to be working smoothly.  Will keep an eye on it for some time.

Screenshots of the updated solution attached, although these are above and beyond the main topic here.  Just to reiterate for those only needing to solve the IPv6 WAN packet loss with FQ_CoDel, you only need to add the Control pipes & rules in both directions.  Ignore all the ACK/DoT/Quad9 stuff (I'm too lazy to delete them at this point).

(P.S. it was tricky to match DoT by 5-tuple because it is neither a true TCP nor UDP protocol according to Wikipedia, and port 853 is not specific enough.  So instead I matched ip/853 to and from the Quad9 public IPs, as I have configured in Unbound.)
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 29, 2025, 12:57:33 PM
Quote(P.S. it was tricky to match DoT by 5-tuple because it is neither a true TCP or UDP protocol according to Wikipedia, and port 853 is not specific enough.  So instead I matched ip/853 to and from the Quad9 public IPs, as I have configured in Unbound).

You could just match port 53 (DNS) + 853 (DoT); these are reserved ports, so no other application should be using them. However, if you use DoH, which runs over port 443, then you need to be more precise and specify Destinations as well.

-----------------------

I was trying to look up technical documentation for control plane QoS used in enterprise solutions. Looks like the default is always 1% of the used BW, in your case 1% of 40 Mbit. But this takes into account that there are control planes for multiple protocols.

As you run only IPv6, the 1 Mbit is enough, but in case you need to shape control planes for other protocols as well (for example BGP), it's worth considering increasing the BW of the control plane Pipe. And use weighted queues: as the default scheduler is WFQ, you can keep one Pipe for the control plane and create classes/queues per specific protocol, allocating a proper BW reservation by the merit of queue weight.
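The weight merit can be sketched like this (the pipe size, protocols, and weights below are hypothetical examples): under WFQ, each queue in a pipe gets a bandwidth share proportional to its weight.

```python
# WFQ-style reservation sketch: each queue's share of the pipe is its
# weight divided by the sum of all weights.  Values are hypothetical.
def reservations(pipe_mbit, weights):
    total = sum(weights.values())
    return {name: pipe_mbit * w / total for name, w in weights.items()}

# One control plane pipe of 2 Mbit/s shared by ICMPv6 and BGP queues:
print(reservations(2.0, {"ipv6-icmp": 60, "bgp": 40}))
# {'ipv6-icmp': 1.2, 'bgp': 0.8}
```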

-----------------------

As pointed by @meyergru, it would be really beneficial to introduce the need and understanding of Shaping for control plane traffic.
When I have time, I will create a PR touching this topic in general, with an example for IPv6.

Of course @OPNenthu if you want you can do it and share the PR and I can just contribute to it ;)

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 29, 2025, 05:53:20 PM
Thanks for confirming the needed BW.  It aligns with my observations from netflow as well.

Quote from: Seimus on April 29, 2025, 12:57:33 PMAnd use weighted queues, as the default scheduler is WFQ, so basically this way you can keep one Pipe for control plane and creates classes/queues per specific protocol to allocate proper BW reservation by the merit queue weight.

Glad you touched on this.  I was debating whether FIFO might perform better for this purpose, assuming the pipe was only being used for ICMP-type traffic.  I briefly tried it but wasn't noticing any difference, and the default (WFQ) gives us more options like you said.

QuoteOf course @OPNenthu if you want you can do it and share the PR and I can just contribute to it ;)

I appreciate it but I'm out of my depth on the topic. Happy to proofread or test.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 29, 2025, 07:27:32 PM
I just took a look at your bufferbloat submission for reference: https://github.com/opnsense/docs/pull/571

That doesn't seem too bad to follow.  Maybe I can install a reStructuredText editor in VSCode and get some initial content down as a starting point.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 29, 2025, 08:11:49 PM
Quote from: OPNenthu on April 29, 2025, 05:53:20 PMGlad you touched on this.  I was debating whether FIFO might perform better for this purpose, assuming the pipe was only being used for ICMP-type traffic.  I briefly tried it but wasn't noticing any difference, and the default (WFQ) gives us more options like you said.

If there are better options, don't use FIFO; it should be fine only when you have one queue per pipe.
It's better to use WFQ, or QFQ, which is a faster variant of WFQ with much faster processing time.
Btw, if you can, try QFQ on the control plane Pipe for IPv6.

Quote from: OPNenthu on April 29, 2025, 07:27:32 PMI just took a look at your bufferbloat submission for reference: https://github.com/opnsense/docs/pull/571 (https://github.com/opnsense/docs/pull/571)

That doesn't seem to too bad to try and follow.  Maybe I can install a reStructuredText editor in VSCode and get some initial content down as a starting point.

It's nothing hard; reStructuredText is simple to understand and use. More or less the challenge is to write the docs properly. I already have a draft in my head of what the docs should contain and how to structure them. Feel free to start; this is the benefit of open source (OPN docs as well), as we can co-create and collaborate ;)

But ultimately it depends on the OPN devs whether they accept such an addition to their docs :)

Regards,
S.


Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 30, 2025, 05:24:50 AM
I'm not sure how to test ICMPv6 throughput.

As a basic latency test, I tried to run 10 pings to Cloudflare DNS under load.  To generate the load I ran speedtest.net in a browser and initiated the pings during the upload portion of the speed test.  The results all seem within margin of error to me.

Of course, my gateway showed significant packet loss (up to 30%) during the baseline test with only FQ_CoDel present.  It did not do this when the Control pipe was active (either WFQ or QFQ).

Baseline - No control pipe
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=18ms
Reply from 2606:4700:4700::1111: time=17ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 18ms, Average = 14ms

Control (WFQ)
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=11ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 15ms, Average = 13ms

Control (QFQ)
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=16ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=12ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 16ms, Average = 13ms

I then repeated the speed tests while watching 'top' on the OPNsense, and I recorded the highest system CPU usages seen:

Baseline: Down: 22%, Up: 3.4%
WFQ: Down: 23%, Up: 3%
QFQ: Down: 23.4%, Up: 4.3%


I don't think my tests are very scientific :) and all I can say at the moment is that there appears to be no downside to using a Control pipe with either scheduler type.  I can't measure or perceive any felt difference between them, with the exception of the gateway status.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 30, 2025, 09:34:18 AM
As this is basically a test of IPv6 control plane stability, the way you tested it is okay.

----------------
1. Create a WFQ Pipe and Queue for IPV6 ICMP
2. Saturate your internet connection (speed test example)
3. Observe ICMPv6 latency, jitter
4. Observe IPV6 for stability
5. Repeat above for QFQ
6. Compare results without Control plane Pipe and Queue and with WFQ and QFQ
----------------
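Steps 3 and 6 can be quantified from the raw ping output; a small helper using the RTT samples already posted above (jitter here is simply the mean absolute difference between successive samples, a simplification of the usual definition):

```python
# Summarize ping RTTs: min/avg/max plus a simple jitter figure (mean
# absolute difference between successive samples).
def summarize(rtts_ms):
    diffs = [abs(a - b) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return {"min": min(rtts_ms),
            "avg": sum(rtts_ms) / len(rtts_ms),
            "max": max(rtts_ms),
            "jitter": sum(diffs) / len(diffs)}

baseline = [18, 17, 14, 14, 13, 12, 14, 11, 14, 13]  # no control pipe
wfq      = [15, 14, 12, 12, 14, 12, 13, 14, 14, 11]  # WFQ control pipe
print(summarize(baseline))
print(summarize(wfq))
```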

If we wanted to test more scientifically, there is a tool for this, for example Crusader, that can give precise measurements specifically for bufferbloat. But we do not need this, as we have a proof of concept for a working solution.

And yes, I expected WFQ and QFQ to have similar results; a difference would be seen if there were multiple queues under the control plane Pipe. The benefit of QFQ is that it should provide more consistent rates and tighter guarantees across multiple queues defined by the weight merit.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 30, 2025, 10:05:32 AM
I created a feature request with explanation on the docs repo. This will be used for the PR

https://github.com/opnsense/docs/issues/705

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 30, 2025, 03:59:42 PM
PR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on April 30, 2025, 09:38:02 PM
Hmmm. I just followed the new instructions. FWIW, it worked fine on one installation with 400/200 Mbit/s. I then copied the <Trafficshaper> section of config.xml to another installation on the same ISP with a higher bandwidth (1000/500), and the machine went on/off like crazy. It seemed like the old problem of breaking IPv6 connectivity kicked in again there.

Since the site is remote to me and I broke connectivity doing this once, I cannot thoroughly test it there.

However, when I used the instructions on my own rig (1100/800, other ISP), I found that the Waveform Bufferbloat test stalled after the first step, taking forever "warming up". I am sure that the Shaper is the culprit, because when I disabled all rules, the test went through.

The test also went fine when I reverted the config to the initial instructions by @OPNenthu, with just control rules for upstream icmp and ipv6-icmp, without intermediate queues (using only the pipes for this). I modified them to also have a downstream control rule and this works as well.

I wonder if the traffic shaper has problems with higher speeds, which is something I vaguely remember reading.

My current working setup on my own rig looks like this:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>910</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue>2</queue>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>1160</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue>2</queue>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler>fq_pie</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>20</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>30</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3db79d81-b459-4558-b845-b2ba19efec31">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>fb829d32-e950-4026-a2ee-3663104a355b</target>
          <description>Upload-Control Rule ICMP</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>fb829d32-e950-4026-a2ee-3663104a355b</target>
          <description>Upload-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>883ed783-df03-4109-9364-a6c387f5954f</target>
          <description>Download-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3e5fe8fc-1b6a-4323-a95a-c24e664cd5b9">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>883ed783-df03-4109-9364-a6c387f5954f</target>
          <description>Download-Control Rule ICMP</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

I know that there are a few differences between @Seimus's instructions and what now works:

1. The control plane speeds are very low (1 Mbit/s).
2. I use masks on the pipes, as well as FQ_Codel Parameters and PIE.
3. I have rules for icmp in addition to icmp-ipv6.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: MagikMark on April 30, 2025, 10:30:50 PM
@meyergru

Do you happen to have a screenshot of your TS settings instead of the XML format?
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 01:08:00 AM
Quote from: meyergru on April 30, 2025, 09:38:02 PMI know that there are a few differences between @Seimus's instructions and what now works:

1. The control plane speeds are very low (1 Mbit/s).
2. I use masks on the pipes, as well as FQ_Codel Parameters and PIE.
3. I have rules for icmp in addition to icmp-ipv6.


Thanks for testing. As I myself don't have an IPv6-capable connection, any test of the config from the git doc I created, and the results, help to fine-tune this.

This is interesting.
Were you able to observe any packet loss (in the health graph), as reported by other users when this IPv6 problem occurs?

ICMPv4 should not be needed for IPv6 functionality; at least, I didn't find much related to it.

I suspect that if there is still an issue present for you, e.g. loss and latency for IPv6, it could potentially be due to the bandwidth capacity of the control plane Pipes. The rules basically match any ICMPv6, not only traffic originating from OPNsense itself.

Looking at your working config, as you mentioned, you use masks on the Pipes:


<pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>

The behavior of mask on Pipes is different when masks are used on Queues.

QuoteThus, when dynamic pipes are used, each flow will get the same bandwidth as defined by the pipe, whereas when dynamic queues are used, each flow will share the parent's pipe bandwidth evenly with other flows generated by the same queue (note that other queues with different weights might be connected to the same pipe).

So, simply put:
When you use a mask on a pipe, each flow gets the bandwidth set on the pipe.
When you use a mask on a queue, the total bandwidth of the pipe is shared.

The queue config in the GitHub doc limits the total bandwidth usage to the value of the Pipe; this is the reason to use queues, besides the fact that we can reuse the control plane Pipe for other protocols' control planes. But it does not share the bandwidth equally among flows in those queues; it's first come, first served, and the rest starve. There is a chance a single ICMPv6 flow starved the rest of the flows.

This would explain why the Waveform test stalled, as well as the break of IPv6, if that did happen.
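The pipe-mask vs. queue-mask behavior described above can be sketched with a toy Python model (illustrative only, not ipfw itself; the 1 Mbit/s figure, the flow count, and equal queue weights are assumptions):

```python
# Toy model of ipfw dynamic pipe/queue bandwidth allocation.

def per_flow_bw_pipe_mask(pipe_bw_mbit: float, n_flows: int) -> list[float]:
    """Mask on the pipe: each distinct src-ip/dst-ip gets its own
    dynamic pipe, so every flow receives the full configured bandwidth."""
    return [pipe_bw_mbit] * n_flows

def per_flow_bw_queue_mask(pipe_bw_mbit: float, n_flows: int) -> list[float]:
    """Mask on the queue: dynamic queues share the parent pipe's
    bandwidth evenly (assuming equal weights)."""
    return [pipe_bw_mbit / n_flows] * n_flows

print(per_flow_bw_pipe_mask(1.0, 4))   # [1.0, 1.0, 1.0, 1.0]
print(per_flow_bw_queue_mask(1.0, 4))  # [0.25, 0.25, 0.25, 0.25]
```

Note that this models only the fair-sharing case; as discussed above, without a mask the flows in a queue are not guaranteed an even split.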

Can you maybe try the config from git again, in two scenarios?
1. Leave everything as in the doc, but increase the Control Pipe bandwidth.
2. Set masks on the Control plane queues in their proper respective directions (DL - destination; UP - source).

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 01, 2025, 11:13:29 AM
1. With the setup as per instructions, I had 10/10 MBit/s on the control plane, not 1/1 as with my working setup, just as a note.
2. I tried both suggestions from the last posting to no avail. I even tried setting queue masks for both the control plane and the IP queues.

I used 900/600 and 100/100 MBit/s for those tests. I also tried setting queue masks and increasing the bandwidth on the pipes.



For reference (and check), here is the non-working configuration snippet as per your last suggestions combined:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>600</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>900</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>100</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>100</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="55c03a93-8de7-4c45-a782-aaecdcc9cc72">
          <number>10002</number>
          <enabled>1</enabled>
          <pipe>883ed783-df03-4109-9364-a6c387f5954f</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Download</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="9aaccde6-b391-4330-b2d0-6e525d2a12ee">
          <number>10003</number>
          <enabled>1</enabled>
          <pipe>fb829d32-e950-4026-a2ee-3663104a355b</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Upload</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>9aaccde6-b391-4330-b2d0-6e525d2a12ee</target>
          <description>Control-plane-IPv6-Rule-Upload</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>55c03a93-8de7-4c45-a782-aaecdcc9cc72</target>
          <description>Control-plane-IPv6-Rule-Download</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

Afterwards, I even tried pointing the control plane rules directly at the pipes, as in my working setup, alas to no avail.

Going back to my working config immediately restored the Waveform test to a working state. The difference seems to be that I enable PIE on the IP pipes and have some FQ-Codel params set.

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 11:38:26 AM
Quote from: Seimus on April 30, 2025, 03:59:42 PMPR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.


Thanks @Seimus.  Looks good to me overall.  I added one comment in the PR.

Also interested in the suggestion there regarding pf rules vs. ipfw.  I'm willing to try it but am not sure about the implementation in pf using the experimental shaping option. Would we just need a single pass rule (direction in) on WAN for ICMPv6?  I believe in pf the direction is from the perspective of the firewall, so both upstream and downstream requests would be seen as 'in' from the WAN perspective.

I'm thinking something like this?

Action: Pass
Interface: WAN
Direction: in
TCP/IP Version: IPv6
Protocol: IPV6-ICMP
Source: Any
Destination: Any
Traffic Shaping (rule direction): Download-Control-Pipe
Traffic Shaping (reverse direction): Upload-Control-Pipe

(directionality for pipe assignment is unclear in this case)

My concern with this is that it overrides the default/automatic rules in OPNsense regarding ICMPv6, which is not ideal.  There are security implications, as well as the possibility of taking down the IPv6 network.
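For illustration, such a rule might look roughly like this in raw pf.conf terms (a hedged sketch only: it assumes FreeBSD pf's dummynet `dnpipe` keyword, and `$wan_if` plus the pipe numbers are placeholders, not taken from a real config):

```
# Hypothetical pf.conf sketch of the single pass rule discussed above.
# First dnpipe number = matching (rule) direction,
# second = reverse direction.
pass in quick on $wan_if inet6 proto ipv6-icmp from any to any \
    dnpipe (10004, 10003)
```

Whether the GUI's experimental shaping option generates exactly this form is an open question for the devs.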
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 11:43:08 AM
Quote from: meyergru on April 30, 2025, 09:38:02 PMHowever, when I used the instructions on my own rig (1100/800, other ISP), I found that the Waveform Bufferbloat test stalled after the first step, taking forever "warming up". I am sure that the Shaper is the culprit, because when I disabled all rules, the test went through.

I experienced this once as well, when I was initially making changes.  I'm not sure what cleared it up precisely but I do recall rebooting both OPNsense and my ISP router box.  After some settling in, the Bufferbloat and speed tests were no longer stalling.

However, I did not try with manual queues.  In all my testing I always connected the ICMPv6 rules directly to the Control pipes w/ internal queues.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 11:44:38 AM
@meyergru

Many thanks for further testing!

But let me ask if I understood correctly
Quote from: meyergru on May 01, 2025, 11:13:29 AMThe difference seems to be that I enable PIE on the IP pipes and have some FQ-Codel params set.

When you created the Control Plane Shaper per the GitHub instructions,
did you also change the configuration of your already working Pipes?
Especially the tuned FQ_C and FQ_P parameters?

Because that is how I interpret it, and the config seems to confirm it.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 12:17:11 PM
Quote from: OPNenthu on May 01, 2025, 11:38:26 AM
Quote from: Seimus on April 30, 2025, 03:59:42 PMPR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.


Thanks @Seimus.  Looks good to me overall.  I added one comment in the PR.

Also interested in the suggestion there regarding pf rules vs. ipfw.  I'm willing to try it but am not sure about the implementation in pf using the experimental shaping option. Would we just need a single pass rule (direction in) on WAN for ICMPv6?  I believe in pf the direction is from the perspective of the firewall, so both upstream and downstream requests would be seen as 'in' from the WAN perspective.

I'm thinking something like this?

Action: Pass
Interface: WAN
Direction: in
TCP/IP Version: IPv6
Protocol: IPV6-ICMP
Source: Any
Destination: Any
Traffic Shaping (rule direction): Download-Control-Pipe
Traffic Shaping (reverse direction): Upload-Control-Pipe

(directionality for pipe assignment is unclear in this case)

My concern with this is that it overrides the default/automatic rules in OPNsense regarding ICMPv6, which is not ideal.  There are security implications, as well as the possibility of taking down the IPv6 network.

I think it's a good idea, not only to mention it, but to add it as an optional approach within the docs. The traffic shaping option in pf can bind to either a Pipe or a Queue as well.

You raise a good question, and that's something that's been drilling in my head too. As stated in the docs:

https://docs.opnsense.org/manual/firewall.html#traffic-shaping-qos

QuoteTraffic shaping/rule direction > Force packets being matched by this rule into the configured queue or pipe

Traffic shaping/reverse direction > Force packets being matched in the opposite direction into the configured queue or pipe

Regarding overrides: the auto-rules are set within the floating section, which is evaluated before Interface or Group rules, so if those default rules are set to quick they will always take precedence. So depending on where you set it, it should not override them; but the question is, will it even be applicable there?

In regards to security implications: ICMPv6 needs to be allowed for IPv6 functionality. By design, the control plane of any protocol needs to be allowed in both directions. But to make such a rule tighter, the source or destination (depending on the rule direction) should be the FW/GW itself, because we are interested in the control plane of the network device itself.

I guess we ask the devs.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 12:33:11 PM
Quote from: OPNenthu on May 01, 2025, 11:43:08 AMI experienced this once as well, when I was initially making changes.  I'm not sure what cleared it up precisely but I do recall rebooting both OPNsense and my ISP router box.  After some settling in, the Bufferbloat and speed tests were no longer stalling.

I had similar problems with FQ_C when I did tuning in the past; the results didn't make sense, and rebooting OPN + the cable modem usually fixed it... weird...

Quote from: OPNenthu on May 01, 2025, 11:43:08 AMHowever, I did not try with manual queues.  In all my testing I always connected the ICMPv6 rules directly to the Control pipes w/ internal queues.

Can you try it?

It would be good to have consistent results

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 01, 2025, 12:36:49 PM
Quote from: Seimus on May 01, 2025, 11:44:38 AM@meyergru

Many thanks for further testing!

But let me ask if I understood correctly
Quote from: meyergru on May 01, 2025, 11:13:29 AMThe difference seems to be that I enable PIE on the IP pipes and have some FQ-Codel params set.

When you created the Control Plane Shaper per the GitHub instructions,
did you also change the configuration of your already working Pipes?
Especially the tuned FQ_C and FQ_P parameters?

Because that is how I interpret it, and the config seems to confirm it.

Regards,
S.

Yes, I cleared the respective parts. I am at a loss as to what difference is actually causing the problem. Maybe it is easier to try to break my working setup by changing it step by step towards your suggested setup, to find the root cause, if it is not that occasional glitch both you and @OPNenthu saw.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 12:38:54 PM
QuoteIn regards to security implications: ICMPv6 needs to be allowed for IPv6 functionality. By design, the control plane of any protocol needs to be allowed in both directions.
Understood, but ICMPv6 has many types.  In the default OPNsense ruleset, is my router allowed to send/respond to RAs and NDs on the open internet?

EDIT: I did forget for a moment that IPv6 is meant to be globally routable, although I'm still not sure.

I missed this earlier, too.  Makes sense.
"[...] But to make such rule more tighter, the source or destination depending on the rule direction should be the FW/GW itself"


QuoteCan you try it?
Sure, will do some testing.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 01:25:00 PM
Quote from: meyergru on May 01, 2025, 12:36:49 PMYes. I cleared the respective parts. I am at a loss what difference is actually causing the problem. Maybe it is easier to try to break my working setup by changing towards your suggested setup step-by-step to find the root cause if it is not that casual glitch both you and @OPNenthu saw.

I must say, why this is happening with your setup is a mystery to me.
It could be the glitch; when playing with the Shaper, sometimes it's just janky. Even though you can verify in the CLI that the config is correct (in ipfw), which it is, the results may still not be as expected.

On paper, the Control Plane is an addition to an already existing Pipe, so there should be no changes needed to an already established Pipe flow, other than changing the bandwidth (subtracting bandwidth from the existing Pipe and giving it to the new one).


Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 01:28:14 PM
Quote from: OPNenthu on May 01, 2025, 12:38:54 PMUnderstood, but ICMPv6 has many types.  In the default OPNsense ruleset, is my router allowed to send/respond to RAs and NDs on the open internet?

EDIT: I did forget for a moment that IPv6 is meant to be globally routable, although I'm still not sure.

I missed this earlier, too.  Makes sense.
"[...] But to make such rule more tighter, the source or destination depending on the rule direction should be the FW/GW itself"

It would have to be as specified by RFC 4890 (recommendations for filtering ICMPv6). Similar to what's in the default rules, I believe.

Quote from: OPNenthu on May 01, 2025, 12:38:54 PMSure, will do some testing.
Many thanks in advance!

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 01, 2025, 02:12:58 PM
So I tested further starting from my working setup.

1. Removed FQ-Codel parameters from the pipes. Test was O.K., but results are worse (B) (https://www.waveform.com/tools/bufferbloat?test-id=1973290a-eed6-4a1f-b660-b2e311de144b) than before (A+) (https://www.waveform.com/tools/bufferbloat?test-id=e5ef07de-3654-4c95-b35a-52666534aa8f). Switching back and forth broke my connection completely once.
2. Removing the masks from the pipes changed nothing, tests went O.K., still A+ grading.
3. Enlarging the bandwidth from 1 to 10 MBit/s on the control plane pipes changed nothing.
4. Removing the masks from the up- and downstream queues changed nothing.
5. Disabling the icmp (v4) rules changed nothing.
6. Creating queues for the control plane and pointing the icmp-ipv6 control plane rules at them changed nothing.

So I arrived almost at the recommended setup, with the only differences being the FQ-Codel parameters and PIE enabled on the pipes (I also tried with no PIE, which changed nothing).

Then I reduced the up- and downstream bandwidth (the old values were optimized for attainable speed) to 900/600 to verify that the shaper actually had anything to do at all. This worked as well, and I got this result (https://www.waveform.com/tools/bufferbloat?test-id=01365e7b-58f5-44df-b450-90e5e582f4bc). Note that there is no longer a latency increase on either upload or download.

For reference, this is the relevant config section now:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>600</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>900</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>10</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>10</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="6c535ef5-1aa5-4760-a94e-b6f72af55dd8">
          <number>10002</number>
          <enabled>1</enabled>
          <pipe>883ed783-df03-4109-9364-a6c387f5954f</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Download</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="a71074a0-e387-4ff6-8203-1f7e08ef7b32">
          <number>10003</number>
          <enabled>1</enabled>
          <pipe>fb829d32-e950-4026-a2ee-3663104a355b</pipe>
          <weight>100</weight>
          <mask>none</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Upload</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>a71074a0-e387-4ff6-8203-1f7e08ef7b32</target>
          <description>Upload-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>6c535ef5-1aa5-4760-a94e-b6f72af55dd8</target>
          <description>Download-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

So maybe it really is a glitch where playing around with the params does sometimes break things...

As for the bandwidth limits: there seems to be a tradeoff between maximum attainable speed and latency. You can reach maximum speed when you actually add 5% on top of your unshaped maximum, at the expense of some increased latency, which still gives an A+ rating. If you want no latency increase at all, you will have to sacrifice some attainable speed.
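That tradeoff can be sketched as simple arithmetic. This is only an illustration of the headroom idea, not OPNsense code; the helper name and percentages are made up, and you should measure your own unshaped speed first:

```python
# Sketch: choosing a pipe bandwidth relative to the measured unshaped rate.
# Positive headroom shaves bandwidth off so the shaper (not the ISP's
# buffers) is the bottleneck; a negative value models the "+5% for max
# speed, some added latency" setting described above.

def pipe_bw(measured_mbits: float, headroom_pct: float = 5.0) -> float:
    """Return the pipe bandwidth after applying a headroom percentage."""
    return measured_mbits * (100 - headroom_pct) / 100

print(pipe_bw(900))      # 855.0 -> latency-optimized (5% headroom)
print(pipe_bw(900, -5))  # 945.0 -> speed-optimized, some latency increase
```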
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 05:47:01 PM
Testing with intermediary queues seems just as good/stable as without.  Actually, the home internet is busy right now and I'm still posting good pings while under additional load from a speed test:

C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=10ms
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=11ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 10ms, Maximum = 15ms, Average = 12ms

Bufferbloat (https://www.waveform.com/tools/bufferbloat?test-id=42796413-a069-4703-8889-6620a94b1fb8) remains A+.

speedtest.net result:

speedtest.png

This is with QFQ on the control pipes and FQ_CoDel+ECN on the default pipes.  No masks, PIE, or CoDel params.  All queue weights 100.  IPv4 ICMP and the other rules/queues are disabled (only testing ipv6-icmp on the control plane; everything else goes through the default rules).

Actually, I have a question about this: if I want to add IPv4 ICMP back to the control plane, I will need to create 2 more queues.  What weights should they get?  Are they also 100, or do we split the difference (50-50) with the ipv6-icmp queues?
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 06:10:53 PM
@meyergru
Once again, many thanks. So basically this confirms that the config in the docs works as suggested.

A few comments from my side.

Quote from: meyergru on May 01, 2025, 02:12:58 PMSo I arrived almost at the recommended setup, with the only difference of the FQ-Codel parameters and PIE on the pipes enabled (I also tried with no PIE, which changed nothing).

Actually, I would not call it a difference; as mentioned previously, if you already have a Pipe created it should remain as-is, only the BW should be subtracted. The main point of the Control Plane class is to allocate it its own BW, and to take out the potential back-pressure caused by the sojourn time that FQ_C, for example, relies on.

The config provided looks correct to me. In your original configuration you had masks and a queue value on the Pipe; these actually don't do anything if you have a manually created queue attached to it. They only apply in case you attach a rule directly to the Pipe. ECN on a queue applies only to CoDel, not FQ_C.


Quote from: meyergru on May 01, 2025, 02:12:58 PMAs for the bandwidth limits: there seems to be a tradeoff between maximum attainable speed, which can be reached when you actually add 5% to your maximum speed without traffic shaping, at the expense of some increased latency which still gives an A+ rating. If you want no latency increase at all, you will have to sacrifice on attainable speed.

Believe it or not, this is expected :D. The reason behind it is, I think, the FQ that FQ_C and FQ_P use. Fair Queueing has problems providing a consistent or maximum rate, so it takes away around 3-5% of the BW set on the Pipe. DRR, for example, is better in this regard but can create insane latency due to the deficit calculation. QFQ should manage it as well, but the problem is that the FQ_C and FQ_P implementations are only available with FQ, not QFQ.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 06:16:27 PM
Quote from: meyergru on May 01, 2025, 02:12:58 PM3. Enlarging the bandwith from 1 to 10 MBit/s on the control plane pipes changed nothing.

I too am observing no benefit from increasing the download control pipe.

Maybe for servers this is a good rule of thumb?  As a home internet user with an asymmetrical data plan, should I reasonably expect to have proportionally higher control traffic on ingress than on egress?

Quote from: meyergru on May 01, 2025, 02:12:58 PMAs for the bandwith limits: there seems to be a tradeoff between maximum attainable speed, which can be reached when you actually add 5% to your maximum speed without traffic shaping, at the expense of some increased latency which still gives an A+ rating

Interesting.  I have always been leaving some bandwidth on the table to optimize for latency (as per the Bufferbloat guide), but my speed without shaping measures above 900Mbit/s (sometimes bursting up to 1Gbps), even though it's advertised at only 800.

Setting the Download pipe to 910Mbit/s gets me this: https://www.waveform.com/tools/bufferbloat?test-id=477bf3a2-4b43-4f61-91bd-f9c0f44c668a

+5 on D/L latency, but still A+.

Though I wonder if that will start to break down during peak use when all the neighbors are online.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 06:26:49 PM
@OPNenthu many thanks for testing!

I am glad you could provide results similar to @meyergru's.

Quote from: OPNenthu on May 01, 2025, 05:47:01 PMThis is with QFQ for control pipes and FQ_CoDel+ECN on default pipes. 
Perfect! > this is exactly how I wanted it tested.

QFQ overall should provide more consistent rates vs WFQ, so it's always worth trying one or the other. Yet keep in mind, guys, that this only affects the Control Plane and nothing else, as the rest of the traffic is in different Pipes with different schedulers.


In regards to your question:
Quote from: OPNenthu on May 01, 2025, 05:47:01 PMActually I have a question about this: if I want to add back ipv4 icmp to the control plane, I will need to create 2 more queues.  What weights should they get?  Are they also 100, or do we split the difference (50-50) with the ipv6-icmp queues?
Yes. Keep in mind, we want to keep the control planes of different protocols separated, yet still utilize the BW dedicated to the Control Plane as a whole. The weight depends on you, or rather on the rate of the specific control plane and how much BW you want to give each of them.

So if you set the weights 50 & 50, in theory during saturation each of them will get 500Kbit/s if the pipe BW is 1Mbit/s.
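The 50/50 example above can be sketched numerically. This is only an illustration of proportional weight sharing under saturation, not OPNsense code; the helper name and values are hypothetical:

```python
# Sketch: how queue weights divide a saturated pipe's bandwidth.
# dummynet shares a pipe among its active child queues in proportion
# to their weights (1-100); idle queues give their share back.

def queue_shares(pipe_kbits: int, weights: dict[str, int]) -> dict[str, float]:
    """Return each queue's share of the pipe (kbit/s) when all are busy."""
    total = sum(weights.values())
    return {name: pipe_kbits * w / total for name, w in weights.items()}

# 1 Mbit/s control pipe, ICMPv4 and ICMPv6 queues weighted 50/50:
print(queue_shares(1000, {"icmp4": 50, "icmp6": 50}))
# {'icmp4': 500.0, 'icmp6': 500.0}
```

Unequal weights work the same way: 75/25 on the same pipe would give 750 and 250 Kbit/s under full load.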
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 01, 2025, 06:42:27 PM
Quote from: OPNenthu on May 01, 2025, 06:16:27 PMI too am observing no benefit from increasing the download control pipe.

Maybe for servers this is a good rule of thumb?  As a home internet user with an asymmetrical data plan, should I reasonably expect to have proportionally higher control traffic on ingress than on egress?

Actually no; because the control plane usually runs at a consistent rate, it should be okay with very low BW values, set to the specification minimum.
We need to keep in mind that the rules for the Control Plane Shaper do not only involve the control plane; they also catch pings. Thus the 1Mbit is optimal to a certain degree.

Quote from: OPNenthu on May 01, 2025, 06:16:27 PMInteresting.  I always have been leaving some bandwidth on the table to optimize for latency (as per the Bufferbloat guide) but my speed without shaping measures above 900Mbit/s (sometimes bursts up to 1Gbps), even though it's advertised at only 800.

Setting the Download pipe to 910Mbit/s gets me this: https://www.waveform.com/tools/bufferbloat?test-id=477bf3a2-4b43-4f61-91bd-f9c0f44c668a

+5 on D/L latency, but still A+.

Though I wonder if that will start to break down during peak use when all the neighbors are online.

What you see is actually correct, and it purely depends on how your ISP divides the BW among its customers. It's possible they have over-provisioning. It's also possible that during peak hours, when more users from that ISP are online, an aggregation point will stall and you may not reach those speeds, which could cause additional latency.

Also, if you over-provision the BW on one Pipe, the moment that BW is not available it will start to eat into the other Pipes, affecting the Control Plane.

It's always better to create a BW buffer, because if you don't, you will be at the mercy of your ISP to handle the bufferbloat.

Regards,
S.

One fun fact: when using FQ_C, even if you set a higher BW than you really have, FQ_C is still somewhat capable of managing the latency thanks to its algorithm. It will not be as good as when you have a BW buffer created, but it's 10x better than not having any FQ_C at all.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 07:41:48 PM
Thanks @Seimus and @meyergru for all the inputs so far.  I'm learning a lot.

I went back and re-enabled all my previous queues & rules for things like TCP ACKs and DNS.  In the process I renamed my objects to normalize them.  I am now again seeing the glitch we talked about, where the Bufferbloat test stalls.  Also, speedtest.net is showing reduced bandwidth.

So there really is truth to this.  Some issue crops up when you adjust the Shaper objects.

I won't reboot anything this time.  Will wait to see if it clears on its own.

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 01, 2025, 08:19:25 PM
Adding to the previous post-

I observe that UDP traffic is impeded.  The screenshot here shows only outgoing, but it's actually happening in both directions.  TCP seems fine.

Not sure if these are queue drops or bad firewall state?

I've been connected to a VPN provider over UDP the whole time (on a different client/VM) and when I run speedtest these red lines appear in the FW log now.  The VPN endpoint is the destination.

Meanwhile, D/L speeds have degraded further.  VPN remains connected.  Packet drops only observed when running speedtest (I guess pointing to a queue issue).

Will post back when something changes; still holding off on reboot.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 02, 2025, 01:12:42 AM
Quote from: OPNenthu on May 01, 2025, 08:19:25 PMNot sure if these are queue drops or bad firewall state?

Queue drops would not generate those logs in the Live view.
You can see if a Shaper is dropping via these CLI commands:

ipfw queue show
ipfw sched show
ipfw pipe show


However, those live logs, if real, show sessions being blocked.

Also have a look at the interface drop counts on the parent LAN and WAN interfaces.

Two questions:
Is it the same source and destination, or only one specific pair?
When you click the "i" on a blocked entry, what additional info does it give you?

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 02, 2025, 08:16:06 AM
Still happening hours later.  It may be time finally for a reboot.

Quote from: Seimus on May 02, 2025, 01:12:42 AMipfw queue show
ipfw sched show
ipfw pipe show

root@firewall:~ # ipfw sched show
10002:   1.000 Mbit/s    0 ms burst 0
 sched 10002 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10009 10007
10003:   1.000 Mbit/s    0 ms burst 0
 sched 10003 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10008 10006
10000: 849.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 10000 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10004 10002 10000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0       82     5431  0    0   0
10001:  39.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 10001 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10005 10003 10001
  0 ip           0.0.0.0/0             0.0.0.0/0     13301 19218727 23 34500  99

If I'm reading this correctly, there were 99 drops on the upstream here (39Mbit/s CoDel).
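For anyone scripting this check, here is a rough sketch of pulling the drop counter out of that output. The column layout is assumed from the paste above (Drp as the last field of a flow-stats line), so verify it against your own `ipfw sched show` output before relying on it:

```python
# Sketch: summing the Drp column from `ipfw sched show` output.
# Assumption: flow-stat lines start with a bucket number followed by
# the protocol ("ip"), and the final numeric field is the drop count.

import re

FLOW_RE = re.compile(r"^\s*\d+\s+ip\s")  # matches "  0 ip  ..." stat lines

def total_drops(sched_show_output: str) -> int:
    drops = 0
    for line in sched_show_output.splitlines():
        if FLOW_RE.match(line):
            drops += int(line.split()[-1])  # Drp is the last column
    return drops

sample = """\
  0 ip           0.0.0.0/0             0.0.0.0/0       82     5431  0    0   0
  0 ip           0.0.0.0/0             0.0.0.0/0     13301 19218727 23 34500  99
"""
print(total_drops(sample))  # 99
```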

root@firewall:~ # ipfw queue show
q10006  50 sl. 0 flows (1 buckets) sched 10003 weight 50 lmax 1500 pri 0 droptail
q10007  50 sl. 0 flows (1 buckets) sched 10002 weight 50 lmax 1500 pri 0 droptail
q10004  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10005  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
q10002  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10003  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
q10000  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10001  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
q10008  50 sl. 0 flows (1 buckets) sched 10003 weight 50 lmax 1500 pri 0 droptail
q10009  50 sl. 0 flows (1 buckets) sched 10002 weight 50 lmax 1500 pri 0 droptail

root@firewall:~ # ipfw pipe show
10002:   1.000 Mbit/s    0 ms burst 0
q141074  50 sl. 0 flows (1 buckets) sched 75538 weight 0 lmax 0 pri 0 droptail
 sched 75538 type FIFO flags 0x0 0 buckets 0 active
10003:   1.000 Mbit/s    0 ms burst 0
q141075  50 sl. 0 flows (1 buckets) sched 75539 weight 0 lmax 0 pri 0 droptail
 sched 75539 type FIFO flags 0x0 0 buckets 0 active
10000: 849.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 75536 type FIFO flags 0x0 0 buckets 0 active
10001:  39.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 75537 type FIFO flags 0x0 0 buckets 0 active
root@firewall:~ #

QuoteAlso have a look as well on Interface drop count, parent interface LAN and WAN.

LAN is on a LAGG group (2 x 2.5Gbps)

The 'Output Errors: 13' count was there from before; I've noticed it on the LAGG IF for a long time.

WAN.png LAN.png

QuoteQuestion is 
is it the same Source and Destination? Or only one specific?

The speedtest is still abnormal and the Bufferbloat test stalls as before, but the only blocks in the F/W log are from those specific src/dst addresses, which is the VPN connection.  Other traffic seems to be passing as normal.

Quotewhen you click the "i" on the blocked one what additional info does it tells you?

It pops up rule info for the built-in 'force gw' rule:

force_gw.png
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 02, 2025, 09:40:36 AM
The router reboot did the trick, but some settling was needed as well.  Immediately after the reboot, latencies were high on the bufferbloat test and the FW log was still showing blocked traffic.  Several minutes later that all cleared up, and now the tests are back to normal.

I do see tail drops on the upload data pipe/scheduler, only during the upload portion of the speed tests.  I think this is expected, though.  This is probably CoDel/AQM doing its job.

I also rebooted the VM where the VPN was connected and made sure that was again active / working during the tests.

root@firewall:~ # ipfw sched show
10002:   1.000 Mbit/s    0 ms burst 0
 sched 10002 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10009 10007
10003:   1.000 Mbit/s    0 ms burst 0
 sched 10003 type QFQ flags 0x0 0 buckets 0 active
   Children flowsets: 10008 10006
10000: 849.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 10000 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10004 10002 10000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0       19     1140  0    0   0
10001:  39.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 10001 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 10005 10003 10001
  0 ip           0.0.0.0/0             0.0.0.0/0     51935 74618246 26 38507 395

So, long story short, messing with shaping can in some instances cause initial instability that needs a reboot + settling time.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 02, 2025, 10:56:17 AM
Quote from: OPNenthu on May 02, 2025, 09:40:36 AMI do see tail drops on the upload data pipe/scheduler, only during the upload portion of the speed tests.  I think this is expected, though.  This is probably CoDel/AQM doing its job.

Yes, this is basically FQ_C taking care of packets that sit too long in a flow queue. FQ_C will drop packets "if their sojourn times exceed the target setting for longer than the interval". Sadly, because those flows are dynamic and live under the scheduler, we don't see specific flows, only the aggregate; that's why there is 0.0.0.0/0.
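A toy model of the rule quoted above, using the 5 ms target / 100 ms interval defaults visible in the `ipfw` output earlier in the thread. Real CoDel also paces its drops on an inverse-square-root schedule; this sketch only shows the trigger condition, nothing more:

```python
# Simplified sketch of the CoDel drop trigger: a flow becomes eligible
# for drops once packet sojourn times have stayed above `target` for a
# full `interval`. Illustration only; omits CoDel's drop pacing.

TARGET_MS = 5.0
INTERVAL_MS = 100.0

def should_drop(sojourn_ms: list[float]) -> bool:
    """sojourn_ms: per-millisecond samples of sojourn time, oldest first."""
    above = 0.0
    for s in sojourn_ms:
        above = above + 1.0 if s > TARGET_MS else 0.0  # reset when under target
        if above >= INTERVAL_MS:
            return True
    return False

print(should_drop([6.0] * 150))      # True: above target for a full interval
print(should_drop([6.0, 2.0] * 75))  # False: sojourn keeps dipping under target
```

This also illustrates why a short ping burst through the control pipe never gets dropped by FQ_C on the data pipe: it is sustained queue standing time, not instantaneous delay, that triggers drops.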

Quote from: OPNenthu on May 02, 2025, 09:40:36 AMSo, long story short, messing with shaping can in some instances cause initial instability that needs a reboot + settling time.

Agreed; several people have experienced this, and it's possible to replicate.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on May 02, 2025, 12:05:08 PM
Unless there is a less intrusive way of fixing this than a reboot, it should be pointed out as a caveat in the instructions. Would a fw state reset help?
As a matter of fact, for me this was unexpected, and I still can neither reliably reproduce it, nor are the effects consistent.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 02, 2025, 12:36:03 PM
Quote from: meyergru on May 02, 2025, 12:05:08 PMUnless there is a less intrusive way of fixing this than a reboot, it should be pointed out as a caveat in the instructions.

I agree, but thinking about it, in which section of the shaper docs should it be pointed out? This is not specific to the examples, but applies to the Shaper as a whole. If that's the case, I think it should go under the main Shaper section.

Quote from: meyergru on May 02, 2025, 12:05:08 PMWould a fw state reset help?
Would be worth a try.

@All
If somebody hits this problem can that person try to reset the fw states and let us know?

Quote from: meyergru on May 02, 2025, 12:05:08 PMMatter of fact, for me, this was unexpected and I still can neither reliably reproduce it nor are the effects consistent.
It's interesting this is happening at all. From the description one would assume the problem could be due to packets not being classified properly, but in that case no BW reduction would be visible when the shaper is bypassed.


Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on May 03, 2025, 12:03:46 AM
Quote from: meyergru on May 02, 2025, 12:05:08 PMWould a fw state reset help?

Probably not, IMO.  I tested with an OPNsense VM in a double-NAT setup (IPv4-only), so not exactly the same situation, but I did reproduce the issue.

I configured the control & data plane pipes, queues, and rules.  I set the Download pipe to 545Mbit/s and the Upload to 34Mbit/s accounting for the VM/NAT overhead.

After applying the changes I observed a false start in the Bufferbloat test (hung on "Warming Up..."), followed by a semi-successful test (reduced performance on the Download), followed by a second false start.  See "semi_successful.png".

I then reset the F/W states from the Diagnostics menu and gave it a minute to re-establish and settle.  The next couple of Bufferbloat tests did not stall, but the Download performance was still subpar.  This was reproducible.  See "after_reset.png".

Finally I rebooted the VM and then only observed the full performance:  Result (https://www.waveform.com/tools/bufferbloat?test-id=0debaf24-d171-4b41-a97b-daa7af6dbfa6)
(Sorry, ran out of image quota on this post, so had to crop the second image and could not upload the final one).

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on May 03, 2025, 02:00:17 AM
Alright, it looks like the following observations can be made:

A. There really is a glitch or bug when configuring or changing the Shaper
B. The issue causes degraded performance, e.g. lower-than-expected throughput and/or application stalls (during congestion)
C. It is somewhat reproducible
D. It affects any traffic matched by the Shaper rules
E. Clearing states in pf doesn't fix the problem
F. A FW reboot does fix the problem

So there is either something wrong with OPNsense pushing the config into ipfw/dummynet, or with ipfw/dummynet itself.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: vik on May 09, 2025, 10:10:20 PM
This bug is impacting my setup; I opened a git issue:

https://github.com/opnsense/core/issues/8649
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on June 15, 2025, 01:39:57 AM
Quote from: vik on May 09, 2025, 10:10:20 PMThis bug is impacting my setup, opened git issue:

https://github.com/opnsense/core/issues/8649

Many thanks for reporting it officially and driving it with the devs. I totally missed this one due to other activities. Looks like the issue was found and fixed!

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on July 24, 2025, 10:49:23 AM
The docs are officially published

https://docs.opnsense.org/manual/how-tos/shaper_control_plane.html

Many thanks everyone, especially OPNenthu & meyergru, for the collaboration and testing you provided ;)

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on July 30, 2025, 10:57:37 PM
Quote from: Seimus on July 24, 2025, 10:49:23 AMThe docs are officially published [...]

Cheers @Seimus!  This does raise the bar a bit on networking concepts. 

I've been using the setup from this thread for a few months now without any issues, and since the fix went in for the issue that @vik raised in GitHub, it's been solid.

---

On a separate note, the issue that @meyergru and I saw with the Waveform Bufferbloat test could also be something on their end.  I'm now seeing that the test fails to start (stays stuck), but it was working just last night with no changes on my end.  All of the other test sites, such as the Cloudflare speed test (https://speed.cloudflare.com/) and speedtest.net, are working fine with no slowdowns observed.  So just a word of warning that they might have an app issue.  Let's see if anyone else observes the same.

EDIT:  today (Jul. 31) that Bufferbloat test is working again ¯\_(ツ)_/¯
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 30, 2026, 01:07:54 AM
Necro bump-

Are you guys seeing regressions in 26.1.x?  The upload portion of my speed tests has started stalling a lot, to the point where the tests never start (Waveform Bufferbloat) or never finish (Cloudflare speedtest).  In the case of the Bufferbloat test, it stays stuck on "Warming up" during that portion of the test.

There were some ISP changes in my area recently as they upgraded their infrastructure.  I noticed that my latencies increased a little and I need to redo the pipe widths, but I don't know if this had anything to do with the shaping instability.

I also found some posts online where others noticed this behavior only with Firefox (?).  I don't have any Chrome-based browsers at the moment to try, but maybe I should install one and compare.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 30, 2026, 08:00:37 AM
I reset the shaper a few times while tweaking the pipe bandwidths, so it's working at the moment.  Also changed out the Ethernet cable, just in case.  Will keep an eye on it.

I did install Chromium under Linux; compared to Firefox, it seems to give slightly better results in the tests.  The browser engine appears to affect the performance.

Also found this awesome test: https://bufferbloat.libreqos.com/

Wanted to share that link as it seems to give much more detailed metrics than the Waveform test, and it has additional tests (e.g. the "Household" sim) and ISP rankings.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on March 30, 2026, 10:27:16 AM
Interesting tool, although I get a B for video calls each time and I do not understand why.

P.S.: Waveform still works for me.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: dinguz on March 30, 2026, 10:27:53 AM
Quote from: OPNenthu on March 30, 2026, 01:07:54 AMAre you guys seeing regressions in 26.1.x?  The upload portion of my speed tests has started stalling a lot to where the tests never start (Waveform Bufferbloat) or finish (Cloudflare speedtest).  In the case of the Bufferbloat test it stays stuck on "Warming up" for that portion of the test.

I have seen this behavior (Waveform Bufferbloat test stuck on "Warming up") every so often; I suspect intermittent bandwidth or server load issues on their part.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on March 30, 2026, 10:35:20 AM
I saw that in the past when I over-optimized the shaper by not leaving enough headroom in the pipes' bandwidths in pursuit of maximum bandwidth. You cannot have your cake and still eat it.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 30, 2026, 11:24:41 AM
I knocked my upload pipe width down a little, though I don't have a lot to play with.  My ISP plan is very asymmetric: 1000Mbps down / 40Mbps up.

Now I'm getting an A in Waveform (https://www.waveform.com/tools/bufferbloat?test-id=d79d82b0-9840-4f87-b12d-8a0698a154b6), an A in LibreQoS, but an F in the Household test (I was previously getting a B there).  I'm confused by that Household test now as well; it's not showing any packet loss.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on March 30, 2026, 05:47:02 PM
This test tool is very nice; it provides a lot of statistics and tests a variety of traffic types & patterns.

It's created by people involved in the bufferbloat community, basically the people responsible for the CoDel RFC, CAKE, and the latest iteration of CAKE for ISPs, LibreQoS.

For those who didnt know, Dave Täht one of the original creators of CoDel & CAKE (AQMs) sadly passed away in 2025. LibreQoS and the bufferbloat initiative is his legacy

In loving memory of Dave Täht (https://libreqos.io/2025/04/01/in-loving-memory-of-dave/)
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 31, 2026, 12:31:21 AM
To Mr. Täht, who tamed our networks.  🥃
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 31, 2026, 07:07:27 AM
I think I cracked it!  DOCSIS asymmetry was the main problem here and IPv6 was also masking an issue.

The Waveform test is IPv4-only, but I didn't realize that before.  The LibreQoS test supports both, but it was defaulting to IPv6.  When I toggled my connection to use IPv4 and ran the LibreQoS test, I noticed that the IPv4 path was performing significantly worse than the IPv6 path, and this was the first clue as to why the Waveform test might be stalling.  So for the remainder of the exercise I focused on shaping IPv4 first, then made sure my adjustments carried over to IPv6 (and they did).

The next thing I noticed, because I happen to have separate queues for TCP ACKs, is that >98% of the packets on the upload pipe during the tests were ACKs:

upload-acks.webp

I never really paid attention before, but it hit me: my upload pipe can't keep up with the size of the download pipe, and that causes ACK congestion on the upstream.  That's why no matter how much I tweaked the upload pipe it made no difference: my upload is tiny compared to the download, so the tweaks made only marginal differences.

So the fix became clear: I needed to give up a lot of download bandwidth.  I played around and found that 600Mbps was the sweet spot to balance out my upload.  My current plan is 1000/35 (advertised) and measures 1200/40 in practice due to over-provisioning.  That's a 28-30x difference.

Maybe this can help some others with significant bandwidth asymmetries.  Tuning for anti-bufferbloat isn't only about the individual pipes, it's also about the ratio of bandwidths.  The pros already know; this is just an enthusiast having an "aha!" moment ;)
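For anyone wanting to sanity-check their own ratio, here is a rough back-of-the-envelope sketch.  The numbers are assumptions on my part, not measurements: 1500-byte data segments on the wire, ~64-byte ACK frames, and delayed ACKs at one ACK per two segments.

```python
# Back-of-the-envelope: upstream bandwidth consumed by TCP ACKs while the
# downstream is saturated. Assumptions (mine, not measured): 1500-byte data
# segments on the wire, ~64-byte ACK frames, delayed ACKs (1 ACK per 2 segments).

def ack_upload_mbps(download_mbps, segment_bytes=1500, ack_bytes=64, segs_per_ack=2):
    """Upstream Mbit/s needed just for ACKs at a given downstream rate."""
    segments_per_sec = (download_mbps * 1e6 / 8) / segment_bytes
    acks_per_sec = segments_per_sec / segs_per_ack
    return acks_per_sec * ack_bytes * 8 / 1e6

for down in (600, 1000, 1200):
    print(f"{down} Mbps down -> ~{ack_upload_mbps(down):.1f} Mbps of ACKs upstream")
```

Under these assumptions a saturated 1200 Mbps downstream wants ~25 Mbps of upstream for ACKs alone, which is most of a 40 Mbps upload pipe and would line up with the ACK-dominated counters above.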
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: meyergru on March 31, 2026, 10:40:08 AM
You are correct that there is a certain relation between up- and downstream bandwidth that must be met in order to allow traffic at all.  That is because the ACK stream takes up upstream bandwidth.

However, I measured during the downstream part of the Waveform test and got these results:

2026-03-31 10_22_58-Status _ Shaper _ Firewall _ OPNsense.mgsoft — Mozilla Firefox.png

This shows 4 GByte of downstream data and ~130 MByte upstream, of which 80% was TCP ACKs, so roughly 3.25% of the downstream is needed for the upstream. AFAIR, that is about what is to be expected: a theoretical worst case of ~4% and a more practical 2% (RFC 1122).
AFAIK, that should also explain your rate of 1000/35 Mbps: your ISP wants you to have the full 1000 Mbps downstream, but only the bare necessity for the upstream, with nothing left for server applications. There are more providers which offer only a small upstream even when there is no technical necessity to do so, as there is with DOCSIS.

So, in theory, you should be able to use the full 1000 Mbps downstream, not only 600?

I can imagine two things that may shift the results:

1. With TCP ACKs, you can have pure ACKs and SACKs, so the number of ACK packets can be considerably lower than the number of data packets. That is obviously the case in my test. You did not show the downstream part of your test, so we cannot know if SACK was used, which would depend on the client.

2. Regardless of the net data being transferred, pure ACK packets are way shorter than data packets, so they incur a larger relative overhead, and the net data results may not mirror the real bandwidths used.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 31, 2026, 10:27:48 PM
Here is the full image of that cropped one in my post.  This was before I reduced the download pipe, so at this point I was experiencing stalls.  Note that even setting the pipe a bit lower, for example 850Mbps, did not resolve the stalling.

shaper-full.webp

My plan wasn't originally 1000/35.  It was something like 800/35, IIRC, but while on a support call a couple months ago the agent offered me a free increase to "gigabit."  Unfortunately they only increased the downstream. 

I was thinking that the original plan was better balanced overall, though what you are saying is that it doesn't matter.  All I need is for my upstream to be 2-4% of the downstream to maintain stability.

I'm not sure what my numbers reveal but following your formula, it would appear that the theoretical maximum of 4% is being exceeded by at least a factor of 2x.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on March 31, 2026, 11:17:02 PM
Unfortunately today I ran into the Waveform test stalls again, so I was a little premature.  This happened despite the other tests giving me A-A+ results, and it happened on both browsers, Firefox and Chromium.

Either I haven't fully resolved some bottleneck, or what @dinguz said about the Waveform servers being overloaded could be true.

I'm thinking it's likely on my side, because sometimes the Cloudflare speedtest also stalls mid-test, though less frequently than Waveform.  Those are the only two that ever stall.  I never have any issue with speedtest.net or fast.com, and so far none with LibreQoS (https://bufferbloat.libreqos.com/).  So which do I trust?

I'll keep digging.

Note: I also set up a technician appointment to inspect the exterior lines, because the cable TV boxes aren't working and they couldn't measure a good signal from their end, but whatever it is doesn't seem to affect the internet.  Maybe there's a fluctuation that has an impact in a way that I can't directly measure.  However, then I would expect to see packet loss on my gateways.

There is some- I have momentary spikes of loss, but those have always been there.

The only "clean" graphs I have seen are the ones with the white theme in the attachments.  Those are from the firewall at my parents' house and they have FTTH, so altogether a different medium.  The spikes are relatively rare there.  I am surprised I caught one at all as usually both graphs are loss-free... could just be an upstream router hiccup as I am setting the monitor IP to a first or second hop on the ISP path.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 01, 2026, 07:32:59 PM
In regards to the Waveform bufferbloat test,

Do you by chance use ZenArmor? I have seen stalls as well, but caused by ZA, as ZA started to block some of the IPs behind which Waveform is hosted. They moved the hosting btw, and since that time ZA started to block it.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 01, 2026, 08:46:51 PM
@Seimus No, I only use the firewall with static IP blocklists (Spamhaus, FireHOL, ...) for outbound filtering and DNSBLs.   I'm blocking DoH/DoQ as well.



Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 02, 2026, 01:37:45 PM
Quote from: meyergru on March 31, 2026, 10:40:08 AMSo, in theory, you should be able to use the full 1000 Mbps downstream, not only 600?

I've taken this advice and increased the d/l pipe again, but also made a few changes to my shaping strategy that seem to have helped.  I'll give it some time before declaring victory but results so far seem positive and I got Waveform working again for the moment.  Browsing and streaming feel more responsive.

My thinking right now is that the separate control pipes with QFQ scheduling were putting too much pressure on my limited upstream.  It also made it difficult to split the bandwidth between the FQ_CoDel and QFQ pipes for the upstream because of the very narrow margins.  I still like the idea of separate control vs. data however.

So I made these changes:

1) Consolidated my pipes to just two with FQ_CoDel (no QFQ)
2) Consolidated my control plane to two queues (up/down), but kept the separate rules to classify ACK/ICMP/DNS.
3) Gave a 5:1 weight advantage to the control queues

control-plane-simplified-with-FQCoDel.webp

The weighting helped to smooth out a momentary latency spike when the bufferbloat test transitions from D/L to U/L.  The two attached tests illustrate the difference.  (Note: I have active background downloads and video streams during the tests in order to see how it behaves under network load, so latencies are higher than when I test idle.)

I think I'm happy to trade a perfect A+ with lowest possible latencies in order to get an A with more stability.  Maybe this is a fact of life with DOCSIS.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 02, 2026, 08:01:11 PM
The weight doesn't do anything when using FQ_CoDel. The FQ in FQ_CoDel does what it says: fair queuing.

So basically FQ_CoDel doesn't do any priority queuing or weighted queuing.

In theory even this design should not cause too much harm to the control plane, as FQ_CoDel will not let any flow starve, on the condition that there is not an excessive number of flows. It can however create extra drops and tail drops for the control plane, i.e. create AQM back-pressure.

What is interesting here is that when using FQ_CoDel, and not multiple schedulers, your tests show good results. So the question is: how and why did two different schedulers for two different traffic planes impact the results? Considering the control plane, if configured properly, should not carry any data plane traffic, i.e. the test traffic.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 02, 2026, 11:39:55 PM
Quote from: Seimus on April 02, 2026, 08:01:11 PMSo basically FQ_CoDel doesn't do any priority queuing or weighted queuing
Yeah, I remember this from our conversations but I had convinced myself that it fixed that spike.  Most likely those spikes were just transient and it's only coincidence that they disappeared.

Quote from: Seimus on April 02, 2026, 08:01:11 PMSo the question is: how and why did two different schedulers for two different traffic planes impact the results?
I went ahead and reset back to the layout with the separate control pipes.  For the moment the control plane is back to QFQ.

If I notice another stall, then I can try to find an appropriate control pipe width for the upstream that fixes it.  Failing that, I can try WFQ instead of QFQ at the expense of some CPU overhead.  Finally, failing all of that, I can change the control pipe schedulers to FQ_CoDel and see if the problem persists.

I'm trying to establish whether the upload control pipe is just too small, or if the scheduler type is the issue.  If I do run into stalls, are there any specific outputs from the terminal that might be useful to pinpoint the issue?
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 03, 2026, 02:12:33 AM
Overall, your troubleshooting process described above sounds like a solid plan.

In regards to the CLI,
you can check

ipfw sched show
This will show, for any scheduler, the active flows and the Tot_pkt/bytes and Pkt/Byte Drp counters. The more traffic you push, the easier it is to see in the CLI outputs.


Honestly, at this point I still do not think the control plane shaping is the problem, because the control plane classifies, marks, queues and shapes only the control traffic for IPv6 > ICMPv6. No other plane or type of traffic should hit this pipe and scheduler.

The "stall" issue you describe sounds like an issue with new flow start, considering the bufferbloat test traffic should hit the data plane (FQ_C scheduler).

This could be, as mentioned:
1. An over-tuned scheduler (see comment from @meyergru)
2. A wrongly tuned limit parameter (which can impact slow start and new flow start)
3. ECN (which can impact slow start)

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 03, 2026, 03:07:20 AM
Quote from: meyergru on March 30, 2026, 10:27:16 AMInteresting tool, although I get a B for video calls each time and I do not understand why.

P.S.: Waveform still works for me.

I see similar results, but for video streaming, i.e. "download", the result is a C. For everything else I have A+ and overall far better results than their requirement. The same goes for Waveform: A+, 0ms increase.

The funny part is, my own testing methodology to confirm how well the Shaper is tuned gets an S+ rank. This is tested on real live traffic, with the added factor of your partner yelling at you.

How to use this testing methodology, do all at once:
1. Check if your partner is watching her favorite show on live TV
2. Turn on an Online game
3. Start to Download a game on Steam
4. Run a full system upgrade on Linux
5. Observe

Result outcome:
If, 2. + 3. + 4. do not show any problems = A+
If, 1. + 2. + 3. + 4. do not show any problems = S+
If, 1. shows a problem, pretend you didn't do anything and start again

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 03, 2026, 05:09:25 AM
Quote from: Seimus on April 03, 2026, 03:07:20 AMIf, 1. shows a problem, pretend you didn't do anything and start again
This is my method too, but now she just blames me automatically even when it's not my fault :)

Regarding the Household test on the LibreQoS site, I asked ChatGPT what the test looks for and it gave an interesting response.  It said that the Household test falls down quickly when using FQ_CoDel because it cannot distinguish between flows.  All traffic has equal priority, so things like gaming, VoIP, etc. can get impacted quickly when there is traffic from multiple clients.

To get a good score there we need CAKE, which can distinguish clients and flows.

As it's not available on FreeBSD, the best we can do is prioritize into queues.  I guess for that to work with FQ_CoDel we would need multiple pipes right?  Or maybe one pipe with no scheduler and instead use CoDel within priority queues?

I would be tempted to try this but I don't know how to match the traffic accurately.  For example, how do we use rules to distinguish video streaming from regular downloads (both using HTTPS)?   Are we supposed to match by destination, e.g. all YouTube.com -> send to high prio queue?

If someone has a guide for that in OPNsense it would be great.  I'm sometimes getting an 'F' on that test.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 03, 2026, 11:28:44 AM
Quote from: OPNenthu on April 03, 2026, 05:09:25 AMThis is my method too, but now she just blames me automatically even when it's not my fault :)

Hard life of the homelabber :D


Quote from: OPNenthu on April 03, 2026, 05:09:25 AMRegarding the Household test on the LibreQoS site, I asked ChatGPT what the test looks for and it gave an interesting response.  It said that the Household test falls down quickly when using FQ_CoDel because it cannot distinguish between flows.  All traffic has equal priority, so things like gaming, VoIP, etc. can get impacted quickly when there is traffic from multiple clients.

This is not true.

FQ_CoDel can and does distinguish packets into different flows. It uses the 5-tuple to create hashes that sort packets into different slots (flows/queues).
https://datatracker.ietf.org/doc/html/rfc8290#section-1.3

These slots (flows) live within the scheduler, i.e. the "scheduler's queues", and are separate from the "shaper queues".
In FQ_C the default allowed number of flows is 1024, but you can increase it in order to avoid having different flows hashed into the same slot.

Flows are one of FQ_C's core components, because FQ_C does per-packet, per-flow "sojourn time" tracking.

FQ_C, if set up properly, should serve all flows equally while consistently tracking the packet sojourn time in each flow. Thus one single flow should not starve others of bandwidth or of processing time.
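As a toy illustration of that flow separation (dummynet uses its own internal hash; CRC32 here just demonstrates the idea of a deterministic 5-tuple-to-slot mapping, and the addresses are example values):

```python
# Toy illustration of FQ_CoDel's flow separation: hash the 5-tuple into one
# of N sub-queue slots. dummynet uses its own internal hash; CRC32 here just
# demonstrates the deterministic mapping.
import zlib

FLOWS = 1024  # dummynet's fq_codel default number of sub-queues (tunable)

def flow_slot(proto, src, sport, dst, dport):
    """Map a connection's 5-tuple to a flow slot index."""
    key = f"{proto}|{src}|{sport}|{dst}|{dport}".encode()
    return zlib.crc32(key) % FLOWS

# Distinct connections (almost always) land in distinct slots, so a bulk
# download cannot monopolize the queue an interactive flow sits in.
a = flow_slot("tcp", "192.0.2.10", 50123, "203.0.113.5", 443)   # bulk HTTPS
b = flow_slot("udp", "192.0.2.11", 3478, "198.51.100.7", 3478)  # VoIP/STUN
print(a, b)
```

With only ~1024 slots, two unrelated connections can occasionally collide in one slot, which is why increasing the flow count can help on busy links.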


Quote from: OPNenthu on April 03, 2026, 05:09:25 AMAs it's not available on FreeBSD, the best we can do is prioritize into queues.  I guess for that to work with FQ_CoDel we would need multiple pipes right?  Or maybe one pipe with no scheduler and instead use CoDel within priority queues?

CoDel is a queue discipline.
FQ is a scheduler algorithm.
FQ_CoDel is officially an AQM scheduler algorithm where the queue management is done in flows the moment a packet is moved into the scheduler.

If you want to do priority, then yes, you need to divide the traffic into separate pipes, each with FQ_C, classifying each service.

Or

One pipe with a weighted scheduler & queues per traffic classification (classes), set by weights, with CoDel turned on in each of them. All with MASKs disabled.
This should let you share the BW as dictated by the ratio of the weights, and have bufferbloat/latency/jitter managed by CoDel inside each dedicated class, i.e. queue. This works because WFQ or QFQ can read the weights from the downstream connected queues.
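The bandwidth split such a weighted setup produces can be sketched in a few lines (the queue names and weights below are hypothetical examples, not a recommended configuration):

```python
# Sketch of how a weighted scheduler (WFQ/QFQ) divides a pipe among
# backlogged queues: each queue gets weight_i / sum(weights) of the pipe.
# The queue names and weights below are hypothetical examples.

def shares(pipe_mbps, weights):
    """Per-queue bandwidth when all queues are backlogged."""
    total = sum(weights.values())
    return {name: pipe_mbps * w / total for name, w in weights.items()}

# A 40 Mbit/s upload pipe with control traffic weighted above bulk data
split = shares(40, {"control": 50, "acks": 25, "bulk": 10})
for name, mbps in split.items():
    print(f"{name}: {mbps:.1f} Mbit/s")
```

Note the weights only matter under contention: an idle queue's share is redistributed to the backlogged ones.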

If you use CoDel in the Queues you would still use a scheduler (the default is WF2Q+). Here the queuing would be done in the queues.

The FUN part of all of this is that you actually do not need a PRIO queue/flow. PRIO was, back in the day, a fix for RTP, i.e. real-time traffic, because back then there was no AQM/SQM. Without PRIO, packets could stall in a queue; once the queue was full, tail drop happened, but the packets already in the queue were stalled, so even if delivered they were already out of sync. So in order to have usable VoIP or video, a PRIO queue was used to manage latency.

AQMs such as FQ_C fixed this, as they introduced per-packet, per-flow "sojourn time" tracking.

QuoteHere is an overview of the FQ_CoDel algorithm that performs these tasks in parallel:

1. Separate every traffic flow's arriving packets into their own queue.

2. Remove a small batch of packets from a queue, round-robin style, and transmit that batch through the (slow) bottleneck link to the ISP. When each batch has been fully sent, retrieve a batch from the next queue, and so on.

3. Offer back pressure to flows that are sending "more than their share" of data.

This last step is the heart of the FQ_CoDel algorithm. It measures the time that a packet remains in a queue (its "sojourn time"). That's how it determines that a flow is using more than its share. If packets have been in a queue "too long" (that is, if their sojourn times exceed the target setting for longer than the interval), FQ_CoDel begins to mark or drop some of those packets to cause the sender to slow down.
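The control law in that last step can be sketched in a few lines. 5 ms / 100 ms are CoDel's usual target/interval defaults; this sketch ignores the per-flow state and queue-drain resets a real implementation needs:

```python
# Minimal sketch of CoDel's control law from the quote above: once packets'
# sojourn time has exceeded TARGET for at least INTERVAL, start dropping or
# ECN-marking, and shorten the gap between drops as drops accumulate.
# 5 ms / 100 ms are CoDel's usual defaults; real implementations track state
# per flow and reset when the queue drains.
from math import sqrt

TARGET = 0.005    # seconds a packet may acceptably wait in the queue
INTERVAL = 0.100  # how long sojourn must stay above TARGET before acting

def next_drop_gap(drop_count):
    """Seconds until the next drop; shrinks as drops accumulate."""
    return INTERVAL / sqrt(drop_count)

# Gaps between successive drops: 100 ms, 70.7 ms, 57.7 ms, ...
print([round(next_drop_gap(n) * 1000, 1) for n in (1, 2, 3)])
```

The inverse-square-root pacing is what makes CoDel apply gentle pressure at first and progressively harder pressure on flows that keep the queue standing.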

Quote from: OPNenthu on April 03, 2026, 05:09:25 AMI would be tempted to try this but I don't know how to match the traffic accurately.  For example, how do we use rules to distinguish video streaming from regular downloads (both using HTTPS)?  Are we supposed to match by destination, e.g. all YouTube.com (https://youtube.com/) -> send to high prio queue?

Basically by any means available, and you are right here: you would have to know the specifics of the service, such as IP, port and protocol, and surgically categorize it.


Quote from: OPNenthu on April 03, 2026, 05:09:25 AMIf someone has a guide for that in OPNsense it would be great.  I'm sometimes getting an 'F' on that test.

I know this was already asked on the forum, and I provided a step-by-step, but I can't remember which topic it was.

Regards,
S.

Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 03, 2026, 11:55:08 PM
I think maybe these:

https://forum.opnsense.org/index.php?topic=43856.0
https://forum.opnsense.org/index.php?topic=45135.0

It'll take some time to play but if I find a way to improve the Household test score I will post back.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 04, 2026, 12:57:20 AM
Quote from: OPNenthu on April 03, 2026, 11:55:08 PMI think maybe these:

https://forum.opnsense.org/index.php?topic=43856.0
https://forum.opnsense.org/index.php?topic=45135.0

It'll take some time to play but if I find a way to improve the Household test score I will post back.

Nice you found it!

I should maybe reconsider testing and writing about a weighted scheduler + latency queue management. I thought about it previously, but it does require more configuration and could confuse the general user.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 04, 2026, 05:03:05 AM
I think the CoDel options outside of the FQ_CoDel scheduler are not effective, based on some preliminary testing.

I tried two things:

1)
Pipes set to WFQ
Queues set to CoDel + ECN
Result: bufferbloat score went way down (B-C range, +80ms on upload latency)

2)
Pipes set to WFQ + CoDel + ECN
Queues set plain (no options)
Result: same as above

There seems to be something special about the FQ_CoDel scheduler on pipes that makes it effective for bufferbloat management in a way the other CoDel options are not.

If that's the case, then it won't be practical to classify and prioritize traffic types as the latency won't be acceptable.

The only way might be to use separate pipes with FQ_CoDel but I don't want to carve up my bandwidth.

Are you seeing the same?



Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 04, 2026, 04:14:07 PM
Quote from: OPNenthu on April 04, 2026, 05:03:05 AM2)
Pipes set to WFQ + CoDel + ECN
Queues set plain (no options)
Result: same as above

This actually will not use the CoDel in the Queues (tab). If you enable CoDel on the pipe, you need to attach your rules directly to the scheduler instead of the queues you manually created, because CoDel on pipes is configured on the dynamically created queues. When you attach your rules to the manually created queues, they use a FIFO qdisc.


Quote from: OPNenthu on April 04, 2026, 05:03:05 AMThere seems to be something special about the FQ_CoDel scheduler on pipes that makes it effective for bufferbloat management in a way the other CoDel options are not.

If that's the case, then it won't be practical to classify and prioritize traffic types as the latency won't be acceptable.

The only way might be to use separate pipes with FQ_CoDel but I don't want to carve up my bandwidth.

The magic is most likely due to the FQ scheduler, plus the fact that FQ_C allows you to set the quantum and does not create as much overhead as WFQ or QFQ. FQ_C is a fine-tuned version of CoDel. CoDel itself, because it is just a qdisc, needs a proper scheduler; it is usually advised to use CoDel with QFQ, which performs better.

https://www.bufferbloat.net/projects/codel/wiki/

Quote from: OPNenthu on April 04, 2026, 05:03:05 AMAre you seeing the same?

Honestly, I have not had the chance to test this yet :)

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 05, 2026, 12:48:49 AM
That makes sense regarding #2, @Seimus.  Thanks.

I had tried #1 with QFQ instead of WFQ, but it made no difference.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 05, 2026, 01:46:50 AM
How are you testing #1?
Do you have multiple queues, each with different weights?
Can you try to set all the queues to the same weight?
Can you try to run only one any/any queue per direction?

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 05, 2026, 02:42:43 AM
(Post #1 of 4: Test 1)

In this post I've kept my existing setup with the separate control plane for ICMP/ICMPv6, but I've changed the pipes on the data plane to use QFQ.  The queues are as before, except now they are using CoDel+ECN at the queue level.  Queue weights within the data plane are all 100.

Setup images attached.

The router was rebooted to ensure a clean pick-up of the new configs.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 05, 2026, 02:43:42 AM
(Post #2 of 4: Test 1 Results)

Results images attached.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 05, 2026, 02:45:12 AM
(Post #3 of 4: Test 2)

In this post I've removed the existing data queues/rules and instead used a single queue and rule (any/any) for each direction.

There is a problem now: I am getting a lot of messages like this in the console:

config_aqm Unable to configure flowset, flowset busy!

I rebooted the router twice and I also completely cleared the shaper configs and started over with just an upload pipe and a download pipe (no control plane stuff).  The messages did not go away.

There seems to be a bug.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 05, 2026, 02:45:28 AM
(Post #4 of 4: Test 2 Results)

Results images attached.

Also, the console showing the flood of 'flowset busy' messages mentioned in the previous post.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 06, 2026, 10:59:53 AM
That error looks like it is due to

https://github.com/opnsense/core/issues/1279#issuecomment-3417927175

That error tells you that you are trying to reconfigure a flowset (pipe or queue) while there is actual traffic using that flowset (it has an active scheduler).
However, there is an easy workaround to avoid this error: if you make sure that no traffic passes through the pipe/queue that you want to reconfigure, then you can reconfigure it without problems.

I would advise disabling the rules, and then trying to reapply the settings.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 06, 2026, 11:18:05 AM
Honestly, considering you run a weighted scheduler with CoDel on the queues, these are still good results, even though not the desired ones.
Basically, CoDel isn't aware of any BW target, so it manages latency based only on per-packet sojourn time.

With the scheduler set to QFQ and a weighted queue with no MASK, each queue only has the BW available per its weight ratio. But here is the main point: it is just a queue without any flow recognition. So basically the first flow that comes in is the first flow that gets out, and gets the BW. Once the 50-packet queue is full, anything that comes after tail-drops.

For CoDel in a queue you can adjust two parameters, TARGET & INTERVAL; the defaults should be enough, but maybe you can try to tune them. You can focus on the upload and set them so CoDel will more aggressively drop or ECN-flag packets. Potentially also try disabling ECN: instead of flagging first and dropping later, ECN-capable flows will then be dropped right away.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: OPNenthu on April 07, 2026, 08:11:22 PM
I appreciate the feedback...

Quote from: Seimus on April 06, 2026, 10:59:53 AMThat error tells you that you are trying to reconfigure a flowset (pipe or queue) while there is actual traffic using that flowset (it has an active scheduler).
However, there is an easy workaround to avoid this error: if you make sure that no traffic passes through the pipe/queue that you want to reconfigure, then you can reconfigure it without problems.

I would advise disabling the rules, and then trying to reapply the settings.

Hmm, but as I mentioned, it persisted across reboots.  I can set it up again to make sure the rules were disabled, but if it happened on bootup then I'm not confident it won't get into that state again on the next router reboot.  The messages come early, even before OPNsense has fully booted.

Quote from: Seimus on April 06, 2026, 11:18:05 AMWith the scheduler set to QFQ and a weighted queue with no MASK, each queue only has the BW available per its weight ratio. But here is the main point: it is just a queue without any flow recognition. So basically the first flow that comes in is the first flow that gets out, and gets the BW. Once the 50-packet queue is full, anything that comes after tail-drops.

For CoDel in a queue you can adjust two parameters, TARGET & INTERVAL; the defaults should be enough, but maybe you can try to tune them. You can focus on the upload and set them so CoDel will more aggressively drop or ECN-flag packets. Potentially also try disabling ECN: instead of flagging first and dropping later, ECN-capable flows will then be dropped right away.

Disabling ECN didn't seem to have a noticeable effect, though I didn't try TARGET & INTERVAL.  I think I've accepted that FQ_CoDel by itself is good enough and that it's not worth the trouble to try and prioritize further (you were right, it's difficult) :)

I'm running with the original setup now (FQ_CoDel data pipes + QFQ control pipes for ICMP only) and for whatever reason the Waveform test is not stalling.  I still don't know if the test itself is sometimes faulty or if it was something on my end that got cleared, but I'm keeping an eye on it.

I gave up on trying to get an 'A' on the LibreQoS Household test.  I had seen a couple of F's before, but mostly it gives me a B.  I think @meyergru is seeing a 'B' as well, and I should not presume that I can beat him :)
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: Seimus on April 08, 2026, 10:56:37 AM
At the end all what matters is how the real live experience feels.

Even if you get a worse or unwanted score in an artificial benchmark/test, at the end it's just an artificial test. For example, I get A+ on Waveform and Cloudflare, but C or D on Libre. In reality, my latency across services under congestion is fantastic, and much closer to the Waveform results.

Honestly I am not sure if the Libre test is working properly, or what the deal with it is, or even how exactly it works. But it doesn't bother me much, as in real life everything works as it should.

Regards,
S.
Title: Re: IPv6 Control Plane with FQ_CoDel Shaping
Post by: dinguz on April 08, 2026, 09:16:46 PM
I concur. Keep in mind that tests try to approximate real-world experience, they're not the goal in itself. If you over-optimize for a specific test's methodology, you risk great scores but a worse actual experience.