IPv6 Control Plane with FQ_CoDel Shaping

Started by OPNenthu, April 26, 2025, 12:48:44 PM

Quote from: OPNenthu on April 29, 2025, 05:53:20 PM
Glad you touched on this. I was debating whether FIFO might perform better for this purpose, assuming the pipe was only being used for ICMP-type traffic. I briefly tried it but wasn't noticing any difference, and the default (WFQ) gives us more options like you said.

If there are better options, don't use FIFO; it should be fine only when you have a single queue per pipe.
It's better to use WFQ, or QFQ, which is a faster variant of WFQ with much quicker processing.
Btw, if you can, try QFQ on the control plane pipe for IPv6.

Quote from: OPNenthu on April 29, 2025, 07:27:32 PM
I just took a look at your bufferbloat submission for reference: https://github.com/opnsense/docs/pull/571

That doesn't seem too bad to try and follow. Maybe I can install a reStructuredText editor in VSCode and get some initial content down as a starting point.

It's nothing hard; reStructuredText is simple to understand and use. More or less, the challenge is to write the docs properly. I already have a draft in my head of what the docs should contain and how to structure them. Feel free to start; this is the benefit of open source (as well as the OPN docs), as we can co-create and collaborate ;)

But ultimately it depends on the OPN devs whether they accept such an addition to their docs :)

Regards,
S.


Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

I'm not sure how to test ICMPv6 throughput.

As a basic latency test, I tried to run 10 pings to Cloudflare DNS under load.  To generate the load I ran speedtest.net in a browser and initiated the pings during the upload portion of the speed test.  The results all seem within margin of error to me.

Of course, my gateway showed significant packet loss (up to 30%) during the baseline test with only FQ_CoDel present.  It did not do this when the Control pipe was active (either WFQ or QFQ).
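
For a repeatable version of this test, the load and the pings can be scripted from a Linux/BSD client instead of using a browser speed test. A minimal sketch, assuming iperf3 is installed and 'iperf.example.net' stands in for a reachable iperf3 server:

    # Saturate the upload path in the background, then measure ICMPv6 RTT under load.
    iperf3 -6 -c iperf.example.net -t 30 &
    sleep 5                               # give the link a moment to fill
    ping -6 -c 10 2606:4700:4700::1111    # '-c' on Linux/BSD; '-n' on Windows as above
    wait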

Baseline - No control pipe
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=18ms
Reply from 2606:4700:4700::1111: time=17ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=13ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 18ms, Average = 14ms

Control (WFQ)
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=14ms
Reply from 2606:4700:4700::1111: time=11ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 15ms, Average = 13ms

Control (QFQ)
C:\>ping -6 -n 10 2606:4700:4700::1111

Pinging 2606:4700:4700::1111 with 32 bytes of data:
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=15ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=16ms
Reply from 2606:4700:4700::1111: time=12ms
Reply from 2606:4700:4700::1111: time=13ms
Reply from 2606:4700:4700::1111: time=11ms
Reply from 2606:4700:4700::1111: time=12ms

Ping statistics for 2606:4700:4700::1111:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 16ms, Average = 13ms

I then repeated the speed tests while watching 'top' on the OPNsense box, and recorded the highest system CPU usage seen:

Baseline: Down: 22%, Up: 3.4%
WFQ: Down: 23%, Up: 3%
QFQ: Down: 23.4%, Up: 4.3%
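
For reference, the CPU numbers came from watching top on the OPNsense shell. FreeBSD's top can break the load down per CPU and per kernel thread, e.g.:

    top -S -H -P -s 1    # -S: system processes, -H: threads, -P: per-CPU usage, -s 1: refresh every second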


I don't think my tests are very scientific :) and all I can say at the moment is that there appears to be no downside to using a Control pipe with either scheduler type. I can't measure or perceive any difference between them, with the exception of the gateway status.
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE i226-v
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE i210

As this is basically a test of IPv6 control plane stability, the way you tested it is okay.

----------------
1. Create a WFQ pipe and queue for ICMPv6
2. Saturate your internet connection (e.g. with a speed test)
3. Observe ICMPv6 latency and jitter
4. Observe IPv6 stability
5. Repeat the above for QFQ
6. Compare the results without the control plane pipe and queue against WFQ and QFQ (the command sketch after this list shows how to confirm which pipe/queue the traffic actually hits)
----------------
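
To confirm which pipe/queue the traffic actually hits, the shaper state can be inspected live from the OPNsense shell while the test runs. The shaper is built on ipfw/dummynet, so the standard ipfw show commands apply (a quick sketch):

    ipfw pipe show     # pipes with bandwidth, backlog and drop counters
    ipfw queue show    # queues attached to the pipes, incl. per-flow buckets
    ipfw sched show    # schedulers (WFQ/QFQ/FQ_CoDel) bound to each pipe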

If we wanted to test more scientifically, there is a tool for this, for example Crusader, which can give precise measurements specifically for bufferbloat. But we do not need this, as we have a proof of concept for a working solution.

And yes, I expected WFQ and QFQ to have similar results; a difference would only be seen if there were multiple queues under the control plane pipe. The benefit of QFQ is that it should provide more consistent rates and tighter guarantees across multiple queues, as defined by their weights.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

I created a feature request with an explanation on the docs repo. This will be used for the PR:

https://github.com/opnsense/docs/issues/705

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

PR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Hmmm. I just followed the new instructions. FWIW, it worked fine on one installation with 400/200 Mbit/s. I then copied the <TrafficShaper> section of config.xml to another installation on the same ISP with higher bandwidth (1000/500), and that machine's connectivity went on and off erratically. It seemed like the old problem of breaking IPv6 connectivity kicked in again there.

Since the site is remote to me and I broke connectivity doing this once, I cannot thoroughly test it there.

However, when I used the instructions on my own rig (1100/800, other ISP), I found that the Waveform Bufferbloat test stalled after the first step, taking forever "warming up". I am sure that the Shaper is the culprit, because when I disabled all rules, the test went through.

The test also went fine when I reverted the config to the initial instructions by @OPNenthu, with just control rules for upstream icmp and ipv6-icmp, without intermediate queues (using only the pipes for this). I modified them to also have a downstream control rule, and this works as well.

I wonder if the traffic shaper has problems with higher speeds, which is something I vaguely remember reading about.

My current working setup on my own rig looks like this:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>910</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue>2</queue>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>1160</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue>2</queue>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler>fq_pie</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>1</pie_enable>
          <fqcodel_quantum>1500</fqcodel_quantum>
          <fqcodel_limit>20480</fqcodel_limit>
          <fqcodel_flows>65535</fqcodel_flows>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>20</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>30</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3db79d81-b459-4558-b845-b2ba19efec31">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>fb829d32-e950-4026-a2ee-3663104a355b</target>
          <description>Upload-Control Rule ICMP</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>fb829d32-e950-4026-a2ee-3663104a355b</target>
          <description>Upload-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>883ed783-df03-4109-9364-a6c387f5954f</target>
          <description>Download-Control Rule ICMPv6</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3e5fe8fc-1b6a-4323-a95a-c24e664cd5b9">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2>lan</interface2>
          <proto>icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>883ed783-df03-4109-9364-a6c387f5954f</target>
          <description>Download-Control Rule ICMP</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

I know that there are a few differences between @Seimus's instructions and what now works:

1. The control plane speeds are very low (1 Mbit/s).
2. I use masks on the pipes, as well as FQ_Codel Parameters and PIE.
3. I have rules for icmp in addition to ipv6-icmp.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

@meyergru

Do you happen to have a screenshot of your TS settings instead of the XML format?

May 01, 2025, 01:08:00 AM #22 Last Edit: May 01, 2025, 01:13:33 AM by Seimus
Quote from: meyergru on April 30, 2025, 09:38:02 PM
I know that there are a few differences between @Seimus's instructions and what now works:

1. The control plane speeds are very low (1 Mbit/s).
2. I use masks on the pipes, as well as FQ_Codel Parameters and PIE.
3. I have rules for icmp in addition to ipv6-icmp.


Thanks for testing. As I myself don't have an IPv6-capable connection, any tests of the config from the git I created, and their results, help to fine-tune this.

This is interesting.
Were you able to observe any packet loss (health graph), as reported by other users when this IPv6 problem occurs?

ICMPv4 should not be needed for IPv6 functionality; at least I haven't found much related to it.

I suspect that if there is still an issue for you, e.g. loss and latency for IPv6, it could potentially be due to the bandwidth capacity of the control plane pipes. The rules match basically any ICMPv6, not only traffic originating from the OPN itself.

Looking at your working config: as you mentioned, you use masks on the pipes.


<pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>src-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>1</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>dst-ip</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>

The behavior of a mask on a pipe is different from a mask on a queue.

Quote
Thus, when dynamic pipes are used, each flow will get the same bandwidth as defined by the pipe, whereas when dynamic queues are used, each flow will share the parent's pipe bandwidth evenly with other flows generated by the same queue (note that other queues with different weights might be connected to the same pipe).

Put simply:
When you use a mask on a pipe, each flow gets the full bandwidth set on the pipe.
When you use a mask on a queue, the total bandwidth of the pipe is shared.
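
Expressed directly in ipfw/dummynet terms, the difference looks like this (illustrative only; OPNsense generates the real commands from the GUI, and the mask syntax follows ipfw(8)):

    # Mask on the PIPE: every source address gets its own 1 Mbit/s pipe instance.
    ipfw pipe 10003 config bw 1Mbit/s mask src-ip6/128

    # Mask on the QUEUE: all flows share the single 1 Mbit/s parent pipe.
    ipfw pipe 10003 config bw 1Mbit/s
    ipfw queue 10003 config pipe 10003 weight 100 mask src-ip6/128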

The queue config in the GitHub doc limits the total bandwidth usage to the value of the pipe; this is the reason to use queues, besides the fact that we can reuse the control plane pipe for other protocols' control traffic. But without a mask it does not share the bandwidth equally among the flows in that queue; it's first come, first served, and the rest starve. There is a chance a single ICMPv6 flow starved the rest of the flows.

This would explain why the Waveform test stalled, as well as the IPv6 breakage, if that did happen.

Can you maybe try the config from git again, in two scenarios?
1. Leave everything as in the doc, but increase the control pipe bandwidth
2. Set masks on the control plane queues in their proper respective directions (DL: destination; UL: source)

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

1. With the setup as per the instructions, I had 10/10 Mbit/s on the control plane, not 1/1 as in my working setup, just as a note.
2. I tried both suggestions from the last posting, to no avail. I even tried setting queue masks for both the control plane and the IP queues.

I used 900/600 and 100/100 Mbit/s for those tests. I also tried setting queue masks and increased bandwidth on the pipes.



For reference (and check), here is the non-working configuration snippet as per your last suggestions combined:

    <TrafficShaper version="1.0.3">
      <pipes>
        <pipe uuid="bbe0a667-ed41-4f7b-b47e-8ab22286a1fb">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>600</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upstream Pipe</description>
        </pipe>
        <pipe uuid="020a34ef-cd71-4081-9161-286926ee00cc">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>900</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler>fq_codel</scheduler>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>1</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Downstream Pipe</description>
        </pipe>
        <pipe uuid="fb829d32-e950-4026-a2ee-3663104a355b">
          <number>10003</number>
          <enabled>1</enabled>
          <bandwidth>100</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Upload-Control</description>
        </pipe>
        <pipe uuid="883ed783-df03-4109-9364-a6c387f5954f">
          <number>10004</number>
          <enabled>1</enabled>
          <bandwidth>100</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <buckets/>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <delay/>
          <description>Download-Control</description>
        </pipe>
      </pipes>
      <queues>
        <queue uuid="0db3f4e6-daf8-4349-a46f-b67fdde17c98">
          <number>10000</number>
          <enabled>1</enabled>
          <pipe>020a34ef-cd71-4081-9161-286926ee00cc</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Downstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="d846a66a-a668-4db8-9c92-55d5c172e7af">
          <number>10001</number>
          <enabled>1</enabled>
          <pipe>bbe0a667-ed41-4f7b-b47e-8ab22286a1fb</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Upstream Queue</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="55c03a93-8de7-4c45-a782-aaecdcc9cc72">
          <number>10002</number>
          <enabled>1</enabled>
          <pipe>883ed783-df03-4109-9364-a6c387f5954f</pipe>
          <weight>100</weight>
          <mask>dst-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Download</description>
          <origin>TrafficShaper</origin>
        </queue>
        <queue uuid="9aaccde6-b391-4330-b2d0-6e525d2a12ee">
          <number>10003</number>
          <enabled>1</enabled>
          <pipe>fb829d32-e950-4026-a2ee-3663104a355b</pipe>
          <weight>100</weight>
          <mask>src-ip</mask>
          <buckets/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <pie_enable>0</pie_enable>
          <description>Control-plane-IPv6-Queue-Upload</description>
          <origin>TrafficShaper</origin>
        </queue>
      </queues>
      <rules>
        <rule uuid="9eba5117-ad2e-450a-96ed-8416f5f278da">
          <enabled>1</enabled>
          <sequence>3</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>0db3f4e6-daf8-4349-a46f-b67fdde17c98</target>
          <description>Downstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="3c347909-3afd-4a14-b1e2-8eb105ff99a0">
          <enabled>1</enabled>
          <sequence>4</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>d846a66a-a668-4db8-9c92-55d5c172e7af</target>
          <description>Upstream Rule</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="844829a2-ece6-4d34-ab2c-27c2ba8cef76">
          <enabled>1</enabled>
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>out</direction>
          <target>9aaccde6-b391-4330-b2d0-6e525d2a12ee</target>
          <description>Control-plane-IPv6-Rule-Upload</description>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="16503037-a658-438c-8be5-7274cece9dde">
          <enabled>1</enabled>
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ipv6-icmp</proto>
          <iplen/>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <dscp/>
          <direction>in</direction>
          <target>55c03a93-8de7-4c45-a782-aaecdcc9cc72</target>
          <description>Control-plane-IPv6-Rule-Download</description>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>

Afterwards, I even tried to shortcut the control plane rules directly to the pipes, as in my working setup; alas, to no avail.

Going back to my working config immediately restored the Waveform test to a working state. The difference seems to be that I enable PIE on the IP pipes and have some FQ_CoDel parameters set.
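
If someone wants to narrow down which parameter makes the difference, the effective scheduler/AQM settings can be dumped and diffed between the two configs. A sketch (the exact output format depends on the version):

    ipfw sched show > /tmp/sched-working.txt    # scheduler + AQM parameters (quantum, limit, flows, ECN/PIE)
    # apply the other config, then:
    ipfw sched show > /tmp/sched-broken.txt
    diff /tmp/sched-working.txt /tmp/sched-broken.txt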

Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Quote from: Seimus on April 30, 2025, 03:59:42 PM
PR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.


Thanks @Seimus.  Looks good to me overall.  I added one comment in the PR.

Also interested in the suggestion there re: a pf rule vs. ipfw. I'm willing to try it, but I'm not sure about the implementation in pf using the experimental shaping option. Would we just need a single pass rule (direction in) on WAN for ICMPv6? I believe in pf it's from the perspective of the firewall, so both upstream and downstream requests would be seen as 'in' from the WAN perspective.

I'm thinking something like this?

Action: Pass
Interface: WAN
Direction: in
TCP/IP Version: IPv6
Protocol: IPV6-ICMP
Source: Any
Destination: Any
Traffic Shaping (rule direction): Download-Control-Pipe
Traffic Shaping (reverse direction): Upload-Control-Pipe

(directionality for pipe assignment is unclear in this case)

My concern with this is that it overrides the default/automatic rules in OPNsense regarding ICMPv6, which is not ideal. There are security implications, as well as the possibility of taking down the IPv6 network.
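
One way to check whether such a rule would shadow the auto-generated ICMPv6 rules would be to dump the loaded pf ruleset and look at the ordering. A sketch, from the OPNsense shell:

    pfctl -sr | grep -in icmp    # loaded rules mentioning ICMP/ICMPv6, with their position in the ruleset
    pfctl -vsr | less            # verbose dump incl. evaluation/match counters per rule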
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE i226-v
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE i210

Quote from: meyergru on April 30, 2025, 09:38:02 PM
However, when I used the instructions on my own rig (1100/800, other ISP), I found that the Waveform Bufferbloat test stalled after the first step, taking forever "warming up". I am sure that the Shaper is the culprit, because when I disabled all rules, the test went through.

I experienced this once as well, when I was initially making changes.  I'm not sure what cleared it up precisely but I do recall rebooting both OPNsense and my ISP router box.  After some settling in, the Bufferbloat and speed tests were no longer stalling.

However, I did not try with manual queues.  In all my testing I always connected the ICMPv6 rules directly to the Control pipes w/ internal queues.
"The power of the People is greater than the people in power." - Wael Ghonim

Site 1 | N5105 | 8GB | 256GB | 4x 2.5GbE i226-v
Site 2 |  J4125 | 8GB | 256GB | 4x 1GbE i210

May 01, 2025, 11:44:38 AM #26 Last Edit: May 01, 2025, 11:50:55 AM by Seimus
@meyergru

Many thanks for further testing!

But let me ask if I understood correctly.
Quote from: meyergru on May 01, 2025, 11:13:29 AM
The difference seems to be that I enable PIE on the IP pipes and have some FQ_CoDel parameters set.

When you created the control plane shaper per the GitHub instructions, did you also change the configuration on your already working pipes, especially the tuned FQ_C and FQ_P parameters?

Because that's how I interpret it, and seeing the config, it seems the answer is yes.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Quote from: OPNenthu on May 01, 2025, 11:38:26 AM
Quote from: Seimus on April 30, 2025, 03:59:42 PM
PR (Draft) created

https://github.com/opnsense/docs/pull/706

Have a look.


Thanks @Seimus.  Looks good to me overall.  I added one comment in the PR.

Also interested in the suggestion there re: a pf rule vs. ipfw. I'm willing to try it, but I'm not sure about the implementation in pf using the experimental shaping option. Would we just need a single pass rule (direction in) on WAN for ICMPv6? I believe in pf it's from the perspective of the firewall, so both upstream and downstream requests would be seen as 'in' from the WAN perspective.

I'm thinking something like this?

Action: Pass
Interface: WAN
Direction: in
TCP/IP Version: IPv6
Protocol: IPV6-ICMP
Source: Any
Destination: Any
Traffic Shaping (rule direction): Download-Control-Pipe
Traffic Shaping (reverse direction): Upload-Control-Pipe

(directionality for pipe assignment is unclear in this case)

My concern with this is that it overrides the default/automatic rules in OPNsense regarding ICMPv6, which is not ideal. There are security implications, as well as the possibility of taking down the IPv6 network.

I think it's a good idea, not only to mention it but to create it as an optional approach within the docs. The traffic shaping option in pf can bind to either a pipe or a queue as well.

You raise a good question, and that's something that's been on my mind too. As stated in the docs:

https://docs.opnsense.org/manual/firewall.html#traffic-shaping-qos

Quote
Traffic shaping/rule direction > Force packets being matched by this rule into the configured queue or pipe

Traffic shaping/reverse direction > Force packets being matched in the opposite direction into the configured queue or pipe

Regarding overrides: the auto-rules are set within the floating section, which is evaluated before Interface or Group rules, so if those default rules are set to quick, they will always take precedence. So depending on where you set it, it should not override them; but the question then is whether it will even be applicable.

In regards to security implications: ICMPv6 needs to be allowed for IPv6 functionality. By design, the control plane of any protocol needs to be allowed in both directions. But to make such a rule tighter, the source or destination (depending on the rule direction) should be the FW/GW itself, because we are interested in the control plane of the network device itself.
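
In pf.conf terms, that tighter variant could look roughly like this (an untested sketch; OPNsense builds the real rule from the GUI, '$wan_if' is a placeholder macro, and the dnpipe numbers stand in for the control pipes from the configs above):

    # 'self' matches the firewall's own addresses; dnpipe (a, b) sends matching
    # traffic to pipe a and reverse-direction traffic to pipe b.
    pass in quick on $wan_if inet6 proto icmp6 from any to self dnpipe (10004, 10003)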

I guess we should ask the devs.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Quote from: OPNenthu on May 01, 2025, 11:43:08 AM
I experienced this once as well, when I was initially making changes.  I'm not sure what cleared it up precisely but I do recall rebooting both OPNsense and my ISP router box.  After some settling in, the Bufferbloat and speed tests were no longer stalling.

I had similar problems with FQ_C when I did tuning in the past; the results didn't make sense. Rebooting the OPN + cable modem usually fixed this... weird...

Quote from: OPNenthu on May 01, 2025, 11:43:08 AM
However, I did not try with manual queues.  In all my testing I always connected the ICMPv6 rules directly to the Control pipes w/ internal queues.

Can you try it?

It would be good to have consistent results.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Quote from: Seimus on May 01, 2025, 11:44:38 AM
@meyergru

Many thanks for further testing!

But let me ask if I understood correctly.
Quote from: meyergru on May 01, 2025, 11:13:29 AM
The difference seems to be that I enable PIE on the IP pipes and have some FQ_CoDel parameters set.

When you created the control plane shaper per the GitHub instructions, did you also change the configuration on your already working pipes, especially the tuned FQ_C and FQ_P parameters?

Because that's how I interpret it, and seeing the config, it seems the answer is yes.

Regards,
S.

Yes, I cleared the respective parts. I am at a loss as to what difference is actually causing the problem. Maybe it is easier to try to break my working setup by changing it towards your suggested setup step by step, to find the root cause, if it is not that occasional glitch both you and @OPNenthu saw.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+