IPv6 Control Plane with FQ_CoDel Shaping

Started by OPNenthu, April 26, 2025, 12:48:44 PM

To Mr. Täht, who tamed our networks.  🥃
N5105 | 8/250GB | 4xi226-V | Community

https://www.youtube.com/watch?v=XI9NG068TwI

I think I cracked it!  DOCSIS asymmetry was the main problem here and IPv6 was also masking an issue.

The Waveform test is IPv4-only, but I didn't realize that before.  The LibreQoS test supports both, but it was defaulting to IPv6.  When I toggled my connection to IPv4 and ran the LibreQoS test, I noticed that the IPv4 path was performing significantly worse than the IPv6 path, which was the first clue as to why the Waveform test might have been stalling.  So for the remainder of the exercise I focused on shaping IPv4 first, then made sure my adjustments carried over to IPv6 (they did).

The next thing I noticed, because I happen to have separate queues for TCP ACKs, is that >98% of the packets on the upload pipe during the tests were ACKs:

[attachment: screenshot of the upload queue statistics]

I never really paid attention before, but it hit me: my upload pipe can't keep up with the size of the download pipe, and that causes ACK congestion on the upstream.  That's why no matter how much I tweaked the upload pipe it made no difference: the upload is tiny compared to the download, so the tweaks made only marginal differences.

So the fix became clear: I needed to give up a lot of download bandwidth.  I played around and found that 600 Mbps was the sweet spot to balance out my upload.  My current plan is 1000/35 (advertised) and measures about 1200/40 in practice due to over-provisioning.  That's a 28-30x difference.
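As a sanity check on that ratio, here is a back-of-the-envelope model of how much upstream the ACK stream alone consumes.  All the constants are assumptions (Ethernet framing, 1460-byte MSS, 64-byte minimum frames for pure ACKs, one delayed ACK per two segments per RFC 1122), not measurements:

```python
# Rough ACK-bandwidth model; the frame sizes and delayed-ACK ratio
# below are assumptions, not measurements.
DATA_FRAME_BYTES = 1514   # full-size TCP segment on Ethernet (MSS 1460)
ACK_FRAME_BYTES = 64      # pure ACK, Ethernet minimum frame size
SEGMENTS_PER_ACK = 2      # delayed ACKs: one ACK per two segments (RFC 1122)

def ack_upload_mbps(download_mbps: float) -> float:
    """Upstream bandwidth consumed by pure ACKs for a given downstream rate."""
    segments_per_sec = download_mbps * 1e6 / 8 / DATA_FRAME_BYTES
    acks_per_sec = segments_per_sec / SEGMENTS_PER_ACK
    return acks_per_sec * ACK_FRAME_BYTES * 8 / 1e6

for down in (600, 1000, 1200):
    need = ack_upload_mbps(down)
    print(f"{down} Mbps down needs ~{need:.1f} Mbps of ACKs "
          f"({100 * need / down:.1f}% of downstream)")
```

By this naive model only ~2% of the downstream comes back as ACKs, so the fact that my link needed far more headroom suggests something else (DOCSIS per-packet overhead, shaper overhead) is eating into the upstream as well.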

Maybe this can help others with significant bandwidth asymmetries.  Tuning out bufferbloat isn't only about the individual pipes; it's also about the ratio of their bandwidths.  The pros already know; this is just an enthusiast having an "aha!" moment ;)

March 31, 2026, 10:40:08 AM #62 Last Edit: March 31, 2026, 10:48:02 AM by meyergru
You are correct that there is a certain ratio between up- and downstream that must be maintained for traffic to flow at all. That is because the ACK stream takes up upstream bandwidth.

However, I measured during the downstream part of the Waveform test and got these results:

[attachment: measurement screenshot]

This shows 4 GByte of downstream data and ~130 MByte upstream, of which 80% was TCP ACKs, so roughly 3.25% of the downstream is needed for upstream. AFAIR, that is about what is to be expected: a theoretical worst case of ~4% and a more practical 2% (with delayed ACKs per RFC 1122).
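Spelled out as a quick check (numbers taken from the measurement above):

```python
# Reproducing the ratio from the measurement above:
# ~4 GByte downstream, ~130 MByte upstream, of which ~80% were TCP ACKs.
down_bytes = 4e9
up_bytes = 130e6
ack_share = 0.80  # fraction of the upstream that was pure ACK traffic

up_ratio = up_bytes / down_bytes                 # total upstream vs. downstream
ack_ratio = (up_bytes * ack_share) / down_bytes  # ACK traffic vs. downstream

print(f"upstream / downstream:    {100 * up_ratio:.2f}%")   # 3.25%
print(f"ACK traffic / downstream: {100 * ack_ratio:.2f}%")  # 2.60%
```

Both values sit inside the 2-4% band expected from delayed-ACK behavior.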
AFAIK, that should also explain your rate of 1000/35 Mbps: your ISP wants you to have the full 1000 Mbps downstream, but only the bare necessity for the upstream, with nothing left for server applications. There are more providers that offer only a small upstream even when there is no technical necessity for it, as there is with DOCSIS.

So, in theory, you should be able to use the full 1000 Mbps downstream, not only 600?

I can imagine two things that may shift the results:

1. With TCP ACKs, you can have pure ACKs and SACKs, so the number of ACK packets can be severely lower than the number of data packets. That was obviously the case in my test. You did not show the downstream part of your test, so we cannot know whether SACK was used, which depends on the client.

2. Regardless of the net data being transferred, pure ACK packets are far shorter than data packets, so they incur a larger relative overhead; the net data results may not mirror the real bandwidths used.
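To put numbers on point 2 (using the same assumed Ethernet frame sizes as above, not measured values): at an equal bit rate, a pipe full of pure ACKs carries vastly more packets per second than a pipe full of data frames, and packets per second is what the shaper and the modem actually have to service.

```python
# Packet-rate comparison at the same bit rate; 1514 and 64 bytes are
# assumed Ethernet frame sizes for full data segments and pure ACKs.
def packets_per_sec(rate_mbps: float, frame_bytes: int) -> float:
    return rate_mbps * 1e6 / 8 / frame_bytes

UP_MBPS = 35  # the advertised upstream from the posts above
print(f"{UP_MBPS} Mbps of 1514-byte data frames: {packets_per_sec(UP_MBPS, 1514):7.0f} pps")
print(f"{UP_MBPS} Mbps of   64-byte ACK frames:  {packets_per_sec(UP_MBPS, 64):7.0f} pps")
```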
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 450 up, Bufferbloat A+

March 31, 2026, 10:27:48 PM #63 Last Edit: Today at 03:18:34 AM by OPNenthu
Here is the full image of the cropped one in my earlier post.  This was before I reduced the download pipe, so at this point I was experiencing stalls.  Note that even setting the pipe somewhat lower, for example 850 Mbps, did not resolve the stalling.

[attachment: full screenshot]

My plan wasn't originally 1000/35.  It was something like 800/35, IIRC, but while on a support call a couple months ago the agent offered me a free increase to "gigabit."  Unfortunately they only increased the downstream. 

I was thinking that the original plan was better balanced overall, though what you are saying is that it doesn't matter: all I need is for my upstream to be 2-4% of the downstream to maintain stability.

I'm not sure what my numbers reveal, but following your formula it would appear that the theoretical maximum of 4% is being exceeded by at least a factor of 2.
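Putting my own numbers against the 2-4% rule (40 Mbps is my measured upstream; the stable/stalls labels are simply the outcomes I observed, not anything computed):

```python
# Upstream-to-downstream ratio for each download cap I tried;
# the outcomes are my observations, not model output.
MEASURED_UP_MBPS = 40

def upstream_share_pct(download_mbps: float) -> float:
    """Upstream as a percentage of the downstream cap."""
    return 100 * MEASURED_UP_MBPS / download_mbps

for down, outcome in ((600, "stable"), (850, "stalls"), (1200, "stalls")):
    print(f"{down} Mbps down: upstream is {upstream_share_pct(down):.1f}% -> {outcome}")
```

So stability only arrived once the upstream was closer to 7% of the downstream, well above the theoretical 4%.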

March 31, 2026, 11:17:02 PM #64 Last Edit: Today at 03:24:40 AM by OPNenthu
EDIT: unfortunately I ran into the Waveform test stalls again today, so I was a little premature.  This happened despite the other speed tests giving me A-A+ results.

Either I haven't fully resolved some bottleneck, or what @dinguz said about the Waveform servers being overloaded could be true.