IPv6 Control Plane with FQ_CoDel Shaping

Started by OPNenthu, April 26, 2025, 12:48:44 PM

To Mr. Täht, who tamed our networks.  🥃
N5105 | 8/250GB | 4xi226-V | Community

https://www.youtube.com/watch?v=XI9NG068TwI

I think I cracked it!  DOCSIS asymmetry was the main problem here and IPv6 was also masking an issue.

The Waveform test is IPv4-only, but I didn't realize that before.  The LibreQoS test supports both, though it was defaulting to IPv6.  When I toggled my connection to IPv4 and ran the LibreQoS test, I noticed that the IPv4 path performed significantly worse than the IPv6 path, which was the first clue as to why the Waveform test might be stalling.  So for the remainder of the exercise I focused on shaping IPv4 first, then made sure my adjustments carried over to IPv6 (they did).

The next thing I noticed, because I happen to have separate queues for TCP ACKs, is that >98% of the packets on the upload pipe during the tests were ACKs:

[Attachment: queue statistics showing the ACK share of the upload traffic]

I never really paid attention before, but it hit me: my upload pipe can't keep up with the size of the download pipe, and that causes ACK congestion on the upstream.  That's why no matter how much I tweaked the upload pipe it made no real difference: my upload is tiny compared to the download, so the tweaks were only marginal.

So the fix became clear: I needed to give up a lot of download bandwidth.  I played around and found that 600Mbps was the sweet spot to balance out my upload.  My current plan is 1000/35 (advertised) and measures about 1200/40 in practice due to over-provisioning.  That's a 28-30x difference between download and upload.
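Here's a back-of-envelope sketch of why the math works out this way.  All the numbers in it are assumptions on my part (typical ~1460-byte TCP segments, RFC 1122 delayed ACKs of one ACK per two segments, and ~84 bytes of wire time per minimum-size Ethernet ACK frame including preamble and inter-frame gap), so treat it as an estimate, not a measurement:

```python
# Back-of-envelope estimate of the upstream bandwidth consumed by TCP ACKs
# while the downstream is saturated.  Sizes below are assumptions for
# illustration, not values measured on my link.

MSS = 1460            # assumed TCP payload bytes per downstream segment
ACK_WIRE_BYTES = 84   # 64-byte minimum Ethernet frame + 20 bytes preamble/IFG
SEGS_PER_ACK = 2      # RFC 1122 delayed ACKs: one ACK per two segments

def ack_upload_mbps(down_mbps: float) -> float:
    """Upstream Mbps eaten by ACKs for a given saturated downstream rate."""
    segments_per_sec = down_mbps * 1e6 / 8 / MSS
    acks_per_sec = segments_per_sec / SEGS_PER_ACK
    return acks_per_sec * ACK_WIRE_BYTES * 8 / 1e6

print(f"1200 Mbps down -> {ack_upload_mbps(1200):.1f} Mbps of ACKs")
print(f" 600 Mbps down -> {ack_upload_mbps(600):.1f} Mbps of ACKs")
```

With those assumptions, a full 1200Mbps downstream wants roughly 35Mbps of upstream just for ACKs, which is essentially my entire upload pipe, while 600Mbps leaves about half of it free.  That matches the sweet spot I found by trial and error.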

Maybe this can help some others with significant bandwidth asymmetries.  Bufferbloat tuning isn't only about the individual pipes; it's also about the ratio of their bandwidths.  The pros already know; this is just an enthusiast having an "aha!" moment ;)

Today at 10:40:08 AM #62 Last Edit: Today at 10:48:02 AM by meyergru
You are correct that there has to be a certain relationship between up- and downstream bandwidth for traffic to flow at all, because the ACK stream takes up upstream bandwidth.

However, I measured during the downstream part of the Waveform test and got these results:

[Attachment: measurement taken during the downstream part of the Waveform test]

This shows 4 GByte of downstream data and ~130 MByte upstream, of which 80% was TCP ACKs, so roughly 3.25% of the downstream bandwidth was needed for the upstream. AFAIR, that is about what is to be expected: a theoretical worst case of ~4% and a more practical ~2% with delayed ACKs (RFC 1122).
AFAIK, that should also explain your rate of 1000/35 Mbps: your ISP wants you to have the full 1000 Mbps downstream, but grants only the bare necessity for the upstream, with nothing left for server applications. Some other providers also offer only a small upstream even when there is no technical necessity to do so, as there is with DOCSIS.
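Spelling out that arithmetic (a sketch using my rounded figures from above):

```python
# Recompute the upstream/downstream ratios from the rounded test numbers.
down_mb = 4000       # ~4 GByte of downstream data during the test
up_mb = 130          # ~130 MByte upstream in the same window
ack_share = 0.80     # ~80% of the upstream bytes were TCP ACKs

upstream_ratio = up_mb / down_mb          # total upstream vs. downstream
ack_ratio = up_mb * ack_share / down_mb   # ACK-only share of the downstream

print(f"upstream/downstream: {upstream_ratio:.2%}")  # 3.25%
print(f"ACK-only ratio:      {ack_ratio:.2%}")       # 2.60%
```

The ACK-only share of ~2.6% lands right between the practical ~2% and the theoretical ~4% worst case.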

So, in theory, you should be able to use the full 1000 Mbps downstream, not only 600?

I can imagine two things that may shift the results:

1. With TCP ACKs, you can have pure ACKs and SACKs, so the number of ACK packets can be severely lower than the number of data packets. That is obviously the case in my test. You did not show the downstream part of your test, so we cannot know whether SACK was used, which would depend on the client.

2. Regardless of the net data being transferred, pure ACK packets are much shorter than data packets, so they incur a proportionally larger overhead; the net data results may therefore not mirror the real bandwidths used.
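To put point 2 in numbers (an illustration with assumed sizes, not values from either of our tests): a pure 40-byte IPv4 ACK gets padded to the 64-byte minimum Ethernet frame and then carries preamble and inter-frame gap on the wire, while a full-size data packet barely notices the same framing.

```python
# Wire bytes actually occupied by a packet vs. its IP-level size.
# Frame and overhead sizes are assumptions for illustration.
MIN_FRAME = 64   # minimum Ethernet frame (header + payload + FCS)

def wire_bytes(ip_bytes: int) -> int:
    """Total wire occupancy of one packet, including framing."""
    frame = max(ip_bytes + 14 + 4, MIN_FRAME)  # Ethernet header + FCS, padded
    return frame + 8 + 12                      # + preamble + inter-frame gap

print(f"pure ACK (40 IP bytes):   {wire_bytes(40)} wire bytes, "
      f"{wire_bytes(40) / 40:.2f}x inflation")        # 84 bytes, 2.10x
print(f"data (1500 IP bytes):     {wire_bytes(1500)} wire bytes, "
      f"{wire_bytes(1500) / 1500:.3f}x inflation")    # 1538 bytes, 1.025x
```

So a counter that sums IP-level bytes understates the wire occupancy of an ACK stream by more than 2x, while data packets are almost unaffected.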
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 450 up, Bufferbloat A+