Suggestion for Bufferbloat fix. Fibre to the Home. No PPPoE.

Started by cookiemonster, December 01, 2025, 07:09:25 PM

Quote from: OPNenthu on December 02, 2025, 09:21:40 PM
Linux: https://www.waveform.com/tools/bufferbloat?test-id=964b7180-4a1f-4eed-a114-1dfb613e9b63
Win10: https://www.waveform.com/tools/bufferbloat?test-id=edad2d94-d2c8-41e1-8b63-a31eeb2539bb

These results are not bad, and if they are consistent I would call it a win.

But as mentioned by @meyergru, you may see differences due to the TCP congestion control algorithm. On Win10 I think the default is CTCP or CUBIC.
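If you want to confirm what your Win10 box is actually using, something along these lines should show it (a quick sketch from memory; template names and output can vary by build):

netsh interface tcp show supplemental
(or, in PowerShell)
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider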

I personally run BBR on Linux.

bat /etc/sysctl.d/bbr.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
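For anyone copying that file: you still need to load it, and can then check that BBR is available and active. Roughly like this (standard sysctl commands, nothing OPNsense-specific; older kernels may need the module loaded first):

# load the BBR module if the kernel doesn't autoload it
sudo modprobe tcp_bbr
# re-read all sysctl configuration, including /etc/sysctl.d/bbr.conf
sudo sysctl --system
# should list bbr among the available algorithms
sysctl net.ipv4.tcp_available_congestion_control
# should now report bbr
sysctl net.ipv4.tcp_congestion_control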

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

It was worth a try guys, but I'm not seeing a difference between BBR and CUBIC on my system.  At least the loaded latency in all cases doesn't go above +10ms.

I have been relentlessly trying all sorts of guides and settings for bufferbloat. I have finally found a solution (1 gig fiber PPPoE) and am now getting A ratings at waveform (https://www.waveform.com/tools/bufferbloat)

I will call the guide I used "Dirty but effective".
That is because it says to use an FQ-CoDel quantum of 3000.
But when I change the quantum to the described 1500 (the MTU of the connection) I get a B instead of an A.

And in addition to the guide above, I added the control plane / ICMP part as described elsewhere on this forum.

So although I do not understand why, and a quantum of 3000 is neither correct nor recommended, I have to say it works great over here.
Deciso DEC850v2

Quote from: RamSense on Today at 07:37:11 AM
I will call the guide I used "Dirty but effective"

The creator of that article doesn't seem to know what they are doing.
There are several misstatements about Pipes and Queues.
There are several misconfigurations in the Pipe > BW, Queues, Quantum settings.
There are several misconfigurations in the Queue > MASK, ECN, Enable CoDel settings.

And so on.

Some of the configured features don't do anything and some of them do have an impact, like MASK, Quantum & BW. And enabling FQ-CoDel in the scheduler while also enabling CoDel on the Queue means the CoDel algorithm effectively runs twice, fighting itself.

In layman's terms, Quantum controls how many bytes can leave a flow queue (FQ-CoDel's internal per-flow queue) in one go. The reason you want to set Quantum to your MTU size when your bandwidth is above 100 Mbit is to serve one packet per flow per round (there can be thousands of flows; 1 flow = 1 internal FQ-CoDel queue). A larger Quantum can starve smaller packets, and too small a Quantum can starve bigger packets. Both are relative to the configured interface MTU.
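As a concrete illustration of that knob (plain Linux tc here, not the OPNsense shaper, and eth0 is just a placeholder interface name):

# fq_codel with quantum set to one full-size Ethernet frame (1500 MTU + 14 byte header)
sudo tc qdisc replace dev eth0 root fq_codel quantum 1514 ecn
# the guide's value, for comparison - lets each flow dequeue roughly two full frames
# per round, which tends to penalize flows made of small packets
sudo tc qdisc change dev eth0 root fq_codel quantum 3000
# per-qdisc drop and ECN-mark counters, useful to see which setting is actually doing work
tc -s qdisc show dev eth0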


Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD