What I learnt about traffic shaping after 3 days of speedtesting

Started by dza, April 20, 2025, 06:17:20 AM

Hardware: Intel N100 CPU, 4x i226v interfaces

Tunables:
# Too high (3500+) would cause jitter or long connection-establishment delays
net.inet6.ip6.intr_queue_maxlen=3000
net.inet.ip.intr_queue_maxlen=3000
# Too high (16000+) would cause jitter or long connection-establishment delays
hw.igc.max_interrupt_rate=12000 # boot-time, needs reboot
net.inet.tcp.soreceive_stream=1 # boot-time
net.isr.maxthreads=-1 # boot-time
net.inet.rss.enabled=1 # boot-time
net.inet.rss.bits=2 # boot-time
net.isr.bindthreads=1 # boot-time
hw.igc.rx_process_limit=-1
net.isr.dispatch=direct
hw.ix.flow_control=0
dev.igc.0.fc=0
dev.igc.1.fc=0
dev.igc.2.fc=0
dev.igc.3.fc=0
dev.igc.4.fc=0
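
A quick way to sanity-check what actually got applied after a reboot (a sketch, assuming an OPNsense/FreeBSD shell; same OIDs as above):

sysctl net.isr.dispatch net.isr.bindthreads net.isr.maxthreads
sysctl net.inet.rss.enabled net.inet.rss.bits
sysctl hw.igc.max_interrupt_rate hw.igc.rx_process_limit
sysctl dev.igc.0.fc dev.igc.1.fc dev.igc.2.fc dev.igc.3.fc

In OPNsense, set these under System > Settings > Tunables so the boot-time ones land in loader.conf and survive upgrades.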

My line is sold as 300 Mbps; both upload and download peak stable at 311 Mbps. It's a GPON/ONT setup.

I use an fqcodel pipe on WAN download with bandwidth=295 (Mbps), accompanied by a WAN-Download-Queue with weight=100 and a WAN-Download-Any-Rule attached to WAN-Download-Queue.
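
For context: the OPNsense shaper is a frontend to ipfw/dummynet, so this pipe + queue + rule chain is conceptually something like the following at the CLI (a sketch only; the numbers and the igc0 WAN interface are placeholders, the GUI manages the real rule set):

ipfw pipe 1 config bw 295Mbit/s type fq_codel  # WAN-Download pipe
ipfw queue 1 config pipe 1 weight 100          # WAN-Download-Queue
ipfw add 100 queue 1 ip from any to any via igc0 in  # WAN-Download-Any-Rule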

Using fqcodel on the upload pipe only resulted in worse throughput and latency, so I assume the ISP is already doing some sort of shaping on the upload. I also believe this potential ISP shaping is why I had to set the bandwidth to the theoretical speedtest max (311 Mbps) and NOT the usual 80-85%. The line was unnaturally rock-stable at 311 Mbps, almost as if it targets that rate deliberately. Reducing the bandwidth limit in any way just made latency worse and over-compensated on throughput (a 280 Mbps limit would hit only 180-200, for instance), which unfortunately led to a lot of trial and error.

Settings for my upload pipe:
bandwidth=311 # Mbps
scheduler=qfq # should be more lightweight than wfq; didn't spot a difference
enable_codel=y
codel_target=14 # ms
codel_interval=140 # ms
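
In dummynet terms that upload pipe roughly corresponds to the following (again a sketch, not what the GUI literally generates):

ipfw pipe 2 config bw 311Mbit/s type qfq codel target 14ms interval 140ms

i.e. QFQ as the scheduler with the CoDel AQM layered on top, instead of the combined fq_codel.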

Read https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html#target-interval and don't just copy my settings. As a rule of thumb, the interval should roughly match your worst-case RTT, with the target around 5-10% of that (140 ms / 14 ms here). I also found that target+interval did very little when used with only two fqcodel pipes and no queues+rules. Only after I tried a different scheduler on the upload pipe did I make progress, and these settings finally kicked in.

For all my queues below I have (sketched in ipfw terms after this list):
source=$MY_SUBNETS_CIDR_AND_WAN_IP + direction=out for upload pipe rules.
destination=$MY_SUBNETS_CIDR_AND_WAN_IP + direction=in for download pipe rules.
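
In ipfw terms the matching looks roughly like this (a sketch; 192.0.2.0/24 and 203.0.113.7 stand in for my real subnets and WAN IP, queue numbers invented):

ipfw add queue 2 ip from 192.0.2.0/24,203.0.113.7 to any out  # upload pipe rules
ipfw add queue 3 ip from any to 192.0.2.0/24,203.0.113.7 in   # download pipe rules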

My upload pipe with the quick fair queueing (qfq) scheduler is combined with
* WAN-Upload-ICMP-queue weight=100, WAN-Upload-ICMP-Rule
* WAN-Download-ICMP,DNS,NTP,DHCP-queue weight=100, WAN-Download-ICMP,DNS,NTP,DHCP-Rule.

Next, the catch-all down-prioritized queues (see the sketch after this list):
* WAN-Download-Rule weight=1
* WAN-Upload-Any-queue weight=1, WAN-Upload-Any-Rule
* WAN-Download-Any-queue weight=1, WAN-Download-Any-Rule
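
What the weights mean in practice: under one scheduler, a weight-100 queue gets 100x the share of a weight-1 queue whenever both are backlogged, while an idle high-weight queue costs the others nothing. A sketch of the upload side (invented numbers; DNS/NTP ports as examples):

ipfw queue 10 config pipe 2 weight 100  # ICMP/DNS/NTP/DHCP priority
ipfw queue 11 config pipe 2 weight 1    # catch-all, down-prioritized
ipfw add queue 10 icmp from any to any out
ipfw add queue 10 udp from any to any 53,123 out
ipfw add queue 11 ip from any to any out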

I have never had such fast internet, snappy browsing, and such stable loaded latency.

I recommend and use https://speed.cloudflare.com as it is the most advanced in terms of both data and function. It tests download and upload with pauses in between, which makes for a really good real-world speedtest that highlights all the issues for fine-tuning.

My results under load:
loaded latency: DL 9 ms, UL 9 ms; jitter: DL 0.684 ms, UL 2.94 ms

Flood-pinging under load, while qBittorrent peaks at 30-35 MB/s across multiple torrents with 1200 connections (note: iputils ping needs root for intervals below 0.2 s):
ping -i0.002 -c1000 1.1.1.1

ping results (under load):

1000 packets transmitted, 1000 received, 0% packet loss, time 9069ms
rtt min/avg/max/mdev = 8.531/9.070/13.014/0.381 ms, pipe 2

A 9 ms average and 13 ms max, while qBittorrent is maxing out the line and when unloaded, unshaped RTT is 14 ms, is pretty darn good! Anyway, these are my findings after speedtesting and tuning for 3 days straight, lol. Enjoy!

Thanks! You helped get my speeds back up to 1.1-1.2 Gbps with the pipe/queue/rule settings I was already using, but that 25.1.5_5 somehow broke, coming from 25.1.4_1.

Intel N5105, same configuration of 4x i226v ports.

Quote from: kode54 on April 21, 2025, 10:14:18 AMThanks! You helped get my speeds back up to 1.1-1.2 Gbps with the pipe/queue/rule settings I was already using, but that 25.1.5_5 somehow broke, coming from 25.1.4_1.

Intel N5105, same configuration of 4x i226v ports.

You're welcome! Did you also need a separate scheduler pipe setup?

Quote from: dza on April 21, 2025, 11:46:23 AM
Quote from: kode54 on April 21, 2025, 10:14:18 AMThanks! You helped get my speeds back up to 1.1-1.2 Gbps with the pipe/queue/rule settings I was already using, but that 25.1.5_5 somehow broke, coming from 25.1.4_1.

Intel N5105, same configuration of 4x i226v ports.

You're welcome! Did you also need a separate scheduler pipe setup?

I do not need to schedule my bandwidth usage, as I have no usage caps, only a speed limit. Though I probably have such things to look forward to, considering the current political climate in the US with regard to Net Neutrality.