20.1 Legacy Series / Re: Bufferbloat went to "F"
« on: April 27, 2020, 04:05:29 pm »
@AhnHEL Thanks for the link and info! Unfortunately I still receive an "F".
I have an AP running OpenWrt 19.07.2. It seems to me (being quite ignorant on this topic) that a reasonable test is to enable/disable SQM on the AP and see whether that makes a difference. Sure enough, the report goes from an "F" to an "A". Results with SQM on the AP toggled on/off (toggled roughly as sketched after the links below):
on
http://www.dslreports.com/speedtest/62899222
http://www.dslreports.com/speedtest/62899528
off
http://www.dslreports.com/speedtest/62899335
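For reference, toggling SQM on the AP looks roughly like this (sqm-scripts via uci; the section index @queue[0] is an assumption based on my own config, so treat it as a sketch):

# on the OpenWrt AP: enable the first SQM instance and restart the service
uci set sqm.@queue[0].enabled='1'
uci commit sqm
/etc/init.d/sqm restart
# set enabled='0' and restart again to turn it back off for the comparison run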
This leads me to believe the issue is with OPNsense and/or my configuration. Since I assume others are getting decent bufferbloat grades, my own configuration seems the more likely culprit. I've tried resetting and following the instructions linked above, but nothing changed.
Any #bufferbloat experts out there who know how to debug?
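One thing I can check locally is latency and the scheduler's drop counters while the link is loaded by a speed test; a rough sketch from the OPNsense shell (the ping target is just an example host):

# run during a speed test: latency under load plus FQ_CODEL stats once a second
ping -c 60 8.8.8.8 &
while true; do ipfw sched show; sleep 1; done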
In case this helps...
# ipfw sched show
10000: 500.000 Mbit/s 0 ms burst 0
q75536 50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
sched 10000 type FQ_CODEL flags 0x0 0 buckets 1 active
FQ_CODEL target 5ms interval 100ms quantum 1000 limit 1000 flows 1024 ECN
Children flowsets: 10000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 1 1008 0 0 0
10001: 20.000 Mbit/s 0 ms burst 0
q10001 50 sl. 0 flows (1 buckets) sched 10001 weight 10 lmax 0 pri 0 droptail
sched 10001 type FQ_CODEL flags 0x0 0 buckets 1 active
FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 NoECN
Children flowsets: 10002 10001
0 ip 0.0.0.0/0 0.0.0.0/0 1 105 0 0 0
# ipfw queue show
q10000 50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 droptail
q10001 50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 droptail
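In case it matters, my understanding is that the shaper rules the GUI generates correspond roughly to ipfw/dummynet commands like these (an untested sketch reconstructed from the output above, not necessarily what OPNsense literally runs):

# download: 500 Mbit/s pipe with an FQ_CODEL scheduler, ECN on
ipfw pipe 10000 config bw 500Mbit/s
ipfw sched 10000 config type fq_codel target 5ms interval 100ms quantum 1000 limit 1000 flows 1024 ecn
ipfw queue 10000 config sched 10000 weight 100
# upload: 20 Mbit/s pipe with an FQ_CODEL scheduler, ECN off
ipfw pipe 10001 config bw 20Mbit/s
ipfw sched 10001 config type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 noecn
ipfw queue 10001 config sched 10001 weight 100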