My use case is to restrict certain interfaces to otherwise-idle (leftover) bandwidth only, limiting both upload and download.
I managed to get it working quite well for a couple of weeks. Today, however, I found that the download queues have become defunct. On the status page I can see that traffic is sent to the correct queues, but strangely the weight value seems to be simply ignored: weight 100 and weight 1 share bandwidth equally.
Curiously, the upload direction is still working correctly.
Any hints on what is going wrong?
Here is what /usr/local/etc/ipfw.rules contains regarding the shaper config:
pipe 10002 config bw 205Mbit/s type wf2q+
pipe 10003 config bw 40Mbit/s type wf2q+
pipe 10004 config bw 4Gbit/s type wf2q+
#======================================================================================
# define dummynet queues
#======================================================================================
queue 10002 config pipe 10002 mask dst-ip 0xffffffff weight 100
queue 10003 config pipe 10002 mask dst-ip 0xffffffff weight 1
queue 10004 config pipe 10002 mask dst-ip 0xffffffff weight 15
queue 10005 config pipe 10003 mask src-ip 0xffffffff weight 100
queue 10006 config pipe 10003 mask src-ip 0xffffffff weight 3
queue 10007 config pipe 10004 mask dst-ip 0xffffffff weight 100
#======================================================================================
# traffic shaping section, authorized traffic
#======================================================================================
add 60000 return via any
add 60003 pipe 10004 ip from 185.65.134.82 to any src-port any dst-port any via pppoe1 // e782d702-cd1d-4548-a892-d87116f5a213 wan: Unbound
add 60005 pipe 10004 ip from any to 185.65.134.82 src-port any dst-port any via pppoe1 // 289ad008-0777-4e26-97f6-1a5f1f586c42 wan: Unbound
add 60008 queue 10006 ip from 192.168.66.0/24 to any src-port any dst-port any out via pppoe1 // 39e87e76-5f89-4341-8943-258e167c802b wan: DSLUpLow
add 60009 queue 10004 ip from any to 192.168.68.0/24 src-port any dst-port any in via pppoe1 // af4ed561-013f-4dea-8080-5e584b21cad6 wan: DSLDownMedium
add 60010 queue 10003 ip from any to 192.168.67.0/24 src-port any dst-port any in via pppoe1 // b1dc32bc-f4fd-4c97-9cbf-73bac71e4135 wan: DSLDownLow
add 60011 queue 10002 ip from any to any src-port any dst-port any in via pppoe1 // bbf06b0b-3667-41c6-b0d4-4463d6ac587f wan: DSLDownHigh
add 60012 queue 10005 ip from any to any src-port any dst-port any out via pppoe1 // edd3e781-fa78-4e6f-8651-57cca5765a58 wan: DSLUpHigh
add 60015 queue 10003 ip from any to 192.168.67.0/24 src-port any dst-port any in via wg2 // da64043b-fb74-4054-b89c-2920352f3f94 opt14: DSLDownLow
add 60016 queue 10006 ip from 192.168.67.0/24 to any src-port any dst-port any in via wg2 // 520f8d40-36c1-4d70-8ccf-0b627d53602a opt14: DSLUpLow
add 60017 queue 10006 ip from 192.168.67.0/24 to any src-port any dst-port any recv vlan012 xmit wg2 // 12dbadab-02ff-4bff-83f0-8da1d3c97a37 opt13 -> opt14: DSLUpLow
add 60018 queue 10003 ip from any to 192.168.67.0/24 src-port any dst-port any xmit vlan012 recv wg2 // 7f9ad30d-0260-449f-9acb-508b0134a594 opt14 -> opt13: DSLDownLow
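For reference, here is what I would expect based on the weights. WF2Q+ only enforces weights while queues are actually backlogged; the numbers below are my own rough arithmetic, not measurements. With all three download queues on pipe 10002 saturated (weights 100 + 15 + 1 = 116):
queue 10002 (weight 100): 100/116 * 205 Mbit/s ≈ 176.7 Mbit/s
queue 10004 (weight 15):   15/116 * 205 Mbit/s ≈  26.5 Mbit/s
queue 10003 (weight 1):     1/116 * 205 Mbit/s ≈   1.8 Mbit/s
Instead, the weight 100 and weight 1 queues each get roughly half of the pipe.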
Just guessing, but could it be that the port was disconnected? I mean, unplugged.
To be frank, I don't understand what you're trying to say. For data to flow at all, everything required must be connected, right? That is certainly the case.
When you say 100/1 are shared equally,
you mean queues 10002 & 10003 on pipe 10002?
pipe 10002 config bw 205Mbit/s type wf2q+
queue 10002 config pipe 10002 mask dst-ip 0xffffffff weight 100
queue 10003 config pipe 10002 mask dst-ip 0xffffffff weight 1
add 60010 queue 10003 ip from any to 192.168.67.0/24 src-port any dst-port any in via pppoe1 // b1dc32bc-f4fd-4c97-9cbf-73bac71e4135 wan: DSLDownLow
add 60011 queue 10002 ip from any to any src-port any dst-port any in via pppoe1 // bbf06b0b-3667-41c6-b0d4-4463d6ac587f wan: DSLDownHigh
Can you run these commands as root:
ipfw pipe show
ipfw queue show
ipfw sched show
And can you show screenshots of those rules from the GUI?
Regards,
S.
Quote from: schmuessla on June 12, 2024, 08:27:21 PM
strangely the weight value seems to be simply ignored: weight 100 and weight 1 share bandwidth equally.
I'm experiencing your exact issue; the only difference is that (for me) the weight doesn't seem to have any effect at all, in either direction (upload/download).
I'm using WFQ as scheduler for 2 pipes (down and up).
For each pipe I've defined 5 queues (Platinum, Gold, Silver, Bronze and Copper) and assigned specific traffic to each of them (i.e. TCP ACKs, DNS and ping go in Platinum, work devices in Gold, other laptops and desktops in Silver, etc.).
I've tested extensively with traffic generated simultaneously from different queues, with very bizarre results (e.g. a download in the Copper queue gets more bandwidth than the very same download running in the Gold queue, etc.).
I'm, like you, on the verge of insanity :) (on paper this should be very easy...).
I wish the CBQ scheduler would be added at some point... ::)
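For context, my config follows the same pattern as the one above. Roughly like this (pipe numbers, bandwidths and weights here are illustrative, not my exact values):
pipe 1 config bw 100Mbit/s type wf2q+
pipe 2 config bw 20Mbit/s type wf2q+
# Platinum: TCP ACKs, DNS, ping
queue 11 config pipe 1 weight 100
# Gold: work devices
queue 12 config pipe 1 weight 50
# Silver: other laptops and desktops
queue 13 config pipe 1 weight 25
# Bronze
queue 14 config pipe 1 weight 10
# Copper
queue 15 config pipe 1 weight 1
plus matching "add ... queue" rules steering each class of traffic into its queue, and the same again on pipe 2 for upload.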
Quote from: Seimus on June 14, 2024, 12:25:12 PM
When you say 100/1 are shared equally,
you mean queues 10002 & 10003 on pipe 10002?
Exactly.
root@OPNsense:~ # ipfw pipe show
10004: 4.000 Gbit/s 0 ms burst 0
q141076 50 sl. 0 flows (1 buckets) sched 75540 weight 0 lmax 0 pri 0 droptail
sched 75540 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 241 172124 0 0 0
10002: 205.000 Mbit/s 0 ms burst 0
q141074 50 sl. 0 flows (1 buckets) sched 75538 weight 0 lmax 0 pri 0 droptail
sched 75538 type FIFO flags 0x0 0 buckets 0 active
10003: 40.000 Mbit/s 0 ms burst 0
q141075 50 sl. 0 flows (1 buckets) sched 75539 weight 0 lmax 0 pri 0 droptail
sched 75539 type FIFO flags 0x0 0 buckets 0 active
root@OPNsense:~ # ipfw queue show
q10006 50 sl. 2 flows (256 buckets) sched 10003 weight 3 lmax 0 pri 0 droptail
mask: 0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
148 ip 192.168.67.30/0 0.0.0.0/0 16764 16642285 0 0 0
162 ip 192.168.67.5/0 0.0.0.0/0 2 138 0 0 0
q10007 50 sl. 0 flows (256 buckets) sched 10004 weight 100 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q10004 50 sl. 0 flows (256 buckets) sched 10002 weight 15 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q10005 50 sl. 13 flows (256 buckets) sched 10003 weight 100 lmax 0 pri 0 droptail
mask: 0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
0 ip 0 ::/0 ::/0 4484 579580 0 0 0
26 ip 84.138.167.72/0 0.0.0.0/0 1034 72419 0 0 0
98 ip 192.168.106.101/0 0.0.0.0/0 5 244 0 0 0
98 ip 192.168.108.101/0 0.0.0.0/0 6 430 0 0 0
100 ip 192.168.85.102/0 0.0.0.0/0 9 632 0 0 0
102 ip 192.168.106.103/0 0.0.0.0/0 15 1795 0 0 0
120 ip 192.168.106.104/0 0.0.0.0/0 9 2237 0 0 0
126 ip 192.168.106.107/0 0.0.0.0/0 2425 185719 0 0 0
140 ip 192.168.85.146/0 0.0.0.0/0 154 72651 0 0 0
162 ip 192.168.51.5/0 0.0.0.0/0 14 3380 0 0 0
162 ip 192.168.67.5/0 0.0.0.0/0 235 17252 0 0 0
174 ip 192.168.51.3/0 0.0.0.0/0 6281 693157 0 0 0
176 ip 192.168.85.140/0 0.0.0.0/0 6 313 0 0 0
q10002 50 sl. 12 flows (256 buckets) sched 10002 weight 100 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
0 ip 0 ::/0 ::/0 3876 705802 0 0 0
53 ip 0.0.0.0/0 192.168.106.101/0 1 128 0 0 0
53 ip 0.0.0.0/0 192.168.108.101/0 3 242 0 0 0
54 ip 0.0.0.0/0 192.168.85.102/0 9 816 0 0 0
55 ip 0.0.0.0/0 192.168.106.103/0 18 8423 0 0 0
56 ip 0.0.0.0/0 192.168.106.104/0 11 2640 0 0 0
59 ip 0.0.0.0/0 192.168.106.107/0 25839 38411395 0 0 0
83 ip 0.0.0.0/0 192.168.51.3/0 15585 1836052 0 0 0
85 ip 0.0.0.0/0 192.168.51.5/0 20 1912 0 0 0
90 ip 0.0.0.0/0 192.168.15.10/0 1 76 0 0 0
93 ip 0.0.0.0/0 84.138.167.72/0 617 158614 0 0 0
194 ip 0.0.0.0/0 192.168.85.146/0 36 16302 0 0 0
q10003 50 sl. 2 flows (256 buckets) sched 10002 weight 1 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
78 ip 0.0.0.0/0 192.168.67.30/0 31698 4017290 0 0 0
85 ip 0.0.0.0/0 192.168.67.5/0 741 935882 0 0 0
root@OPNsense:~ # ipfw sched show
10004: 4.000 Gbit/s 0 ms burst 0
sched 10004 type WF2Q+ flags 0x0 0 buckets 0 active
Children flowsets: 10007
10002: 205.000 Mbit/s 0 ms burst 0
sched 10002 type WF2Q+ flags 0x0 0 buckets 1 active
Children flowsets: 10004 10003 10002
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 936164466 649174740067 0 0 11935485
10003: 40.000 Mbit/s 0 ms burst 0
sched 10003 type WF2Q+ flags 0x0 0 buckets 1 active
Children flowsets: 10006 10005
0 ip 0.0.0.0/0 0.0.0.0/0 492630418 291119667996 0 0 2349799
Hmm,
To me it looks OK; the queue → scheduler → pipe association shown by ipfw/dummynet matches your config (the "Children flowsets" lines list exactly the queues you attached to each pipe, and the schedulers are type WF2Q+). It's weird that this isn't working properly.
But maybe you can try something.
Let's adjust the weights. Currently pipe 10002 has 3 queues attached, with these weights:
10004 - 15
10003 - 1
10002 - 100
Try adjusting them so that the sum of all weights is at most 100, for example:
10004 - 15
10003 - 1
10002 - 84
and retest.
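You can also apply that on the fly for a quick test, without touching the rules file; a queue can be reconfigured with the same config syntax as in the file, just prefixed with ipfw (84 per the example above):
ipfw queue 10002 config pipe 10002 mask dst-ip 0xffffffff weight 84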
Regards,
S.
Changing to 84 doesn't make any difference.
On the status page I can see that the correct queues are used. Maybe it's a kernel bug, but I couldn't find anything useful in FreeBSD's bug tracker.
The crazy thing is that it has worked in the past and the upload queues are still working correctly.
I've also searched the forum and the bug tracker and didn't find anything either.
Well, there are 2 options:
1. Open a new bug report or forum post for this with FreeBSD, or additionally ask the people on the FreeBSD ipfw mailing list.
2. Wait for OPNsense 24.7, whose base will be FreeBSD 14.1, and retest whether it works; if not, do the above.
Regards,
S.
I had an issue where the shaper was not working either.
In the end, the reason (for me at least) was the "Sequence number" in the shaper rules. For one reason or another, it took previously used (and since deleted) rule numbers into account.
Chances are small that you'll face the same issue, but maybe try giving your rules a sequence number that is high enough / has never been used before.
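To illustrate what I mean (the number 61010 is just an arbitrary, never-used example): delete the affected rule and re-add it under a fresh, higher number, e.g.
ipfw delete 60010
ipfw add 61010 queue 10003 ip from any to 192.168.67.0/24 in via pppoe1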
Interesting,
if it's the case that there are some zombie or ghost rules, that could be scoped out with
ipfw show
This will also show, for each rule, the queue and interface it is tied to.
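If the ruleset is long, plain grep narrows it down to the shaper-related rules, e.g.:
ipfw show | grep -E 'pipe|queue'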
Regards,
S.
I haven't changed anything related to traffic shaping except installing updates. It's working again.