[SOLVED] Traffic Shaper: Weights over 50 MBit/s faulty

Started by nuwe70, October 17, 2020, 06:34:15 PM


April 28, 2021, 06:12:15 PM #16 Last Edit: April 28, 2021, 06:15:10 PM by nuwe70
I have read the article and tested with a higher queue buffer. There is no difference.
If I understand it correctly, the larger queue buffer in the article was needed because a single host could not max out the bandwidth of the pipe. That is not my problem.
If two hosts use the same pipe but different queues and weights, the weights are increasingly ignored once the pipe bandwidth exceeds about 50 Mbit/s. For example, with a pipe bandwidth of 100 Mbit/s, both hosts get the same bandwidth even though one host has a weight of 1 and the other a weight of 100. The weights are honored at lower pipe bandwidths, for example 10 Mbit/s.



Ok, this is my lab now. Can you please describe, in as much detail as possible, what should be tested, using the IPs/names from the diagram:

                  +--------+                         +--------+               
                  | FW-A   |        10.255.255.0     |  FW-B  |               
                  |        |-------------------------|        |               
                  +--------+                         +--------+               
                 |         |                           |       |               
                 |         |                           |       |               
                 |         |                           |       |               
192.168.10.0    |         |                           |       |   192.168.11.0
                 |         |                           |       |               
                 |         |                           |       |               
                 |         |                           |       |               
                 |         |                           |       |
         +---------+        +----------+       +-----------+    +-----------+ 
         |         |        |          |       |           |    |           | 
         | Deb-A1  |        |Deb-A2    |       | Deb-B1    |    | Deb-B2    | 
         +---------+        +----------+       +-----------+    +-----------+ 
           .201                .202                .201            .202       

April 29, 2021, 10:55:10 AM #20 Last Edit: May 03, 2021, 03:26:06 PM by nuwe70
Set up FW-A as follows.

Firewall -> Shaper -> Pipes -> New Pipe
Bandwidth: 10 Mbit/s
Description: Pipe

Firewall -> Shaper -> Queues -> New Queue
Pipe: Pipe
Weight: 100
Description: QueueHigh

Firewall -> Shaper -> Queues -> New Queue
Pipe: Pipe
Weight: 1
Description: QueueLow

Firewall -> Shaper -> Rules -> New Rule
Sequence: 1
Interface: WAN
Source: 192.168.10.201
Destination: any
Target: QueueLow
Description: RuleLow

Firewall -> Shaper -> Rules -> New Rule
Sequence: 2
Interface: WAN
Source: 192.168.10.202
Destination: any
Target: QueueHigh
Description: RuleHigh
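Once the pipe, queues, and rules are in place, the resulting dummynet configuration can be inspected from the OPNsense shell. This is just a diagnostic sketch: OPNsense implements the shaper with ipfw/dummynet under the hood, and the pipe/queue numbers shown will differ per box.

```shell
# Show the configured pipe with its bandwidth and scheduler
ipfw pipe show

# Show the queues attached to the pipe, including their weights
ipfw queue show

# List shaper rules with per-rule packet/byte counters, to confirm
# traffic from .201/.202 is actually matching the intended rules
ipfw -a list
```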


Set up iperf3 server
Run two iperf3 servers on two different ports on the WAN side, for example on 10.255.255.10 with
iperf3 -s -p 5000
iperf3 -s -p 5001
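If you prefer to start both servers from a single shell, iperf3 can put itself in the background with `-D` (just a convenience; same effect as two foreground instances):

```shell
iperf3 -s -p 5000 -D    # server for the low-weight flow, daemonized
iperf3 -s -p 5001 -D    # server for the high-weight flow, daemonized
```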


Test 1
Check if a single host (for example 192.168.10.201) is limited to 10 Mbit/s with
iperf3 -c 10.255.255.10 -p 5000

If so, run iperf clients on both hosts in parallel:
192.168.10.201:
    iperf3 -c 10.255.255.10 -p 5000 -t 60
192.168.10.202:
    iperf3 -c 10.255.255.10 -p 5001 -t 60

Check whether 192.168.10.201 is using about 1% of the bandwidth and 192.168.10.202 about 99% of the bandwidth.

Test 2
Edit Pipe to
Bandwidth: 100 Mbit/s

Do the parallel iperf test again; now 192.168.10.201 and 192.168.10.202 use almost the same bandwidth. This is not expected!

The lab is/was on a VPS hoster ... the values fluctuated too much. I need to set one up on real hardware.
Maybe on Friday when I'm in the office.

Thanks for your help! Let me know if I can help in any way.

Have you had time to reproduce this behavior on real hardware?

No, currently I have a lab with two OPNsense boxes directly connected and one client on each side. Should this also be possible with multiple streams?

June 14, 2021, 08:04:14 PM #25 Last Edit: June 14, 2021, 08:16:53 PM by nuwe70
Yes, I think so. Then you have to change the rules to match on the iperf3 server ports instead of on the entire host.

Edit:
I just tested it. Based on post #20 you have to change the following.

Firewall -> Shaper -> Rules -> New Rule
Sequence: 1
Interface: WAN
Source: 192.168.10.201
Destination: any
Dst-port: 5000
Target: QueueLow
Description: RuleLow

Firewall -> Shaper -> Rules -> New Rule
Sequence: 2
Interface: WAN
Source: 192.168.10.201
Destination: any
Dst-port: 5001
Target: QueueHigh
Description: RuleHigh

And run the following two commands in parallel on host 192.168.10.201, one per destination port, so that each flow hits a different queue:
iperf3 -c 10.255.255.10 -p 5000 -t 60
iperf3 -c 10.255.255.10 -p 5001 -t 60
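To launch both flows concurrently from a single shell on 192.168.10.201, the two clients can simply be backgrounded (a sketch; the port numbers match the rules above):

```shell
iperf3 -c 10.255.255.10 -p 5000 -t 60 &   # matched by RuleLow  -> QueueLow,  weight 1
iperf3 -c 10.255.255.10 -p 5001 -t 60 &   # matched by RuleHigh -> QueueHigh, weight 100
wait                                      # wait for both 60-second runs to finish
```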

Holy shit! :D I have had this problem for more than a year. Now I have figured out what causes this behavior.

I always used virtual machines to run OPNsense. The problem only exists in virtual machines, not on real hardware. The reason is that the OS uses a different value for the kernel parameter "kern.hz" in virtual environments. This parameter sets the kernel interval timer rate and affects, for example, dummynet and ZFS. The default value in a VM is 100, but on real hardware it is 1000.
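A rough back-of-the-envelope calculation (my own sketch, not from the thread) shows why the tick rate matters: dummynet's scheduler runs once per kernel tick, so at a given pipe bandwidth each tick has to release bandwidth/hz worth of bytes in one burst, and the coarser that burst, the less precisely the scheduler can enforce the queue weights:

```shell
#!/bin/sh
# bytes_per_tick BW_MBIT HZ -- bytes dummynet must release per kernel tick
# to sustain a pipe of BW_MBIT Mbit/s at a timer rate of HZ ticks/second
bytes_per_tick() {
    awk -v bw="$1" -v hz="$2" 'BEGIN { printf "%d\n", bw * 1000000 / 8 / hz }'
}

bytes_per_tick 100 100     # hz=100 (VM default):  125000 bytes every 10 ms
bytes_per_tick 100 1000    # hz=1000 (bare metal):  12500 bytes every 1 ms
```

At 10 Mbit/s the per-tick burst is ten times smaller again, which fits the observation that the weights still work at low pipe bandwidths even with hz=100.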

So the solution is to set the kernel parameter to a higher value, for example 1000. The higher the bandwidth, the higher the value must be.
Go to System -> Settings -> Tunables and add a new entry
Tunable: kern.hz
Description: Set the kernel interval timer rate
Value: 1000
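Note that kern.hz is read-only at runtime and only takes effect at boot, so a reboot is required after adding the tunable. Afterwards you can confirm the active rate from the shell:

```shell
sysctl kern.hz    # should now report: kern.hz: 1000
```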

More information here: https://groups.google.com/g/mailing.freebsd.ipfw/c/oVbFsI3JqfM