OPNsense Forum
Archive => 18.1 Legacy Series => Topic started by: didibo on April 08, 2018, 10:41:05 am
-
I'm running 18.1.5.
I've been having problems getting the traffic shaper to work. As an example, I have two simple pipes - one set to 400Mbps which I apply to download traffic on my subnet via rules, and another pipe set to 20Mbps which I apply to upload traffic on my subnet (not using queues in this example).
The interfaces on the device are gigabit, and I test using an iperf client and server on either side. The upload pipe works as expected, no problems there. However, with the download pipe I only get around 250Mbit/s (when it's set to 400Mbps), no matter what settings I try for the pipe. In some circumstances, shortly after resetting the rules I see 395Mbit/s for about 4-5 seconds, and then it settles back down to around 250Mbit/s again. I've tried no mask, source, destination, CoDel, and different scheduler types - I just can't seem to get past 250Mbit/s.
If I set the download pipe to 800Mbit/s I then get around 600Mbit/s of traffic through the interface. With no other traffic on the network, I'm struggling to see why I don't get the full speed. With no traffic shaping enabled I get around 940Mbit/s.
Any ideas why this is happening? Ultimately I'd like to start using queues to prioritise traffic, but for the moment I'm just trying to get the basic pipes working - I just can't get close to the configured speed. I could fudge it by upping the configured bandwidth, but that makes no sense to me.
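For anyone following along, the shaper state can be inspected from a shell with the standard ipfw commands (a rough sketch - output details vary by FreeBSD version):
ipfw pipe show    # per-pipe bandwidth, queue length and drop counters
ipfw -a list      # the generated shaper rules with their packet/byte counters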
-
You always write queue but you mean pipe, correct?
Your download rule is direction "in" on interface "wan"?
What scheduler do you use?
-
Oops, yes. I've corrected the original post.
I've tried all the schedulers on the pipe - currently it's set to weighted fair queueing.
The rules are set on the WAN interface - one rule for traffic sourced from my network pointing at the upload pipe, and one rule for traffic destined for my network pointing at the download pipe. Both rules have the direction setting left at the default, i.e. both.
The download pipe works as I would expect up to around 200Mbit/s - it's when I go higher on the pipe bandwidth that I see this type of behaviour. For example, below the download pipe is set to 400Mbit/s:
[ 5] 0.00-1.04 sec 47.5 MBytes 383 Mbits/sec
[ 5] 1.04-2.05 sec 47.5 MBytes 394 Mbits/sec
[ 5] 2.05-3.01 sec 45.0 MBytes 393 Mbits/sec
[ 5] 3.01-4.05 sec 48.8 MBytes 394 Mbits/sec
[ 5] 4.05-5.06 sec 46.2 MBytes 385 Mbits/sec
[ 5] 5.06-6.08 sec 30.0 MBytes 246 Mbits/sec
[ 5] 6.08-7.06 sec 30.0 MBytes 255 Mbits/sec
[ 5] 7.06-8.06 sec 30.0 MBytes 254 Mbits/sec
[ 5] 8.06-9.06 sec 30.0 MBytes 249 Mbits/sec
[ 5] 9.06-10.03 sec 30.0 MBytes 259 Mbits/sec
[ 5] 10.03-11.08 sec 31.2 MBytes 250 Mbits/sec
[ 5] 11.08-12.05 sec 30.0 MBytes 261 Mbits/sec
[ 5] 12.05-13.02 sec 30.0 MBytes 259 Mbits/sec
[ 5] 13.02-14.04 sec 30.0 MBytes 248 Mbits/sec
[ 5] 14.04-15.06 sec 31.2 MBytes 256 Mbits/sec
The upload pipe works as expected (in the example below it was set to 23Mbit/s):
[ 5] 0.00-1.10 sec 2.00 MBytes 15.2 Mbits/sec
[ 5] 1.10-2.11 sec 2.50 MBytes 20.9 Mbits/sec
[ 5] 2.11-3.07 sec 2.50 MBytes 21.8 Mbits/sec
[ 5] 3.07-4.09 sec 2.62 MBytes 21.4 Mbits/sec
[ 5] 4.09-5.07 sec 2.50 MBytes 21.4 Mbits/sec
[ 5] 5.07-6.05 sec 2.50 MBytes 21.4 Mbits/sec
[ 5] 6.05-7.08 sec 2.62 MBytes 21.4 Mbits/sec
-
Please also set the proper direction on both rules, and set both to interface WAN.
-
I've just done that - it still falls back to around the 250Mbit/sec throughput level:
[ 5] 0.00-1.01 sec 42.5 MBytes 354 Mbits/sec
[ 5] 1.01-2.05 sec 46.2 MBytes 372 Mbits/sec
[ 5] 2.05-3.03 sec 43.8 MBytes 373 Mbits/sec
[ 5] 3.03-4.04 sec 45.0 MBytes 376 Mbits/sec
[ 5] 4.04-5.03 sec 45.0 MBytes 381 Mbits/sec
[ 5] 5.03-6.02 sec 46.2 MBytes 391 Mbits/sec
[ 5] 6.02-7.03 sec 47.5 MBytes 395 Mbits/sec
[ 5] 7.03-8.08 sec 43.8 MBytes 350 Mbits/sec
[ 5] 8.08-9.06 sec 31.2 MBytes 268 Mbits/sec
[ 5] 9.06-10.06 sec 31.2 MBytes 261 Mbits/sec
[ 5] 10.06-11.02 sec 30.0 MBytes 262 Mbits/sec
[ 5] 11.02-12.04 sec 32.5 MBytes 267 Mbits/sec
[ 5] 12.04-13.08 sec 32.5 MBytes 264 Mbits/sec
[ 5] 13.08-14.07 sec 31.2 MBytes 263 Mbits/sec
[ 5] 14.07-15.04 sec 30.0 MBytes 260 Mbits/sec
[ 5] 15.04-16.07 sec 31.2 MBytes 256 Mbits/sec
[ 5] 16.07-17.05 sec 31.2 MBytes 267 Mbits/sec
[ 5] 17.05-18.07 sec 31.2 MBytes 257 Mbits/sec
-
Have you restarted the machine yet? And tried iperf with 10 parallel streams?
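Something along these lines, with the server address being whatever you are testing against:
iperf3 -c <server-ip> -t 30 -P 10       # 10 parallel streams, upload direction
iperf3 -c <server-ip> -t 30 -P 10 -R    # reversed, to exercise the download pipe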
-
Are you CPU bound on your router?
It's probably easier to post all your config to see where the problem might be.
-
Have the rules match just IP at first, not TCP or UDP.
-
Nope, I'm not CPU bound. The router is a Core i7 with 4 x 2.70GHz cores. CPU is rarely above 1-2%.
As I said, it can support faster bandwidth, but with the traffic shaper enabled it doesn't get up to the configured pipe bandwidth.
FYI - here are the iperf results when the download pipe is set to 900Mbit/sec:
[ 5] 0.00-1.00 sec 88.8 MBytes 743 Mbits/sec
[ 5] 1.00-2.01 sec 80.0 MBytes 669 Mbits/sec
[ 5] 2.01-3.03 sec 82.5 MBytes 674 Mbits/sec
[ 5] 3.03-4.02 sec 81.2 MBytes 688 Mbits/sec
[ 5] 4.02-5.02 sec 80.0 MBytes 672 Mbits/sec
[ 5] 5.02-6.02 sec 83.8 MBytes 702 Mbits/sec
[ 5] 6.02-7.02 sec 80.0 MBytes 673 Mbits/sec
[ 5] 7.02-8.03 sec 80.0 MBytes 666 Mbits/sec
[ 5] 8.03-9.01 sec 81.2 MBytes 692 Mbits/sec
[ 5] 9.01-10.02 sec 80.0 MBytes 669 Mbits/sec
And here are the iperf results with the pipes disabled:
[ 5] 0.00-1.02 sec 102 MBytes 846 Mbits/sec
[ 5] 1.02-2.01 sec 106 MBytes 893 Mbits/sec
[ 5] 2.01-3.01 sec 110 MBytes 923 Mbits/sec
[ 5] 3.01-4.02 sec 112 MBytes 939 Mbits/sec
[ 5] 4.02-5.01 sec 111 MBytes 938 Mbits/sec
[ 5] 5.01-6.02 sec 112 MBytes 936 Mbits/sec
[ 5] 6.02-7.02 sec 111 MBytes 938 Mbits/sec
[ 5] 7.02-8.01 sec 111 MBytes 937 Mbits/sec
[ 5] 8.01-9.02 sec 112 MBytes 938 Mbits/sec
I don't think it is a hardware capacity issue. If the router can do 600-700Mbps when the pipe is set to 900Mbps, it should easily be able to do 400Mbps. The problem is the slow-down itself, which to my mind shouldn't be happening. As I said, it only seems to happen on pipes whose configured bandwidth exceeds around 200Mbps.
Here's the traffic shaper config:
<TrafficShaper version="1.0.1">
<pipes>
<pipe uuid="41386202-308a-4557-b22d-5571e95e1d95">
<number>10000</number>
<enabled>1</enabled>
<bandwidth>403</bandwidth>
<bandwidthMetric>Mbit</bandwidthMetric>
<queue/>
<mask>none</mask>
<scheduler/>
<codel_enable>0</codel_enable>
<codel_target/>
<codel_interval/>
<codel_ecn_enable>0</codel_ecn_enable>
<fqcodel_quantum/>
<fqcodel_limit/>
<fqcodel_flows/>
<origin>TrafficShaper</origin>
<description>down-pipe</description>
</pipe>
<pipe uuid="774dca0d-c50e-4ba3-a48f-fa2fecc385a1">
<number>10001</number>
<enabled>1</enabled>
<bandwidth>23</bandwidth>
<bandwidthMetric>Mbit</bandwidthMetric>
<queue/>
<mask>none</mask>
<scheduler/>
<codel_enable>0</codel_enable>
<codel_target/>
<codel_interval/>
<codel_ecn_enable>0</codel_ecn_enable>
<fqcodel_quantum/>
<fqcodel_limit/>
<fqcodel_flows/>
<origin>TrafficShaper</origin>
<description>up-pipe</description>
</pipe>
</pipes>
<queues/>
<rules>
<rule uuid="38eea6a7-e1a1-4581-8179-2993ad000f88">
<sequence>1</sequence>
<interface>wan</interface>
<interface2/>
<proto>ip</proto>
<source>any</source>
<source_not>0</source_not>
<src_port>any</src_port>
<destination>192.168.1.0/24</destination>
<destination_not>0</destination_not>
<dst_port>any</dst_port>
<direction>in</direction>
<target>41386202-308a-4557-b22d-5571e95e1d95</target>
<description/>
<origin>TrafficShaper</origin>
</rule>
<rule uuid="6fbe7e2d-dea7-4eb6-acd4-51e25de2af1f">
<sequence>2</sequence>
<interface>wan</interface>
<interface2/>
<proto>ip</proto>
<source>192.168.1.0/24</source>
<source_not>0</source_not>
<src_port>any</src_port>
<destination>any</destination>
<destination_not>0</destination_not>
<dst_port>any</dst_port>
<direction>out</direction>
<target>774dca0d-c50e-4ba3-a48f-fa2fecc385a1</target>
<description/>
<origin>TrafficShaper</origin>
</rule>
</rules>
</TrafficShaper>
-
You can log into a shell and check the CPU while it's slowing down - you could be hitting some kind of capacity limit.
I've also noticed that after you've made a lot of changes you sometimes need to reboot the box, as the settings get stuck.
Are you seeing the rules match properly, with the packet counts going up?
ipfw -a list
60001 10729575 722118824 queue 10000 ip from 192.168.1.50 to any out via igb0
60002 0 0 queue 10000 ip from 192.168.1.51 to any out via igb0
60003 0 0 queue 10000 ip from 192.168.1.55 to any out via igb0
60004 334030 27960028 queue 10000 ip from 192.168.1.90 to any out via igb0
60005 41245052 50198644141 queue 10003 ip from any to 192.168.1.50 in via igb0
60006 0 0 queue 10003 ip from any to 192.168.1.51 in via igb0
60007 0 0 queue 10003 ip from any to 192.168.1.55 in via igb0
60008 214741 17230565 queue 10003 ip from any to 192.168.1.90 in via igb0
60009 304341484 123338730748 queue 10002 ip from 192.168.1.31 to any out via igb0
60010 15648706 859429595 queue 10002 ip from 192.168.1.30 to any dst-port 563 out via igb0
60011 0 0 queue 10005 ip from any to 192.168.1.31 in via igb0
60012 57975694 86445478932 queue 10005 ip from any to 192.168.1.30 src-port 563 in via igb0
60013 3428992560 3029988556752 queue 10001 ip from any to any out via igb0
60014 6129870360 7607157122646 queue 10004 ip from any to any in via igb0
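You can also check the dummynet side directly; something like this should show whether a pipe itself is dropping (rough sketch, the numbers depend on your config):
ipfw pipe show     # per-pipe rate, queue slots and drop counters
ipfw sched show    # scheduler instances and their flow queues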
-
Not CPU bound. Logged into the shell and ran the same iperf tests. CPU utilisation does not go above 1-2% (CPU is 98-99% idle).
In the ipfw output I can see the packet counts increasing on the 'pipe' entries:
60001 627060 940224830 pipe 10000 ip from any to 192.168.1.0/24 in via xn0
60002 137984 6413798 pipe 10001 ip from 192.168.1.0/24 to any out via xn0
-
Are you running any other services like intrusion detection or anything else?
When I test my link on an Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz (4 cores), I can see that when I peg my gigabit FIOS link and hold at ~900Mb/s, I get around 14-15% usage on the box. At ~400Mb/s I see 5-7% while the test is running.
-
I am running other services, but not IDS. This isn't a capacity issue.
See the iperf results above. When the pipe is configured for 900Mbps it chugs along at 600-700Mbps. When the pipe is configured for 400Mbps it only provides around 250Mbps sustained. The issue is the drop in rate after a few seconds, which then doesn't recover even with little other traffic on the network. It should be hitting, or getting close to, the pipe's configured bandwidth, but it isn't.
-
Can you try just fifo and reboot the machine?
-
OK, I set both pipes to FIFO and rebooted. Same results - the pipe is configured for 403Mbps:
[ 5] 0.00-1.05 sec 47.5 MBytes 381 Mbits/sec
[ 5] 1.05-2.01 sec 45.0 MBytes 393 Mbits/sec
[ 5] 2.01-3.05 sec 48.8 MBytes 394 Mbits/sec
[ 5] 3.05-4.03 sec 46.2 MBytes 393 Mbits/sec
[ 5] 4.03-5.05 sec 47.5 MBytes 394 Mbits/sec
[ 5] 5.05-6.03 sec 46.2 MBytes 394 Mbits/sec
[ 5] 6.03-7.05 sec 45.0 MBytes 370 Mbits/sec
[ 5] 7.05-8.07 sec 30.0 MBytes 247 Mbits/sec
[ 5] 8.07-9.00 sec 28.8 MBytes 259 Mbits/sec
[ 5] 9.00-10.08 sec 32.5 MBytes 253 Mbits/sec
[ 5] 10.08-11.08 sec 30.0 MBytes 252 Mbits/sec
[ 5] 11.08-12.04 sec 28.8 MBytes 252 Mbits/sec
[ 5] 12.04-13.03 sec 30.0 MBytes 254 Mbits/sec
[ 5] 13.03-14.06 sec 31.2 MBytes 255 Mbits/sec
[ 5] 14.06-15.06 sec 30.0 MBytes 250 Mbits/sec
[ 5] 15.06-16.07 sec 31.2 MBytes 261 Mbits/sec
[ 5] 16.07-17.05 sec 30.0 MBytes 257 Mbits/sec
[ 5] 17.05-18.09 sec 31.2 MBytes 252 Mbits/sec
[ 5] 18.09-19.08 sec 31.2 MBytes 266 Mbits/sec
-
Quote: "Can you try just fifo and reboot the machine?"
Ok. I'm out then if you don't want to give more info. Good luck.
-
I posted the results of the FIFO and reboot as requested.
-
Will try to reproduce tomorrow ...
-
Can you post full specs of the device? I'm specifically interested in the NIC chipset and the interface (PCIe, PCI, etc.).
-
OPNsense is running on a XenCenter 7.2 hypervisor with the os-xen 1.1 plugin installed.
Relevant dmesg output from OPNsense below:
xn0: Ethernet address: 12:c8:56:fe:5f:6c
xn1: <Virtual Network Interface> at device/vif/1xn0: on xenbusb_front0
backend features: feature-sg feature-gso-tcp4
xn1: Ethernet address: da:6c:43:19:2d:a5
xn2: <Virtual Network Interface> at device/vif/2xn1: on xenbusb_front0
backend features: feature-sg feature-gso-tcp4
xn2: Ethernet address: f2:d5:27:71:e7:19
xenbusb_back0: <Xen Backend Devices> on xenstore0
xenballoon0: <Xen Balloon Device> on xenstore0
xn2: xctrl0: backend features:<Xen Control Device> feature-sg on xenstore0
-
I would guess this is some kind of NIC tuning issue on Xen. Assuming that the underlying physical hardware is good quality (Intel NICs), I would try disabling flow control on the Xen NICs assigned to OPNsense. Also check Interfaces/Overview and see if you're getting errors/drops on any of the assigned virtual NICs.
I did some googling around and found quite a few references to XenServer and FreeBSD NICs being slow, so it seems fairly common to need some tweaking to run at full speed on Xen. The same googling also turned up an old thread on OPNsense where Franco had posted this:
ethtool -K <vif name> tx off
ethtool -K <xen bridge> tx off
I would try disabling both TX and RX offloading on all vNICs and see if you can re-run your tests with any improvement.
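From inside the OPNsense guest, the rough equivalent would be turning the offloads off on the xn interfaces with ifconfig (xn0 is just an example; the xn driver may not support every one of these flags):
ifconfig xn0 -txcsum -rxcsum -tso -lro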
-
Thanks opnfwb - I don't think this is the issue (btw, I have flow control etc. off on the NICs).
The NICs and the platform can quite happily go faster (as per my posts earlier), i.e. with the pipes disabled I get close to full line speed:
[ 5] 6.02-7.02 sec 111 MBytes 938 Mbits/sec
[ 5] 7.02-8.01 sec 111 MBytes 937 Mbits/sec
[ 5] 8.01-9.02 sec 112 MBytes 938 Mbits/sec
If I set the pipe bandwidth to 900Mbit/sec I then see the drop:
[ 5] 7.02-8.03 sec 80.0 MBytes 666 Mbits/sec
[ 5] 8.03-9.01 sec 81.2 MBytes 692 Mbits/sec
[ 5] 9.01-10.02 sec 80.0 MBytes 669 Mbits/sec
and with the pipe bandwidth set to 400Mbit/sec I see the drop:
[ 5] 0.00-1.05 sec 47.5 MBytes 381 Mbits/sec
[ 5] 1.05-2.01 sec 45.0 MBytes 393 Mbits/sec
[ 5] 2.01-3.05 sec 48.8 MBytes 394 Mbits/sec
[ 5] 3.05-4.03 sec 46.2 MBytes 393 Mbits/sec
[ 5] 4.03-5.05 sec 47.5 MBytes 394 Mbits/sec
[ 5] 5.05-6.03 sec 46.2 MBytes 394 Mbits/sec
[ 5] 6.03-7.05 sec 45.0 MBytes 370 Mbits/sec
[ 5] 7.05-8.07 sec 30.0 MBytes 247 Mbits/sec
[ 5] 8.07-9.00 sec 28.8 MBytes 259 Mbits/sec
[ 5] 9.00-10.08 sec 32.5 MBytes 253 Mbits/sec
[ 5] 10.08-11.08 sec 30.0 MBytes 252 Mbits/sec
[ 5] 11.08-12.04 sec 28.8 MBytes 252 Mbits/sec
[ 5] 12.04-13.03 sec 30.0 MBytes 254 Mbits/sec
[ 5] 13.03-14.06 sec 31.2 MBytes 255 Mbits/sec
so the platform and the NICs can go much faster. The issue is why there is a drop at all, e.g. a pipe set to 400 only delivering 250.
Also, the pipe works as expected at speeds under 200Mbit/sec - I get the configured pipe speed. As soon as you go higher, this drop-off behaviour kicks in, even with no other traffic crossing the OPNsense router. If I wanted a sustained 400Mbit/sec I could set the pipe to 600Mbit/sec and I would get it, but that doesn't make any sense.
-
After some more testing, I noticed that setting the number of dynamic queues to 100 in the pipe improves matters. It's still not quite up to the configured pipe bandwidth (400Mbits/sec):
[ 5] 0.00-1.05 sec 38.8 MBytes 309 Mbits/sec
[ 5] 1.05-2.03 sec 37.5 MBytes 322 Mbits/sec
[ 5] 2.03-3.06 sec 40.0 MBytes 327 Mbits/sec
[ 5] 3.06-4.05 sec 38.8 MBytes 327 Mbits/sec
[ 5] 4.05-5.06 sec 40.0 MBytes 332 Mbits/sec
[ 5] 5.06-6.06 sec 38.8 MBytes 326 Mbits/sec
[ 5] 6.06-7.01 sec 37.5 MBytes 330 Mbits/sec
[ 5] 7.01-8.04 sec 40.0 MBytes 327 Mbits/sec
[ 5] 8.04-9.04 sec 40.0 MBytes 334 Mbits/sec
[ 5] 9.04-10.03 sec 38.8 MBytes 329 Mbits/sec
and interestingly, there is no drop-off. Is this something to do with the way dynamic queues are being handled? Is there a way to set a value higher than 100 for the pipe?
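For what it's worth, the way I've been sanity-checking what this setting actually translates to is from a shell - ipfw pipe show reports each pipe's bucket count and the currently active flows (a rough check; I'm assuming those figures reflect the dynamic queues):
ipfw pipe show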
-
First of all, regarding iperf, only look at the SUM value. It's impossible to shape a single stream precisely within a one-second interval.
I tested your setup and it runs fine:
PIPE-UP
16Mbit
FQ_CoDEL
SRC 10.0.1.0/24
Direction OUT
WAN Interface
iperf3 -p 5000 -f m -V -c 10.0.2.10 -t 30 -P 10
[SUM] 0.00-30.00 sec 56.3 MBytes 15.7 Mbits/sec 1695 sender
[SUM] 0.00-30.00 sec 55.3 MBytes 15.5 Mbits/sec receiver
---
PIPE-DOWN
403Mbit
FQ_CoDEL
DST 10.0.1.0/24
Direction In
WAN Interface
iperf3 -p 5000 -f m -V -c 10.0.2.10 -t 30 -P 10 -R
[SUM] 0.00-30.00 sec 1.36 GBytes 390 Mbits/sec 28747 sender
[SUM] 0.00-30.00 sec 1.36 GBytes 390 Mbits/sec receiver
-
Thanks for looking into this mimugmail - appreciate it.
I've repeated the test using the setup you provided, plus the same iperf3 commands you used in your test.
PIPE-UP
16Mbit
FQ_CoDEL
src 192.168.1.0/24
direction OUT
WAN interface
iperf3 -p 5201 -f -m -V -c 192.168.0.11 -t 30 -P 10
[SUM] 0.00-31.31 sec 53.2 MBytes 14.3 Mbits/sec sender
[SUM] 0.00-31.31 sec 47.6 MBytes 12.8 Mbits/sec receiver
---
PIPE-DOWN
403Mbit
FQ_CoDEL
DST 192.168.1.0/24
direction IN
WAN interface
iperf3 -p 5201 -f -m -V -c 192.168.0.11 -t 30 -P 10 -R
[SUM] 0.00-30.71 sec 169 MBytes 46.1 Mbits/sec sender
[SUM] 0.00-30.71 sec 168 MBytes 45.8 Mbits/sec receiver
So I'm a bit stumped as to why we're seeing a difference. I'll keep trying a few different configurations. I may also try a clean install of OPNsense (I don't think it will make a difference, but I'll give it a go).
-
Just a wild guess, but I recently solved a VPN performance problem that was being caused by power saving options. I was running with PowerD disabled, and it apparently limited the processor. Changing to PowerD enabled with either Hiadaptive or Maximum mode fixed the problem.
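A quick way to check whether frequency scaling is in play at all is from a shell (these sysctls exist on bare-metal FreeBSD; inside a Xen guest they may simply not be present):
sysctl dev.cpu.0.freq          # current CPU frequency, if the driver exposes it
sysctl dev.cpu.0.freq_levels   # available frequency steps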
-
Do a clean install, set up LAN/WAN, no firewall rules, no IPS etc., and then test with the shaper.
-
Exactly the same results with a clean install and nothing turned on (new empty config). I've also tried both with NAT and without. I've set up a separate VM to test with now as well, so I'll keep fiddling to see what I can find out.
-
Which driver do you use?
-
It's a Xen virtual machine - dmesg just reports this:
xn1: <Virtual Network Interface> at device/vif/1xn0: on xenbusb_front0
pciconf -lv doesn't list the network interfaces. How can I find out?
-
I've hit on a configuration which now works, after much tweaking and reading this post - https://forum.opnsense.org/index.php?topic=7423.0
The main difference seems to be introducing a queue (as opposed to pointing the rules directly at the pipe), plus switching the scheduler type from WFQ or FIFO to FlowQueue-CoDel.
This is now working as expected on my server:
[SUM] 0.00-30.06 sec 1.35 GBytes 387 Mbits/sec sender
[SUM] 0.00-30.06 sec 1.35 GBytes 387 Mbits/sec receiver
The above is using 10 parallel streams. I'm posting my traffic shaper config below in case anyone else may run into the same problem. Thanks all for your assistance.
<TrafficShaper version="1.0.1">
<pipes>
<pipe uuid="852389a7-b347-46f5-b037-98c2d3af03fd">
<number>10000</number>
<enabled>1</enabled>
<bandwidth>403</bandwidth>
<bandwidthMetric>Mbit</bandwidthMetric>
<queue>10</queue>
<mask>none</mask>
<scheduler>fq_codel</scheduler>
<codel_enable>0</codel_enable>
<codel_target/>
<codel_interval/>
<codel_ecn_enable>0</codel_ecn_enable>
<fqcodel_quantum>1000</fqcodel_quantum>
<fqcodel_limit>1000</fqcodel_limit>
<fqcodel_flows/>
<origin>TrafficShaper</origin>
<delay/>
<description>down-pipe</description>
</pipe>
<pipe uuid="58d6c82d-bde9-4853-8fb0-d8941f38582b">
<number>10001</number>
<enabled>1</enabled>
<bandwidth>23</bandwidth>
<bandwidthMetric>Mbit</bandwidthMetric>
<queue/>
<mask>none</mask>
<scheduler>fq_codel</scheduler>
<codel_enable>0</codel_enable>
<codel_target/>
<codel_interval/>
<codel_ecn_enable>0</codel_ecn_enable>
<fqcodel_quantum/>
<fqcodel_limit/>
<fqcodel_flows/>
<origin>TrafficShaper</origin>
<delay/>
<description>upload-pipe</description>
</pipe>
</pipes>
<queues>
<queue uuid="cb212c8a-d208-4692-8f98-41f3dc1d1aea">
<number>10000</number>
<enabled>1</enabled>
<pipe>852389a7-b347-46f5-b037-98c2d3af03fd</pipe>
<weight>100</weight>
<mask>none</mask>
<codel_enable>0</codel_enable>
<codel_target/>
<codel_interval/>
<codel_ecn_enable>0</codel_ecn_enable>
<description>main-down-q</description>
<origin>TrafficShaper</origin>
</queue>
<queue uuid="e19bbd16-bb1a-4932-8eea-5814f9f70abd">
<number>10001</number>
<enabled>1</enabled>
<pipe>58d6c82d-bde9-4853-8fb0-d8941f38582b</pipe>
<weight>100</weight>
<mask>none</mask>
<codel_enable>0</codel_enable>
<codel_target/>
<codel_interval/>
<codel_ecn_enable>0</codel_ecn_enable>
<description>main-up-q</description>
<origin>TrafficShaper</origin>
</queue>
</queues>
<rules>
<rule uuid="afa8b077-f5c9-4f40-ad1d-3c1d05fd8395">
<sequence>9</sequence>
<interface>wan</interface>
<interface2/>
<proto>ip</proto>
<source>any</source>
<source_not>0</source_not>
<src_port>any</src_port>
<destination>192.168.1.0/24</destination>
<destination_not>0</destination_not>
<dst_port>any</dst_port>
<direction/>
<target>cb212c8a-d208-4692-8f98-41f3dc1d1aea</target>
<description/>
<origin>TrafficShaper</origin>
</rule>
<rule uuid="22e3d03f-83ab-4c12-9e1a-99ed75e8de58">
<sequence>10</sequence>
<interface>wan</interface>
<interface2/>
<proto>ip</proto>
<source>192.168.1.0/24</source>
<source_not>0</source_not>
<src_port>any</src_port>
<destination>any</destination>
<destination_not>0</destination_not>
<dst_port>any</dst_port>
<direction/>
<target>e19bbd16-bb1a-4932-8eea-5814f9f70abd</target>
<description/>
<origin>TrafficShaper</origin>
</rule>
</rules>
</TrafficShaper>
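In case it helps anyone replicating this: after applying the config, the live dummynet objects can be checked from a shell to confirm the FlowQueue-CoDel scheduler and the queues actually took effect (a rough check - the output format differs between versions):
ipfw pipe show
ipfw sched show
ipfw queue show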
-
LE: everything you describe also happens for me, regarding the lower throughput when the traffic shaper is used.
And regarding the last solution you found, are you sure this setup has any effect beyond a bufferbloat improvement? (Because for me, at least, it doesn't change the speed!)
Either I'm missing something or the traffic shaper really isn't working as it should - e.g. the status page won't show any flows if any scheduler other than WFQ is selected in the pipes. Also, the status page always displays FIFO as the scheduler for the pipes, no matter which scheduler you actually use.
And if I test the speed simultaneously with somebody else, we both get the maximum speed configured in the pipes, meaning enough simultaneous users can reach the absolute maximum available bandwidth of the WAN, no matter what limit you choose for the pipe(s). It's as if the pipes limit the speed per flow, not for all flows combined.
Having reached a dead end with this, I'm seriously starting to consider a different, dedicated QoS solution...
-
And if the limit set in the pipes equals the WAN bandwidth (minus 0-10%), then everyone's speeds make no sense: I just tested and I got ~80/20 while the other user got ~20/80 http://www.speedtest.net/result/7243848295 (those are my results; his are just about the same but transposed - my download is his upload, and so on).
Of course, this isn't the first time I've encountered these unexpected and undesired results; I just tested it once again.