Messages - didibo

#16
Exactly the same results with a clean install and nothing turned on (new, empty config). I've also tried both with NAT and without. I've now set up a separate VM to test with as well, so I'll keep fiddling to see what I can find out.
#17
Thanks for looking into this, mimugmail - appreciate it.

I've repeated the test using the setup you provided, plus the iperf3 commands from your test setup.

PIPE-UP
16Mbit
FQ_CoDel

src 192.168.1.0/24
direction OUT
WAN interface

iperf3 -p 5201 -f -m -V -c 192.168.0.11 -t 30 -P 10
[SUM]   0.00-31.31  sec  53.2 MBytes  14.3 Mbits/sec              sender
[SUM]   0.00-31.31  sec  47.6 MBytes  12.8 Mbits/sec              receiver

---

PIPE-DOWN
403Mbit
FQ_CoDel

dst 192.168.1.0/24
direction IN
WAN interface

iperf3 -p 5201 -f -m -V -c 192.168.0.11 -t 30 -P 10 -R
[SUM]   0.00-30.71  sec   169 MBytes  46.1 Mbits/sec              sender
[SUM]   0.00-30.71  sec   168 MBytes  45.8 Mbits/sec              receiver

So I'm a bit stumped as to why we're seeing a difference. I'll keep trying a few different configurations. I may try a clean install of OPNsense (I don't think it will make a difference, but I'll give it a go).

#18
After some more testing, I noticed that setting the number of dynamic queues to 100 in the pipe improves matters. It's still not quite up to the configured pipe bandwidth (400 Mbit/sec):

[  5]   0.00-1.05   sec  38.8 MBytes   309 Mbits/sec             
[  5]   1.05-2.03   sec  37.5 MBytes   322 Mbits/sec             
[  5]   2.03-3.06   sec  40.0 MBytes   327 Mbits/sec             
[  5]   3.06-4.05   sec  38.8 MBytes   327 Mbits/sec             
[  5]   4.05-5.06   sec  40.0 MBytes   332 Mbits/sec             
[  5]   5.06-6.06   sec  38.8 MBytes   326 Mbits/sec             
[  5]   6.06-7.01   sec  37.5 MBytes   330 Mbits/sec             
[  5]   7.01-8.04   sec  40.0 MBytes   327 Mbits/sec             
[  5]   8.04-9.04   sec  40.0 MBytes   334 Mbits/sec             
[  5]   9.04-10.03  sec  38.8 MBytes   329 Mbits/sec 

and, interestingly, there's no drop-off. Is this something to do with the way dynamic queues are being handled? Is there a way to set a value higher than 100 for the pipe?
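
For anyone else digging into this, the underlying dummynet state can be inspected from the shell. A rough sketch below - I'm not sure whether the GUI field maps to dummynet's queue slots or its hash buckets, and OPNsense will overwrite any manual changes the next time the shaper is reapplied:

# show how the pipes are actually configured (bandwidth, queue size, buckets, drops)
ipfw pipe show

# list the dummynet tunables; pipe_slot_limit caps a pipe's queue size (it defaults to 100)
sysctl net.inet.ip.dummynet

# purely as a test: raise the cap and give the download pipe a deeper queue by hand
sysctl net.inet.ip.dummynet.pipe_slot_limit=500
ipfw pipe 10000 config bw 403Mbit/s queue 500 buckets 256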
#19
Thanks, opnfwb - I don't think this is the issue (BTW, I have flow control etc. turned off on the NICs).

The NICs and the platform can quite happily go faster (as per my earlier posts), i.e. with the pipes disabled I get close to full line speed:

[  5]   6.02-7.02   sec   111 MBytes   938 Mbits/sec             
[  5]   7.02-8.01   sec   111 MBytes   937 Mbits/sec             
[  5]   8.01-9.02   sec   112 MBytes   938 Mbits/sec     

If I set the pipe bandwidth to 900Mbit/sec I then see the drop:

[  5]   7.02-8.03   sec  80.0 MBytes   666 Mbits/sec             
[  5]   8.03-9.01   sec  81.2 MBytes   692 Mbits/sec             
[  5]   9.01-10.02  sec  80.0 MBytes   669 Mbits/sec       

and with the pipe bandwidth set to 400Mbit/sec I see the drop:

[  5]   0.00-1.05   sec  47.5 MBytes   381 Mbits/sec             
[  5]   1.05-2.01   sec  45.0 MBytes   393 Mbits/sec             
[  5]   2.01-3.05   sec  48.8 MBytes   394 Mbits/sec             
[  5]   3.05-4.03   sec  46.2 MBytes   393 Mbits/sec             
[  5]   4.03-5.05   sec  47.5 MBytes   394 Mbits/sec             
[  5]   5.05-6.03   sec  46.2 MBytes   394 Mbits/sec             
[  5]   6.03-7.05   sec  45.0 MBytes   370 Mbits/sec             
[  5]   7.05-8.07   sec  30.0 MBytes   247 Mbits/sec             
[  5]   8.07-9.00   sec  28.8 MBytes   259 Mbits/sec             
[  5]   9.00-10.08  sec  32.5 MBytes   253 Mbits/sec             
[  5]  10.08-11.08  sec  30.0 MBytes   252 Mbits/sec             
[  5]  11.08-12.04  sec  28.8 MBytes   252 Mbits/sec             
[  5]  12.04-13.03  sec  30.0 MBytes   254 Mbits/sec             
[  5]  13.03-14.06  sec  31.2 MBytes   255 Mbits/sec             

So the platform and the NICs can go much faster. The question is why there's a drop at all, e.g. a pipe set to 400 only getting 250.

Also, the pipe works as expected at speeds below 200 Mbit/sec - I get the pipe speed. As soon as I go higher, this drop-off behaviour appears, even with no other traffic across the OPNsense router. If I wanted a sustained 400 Mbit/sec I could set the pipe to 600 Mbit/sec and I would get it, but that doesn't make any sense.


#20
OPNsense is running as a VM on a XenCenter 7.2 host with the os-xen 1.1 plugin installed.

Relevant dmesg output from OPNsense below:

xn0: Ethernet address: 12:c8:56:fe:5f:6c
xn1: <Virtual Network Interface> at device/vif/1 on xenbusb_front0
xn0: backend features: feature-sg feature-gso-tcp4
xn1: Ethernet address: da:6c:43:19:2d:a5
xn2: <Virtual Network Interface> at device/vif/2 on xenbusb_front0
xn1: backend features: feature-sg feature-gso-tcp4
xn2: Ethernet address: f2:d5:27:71:e7:19
xenbusb_back0: <Xen Backend Devices> on xenstore0
xenballoon0: <Xen Balloon Device> on xenstore0
xn2: backend features: feature-sg
xctrl0: <Xen Control Device> on xenstore0
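
For reference, the offload settings on the xn interfaces can be checked and toggled from the shell - a rough sketch below, and I'm not certain the xn driver honours every one of these flags:

# show the current option/capability flags on the WAN-side interface
ifconfig xn0

# temporarily turn off TSO/LRO and checksum offload while testing
ifconfig xn0 -tso -lro -txcsum -rxcsum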

#21
I posted the results of the FIFO test and the reboot as requested.
#22
OK, I set both pipes to FIFO and rebooted. Same results - the pipe is configured for 403Mbps:

[  5]   0.00-1.05   sec  47.5 MBytes   381 Mbits/sec             
[  5]   1.05-2.01   sec  45.0 MBytes   393 Mbits/sec             
[  5]   2.01-3.05   sec  48.8 MBytes   394 Mbits/sec             
[  5]   3.05-4.03   sec  46.2 MBytes   393 Mbits/sec             
[  5]   4.03-5.05   sec  47.5 MBytes   394 Mbits/sec             
[  5]   5.05-6.03   sec  46.2 MBytes   394 Mbits/sec             
[  5]   6.03-7.05   sec  45.0 MBytes   370 Mbits/sec             
[  5]   7.05-8.07   sec  30.0 MBytes   247 Mbits/sec             
[  5]   8.07-9.00   sec  28.8 MBytes   259 Mbits/sec             
[  5]   9.00-10.08  sec  32.5 MBytes   253 Mbits/sec             
[  5]  10.08-11.08  sec  30.0 MBytes   252 Mbits/sec             
[  5]  11.08-12.04  sec  28.8 MBytes   252 Mbits/sec             
[  5]  12.04-13.03  sec  30.0 MBytes   254 Mbits/sec             
[  5]  13.03-14.06  sec  31.2 MBytes   255 Mbits/sec             
[  5]  14.06-15.06  sec  30.0 MBytes   250 Mbits/sec             
[  5]  15.06-16.07  sec  31.2 MBytes   261 Mbits/sec             
[  5]  16.07-17.05  sec  30.0 MBytes   257 Mbits/sec             
[  5]  17.05-18.09  sec  31.2 MBytes   252 Mbits/sec             
[  5]  18.09-19.08  sec  31.2 MBytes   266 Mbits/sec             
#23
I am running other services, but not IDS. This isn't a capacity issue.
See the iperf results above. When the pipe is configured for 900Mbps it chugs along at 600-700Mbps. When the pipe is configured for 400Mbps it only provides around 250Mbps sustained. The issue is the drop in rate after a few seconds, which then doesn't recover even though there is little other traffic on the network. It should be hitting, or at least be close to, the pipe's configured bandwidth, but it isn't.
#24
Not CPU bound. I logged into the shell and ran the same iperf tests; CPU utilisation does not go above 1-2% (the CPU is 98-99% idle).

In ipfw I can see the packet counts increasing on the 'pipe' entries:

60001   627060   940224830 pipe 10000 ip from any to 192.168.1.0/24 in via xn0
60002   137984     6413798 pipe 10001 ip from 192.168.1.0/24 to any out via xn0
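
The pipes themselves can also be checked for queue build-up and drops - a minimal sketch using the standard ipfw/dummynet commands:

# per-pipe state: configured bandwidth, queue size and drop counters
ipfw pipe show

# rule-level packet/byte counters (the same view as the two lines above)
ipfw -a list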
#25
Nope, I'm not CPU bound. The router is a Core i7 with 4 x 2.70GHz cores, and CPU usage is rarely above 1-2%.

As I said, it can support faster bandwidth, but with the traffic shaper in place it doesn't get up to the configured pipe bandwidth.

FYI - here are the iperf results when the download pipe is set to 900Mbit/sec:

[  5]   0.00-1.00   sec  88.8 MBytes   743 Mbits/sec             
[  5]   1.00-2.01   sec  80.0 MBytes   669 Mbits/sec             
[  5]   2.01-3.03   sec  82.5 MBytes   674 Mbits/sec             
[  5]   3.03-4.02   sec  81.2 MBytes   688 Mbits/sec             
[  5]   4.02-5.02   sec  80.0 MBytes   672 Mbits/sec             
[  5]   5.02-6.02   sec  83.8 MBytes   702 Mbits/sec             
[  5]   6.02-7.02   sec  80.0 MBytes   673 Mbits/sec             
[  5]   7.02-8.03   sec  80.0 MBytes   666 Mbits/sec             
[  5]   8.03-9.01   sec  81.2 MBytes   692 Mbits/sec             
[  5]   9.01-10.02  sec  80.0 MBytes   669 Mbits/sec       

And here are the iperf results with the pipes disabled:

[  5]   0.00-1.02   sec   102 MBytes   846 Mbits/sec             
[  5]   1.02-2.01   sec   106 MBytes   893 Mbits/sec             
[  5]   2.01-3.01   sec   110 MBytes   923 Mbits/sec             
[  5]   3.01-4.02   sec   112 MBytes   939 Mbits/sec             
[  5]   4.02-5.01   sec   111 MBytes   938 Mbits/sec             
[  5]   5.01-6.02   sec   112 MBytes   936 Mbits/sec             
[  5]   6.02-7.02   sec   111 MBytes   938 Mbits/sec             
[  5]   7.02-8.01   sec   111 MBytes   937 Mbits/sec             
[  5]   8.01-9.02   sec   112 MBytes   938 Mbits/sec     

I don't think it's a hardware capacity issue. If the router can do 600-700Mbps when the pipe is set to 900Mbps, it should easily be able to do 400Mbps. The problem is this slowdown, which in my mind shouldn't be happening. As I said, it only seems to happen on pipes where the configured bandwidth exceeds around 200Mbps.

Here's the traffic shaper config:

   <TrafficShaper version="1.0.1">
      <pipes>
        <pipe uuid="41386202-308a-4557-b22d-5571e95e1d95">
          <number>10000</number>
          <enabled>1</enabled>
          <bandwidth>403</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <description>down-pipe</description>
        </pipe>
        <pipe uuid="774dca0d-c50e-4ba3-a48f-fa2fecc385a1">
          <number>10001</number>
          <enabled>1</enabled>
          <bandwidth>23</bandwidth>
          <bandwidthMetric>Mbit</bandwidthMetric>
          <queue/>
          <mask>none</mask>
          <scheduler/>
          <codel_enable>0</codel_enable>
          <codel_target/>
          <codel_interval/>
          <codel_ecn_enable>0</codel_ecn_enable>
          <fqcodel_quantum/>
          <fqcodel_limit/>
          <fqcodel_flows/>
          <origin>TrafficShaper</origin>
          <description>up-pipe</description>
        </pipe>
      </pipes>
      <queues/>
      <rules>
        <rule uuid="38eea6a7-e1a1-4581-8179-2993ad000f88">
          <sequence>1</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <source>any</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>192.168.1.0/24</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <direction>in</direction>
          <target>41386202-308a-4557-b22d-5571e95e1d95</target>
          <description/>
          <origin>TrafficShaper</origin>
        </rule>
        <rule uuid="6fbe7e2d-dea7-4eb6-acd4-51e25de2af1f">
          <sequence>2</sequence>
          <interface>wan</interface>
          <interface2/>
          <proto>ip</proto>
          <source>192.168.1.0/24</source>
          <source_not>0</source_not>
          <src_port>any</src_port>
          <destination>any</destination>
          <destination_not>0</destination_not>
          <dst_port>any</dst_port>
          <direction>out</direction>
          <target>774dca0d-c50e-4ba3-a48f-fa2fecc385a1</target>
          <description/>
          <origin>TrafficShaper</origin>
        </rule>
      </rules>
    </TrafficShaper>
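
As I read it, that config boils down to roughly the following dummynet setup (a hand-written approximation based on the ipfw rule list shown earlier, not the exact commands OPNsense generates):

# down-pipe and up-pipe, no mask, default scheduler
ipfw pipe 10000 config bw 403Mbit/s
ipfw pipe 10001 config bw 23Mbit/s

# WAN rules feeding traffic into the pipes
ipfw add 60001 pipe 10000 ip from any to 192.168.1.0/24 in via xn0
ipfw add 60002 pipe 10001 ip from 192.168.1.0/24 to any out via xn0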
#26
I've just done that - it still falls back to around the 250Mbit/sec throughput level:

[  5]   0.00-1.01   sec  42.5 MBytes   354 Mbits/sec             
[  5]   1.01-2.05   sec  46.2 MBytes   372 Mbits/sec             
[  5]   2.05-3.03   sec  43.8 MBytes   373 Mbits/sec             
[  5]   3.03-4.04   sec  45.0 MBytes   376 Mbits/sec             
[  5]   4.04-5.03   sec  45.0 MBytes   381 Mbits/sec             
[  5]   5.03-6.02   sec  46.2 MBytes   391 Mbits/sec             
[  5]   6.02-7.03   sec  47.5 MBytes   395 Mbits/sec             
[  5]   7.03-8.08   sec  43.8 MBytes   350 Mbits/sec             
[  5]   8.08-9.06   sec  31.2 MBytes   268 Mbits/sec             
[  5]   9.06-10.06  sec  31.2 MBytes   261 Mbits/sec             
[  5]  10.06-11.02  sec  30.0 MBytes   262 Mbits/sec             
[  5]  11.02-12.04  sec  32.5 MBytes   267 Mbits/sec             
[  5]  12.04-13.08  sec  32.5 MBytes   264 Mbits/sec             
[  5]  13.08-14.07  sec  31.2 MBytes   263 Mbits/sec             
[  5]  14.07-15.04  sec  30.0 MBytes   260 Mbits/sec             
[  5]  15.04-16.07  sec  31.2 MBytes   256 Mbits/sec             
[  5]  16.07-17.05  sec  31.2 MBytes   267 Mbits/sec             
[  5]  17.05-18.07  sec  31.2 MBytes   257 Mbits/sec             

#27
Oops, yes. I've corrected the original post.

I've tried all the schedulers on the pipe - currently it's set to weighted fair queueing.

The rules are set on the WAN interface - one rule for traffic sourced from my network, directing it to the upload pipe, and one rule for traffic destined for my network, directing it to the download pipe. Both rules have the direction setting left at the default, i.e. both.

The download pipe works as I would expect up to around 200Mbit/s - it's when I go higher on the pipe bandwidth that I see this type of behaviour, e.g. below, the download pipe is set to 400Mbit/s:

[  5]   0.00-1.04   sec  47.5 MBytes   383 Mbits/sec             
[  5]   1.04-2.05   sec  47.5 MBytes   394 Mbits/sec             
[  5]   2.05-3.01   sec  45.0 MBytes   393 Mbits/sec             
[  5]   3.01-4.05   sec  48.8 MBytes   394 Mbits/sec             
[  5]   4.05-5.06   sec  46.2 MBytes   385 Mbits/sec             
[  5]   5.06-6.08   sec  30.0 MBytes   246 Mbits/sec             
[  5]   6.08-7.06   sec  30.0 MBytes   255 Mbits/sec             
[  5]   7.06-8.06   sec  30.0 MBytes   254 Mbits/sec             
[  5]   8.06-9.06   sec  30.0 MBytes   249 Mbits/sec             
[  5]   9.06-10.03  sec  30.0 MBytes   259 Mbits/sec             
[  5]  10.03-11.08  sec  31.2 MBytes   250 Mbits/sec             
[  5]  11.08-12.05  sec  30.0 MBytes   261 Mbits/sec             
[  5]  12.05-13.02  sec  30.0 MBytes   259 Mbits/sec             
[  5]  13.02-14.04  sec  30.0 MBytes   248 Mbits/sec             
[  5]  14.04-15.06  sec  31.2 MBytes   256 Mbits/sec

The upload pipe works as expected (in the example below it was set to 23 Mbit/s):
[  5]   0.00-1.10   sec  2.00 MBytes  15.2 Mbits/sec             
[  5]   1.10-2.11   sec  2.50 MBytes  20.9 Mbits/sec             
[  5]   2.11-3.07   sec  2.50 MBytes  21.8 Mbits/sec             
[  5]   3.07-4.09   sec  2.62 MBytes  21.4 Mbits/sec             
[  5]   4.09-5.07   sec  2.50 MBytes  21.4 Mbits/sec             
[  5]   5.07-6.05   sec  2.50 MBytes  21.4 Mbits/sec             
[  5]   6.05-7.08   sec  2.62 MBytes  21.4 Mbits/sec
#28
I'm running 18.1.5.

I've been having problems getting the traffic shaper to work. As an example, I have two simple pipes - one set to 400Mbps which I apply to download traffic on my subnet via rules, and another pipe set to 20Mbps which I apply to upload traffic on my subnet (not using queues in this example).

The interfaces on the device are gigabit, and I test using an iperf client and server on either side. The upload pipe works as expected, no problems there. However, on the download pipe I only get around 250Mbit/s (when it's set to 400Mbps), no matter what settings I try for the pipe. In some circumstances, shortly after resetting the rules I see 395Mbit/s for about 4-5 seconds, and then it settles back down to around 250Mbit/s again. I've tried no mask, source and destination masks, CoDel, and different scheduler types - I just can't seem to get past the 250Mbit/s.

If I set the download pipe to 800Mbit/s I then get around 600Mbit/s of traffic through the interface. With no other traffic on the network, I'm struggling to see why I don't get the full speed. With no traffic shaping enabled I get around 940Mbit/s.

Any ideas why this is happening? Ultimately I'd like to start using queues to prioritise traffic, but I'm just trying to get the basic pipes working for the moment - I just can't get close to the configured speed. I could fudge it by upping the configured bandwidth, but that makes no sense to me.
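
In case it matters, the tests themselves are just plain iperf3 runs straight through the router, along these lines (192.168.0.11 is the box on the WAN side of my test setup):

# on the machine on the WAN side
iperf3 -s

# from the LAN client: upload test, then download test (reverse mode)
iperf3 -c 192.168.0.11 -t 30
iperf3 -c 192.168.0.11 -t 30 -R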
#29
17.7 Legacy Series / Re: DHCPv6 Relay Service won't start
November 27, 2017, 08:25:30 PM
Yes, the service is configured - there is a red service indicator and it won't start (see pic).

The advertisement service does start, but the DHCPv6 relay service won't (whether the advertisement service is on or off).

Both the WAN and the LAN have valid IPv6 addresses (LAN IPv6 is set to track the WAN interface).

I just get nothing - when I click start on the DHCPv6 relay service, a little box pops up for half a second saying 'please wait', then it goes back to the config page (showing that it hasn't started).
#30
17.7 Legacy Series / DHCPv6 Relay Service won't start
November 26, 2017, 06:56:59 PM
Hi all,

I'm unable to start the DHCPv6 Relay service.

My WAN and LAN interfaces have IPv6 addresses. I'm not running the DHCPv6 server on any of them and I have advertisements turned off, but I just can't start the service.

I click on start and nothing happens, no errors. Does anyone know where I can find logs to debug this further, or have any ideas about what could be causing the issue?
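
For what it's worth, the only places I know to look are the general system log and the process list - a rough sketch below (I'm assuming the relay runs as ISC dhcrelay, which I haven't confirmed):

# follow the system log while clicking Start in the GUI
# (use plain tail -f instead if the log isn't a circular clog file)
clog -f /var/log/system.log

# check whether a relay daemon actually got started
ps ax | grep -i dhcrelay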

Thanks!