OPNsense Forum

Archive => 18.1 Legacy Series => Topic started by: namezero111111 on January 29, 2018, 08:34:16 pm

Title: Queue statistics
Post by: namezero111111 on January 29, 2018, 08:34:16 pm
Hello folks,

I have a question regarding the monitoring scripts I'm writing.
I am trying to obtain queue information.

So from the ipfw manual:
Quote
Statistics

Per-flow queueing can be useful for a variety of purposes. A very simple one is counting traffic:

ipfw add pipe 1 tcp from any to any
ipfw add pipe 1 udp from any to any
ipfw add pipe 1 ip from any to any
ipfw pipe 1 config mask all

The above set of rules will create queues (and collect statistics) for all traffic. Because the pipes have no limitations, the only effect is collecting statistics. Note that we need 3 rules, not just the last one, because when ipfw tries to match IP packets it will not consider ports, so we would not see connections on separate ports as different ones.

Cool. So with /sbin/ipfw queue show I can get this output (same as shaper diag page):
Quote
q10006  50 sl. 1 flows (1 buckets) sched 10001 weight 50 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0        8     1528  0    0   0
q10007  50 sl. 0 flows (1 buckets) sched 10000 weight 50 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10004  50 sl. 1 flows (1 buckets) sched 10000 weight 99 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
  0 ip           0.0.0.0/0             0.0.0.0/0     2863  2857626  0    0 167
q10005  50 sl. 1 flows (1 buckets) sched 10001 weight 95 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
  0 ip           0.0.0.0/0             0.0.0.0/0        2     1300  0    0   0
q10002  50 sl. 1 flows (1 buckets) sched 10000 weight 80 lmax 1500 pri 0  AQM CoDel target 5ms interval 500ms NoECN
  0 ip           0.0.0.0/0             0.0.0.0/0        6      960  0    0   0
q10003  50 sl. 0 flows (1 buckets) sched 10000 weight 75 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10000  50 sl. 0 flows (1 buckets) sched 10000 weight 20 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10001  50 sl. 0 flows (1 buckets) sched 10000 weight 70 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10008  50 sl. 0 flows (1 buckets) sched 10001 weight 60 lmax 1500 pri 0  AQM CoDel target 5ms interval 500ms NoECN
q10009  50 sl. 0 flows (1 buckets) sched 10001 weight 20 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN

Cool. Combine that with data from $config["OPNsense"]["TrafficShaper"] and away we go.
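
For what it's worth, here is a minimal parsing sketch for that output (the field positions, the /sbin/ipfw invocation, and the use of Python are my assumptions based on the listing above, not an OPNsense API):
Code: [Select]
#!/usr/bin/env python3
"""Hedged sketch: sum per-queue counters from `ipfw queue show`.

Assumes the output format shown above: a header line starting with
"qNNNNN", followed by flow rows whose numeric columns are
Tot_pkt, Tot_bytes, backlog Pkt, backlog Byte, Drp.
"""
import re
import subprocess
from collections import defaultdict

def queue_stats():
    out = subprocess.run(["/sbin/ipfw", "queue", "show"],
                         capture_output=True, text=True, check=True).stdout
    stats = defaultdict(lambda: {"pkts": 0, "bytes": 0, "drops": 0})
    current = None
    for line in out.splitlines():
        m = re.match(r"^q(\d+)\s", line)
        if m:                       # queue header, e.g. "q10004  50 sl. ..."
            current = int(m.group(1))
            stats[current]          # make sure idle queues still show up
            continue
        cols = line.split()
        # flow row, e.g. "0 ip 0.0.0.0/0 0.0.0.0/0 2863 2857626 0 0 167"
        if current is not None and len(cols) == 9 and cols[0].isdigit():
            stats[current]["pkts"]  += int(cols[4])
            stats[current]["bytes"] += int(cols[5])
            stats[current]["drops"] += int(cols[8])
    return dict(stats)

if __name__ == "__main__":
    for q, s in sorted(queue_stats().items()):
        print(f"q{q}: {s['pkts']} pkts, {s['bytes']} bytes, {s['drops']} drops")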

However, it seems that this only collects current information about the queues (i.e. the instantaneous backlog).
From the ipfw manual it is unclear how to interpret the output (or I am too confused to read it).

Before, on pf with altq we'd use the output from /sbin/pfctl -vsq, which would give incremental counters over intervals.
Is there a similar command, or a better way to monitor queue statistics for Nagios RRD graph generation?

Title: Re: Queue statistics
Post by: HFsi on May 03, 2018, 05:34:07 pm
What does "a bucket" mean here?
Title: Re: Queue statistics
Post by: guest15389 on May 03, 2018, 08:57:47 pm
Unfortunately, I haven't found anything that gets down to that level of detail for reporting at the traffic-shaping level.

ipfw -a list gives you packet counts for all the rules that feed the queues, but that didn't really give me what I wanted either.

Here is an example section of the output from the rules I have set up that direct traffic to the queues:
Code: [Select]
60001      327       58676 queue 10000 ip from 192.168.1.50 to any out via igb0
60002        0           0 queue 10000 ip from 192.168.1.51 to any out via igb0
60003        0           0 queue 10000 ip from 192.168.1.55 to any out via igb0
60004     4527      380654 queue 10000 ip from 192.168.1.90 to any out via igb0
60005      220       82560 queue 10003 ip from any to 192.168.1.50 in via igb0
60006        0           0 queue 10003 ip from any to 192.168.1.51 in via igb0
60007        0           0 queue 10003 ip from any to 192.168.1.55 in via igb0
60008     2875      234802 queue 10003 ip from any to 192.168.1.90 in via igb0
60009        0           0 queue 10002 ip from 192.168.1.31 to any out via igb0
60010        0           0 queue 10002 ip from 192.168.1.30 to any dst-port 563 out via igb0
60011        0           0 queue 10005 ip from any to 192.168.1.31 in via igb0
60012        0           0 queue 10005 ip from any to 192.168.1.30 src-port 563 in via igb0
60013 19640439 19770716224 queue 10001 ip from any to any out via igb0
60014 33918821 42581595638 queue 10004 ip from any to any in via igb0
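
As a hedged sketch of turning those rule counters into per-queue totals (field positions are assumed from the listing above; this is not an official tool, and the /sbin/ipfw path is an assumption):
Code: [Select]
#!/usr/bin/env python3
"""Hedged sketch: aggregate `ipfw -a list` rule counters per queue number.

Assumes lines of the form shown above:
  <rule> <pkts> <bytes> queue <qnum> ip from ... to ...
"""
import subprocess
from collections import defaultdict

def per_queue_counters():
    out = subprocess.run(["/sbin/ipfw", "-a", "list"],
                         capture_output=True, text=True, check=True).stdout
    totals = defaultdict(lambda: {"pkts": 0, "bytes": 0})
    for line in out.splitlines():
        cols = line.split()
        # e.g. "60013 19640439 19770716224 queue 10001 ip from any to any out via igb0"
        if len(cols) >= 5 and cols[3] == "queue":
            totals[cols[4]]["pkts"]  += int(cols[1])
            totals[cols[4]]["bytes"] += int(cols[2])
    return dict(totals)

if __name__ == "__main__":
    for q, t in sorted(per_queue_counters().items()):
        print(f"queue {q}: {t['pkts']} pkts, {t['bytes']} bytes")
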
Title: Re: Queue statistics
Post by: namezero111111 on May 04, 2018, 06:59:03 pm
Yes, total bytes but no drop rate.
I think dummynet creates new flows through the queues, which is why they show up as 0.0.0.0/0 to 0.0.0.0/0 and come and go rather than staying around.
Not sure if this is from dummynet or how ipfw uses it...

We made do with the output of ipfw queue show, but only use it once there is a certain number of total packets, so that the drop rate can be determined more accurately...
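
In case it helps anyone, here is roughly that idea as a hedged sketch (the threshold value is an arbitrary assumption, not something OPNsense defines):
Code: [Select]
#!/usr/bin/env python3
"""Hedged sketch: only report a drop rate once a queue has seen enough
packets for the ratio to be meaningful."""

MIN_PKTS = 1000  # assumption: below this the ratio is too noisy to graph

def drop_rate(tot_pkts, drops):
    """Return drops as a fraction of the total packet counter, or None."""
    if tot_pkts < MIN_PKTS:
        return None
    return drops / tot_pkts

if __name__ == "__main__":
    # Example values taken from q10004 in the `ipfw queue show` output
    # earlier in the thread: 2863 packets, 167 drops -> about 5.8 %.
    rate = drop_rate(2863, 167)
    print("n/a" if rate is None else f"{rate:.1%}")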