Can’t get the shaper on OPNsense to work.

Started by robert.haugen@gmail.com, November 15, 2025, 06:25:32 PM

November 15, 2025, 06:25:32 PM Last Edit: November 15, 2025, 06:28:27 PM by robert.haugen@gmail.com
I want the LAN network to have priority for download traffic.
When the network is not congested, the GUEST network should still have full speed.
However, when both LAN and GUEST are heavily used, LAN should receive significantly higher priority.

I've tried all combinations of Pipes, Queues, and Rules without success.

Reference:
https://docs.opnsense.org/manual/how-tos/shaper_prioritize_using_queues.html

For testing, I'm using two Debian Linux clients — one on LAN and one on GUEST — running the "Speedtest by Ookla" CLI tool.

You need the pipe first, as in the howto, configured with the total available WAN bandwidth; it must have a scheduler type of "Weighted Fair Queue".

Then you need two queues for LAN and GUEST referencing that same pipe, with weights to define the relative priorities, as in the howto.

Last, you define the LAN and GUEST rules referencing the respective queue. Both use the WAN interface; apart from that, the LAN rule has:

interface = WAN
proto = ip
source = any
src-port = any
destination = 192.168.x.0/24 (whatever your LAN network has)
dst-port = any
direction = in
target = LAN-Queue

and for the GUEST rule:

interface = WAN
proto = ip
source = any
src-port = any
destination = 192.168.y.0/24 (whatever your GUEST network uses)
dst-port = any
direction = in
target = GUEST-Queue
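For reference, rules of that shape should translate to something roughly like the following raw ipfw rules. This is only a sketch: the rule and queue numbers and the WAN device name igb0 are assumptions (OPNsense generates its own numbering), and the 192.168.x/y placeholders are kept from the rules above.

```shell
# Hypothetical ipfw equivalents of the two GUI rules above.
# "in via igb0" matches packets arriving on the WAN interface, i.e. downloads.
ipfw add 60001 queue 10001 ip from any to 192.168.x.0/24 in via igb0   # -> LAN-Queue
ipfw add 60002 queue 10002 ip from any to 192.168.y.0/24 in via igb0   # -> GUEST-Queue
```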

You probably used the LAN and GUEST interfaces in the rules; that will not work.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Thanks.

Using IPv6, I think the client is communicating with the default gateway using its link-local address. The link-local subnet is the same on both GUEST and LAN:

fe80::/64

Could that be the culprit?

Quote from: robert.haugen@gmail.com on November 15, 2025, 06:25:32 PM: I want the LAN network to have priority for download traffic.
When the network is not congested, the GUEST network should still have full speed.
However, when both LAN and GUEST are heavily used, LAN should receive significantly higher priority.

Priority in QoS is a feature where a packet of a certain application leaves the router sooner than a packet from any other application.

This, by its nature, is not possible here.

IPFW, which is the underlying feature used for shaping, doesn't have a scheduler that allows setting a traffic priority or a priority queue. What you can do is set weights using a weight-based scheduler to allocate a ratio of the bandwidth to a specific application.

 
Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Today at 10:52:12 AM #4 Last Edit: Today at 10:55:58 AM by meyergru
If that is true, @Seimus (which is unexpected to me, but seems true as far as I have now tested), then there is a problem:

It does not work for applications (= ports), either. That means I just tested that the whole example in the documentation does not work as expected.

Did I get you right? You seem to imply that not all inputs in the rules can be used as selectors: in this case the destination IPs cannot, but the source ports (i.e. the application) can. The latter is what the documentation describes. But I just tested with two VMs running iperf3 against different ports (9203 and 9207) on paris.bbr.iperf.bytel.fr. I used those as src-ports in two rules to select two queues with weights of 1 and 9.

Then I let both tests run at the same time. They showed the same speeds on both VMs.

Thus, to me, it seems even worse than you describe. On the other hand, there are a lot of new parameters that become visible only when you enable advanced settings, and they may or may not be the culprit. In the rule settings, there are even "interface 2" settings now, which suggest that instead of a netmask you can simply specify the destination interface for a rule, plus a "direction" parameter. I tried the "interface 2" parameters, and they did not work, either.

Today at 11:58:46 AM #5 Last Edit: Today at 05:12:49 PM by Seimus
From the perspective of the rules, you should be able to match based on the 5-tuple (source IP/port, destination IP/port, protocol). As I remember, matching packets was never a problem.

What I meant with "priority" or "priority queue" is the way the packets leave the device. A priority queue will be emptied first; it has precedence over all other queues. This is not only about bandwidth but really about which queue is processed first.

Quote from: meyergru on Today at 10:52:12 AMThe latter is what the documentation describes. But I just tested with two VMs with iperf3 against different ports (9203 and 9207) on paris.bbr.iperf.bytel.fr. I used those as src-ports in two rules to select two pipes to queues with weights of 1 and 9.

Then, I let both tests run at the same time. They showed the same speeds at both VMs.

In the docs the Pipe and Queue configuration looks correct to me, but those Rules...

Two very important things:
1. The direction is not set, and the default direction is BOTH; this means the same Pipe and Queue would be used for UP and DOWN (never a good idea). But here is the problem: the rule is only used if the packet matches it, and it would need to match both IN and OUT. So in reality only one direction is matched.
2. The "interface 2" field provides just another selector possibility. But it is an addition: if you use it, the configured 5-tuple of the rule still needs to match as well.


The rule here is a bit clunky:
src-port : https
Direction: both (default)

This is applied for both directions, OUT and IN, on the WAN interface. What is happening here is the following:

For the OUT direction:

From client to server >
This is basically UPLOAD: the packet leaves the FW towards a destination with a destination port of 443. This of course will not match the rule, as we expect to match the port on the source instead.


For the IN direction:

From server to client >
This is basically DOWNLOAD: the packet enters the FW towards the client with a source port of 443. This should match and be put into the Queue.



But in ipfw I didn't find any BOTH option, which implies that instead of a BOTH statement, two separate rules are created: one with an IN and one with an OUT statement, each using source port 443.
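If the GUI's "both" really expands that way, the two generated rules would look roughly like this (a sketch with illustrative numbering, assuming igb0 is the WAN device and TCP as the protocol, since ipfw matches ports only for port-carrying protocols):

```shell
# Hypothetical expansion of one "direction: both" rule with src-port https.
ipfw add queue 1 tcp from any 443 to any in  via igb0   # download: server's src port is 443 -> matches
ipfw add queue 1 tcp from any 443 to any out via igb0   # upload: client's src port is ephemeral -> never matches
```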

Overall, the config in the docs should work, but it is not configured in the best way.

Quote from: meyergru on Today at 10:52:12 AMBut I just tested with two VMs with iperf3 against different ports (9203 and 9207) on paris.bbr.iperf.bytel.fr. I used those as src-ports in two rules to select two pipes to queues with weights of 1 and 9.

So you created two separate Pipes and each of them has one Queue? Like:
Pipe1 > Queue1 weight 1
Pipe2 > Queue2 weight 9

The way the weighted scheduler works is that it does not provide a BW cap; the BW cap is done on the PIPE. This means that if I have a PIPE of 10 Mbit with several Queues of different weights, like 1 and 9, where each Queue is used for a specific application, the ratio is only split according to the weights when the PIPE is utilized by both of those applications at the same time. If only one application saturates the PIPE, it gets the whole BW.
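The arithmetic behind that can be sketched with the numbers from the example (a 10 Mbit pipe with queue weights 9 and 1):

```shell
# Under contention, each queue gets pipe * weight / sum(weights).
# When the competitor is idle, the remaining queue may take the whole pipe.
awk 'BEGIN {
  pipe = 10; w1 = 9; w2 = 1
  printf "contended: %.1f and %.1f Mbit/s\n", pipe*w1/(w1+w2), pipe*w2/(w1+w2)
  printf "alone:     %.1f Mbit/s\n", pipe
}'
```

This prints a 9.0 / 1.0 Mbit/s split under contention, and the full 10 Mbit/s for a lone flow.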

So to do the BW allocation properly, it needs to follow, per direction:
1. One PIPE
2. Queues per application attached to the same PIPE (scheduler)
3. Rules in the proper direction, with proper 5-tuple matching, attached to the Queues
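Put together, the per-direction recipe above would look roughly like this in raw ipfw/dummynet terms (a sketch only: the IDs, the 10 Mbit bandwidth, igb0 as WAN, and the 192.168.x/y placeholders are illustrative; on OPNsense all of this is configured in the GUI):

```shell
ipfw pipe 1 config bw 10Mbit/s        # 1. one PIPE = total bandwidth for this direction
ipfw queue 1 config pipe 1 weight 9   # 2. LAN queue on that pipe
ipfw queue 2 config pipe 1 weight 1   #    GUEST queue on the same pipe
# 3. download rules on WAN, matched per destination network:
ipfw add queue 1 ip from any to 192.168.x.0/24 in via igb0
ipfw add queue 2 ip from any to 192.168.y.0/24 in via igb0
```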

Regards,
S.

I tested the config in the docs and it is working, but as mentioned, the config example is a bit clunky for my taste.


ratio 9:1

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.06  sec  11.6 MBytes  9.69 Mbits/sec   38            sender
[  5]   0.00-10.00  sec  10.4 MBytes  8.70 Mbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.07  sec  1.61 MBytes  1.34 Mbits/sec    0            sender
[  5]   0.00-10.00  sec  1.12 MBytes   944 Kbits/sec                  receiver

ratio 7:3

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  9.36 MBytes  7.82 Mbits/sec   58            sender
[  5]   0.00-10.00  sec  8.12 MBytes  6.82 Mbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.05  sec  4.23 MBytes  3.53 Mbits/sec    0            sender
[  5]   0.00-10.00  sec  3.50 MBytes  2.94 Mbits/sec                  receiver
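As a sanity check, the receiver bitrates from the two runs above can be compared against the configured ratios (9:1 = 9.0, 7:3 ≈ 2.33):

```shell
# Measured receiver bitrates (Mbit/s) taken from the iperf3 output above.
awk 'BEGIN {
  printf "9:1 run -> measured ratio %.1f : 1\n", 8.70 / 0.944
  printf "7:3 run -> measured ratio %.1f : 1\n", 6.82 / 2.94
}'
```

This yields about 9.2:1 and 2.3:1, close to the configured weights.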

Regards,
S.

Today at 04:21:41 PM #7 Last Edit: Today at 05:47:32 PM by meyergru
I do not get it. I created one pipe which has less than my real downstream bandwidth:

[screenshot attachment]

Note: The FQ-CoDel scheduler was the problem; you have to use "Weighted Fair Queue" to make it work.

Two queues with weights 1 and 99:

[screenshot attachments]


Today at 04:26:03 PM #8 Last Edit: Today at 04:30:42 PM by meyergru
Then, two rules to select ports 9207 and 9203:

[screenshot attachments]

I also used the "in" direction on WAN, to denote downstream.

Then, I tested with:

VM1: iperf3 -4 -c paris.bbr.iperf.bytel.fr -p 9207 -R -i10
VM2: iperf3 -4 -c paris.bbr.iperf.bytel.fr -p 9207 -R -i10

I got:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  65.2 MBytes  54.5 Mbits/sec  728             sender
[  5]   0.00-10.00  sec  58.2 MBytes  48.9 Mbits/sec                  receiver

and

#iperf3 -4 -c paris.bbr.iperf.bytel.fr -p 9207 -R -i10
Connecting to host paris.bbr.iperf.bytel.fr, port 9207
Reverse mode, remote host paris.bbr.iperf.bytel.fr is sending
[  5] local 192.168.10.3 port 58524 connected to 5.51.3.41 port 9207
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  58.2 MBytes  48.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  65.2 MBytes  54.5 Mbits/sec  728             sender
[  5]   0.00-10.00  sec  58.2 MBytes  48.9 Mbits/sec                  receiver

which is approximately the same.


The "in" direction is correct, because when I test with

iperf3 -4 -c paris.bbr.iperf.bytel.fr -p 9207 -i10 (i.e. without "-R")

I get my full upstream speed on both VMs, i.e. the rules apparently are not used.

Also, with:

iperf3 -4 -c paris.bbr.iperf.bytel.fr -p 9205 -i10

I get the unlimited downstream speed, so the rules obviously do apply to select a queue. Yet the weights do not get applied.

WTH?

Today at 04:33:00 PM #9 Last Edit: Today at 04:37:30 PM by meyergru
OMG, I need the "Weighted Fair Queue" scheduler type in the pipe... then it works. I had copied from another pipe with FQ-CoDel...

And it works with netmasks as destinations instead of src ports as well.

Today at 05:10:06 PM #10 Last Edit: Today at 05:19:48 PM by Seimus
Quote from: meyergru on Today at 04:33:00 PMOMG, I need "Weigthed Fair Queue" scheduler type in the pipe... then it works. I copied from another pipe with FQ-Codel...

Happens :) ... The reason why there is no mention of a scheduler in the docs is because it uses the default, which is WF2Q+.

Quote from: meyergru on Today at 04:33:00 PMAnd it works with netmasks as destinations instead of src ports as well.

Or with a combination. As mentioned, you can match on the 5-tuple.

FQ_CoDel is not a weighted scheduler; it is a fair-queue scheduler, and by default it will share the BW equally amongst all flows.


P.S. Pro tip: if you are testing or playing with the Shaper, check it on the CLI using these commands. This is how I double-check that I didn't make a mistake:
ipfw show
ipfw pipe show
ipfw sched show

Regards,
S.

Quote from: robert.haugen@gmail.com on November 15, 2025, 11:08:06 PM: Thanks.

Using IPv6, I think the client is communicating with the default gateway using its link-local address. The link-local subnet is the same on both GUEST and LAN:

fe80::/64

Could that be the culprit?

The configuration example in the docs uses protocol IP, which should match both IPv4 and IPv6. As you want to divide BW based on VLANs/networks, you need to properly configure the network (source for OUT, destination for IN) in the WAN rule for it to be matched to the Queue.
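In raw ipfw terms, matching a local network per direction and per address family would look roughly like this (a sketch: the queue numbers, igb0 as WAN, and the 192.168.x / 2001:db8:x placeholders are all illustrative):

```shell
# Download (IN on WAN): the local network is the destination.
ipfw add queue 1 ip  from any to 192.168.x.0/24  in  via igb0   # IPv4
ipfw add queue 1 ip6 from any to 2001:db8:x::/64 in  via igb0   # IPv6 (routable prefix, not fe80::)
# Upload (OUT on WAN): the local network is the source, on a separate queue/pipe.
ipfw add queue 3 ip  from 192.168.x.0/24 to any  out via igb0
```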

Regards.
S.

Today at 05:43:18 PM #12 Last Edit: Today at 05:45:49 PM by meyergru
The gateway IP will almost certainly not be the culprit, as this is very common. Also, the destination IP can, and almost certainly will, be the routable IPv6 of your client, so instead of destination = 192.168.x.0/24 you would use whatever destination IPv6 network(s) you have.

Also, instead of using subnet addresses, you can use "interface 2", like so:

[screenshot attachment]

This way, you can leave the destination "any".

I tried it via:

#iperf3 -6 -c paris.bbr.iperf.bytel.fr -p 9207 -R -i10
Connecting to host paris.bbr.iperf.bytel.fr, port 9207
Reverse mode, remote host paris.bbr.iperf.bytel.fr is sending
[  5] local 2001:a61:524:xxxx:e5db:5a2d:dbaf:xxxx port 47370 connected to 2001:864:f003::2:1 port 9207
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.01  sec   113 MBytes  94.7 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.06  sec   118 MBytes  98.3 Mbits/sec  3513             sender
[  5]   0.00-10.01  sec   113 MBytes  94.7 Mbits/sec                  receiver

As you can see, it works just fine.