Hi,
how do I specify the source and destination addresses for IPv6 in the traffic shaping rules when following this guide: https://docs.opnsense.org/manual/how-tos/shaper_share_evenly.html for instances where the IPv6 addresses are not static?
You would not. Those instructions are quite old. I would argue that the IP ranges in the WAN rules are only there to signify the direction for the attached pipes. With advanced settings, you can now do the same with "direction" and "interface 2", if need be. You do not even need the subnets any more.
Hmm. The help texts say: "secondary interface, matches packets traveling to/from interface (1) to/from interface (2). can be combined with direction." and "matches incoming or outgoing packets or both (default)".
Obviously it doesn't say which direction it is when 'direction' is not set to 'both'.
So which direction is which?
How would a rule without a direction make sense?
I would argue that's not necessarily about direction but about excluding other interfaces (like the ones for various VLANs and VPNs) from the rule. After all, traffic going out of one interface is the same traffic that goes into the other one, while other traffic leaving the same interface towards other interfaces shouldn't have the rule applied to it. Does that happen? That would make it hilariously complicated to set it all up if you want to do traffic shaping for a bunch of interfaces instead of only two.
Like how do you know what the maximum bandwidth between a LAN interface and a VLAN interface on a different physical network card is? And what is the bandwidth when the interfaces are on the same physical network card?
It's just that in this case my ISP suggests setting bandwidth limits for upload and download, so I thought I'd play with it and see if it makes a difference. So far it doesn't.
It seems obvious to me that "in" is like "in" on firewall rules for the first interface. So you would only need "in" and "out" rules for WAN with any/any as src/dst with no second interface at all if you do not want it to be significant.
You normally only want to limit the in and out directions for WAN, for which the limit exists in the first place and you know what bandwidth it has. That is no different than the documentation example. The only things that did not exist back when that was written are the in/out distinction and the second interface, which is why netmasks were used instead.
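To make that concrete, here is a rough sketch of what I mean, written as plain Python data purely for illustration; this is not an OPNsense API, the field names only approximate the GUI labels, and the pipe names and bandwidth figures are made up:

Code:
# Illustration only: rough equivalent of the two WAN shaper rules,
# with made-up pipe names and bandwidths.
pipes = {
    "pipe-up":   {"bandwidth_mbit": 35},    # upload limit towards the ISP
    "pipe-down": {"bandwidth_mbit": 100},   # download limit from the ISP
}
rules = [
    # "out" on WAN = traffic leaving towards the ISP, no matter which
    # internal interface it came from.
    {"interface": "WAN", "direction": "out", "source": "any",
     "destination": "any", "target": "pipe-up"},
    # "in" on WAN = traffic arriving from the ISP.
    {"interface": "WAN", "direction": "in", "source": "any",
     "destination": "any", "target": "pipe-down"},
]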
While you could use the same approach for different VLANs as well, that is more like a bandwidth restriction for specific VLANs, and yes, if you want that, you will have to set it up.
Apart from that: what are you trying to achieve? Fair distribution amongst your clients, or bufferbloat optimization because of asymmetric up/downstream? For the latter, there is another documentation section that was recently reworked and does actually work.
Unless you have clients that are far more powerful than others, I would expect this distribution not to do anything much visible. For bufferbloat issues, that is another story.
Quote from: meyergru on October 09, 2025, 08:34:03 PM
It seems obvious to me that "in" is like "in" on firewall rules for the first interface.
That isn't at all obvious to me. It could be 'in to the WAN interface (1st interface), out of the LAN interface (2nd interface, or its network if an IP address is used instead of an interface)'. It could also be 'out of the 1st interface to the internet' or 'out of the 2nd interface, in to the 1st interface'. Or if I don't specify a 2nd interface or a network, it could mean 'in to the 1st interface from anywhere' or 'out of the 1st interface (i.e. to the internet in this case, or to any interface)' or whatever. It's entirely unclear to me.
Quote
So you would only need "in" and "out" rules for WAN with any/any as src/dst with no second interface at all if you do not want it to be significant.
Is that even possible? For a rule that is supposed to limit the upload bandwidth to the ISP (internet), it would make sense because a rule limited to the LAN interface wouldn't limit the other interfaces. The guide seems to require an IP address ...
Quote
You normally only want to limit the in and out directions for WAN, for which the limit exists in the first place and you know what bandwidth it has.
If I have a guest network, for example, I might want to use different limits for that. In that case I'd have to set up rules which, for example, allow only 20% of the upload bandwidth for the guest network and 80% for the LAN. When you have a bunch of interfaces you might want to limit, like a VoIP interface for a VoIP VLAN and some others, that gets tedious to set up.
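Roughly, I imagine that would have to look something like the following sketch (plain Python data only for illustration; the names and the 80/20 weights are made up). As far as I understand it, queue weights only split a pipe's bandwidth between queues while both are busy, so the guest network could still use more when the LAN is idle; a hard 20% cap would need its own, smaller pipe instead.

Code:
# Illustration only: two queues sharing one upload pipe, with weights that
# split the pipe's bandwidth roughly 80/20 when both networks are busy.
upload_pipe = {"name": "pipe-up", "bandwidth_mbit": 35}   # made-up value
queues = {
    "queue-lan":   {"pipe": "pipe-up", "weight": 80},
    "queue-guest": {"pipe": "pipe-up", "weight": 20},
}
rules = [
    # Upload traffic enters the router on the internal interfaces ("in").
    {"interface": "LAN",   "direction": "in", "src": "any", "dst": "any",
     "target": "queue-lan"},
    {"interface": "GUEST", "direction": "in", "src": "any", "dst": "any",
     "target": "queue-guest"},
]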
Quote
That is no different than the documentation example.
The difference would be that all traffic (from all interfaces) that goes out to the internet would be limited, rather than only the traffic from the LAN interface. The latter is what the guide does. I actually should use a rule that limits all traffic from all interfaces that wants to go out to the internet.
Quote
The only things that did not exist back when that was written are the in/out distinction and the second interface, which is why netmasks were used instead.
That was a great improvement.
Quote
While you could use the same approach for different VLANs as well, that is more like a bandwidth restriction for specific VLANs, and yes, if you want that, you will have to set it up.
right
Quote
Apart from that: what are you trying to achieve? Fair distribution amongst your clients
I'm simply trying out the feature after I upgraded my connection to a bit more bandwidth, and my ISP suggested setting specific limits for upload and download. It never hurts to learn something, and I'm curious if it would make any difference.
From what I've seen so far, OPNsense does a great job of distributing the bandwidth between the clients by default without any extra traffic shaping.
Also, I'm not a fan of traffic shaping. The problem with it is that you can't really accomplish anything other than dropping packets. Having to drop packets because you don't have enough bandwidth doesn't improve anything, because packets inevitably get dropped anyway. You can decide which packets to drop, but when you don't have enough bandwidth, the line is still saturated and the solution is to get more bandwidth. There may be cases in which you can achieve improvements because you have better control over the traffic, like through the switches on your own network, but for your internet connection, the bottleneck is out of your control, and traffic shaping is pointless.
Quote
or bufferbloat optimization because of asymmetric up/downstream? For the latter, there is another documentation section that was recently reworked and does actually work.
What does it do?
Quote
Unless you have clients that are far more powerful than others, I would expect this distribution not to do anything much visible. For bufferbloat issues, that is another story.
I don't expect it to do anything, see above. But the ISP must have some reasons to tell customers to use certain bandwidth limitations, and it's a learning experience for me to try it out, and I'm curious to see what happens.
So what I'm trying to accomplish is to limit upload and download to the numbers my ISP gave me, for both IPv4 and IPv6. But there isn't even an option to set up a rule for both IPv4 and IPv6 at the same time, which I guess would be required. And since I can't get a static IPv6 prefix, it seems I'm limited to using interfaces instead of networks when setting up rules.
Since there is no option to make a rule for both IPv4 and IPv6 at the same time, how am I supposed to set up a limit? Do I just make separate rules for IPv4 and IPv6 and hope that the pipe will still take care of the limit?
Now let's see if I can just use a single interface for a rule ...
Hm, what does 'ip' mean in a rule? Does that mean both IPv4 and IPv6 or something else?
When I make a rule for upload with only the WAN interface, what does the direction mean? I don't want to limit traffic that goes out of the WAN interface into the router but only to the ISP (internet).
And here we go: it won't let me apply the rules when I specify neither an IP address nor a 2nd interface. It only says "Error reconfiguring service. 200".
Why do I need queues when I can use a pipe as target for the rule?
PS: Apparently I got logged out of OPNsense, which is why it wouldn't let me apply the rules.
However, I can't make a rule without specifying an IP address. What's the 2nd interface for then?
And that takes me back to the question: How do I specify an IPv6 address in the rule when I don't have a static prefix?
Quote from: defaultuserfoo on October 09, 2025, 11:58:20 PM
I don't expect it to do anything, see above. But the ISP must have some reasons to tell customers to use certain bandwidth limitations, and it's a learning experience for me to try it out, and I'm curious to see what happens.
I assume that fighting bufferbloat is their aim, and I am 99% sure that this is what your ISP refers to, because of the mention of limits. It happens to be what most people - including me - use traffic shaping for. Read this (https://forum.opnsense.org/index.php?topic=42985.0), point 26, for an explanation.
Quote from: meyergru on October 10, 2025, 08:06:31 AM
Quote from: defaultuserfoo on October 09, 2025, 11:58:20 PM
I don't expect it to do anything, see above. But the ISP must have some reasons to tell customers to use certain bandwidth limitations, and it's a learning experience for me to try it out, and I'm curious to see what happens.
I assume that fighting bufferbloat is their aim, and I am 99% sure that this is what your ISP refers to, because of the mention of limits. It happens to be what most people - including me - use traffic shaping for. Read this (https://forum.opnsense.org/index.php?topic=42985.0), point 26, for an explanation.
You're probably right about that. I downloaded a 7.9GB video today, was watching the pretty graphs, and could see the bandwidth usage go up and down. So it may be a good idea to use traffic shaping.
But how could I get that to work without specifying IPs in the rules? I tried that, and the window to edit the rule kept saying that an IP address is required, no matter whether I was setting a second interface or not. So I removed all the pipes, queues and rules yesterday, because I didn't want a broken setup getting in the way that doesn't work anyway, since I can't specify an IPv6 address range.
I didn't want to follow https://forum.opnsense.org/index.php?topic=46990.0 because I'm not seeing packet loss and would rather start with a simple, basic approach and go from there. If there is packet loss, I can always 'upgrade'.
I do not understand what your problem is. If you follow the docs for bufferbloat and control plane prioritization, you do not need any IP addresses (i.e. you put "any" where you would have an IP in the rules, which is literally what the docs show).
As @meyergru said.
Both of those guides were tested and are used by several users. So following them should yield proper results.
One comment: do not use the Interface2 option in the Shaper Rules. It is only needed for more granular control. In 99% of all use cases you do not need it; you only need Interface1 with the proper direction related to the Pipe/Queue config.
Regards,
S.
Quote from: meyergru on October 10, 2025, 06:06:59 PM
I do not understand what your problem is. If you follow the docs for bufferbloat and control plane prioritization, you do not need any IP addresses (i.e. you put "any" where you would have an IP in the rules, which is literally what the docs show).
I just left the entry field blank and it kept saying I need to specify an IP address. There is no indication that you have to put 'any'. The GUI should just assume 'any' when the field is left blank. Otherwise the users have to assume that they can't set up a rule without an address.
Quote from: Seimus on October 11, 2025, 01:11:58 PM
As @meyergru said.
Both of those guides were tested and are used by several users. So following them should yield proper results.
One comment: do not use the Interface2 option in the Shaper Rules. It is only needed for more granular control. In 99% of all use cases you do not need it; you only need Interface1 with the proper direction related to the Pipe/Queue config.
Regards,
S.
Well, I followed https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html and am not really happy with the results. Basically, according to speedtest.net, I'm getting a little less bandwidth than I should when I'm using the numbers for the bandwidth I got from the ISP. When I increase the numbers, I get more bandwidth. I can't really tell what the latency is, though.
When I use much lower numbers for the pipes (like half the bandwidth I have), I'm not getting as much bandwidth as I'd expect; it's even less than the numbers would allow.
Does that mean that something isn't working right or is this to be expected?
That is expected.
When you try to control bufferbloat, even if you set your full BW, for example 100 Mbit as your contracted speed, you will never get 100 Mbit. FQ_C takes some percentage of the total throughput (I think 5% by default).
The point of all of this is to have control over possible congestion: prevention, handling and management.
To achieve this in the first place, you need to take control away from the ISP. This is done by limiting the BW so as not to trigger ISP-based queue management, which is usually terrible.
The tradeoff is that you will have lower throughput; the benefit is usually total mitigation of and/or control over the latency during a congestion state.
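As a rough back-of-the-envelope example (the percentages are approximate rules of thumb and the contracted speeds are made up, just to show the arithmetic):

Code:
# Back-of-the-envelope arithmetic for the tradeoff described above.
# Made-up contracted speeds; the 5% margin mirrors the rough figure
# mentioned above, it is not an exact scheduler constant.
contracted_down_mbit = 100
contracted_up_mbit = 40

pipe_down = contracted_down_mbit * 0.95   # pipe set slightly below line rate,
pipe_up = contracted_up_mbit * 0.95       # so queuing happens on the router

print(f"download pipe: {pipe_down:.0f} Mbit/s -> speed tests top out around this")
print(f"upload pipe:   {pipe_up:.0f} Mbit/s -> speed tests top out around this")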
P.S. The docs contain pictures with examples; one of them is for the Ookla speed test and shows where to look for the latency during load.
Regards,
S.
As said, speed suffers a little because of the reserve for control packets. To actually see the benefits of traffic shaping, you should try https://www.waveform.com/tools/bufferbloat or https://speed.cloudflare.com/ and compare the latency before and after applying these settings.
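If you want numbers you can compare directly, a quick and dirty script like this also works (a sketch that assumes a Unix-like client with the standard ping command; the download URL is just a placeholder for any large file):

Code:
# Compare idle latency vs. latency while the downlink is loaded.
import statistics
import subprocess
import threading
import time
import urllib.request

TARGET = "9.9.9.9"                          # any stable ping target
URL = "https://speed.hetzner.de/100MB.bin"  # placeholder: any large file

def ping_ms(count=10):
    """Return individual round-trip times in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), TARGET],
                         capture_output=True, text=True).stdout
    return [float(line.split("time=")[1].split()[0])
            for line in out.splitlines() if "time=" in line]

def saturate():
    """Pull a large file to load the downlink while we ping."""
    with urllib.request.urlopen(URL) as resp:
        while resp.read(1 << 20):
            pass

idle = ping_ms()
threading.Thread(target=saturate, daemon=True).start()
time.sleep(2)                               # let the download ramp up
loaded = ping_ms()

print(f"idle   median: {statistics.median(idle):6.1f} ms")
print(f"loaded median: {statistics.median(loaded):6.1f} ms")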
Quote from: Seimus on October 12, 2025, 02:40:25 AM
That is expected.
When you try to control bufferbloat, even if you set your full BW, for example 100 Mbit as your contracted speed, you will never get 100 Mbit. FQ_C takes some percentage of the total throughput (I think 5% by default).
I thought it would adjust to the numbers I'm giving it, and I'd use appropriate numbers.
Quote
The point of all of this is to have control over possible congestion: prevention, handling and management.
To achieve this in the first place, you need to take control away from the ISP. This is done by limiting the BW so as not to trigger ISP-based queue management, which is usually terrible.
Why is the limiting by the ISP so terrible?
Quote
The tradeoff is that you will have lower throughput; the benefit is usually total mitigation of and/or control over the latency during a congestion state.
P.S. The docs contain pictures with examples; one of them is for the Ookla speed test and shows where to look for the latency during load.
During that speedtest, the numbers for the latency keep changing all the time, so I can't really tell what it is. Upload latency seems to be much lower than the download latency.
When I set the pipe numbers to half the bandwidths I'm supposed to have, bandwidth is lower and the latency seems the same. When I set the numbers to double the bandwidth, latency is about half and I'm getting about 10% more bandwidth than I'm supposed to get. When I set the numbers to 10% over what I'm supposed to get (and that is more than the limits the ISP suggests), I'm getting better bandwidth and lower latency. I can only guess that shaping on the router doesn't kick in because I'm not reaching the bandwidth. So I guess I could as well delete all the settings, because they don't give any measurable benefits and only lower the usable bandwidth.
So what is the point of this traffic shaping? It seems to only lower the bandwidth I'm getting and to increase the latency. This doesn't make sense.
What do you suggest? Should I just delete the settings?
BTW, there's one difference with no traffic shaping: at the start of the test, download bandwidth may spike to over 2.5 times what I'm supposed to get before it goes down. With traffic shaping, that doesn't happen. But does it even matter?
Quote from: meyergru on October 12, 2025, 09:05:06 AM
As said, speed suffers a little because of the reserve for control packets. To actually see the benefits of traffic shaping, you should try https://www.waveform.com/tools/bufferbloat or https://speed.cloudflare.com/ and compare the latency before and after applying these settings.
The first one gets stuck at "Warming up" in the "Active" part. The second one only produces a message "Application error: a client-side exception has occurred (see the browser console for more information).". They need to fix their tests ...
I doubt that. I had the same problem when my MTU settings were wrong, and once more when I used the traffic shaper and found that sometimes you need a reboot to apply the traffic shaper settings correctly. That was discussed here: https://forum.opnsense.org/index.php?topic=46990.0, but I thought the problem had been fixed in the meantime.
If the tests fail for you, then something is wrong in your setup. Maybe you use ad blockers that interfere with the tests on those pages.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
Why is the limiting by the ISP so terrible?
Ask your ISP.
ISPs usually don't care to accommodate possible congestion in their networks. The bufferbloat community developed the LibreQoS platform, which is targeted at ISPs; they try to get ISPs to take advantage of it to mitigate bufferbloat in their networks. But once again, most ISPs do not care.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
During that speedtest, the numbers for the latency keep changing all the time, so I can't really tell what it is. Upload latency seems to be much lower than the download latency.
You have to check the latency number after the test is done if you use the Ookla speed test. But just go with the other two test sites; they will tell you more precisely.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
When I set the pipe numbers to half the bandwidths I'm supposed to have, bandwidth is lower and the latency seems the same. When I set the numbers to double the bandwidth, latency is about half and I'm getting about 10% more bandwidth than I'm supposed to get. When I set the numbers to 10% over what I'm supposed to get (and that is more than the limits the ISP suggests), I'm getting better bandwidth and lower latency.
This doesn't make any sense at all. Plus, given the way you previously described checking latency during load, I am honestly not sure you even get proper readings in the first place.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
I can only guess that shaping on the router doesn't kick in because I'm not reaching the bandwidth. So I guess I could as well delete all the settings, because they don't give any measurable benefits and only lower the usable bandwidth.
FQ_C is an active AQM; even if you are not reaching the "BW", it measures the time each packet spends within the flow/queue. Once again, I am honestly not sure you even get proper readings in the first place.
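For illustration, here is a very stripped-down sketch of the CoDel part of that idea; the real FQ_CoDel additionally keeps per-flow queues and an adaptive drop schedule, so treat this as a toy model only. The key point is that it reacts to how long packets have been sitting in the queue, not to how full the queue is or whether the link is at its configured bandwidth:

Code:
# Toy model of the CoDel idea: drop when packets have been queued too long.
import time
from collections import deque

TARGET = 0.005    # 5 ms of acceptable standing delay
INTERVAL = 0.100  # tolerate short bursts for up to 100 ms

class TinyCodel:
    def __init__(self):
        self.q = deque()
        self.above_since = None   # when sojourn time first exceeded TARGET

    def enqueue(self, packet):
        self.q.append((time.monotonic(), packet))

    def dequeue(self):
        while self.q:
            enq_time, packet = self.q.popleft()
            sojourn = time.monotonic() - enq_time   # time spent waiting
            if sojourn < TARGET:
                self.above_since = None             # queue is healthy
                return packet
            if self.above_since is None:
                self.above_since = time.monotonic()
            if time.monotonic() - self.above_since < INTERVAL:
                return packet                       # short burst: let it pass
            # standing queue detected: drop this packet, try the next one
        return None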
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
So what is the point of this traffic shaping? It seems to only lower the bandwidth I'm getting and to increase the latency. This doesn't make sense.
This is just not true. If you are getting worse latency with FQ_C, most likely you are doing something wrong, or you are not reading the output of the test results properly.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
BTW, there's one difference with no traffic shaping: at the start of the test, download bandwidth may spike to over 2.5 times what I'm supposed to get before it goes down. With traffic shaping, that doesn't happen. But does it even matter?
It does matter; that latency bump you see at the start is caused by a burst of traffic. This causes a lot of problems at the start of transmissions and during application startups.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PM
What do you suggest? Should I just delete the settings?
Read the documentation and properly read the outputs of the tests, because from what I have read so far, I am really not sure you are reading the test outputs correctly.
Regards,
S.