Hi,
how do I specify the source and destination addresses for IPv6 in the traffic shaping rules when following this guide: https://docs.opnsense.org/manual/how-tos/shaper_share_evenly.html for instances where the IPv6 addresses are not static?
You would not. Those instructions are quite old. I would argue that the IP ranges in the WAN rules are only there to signify the direction for the attached pipes. With advanced settings, you can now do the same with "direction" and "interface 2", if need be. You do not even need the subnets any more.
Hmm. The help texts say: "secondary interface, matches packets traveling to/from interface (1) to/from interface (2). can be combined with direction." and "matches incoming or outgoing packets or both (default)".
Obviously it doesn't say which direction it is when 'direction' is not set to 'both'.
So which direction is which?
How would a rule without a direction make sense?
I would argue that it's not necessarily about direction but about excluding other interfaces --- like the ones for various VLANs and VPN interfaces --- from the rule. After all, traffic going out of one interface is traffic going into another, while other traffic leaving the same interface towards other interfaces shouldn't have the rule applied to it. Does that happen? That would make it hilariously complicated to set everything up if you want to do traffic shaping for a bunch of interfaces instead of only two.
Like how do you know what the maximum bandwidth between a LAN interface and a VLAN interface on a different physical network card is? And what is the bandwidth when the interfaces are on the same physical network card?
It's just so that in this case, my ISP is suggesting to set bandwidth limits for upload and download, so I thought I'd play with it and see if it makes a difference. So far it doesn't.
It seems obvious to me that "in" is like "in" on firewall rules for the first interface. So you would only need "in" and "out" rules for WAN with any/any as src/dst with no second interface at all if you do not want it to be significant.
You normally only want to limit in and out directions for WAN, for which the limit exists in the first place and whose bandwidth you know. That is no different than the documentation example. The sole thing that did not exist back when that was written is the in/out distinction and the second interface, so netmasks were used instead.
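Since the OPNsense shaper is built on ipfw/dummynet, a pair of address-less WAN rules roughly boils down to something like the following. This is only a sketch; the interface name `igc0` and the bandwidth figures are made-up examples, not anything from the docs:

```shell
# Sketch of what two address-less WAN shaper rules correspond to in
# ipfw/dummynet terms. Interface name (igc0) and bandwidths are made up.
ipfw pipe 1 config bw 95Mbit/s    # download pipe
ipfw pipe 2 config bw 35Mbit/s    # upload pipe

# "in"  = packets received on the WAN interface (download),
# "out" = packets transmitted on it (upload); src/dst are any/any.
ipfw add 100 pipe 1 ip from any to any in recv igc0
ipfw add 200 pipe 2 ip from any to any out xmit igc0
```

With `recv`/`xmit` pinning the direction to the interface, no source or destination subnets are needed at all.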
While you could use the same approach for different VLANs as well, it is more like a bandwidth restriction for specific VLANs then, and yes, if you want that, you will have to set it up.
Apart from that: what are you trying to achieve? Fair distribution amongst your clients or bufferbloat optimization because of asymmetric up/downstream? For the latter, there is another documentation section that has most recently been reworked and that does actually work.
Unless you have clients that are far more powerful than others, I would expect this distribution not to do anything much visible. For bufferbloat issues, that is another story.
Quote from: meyergru on October 09, 2025, 08:34:03 PMIt seems obvious to me that "in" is like "in" on firewall rules for the first interface.
That isn't at all obvious to me. It could be 'in to the WAN interface (1st interface) out of the LAN interface (2nd interface, or its network if an IP address is used instead of an interface)'. It could also be 'out of the 1st interface to the internet' or 'out of the 2nd interface in to the 1st interface'. Or if I don't specify a 2nd interface or a network, it could mean 'in to the 1st interface from anywhere' or 'out of the 1st interface (i.e. to the internet in this case, or to any interface)' or whatever. It's entirely unclear to me.
QuoteSo you would only need "in" and "out" rules for WAN with any/any as src/dst with no second interface at all if you do not want it to be significant.
Is that even possible? For a rule that is supposed to limit the upload bandwidth to the ISP (internet), it would make sense because a rule limited to the LAN interface wouldn't limit the other interfaces. The guide seems to require an IP address ...
QuoteYou normally only want to limit in and out directions for WAN, for which the limit exists in the first place and you know what bandwidth it has.
If I have a guest network, for example, I might want to use different limits for that. In that case I'd have to set up rules which, for example, allow only 20% of the upload bandwidth for the guest network and 80% for the LAN. When you have a bunch of interfaces you might want to limit, like a VoIP interface for a VoIP VLAN and some others, that gets tedious to set up.
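For what it's worth, in dummynet terms that kind of percentage split is usually expressed with weighted queues sharing one pipe rather than separate hard limits. A rough sketch, where the subnets, the interface name and the 40 Mbit/s figure are all made up:

```shell
# One shared upload pipe with weighted queues instead of hard
# per-interface limits.
ipfw pipe 1 config bw 40Mbit/s
ipfw queue 1 config pipe 1 weight 80    # LAN: ~80% under contention
ipfw queue 2 config pipe 1 weight 20    # guest: ~20% under contention
ipfw add 100 queue 1 ip from 192.168.1.0/24 to any out xmit igc0
ipfw add 200 queue 2 ip from 192.168.20.0/24 to any out xmit igc0
```

The weights only matter when both networks compete; when the guest network is idle, the LAN can still use the full pipe, which is usually nicer than a hard 20/80 split.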
QuoteThat is no different that with the documentation example.
The difference would be that all traffic (from all interfaces) that goes out to the internet rather than only the traffic from the LAN interface would be limited. The latter is what the guide does. I actually should use a rule that limits all traffic from all interfaces that wants to go out to the internet.
QuoteThe sole thing that did not exist back when that was written is the in/out distinction and the second interface, such that nextmasks were used instead.
That was a great improvement.
QuoteWhile you could use the same approach for different VLANs as well, it is more like a bandwidth restriction for specific VLANs then, and yes, if you want that, you will have to set it up.
right
QuoteApart from that: what are you trying to achieve? Fair distribution amongst your clients
I'm simply trying out the feature after I upgraded my connection to a bit more bandwidth, and my ISP suggested setting specific limits for upload and download. It never hurts to learn something, and I'm curious if it would make any difference.
From what I've seen so far, OPNsense does a great job of distributing the bandwidth between the clients by default without any extra traffic shaping.
Also, I'm not a fan of traffic shaping. The problem is that you can't really accomplish anything other than dropping packets. Having to drop packets because you don't have enough bandwidth doesn't improve anything, because packets inevitably get dropped anyway. You can decide which packets to drop, but when you don't have enough bandwidth, the line is still saturated, and the solution is to get more bandwidth. --- There may be cases in which you might be able to achieve improvements because you have better control over the traffic, like through the switches on your own network, but for your internet connection, the bottleneck is out of your control, and traffic shaping is pointless.
Quoteor bufferbloat optimization because of asymmetric up/downstream? For the latter, there is another documentation section that has most recently been reworked and that does actually work.
What does it do?
QuoteUnless you have clients that are far more powerful than others, I would expect this distribution not to do anything much visible. For bufferbloat issues, that is another story.
I don't expect it to do anything, see above. But the ISP must have some reasons to tell customers to use certain bandwidth limitations, and it's a learning experience for me to try it out, and I'm curious to see what happens.
So what I'm trying to accomplish is to limit upload and download to the numbers my ISP gave me, for both IPv4 and IPv6. But there isn't even an option to set up a rule for both IPv4 and IPv6 at the same time, which I guess would be required. And since I can't get a static IPv6 prefix, it seems I'm limited to using interfaces instead of networks when setting up rules.
Since there is no option to make a rule for both IPv4 and IPv6 at the same time, how am I supposed to set up a limit? Do I just make two rules each for each IPv4 and IPv6 and hope that the pipe will still take care of the limit?
Now let's see if I can just use a single interface for a rule ...
Hm, what does 'ip' mean in a rule? Does that mean both IPv4 and IPv6 or something else?
When I make a rule for upload with only the WAN interface, what does the direction mean? I don't want to limit traffic that goes out of the WAN interface into the router but only to the ISP (internet).
And here we go: it won't let me apply the rules when I specify neither an IP address nor a 2nd interface. It only says "Error reconfiguring service. 200".
Why do I need queues when I can use a pipe as target for the rule?
PS: Apparently I got logged out of OPNsense, which is why it wouldn't let me apply the rules.
However, I can't make a rule without specifying an IP address. What's the 2nd interface for then?
And that takes me back to the question: How do I specify an IPv6 address in the rule when I don't have a static prefix?
Quote from: defaultuserfoo on October 09, 2025, 11:58:20 PMI don't expect it to do anything, see above. But the ISP must have some reasons to tell customers to use certain bandwidth limitations, and it's a learning experience for me to try it out, and I'm curious to see what happens.
I assume that fighting bufferbloat is their aim and I am 99% sure that this is what your ISP refers to, because they mention limits. It happens to be what most people - including me - use traffic shaping for. Read this (https://forum.opnsense.org/index.php?topic=42985.0), point 26, for an explanation.
Quote from: meyergru on October 10, 2025, 08:06:31 AMQuote from: defaultuserfoo on October 09, 2025, 11:58:20 PMI don't expect it to do anything, see above. But the ISP must have some reasons to tell customers to use certain bandwidth limitations, and it's a learning experience for me to try it out, and I'm curious to see what happens.
I assume that fighting bufferbloat is their aim and I am 99% sure that this is what your ISP refers to, because they mention limits. It happens to be what most people - including me - use traffic shaping for. Read this (https://forum.opnsense.org/index.php?topic=42985.0), point 26, for an explanation.
You're probably right about that. I downloaded a 7.9 GB video today and was watching the pretty graphs and could see the bandwidth usage go up and down. So it may be a good idea to use traffic shaping.
But how could I get that to work without specifying IPs in the rules? I tried that, and the window to edit the rule kept saying that an IP address is required, no matter whether I was setting a second interface or not. So I removed all the pipes, queues and rules yesterday, because I didn't want a broken setup getting in the way when it doesn't work anyway --- I can't specify an IPv6 address range.
I didn't want to follow https://forum.opnsense.org/index.php?topic=46990.0 because I'm not seeing packet loss and would rather start with a simple, basic approach and go from there. If there is packet loss, I can always 'upgrade'.
I do not understand what your problem is. If you follow the docs for bufferbloat and control plane priorization, you do not need any IP addresses (i.e. you put "any" where you would have an IP in the rules, which is literally what the docs show).
As @meyergru said.
Both of those guides were tested and are used by several users. So following them should yield proper results.
One comment; do not use the Interface2 option in the Shaper Rules. It is only there for more granular control. In 99% of all use cases you do not need it; you only need Interface1 with the proper direction relative to the Pipe/Queue config.
Regards,
S.
Quote from: meyergru on October 10, 2025, 06:06:59 PMI do not understand what your problem is. If you follow the docs for bufferbloat and control plane priorization, you do not need any IP addresses (i.e. you put "any" where you would have an IP in the rules, which is literally what the docs show).
I just left the entry field blank and it kept saying I need to specify an IP address. There is no indication that you have to put 'any'. The GUI should just assume 'any' when the field is left blank. Otherwise the users have to assume that they can't set up a rule without an address.
Quote from: Seimus on October 11, 2025, 01:11:58 PMAs @meyergru said.
Both of those guides were tested and are used by several users. So following them should yield proper results.
One comment; do not use the Interface2 option in the Shaper Rules. It is only there for more granular control. In 99% of all use cases you do not need it; you only need Interface1 with the proper direction relative to the Pipe/Queue config.
Regards,
S.
Well, I followed https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html and am not really happy with the results. Basically, according to speedtest.net, I'm getting a little less bandwidth than I should when I'm using the numbers for the bandwidth I got from the ISP. When I increase the numbers, I get more bandwidth. I can't really tell what the latency is, though.
When I use way lower numbers (like half the bandwidth I have) for the pipes, then I'm not getting as much bandwidth as I'd expect, but less than the numbers would allow.
Does that mean that something isn't working right or is this to be expected?
That is expected.
When you try to control bufferbloat, even if you set your full BW, for example 100 Mbit as your contracted speed, you will never get 100 Mbit. FQ_C takes some percentage of the total throughput (I think 5% by default).
The point of all of this is to have control over the possible congestion; prevention, handling & management.
To achieve this in the first place, you need to take control away from the ISP. This is done by limiting the BW so as not to trigger ISP-based queue management, which is usually terrible.
The tradeoff is lower throughput; the benefit is usually total mitigation of, and/or control over, latency during congestion.
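In other words, you deliberately configure the pipe a few percent below the contracted rate so the queue builds up on your router instead of at the ISP. A quick back-of-the-envelope calculation; the 5% margin is just the commonly suggested starting point, and the 100/40 Mbit/s contract is a made-up example:

```python
def shaper_target(contracted_mbit: float, margin: float = 0.05) -> float:
    """Pipe bandwidth to configure so that the router, not the ISP,
    becomes the bottleneck where the queue builds up."""
    return contracted_mbit * (1.0 - margin)

# Hypothetical 100/40 Mbit/s contract with a 5% margin:
download_pipe = shaper_target(100)  # -> 95.0 Mbit/s
upload_pipe = shaper_target(40)     # -> 38.0 Mbit/s
```

In practice you then nudge these numbers up or down while watching a bufferbloat test, since the contracted rate and the achievable rate rarely match exactly.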
P.S. The docs contain pictures with examples; one of them is for the Ookla speed test and shows where to look for the latency during load.
Regards,
S.
As said, speed suffers a little because of the reserve for control packets. To actually see the benefits of traffic shaping, you should try https://www.waveform.com/tools/bufferbloat or https://speed.cloudflare.com/ and compare the latency before and after applying these settings.
Quote from: Seimus on October 12, 2025, 02:40:25 AMThat is expected.
When you try to control bufferbloat, even if you set your full BW, for example 100 Mbit as your contracted speed, you will never get 100 Mbit. FQ_C takes some percentage of the total throughput (I think 5% by default).
I thought it would adjust to the numbers I'm giving it, and I'd use appropriate numbers.
QuoteThe point of all of this is to have control over the possible congestion; prevention, handling & management.
To achieve this in the first place, you need to take control away from the ISP. This is done by limiting the BW so as not to trigger ISP-based queue management, which is usually terrible.
Why is the limiting by the ISP so terrible?
QuoteThe tradeoff is lower throughput; the benefit is usually total mitigation of, and/or control over, latency during congestion.
P.S. The docs contain pictures with examples; one of them is for the Ookla speed test and shows where to look for the latency during load.
During that speedtest, the numbers for the latency keep changing all the time, so I can't really tell what it is. Upload latency seems to be much lower than the download latency.
When I set the pipe numbers to half the bandwidths I'm supposed to have, bandwidth is lower and the latency seems the same. When I set the numbers to double the bandwidth, latency is about half and I'm getting about 10% more bandwidth than I'm supposed to get. When I set the numbers to 10% over what I'm supposed to get --- and that is more than the limits the ISP suggests --- I'm getting better bandwidth and lower latency. I can only guess that shaping on the router doesn't kick in because I'm not reaching the bandwidth. So I guess I could just as well delete all the settings, because they don't give any measurable benefit and only lower the usable bandwidth.
So what is the point of this traffic shaping? It seems to only lower the bandwidth I'm getting and to increase the latency. This doesn't make sense.
What do you suggest? Should I just delete the settings?
BTW, there's one difference with no traffic shaping: at the start of the test, download bandwidth may spike up to about over 2.5 times of what I'm supposed to get before it goes down. With traffic shaping, that doesn't happen. But does it even matter?
Quote from: meyergru on October 12, 2025, 09:05:06 AMAs said, speed suffers a little because of the reserve for control packets. To actually see the benefits of traffic shaping, you should try https://www.waveform.com/tools/bufferbloat or https://speed.cloudflare.com/ and compare the latency before and after applying these settings.
The first one gets stuck at "Warming up" in the "Active" part. The second one only produces a message "Application error: a client-side exception has occurred (see the browser console for more information).". They need to fix their tests ...
I doubt that. I had the same problem, when my MTU settings were wrong and once more, when I used the traffic shaper and found that sometimes, you need a reboot to apply the traffic shaper settings correctly. That was discussed here: https://forum.opnsense.org/index.php?topic=46990.0, but I thought the problem had been fixed in the meantime.
If the tests fail for you, then something is wrong in your setup. Maybe you use ad blockers that interfere with the tests on those pages.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMWhy is the limiting by the ISP so terrible?
Ask your ISP.
ISPs usually don't care to accommodate possible congestion in their networks. The bufferbloat community developed the LibreQoS platform, which is targeted at ISPs; they try to get ISPs to take advantage of it to mitigate bufferbloat in their networks. But once again, most ISPs do not care.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMDuring that speedtest, the numbers for the latency keep changing all the time, so I can't really tell what it is. Upload latency seems to be much lower than the download latency.
You have to check the latency number after the test is done if you use the Ookla speed test... But just go with the other two test sites; they will tell you more precisely.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMWhen I set the pipe numbers to half the bandwidths I'm supposed to have, bandwidth is lower and the latency seems the same. When I set the numbers to double the bandwidth, latency is about half and I'm getting about 10% more bandwidth than I'm supposed to get. When I set the numbers to 10% over what I'm supposed to get --- and that is more than the limits the ISP suggests --- I'm getting better bandwidth and lower latency.
This doesn't make any sense at all. Plus, given the way you previously described checking for latency during load, I am honestly not sure you even get proper readings in the first place.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMI can only guess that shaping on the router doesn't kick in because I'm not reaching the bandwidth. So I guess I could just as well delete all the settings, because they don't give any measurable benefit and only lower the usable bandwidth.
FQ_C is an active AQM; even if you are not reaching the "BW", it measures the time each packet spends within the flow/queue. Once again, I am honestly not sure you even get proper readings in the first place.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMSo what is the point of this traffic shaping? It seems to only lower the bandwidth I'm getting and to increase the latency. This doesn't make sense.
This is just not true. If you are getting worse latency with FQ_C, most likely you are doing something wrong, or you are not properly reading the test results.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMBTW, there's one difference with no traffic shaping: at the start of the test, download bandwidth may spike up to about over 2.5 times of what I'm supposed to get before it goes down. With traffic shaping, that doesn't happen. But does it even matter?
It does matter; that latency bump you see at the start is caused by a burst of traffic. This causes a lot of problems with the start of transmissions and application startups.
Quote from: defaultuserfoo on October 12, 2025, 02:13:37 PMWhat do you suggest? Should I just delete the settings?
Read the documentation and properly read the outputs of the tests, because from what I have read so far, I am really not sure you are reading the test outputs correctly.
Regards,
S.
As far as I can tell, the latency shown by the ookla speedtest shows the last value that showed up while the test was running. So that value is some more or less random number and not even an average. I'd expect the test to run for a couple weeks at least to provide a good average and meaningful results.
So no, I don't think I'm reading the output incorrectly.
Why should there be a problem with latency at the start of the test when no traffic shaping is in place on the router? It's not like the transmitted data would arrive later because the buffers are full. A single packet may arrive later when a buffer is full, but if traffic shaping were in place, such a packet would either be dropped and later retransmitted, and/or it would be sent later. So the latency would be neither higher nor lower.
What would explain that there would be higher latency in regards to the data that is transmitted without traffic shaping? When the transfer is sustained, the bandwidth settles at the limit anyway.
I can imagine that certain types of (i.e. marked) packets could be made to experience less latency by giving them preference over other packets, but the basic traffic shaping that limits the usage of the overall bandwidth doesn't do that -- or at least I didn't configure any such preferences myself.
Now, that is not the way it works. Of course you are seeing changing numbers without traffic shaping. That is exactly the pumping effect that traffic shaping tries to avoid.
By dropping packets earlier via traffic shaping, you can avoid the pumping effect over high-BDP (bandwidth * delay product) links. If the other side does not get notified early about the buffer being overrun, it will pump data into the pipe which will subsequently get discarded. Then TCP relies on the retry mechanism to recover the dropped packets. Over a high-latency connection, it simply takes too long to accommodate for small buffers at the target.
Thus, it is better if the target (i.e.: you) signals early when it gets overwhelmed by the amount of data the source (i.e.: any website you open) can pump into its uplink, which is often 10x or more than what your downlink can chew.
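To put numbers on that: the bandwidth-delay product is how much data is in flight before any congestion signal can arrive, and a full buffer adds its drain time directly to every packet's latency. A small illustration; the 100 Mbit/s link, 20 ms RTT and 1 MB buffer are example figures, not measurements:

```python
def bdp_bytes(bandwidth_mbit: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight during one round trip."""
    return bandwidth_mbit * 1e6 / 8 * (rtt_ms / 1000)

def buffer_delay_ms(buffer_bytes: float, bandwidth_mbit: float) -> float:
    """Extra latency added by a full buffer draining at link speed."""
    return buffer_bytes * 8 / (bandwidth_mbit * 1e6) * 1000

# 100 Mbit/s link, 20 ms RTT: roughly 250 kB in flight per round trip.
in_flight = bdp_bytes(100, 20)
# A full 1 MB buffer upstream adds about 80 ms to every packet's latency.
bloat = buffer_delay_ms(1_000_000, 100)
```

That ~80 ms is the "bufferbloat" the shaper tries to avoid: by keeping its own queue short and signalling congestion early, the sender backs off before the big upstream buffer ever fills.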
Quote from: meyergru on November 02, 2025, 09:16:47 PMNow, that is not the way it works. Of course you are seeing changing numbers without traffic shaping. That is exactly the pumping effect that traffic shaping tries to avoid.
Are you referring to the bandwidth or to the latency? Bandwidth settles in after the spike at the start. The latency continues to go up and down during the test no matter what. When the bandwidth usage is over, the last number that showed up during the test continues to be displayed.
QuoteBy dropping packets earlier via traffic shaping, you can avoid the pumping effect over high-BDP (bandwidth * delay product) links. If the other side does not get notified early about the buffer being overrun, it will pump data into the pipe which will subsequently get discarded. Then TCP relies on the retry mechanism to recover the dropped packets. Over a high-latency connection, it simply takes too long to accommodate for small buffers at the target.
How does the other side get notified about the buffers being overrun? The traffic shaping on my router doesn't have any information about the conditions of the buffers that may be between the router and the target and thus has no way to inform the target about their conditions.
What does it matter where the packets are being dropped? One way or another, some are being dropped somewhere.
QuoteThus, it is better if the target (i.e.: you) signals early when it it gets overwhelmed by the amount of data the source (i.e.: any website you open) can pump into its uplink, which is often 10x or more than what your downlink can chew.
My router doesn't get overwhelmed. The bandwidth is limited somewhere else before it ever reaches my router. The ISP allows only so much bandwidth to go through my connection, and my router doesn't have any influence over that. Same goes for upstream, it doesn't overwhelm the router but the ISP limits it somewhere else.
The only thing my router can do is limit the bandwidth even further. And why would I want it to do that?
It still doesn't make sense.
I get why you are skeptical — it really sounds like "slowing yourself down on purpose" would not help. But the key is that traffic shaping is not about speed, it is about control.
Without shaping, your router sends and receives as fast as it can until something upstream (your ISP) gets overwhelmed. Packets then pile up in the ISP's queue, latency goes through the roof, and the sender (like a website or download server) does not notice the congestion until a packet finally gets dropped far away. That delay in feedback is what causes the up-and-down "pumping" effect you see.
When you enable shaping, your router makes sure your own queue is the first place that fills up — not the ISP's. It starts dropping or delaying packets slightly earlier, so the congestion signals (packet delay or loss) happen right at your end. TCP reacts much faster, keeping the link steady and latency low.
That is easy to see on the upload side, but it actually helps downloads too. TCP slow start and congestion control work based on the acknowledgments (ACKs) your system sends back upstream. If your upload queue gets bloated, those ACKs get delayed — which makes the sender think the link is slower than it really is, then it overshoots and backs off again. It is a mess of ramp-ups and stalls. By shaping your outgoing traffic, you keep those ACKs flowing smoothly, which makes your downloads smoother too.
And if you shape downstream directly (some routers can), you do the same thing on incoming packets: you hold them just before they hit your LAN instead of letting the ISP's buffer clog up. That way, you control the bottleneck and the TCP sender sees early feedback during slow start, preventing those massive bursts that cause latency spikes.
You can see this in action: Do a ping test or play an online game while maxing out your connection — watch the latency jump without shaping. Then turn TS on and do it again. Pings stay steady, downloads are smoother, and everything just feels "snappier". That is basically what most test sites proposed at bufferbloat.net do.
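Concretely, such a ping-under-load test could look like this from any machine on the LAN. The target 9.9.9.9 is an arbitrary choice; any stable, nearby host works:

```shell
# Baseline: latency on an idle line (9.9.9.9 is an arbitrary stable host;
# your ISP's first hop or any nearby anycast resolver works too).
ping -c 20 9.9.9.9

# Then saturate the link (start a large download or a speed test in
# parallel) and measure again while it runs:
ping -c 20 9.9.9.9

# Compare the avg/max round-trip times of the two runs. A large jump
# under load indicates bufferbloat; with shaping enabled, both runs
# should show similar numbers.
```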
So yes — it might sound like limiting yourself for no reason, but shaping actually moves the bottleneck to where you can control it. The physics of TCP feedback do not care where the limit is, just how quickly the sender learns about it. Also, some ISPs already do a good job of addressing this. If your measurements are fine without traffic shaping enabled, then leave it be.
Other than that, my suggestion is: Try it first, measure the pings, then argue. 😄
P.S.: Do you really think so many people would think about this, fine-tune it and/or use it, if it were useless?
Quote from: meyergru on November 03, 2025, 09:43:22 AMI get why you are skeptical — it really sounds like "slowing yourself down on purpose" would not help. But the key is that traffic shaping is not about speed, it is about control.
Without shaping, your router sends and receives as fast as it can until something upstream (your ISP) gets overwhelmed. Packets then pile up in the ISP's queue, latency goes through the roof, and the sender (like a website or download server) does not notice the congestion until a packet finally gets dropped far away. That delay in feedback is what causes the up-and-down "pumping" effect you see.
I'm not seeing a pumping effect. It can be expected that there's a spike at the start of the test before the parties involved settle on a bandwidth, and I'm not seeing any disadvantages from that. I do see a big advantage though, because I get about 10% more bandwidth. So I disabled it, and I'm not seeing any disadvantages either.
QuoteWhen you enable shaping, your router makes sure your own queue is the first place that fills up — not the ISP's. It starts dropping or delaying packets slightly earlier, so the congestion signals (packet delay or loss) happen right at your end. TCP reacts much faster, keeping the link steady and latency low.
I think you have a misconception about latency. Reducing the bandwidth through shaping only means that the packets get delayed at a different place, i.e. at the router. That means they are sent later and arrive later. Without traffic shaping, the packets are sent earlier and are then delayed at a different place, so they are sent earlier and still arrive later.
The outcome is the same. The disadvantage is that you cannot use the available bandwidth but lose about 10%. Why would I want that?
QuoteThat is easy to see on the upload side, but it actually helps downloads too.
Upload latency according to the speedtest is between 1 and 6ms. Traffic shaping has no influence on that. I have doubts that these values are correct, but that's what the test says.
QuoteTCP slow start and congestion control work based on the acknowledgments (ACKs) your system sends back upstream. If your upload queue gets bloated, those ACKs get delayed — which makes the sender think the link is slower than it really is, then it overshoots and backs off again. It is a mess of ramp-ups and stalls. By shaping your outgoing traffic, you keep those ACKs flowing smoothly, which makes your downloads smoother too.
That's not what the test shows. It shows a spike at the start and after that, it settles on a bandwidth. Acknowledging the packets seems to work fine. That is what is to be expected.
QuoteAnd if you shape downstream directly (some routers can), you do the same thing on incoming packets: you hold them just before they hit your LAN instead of letting the ISP's buffer clog up. That way, you control the bottleneck and the TCP sender sees early feedback during slow start, preventing those massive bursts that cause latency spikes.
I have not seen or experienced latency spikes with or without shaping.
QuoteYou can see this in action: Do a ping test or play an online game while maxing out your connection — watch the latency jump without shaping. Then turn TS on and do it again. Pings stay steady,
What do you suggest I ping?
Quotedownloads are smoother, and everything just feels "snappier". That is basically what most test sites proposed at bufferbloat.net do.
If anything, things are snappier with traffic shaping disabled.
QuoteSo yes — it might sound like limiting yourself for no reason, but shaping actually moves the bottleneck to where you can control it. The physics of TCP feedback do not care where the limit is, just how quickly the sender learns about it.
That's what I've been saying: the packets are delayed one way or another. When the bottleneck is closer to the sender, I can assume that the sender learns sooner where the limit is. Creating the bottleneck as far away from the sender as possible would be the worst setup.
After all the experimentation I've done, I still can only say that when you have issues that would seem to make traffic shaping useful, the issues don't go away with traffic shaping, and the only solution is to get more bandwidth.
QuoteAlso, some ISPs already do a good job of addressing this. If your measurements are fine without traffic shaping enabled, then leave it be.
This one suggests I enable traffic shaping. They didn't say that before I upgraded to more bandwidth. I wouldn't have bothered with it if they hadn't suggested it and if I hadn't been curious.
QuoteOther than that, my suggestion is: Try it first, measure the pings, then argue. 😄
I'll try that, but what should I ping?
I'm curious about it. In practice, it won't make a difference because I'm the only user of this connection, so there's no one else who would use all the bandwidth and cause delays for me. Reducing the usable bandwidth for myself is more disadvantageous than anything else.
QuoteP.S.: Do you really think so many people would think about this, fine-tune it and/or use it, if it were useless?
People say lots of things. So far, nobody has actually shown me any benefit of traffic shaping. My own experiments have never shown any benefit of traffic shaping either.
I've even tried it with a 348Kbit connection. You would think that traffic shaping makes all the difference there, but no, it doesn't. The congestion on a connection with so little bandwidth overwhelms any traffic shaping, and such a connection isn't really usable nowadays, shaping or not.
I did not state that anywhere, did I? Initially, I responded to a question that asked about something that was quite obvious (to me) from the context; then the thread went on to discuss what most people try to achieve with traffic shaping and how it is done these days. If you do not need it (maybe because your ISP does fine without it), do not use it.
I simply object to people stating: TS cannot work because "this is the way I understand it", when several other people say that it helped in their situation. And I see why that is - but I ain't gonna argue about this topic any more.
Anyone can take anyone's advice or leave it be - frankly, I do not care all that much.
P.S.: I am less concerned about convincing anybody since I learned what survivorship bias (https://en.wikipedia.org/wiki/Survivorship_bias) is. I think the fact that the rating system the forum once had is now gone is a bad thing (tm), because now people have to find out for themselves whose advice is good, but whatever... a great mind in this forum often puts it this way: "You do you".
Why would this survivorship bias only consider the overlooked failures and not overlooked successes and everything else? That doesn't make any sense.
IIUC you're now saying that traffic shaping may randomly have benefits (because some ppl claim that there are benefits, but obviously no one is able to show the benefits in practice) --- or not. All the while its disadvantages remain excluded from the consideration.
This means that everyone needs to do their own testing and decide if they think it has benefits or not, and they can claim whatever they want. That's a kind of mysticism, and it doesn't provide valid answers.
You are obviously wrong. See this for an example: https://forum.opnsense.org/index.php?topic=49509.msg251434#msg251434
And to be crystal clear what I mean by "survivorship bias": For a long time, I was under the false impression that this forum was a means to have discussions among experts about advanced networking topics. I found out that this was actually the smaller part of posts. When I told a friend that I was disappointed that people come into the forum asking questions that could have been answered with a quick forum search, he said: "That is because of survivorship bias. What you see in most forums is people who were neither capable of doing Google searches nor of asking ChatGPT for an answer, but rather think they are entitled to be held by the hand and led through tough technical topics - all along complaining about why the product does not do everything by itself."
Thus, I conclude that there are:
10% of experts who can actually help with problems because they have an understanding of how networking works.
80% of "survivors" who - after they have nowhere else to go - come in here and ask questions that are already answered in the docs and/or tutorials.
10% of people who think they are smarter than the experts and do everything "their own way" or question everything without good reason.
I do not mean: Do your own testing, like you seem to imply. What I do say is: It is anybody's choice to take advice from anybody. The problem is that there is no way for the 80% to tell who can offer good advice (the first 10%) and who is the last 10%. This was easier when we had a rating system, where you could easily tell when somebody had 3000 posts and 1500 thanks while somebody else had 1000 posts and only 10 thanks.
Also, note that I do not point out who belongs to which category. Just my 2 cents. I will not argue any further on this topic.
Sorry, but I have to correct this, as these things plague the enterprise world as well.
Quote from: defaultuserfoo on November 04, 2025, 02:31:07 AMAfter all the experimentation I've done, I can still only say that when you have issues that would seem to make traffic shaping useful, the issues don't go away with traffic shaping, and the only solution is to get more bandwidth.
No. Getting more BW is not the solution; it is a band-aid fix. There are and will be cases where, no matter how much BW you have, you will still be affected. The proper solution is to have correct and proper queue management disciplines with proper BW sizing.
Quote from: defaultuserfoo on November 04, 2025, 10:37:17 PMIIUC you're now saying that traffic shaping may randomly have benefits (because some ppl claim that there are benefits, but obviously no one is able to show the benefits in practice) --- or not. All the while its disadvantages remain excluded from the consideration.
This means that everyone needs to do their own testing and decide if they think it has benefits or not, and they can claim whatever they want. That's a kind of mysticism, and it doesn't provide valid answers.
These are not random benefits; they are benefits tied to the specific implementation. Disadvantages are not excluded; they are considered within that specific implementation.
There is nothing mystic, magic or made up about using QoS/Shaper with queue management. All of the algorithms were tested; there are papers and RFCs with test results. You can go and read the RFCs or look up the white papers.
Regards,
S.
Quote from: meyergru on November 04, 2025, 11:35:10 PMYou are obviously wrong. See this for an example: https://forum.opnsense.org/index.php?topic=49509.msg251434#msg251434
I'm not wrong. The graphics you point to seem inconclusive. They look pretty similar, and the differences can be due to lots of other factors we don't know about. I'm getting differences between speed tests depending on which server is being used, and differences between tests when the same server is being used, with or without traffic shaping. The only effect of traffic shaping I can observe here is that it lowers the usable bandwidth, and that doesn't seem to be an advantage.
QuoteI do not mean: Do your own testing, like you seem to imply.
Perhaps I misunderstood you.
Do you suggest that ppl just follow someone's advice blindly without testing?
You do not implement shaping to achieve higher results in a throughput oriented speed test. That's not possible. If the provider's network or some interchange is not grossly oversubscribed you will max out your local link, anyway.
The intention of shaping is to be able to do a video conference while a speed test (or multiple bulk downloads for that matter) is running without getting flaky video or audio.
And you do sacrifice a little bit of your peak bandwidth for that, yes.
Kind regards,
Patrick
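The trade-off described in the post above --- capping the rate slightly below the link rate so the queue forms where you control it --- boils down to a token bucket, which is roughly what a shaper pipe does. A toy sketch; the rates, packet sizes and tick granularity are illustrative assumptions:

```python
# A toy token-bucket shaper - the basic mechanism behind shaper pipes.
# Rates, packet sizes and tick granularity are illustrative assumptions.

def shape(packet_sizes_bytes, rate_bps, tick_s=0.001):
    """Return the departure time (in seconds) of each packet."""
    tokens = 0.0   # credit in bits
    t = 0.0
    departures = []
    for size in packet_sizes_bytes:
        need = size * 8
        while tokens < need:          # wait until enough credit accrues
            t += tick_s
            tokens += rate_bps * tick_s
        tokens -= need
        departures.append(t)
    return departures

# Ten 1500-byte packets through a 1 Mbit/s pipe leave ~12 ms apart:
times = shape([1500] * 10, 1e6)
print(f"last packet departs after {times[-1] * 1000:.0f} ms")  # ~120 ms
```

Because the shaper releases packets at a steady pace, the queue builds in the router (where a smarter scheduler can reorder or drop) instead of in the modem or the ISP's gear.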
Quote from: Seimus on November 06, 2025, 02:54:55 PMSorry, but I have to correct this, as these things plague the enterprise world as well.
Quote from: defaultuserfoo on November 04, 2025, 02:31:07 AMAfter all the experimentation I've done, I can still only say that when you have issues that would seem to make traffic shaping useful, the issues don't go away with traffic shaping, and the only solution is to get more bandwidth.
No. Getting more BW is not the solution; it is a band-aid fix. There are and will be cases where, no matter how much BW you have, you will still be affected. The proper solution is to have correct and proper queue management disciplines with proper BW sizing.
By saying that you need 'proper BW sizing' you're implying that you may need more bandwidth because traffic shaping doesn't help when you don't have enough bandwidth. So we don't disagree other than that I have never actually seen any effects of traffic shaping that would make it worthwhile to use it.
QuoteQuote from: defaultuserfoo on November 04, 2025, 10:37:17 PMIIUC you're now saying that traffic shaping may randomly have benefits (because some ppl claim that there are benefits, but obviously no one is able to show the benefits in practice) --- or not. All the while its disadvantages remain excluded from the consideration.
This means that everyone needs to do their own testing and decide if they think it has benefits or not, and they can claim whatever they want. That's a kind of mysticism, and it doesn't provide valid answers.
These are not random benefits; they are benefits tied to the specific implementation. Disadvantages are not excluded; they are considered within that specific implementation.
There is nothing mystic, magic or made up about using QoS/Shaper with queue management. All of the algorithms were tested; there are papers and RFCs with test results. You can go and read the RFCs or look up the white papers.
Regards,
S.
I don't doubt that some implementation of traffic shaping does what it is supposed to do. That can be tested in a lab environment.
What I'm not seeing is a benefit in practice, outside some lab environment, with internet connections you can get from some ISP. I'm also not saying that there can't be benefits in some cases; only that I've never come across such a case.
And as long as I don't see any benefit from traffic shaping I don't see why I should bother to use it.
Quote from: Patrick M. Hausen on November 11, 2025, 04:38:13 PMYou do not implement shaping to achieve higher results in a throughput oriented speed test. That's not possible. If the provider's network or some interchange is not grossly oversubscribed you will max out your local link, anyway.
The intention of shaping is to be able to do a video conference while a speed test (or multiple bulk downloads for that matter) is running without getting flaky video or audio.
And you do sacrifice a little bit of your peak bandwidth for that, yes.
Kind regards,
Patrick
I agree. That's an example of one of the cases where traffic shaping may have benefits; only I never had such a case. Should I come across one, traffic shaping would probably be the first thing I'd try.
Quote from: defaultuserfoo on November 11, 2025, 04:43:18 PMBy saying that you need 'proper BW sizing' you're implying that you may need more bandwidth because traffic shaping doesn't help when you don't have enough bandwidth. So we don't disagree other than that I have never actually seen any effects of traffic shaping that would make it worthwhile to use it.
No. I am not implying any such thing. You are just cherry-picking without understanding the context.
Shaping and QoS as such are there to manage and handle states of congestion. Of course, if you are constantly saturated, then an increase in BW is needed. Yet Shaping/QoS still helps a lot even in such a case to keep the congestion in check, e.g. by not letting the latency go haywire and by preventing a particular stream/application from eating into others.
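The "one stream eating into others" problem is what flow (fair) queuing addresses. A toy sketch of round-robin scheduling between per-flow queues, the basic idea behind FQ-CoDel's flow separation; packet sizes are ignored for simplicity:

```python
from collections import deque

# Toy flow ("fair") queuing: round-robin between per-flow queues, the idea
# behind FQ-CoDel's flow separation. Packet sizes ignored for simplicity.

def fair_order(flows):
    """Interleave flows round-robin instead of first-come-first-served."""
    queues = [deque(f) for f in flows]
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

bulk = [f"bulk{i}" for i in range(6)]   # a download that arrived first
voip = ["voip0", "voip1"]               # a small real-time flow behind it
# With plain FIFO, both voip packets would wait behind all six bulk packets.
# With round-robin they get through almost immediately:
print(fair_order([bulk, voip])[:4])     # ['bulk0', 'voip0', 'bulk1', 'voip1']
```

This is why a small interactive flow stays responsive under a scheduler like this even while a bulk transfer saturates the link.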
Quote from: defaultuserfoo on November 11, 2025, 04:43:18 PMI don't doubt that some implementation of traffic shaping does what it is supposed to do. That can be tested in a lab environment.
What I'm not seeing is a benefit in practice, outside some lab environment, with internet connections you can get from some ISP. I'm also not saying that there can't be benefits in some cases; only that I've never come across such a case.
And as long as I don't see any benefit from traffic shaping I don't see why I should bother to use it.
This is funny, because CoDel, FQ-CoDel and CAKE in particular were not only tested in a lab environment but also on asymmetrical Internet circuits. The whole point of these algorithms is to deal with bufferbloat (latency) in exactly such a use case. Furthermore, LibreQoS is a deployment for ISPs to handle bufferbloat (latency) in their networks at grand scale.
If you don't see any benefit when using it, well, I guess nice! Most likely your ISP has enough capacity or is properly handling bufferbloat in the background. Yet this is your experience and use case; it doesn't cover everyone else who has problems with bufferbloat and latency.
If you don't see any benefit you are free not to use it, as already mentioned.
But stating that Shaper/QoS is useless is just nonsense.
Regards,
S.
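For reference, the dropping rule of the CoDel algorithm mentioned above can be sketched roughly as follows. This is heavily simplified: real CoDel (RFC 8289) also shortens the drop interval in proportion to the square root of the drop count, which is omitted here:

```python
# A heavily simplified sketch of CoDel's dropping rule (RFC 8289). Real
# CoDel also scales the interval with the drop count; omitted here.

TARGET_S = 0.005    # acceptable standing-queue delay (5 ms)
INTERVAL_S = 0.100  # grace period before dropping starts (100 ms)

def codel_drops(samples):
    """samples: (dequeue_time_s, sojourn_delay_s) per packet. Drop count."""
    first_above = None
    drops = 0
    for now, delay in samples:
        if delay < TARGET_S:
            first_above = None            # queue drained; reset the timer
        elif first_above is None:
            first_above = now             # delay just crossed the target
        elif now - first_above >= INTERVAL_S:
            drops += 1                    # persistent standing queue: drop
    return drops

# Delay stuck at 20 ms for 200 ms straight -> drops begin after 100 ms:
samples = [(t / 1000, 0.020) for t in range(0, 200, 10)]
print(codel_drops(samples))  # 10
```

The key property is that short bursts (delay spikes that drain within the interval) are tolerated, while a standing queue triggers drops that signal the TCP sender to back off.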