Reflection for port forwards should be enabled in this case, I think, and you might (on this I am not certain) also need to check whether the force gateway option has to be disabled.
"2025-12-06T12:42:00 Error firewall alias resolve error IP_PublicDNS (error fetching alias url https://raw.githubusercontent.com/jpgpi250/piholemanual/master/DOHipv4.txt)"
So I had missed that alias failing to update, and I can see why.
Quote
Yes, I'm keeping the list on a remote server. Firewall: Aliases has a rule (a URL Table (IPs) alias) that checks the remote black list for updates every 60 seconds. From that alias I have a floating rule that does the actual restriction on the network.
Before the update, if I wanted to restrict an IP, I just had to add it to the black list on the remote server, and Firewall: Aliases fetched the list automatically and blocked the new IPs.
Now this doesn't work anymore. To block an IP I have to go to Firewall: Diagnostics: States, find where the new IP or IPs are, and manually drop the states. Only then does the block actually come into force.
It is impossible to tell why "this does not work anymore"; your mechanism for fetching the list is, I imagine, the alias automation on OPNsense, but the content might not be "correct".
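If it helps narrow it down, here is a rough sketch of what I would check from the firewall shell. The table name IP_PublicDNS is taken from the log line above (adjust it if your alias is named differently), and 192.0.2.10 is just a placeholder address:

# check that the alias URL is reachable from the firewall itself
curl -sI https://raw.githubusercontent.com/jpgpi250/piholemanual/master/DOHipv4.txt | head -n 1
# inspect what the alias actually resolved to in pf
pfctl -t IP_PublicDNS -T show
# kill existing states created *from* a newly blacklisted address
pfctl -k 192.0.2.10
# kill existing states going *to* that address
pfctl -k 0.0.0.0/0 -k 192.0.2.10

Keep in mind that an already established state is not affected by a newly added block rule until it is cleared, which would explain why the block only takes effect after you drop the states manually.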
Quote from: meyergru on December 02, 2025, 11:08:38 PM
Maybe that is due to the TCP congestion algorithm used. You can change it in Windows; I think under Win10 it was BBR2, but that had some problems, so they reverted to CUBIC for Win11.
With Linux, you can easily change it via sysctl. These are the values I use:
net.core.rmem_default = 2048000
net.core.wmem_default = 2048000
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 1024000 33554432
net.ipv4.tcp_wmem = 4096 1024000 33554432
# don't cache ssthresh from previous connection
#net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_adv_win_scale = 5
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 30000
# for 10 GigE, use this
# net.core.netdev_max_backlog = 30000
net.ipv4.tcp_syncookies = 1
# Enable BBR for Kernel >= 4.9
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
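For reference, a quick way to apply and sanity-check those values (assuming they were added to /etc/sysctl.conf or a drop-in under /etc/sysctl.d/):

# reload the settings (use "sysctl --system" if they live under /etc/sysctl.d/)
sysctl -p
# confirm the congestion control and qdisc actually took effect
sysctl net.ipv4.tcp_congestion_control net.core.default_qdisc
# bbr only appears here if the tcp_bbr module is available on the running kernel
sysctl net.ipv4.tcp_available_congestion_control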
Quote from: Seimus on December 02, 2025, 11:30:01 PM
Quote from: cookiemonster on December 02, 2025, 06:14:28 PM
Hey. I've been using a Windows laptop for testing the bufferbloat so far. Normally I use Linux but have needed to stay booted into Windows the last few days. It is connected to a Wi-Fi 6 (802.11ax) network using an Intel(R) Wi-Fi 6E AX210 160MHz adapter. Depending on location I can get as little as 480/721 Mbps aggregated link speed (receive/transmit), so I have a bottleneck there at times. The only wired connection is a PC that I can't get to most of the time.
For OPN's CPU I'm using an AMD Ryzen 5 5600U on Proxmox with two vCPUs. I just did a ubench run on it and it gives: Ubench Single CPU: 910759 (0.41s). So I think that is OK.
I've now reset the shaper to the docs defaults, this time on the upload side as well. I need to reboot (I had a limit and flows set on the pipe); I'll update the post.
HW should be okay to handle ZA + Shaper and that throughput.
But keep in mind the stuff about WiFi I mentioned above.
Regards,
S.
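If you want to double-check what the shaper actually ended up with after the reset and reboot, a rough sketch using the standard FreeBSD ipfw/dummynet tools from the OPNsense shell (the output will of course depend on your own pipe and queue setup):

# list the dummynet pipes the shaper configured (bandwidth, delay, scheduler type)
ipfw pipe show
# list the schedulers; fq_codel / fq_pie and their parameters show up here
ipfw sched show
# list the queues attached to the schedulers, with their weights
ipfw queue show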
Quote from: pfry on December 01, 2025, 08:18:47 PM
Is a downstream shaper (particularly a single queue) likely to have the effect you want? I used downstream shapers in the past, but my purpose was to control offered load by adding latency, using multiple queues on a CBQ shaper. I didn't bother after my link passed 10Mb; it did help at 6-10Mb.
You mean set it up as per the docs https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html ?
I'd think a simple fair queue with no shaper would be the best option for you. I don't know the best way to accomplish that - perhaps open the pipe beyond 520Mb/s (toward single-station LAN speed). I haven't looked at the fq-codel implementation in... a while. The one I recall used a flow hash, and you could set the number of bits (up to 16, I believe). It looks like the ipfw implementation has that limit (65536). I'd think more can't hurt - fewer (potential) collisions. I wouldn't expect any negatives, but you never can tell. PIE just sounds like a RED implementation - I can't see that it'd have much if any effect, as I wouldn't expect your queue depths/times to reach discard levels.
Of course, you could have upstream issues, at any point in the path.
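On the possibility of upstream issues: one rough way to see where in the path the latency builds up is to keep a per-hop latency probe running while the link is loaded. The target 1.1.1.1 and the iperf3 server are placeholders, so substitute whatever you normally test against:

# in one terminal: per-hop latency report over 60 cycles
mtr --report --report-cycles 60 1.1.1.1
# in another terminal at the same time: saturate the link
iperf3 -c iperf.example.net -t 60 -P 4

Where the latency starts to climb in the mtr report gives a hint whether the queueing is happening on your own link or somewhere further upstream.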