Messages - dza

#1
Conclusion: it does work, also without adding the link-local address. Just not from within the OPNsense shell via SSH with ping/curl, unless you bind to a specific source address. I can't remember the exact line I used to get this to work right now.
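
For reference, it was something along these lines - just a sketch with a placeholder address, not the exact commands I used (replace 2a01:xxxx::2 with your WAN's global IPv6 address):

# force the shell tools to use the WAN's global address as source
ping -6 -S 2a01:xxxx::2 -c 4 ipv6.google.com
curl -6 --interface 2a01:xxxx::2 https://test-ipv6.com/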

However, all clients get IPv6 and score 10/10 on test-ipv6.com. It seems kviknet/ewii have made a small change that allows it to work without this workaround.
#2
A few days ago I found myself without IPv6 again.

I initially thought it was the same renew bug. I tried removing the link-local address from the gateway, and now I actually got a gateway address from ewii/kviknet.

You still need the 'Advanced' tab settings as shown above. If I run into issues from now on, or return to the link-local gateway fix, I will update this thread.


Never mind, it was just a false/cached response from when I applied the settings.
#3
Quote from: kode54 on April 21, 2025, 10:14:18 AM
Thanks! You helped get my speeds back up to 1.1-1.2 Gbps with the pipe/queue/rule settings I was already using, but which 25.1.5_5 somehow broke, coming from 25.1.4_1.

Intel N5105, same configuration of 4x i226v ports.

You're welcome! Did you also need a separate scheduler pipe setup?
#4
CPU: Intel N100
Interfaces: 4x i226v

Tunables:
# Too high (3500+) would cause jitter or long connection establish delay
net.inet6.ip6.intr_queue_maxlen=3000
net.inet.ip.intr_queue_maxlen=3000
# Too high (16000+) would cause jitter or long connection establish delay
hw.igc.max_interrupt_rate=12000 # boot-time, needs reboot
net.inet.tcp.soreceive_stream=1 # boot-time
net.isr.maxthreads=-1 # boot-time
net.inet.rss.enabled=1 # boot-time
net.inet.rss.bits=2 # boot-time
net.isr.bindthreads=1 # boot-time
hw.igc.rx_process_limit=-1
net.isr.dispatch=direct
hw.ix.flow_control=0
dev.igc.0.fc=0
dev.igc.1.fc=0
dev.igc.2.fc=0
dev.igc.3.fc=0
dev.igc.4.fc=0
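
To sanity-check the runtime values after applying (a quick check, not required - the boot-time ones only show the new values after a reboot):

sysctl net.isr.dispatch net.inet.rss.enabled net.inet.rss.bits
sysctl dev.igc.0.fc dev.igc.1.fc dev.igc.2.fc dev.igc.3.fc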

My line is sold as 300 Mbps; both upload and download peak stably at 311 Mbps. It's a GPON/ONT setup.

I use an fq_codel pipe for WAN-Download with bandwidth=295 (Mbps). This is accompanied by a WAN-Download-Queue with weight=100 and a WAN-Download-Any-Rule attached to the WAN-Download-Queue.
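
In the same key=value notation as the upload pipe further down, the download side roughly looks like this (just a sketch of what I described above; the names are only my own labels):

WAN-Download (pipe):
bandwidth=295 # mbps
scheduler=fq_codel

WAN-Download-Queue:
pipe=WAN-Download
weight=100

WAN-Download-Any-Rule:
queue=WAN-Download-Queue
destination=$MY_SUBNETS_CIDR_AND_WAN_IP
direction=in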

Using fq_codel on the upload pipe only resulted in worse throughput and latency, so I assume the ISP is already doing some sort of shaping on the upload. I also believe this (potential shaping by the ISP) might be the reason I had to set the bandwidths to the theoretical speedtest maximum (311 Mbps) and NOT 80-85%. The line was unnaturally rock-stable at 311 Mbps, almost as if it targets that deliberately. Reducing the bandwidth limit in any way just made latency worse and over-compensated on the limit (a 280 Mbps limit would hit 180-200, for instance), which unfortunately led to a lot of trial and error.

Settings for my upload pipe:
bandwidth=311 # mbps
scheduler=qfq # should be more lightweight than wfq, didn't spot a difference.
enable_codel=y
codel_target=14
codel_interval=140

Read https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html#target-interval; don't just copy my settings. I also found that these settings (target + interval) did very little when used with only two fq_codel pipes without queues + rules. Only after I tried a different scheduler on the upload pipe did I finally make progress, and then these kicked into effect.
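
As a rough sanity check of my numbers against that guidance (assuming I read the target/interval rule of thumb there correctly - interval around the worst-case RTT, target at roughly 5-10% of the interval):

codel_interval=140 # ~10x my unloaded RTT of ~14ms
codel_target=14 # 10% of the interval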

For all the rules below I use
source=$MY_SUBNETS_CIDR_AND_WAN_IP + direction=out for upload pipe rules.
destination=$MY_SUBNETS_CIDR_AND_WAN_IP + direction=in for download pipe rules.

My upload pipe with the quick fair queueing (qfq) scheduler is combined with:
* WAN-Upload-ICMP-queue weight=100, WAN-Upload-ICMP-Rule
* WAN-Download-ICMP,DNS,NTP,DHCP-queue weight=100, WAN-Download-ICMP,DNS,NTP,DHCP-Rule.

Next, a catch-all set of down-prioritized queues/rules:
* WAN-Download-Rule weight=1
* WAN-Upload-Any-queue weight=1, WAN-Upload-Any-Rule
* WAN-Download-Any-queue weight=1, WAN-Download-Any-Rule

I have never had such fast internet and browsing, or such stable loaded latency.

I recommend and use https://speed.cloudflare.com as it is the most advanced in terms of both data and function. It tests both download and upload, with pauses in between, which makes for a really good real-world speedtest that highlights all the issues to fine-tune.

My results under load:
loaded latency: DL 9 ms / UL 9 ms; jitter: DL 0.684 ms / UL 2.94 ms

Flood-pinging under load (while qBittorrent is peaking at 30-35 MB/s across multiple torrents with 1200 connections):
ping -i0.002 -c1000 1.1.1.1

ping results (under load):

1000 packets transmitted, 1000 received, 0% packet loss, time 9069ms
rtt min/avg/max/mdev = 8.531/9.070/13.014/0.381 ms, pipe 2

9 ms average and 13 ms max, when the unloaded and unshaped RTT is 14 ms and qBittorrent is maxing out, is pretty darn good! Anyway, these are my findings after speedtesting and tuning for 3 days straight, lol. Enjoy!
#5
Quote from: franco on March 21, 2025, 08:30:00 AM
There was an upstream bug that was fixed in 25.1.3 WRT the potential to miss the route handed out by SLAAC...

https://github.com/opnsense/src/issues/242

Thanks for the relevant comment. I'm on OPNsense 25.1.3-amd64, but I still can't get a connection at all (only addressing), unless I enter the link-local address shown under "Interfaces -> Overview" into "IP Address" for WAN_DHCPv6 at "System -> Gateways". I don't think it's a missing route, because adding it does not change the output of
netstat -rn -f inet6
Neither does it change anything under
ifconfig igc0 inet6
by applying that link-local address as "IP Address" on the
System -> Gateways -> WAN_DHCPv6
So I'm unsure why this works - but the only other option is disabling the WAN_DHCPv6 gateway (which STILL allows IPv6 connections and addressing despite that), which seems even more illogical.

Our ISP's IPv6 setup is quite special and expects a very specific configuration.
#6
Hey guys! I also got the same message from support, that "you should be able to use it with default settings".. No.. At least not in OPNsense - you won't ever get Prefix Delegation without those custom options.

Here are some more details to get it working properly and consistently - in my experience, the instructions above are not enough with the recent kviknet/ewii ISP changes (or the latest version of OPNsense?).

Interfaces -> Overview: copy the link-local address (fe80::xx:1) from the Gateway field under WAN.

System -> Gateways -> WAN_DHCPv6: paste it into "IP Address", then apply.
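
To check that it works, a quick connectivity test (just a sketch; note that from the OPNsense shell itself you may still need to bind a specific source address, as mentioned in my newer posts above):

ping -6 -c 4 2606:4700:4700::1111
# or run https://test-ipv6.com from a LAN client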

Enjoy! Finally.. ;)
#7
Can anyone suggest a better reliability test than ping/ICMP for intermittent loss?
#8
I wanted to test the reliability of the interfaces on both the client (RTL8125) and the router (HUNSN N100, 4x 2.5GbE i226V), since both of those were controversial at their inception for various issues.

So I thought a (continuous) ping might be the best intermittent test..
#9
Quote from: sliman on August 24, 2024, 07:01:32 PM
I don't know exactly how ICMP is implemented in your cards or in OPNsense, but maybe try sending ICMP with a higher QoS priority and compare results, if it's critical. I've never done it, but it could be a nice experiment.
Do you have an example, or can you elaborate on how this could be done?

Quote from: meyergru on August 24, 2024, 05:59:04 PM
You have already found that you can limit the number of ICMP packets per second in FreeBSD. Without knowing exactly, I would think that ICMP has lower priority than other network packets, so if there is anything else going on over your router, ICMP packets may get dropped in favor of other IP traffic.

Also, since many devices on your network may want to use the default route, which presumably passes through a LAN port of your OPNsense, they share the port's bandwidth. Thus, the switch may drop packets even before your OPNsense becomes aware of it.
Do you have a better reliability test that could be used for intermittent dropouts?
#10
General Discussion / Packet loss when pinging opnsense
August 24, 2024, 05:40:48 PM
I've been chasing this issue for some time: the OPNsense router drops ICMP packets when being pinged continuously.

What's weird is that my access point (and other clients) on the same subnet, also connected to the same LAN interface, does not produce any packet loss at all over several days of continuous pinging.

If I run `ping -t 10.0.0.1` over several hours (sometimes as little as 30 minutes, sometimes 8 or 24 hours), it almost always produces at least 2-20+ lost packets.

I have tried raising `net.icmp.icmplim=1000` with no results.

Initially I had `hw.acpi.cpu.cx_lowest=c3`, but it's now at `hw.acpi.cpu.cx_lowest=c1` for the sake of testing. `dev.hwpstate_intel.0.epp` through `dev.hwpstate_intel.3.epp` are also set to 0.
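
For reference, the current values I'm watching can be checked in one go from the shell:

sysctl net.icmp.icmplim hw.acpi.cpu.cx_lowest
sysctl dev.hwpstate_intel.0.epp dev.hwpstate_intel.1.epp dev.hwpstate_intel.2.epp dev.hwpstate_intel.3.epp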

Since no addresses except the gateway produce any packet loss, I can only conclude that the OPNsense gateway must be actively dropping these packets somehow at intervals, because it is the only address on the LAN that does this.

So even though the OPNsense router is at the central point, other clients can ping each other without packet loss; it's only the gateway (OPNsense) itself that produces packet loss from time to time.

Does anyone know why this could happen?