Has anyone noticed a significant slowdown (a full order of magnitude or more) in the average recursion time in Unbound after enabling this option? I tried with the latest stable build, 21.7.5.

Cheers,
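For anyone who wants to compare before and after, Unbound keeps its own recursion-time counters, which can be read with unbound-control; a minimal sketch, assuming remote control is enabled for your Unbound instance:

# average and median time (in seconds) spent on recursive queries since the last stats reset
unbound-control stats_noreset | grep 'recursion.time'
# typical output: total.recursion.time.avg=... and total.recursion.time.median=...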
Linux does have RPS (Receive Packet Steering) to deal with PPPoE acceleration, but FreeBSD has no equivalent.
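For comparison only, RPS on Linux is enabled per receive queue by writing a CPU mask into sysfs; a minimal sketch, with the interface name being just an example:

# Linux only: spread receive processing for eth0's rx-0 queue across CPUs 0-3 (mask 0xf)
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus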
Configuration:
Setting                        Current   Limit
Thread count                         4       4
Default queue limit                256   10240
Dispatch policy                 direct     n/a
Threads bound to CPUs          enabled     n/a

Protocols:
Name       Proto QLimit Policy Dispatch Flags
ip             1   1000    cpu   hybrid   C--
igmp           2    256 source  default   ---
rtsock         3    256 source  default   ---
arp            4    256 source  default   ---
ether          5    256    cpu   direct   C--
ip6            6    256    cpu   hybrid   C--
ip_direct      9    256    cpu   hybrid   C--
ip6_direct    10    256    cpu   hybrid   C--

Workstreams:
WSID CPU   Name        Len WMark   Disp'd  HDisp'd   QDrops   Queued  Handled
   0   0   ip            0    10        0   533094        0    11564   544658
   0   0   igmp          0     0        2        0        0        0        2
   0   0   rtsock        0     2        0        0        0       36       36
   0   0   arp           0     0     1625        0        0        0     1625
   0   0   ether         0     0  2350239        0        0        0  2350239
   0   0   ip6           0     0        0       14        0        0       14
   0   0   ip_direct     0     0        0        0        0        0        0
   0   0   ip6_direct    0     0        0        0        0        0        0
   1   1   ip            0    11        0        0        0   335277   335277
   1   1   igmp          0     0        0        0        0        0        0
   1   1   rtsock        0     0        0        0        0        0        0
   1   1   arp           0     0        0        0        0        0        0
   1   1   ether         0     0        0        0        0        0        0
   1   1   ip6           0     1        0        0        0        8        8
   1   1   ip_direct     0     0        0        0        0        0        0
   1   1   ip6_direct    0     0        0        0        0        0        0
   2   2   ip            0    14        0     1235        0   478622   479857
   2   2   igmp          0     0        0        0        0        0        0
   2   2   rtsock        0     0        0        0        0        0        0
   2   2   arp           0     0        0        0        0        0        0
   2   2   ether         0     0   333485        0        0        0   333485
   2   2   ip6           0     1        0        0        0        1        1
   2   2   ip_direct     0     0        0        0        0        0        0
   2   2   ip6_direct    0     0        0        0        0        0        0
   3   3   ip            0    13        0        0        0   475546   475546
   3   3   igmp          0     0        0        0        0        0        0
   3   3   rtsock        0     0        0        0        0        0        0
   3   3   arp           0     0        0        0        0        0        0
   3   3   ether         0     0        0        0        0        0        0
   3   3   ip6           0     1        0        0        0        1        1
   3   3   ip_direct     0     0        0        0        0        0        0
   3   3   ip6_direct    0     0        0        0        0        0        0
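The column worth keeping an eye on in the Workstreams section is QDrops, since non-zero values there mean packets were dropped off the netisr queues. A quick sketch for spotting that under load, assuming the netstat -Q workstream layout shown above (QDrops is the 8th field):

# print only workstream rows where the QDrops column is non-zero
netstat -Q | awk '$1 ~ /^[0-9]+$/ && $8 > 0'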
speedtest -s 4302

   Speedtest by Ookla

      Server: Vodafone IT - Milan (id = 4302)
         ISP: Tecno General S.r.l
     Latency:     8.26 ms   (0.16 ms jitter)
    Download:   937.00 Mbps (data used: 963.1 MB)
      Upload:   281.64 Mbps (data used: 141.4 MB)
 Packet Loss: Not available.
  Result URL: https://www.speedtest.net/result/c/0e691806-5212-4fc3-b199-2b2e92660367
I wanted to thank you guys for getting this into the 21.7.x release lately. After applying the mentioned tunables, I now finally get the full internet speed I should have (1000/500). Before these changes the download side throttled at around 500, so I effectively ended up with a 500/500 line.

So THANK YOU!
hw.pci.enable_msix="1"
machdep.hyperthreading_allowed="0"
hw.em.rx_process_limit="-1"
net.link.ifqmaxlen="8192"
net.isr.numthreads=4
net.isr.defaultqlimit=4096
net.isr.bindthreads=1
net.isr.maxthreads=4
net.inet.rss.enabled=1
net.inet.rss.bits=2
dev.em.3.iflib.override_nrxds="4096"
dev.em.3.iflib.override_ntxds="4096"
dev.em.3.iflib.override_qs_enable="1"
dev.em.3.iflib.override_nrxqs="4"
dev.em.3.iflib.override_ntxqs="4"
dev.em.2.iflib.override_nrxds="4096"
dev.em.2.iflib.override_ntxds="4096"
dev.em.2.iflib.override_qs_enable="1"
dev.em.2.iflib.override_nrxqs="4"
dev.em.2.iflib.override_ntxqs="4"
dev.em.1.iflib.override_nrxds="4096"
dev.em.1.iflib.override_ntxds="4096"
dev.em.1.iflib.override_qs_enable="1"
dev.em.1.iflib.override_nrxqs="4"
dev.em.1.iflib.override_ntxqs="4"
dev.em.0.iflib.override_nrxds="4096"
dev.em.0.iflib.override_ntxds="4096"
dev.em.0.iflib.override_qs_enable="1"
dev.em.0.iflib.override_nrxqs="4"
dev.em.0.iflib.override_ntxqs="4"
dev.em.0.fc="0"
dev.em.1.fc="0"
dev.em.2.fc="0"
dev.em.3.fc="0"
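These are boot-time tunables, so they only take effect after a reboot. A small sketch for checking afterwards that the netisr/RSS and per-NIC queue values were actually picked up (the dev.em.* OIDs assume the same em NICs as in the list above):

# confirm the netisr and RSS settings after reboot
sysctl net.isr.maxthreads net.isr.bindthreads net.inet.rss.enabled net.inet.rss.bits
# confirm the per-NIC queue overrides, e.g. for em0
sysctl dev.em.0.iflib.override_nrxqs dev.em.0.iflib.override_ntxqs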
Unbound's so-reuseport is buggy with RSS enabled, so we removed it to avoid further problems. It might just be that the outcome is the same speed-wise whether so-reuseport is disabled or RSS is enabled; the two are mutually exclusive here. We will be looking into it, but with the beta just out it's better to concentrate on more urgent issues.

Cheers,
Franco
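For reference, this is the Unbound server option in question; a minimal unbound.conf fragment, assuming a hand-managed configuration (OPNsense normally generates this file itself):

server:
    # do not open one listening socket per thread via SO_REUSEPORT
    so-reuseport: no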
I'm using Unbound and RSS at home and I don't notice any difference. The situation needs some sort of fix in the kernel, but for day-to-day use it's good enough.

Cheers,
Franco
penguin@OPNsense:~ % sudo sysctl -a | grep -i 'isr.bindthreads\|isr.maxthreads\|inet.rss.enabled\|inet.rss.bits'
net.inet.rss.enabled: 1
net.inet.rss.bits: 2
net.isr.bindthreads: 1
net.isr.maxthreads: 4

penguin@OPNsense:~ % sudo netstat -Q
Configuration:
Setting                        Current   Limit
Thread count                         4       4
Default queue limit                256   10240
Dispatch policy                 direct     n/a
Threads bound to CPUs          enabled     n/a

Protocols:
Name       Proto QLimit Policy Dispatch Flags
ip             1   1000    cpu   hybrid   C--
igmp           2    256 source  default   ---
rtsock         3    256 source  default   ---
arp            4    256 source  default   ---
ether          5    256    cpu   direct   C--
ip6            6    256    cpu   hybrid   C--
ip_direct      9    256    cpu   hybrid   C--
ip6_direct    10    256    cpu   hybrid   C--
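If you also want to see whether incoming traffic is actually being spread across the receive queues, the per-queue interrupt counters are a rough indicator; a sketch assuming the same em NICs as above:

# per-queue interrupt counts; roughly even counters across the rxq lines suggest RSS is distributing flows
vmstat -i | grep em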