iperf3 -V -f m -c 192.168.20.237
[  5] local 192.168.30.220 port 34084 connected to 192.168.20.237 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  27.1 MBytes   227 Mbits/sec    0   1.22 MBytes
[  5]   1.00-2.00   sec  23.8 MBytes   199 Mbits/sec   57   1.13 MBytes
[  5]   2.00-3.00   sec  23.8 MBytes   199 Mbits/sec    0   1.24 MBytes
[  5]   3.00-4.00   sec  23.8 MBytes   199 Mbits/sec    0   1.33 MBytes
[  5]   4.00-5.00   sec  23.8 MBytes   199 Mbits/sec    0   1.39 MBytes
[  5]   5.00-6.00   sec  23.8 MBytes   199 Mbits/sec    4   1.01 MBytes
[  5]   6.00-7.00   sec  23.8 MBytes   199 Mbits/sec    0   1.08 MBytes
[  5]   7.00-8.00   sec  23.8 MBytes   199 Mbits/sec    0   1.14 MBytes
[  5]   8.00-9.00   sec  23.8 MBytes   199 Mbits/sec    0   1.17 MBytes
[  5]   9.00-10.00  sec  23.8 MBytes   199 Mbits/sec    0   1.20 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   241 MBytes   202 Mbits/sec   61             sender
[  5]   0.00-10.07  sec   238 MBytes   198 Mbits/sec                  receiver
CPU Utilization: local/sender 1.6% (0.0%u/1.5%s), remote/receiver 21.5% (1.3%u/20.2%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf3 -V -f m -c 192.168.20.230
[  5] local 192.168.30.220 port 55432 connected to 192.168.20.230 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  29.8 MBytes   250 Mbits/sec    0   1.33 MBytes
[  5]   1.00-2.00   sec  26.2 MBytes   220 Mbits/sec   60   1.15 MBytes
[  5]   2.00-3.00   sec  26.2 MBytes   220 Mbits/sec    0   1.26 MBytes
[  5]   3.00-4.00   sec  25.0 MBytes   210 Mbits/sec    0   1.35 MBytes
[  5]   4.00-5.00   sec  26.2 MBytes   220 Mbits/sec    0   1.41 MBytes
[  5]   5.00-6.00   sec  26.2 MBytes   220 Mbits/sec    2   1.04 MBytes
[  5]   6.00-7.00   sec  26.2 MBytes   220 Mbits/sec    0   1.11 MBytes
[  5]   7.00-8.00   sec  25.0 MBytes   210 Mbits/sec    0   1.15 MBytes
[  5]   8.00-9.00   sec  26.2 MBytes   220 Mbits/sec    0   1.18 MBytes
[  5]   9.00-10.00  sec  26.2 MBytes   220 Mbits/sec    0   1.20 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   264 MBytes   221 Mbits/sec   62             sender
[  5]   0.00-10.07  sec   260 MBytes   217 Mbits/sec                  receiver
CPU Utilization: local/sender 1.6% (0.1%u/1.5%s), remote/receiver 16.7% (1.6%u/15.1%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
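When comparing several runs like the two above, it can be handy to pull just the sender bitrate out of the summary. A small sketch with awk; the field layout assumes the default human-readable iperf3 summary format shown above, and the hard-coded sample line stands in for real output (on a test box you would pipe `iperf3 -c <host>` straight into the awk filter):

```shell
# Sample sender summary line from the first run above.
summary='[  5]   0.00-10.00  sec   241 MBytes   202 Mbits/sec   61             sender'

# Walk the fields and print the number immediately preceding "Mbits/sec"
# on the sender line, i.e. the average sender bitrate.
bitrate=$(printf '%s\n' "$summary" | awk '/sender/ {for (i = 1; i <= NF; i++) if ($i == "Mbits/sec") print $(i-1)}')
echo "$bitrate"   # prints 202
```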
#cpu_microcode_load="YES"
#cpu_microcode_name="/boot/firmware/intel-ucode.bin"
# agree with Intel license terms
amdtemp_load="YES"
ahci_load="YES"
aesni_load="YES"
if_igb_load="YES"
flowd_enable="YES"
flowd_aggregate_enable="YES"
legal.intel_igb.license_ack="1"
legal.intel_ipw.license_ack=1
legal.intel_iwi.license_ack=1
# this is the magic. If you don't set this, queues won't be utilized properly
# allow multiple processes for receive/transmit processing
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
# more settings to play with below. Not strictly necessary.
# force NIC to use 1 queue (don't do it on APU)
# hw.igb.num_queues=1
# give enough RAM to network buffers (default is usually OK)
#kern.ipc.nmbclusters="1000000"
net.pf.states_hashsize=2097152
#hw.igb.rxd=4096
#hw.igb.txd=4096
#net.inet.tcp.syncache.hashsize="1024"
#net.inet.tcp.syncache.bucketlimit="100"
#kern.smp.disabled=1
#hw.igb.0.fc=3
#hw.igb.1.fc=3
#hw.igb.2.fc=3
hw.igb.num_queues=0
#net.link.ifqmaxlen="8192"
hw.igb.enable_aim=1
#hw.igb.max_interrupt_rate="64000"
hw.igb.enable_msix=1
hw.pci.enable_msix=1
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
#net.inet.ip.maxfragpackets="0"
#net.inet.ip.maxfragsperpacket="0"
#dev.igb.0.eee_disabled="1"
#dev.igb.1.eee_disabled="1"
#dev.igb.2.eee_disabled="1"
vm.pmap.pti=0
hw.ibrs_disable=0
hint.p4tcc.0.disabled=1
hint.acpi_throttle.0.disabled=1
hint.acpi_perf.0.disabled=1
hint.p4tcc.1.disabled=1
hint.acpi_throttle.1.disabled=1
hint.acpi_perf.1.disabled=1
hint.p4tcc.2.disabled=1
hint.acpi_throttle.2.disabled=1
hint.acpi_perf.2.disabled=1
hint.p4tcc.3.disabled=1
hint.acpi_throttle.3.disabled=1
hint.acpi_perf.3.disabled=1
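After a reboot you can read the live values back with sysctl to confirm the loader tunables actually took effect (`sysctl -e` prints `name=value`). As a portable sketch, the check below is written as a filter over `name=value` lines, so it also works against a saved `sysctl -a -e` dump; the hard-coded sample line stands in for real sysctl output, and the helper name is made up for illustration:

```shell
# Hypothetical helper: extract the value of one tunable from
# "name=value" lines on stdin. Assumes one tunable per line.
check_tunable() {
  grep "^$1=" | cut -d= -f2
}

# Sample input standing in for real output; on the APU itself you
# would run e.g.:  sysctl -e hw.igb.rx_process_limit | check_tunable hw.igb.rx_process_limit
value=$(printf 'hw.igb.rx_process_limit=-1\n' | check_tunable hw.igb.rx_process_limit)
echo "$value"   # prints -1
```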
hw.ibrs_disable = 1
See https://teklager.se/en/knowledge-base/opnsense-performance-optimization/
You have to edit /boot/loader.conf.local and also set the same values as tunables through the GUI.
E.g. my file is:
<snip>
sysctl dev.cpu.0.freq
dev.cpu.0.freq: 1400
Core performance boost can raise the frequency of only one core, and that core only up to a maximum of 1.4 GHz, and only for a short period of time. Then it drops back to 1.0 GHz.
hint.acpi_perf.0.disabled 1
hint.acpi_throttle.0.disabled 1
hint.p4tcc.0.disabled 1
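If you put these straight into /boot/loader.conf.local instead of the GUI tunables page, they take the `name=value` form, matching the file posted earlier in the thread (shown here for core 0 only; repeat per core):

```
hint.acpi_perf.0.disabled=1
hint.acpi_throttle.0.disabled=1
hint.p4tcc.0.disabled=1
```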
APU Core Performance Boost
https://github.com/pcengines/apu2-documentation/blob/master/docs/apu_CPU_boost.md

miroco