iperf3 -R -c 10.90.70.250
Connecting to host 10.90.70.250, port 5201
Reverse mode, remote host 10.90.70.250 is sending
[  5] local 10.90.70.110 port 42160 connected to 10.90.70.250 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  93.5 MBytes   784 Mbits/sec
[  5]   1.00-2.00   sec  93.6 MBytes   785 Mbits/sec
[  5]   2.00-3.00   sec  93.6 MBytes   786 Mbits/sec
[  5]   3.00-4.00   sec  94.2 MBytes   790 Mbits/sec
[  5]   4.00-5.00   sec  95.8 MBytes   803 Mbits/sec
[  5]   5.00-6.00   sec  95.1 MBytes   798 Mbits/sec
[  5]   6.00-7.00   sec  95.8 MBytes   803 Mbits/sec
[  5]   7.00-8.00   sec  96.1 MBytes   806 Mbits/sec
[  5]   8.00-9.00   sec  95.9 MBytes   805 Mbits/sec
[  5]   9.00-10.00  sec  96.1 MBytes   806 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   950 MBytes   797 Mbits/sec    0   sender
[  5]   0.00-10.00  sec   950 MBytes   797 Mbits/sec        receiver
iperf3 -R -p 5206 -c bouygues.iperf.fr
Connecting to host bouygues.iperf.fr, port 5206
Reverse mode, remote host bouygues.iperf.fr is sending
[  5] local 192.168.1.158 port 58658 connected to 89.84.1.222 port 5206
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   111 MBytes   930 Mbits/sec
[  5]   1.00-2.00   sec   112 MBytes   941 Mbits/sec
[  5]   2.00-3.00   sec   112 MBytes   942 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   942 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   942 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   946 Mbits/sec    0   sender
[  5]   0.00-10.00  sec  1.09 GBytes   940 Mbits/sec        receiver
Found some other BSD-related optimization material; I need to look into it:
https://calomel.org/network_performance.html
https://calomel.org/freebsd_network_tuning.html
net.inet.ip.ifq.maxlen
kern.ipc.maxsockbuf
net.inet.tcp.recvbuf_inc
net.inet.tcp.recvbuf_max
net.inet.tcp.recvspace
net.inet.tcp.sendbuf_inc
net.inet.tcp.sendbuf_max
net.inet.tcp.sendspace
net.inet.tcp.tso
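As a rough sizing rule for the TCP buffer tunables listed above, the send/receive buffers should hold at least one bandwidth-delay product (BDP), or a single TCP stream cannot keep a gigabit link full. A minimal sketch of the arithmetic, with an assumed 1 Gbit/s link and an assumed 40 ms RTT (both numbers are illustrative, not measurements from this thread):

```shell
# BDP = link rate (bytes/s) x round-trip time (s).
# The link rate and RTT below are assumptions for illustration only.
RATE_BITS=1000000000   # assumed 1 Gbit/s link
RTT_MS=40              # assumed round-trip time in milliseconds
BDP_BYTES=$(( RATE_BITS / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP_BYTES} bytes"
```

This yields 5,000,000 bytes for the assumed numbers, which suggests a candidate floor for values like net.inet.tcp.recvbuf_max; measure your actual RTT before settling on a value.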
I think you are mixing two things in this thread. This thread is about optimizing APU-based hardware, which only reaches 1 Gbit/s on FreeBSD when specifically tuned. The other issue is, at best, a performance problem of 21.1 on Xen-based virtualization; several other forum participants have reported the same observation. I would rather not discuss the Xen issue in this APU thread, since in a dedicated thread you are more likely to reach users who are also affected.
Start with e.g. these (from this thread):
net.inet6.ip6.redirect=0
net.inet.ip.redirect=0
hw.igb.rx_process_limit=-1 (hardware dependent and will probably not match your NIC in the VM)
hw.igb.tx_process_limit=-1 (hardware dependent and will probably not match your NIC in the VM)
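A sketch of where these values would go on a FreeBSD-based system (on OPNsense they are normally added under System > Settings > Tunables instead of editing files by hand; the hw.igb.* entries only take effect if the NIC actually uses the igb(4) driver):

```
# /etc/sysctl.conf -- runtime sysctls (sketch, values from the post above)
net.inet6.ip6.redirect=0
net.inet.ip.redirect=0

# /boot/loader.conf -- boot-time loader tunables (sketch; igb(4) NICs only)
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
```

The redirect sysctls can also be applied immediately with `sysctl`, while the hw.igb.* limits require a reboot because they are read at driver attach time.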
Switch to an Odroid H2+; it achieves Gigabit with no issue.
Quote from: mater on February 09, 2021, 06:41:06 am
"Switch to an Odroid H2+; it achieves Gigabit with no issue."

Realtek LAN? Sorry, I'll pass. Yes, I see the problem with the APU, but the Odroid is not the solution, IMHO.
Protectli looks good, though it is a bit difficult to find in Europe.