[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-10.23  sec  2.34 GBytes  1.96 Gbits/sec    0             sender
[  5]   0.00-10.23  sec  2.34 GBytes  1.96 Gbits/sec                  receiver
[  7]   0.00-10.23  sec  2.09 GBytes  1.75 Gbits/sec    0             sender
[  7]   0.00-10.23  sec  2.09 GBytes  1.75 Gbits/sec                  receiver
[  9]   0.00-10.23  sec  1.67 GBytes  1.40 Gbits/sec    0             sender
[  9]   0.00-10.23  sec  1.67 GBytes  1.40 Gbits/sec                  receiver
[ 11]   0.00-10.23  sec  1.65 GBytes  1.39 Gbits/sec    0             sender
[ 11]   0.00-10.23  sec  1.65 GBytes  1.39 Gbits/sec                  receiver
[SUM]   0.00-10.23  sec  7.75 GBytes  6.50 Gbits/sec    0             sender
[SUM]   0.00-10.23  sec  7.75 GBytes  6.50 Gbits/sec                  receiver
root@gateway:/ # uname -a
FreeBSD gateway.webtool.space 12.1-RELEASE-p10-HBSD FreeBSD 12.1-RELEASE-p10-HBSD #0 517e44a00df(stable/20.7)-dirty: Mon Sep 21 16:21:17 CEST 2020 root@sensey64:/usr/obj/usr/src/amd64.amd64/sys/SMP amd64
root@gateway:/ # iperf3 -c 192.168.1.56
Connecting to host 192.168.1.56, port 5201
[  5] local 192.168.1.1 port 13640 connected to 192.168.1.56 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   125 MBytes  1.05 Gbits/sec    0   2.00 MBytes
[  5]   1.00-2.00   sec   126 MBytes  1.06 Gbits/sec    0   2.00 MBytes
[  5]   2.00-3.00   sec   132 MBytes  1.11 Gbits/sec    0   2.00 MBytes
[  5]   3.00-4.00   sec   131 MBytes  1.10 Gbits/sec    0   2.00 MBytes
[  5]   4.00-5.00   sec   132 MBytes  1.11 Gbits/sec    0   2.00 MBytes
[  5]   5.00-6.00   sec   135 MBytes  1.13 Gbits/sec    0   2.00 MBytes
[  5]   6.00-7.00   sec   138 MBytes  1.16 Gbits/sec    0   2.00 MBytes
[  5]   7.00-8.00   sec   137 MBytes  1.15 Gbits/sec    0   2.00 MBytes
[  5]   8.00-9.00   sec   133 MBytes  1.12 Gbits/sec    0   2.00 MBytes
[  5]   9.00-10.00  sec   131 MBytes  1.10 Gbits/sec    0   2.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.29 GBytes  1.11 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  1.29 GBytes  1.11 Gbits/sec                  receiver
iperf Done.
avery@debbox:~$ uname -a
Linux debbox 5.4.0-4-amd64 #1 SMP Debian 5.4.19-1 (2020-02-13) x86_64 GNU/Linux
avery@debbox:~$ iperf3 -c 192.168.1.56
Connecting to host 192.168.1.56, port 5201
[  5] local 192.168.1.39 port 58064 connected to 192.168.1.56 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   688 MBytes  5.77 Gbits/sec    0   2.00 MBytes
[  5]   1.00-2.00   sec   852 MBytes  7.15 Gbits/sec    0   2.00 MBytes
[  5]   2.00-3.00   sec   801 MBytes  6.72 Gbits/sec  1825    730 KBytes
[  5]   3.00-4.00   sec   779 MBytes  6.53 Gbits/sec   33   1.13 MBytes
[  5]   4.00-5.00   sec   788 MBytes  6.61 Gbits/sec   266  1.33 MBytes
[  5]   5.00-6.00   sec   828 MBytes  6.94 Gbits/sec   392  1.43 MBytes
[  5]   6.00-7.00   sec   830 MBytes  6.96 Gbits/sec   477  1.49 MBytes
[  5]   7.00-8.00   sec   826 MBytes  6.93 Gbits/sec  1286    749 KBytes
[  5]   8.00-9.00   sec   826 MBytes  6.93 Gbits/sec    0   1.26 MBytes
[  5]   9.00-10.00  sec   775 MBytes  6.50 Gbits/sec   278  1.38 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  7.81 GBytes  6.71 Gbits/sec  4557          sender
[  5]   0.00-10.00  sec  7.80 GBytes  6.70 Gbits/sec                receiver
iperf Done.
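For comparing runs like the FreeBSD and Linux ones above, iperf3's machine-readable output (`iperf3 -c <host> -J`) is easier to post-process than the text table. A minimal sketch; the JSON below is a trimmed, hypothetical excerpt of what iperf3 prints at the end of a run, with field names following iperf3's JSON output format:

```python
import json

# Trimmed, hypothetical excerpt of `iperf3 -c <host> -J` output;
# the field names follow iperf3's JSON schema ("end" summary section).
sample = """
{
  "end": {
    "sum_sent":     {"bits_per_second": 6.71e9, "retransmits": 4557},
    "sum_received": {"bits_per_second": 6.70e9}
  }
}
"""

result = json.loads(sample)
sent = result["end"]["sum_sent"]
recv = result["end"]["sum_received"]
print(f"sender:   {sent['bits_per_second'] / 1e9:.2f} Gbit/s, "
      f"{sent['retransmits']} retransmits")
print(f"receiver: {recv['bits_per_second'] / 1e9:.2f} Gbit/s")
```

The retransmit counter in particular is worth tracking across runs: the Linux client above pushed 6.7 Gbit/s but with 4557 retransmits, while the HBSD client ran clean at 1.1 Gbit/s.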
It is odd that so many of us seem to hit an artificial ~1 Gbps limit when testing OPNsense 20.7 on VMware ESXi with vmxnet3 adapters. It looks like there are at least three of us able to reproduce these results now.

I've disabled the hardware blacklist and did not see a difference from the test results I posted here earlier. The only way I can get somewhat better throughput is to add more vCPUs to the OPNsense VM, but this does not scale well. For instance, going from 2 vCPUs to 4 vCPUs gets me between 1.5 Gbps and 2.2 Gbps, depending on how much parallelism I select on my iperf clients.
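Since the numbers here depend heavily on how much parallelism the client uses, it may help to sweep stream counts systematically and compare. A minimal Python sketch that just builds the client command lines (the host IP and stream counts are illustrative assumptions; `-P` and `-t` are standard iperf3 client flags):

```python
# Hypothetical sweep: build iperf3 invocations for 1, 2 and 4 parallel
# streams against an assumed server IP. Pass each list to
# subprocess.run() to actually execute the test.
HOST = "192.168.1.56"  # assumed iperf3 server address

def iperf_cmd(streams, seconds=10):
    """Return an iperf3 client command with -P parallel streams."""
    return ["iperf3", "-c", HOST, "-P", str(streams), "-t", str(seconds)]

for p in (1, 2, 4):
    print(" ".join(iperf_cmd(p)))
```

Running the same sweep on each vCPU configuration makes the scaling (or lack of it) easier to quantify than one-off runs.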
Be honest with yourself: would you buy a piece of hardware with only 2 cores if you have a requirement for 10G? The smallest hardware with 10G interfaces has 4 cores minimum.
I have customers pushing 6 Gbit/s over the vmxnet driver.
You guys got me interested in this subject. I have run plenty of iperf3 tests against the VMs in my little 3-host homelab. My 10GbE is just a couple of DACs connected between the 10GbE "backbone" interfaces of my Dell PowerConnect 7048P, which is really more of a gigabit switch.
With Proxmox using the vnet adapter the speed is fine, and pfSense, based on FreeBSD 11, works fine with vmxnet3 too. So the issue is with HBSD and the vmxnet adapter. I don't understand why OPNsense is based on a half-dead OS; HBSD has been abandoned by most of its devs. Just drop it and use standard FreeBSD again.
Quote from: Archanfel80 on October 26, 2020, 10:27:47 am
"With Proxmox using the vnet adapter the speed is fine, and pfSense, based on FreeBSD 11, works fine with vmxnet3 too. So the issue is with HBSD and the vmxnet adapter. I don't understand why OPNsense is based on a half-dead OS; HBSD has been abandoned by most of its devs. Just drop it and use standard FreeBSD again."

FreeBSD 12.1 has the same issues ..
Just keep using 20.1 with all the security-related caveats and missing features. I really don't see the point in complaining about user choices.

Cheers,
Franco