Hi,
I just received my Kaleao Mini PCI-E Intel I350-T3 and am doing some tests with:
OPNsense 23.1.1_2-amd64
FreeBSD 13.1-RELEASE-p6
OpenSSL 1.1.1t 7 Feb 2023
CPU: Intel i5-4690S, 8 GB memory, running as a virtual machine on ESXi with passthrough, and I am getting the following results:
iperf3 -c IPADDRESS:
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 56.4 MBytes 473 Mbits/sec 16 166 KBytes
[ 5] 1.00-2.00 sec 57.2 MBytes 480 Mbits/sec 4 277 KBytes
[ 5] 2.00-3.00 sec 55.5 MBytes 466 Mbits/sec 9 248 KBytes
[ 5] 3.00-4.00 sec 58.4 MBytes 489 Mbits/sec 13 217 KBytes
[ 5] 4.00-5.00 sec 58.6 MBytes 492 Mbits/sec 24 139 KBytes
[ 5] 5.00-6.00 sec 58.1 MBytes 487 Mbits/sec 16 190 KBytes
[ 5] 6.00-7.00 sec 55.9 MBytes 469 Mbits/sec 11 183 KBytes
[ 5] 7.00-8.00 sec 56.5 MBytes 474 Mbits/sec 2 337 KBytes
[ 5] 8.00-9.00 sec 57.7 MBytes 484 Mbits/sec 30 295 KBytes
[ 5] 9.00-10.00 sec 59.5 MBytes 499 Mbits/sec 2 307 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 574 MBytes 481 Mbits/sec 127 sender
[ 5] 0.00-10.00 sec 573 MBytes 481 Mbits/sec receiver
with:
iperf3 -c IPADDRESS -u -t 60 -i 10 -b 1000M
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-10.00 sec 1.11 GBytes 957 Mbits/sec 1678000
[ 5] 10.00-20.00 sec 1.11 GBytes 956 Mbits/sec 1554568
[ 5] 20.00-30.00 sec 1.11 GBytes 956 Mbits/sec 1530808
[ 5] 30.00-40.00 sec 1.11 GBytes 956 Mbits/sec 1792589
[ 5] 40.00-50.00 sec 1.11 GBytes 956 Mbits/sec 1582055
[ 5] 50.00-60.00 sec 1.11 GBytes 956 Mbits/sec 1712025
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-60.00 sec 6.68 GBytes 956 Mbits/sec 0.000 ms 0/9850045 (0%) sender
[ 5] 0.00-60.01 sec 6.51 GBytes 932 Mbits/sec 0.016 ms 5057030/9844748 (51%) receiver
I would expect similar results with both iperf3 commands; however, the first one gives me half the speed.
So my question is: is something wrong, and which one shows the actual performance of my fiber connection?
Feel free to chip in.
*Side note: I tested with IPS both enabled and disabled; CRC, TSO and LRO are also disabled.
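(For reference, this is roughly how those offload settings can be checked and disabled from the FreeBSD shell; the interface name igb0 is just an assumption, substitute the actual I350 port:)
ifconfig igb0 | grep options                # show which offload capabilities are currently enabled
ifconfig igb0 -tso -lro -rxcsum -txcsum     # disable TSO, LRO and checksum offload on that interface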
Thanks,
Hemant
You'll get way better results with a newer CPU and by not virtualizing it. Just saying.
The main difference in your testing seems to be UDP vs. TCP. And you are using just one connection (-P 1), which can be a limiting factor.
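As a sketch (IPADDRESS being your server, as above), a quick way to check whether a single TCP stream is the limit is to run several in parallel:
iperf3 -c IPADDRESS -P 4 -t 30     # 4 parallel TCP streams over 30 seconds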
With TCP there is also a ramp-up phase (slow start) during which the congestion window is still growing. When you test for only a short duration (-t), the influence of that can be significant.
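iperf3 can leave that ramp-up out of the summary; a minimal sketch:
iperf3 -c IPADDRESS -t 30 -O 5     # 30 second test, omit the first 5 seconds (slow start) from the results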
Calculating the packets in flight can be tricky, depending on which direction you are testing (-R) and whether the connection speeds are asymmetric (which they often are with GPON).
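For a rough idea, the TCP window needed to keep a link busy is bandwidth × RTT (the bandwidth-delay product); with assumed values of 1 Gbit/s and 20 ms RTT that is 20 Mbit = 2.5 MBytes. You can test the other direction and try a larger window explicitly, for example:
iperf3 -c IPADDRESS -R -t 30       # reverse mode: the server sends, the client receives
iperf3 -c IPADDRESS -w 2M -t 30    # request a larger TCP window (2 MB here is just an example value)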
You can influence that by using a traffic shaper, which also reduces bufferbloat (see https://www.waveform.com/tools/bufferbloat for a test; there are several guides for traffic shaping with OPNsense).
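OPNsense's shaper is configured in the GUI and, as far as I know, is built on FreeBSD's ipfw/dummynet; purely to illustrate the concept, a minimal command-line sketch (the 450 Mbit/s cap and igb0 interface are arbitrary example values):
ipfw pipe 1 config bw 450Mbit/s queue 50              # cap bandwidth with a short queue to keep latency down
ipfw add 100 pipe 1 ip from any to any out via igb0   # push outgoing traffic on igb0 through that pipe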