Hello everyone,
I am running a DEC740 and moved my internal network to 10G over fiber.
With MTU 1500 I get about 1.4 Gbit/s in both directions between my OPNsense and my server.
With MTU 9000 I get almost 10 Gbit/s, but only in one direction; in the other direction I get 0 Mbit/s.
Here are my iperf3 results.
The first test runs the iperf3 server on my server; the second runs it on my OPNsense:
root@Host:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 10.10.20.1, port 64465
[ 5] local 10.10.20.10 port 5201 connected to 10.10.20.1 port 3916
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.09 GBytes 9.35 Gbits/sec
[ 5] 1.00-2.00 sec 1.06 GBytes 9.13 Gbits/sec
[ 5] 2.00-3.00 sec 1.12 GBytes 9.63 Gbits/sec
[ 5] 3.00-4.00 sec 1.07 GBytes 9.20 Gbits/sec
[ 5] 4.00-5.00 sec 1.12 GBytes 9.66 Gbits/sec
[ 5] 5.00-6.00 sec 1.12 GBytes 9.60 Gbits/sec
[ 5] 6.00-7.00 sec 1.12 GBytes 9.64 Gbits/sec
[ 5] 7.00-8.00 sec 1.11 GBytes 9.50 Gbits/sec
[ 5] 8.00-9.00 sec 1.12 GBytes 9.61 Gbits/sec
[ 5] 9.00-10.00 sec 1.02 GBytes 8.74 Gbits/sec
[ 5] 10.00-10.00 sec 944 KBytes 9.24 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 11.0 GBytes 9.41 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
root@Host:~# iperf3 -c 10.10.1.1
Connecting to host 10.10.1.1, port 5201
[ 5] local 10.10.20.10 port 41716 connected to 10.10.1.1 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 489 KBytes 4.01 Mbits/sec 3 8.74 KBytes
[ 5] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 1 8.74 KBytes
[ 5] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0 8.74 KBytes
[ 5] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 1 8.74 KBytes
[ 5] 4.00-5.00 sec 0.00 Bytes 0.00 bits/sec 0 8.74 KBytes
[ 5] 5.00-6.00 sec 0.00 Bytes 0.00 bits/sec 0 8.74 KBytes
[ 5] 6.00-7.00 sec 0.00 Bytes 0.00 bits/sec 1 8.74 KBytes
[ 5] 7.00-8.00 sec 0.00 Bytes 0.00 bits/sec 0 8.74 KBytes
[ 5] 8.00-9.00 sec 0.00 Bytes 0.00 bits/sec 0 8.74 KBytes
[ 5] 9.00-10.00 sec 0.00 Bytes 0.00 bits/sec 0 8.74 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 489 KBytes 401 Kbits/sec 6 sender
[ 5] 0.00-10.00 sec 0.00 Bytes 0.00 bits/sec receiver
iperf Done.
Otherwise my network seems to work fine.
What am I doing wrong here?
Thanks for any hints.
I found the actual maximum MTU to be around 4K, not 9K, with the 7x0's ax adapter. So in one direction you experience complete packet loss via UDP. Thanks to PMTU discovery this does not affect most TCP traffic, which is why many things keep working despite this error.
I also found enlarging the MTU not worth the hassle - it works only if all clients support setting a larger MTU, and the performance benefits are not all that great, either.
Thanks for this hint. I set the MTU to 4K, but the results are still not great:
root@Host:~# iperf3 -c 10.10.20.1 --bidir
Connecting to host 10.10.20.1, port 5201
[ 5] local 10.10.20.10 port 52932 connected to 10.10.20.1 port 5201
[ 7] local 10.10.20.10 port 52944 connected to 10.10.20.1 port 5201
[ ID][Role] Interval Transfer Bitrate Retr Cwnd
[ 5][TX-C] 0.00-1.00 sec 188 MBytes 1.58 Gbits/sec 0 301 KBytes
[ 7][RX-C] 0.00-1.00 sec 487 MBytes 4.08 Gbits/sec
[ 5][TX-C] 1.00-2.00 sec 82.5 MBytes 692 Mbits/sec 0 293 KBytes
[ 7][RX-C] 1.00-2.00 sec 709 MBytes 5.95 Gbits/sec
[ 5][TX-C] 2.00-3.00 sec 122 MBytes 1.03 Gbits/sec 0 254 KBytes
[ 7][RX-C] 2.00-3.00 sec 602 MBytes 5.05 Gbits/sec
[ 5][TX-C] 3.00-4.00 sec 151 MBytes 1.27 Gbits/sec 0 270 KBytes
[ 7][RX-C] 3.00-4.00 sec 584 MBytes 4.90 Gbits/sec
[ 5][TX-C] 4.00-5.00 sec 145 MBytes 1.22 Gbits/sec 0 285 KBytes
[ 7][RX-C] 4.00-5.00 sec 602 MBytes 5.05 Gbits/sec
[ 5][TX-C] 5.00-6.00 sec 121 MBytes 1.01 Gbits/sec 0 278 KBytes
[ 7][RX-C] 5.00-6.00 sec 658 MBytes 5.52 Gbits/sec
[ 5][TX-C] 6.00-7.00 sec 144 MBytes 1.21 Gbits/sec 0 270 KBytes
[ 7][RX-C] 6.00-7.00 sec 592 MBytes 4.97 Gbits/sec
[ 5][TX-C] 7.00-8.00 sec 150 MBytes 1.25 Gbits/sec 0 316 KBytes
[ 7][RX-C] 7.00-8.00 sec 607 MBytes 5.09 Gbits/sec
[ 5][TX-C] 8.00-9.00 sec 129 MBytes 1.09 Gbits/sec 0 293 KBytes
[ 7][RX-C] 8.00-9.00 sec 616 MBytes 5.17 Gbits/sec
[ 5][TX-C] 9.00-10.00 sec 108 MBytes 908 Mbits/sec 0 320 KBytes
[ 7][RX-C] 9.00-10.00 sec 635 MBytes 5.32 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.00 sec 1.31 GBytes 1.13 Gbits/sec 0 sender
[ 5][TX-C] 0.00-10.00 sec 1.31 GBytes 1.12 Gbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 5.95 GBytes 5.11 Gbits/sec 43 sender
[ 7][RX-C] 0.00-10.00 sec 5.95 GBytes 5.11 Gbits/sec receiver
iperf Done.
I guess something else is still missing in my OPNsense settings...
When I said 4K, I actually meant slightly less than 4096 Bytes, IIRC it was 4082 here.
Results like yours actually shout "too big". You should try pinging with different packet sizes to find your actual limit if you absolutely must. But as I said, it is probably not worth the effort anyways.
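Probing with don't-fragment pings can be sketched roughly like this (assumptions: FreeBSD ping syntax as found on OPNsense, where `-D` sets the DF bit and `-s` sets the ICMP payload size; the server address is the one from this thread):

```shell
# Probe the path MTU with don't-fragment pings (FreeBSD syntax, as on OPNsense).
# ICMP payload = MTU - 28 bytes (20-byte IPv4 header + 8-byte ICMP header).
# On Linux the equivalent invocation is: ping -M do -s <size> <host>
probe_mtu() {
    host=$1
    mtu=$2
    size=$((mtu - 28))
    if ping -c 1 -D -s "$size" "$host" > /dev/null 2>&1; then
        echo "MTU $mtu fits"
    else
        echo "MTU $mtu is too big for this path (or the host is down)"
    fi
}

# Example invocation (server address from this thread):
#   for m in 1500 4078 4082 4096 9000; do probe_mtu 10.10.20.10 "$m"; done
PAYLOAD_9000=$((9000 - 28))
echo "ICMP payload to test MTU 9000: $PAYLOAD_9000 bytes"
```

The largest payload that still gets a reply, plus 28, is your effective path MTU.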
Quote from: meyergru on May 14, 2023, 09:03:48 PM
When I said 4K, I actually meant slightly less than 4096 Bytes, IIRC it was 4082 here.
Results like yours actually shout "too big". You should try pinging with different packet sizes to find your actual limit if you absolutely must. But as I said, it is probably not worth the effort anyways.
Excuse the delay. I have now switched to 4082 and tested again, but the problem is the same: one direction works, the other does not:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 6.18 GBytes 5.30 Gbits/sec 2870 sender
[ 5] 0.00-10.00 sec 6.17 GBytes 5.30 Gbits/sec receiver
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 264 KBytes 216 Kbits/sec 5 sender
[ 5] 0.00-10.00 sec 0.00 Bytes 0.00 bits/sec receiver
When I ping with different packet sizes there is no issue at all.
I have absolutely no idea how to troubleshoot this...
Try over IPv6 perhaps? It automagically does PMTUD
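A minimal sketch of that test; iperf3's `-6` flag forces IPv6, and the address below is a placeholder ULA (not from this thread), so substitute your own:

```shell
# Run the same bidirectional test over IPv6 (-6). IPv6 routers never fragment
# packets, so an oversized MTU surfaces immediately as ICMPv6 "Packet Too Big"
# messages, which PMTUD uses to shrink the effective segment size.
SERVER=fd00:10:20::10             # placeholder address; use your server's IPv6
CMD="iperf3 -6 -c $SERVER --bidir"
echo "$CMD"                       # shown here; run it directly on your hosts
```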
Quote from: bartjsmit on May 19, 2023, 08:42:27 PM
Try over IPv6 perhaps? It automagically does PMTUD
I will try this.
Meanwhile I found the highest working MTU value to be 4078, giving me about 5 Gbit/s in both directions:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 6.00 GBytes 5.15 Gbits/sec 3562 sender
[ 5] 0.00-10.00 sec 6.00 GBytes 5.15 Gbits/sec receiver
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 6.54 GBytes 5.62 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 6.54 GBytes 5.62 Gbits/sec receiver
One direction still shows a lot of retransmissions.
A bit awkward, because the DEC740 should be able to handle MTU 9000 with ease according to this article:
https://wiki.junicast.de/en/junicast/review/opnsense_dec740
As I said, the actual limit is ~4K, depending on different factors like 802.1Q tagging, PPPoE and so on.
I discussed those discrepancies with junicast a while ago, and I suspect that his T-REX tests used TCP instead of UDP, so that the MTU did not matter or at least caused no problems.
Also, iperf with only one thread cannot max out the connection. You will have to use -P 8 to run multiple parallel streams. In that case you will likely get better results even with smaller MTUs. Note that junicast tested with 20 threads.
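The arithmetic behind the multi-stream suggestion, as a rough sketch (the goodput figure is an assumption for 10GbE, not a measurement from this thread):

```shell
# With -P, iperf3 opens several parallel TCP streams; each one then only
# needs line_rate / N of throughput, so a single stream's congestion-window
# or latency limit no longer caps the total.
STREAMS=8
LINE_RATE_MBIT=9400                       # rough 10GbE TCP goodput (assumed)
PER_STREAM=$((LINE_RATE_MBIT / STREAMS))
echo "each of $STREAMS streams needs ~$PER_STREAM Mbit/s"

# The actual test invocation (server address from this thread):
#   iperf3 -c 10.10.20.1 -P 8 --bidir
```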