Slow upload vs previous router

Started by chamley, May 05, 2023, 10:55:54 PM

October 29, 2023, 11:51:19 PM #15 Last Edit: October 29, 2023, 11:53:53 PM by chamley
Quote from: meyergru on October 29, 2023, 08:18:25 PM
I have no experience with IPoE, but with PPPoE as an encapsulating protocol, there is an overhead for the data packets which often forces a smaller MTU on the WAN interface. If that is not considered, connections can be much slower because of retries and/or refragmentation.

E.g.: you could try lowering the MTU of the WAN interface to something smaller, like 1400 bytes.


I wondered about MTU as well. 1500 appears to be the correct value for my connection. This is what the Synology router uses, and testing with different ping packet sizes indicates that 1500 is ok. Thank you for the suggestion!

I'm worried that searches for "OPNsense slow upload" or "pfsense slow upload" turn up lots of forum/Reddit posts with problems similar to mine, and the conclusion is often that the cause is something that changed in FreeBSD :(

Quote from: chamley on October 29, 2023, 11:51:19 PM
Quote from: meyergru on October 29, 2023, 08:18:25 PM
I have no experience with IPoE, but with PPPoE as an encapsulating protocol, there is an overhead for the data packets which often forces a smaller MTU on the WAN interface. If that is not considered, connections can be much slower because of retries and/or refragmentation.

E.g.: you could try lowering the MTU of the WAN interface to something smaller, like 1400 bytes.


I wondered about MTU as well. 1500 appears to be the correct value for my connection. This is what the Synology router uses, and testing with different ping packet sizes indicates that 1500 is ok. Thank you for the suggestion!

I'm worried that searches for "OPNsense slow upload" or "pfsense slow upload" turn up lots of forum/Reddit posts with problems similar to mine, and the conclusion is often that the cause is something that changed in FreeBSD :(

When you tested MTU with ping, did you set the DF flag?  This will tell you the highest MTU size that passes through your provider without fragmentation.  Without the DF bit set you can ping with any packet size, because any L3 hop can simply fragment the packet down to the MTU configured on its egress interface.
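For reference, a DF-bit path-MTU check looks roughly like this (the target host is a placeholder; the arithmetic assumes IPv4 with no extra encapsulation):

```shell
# FreeBSD/OPNsense: "ping -D" sets the DF (don't fragment) bit, "-s" the
# ICMP payload size.  On Linux the equivalent is "ping -M do -s <size>".
#   ping -D -s 1472 <some-public-host>   # passes on a clean 1500-byte path
#   ping -D -s 1473 <some-public-host>   # fails if the path MTU is 1500
# The MTU implied by the largest payload that gets through:
PAYLOAD=1472
echo "MTU = $((PAYLOAD + 20 + 8))"   # 20-byte IPv4 header + 8-byte ICMP header
```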

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Quote from: chamley on October 29, 2023, 05:50:04 PM
Interfaces are set to 1000Base-T full-duplex in the Interface configuration menu in OPNsense.  The dashboard reports this, and all switch ports are configured the same way.

iPerf to the NUC from the PC gives 0.96Gbps in both directions (10 streams).

Changed setup to:
PC -> Netgear switch -> NUC -> Other Netgear switch -> Laptop
(OPNsense is still doing NAT and firewall)
iPerf between PC and laptop gives 0.95Gbps in both directions (10 streams).

Back to the normal setup.
iPerf from the PC to a public server gives 0.2Gbps upload (10 streams).

Interesting idea of a switch between the router and modem.  I will try to find an unmanaged switch and test that.
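As a sanity check on those numbers: ~0.95 Gbps is essentially the practical ceiling for TCP over gigabit Ethernet once per-packet overhead is counted.  A back-of-the-envelope sketch, assuming a 1500-byte MTU and standard header sizes (the iperf3 flags and the placeholder server address are illustrative, not from the posts above):

```shell
# A two-way test like the one above (server on the NUC, 10 parallel streams):
#   iperf3 -s                        # on the server
#   iperf3 -c <server-ip> -P 10     # client -> server
#   iperf3 -c <server-ip> -P 10 -R  # reversed direction
#
# Why ~0.95 Gbps is the ceiling on gigabit Ethernet (1500-byte MTU):
#   TCP payload   = 1500 - 20 (IPv4) - 20 (TCP)                         = 1460 bytes
#   bytes on wire = 1500 + 18 (Ethernet hdr + FCS) + 20 (preamble + IFG) = 1538 bytes
awk 'BEGIN { printf "max TCP goodput: %.3f Gbps\n", 1460 / 1538 }'
```

So 0.95-0.96 Gbps through the NUC means LAN-side forwarding is already at line rate.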

It does seem to imply that the problem is between the NUC and your modem.  Another test you can try is to put the old router back in place with the NUC behind it and test that.  Additionally, hang a laptop off of the old router as well so you're testing against a local network.

I'm not sure whether the type of switch matters, or whether a switch will fix your issue at all.  It's just something I came across while researching the i225/i226 NICs.  From what I've been able to gather, the issues are primarily in the embedded versions; I don't think I saw anyone posting about problems with the PCIe versions.  I'm currently using an official Intel i225 PCIe NIC, and before that an off-brand one, and both have been rock solid.

Quote from: Seimus on October 30, 2023, 10:24:38 AM

When you tested MTU with ping, did you set the DF flag?  This will tell you the highest MTU size that passes through your provider without fragmentation.  Without the DF bit set you can ping with any packet size, because any L3 hop can simply fragment the packet down to the MTU configured on its egress interface.

Regards,
S.

Yes I did  :)

Quote from: CJ on October 30, 2023, 01:36:39 PM
It does seem to imply that the problem is between the NUC and your modem.  Another test you can try is to put the old router back in place with the NUC behind it and test that.  Additionally, hang a laptop off of the old router as well so you're testing against a local network.

I'm not sure whether the type of switch matters, or whether a switch will fix your issue at all.  It's just something I came across while researching the i225/i226 NICs.  From what I've been able to gather, the issues are primarily in the embedded versions; I don't think I saw anyone posting about problems with the PCIe versions.  I'm currently using an official Intel i225 PCIe NIC, and before that an off-brand one, and both have been rock solid.

Thanks CJ. I'll try doing some more tests and post the results.
I agree, it does seem like it's between the NUC and modem. When the connection is being established, does the router set any parameters? Could it be somehow misconfiguring the connection?


It negotiates speed and duplex if it's set to auto. There could be a slight possibility of a speed/duplex mismatch on the modem side. On OPN you can clearly see what is set; can you log in to the modem as well to check it?

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Quote from: chamley on October 30, 2023, 09:46:28 PM
Thanks CJ. I'll try doing some more tests and post the results.
I agree, it does seem like it's between the NUC and modem. When the connection is being established, does the router set any parameters? Could it be somehow misconfiguring the connection?

No. My guess is that the problem is with your i226 NICs.  No idea why you would see slow upload speeds, though.  I would have expected problems in both directions.

November 13, 2023, 06:52:29 PM #21 Last Edit: November 13, 2023, 07:03:02 PM by chamley
Quote from: CJ on October 31, 2023, 01:26:35 PM
Quote from: chamley on October 30, 2023, 09:46:28 PM
Thanks CJ. I'll try doing some more tests and post the results.
I agree, it does seem like it's between the NUC and modem. When the connection is being established, does the router set any parameters? Could it be somehow misconfiguring the connection?

No. My guess is that the problem is with your i226 NICs.  No idea why you would see slow upload speeds, though.  I would have expected problems in both directions.

Ok, I've now tested with an unmanaged gigabit switch between the modem and OPNsense.  Speedtest results are the same, and OPNsense still has high CPU usage during the upload test.  As before, top shows that the if_io_tqg threads are consuming the CPU.

Quote from: Seimus on October 31, 2023, 10:19:59 AM
It negotiates speed and duplex if it's set to auto. There could be a slight possibility of a speed/duplex mismatch on the modem side. On OPN you can clearly see what is set; can you log in to the modem as well to check it?

Regards,
S.

OPNsense has both WAN and LAN set to 1000baseT full-duplex.  I can't login to the modem.

November 14, 2023, 05:51:54 PM #22 Last Edit: November 14, 2023, 05:54:11 PM by CJ
Quote from: chamley on November 13, 2023, 06:52:29 PM
Ok, I've now tested with an unmanaged gigabit switch between the modem and OPNsense.  Speedtest results are the same, and OPNsense still has high CPU usage during the upload test.  As before, top shows that the if_io_tqg threads are consuming the CPU.

I missed that bit earlier.  It looks like it's not an i226 issue.  This thread may have some useful information, but it appears to be an issue that has affected a variety of different configurations.

https://forum.opnsense.org/index.php?topic=18754.30

Have you tried testing with just the default install config?  And are you still using 23.1 or have you switched to 23.7?  Might be worth a reinstall and testing with no changes.

It may also be worth trying this tip from the above-linked thread.

https://forum.opnsense.org/index.php?topic=18754.msg159739#msg159739

November 14, 2023, 08:41:02 PM #23 Last Edit: November 14, 2023, 08:45:13 PM by chamley
Quote from: CJ on November 14, 2023, 05:51:54 PM
I missed that bit earlier.  It looks like it's not an i226 issue.  This thread may have some useful information, but it appears to be an issue that has affected a variety of different configurations.

https://forum.opnsense.org/index.php?topic=18754.30

Have you tried testing with just the default install config?  And are you still using 23.1 or have you switched to 23.7?  Might be worth a reinstall and testing with no changes.

It may also be worth trying this tip from the above-linked thread.

https://forum.opnsense.org/index.php?topic=18754.msg159739#msg159739

Thanks, that looks like a similar problem.  There are posts elsewhere describing it too.

I'm still using 23.1 but I will test 23.7 and let you know the result.  I've tested pfSense CE 2.7.0 and that has the exact same problem.  I was hoping that FreeBSD 14 might fix this from 13.1, but I guess not.

I have been using the default configuration of 23.1.  Then I've tried a couple of tunables:

net.isr.dispatch=deferred
net.isr.maxthreads=-1
net.isr.bindthreads=1  # This combination appeared to give a small improvement

hw.ibrs_disable=1  # Also appeared to give a small improvement

Enabled powerd and set to Maximum  # No clear improvement

That gets me to 200 Mbps upload.  Better, but still a way to go. 
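For anyone following along, these tunables can be set under System > Settings > Tunables in OPNsense.  A rough shell equivalent, with the caveat that net.isr.maxthreads and net.isr.bindthreads are boot-time loader tunables on FreeBSD, so setting them needs a reboot (this is a sketch, not a recommendation):

```shell
# Runtime-settable:
sysctl net.isr.dispatch=deferred
sysctl hw.ibrs_disable=1
# Boot-time only -- add to /boot/loader.conf.local and reboot:
#   net.isr.maxthreads=-1
#   net.isr.bindthreads=1
# Verify what is actually in effect:
sysctl net.isr.dispatch net.isr.maxthreads net.isr.bindthreads hw.ibrs_disable
```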

What are your clock speeds showing?  Is the CPU boosting up to 4.6?  Seems odd that you'd have issues with that fast a chip.