Hi, I have a problem that has bugged me for years; no solution found yet:
Two clients with Linux, same update status:
1. SLOW: Libretrend i7 with Coreboot, Realtek NIC
2. FAST: Old Dell Precision M6500 notebook with Intel NIC.
Problem: when downloading e.g. updates, FAST is 30 times faster than SLOW, see attached.
Same mirrors, RJ45 cables changed twice, both attached to the same switch. So no real explanation.
Yesterday I did some iperf between the two clients.
For UDP, it does not matter which is server and which is client:
20260118145326,SLOW,FAST,45678,1,0.0-30.0,3935190,1048952,0.025,0,2677,0.000,0
20260118145446,FAST,SLOW,45678,1,0.0-30.0,3935190,1048950,-nan,0,-1,-0.000,0
But for TCP direction matters:
20260118144505,SLOW,FAST,45678,1,0.0-30.0,3533963328,941434163
20260118144740,FAST,SLOW,45678,1,0.0-30.1,1690173504,449722645
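Converted for readability (in these CSV rows from iperf2's -y C output, field 8 is bits per second):

```shell
# Convert the two iperf2 '-y C' TCP rows above to Mbit/s (field 8 is bits/sec).
awk -F, '{ printf "%s -> %s: %.0f Mbit/s\n", $2, $3, $8/1e6 }' <<'EOF'
20260118144505,SLOW,FAST,45678,1,0.0-30.0,3533963328,941434163
20260118144740,FAST,SLOW,45678,1,0.0-30.1,1690173504,449722645
EOF
# prints:
# SLOW -> FAST: 941 Mbit/s
# FAST -> SLOW: 450 Mbit/s
```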
-------------------------
First thought: the NIC in SLOW is a Realtek. But I have Realtek NICs in other machines with the same Linux, always maxing out the available bandwidth. So I don't think it's simply Realtek.
Why does only TCP make a difference? Is there offloading that doesn't work properly in the SLOW machine? Maybe due to Coreboot?
Any ideas how this difference in TCP speed might be explained?
:-)
Your topic name is a bit intriguing.
When you think about a packet, the final destination on a device is the CPU. If a packet is delivered, it is pegged to a CPU that processes it. Of course, the packet first needs to be processed on the NIC.
What distro are you using?
Which Realtek NIC does it use?
What is the Realtek driver loaded for the NIC?
Did you try to upgrade the BIOS?
What are the temps during high-volume downloads/uploads?
Can you post the NIC statistics (counters)?
Did you disable ASPM?
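Most of these can be collected in one go. A sketch (eth0 is an assumption; commands that aren't installed are skipped silently):

```shell
#!/bin/sh
# One-shot gathering of the answers to questions 1-3, 6 and 7.
# eth0 is an assumption -- set IFACE to your interface name.
IFACE="${IFACE:-eth0}"
echo "== Distro ==";   head -n 2 /etc/os-release 2>/dev/null || true
echo "== NIC ==";      lspci 2>/dev/null | grep -i ethernet || true
echo "== Driver ==";   ethtool -i "$IFACE" 2>/dev/null || true
echo "== Counters =="; ethtool -S "$IFACE" 2>/dev/null || true
echo "== ASPM ==";     lspci -vvv 2>/dev/null | grep -i aspm || true
```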
Regards,
S.
Quote from: Seimus on January 19, 2026, 10:07:51 AM
1. What distro are you using?
2. Which Realtek NIC does it use?
3. What is the Realtek driver loaded for the NIC?
4. Did you try to upgrade the BIOS?
5. What are the temps during high-volume downloads/uploads?
6. Can you post the NIC statistics (counters)?
7. Did you disable ASPM?
1. openSUSE Tumbleweed
2. 01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller (rev 07)
Subsystem: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller
3. Kernel driver in use: r8169
Kernel modules: r8169
4. Yes, but it's Coreboot; the company Libretrend does not provide any newer Coreboot version
5. Unremarkable temps, cooling is appropriate
6. Like this? On SLOW I have:
sudo ethtool -S eth0
NIC statistics:
tx_packets: 3175999
rx_packets: 8314948
tx_errors: 0
rx_errors: 0
rx_missed: 36
align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
unicast: 8313262
broadcast: 1686
multicast: 0
tx_aborted: 0
tx_underrun: 0
7. No. It's an onboard NIC, so is that relevant?
sudo ethtool --show-eee eth0
EEE settings for eth0:
enabled - inactive
0 (us)
Supported EEE link modes: 100baseT/Full
1000baseT/Full
Advertised EEE link modes: 100baseT/Full
1000baseT/Full
Link partner advertised EEE link modes: Not reported
To make things even more complicated:
- I downloaded a large file on SLOW with FF 147.0 and got an amazing 100 MB/s...
What's going on here? Are only the weekly Tumbleweed updates slow?! But it's not the server, see OP.
I would definitely advise disabling ASPM, either via BIOS or in Linux.
ASPM enabled can cause a lot of performance-related problems, and Realtek is not excluded from this.
The NIC stats look good; there are no errors or dirty packets to be seen.
Regarding your testing, you have some interesting results here:
1. iperf > FAST to SLOW = throughput limited
2. Linux package updates = throughput slow
3. Browser download = fast
For
1. iperf > FAST to SLOW
Can you retest this, but set at least -P 2 to trigger multicore spread in iperf? And post the results.
Try scenarios where SLOW is the client as well as the server, and when it is the client, try with and without the flag -R.
2. Linux package updates
This one is curious, because you could be rate limited; try refreshing your mirrors.
3. Browser download
No clue about this one; I would assume results similar to iperf, but maybe this is due to the browser using multiple cores to process the packets.
As well, what kind of congestion algorithm are you using? Maybe you can try switching it to BBR.
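For point 1, here is the requested matrix spelled out (SLOW/FAST are placeholder hostnames; the commands are printed rather than executed so they can be reviewed first — and note, as an assumption, that -R/--reverse needs a reasonably recent iperf2, older builds use -r/-d instead):

```shell
# Print the iperf client invocations for both directions, with and without -R.
for flags in "-P 2" "-P 2 -R"; do
  echo "iperf -c SLOW -p 45678 -t 30 $flags   # FAST as client"
  echo "iperf -c FAST -p 45678 -t 30 $flags   # SLOW as client"
done
```

Run the matching `iperf -s -p 45678` on the opposite box for each run.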
Regards,
S.
Thanks for reading.
RE 2: Mirrors are "hardcoded" and identical on FAST and SLOW. The download of the weekly packages on FAST and SLOW runs simultaneously in the attachment of the OP, so how/why should only one client be rate limited?
Congestion algorithm? Hmmm... ;-)
RE: BBR
I had in Tumbleweed:
sudo sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic
Then I followed this:
https://www.techrepublic.com/article/how-to-enable-tcp-bbr-to-improve-network-speed-on-linux/
and did
sudo nano /etc/sysctl.conf
-> and add the following two lines:
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
Then, after
sudo sysctl -p
I have:
sudo sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = bbr
Quote from: chemlud on January 20, 2026, 02:22:33 PM
RE 2: Mirrors are "hardcoded" and identical on FAST and SLOW. The download of the weekly packages on FAST and SLOW runs simultaneously in the attachment of the OP, so how/why should only one client be rate limited?
True, if that is the case. I assumed you have dynamic mirrors and the downloads run at different intervals. If you were rate limited, it would be by public IP; but if both machines are NATed behind one, it would affect them both. BTW, based on this: when you run the update only on SLOW, the result is the same, correct?
Quote from: chemlud on January 20, 2026, 02:22:33 PM
Congestion algorithm? Hmmm... ;-)
I could talk about this the whole day long, but to simplify it: it's the often-omitted sibling of TCP's window scaling.
Receiver > advertises the receive window (flow control)
Sender > maintains the congestion window (congestion control)
Congestion control tells the sender to slow down and how to recover from a congestion event (this is not 100% precise, but enough to see that both are important).
Currently enabled congestion algo (the default should be at least CUBIC):
sysctl net.ipv4.tcp_congestion_control
All available congestion algos:
sysctl net.ipv4.tcp_available_congestion_control
For BBR you usually need to enable some extra flags in the system, but nothing too hard...
Regards,
S.
Quote from: Seimus on January 20, 2026, 03:05:44 PM
... BTW, based on this: when you run the update only on SLOW, the result is the same, correct?
Yes, SLOW has been slow for years; all other Tumbleweeds have normal download speed.
RE ASPM
Apparently regulated in the BIOS (not reachable with Coreboot) or via a kernel boot parameter.
I have to check when I'm not remote; I don't want to lose the machine by tinkering with the kernel boot line.
RE ASPM
I did
sudo lspci -vvv
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller (rev 07)
Subsystem: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 18
Region 0: I/O ports at 2000 [size=256]
Region 2: Memory at d1300000 (64-bit, non-prefetchable) [size=4K]
Region 4: Memory at d1200000 (64-bit, prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
Address: 0000000000000000 Data: 0000
Capabilities: [70] Express (v2) Endpoint, IntMsgNum 1
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 10W TEE-IO-
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 4096 bytes
DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 <64us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp-
LnkCtl: ASPM Disabled; RCB 64 bytes, LnkDisable- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- FltModeDis-
LnkSta: Speed 2.5GT/s, Width x1
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS- TPHComp- ExtTPHComp-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
AtomicOpsCtl: ReqEn-
IDOReq- IDOCompl- LTR- EmergencyPowerReductionReq-
10BitTagReq- OBFF Disabled, EETLPPrefixBlk-
LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: unsupported, FltMode-
Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
Vector table: BAR=4 offset=00000000
PBA: BAR=4 offset=00000800
Capabilities: [d0] Vital Product Data
pcilib: sysfs_read_vpd: read failed: No such device
Not readable
Capabilities: [100 v1] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP-
ECRC- UnsupReq- ACSViol- UncorrIntErr- BlockedTLP- AtomicOpBlocked- TLPBlockedErr-
PoisonTLPBlocked- DMWrReqBlocked- IDECheck- MisIDETLP- PCRC_CHECK- TLPXlatBlocked-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP-
ECRC- UnsupReq- ACSViol- UncorrIntErr- BlockedTLP- AtomicOpBlocked- TLPBlockedErr-
PoisonTLPBlocked- DMWrReqBlocked- IDECheck- MisIDETLP- PCRC_CHECK- TLPXlatBlocked-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+
ECRC- UnsupReq- ACSViol- UncorrIntErr- BlockedTLP- AtomicOpBlocked- TLPBlockedErr-
PoisonTLPBlocked- DMWrReqBlocked- IDECheck- MisIDETLP- PCRC_CHECK- TLPXlatBlocked-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr- CorrIntErr- HeaderOF-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+ CorrIntErr- HeaderOF-
AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [140 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
Status: NegoPending- InProgress-
Capabilities: [160 v1] Device Serial Number 01-00-00-00-68-4c-e0-00
Kernel driver in use: r8169
Kernel modules: r8169
so apparently
...
LnkCtl: ASPM Disabled
...
ASPM should not be in the way, right?
Yes, that means ASPM is disabled for the NIC.
Regards,
S.
Quote from: chemlud on January 19, 2026, 09:10:41 AM
Hi, I have a problem that has bugged me for years; no solution found yet:
Two clients with Linux, same update status:
1. SLOW: Libretrend i7 with Coreboot, Realtek NIC
2. FAST: Old Dell Precision M6500 notebook with Intel NIC.
Problem: when downloading e.g. updates, FAST is 30 times faster than SLOW, see attached.
Same mirrors, RJ45 cables changed twice, both attached to the same switch. So no real explanation.
Yesterday I did some iperf between the two clients.
For UDP, it does not matter which is server and which is client:
20260118145326,SLOW,FAST,45678,1,0.0-30.0,3935190,1048952,0.025,0,2677,0.000,0
20260118145446,FAST,SLOW,45678,1,0.0-30.0,3935190,1048950,-nan,0,-1,-0.000,0
But for TCP direction matters:
20260118144505,SLOW,FAST,45678,1,0.0-30.0,3533963328,941434163
20260118144740,FAST,SLOW,45678,1,0.0-30.1,1690173504,449722645
-------------------------
First thought: the NIC in SLOW is a Realtek. But I have Realtek NICs in other machines with the same Linux, always maxing out the available bandwidth. So I don't think it's simply Realtek.
Why does only TCP make a difference? Is there offloading that doesn't work properly in the SLOW machine? Maybe due to Coreboot?
Any ideas how this difference in TCP speed might be explained?
:-)
Based on your iperf results and the specific hardware combination (Realtek + Coreboot), this is a classic case of TCP Offloading or Interrupt Throttling issues common with Realtek drivers on Linux.
Here is an analysis and potential solutions:
1. The "TCP vs. UDP" Clue: Hardware Offloading
The fact that UDP works fine but TCP is slow strongly points to TCP Segmentation Offload (TSO) or Generic Segmentation Offload (GSO). Realtek chips often have buggy implementations of these features in their firmware/drivers.
When these are enabled, the NIC tries to process TCP packets itself to save CPU. If it fails, it leads to packet retransmissions and massive slowdowns.
The Fix: Disable offloading on the SLOW machine:
sudo ethtool -K <interface_name> tso off gso off gro off
Try running your iperf test again after this.
2. Driver Conflict: r8169 (Kernel) vs. r8168 (Vendor)
Most Linux distros use the open-source r8169 driver by default. While improved, it still struggles with certain Realtek revisions, especially regarding Receive Side Scaling (RSS).
The Fix: Install the official Realtek driver (often found as r8168-dkms in your package manager):
Ubuntu/Debian: sudo apt install r8168-dkms
Arch: sudo pacman -S r8168
Note: You may need to blacklist the r8169 driver in /etc/modprobe.d/.
3. Coreboot & ASPM (Power Management)
Coreboot sometimes doesn't initialize PCIe Active State Power Management (ASPM) correctly. If the Realtek chip enters a low-power state (L1) during the tiny gaps between TCP acknowledgments, the latency spikes and throughput collapses.
The Fix: Disable ASPM via the kernel command line:
Edit /etc/default/grub.
Add pcie_aspm=off to GRUB_CMDLINE_LINUX_DEFAULT.
Update grub (sudo update-grub) and reboot.
4. Energy Efficient Ethernet (EEE)
Realtek NICs often have "Green Ethernet" (EEE) enabled. This can cause synchronization issues with certain switches, leading to the 30x speed difference you see.
The Fix: Disable EEE:
sudo ethtool --set-eee <interface_name> eee off
Why the Dell (Intel NIC) is faster:
Intel NICs (like the one in your M6500) have superior hardware-level buffers and much more mature Linux drivers. They handle TCP window scaling and offloading far more gracefully than Realtek chips, which rely heavily on driver-side workarounds.
Recommended sequence for troubleshooting:
Test 1: Disable TSO/GSO with ethtool (Immediate effect, no reboot).
Test 2: Disable EEE.
Test 3: Switch from r8169 to r8168 driver.
Test 4: Disable pcie_aspm in GRUB.
Hope this can help you.
@klevinsourd, that was my impression, explaining the thread title ;-)
Will try the suggestions and report back :-)
RE 1: Checked the status for offloading:
sudo ethtool --show-offload eth0
[sudo] password for root:
Features for eth0:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: off
tx-scatter-gather: off
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: off
tx-tcp-accecn-segmentation: off [fixed]
generic-segmentation-offload: off [requested on]
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-tunnel-remcsum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
tx-gso-list: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
rx-gro-list: off
macsec-hw-offload: off [fixed]
rx-udp-gro-forwarding: off
hsr-tag-ins-offload: off [fixed]
hsr-tag-rm-offload: off [fixed]
hsr-fwd-offload: off [fixed]
hsr-dup-offload: off [fixed]
So GSO and TSO are apparently off at the moment
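If re-enabling them for an A/B test is ever needed, ethtool can flip these at runtime; a dry-run sketch (eth0 is an assumption, and note that TSO/GSO only take effect while scatter-gather is on, which the output above shows as off):

```shell
# Dry run by default; set APPLY=1 to actually execute (eth0 is an assumption).
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }
run sudo ethtool -K eth0 sg on tso on gso on     # enable segmentation offloads
run sudo ethtool -K eth0 sg off tso off gso off  # ...and revert after testing
```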
It will be interesting if ASPM with coreboot is the culprit, as there is a very similar issue affecting a particular Protectli device: https://protectli.com/news/vp2440-coreboot-issue/
So that may not be limited to just Realtek NICs. It could be an issue with coreboot handling of ASPM.
(EDIT: I saw that @chemlud's PCI link has ASPM disabled already, so am not sure if this still applies. The Protectli work-around is to disable ASPM altogether at the OS level until a coreboot update is available.)
@OPNenthu Thanks for reading. Yes, ASPM and offloading are apparently off the list at this point.
EEE (enabled, but apparently "inactive", see above) and the "wrong" driver (r8169, which btw works perfectly on another Tumbleweed with an old Atom CPU, legacy BIOS and Realtek 8168 hardware...) remain on the list.
Not much left, apparently...
Understood, although there might be a reason why Protectli found that ASPM must be disabled globally rather than disabling it on a per-device basis with PCI sysctls. Usually you don't use the nuclear option unless there's a reason, but who knows.
To be honest, you usually want to force-disable ASPM globally at the OS level, because per-device, per-link disabling may not always work as it should... I usually disable ASPM in the BIOS on everything; if that's not available, or I suspect it's not enough, I force-disable it globally in Linux.
https://wiki.archlinux.org/title/Power_management#Active_State_Power_Management
Regards,
S.
OK, so best bet is:
pcie_aspm=off
added to kernel boot line and reboot.
Will try... :-)
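The by-hand variant of that step would look roughly like this (a sketch that edits a stub copy first; on a real system, start from /etc/default/grub and the grub2 paths are assumptions for openSUSE):

```shell
# Demonstrate on a stub copy (on a real system, start from /etc/default/grub).
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.test
# Prepend pcie_aspm=off inside the default kernel cmdline.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="\)/\1pcie_aspm=off /' /tmp/grub.test
grep GRUB_CMDLINE_LINUX_DEFAULT /tmp/grub.test
# prints: GRUB_CMDLINE_LINUX_DEFAULT="pcie_aspm=off quiet splash"
# then, after copying the result back:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
```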
I have here after reboot:
sudo dmesg | grep ASPM
[ 0.018934] [ T0] PCIe ASPM is disabled
[ 0.121764] [ T1] acpi PNP0A08:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
iperf -p 45678 -c FAST -t 30 -y C -P 1
20260121133626,SLOW,FAST,45678,1,0.0-30.0,3478388800,926208295
Other direction:
20260121133925,FAST,SLOW,45678,1,0.0-30.1,1693319232,450764744
So nothing really changed.
Does openSUSE perhaps also require this step?
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
This is needed for RHEL-based distros, but not Debian. I'm unsure about openSUSE. According to ChatGPT (attached) it is needed.
There's an interesting note there that pcie_aspm=off only tells Linux not to change ASPM from whatever the firmware has set, so if coreboot has it enabled (but hides the option from the user), then Linux will also use it.* Maybe the pcie_aspm.policy=performance option is better.
Apologies @chemlud. I don't want to send you down a rabbit hole that might not be fruitful, but I'd rather share this extra info for you to decide.
* of course, ChatGPT is a serial liar, so... there's that!
dmesg reports ASPM disabled. I edited the kernel boot line in YaST, which does the grub mkconfig magic automatically ;-)
With pcie_aspm.policy=performance in the kernel boot line I see:
sudo dmesg | grep ASPM
[sudo] password for root:
[ 0.121549] [ T1] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
and the throughput is:
20260121142126,SLOW,FAST,45678,1,0.0-30.0,3487694912,929146430
20260121142412,FAST,SLOW,45678,1,0.0-30.0,1691353152,450561206
...so no change here.
With EEE disabled on the kernel command line I get:
sudo dmesg | grep EEE
[ 0.000000] [ T0] Command line: BOOT_IMAGE=/boot/vmlinuz-6.18.5-1-default root=UUIDxxxxxxx splash=silent net.ifnames=0 kvm.enable_virt_at_load=0 ipv6.disable=1 quiet igb.EEE=0 mitigations=auto
[ 0.018795] [ T0] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.18.5-1-default root=UUIDxxxxxxxx splash=silent net.ifnames=0 kvm.enable_virt_at_load=0 ipv6.disable=1 quiet igb.EEE=0 mitigations=auto
and still
20260121143745,FAST,SLOW,45678,1,0.0-30.1,1692139584,450325523
so no progress here.
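Though now I wonder whether that boot parameter even applies here: as far as I can tell (an assumption worth double-checking), igb.EEE=0 is a parameter of Intel's igb driver, so it would not affect the r8169-driven Realtek NIC in SLOW. For r8169, EEE is normally toggled at runtime with ethtool — a dry-run sketch:

```shell
# Print the commands instead of executing them (eth0 is an assumption).
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }
run sudo ethtool --set-eee eth0 eee off   # disable Energy Efficient Ethernet
run sudo ethtool --show-eee eth0          # verify it now reports disabled
```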
Just for fun, can you spin up iperf and test from the same device?
Basically, start it as a server in one CLI window and as a client in another.
Regards,
S.
Then I see on SLOW:
20260121150015,127.0.0.1,48856,127.0.0.1,45678,1,0.0-30.0,117651669056,31373002456
Tells me what? ;-)
Ladies and Gentlemen,
apparently we are running out of options here. I don't want to mess with the driver as long as the machine is remote, just in case I lose network connectivity.
EEE, ASPM, and offloading are apparently not the culprit; what else might be an option?
Quote from: chemlud on January 21, 2026, 03:01:11 PM
Tells me what? ;-)
Tells you whether there is something on the device itself, beyond the NIC, that could cause the behaviour.
Yeah, the next step would be to mess with the driver. Best do that locally, indeed.
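To put a number on the loopback result: field 9 of that row is bits per second, so it converts to roughly 31 Gbit/s — the CPU and TCP stack on SLOW are clearly not the limit. A quick check on the row above:

```shell
# Field 9 of this '-y C' row is bits/second; convert to Gbit/s.
echo '20260121150015,127.0.0.1,48856,127.0.0.1,45678,1,0.0-30.0,117651669056,31373002456' \
  | awk -F, '{ printf "loopback: %.1f Gbit/s\n", $9/1e9 }'
# prints: loopback: 31.4 Gbit/s
```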
Regards,
S.
Weekly Tumbleweed updates: on starting the update download, for a very short moment I see a download at 5.6 MiB/s, which immediately collapses to 300-400 KiB/s and stays at that creepy bandwidth.
What is going on here? FAST gets 10.6 MiB/s from the same servers at the same time.
PS: Just for the record: Confirmed again that both machines use identical, hard-coded update servers.
What can downgrade HTTPS for a specific client? Fingerprinting, ID of install, whatever?
Quote from: chemlud on January 23, 2026, 01:36:40 PM
Tumbleweed
Is that the only thing that shows this issue?!
AFAIK Tumbleweed is a rolling distro based on not-yet-stable code, like Debian Testing for example.
Could that be the issue?
What happens when you boot a random Live environment of any other distro ?
Quote
What can downgrade HTTPS for a specific client? Fingerprinting, ID of install, whatever?
I would say :
- Traffic Shaping Rules.
- IDS/IPS software anywhere in your network.
Tumbleweed is quite stable. Most of the time you just have to know when it's better NOT to update, but that's not that hard.
The devices with Coreboot are "in production", so not easy to swap the OS. And as the problem is with TW updates, not with the browser (see above): how would I test then?
So largely self-inflicted pain, one might say. It bugs me not to know what is going on here. OPNsense has no traffic shaper enabled; what would IPS/IDS do to the bandwidth of one client but not to another on the same switch?
Quote from: chemlud on January 23, 2026, 05:58:19 PM
The devices with Coreboot are "in production", so not easy to swap the OS.
I am not talking about swapping anything: just boot a Live ISO from a USB stick!!
Quote
And as the problem is with TW updates, not with the browser (see above): how to test then?
Download stuff manually via the browser or wget on the Terminal ?
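For example with curl (a sketch; the URL is a placeholder that the example keeps local so it runs anywhere — swap in a large file on one of the real mirrors):

```shell
# Report raw download throughput outside the package manager.
URL="${URL:-file:///etc/hostname}"   # placeholder; point at a real mirror file
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -w 'speed: %{speed_download} B/s\n' "$URL"
else
  echo "curl not installed"
fi
```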
Quote
So largely self-inflicted pain, one might say. It bugs me not to know what is going on here.
I can relate to that! :)
Quote
OPNsense has no traffic shaper enabled; what would IPS/IDS do to the bandwidth of one client but not to another on the same switch?
It would be based on IP address, but if you know for sure you have not configured anything in OPNsense or on the client/server that is having these issues, then there is not much to do there, I guess...