Hi,
I'm not sure if this is just normal behaviour, but when I saturate my 1Gbit NIC, I get 3-12% packet loss while CPU usage averages around 30% and never goes above 50% (see screenshots).
Is this due to the NIC itself (hardware/driver), or something else? Would changing to a 2.5 or 10Gbit NIC solve the problem, given my internet speed is at most 1Gbit anyway?
Some more info:
- Hardware: Supermicro A2SDi-4C-HLN4F with 4x 1Gbit/s onboard LAN (Intel C3000 SoC) and Intel Atom C3558 4-core 2.20GHz
- Intrusion Detection & IPS disabled
- Services installed: CrowdSec, Telegraf, ACME client
I tried the optimisations mentioned here as well:
https://kb.protectli.com/kb/pppoe-and-opnsense/
Does anyone else have the same experience, or a solution for it?
Unsure if the Protectli guidance fully applies to your HW; the better thread to look at is here:
https://forum.opnsense.org/index.php?topic=24409.0
It is also unclear what the NIC driver is for the A2SDi-4C-HLN4F: i211? Something else?
Look into setting up the Shaper, probably FQ (Fair Queue) Codel.
Set an up/down pipe limit of 900Mbps - maximum throughput of gigabit ethernet is 940Mbps, so this gives it some wiggle room.
... and possibly RSS.
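For context: the Shaper in OPNsense lives under Firewall > Shaper (pipes, queues, rules) and is built on ipfw/dummynet underneath. Purely as an illustration of what the GUI ends up configuring, a raw dummynet sketch could look roughly like the below; the interface name ix0 and the rule numbers are placeholders, and on OPNsense you would do all of this through the GUI rather than by hand. The 900Mbps figure is there because a 1Gbit line rate works out to roughly 940-950Mbps of usable TCP throughput once Ethernet framing overhead is subtracted, so you cap a little below that.

# download pipe, capped below line rate, fq_codel scheduler (sketch only)
ipfw pipe 1 config bw 900Mbit/s
ipfw sched 1 config pipe 1 type fq_codel
ipfw queue 1 config sched 1
ipfw add 100 queue 1 ip from any to any in recv ix0

# upload pipe, same idea in the other direction
ipfw pipe 2 config bw 900Mbit/s
ipfw sched 2 config pipe 2 type fq_codel
ipfw queue 2 config sched 2
ipfw add 101 queue 2 ip from any to any out xmit ix0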
How many of your 4x NICs are in use? Just 2, with a WAN/LAN?
If so, then there is not really much benefit to 2.5Gbps or even 10Gbps interfaces - even on the LAN side.
If your LAN is just 1 subnet, the only traffic that would probably go to/through the firewall is external/internet traffic - so if you have a 1Gbps WAN connection, the bottleneck is the same.
Shaping would help to alleviate situations where the link is saturated, and using a fair-queuing algo would apply just that: a fair, weighted queue across all traffic, to hopefully prevent loss through saturation.
If, for argument's sake, you have WAN/LAN/GUEST (3 interfaces/zones), then 2.5/10Gbps on the inside would have a benefit: if the 1Gbps WAN is saturated by a LAN/GUEST device, traffic between LAN <-> GUEST would be unaffected.
Previously, when I had a Qotom 4-NIC box, I had 1 NIC dedicated to WAN and a round-robin LAGG across the other 3x 1Gbps NICs with tagged VLANs on top (giving the tagged VLANs a shared 3Gbps of bandwidth). It used to work OK.
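If it helps, the rough FreeBSD equivalent of that setup is just a roundrobin lagg with VLANs stacked on top. This is only a sketch to show the idea; the port names, VLAN tag and address are placeholders, and in OPNsense you would create the LAGG and VLANs under Interfaces > Other Types rather than with ifconfig.

# round-robin lagg across three 1G ports (port names are placeholders)
ifconfig lagg0 create
ifconfig lagg0 laggproto roundrobin laggport ix1 laggport ix2 laggport ix3 up

# tagged VLAN riding on top of the lagg, with an example address
ifconfig vlan10 create vlan 10 vlandev lagg0
ifconfig vlan10 inet 192.168.10.1/24 up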
Quote from: newsense on August 07, 2023, 11:37:29 PM
It is also unclear what the NIC driver is for the A2SDi-4C-HLN4F: i211? Something else?
ix0: <Intel(R) X553 (1GbE)> mem 0xddc00000-0xdddfffff,0xdde04000-0xdde07fff at device 0.0 on pci5
ix0: Using 2048 TX descriptors and 2048 RX descriptors
ix0: Using 4 RX queues 4 TX queues
ix0: Using MSI-X interrupts with 5 vectors
ix0: allocated for 4 queues
ix0: allocated for 4 rx queues
ix0: Ethernet address: 3c:ec:ef:00:54:30
ix0: eTrack 0x80000877
Output from TrueNAS, not OPNsense. FreeBSD 13.1-STABLE. Same mainboard as the OP.
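If anyone wants to check their own box, these are standard FreeBSD commands (nothing OPNsense-specific) and should show the same ix attach lines:

# list PCI network devices together with the driver that attached to them
pciconf -lv | grep -B4 network

# or pull the NIC attach lines out of the boot messages
dmesg | grep -E '^(ix|igb)[0-9]'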
Quote from: iMx on August 10, 2023, 03:11:27 PM
Look into setting up the Shaper, probably FQ (Fair Queue) Codel.
Set an up/down pipe limit of 900Mbps - maximum throughput of gigabit ethernet is 940Mbps, so this gives it some wiggle room.
... and possibly RSS.
I'll read up a bit on it; I'm fairly new to OPNsense. I had a Ubiquiti UDM-PRO before on the same WAN connection without packet loss when saturating it, so I was wondering why.
But your explanation makes sense, so I'll try setting up the Shaper.
Concerning the other questions: OPNsense runs on a standalone (dedicated) machine.
NIC1 = WAN
NIC2 = LAN
NIC2 is connected to another machine running Proxmox, where I run multiple VMs and LXCs.
Quote
I had a Ubiquiti UDM-PRO before on the same WAN connection without packet loss when saturating it, so I was wondering why.
Don't they call it 'Smart Queuing' or something? I wouldn't be surprised if it's fq_codel based; many such implementations are.
They likely already do some form of queuing out of the box, but I might be wrong.