Hello,
I upgraded my WAN connection to fiber, so now if I connect directly to the modem (in bridge mode) I get ~800 Mbps, but through OPNsense it's only ~250 Mbps. My OPNsense installation runs on a PC Engines APU2 board with 3x Intel i210AT NICs.
One strange observation: in the Interfaces widget, nothing is shown for WAN, unlike LAN and OPT1, where `1000baseT <full-duplex>` is displayed (screenshot attached).
I'd very much appreciate any hints, ideas, etc.
Thank you.
From my observation, an interface that shows up like that tends to be a virtual one.
Is that WAN interface assigned to a physical interface via Interfaces > Assignments?
What is the configuration on that WAN interface?
Also, can you check how the interface negotiated, and its details, under Interfaces > Overview?
Regarding throughput: I don't think you will get 800 Mbit through an APU2 without tuning, and even tuned you may not reach it. There are several threads about the APU and throughput.
BTW, are you running a SHAPER?
Is there any other configuration except the very basic one (like additional rules, NAT, etc.)?
Regards,
S.
Thanks a lot for your fast response!
> From my observation, an interface that shows up like that tends to be a virtual one.
> Is that WAN interface assigned to a physical interface via Interfaces > Assignments?
Yes, it is assigned to pppoe0 (igb2)
> What is the configuration on that WAN interface?
> Also, can you check how the interface negotiated, and its details, under Interfaces > Overview?
It reads as follows:
Flags: 88d1
Capabilities:
Options:
MAC Address: 00:00:00:00:00:00 - XEROX CORPORATION
Supported Media:
Physical:
Device: pppoe0
MTU: 1492
Routes: default, 1.1.1.1, 8.8.4.4, 8.8.8.8, 109.226.1.20
Status: up
Identifier: wan
Description: WAN
Enabled: true
Link Type: pppoe
IPv4 Addresses: 109.226.17.116/32
VLAN Tag:
Gateways: 109.226.1.20
Driver: ng0
Index: 9
Promiscuous Listeners: 0
Send Queue Length: 0
Send Queue Max Length: 50
Send Queue Drops: 0
Type: Proprietary virtual interface
Address Length: 0
Header Length: 0
Link State: 0
vhid: 0
Data Length: 152
Metric: 0
Line Rate: 64.00 Kbit/s
Packets Received: 575236
Input Errors: 0
Packets Transmitted: 176952
Output Errors: 0
Collisions: 0
Bytes Received: 698720893
Bytes Transmitted: 33788670
Multicasts Received: 0
Multicasts Transmitted: 0
Input Queue Drops: 0
Packets for Unknown Protocol: 0
Hardware Offload Capabilities: 0x0
Uptime at Attach or Statistics Reset: 27858
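(As a cross-check, assuming shell access to the box, roughly the same details can be read with plain ifconfig; pppoe0 is the device name from the overview above:

ifconfig pppoe0

It should report the same flags, the 1492 MTU, and the negotiated IPv4 address for the PPPoE session.)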
> Regarding throughput: I don't think you will get 800 Mbit through an APU2 without tuning,
> and even tuned you may not reach it. There are several threads about the APU and throughput.
Do you think I should upgrade my platform? If so, what would you recommend?
> BTW, are you running a SHAPER?
No
> Is there any other configuration except the very basic one (like additional rules, NAT, etc.)?
Not much; I have a VLAN on igb0 as OPT1 (blocked by NAT), and some simple port forwards.
But the point is that the very same setup worked fine for years with my previous, slower (~100 Mbps) WAN connection.
Thank you very much!
Sincerely,
Lev.
From the reading I've done of many older threads on this topic, it seems that with OPNsense running on a PC Engines APU2E4, the maximum achievable throughput on a single NIC is ~600 Mbit/s under ideal conditions.
I have an APU2E4 (which has the Intel i210AT NICs, i.e. "the good ones") and have been doing some iperf3 testing recently to see "where things are" with maximum achievable performance.
In the following hardware setup:
MacBook Pro 16" M1 laptop --> Belkin 1GB USB-Ethernet converter --> new 7' long "Cable Matters™" Cat6e patch cable --> APU2E4 igb2 port, configured as a simple no-vlan static IP subnet.
And this OPNsense version:
root@myhost# uname -a
FreeBSD myhost.mydomain.com 13.2-RELEASE-p10 FreeBSD 13.2-RELEASE-p10 stable/24.1-n254984-f7b006edfa8 SMP amd64
And the following iPerf3 settings:
server (running on the APU2E4 OPNsense box):
iperf3 -s
client (running on the MacBook Pro 16" M1 Pro):
iperf3 -c 192.168.7.1 -w 1MB -P 8
the results look like this:
[SUM] 0.00-10.00 sec 724 MBytes 607 Mbits/sec sender
[SUM] 0.00-10.01 sec 710 MBytes 595 Mbits/sec receiver
So far, no settings of any kind within the OPNsense GUI (Interfaces --> Settings, or System --> Settings --> Tunables) seem to make any impact at all. Neither does enabling/disabling the firewall, or enabling/disabling the two small, low-bandwidth WireGuard tunnels I have configured.
Granted, it may be that this limit comes from running iperf3 (in server mode) ON the OPNsense box itself. At present I only have one laptop to test with, no other devices. Perhaps if I get a second laptop + Ethernet adapter and run the iperf3 test between the two laptops, routed through the OPNsense box, I will see higher bandwidth, because the OPNsense/BSD kernel won't be busy running iperf3 itself during the test.
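For the record, the routed test I have in mind would look something like this; the addresses are hypothetical, assuming one laptop on the LAN subnet and the other on a different OPNsense interface:

# laptop A (e.g. 192.168.7.10 on LAN): run the server
iperf3 -s

# laptop B (on the other interface): run the client against laptop A
iperf3 -c 192.168.7.10 -w 1MB -P 8

That way the traffic is forwarded through the OPNsense box, but neither iperf3 endpoint competes with the kernel for the APU2's CPU.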
I will add further --
Several years ago, circa 2018 to 2020, there were posts and articles floating around demonstrating how to achieve true 1 Gbps (actually ~950 Mbit/s) throughput with an APU2-series device + pfSense (and possibly OPNsense), but this was under earlier versions of FreeBSD.
From what I gather, core aspects of the network stack have changed in more recent versions of FreeBSD in ways that "break" those older optimizations, perhaps making that ~1 Gbps throughput no longer achievable.
If that is the case, part of what is going on is probably that the APU2-series boxes use a CPU that is now a full decade old, and there is no longer much interest in trying to squeeze performance out of it.
Probably, if one wants to route 1 Gbps and more under recent or current versions of OPNsense, the simpler approach is to upgrade to much more recent hardware with more raw CPU power. (And good Intel™ NICs, not Realtek or other derivatives.)
In my case, my WAN ISP connection is only ~95 Mbps and I have no need for anything beyond 500 Mbps on the LAN side, so I'll continue to use my trusty APU2E4 for the time being.
Sorry for the excess posts; I just noticed that the OP is running a PPPoE WAN connection.
Multiple articles and forum threads about the APU2 series over the past several years indicate that a PPPoE WAN tends to severely limit throughput because, as I understand it, PPPoE traffic is by nature processed on a single CPU core; it cannot be spread across multiple NIC queues the way plain Ethernet traffic can.
In my single-TCP-connection testing with the APU2, the max throughput I saw under best-case LAN conditions was just 137 Mbit/s. The only way I could reach the ~600 Mbit/s described in my earlier posts was by running multiple connections at once.
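For anyone wanting to reproduce the comparison, the only difference between the two cases is the -P (parallel streams) client option; the server address is the one from my earlier example:

# single TCP connection (~137 Mbit/s in my testing)
iperf3 -c 192.168.7.1

# 8 parallel TCP connections (~600 Mbit/s summed)
iperf3 -c 192.168.7.1 -P 8

As I understand it, a PPPoE WAN behaves more like the single-stream case, since the tunnel's traffic stays on one core.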
Thank you!
Meanwhile, I did some minor tweaking, which added ~100 Mbps:
net.isr.numthreads=4
net.isr.maxthreads=4
net.isr.dispatch=deferred
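In case it helps anyone reproducing this: on my box these are boot-time tunables (I added them under System > Settings > Tunables and rebooted). Whether they took effect can be verified from a shell with standard sysctl:

sysctl net.isr.numthreads net.isr.maxthreads net.isr.dispatch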
Are there any more tweaks that can be applied?
BTW, I stumbled upon a very interesting approach:
https://www.neelc.org/posts/multicore-pppoe/
https://www.neelc.org/posts/opnsense-pppoe-kvm/
P.S.
Also, after applying these (from the OPNsense manual: https://docs.opnsense.org/troubleshooting/performance.html):
net.isr.bindthreads = 1
net.isr.maxthreads = -1
net.inet.rss.enabled = 1
net.inet.rss.bits = 2
I'm now at ~590 Mbps.
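A way to sanity-check that the netisr/RSS settings actually took effect (standard FreeBSD tooling, nothing OPNsense-specific) is:

netstat -Q

which prints the netisr configuration (thread count, dispatch policy, CPU bindings) plus per-CPU workstream statistics.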
Does anybody know whether it's possible to squeeze more out of an APU2?
Sorry for the repeated postings, but maybe this can be useful for somebody.
I borrowed these tweaks:
net.isr.defaultqlimit=2048
net.inet.tcp.soreceive_stream = 1
hw.ibrs_disable=1
They come from this very useful guide: https://binaryimpulse.com/2022/11/opnsense-performance-tuning-for-multi-gigabit-internet/
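One caveat worth flagging on the last one: hw.ibrs_disable=1 turns off the IBRS Spectre V2 mitigation to win back CPU cycles, so it trades some security for speed. Its state can be checked with:

sysctl hw.ibrs_disable hw.ibrs_active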
Interesting. I tried adding all those tunables (via the GUI) to my OPNsense instance, then re-tested using the physical 1 Gb Ethernet setup I described earlier and the same iperf3 options. No discernible difference at all; it still tops out at about 600 Mbps.
That's about right, if I remember properly: 600-700M max on an APU. I was able to hit 850M a few times on an APU with iperf, without shaping and without any extra config beyond the default, back on 21.x-22.x, in a perfect-case scenario.
But the realistic max is about 600M, and that already makes the CPUs sweat.
Back in the day, I was using these tunables on the APU:
hw.em.rx_process_limit=-1        # obsolete
net.isr.bindthreads=1
net.isr.maxthreads=-1
net.isr.dispatch=deferred
hw.em.max_interrupt_rate=16000   # obsolete
dev.igb.0.eee_control=0          # EEE (Energy Efficient Ethernet) off, runtime
dev.igb.0.fc=0                   # flow control off on igb cards within FreeBSD, runtime
dev.igb.1.eee_control=0          # EEE off, runtime
dev.igb.1.fc=0                   # flow control off, runtime
dev.igb.2.eee_control=0          # EEE off, runtime
dev.igb.2.fc=0                   # flow control off
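The dev.igb.* entries marked "runtime" don't need a reboot; as a sketch, they can be applied live from a shell (or persisted via System > Settings > Tunables), e.g. for the first NIC:

sysctl dev.igb.0.eee_control=0
sysctl dev.igb.0.fc=0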
Regards,
S.