CPU recommendations for 1Gbps w/PPPoE

Started by ck42, February 18, 2022, 05:12:49 PM

March 12, 2022, 08:45:06 PM #15 Last Edit: March 12, 2022, 08:50:03 PM by qarkhs
Quote from: Dimi3 on March 12, 2022, 07:07:33 PM
I used a fitlet2 with a Celeron CPU until recently, and it can do 1Gbps PPPoE with ease. But I didn't run any additional plugins like Suricata or Zenarmor. For OpenVPN it can do 500Mbps. Hope it helps; it's a great little box, it just runs a little hot.

I am running my Fitlet2 J3455 with Zenarmor. I am on a 300 Mbps connection. It's using about 2GB of the 8GB of RAM I have installed. The CPU isn't being taxed, but I'm only using it on a home network. Every time I check the temp it's running around 46C to 47C.

The single-core performance of all the new Elkhart Lake CPUs (x6211E, J6412 & x6425E) in the Fitlet3 is a lot better than the J3455's: https://www.cpubenchmark.net/compare/Intel-Celeron-J3455-vs-Intel-Atom-x6211E-vs-Intel-Celeron-J6412-vs-Intel-Atom-x6425E/2875vs4347vs4474vs4753

The Fitlet3 also supports DDR4 and NVMe drives. I believe the NICs are Intel I210.


To future readers:
Quote from: ck42 on February 18, 2022, 06:53:36 PM
Found this in a week-old comment on the OPNsense subreddit:

So here we have someone claiming near full-gig throughput on a LOWLY Pentium J5005. Base frequency 1.5GHz, burst 2.8GHz, 10W TDP, which is really nice.

This is interesting, and I did a little research. I have an Atom E3845 and use PPPoE with VLANs on gigabit, and I can confirm I get ~700Mbps max. It's a bummer. According to Geekbench 5, the lowly J5005 is over twice as fast as mine in single-core performance, so that discrepancy makes sense. (I've switched to OpenWRT for now and it runs at full speed, no sweat.)

I think a lot of the folks running into PPPoE gigabit limits are using similarly old, low-power systems. Think of the super popular APU2 and related PCs, or the J1900 mini PC systems. These are all similarly slow compared to a "lowly" J5005.

Bottom line for those interested: when deciding, compare single-core benchmarks against these systems to get an idea of how your prospective system will perform. Report back on how things pan out.

The problem is that there is no router benchmark that can reliably tell you how many megabits or gigabits per second a given CPU can push under FreeBSD 12/13 when PPPoE is the WAN protocol.

So the only safe option is to buy something in the 2-4 GHz range (translation: overpowered two to three times over, just to be safe) that is also a genuinely recent microprocessor. By recent I mean from the past 3-4 years. And be careful: I'm not saying the product itself should be at most 3-4 years old, but that the CPU/SoC it is built around should be. "Some" companies, ahem... PC Engines... ahem, are still selling a 10-year-old AMD Jaguar CPU across their APU2/3/4/5/6/7 product line at the end of 2022. So the product, say an APU7, may be new (and since they are rather quiet about the details of the APU5/6/7, the public may be in the dark), but the CPU on the board is a rusty piece of junk in terms of routing performance in 2022.

Hello all,

So in case it's still of interest: I can report that it is possible to use a Deutsche Telekom fiber connection (1 GBit/s down) via the provided fiber modem with an OPNsense DEC2750. That is to say, tested with the NETBOARD A10 Gen3 and the AMD Embedded Ryzen V1500B.

Best regards,
Mike

Quote from: Mike Forster on May 02, 2023, 09:45:10 AM
Hello all,

So in case it's still of interest: I can report that it is possible to use a Deutsche Telekom fiber connection (1 GBit/s down) via the provided fiber modem with an OPNsense DEC2750. That is to say, tested with the NETBOARD A10 Gen3 and the AMD Embedded Ryzen V1500B.

Best regards,
Mike

With or without special tuning?

May 31, 2023, 01:14:45 PM #20 Last Edit: May 31, 2023, 01:36:40 PM by Kali
For reference, I have a MiniPC YL-J3160L4, which is an Intel(R) Celeron(R) CPU J3160 @ 1.60GHz (4 cores, 4 threads) with 4GB of RAM and 4x 1GBit Ethernet (Intel(R) I210 Flashless (Copper)).

Recently I changed my connection from VDSL2 200/30 to FTTH 1000/300.
Initially I just swapped in the ISP router and everything worked fine, with speedtest results up to 906/310.
As the new ISP modem is a huge, ugly white box, and as all Italian (maybe all European?) FTTH ISPs use PPPoE and OPNsense supports it, I decided to remove the ISP router and use the OPNsense box directly. That is where my performance trouble started: throughput dropped to 430/310.

After some reading here and on the pfSense and general FreeBSD forums, I found a configuration that raised performance up to 902/310:

Enabled powerd
Disabled Hardware CRC/TSO/LRO and VLAN Filtering
Adjusted these tunables:

net.isr.dispatch=deferred
net.isr.maxthreads=-1
net.isr.bindthreads=1
net.inet.ip.intr_queue_maxlen=3000
net.inet.rss.enabled=1
dev.igb.0.iflib.override_qs_enable=1


Only the last one is hardware-specific to my Intel I210; igb0 is the interface connected to the Optical Network Terminal (the optical-to-Ethernet converter).
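
In case anyone wants to try these from a shell before persisting them: here is a minimal sketch, assuming my understanding of which entries are boot-time tunables versus live sysctls is right. On OPNsense the supported way to persist all of them is System > Settings > Tunables.

# these two can be changed live for a quick test
sysctl net.isr.dispatch=deferred
sysctl net.inet.ip.intr_queue_maxlen=3000

# the rest are, as far as I know, boot-time tunables and need a reboot;
# on plain FreeBSD they would go into /boot/loader.conf.local
cat >> /boot/loader.conf.local <<'EOF'
net.isr.maxthreads=-1
net.isr.bindthreads=1
net.inet.rss.enabled=1
dev.igb.0.iflib.override_qs_enable=1
EOF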

About VoIP: in OPNsense I assigned a static LAN IP to the WAN interface of the ISP modem and added a port forward of WAN address port 5060/UDP to that static IP, and with that I was able to move the ISP modem out of sight :D
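
Just to be clear, the port forward itself was done in the GUI under Firewall > NAT > Port Forward. If you want to double-check it from a shell, something like this should work (assuming the PPPoE WAN shows up as pppoe0, which may differ on your box):

# show the NAT rules pf actually loaded and look for the SIP forward
pfctl -sn | grep 5060
# watch SIP traffic on the WAN to confirm registrations still get through
tcpdump -ni pppoe0 udp port 5060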

I am running a Protectli VP2420, Celeron J6412 quad-core at 2GHz. Getting full speed on my fiber, 1 Gbps down / 940 Mbps up, on PPPoE. Performance-wise I left everything at defaults and it's fine.

I have been doing a lot of research too, because of anticipated PPPoE-related issues, and it would seem the single-core issue has been addressed for a few years now. Since OPNsense uses mpd (https://mpd.sourceforge.net/) to establish PPPoE connections, it should utilise all cores by default, and all the tunables documented everywhere should have little to no effect. Take this with a grain of salt: I am a fairly new user of OPNsense and FreeBSD in general.

Here is a thread discussing this specifically: https://forum.opnsense.org/index.php?topic=30925.0
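
If anyone wants to check how the packet processing is actually being spread across cores, these are the stock FreeBSD commands I would start with (again, grain of salt from a new user):

# netisr configuration plus per-CPU workstream counters
netstat -Q
# one interrupt line per NIC queue, so this also shows how many queues are in use (igb0 is just an example)
vmstat -i | grep igb0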

@Prevok, can you confirm this is for when your ISP provides your internet connection over PPPoE?
I'm thinking you might be talking about running a PPP server on OPNsense, and that this is when MPD5 is used.

Sorry, but just to check: could you confirm your ISP uses PPPoE, and run top -P or top -1 (can't remember which one it is) to see whether your first core is maxed out or the load is spread across cores when running at full download bandwidth?

I have moved to an ISP that does not use PPPoE, so I cannot confirm it's fixed, sorry.

Thanks

I tried it (top -P) and saw an even distribution over 4 threads. My DEC750 has 8, but only the first four got utilized, at ~30% load for 600 MBit/s. On another N5105-based box, all 4 cores were being used at 1000 MBit/s and ~10% load.

After I changed the number of RX and TX queues from 4 to 8 on the DEC750, the load on individual cores became so low that the distribution looked uneven and it seemed like only 3 threads were doing most of the work. As a matter of fact, at those speeds both boxes are far from their limits.
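
In case anyone wants to replicate the queue change: as far as I know this is done with the per-device iflib overrides, set as boot-time tunables (System > Settings > Tunables on OPNsense). Sketched here for an igb interface as an example; the driver prefix and unit number depend on your NIC:

dev.igb.0.iflib.override_nrxqs=8
dev.igb.0.iflib.override_ntxqs=8

A reboot is needed for these to apply.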
Intel N100, 4 x I226-V, 16 GByte, 256 GByte NVME, ZTE F6005

1100 down / 770 up, Bufferbloat A

@Prevok, that wasn't my case.
My box with default settings can't reach 500Mbit, probably due to the weak CPU.

Quote from: bunchofreeds on June 13, 2023, 05:00:41 AM
@Prevok, can you confirm this is for when your ISP provides your internet connection over PPPoE?
I'm thinking you might be talking about running a PPP server on OPNsense, and that this is when MPD5 is used.

Sorry, but just to check: could you confirm your ISP uses PPPoE, and run top -P or top -1 (can't remember which one it is) to see whether your first core is maxed out or the load is spread across cores when running at full download bandwidth?

I have moved to an ISP that does not use PPPoE, so I cannot confirm it's fixed, sorry.

Thanks

I am not running any PPP server. I need to establish a PPPoE connection, over a specific VLAN, with my ISP.

When running a speedtest with 'top -P -H' you can see activity jumping to roughly 30% on CPUs 1 and 2. The number of threads increases by 2-3 as well (although I'm unsure how accurate that is).
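
For anyone repeating the test, this is plain FreeBSD top, nothing extra installed; I add -S so the kernel threads show up too. Run it while a speedtest is going:

# -P: one usage line per CPU, -H: list individual threads, -S: include system/kernel processes
top -P -H -S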

June 13, 2023, 04:56:10 PM #26 Last Edit: June 13, 2023, 05:00:51 PM by Prevok
Quote from: Kali on June 13, 2023, 09:43:36 AM
@Prevok, that wasn't my case.
My box with default settings can't reach 500Mbit, probably due to the weak CPU.

Yeah, that has to be it. My previous box was running a J3160 (Protectli FW4B) as well. On OpenWRT and Debian I was able to reach 500-600Mbit, slightly less with pfSense; with OPNsense on the default config I was barely reaching 200Mbit.

After various tunables, I don't remember even reaching 400Mbit :(

At some point, even with multiple cores and multithreading support, the IPC on that CPU is just too low (among other things).

Edit: Didn't realise you actually almost maxed the line on that box. Good job :) I gave up and replaced the box :D

I find the information around PPPoE on FreeBSD very confusing.

Maybe I was lucky: after switching from the ISP router to OPNsense I noticed the poor performance, and after a bit of reading I found the biggest difference came from:

net.isr.dispatch=deferred
net.isr.maxthreads=-1
net.isr.bindthreads=1

The rest made some difference, but nothing as big as these.
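
A quick way to confirm the values actually took effect (some of them, as far as I know, only apply after a reboot), from a shell on the box:

# current values of the three netisr knobs
sysctl net.isr.dispatch net.isr.maxthreads net.isr.bindthreads
# netstat -Q also prints the active dispatch policy and thread count up top
netstat -Q | head -n 20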

BTW, the FW4B is a rebranded YL-J3160L4 (I'm running the Protectli coreboot BIOS on it).

June 15, 2023, 06:15:25 AM #28 Last Edit: June 16, 2023, 12:53:48 AM by bunchofreeds
I agree that it's super confusing trying to understand what's actually going on here.

I 'believe' it had to do with multi-queue network adapters working with multi-core CPUs.
So it became worse if you had a multi-queue adapter paired with a multi-core CPU that didn't have a high clock speed.

https://docs.netgate.com/pfsense/en/latest/hardware/tune.html#pppoe-with-multi-queue-nics
https://redmine.pfsense.org/issues/4821

So I'm really not sure if it has been resolved or not?

Possibly some additional testing from those impacted could confirm what their hardware is (including the NIC and its capabilities; a couple of commands for that are below).

Or perhaps someone more knowledgeable than me to confirm :)
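
To help gather that, here are a couple of commands that show the NIC model and how many queues it's actually running (stock FreeBSD tools, igb used as an example):

# NIC vendor/device details from the PCI bus
pciconf -lv | grep -B4 -i ethernet
# one interrupt counter per configured queue shows up here
vmstat -i | grep igb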

Found this great write-up about it:

https://eyegog.co.uk/posts/a-sad-slow-pppoe-story/

It's solved in this case by replacing the NIC hardware and using current software.