SOLVED: Poor NIC performance on APU2c4 board

Started by andbaum, January 20, 2019, 10:39:30 PM

January 20, 2019, 10:39:30 PM Last Edit: January 21, 2019, 03:49:55 PM by andbaum
Hi everyone,

I'm new to OPNsense and have been using it for about two months now. It is a very cool product and I really enjoy using it.

However, I have a problem that I couldn't solve myself: I run OPNsense on an APU2c4 (Intel NICs), installed on an SD card. When I run iperf against my firewall (from a MacBook Pro with Thunderbolt Ethernet) I only get a bitrate of about 110-120 Mbit/s.
I already tried two things:

  • Changing "Interfaces: Settings" in several ways: when I enable the hardware offloading options I get a slight performance improvement, but only about 5-10 Mbit/s.
  • Connecting the APU board to my switch (Netgear ProSafe GS724T) with a LACP LAG (I had one interface free for it): enabling the LAG really does double my throughput, but starting from 110-120 Mbit/s that only brings me to about 250 Mbit/s.
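
For reference, the test is basically a plain TCP iperf run terminating on the firewall itself; a rough sketch (the LAN address 10.0.0.1 and port 5001 below are just placeholders, and it assumes iperf is installed on the OPNsense box):

# on the OPNsense box:
$ iperf -s -p 5001

# on the MacBook, pointing at the firewall's LAN address:
$ iperf -c 10.0.0.1 -p 5001 -t 30 -i 5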

Can anyone help me?

Yours, Andreas

Don't have an answer for you, but there is a 6 pages long thread on this subject here: https://forum.opnsense.org/index.php?topic=9264.0

Thanks for the hint. I already know the other performance-related threads, but there was no real suggestion that could solve my problem.

The most confusing thing: in most of the threads I found, users complain about only hitting 500-600 Mbit/s on the APU boards with a BSD system. My APU board only hits about 150 Mbit/s per NIC :(

> When I iperf to my firewall (from a macbook pro with thunderbolt ethernet) I only get about 110-120 Mbit/s bitrate.

People have stated that measuring transit speed (traffic forwarded through the firewall) gives much better results than testing throughput *to* the firewall. The reason for this is that once you test against the firewall host itself, the userspace application slows down networking in the kernel...
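
For a transit test, the iperf server sits on a host on the far side of the firewall instead of on the firewall itself. A minimal sketch with placeholder address and port (UDP here, matching the test in the next post):

# on a host behind the firewall (e.g. in the DMZ):
$ iperf -s -u -p 4712

# on the LAN client, so the traffic is forwarded through the APU:
$ iperf -c 172.20.1.1 -p 4712 -u -t 60 -i 10 -b 1000M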


Cheers,
Franco

THANKS! That was the problem!

Checking a LAN Client on the other side of the APU board:

$ iperf -c 172.20.1.1 -p 4712 -u -t 60 -i 10 -b 1000M
------------------------------------------------------------
Client connecting to 172.20.1.1, UDP port 4712
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.10 port 63781 connected with 172.20.1.1 port 4712
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 10.0-20.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 20.0-30.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 30.0-40.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 40.0-50.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 50.0-60.0 sec  1.22 GBytes  1.05 Gbits/sec

;D

Even with some IDS rules enabled  8)


Quote from: andbaum on January 21, 2019, 03:49:30 PM
THANKS! That was the problem!
[...]
Even with some IDS rules enabled  8)

Your results look highly unlikely!

Are you performing network address translation (a.k.a. NAT) in your firewall config? The built-in "pf" firewall does NAT by default, unless you switched it off explicitly. Otherwise you are just routing traffic. There is no problem routing at 1 Gbit/s on this platform; it's the NAT that kills the performance miserably!
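
A quick way to check whether NAT is actually in play is to dump the NAT rules pf has loaded, from a shell on the firewall (the exact rules shown will of course depend on your config):

$ pfctl -s nat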

You are right.
But in my first attempt I just wanted to see whether the NICs are really capable of 1 Gbit/s, or whether there is an issue like the Raspberry Pi's USB bottleneck.
After seeing this result I built a little setup with a CubieTruck (I had one lying around) on the WAN side (actually my DMZ) and my MacBook on the LAN side.
The CubieTruck manages about 700-750 Mbit/s on its own (same switched LAN, TCP performance in iperf). Without IDS (NAT and about 30 firewall rules enabled), my APU board handled about 500-550 Mbit/s to the CubieTruck.
As my internet connection only offers 100 Mbit/s downstream, I could afford to play with some IDS rules. I have now "tuned" my setup to about 200-250 Mbit/s throughput with about 20,000 "simple" IDS rules. By simple I mean blacklisted IPs, URLs, etc. Enabling more complex rules (ones that really look into the traffic behaviour), it's no problem to bring the performance down to about 50 Mbit/s with a single ruleset.
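
Roughly, the before/after comparison is just the same TCP run repeated while toggling the rulesets (the CubieTruck address below is only a placeholder):

# on the CubieTruck (DMZ side):
$ iperf -s

# on the MacBook (LAN side), once with IDS off and once per enabled ruleset:
$ iperf -c 172.20.1.20 -t 60 -i 10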

Hope this brings some clarity.

Quote from: Ricardo on February 07, 2019, 07:18:25 PM
unless you switched it off explicitly
In this report, NAT was enabled and my client (10.0.0.10) was masqueraded towards the server at 172.20.1.1. I think the use of UDP rather than TCP explains the better bandwidth results.
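
For comparison, the TCP equivalent of the run above is the same command without the UDP flags (-u and the -b target rate):

$ iperf -c 172.20.1.1 -p 4712 -t 60 -i 10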