18.1 Network Performance (17.7.11-12 was fine)?

Started by funar, January 30, 2018, 08:11:30 AM

I performed a clean install of 18.1 on my Dell R610 server, which has a quad-port Broadcom NIC installed, and restored my configuration from backup. I'm now experiencing some rather significant performance issues. I have 360Mbps downstream, but speedtest.net only gets me ~100Mbps.

On 17.7.11 (and 17.7.12), I added the following to /etc/loader.conf.local to mitigate the issue, but these settings no longer seem to have the same effect. I'm wondering if I need to go back to 17?

hw.bce.tso_enable=0                 # disable TSO in the bce(4) driver
net.inet.tcp.tso=0                  # disable TCP segmentation offload stack-wide
net.inet.tcp.sendbuf_max=16777216   # max send socket buffer: 16 MB
net.inet.tcp.recvbuf_max=16777216   # max receive socket buffer: 16 MB
net.inet.tcp.sendbuf_inc=16384      # send buffer autosizing step
net.inet.tcp.recvbuf_inc=524288     # receive buffer autosizing step
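
(For completeness: after a reboot, something like

sysctl hw.bce.tso_enable net.inet.tcp.tso net.inet.tcp.sendbuf_max

confirms the values actually took effect - hw.bce.tso_enable in particular is a loader tunable, so it only applies at boot.)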

Does anyone have any thoughts?


The top two lines came from a FreeBSD guide on mitigating issues with Broadcom cards - specifically, they disable hardware TSO. The same settings can be found in the pfSense wiki.

The buffer sizes were expanded to make use of some of the 48 GB of RAM the server has (from its previous role).

I would start by removing them all, rebooting, and testing. Then add them back one at a time.

I'd check if the Broadcom card is supported at all: https://www.freebsd.org/releases/11.1R/hardware.html#ethernet
[i386,amd64] The bce(4) driver provides support for various NICs based on the QLogic NetXtreme II family of Gigabit Ethernet controllers, including the following:
QLogic NetXtreme II BCM5706 1000Base-SX
QLogic NetXtreme II BCM5706 1000Base-T
QLogic NetXtreme II BCM5708 1000Base-SX
QLogic NetXtreme II BCM5708 1000Base-T
QLogic NetXtreme II BCM5709 1000Base-SX
QLogic NetXtreme II BCM5709 1000Base-T
QLogic NetXtreme II BCM5716 1000Base-T
Dell PowerEdge 1950 integrated BCM5708 NIC
Dell PowerEdge 2950 integrated BCM5708 NIC
Dell PowerEdge R710 integrated BCM5709 NIC
HP NC370F Multifunction Gigabit Server Adapter
HP NC370T Multifunction Gigabit Server Adapter
HP NC370i Multifunction Gigabit Server Adapter
HP NC371i Multifunction Gigabit Server Adapter
HP NC373F PCIe Multifunc Giga Server Adapter
HP NC373T PCIe Multifunction Gig Server Adapter
HP NC373i Multifunction Gigabit Server Adapter
HP NC373m Multifunction Gigabit Server Adapter
HP NC374m PCIe Multifunction Adapter
HP NC380T PCIe DP Multifunc Gig Server Adapter
HP NC382T PCIe DP Multifunction Gigabit Server Adapter
HP NC382i DP Multifunction Gigabit Server Adapter
HP NC382m DP 1GbE Multifunction BL-c Adapter


Then check if the card has errors (Interfaces -> Overview or netstat -idb -I bce0). In the netstat output, nonzero Ierrs, Idrop or Oerrs counters are what to look for.

And read the bce(4) manual page. Maybe increasing the hw.bce.rx_pages and hw.bce.tx_pages tunables could help.
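
If you try that, note those are boot-time tunables, so (untested on my side) they would go in /etc/loader.conf.local next to your existing lines, e.g.:

hw.bce.rx_pages="8"
hw.bce.tx_pages="8"

followed by a reboot.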

Quote from: faunsen on January 30, 2018, 09:49:01 AM
I'd check if the Broadcom card is supported at all: https://www.freebsd.org/releases/11.1R/hardware.html#ethernet

Then check if the card has errors (Interfaces -> Overview or netstat -idb -I bce0).

And read the bce(4) manual page. Maybe increasing the hw.bce.rx_pages and hw.bce.tx_pages tunables could help.
The Ethernet device is a BCM5709, and is listed on the compatibility list for the bce(4) driver.

I went ahead and increased the tx_pages and rx_pages to "8" (the max) and there's no difference in the throughput.
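
For anyone reproducing this, the active values can be read back after the reboot with:

sysctl hw.bce.rx_pages
sysctl hw.bce.tx_pages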

For grins, I enabled verbose debugging on the bce driver as well. There are no notices of buffer overruns or anything else that would be of concern. In fact, I don't see much of a difference in the dmesg output with debugging on versus off.
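
(If anyone wants to repeat that: I believe the relevant knob is the hw.bce.verbose loader tunable, i.e.

hw.bce.verbose="1"

in /etc/loader.conf.local, plus a reboot.)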

I'm at a complete loss as to what to try next. It seems very odd that the throughput hits such a precise wall - exactly 100Mbps.





And no errors on the interface? I guess there aren't any.  ;)

Hmm, 360Mbps is not Gigabit, so the problem may not be with the Broadcom cards at all.
How are you measuring throughput?
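
If you have two hosts handy, something like iperf3 straight through the firewall would take the ISP and speedtest.net out of the picture:

iperf3 -s              # on a host behind one interface
iperf3 -c <server-ip>  # on a host behind another interface, through the firewall

with <server-ip> being whatever address the first host has.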

There are no errors on any of the interfaces. All are running at 1G, full-duplex.
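
(Checked via the media line in ifconfig, e.g.:

ifconfig bce0 | grep media

which reports 1000baseT <full-duplex> here.)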

I'm testing via speedtest.net. Connecting my laptop to the cable modem directly, I get 367M downstream, but the same laptop behind OPNsense slams into a 100M wall. Literally 100M. It's almost as if there's a traffic shaper installed, but there isn't. The tests are all done hard-wired.
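
(For anyone wondering how to rule the shaper out: assuming it still sits on ipfw/dummynet, which I understand it does, a quick shell check is

ipfw pipe show

which should print nothing when no pipes are configured.)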

There's no gigabit internet service in this area yet. The best we can get is this oddball 360M/8M service. Maybe someday...

I may end up trying to go back to v17 and compare the sysctl output to what I'm seeing in v18.  Otherwise, I'm not sure what else to try.



I had a similar issue. Accidentally enabled IDS promiscuous mode and didn't notice it.
Might be your case too?

Or some QoS service is active?
OPNsense v18 | HW: Gigabyte Z370N-WIFI, i3-8100, 8GB RAM, 60GB SSD, | Controllers: 82575GB-quad, 82574, I221, I219-V | PPPoE: RDS Romania | Down: 980Mbit/s | Up: 500Mbit/s

Team Rebellion Member

I checked... I didn't have IDS enabled. Good thought, for sure.

I'm going to roll back to 17 this evening and see what happens.



The issue does not exist with Intel NICs. The result below is from a 1 Gbps fiber connection:

[speedtest screenshot]

Gigabit.... If only.  What model card is that you're using?


I have the same issue. I have 900 down and 500 up, as measured via speedtest.net with my local ISP, when connected directly to one of the LAN ports on my FRITZ!Box router. When I put my OPNsense box between the FRITZ!Box and my machine, I see only 384 down and 384 up. The OPNsense box is a single VM running on an ESXi 6.0 host, which has an Atom C2758 8-core CPU and 8 GB of ECC RAM. The OPNsense guest is provisioned with 4 GB RAM and 4 vCPUs.

Please advise on how I can improve throughput on OPNsense 18.1.

Same situation here: HP DL380 G6, ESXi 5.5, no more than 450 Mbps symmetrical out of 1000 Mbps symmetrical.

I'm out of ideas.