I performed a clean install of 18.1 on my Dell R610 server, which has a quad-port Broadcom NIC, and restored my configuration from backup. I'm now experiencing some rather significant performance issues. My service is 360Mbps downstream, but Speedtest is only getting me ~100Mbps.
On 17.7.11 (and 17.7.12), I added the following to /boot/loader.conf.local to mitigate the issue, but it no longer seems to have the same effect. I'm wondering if I need to return to 17?
hw.bce.tso_enable=0
net.inet.tcp.tso=0
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
Does anyone have any thoughts?
Where did you get these values?
The top two lines came from a FreeBSD guide for mitigating issues with Broadcom cards, specifically to disable hardware TSO offload. The same settings can be found in the pfSense wiki.
The buffer sizes were expanded to make use of some of the 48GB of RAM the server has (from its previous role); 16777216 bytes is a 16MB maximum per connection.
I would start by removing them, rebooting, and testing. Then add them back one after another.
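As a sketch, one way to bisect (sed -i '' is the BSD in-place syntax; rc.reboot is the script OPNsense uses for a clean reboot):
# comment out every tunable in one pass, then re-enable one line per reboot and retest
sed -i '' 's/^/#/' /boot/loader.conf.local
/usr/local/etc/rc.reboot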
I'd check if the Broadcom card is supported at all: https://www.freebsd.org/releases/11.1R/hardware.html#ethernet
[i386,amd64] The bce(4) driver provides support for various NICs based on the QLogic NetXtreme II family of Gigabit Ethernet controllers, including the following:
QLogic NetXtreme II BCM5706 1000Base-SX
QLogic NetXtreme II BCM5706 1000Base-T
QLogic NetXtreme II BCM5708 1000Base-SX
QLogic NetXtreme II BCM5708 1000Base-T
QLogic NetXtreme II BCM5709 1000Base-SX
QLogic NetXtreme II BCM5709 1000Base-T
QLogic NetXtreme II BCM5716 1000Base-T
Dell PowerEdge 1950 integrated BCM5708 NIC
Dell PowerEdge 2950 integrated BCM5708 NIC
Dell PowerEdge R710 integrated BCM5709 NIC
HP NC370F Multifunction Gigabit Server Adapter
HP NC370T Multifunction Gigabit Server Adapter
HP NC370i Multifunction Gigabit Server Adapter
HP NC371i Multifunction Gigabit Server Adapter
HP NC373F PCIe Multifunc Giga Server Adapter
HP NC373T PCIe Multifunction Gig Server Adapter
HP NC373i Multifunction Gigabit Server Adapter
HP NC373m Multifunction Gigabit Server Adapter
HP NC374m PCIe Multifunction Adapter
HP NC380T PCIe DP Multifunc Gig Server Adapter
HP NC382T PCIe DP Multifunction Gigabit Server Adapter
HP NC382i DP Multifunction Gigabit Server Adapter
HP NC382m DP 1GbE Multifunction BL-c Adapter
Then check if the card has errors (Interfaces -> Overview or netstat -idb -I bce0).
And read the bce(4) manual page. Maybe increasing the sysctls hw.bce.rx_pages and hw.bce.tx_pages could help.
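For reference, both checks from the shell, as a sketch (assuming the WAN interface is bce0):
# error and drop counters for the interface
netstat -idb -I bce0
# current values of the bce(4) tunables, if the driver exposes them via sysctl
sysctl hw.bce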
Quote from: faunsen on January 30, 2018, 09:49:01 AM
I'd check if the Broadcom card is supported at all: https://www.freebsd.org/releases/11.1R/hardware.html#ethernet [...] Then check if the card has errors (Interfaces -> Overview or netstat -idb -I bce0). And read the bce(4) manual page. Maybe increasing the sysctls hw.bce.rx_pages and hw.bce.tx_pages could help.
The ethernet device is a BCM5709, and is listed on the compatibility list for the bce driver.
I went ahead and increased the tx_pages and rx_pages to "8" (the max) and there's no difference in the throughput.
For grins, I enabled verbose debugging on the bce driver as well. There are no notices of buffer overruns or anything else of concern. In fact, I don't see much difference in the dmesg output between debugging off and on.
I'm at a complete loss as to what to try next. It seems very odd that the throughput slams into a wall at exactly 100Mbps.
And no errors on the interface? I guess there aren't any. ;)
Hmm, 360Mbps is not Gigabit. It could be that the problem is not with the Broadcom cards.
How do you measure the throughput?
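An exact 100M ceiling often points at a link that negotiated 100baseTX somewhere in the path; the negotiated media is visible per interface, as a sketch (assuming bce0):
# the media line shows negotiated speed/duplex, e.g. 1000baseT <full-duplex>
ifconfig bce0 | grep media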
There are no errors on any of the interfaces. All are running at 1G, full-duplex.
I'm testing via speedtest.net. Connecting my laptop to the cable modem directly, I get 367M downstream, but the same laptop going through OPNsense slams into a 100M wall. Literally 100M. It's almost as if there's a traffic shaper installed, but there isn't. The tests are all done hard-wired.
There's no gigabit internet service in this area yet. The best we can get is this oddball 360M/8M service. Maybe someday...
I may end up trying to go back to v17 and compare the sysctl output to what I'm seeing in v18. Otherwise, I'm not sure what else to try.
I had a similar issue. Accidentally enabled IDS promiscuous mode and didn't notice it.
Might be your case too?
Or some QoS service is active?
I checked... I didn't have IDS enabled. Good thought, for sure.
I'm going to roll back to 17 this evening and see what happens.
The issue does not exist with Intel NICs. The following result is for a 1Gbps fiber connection:
(http://i68.tinypic.com/143p1j8.png)
Gigabit.... If only. What model card is that you're using?
Intel I211AT - 10/100/1000 Controller
I have the same issue. I have 900 down and 500 up as measured using Speedtest.net with my local ISP when connected directly to one of the LAN ports on my FRITZ!Box router. When I connect my OPNsense box directly to the FRITZ!Box and test through the OPNsense LAN port, I see only 384 down and 384 up. The OPNsense box is a single VM running on an ESXi 6.0 host with an Atom C2758 8-core CPU and 8GB of ECC RAM. The OPNsense guest is provisioned with 4GB RAM and 4 vCPUs.
Please advise on how I can improve throughput on OPNsense 18.1.
Same situation here: HP DL380 G6, ESXi 5.5, no more than 450Mbps symmetrical out of a 1000Mbps symmetrical connection.
I'm out of clues.
Also seeing slowdowns here, using an Intel i350-T4.
Here is a comparison using the same ISP connection:
150 down with OPNsense
310 down with pfSense
Upload speed does not seem affected.
It was not like this with 17.7.12.
Not happy; I hope there is an explanation or patch soon.
Are these all measurements with NAT in place?
We can try two things:
(a) Use different (older) kernels, even 11.0, to see if the issue is there, or
(b) check whether some quirk in the NAT rule generation causes this slowdown, which seems to be about half of what is expected.
Cheers,
Franco
Not sure, but I had to do the NAT patch to get NAT to work.
What is the shell command to revert back to 17.7? Or should I reinstall 17.7 to test it again, just to make sure? And if I reinstall 17.7.5, how do I prevent it from upgrading to 18.1? I would want to get back to 17.7.12.
Something else I noticed right off: memory usage has dropped from 12% with 17.7 to 6% with 18.1.
Try this first, reverting to the RC kernel where changes are minimal:
# opnsense-update -kr 18.1.r1
# /usr/local/etc/rc.reboot
Applied it.
System > Firmware still shows 18.1 installed, and the console shows 18.1_1 installed.
I tried the speed test again with no improvement.
I'm not sure the downgrade worked.
If I reinstall 17.7.5, how do I update to 17.7.12 without going to 18.1?
"uname -a" will reveal the src / kernel commit. Okay, so it's not the PCP change[1].
You can upgrade to 17.7.12 no problem; 18.1 is on a different update track, so you can't escape 17.7.x via the "normal" updates.
Cheers,
Franco
[1] https://github.com/opnsense/src/commit/dabc3cf4
OK, great, I will try it and report back the results. Glad I take lots of configuration backups.
The most useful test is probably 18.1 with a FreeBSD 11.0 kernel underneath, but I want to double-check that this is stable first.
Otherwise, 17.7.12 performance matters not so much... rather opnsense-devel and a clean reboot.
The last test is 17.7.12 with a FreeBSD 11.1 kernel (I know this should be stable, but it is tricky to update to; it works from opnsense-devel).
That should give us enough info about which puzzle piece is responsible for the slowdown.
Cheers,
Franco
PS: From the looks of it, and the missing reports during the beta, it's either driver-specific or something with the new way of unrolling the NAT rules.
I have a solution for my Broadcom-NIC'ed Dell R610. It was definitely a driver issue rather than NAT or anything else. The hw.bce.* settings only apply to Broadcom, but there are probably similar options for the other drivers mentioned in this thread.
In /boot/loader.conf.local:
# bce(4) driver: quiet logging, disable hardware TSO, max out ring pages
hw.bce.verbose=0
hw.bce.tso_enable=0
hw.bce.rx_pages=8
hw.bce.tx_pages=8
# make sure MSI/MSI-X interrupts are enabled
hw.pci.enable_msix=1
hw.pci.enable_msi=1
# disable TCP segmentation offload globally and enlarge the TCP buffers
net.inet.tcp.tso=0
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
I'm now getting full bandwidth in both directions via OPNsense. After getting that sorted out, I enabled dual-stack IPv4/IPv6, IDS, and upstream traffic shaping. No more issues.
It should go without saying, but you must reboot after making changes to /boot/loader.conf.local ...
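A quick sanity check after the reboot, as a sketch (again assuming the NIC is bce0):
# TSO4/TSO6 should no longer appear in the interface options line
ifconfig bce0
# and the global TSO sysctl should read 0
sysctl net.inet.tcp.tso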
I tried 17.7.12 and everything went back to expected speeds.
Then I realized that the upgrade had rewritten some of my performance tweaks.
All speeds look normal now running 18.1_1.
I also noticed a new suricata.yaml. I will have to dive into that, because I had changes in there as well.
Did you use /boot/loader.conf for these? Or were they really gone from /boot/loader.conf.local ? It shouldn't touch .local ever...
Cheers,
Franco
PS: For everyone else still having issues, please state your network driver name for reference. :)
Quote from: franco on February 01, 2018, 11:46:47 PM
Did you use /boot/loader.conf for these? Or were they really gone from /boot/loader.conf.local ? It shouldn't touch .local ever...
I added them to loader.conf.local manually so they would be safe from being overwritten by future updates.
When I installed v18.1 originally, I did a clean install, mainly because the v17 install used 100GB of my storage for swap by default. So loader.conf.local was empty.
Ah, makes sense....
dcol suggested this a bit ago and I will implement it soon so all the loader.conf stuff can reside in config.xml backup and restore as well:
https://github.com/opnsense/core/issues/2083
Cheers,
Franco
Quote from: franco on February 02, 2018, 12:01:19 AM
dcol suggested this a bit ago and I will implement it soon so all the loader.conf stuff can reside in config.xml backup and restore as well: https://github.com/opnsense/core/issues/2083
That would be awesome. I appreciate all you guys do. OPNsense is great.
Dell R610, 48GB RAM, 2x146GB SAS (RAID 1)
Hi
Just in case it helps Intel NIC owners, following what funar explained:
Quote: I have a solution for my Broadcom-NIC'ed Dell R610. It was definitely a driver issue... The hw.bce.* settings only apply to Broadcom, but there are probably similar options for the other drivers mentioned in this thread.
I found that the following values make network performance OK for an HP NC360T based on the Intel 82571EB (em driver), at least for me.
Here's my current /boot/loader.conf.local:
# make sure MSI/MSI-X interrupts are enabled
hw.pci.enable_msix=1
hw.pci.enable_msi=1
# em(4) driver: maximum receive/transmit descriptor ring sizes
hw.em.rxd=4096
hw.em.txd=4096
# disable TCP segmentation offload globally and enlarge the TCP buffers
net.inet.tcp.tso=0
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
Regards,
Jorge
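Whether the loader actually picked these up can be checked after a reboot; kenv prints variables set via loader.conf (a sketch):
# each should echo back the value from loader.conf.local
kenv hw.em.rxd
kenv hw.em.txd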
Quote from: jorgegmayorgas on February 02, 2018, 06:20:46 PM
Just in case it helps Intel NIC owners... I found that these values for an HP NC360T based on the Intel 82571EB (em driver) make network performance OK, at least for me. (See the /boot/loader.conf.local settings above.)
I have a SuperMicro A1SAi C2758 motherboard with 4x Intel i354 Gigabit Ethernet NICs. Can I use the above settings in the driver config, or should I use something else?
The i354 uses the igb driver, not em or bce, so the em and bce references would have to change to igb; see the sketch below.
Use the guide here for igb: https://forum.opnsense.org/index.php?topic=6590.0
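As a rough starting point, an igb(4) analogue of the em settings above might look like the following; hw.igb.rxd and hw.igb.txd are the ring-size tunables from the igb(4) man page, but these exact values are untested guesses, so check them against that guide:
# /boot/loader.conf.local - hypothetical igb(4) translation, untested
hw.pci.enable_msix=1
hw.pci.enable_msi=1
# igb(4) ring-size tunables (counterparts of hw.em.rxd/hw.em.txd)
hw.igb.rxd=4096
hw.igb.txd=4096
net.inet.tcp.tso=0
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288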