OPNsense Forum

English Forums => Intrusion Detection and Prevention => Topic started by: dcol on December 08, 2017, 05:13:30 pm

Title: Performance tuning for IPS maximum performance
Post by: dcol on December 08, 2017, 05:13:30 pm
I have researched and tested tunables because I experienced too many downed links and poor performance when using IPS/Inline on the WAN interface, which could no longer be ignored. This file, loader.conf.local, along with adding some system tunables in the WebGUI, has fixed this for me, so I thought I would share it with the OPNsense community. Sharing is what makes an open-source project successful. Share your experiences using the info in this post. You may or may not see much performance improvement depending on your hardware, but you will see fewer dropped connections. If you have any other tunable recommendations, please share and post those experiences here. This thread is for performance tuning ideas.

The biggest impact came from the Flow Control (FC) setting. FC is a layer 1 mechanism that inserts pause frames before data is transmitted. My assumption is that netmap has issues with FC, which causes the dropped connections. Recommendations from many sources, including Cisco, suggest disabling FC altogether and letting the higher layers handle the flow. There are exceptions, but these usually involve ESXi, VMware and other special applications.

I have done all my testing using an Intel i350-T4 and i340-T4, common NICs used for firewalls, in 4 different systems and, by the way, neither NIC had any performance advantage. I have tested these systems for 5 days without experiencing any downed links after the changes were made. Without these changes, every system was plagued with downed WAN links and poor performance using the default settings.

Do not use this file if you are not using an igb driver. igb combined with other drivers is OK as long as you have at least one igb NIC, and I recommend you use igb for all WAN interfaces.

Add the file below in the '/boot' folder and call it 'loader.conf.local', right beside 'loader.conf'. I use WinSCP, in a Windows environment, as a file manager to get easy access to the folders. Don't forget to enable Secure Shell. I have tried using the 'System Tunables' in the WebGUI to add these settings; some worked and some didn't using that method, and I'm not sure why. Better to just add this file. If you're a Linux guru (I am not), use your own methods to add this file.

The two most IMPORTANT things to ensure are that power management is disabled in the OPNsense settings and also in the BIOS settings of the system (thanks wefinet), and that flow control (IEEE 802.3x) is disabled on all ports. It is advisable not to connect an IPS interface to any device which has flow control on. Flow control should be turned off to allow congestion to be managed higher up in the stack.

Please test all tunables in a test environment before you apply to a production system.

# File starts below this line, use Copy/Paste #####################
# Check for interface specific settings and add accordingly.
# These are tunables to improve network performance on Intel igb driver NICs

# Flow Control (FC) 0=Disabled 1=Rx Pause 2=Tx Pause 3=Full FC
# This tunable must be set according to your configuration. VERY IMPORTANT!
# Set FC to 0 (<x>) on all interfaces
dev.igb.<x>.fc=0 #Also put this in System Tunables dev.igb.<x>.fc: value=0

# Set number of queues to number of cores divided by number of ports. 0 lets FreeBSD decide
dev.igb.num_queues=0

# Increase packet descriptors (valid values: 1024, 2048, or 4096 ONLY)
# Allows a larger number of packets to be processed.
# Use "netstat -ihw 1" in the shell and make sure the idrops stay zero;
# if they are not zero, or the NIC has constant disconnects, lower this value.
dev.igb.rxd="4096" # For i340/i350 use 2048
dev.igb.txd="4096" # For i340/i350 use 2048
net.link.ifqmaxlen="8192" # Set to the sum of rxd + txd above. For i340/i350 use 4096

# Enable Adaptive Interrupt Moderation (improves network efficiency)
dev.igb.enable_aim=1

# Increase interrupt rate
dev.igb.max_interrupt_rate="64000"

# Network memory buffers
# Run "netstat -m" in the shell; if the 'denied' and 'delayed' mbuf counters are all 0/0/0, this is not needed.
# If not zero, keep adding 400000 until they reach zero.
kern.ipc.nmbclusters="1000000"

# Fast interrupt handling
# Normally set by default. Use these settings to ensure it is on.
# Allows NIC to process packets as fast as they are received
dev.igb.enable_msix=1
dev.pci.enable_msix=1

# Unlimited packet processing
# Use this only if you are sure that the NICs have dedicated IRQs
# View the IRQ assignments by executing this in the shell "vmstat -i"
# A value of "-1" means unlimited packet processing
dev.igb.rx_process_limit="-1"
dev.igb.tx_process_limit="-1"
###################################################
# File ends above this line ##################################
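Since loader.conf.local is only read at boot, it is worth sanity-checking the file before rebooting. A minimal sketch (the file path is the one above; the key=value pattern is my assumption about how the file is written, and a leftover <x> placeholder will be flagged as malformed):

```shell
# Check that every non-comment, non-blank line of loader.conf.local looks
# like a simple key=value or key="value" assignment (trailing # comments ok).
f=${1:-/boot/loader.conf.local}
bad=$(grep -Ev '^[[:space:]]*(#|$)' "$f" 2>/dev/null \
      | grep -Evc '^[A-Za-z0-9._]+="?[^"]*"?[[:space:]]*(#.*)?$' || true)
echo "malformed lines: $bad"
```

A count of 0 means every non-comment line looks like a tunable assignment.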

##UPDATE 12/12/2017##
After testing I have realized that some of these settings are NOT applied via loader.conf.local and must be added via the WebGUI under System > Settings > Tunables. I have moved them from the file above to this list.
Add to Tunables

Disable Energy Efficient Ethernet (EEE) - set for each igb port in your system
This setting can cause link-flap errors if not disabled
Set it for every igb interface in the system as per these examples
dev.igb.0.eee_disabled: value=1
dev.igb.1.eee_disabled: value=1
dev.igb.2.eee_disabled: value=1
dev.igb.3.eee_disabled: value=1

IPv4 Fragments - 0=Do not accept fragments
This is mainly needed for security: fragmentation can be used to evade packet inspection
net.inet.ip.maxfragpackets: value=0
net.inet.ip.maxfragsperpacket: value=0

Set FC to 0 for every port (<x>) used by IPS
dev.igb.<x>.fc: value=0

##UPDATE 1/16/2018##
Although the tuning in this thread so far just deals with the tunables, there are other settings that can impact IPS performance. Here are a few...

In the Intrusion Detection Settings Tab.

Promiscuous mode - to be used only when multiple interfaces or VLANs are selected in the Interfaces setting.
It makes IPS capture data on all the selected interfaces. Do not enable it if you have just one interface selected; leaving it off helps performance.

Pattern matcher: selects the algorithm used for pattern matching. This is best determined by testing; Hyperscan seems to work well with Intel NICs. Try different ones and test the bandwidth with an internet speed test.

Home networks (under the advanced menu):
Make sure the entries match your actual local networks. You may want to change the generic 192.168.0.0/16 to your actual local network, e.g. 192.168.1.0/24

###################################################
USEFUL SHELL COMMANDS
sysctl net.inet.tcp.hostcache.list # View the current host cache stats
vmstat -i # Query total interrupts per queue
top -H -S # Watch CPU usage
dmesg | grep -i msi # Verify MSI-X is being used by the NIC
netstat -ihw 1 # Look for idrops to tune the rxd/txd descriptor values
grep <interface> /var/run/dmesg.boot # Shows useful info like netmap queue/slots
sysctl -A # Shows system variables
###################################################
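As a sketch of how to read the "netstat -ihw 1" output mentioned above, here is a portable snippet that pulls the input-drop column out of a sample data line (the column layout is my assumption; verify it against the header row on your own system):

```shell
# Sample data line from `netstat -ihw 1` (assumed FreeBSD column layout:
# input packets, errs, idrops, bytes, then output packets, errs, bytes, colls).
sample="  52998     0     0   70815849      31507     0   3448997     0"
idrops=$(echo "$sample" | awk '{print $3}')
echo "idrops=$idrops"   # prints: idrops=0
```

A persistently non-zero idrops count is the signal to lower the rxd/txd descriptor values.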
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on December 08, 2017, 08:19:13 pm
Thanks for sharing!!
What were the results before and after Tuning?
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on December 08, 2017, 09:35:29 pm
When flow control was set, the link wouldn't stay up long enough to get a reading.
Then I tested all the other settings on a line rated 300/30 (download/upload).
Using speedtest.net with the settings I get consistent readings of 311/31 to 315/32 (10 tests).
With the default settings, without changing FC, I got inconsistent readings varying from 230/20 to 308/28 (also 10 tests). Most tests were below 275/25.

The settings make a difference. Try it.
Title: Re: Performance tuning for IPS maximum performance
Post by: fabian on December 08, 2017, 09:41:06 pm
@dcol Do you want me to make this sticky?
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on December 08, 2017, 09:41:55 pm
Most definitely! I hope we get some feedback on this with other results

#UPDATE#
I have added some more descriptions and some tests to the original post. Enjoy!
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on December 08, 2017, 10:54:02 pm
Are you really sure the FC values did the trick and not the others?
Normally FC will badly influence your network with TCP, and most switches don't support it either (in both directions).

Would be really interesting; I never did any testing on BSD :)
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on December 08, 2017, 11:04:01 pm
Actually, I am in the process of figuring out how to determine whether flow control is enabled on a device. Unfortunately ethtool is not part of the distro, so I cannot check it yet. It would be nice to have ethtool available as an add-on package.

The command 'ethtool --show-pause igb0' would show if RX or TX was off (no FC) or on (FC enabled).

For me, when FC is enabled on the WAN, the link drops a lot. I spoke with the ISP and they confirmed that there is no FC on the bridged connection.

Most modern unmanaged switches do support flow control (802.3x), and it is selectable on managed switches and most NICs.

Also, the netmap documentation suggests that flow control can negatively affect performance.
https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4#end
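Lacking ethtool, FreeBSD exposes the igb pause-frame mode directly through the dev.igb.<x>.fc sysctl used earlier in this thread. A small helper sketch to translate the numeric value (the fc_desc function is hypothetical; the 0-3 meanings come from the tunable comments in the first post):

```shell
# Translate dev.igb.<x>.fc values into the pause-frame modes they select.
fc_desc() {
  case "$1" in
    0) echo "flow control disabled" ;;
    1) echo "rx pause only" ;;
    2) echo "tx pause only" ;;
    3) echo "full flow control (rx+tx pause)" ;;
    *) echo "unknown value: $1" ;;
  esac
}
# On the firewall itself you would feed it the live value, e.g.:
#   fc_desc "$(sysctl -n dev.igb.0.fc)"
fc_desc 0   # prints: flow control disabled
```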
Title: Re: Performance tuning for IPS maximum performance
Post by: franco on December 15, 2017, 07:06:18 am
Actually I am in the process to figure out how to determine if flow control is enabled or not on a device. Unfortunately ethtool is not part of the distro, so I cannot figure it out yet. Would be nice to have ethtool available as an add-on package.

Not aware of a FreeBSD sibling here, sorry. :(

Old mailing list threads only suggest sysctl like you found:

https://lists.freebsd.org/pipermail/freebsd-net/2012-July/032868.html


Cheers,
Franco
Title: Re: Performance tuning for IPS maximum performance
Post by: Noctur on December 16, 2017, 04:18:45 pm
Thank you dcol for doing this work and sharing...

Does anyone know, or has anyone tried, these tunables with em NICs/drivers? No igb in my box, but I'd like to test.

TIA
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on December 16, 2017, 05:45:07 pm
The following settings will work for the em driver

Put in loader.conf.local
# Flow Control (FC) 0=Disabled 1=Rx Pause 2=Tx Pause 3=Full FC
# This setting must be set according to your configuration. VERY IMPORTANT!
# Set FC to 0 on every interface (<x>) used by IPS
hw.em.<x>.fc=0 # Also put in System Tunables: hw.em.<x>.fc: value=0

hw.em.rx_process_limit=-1
hw.em.enable_msix=1
hw.em.txd=2048
hw.em.rxd=2048
net.link.ifqmaxlen="4096"

Put in Settings>System Tunables
hw.em.eee_setting:  value=0
dev.em.<x>.eee_control: value=0 # replace <x> with interface#, repeat for all installed ports
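As in the igb section, the net.link.ifqmaxlen value above follows the rule of thumb of summing the rx and tx descriptor counts. A quick arithmetic check (the values are the em numbers from above):

```shell
# ifqmaxlen = rxd + txd, per the rule of thumb used earlier in the thread.
rxd=2048
txd=2048
echo "net.link.ifqmaxlen=\"$((rxd + txd))\""   # prints: net.link.ifqmaxlen="4096"
```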
Title: Re: Performance tuning for IPS maximum performance
Post by: franco on January 17, 2018, 10:54:21 pm
I thought I'd drop this link regarding previous discussions so that it is not forgotten and can be prodded further. Thanks for your work here. <3

https://github.com/opnsense/core/issues/2083
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on January 29, 2018, 12:32:07 pm
The following settings will work for the em driver

Put in loader.conf.local
# Flow Control (FC) 0=Disabled 1=Rx Pause 2=Tx Pause 3=Full FC
# This setting must be set according to your configuration. VERY IMPORTANT!
# Set FC to 0 on every interface (<x>) used by IPS
hw.em.<x>.fc=0 # Also put in System Tunables: hw.em.<x>.fc: value=0

Is this really hw.driver.number? I can only find dev.driver.number in sysctl ..
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on January 29, 2018, 04:15:43 pm
These were the only em settings in my sysctl:
hw.em.eee_setting: 1
hw.em.rx_process_limit: 100
hw.em.enable_msix: 1
hw.em.sbp: 0
hw.em.smart_pwr_down: 0
hw.em.txd: 1024
hw.em.rxd: 1024
hw.em.rx_abs_int_delay: 66
hw.em.tx_abs_int_delay: 66
hw.em.rx_int_delay: 0
hw.em.tx_int_delay: 66
hw.em.disable_crc_stripping: 0

I did see some dev.em settings in the pfSense sysctl but not in OPNsense.
More settings may show up if you have an em driver active; the pfSense box did have one active em device.

I would also put these in the tunables
hw.em.eee_setting   value=0
dev.em.<x>.eee_control   value=0  (<x> being the IPS interface #)
Then recheck sysctl and make sure they changed


Tunables are a trial-and-error thing, but it certainly can't hurt to disable any em.eee setting.
Title: Re: Performance tuning for IPS maximum performance
Post by: nines on February 12, 2018, 09:14:15 pm
Does anyone know if there are any special tunables or FC settings for vmxnet3 drivers?

thanks!
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on February 12, 2018, 09:27:12 pm
The best thing to do is take a look at your sysctl output using "sysctl -A" in a shell.
Then see which drivers are in there.
That gives you a good idea of which drivers you can manipulate.
For example:
hw.em.txd is for an Intel driver
hw.igb.txd is for an Intel driver
hw.re.txd is for a Realtek driver
and so on.
Title: Re: Performance tuning for IPS maximum performance
Post by: elektroinside on February 20, 2018, 12:07:24 pm
Took another closer look at these after analyzing my settings, and reconfigured my box.
They might work; I can't really "feel" if they made much of a difference, but it kinda feels faster.
What I did notice, though, is that my CPU usage dropped considerably after applying your settings.
When first trying them out I must have misconfigured something, as things got significantly worse; I think that was my fault for not paying the necessary attention.

Thanks dcol for your work!

Title: Re: Performance tuning for IPS maximum performance
Post by: elektroinside on February 26, 2018, 10:32:41 am
An important observation:

These settings (if correctly applied) enhance not only IDS/IPS performance but OpenVPN as well. OpenVPN performs significantly better without any other configured parameters!
Title: Re: Performance tuning for IPS maximum performance
Post by: Evil_Sense on April 10, 2018, 12:57:45 am
I implemented the tuning settings on my apu2c4.

I added the settings to the tunables in the GUI and used the mentioned commands to check before and afterwards, without noticing anything odd.

Like elektroinside said, it feels faster, but it seems that CPU and memory usage have slightly increased, and sometimes the system feels slower than usual.
I'm not sure if this comes from the rx/tx packet descriptor size of 4096 or the high max interrupt rate of 64000, but it seems a bit too heavy for the little apu2c4 :D.

Suggestions are welcome :)
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on April 10, 2018, 01:02:26 am
Some of the tunables and settings do come with a resource price. Try reducing the interrupt rate. The queue size is a NIC-dependent setting and depends on the buffer size in the NIC itself.
Title: Re: Performance tuning for IPS maximum performance
Post by: Evil_Sense on April 10, 2018, 01:15:13 am
Some of the tunables and settings do come with a resource price. Try reducing the interrupt rate. The queue size is a NIC dependent setting and depends of the buffer size in the NIC itself.
Thanks, will try an interrupt value of 42000 and see if it gets a bit better :)
Title: Re: Performance tuning for IPS maximum performance
Post by: jenmonk on April 12, 2018, 05:18:53 am
With IPS/IDS my internet speed drops to 60 Mbps from 300 Mbps.
I want to try your suggestions. I'd appreciate it if you could let me know how to check which ports are used by IPS:
"Set to 0 (<x>) for every port used by IPS
dev.igb.<x>.fc: value=0"

I followed "Fast and easy way to protect your home and/or small office network with OPNsense" for my initial setup.
Thanks
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on April 12, 2018, 11:42:02 am
I did some testing in a 10G Lab:

#####################################
OPNsense 18.1.6-amd64
FreeBSD 11.1-RELEASE-p9
OpenSSL 1.0.2o 27 Mar 201

Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz (8 cores)

16GB RAM

Intel X520DA SFP+

Suricata, 11700 Rules enabled

Tests with iperf3:
iperf3 -p 5000 -f m -V -c 10.0.2.10 -t 30 -P 10 -w 12M
#####################################

No IPS enabled: 9400Mbit (30% CPU load)
IDS enabled: 9400Mbit (45% CPU load)
IPS enabled (Default Pattern matcher): 550Mbit (17% CPU load)
IPS enabled (Hyperscan): 1400Mbit (17% CPU load)

Title: Re: Performance tuning for IPS maximum performance
Post by: jenmonk on April 12, 2018, 07:44:13 pm

Appreciate all the help
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on April 21, 2018, 11:00:17 pm
Some of the tunables and settings do come with a resource price. Try reducing the interrupt rate. The queue size is a NIC dependent setting and depends of the buffer size in the NIC itself.
Thanks, will try with interrupt value of 42000 and see if it gets a bit better :)
Hi Evil_Sense,
after changing to the 42000 value, have you noticed any change in speed?
I am willing to configure this on a production system soon; with IDS activated we drop from 1024 Mbps to about 400 Mbps.

Title: Re: Performance tuning for IPS maximum performance
Post by: Evil_Sense on April 21, 2018, 11:29:38 pm
after changing the 42000 value, have you noticed some changes / speed ?
i am willing to get this configured on a production soon as we are from 1024MB when IDS is activated we reach 400MB
Well, with 42000 I got a reasonable balance between resource usage and (at least I hope) good/better networking performance.
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on April 22, 2018, 10:32:13 pm
Well, with 42000 I got a reasonable balance between resource usage and (at least I hope) good/better networking performance.
Can you share the values? How much do you get before and after IDS is activated?
I am willing to configure this, but the firewall is not near me; if things get messed up I will need to travel about 4 hours each way.
Title: Re: Performance tuning for IPS maximum performance
Post by: Evil_Sense on April 23, 2018, 12:45:39 am
Can you share the value ? how much is before and after the IDS is activated ?
i am willing to configure this as the firewall is not near to me, if things missed up i will need to travel like 4 hrs go and 4 hr back.
I don't use IDS, so I can't comment on it.
Since I didn't write down the original settings and didn't run speed tests before and after, I'm not really able to provide reliable values. I could, however, try removing the settings tomorrow and measuring against the current state.
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on April 23, 2018, 10:15:40 pm
I don't use IDS, so I can't give a statement on it.
Since I didn't write down the original settings and didn't make speed tests before and after, I'm not really able to provide reliable values. I could however try to remove the settings and measuring against the current state tomorrow.

Thank you,
if you could do that I'd appreciate it.
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on April 25, 2018, 01:20:33 am
I have IDS enabled using only one rule, "abuse.ch/SSL IP Blacklist"; after running a speed test, the speed drops significantly.

The hardware is:
Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz (4 cores)
Memory 16 % ( 1301/8054 MB )
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on April 25, 2018, 05:53:32 am
IDS or IPS?
Do you use Hyperscan?
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on April 25, 2018, 01:09:29 pm
IDS or IPS?
Do you use Hyperscan?
Yes, I am using Hyperscan, and using Intrusion Detection with IPS mode on; see screenshot.
Title: Re: Performance tuning for IPS maximum performance
Post by: Evil_Sense on April 25, 2018, 03:35:17 pm
I finally found time for some tests..

I first tested with the tunables on a system that had been running for a couple of weeks.

I then removed the tunables, rebooted, waited for 5 minutes and tested again.

Lastly I added the tunables again, rebooted, waited for 5 minutes and tested again.

As you can see, the results are within tolerance; this could be because my provider connection doesn't saturate the NIC capacity of my apu2c4.
Title: Re: Performance tuning for IPS maximum performance
Post by: Evil_Sense on April 25, 2018, 03:36:26 pm
I also attached utilization screenshots, with the tunables it's higher, but since I don't mind using the hardware a bit more I'm ok with that.

(Second post, because only 4 pictures are allowed per post)
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on April 26, 2018, 03:04:24 pm
Thank you, Evil_Sense, for your answer.
As I understand it, you didn't really notice a difference in speed, but rather in the hardware use.
I don't mind using the hardware; that's why we have it there :)
Title: Re: Performance tuning for IPS maximum performance
Post by: neoso on May 30, 2018, 09:22:41 am
Some of the tunables and settings do come with a resource price. Try reducing the interrupt rate. The queue size is a NIC dependent setting and depends of the buffer size in the NIC itself.

Hi,

Is it possible to post your config for the APU2C4?

I read the tutorial, but when I insert the config in loader.conf, the config is lost on reboot.

I have FTTH 600/600 Mbps.
IPS/IDS active: 100/100 Mbps
IDS/IPS not active: 300/600 Mbps

Is it possible that the APU2C4 is simply too weak?

I have ordered a QOTOM Core i7 with 8 GB RAM on AliExpress.

Do you think installing pfSense would improve this on the APU2C4?
Title: Re: Performance tuning for IPS maximum performance
Post by: xmichielx on August 02, 2018, 10:54:36 am
The config should go in loader.conf.local, and some of it in the tunables.
I tried it for the APU2C4 but still max out at ~10/11 MB/s with Suricata inline; Snort with some pf magic (pfSense) gives the full bandwidth.
It's not a true inline IPS but works pretty well for home usage.
Perhaps one day, when home hardware (like the APU2C4, which is quad core with 4 GB memory) works nicely with Suricata, I will switch; until then I use Snort, since losing 60% of your bandwidth is just not worth it.
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on August 02, 2018, 10:57:02 am
The config should be in loader.conf.local and some in the tunables.
I tried it for the APU 2C4 but still max ~10/11 MB/s with Suricata inline, Snort with some PF magic (PFSense) gives the full bandwidth.
It's not a true inline IPS but works pretty good for home usage.
Perhaps one day when home hardware (like the APU2c4 which is quad core with 4 GB memory) works nicely with Suricata I will switch, untill then I use Snort since losing 60% of your bandwidth is just not worth it.

How many rules do you run on Snort vs. Suricata? Can you try changing the scan engine?
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on August 02, 2018, 04:07:45 pm
Two points:
OPNsense does not have Snort; OPNsense was built around Suricata and optimized for it.
Some Snort rules are not compatible with Suricata.
Title: Re: Performance tuning for IPS maximum performance
Post by: xmichielx on August 09, 2018, 10:17:17 am
The config should be in loader.conf.local and some in the tunables.
I tried it for the APU 2C4 but still max ~10/11 MB/s with Suricata inline, Snort with some PF magic (PFSense) gives the full bandwidth.
It's not a true inline IPS but works pretty good for home usage.
Perhaps one day when home hardware (like the APU2c4 which is quad core with 4 GB memory) works nicely with Suricata I will switch, untill then I use Snort since losing 60% of your bandwidth is just not worth it.

How many rules do you run on Snort vs Suricata? Can you try changing the Scan engine?

The same amount; I use the ET Open rules, which work for both Snort and Suricata.
I tried enabling anywhere from 1 to 15 rules - no difference.
I also tried changing the scan engine; Hyperscan has the best performance (Intel NICs are used on the APU2) but no gain there.
Title: Re: Performance tuning for IPS maximum performance
Post by: xmichielx on August 09, 2018, 10:19:59 am
Two point.
OPNsense does not have Snort. OPNsense was built optimizing Suricata.
Some Snort rules are not compatible with Suricata.

I never said that OPNsense has Snort; that is why I use/used pfSense.
I know that some Snort rules are incompatible with Suricata; I use the supplied ET Open rules and they work for both IDS/IPS.
That's still not related to the performance hit on the APU2. Actually, I cannot find a single post where someone says they kept 75%-100% of their bandwidth after enabling Suricata inline (this has nothing to do with OPNsense but is related to Suricata and its scanning engine, which caps bandwidth inline when used on 'smaller' hardware for home use).
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on August 09, 2018, 10:55:05 am
I think this is somewhat clock-speed related, which has a bigger impact on slow hardware.
No idea how Snort on pfSense works; perhaps it adds a pf rule after a match, which doesn't require true inline processing, so it might be more performant on smaller hardware, but that's just a guess.


I'm not against building a Snort plugin, but I'm not sure it's worth the work, since IPS for home use is debatable (my personal opinion).
Title: Re: Performance tuning for IPS maximum performance
Post by: xmichielx on August 09, 2018, 11:36:08 am
I must nuance my 'rant' about Suricata: after enabling just the rulesets that are most necessary for me (i.e. trojan, malware, mobile_malware, exploit) and using Hyperscan, I get a more reasonable ~14-16 MB/s (where 22 MB/s is my max), which is acceptable for me.
I now have the benefit of a NIDS/IPS blocking/filtering on the LAN/GUEST_VLAN interfaces while still retaining most of my bandwidth.
So a big tip for all APU2 users: use the Hyperscan scan engine and choose only what is necessary.
I did not use any of the tweaks except the ones mentioned above :)
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on August 09, 2018, 04:02:23 pm
Using only the rules that are 'necessary' is always the proper method. It just takes some homework. If your internal LAN is trusted, then you don't need to use IDS on it. Logic is always the best approach.
Title: Re: Performance tuning for IPS maximum performance
Post by: xmichielx on August 10, 2018, 09:42:58 am
I use the IPS mainly for my LAN/Guest VLAN since I want to detect malware. But I can understand that people also use it in front of their servers etc.
PS: changing the Home networks from the 3 private ranges to only 192.168.0.0/16 also seems to affect the bandwidth (+/- 1 or 2 MB/s gain!)
Title: Re: Performance tuning for IPS maximum performance
Post by: Julien on November 17, 2018, 02:06:28 am
I use the IPS mainly for my LAN/Guest VLAN since I want to detect malware. But I can understand that people also use it on front of their servers etc.
PS changing the networks from 3 private ranges to only 192.168.0.0/16 seems also to effect the bandwith (+/- 1 or 2 MB/s profit!)
Our internal LAN is trusted, as it's clean and we know what is running internally.
Do you mean we do not need to use IDS for this? We do have some servers behind it and want them to be protected.

We keep getting one alert for the IP 150.109.50.77 on port 25, in and out, and the action is "allowed".

Code:
Timestamp 2018-11-17T01:58:28.386557+0100
Alert SURICATA SMTP data command rejected
Alert sid 2220008
Protocol TCP
Source IP 2.51.55.22
Destination IP 150.109.50.77
Source port 25
Destination port 35064
Interface wan
Any suggestions on how to treat this alert?
Title: Re: Performance tuning for IPS maximum performance
Post by: massaquah on November 20, 2018, 11:09:00 am
I recently got an upgrade of my internet bandwidth from 200/50 Mbit to 1000/50 Mbit.

Sadly, my initial speed tests only resulted in 160/50 Mbit.

I quickly identified Suricata with IPS activated as the bottleneck. I tried every combination: Hyperscan vs. Aho-Corasick, activating Suricata on LAN (igb), LAN+WAN, WAN (em), and every performance tuning rule described in the first post of this thread, but I still got only around 160/50 with IPS enabled.

I also noticed that the Suricata process uses 100% of one CPU core during speed tests, whereas the remaining three cores were idling.
Also, disabling most of the rules resulted in a "successful" speed test of 950/50 Mbit.

So my question is: why doesn't Suricata make use of all four cores? Why is the clock speed of a single core the bottleneck here? From what I understood reading about Suricata, it should be capable of multithreading.

Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on November 20, 2018, 11:29:14 am
What's your hardware? It always depends on hardware ...
Title: Re: Performance tuning for IPS maximum performance
Post by: massaquah on November 20, 2018, 12:16:17 pm
Intel Pentium G4560T (2 cores, 4 threads) at 2.90 GHz + 8 GB RAM.

But apart from the clock speed, why is only one core being used by Suricata?
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on November 20, 2018, 12:54:17 pm
ps aufxH (the H is important; it lists the individual threads)
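The point of the H flag is that ps(1) then prints one line per thread, which is how you can see whether Suricata actually spawned multiple workers. A hypothetical sketch — the sample ps lines below are made up so the snippet runs anywhere; on the firewall you would pipe the real `ps aufxH` output instead:

```shell
# Count Suricata worker threads from ps output. On OPNsense you would run:
#   ps aufxH | grep '[s]uricata' | wc -l
# The canned sample below stands in for real ps output.
sample='root 8211 95.0 suricata: W#01 (suricata)
root 8211  3.0 suricata: W#02 (suricata)
root 8211  0.0 suricata: FM#01 (suricata)'
threads=$(printf '%s\n' "$sample" | grep -c 'suricata')
echo "suricata threads seen: $threads"
```

If the count is 1, Suricata is effectively single-threaded on that box, which would explain one pegged core.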
Title: Re: Performance tuning for IPS maximum performance
Post by: Sahbi on February 12, 2019, 09:39:03 pm
Had some severe performance issues after enabling IPS mode, like barely saturating 50% of my ISP connection (supposed to be 250/25 Mbps). So I figured I'd chime in with some of my experiences. I'm assuming that since I have an APU4C4 with i211AT NICs, flow control is set to 3 (Full), since the NIC seems to support that according to this here datasheet (https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/i211-ethernet-controller-datasheet.pdf). Also I'm using speedtest.net because it's still the most popular one and at least they have decently connected servers close to me, unlike e.g. Google which goes all the way to damn Atlanta. I always used the same server, as well as the relatively new "multi" feature. I'm also running the speedtests from a computer behind OPNsense and not from the box itself. Finally, I have pretty much everything enabled at this point, including a transparent HTTPS proxy which requires me to disable hardware offloading for some networking stuff.

First, let's list the rulesets I have in use. Now, I'm not that familiar with OPNsense nor Suricata yet, so I'm not entirely sure if the data below is "clean", but it should be close enough.
Code: [Select]
root@opn:/usr/local/etc/suricata/rules # ls *.rules
OPNsense.rules emerging-icmp_info.rules
abuse.ch.feodotracker.rules emerging-imap.rules
abuse.ch.sslblacklist.rules emerging-info.rules
abuse.ch.sslipblacklist.rules emerging-malware.rules
abuse.ch.urlhaus.rules emerging-misc.rules
botcc.portgrouped.rules emerging-mobile_malware.rules
botcc.rules emerging-rpc.rules
ciarmy.rules emerging-scan.rules
compromised.rules emerging-shellcode.rules
drop.rules emerging-smtp.rules
dshield.rules emerging-sql.rules
emerging-activex.rules emerging-trojan.rules
emerging-attack_response.rules emerging-user_agents.rules
emerging-current_events.rules emerging-web_client.rules
emerging-deleted.rules emerging-web_server.rules
emerging-dns.rules emerging-web_specific_apps.rules
emerging-dos.rules emerging-worm.rules
emerging-exploit.rules opnsense.test.rules
emerging-ftp.rules opnsense.uncategorized.rules
emerging-icmp.rules

root@opn:/usr/local/etc/suricata/rules # cat *.rules | sed 's/^ *#.*//' | sed '/^ *$/d' | wc -l
   41614

The rules are divided about 50/50 in regards to drop/alert actions, but I don't think that matters for performance because it has to log stuff regardless.

This is before applying any of the tunables mentioned in the OP (at my speeds I don't care about decimals, so I'll just round that shit):
I read somewhere on these forums that Hyperscan is preferred in most cases, so I had that active, which caused a significant performance drop compared to A-C. So this was the cause of my issues, at least at the moment. :>

After running sysctl dev.igb.<x>.fc=0 for all interfaces (no need to reboot for these, so I figured I'd just go ahead and try):
A slight improvement for both algos, with Hyperscan closing the most distance. RAM usage for both tests stayed pretty much the same; there's currently 50% in use after having been a day in full production. Also, after every reboot I waited for the startup beep to go off, then checked with top to see if any startup stuff was still running. Only when everything had calmed down did I proceed with the next test.
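Since the `dev.igb.<x>.fc` sysctls take effect at runtime without a reboot, the per-interface commands can be generated in one loop. A dry-run sketch assuming four igb ports (igb0–igb3); it only prints the commands, so it is safe to paste anywhere:

```shell
# Build the flow-control commands for igb0-igb3 without executing them.
# On the firewall, pipe the output into sh, or drop the indirection and
# run sysctl directly.
cmds=""
for i in 0 1 2 3; do
  cmds="${cmds}sysctl dev.igb.${i}.fc=0
"
done
printf '%s' "$cmds"
```

Adjust the `0 1 2 3` list to match the igb interfaces actually present on your box.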

Now let's try some more tunables:
Code: [Select]
### loader.conf.local

# Flow Control (FC): 0 = Disabled, 1 = Rx Pause, 2 = Tx Pause, 3 = Full FC
hw.igb.0.fc=0
hw.igb.1.fc=0
hw.igb.2.fc=0
hw.igb.3.fc=0

# Set number of queues to number of cores divided by number of ports, 0 lets FreeBSD decide (should be default)
hw.igb.num_queues=0

# Increase packet descriptors (set as 1024, 2048 or 4096 ONLY)
hw.igb.rxd="4096" # Default = 1024
hw.igb.txd="4096"
net.link.ifqmaxlen="8192" # Sum of above two (default = 50)

# Increase network efficiency (Adaptive Interrupt Moderation, should be default)
hw.igb.enable_aim=1

# Increase interrupt rate # Default = 8000
hw.igb.max_interrupt_rate="64000"

# Fast interrupt handling, allows NIC to process packets as fast as they are received (should be default)
hw.igb.enable_msix=1
hw.pci.enable_msix=1

# Unlimited packet processing
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"

### WebGUI > System > Settings > Tunables

# Disable Energy Efficient Ethernet
dev.igb.0.eee_disabled=1
dev.igb.1.eee_disabled=1
dev.igb.2.eee_disabled=1
dev.igb.3.eee_disabled=1

# Set Flow Control
hw.igb.0.fc=0
hw.igb.1.fc=0
hw.igb.2.fc=0
hw.igb.3.fc=0

dev.igb.0.fc=0
dev.igb.1.fc=0
dev.igb.2.fc=0
dev.igb.3.fc=0

# Do not accept IPv4 fragments
net.inet.ip.maxfragpackets=0
net.inet.ip.maxfragsperpacket=0

And reboot. =]

RAM usage is still hovering fine and dandy around 45%.
Now one thing I also noticed while watching top -HS is that Suricata no longer takes an entire core + a bit from the second, but instead distributes its load over 3 cores with the total load being around 180% (out of 400%). It also feels like the web interface is "snappier"; the dashboard page used to take quite some time to load but it's mucho faster now.



So it seems that just disabling flow control brings some slight improvements already, but Hyperscan in particular benefits hugely from adjusting hw.igb.rxd/txd, net.link.ifqmaxlen and hw.igb.max_interrupt_rate. Apparently with newer BSDs (like 10.x onwards) there's a newer driver which reduces the amount of interrupts significantly (https://calomel.org/freebsd_network_tuning.html), so you can probably just set it to 16000 and get the same results. I'm routing a lot of stuff due to a complex homelab setup, so I'll just leave it at 64k for now. =] Probably worth mentioning too: my lil' APU's CPU temps have never gone over 60°C so far, and after a cold boot they start at around 59.

Since the difference between A-C and HS at this point is negligible and most likely just the result of tiny factors such as other services happening to check in at the time, I'm satisfied with the current settings and will end my tunables testing here. For shits and giggles I did run an iperf just now, from the same computer behind OPN to a VPS with gigabit in the same country:
Code: [Select]
$ iperf -c vps1 -p 4712 -u -t 60 -i 10 -b 1000M
------------------------------------------------------------
Client connecting to vps1, UDP port 4712
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.11 GBytes   954 Mbits/sec
[  5] 10.0-20.0 sec  1.11 GBytes   952 Mbits/sec
[  5] 20.0-30.0 sec  1.11 GBytes   954 Mbits/sec
[  5] 30.0-40.0 sec  1.11 GBytes   953 Mbits/sec
[  5] 40.0-50.0 sec  1.11 GBytes   955 Mbits/sec
[  5]  0.0-60.0 sec  6.66 GBytes   953 Mbits/sec
[  5] Sent 4864635 datagrams

Suricata takes a little less than 1 core and the temps are still around 59C. :>
Title: Re: Performance tuning for IPS maximum performance
Post by: juliocbc on March 01, 2019, 05:14:55 pm
After applying the tunables, I did some tests here, but something went wrong! :-(

My Lab hardware:
OPNsense 18.7.10_4
hw.model: Intel(R) Atom(TM) CPU  C2758  @ 2.40GHz
hw.machine: amd64
hw.ncpu: 8
16GB RAM
Intel i210AT

When I pressed ENTER to start the iperf tests, the system crashed.
Client's iperf params:
Code: [Select]
iperf -p 5201 -c 192.168.1.99 -u -b 10m -P 100 -d -t 60
Code: [Select]
Tracing command kernel pid 0 tid 100162 td 0xfffff8001ffb1560
sched_switch() at sched_switch+0x4aa/frame 0xfffffe0467a1daa0
mi_switch() at mi_switch+0xe5/frame 0xfffffe0467a1dad0
sleepq_wait() at sleepq_wait+0x3a/frame 0xfffffe0467a1db00
_sleep() at _sleep+0x255/frame 0xfffffe0467a1db80
taskqueue_thread_loop() at taskqueue_thread_loop+0x121/frame 0xfffffe0467a1dbb0
fork_exit() at fork_exit+0x85/frame 0xfffffe0467a1dbf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0467a1dbf0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---

Tracing command kernel pid 0 tid 100173 td 0xfffff800099dd000
sched_switch() at sched_switch+0x4aa/frame 0xfffffe0467a54aa0
mi_switch() at mi_switch+0xe5/frame 0xfffffe0467a54ad0
sleepq_wait() at sleepq_wait+0x3a/frame 0xfffffe0467a54b00
_sleep() at _sleep+0x255/frame 0xfffffe0467a54b80
taskqueue_thread_loop() at taskqueue_thread_loop+0x121/frame 0xfffffe0467a54bb0
fork_exit() at fork_exit+0x85/frame 0xfffffe0467a54bf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0467a54bf0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
db:0:kdb.enter.default>  capture off
db:0:kdb.enter.default>  call doadump
= 0x6
db:0:kdb.enter.default>  reset
cpu_reset: Restarting BSP
cpu_reset_proxy: Stopped CPU 7
Title: Re: Performance tuning for IPS maximum performance
Post by: lrosenman on April 05, 2019, 01:18:05 pm
I added the em tunables (on the 19.1.4 netmap kernel), with the https://github.com/aus/pfatt bypass (using my pull requested config).

And my UPLOAD is back to ~800 Mbit, but the download side is ~600 Mbit.

This is ATT Fiber 1G/1G.

SpeedTest: https://www.lerctr.org/~ler/ST-2019-04-05-06-12-21.png
Tunables added: https://www.lerctr.org/~ler/tuneables-2019-04-05-06-13-14.png

Ideas on what I can do on the Download side (with all the netgraph fun)?

EDIT: This is with *NO* IPS/IDS running.
Title: Re: Performance tuning for IPS maximum performance
Post by: lrosenman on April 09, 2019, 04:40:33 am
To follow up: Brent Cowing of Protectli sent me an i3-7100U based box and my speeds are back to 910/949.

see also:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237072
https://github.com/HardenedBSD/hardenedBSD/issues/376

I will also have a 2nd E3845 box here this week (thanks Brent), and will be able to play without affecting my internet connection.
Title: Re: Performance tuning for IPS maximum performance
Post by: harshw on May 06, 2019, 08:01:53 pm
To followup, Brent Cowing of Protectli sent me a i3-7100U based box and my speeds are back to 910/949.

see also:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237072
https://github.com/HardenedBSD/hardenedBSD/issues/376

I will also have a 2nd E3845 box here this week (thanks Brent), and will able to play and not affect my internet connection.

Is this with IPS/IDS turned on? I get 870/950 with the igbX tunables and no IPS/IDS. When I turn on IPS/IDS, the speedtest.net download speed starts at 800-900 Mbps and slowly levels off at 100-200 Mbps. The upload speed starts at 10 Mbps and then the test errors out. I wonder if this has something to do with netgraph ...
Title: Re: Performance tuning for IPS maximum performance
Post by: lrosenman on May 06, 2019, 08:08:34 pm
NO, this was without IDS/IPS on.

I've not gotten the testing done yet. 
Title: Re: Performance tuning for IPS maximum performance
Post by: lrosenman on May 06, 2019, 08:27:59 pm
To followup, Brent Cowing of Protectli sent me a i3-7100U based box and my speeds are back to 910/949.

see also:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237072
https://github.com/HardenedBSD/hardenedBSD/issues/376

I will also have a 2nd E3845 box here this week (thanks Brent), and will able to play and not affect my internet connection.

Is this with IPS/IDS turned on? I get 870/950 with the igbX tunables and no IPS/IDS. When I turn on IPS/IDS, the speedtest.net download speed starts at 800-900 mbps and slowly levels off at 100-200 mbps. The upload speed starts at 10 mbps and then the test errors out. I wonder if this has something to do with netgraph ...

netgraph(4) is definitely on my list of things to look at.  I suspect there is something(tm) not-kosher there.  What exactly, I'm not sure yet.
Title: Re: Performance tuning for IPS maximum performance
Post by: spetrillo on June 01, 2020, 01:53:30 am
I have researched and tested tunables because I have experienced too many down links and poor performance when using IPS/Inline on the WAN interface that could no longer be ignored. This file, loader.conf.local, along with adding some system tunables in the WebGUI, has fixed this for me, so I thought I would share with the OPNsense community. Sharing is what makes an open-source project successful. Share your experiences using the info in this post. You may or may not see much performance improvement depending on your hardware, but you will see fewer dropped connections. If you have any other tunable recommendations, please share and post those experiences here. This thread is for performance tuning ideas.

The biggest impact was from the Flow Control (FC) setting. FC is a layer-1 mechanism that inserts pause frames before the data is transmitted. My assumption is that Netmap has issues with FC, which causes the dropped connections. Recommendations from many sources, including Cisco, suggest disabling FC altogether and letting the higher layers handle the flow. There are exceptions, but these usually involve ESXi, VMware and other special applications.

I have done all my testing using an Intel i350-T4 and i340-T4, common NICs used for firewalls, in 4 different systems and, by the way, neither NIC had any performance advantage. I have tested these systems for 5 days without experiencing any down links after the changes were made. Without these changes every system was plagued with down WAN links and poor performance using the default settings.

Do not use this file if you are not using an igb driver. igb combined with other drivers is OK as long as you have at least one igb NIC, and I recommend you use igb for all WAN interfaces.

Add the file below in the '/boot' folder and call it 'loader.conf.local', right beside 'loader.conf'. I use WinSCP, in a Windows environment, as a file manager to get easy access to the folders. Don't forget to enable Secure Shell. I have tried using the 'System Tunables' in the WebGUI to add these settings; some worked and some didn't using that method, not sure why. Better to just add this file. If you're a Linux guru (I am not), then use your own methods to add this file.

The two most IMPORTANT things to ensure are that power management is disabled, both in the OPNsense settings and in the BIOS settings of the system (thanks wefinet), and that flow control (IEEE 802.3x) is disabled on all ports. It is advisable not to connect an IPS interface to any device which has flow control on. Flow control should be turned off so that congestion is managed higher up in the stack.

Please test all tunables in a test environment before you apply to a production system.

# File starts below this line, use Copy/Paste #####################
# Check for interface specific settings and add accordingly.
# These are tunables to improve network performance on Intel igb driver NICs

# Flow Control (FC) 0=Disabled 1=Rx Pause 2=Tx Pause 3=Full FC
# This tunable must be set according to your configuration. VERY IMPORTANT!
# Set FC to 0 (<x>) on all interfaces
hw.igb.<x>.fc=0 #Also put this in System Tunables hw.igb.<x>.fc: value=0

# Set number of queues to number of cores divided by number of ports. 0 lets FreeBSD decide
hw.igb.num_queues=0

# Increase packet descriptors (set to 1024, 2048, or 4096 ONLY)
# Allows a larger number of packets to be processed.
# Run "netstat -ihw 1" in the shell and make sure the idrops stay at zero.
# If the idrops are not zero, or the NIC has constant disconnects, lower this value.
hw.igb.rxd="4096" # For i340/i350 use 2048
hw.igb.txd="4096" # For i340/i350 use 2048
net.link.ifqmaxlen="8192" # value here equals the sum of the two values above. For i340/i350 use 4096

# Increase Network efficiency
hw.igb.enable_aim=1

# Increase interrupt rate
hw.igb.max_interrupt_rate="64000"

# Network memory buffers
# Run "netstat -m" in the shell; if the 'mbufs denied' and 'mbufs delayed' counters are 0/0/0 then this is not needed.
# If they are not zero, keep adding 400000 until the denials reach zero.
kern.ipc.nmbclusters="1000000"

# Fast interrupt handling
# Normally set by default. Use these settings to ensure it is on.
# Allows NIC to process packets as fast as they are received
hw.igb.enable_msix=1
hw.pci.enable_msix=1

# Unlimited packet processing
# Use this only if you are sure that the NICs have dedicated IRQs
# View the IRQ assignments by executing this in the shell "vmstat -i"
# A value of "-1" means unlimited packet processing
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
###################################################
# File ends above this line ##################################
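The mbuf check described in the file's comments can be done mechanically. A sketch using a canned `netstat -m` line so it runs anywhere; on the firewall you would feed in the real `netstat -m` output instead:

```shell
# Extract the 'requests for mbufs denied' counters from netstat -m output.
# On OPNsense:  netstat -m | awk '/denied/ {print $1}'
# Anything other than 0/0/0 suggests raising kern.ipc.nmbclusters,
# as the file comments describe (add 400000 at a time).
sample='0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)'
denied=$(printf '%s\n' "$sample" | awk '/denied/ {print $1}')
echo "mbufs denied: $denied"
```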

##UPDATE 12/12/2017##
After testing I have realized that some of these settings are NOT applied via loader.conf.local and must be added via the WebGUI under System > Settings > Tunables. I have moved these from the file above to this list.
Add to Tunables

Disable Energy Efficient Ethernet - set for each igb port in your system.
This setting can cause link-flap errors if not disabled.
Set it for every igb interface in the system as per these examples:
dev.igb.0.eee_disabled: value=1
dev.igb.1.eee_disabled: value=1
dev.igb.2.eee_disabled: value=1
dev.igb.3.eee_disabled: value=1

IPv4 fragments - 0 = do not accept fragments
This is mainly needed for security: fragmentation can be used to evade packet inspection.
net.inet.ip.maxfragpackets: value=0
net.inet.ip.maxfragsperpacket: value=0

Set fc to 0 for every port (<x>) used by IPS
dev.igb.<x>.fc: value=0

##UPDATE 1/16/2018##
Although the tuning in this thread so far just deals with the tunables, there are other settings that can impact IPS performance. Here are a few...

In the Intrusion Detection Settings Tab.

Promiscuous mode - to be used only when multiple interfaces or VLANs are selected in the Interfaces setting.
It is used so that IPS will capture data on all the selected interfaces. Do not enable it if you have just one interface selected; leaving it off will help with performance.

Pattern matcher: this setting selects the algorithm used for pattern matching and is best chosen by testing. Hyperscan seems to work well with Intel NICs. Try different ones and test the bandwidth with an internet speed test.

Home networks (under the advanced menu):
Make sure the networks listed match your actual local networks. You may want to change the generic 192.168.0.0/16 to your actual local network, e.g. 192.168.1.0/24.

###################################################
USEFUL SHELL COMMANDS
sysctl net.inet.tcp.hostcache.list # View the current host cache stats
vmstat -i # Query total interrupts per queue
top -H -S # Watch CPU usage
dmesg | grep -i msi # Verify MSI-X is being used by the NIC
netstat -ihw 1 # Look for idrops to determine hw.igb.txd and rxd
grep <interface> /var/run/dmesg.boot # Shows useful info like netmap queue/slots
sysctl -A # Shows system variables
###################################################
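One way to confirm that the descriptor tunables actually applied after a reboot is to parse the sysctl output and compare it against the intended values. A sketch with canned output; on the firewall, replace `sample` with the output of `sysctl hw.igb.rxd hw.igb.txd`:

```shell
# Compare the live descriptor counts with the values set in
# loader.conf.local. The canned sample keeps this runnable anywhere.
sample='hw.igb.rxd: 4096
hw.igb.txd: 4096'
rxd=$(printf '%s\n' "$sample" | awk -F': ' '$1 == "hw.igb.rxd" {print $2}')
txd=$(printf '%s\n' "$sample" | awk -F': ' '$1 == "hw.igb.txd" {print $2}')
echo "rxd=$rxd txd=$txd"
```

If either value still reads 1024 (the driver default), the loader file was not picked up.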

Hello,

I am curious. Does loader.conf.local get loaded after loader.conf? I did as instructed, but what happened was a complete slowdown. RTT and RTTD shot up to the point of making my Internet connection unusable. I removed loader.conf.local and rebooted. The Internet was back and RTT/RTTD was back to normal.

I am going to start testing with one option in loader.conf.local and see where the connection becomes unusable. I left all the options in the Tunables section of the GUI.

Thanks,
Steve
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on June 01, 2020, 06:54:55 am
Yes, it's loaded after loader.conf. Good try to test it one by one :)
Title: Re: Performance tuning for IPS maximum performance
Post by: spetrillo on June 03, 2020, 10:07:03 pm
It looks like kern.ipc.nmbclusters="1000000" was the culprit.
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 12, 2020, 04:28:05 pm
I had performance problems connecting a Fritzbox 6591 to my OPNsense box. The trick with the FC works fine for me: full 1 Gbit/s throughput, where before it was just ~300 Mbit/s.

But... I added the commands to the tunables (GUI) and /boot/loader.conf.local. After reboot, dev.igb.x.fc is set to 0, but it does not speed things up. After entering "sysctl dev.igb.x.fc=0" by hand from the console, things speed up magically. It looks like the commands are not taking effect when executed from /boot/loader.conf.x ...

/boot/loader.conf.local:
Code: [Select]
### loader.conf.local

# Flow Control (FC): 0 = Disabled, 1 = Rx Pause, 2 = Tx Pause, 3 = Full FC
hw.igb.0.fc=0
hw.igb.1.fc=0
dev.igb.0.fc=0
dev.igb.1.fc=0

# Set number of queues to number of cores divided by number of ports, 0 lets FreeBSD decide (should be default)
hw.igb.num_queues=0
# Increase packet descriptors (set as 1024, 2048 or 4096 ONLY)
hw.igb.rxd="2048" # Default = 1024
hw.igb.txd="2048"
net.link.ifqmaxlen="4096" # Sum of above two (default = 50)

# Increase network efficiency (Adaptive Interrupt Moderation, should be default)
hw.igb.enable_aim=1

# Increase interrupt rate # Default = 8000
hw.igb.max_interrupt_rate="64000"

# Fast interrupt handling, allows NIC to process packets as fast as they are received (should be default)
hw.igb.enable_msix=1
hw.pci.enable_msix=1

# Unlimited packet processing
hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"


and the rest of /boot/loader.conf:
Code: [Select]
...

net.inet.ip.redirect="0"
net.inet.icmp.drop_redirect="1"
hw.igb.1.fc="0"
dev.igb.1.fc="0"
hw.igb.0.fc="0"
dev.igb.0.fc="0"

# dynamically generated console settings follow
#comconsole_speed
#boot_multicons
#boot_serial
#kern.vty
console="vidconsole"

The NIC is a i350-T2.
OPNsense is pretty new to me, and I have no idea what I am doing wrong... any help is welcome :-)

Title: Re: Performance tuning for IPS maximum performance
Post by: Supermule on June 12, 2020, 09:05:33 pm
I am hitting no more than 300/300 with IDS/IPS, and I'm running a 16-core/32 GB high-end server.

IDS takes a big hit on performance.
Title: Re: Performance tuning for IPS maximum performance
Post by: spetrillo on June 12, 2020, 09:08:46 pm
I am curious...is there a way to know which tunable options are actually in effect when the system is up? Can I run a command to list all of them as active?
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 12, 2020, 11:14:24 pm
That's a superb question... When I check the settings with sysctl -A | grep dev.igb, everything looks fine, i.e. set to 0. Obviously it isn't, or I would not see any change in throughput when typing the settings on the console...
And how can I be sure that the other NIC-related settings are applied correctly? They all show up fine; but who knows ...
Btw, I disabled IPS; I'm just checking what is active. I run a small and well-controlled network, and I just want to know in case of possible problems. With IPS enabled I achieved close to 300M... 8 cores don't help, afaik... It looks like only one core is used.
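On FreeBSD, everything that loader.conf and loader.conf.local set lands in the kernel environment, so `kenv` shows what the loader *requested* while `sysctl` shows what the driver is *actually using*; comparing the two is one way to check whether a tunable really took effect. A dry-run sketch with canned values (on the box, the real commands would be `kenv | grep igb` and `sysctl dev.igb.0.fc`):

```shell
# Compare a loader-requested value with the runtime value. The two canned
# strings mimic one line of 'kenv' output and one line of 'sysctl' output.
requested='hw.igb.0.fc="0"'
active='dev.igb.0.fc: 0'
req=$(printf '%s' "$requested" | sed 's/.*="\(.*\)"/\1/')
act=$(printf '%s' "$active" | awk -F': ' '{print $2}')
if [ "$req" = "$act" ]; then
  echo "fc setting applied"
else
  echo "fc setting NOT applied"
fi
```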
Title: Re: Performance tuning for IPS maximum performance
Post by: spetrillo on June 15, 2020, 03:52:31 am
It definitely would be helpful to know that those options you have selected are indeed active.

As for your test, it seems there is a real premium on higher-frequency cores rather than many lower-frequency cores, if only one core is used.
Title: Re: Performance tuning for IPS maximum performance
Post by: Supermule on June 15, 2020, 11:25:18 am
what IDS profile are you using??

There is a setting to change how IDS uses the process/cores.

That's a superb question... When I check the settings with sysctl -A | grep dev.igb, everything is fine; which means is set to 0. Obviously, it isn't; else, I would not expect to see any change regarding the throughput when typing the settings on console...
And, how can I be sure that the other settings related to the NICs are applied correctly ? They all show up fine; but who knows ...
Btw, I disabled IPS; just checking is active. I run a smal and well controlled network. I just want to know, in case of some possible problems. With IPS enabled, I achieved close to 300M... 8 cores don't help, afaik... It looks like only one core is used.
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 15, 2020, 07:08:06 pm
I use Hyperscan, promiscuous mode (due to VLANs), IDS enabled, IPS disabled. Currently about 1900 rules are enabled, but there is still some room for more until I lose the 1 Gbit/s.
Where do you configure the CPU usage? I don't have such an option, even in advanced mode. I run a 4-core CPU (AMD FX-8800P), where 3 cores feel quite bored most of the time  ;D
Title: Re: Performance tuning for IPS maximum performance
Post by: hushcoden on June 15, 2020, 10:03:23 pm
what IDS profile are you using??

There is a setting to change how IDS uses the process/cores.
Sorry, where is it ?
Title: Re: Performance tuning for IPS maximum performance
Post by: Supermule on June 16, 2020, 12:10:16 am
what IDS profile are you using??

There is a setting to change how IDS uses the process/cores.
Sorry, where is it ?

Sorry, I mixed up OPNsense with pfSense. Running both to compare.
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on June 16, 2020, 05:53:12 am
Profiles come with 20.7
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 16, 2020, 09:09:18 am
I changed to the development firmware track. It's 20.7 now, but still on FreeBSD 11.2.
Performance is significantly improved. I can now run IDS and IPS with an increased rule set (~3000) at 1 Gbit/s, with Hyperscan and net.bpf.zerocopy_enabled=1. The load goes to slightly more than 1 without IPS and close to 2 with IPS enabled. Powerd is set to hiadaptive.
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on June 16, 2020, 09:58:59 am
I changed to development firmware upgrade. It's 20.7 now, but still with 11.2 BSD.
Performance is significantly improved. I can run now IDS and IPS, with increased rule set (~3000) at 1GB/s; with Hyperscan and net.bpf.zerocopy_enabled=1. The load goes to slightly more than 1 without IPS, and close to 2 with IPS enabled. Powerd is set to hiactive.

Switching to devel mode will only update the UI, not (yet!) the OS, but it should install Suricata 5 and allow you to set the profile mode.
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 16, 2020, 10:22:55 am
That's what it looks like now....

Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 16, 2020, 03:44:25 pm
I did an ISO upgrade and 20.7 with FreeBSD 12.1 is running now. Currently ~8000 rules are activated, IDS and IPS enabled; Hyperscan gives about 850 Mbit/s. The other algorithms gave significantly worse results, down to 100 Mbit/s.
Do you have any hints for me regarding the profile? Do I have to edit the settings file, or can this be done via the GUI?
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on June 16, 2020, 07:34:26 pm
Hit Advanced in General
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 16, 2020, 09:37:11 pm
Thanks.... Got it...

Best results with Hyperscan and profile "High"... About 780 Mbit/s with 8556 rules. The changes between the profiles are marginal: between 740 Mbit/s and 780 Mbit/s.
The other algorithms are far slower... maximum 400 Mbit/s, down to 140 Mbit/s... with any profile.

Current "optimum" settings attached.

If I can test anything special for you, feel free to ask  8)
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on June 17, 2020, 06:09:56 am
Which hardware?
Title: Re: Performance tuning for IPS maximum performance
Post by: dl3it on June 17, 2020, 08:34:18 am
Board: https://www.biostar.com.tw/app/en/mb/introduction.php?S_ID=935 (https://www.biostar.com.tw/app/en/mb/introduction.php?S_ID=935)
NIC: intel i350-T2 https://ark.intel.com/content/www/us/en/ark/products/59062/intel-ethernet-server-adapter-i350-t2.html (https://ark.intel.com/content/www/us/en/ark/products/59062/intel-ethernet-server-adapter-i350-t2.html)
8G RAM

Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on June 17, 2020, 11:15:45 am
Sounds reasonable for such a board  :)
Title: Re: Performance tuning for IPS maximum performance
Post by: annoniempjuh on July 04, 2020, 07:15:26 pm
I was thinking of doing some performance tuning, so I disabled:
- Hardware CRC
- Hardware TSO
- Hardware LRO
- VLAN Hardware Filtering
changed the pattern matcher to 'hyperscan', and enabled IPS mode and Promiscuous mode.
I didn't change anything else.

iperf3:
Code: [Select]
iperf3 -c 10.0.3.31 -u -t 60 -i 10 -b 1000M
Connecting to host 10.0.3.31, port 5201
[  5] local 10.0.3.1 port 44924 connected to 10.0.3.31 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-10.00  sec  1.16 GBytes  1000 Mbits/sec  856118 
[  5]  10.00-20.00  sec  1.16 GBytes  1.00 Gbits/sec  856870 
[  5]  20.00-30.00  sec  1.16 GBytes  1000 Mbits/sec  857061 
[  5]  30.00-40.00  sec  1.16 GBytes  1.00 Gbits/sec  856166 
[  5]  40.00-50.00  sec  1.16 GBytes  1000 Mbits/sec  857113 
[  5]  50.00-60.00  sec  1.16 GBytes  1.00 Gbits/sec  857192 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  6.98 GBytes  1000 Mbits/sec  0.000 ms  0/5140520 (0%)  sender
[  5]   0.00-60.00  sec  3.34 GBytes   479 Mbits/sec  0.046 ms  2680818/5140353 (52%)  receiver

iperf Done.
The server statistics say: 962 Mbit/sec.

Well... I don't need any tuning?  ::)

Suricata is active on WAN and LAN; I tested iperf on the LAN.
If I change the pattern matcher to aho-corasick, it's around 450 Mbit.

rules: 56019
Is this the right command to count them?
Code: [Select]
root@OPNsense:/usr/local/etc/suricata/rules # cat *.rules | sed 's/^ *#.*//' | sed '/^ *$/d' | wc -l

Hardware:
AMD Ryzen 3 2200G with Radeon Vega Graphics (4 cores)
8GB RAM
Intel PRO/1000 PT Dual Port Server Adapter (PCI-e 4x) (driver: EM)
OPNsense 20.1.8_1
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on July 04, 2020, 10:10:34 pm
Looks good :)
Title: Re: Performance tuning for IPS maximum performance
Post by: seed on August 17, 2020, 10:41:49 am
I only reach 712 Mbit max on my system:

Xeon E-2236
Asus P11c-M/4L
32 GB 2666 mhz ECC RAM
NIC: i340-t4 + 4 x Intel I210AT (onboard)


Powerd shows this output:
root@OPNsense:~ # powerd -v
powerd: unable to determine AC line status
load 156%, current freq 3401 MHz ( 0), wanted freq 6802 MHz
load 100%, current freq 3401 MHz ( 0), wanted freq 6802 MHz
load 100%, current freq 3401 MHz ( 0), wanted freq 6802 MHz
load 114%, current freq 3401 MHz ( 0), wanted freq 6802 MHz
load 157%, current freq 3401 MHz ( 0), wanted freq 6802 MHz


So I assume the CPU is using its turbo of max 4.80 GHz.

I tested with an iperf3 server in my management VLAN and the client in my LAN.
OPNsense is freshly installed. Tunables are at default. top shows one CPU core fully utilised.


root@OPNsense:/usr/local/etc/suricata/rules # cat *.rules | sed 's/^ *#.*//' | sed '/^ *$/d' | wc -l
   47263

With Suricata disabled I reach 112 MByte/s (good).
Title: Re: Performance tuning for IPS maximum performance
Post by: seed on August 17, 2020, 10:44:09 am
Sorry, I forgot the screenshot showing my Suricata settings.
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on August 17, 2020, 11:56:44 am
Try only WAN and disable promisc
Title: Re: Performance tuning for IPS maximum performance
Post by: seed on August 17, 2020, 07:50:33 pm
I tested with only the WAN interface (which is NATing) and promiscuous mode disabled.
This is what I got:

Before BIOS "optimisations":

Quote
[  5]   0.00-1.00   sec  70.3 MBytes   589 Mbits/sec   48    636 KBytes       
[  5]   1.00-2.00   sec  94.9 MBytes   796 Mbits/sec    0    744 KBytes       
[  5]   2.00-3.00   sec  97.4 MBytes   817 Mbits/sec    2    625 KBytes       
[  5]   3.00-4.00   sec  98.6 MBytes   827 Mbits/sec    0    737 KBytes       
[  5]   4.00-5.00   sec  98.6 MBytes   828 Mbits/sec    6    617 KBytes       
[  5]   5.00-6.00   sec  97.4 MBytes   817 Mbits/sec    0    728 KBytes       
[  5]   6.00-7.00   sec  94.9 MBytes   796 Mbits/sec    3    602 KBytes       
[  5]   7.00-8.00   sec  96.1 MBytes   806 Mbits/sec    0    714 KBytes       
[  5]   8.00-9.00   sec  97.3 MBytes   817 Mbits/sec    9    588 KBytes       
[  5]   9.00-10.00  sec  91.1 MBytes   764 Mbits/sec    0    697 KBytes       
[  5]  10.00-11.00  sec  96.2 MBytes   807 Mbits/sec    6    564 KBytes       
[  5]  11.00-12.00  sec  97.4 MBytes   817 Mbits/sec    0    683 KBytes       
[  5]  12.00-13.00  sec   100 MBytes   839 Mbits/sec    1    554 KBytes       
[  5]  13.00-14.00  sec  97.5 MBytes   818 Mbits/sec    0    679 KBytes       
[  5]  14.00-15.00  sec  96.2 MBytes   807 Mbits/sec    9    546 KBytes       
[  5]  15.00-16.00  sec  96.2 MBytes   807 Mbits/sec    0    667 KBytes       
[  5]  16.00-17.00  sec  96.2 MBytes   807 Mbits/sec    0    772 KBytes       
[  5]  17.00-18.00  sec  96.2 MBytes   807 Mbits/sec    5    655 KBytes       
^C[  5]  18.00-18.60  sec  58.7 MBytes   818 Mbits/sec    0    721 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-18.60  sec  1.73 GBytes   799 Mbits/sec   89             sender
[  5]   0.00-18.60  sec  0.00 Bytes  0.00 bits/sec                  receiver
iperf3: interrupt - the client has terminated

With "optimized" BIOS:

Quote
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  98.4 MBytes   826 Mbits/sec   54    694 KBytes       
[  5]   1.00-2.00   sec  95.0 MBytes   797 Mbits/sec    5    563 KBytes       
[  5]   2.00-3.00   sec  96.2 MBytes   807 Mbits/sec    0    683 KBytes       
[  5]   3.00-4.00   sec  97.5 MBytes   818 Mbits/sec    5    550 KBytes       
[  5]   4.00-5.00   sec  97.5 MBytes   818 Mbits/sec    0    672 KBytes       
[  5]   5.00-6.00   sec  96.2 MBytes   807 Mbits/sec    3    542 KBytes       
[  5]   6.00-7.00   sec  97.5 MBytes   818 Mbits/sec    0    665 KBytes       
[  5]   7.00-8.00   sec  96.2 MBytes   807 Mbits/sec    0    769 KBytes       
[  5]   8.00-9.00   sec  98.7 MBytes   828 Mbits/sec    7    653 KBytes       
[  5]   9.00-10.00  sec  97.5 MBytes   818 Mbits/sec    0    759 KBytes       
[  5]  10.00-11.00  sec  96.2 MBytes   807 Mbits/sec    8    639 KBytes       
[  5]  11.00-12.00  sec  97.5 MBytes   818 Mbits/sec    0    748 KBytes       
[  5]  12.00-13.00  sec  95.0 MBytes   797 Mbits/sec    1    629 KBytes       
[  5]  13.00-14.00  sec  95.0 MBytes   797 Mbits/sec    0    734 KBytes       
[  5]  14.00-15.00  sec  96.2 MBytes   807 Mbits/sec    2    612 KBytes       
[  5]  15.00-16.00  sec  96.2 MBytes   807 Mbits/sec    0    725 KBytes       
^C[  5]  16.00-16.06  sec  5.00 MBytes   686 Mbits/sec    0    730 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-16.06  sec  1.52 GBytes   811 Mbits/sec   85             sender
[  5]   0.00-16.06  sec  0.00 Bytes  0.00 bits/sec                  receiver
iperf3: interrupt - the client has terminated

Very close, but still not what I expected to see.
Why is the result different from the LAN interface? What stops the system from performing better?
I mean, the Xeon E-2236 is really good.

@mimugmail:
I read your blog post testing with the Xeon E3-1240 v6. You got better results, and that CPU is slightly older. So what black magic is happening here?
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on August 17, 2020, 10:32:39 pm
I tested with 10g interfaces ;)
Title: Re: Performance tuning for IPS maximum performance
Post by: webdb on August 26, 2020, 05:03:45 pm
Hi
I have a 1 Gig connection and OPNsense works perfectly fine with IPS enabled (approx. 3k rules). But when I download big files from Usenet (e.g. 5-10 GB), throughput swings from 900 Mbps down to a few kbps and back up again. This isn't really an issue for me as I have no time constraints for such downloads. However, the firewall/DNS seems to freeze, as my 60 devices can't connect to the internet after such a download and I always have to restart OPNsense.
When I turn on my old Kerio Control and run the same scenario, I see drops to approx. 50 Mbps and the firewall doesn't freeze.

Has anyone had similar issues and found a solution? I love OPNsense and don't want to go back to Kerio or switch to another product such as the Zyxel ATP 200.

Thanks
Daniel

Hardware: Intel Core i7, 16 GB memory, SSD; only DynDNS and IPS running on OPNsense
Title: Re: Performance tuning for IPS maximum performance
Post by: alexroz on December 16, 2020, 10:07:45 pm
I have a mini-PC (https://www.aliexpress.com/item/4000859041000.html) based on a Celeron 3865U with 4 GB RAM,
and I am experiencing a sharp download bandwidth drop when I turn IPS on. I get download throughput just below 1 Gbps when Suricata is OFF and between 300 and 400 Mbps when Suricata is ON.
Any performance tuning suggestions?
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on December 17, 2020, 06:07:08 am
Only enable Rules you really need. No phpnuke stuff and so on
Title: Re: Performance tuning for IPS maximum performance
Post by: alexroz on December 18, 2020, 11:03:21 pm
Can someone explain how promiscuous mode (https://en.wikipedia.org/wiki/Promiscuous_mode) can improve Suricata's performance?
Title: Re: Performance tuning for IPS maximum performance
Post by: spetrillo on December 20, 2020, 12:30:10 am
Only enable Rules you really need. No phpnuke stuff and so on

Is there a guide on what we should enable?
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on December 20, 2020, 07:06:20 am
Can someone explain how promiscuous mode (https://en.wikipedia.org/wiki/Promiscuous_mode) can improve Suricata's performance?

It doesn't. You only need it when you listen on igb0 while running several VLANs on top of it, so you don't have to select every single VLAN interface.
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on December 20, 2020, 07:08:26 am
Only enable Rules you really need. No phpnuke stuff and so on

Is there a guide on what we should enable?


First, read the rule category descriptions:
https://tools.emergingthreats.net/docs/ETPro%20Rule%20Categories.pdf

After that, I'm sure you'll know which ones you don't need.
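If it helps narrow things down, a quick way to see which rule files carry the most active rules is to count the non-commented lines per file, similar to the one-liner seed posted earlier. A sketch (demonstrated on sample files so it runs anywhere; on the firewall you would point it at /usr/local/etc/suricata/rules instead):

```shell
# Count active (non-commented) rules per file, heaviest first.
dir=$(mktemp -d)
printf '# disabled rule\nalert tcp any any -> any any (sid:1;)\n' > "$dir/et-a.rules"
printf 'alert udp any any -> any any (sid:2;)\nalert tcp any any -> any any (sid:3;)\n' > "$dir/et-b.rules"
for f in "$dir"/*.rules; do
  # count lines whose first non-blank character is not '#'
  printf '%6d %s\n' "$(grep -c '^[[:space:]]*[^#[:space:]]' "$f")" "${f##*/}"
done | sort -rn
# prints et-b.rules (2 rules) first, then et-a.rules (1 rule)
```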
Title: Re: Performance tuning for IPS maximum performance
Post by: alexroz on December 20, 2020, 09:04:16 pm
Only enable Rules you really need. No phpnuke stuff and so on

Is there a guide on what we should enable?

$1000000 question.....

Title: Re: Performance tuning for IPS maximum performance
Post by: harryincs on January 01, 2021, 10:02:14 pm
What would be the correct setting for disabling flow control with a Realtek driver?
hw.igb.0.fc=0 is for Intel, as I understand it.

My NICs show up as em0 and re0.

thanks,

Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on January 02, 2021, 06:29:29 am
Run sysctl -a | grep fc and check whether it's available. I'd guess an re(4) NIC doesn't even support it.
Title: Re: Performance tuning for IPS maximum performance
Post by: harryincs on January 02, 2021, 07:51:11 pm
This is what I get and these are probably the statements of interest:

hw.ixl.enable_tx_fc_filter: 1
dev.em.0.fc_low_water: 20552
dev.em.0.fc_high_water: 23584
dev.em.0.fc: 3

---------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------

root@IncsFW1:~ # sysctl -a | grep fc
device   ocs_fc
2 LABEL gptid/ffcdb6e8-434e-11eb-b1ee-64006a9003d0 209715200 512 i 0 o 0
z0xfffff80003a3ca00 [shape=box,label="DEV\ngptid/ffcdb6e8-434e-11eb-b1ee-64006a9003d0\nr#4"];
z0xfffff80003a3c800 [shape=hexagon,label="gptid/ffcdb6e8-434e-11eb-b1ee-64006a9003d0\nr0w0e0\nerr#0\nsector=512\nstripe=4096"];
      <name>gptid/ffcdb6e8-434e-11eb-b1ee-64006a9003d0</name>
       <rawuuid>ffcdb6e8-434e-11eb-b1ee-64006a9003d0</rawuuid>
       <efimedia>HD(1,GPT,ffcdb6e8-434e-11eb-b1ee-64006a9003d0,0x28,0x64000)</efimedia>
     <name>gptid/ffcdb6e8-434e-11eb-b1ee-64006a9003d0</name>
vfs.reassignbufcalls: 10443885
vfs.getnewbufcalls: 4196070
net.inet.ip.rfc6864: 1
net.inet.tcp.rfc1323: 1
net.inet.tcp.rfc3465: 1
net.inet.tcp.rfc3390: 1
net.inet.tcp.rfc3042: 1
net.inet.tcp.rfc6675_pipe: 0
net.link.generic.system.ifcount: 7
net.inet6.ip6.rfc6204w3: 1
net.inet6.icmp6.nd6_onlink_ns_rfc4861: 0
hw.ixl.enable_tx_fc_filter: 1
     ConventionalMemory 000000100000          0x0 0003befc UC WC WT WB
             LoaderData 00003bffc000          0x0 00004004 UC WC WT WB
dev.em.0.fc_low_water: 20552
dev.em.0.fc_high_water: 23584
dev.em.0.fc: 3
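(For reference, `dev.em.0.fc: 3` above means full flow control is currently enabled on em0, using the same 0-3 scale as the igb values in the first post. A sketch of the tunables entry to turn it off, for em(4) only, since re0 exposes no fc sysctl in this listing:)

```
# System > Settings > Tunables entry (or `sysctl dev.em.0.fc=0` at runtime)
# 0 = flow control disabled
dev.em.0.fc=0
```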
Title: Re: Performance tuning for IPS maximum performance
Post by: annoniempjuh on February 07, 2021, 10:24:53 am
A few days ago I upgraded OPNsense and my server to 10 Gbit NICs.

hardware:
Intel Ethernet Converged Network Adapter X540-T2  (OPNsense)
Mellanox ConnectX-3 CX311A (unRAID server)
MikroTik Cloud Smart Switch 326-24G-2S+RM (switch)


Iperf3 results:

suricata OFF = cpu usage 40% / 51%
Code: [Select]
iperf3 -c 10.0.3.1 -t 60 -i 10
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.2 port 35558 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  3.03 GBytes  2.60 Gbits/sec    0    252 KBytes       
[  5]  10.00-20.00  sec  2.99 GBytes  2.57 Gbits/sec    0    246 KBytes       
[  5]  20.00-30.00  sec  2.98 GBytes  2.56 Gbits/sec    0    243 KBytes       
[  5]  30.00-40.00  sec  2.96 GBytes  2.54 Gbits/sec    0    209 KBytes       
[  5]  40.00-50.00  sec  2.93 GBytes  2.52 Gbits/sec    0    277 KBytes       
[  5]  50.00-60.00  sec  2.97 GBytes  2.55 Gbits/sec    0    260 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  17.9 GBytes  2.56 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  17.9 GBytes  2.56 Gbits/sec                  receiver

iperf Done.

iperf3 -c 10.0.3.1 -t 60 -i 10 -R
Connecting to host 10.0.3.1, port 5201
Reverse mode, remote host 10.0.3.1 is sending
[  5] local 10.0.3.2 port 36642 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  3.82 GBytes  3.28 Gbits/sec                 
[  5]  10.00-20.00  sec  3.89 GBytes  3.35 Gbits/sec                 
[  5]  20.00-30.00  sec  3.82 GBytes  3.28 Gbits/sec                 
[  5]  30.00-40.00  sec  3.75 GBytes  3.22 Gbits/sec                 
[  5]  40.00-50.00  sec  3.60 GBytes  3.09 Gbits/sec                 
[  5]  50.00-60.00  sec  3.76 GBytes  3.23 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  22.6 GBytes  3.24 Gbits/sec  8384             sender
[  5]   0.00-60.00  sec  22.6 GBytes  3.24 Gbits/sec                  receiver

iperf Done.


suricata ON = cpu usage 59% / 76%
Code: [Select]
iperf3 -c 10.0.3.1 -t 60 -i 10
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.2 port 43546 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec   753 MBytes   632 Mbits/sec    2   5.66 KBytes       
[  5]  10.00-20.00  sec   748 MBytes   627 Mbits/sec    8    219 KBytes       
[  5]  20.00-30.00  sec   745 MBytes   625 Mbits/sec    5    209 KBytes       
[  5]  30.00-40.00  sec   774 MBytes   649 Mbits/sec   12    188 KBytes       
[  5]  40.00-50.00  sec   744 MBytes   624 Mbits/sec    5    218 KBytes       
[  5]  50.00-60.00  sec   795 MBytes   667 Mbits/sec    7    215 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  4.45 GBytes   637 Mbits/sec   39             sender
[  5]   0.00-60.00  sec  4.45 GBytes   637 Mbits/sec                  receiver

iperf Done.

iperf3 -c 10.0.3.1 -t 60 -i 10 -R
Connecting to host 10.0.3.1, port 5201
Reverse mode, remote host 10.0.3.1 is sending
[  5] local 10.0.3.2 port 38420 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.40 GBytes  1.21 Gbits/sec                 
[  5]  10.00-20.00  sec  1.37 GBytes  1.17 Gbits/sec                 
[  5]  20.00-30.00  sec  1.40 GBytes  1.20 Gbits/sec                 
[  5]  30.00-40.00  sec  1.39 GBytes  1.19 Gbits/sec                 
[  5]  40.00-50.00  sec  1.40 GBytes  1.20 Gbits/sec                 
[  5]  50.00-60.00  sec  1.41 GBytes  1.21 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  8.37 GBytes  1.20 Gbits/sec   18             sender
[  5]   0.00-60.00  sec  8.37 GBytes  1.20 Gbits/sec                  receiver

iperf Done.

UDP:
Code: [Select]
iperf3 -c 10.0.3.1 -u -t 60 -i 10 -b 10000M
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.2 port 59369 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-10.00  sec  2.88 GBytes  2.48 Gbits/sec  2138663 
[  5]  10.00-20.00  sec  2.89 GBytes  2.48 Gbits/sec  2143473 
[  5]  20.00-30.00  sec  2.85 GBytes  2.45 Gbits/sec  2110755 
[  5]  30.00-40.00  sec  2.81 GBytes  2.41 Gbits/sec  2081894 
[  5]  40.00-50.00  sec  2.87 GBytes  2.46 Gbits/sec  2126508 
[  5]  50.00-60.00  sec  2.92 GBytes  2.51 Gbits/sec  2167670 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  17.2 GBytes  2.47 Gbits/sec  0.000 ms  0/12768963 (0%)  sender
[  5]   0.00-60.01  sec  12.5 GBytes  1.79 Gbits/sec  0.001 ms  3471092/12768963 (27%)  receiver

iperf Done.

I know there are some ongoing threads saying there is a slowdown on OPNsense 21.1...

It looks like I am not the only one who doesn't get 10 Gb speeds...

I tried a few tunables but they didn't change anything:
Code: [Select]
kern.ipc.maxsockbuf:  16777216
net.inet.ip.intr_queue_maxlen:  2048
net.inet.tcp.recvspace:  4194304
net.inet.tcp.sendspace:  2097152
net.inet.tcp.recvbuf_max:  16777216
net.inet.tcp.recvbuf_inc:  524288
net.inet.tcp.sendbuf_max:  16777216
net.inet.tcp.sendbuf_inc:  32768
net.route.netisr_maxqlen:  2048
net.link.ifqmaxlen:  2048

I need to investigate further why it won't do 10 Gb; maybe the switch has wrong settings (it is using the defaults) or maybe it's the unRAID server...
Title: Re: Performance tuning for IPS maximum performance
Post by: mimugmail on February 07, 2021, 11:28:41 am
Does this also happen with 20.7.8?
Title: Re: Performance tuning for IPS maximum performance
Post by: annoniempjuh on February 08, 2021, 12:36:24 pm
Does this also happen with 20.7.8?

I didn't test it on 20.7.8.
I tried to downgrade to 20.7.8 but it didn't succeed:
Code: [Select]
opnsense-update -r 20.7.8
Fetching base-20.7.8-amd64.txz: .. failed, no signature found

edit:
I did a clean install of 20.7 and upgraded it to 20.7.8_4.
Same results...
Not sure what the problem is; I guess I have to investigate whether it's not OPNsense but unRAID or the switch.
Title: Re: Performance tuning for IPS maximum performance
Post by: annoniempjuh on May 18, 2021, 05:02:18 pm
A few months later I did another round of trial and error, and this time I got some better results:
OPNsense 21.1.5
Currently I use the following custom configs:
Code: [Select]
net.inet.tcp.tso=0
net.inet.udp.checksum=0
net.isr.maxthreads=-1
net.isr.dispatch=deferred
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
kern.ipc.maxsockbuf=16777216
kern.ipc.nmbclusters=1000000
kern.ipc.nmbjumbop=524288
hw.bce.tso_enable=0
hw.vtnet.lro_disable=1
hw.ix.flow_control=0
hw.ix.rx_process_limit=-1
hw.ix.tx_process_limit=-1
hw.intr_storm_threshold=10000

net.inet6.ip6.redirect=0
net.inet.ip.intr_queue_maxlen=3000
net.inet.tcp.mssdflt=1460
net.inet.tcp.minmss=1300
net.inet.tcp.syncookies=0

in /boot/loader.conf.local:
#cc_htcp_load="YES"
if_ix_updated_load="YES"
hw.ix.tx_process_limit="-1"
hw.ix.rx_process_limit="-1"
hw.ix.enable_aim="1"
hw.ix.max_interrupt_rate="64000"
hw.ix.rxd="4096"
hw.ix.txd="4096"
net.link.ifqmaxlen="8192"
hw.ix.num_queues="8"

Iperf3 results:
suricata on:
Code: [Select]
iperf3 -c 10.0.3.1 -t 20 -i 10
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.47 port 38402 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  1.44 GBytes  1.24 Gbits/sec    0    580 KBytes       
[  5]  10.00-20.00  sec  1.49 GBytes  1.28 Gbits/sec    0    580 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec  2.94 GBytes  1.26 Gbits/sec    0             sender
[  5]   0.00-20.00  sec  2.93 GBytes  1.26 Gbits/sec                  receiver

iperf Done.
iperf3 -c 10.0.3.1 -t 20 -i 10 -R
Connecting to host 10.0.3.1, port 5201
Reverse mode, remote host 10.0.3.1 is sending
[  5] local 10.0.3.47 port 38744 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.58 GBytes  1.35 Gbits/sec                 
[  5]  10.00-20.00  sec  1.61 GBytes  1.38 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec  3.19 GBytes  1.37 Gbits/sec   20             sender
[  5]   0.00-20.00  sec  3.19 GBytes  1.37 Gbits/sec                  receiver

iperf Done.
iperf3 -c 10.0.3.1 -t 20 -i 10 -u -b 10000M
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.47 port 44492 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-10.00  sec  4.91 GBytes  4.22 Gbits/sec  3641085 
[  5]  10.00-20.00  sec  4.91 GBytes  4.21 Gbits/sec  3637990 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec  9.82 GBytes  4.22 Gbits/sec  0.000 ms  0/7279075 (0%)  sender
[  5]   0.00-20.01  sec  5.74 GBytes  2.46 Gbits/sec  0.002 ms  3022771/7279075 (42%)  receiver

iperf Done.
iperf3 -c 10.0.3.1 -t 20 -i 10 -u -b 10000M -R
Connecting to host 10.0.3.1, port 5201
Reverse mode, remote host 10.0.3.1 is sending
[  5] local 10.0.3.47 port 41435 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  2.56 GBytes  2.20 Gbits/sec  0.007 ms  146252/2042299 (7.2%) 
[  5]  10.00-20.00  sec  2.53 GBytes  2.18 Gbits/sec  0.008 ms  126068/2004900 (6.3%) 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec  5.09 GBytes  2.19 Gbits/sec  0.000 ms  0/4048582 (0%)  sender
[  5]   0.00-20.00  sec  5.09 GBytes  2.19 Gbits/sec  0.008 ms  272320/4047199 (6.7%)  receiver

iperf Done.
suricata off:
Code: [Select]
iperf3 -c 10.0.3.1 -t 10 -i 10
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.47 port 43458 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  3.43 GBytes  2.95 Gbits/sec    0    577 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.43 GBytes  2.95 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  3.43 GBytes  2.95 Gbits/sec                  receiver

iperf Done.

iperf3 -c 10.0.3.1 -t 10 -i 10 -u -b 10000M
Connecting to host 10.0.3.1, port 5201
[  5] local 10.0.3.47 port 47876 connected to 10.0.3.1 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-10.00  sec  4.82 GBytes  4.14 Gbits/sec  3571818 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  4.82 GBytes  4.14 Gbits/sec  0.000 ms  0/3571818 (0%)  sender
[  5]   0.00-10.00  sec  4.70 GBytes  4.04 Gbits/sec  0.001 ms  84942/3571817 (2.4%)  receiver

iperf Done.

Those tests are with a direct connection between OPNsense and my desktop (which uses an Intel X550-T2).
I also tested a direct connection between my unRAID server and the desktop: 9.90 Gbit...

But I also see the CPU usage of OPNsense spike up to 90%; I guess the AMD Ryzen 3 2200G is struggling with those speeds...

And I know using OPNsense as the iperf server isn't recommended; I also ran those iperf tests with my unRAID server as the server and the desktop connected to its switch.
Title: Re: Performance tuning for IPS maximum performance
Post by: annoniempjuh on June 04, 2021, 11:43:00 am
I just tried some new iperf tests. This time I removed loader.conf.local, reset the tunables to default, and applied only these custom settings:
Code: [Select]
kern.ipc.nmbclusters=1000000
net.inet.tcp.tso=0
net.inet.ip.redirect=0
net.inet6.ip6.redirect=0
net.isr.bindthreads=1
net.isr.maxthreads=-1
hw.intr_storm_threshold=10000
hw.ix.flow_control=0
net.isr.numthreads=-1
net.route.netisr_maxqlen=2048
hw.ibrs_disable=1   # the Ryzen 5 3600 isn't vulnerable
vm.pmap.pti=0       # the Ryzen 5 3600 isn't vulnerable

In Interfaces > Settings, everything under hardware offloading is disabled.

Some iperf tests to an iperf server on a VLAN on unRAID (Suricata off):
Code: [Select]
iperf3 -c 10.0.15.6 -P 8
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec   32             sender
[  5]   0.00-10.00  sec  3.31 GBytes  2.84 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  1.25 GBytes  1.07 Gbits/sec   20             sender
[  7]   0.00-10.00  sec  1.24 GBytes  1.06 Gbits/sec                  receiver
[  9]   0.00-10.00  sec   429 MBytes   359 Mbits/sec   14             sender
[  9]   0.00-10.00  sec   425 MBytes   356 Mbits/sec                  receiver
[ 11]   0.00-10.00  sec   881 MBytes   739 Mbits/sec   18             sender
[ 11]   0.00-10.00  sec   872 MBytes   731 Mbits/sec                  receiver
[ 13]   0.00-10.00  sec  2.12 GBytes  1.82 Gbits/sec   26             sender
[ 13]   0.00-10.00  sec  2.11 GBytes  1.81 Gbits/sec                  receiver
[ 15]   0.00-10.00  sec  1.24 GBytes  1.06 Gbits/sec   22             sender
[ 15]   0.00-10.00  sec  1.24 GBytes  1.06 Gbits/sec                  receiver
[ 17]   0.00-10.00  sec   937 MBytes   786 Mbits/sec   16             sender
[ 17]   0.00-10.00  sec   930 MBytes   780 Mbits/sec                  receiver
[ 19]   0.00-10.00  sec   795 MBytes   667 Mbits/sec   21             sender
[ 19]   0.00-10.00  sec   791 MBytes   663 Mbits/sec                  receiver
[SUM]   0.00-10.00  sec  10.9 GBytes  9.36 Gbits/sec  169             sender
[SUM]   0.00-10.00  sec  10.8 GBytes  9.31 Gbits/sec                  receiver

iperf Done.

iperf test on OPNsense: (suricata off)
Code: [Select]
iperf3 -c 10.0.3.1 -P 8
[SUM]   0.00-10.00  sec  8.81 GBytes  7.57 Gbits/sec    0             sender
[SUM]   0.00-10.01  sec  8.79 GBytes  7.54 Gbits/sec                  receiver

iperf Done.

iperf to unRAID (VLAN) with Suricata on:
Code: [Select]
iperf3 -c 10.0.15.6 -P 8
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  72.2 MBytes  60.6 Mbits/sec  616             sender
[  5]   0.00-10.01  sec  69.7 MBytes  58.4 Mbits/sec                  receiver
[  7]   0.00-10.00  sec   127 MBytes   106 Mbits/sec  512             sender
[  7]   0.00-10.01  sec   124 MBytes   104 Mbits/sec                  receiver
[  9]   0.00-10.00  sec   398 MBytes   334 Mbits/sec  315             sender
[  9]   0.00-10.01  sec   396 MBytes   332 Mbits/sec                  receiver
[ 11]   0.00-10.00  sec   458 MBytes   384 Mbits/sec  883             sender
[ 11]   0.00-10.01  sec   456 MBytes   382 Mbits/sec                  receiver
[ 13]   0.00-10.00  sec   227 MBytes   190 Mbits/sec  1091             sender
[ 13]   0.00-10.01  sec   224 MBytes   187 Mbits/sec                  receiver
[ 15]   0.00-10.00  sec  48.4 MBytes  40.6 Mbits/sec  220             sender
[ 15]   0.00-10.01  sec  46.6 MBytes  39.1 Mbits/sec                  receiver
[ 17]   0.00-10.00  sec   249 MBytes   209 Mbits/sec  712             sender
[ 17]   0.00-10.01  sec   247 MBytes   207 Mbits/sec                  receiver
[ 19]   0.00-10.00  sec   168 MBytes   141 Mbits/sec  962             sender
[ 19]   0.00-10.01  sec   166 MBytes   139 Mbits/sec                  receiver
[SUM]   0.00-10.00  sec  1.71 GBytes  1.47 Gbits/sec  5311             sender
[SUM]   0.00-10.01  sec  1.69 GBytes  1.45 Gbits/sec                  receiver

iperf Done.

iperf single stream to unRAID (vlan) with Suricata on:
Code: [Select]
iperf3 -c 10.0.15.6
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.61 GBytes  1.38 Gbits/sec   96             sender
[  5]   0.00-10.01  sec  1.61 GBytes  1.38 Gbits/sec                  receiver

iperf Done.

iperf test to unRAID (vlan) using UDP with Suricata on:
Code: [Select]
iperf3 -c 10.0.15.6 -u -b 10000M
Connecting to host 10.0.15.6, port 5201
[  5] local 10.0.3.47 port 42165 connected to 10.0.15.6 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   500 MBytes  4.19 Gbits/sec  362060 
[  5]   1.00-2.00   sec   499 MBytes  4.18 Gbits/sec  361268 
[  5]   2.00-3.00   sec   502 MBytes  4.21 Gbits/sec  363826 
[  5]   3.00-4.00   sec   502 MBytes  4.21 Gbits/sec  363828 
[  5]   4.00-5.00   sec   502 MBytes  4.22 Gbits/sec  363884 
[  5]   5.00-6.00   sec   502 MBytes  4.21 Gbits/sec  363707 
[  5]   6.00-7.00   sec   504 MBytes  4.23 Gbits/sec  365292 
[  5]   7.00-8.00   sec   507 MBytes  4.26 Gbits/sec  367461 
[  5]   8.00-9.00   sec   505 MBytes  4.24 Gbits/sec  365711 
[  5]   9.00-10.00  sec   503 MBytes  4.22 Gbits/sec  364081 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  4.91 GBytes  4.22 Gbits/sec  0.000 ms  0/3641118 (0%)  sender
[  5]   0.00-10.01  sec  2.19 GBytes  1.88 Gbits/sec  0.002 ms  2016349/3641118 (55%)  receiver

iperf Done.

iperf test to unRAID (vlan) using UDP with Suricata on: (reverse)
Code: [Select]
iperf3 -c 10.0.15.6 -u -b 10000M -R
Connecting to host 10.0.15.6, port 5201
Reverse mode, remote host 10.0.15.6 is sending
[  5] local 10.0.3.47 port 45202 connected to 10.0.15.6 port 5201
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec   270 MBytes  2.26 Gbits/sec  0.007 ms  12487/207721 (6%) 
[  5]   1.00-2.00   sec   256 MBytes  2.15 Gbits/sec  0.008 ms  26181/211449 (12%) 
[  5]   2.00-3.00   sec   242 MBytes  2.03 Gbits/sec  0.004 ms  22832/198170 (12%) 
[  5]   3.00-4.00   sec   278 MBytes  2.33 Gbits/sec  0.021 ms  6834/208187 (3.3%) 
[  5]   4.00-5.00   sec   278 MBytes  2.33 Gbits/sec  0.006 ms  6573/207709 (3.2%) 
[  5]   5.00-6.00   sec   278 MBytes  2.33 Gbits/sec  0.005 ms  10012/211426 (4.7%) 
[  5]   6.00-7.00   sec   279 MBytes  2.34 Gbits/sec  0.015 ms  3345/205200 (1.6%) 
[  5]   7.00-8.00   sec   266 MBytes  2.23 Gbits/sec  0.006 ms  1160/193969 (0.6%) 
[  5]   8.00-9.00   sec   257 MBytes  2.15 Gbits/sec  0.011 ms  0/185941 (0%) 
[  5]   9.00-10.00  sec   251 MBytes  2.11 Gbits/sec  0.008 ms  0/181951 (0%) 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  2.71 GBytes  2.33 Gbits/sec  0.000 ms  0/2011786 (0%)  sender
[  5]   0.00-10.00  sec  2.59 GBytes  2.23 Gbits/sec  0.008 ms  89424/2011723 (4.4%)  receiver

iperf Done.

Iperf3 test IPv6 to OPNsense (Suricata On): UDP

Code: [Select]
iperf3 -c [IPv6 address of OPNsense] -u -b 10000M
Connecting to host [IPv6 address of OPNsense], port 5201
[  5] local [desktop IPV6 address] port 35200 connected to [IPv6 address of OPNsense] port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   499 MBytes  4.19 Gbits/sec  368714 
[  5]   1.00-2.00   sec   490 MBytes  4.11 Gbits/sec  361542 
[  5]   2.00-3.00   sec   493 MBytes  4.13 Gbits/sec  363771 
[  5]   3.00-4.00   sec   493 MBytes  4.14 Gbits/sec  364085 
[  5]   4.00-5.00   sec   493 MBytes  4.14 Gbits/sec  364046 
[  5]   5.00-6.00   sec   492 MBytes  4.13 Gbits/sec  363624 
[  5]   6.00-7.00   sec   493 MBytes  4.13 Gbits/sec  363868 
[  5]   7.00-8.00   sec   493 MBytes  4.13 Gbits/sec  363977 
[  5]   8.00-9.00   sec   492 MBytes  4.13 Gbits/sec  363473 
[  5]   9.00-10.00  sec   493 MBytes  4.14 Gbits/sec  364084 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  4.82 GBytes  4.14 Gbits/sec  0.000 ms  0/3641184 (0%)  sender
[  5]   0.00-10.00  sec  3.02 GBytes  2.60 Gbits/sec  0.002 ms  1354919/3640263 (37%)  receiver
iperf Done.

Iperf test IPv6 to OPNsense (Suricata on): UDP (reverse)
Code: [Select]
iperf3 -c [IPv6 address of OPNsense] -u -b 10000M -R
Connecting to host [IPv6 address of OPNsense], port 5201
Reverse mode, remote host [IPv6 address of OPNsense] is sending
[  5] local [desktop IPV6 address] port 47373 connected to [IPv6 address of OPNsense] port 5201
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec   491 MBytes  4.12 Gbits/sec  0.003 ms  23388/385996 (6.1%) 
[  5]   1.00-2.00   sec   533 MBytes  4.47 Gbits/sec  0.004 ms  31061/424933 (7.3%) 
[  5]   2.00-3.00   sec   535 MBytes  4.49 Gbits/sec  0.022 ms  25175/419985 (6%) 
[  5]   3.00-4.00   sec   543 MBytes  4.55 Gbits/sec  0.003 ms  20191/420819 (4.8%) 
[  5]   4.00-5.00   sec   532 MBytes  4.46 Gbits/sec  0.001 ms  28162/421172 (6.7%) 
[  5]   5.00-6.00   sec   533 MBytes  4.47 Gbits/sec  0.002 ms  26736/420413 (6.4%) 
[  5]   6.00-7.00   sec   532 MBytes  4.47 Gbits/sec  0.003 ms  29979/423165 (7.1%) 
[  5]   7.00-8.00   sec   532 MBytes  4.46 Gbits/sec  0.002 ms  34746/427258 (8.1%) 
[  5]   8.00-9.00   sec   532 MBytes  4.46 Gbits/sec  0.004 ms  25774/418755 (6.2%) 
[  5]   9.00-10.00  sec   533 MBytes  4.47 Gbits/sec  0.002 ms  22481/415869 (5.4%) 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  5.38 GBytes  4.62 Gbits/sec  0.000 ms  0/4179500 (0%)  sender
[  5]   0.00-10.00  sec  5.17 GBytes  4.44 Gbits/sec  0.002 ms  267693/4178365 (6.4%)  receiver
iperf Done.

So, in conclusion: the default settings of OPNsense are fine; there is no need to customize them.
Title: Re: Performance tuning for IPS maximum performance
Post by: NoncarbonatedClack on November 15, 2021, 12:59:36 am
Quote
# File starts below this line, use Copy/Paste #####################
# Check for interface specific settings and add accordingly.
# These are tunables to improve network performance on Intel igb driver NICs

# Flow Control (FC) 0=Disabled 1=Rx Pause 2=Tx Pause 3=Full FC
# This tunable must be set according to your configuration. VERY IMPORTANT!
# Set FC to 0 (<x>) on all interfaces
hw.igb.<x>.fc=0 #Also put this in System Tunables hw.igb.<x>.fc: value=0

Just wanted to throw my $0.02 at this in case anyone else sees it... no matter what I did above, I could not get FC disabled. The solution was to add "dev.igb.0.fc" to Tunables with a value of 0. That resolved it for me.

I'm trying to verify now that "hw.igb.enable_aim=1" works, but I'm not really sure how.
I'm also wondering how to check/enable handling VLANs on the hardware level, not sure how to go about that.
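For anyone else trying to verify these, here is a dry-run sketch of the checks I would run from the firewall's SSH shell. It assumes FreeBSD with the igb driver; whether `enable_aim` is exposed under `dev.` depends on the driver version, and port 0 is just an example.

```shell
#!/bin/sh
# Dry run: print the FreeBSD commands that would verify the settings.
# Copy the printed lines into the firewall's SSH shell to actually run them.
print_checks() {
  n="$1"   # port number, e.g. 0 for igb0
  echo "sysctl dev.igb.${n}.fc"          # 0 = flow control disabled
  echo "sysctl dev.igb.${n}.enable_aim"  # AIM, if the driver exposes it
  echo "ifconfig igb${n} | grep options" # look for vlanhwtag in the flags
}

print_checks 0
```

If `vlanhwtag` shows up in the interface's options line, VLAN tagging is being handled in hardware; it can be toggled with `ifconfig igb0 vlanhwtag` / `ifconfig igb0 -vlanhwtag`.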


This is on OPNsense 21.7.5-amd64, FreeBSD 12.1-RELEASE-p21-HBSD

The system is an HP ProLiant ML310e Gen8
Intel Xeon E3-1220 V2 3.10 GHz 4c/4t
16 GB DDR3 ECC
120 GB SSD
Intel i350 T4 V1

Symmetrical 1Gbps internet connection.

I'm not running IDS/IPS yet but preparing to.
Title: Re: Performance tuning for IPS maximum performance
Post by: dcol on November 15, 2021, 05:29:57 pm
I think the change to dev.* was introduced in a newer version of FreeBSD. I will update my original post.
Title: Re: Performance tuning for IPS maximum performance
Post by: LOTRouter on July 30, 2022, 07:09:09 pm
I have been trying to tune IPS for the Intel i225 2.5G NIC on my N5105-based router using 22.7_4.  When I disable IPS and just leave IDS on, I can consistently get 1.4G down (after adding the tuning below). If I enable IPS, even with just the opnsense.test.rules enabled, I only get between 800M and 1.2G down, with significant jitter introduced.

The most relevant tuning I made was disabling flow control.  Before doing so, I could never get above 1.2G down even with IDS disabled.  I added these tunables:

SYSTEM | SETTINGS | TUNABLES
Interface igc0 Flow Control | dev.igc.0.fc | 0
Interface igc1 Flow Control | dev.igc.1.fc | 0
Interface igc2 Flow Control | dev.igc.2.fc | 0
Interface igc3 Flow Control | dev.igc.3.fc | 0

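In case it helps anyone apply the same thing from the shell instead of the GUI, here is a small sketch that emits the equivalent sysctl commands for all four igc ports. It is a dry run by default; the OID names assume FreeBSD's igc driver.

```shell
#!/bin/sh
# Sketch: emit (or apply) the sysctl commands that disable flow control on
# all four igc ports. Dry run by default; pass "apply" to execute them
# (needs root on the FreeBSD firewall). Note that sysctl changes do not
# survive a reboot, so keep the GUI Tunables entries for persistence.
fc_commands() {
  for n in 0 1 2 3; do
    echo "sysctl dev.igc.${n}.fc=0"
  done
}

if [ "$1" = "apply" ]; then
  fc_commands | sh
else
  fc_commands
fi
```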

Bottom line: if you want the full 1.4G that Comcast provisions and you also want IPS, then an N5105 is probably too underpowered for it.  If you just want IDS, then it can handle that just fine at full speed.  I have both an N6005 and an i5-1135G7 on the way that I will also try, but I don't expect them until late August.
Title: Re: Performance tuning for IPS maximum performance
Post by: lilsense on December 20, 2022, 05:30:43 pm
Are there any high-performance tuning sets specifically for the Deciso DEC850?

I am using RSS as well.
https://forum.opnsense.org/index.php?topic=24409.msg116941
Title: Re: Performance tuning for IPS maximum performance
Post by: MagikMark on March 09, 2024, 10:57:59 pm
Guys,

Any update on the tunables for igc?  Which among them are still relevant?