Suricata IPS 10Gbps

Started by seed, December 12, 2022, 06:50:59 PM

December 30, 2022, 12:05:41 AM #30 Last Edit: December 30, 2022, 04:17:47 PM by dcol
Had a few minutes to get this together.
Here is a sample iperf3 run going from the firewall to a Windows PC on the physical LAN. 192.168.100.2 is the PC and 192.168.100.1 is the firewall. Let me know if I should be using other endpoints.
The LAN uses an X710-DA4 with two 10Gb ports set up as an LACP lagg0. The WAN is an Intel i210.
CPU is an i5-7600, RAM is 16GB.
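
For reference, the test was essentially the following (a sketch from memory; the exact flags may have differed):

# On the Windows PC (192.168.100.2): start the server side
iperf3 -s

# On the firewall (192.168.100.1): run the client against the PC
# (-t 30 for a 30-second run is an assumption, not the exact flag used)
iperf3 -c 192.168.100.2 -t 30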

So you have Suricata running on your gigabit interface, but you claim that you reach 10 gigabit throughput. Your screenshot even proves that your statement is not correct (2.8 Gbit/s). Also, the requirement was to route the traffic through the OPNsense. Sorry, but you missed the point.
I want all services to run at wire speed and therefore run this dedicated hardware configuration:

AMD Ryzen 7 9700x
ASUS Pro B650M-CT-CSM
64GB DDR5 ECC (2x KSM56E46BD8KM-32HA)
Intel XL710-BM1
Intel i350-T4
2x SSD with ZFS mirror
PiKVM for remote maintenance

private user, no business use

Also ... you are using iperf3 from a LAN interface to a LAN host, while Suricata only runs on WAN. :)

First off, I never said I achieved 10Gb speeds. I just stated that it works. If I had better instructions on what you wanted to see, maybe you would have what you wanted. My goal was to start a conversation about how to improve IDS performance, not a condemnation. I just wasted my time with this thread. Thanks.

Here are some comparisons, using IDS on the LAN only and 10Gb NICs on both LANs.
Even without IDS, I can only achieve around 6 Gb/s, so IDS doesn't slow it down too much.
IDS is using 4 rulesets. Same computer specs and NICs on both sides.
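
For a LAN-to-LAN comparison like this, the routed path between the two segments can be measured roughly like so (host addresses are placeholders; a single TCP stream often won't saturate 10Gb on its own, hence the parallel streams):

# Host on LAN A: start the server
iperf3 -s

# Host on LAN B: 4 parallel streams for 30 seconds, with the
# OPNsense box routing (and Suricata inspecting) in between
iperf3 -c <lan-a-host> -P 4 -t 30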

Remember that your SATA bus doesn't push more than 6 Gbit/s, no matter what.

So many of the systems sold cannot push more than that.

SAS pushes 12 Gbit/s, and NVMe is practically unlimited (more, depending on NICs and CPU).


Using NVMe, not SATA, on both systems.

Quote from: Supermule on December 31, 2022, 10:59:11 AM
Remember that your SATA bus doesn't push more than 6 Gbit/s, no matter what.

So many of the systems sold cannot push more than that.

SAS pushes 12 Gbit/s, and NVMe is practically unlimited (more, depending on NICs and CPU).

This thread is getting spammed by people who completely miss the topic.
Can the moderators close this topic?

It may take a few CPU generations until 10 Gbit/s IPS is within reach. Until then, this discussion goes nowhere.
I want all services to run at wire speed and therefore run this dedicated hardware configuration:

AMD Ryzen 7 9700x
ASUS Pro B650M-CT-CSM
64GB DDR5 ECC (2x KSM56E46BD8KM-32HA)
Intel XL710-BM1
Intel i350-T4
2x SSD with ZFS mirror
PiKVM for remote maintenance

private user, no business use

To answer your own question: get a Threadripper with a 10Gig card and see if you can make it sweat. :D

Quote from: seed on December 31, 2022, 11:43:47 PM
Quote from: Supermule on December 31, 2022, 10:59:11 AM
Remember that your SATA bus doesn't push more than 6 Gbit/s, no matter what.

So many of the systems sold cannot push more than that.

SAS pushes 12 Gbit/s, and NVMe is practically unlimited (more, depending on NICs and CPU).

This thread is getting spammed by people who completely miss the topic.
Can the moderators close this topic?

It may take a few CPU generations until 10 Gbit/s IPS is within reach. Until then, this discussion goes nowhere.

So because you don't agree or don't like it, you ask for closure...

It can easily be done. Server-grade hardware (dual Xeons) and I710-T4 NICs. This is what we use. It just keeps chugging along at about 1.4 million PPS, hardly breaking a sweat.

What does disk bandwidth - factually correct as those numbers are - have to do with IPS performance?
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)


But with a 10 Gbps network to scan, as the OP asked, and 9X% of all traffic being irrelevant - do you really think SATA could ever become a bottleneck?

You don't log unsuspicious/permitted connections, do you?
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

Quote from: pmhausen on January 01, 2023, 02:03:13 PM
But with a 10 Gbps network to scan, as the OP asked, and 9X% of all traffic being irrelevant - do you really think SATA could ever become a bottleneck?

You don't log unsuspicious/permitted connections, do you?

It becomes a bottleneck when Suricata writes to the logs, no matter the ruleset or traffic.

Put another way: as soon as it sees more than the 200,000 PPS mark, it becomes sluggish because of the disk subsystem and the logging...

I've looked into this a lot... and admittedly, it's hard to find up-to-date and reliable information. From everything I have investigated, it is even more challenging to get close to 10Gbps IPS using Suricata on FreeBSD because of Netmap.

Although Suricata can utilize more than one CPU core, Netmap's implementation on FreeBSD has historically been limited to a single CPU core when using Suricata in IPS mode. Apparently, there is work underway to change this behavior, but I haven't been able to find the current state of progress.

This was previously brought up by a forum admin in the post I've quoted below. It has been almost two years since that post, though... so I'm on the hunt for any updates.

Quote from: tuto2 on July 27, 2021, 11:09:23 AM
Hi,

Suricata on FreeBSD uses Netmap to achieve IPS functionality. Judging by your logs, you are indeed using netmap to bypass the host stack and enable Suricata to inspect packets straight off the wire.

Note the way ports are opened:

ix0/R (Receive thread) --> ix0^ (Host stack)
ix0^ (Host stack) --> ix0/T (Transmit thread)

This simply means that on initialization, netmap opens two "ports" - one on which to capture packets, at which point Suricata will be able to do its thing, and another port that represents the host stack (using the '^' symbol), which is used by Suricata to forward inspected packets back to the host stack. The same principle applies on the transmit side (but reversed), for a total thread usage of 4 in a default setup.

The way Netmap is currently implemented does not allow more than one thread to connect to the host stack on either the receive or the transmit side. Manually increasing the number of threads will not ensure a gain in throughput, and any measured increase will be misleading, since packets on different threads are not synchronized and could potentially slip past Suricata uninspected.

In conclusion, Suricata on FreeBSD currently only supports one thread in IPS mode. However, Netmap has recently committed support for multiple threads towards the host stack in FreeBSD, and Suricata is in the process of integrating this into their software - so keep an eye on that.

Cheers,

Stephan
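
For anyone finding this later: the port pairing Stephan describes corresponds to the netmap section of suricata.yaml. A minimal sketch (the interface name ix0 is just an example; on builds where the multi-thread work has landed, 'threads: auto' is where it would take effect):

netmap:
  # capture side: packets off the wire on ix0, inspected inline
  - interface: ix0
    threads: auto      # historically effectively capped at 1 toward the host stack
    copy-mode: ips     # inline mode: pass inspected packets, drop blocked ones
    copy-iface: ix0^   # '^' = the host-stack endpoint described above
  # return side: host-stack traffic back out to the wire
  - interface: ix0^
    threads: auto
    copy-mode: ips
    copy-iface: ix0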