I would like to respond to this thread, as I think it is still an important topic to this day.
We also have Suricata running in IPS mode, which uses netmap under the hood.
I found and read the following reply from Giuseppe, one of the netmap collaborators, stating:
Quote: "The one you are interested in are ring_num and buf_num"
In other words, you can of course increase the buffer size itself, but most likely you want to increase the number of buffers available to netmap.
What I have tried so far:
- Doubling the buffer size by setting dev.netmap.buf_size to 4096
- More importantly, increasing the number of buffers by setting dev.netmap.buf_num to 327680
- As well as setting dev.netmap.ring_num to 400
You might want to add these values to the tunables and then reboot the system.
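For reference, this is roughly how those values can be applied (a minimal sketch; the numbers are simply the ones I used above, and on OPNsense you would normally add them as persistent entries under System > Settings > Tunables rather than run these by hand):
Code:
# Runtime sysctl equivalents of the tunables above (values taken from my own setup)
sysctl dev.netmap.buf_size=4096      # size of each netmap buffer, in bytes
sysctl dev.netmap.buf_num=327680     # total number of netmap buffers
sysctl dev.netmap.ring_num=400       # netmap ring objects in the allocator pool (as I understand it)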
WARNING:
Increasing these values requires sufficient RAM to be present (at least 4 GB or more). You have been warned in case you do not have enough free RAM.
After the reboot Suricata may use some CPU cycles for a while, and sysctl dev.netmap | grep curr will initially show "0" until everything is allocated; I believe this is expected.
Eventually dev.netmap.buf_curr_num should match the buf_num set earlier.
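A quick sanity check after reboot (once Suricata has attached to the interface and the allocation has happened) is something like:
Code:
# Requested vs. currently allocated buffers; the second value should
# eventually match the first (it may read 0 right after boot)
sysctl dev.netmap.buf_num dev.netmap.buf_curr_num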
That being said, running a speedtest over a 3+ Gbit/s fiber connection still causes buffer issues in netmap, despite the settings above:
Code:
2025-10-17T02:00:29 Notice kernel [99224] 229.066251 [4335] netmap_transmit ax1 full hwcur 430 hwtail 179 qlen 250
2025-10-17T02:00:29 Notice kernel [99224] 229.059118 [4335] netmap_transmit ax1 full hwcur 430 hwtail 179 qlen 250
2025-10-17T02:00:28 Notice kernel [99223] 228.063878 [4335] netmap_transmit ax1 full hwcur 448 hwtail 194 qlen 253
2025-10-17T02:00:28 Notice kernel [99223] 228.055056 [4335] netmap_transmit ax1 full hwcur 449 hwtail 224 qlen 224
2025-10-17T02:00:27 Notice kernel [99222] 227.047952 [4335] netmap_transmit ax1 full hwcur 288 hwtail 505 qlen 294
2025-10-17T02:00:27 Notice kernel [99222] 227.039051 [4335] netmap_transmit ax1 full hwcur 289 hwtail 68 qlen 220
2025-10-17T02:00:26 Notice kernel [99221] 226.092928 [4335] netmap_transmit ax1 full hwcur 467 hwtail 238 qlen 228
2025-10-17T02:00:26 Notice kernel [99221] 226.084023 [4335] netmap_transmit ax1 full hwcur 468 hwtail 240 qlen 227
2025-10-17T02:00:25 Notice kernel [99220] 225.196415 [4335] netmap_transmit ax1 full hwcur 233 hwtail 482 qlen 262
2025-10-17T02:00:25 Notice kernel [99220] 225.188117 [4335] netmap_transmit ax1 full hwcur 483 hwtail 233 qlen 249
2025-10-17T02:00:24 Notice kernel [99219] 224.038394 [4335] netmap_transmit ax1 full hwcur 54 hwtail 338 qlen 227
2025-10-17T02:00:24 Notice kernel [99219] 224.030190 [4335] netmap_transmit ax1 full hwcur 339 hwtail 54 qlen 284
2025-10-17T02:00:23 Notice kernel [99218] 223.335506 [4335] netmap_transmit ax1 full hwcur 301 hwtail 29 qlen 271
2025-10-17T02:00:23 Notice kernel [99218] 223.325235 [4335] netmap_transmit ax1 full hwcur 30 hwtail 301 qlen 240
2025-10-16T22:57:20 Notice kernel [88235] 240.462029 [4335] netmap_transmit ax1 full hwcur 466 hwtail 188 qlen 277
2025-10-16T22:57:20 Notice kernel [88235] 240.452645 [4335] netmap_transmit ax1 full hwcur 189 hwtail 466 qlen 234
2025-10-16T17:41:57 Notice kernel [69312] 317.711273 [4335] netmap_transmit ax1 full hwcur 169 hwtail 391 qlen 289
2025-10-16T17:41:57 Notice kernel [69312] 317.702335 [4335] netmap_transmit ax1 full hwcur 170 hwtail 483 qlen 198
2025-10-16T13:31:43 Notice kernel [54299] 303.926446 [4335] netmap_transmit ax1 full hwcur 463 hwtail 188 qlen 274
2025-10-16T06:41:43 Notice kernel [29698] 703.601969 [4335] netmap_transmit ax1 full hwcur 12 hwtail 270 qlen 253
2025-10-16T06:41:43 Notice kernel [29698] 703.593897 [4335] netmap_transmit ax1 full hwcur 271 hwtail 12 qlen 258
2025-10-16T06:41:43 Notice kernel [135] ax1: VLAN Stripping Disabled
2025-10-16T06:41:43 Notice kernel [135] ax1: VLAN filtering Disabled
2025-10-16T06:41:43 Notice kernel [135] ax1: Receive checksum offload Disabled
2025-10-16T06:41:43 Notice kernel [135] ax1: RSS Enabled
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 7
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 6
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 5
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 4
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 3
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 2
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 1
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 0
2025-10-16T06:41:43 Notice kernel [135] ax1: VLAN Stripping Disabled
2025-10-16T06:41:43 Notice kernel [135] ax1: VLAN filtering Disabled
2025-10-16T06:41:43 Notice kernel [135] ax1: Receive checksum offload Disabled
2025-10-16T06:41:43 Notice kernel [135] ax1: RSS Enabled
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 7
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 6
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 5
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 4
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 3
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 2
2025-10-16T06:41:43 Notice kernel [135] ax1: xgbe_config_sph_mode: SPH disabled in channel 1
I also monitored the processes using ps -axfu during a speedtest. As expected, Suricata uses the most CPU cycles, but it is not maxing out, meaning there is CPU headroom left that Suricata is not using.
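In case it helps someone else: to see whether a single Suricata worker thread is the bottleneck (rather than the process as a whole), per-thread CPU usage can be checked, for example with:
Code:
top -HS                       # -H lists individual threads, -S includes system/kernel threads
ps auxH | grep [s]uricata     # snapshot of the Suricata threads only ([s] excludes the grep itself)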
My conclusion: increasing the buffers might help, but it does not solve the issue. Suricata is simply too slow, at the moment, in processing the traffic. Alternatively, other fine-tuning or configuration might be required to keep the buffers from filling up. I have no idea which other tunables or options might increase the throughput of Suricata in IPS mode. Maybe enabling RSS? At this moment I do not know how to continue further.
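One thing I still plan to check is whether the drops are also visible on the Suricata side, for example (assuming stats logging is enabled and the default log location is used):
Code:
grep -i drop /var/log/suricata/stats.log | tail -n 20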
PS: I also found this note: https://docs.opnsense.org/troubleshooting/performance.html#note-regarding-ips saying that IPS is limited to 1 thread, but I am not sure whether that note is still valid.
"