netmap_transmit error

Started by awptechnologies, February 23, 2025, 03:39:16 AM

Quote from: Melroy vd Berg on October 18, 2025, 02:09:59 PM
I have no idea what other tunables options might increase the throughput of Suricata in IPS mode. Maybe enabling RSS??

Not sure about Suricata, but I have tuned RSS due to ZenArmor. I was able to go from ~1G cap to around ~1.7G, so there is definitely a performance gain when done properly.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

October 24, 2025, 07:50:58 AM #16 Last Edit: October 24, 2025, 08:07:48 AM by inquiredadvice
Quote from: Seimus on October 19, 2025, 04:04:44 PM
Not sure about Suricata, but I have tuned RSS due to ZenArmor. I was able to go from ~1G cap to around ~1.7G, so there is definitely a performance gain when done properly.

Could you share the tunes you made?

Quote from: Melroy vd Berg on October 18, 2025, 02:09:59 PM
I found and read the following reply from Giuseppe, who is one of the netmap collaborators, here.

Stating:
Quote
The one you are interested in are ring_num and buf_num

Meaning, you can of course increase the buffer size itself, but you most likely want to increase the number of buffers available to netmap.

What I tried thus far is:

  • Doubling the buffer size by setting dev.netmap.buf_size to 4096
  • More importantly, increasing the number of buffers by setting dev.netmap.buf_num to 327680
  • As well as setting dev.netmap.ring_num to 400

You might want to add these values to the tunables and then reboot the system.
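For anyone who wants to double-check before committing, these can be read (and, where the kernel allows it at runtime, changed) from a shell with sysctl. The values below are simply the ones from this post, not a recommendation for every box; if a value refuses to change at runtime, add it under System > Settings > Tunables and reboot instead:

  # read the current netmap settings
  sysctl dev.netmap.buf_size dev.netmap.buf_num dev.netmap.ring_num

  # try the values mentioned above
  sysctl dev.netmap.buf_size=4096
  sysctl dev.netmap.buf_num=327680
  sysctl dev.netmap.ring_num=400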


I noticed that when installing Zenarmor, the following entries are added to the tunables:

dev.netmap.buf_num   runtime  1000000    Automatically added by Zenarmor
dev.netmap.ring_num  runtime     1024    Automatically added by Zenarmor
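As a rough sanity check of what those buffer counts cost in RAM (assuming the stock netmap buffer size of 2048 bytes; check dev.netmap.buf_size on your own box, and note the kernel may clamp these values):

  1,000,000 buffers × 2048 B ≈ 2.0 GB for the Zenarmor-added value
    327,680 buffers × 4096 B ≈ 1.3 GB for the values quoted above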

Thnx, IA.

Quote from: inquiredadvice on October 24, 2025, 07:50:58 AM
Could you share the tunes you made?

Sure,

Just... don't configure them without understanding what they mean. Tuning is tailored per system; these are for the N100 and work fine for me.


 added dev.igc.0.fc    # adjust flow control on igc cards within FreeBSD    runtime    0
 added dev.igc.1.fc    # adjust flow control on igc cards within FreeBSD    runtime    0
 added dev.igc.2.fc    # adjust flow control on igc cards within FreeBSD    runtime    0
 added dev.igc.3.fc    # adjust flow control on igc cards within FreeBSD    runtime    0

 removed dev.cpu.0.cx_lowest    lowest Cx sleep state to use    runtime    C3
 removed dev.cpu.1.cx_lowest    lowest Cx sleep state to use    runtime    C3
 removed dev.cpu.2.cx_lowest    lowest Cx sleep state to use    runtime    C3
 removed dev.cpu.3.cx_lowest    lowest Cx sleep state to use    runtime    C3
 removed hw.acpi.cpu.cx_lowest    lowest Cx sleep state to use    runtime    C3

 added  net.inet.tcp.recvspace 65536
 added  net.inet.tcp.sendspace  65536
 added  net.inet.tcp.recvbuf_max 4194304
 added  net.inet.tcp.sendbuf_inc 65536
 added  net.inet.tcp.sendbuf_max 4194304

 changed kern.ipc.maxsockbuf    default to 614400000

 changed hw.ibrs_disable        default to 1
 changed vm.pmap.pti            default to 0
 added  vm.pmap.pcid_enabled  0

 added  net.isr.maxthreads  -1
 added  net.isr.bindthreads 1
 added  net.isr.dispatch    hybrid
 added  net.inet.rss.enabled 1
 added  net.inet.rss.bits    2

 added  net.inet.tcp.soreceive_stream 1
 added  net.pf.source_nodes_hashsize 1048576

 added  net.inet.tcp.mssdflt 1460
 added  net.inet.tcp.abc_l_var 44
 added  net.inet.tcp.initcwnd_segments 44
 added  net.inet.tcp.minmss 536

 added  net.inet.tcp.rfc6675_pipe 1

 added  kern.random.fortuna.minpoolsize 128
 added  net.isr.defaultqlimit 2048
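
If you do try the RSS / netisr part of this, it is worth verifying it actually took effect; these are standard FreeBSD commands, nothing OPNsense-specific:

  # confirm RSS is on and see how flows map onto the netisr buckets
  sysctl net.inet.rss.enabled net.inet.rss.bits net.inet.rss.bucket_mapping

  # per-CPU netisr queue statistics
  netstat -Q

  # confirm flow control really ended up disabled on the igc ports
  sysctl dev.igc.0.fc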


Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD