21.1.5 Suricata broken (partially solved)

Started by binaryanomaly, May 04, 2021, 07:12:47 PM

May 04, 2021, 07:12:47 PM Last Edit: May 12, 2021, 07:38:53 PM by binaryanomaly
Hi,

Are there any known issues with Suricata since 21.1.5?


2021-05-04T17:37:14 suricata[80991] [100697] <Error> -- [ERRCODE: SC_ERR_NETMAP_CREATE(263)] - opening devname netmap:vtnet1/R failed: Invalid argument
2021-05-04T17:36:42 suricata[38241] [100229] <Notice> -- This is Suricata version 5.0.6 RELEASE running in SYSTEM mode


It doesn't want to run here.
Also, Hyperscan is broken, but that seems to be a known issue?


1. No, Suricata works fine.
2. Hyperscan is limited to certain NICs; AFAIK Realtek/whatever will not work, Intel NICs work well.

I am using Hyperscan and it works fine (quite an improvement, actually). Do you have Sensei installed? Two apps can't use the same interface with netmap; if something else is bound to that interface, Suricata will fail to start. Any other logs?

Ok thanks. Yes Intel NICs :(

Sensei is installed but only running on the LAN interface.


2021-05-04T20:53:25 kernel 405.154949 [2197] netmap_buf_size_validate error: large MTU (9000) needed but vtnet1 does not support NS_MOREFRAG


Something not good with the jumbos?
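One way to test the jumbo-frame theory from the shell (a sketch: `vtnet1` is the interface from the log above, and dropping the MTU by hand is only a temporary diagnostic; on OPNsense the MTU should normally be changed on the interface settings page so it survives reconfiguration):

```shell
# Show the current settings of the interface netmap complains about:
ifconfig vtnet1

# Temporarily drop the MTU to the standard 1500 and restart Suricata;
# if the NS_MOREFRAG error goes away, jumbo frames are the trigger.
ifconfig vtnet1 mtu 1500
```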

Suricata 6 was shipped with opnsense-devel on 21.1.4 and it did not have Hyperscan support. 21.1.5 fixed that, but version 6 seems to exhibit strange Netmap operation.

None of this affected Suricata 5 on 21.1.x releases.


Cheers,
Franco

It seems to be a NIC / netmap / jumbo issue.


2021-05-04T20:53:25 kernel 405.154949 [2197] netmap_buf_size_validate error: large MTU (9000) needed but vtnet1 does not support NS_MOREFRAG


Not yet sure what exactly is wrong because the NIC can definitely do it and the VM is configured to support it as well...

vtnet1 does not support NS_MOREFRAG

This is a NIC reply, NOT a Suricata/OS reply. Unfortunately this means it's most likely a FreeBSD driver issue with your particular NIC. What are you using? Bare metal or virtual? vtnet1 means you have it virtualized? In any case it's a driver issue. If this is virtualized, you could try a different driver or pass the NIC through.

I'm using an Intel X710-T4 with Proxmox. I'm surprised that everything else besides Suricata works so far.

I'll give SR-IOV / PCI passthrough a try and hope it will go away.

May 05, 2021, 01:54:19 PM #7 Last Edit: May 05, 2021, 01:56:11 PM by jclendineng
Ah, it IS Proxmox. Yes, that is 100% the issue. Try either a different driver OR pass through the entire NIC port. Everything else works because most other things are generic; netmap/Suricata needs specific driver support, and Hyperscan doubly so.

Edit: SR-IOV is the best option for this card and will allow full pass-through as you stated; try that and report back on any improvements...

Will give it a try as soon as I find time and report the results.

Interestingly enough, Sensei also seems fine.
AFAIK that would be comparable to Suricata in terms of driver support?

Suricata works fine for me on 21.1.5 with proxmox and virtio.

I'm not using jumbo frames however.

Well it does work here, too.

But
a) not with hyperscan
suricata[55228] [100337] <Error> -- [ERRCODE: SC_ERR_INVALID_YAML_CONF_ENTRY(139)] - Invalid mpm algo supplied in the yaml conf file: "hs"

b) not with jumbos

At least Hyperscan I had working before, IIRC.
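For reference, that "Invalid mpm algo" error means the generated suricata.yaml asks for `hs` (Hyperscan) while the running binary can't provide it. A minimal sketch of the relevant setting and the portable fallback (the temp file and the `sed` edit are purely illustrative; on OPNsense the real suricata.yaml is generated from the GUI's pattern-matcher setting, so don't hand-edit it):

```shell
# Illustrative stand-in for the relevant suricata.yaml key:
cat > /tmp/suricata-mpm-demo.yaml <<'EOF'
mpm-algo: hs
EOF

# "ac" (Aho-Corasick) is the portable matcher that needs neither Hyperscan
# compiled into the binary nor SSSE3 on the CPU:
sed -e 's/^mpm-algo: hs$/mpm-algo: ac/' /tmp/suricata-mpm-demo.yaml
```

`suricata --build-info` also reports whether Hyperscan support was compiled into the binary.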

I think I found the Hyperscan issue. Although I can't tell for sure because I'm using a different NIC now, I'm pretty sure it has the same root cause:

Proxmox by default suggests the kvm64 CPU type. This is a generic CPU type optimized for compatibility and portability that lacks many modern instructions and optimisations.

As soon as I changed the CPU type to "host", Hyperscan started working. Hope this helps somebody else at some point.
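For anyone who prefers the Proxmox CLI over the web UI, the same change can be made with `qm` (a sketch: the VM ID 100 is a placeholder, substitute your own, and the guest needs a full stop/start to pick up the new CPU model):

```shell
# Run on the Proxmox host: switch the guest from the generic kvm64 model
# to "host", which passes the physical CPU's feature flags (SSSE3, AVX, ...)
# through to the VM.
qm set 100 --cpu host
```

Equivalently, the VM's config file under /etc/pve/qemu-server/ ends up with a `cpu: host` line.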

According to Intel:
"When deployed on an Intel processor-based platform, HyperScan takes advantage of features such as hyper-threading, receive side scaling, and SIMD instructions to provide optimized scanning performance".

https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/hyperscan-suricata-solution-brief.pdf

SIMD is most common on x86 CPUs, RISC-V, and GPUs:
https://en.wikipedia.org/wiki/SIMD

Speaking of GPUs, here is a crazy idea: can we use GPU compute to do it? For a router box, an AMD APU is a good platform, and it has Vega graphics.

The issue with Hyperscan is that it heavily relies on instruction set extensions which, from a package build perspective, introduce incompatibilities across amd64 hardware (e.g. newer vs. older chips, or AMD vs. Intel).

To make at least some use of Hyperscan, the FreeBSD port therefore has to target a CPU type that is newer than decades-old baselines but still common enough to be supported by most chips:

https://github.com/freebsd/freebsd-ports/blob/3fb36d0318145fee4cb91482fb2cc85a6ff18cc3/devel/hyperscan/Makefile#L31-L32

That architecture is core2, and if you don't have a chip that supports it, Hyperscan will fail to work.

The other end of the problem is that although native builds would deliver the best performance, enabling a native build would target the build machine's own architecture and break Hyperscan for even more users in terms of portability.

So yes, Core 2 is the minimum requirement for Hyperscan, and it dates from 2006 according to Wikipedia:

https://en.wikipedia.org/wiki/Intel_Core_2
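A rough way to check whether a given machine or guest clears that baseline; SSSE3 is the key Core 2 extension Hyperscan relies on (a sketch: the flag spellings below cover Linux's /proc/cpuinfo and FreeBSD's boot messages, other platforms may differ):

```shell
# Under Proxmox's default kvm64 CPU type SSSE3 is hidden from the guest,
# which is exactly why Hyperscan refuses to load there.
if grep -qiw ssse3 /proc/cpuinfo 2>/dev/null || dmesg 2>/dev/null | grep -q 'SSSE3'; then
    echo "SSSE3 present"
else
    echo "SSSE3 missing"
fi
```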


Cheers,
Franco