Messages - JL

#16
check my post here (both IPS and IDS are working now)
if need be I can help out debugging this issue

https://forum.opnsense.org/index.php?topic=38140.0

the main issue of Suricata failing or not failing are MTU inconsistencies

There's a typical overhead (8 bytes for Windows / 22 bytes for Linux) to consider but bridges and ppp also add overhead.

So, if you start with the default MTU of 1500 (1518) or have jumbo frames (<=9000 MTU) this will have great effect.

I can say with confidence this approach works. Suricata is now up 100% of the time since 24 hours.
#17
don't try tuning network cards


unless the driver is very broken and the system deadlocks, there should be output in /var/log pointing at the cause of the freeze

did you try inserting a different network card ?

the bnxt driver loading suggests this is a server network card with multiple NICs?

since you mention the ZFS pool, is this by any chance opnsense running in a VM ?
#18
check my post here (both IPS and IDS are working now)

https://forum.opnsense.org/index.php?topic=38140.0

the main cause of Suricata failing (or not) is MTU inconsistency

There's a typical overhead (8 bytes for Windows / 22 bytes for Linux) to consider, but bridges and ppp also add overhead.

So, if you start with the default MTU of 1500 (1518 on the wire) or use jumbo frames (MTU up to 9000), this has a great effect.

I can say with confidence this approach works. Suricata has now been up 100% of the time for 24 hours.
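To make the overhead arithmetic above concrete, here is a minimal shell sketch; the 9000 starting MTU is just an illustrative jumbo-frame value, not a recommendation:

```shell
#!/bin/sh
# Illustrative only: derive a 'safe' MTU by subtracting the 22-byte
# overhead mentioned above from the configured physical MTU.
PHYS_MTU=9000                 # e.g. a jumbo-frame network
SAFE_MTU=$((PHYS_MTU - 22))   # leave headroom for bridge/ppp encapsulation
echo "$SAFE_MTU"
```

The same subtraction applies to a default 1500-byte network (giving 1478).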
#19
Please like or share a comment if this post is helpful or you have more questions.


This how-to post explains how Suricata crashes with OPNsense on Proxmox (any version) can be remediated. The advice here may not be suitable for production environments; I trust you know this already.

SHORT FORM specific to Proxmox


set all bridge interfaces used for opnsense to the same MTU
(it may be required to set the bridge MTU to the physical interface MTU minus 22)

use virtio network adapters for the opnsense VM interfaces used by suricata
set the network adapter MTU to 1 so it adopts the bridge MTU from proxmox
in opnsense, leave the MTU for the interface blank
in opnsense, for Suricata leave the MTU blank and disable promiscuous mode
in opnsense, for Suricata set the exact network masks configured for each interface; it may help to add/remove networks to match the interfaces enabled for Suricata
add the tunables:
> set dev.netmap.buf_size to the value displayed for NS_MOREFRAG or to the MTU value of proxmox (trial and error)
> set dev.netmap.admode to 1
reboot the VM
suricata should now be stable
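As a sketch, the tunables listed above can be tried on a running OPNsense/FreeBSD shell like this. The sysctl names are the standard FreeBSD netmap tunables; the buffer value (9216 here) is only an example and, as noted, finding the right one is trial and error. To persist across reboots they must also be added under System: Settings: Tunables.

```shell
# Sketch, values illustrative. netmap buffers must be at least as large
# as the biggest frame Suricata will see on the bridged interfaces.
sysctl dev.netmap.buf_size=9216   # >= bridge/interface MTU
sysctl dev.netmap.admode=1        # pin netmap to native mode (no flapping)
```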


Context


VM-hardware has Q35 chipset and uses virtio network interfaces.
The OPNSense host has qemu-guest-agent installed.


Indicator (console output)
Jan 28 12:39:45 opnsense kernel: 385.664273 [2197] netmap_buf_size_validate error: large MTU (8192) needed but igb1 does not support NS_MOREFRAG

Assumption
This indicates an MTU inconsistency: the MTU is set >1500 on the bridge but is 'broken' somewhere between the bridge and the IPS. To my understanding the network interfaces available on Proxmox are well supported by OPNsense.

For non-virtualised systems the issue may be the same. Check the MTU of the network and match it on the physical interfaces. Consider subtracting 22 from the MTU for compatibility.
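One way to check the effective MTU end-to-end is a don't-fragment ping from a Linux client. This is a sketch: the gateway address is a placeholder, and 1472 is the ICMP payload for a 1500-byte frame (1472 + 28 bytes of ICMP/IP headers).

```shell
# Probe the path MTU: -M do sets the don't-fragment bit (Linux ping).
# If this fails, lower -s until it succeeds; payload + 28 = usable MTU.
ping -M do -s 1472 -c 1 192.168.1.1
```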

It is recommended to check whether the MTU on the bridge is >1500

configure : within Proxmox

check and set the VM network interface(s) MTU to 1 so they adopt the MTU of the connected bridge.
you can consider decreasing the MTU by 22 (referred to below as PMTU)
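As a sketch of the Proxmox side, assuming a recent Proxmox VE that supports the mtu= NIC option (VM id 100 and vmbr0 are placeholders; mtu=1 on a virtio NIC means "inherit the bridge MTU"):

```shell
# Make the VM's virtio NIC follow the bridge MTU:
qm set 100 --net0 virtio,bridge=vmbr0,mtu=1
# Optionally pin the bridge to the physical MTU minus 22 (the PMTU above):
ip link set vmbr0 mtu 8978
```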


configure : within OPNSense


[ for Suricata ] under the 'advanced' section of the IPS service: check and/or clear the default packet size (MTU) setting
setting the MTU here can affect detection reliability and 'drop' or 'conflate' frames on inspection; consider setting MTU-22


[ for Interfaces ] check and/or clear the MTU settings for the monitored interfaces, OR (recommended) set the PMTU as the value
important: non-enterprise network cards may not support 'real' jumbo frames, which is what permits MTU >1500


Look up the specifications for the network interface cards (NIC) and do not set the MTU higher than the hardware supports, even if the MTU on the connecting switch is set to a much higher value.


[ for SYSTEM: SETTINGS: TUNABLES ] manually create the key dev.netmap.buf_size with value = <PMTUvalue>
this works around issues with some NICs where the MTU is not applied correctly, so hard-set it here with this key


configure : optionally for OPNSense


[ for SYSTEM: SETTINGS: TUNABLES ] manually create the key dev.netmap.admode with value = 1
this avoids flapping between native and emulated mode for the network interface


[ for Suricata ] consider setting MTU-22 as the packet size for stability


Considerations

when the MTU value is cleared for an interface it defaults to 1500
consider that this may severely impact IPS performance and/or accuracy

Resources

https://docs.opnsense.org/manual/ips.html
https://man.freebsd.org/cgi/man.cgi...eBSD+12.1-RELEASE+and+Ports#SUPPORTED_DEVICES
https://man.freebsd.org/cgi/man.cgi?vtnet
#20
original comment removed

this requires modifying the suricata.yaml file to include the correct sections for the mentioned App-Layer protocols which are missing; this is a best practice since the behavior will change in the future and the protocols will no longer be auto-enabled


"This behavior will change in Suricata 7, so please update your config"



if you have not tweaked the suricata.yaml file, consider looking for a suricata.yaml from a more recent version


check if these sections are present as such in suricata.yaml, consider adding them at the appropriate place


        #- dnp3
        - dcerpc
        - ftp
        #- ikev2
        - krb5
        - nfs
        - rdp
        - rfb
        - sip
        - smb
        - snmp
        - tftp
        - dhcp:
           ......



    # Note: parser depends on Rust support
    ntp:
      enabled: yes


    dhcp:
      enabled: yes


    sip:
      enabled: yes
    http2:
      enabled: yes
    snmp:
      enabled: yes
    rfb:
      enabled: yes
    mqtt:
      enabled: yes
    rdp:
      enabled: yes
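After editing suricata.yaml it is worth validating the file before restarting the service; Suricata has a test-config mode for this. The path shown is the usual OPNsense location, adjust if yours differs:

```shell
# Parse the configuration and exit; a non-zero exit status means
# the YAML (or a referenced rule file) is broken.
suricata -T -c /usr/local/etc/suricata/suricata.yaml
```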
#21
After much wrestling and worrying, the fact that NTPd does not sync appears to be due to the bogon filtering flag.

I've noticed other mishaps with bogon filtering in the past; it seems there should be some automated exemptions so this flag can be left enabled.

Regards,

JL
#22
For some reason i ended up with over 60GB of logs for unbound.

These are not compressed. I had hoped opnsense would compress the logs automatically, yet i fail to find an option to configure it to do so. Typically on a *nix system there's little in the way of reading compressed text log files, so I'd prefer to go ahead.

Is there anything to consider for doing so ?

Documentation seems lacking.
#23
22.7 Legacy Series / Re: Failing DNS services
January 03, 2023, 08:13:10 PM
Quote from: newsense on December 31, 2022, 07:11:02 PM
If DNScrypt is a must, use the latest version in a docker container. The one in OPNsense is quite old, unsure where the issue is there but I wouldn't use it on the internet until it is upgraded to current.

Bind -- zone management on the FW wouldn't be my first choice.

For anything else Unbound is more than fit for the job, and latest version as well.


Removing one or two if possible from the chain would help you narrow down the DNS issues.

Thanks, to me these are DoS issues of unknown origin.

It is interesting you mention dnscrypt is outdated, i'll check, thanks. Personally, i stay away from Docker; i don't like it, for no tangible reason. It is bad IT to me.

I disagree that Unbound is adequate; it is not a very stable service.
I disagree that running zone mgmt on a firewall should not be first choice: if a firewall cannot stay up or stay intact, there's little use for it.

#24
22.7 Legacy Series / Failing DNS services
December 31, 2022, 03:39:58 PM
While on older opnsense the 'intrusion detection service' frequently crashed, after the upgrade to 22.7 there are new issues: now it is the DNS services crashing ... which worked fine with older opnsense releases.

There are no apparent log entries indicating the reasons for the DNS service crashes, using unbound+dnscrypt+bind.
To my surprise all three services go down simultaneously. As I've noticed at least one successful (likely) DNS spoofing attempt, I'm not confident these crashes are benign.
#25
how about

host -v opnsense.org

?

or

host -U -v opnsense.org

?
#26
22.7 Legacy Series / Re: Vulnerabilities from the WAN ?
December 31, 2022, 03:31:14 PM
Typically OpenSSL vulnerabilities are client based, server based, or both. Most often either side needs to be patched to at least mitigate a vulnerability.

When you ask if this could impact the WAN side, this is somewhat vague; it implies you have services listening on the WAN side. If there are no services listening on the WAN side you are typically not vulnerable.

If by that question you mean an attacker could come in over the WAN after compromising an SSL/TLS connection from a client over the WAN to a service on the internet, then you could be vulnerable.

Note that 'a vulnerability' is different from 'an exploitable vulnerability'; the conditions to exploit a vulnerability may be relaxed (a bad thing) or very strict (a good thing).

When it comes to OpenSSL, visit https://www.openssl.org/news/vulnerabilities.html and verify that you understand which version is installed, because there may be specifics to the OS you're using. For example, RedHat backports patches to old version numbers by appending minor version indicators. Thus a low version may not be vulnerable because it was patched as if it were a 3.x version.
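To see which build is actually installed (including how the vendor labels backported patches), a quick check:

```shell
# Show the full version/build information of the locally installed
# OpenSSL, including compile-time options and the built-on date.
openssl version -a
```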

The most recent OpenSSL vulnerabilities are typically not easy to exploit and require specific conditions to occur, as they pivot on x509 certificates and specific parameters being enabled. It is my distinct impression there is not a very high risk for typical scenarios.
The most prevalent risk is that of a Denial-of-Service, implying you'd be running for example an https-enabled web service on the WAN.

I hope this brings some clarity.


#27
Thanks for sharing. That's roughly how i'm going about it.

By now i've found renting an extra public IP is affordable, and i've assigned this extra public IP to a bridge interface which is now exposed to the VM as a routed network interface (qemu/KVM).

The opnsense VMs appear to be running as expected in HA mode using carp. Now i want to add the IP assigned to the bridge interface as an HA IP to which i can bind various services.

so the set-up is now: [ public IP #1 ]-[ eth0 ] -> [bridge]-[public IP #2] -> [ opnsenseVM]

PIP#2 is reachable from the internet but the traffic does not show up in the opnsense VM

i understand this is because PIP#2 responds to the external traffic arriving over PIP#1, but i do not understand in what set-up PIP#2 would be 'owned' by the opnsense VM cluster
#28
hey

thanks for taking a little bit of time to share your thoughts


I have this server at my disposal yet just one public IP

The server is a dual CPU 8c/16t with plenty of RAM and disk

the set-up i have in mind is [ public IP ] > [ virbr0, virbr1, virbr2 ] > ( opnsense-fw-1, opnsense-fw-2 ) > virtual LAN > VM1...N
on VM1..N there will be just a few VM running services

so, now i have the public IP to which i configure DNS to resolve, and i want to have this traffic arrive at both of VM1..N on different ports

to this end i expected to use the public IP as a WAN VIP, but now i'm not certain if the ssh service running on the VM host will still be reachable if i do so

or for that matter, if i could have the opnsense-ha-cluster correctly resolve the DNS and match with the hosts behind the NAT
#29
20.7 Legacy Series / Re: repeat crashing
November 09, 2020, 11:42:04 AM
Quote from: Gauss23 on October 31, 2020, 07:16:18 AM
Sensei or Suricata enabled?

hey, why did you ask ?
#30
20.7 Legacy Series / Re: repeat crashing
October 31, 2020, 10:07:04 PM
Quote from: Gauss23 on October 31, 2020, 07:16:18 AM
Sensei or Suricata enabled?

Suricata only and detection only