Messages - jonny5

#1
Ok - so it was the Unbound option "Register DHCP Static Mappings" which more or less cancelled forwarding for the local domain. With that disabled, Unbound follows the forwarding rules...

So the Overrides work as intended, which is great. Aliases ask Unbound (localhost:53) for hosts, and my Aliases now update as expected.
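For anyone who wants to double-check the same thing, this is roughly the test I run from the firewall itself (dnspython and the example hostname are just illustrations - swap in your own local FQDN):

import dns.resolver

# ask the OPNSense's own Unbound (localhost:53 from the firewall's point of view)
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]
fqdn = "plex.localdomain.home"          # example local-domain host

for rrtype in ("A", "AAAA"):
    try:
        answers = resolver.resolve(fqdn, rrtype)
        print(rrtype, [rr.to_text() for rr in answers])
    except Exception as exc:            # NXDOMAIN / no answer / timeout
        print(rrtype, "failed:", exc)

With "Register DHCP Static Mappings" disabled, both record types should now come back via the forward to BIND.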
#2
Quote from: Maurice on February 09, 2026, 08:11:13 PM
Should be whatever is configured in System: Settings: General.

No, I believe it is whatever listens on the localhost's (the OPNSense's) port 53, if anything is listening there at all - and for me that is Unbound.

The reason I think this is that the Alias wasn't able to get the IPv6 addresses for hosts that are on the LAN and not overridden in Unbound. Whenever Unbound had to forward a local-domain AAAA resolution, it did not work/resolve. Upstream is fine - all public (non-LAN) AAAA records resolve - but conditionally (locally) forwarded AAAA does not happen, and since that is the issue, each affected Alias ends up with only 1 IP, its IPv4 address.

To summarize the first post: Unbound does not Query Forward a "local domain" AAAA, but it does Query Forward both local A and PTR (and PTR for either IPv4 or IPv6). Further, yes, it does the DNS-over-TLS resolution for all the public stuff fine; I'm only having the issue with Unbound Query Forwarding of AAAA records for otherwise-local hosts (technically a public /64 DHCP IPv6 subnet).
#3
Environment detail:
OPNSense runs Unbound for upstream DNS + Overrides (so that critical infra still works when the local-domain BIND, which lives outside the OPNSense, is down). Hosts use PiHoles, and both OPNSense Unbound and the PiHoles use the local BIND infra for the Local Domain. The local BIND has the forward and reverse zones set up and populated, and Unbound and the PiHoles are set to forward for the local domain and for all /24 IPv4 and /64 IPv6 subnets for reverse DNS lookup. This worked previously - I am considering going back to verify.

Upgrade journey:
Migrated from ISC to KEA, upgraded, did the firewall migration, and removed the ISC plugin. Almost everything works well - most hosts seem to correctly populate their Alias content counts, mapping hostnames to IPs.

Testing the process:
There is a Python script I wrote that updates forward and reverse records in the local BIND infra for the hostnames, fed from OPNSense (ARP/NDP/Reservations) and Portainer (Docker hosts). I can run
drill fqdn @pihole or
drill -x ip @pihole
for A and AAAA / IPv4 and IPv6, and together I get 2+ IPs back as expected. In this case the hostname happens to be "plex.localdomain.home" (not really, but close enough), and most/all other hostnames appear to correctly populate their counts (especially those that are overridden via IPv4 and IPv6 entries in Unbound's Override space).

Problem:
The issue is that the OPNSense Firewall Alias for the FQDN in question only has one value for its "content" - just one resolved IP. This FQDN is not overridden in Unbound. OPNSense's host discovery / host detect sees all the IPs for the FQDN's associated MAC address, and all of them resolve to the FQDN against the PiHoles/BIND, but the Alias does not pick them up? Seems odd. I'm curious where the configuration/direction for OPNSense's firewall to resolve hosts comes from - which DNS source of truth is it using?

!!! Interesting:
Doing a drill against the OPNSense for that FQDN's AAAA returns nothing, but against either PiHole or BIND there are results. Interestingly, if I do a reverse lookup on the FQDN's IPv6 against the OPNSense, Unbound responds with the IPv6's FQDN. So A (IPv4 forward DNS) and IPv4 and IPv6 PTR (both reverse DNS) work, but AAAA (IPv6 forward DNS) does not get query forwarded/answered by Unbound?
!!! Further:
After disabling all Unbound Overrides for the local domain, it still has the same issue - the AAAA query for local domains fails - and yes, I have the local domain added to the "Private Domains" in Unbound's Advanced settings. Additionally, it seems to only know the IPv6 addresses of FQDNs that were overridden, and is unable to do a conditionally forwarded AAAA/forward-IPv6 lookup (unless the FQDN in question is IPv6-overridden manually, and then it isn't forwarding/asking, it is merely answering, if you will).
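For reference, here is roughly the comparison I was doing with drill, as a Python sketch (the names, addresses, and the dnspython dependency are placeholders for illustration only):

import dns.resolver, dns.reversename

FQDN = "plex.localdomain.home"          # example host, not overridden in Unbound
HOST_V4 = "192.168.1.50"                # example addresses for the reverse checks
HOST_V6 = "2001:db8::50"
SERVERS = {"opnsense": "192.168.1.1", "pihole": "192.168.1.2", "bind": "192.168.1.3"}

def query(server_ip, name, rrtype):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server_ip]
    try:
        return [a.to_text() for a in r.resolve(name, rrtype)]
    except Exception as exc:
        return "no answer ({})".format(type(exc).__name__)

for label, ip in SERVERS.items():
    print(label)
    print("  A   ", query(ip, FQDN, "A"))
    print("  AAAA", query(ip, FQDN, "AAAA"))    # the one Unbound will not forward for me
    print("  PTR4", query(ip, dns.reversename.from_address(HOST_V4), "PTR"))
    print("  PTR6", query(ip, dns.reversename.from_address(HOST_V6), "PTR"))

In my case only the AAAA row comes back empty from the OPNSense, while PiHole and BIND answer everything.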

(Extra - I'm considering setting up the BIND plugin on the OPNSense just so I can have my existing primary BIND send updates to what would be OPNSense's secondary BIND. I want to understand why the forwarding doesn't already work, and maybe explore what is needed to configure the BIND plugin as a secondary BIND server within my existing infra while keeping the state in the OPNSense conf/backup - fixing 1 problem with possibly 2 or more new ones, lol, but if anyone has pointers on the original issue, please let me know.)

Ok - so it was the "Register DHCP Static Mappings" which more or less cancelled the forwarding for the local domain, with that disabled, it follows forwarding rules...
#4
Currently still on 25.7.11_9 and have transitioned from ISC to KEA, and so far things are working okay.

With ISC, I could find all of my leases for DHCPv4 and DHCPv6, but with KEA, that does not seem accessible. I tried looking into "host discovery" / "host watch", but maybe it isn't built out in 25.7.x yet. Curious what we can expect to use "host discovery" for and if the data will be available via the API?

Are there plans with KEA to allow us to see our DHCPv6 leases via API, both reserved and un-reserved?
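For what it's worth, this is the kind of thing I would like to be able to do for v6 as well - a rough Python sketch against the v4 lease search endpoint the GUI appears to use (the endpoint path, the response shape, and whether a leases6 counterpart exists are assumptions on my part, so please correct me):

import requests

API_KEY = "your-api-key"                # System: Access: Users: API keys - placeholders
API_SECRET = "your-api-secret"
FIREWALL = "https://192.168.1.1"

# v4 lease search endpoint as the Kea leases page seems to use it; a leases6
# equivalent is exactly what I'm asking about
resp = requests.get(FIREWALL + "/api/kea/leases4/search",
                    auth=(API_KEY, API_SECRET),
                    verify=False)       # self-signed GUI certificate in my case
resp.raise_for_status()
for lease in resp.json().get("rows", []):
    print(lease.get("address"), lease.get("hwaddr"), lease.get("hostname"))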
#5
Quote from: BrandyWine on October 15, 2025, 09:23:52 PM
What more info is needed? What should I look at?

Logs are being rotated daily, settings say weekly.
More than 4 logs are saved, settings say save 4.

Gotta admit, I have mine set at 2 weekly, and I only have 2... I was about to say "that's 4 weeks of logs..." but I only have two files with 2 + weekly... not sure either of our retention matches the configured state.

I did figure out how to enable rotation of an extra Suricata log file I created through Suricata's custom.yaml, and this conf file has stuck around through several upgrades

file name example:
/usr/local/etc/newsyslog.conf.d/suricataxff.conf:

content example:
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/suricata/evexff.json      root:wheel      640     1       500000  $W0D23  B       /var/run/suricata.pid   1
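For anyone copying that: per the newsyslog.conf format, the line keeps the file at mode 640 owned by root:wheel, retains 1 rotated copy, rotation is driven by the 500000 KB size field and the $W0D23 time field (weekly, Sunday at 23:00), the B flag marks the file as binary so no ASCII "logfile turned over" banner is written into the JSON, and signal 1 (SIGHUP) is sent to the PID in /var/run/suricata.pid so Suricata reopens the log.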
#6
There's one isr setting that I believe really enables the rest; see the attached image too.

net.isr.dispatch: deferred
#7
Checked the app-layer section of the latest upstream suricata.yaml against my suricata.yaml - this is a strong set of inspection enablements

app-layer:
  # error-policy: ignore
  protocols:
    telnet:
      enabled: yes
    rfb:
      enabled: yes
      detection-ports:
        dp: 5900, 5901, 5902, 5903, 5904, 5905, 5906, 5907, 5908, 5909
    mqtt:
      enabled: yes
      # max-msg-length: 1 MiB
      # subscribe-topic-match-limit: 100
      # unsubscribe-topic-match-limit: 100
      # Maximum number of live MQTT transactions per flow
      # max-tx: 4096
    krb5:
      enabled: yes
    bittorrent-dht:
      enabled: yes
    snmp:
      enabled: yes
    ike:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443

      # Generate JA3/JA4 fingerprints from client hello. If not specified it
      # will be disabled by default, but enabled if rules require it.
      ja3-fingerprints: auto
      ja4-fingerprints: auto

      # What to do when the encrypted communications start:
      # - track-only: keep tracking TLS session, check for protocol anomalies,
      #            inspect tls_* keywords. Disables inspection of unmodified
      #            'content' signatures. (default)
      # - bypass:  stop processing this flow as much as possible. No further
      #            TLS parsing and inspection. Offload flow bypass to kernel
      #            or hardware if possible.
      # - full:    keep tracking and inspection as normal. Unmodified content
      #            keyword signatures are inspected as well.
      #
      # For best performance, select 'bypass'.
      #
      #encryption-handling: track-only

    pgsql:
      enabled: yes
      # Stream reassembly size for PostgreSQL. By default, track it completely.
      stream-depth: 0
      # Maximum number of live PostgreSQL transactions per flow
      max-tx: 1024
    dcerpc:
      enabled: yes
      # Maximum number of live DCERPC transactions per flow
      # max-tx: 1024
    ftp:
      enabled: yes
      # memcap: 64 MiB
    websocket:
      enabled: yes
      # Maximum used payload size, the rest is skipped
      # Also applies as a maximum for uncompressed data
      max-payload-size: 64 KiB
    rdp:
      #enabled: yes
    ssh:
      enabled: yes
      # hassh: no

      # What to do when the encrypted communications start:
      # - track-only: keep tracking but stop inspection (default)
      # - full:    keep tracking and inspect as normal
      # - bypass:  stop processing this flow as much as possible.
      #            Offload flow bypass to kernel or hardware if possible.
      # For the best performance, select 'bypass'.
      #
      # encryption-handling: track-only
    doh2:
      enabled: yes
    http2:
      enabled: yes
      # Maximum number of live HTTP2 streams in a flow
      #max-streams: 4096
      # Maximum headers table size
      #max-table-size: 65536
      # Maximum reassembly size for header + continuation frames
      #max-reassembly-size: 102400
    smtp:
      enabled: yes
      raw-extraction: no
      # Maximum number of live SMTP transactions per flow
      # max-tx: 256
      # Configure SMTP-MIME Decoder
      mime:
        # Decode MIME messages from SMTP transactions
        # (may be resource intensive)
        # This field supersedes all others because it turns the entire
        # process on or off
        decode-mime: yes

        # Decode MIME entity bodies (ie. Base64, quoted-printable, etc.)
        decode-base64: yes
        decode-quoted-printable: yes

        # Maximum bytes per header data value stored in the data structure
        # (default is 2000)
        header-value-depth: 2000

        # Extract URLs and save in state data structure
        extract-urls: yes
        # Scheme of URLs to extract
        # (default is [http])
        #extract-urls-schemes: [http, https, ftp, mailto]
        # Log the scheme of URLs that are extracted
        # (default is no)
        #log-url-scheme: yes
        # Set to yes to compute the md5 of the mail body. You will then
        # be able to journalize it.
        # Set it to no to disable it.
        # Default is auto: not enabled until a rule needs it
        # body-md5: auto
      # Configure inspected-tracker for file_data keyword
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 4096
    imap:
      enabled: detection-only
    pop3:
      enabled: yes
      detection-ports:
        dp: 110
      # Stream reassembly size for POP3. By default, track it completely.
      stream-depth: 0
      # Maximum number of live POP3 transactions per flow
      # max-tx: 256
    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445
      # Maximum number of live SMB transactions per flow
      # max-tx: 1024

      # Stream reassembly size for SMB streams. By default track it completely.
      #stream-depth: 0

    nfs:
      enabled: yes
      # max-tx: 1024
    tftp:
      enabled: yes
    dns:
      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    http:
      enabled: yes

      # Byte Range Containers default settings
      # byterange:
      #   memcap: 100 MiB
      #   timeout: 60

      # memcap:                   Maximum memory capacity for HTTP
      #                           Default is unlimited, values can be 64 MiB, e.g.

      # default-config:           Used when no server-config matches
      #   personality:            List of personalities used by default
      #   request-body-limit:     Limit reassembly of request body for inspection
      #                           by http_client_body & pcre /P option.
      #   response-body-limit:    Limit reassembly of response body for inspection
      #                           by file_data, http_server_body & pcre /Q option.
      #
      #   For advanced options, see the user guide


      # server-config:            List of server configurations to use if address matches
      #   address:                List of IP addresses or networks for this block
      #   personality:            List of personalities used by this block
      #
      #                           Then, all the fields from default-config can be overloaded
      #
      # Currently Available Personalities:
      #   Minimal, Generic, IDS (default), IIS_4_0, IIS_5_0, IIS_5_1, IIS_6_0,
      #   IIS_7_0, IIS_7_5, Apache_2
      libhtp:
         default-config:
           personality: IDS

           # Can be specified in KiB, MiB, GiB.  Just a number indicates
           # it's in bytes.
           request-body-limit: 100 KiB
           response-body-limit: 100 KiB

           # inspection limits
           request-body-minimal-inspect-size: 32 KiB
           request-body-inspect-window: 4 KiB
           response-body-minimal-inspect-size: 40 KiB
           response-body-inspect-window: 16 KiB

           # response body decompression (0 disables)
           response-body-decompress-layer-limit: 2

           # auto will use http-body-inline mode in IPS mode, yes or no set it statically
           http-body-inline: auto

           # Decompress SWF files. Disabled by default.
           # Two types: 'deflate', 'lzma', 'both' will decompress deflate and lzma
           # compress-depth:
           # Specifies the maximum amount of data to decompress,
           # set 0 for unlimited.
           # decompress-depth:
           # Specifies the maximum amount of decompressed data to obtain,
           # set 0 for unlimited.
           swf-decompression:
             enabled: no
             type: both
             compress-depth: 100 KiB
             decompress-depth: 100 KiB

           # Use a random value for inspection sizes around the specified value.
           # This lowers the risk of some evasion techniques but could lead
           # to detection change between runs. It is set to 'yes' by default.
           #randomize-inspection-sizes: yes
           # If "randomize-inspection-sizes" is active, the value of various
           # inspection size will be chosen from the [1 - range%, 1 + range%]
           # range
           # Default value of "randomize-inspection-range" is 10.
           #randomize-inspection-range: 10

           # decoding
           double-decode-path: no
           double-decode-query: no

           # Can enable LZMA decompression
           #lzma-enabled: false
           # Memory limit usage for LZMA decompression dictionary
           # Data is decompressed until dictionary reaches this size
           #lzma-memlimit: 1 MiB
           # Maximum decompressed size with a compression ratio
           # above 2048 (only LZMA can reach this ratio, deflate cannot)
           #compression-bomb-limit: 1 MiB
           # Maximum time spent decompressing a single transaction in usec
           #decompression-time-limit: 100000
           # Maximum number of live transactions per flow
           #max-tx: 512
           # Maximum used number of HTTP1 headers in one request or response
           #headers-limit: 1024

         server-config:

           #- apache:
           #    address: [192.168.1.0/24, 127.0.0.0/8, "::1"]
           #    personality: Apache_2
           #    # Can be specified in KiB, MiB, GiB.  Just a number indicates
           #    # it's in bytes.
           #    request-body-limit: 4096
           #    response-body-limit: 4096
           #    double-decode-path: no
           #    double-decode-query: no

           #- iis7:
           #    address:
           #      - 192.168.0.0/24
           #      - 192.168.10.0/24
           #    personality: IIS_7_0
           #    # Can be specified in KiB, MiB, GiB.  Just a number indicates
           #    # it's in bytes.
           #    request-body-limit: 4096
           #    response-body-limit: 4096
           #    double-decode-path: no
           #    double-decode-query: no

    # Note: Modbus probe parser is minimalist due to the limited usage in the field.
    # Only Modbus message length (greater than Modbus header length)
    # and protocol ID (equal to 0) are checked in probing parser
    # It is important to enable detection port and define Modbus port
    # to avoid false positives
    modbus:
      # How many unanswered Modbus requests are considered a flood.
      # If the limit is reached, the app-layer-event:modbus.flooded; will match.
      #request-flood: 500

      enabled: yes
      detection-ports:
        dp: 502
      # According to MODBUS Messaging on TCP/IP Implementation Guide V1.0b, it
      # is recommended to keep the TCP connection opened with a remote device
      # and not to open and close it for each MODBUS/TCP transaction. In that
      # case, it is important to set the depth of the stream reassembling as
      # unlimited (stream.reassembly.depth: 0)

      # Stream reassembly size for modbus. By default track it completely.
      stream-depth: 0

    # DNP3
    dnp3:
      enabled: yes
      detection-ports:
        dp: 20000

    # SCADA EtherNet/IP and CIP protocol support
    enip:
      enabled: yes
      detection-ports:
        dp: 44818
        sp: 44818

    ntp:
      enabled: yes

    quic:
      enabled: yes

    dhcp:
      enabled: yes

    sip:
      enabled: yes

    ldap:
      tcp:
        enabled: yes
        detection-ports:
          dp: 389, 3268
      udp:
        enabled: yes
        detection-ports:
          dp: 389, 3268
      # Maximum number of live LDAP transactions per flow
      # max-tx: 1024

    mdns:
      enabled: yes

A reference to what I am checking against:
https://github.com/OISF/suricata/blob/main/suricata.yaml.in
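If it helps anyone do the same comparison, this is roughly how I diff the app-layer protocol list in Python (PyYAML assumed; note the raw upstream suricata.yaml.in carries @...@ build placeholders, so compare against a rendered/plain copy):

import yaml   # PyYAML

def app_layer_protocols(path):
    with open(path) as fh:
        doc = yaml.safe_load(fh)
    protos = doc.get("app-layer", {}).get("protocols", {})
    # map each protocol to its 'enabled' value, or to its whole sub-dict for
    # nested entries like dns/ldap that have tcp/udp sections
    return {name: cfg.get("enabled", cfg) if isinstance(cfg, dict) else cfg
            for name, cfg in protos.items()}

mine = app_layer_protocols("/usr/local/etc/suricata/suricata.yaml")
upstream = app_layer_protocols("suricata.yaml")   # local copy of the upstream example

for proto in sorted(set(mine) | set(upstream)):
    if mine.get(proto) != upstream.get(proto):
        print(proto, "mine:", mine.get(proto), "upstream:", upstream.get(proto))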
#8
You will have to re-register your agents (clients/parsers/blockers) on any new CrowdSec Server setup; you cannot migrate them. The same is true if you switch DBs for the Server, since it stores the registrations in the DB.

That said, if you deploy the CrowdSec Server outside of the OPNSense and just use the CrowdSec Agent (parser and blocker) features on the OPNSense, you can more or less have a 'hard coded', deploys-to-working list of agents, since you can seed the agents at Docker Compose deploy time via the environment element of the 'docker-compose.yml' file.

The other option would be to have a script that creates your machines and bouncers (the parser and blocker/bouncer agents) on the Server. Once you have that, just maintain your IaC (Infrastructure as Code) to match your environment, and if you ever have to rebuild, you already have your setup tool-kit.
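As a rough illustration of that script idea (machine/bouncer names and the password are placeholders, and it assumes plain 'cscli machines add' / 'cscli bouncers add' run on whatever box holds the CrowdSec Local API):

import subprocess

# declarative list of what the environment should contain - keep this in your IaC repo
MACHINES = [("opnsense-agent", "change-me-machine-pw")]
BOUNCERS = ["opnsense-firewall-bouncer"]

def cscli(*args):
    return subprocess.run(["cscli", *args], check=True, capture_output=True, text=True)

for name, password in MACHINES:
    # registers a parser/agent machine on the Server (LAPI)
    cscli("machines", "add", name, "--password", password)

for name in BOUNCERS:
    # registers a bouncer/blocker; the command output contains the API key it will need
    result = cscli("bouncers", "add", name)
    print(name, result.stdout.strip())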
#9
Understood, but no, I am not "asking us to do away with security measures". It is the original way (and hails from the pulled-pork/oinkmaster days with Snort, if you have used IDS that long).

You do not have to use it; OPNSense has designed its own way to enable/disable rules, and the policy system works. It just does not let you 'edit' the rules - and yes, if you know what you are doing, you can increase your security with a few useful rule edits.

Here's a git repo I made with some 'suricata-update' config examples to help people get started.
https://github.com/j0nny55555/noiseless-suricata-update

The 'suricata-update' method is what ships with Suricata by default (and it is already present on the OPNSense, since it gets installed with suricata), but OPNSense does not use it. IMHO, the OPNSense GUI should drive 'suricata-update' in the background; then we could disable/enable/modify/drop rules very quickly, as it works quite fast.

If you are like me and have an aging box running your OPNSense, the 'suricata-update' method will put a decent tax on system resources... so now I have a Docker container (running the same Suricata version as my OPNSense) build the rules file, which the OPNSense then downloads. It is pretty clever, fast, and resource-light on the router, since another beefy box does all the rule-building heavy lifting (regex modifications get heavy on 200k+ rules).
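Roughly, the container side boils down to something like the following (paths, the published output location, and which suricata-update conf files you feed it will differ per setup - a sketch of the idea, not my exact build):

import subprocess, shutil

# run suricata-update inside the rule-builder container, using the same
# enable/disable/modify/drop conf files the firewall would otherwise apply
subprocess.run(
    ["suricata-update",
     "--data-dir", "/work/suricata-update",       # keep state out of the image layer
     "--disable-conf", "/conf/disable.conf",
     "--enable-conf", "/conf/enable.conf",
     "--modify-conf", "/conf/modify.conf",
     "--drop-conf", "/conf/drop.conf",
     "--no-test"],                                # the firewall's Suricata does the final test
    check=True,
)

# publish the merged rules file where the OPNSense can download it as a custom rule URL
shutil.copy("/work/suricata-update/rules/suricata.rules",
            "/srv/www/rules/suricata.rules")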

#10
Please read this - the way everyone else manages rules with Suricata is 'suricata-update', so to use 'suricata-update' on OPNSense you will have to change some things...

https://www.nova-labs.net/using-suricata-update-on-opnsense/
#11
Tutorials and FAQs / Re: READ THIS FIRST
October 16, 2025, 06:19:26 AM
Thank you for the note about RSS on older hosts (desktop hand-me-downs) - I can confirm it. I am getting good throughput even after disabling RSS, and a huge power-cost saving in the process. I didn't realize RSS added that much load to an older CPU.

Edit: to any others with older hand-me-down CPU hosts - if you have over 1 Gbps fiber, you will likely need to enable RSS to get the speed/throughput you are paying for. It will cost you a little more in watts, but you have to decide whether you want to "get the rest of it".

Also, want to mention that CrowdSec is not an IPS. It can parse a Suricata fast or json log, but that does not make it an IDS/IPS.

CrowdSec is much more like a modular Fail2Ban: when it reads a log entry that matches an attack pattern (log sources are defined in CrowdSec's acquis.d folder and its conf files), it can add that IP (IPv4 or IPv6) to a blocklist locally, and/or escalate it to CrowdSec so the 'hive' is protected too. You set up the agents/parsers/blockers/etc. (a server is also a good idea for a multi-server install) and it will read whichever log files belong to services it recognizes or that you configure manually.

I agree with you about "don't drink the koolaid", but I will say security is only at its best in layers (and when monitored and followed up on - a SOC), and one should not expect it to be easy to accomplish or in any way turn-key. If you are interested in a project and in the computer/network security career space, it might be a good project to maintain. An exercise for the reader, if you see it as worthwhile.

One of the best/easiest ways to actually protect yourself is to use blocklists from reputable providers, which, fundamentally, is what you are doing with CrowdSec - the only difference is that you can be part of the 'blocklist provider'. Again, not easy (and not a sure fix), but doable and IMHO not a bad idea (and something that will not break things if set up correctly).

IMHO, IDS good, IPS not so good.

Planning on coming back and reading the rest of your post because, wow, quite the valuable "check here first" list of things to know as you get into running an OPNSense. Thank you again!
#12
Installed 8.0.1 - works in IDS mode (OPNsense 25.7.5-amd64 - we do not run IPS)

We use the logging and have modded things to use 'suricata-update' instead of the OPNSense Policy rule management feature

All of which still works great! It seems there were minimal 'suricata.yaml' modifications needed too; I will follow up here after combing through the latest published Suricata config file example

It should be mentioned (and this might belong more in plugin or core - looking for help/direction):
It has been difficult to keep a 'custom.yaml' file, which allows us to customize the Suricata config even further.
We rely on this heavily, and since we've disabled the OPNSense IDS update cron task, our 'custom.yaml' file at /usr/local/etc/suricata/ no longer gets replaced. It would be neat, now or in the future, to have a way to keep a heavily customized 'custom.yaml' for Suricata around natively (currently, if we modify the template it breaks on copy/import).

Extra - the suricata-update thing:
https://www.nova-labs.net/using-suricata-update-on-opnsense/
#13
Since the default CrowdSec firewall rule only stops incoming traffic for items on the list, I wanted to upgrade how that feature worked and, honestly, allow a few of my hosts to not be blocked by the firewall at all - an unfiltered host, if you will.

So, I made my own "Hosts" based Firewall Alias, and have a Python script that will get the latest list and put it in there.

This usually took a little while, so I tried threading the operation to increase speed. I might try multi-processing next, since sending the two sub-lists in parallel (each about 30,000 items) is about the fastest send I can get; if there is a change, I just update the whole sub-list that changed and do a reconfigure.
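Stripped down, the updater looks something like this (the alias_util and reconfigure endpoint paths are what I believe the firewall exposes - double-check them against your own OPNSense API reference before copying anything):

import threading
import requests

FIREWALL = "https://192.168.1.1"
AUTH = ("api-key", "api-secret")        # placeholders
ALIAS = "crowdsec_hosts"                # my "Hosts" type alias

def push_sublist(ips):
    # one call per entry via alias_util - part of why the whole run is slow
    for ip in ips:
        requests.post(FIREWALL + "/api/firewall/alias_util/add/" + ALIAS,
                      json={"address": ip}, auth=AUTH, verify=False)

def update(full_list):
    half = len(full_list) // 2
    threads = [threading.Thread(target=push_sublist, args=(chunk,))
               for chunk in (full_list[:half], full_list[half:])]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # tell the firewall to apply the alias changes
    requests.post(FIREWALL + "/api/firewall/alias/reconfigure", auth=AUTH, verify=False)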

Still, this takes as long as 10 seconds, which is too long. The other kinds of aliases are interesting to me, such as Internal and External.

It would seem (and I tried this too... but didn't understand how to 'reconfigure' or make the updates active) that you can do it faster by calling pfctl from Python - but how do you reconfigure after updating a "Hosts" based Alias? Do the Internal or External Alias types not need a 'reconfigure' for their contents to be active in the rules that use them?
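The pfctl experiment looked roughly like this (the table name is a guess on my part - OPNSense appears to create a pf table per alias, and pf tables referenced by rules are dynamic, which is exactly why I am unsure whether a 'reconfigure' is needed or whether the alias machinery will simply overwrite the table later):

import subprocess, tempfile

TABLE = "crowdsec_hosts"                # assumed to match the alias name

def replace_table(ips):
    # write the list to a temp file and atomically replace the pf table contents
    with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as fh:
        fh.write("\n".join(ips) + "\n")
        path = fh.name
    subprocess.run(["pfctl", "-t", TABLE, "-T", "replace", "-f", path], check=True)

replace_table(["192.0.2.10", "2001:db8::10"])    # example entries only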

I'm fairly new to pf/FreeBSD so please do not take for granted anything I might 'should' or 'could' know, teach me!
#14
Issue:
Last two updates, on restart, WAN has IPv6 address but no IPv4 address

Further:
The workaround is easy: I just go to the WAN interface, hit 'Save' at the bottom, and apply; then I get an IPv4 address on WAN, and a little later the IPv6 address shows up again on WAN as well. The LAN interfaces have their IPv6 (and IPv4) details, but the missing IPv4 on WAN initially keeps a few things from working after the update.

I will be able to test a reboot without an update later, but I do not have that change-management window right now.
#15
While I do not know much about OSPF, I have looked up a lot of tuning elements for OPNSense and FreeBSD, as mine runs on an older bare-metal host and I'm doing 10G intranet.

https://calomel.org/freebsd_network_tuning.html

This website ^ has details about many tuning elements, but they use a different set of reference values for kern.ipc.maxsockbuf. While I do not think there is a limitation/drawback to increasing the value (much the opposite, it seems), using a logical value seems wise.

For my router, I have the value set at: 614400000

Seems yours is already working, but who knows if there are odd groupings/reads/writes to that space due to a unique value - that said, how this value comes to be seems pretty opaque to me, so I didn't do all the work necessary to evaluate your "33554432" - I just wanted to share a resource I found that has helped lower bufferbloat and latency.

The specific part from their website that seems important:
# speed:   1 Gbit   maxsockbuf:   2MB   wscale:  6   in-flight:  2^6*65KB =    4MB (default)
# speed:   2 Gbit   maxsockbuf:   4MB   wscale:  7   in-flight:  2^7*65KB =    8MB
# speed:  10 Gbit   maxsockbuf:  16MB   wscale:  9   in-flight:  2^9*65KB =   32MB
# speed:  40 Gbit   maxsockbuf: 150MB   wscale: 12   in-flight: 2^12*65KB =  260MB
# speed: 100 Gbit   maxsockbuf: 600MB   wscale: 14   in-flight: 2^14*65KB = 1064MB
#
#kern.ipc.maxsockbuf=2097152    # (wscale  6 ; default)
#kern.ipc.maxsockbuf=4194304    # (wscale  7)
kern.ipc.maxsockbuf=16777216   # (wscale  9)
#kern.ipc.maxsockbuf=157286400  # (wscale 12)
#kern.ipc.maxsockbuf=614400000   # (wscale 14)
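
If you want to sanity-check a value against that table, the relationship it implies is easy to reproduce: wscale ends up being the smallest shift where the 65535-byte base TCP window, shifted left, still covers maxsockbuf. A quick Python sketch that just mirrors the calomel numbers above (not FreeBSD's exact internals):

def implied_wscale(maxsockbuf):
    # smallest wscale (capped at 14) such that 65535 << wscale covers maxsockbuf
    wscale = 0
    while (65535 << wscale) < maxsockbuf and wscale < 14:
        wscale += 1
    return wscale

for value in (2097152, 4194304, 16777216, 157286400, 614400000, 33554432):
    print(value, "->", implied_wscale(value))

The first five reproduce the wscale 6/7/9/12/14 rows above; by this rule of thumb, the "33554432" in question lands just past the wscale 9 boundary (it would imply wscale 10).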