Messages - morik_opnsense

#1
How to use environment variables with caddy?

I had a need to reuse certain parameters in different reverse proxy handler blocks. The manual way of duplicating the same value in multiple places works, of course, but it'd be easier to use environment variables. So, I tried adding caddy.conf in /usr/local/opnsense/service/conf/configd.conf.d with content like so:

CADDY_RESOLVERS_DNS_TLS=149.112.112.112 1.1.1.1 8.8.8.8 8.8.4.4

followed by

service configd restart
configctl caddy restart
caddy environ

Startup fails with the environment variable value not found. Perhaps a wiser being could tell me whether there is a trick to getting the env variables working?
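For reference, here is roughly how I intended to consume the variable inside a handler block (a sketch, not my full config; {$VAR} is Caddy's standard environment placeholder syntax, and the exact directive using the value is only illustrative — the open question is whether the caddy process started via configd actually sees CADDY_RESOLVERS_DNS_TLS):

site.example.com {
    reverse_proxy 10.0.0.10:8443 {
        transport http {
            tls
            # value expanded from the environment when the Caddyfile is parsed
            resolvers {$CADDY_RESOLVERS_DNS_TLS}
        }
    }
}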
#2
Quote from: Monviech (Cedrik) on May 04, 2025, 04:47:53 PM
I evaluated it and it's possible but very brittle. So it's not going to be included in the plugin.
Thank you for taking the time to look into this matter!

Quote
The build will be thinned out soon to only include cloudflare, which will make caddy add-package less prone to fail.
Good to hear! I finally got around to using the DNS-01 challenge with Porkbun DNS, so I had to rebuild caddy 2.10.1 with the porkbun module using xcaddy. The go124 (Go 1.24) upgrade was done implicitly. Slightly painful, but it works. Thank you for writing such a helpful plugin!
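For anyone following along, the rebuild itself was essentially a one-liner along these lines (a rough reconstruction; it assumes Go and xcaddy are already installed, and I've omitted the other modules I keep in my build):

xcaddy build v2.10.1 \
  --with github.com/caddy-dns/porkbun \
  --output ./caddy
# then swap the resulting binary in for /usr/local/bin/caddy and restart the service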
#4
Thank you, Cedrik.

OPNSense: You want to do what?
Me: pkg install xcaddy
OPNSense: Did you take me for a fool?
OPNSense:
pkg install xcaddy
Updating OPNsense repository catalogue...
Fetching meta.conf: 100%    163 B   0.2kB/s    00:01
Fetching packagesite.pkg: 100%  249 KiB 255.0kB/s    00:01
Processing entries: 100%
OPNsense repository update completed. 870 packages processed.
All repositories are up to date.
pkg: No packages available to install matching 'xcaddy' have been found in the repositories

Me:
sed -in 's/no/yes/'  /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: { enabled: yes }


pkg install xcaddy
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
Updating OPNsense repository catalogue...
Fetching meta.conf: 100%    163 B   0.2kB/s    00:01
Fetching packagesite.pkg: 100%  249 KiB 255.0kB/s    00:01
Processing entries: 100%
OPNsense repository update completed. 870 packages processed.
All repositories are up to date.
New version of pkg detected; it needs to be installed first.
The following 1 package(s) will be affected (of 0 checked):

Installed packages to be UPGRADED:
pkg: 1.19.2_5 -> 2.1.2 [FreeBSD]

Number of packages to be upgraded: 1

The process will require 31 MiB more space.
12 MiB to be downloaded.

Proceed with this action? [y/N]: N


OPNSense: Told you, I'll win.
Me: I give up

Also, a few minutes later,
Me: having never done ports, why don't we hose our system...
pkg install git
...
git clone --depth=1 https://git.FreeBSD.org/ports.git /usr/ports
cd /usr/ports/www/xcaddy
make install clean
....

mkdir -p ~/caddy_build && cd ~/caddy_build
xcaddy build \
  --with github.com/caddyserver/ntlm-transport \
  --with github.com/mholt/caddy-dynamicdns \
  --with github.com/mholt/caddy-l4 \
  --with github.com/mholt/caddy-ratelimit \
  --with github.com/hslatman/caddy-crowdsec-bouncer \
  --with github.com/caddyserver/transform-encoder
...
...
./caddy version
v2.10.0 h1:fonubSaQKF1YANl8TXqGcn4IbIRUDdfAkpcsfI/vX5U=

<< make my changes on crowdsec >>
configctl caddy restart
OK

Phew... No idea what else I broke, but the feature I wanted works now. Of course, I do not know how future os-caddy updates will behave. Life is indeed an adventure :-)
#5
Thank you, Cedrik.

Quote from: Monviech (Cedrik) on May 01, 2025, 06:55:57 AM
Try using xcaddy instead for your personal build.
I presume this'd mean I'd issue xcaddy with all the package names indicated in the previously failed command, like:

xcaddy build \
 --with github.com/caddy-dns/scaleway \
 --with github.com/caddy-dns/desec \
 ...
 (+ packages of my interest e.g. crowdsec)

/r
morik
PS: Your efforts in maintaining os-caddy port for OPNsense and other software are very much appreciated!
#6
Hello,

I recently upgraded from OPNsense (FreeBSD 14.2-RELEASE-p2) Business Edition 24.10 --> 25.4, and the Caddy server stopped working.

caddy --version
v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=

I have the crowdsec plugin in Caddy, which I use in the Caddyfile for integration with CrowdSec. So, usually, after major upgrades on OPNsense I end up doing the following:

caddy add-package github.com/hslatman/caddy-crowdsec-bouncer github.com/caddyserver/transform-encoder
and off to the races I go. However, this time around, I'm getting a weird 400 error.

caddy add-package github.com/hslatman/caddy-crowdsec-bouncer github.com/caddyserver/transform-encoder
2025/05/01 01:17:17.322 INFO this executable will be replaced {"path": "/usr/local/bin/caddy"}
2025/05/01 01:17:17.322 INFO requesting build {"os": "freebsd", "arch": "amd64", "packages": ["github.com/caddy-dns/desec@v0.0.0-20240526070323-822a6a2014b2", "github.com/caddy-dns/scaleway@v0.0.0-20231227190624-561fd7f77b1b", "github.com/caddyserver/transform-encoder", "github.com/caddy-dns/namedotcom@v0.1.3-0.20231028060845-b9fae156cd97", "github.com/caddy-dns/ovh@v0.0.3", "github.com/caddy-dns/porkbun@v0.2.1", "github.com/caddy-dns/rfc2136@v0.1.1", "github.com/hslatman/caddy-crowdsec-bouncer", "github.com/caddy-dns/netcup@v0.1.1", "github.com/caddy-dns/vultr@v0.0.0-20230331143537-35618104157e", "github.com/caddyserver/ntlm-transport@v0.1.3-0.20230224201505-e0c1e46a3009", "github.com/caddy-dns/cloudflare@v0.0.0-20240703190432-89f16b99c18e", "github.com/caddy-dns/directadmin@v0.3.1", "github.com/caddy-dns/hetzner@v0.0.2-0.20240820184004-23343c04385f", "github.com/caddy-dns/linode@v0.7.2", "github.com/caddy-dns/namecheap@v0.0.0-20240114194457-7095083a3538", "github.com/caddy-dns/acmedns@v0.3.0", "github.com/caddy-dns/bunny@v0.1.1-0.20240209091254-71ced26b4224", "github.com/caddy-dns/acmeproxy@v1.0.6", "github.com/caddy-dns/infomaniak@v1.0.1", "github.com/caddy-dns/inwx@v0.3.1", "github.com/caddy-dns/mailinabox@v0.0.2-0.20240829173454-39d0e3ce8e25", "github.com/caddy-dns/powerdns@v1.0.1", "github.com/caddy-dns/azure@v0.5.0", "github.com/caddy-dns/gandi@v1.0.4-0.20240531160843-d814cce86812", "github.com/caddy-dns/hexonet@v0.1.0", "github.com/mholt/caddy-ratelimit@v0.1.0", "github.com/mholt/caddy-l4@v0.0.0-20250102174933-6e5f5e311ead", "github.com/caddy-dns/dnsmadeeasy@v1.1.3", "github.com/mholt/caddy-dynamicdns@v0.0.0-20241025234131-7c818ab3fc34", "github.com/caddy-dns/duckdns@v0.4.0", "github.com/caddy-dns/ionos@v1.1.0"]}
Error: download failed: download failed: HTTP 400: unable to fulfill download request (id=43358b0e-5041-4319-adac-d96d6a1e570e)

caddy upgrade
2025/05/01 01:24:33.309 INFO this executable will be replaced {"path": "/usr/local/bin/caddy"}
2025/05/01 01:24:33.309 INFO requesting build {"os": "freebsd", "arch": "amd64", "packages": ["github.com/caddy-dns/cloudflare@v0.0.0-20240703190432-89f16b99c18e", "github.com/caddy-dns/gandi@v1.0.4-0.20240531160843-d814cce86812", "github.com/caddy-dns/inwx@v0.3.1", "github.com/caddy-dns/acmeproxy@v1.0.6", "github.com/caddy-dns/dnsmadeeasy@v1.1.3", "github.com/caddy-dns/duckdns@v0.4.0", "github.com/caddy-dns/hetzner@v0.0.2-0.20240820184004-23343c04385f", "github.com/caddy-dns/mailinabox@v0.0.2-0.20240829173454-39d0e3ce8e25", "github.com/caddy-dns/namecheap@v0.0.0-20240114194457-7095083a3538", "github.com/mholt/caddy-ratelimit@v0.1.0", "github.com/mholt/caddy-l4@v0.0.0-20250102174933-6e5f5e311ead", "github.com/caddy-dns/bunny@v0.1.1-0.20240209091254-71ced26b4224", "github.com/caddy-dns/directadmin@v0.3.1", "github.com/caddy-dns/linode@v0.7.2", "github.com/caddy-dns/infomaniak@v1.0.1", "github.com/caddy-dns/netcup@v0.1.1", "github.com/caddy-dns/vultr@v0.0.0-20230331143537-35618104157e", "github.com/caddy-dns/acmedns@v0.3.0", "github.com/caddy-dns/azure@v0.5.0", "github.com/caddy-dns/desec@v0.0.0-20240526070323-822a6a2014b2", "github.com/caddy-dns/ovh@v0.0.3", "github.com/caddy-dns/porkbun@v0.2.1", "github.com/caddy-dns/scaleway@v0.0.0-20231227190624-561fd7f77b1b", "github.com/caddyserver/ntlm-transport@v0.1.3-0.20230224201505-e0c1e46a3009", "github.com/caddy-dns/rfc2136@v0.1.1", "github.com/caddy-dns/hexonet@v0.1.0", "github.com/caddy-dns/ionos@v1.1.0", "github.com/caddy-dns/namedotcom@v0.1.3-0.20231028060845-b9fae156cd97", "github.com/caddy-dns/powerdns@v1.0.1", "github.com/mholt/caddy-dynamicdns@v0.0.0-20241025234131-7c818ab3fc34"]}
Error: download failed: download failed: HTTP 400: unable to fulfill download request (id=704dc2db-afa9-4ee4-953a-6ba7ffec9803)

The firewall is not blocking DNS resolution of GitHub or IP connectivity to it. I am wondering if anyone else is having this issue?
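For completeness, the connectivity checks I ran from the firewall shell were roughly the following (base-system tools only; the last one is a sketch of a check I should probably add, since add-package downloads from the Caddy build service rather than straight from GitHub):

drill github.com                 # DNS resolution works
nc -zvw 10 github.com 443        # TCP connectivity to GitHub works
nc -zvw 10 caddyserver.com 443   # assumed host for the build service; worth verifying too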
#7
    Quote from: Monviech (Cedrik) on January 07, 2025, 05:52:10 PM
    If Caddy(Main) and Caddy(Sub) are in a trusted network, you can reverse_proxy between these Caddies without using TLS.

    TLS is only important on the way through untrusted networks, e.g. from Client on the Internet to your Caddy(Main).

    Caddy(Main) will hold all certificates and terminate tls. The other Caddies would not need to issue any certificates.

    So you do not really need an ACME Server or a ACME Challenge redirection. Just use Plaintext.
    I understand. Perhaps a better qualification of one of my use cases is in order.
    • Telegraf, InfluxDB and Grafana stacks are employed for telemetry. The latter two are web-based interfaces which require explicit certificate configuration in order to use HTTPS. Telegraf client configurations need the root CA for HTTPS, and host-specific keys if the TLS verification option is enabled.
    • The first-order question I asked myself was: do I really need to protect telemetry information from the Telegraf clients to the InfluxDB server? Not really, is the honest answer. That said, telemetry information reveals plenty about topology, so having the telemetry data streamed with TLS protection does carry benefits.
    • The certs (for the external domain) will now reside on Caddy(master). With the ACME server enabled on Caddy, I can have certbot request the necessary certs and auto-provision them (e.g. during cert renewal) onto the appropriate roles.

    Reference telegraf client information required:
      ## Optional TLS Config for use on HTTP connections.
      # tls_ca = "/etc/telegraf/ca.pem"
      # tls_cert = "/etc/telegraf/cert.pem"
      # tls_key = "/etc/telegraf/key.pem"
      ## Use TLS but skip chain & host verification
      # insecure_skip_verify = false

    Similar information is required for InfluxDB and Grafana.

    Quote from: Monviech (Cedrik) on January 07, 2025, 05:52:10 PM
    Regarding the build:

    https://caddyserver.com/docs/command-line#caddy-add-package

    You can use this to add any package you want from the command line. It will not be persistent though. If the opnsense repo pushes an update at some point you must do it again.
    Thank you for the reference. I understand.

    The crowdsec usage I referred to earlier is as follows:

    --> Global block of the Caddyfile (info generated using cscli bouncers add):
    {
        crowdsec {
            api_url http://<Opnsense_fw>:8080
            api_key <valid_key>
            ticker_interval 15s
        }
    }

    --> In the site block of clients:
    {
        route {
            # crowdsec based filtering
            crowdsec
            ... whatever logic is necessary ...
        }
    }


    Quote from: Monviech (Cedrik) on January 07, 2025, 05:52:10 PM
    Currently the plugin is rather finished and very specific or overly complicated things will most likely not be added to prevent feature creep.
    Thank you for sharing the plugin status and roadmap. I only asked about the crowdsec modules (which can be used like above) because a log-based integration of crowdsec is already implemented in your plugin for OPNsense. Adding the few extra parameters required may improve the hardening posture.
    #8
    Quote from: Monviech (Cedrik) on January 07, 2025, 07:20:58 AM
    You could use the HTTP01 challenge redirection on the Caddy Server of OPNsense to the cascaded other Caddy Servers.

    I see. I was under the impression that the challenges on the cascaded servers would work because they get their certs (and therefore a hierarchy) from the cascade master. Meaning, Caddy on OPNsense has, for a given external site, a valid cert. If the ACME challenge is sent to an internal Caddy instance (hosting an internal site), then wouldn't the request lack the same valid verification chain on the client side? Please grant me follow-ups in case I get stuck.


    Another unrelated question for your guidance:
    The GitHub code for this plugin indicates a custom caddy build (with DNS providers, L4, etc.) is being used (120 standard modules, 80 or so optional modules). Would there be steps on how to add modules of interest, e.g. crowdsec? I have used xcaddy in the past to generate custom images on Linux, but not on FreeBSD. I can update this post with a working caddy config where crowdsec together with L5-7 rate limiters can take L4-7 information (not just L3, which is what I think the present crowdsec integration provides) to block unwanted behaviors.
    #9
    @Cedrik, First off, thank you so very much for creating this (and other) wonderful plugins for OPNsense. You make our lives so convenient.

    I've read through the plugin configuration documentation and through this thread, but an answer didn't jump out. So, I hope you could excuse me for bothering you with a little issue of mine. TL;DR: If the os-caddy plugin acts as an ACME server for a site, then a) what would be the ACME URL? b) will the location of the root CA be /var/db/caddy/data/caddy/certificates/local/<site>?

    Presently, I use Caddy cascaded in my internal network with a made-up domain name, TLS'ed by Caddy. I did so prior to having an externally valid domain name, which I now do. The previously working Caddy setup, at a high level, was:

    Caddy master instance (say on internal1.domain)
    {
        acme_server
        tls internal
    }

    Caddy slave instances (say on internal2.domain)
    https://internal2.domain {
        tls {
            ca https://<internal1.domain>/acme/local/directory
            ca_root <path_to_caddy_master_ca>.pem
        }
    }

    Caddy slave instances (say on internal3.domain)
    https://internal3.domain {
        tls {
            ca https://<internal1.domain>/acme/local/directory
            ca_root <path_to_caddy_master_ca>.pem
        }
    }
    ... and so on


    Now, with os-caddy on OPNsense being available, I'd like to get rid of the Caddy master on <internal1.domain> and utilize the OPNsense Caddy plugin instead. The GUI doesn't provide an option to declare acme_server or tls internal, but inclusion of a *.conf on a per-site basis is allowed. So, I added it like so (reverse proxying the external domain to internal2.domain running Caddy slave1):

    # Reverse Proxy Domain: "104d6421-2b1f-407e-af83-f087021ee1b1"
    valid.domain {
        log {
            output file /var/log/caddy/access/104d6421-2b1f-407e-af83-f087021ee1b1.log {
                roll_keep_for 10d
            }
        }
        acme_server
        tls internal

        @08831d42-ad4c-40dc-a7d5-53af52ac6490_validdomain {
            not client_ip 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
        }
        handle @08831d42-ad4c-40dc-a7d5-53af52ac6490_validdomain {
            abort
        }

        handle {
            reverse_proxy internal2.domain {
                transport http {
                    tls_insecure_skip_verify
                }
                header_up Host {http.reverse_proxy.upstream.hostport}
            }
        }
    }

    Then the Caddy slave instance's (internal2.domain) config is changed to:
    https://internal2.domain {
        tls {
            ca https://<external.domain>/acme/local/directory
            ca_root <path_to_caddy_master_ca copied from /var/db/caddy/data/caddy/certificates/local/internal1.domain>.crt
        }
    }


    Unbound resolves external.domain to OPNsense's address (192.168.0.1) via host overrides. This is so that local requests for the external domain are handled and served locally, without having to go out onto the interwebs.

    Because the OPNsense web GUI is running on 8443, I also tried https://<192.168.0.1>:8443/acme/local/directory, but to no avail. The ACME server's URL appears invalid. I'm unsure how to proceed. Your guidance would be much appreciated.
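    In case it helps with diagnosis, the check I've been using against the directory endpoint is simply the following (a sketch; -k because the certificate chain is exactly what's in question here):

    curl -vk https://external.domain/acme/local/directory
    curl -vk https://192.168.0.1/acme/local/directory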
    #10
    Quote from: Patrick M. Hausen on December 09, 2024, 04:53:42 PM
    There should be nothing in need of change or reconfiguration on the FreeBSD side. Are you sure the ports on the new switch are configured for LACP?


    Thank you @Patrick. S2 has the same port-channel configuration as S1. Switch-side configuration can be found below. Initial post is updated w/ it as well.


    interface port-channel9
      switchport mode trunk
      mtu 9216

    interface Ethernet1/9/1
      switchport mode trunk
      mtu 9216
      channel-group 9 mode active

    interface Ethernet1/9/2
      switchport mode trunk
      mtu 9216
      channel-group 9 mode active


    For Cisco NX-OS series switches, mode active engages LACP protocol negotiation from the switch.
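    When re-checking negotiation after a move like this, the quick status checks I rely on are the standard ones on both ends (nothing vendor- or OPNsense-specific assumed):

    # NX-OS side
    show port-channel summary
    show lacp neighbor
    # FreeBSD side
    ifconfig -v lagg0 | grep -E 'laggport|active ports'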
    #11
    Hello everyone,
    Deciso DEC4040 2x25G SFP28 ports (ice0, ice1) are in a lagg (lagg0) with a switch (S1). Various VLAN subnets, derived interfaces, rules, etc. depend on lagg0. This lagg is the "main" LAN connection. A 100G breakout DAC is used. I need to move the 2 physical connections to a different switch (S2) in the short term, and later, in the medium term, split the connections across two switches (S2, S3) which are in a vPC configuration. For the short-term move, I have tried the following steps, none of which has yielded a successful outcome:

    • Move the 100G cable from S1 to S2. LACP isn't re-established; "no carrier" status is shown in OPNsense.
    • Move the 100G cable from S1 to S2 after "ifconfig lagg0 down". LACP isn't re-established; "no carrier" status is shown in OPNsense.
    • Move the 100G cable from S1 to S2 after bringing down all involved interfaces (lagg0, ice0, ice1); the command sequence is sketched below. LACP isn't re-established; "no carrier" status is shown in OPNsense.
    • Put a new 100G QSFP on S2 + 2x25G SFP28 on the DEC4040 and connect with an MTP-to-LC breakout cable. Repeat combinations of the above steps w.r.t. ifconfig up/down. LACP isn't re-established; "no carrier" status is shown in OPNsense.
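    Roughly, the down/up sequence used in the second and third attempts was (a reconstruction from memory; the exact ordering may have varied):

    ifconfig lagg0 down
    ifconfig ice0 down
    ifconfig ice1 down
    # <move the DAC/breakout from S1 to S2 here>
    ifconfig ice0 up
    ifconfig ice1 up
    ifconfig lagg0 up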

    Start status of lagg0

    ifconfig -v ice0
    ice0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 9000
    options=4800028<VLAN_MTU,JUMBO_MTU,HWSTATS,MEXTPG>
    ether f4:90:ea:00:9f:72
    inet6 fe80::f690:eaff:fe00:a206%ice0 prefixlen 64 scopeid 0x5
    media: Ethernet autoselect (25G-AUI <full-duplex>)
    status: active
    nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
    drivername: ice0
    plugged: SFP/SFP+/SFP28 25GBASE-CR CA-25G-S (Copper pigtail)
    vendor: CISCO-LEONI PN: L45593-D278-B30 SN: LCC2506GADX-CH3 DATE: 2021-02-10
    root@MorikCage:~ # ifconfig -v lagg0
    lagg0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 9000
    description: main_LAGG (opt1)
    options=4800028<VLAN_MTU,JUMBO_MTU,HWSTATS,MEXTPG>
    ether f4:90:ea:00:9f:72
    hwaddr 00:00:00:00:00:00
    inet 192.168.98.1 netmask 0xffffff00 broadcast 192.168.98.255
    inet6 fe80::f690:eaff:fe00:9f72%lagg0 prefixlen 64 scopeid 0xd
    laggproto lacp lagghash l2,l3,l4
    lagg options:
    flags=0<>
    flowid_shift: 16
    lagg statistics:
    active ports: 2
    flapping: 0
    lag id: [(8000,F4-90-EA-00-9F-72,09A8,0000,0000),
    (8000,E8-0A-B9-75-49-87,0001,0000,0000)]
    laggport: ice0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3d<ACTIVITY,AGGREGATION,SYNC,COLLECTING,DISTRIBUTING>
    [(8000,F4-90-EA-00-9F-72,09A8,8000,0005),
    (8000,E8-0A-B9-75-49-87,0001,8000,01C3)]
    laggport: ice1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3d<ACTIVITY,AGGREGATION,SYNC,COLLECTING,DISTRIBUTING>
    [(8000,F4-90-EA-00-9F-72,09A8,8000,0006),
    (8000,E8-0A-B9-75-49-87,0001,8000,01C4)]
    groups: lagg FG_ALL_VLANs FG_CRITICAL_LAN
    media: Ethernet autoselect
    status: active
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
    drivername: lagg0


    S1 (and S2) configs

    interface port-channel9
      switchport mode trunk
      mtu 9216

    interface Ethernet1/9/1
      switchport mode trunk
      mtu 9216
      channel-group 9 mode active

    interface Ethernet1/9/2
      switchport mode trunk
      mtu 9216
      channel-group 9 mode active


    End status in each of the above cases after moving lagg0 was:

    ifconfig -vv lagg0
    lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
    description: main_LAGG (opt1)
    options=4800028<VLAN_MTU,JUMBO_MTU,HWSTATS,MEXTPG>
    ether f4:90:ea:00:9f:72
    hwaddr 00:00:00:00:00:00
    inet 192.168.98.1 netmask 0xffffff00 broadcast 192.168.98.255
    inet6 fe80::f690:eaff:fe00:9f72%lagg0 prefixlen 64 scopeid 0xd
    laggproto lacp lagghash l2,l3,l4
    lagg options:
    flags=0<>
    flowid_shift: 16
    lagg statistics:
    active ports: 0
    flapping: 0
    lag id: [(0000,00-00-00-00-00-00,0000,0000,0000),
    (0000,00-00-00-00-00-00,0000,0000,0000)]
    laggport: ice0 flags=0<> state=41<ACTIVITY,DEFAULTED>
    [(8000,F4-90-EA-00-9F-72,8005,8000,0005),
    (FFFF,00-00-00-00-00-00,0000,FFFF,0000)]
    laggport: ice1 flags=0<> state=41<ACTIVITY,DEFAULTED>
    [(8000,F4-90-EA-00-9F-72,8006,8000,0006),
    (FFFF,00-00-00-00-00-00,0000,FFFF,0000)]
    groups: lagg FG_ALL_VLANs FG_CRITICAL_LAN
    media: Ethernet autoselect
    status: no carrier
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
    drivername: lagg0


    I think I may be missing something basic here. When changing lagg endpoints, does FreeBSD require removal and re-addition of the physical interfaces? Or is there perhaps some other subtlety I may be overlooking?
    #12


    I have a similar setup. I too didn't see a need to execute the 'policy-based routing' section, although I have yet to thoroughly test the failover.

    One question I did have on the following:

    Quote from: dirtyfreebooter on October 15, 2024, 04:13:10 PM
    a question or check of my setup. i recently added a backup internet connection.

    the only other adjustments i had to make were:

    • any port forwards, i had to add both WAN interfaces to the forward definitions.
    • forwarding 80/443 to public for caddy reverse proxy, so had to duplicate that rule on each WAN interface

    In rule configuration there's a gateway option whose tooltip says "Leave as 'default' to use the system routing table. Or choose a gateway to utilize policy based routing." So, in theory, additional rule definitions shouldn't be required?

    #13
    Hello,
    Running ISC DHCPv4 on OPNsense 24.10.1-amd64 (business edition). The KEA DHCPv4 server seems stable enough to consider moving over. For ISC, the OPNsense GUI provides only 2 values for DNS servers per subnet. However, one could use Additional Options --> 6 followed by a hex string to add more than 2 DNS alternatives. I have 4 configured (AdGuard + Pi-hole running on VMs + on a Pi 2, for when server reboots are required).

    c0:a8:64:24:c0:a8:64:22:c0:a8:64:23:c0:a8:64:24

    I'd like to maintain this setup in KEA. The documentation at https://downloads.isc.org/isc/kea/2.6.1/doc/html/arm/dhcp4-srv.html indicates that multiple values are possible. But, because I haven't migrated to KEA yet, I can't tell whether such multiple (more than 2) values in DNS options will be supported. Any guidance would be much appreciated.

    Furthermore, in order to provide Ruckus/Cisco Wi-Fi APs with controller information, I use Option 43 like so:

    type=string "hex 060c3139322e3136382e302e3431"

    But, in the GUI, I'm unable to find a way to provide such additional custom options that survive a reboot.
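    To make the ask concrete, this is roughly what I'd expect the equivalent raw KEA Dhcp4 option-data to look like, going by the ARM (a sketch only; the DNS addresses are the ones decoded from the hex string above, and whether/where the OPNsense GUI exposes this is exactly my question):

    "option-data": [
        {
            "name": "domain-name-servers",
            "code": 6,
            "data": "192.168.100.36, 192.168.100.34, 192.168.100.35, 192.168.100.36"
        },
        {
            "name": "vendor-encapsulated-options",
            "code": 43,
            "csv-format": false,
            "data": "060c3139322e3136382e302e3431"
        }
    ]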
    #14
    Hello all,
    I'm sure many of you have implemented bsmithio's project https://github.com/bsmithio/OPNsense-Dashboard/tree/master for obtaining and rendering OPNSense telemetry data via TIG stack (Telegraf, InfluxDBv2, Grafana).

    A recent change in OPNsense's filterlog data caused the filterlog processing in Graylog to break. I forked the aforementioned project (https://github.com/morikplay/OPNsense-Dashboard) and fixed it for IPv4 packets (+ a few enhancements). Not having enabled IPv6 in my home network, I am unable to complete the changes for others to potentially benefit from. I did look at the link https://github.com/opnsense/ports/blob/master/opnsense/filterlog/files/description.txt that Franco sent in one of the forum threads. But I'd appreciate a few (5-10) sample filterlog traces for IPv6 packets with UDP, TCP and ICMP each, please. This will help me verify the implementation and give back to the open-source community.
    #15
    (updated w/ logs - initial post was done via cellphone)
    Hello experts,
    When on the 23.x business edition, life was great. The 24.x upgrade was to make it better, and to a large degree it does. But I have a strange new problem which I'm unable to solve. Two plugins, crowdsec (port 8080) and Telegraf (port 8086 for InfluxDB), stopped working. Logs indicate a connection timeout for both services. The destination endpoints (on opt6) are fine, and reachable to/from elsewhere both inside and outside the network; just not when the traffic originates from the firewall and is non-ICMP. No rule changes at my end. It results in a timeout.


    traceroute to 192.168.100.21 (192.168.100.21), 64 hops max, 40 byte packets
    1  crowdsec-lapi (192.168.100.21)  0.656 ms  0.416 ms  0.330 ms


    The live log doesn't show packet blocks. It does show "let packets from firewall itself" in the out direction, but nothing in the reverse direction (which should be allowed by default given the stateful nature of flows).


    curl -vi --connect-timeout 10 http://crowdsec-lapi.esco.ghaar:8080
    * Host crowdsec-lapi.esco.ghaar:8080 was resolved.
    * IPv6: (none)
    * IPv4: 192.168.100.21
    *   Trying 192.168.100.21:8080...
    * ipv4 connect timeout after 9999ms, move on!
    * Failed to connect to crowdsec-lapi.esco.ghaar port 8080 after 10006 ms: Timeout was reached
    * Closing connection
    curl: (28) Failed to connect


    interface capture shows:


    Servers
    vlan0.100 2024-06-28
    07:37:50.442037 f4:90:ea:00:9f:72 00:50:56:82:d8:b4 ethertype IPv4 (0x0800), length 74: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.1.31315 > 192.168.100.21.8080: Flags [S], cksum 0x8070 (correct), seq 445912424, win 65535, options [mss 8960,nop,wscale 12,sackOK,TS val 1292126707 ecr 0], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:50.442400 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xe967 (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838080763 ecr 1292126707,nop,wscale 9], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:51.442697 f4:90:ea:00:9f:72 00:50:56:82:d8:b4 ethertype IPv4 (0x0800), length 74: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.1.31315 > 192.168.100.21.8080: Flags [S], cksum 0x7c87 (correct), seq 445912424, win 65535, options [mss 8960,nop,wscale 12,sackOK,TS val 1292127708 ecr 0], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:51.443231 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xe57e (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838081764 ecr 1292126707,nop,wscale 9], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:52.462713 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xe182 (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838082784 ecr 1292126707,nop,wscale 9], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:53.642675 f4:90:ea:00:9f:72 00:50:56:82:d8:b4 ethertype IPv4 (0x0800), length 74: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.1.31315 > 192.168.100.21.8080: Flags [S], cksum 0x73ef (correct), seq 445912424, win 65535, options [mss 8960,nop,wscale 12,sackOK,TS val 1292129908 ecr 0], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:53.643161 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xdce6 (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838083964 ecr 1292126707,nop,wscale 9], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:55.662758 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xd502 (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838085984 ecr 1292126707,nop,wscale 9], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:57.842474 f4:90:ea:00:9f:72 00:50:56:82:d8:b4 ethertype IPv4 (0x0800), length 74: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.1.31315 > 192.168.100.21.8080: Flags [S], cksum 0x6387 (correct), seq 445912424, win 65535, options [mss 8960,nop,wscale 12,sackOK,TS val 1292134108 ecr 0], length 0
    Servers
    vlan0.100 2024-06-28
    07:37:57.842885 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xcc7e (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838088164 ecr 1292126707,nop,wscale 9], length 0
    Servers
    vlan0.100 2024-06-28
    07:38:01.966765 00:50:56:82:d8:b4 f4:90:ea:00:9f:72 ethertype IPv4 (0x0800), length 74: (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
        192.168.100.21.8080 > 192.168.100.1.31315: Flags [S.], cksum 0xbc62 (correct), seq 3873949677, ack 445912425, win 43440, options [mss 1460,sackOK,TS val 3838092288 ecr 1292126707,nop,wscale 9], length 0


      The repeating seq#s indicate (to me) that .100.1 (OPNsense) is:

      • establishing a socket open to .100.21:8080 (the server in question)
      • the server responds with SYN-ACK
      • but OPNsense doesn't respond with an ACK

      The third point would mean OPNsense is eating it up? But why?

      I've tried enabling various combinations of explicit rules to allow "opt6 address" --> "server net + ports", to no avail. On disabling the entire firewall, the first issuance of the curl command succeeds, in that I get a 401 Unauthorized. But immediately following it, subsequent connection attempts end up in a black hole.

      How might I go about troubleshooting this behavior?
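      While waiting for ideas, these are the state-table checks I plan to run while reproducing the timeout (plain pfctl/tcpdump invocations; nothing OPNsense-specific assumed):

      pfctl -ss | grep 192.168.100.21           # is there a state entry for the :8080 flow?
      pfctl -si                                 # watch for state-limit or memory counters incrementing
      tcpdump -ni pflog0 host 192.168.100.21    # catch drops that hit a logging rule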

      Edit#1: What is strange(r) indeed is that this behavior is occurring on every subnet, as long as a) the traffic originates from OPNsense and b) the initial few connection-establishment attempts succeed; subsequent attempts then time out.


    #nc -4znvw 10 192.168.0.58 443
    Connection to 192.168.0.58 443 port [tcp/*] succeeded!
    #nc -4znvw 10 192.168.0.58 443
    nc: connect to 192.168.0.58 port 443 (tcp) failed: Operation timed out
    # nc -4znvw 10 192.168.0.58 443
    nc: connect to 192.168.0.58 port 443 (tcp) failed: Operation timed out