Recent posts

#1
26.1, 26.4 Series / Re: All Port Forwards Broken w...
Last post by meyergru - Today at 09:08:57 AM
Are you sure that you are talking about a "modem", not a "router"? This sounds more like a router-behind-router setup where OpnSense does not control its real WAN IP address.
#2
26.1, 26.4 Series / Re: 26.1.6_2 - All traffic blo...
Last post by thormir84 - Today at 08:54:22 AM
Quote from: passeri on Today at 12:38:30 AM
Please attach screenshots.

Links are not attachments.

My reasons for the request are thread longevity and user security.

By the way, from which version were you upgrading?

I apologize, I have corrected it; I was trying not to exceed the 256 KB limit.

The update was from the immediately preceding version, 26.1.6; 26.1.6_2 is a hotfix released on 23-04.

EDIT:

Since the firewall is a VM on Proxmox, I restored it from a previous backup (specifically, the backup dates back to the night of 25-04, before I installed the hotfix). Once it restarted, everything worked again as before.
At this point, I have the impression that the problem lies in the hotfix.
#3
Quote from: tuzzemets on January 23, 2026, 09:28:03 AM
4) Check that there are no errors in VPN > Proxy Suite (the log will be in the second window).
After starting (a configuration with my keys), errors keep pouring in:
+0300 2026-04-26 09:41:53 ERROR [1892884632 1.27s] connection: open connection to 149.154.175.60:443 using outbound/direct[direct]: dial tcp 149.154.175.60:443: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [2989468863 4.26s] connection: open connection to 149.154.175.53:443 using outbound/direct[direct]: dial tcp 149.154.175.53:443: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [1148687673 3.24s] connection: open connection to 149.154.175.60:80 using outbound/direct[direct]: dial tcp 149.154.175.60:80: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [3362697587 1.27s] connection: open connection to 149.154.175.53:443 using outbound/direct[direct]: dial tcp 149.154.175.53:443: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [1295056995 1.25s] connection: open connection to 149.154.175.60:80 using outbound/direct[direct]: dial tcp 149.154.175.60:80: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [1747394670 1.25s] connection: open connection to 149.154.175.53:80 using outbound/direct[direct]: dial tcp 149.154.175.53:80: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [3444713798 3.26s] connection: open connection to 149.154.175.53:443 using outbound/direct[direct]: dial tcp 149.154.175.53:443: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [3005752188 4.25s] connection: open connection to 149.154.175.60:80 using outbound/direct[direct]: dial tcp 149.154.175.60:80: operation was canceled
+0300 2026-04-26 09:41:53 ERROR [1817526222 4.26s] connection: open connection to 149.154.175.60:443 using outbound/direct[direct]: dial tcp 149.154.175.60:443: operation was canceled

UPD: this was with the configuration from this post:
Quote from: scorpid on February 15, 2026, 03:40:35 PM
{
  "log": {
    "disabled": false,
    "level": "error",// debug > info > warn > error > fatal После успешного тестирования измените debug на error, чтобы уменьшить объем хранилища журнала.
    "timestamp": true
  },
  "experimental": {
      "cache_file": {
            "enabled": true,
            "path": "/usr/local/etc/sing-box/cache.db"
        }
    },
  "inbounds": [
    {
      "mtu": 9000,
      "type": "tun",
      "tag": "tun-in",
      "auto_route": true,
      "strict_route": true,
      "interface_name": "tun_3000",
      "address": ["172.19.0.0/30"],
      "endpoint_independent_nat": false,
      "stack": "system" //system > mixed > gvisor
    }
  ],
  "outbounds": [
    {
      "tag": "direct",
      "type": "direct"
    },
    {
      "type": "vless",
      "tag": "reality-outFX",
      "server": "0.0.0.0",
      "server_port": 443,
      "uuid": "000000000000000000000000000000",
      "packet_encoding": "xudp",
      "flow": "xtls-rprx-vision",
      "tls": {
            "enabled": true,
            "insecure": false,
            "server_name": "00000.nl",
            "utls": {
                    "enabled": true,
                    "fingerprint": "chrome"
            },
            "reality": {
                    "enabled": true,
                    "public_key": "00000000000000000000000",
                    "short_id": "0000000000"
            }
        }
    }
  ],
  "route": {
     "default_domain_resolver": {
      "server": "aghDNS",
      "rewrite_ttl": 60
      },
    "rules": [
      {
        "action": "sniff"
      },
      {
        "action": "hijack-dns", # в случае если будет добавлен tun или другой inbound
        "protocol": "dns"

      },
      {
        "action": "route",
        "ip_is_private": true,
        "outbound": "direct"
      },
      {
        "action": "route",
        "domain_suffix": [
            "reshutka.ru"
                ],
        "outbound": "direct"
      },
      {
        "action": "route",
        "rule_set": [
          "antizapret"
                ],
        "outbound": "reality-outFX"
      },
      {
        "action": "route",
        "domain_suffix": [
            ".youtube.com",
            ".googlevideo.com",
            ".nhacmp3youtube.com",
            ".1e100.net",
            ".ytimg.com",
            ".youtu.be",
            ".gvt1.com",
            ".googleusercontent.com",
            ".google.com",
            ".googleapis.com",
            ".gstatic.com",
            ".intel.com",
            ".caddy.community",
            ".caddyserver.com",
            ".gl-inet.com",
            ".ghcr.io",
            ".lscr.io",
            ".ntc.party",
            ".ghostbsd.org",
            ".pushover.net",
            ".gitlab.com",
            ".github.com",
            ".openbittorrent.com",
            ".desync.com",
            ".opentrackr.org",
            ".coppersurfer.tk",
            ".clamav.net",
            ".reddit.com",
            ".homenetworkguy.com",
            ".mmonit.com"
                ],
        "outbound": "reality-outFX"
      },
      {
        "action": "route",
        "domain_keyword": [
            "caddy",
            "caddyserver",
            "github",
            "4pda"
                ],
        "outbound": "reality-outFX"
      },
      {
        "action": "route",
        "ip_cidr": [
            "3.76.113.134",
            "3.5.6.213",
            "5.9.243.187",
            "5.100.80.204",
            "9.9.9.10",
            "23.50.131.142",
            "23.73.2.158",
            "31.13.72.52",
            "46.8.236.143",
            "57.144.45.32",
            "62.183.19.177",
            "82.209.105.218",
            "84.42.76.104",
            "85.140.0.237",
            "94.140.14.140",
            "94.140.14.141",
            "149.112.112.10",
            "157.240.0.60",
            "157.240.199.60",
            "157.240.31.60",
            "157.240.205.60",
            "157.240.209.60",
            "159.138.202.173",
            "163.70.158.60",
            "163.70.159.60",
            "172.233.41.171",
            "212.35.165.37"
            ],
        "outbound": "reality-outFX"
      },
      {
        "action": "reject",
        "protocol": "quic"
      }
      ],
    "rule_set": [
      {
        "tag": "antizapret",
        "type": "remote",
        "url": "https://github.com/savely-krasovsky/antizapret-sing-box/releases/latest/download/antizapret.srs",
        "format": "binary",
        "download_detour": "reality-outFX"
      }
     ],
    "auto_detect_interface": false,
    "final": "direct"
  },
  "dns": {
    "servers": [
      {
        "type": "udp",
        "tag": "aghDNS",
        "server": "127.0.0.1",
        "server_port": 53,
      }
    ],
    "strategy": "ipv4_only",
    "disable_cache": true,
    "disable_expire": true,
    "independent_cache": false,
    "cache_capacity": 0,
    "reverse_mapping": false
  }
}

Here is my config; in the "tag": "antizapret" section, the RKN blocklist is downloaded.
With the configuration from the first post everything is OK.
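One thing worth noting about the quoted config: it contains `//` comments and a trailing comma after `"server_port": 53`, both of which a strict JSON parser rejects (whether sing-box's own parser tolerates them may depend on the version). A minimal sketch of a pre-flight check that strips those before validating — the `strip_jsonc` helper is a hypothetical, naive illustration (it would also mangle `//` inside string values such as URLs, so treat it as a sanity check, not a real preprocessor):

```python
import json
import re

def strip_jsonc(text: str) -> str:
    """Remove //-style comments and trailing commas so a strict JSON
    parser accepts a hand-annotated config. Naive: does not protect
    '//' occurring inside string values (e.g. URLs)."""
    no_comments = re.sub(r"//[^\n]*", "", text)
    no_trailing = re.sub(r",(\s*[}\]])", r"\1", no_comments)
    return no_trailing

# Tiny fragment mirroring the problem spots in the quoted config.
snippet = '''{
  "dns": {
    "servers": [
      {
        "type": "udp",       // comment that strict JSON rejects
        "server_port": 53,   // the trailing comma above also rejects
      }
    ]
  }
}'''

try:
    json.loads(snippet)          # fails: comments + trailing comma
    ok_as_is = True
except json.JSONDecodeError:
    ok_as_is = False

cleaned = json.loads(strip_jsonc(snippet))
print(ok_as_is, cleaned["dns"]["servers"][0]["server_port"])
```

For a real sing-box install, running its own built-in config check against the actual file is the more reliable test than any hand-rolled stripper.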
#4
26.1, 26.4 Series / Re: 26.1.6_2 - All traffic blo...
Last post by thormir84 - Today at 08:43:57 AM
Quote from: pfry on Today at 03:15:49 AM
That's a very odd pair of rules. They may be outside of my experience, as I don't use any static NAT. As is, they do not appear to match the marked flows in your logs (source and destination ports and destination address do not match). For more info (e.g. "reason"), hit the "i" to the right of the log entries.

The rules were created to route traffic coming from the outside on ports 80 and 443 to NPM (Nginx Proxy Manager). NPM, in turn, forwards each request to the required Docker container based on the custom domain being requested (for example: https://bitwarden.my_domain.xxx or https://paperless.my_domain.xxx).

The IP 192.168.84.2 is the IP of the WAN port. The router's IP is 192.168.84.1, and it is set to expose the firewall without any filtering (so that traffic management is handled entirely by the firewall).

The local network is 172.22.8.0/24. The IP of the LXC with Docker is 172.22.8.4.

In fact, the rules route all traffic arriving on ports 80 and 443 at 192.168.84.2 to 172.22.8.4 on ports 8443 and 8484. Within NPM, these two ports are translated to:
8443 -> 443
8484 -> 80

Schematically:

http://service.my_domain.xxx = public IP -> router -> WAN -> rules -> LAN -> NPM -> Docker
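The chain above can be sketched as a simple port map plus a reachability probe. The addresses and ports are the ones from this post (192.168.84.2 WAN, 172.22.8.4 NPM host); `tcp_reachable` is just a generic TCP connect test, useful for checking each hop of the chain from the LAN side while debugging a broken forward:

```python
import socket

# External-port -> (internal host, internal port) mapping described above:
# traffic hitting the WAN IP 192.168.84.2 is forwarded to the NPM host.
FORWARDS = {
    443: ("172.22.8.4", 8443),  # HTTPS -> NPM, which serves it as 443
    80:  ("172.22.8.4", 8484),  # HTTP  -> NPM, which serves it as 80
}

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. from a LAN machine: does the NPM listener itself answer?
# tcp_reachable(*FORWARDS[443])
```

If the internal hop (172.22.8.4:8443) answers but the WAN-side hop (192.168.84.2:443) does not, the problem is in the firewall's NAT rules rather than in NPM or the containers.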
#5
26.1, 26.4 Series / Re: This makes me want to cry!...
Last post by lmoore - Today at 06:32:36 AM
Quote from: Monviech (Cedrik) on April 25, 2026, 05:29:52 PM
Please:

root@opn-dev-02:~ # sysctl kern.boottime

Also, after booting:

sysctl kern.msgbuf
#6
26.1, 26.4 Series / Re: All Port Forwards Broken w...
Last post by LostSpark - Today at 04:24:13 AM
So, this is wild... I've not experienced this before, and perhaps it has something to do with the update, or it was just misfortune: my WAN (fiber) IP was wrong inside OPNsense. I didn't catch it until after many hours of frustration, but it was simply not the same IP I got when I typed "what is my IP" into Google. Manually restarting my modem fixed the problem...

Normally this is something most people do in the early stages, but my modem is in a crawl space under my house and I hadn't had this issue before.
#7
26.1, 26.4 Series / Re: 26.1.6_2 - All traffic blo...
Last post by pfry - Today at 03:15:49 AM
That's a very odd pair of rules. They may be outside of my experience, as I don't use any static NAT. As is, they do not appear to match the marked flows in your logs (source and destination ports and destination address do not match). For more info (e.g. "reason"), hit the "i" to the right of the log entries.
#8
26.1, 26.4 Series / Re: All Port Forwards Not Work...
Last post by LostSpark - Today at 02:50:29 AM
After reviewing the changelog, I can see a lot of NAT changes were made in this latest update. Something about these changes has broken the entire way I had things set up. This is very likely the biggest clue to follow here, I'm thinking.
#9
26.1, 26.4 Series / All Port Forwards Broken with ...
Last post by LostSpark - Today at 02:48:37 AM
I admit I may be doing something wrong here, and I'm hoping I can get some pointers from some kind onlookers.

I had multiple ports forwarded correctly just two days ago, but I've updated twice since then and realized that since then all of my port forwards are completely broken.

I was previously already upgraded to 26.1, so this was likely 26.1.5 or 26.1.6 that did me in. Everything worked on 26.1.4.

If it's relevant, I use two gateway groups with failover from fiber to cellular (works great, and sends me alerts through monit). This is the only non-standard thing I can think of here...

I am still using the "old" rules, but I went ahead and deleted every rule for one of my game server ports and started fresh with a new Destination NAT forward (with "pass" enabled). I also tried a manual setup where I created an old-style rule and then a new-style rule, and absolutely nothing let the traffic through...

The only other thing I can think of is that somehow Crowdsec is blocking this. I just don't know why it would suddenly do that... so I figured I'd ask here and see if anyone else has had a similar problem, or could shed light on this issue that's plagued me for 6-7 hours now.

Thank you in advance to anyone who might be able to help!
#10
26.1, 26.4 Series / Re: Zenarmor + Azure VM + hn1:...
Last post by sopex8260 - Today at 01:15:12 AM
This is an Azure limitation; you need Azure Accelerated Networking. Otherwise it doesn't respect your settings...