Messages - ednt

#1
Today I updated our Master to 25.

First critical point was an update cycle: 24.7 -> 25.1 -> 24.7

I already had this behaviour on another OPNsense and fixed it:

If you run pkg update at the console, you will see a segmentation fault.
Then you need:
pkg bootstrap -f
After this the update works as expected.
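
For reference, the full recovery sequence at the console (pkg bootstrap and pkg update are stock pkg commands):

    # re-bootstrap the package manager itself, then retry the update
    pkg bootstrap -f
    pkg update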

Ok, back to the problem:
In the GUI one OpenVPN server is marked red and it is not possible to start it.
In the log you can read: '... device is busy'.

The solution was to kill the still-running openvpn process on the CLI.
Then it was possible to start the server in the GUI.
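
For reference, roughly what this looks like on the CLI (pgrep and kill are stock FreeBSD; use whatever PID pgrep reports):

    # find the stale openvpn process that still holds the tun device
    pgrep -fl openvpn
    # kill it by PID, then start the server from the GUI again
    kill <PID>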
#2
Thank you!

HAProxy 3.0.8 now starts via the GUI.
#3
With an update to 25.1.2, HAProxy is still not starting, neither automatically nor via the GUI.

I need a solution, or I will have to go back to 24.7.
#4
Hi,

I'm not sure if it is a 25.1.1 problem, but with 24.7.12 this problem does not occur.

I updated the 'slave' to 25.1.1 and switched over from the 'master' (24.7.11; before that I synchronised the config).
Then I got reports that an internal web service was no longer reachable.

HAProxy on the 'slave' was not running.
I tried to start it via the web GUI -> no success.
But there were also no warnings with the current date in the web GUI log.

Strange thing:
If I start HAProxy via the CLI:
/usr/local/sbin/haproxy -f /usr/local/etc/haproxy.conf -d
It starts without any problems.
The state in the GUI is then green and I can stop it via the GUI, but I still cannot start it again via the GUI.
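
For comparison, the GUI goes through configd. A sketch of the equivalent CLI check, assuming the os-haproxy plugin registers the usual configctl actions (not verified):

    # ask configd for the service state and try a start, as the GUI would
    configctl haproxy status
    configctl haproxy start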

The only thing I can see in the GUI log when I try to start it via the GUI are the results of the health checks.
So it tries to start.
But ... these entries have the level 'NOTICE', and in the end the service is not running.
Quote:
2025-02-19T09:23:01   Notice   haproxy   Health check for backup server isp-web-dmz-1_crm_backend/isp-web-dmz-1m succeeded, reason: Layer7 check passed, code: 200, check duration: 189ms, status: 3/3 UP.
2025-02-19T09:23:00   Notice   haproxy   Health check for server isp-web-dmz-1_crm_backend/isp-web-dmz-1 succeeded, reason: Layer7 check passed, code: 200, check duration: 98ms, status: 3/3 UP.   
2025-02-19T09:22:59   Notice   haproxy   Health check for backup server isp-web-int-1_backend/isp-web-int-1m succeeded, reason: Layer7 check passed, code: 200, check duration: 89ms, status: 3/3 UP.   
2025-02-19T09:22:59   Notice   haproxy   Health check for server isp-web-int-1_backend/isp-web-int-1 succeeded, reason: Layer7 check passed, code: 200, check duration: 83ms, status: 3/3 UP.   
2025-02-19T09:22:58   Notice   haproxy   Health check for backup server isp-web-dmz-1_backend/isp-web-dmz-1m succeeded, reason: Layer7 check passed, code: 200, check duration: 226ms, status: 3/3 UP.   
2025-02-19T09:22:57   Notice   haproxy   Health check for server isp-web-dmz-1_backend/isp-web-dmz-1 succeeded, reason: Layer7 check passed, code: 200, check duration: 103ms, status: 3/3 UP.

If I start it via the CLI, I get the health checks with level 'WARNING'.

Quote:
root@OPNsenseSlave:~ # /usr/local/sbin/haproxy -f /usr/local/etc/haproxy.conf -d
Note: setting global.maxconn to 353117.
Available polling systems :
    kqueue : pref=300,  test result OK
      poll : pref=200,  test result OK
    select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.

Available filters :
        [BWLIM] bwlim-in
        [BWLIM] bwlim-out
        [CACHE] cache
        [COMP] compression
        [FCGI] fcgi-app
        [SPOE] spoe
        [TRACE] trace
Using kqueue() as the polling mechanism.
[WARNING]  (21747) : Health check for server isp-web-dmz-1_backend/isp-web-dmz-1 succeeded, reason: Layer7 check passed, code: 200, check duration: 103ms, status: 3/3 UP.
[WARNING]  (21747) : Health check for backup server isp-web-dmz-1_backend/isp-web-dmz-1m succeeded, reason: Layer7 check passed, code: 200, check duration: 226ms, status: 3/3 UP.
[WARNING]  (21747) : Health check for server isp-web-int-1_backend/isp-web-int-1 succeeded, reason: Layer7 check passed, code: 200, check duration: 83ms, status: 3/3 UP.
[WARNING]  (21747) : Health check for backup server isp-web-int-1_backend/isp-web-int-1m succeeded, reason: Layer7 check passed, code: 200, check duration: 89ms, status: 3/3 UP.
[WARNING]  (21747) : Health check for server isp-web-dmz-1_crm_backend/isp-web-dmz-1 succeeded, reason: Layer7 check passed, code: 200, check duration: 98ms, status: 3/3 UP.
[WARNING]  (21747) : Health check for backup server isp-web-dmz-1_crm_backend/isp-web-dmz-1m succeeded, reason: Layer7 check passed, code: 200, check duration: 189ms, status: 3/3 UP.
0000000c:GLOBAL.accept(0009)=000e from [unix:1] ALPN=<none>
0000000c:GLOBAL.clicls[000e:ffff]
0000000c:GLOBAL.srvcls[000e:ffff]
0000000c:GLOBAL.closed[000e:ffff]

What is happening here?

Any ideas?
#5
In my opinion it is a bug.

The functionality is not as documented.

Your hint provides a workaround. (Thanks for that.)
#6
Hi,

I added a rule where I want to block external access to the 'local' WAN addresses of a CARP system.

It looks like:

Block IPv4 * ! WAN net, LAN net * WAN_LocalAdresses * * *

I thought that access from WAN net and LAN net would then still be allowed.
But a ping in the shell from the 'master' to the 'slave' WAN address is then not possible.

It does not work with multiple selected nets.

I had to remove the LAN net to make it work.

Is there a bug in the logic?
Isn't the complete result inverted?
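
My suspicion, as a sketch only (assuming the GUI turns a negated multi-value source into a pf list, which I have not verified; the two nets are hypothetical placeholders): in pf a list expands into several rules, so a negated list is OR-ed and no longer means 'neither net':

    # hypothetical pf.conf expansion of source '! { WAN net, LAN net }'
    block in from ! 192.0.2.0/24 to <WAN_LocalAdresses>  # matches everything outside WAN net
    block in from ! 10.0.0.0/24  to <WAN_LocalAdresses>  # matches everything outside LAN net
    # every source lies outside at least one of the two nets,
    # so together the expanded rules block everything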

Best regards
#7
I have now included both, but it still does not work.
#8
I just stumbled over the same problem.

Unfortunately an additional !github.com shows no effect.

OPNsense 24.7.5

Does anyone have an idea about this?
#9
Yes, but pressing a button is not the solution.

Maybe you don't have to change anything: a working and running configuration.

The master refreshes the certs.
The old ones, still loaded on the backup, are outdated.

Now it happens: CARP switches over, and every HA setup with offloading runs into a cert error.

There should be a 'schedule' in System where you can include a sync job that also restarts services.
(in my opinion)

Or an addition to the ACME job:
sync the certs and restart all services that can be affected when one of the certs is renewed.
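
As a stopgap, a scheduled restart on the backup would at least pick up the synced certs. A minimal sketch (configctl is the OPNsense configd client; I assume the haproxy plugin exposes a restart action):

    # on the 'slave', e.g. from a nightly cron job:
    # restart HAProxy so it loads the certs synced via XMLRPC
    /usr/local/sbin/configctl haproxy restart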
#10
Hi,

today I did an update of our 2 OPNsense firewalls.
Updating the 'slave': no problem.
While the 'master' was entering Persistent CARP Maintenance Mode, colleagues noted that some webpages reported an outdated cert.

The certs are synchronized, and the latest versions were available on the 'slave'.
But HAProxy on the 'slave' never restarted to activate the new certs.
I had to restart HAProxy on the 'slave' manually to activate the latest synchronized certs.

Is there a way to avoid this problem?

I only update the ACME certs on the 'master'.

Best regards,

Bernd
#11
Hi,

I tried this with monit, but it is not possible that way.

I need to know when the address of the WAN interface changes.
Monit only checks at intervals, so it cannot catch the change event itself.

There have been several similar questions here in the forum, but no solution.

Any hint?
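
The closest I can think of is something event-driven instead of polling. A sketch, assuming OPNsense runs executables from the rc.syshook.d 'monitor' directory on interface/gateway events (not verified), with a hypothetical WAN interface name:

    #!/bin/sh
    # hypothetical hook, e.g. /usr/local/etc/rc.syshook.d/monitor/50-wanchange
    IF=igb0                                # assumption: WAN interface name
    CACHE=/var/tmp/wan_ip.last
    CUR=$(ifconfig ${IF} | awk '/inet /{print $2; exit}')
    LAST=$(cat ${CACHE} 2>/dev/null)
    if [ "${CUR}" != "${LAST}" ]; then
        echo "${CUR}" > ${CACHE}
        logger -t wanchange "WAN address changed to ${CUR}"
        # ... trigger whatever depends on the new address here
    fi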

Best regards.
#12
22.7 Legacy Series / Re: 22.7.4 High Availability
November 18, 2022, 06:38:39 PM
You saved my day!

I had to add the PFSYNC interface, which is used for High Availability, to the allowed interfaces for the web GUI.

But we have used HA sync for ages in the same configuration with no problems.

Anyway,

thank you for this hint.
#13
22.7 Legacy Series / [Solved] 22.7.4 High Availability
November 18, 2022, 02:18:17 PM
I wanted to update our Master/Slave combination from 22.7.4 to 22.7.8 and, as always, I tried a
synchronisation from master to slave.

But ...

Quote:
The backup firewall is not accessible or not configured.

No idea what's wrong.
I can ping the slave on the PFSYNC interface, and I also see traffic with tcpdump on the slave.
But I see no listener on the slave's network interface.
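
For reference, roughly how one can check for the listener (sockstat is stock FreeBSD):

    # on the slave: is the web GUI listening on the PFSYNC address?
    sockstat -4 -l | grep -E ':(80|443)'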

Then I updated the slave to 22.7.8, but it is still the same.

How can I check or start the service on the slave side?
#14
Ok, found it:

an alias called 'Private_Networks'
#15
Ok, found something in /usr/local/opnsense/scripts/filter/update_bogons.sh:

    # private and pseudo-private networks will be excluded
    # as they are being operated by a separate GUI option
    egrep -v "^100.64.0.0/10|^192.168.0.0/16|^172.16.0.0/12|^10.0.0.0/8" ${WORKDIR}/fullbogons-ipv4.txt > ${DESTDIR}/bogons

So they are explicitly excluded from the bogon file.
But what is the separate GUI option?