Messages - Styx13

#1
Sorry for the late reply.

I am not sure what to say, other than that I had not needed to update the ddclient configuration in a while (I only recently moved to porkbun).

Updating the configuration via the GUI in the past (probably 6 months ago or so) was not a problem, so I thought maybe a recent change in OPNsense introduced this issue?

Nobody else has this issue in 25.1.5_5 or more recent?
For me it happens every single time I try to update the ddclient configuration via the GUI.
#2
Hello,

I found out today, after editing the ddclient configuration via the GUI, that some characters were being added at the end of some lines:

syslog=yes                  # log update msgs to syslog
pid=/var/run/ddclient.pid   # record PID in file.
verbose=yes

use=cmd, cmd="/usr/local/opnsense/scripts/ddclient/checkip -t 0 -s freedns --timeout 10", \
protocol=porkbun, \
apikey=my_secret_api_key, \
secretapikey=my_secret_secret_key, \
login=my_secret_api_key, \
password=my_secret_secret_key \
my.fully.qualified.hostname

Those ", \" are effectively making the configuration invalid/confusing to ddclient and causing errors like:

WARNING: Could not determine an IP for my.fully.qualified.hostname
WARNING: my.fully.qualified.hostname: unable to determine IP address with strategy use=cmd
WARNING: found neither IPv4 nor IPv6 address

After manually removing the extra ", \" and " \" from the configuration file and restarting ddclient, everything works fine again!
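As a follow-up, the check I did by eye can be scripted. A minimal sketch (nothing OPNsense-specific; it only flags suspicious lines, since a trailing `\` can also be a legitimate continuation, so judge each hit yourself):

```python
import re

# Lines ending in ", \" or " \" -- the artifact the GUI kept appending.
STRAY = re.compile(r"(,\s*\\|\s\\)\s*$")

def find_stray_continuations(text):
    """Return (line_number, line) pairs for lines ending with a stray continuation."""
    return [(i + 1, line)
            for i, line in enumerate(text.splitlines())
            if STRAY.search(line)]
```

Running it over a config like the one above flags the `password=... \` line and every line ending in `, \`.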

OPNsense version: OPNsense 25.1.5_5-amd64

PS: I know the documentation recommends moving to the "native" client; however, I use porkbun for resolution of some of my domains and the native client does not support it.
#3
Quote from: newsense on April 16, 2025, 05:11:50 AM
start playing with Dnsmasq DNS & DHCP - which is where probably most people will end up when ISC goes away.
[...]
Feature wise there's a ton more stuff you'll be able to do with Dnsmasq DHCP compared to what was possible in KEA - right out of the gate.
[...]
it's probably best to cut your losses and start preparing for better times than hoping magic will suddenly happen overnight in KEA land.

Thanks for your reply; a quick question though: will the new dnsmasq DHCP implementation support HA out of the box?
Both ISC and KEA support HA today, and according to KEA's docs, KEA's HA implementation "should be" much better.

So if dnsmasq DHCP implementation supports HA, then I will very likely go that route, but if it does not, what's the plan?
#4
Hello,

Similar to a few other posts I could find here or on the KEA DHCP mailing list, I do sometimes get the HA_LEASE_UPDATE_CONFLICT message in the KEA DHCP logs.

Eventually, this leads to KEA DHCP terminating HA (based on max-rejected-lease-updates, default 10).
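For anyone who just wants more headroom before HA terminates: the threshold is a parameter of the ha hook. This is a fragment only, not a full config (a real one also needs the peers list etc.); I have not checked whether the OPNsense GUI exposes it, and the library path is what I would expect on a typical FreeBSD install, so verify yours:

```json
{
  "Dhcp4": {
    "hooks-libraries": [ {
      "library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
      "parameters": {
        "high-availability": [ {
          "this-server-name": "OPNsense-primary",
          "mode": "hot-standby",
          "max-rejected-lease-updates": 50
        } ]
      }
    } ]
  }
}
```

Of course this only raises the limit; it does not address why the conflicts happen in the first place.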

I noticed that it usually happens after the primary node is rebooted (after an update, for example) or when I "Enter Persistent CARP Maintenance Mode" on the primary node and later exit it.

As I was looking at the KEA DHCP configuration files for a clue as to why this may happen, I noticed that the "kea-dhcp4.conf" file had all its slashes ('/') escaped => '\/'.
I wonder what the reason for that is? From what I read, I thought the only thing that needs escaping in KEA configuration files is the comma (',').
Also, looking around at KEA configuration file examples, I did not see anyone else escaping the slashes.
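As a side note to my own question: `\/` is actually a legal JSON escape (the spec allows escaping the solidus), and it decodes back to a plain `/`, so the escaped slashes should be cosmetic rather than the cause of the conflicts. A quick check (the key and path are made up for illustration):

```python
import json

# JSON allows the solidus to be escaped: "\/" decodes to a plain "/".
raw = '{"socket-name": "\\/var\\/run\\/kea4.sock"}'  # illustrative key and path
doc = json.loads(raw)
assert doc["socket-name"] == "/var/run/kea4.sock"
```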

Other than that, I did not see anything in particular that could explain the issue I am facing.

Below are the logs on the primary (hot) when the issue happens:
2025-04-15T20:51:47-04:00 Warning kea-dhcp4 WARN [kea-dhcp4.ha-hooks.0x395ec216600] HA_LEASE_UPDATE_CONFLICT OPNsense-primary: lease update [hwtype=1 xx:xx:xx:xx:2b:25], cid=[no info], tid=0x5418305d sent to OPNsense-backup (http://10.99.0.252:8001/) returned conflict status code: ResourceBusy: IP address:10.90.0.54 could not be updated. (error code 4)
and the corresponding log on the backup (standby):
2025-04-15T20:51:47-04:00 Warning kea-dhcp4 WARN [kea-dhcp4.lease-cmds-hooks.0x38dd92616d00] LEASE_CMDS_UPDATE4_CONFLICT lease4-update command failed due to conflict (parameters: { "expire": 1744766507, "force-create": true, "fqdn-fwd": false, "fqdn-rev": false, "hostname": "REDACTED", "hw-address": "xx:xx:xx:xx:2b:25", "ip-address": "10.90.0.54", "origin": "ha-partner", "state": 0, "subnet-id": 6, "valid-lft": 1800 }, reason: ResourceBusy: IP address:10.90.0.54 could not be updated.)
I redacted part of the MAC address and the hostname.

Eventually, after enough of those warnings, it leads to termination:
On the primary (hot):
2025-04-15T20:51:47-04:00 Error kea-dhcp4 ERROR [kea-dhcp4.ha-hooks.0x395ec216600] HA_TERMINATED HA OPNsense-primary: service terminated due to an unrecoverable condition. Check previous error message(s), address the problem and restart!
2025-04-15T20:51:47-04:00 Error kea-dhcp4 ERROR [kea-dhcp4.ha-hooks.0x395ec216600] HA_LEASE_UPDATE_REJECTS_CAUSED_TERMINATION OPNsense-primary: too many rejected lease updates cause the HA service to terminate

and on the backup (standby):
2025-04-15T20:51:51-04:00 Error kea-dhcp4 ERROR [kea-dhcp4.ha-hooks.0x38dd92615f00] HA_TERMINATED HA OPNsense-backup: service terminated due to an unrecoverable condition. Check previous error message(s), address the problem and restart!
Running OPNsense 25.1.5_5-amd64 at the time of writing
#5
I do have the exact same issue after updating from 24.7.12-4 to 25.1.1.

I tried the "Reset Log Files", but that did not work for me. The Firewall widget does not display anything other than "Waiting for data...".

I too have a setup with 2 firewalls (Primary / Standby). The Standby firewall has no problem with the widget, but the primary has the issue. (the primary was fine prior to the update when it was running 24.7.12-4).
#6
Hello,

OPNsense 24.7.3_1

When I enable /var/log RAM disk (Use memory file system for /var/log) and reboot, I cannot see the Kea DHCP logs anymore.
I noticed the following error message in the backend log:
2024-09-22T21:40:28-04:00 Error configd.py
[c6d73318-a7ab-448f-b5dd-926982c4c82d] Script action failed with Command '/usr/local/opnsense/scripts/syslog/queryLog.py --limit '500' --offset '0' --filter '' --module 'core' --filename 'kea' --severity 'Emergency,Alert,Critical,Error,Warning,Notice,Informational' --valid_from '1726969229.675'' returned non-zero exit status 1. at Traceback (most recent call last): File "/usr/local/opnsense/service/modules/actions/script_output.py", line 76, in execute subprocess.check_call(script_command, env=self.config_environment, shell=True, File "/usr/local/lib/python3.11/subprocess.py", line 413, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '/usr/local/opnsense/scripts/syslog/queryLog.py --limit '500' --offset '0' --filter '' --module 'core' --filename 'kea' --severity 'Emergency,Alert,Critical,Error,Warning,Notice,Informational' --valid_from '1726969229.675'' returned non-zero exit status 1.
#7
So, I eventually figured out the problem, and it was of course on the user/admin side ...
On the standby node, I did not type the IP address correctly for the PFsync Synchronize Peer IP: I had typed 10.90.0.251 instead of 10.99.0.251 ...  :-[

Now that this is fixed, everything is working as expected.
#8
I did a short packet capture on my PFSYNC interface on both the primary and standby, and I can see the pfsync traffic going from primary to standby.

PRIMARY

Interface                     Timestamp                     SRC                  DST                 output
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.532333    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1488: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1454 insert count 2 update compressed count 7 delete compressed count 23 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.622583    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1416: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1382 insert count 2 update compressed count 6 delete compressed count 24 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.843370    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 230: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 196 update compressed count 2 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.974409    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1429: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1395 insert count 3 update compressed count 3 delete compressed count 23 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.020705    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 146: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 112 update compressed count 1 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.099974    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1369: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1335 insert count 3 update compressed count 2 delete compressed count 25 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.416010    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1208: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1174 insert count 2 update compressed count 7 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.486072    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1124: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1090 insert count 2 update compressed count 6 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.529206    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1040: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1006 insert count 2 update compressed count 5 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.638759    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1208: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1174 insert count 2 update compressed count 7 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.807782    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1233: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1199 insert count 3 update compressed count 4 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.870700    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1317: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1283 insert count 3 update compressed count 5 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:04.109522    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1292: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1258 insert count 2 update compressed count 8 eof count 1


STANDBY

Interface                     Timestamp                     SRC                  DST                 output
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.533114    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1488: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1454 insert count 2 update compressed count 7 delete compressed count 23 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.623336    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1416: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1382 insert count 2 update compressed count 6 delete compressed count 24 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.844388    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 230: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 196 update compressed count 2 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:02.975642    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1429: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1395 insert count 3 update compressed count 3 delete compressed count 23 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.021657    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 146: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 112 update compressed count 1 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.101193    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1369: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1335 insert count 3 update compressed count 2 delete compressed count 25 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.417133    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1208: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1174 insert count 2 update compressed count 7 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.487132    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1124: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1090 insert count 2 update compressed count 6 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.530410    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1040: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1006 insert count 2 update compressed count 5 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.639878    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1208: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1174 insert count 2 update compressed count 7 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.808977    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1233: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1199 insert count 3 update compressed count 4 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:03.871860    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1317: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1283 insert count 3 update compressed count 5 eof count 1
VLAN99_PFSYNC lagg0_vlan99    2024-08-12 19:31:04.110615    RE:DA:CT:ED:##:32    RE:DA:CT:ED:##:7b    IPv4, length 1292: 10.99.0.251 > 10.99.0.252: PFSYNCv5 len 1258 insert count 2 update compressed count 8 eof count 1


So pfsync packets are going through normally.

Still wondering what's causing this issue.
#9
Ok, I got it to work by giving the Elasticsearch user for Zenarmor the "superuser" role.
Now everything is green and working.

So why isn't it sufficient to give it "all" for the cluster and "all" for the "zenarmor_*" indices?
#10
Hello

I have a similar issue here.

Running OPNsense 24.7.1

Zenarmor version:
Engine 1.17.6  (Aug 5, 2024 12:52 PM)
Database 1.17.24080514 (Aug 5, 2024 12:52 PM)

Elasticsearch 8.11.3

The index check reports connection fails as described by furfix with the following message:
"Error (200)
Remote database connection failed."

But in addition to that, on the Zenarmor Dashboard, the reporting database status is Stopped:
"Reporting Database
Status: Stopped
Type: Remote Elasticsearch
Version: 8.11.3"

Also in the dashboard, I never see any data, other than the traffic graph (no top threat, no top host, no top apps, ..)
Under report: there is no data to display
Under live sessions: empty

But under Settings > Reporting & Data, if I click "Perform Health Check", I get a success message:
"Success
Health check performed successfully."

If I connect to Elasticsearch and look for indices, I can find the indices created by zenarmor.
If I query those indices for all records, I can see over 2000 records in some of them and growing.

So somehow it is failing to access the data that it is writing.

I checked my user's role and it has permissions "all" for cluster and "all" for "zenarmor_*" indices.

Am I missing something?
#11
Hello,

I have OPNsense configured with HA, CARP works fine, no issues with it.
However, PFsync does not seem to work properly: when I switch to the backup (or back to the primary), all my established connections die. (I tested with an ssh connection to a host behind both firewalls; it hangs and then resets when the CARP switch PRIMARY => BACKUP happens.)

This is not my first HA setup; I have been running OPNsense with HA for 4 years and it has been working very well (both CARP and PFsync, with seamless transitions to the backup without losing any connection).

So maybe I did something wrong here in this new setup I did and I may need another pair of eyes to look at my setup and figure out what is wrong.

For 24.7, I did a fresh install.

Both the primary and backup are VMs, just like my previous 24.1 setup (and that previous setup worked fine for years, starting at 20.7 and upgraded all the way to 24.1).

(important note: my previous 24.1 setup is no longer running; I shut down and have since deleted those VMs, so only the new setup exists)

One difference with my new system is that the primary VM uses PCI passthrough for the 10Gb port (LAN - ix0) and the 1Gb port (WAN - igb0).

The backup VM uses VirtIO adapters for both (vtnet0 & vtnet1).

So on both sides I created failover LAGG interfaces (with a single port in each) and configured lagg0 for LAN and lagg1 for WAN, so that the interface names match on both sides, which is important for state syncing as indicated in the docs.

Then on top of the LAN LAGG interface (lagg0) I created a bunch of VLANs as this port is a trunk port with several tagged VLANs.
That part of the setup (VLANs) is identical to my previous one (24.1), where all my networks are connected to the firewall via a single port with tagged VLANs.

So I end up with multiple lagg0_vlanXX VLAN interfaces, which are assigned, and I made sure the optXX assignments match on both sides (for example, on both sides lagg0_vlan10 is opt1, lagg0_vlan20 is opt2, etc.).

I have a dedicated VLAN for PFSYNC (VLAN99 - assigned to opt7 on both sides) which is also used by KEA DHCP for peer traffic.
On the primary that interface is configured with IP 10.90.0.251/24
On the backup that interface is configured with IP 10.90.0.252/24

The firewall rules for the PFSYNC interface are:

     Protocol     Source                     Port  Destination    Port         Gateway  Schedule   Description 
pass IPv4 PFSYNC  VLAN99_PFSYNC net          *     This Firewall  *            *        *          Allow pfSync traffic 
pass IPv4 TCP     VLAN99_PFSYNC net          *     This Firewall  443 (HTTPS)  *        *          Allow HTTPS traffic for config synchronization 
pass IPv4 TCP     VLAN99_PFSYNC net          *     This Firewall  8001         *        *          Allow Kea DHCP HA Peer traffic


System: High Availability: Settings - On the primary node:

Synchronize States: checked
Synchronize Interface: VLAN99_PFSYNC
Sync Compatibility: OPNsense 24.7 or above
Synchronize Peer IP: 10.99.0.252
Synchronize Config: 10.99.0.252
Remote System Username: <the username of my backup node>
Remote System Password: <the password of my backup node>
Services to synchronize (XMLRPC Sync): Aliases, Certificates, Dashboard, Firewall Categories, Firewall Groups, Firewall Log Templates, Firewall Rules, Firewall Schedules, IPsec, Kea DHCP, NAT, Network Time, Unbound DNS, Virtual IPS


System: High Availability: Settings - On the secondary node:

Synchronize States: checked
Synchronize Interface: VLAN99_PFSYNC
Sync Compatibility: OPNsense 24.7 or above
Synchronize Peer IP: 10.99.0.251
(fields that are not indicated are either empty or default value)

System: High Availability: Status -  On the primary node:
<showing the backup firewall version and services, all green, and synchronization of configuration works fine>

System: High Availability: Status -  On the backup node:
The backup firewall is not accessible or not configured.



When I look at the Firewall: Diagnostics: States on both nodes, I can see a "similar" number of states: ~1700 on primary and ~1500 on backup.

But if I switch from primary to backup (by enabling Persistent CARP Maintenance Mode on the primary), any established connections (like ssh) hang and die. Also, when I then compare the states in Firewall: Diagnostics: States on both nodes, the primary shows ~500 states and the backup ~2200.

So something must be wrong somewhere, but I cannot figure out what. Is there a log or place where I can see more details about PFsync activity and verify it is working as expected?
Let me know if you need more information.

Thank you
#12
I guess I may be the only one who tried that?

Should I open an issue/ticket somewhere? If so, where should that be?
#13
Hello,

I was able to successfully configure IPSec roadwarrior using EAP-MSCHAPv2 + Certificate (using the new connections (swanctl.conf)).
I just followed the instructions from the wiki for EAP-MSCHAPv2, then added another remote authentication round (round 0) using Public Key before the EAP-MSCHAPv2 one (round 1), and that was it.

But then I wanted to add more certificates so multiple users can connect, so I created certificates for all my users and added them to the Public Key authentication round (it allows selecting more than one certificate - see screenshot attached).

However, I noticed that only one of the clients could connect; the others cannot.
The other clients get a "no matching peer config found" error:

2024-08-05T21:16:17-04:00 Informational charon 10[CFG] <19> no matching peer config found

It turns out that the client that can connect corresponds to the certificate selected first in the list.

I tried selecting them in a different order, and then another client could connect, but none of the others.

So I am not sure how this Certificates field really works, but it seems that only the first certificate in the list is used.

I was reading the swanctl.conf doc and the description is
Quote
certs: Comma separated list of certificates to accept for authentication. The certificates may use a relative path from the swanctl/x509 directory or an absolute path

I looked at my generated swanctl.conf, and that section looks as follows:

        remote-8ccbba89-c628-4ea0-a7ee-15fa7e0d71c2 {
            round = 0
            auth = pubkey
            certs = 66ad6e885fe21.crt,66b16e44c13bc.crt,66aff2593ebc7.crt,66ae72bb9bd73.crt
        }


So all 4 certificates are in the list ... but only the first one seems to work.
And indeed, if I select them in a different order, the first one changes and a different client can connect, but not the others.
So the list does not seem to work; it seems to only check against the first entry.
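In case it helps anyone else in the meantime: a possible workaround (a sketch I have not tested, and `vpn-users-ca.crt` is a placeholder for whatever CA issued the client certificates) would be to constrain the round by issuing CA instead of listing leaf certificates, which swanctl.conf supports via `cacerts`:

```
        remote-clients {
            round = 0
            auth = pubkey
            # accept any client certificate issued by this CA (placeholder file name)
            cacerts = vpn-users-ca.crt
        }
```

That said, I do not know whether the OPNsense GUI can generate this form.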

Is this a swanctl bug? or am I misconfiguring something?
#14
Under "Services: ACME Client: Log Files" both  tabs "System Log" and "ACME Log" are both always empty for me.
The only logs I can see related to acme are in "System: Log Files: General"

I think @zyon is talking about the certificate list available under "System: Trust: Certificates".
Once the ACME client issues or renews a certificate, it adds it to that list, and there you have the option to download it (either as a P12 or a PEM).
It did not work for me yesterday, but it is working today, and I rebooted in between; so, as zyon said, a reboot seems to fix that part, but it did not fix the logs for me, nor the automation scripts.

And just want to mention that I am running a fresh 24.7 install (not an upgrade)
#15
Same problem here.

I also noticed that the ACME logs do not appear under "Services: ACME Client: Log Files" (this is always empty). The only way to see the logs is in "System: Log Files: General"

Finally, there seems to be something wrong with the automation scripts. I am using one that runs a remote ssh command. If I edit it to change the remote command, the edit seems to work (the UI displays the command I changed). But when I run the automation (either via "Test Connection" directly in the Automation edit panel, or via "Run automations" from a certificate that calls this automation), it still runs the old command that was there prior to editing.


It seems that the ACME service is not working very well with 24.7