Messages - johnride

#1
Found the problem. The log files were filling up because `Log packets matched from the default pass rules` is enabled by default. I'm not entirely sure why it is so much worse with these new setups, but reducing the space allowed for logs is the proper fix in my situation. On a machine with only 2 GB of RAM I can't let logging use 1 GB.

Somehow tmpfs never crossed my mind when analyzing RAM usage. I used to run into this all the time 10 years ago, but it hasn't happened in such a long time...

A good reminder to revisit the basics!

FYI, I found it after reverting to 23.7, whose old dashboard showed me /var/log at 99% full with type tmpfs. My brain is wired to ignore tmpfs entries when running df; I will have to unwire that one.
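
For anyone hitting the same thing, the quickest check is to stop skimming past the tmpfs lines (sketch; standard FreeBSD df/du/sort flags, nothing OPNsense-specific):

# list only tmpfs mounts, then the filesystem backing /var/log
df -h -t tmpfs
df -h /var/log

# see which log directories are eating the space
du -sh /var/log/* | sort -h | tail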
#2
I did the upgrade to 24.7.5 on both firewalls and the issue still occurs.

I will post more logs soon; I'm still investigating. Basically, I can see Wired memory going up, but as memory usage climbs the statistics from top no longer add up, and I don't see anything obvious with ps either.
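
For reference, this is roughly what I am using to cross-check the numbers (sketch; plain FreeBSD tools, run a few hours apart and compared between runs):

# batch-mode top, processes sorted by resident size
top -b -o res 20

# kernel-side allocations that never show up in ps or per-process top output
vmstat -m
vmstat -z

# mbuf/network buffer usage and tmpfs mounts, both of which also sit in RAM
netstat -m
df -h -t tmpfs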
#3
So it seems I was not blind: proper OPNsense automation is still kind of hacky. I will have a look at that puzzle collection; maybe it's good enough for now, until proper APIs are implemented.

Thanks for the info!
#4
I have been running OPNsense for about 5 years on about 8 instances (WatchGuard XTM-5 and Firebox M470 hardware) and had no issues with similar configurations until my two latest installations on 24.7.x.

Now, every day around 11 PM Eastern time, memory usage goes up by about 200-300 MB for a reason that still eludes me. This happens on an XTM-5 with 2 GB of RAM and also on a Firebox M470 with 4 GB of RAM.

Both instances are fresh installs. I have not yet upgraded my other firewalls, which run similar configurations with memory stable around 400-500 MB.

If I let it go on the XTM-5, after about 4 days services start crashing OOM, but memory usage never goes back down.

At this point I suspect a kernel bug, but I am not sure how to pin it down. I am more of a Linux admin and relatively new to FreeBSD.

Here is my setup on the XTM-5:

Installed packages
  • os-haproxy

Configuration

  • HAProxy is running 4 public TCP services on 6 real servers; very low usage at this point, let's say around 1 request per second during office hours.
  • 3 LANs, each with between 5 and 15 clients.
  • 2 DHCP servers with a few static leases and PXE boot options
  • WireGuard VPN with 1-3 clients connected.
  • PPPoE WAN connection

And on the Firebox M470:

Installed packages

  • os-haproxy
  • os-tftp

Configuration

  • HAProxy is not configured yet
  • 1 LAN with between 5 and 15 clients.
  • 1 DHCP server, no static leases
  • OpenVPN with 0-1 clients connected.
  • TFTP serving a few files for PXE boot
  • DHCP WAN connection

In dmesg I see a few occurrences of this:

sonewconn: pcb 0xfffff80055812a80 (10.10.16.1:6443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (1 occurrences), euid 0, rgid 0, jail 0
sonewconn: pcb 0xfffff8000918aa80 (10.10.16.1:6443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (1 occurrences), euid 0, rgid 0, jail 0
sonewconn: pcb 0xfffff800072e0000 (10.10.16.1:6443 (proto 6)): Listen queue overflow: 193 already in queue awaiting acceptance (1 occurrences), euid 0, rgid 0, jail 0


It is HAProxy that is listening on 10.10.16.1:6443, but its memory usage is normal and does not grow over time. Also, overall memory usage does not drop when I restart HAProxy or any other service.
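
In case it is related, the listen queue itself can be inspected with standard FreeBSD commands (sketch; the port is the one from the dmesg lines above):

# per-socket accept queue usage (qlen/incqlen/maxqlen)
netstat -Lan | grep 6443

# system-wide default accept queue limit
sysctl kern.ipc.soacceptqueue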

How can I further isolate the problem? Any pointers would be much appreciated!
#5
I am also considering automatically crafting a config.xml file and sending it to the config restoration service.

I am concerned about preserving the backup/restore and High Availability sync capabilities of OPNsense. I feel like directly editing the dhcpd and DNS config files might fly under the radar of OPNsense's sync/backup features.
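
The rough flow I have in mind would be something like this (sketch; /conf/config.xml is the live configuration file, the section names are the ones from a stock config and should be double-checked, and the restore step is the manual web UI one rather than an API call I have verified):

# pull the current configuration off the firewall
scp root@firewall:/conf/config.xml ./config.xml

# template or patch the <dhcpd> and <unbound> sections offline, then
# restore the edited file through System > Configuration > Backups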
#6
Hey there,

I am building an infrastructure-as-code orchestrator and I am looking for the best way to automatically set up DHCP static mappings and iPXE-related services.

The end goal is to have a series of modules that will 100% automate OPNsense configuration when building an OpenShift / OKD cluster on bare metal.

The questions are:

- What is the best way to automate OPNsense configuration today?
- What is the vision for the API that is currently in the works? Is the plan to cover all core OPNsense features?

For now, I have found the API is not yet mature, and quite a few posts online on this topic all seem to fall back on editing the dhcpd configuration file over SSH. I also need the internal DNS-DHCP integration enabled and to automate the setup of DNS overrides.
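
For what it is worth, the parts of the API that do exist work with plain key/secret authentication, which is the direction I would like to build on (sketch; the key and secret are generated per user in the web UI, the address is just an example, and the firmware status endpoint is only a smoke test, not DHCP):

# hypothetical placeholders for an API key/secret pair
KEY="your-api-key"
SECRET="your-api-secret"

# read-only call against an existing endpoint to verify API access
curl -k -u "${KEY}:${SECRET}" https://192.168.1.1/api/core/firmware/status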

Eventually I will also need to automate interface assignment, VLANs, VPNs and possibly WAF.

Thanks!