Messages - geotek

#1
On average the filter logs are only 2 GB per day, but I have already observed 5 GB per hour on some occasions. By the time I get a disk space warning from my network management system, it is usually already too late and the GUI is inoperable because of low disk space.

I would expect local logging to be intelligent enough to simply delete the oldest log files when the disk is in danger of running out of space, but apparently this is not the case.
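Something along the lines of this minimal Python sketch is what I have in mind; the directory, threshold, and function name are examples of mine, not anything OPNsense actually ships:

```python
import os
import shutil

LOG_DIR = "/var/log/filter"          # example directory
MIN_FREE_BYTES = 2 * 1024 ** 3       # example threshold: keep at least 2 GB free

def prune_oldest(log_dir, min_free_bytes):
    """Delete the oldest files in log_dir until free space >= min_free_bytes.

    Returns the list of deleted paths, oldest first.
    """
    # Sort files by modification time, oldest first
    files = sorted(
        (os.path.join(log_dir, f) for f in os.listdir(log_dir)),
        key=os.path.getmtime,
    )
    deleted = []
    for path in files:
        # Stop as soon as enough free space is available
        if shutil.disk_usage(log_dir).free >= min_free_bytes:
            break
        os.remove(path)
        deleted.append(path)
    return deleted
```

A cron job calling `prune_oldest(LOG_DIR, MIN_FREE_BYTES)` every few minutes would at least keep the box alive until the underlying logging problem is fixed.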
#2
Under heavy load, the log files in /var/log/filter are sometimes filling up all my disk space.

I have already configured the "maximum file size" and the "maximum preserved files" under System / Settings / Local, but the filter logs are growing much larger than they should, so these settings do not seem to have any effect on the number or size of the filter logs.

Is this behavior intended? If so, is there another way to rotate filter logs so that there is no risk of running out of disk space?

Shutting off the logs completely is not an option for me.

I am using OPNsense 25.1.5_5-amd64
#3
One more observation: only boxes using one of the three Aho-Corasick pattern matchers are affected, even with today's updated rules. Boxes with the Hyperscan matcher were not affected. After changing the matcher to Hyperscan, the problem was solved on all of our previously affected firewalls.

I hope this helps in identifying and fixing the cause.
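For context, the pattern matcher setting corresponds to Suricata's `mpm-algo` option. As a sketch only — OPNsense generates this file itself, so direct edits may be overwritten:

```yaml
# suricata.yaml (generated by OPNsense; shown for reference only)
# "ac" selects an Aho-Corasick matcher, "hs" selects Hyperscan
mpm-algo: hs
```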
#4
We are using the Proofpoint rules, and all OPNsense versions from 24.x to 25.1 are affected. The error message is:

<Error> -- Just ran out of space in the queue. Fatal Error. Exiting. Please file a bug report on this

It looks like a broken rule update is responsible for this, since ample memory and disk space are available on our boxes.
#5
24.7, 24.10 Legacy Series / Re: New dashboard widgets
August 12, 2024, 06:21:48 PM
I fully agree with waxhead. In my opinion the new widgets are fun to play around with, but for commercial use they are a step backward.

  • More screen space is wasted than before
  • More effort is needed than before to set up the dashboard in an informative and consistent way
  • There is no longer a separate CPU gauge that allows judging CPU utilization at a glance. The CPU graphs look beautiful but are of little use because they scale automatically.
  • The widget layout from 24.1 is lost after migration to 24.7. I would have expected at least the number of columns in the previous layout to be kept

Some suggestions:


  • Make a new widget that combines the gauges for CPU, Memory, Swap and Disk in one widget, or maybe re-include them in the main system widget. I can't imagine that anyone would not want to see at a glance whether CPU or memory usage is constantly at 100%
  • Make at least the CPU, Memory, Swap and Disk graphs editable so that their scale can be fixed at 100%
  • Add a GUI button to export and import the widget configuration, including widget selection, positioning and individual settings. This would make it possible to keep dashboards identical across multiple firewalls. As it is, manually arranging the dashboard identically on 50 firewalls is a nightmare

Please don't get me wrong, I am very enthusiastic about OPNsense, but these new widgets are just not yet ready for prime time.
#6
Quote from: Embroider5378 on March 22, 2023, 03:00:25 AM
I have Adguard on OPNSense and also had this issue.

To fix this, I set the DNS to a public resolver (Cloudflare) on System-->Settings-->General as suggested. However, that doesn't actually work unless you also check the box on the same page "Do not use the local DNS service as a nameserver for this system".

This solved the issue here too, but it is more of a workaround than a solution. The root cause of the slow backups was an active Unbound DNS service with no forwarding enabled. I assume this caused Unbound to query the root DNS servers directly, which explains the painfully slow update process.

So a better solution would be to either disable Unbound DNS if it is not needed, or to check the box Services / Unbound DNS / Query Forwarding / "Use System Nameservers".
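For reference, that checkbox effectively enables a forward zone in Unbound, so queries go to the configured upstream resolvers instead of being iterated from the root servers. A minimal unbound.conf sketch — OPNsense generates this configuration itself, and 1.1.1.1 is just an example forwarder:

```
# unbound.conf sketch: forward all queries instead of iterating from the roots
forward-zone:
    name: "."
    forward-addr: 1.1.1.1
```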
#7
We are seeing the same phenomenon here. One stable connection as the default and a second one as a failover backup. As soon as dpinger raises a packet loss alarm on the backup connection, connections on the primary line are lost, even though no gateway switchover takes place. So far, however, I have not had time to analyze this further.
#8
20.7 Legacy Series / Re: GeoIP 20.7 solution
August 30, 2020, 04:11:04 PM
There is definitely something wrong with GeoIP processing in v20.7.1. After upgrading to this version, GeoIP falsely blocked legitimate IPs. Setting "Firewall Maximum Table Entries" to 200000 resolved this issue instantly. When I leave this box empty, the help says "On your system the default size is: 200000". But this can't be right, otherwise explicitly setting this value to the same number should not change anything.

This is repeatable. After booting with this field left empty I get falsely blocked IPs; setting "Firewall Maximum Table Entries" to 200000 resolves the issue again.
#9
General Discussion / Re: Route Based IPsec Limitation
January 23, 2020, 10:51:26 PM
Take this as an example:

Location A (OPNsense)
  LAN: 192.168.10.0/24
  Public IP: 1.1.1.1
  VPNGW1: 2.2.2.2
  Static Route: 192.168.20.0/24 => VPNGW1

Location B (Juniper SRX)
  LAN: 192.168.20.0/24
  Public IP: 2.2.2.2

The IPsec tunnel between LAN A and LAN B works fine, as does NAT traffic from the LAN to the Internet. So everything is fine, except that hosts on LAN A can't reach the public IP of Location B (2.2.2.2); neither ping nor any other port responds.

Since the Juniper does route-based IPsec directly and does not have an OpenVPN-like transfer net, I have to set VPNGW1 to the public IP of site B.

#10
I just found out that when using Route Based IPsec tunnels, Port Forwarding only works if

  Firewall / Settings / Advanced / Use shared forwarding

is disabled. It is enabled by default. I wonder if this is a bug or something specific to my environment. If this is by design, it should be added to the manual, as it is not obvious.

This was observed on a fresh installation of OPNsense 19.7.9.
#11
I fell into the same ditch as you until I found out via this very helpful post that "Install Policy" must be unchecked. I hope this behaviour will be changed in the future, because simply selecting "Route Based" with the default settings should not render the whole firewall unreachable.
#12
General Discussion / Route Based IPsec Limitation
January 22, 2020, 04:53:25 PM
Scenario: Private LAN on Location A connected via OPNsense 19.7.9 to Internet. OPNsense has Route-Based IPsec tunnel to location B. Everything works as expected, except that the public IP of location B is now unreachable for hosts in private LAN of location A.

I assume that all traffic from the LAN to the public IP of location B is erroneously sent through the tunnel via the tunnel gateway, instead of being NATted out via the standard default route.

Is this behaviour a general design flaw of Route-Based IPsec on OPNsense or can it be solved somehow?

#13
This might indeed be the case. We noted that all snmp crashes were preceded by a sudden increase in memory usage. We also noticed that there was no swap partition, even though the disk was created with 1 GB of free space. Manually creating a swap partition should give some additional memory headroom against snmp crashes.
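As a sketch, swap can also be added after the fact via a swap file, following the FreeBSD handbook approach (the path /usr/swap0 and the 1 GB size are examples):

```
# create a 1 GB swap file (path and size are examples)
dd if=/dev/zero of=/usr/swap0 bs=1m count=1024
chmod 0600 /usr/swap0

# /etc/fstab entry to enable it at boot
md99  none  swap  sw,file=/usr/swap0,late  0  0

# activate it without rebooting
swapon -aL
```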
#14
The SNMP service stopped crashing here after increasing memory from 2 GB to 3 GB.
#15
We are evaluating OPNsense 17.1.2 for production use and have observed that the snmp service crashes regularly after about 5 days. We are sending snmp requests every five minutes for the most common Linux parameters (CPU utilization, memory usage, interface usage, uptime) using bare OIDs, so this can't be a MIB issue.

One other thing that bothers me is that we don't get any snmp response to the standard Nagios check_snmp_storage request. Linux hosts of all flavors give a valid reply to this SNMP query; only OPNsense does not. We could live with not being able to monitor disk usage on OPNsense boxes, but it is inconvenient that we have to treat them separately from all other Linux hosts.

It looks like the snmp service is dying because its log file keeps growing, but this should not happen IMO. SNMPD is notoriously talkative, and a quick and dirty solution would be to set the dontLogTCPWrappersConnects option. Of course this would not solve the underlying problem: it should not be possible to kill the service by sending legitimate snmp get requests.
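For reference, that net-snmp option is set in snmpd.conf; a sketch (the file path may differ on OPNsense):

```
# snmpd.conf (path may differ on OPNsense)
# suppress the per-connection TCP-wrapper log lines that bloat the log
dontLogTCPWrappersConnects yes
```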