/var/log full

Started by MajStealth, April 25, 2023, 03:04:11 PM

April 25, 2023, 03:04:11 PM Last Edit: April 28, 2023, 12:07:47 PM by MajStealth
Is there a way to clear the log folder when the firewall does not boot anymore?

I only get the "mountroot" and "db" prompts as "user" after the kernel fails to mount root.

Yes, I could reinstall the firewall and restore a configuration backup, but I could also just clear the most likely culprit: a full 32 GB log partition.

The question is: how?

Boot with an .iso and go from there?


Partial solution

Boot the CD, DVD, or .iso as a live CD, go into a shell (either directly or via an assigned IP), then follow this:
mkdir -p /tmp/staging
mkdir -p /tmp/old-root
zpool import -fR /tmp/staging zroot
mount -t zfs zroot/ROOT/default /tmp/old-root

zpool list - should show the pool; the HEALTH column matters here
zfs list - should show the datasets

cd /tmp/old-root/var/log/
rm -rf ./* - delete the files in the log directory
zfs list - shows available space; if AVAIL is more than 0B, it worked
zpool status zroot
zpool scrub zroot - scrubs the pool, checking all data for errors
zpool status zroot
zpool export zroot - unmounts and exports the zpool
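The "more than 0B" check above can be scripted; a minimal sketch, assuming the `zfs list` output format shown later in this thread (the sample output here is canned, on the real box you would pipe `zfs list zroot` instead):

```shell
# Canned sample of `zfs list zroot`; replace with the real command output.
sample_output='NAME   USED  AVAIL  REFER  MOUNTPOINT
zroot  1.78G  21.0G    96K  /zroot'
# AVAIL is the third column of the data row
avail=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $3}')
if [ "$avail" = "0B" ]; then
  echo "pool still full"
else
  echo "free space available: $avail"
fi
```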

Not if it can't be booted, but depending on the filesystem used, you may be able to mount it on a different machine (or on this one in live mode) and then delete files as needed.

Booting the installer .iso and going from there would have been my Windows way, but I am not that used to Linux, let alone mounting a drive.

Ok, are you able to take the media out of the firewall device (assuming a SATA or SSD disk) and connect it to, say, a laptop via a USB-to-SATA adapter or similar?
If not, burn a live USB image of any Linux/FreeBSD OS and boot the firewall with it.
If you can do either of these two, then we can guide you through the mounting.

And what would being familiar with Linux have to do with OPNsense?

Boot from an install medium, login as "root", not as "installer". Invoke a shell, report the output of "gpart show" here. We will probably be able to help you.
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

April 25, 2023, 03:52:43 PM #5 Last Edit: April 25, 2023, 04:01:38 PM by MajStealth
From what 'gpart show' gives me, I would assume 'da4 32g' to be the log partition - 'freebsd-zfs', not boot, not swap.

With 'zpool import -f -R /old/zfs zroot'
I could import said ZFS pool, but only read-only, because it was last used in another system.

For completeness, and to avoid a potential oops, I suggest you post the whole result:
$ sudo gpart show
Password:
=>      40  67108784  ada0  GPT  (32G)
        40    532480     1  efi  (260M)
    532520      1024     2  freebsd-boot  (512K)
    533544       984        - free -  (492K)
    534528  16777216     3  freebsd-swap  (8.0G)
  17311744  49795072     4  freebsd-zfs  (24G)
  67106816      2008        - free -  (1.0M)

Next show the datasets please. Hopefully something like this:
$ zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
zroot               1.78G  21.0G       96K  /zroot
zroot/ROOT          1.58G  21.0G       96K  none
zroot/ROOT/default  1.58G  21.0G     1.58G  /
zroot/tmp            512K  21.0G      512K  /tmp
zroot/usr            384K  21.0G       96K  /usr
zroot/usr/home        96K  21.0G       96K  /usr/home
zroot/usr/ports       96K  21.0G       96K  /usr/ports
zroot/usr/src         96K  21.0G       96K  /usr/src
zroot/var            173M  21.0G       96K  /var
zroot/var/audit       96K  21.0G       96K  /var/audit
zroot/var/crash       96K  21.0G       96K  /var/crash
zroot/var/log        173M  21.0G      173M  /var/log
zroot/var/mail       112K  21.0G      112K  /var/mail
zroot/var/tmp        112K  21.0G      112K  /var/tmp



April 25, 2023, 04:24:30 PM #7 Last Edit: April 25, 2023, 04:28:48 PM by MajStealth
It does look like yours (different numbers of course), mounted to /old/zfs/var, /usr, /tmp etc., except that you have available space and I have a straight 0B.
I would love to copy-paste, but I only have the VMware web console, which acts like a monitor, not a shell.
gpart https://ibb.co/vvywYxb
zfs https://ibb.co/YjZZbHs

So you seem to have mounted the dataset already on /old/zfs/var/log.
In that case you just need to navigate there, i.e. "# cd /old/zfs/var/log/", then start removing chunks of files to free up space.
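One way to remove those chunks selectively is to delete only the large log files and keep the small ones. A sketch, demonstrated on a scratch directory (in the real case you would point LOGDIR at the mounted dataset, e.g. /old/zfs/var/log from this thread, and the file names here are made up):

```shell
# Demo on a scratch directory; point LOGDIR at the real mountpoint instead.
LOGDIR=$(mktemp -d)
truncate -s 2M "$LOGDIR/filter.log"     # stand-in for a huge log
truncate -s 4K "$LOGDIR/dmesg.today"    # small file we want to keep
# first, list log files bigger than 1 MB before deleting anything
find "$LOGDIR" -type f -size +1M -print
# then delete only those and check what is left
find "$LOGDIR" -type f -size +1M -delete
ls "$LOGDIR"
```

Running the `-print` pass first is a cheap dry run, so you can sanity-check the match list before the `-delete` pass touches anything.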

Sorry to hijack the thread, just adding my comment:

Since CLOG has been deprecated and people are upgrading to newer OPNsense releases that use syslog-ng, this logging topic will become more and more of an issue. Small firewall boxes with limited storage (especially small SBCs with tiny SSDs) will easily run out of space due to logs.

Maybe, just maybe: if logging has been running for a couple of hours, it would be great to have some form of prediction that estimates how many hours (under extreme conditions) or days of logs the storage can hold before it fills up. Calculating the log storage consumption delta divided by the sampling period gives a rough estimate, so it becomes immediately clear whether the settings can stay as they are (keeping X days of logs is fine) or the retention has to be cut to fewer days.
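That delta-over-sampling-period estimate is simple enough to do by hand; a minimal sketch, where all the numbers are made up for illustration (on a real box you would take the two usage samples with `du -sk /var/log` an hour apart and read AVAIL from `zfs list`):

```shell
# Back-of-envelope fill-time estimate from two usage samples, one hour apart.
used_kb_t0=120000       # `du -sk /var/log` at time t0 (made-up value)
used_kb_t1=138000       # same, one hour later (made-up value)
avail_kb=25000000       # AVAIL of the log dataset, in KB (made-up value)
delta_per_hour=$((used_kb_t1 - used_kb_t0))
hours_left=$((avail_kb / delta_per_hour))
echo "growing ~${delta_per_hour} KB/hour; roughly ${hours_left} hours until full"
```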

cd /old/ ... seems not to want to work

https://ibb.co/ScDgSTN

My case might be a bit special: my firewall runs on the same host as the rest of VMware, with promiscuous mode active, so it sees everything happening on that host. It therefore seems to spam the logs like crazy, and having logging enabled for accepted packets does not help either...

A couple of bits. Please, please try to find a way to post the text instead of screenshots. If I'm working, I can't use my work machine to see them and have to go somewhere else and use a personal device; not possible when in the office.
You'll need to dial down the logging to prevent this from happening again, but that is only after you have recovered the system.
That out of the way, can you confirm you have booted your firewall with a live media?

Once booted to live media, you should be able to do something like this (I've tested it with a VM):
1. # mkdir -p /tmp/staging
2. # mkdir -p /tmp/old-root
3. # zpool import -fR /tmp/staging zroot
4. # mount -t zfs zroot/ROOT/default /tmp/old-root
1. and 2. create two directories, one to import the pool into and one to mount the filesystem to. They are created in the live media filesystem, so they are transient.
3. imports the pool: -f forces it, as it is flagged as in use by another system; -R defines the altroot.
4. with the pool imported into the system, you can now mount the filesystem.
After this, you should be able to remove files as needed.
P.S. 4. might need tweaking depending on which live media you are using, i.e. Linux or FreeBSD; "zfs mount" might be needed instead.

April 27, 2023, 08:30:38 AM #13 Last Edit: April 27, 2023, 08:44:46 AM by MajStealth
Commands worked so far; the logs are cleared and 29 GB are available again.

Still, mounting, and thus booting, is not possible due to:
"
trying to mount root from vfs.root.mountfrom=zfs:zroot/].../default
Mounting from vfs.root.mountfrom=zfs:zroot/ailed with error 2: unknown file system.
"
What I assume it tried to load was "vfs.root.mountfrom=zfs:zroot/ROOT/default".

I assume we also have to fix the filesystem, maybe because ZFS could not correctly write its metadata entries? If ZFS even uses something like a master file table.

Yes, the important part is to clear the space. Then, after taking out the virtual media and rebooting the VM, the filesystem will mount as normal. At that point you really need to clear as much as possible and rein in all this logging.
Good luck.