Is there a way to clear the log-folder when the FW does not boot anymore?
"mountroot" and "db" as "user" after the kernel can not boot up root
yes i could reinstall the fw and configure with a backup, but i could also just clear the most likely problem, a 32gb log-partition
question is - how?
boot with a .iso and go from there?
partial solution
boot the cd, dvd or .iso as a live system, go into a shell (either directly on the console or via an assigned IP), then follow this:
mkdir -p /tmp/staging
mkdir -p /tmp/old-root
zpool import -fR /tmp/staging zroot
mount -t zfs zroot/ROOT/default /tmp/old-root
zpool list - should show the pool; the HEALTH column should read ONLINE
zfs list - should show the datasets
cd /tmp/old-root/var/log/
rm -rf * - delete the accumulated logs (you are inside the mounted /var/log)
zfs list - shows available space; if AVAIL is more than 0B it worked
zpool status zroot
zpool scrub zroot - checks the pool for errors
zpool status zroot - wait until the scan line reports the scrub finished
zpool export zroot - cleanly unmounts and releases the pool before rebooting
Not if it can't be booted. But depending on the filesystem used, if you can mount it on a different machine (or on this one booted in live mode), then you would mount it and delete files as needed.
booting the installer .iso and going from there would have been my windows way, but i am not that experienced with linux, let alone mounting a drive there.
Ok, are you able to take the media out of the firewall device, assuming sata or ssd disk, and connecting it to say a laptop via a usb-to-sata or similar?
If not, burn a live usb image of any linux/freebsd os, boot with the firewall with it.
If you can do either of these two, then we can guide you on what to do next to do the mounting.
And what would being familiar with Linux have to do with OPNsense? It is based on FreeBSD, not Linux.
Boot from an install medium, login as "root", not as "installer". Invoke a shell, report the output of "gpart show" here. We will probably be able to help you.
from what 'gpart show' gives me, i would assume 'da4 32G' to be the log partition: 'freebsd-zfs', not boot, not swap
with 'zpool import -f -R /old/zfs zroot'
i could import said ZFS pool, but only read-only, because it was flagged as in use by another system
for completeness and to avoid a potential oops, I suggest you post the whole result:
$ sudo gpart show
Password:
=>       40  67108784  ada0  GPT  (32G)
         40    532480     1  efi  (260M)
     532520      1024     2  freebsd-boot  (512K)
     533544       984        - free -  (492K)
     534528  16777216     3  freebsd-swap  (8.0G)
   17311744  49795072     4  freebsd-zfs  (24G)
   67106816      2008        - free -  (1.0M)
Next show the datasets please. Hopefully something like this:
$ zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               1.78G  21.0G    96K  /zroot
zroot/ROOT          1.58G  21.0G    96K  none
zroot/ROOT/default  1.58G  21.0G  1.58G  /
zroot/tmp            512K  21.0G   512K  /tmp
zroot/usr            384K  21.0G    96K  /usr
zroot/usr/home        96K  21.0G    96K  /usr/home
zroot/usr/ports       96K  21.0G    96K  /usr/ports
zroot/usr/src         96K  21.0G    96K  /usr/src
zroot/var            173M  21.0G    96K  /var
zroot/var/audit       96K  21.0G    96K  /var/audit
zroot/var/crash       96K  21.0G    96K  /var/crash
zroot/var/log        173M  21.0G   173M  /var/log
zroot/var/mail       112K  21.0G   112K  /var/mail
zroot/var/tmp        112K  21.0G   112K  /var/tmp
it does look like yours, different numbers of course, mounted to /old/zfs/var, /old/zfs/usr, /old/zfs/tmp etc. - except you have available space, while i have a straight 0B everywhere
i would love to copy-paste, but i only have the VMware web console, which acts like a monitor, not like a shell
gpart https://ibb.co/vvywYxb
zfs https://ibb.co/YjZZbHs
So you seem to have mounted the dataset already on /old/zfs/var/log
In that case you just need to navigate there ie "# cd /old/zfs/var/log/" then start removing chunks of files to free up space.
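A hedged sketch of that removal step (the mount path matches the earlier import in this thread; the file patterns are just examples, so check first what is actually eating the space):

```shell
# Sketch: free space inside the mounted log dataset.
# /old/zfs/var/log matches the altroot used earlier; adjust to yours.
cd /old/zfs/var/log || exit 1

# See which files/directories are the biggest before deleting anything.
du -sh ./* | sort -h | tail

# Remove the largest offenders, e.g. log files and rotated logs.
rm -f ./*.log ./*/*.log

# Verify that the pool actually gained free space.
zfs list zroot/var/log
```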
Sorry to hijack the thread, just adding my comment:
since CLOG has been deprecated, and as people are upgrading to newer OPNsense releases that use syslog-ng now, this logging topic will become more and more of an issue. Small FW boxes with limited storage (especially small SBCs with tiny SSDs) will easily run into a storage-full situation due to logs.
Maybe, just maybe: once logging has been running for a couple of hours, it would be great to have some form of prediction that tries to tell how many hours (under extreme conditions) or days of logs can be stored before the disk fills up. Calculating the log storage consumption delta divided by the sampling period gives a rough estimate. That way it would be immediately clear whether the settings can stay as they are (keeping X days of logs is fine) or the retention has to be cut to X-Y days.
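That estimate can be sketched in a few lines of shell, assuming you sample 'du -sk /var/log' twice (all paths and numbers below are purely illustrative, not from this thread):

```shell
#!/bin/sh
# Rough "days until the disk is full" estimate from two log-size samples.
# Inputs are kilobytes and hours; integer arithmetic only.

estimate_days_left() {
    size1_kb=$1      # first  'du -sk /var/log' sample, in KB
    size2_kb=$2      # second 'du -sk /var/log' sample, in KB
    hours=$3         # hours between the two samples
    avail_kb=$4      # free space left on the pool, e.g. AVAIL from 'zfs list'

    growth_per_hour=$(( (size2_kb - size1_kb) / hours ))
    if [ "$growth_per_hour" -le 0 ]; then
        echo "no measurable growth"
        return
    fi
    # hours of headroom divided by 24 gives whole days left
    echo $(( avail_kb / growth_per_hour / 24 ))
}

# Example: logs grew from 100 MB to 150 MB in 5 hours, with 20 GB free.
estimate_days_left 102400 153600 5 20971520
```

On the example numbers this prints 85, i.e. roughly 85 days of headroom at the measured growth rate.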
cd /old/ ... seems not to want to work
https://ibb.co/ScDgSTN
my case might be a bit special - since my FW runs on the same host as the rest of the VMware setup, with promiscuous mode active, it sees everything that is happening on that host. thus it seems to spam the logs like crazy. having logging for accepted packets enabled does not help either...
A couple of bits. Please, please, try to find a way to post the text instead of screenshots. If I'm working I can't use my work machine to see them, but have to go elsewhere and use a personal device. That's not possible when in the office.
You'll need to dial down the logging to prevent this from happening again, but that is only after you have recovered the system.
That out of the way, can you confirm you have booted your firewall with a live media?
Once booted to live media, you should be able to do something like this (I've tested it with a VM):
1. # mkdir -p /tmp/staging
2. # mkdir -p /tmp/old-root
3. # zpool import -fR /tmp/staging zroot
4. # mount -t zfs zroot/ROOT/default /tmp/old-root
1. and 2. create the two directories: one to import the pool into and one to mount the filesystem on. They are created in the live media's filesystem, so they are transient.
3. imports the pool: -f forces it, as it is flagged as in use by another system; -R defines the altroot.
4. with the pool imported into the system, you can now mount the root filesystem.
After this, you should be able to remove files as needed.
p.s. 4. might need tweaking depending on what live media you are using ie linux or freebsd. "zfs mount" might need to be used.
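For completeness, on a Linux live medium with the OpenZFS tools installed, the equivalent would look roughly like this (an untested sketch on my part; on Linux, "zfs mount" usually replaces the "mount -t zfs" step):

```shell
# Sketch for a Linux live system with the zfs userland tools available.
mkdir -p /tmp/staging
zpool import -f -R /tmp/staging zroot   # same forced import with an altroot
zfs mount zroot/ROOT/default            # mounts under the altroot; works for canmount=noauto
zfs mount -a                            # optionally mount the remaining datasets too
```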
commands worked so far, logs are cleared, 29 GB are available again.
still, mounting and thus booting is not possible due to
"
trying to mount root from vfs.root.mountfrom=zfs:zroot/].../default
Mounting from vfs.root.mountfrom=zfs:zroot/ailed with error 2: unknown file system.
"
what i assume it tried to load was "vfs.root.mountfrom=zfs:zroot/ROOT/default"
i assume we also have to fix the filesystem, maybe because zfs could not correctly write its metadata when the disk ran full? (if zfs even has something like NTFS's master file table)
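If the mountroot error persists, one thing worth checking from the live system (my assumption, not something confirmed in this thread) is that the pool's bootfs property still points at the right dataset, and that the pool gets exported cleanly before rebooting:

```shell
# Sketch: verify boot-related pool state from the live system.
zpool get bootfs zroot   # should report zroot/ROOT/default
zpool status zroot       # pool should be ONLINE with no known data errors
zpool export zroot       # export cleanly before rebooting from disk,
                         # otherwise the next boot sees the pool as "in use"
```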
Yes, the important part is to clear the space. Then taking out the virtual media and rebooting the VM should mount the filesystem as normal. At that point you really need to clear as much as possible and rein in all this logging.
Good luck.
i updated the opening post with the so far "solution"
root@OPNsense:/ # zpool status zroot
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Fri Apr 28 09:59:59 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          da0p4     ONLINE       0     0     0

errors: No known data errors
so space is available, the zpool/filesystem is healthy and clean - but it still does not want to boot as normal
root@OPNsense:/ # zfs get mountpoint
NAME                PROPERTY    VALUE                   SOURCE
zroot               mountpoint  /tmp/staging/zroot      local
zroot/ROOT          mountpoint  none                    local
zroot/ROOT/default  mountpoint  /tmp/staging            local
zroot/tmp           mountpoint  /tmp/staging/tmp        local
zroot/usr           mountpoint  /tmp/staging/usr        local
zroot/usr/home      mountpoint  /tmp/staging/usr/home   inherited from zroot/usr
zroot/usr/ports     mountpoint  /tmp/staging/usr/ports  inherited from zroot/usr
zroot/usr/src       mountpoint  /tmp/staging/usr/src    inherited from zroot/usr
zroot/var           mountpoint  /tmp/staging/var        local
zroot/var/audit     mountpoint  /tmp/staging/var/audit  inherited from zroot/var
zroot/var/crash     mountpoint  /tmp/staging/var/crash  inherited from zroot/var
zroot/var/log       mountpoint  /tmp/staging/var/log    inherited from zroot/var
zroot/var/mail      mountpoint  /tmp/staging/var/mail   inherited from zroot/var
zroot/var/tmp       mountpoint  /tmp/staging/var/tmp    inherited from zroot/var
root@OPNsense:/ # zfs get canmount
NAME                PROPERTY  VALUE   SOURCE
zroot               canmount  on      default
zroot/ROOT          canmount  on      default
zroot/ROOT/default  canmount  noauto  local
zroot/tmp           canmount  on      default
zroot/usr           canmount  off     local
zroot/usr/home      canmount  on      default
zroot/usr/ports     canmount  on      default
zroot/usr/src       canmount  on      default
zroot/var           canmount  off     local
zroot/var/audit     canmount  on      default
zroot/var/crash     canmount  on      default
zroot/var/log       canmount  on      default
zroot/var/mail      canmount  on      default
zroot/var/tmp       canmount  on      default
going by https://forums.freebsd.org/threads/zfs-boot-issue.81284/ that should be looking good so far - and i did not change a thing. the EFI boot partition was never full, though. i will need to test more over the long weekend
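One thing to keep in mind (my assumption, not verified on this particular box): the /tmp/staging prefixes shown in the mountpoint output come from the -R altroot and are temporary; after a clean export they should revert to the stored values. A quick way to double-check from the live system:

```shell
# Sketch: confirm the stored mountpoints are sane, then leave the pool clean.
zpool export zroot
zpool import -N -f zroot                              # -N: import without mounting anything
zfs get -o name,value mountpoint zroot/ROOT/default   # expect "/" here
zpool export zroot                                    # export again before the real boot from disk
```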