Messages - MajStealth

#1
23.1 Legacy Series / Re: \var\log full
April 28, 2023, 02:15:51 PM
root@OPNsense:/ # zfs get mountpoint
NAME                PROPERTY    VALUE                   SOURCE
zroot               mountpoint  /tmp/staging/zroot      local
zroot/ROOT          mountpoint  none                    local
zroot/ROOT/default  mountpoint  /tmp/staging            local
zroot/tmp           mountpoint  /tmp/staging/tmp        local
zroot/usr           mountpoint  /tmp/staging/usr        local
zroot/usr/home      mountpoint  /tmp/staging/usr/home   inherited from zroot/usr
zroot/usr/ports     mountpoint  /tmp/staging/usr/ports  inherited from zroot/usr
zroot/usr/src       mountpoint  /tmp/staging/usr/src    inherited from zroot/usr
zroot/var           mountpoint  /tmp/staging/var        local
zroot/var/audit     mountpoint  /tmp/staging/var/audit  inherited from zroot/var
zroot/var/crash     mountpoint  /tmp/staging/var/crash  inherited from zroot/var
zroot/var/log       mountpoint  /tmp/staging/var/log    inherited from zroot/var
zroot/var/mail      mountpoint  /tmp/staging/var/mail   inherited from zroot/var
zroot/var/tmp       mountpoint  /tmp/staging/var/tmp    inherited from zroot/var
root@OPNsense:/ # zfs get canmount
NAME                PROPERTY  VALUE     SOURCE
zroot               canmount  on        default
zroot/ROOT          canmount  on        default
zroot/ROOT/default  canmount  noauto    local
zroot/tmp           canmount  on        default
zroot/usr           canmount  off       local
zroot/usr/home      canmount  on        default
zroot/usr/ports     canmount  on        default
zroot/usr/src       canmount  on        default
zroot/var           canmount  off       local
zroot/var/audit     canmount  on        default
zroot/var/crash     canmount  on        default
zroot/var/log       canmount  on        default
zroot/var/mail      canmount  on        default
zroot/var/tmp       canmount  on        default


Going by https://forums.freebsd.org/threads/zfs-boot-issue.81284/, that should be looking good so far, and I did not change a thing. The EFI boot partition was never full, though. I will need to test more over the long weekend.
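
Not from the thread, but one way to sanity-check that the /tmp/staging prefixes above come only from the rescue import's altroot (and are not a persisted change to the datasets) could be:

# sketch, run while the pool is imported with -R /tmp/staging; with an altroot set,
# every displayed mountpoint gets prefixed with it, so the values above are consistent
# with the stock layout (zroot/ROOT/default really mounting at /)
zpool get altroot zroot
zfs get -s local mountpoint zroot zroot/ROOT/default zroot/var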
#2
23.1 Legacy Series / Re: \var\log full
April 28, 2023, 12:06:29 PM
I updated the opening post with the "solution" so far.

root@OPNsense:/ # zpool status zroot
  pool: zroot
state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Fri Apr 28 09:59:59 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          da0p4     ONLINE       0     0     0

errors: No known data errors

So space is available and the zpool/filesystem is healthy and clean, but it still does not want to boot normally.
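
A healthy pool does not say which dataset the loader will actually try to boot from, though. A check one could run from the live system (not something done in the thread):

# sketch: the FreeBSD loader picks the root dataset from the pool's bootfs property;
# on a stock ZFS install it should point at zroot/ROOT/default
zpool get bootfs zroot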
#3
23.1 Legacy Series / Re: \var\log full
April 27, 2023, 08:30:38 AM
The commands worked so far; the logs are cleared and 29 GB are available again.

Still, mounting and thus booting is not possible, due to:
"
trying to mount root from vfs.root.mountfrom=zfs:zroot/].../default
Mounting from vfs.root.mountfrom=zfs:zroot/ailed with error 2: unknown file system.
"
What I assume it tried to load was "vfs.root.mountfrom=zfs:zroot/ROOT/default".

I assume we also have to fix the filesystem, maybe because ZFS could not correctly write MFT entries? If ZFS even uses master file tables.
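
Not from the thread, but "unknown file system" at mountroot often just means the loader-side pieces no longer line up with the pool. With the old root mounted as in the recovery steps, a check one might try:

# sketch, assuming the old root is mounted under /tmp/old-root as in the later posts:
# confirm the loader still loads ZFS and points vfs.root.mountfrom at the right dataset
grep -E 'zfs_load|vfs\.root\.mountfrom' /tmp/old-root/boot/loader.conf
# a stale root entry in fstab can also confuse mountroot
cat /tmp/old-root/etc/fstab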
#4
23.1 Legacy Series / Re: \var\log full
April 26, 2023, 11:28:26 AM
cd /old/ ... does not seem to work.

https://ibb.co/ScDgSTN

My case might be a bit special: my FW runs on the same host as the rest of the VMware VMs, with promiscuous mode active, so it sees everything that is happening on that host. Thus it seems to spam the logs like crazy, and having logging enabled for accepted packets does not help either...
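
An illustration (not from the post) of how one might see which logs the promiscuous traffic is actually inflating, assuming the pool is imported under /old/zfs as in the earlier attempt:

# show the largest log directories before deleting anything
du -h -d 1 /old/zfs/var/log | sort -h | tail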
#5
23.1 Legacy Series / Re: \var\log full
April 25, 2023, 04:24:30 PM
It does look like yours, different numbers of course, mounted under /old/zfs (var, usr, tmp, etc.), except you have available space and I have a flat 0B.
I would love to copy-paste, but I only have the VMware web console, which acts like a monitor, not like a shell.
gpart https://ibb.co/vvywYxb
zfs https://ibb.co/YjZZbHs
#6
23.1 Legacy Series / Re: \var\log full
April 25, 2023, 03:52:43 PM
From what 'gpart show' gives me, I would assume 'da4 32g' to be the log partition - 'freebsd-zfs', not boot, not swap.

With 'zpool import -f -R /old/zfs zroot'
I could import said ZFS pool, but only as read-only, because it was used in another system.
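
For reference, a sketch of the two import variants (the read-only part is its own option; -f only overrides the "pool was last used by another system" guard):

zpool import -f -o readonly=on -R /old/zfs zroot   # forced, read-only
zpool import -f -R /old/zfs zroot                  # forced, read-write (needed to actually delete logs)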
#7
23.1 Legacy Series / Re: \var\log full
April 25, 2023, 03:18:52 PM
Booting the installer .iso and going from there would have been my Windows way, but I am not that used to Linux, let alone mounting a drive.
#8
23.1 Legacy Series / \var\log full
April 25, 2023, 03:04:11 PM
Is there a way to clear the log folder when the FW does not boot anymore?

I get the "mountroot" and "db" prompts as the "user" interface after the kernel cannot mount root.

Yes, I could reinstall the FW and restore a config backup, but I could also just clear the most likely problem: a 32 GB log partition.

The question is - how?

Boot from a .iso and go from there?


Partial solution

Boot the CD/DVD/.iso as a live CD, get into a shell (either directly or via an assigned IP), then follow this:
mkdir -p /tmp/staging
mkdir -p /tmp/old-root
zpool import -fR /tmp/staging zroot
mount -t zfs zroot/ROOT/default /tmp/old-root

zpool list - should show the pool; the HEALTH column is what matters
zfs list - should show the datasets

cd /tmp/old-root/var/log/
rm -rf ./* - delete the contents of the log directory (make sure you are inside /tmp/old-root/var/log first; a gentler variant is sketched after this list)
zfs list - shows available space; if it is more than 0B, it worked
zpool status zroot
zpool scrub zroot - checks the pool for errors
zpool status zroot
zpool export zroot - unmounts and exports the zpool
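
For the delete step, a gentler variant (an illustration, not what was done in the thread; the file patterns are assumptions about how the logs are named):

# remove only compressed/rotated logs, and empty the active ones instead of deleting them
find /tmp/old-root/var/log -type f -name '*.bz2' -delete
find /tmp/old-root/var/log -type f -name '*.log' -exec truncate -s 0 {} +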
#9
Try to use a different repository. There seems to be some sort of problem that started a few days back.
Known so far to be working is the DE LeaseWeb mirror.
#10
I hopefully made it clearer now, but "nice" that I am not the only one...
#11
I have a strange problem updating, after having been on 23.1.4_1 for ~2 weeks.

I was able to update fine and quickly before, but am now blocked from going any further.

I tried to find and resolve the error but am at a wall now; DNS is working, and there is no other firewall that could block it...

I send the request to update and sometimes get the changelog "fast" (~20 s), sometimes after 15 min, sometimes hours later, but that's it.

Now I got a new error in the log:

2023-04-04T22:30:18 Notice configd.py unable to sendback response [Judy|||1.0.5_3|||General purpose dynamic array|||1.19MiB
|||0|||0|||LGPL21|||OPNsense|||devel/judy acme.sh|||3.0.5|||ACME protocol client written in shell|||1.12MiB
|||0|||0|||GPLv3+|||OPNsense|||security/acme.sh addrwatch|||1.0.2|||Supports IP/Ethernet pairing for IPv4 and IPv6|||81.5KiB
|||0|||0|||GPLv3|||OPNsense|||net/addrwatch ap24-mod_security|||2.9.6|||Intrusion detection and prevention engine|||1.91MiB
|||0|||0|||APACHE20|||OPNsense|||www/mod_security apache24|||2.4.56|||Version 2.4.x of Apache web server|||5.59MiB
|||0|||0|||APACHE20|||OPNsense|||www/apache24 apcupsd|||3.14.14_4|||Set of programs for controlling APC UPS|||588KiB
|||0|||0|||GPLv2|||OPNsense|||sysutils/apcupsd apr|||1.7.0.1.6.1_2|||Apache Portability Library|||2.37MiB
|||0|||0|||APACHE20|||OPNsense|||devel/apr1 arc|||5.21p|||Create & extract files from DOS .ARC files|||87.5KiB
|||0|||0|||GPLv2|||OPNsense|||archivers/arc argp-standalone|||1.5.0|||Standalone version of arguments parsing functions from GLIBC|||119KiB
|||0|||0|||LGPL21+|||OPNsense|||devel/argp-standalone arj|||3.10.22_9|||Open source implementation of the ARJ archiver|||460KiB
|||0|||0|||GPLv2|||OPNsense|||archivers/arj asterisk16|||16.30.0|||Open Source PBX and telephony toolkit|||40.0MiB
|||0|||0|||GPLv2|||OPNsense|||net/asterisk16 augeas|||1.12.0_3|||Configuration editing tool|||3.34MiB
|||0|||0|||LGPL21|||OPNsense|||textproc/augeas autoconf|||2.71|||Generate configure scripts and related files|||3.12MiB
|||0|||0|||GPLv3+|||OPNsense|||devel/autoconf autoconf|||2.71|||Generate configure scripts and related files|||3.12MiB
|||0|||0|||GPLv2+|||OPNsense|||devel/autoconf autoconf|||2.71|||Generate configure scripts and related files|||3.12MiB
|||0|||0|||GFDL|||OPNsense|||devel/autoconf autoconf|||2.71|||Generate configure scripts and related files|||3.12MiB
|||0|||0|||EXCEPTION|||OPNsense|||devel/autoconf autoconf-switch|||20220527|||Wrapper script to switch between autoconf versions|||524B
|||0|||0|||BSD2CLAUSE|||OPNsense|||devel/autoconf-switch automake|||1.16.5|||GNU Standards-compliant Makefile generator|||2.03MiB
|||0|||0|||GPLv2+|||OPNsense|||devel/automake automake|||1.16.5|||GNU Standards-compliant Makefile generator|||2.03MiB
|||0|||0|||GFDL|||OPNsense|||devel/automake autossh|||1.4g|||Automatically restart SSH sessions and tunnels|||32.5KiB
|||0|||0|||BSD3CLAUSE|||OPNsense|||security/autossh avahi-app|||0.8_1|||Service discovery on a local network|||1.60MiB
|||0|||0|||LGPL21+|||OPNsense|||net/avahi-app awscli|||1.20.61|||Universal Command Line Interface for Amazon Web Services|||9.47MiB
|||0|||0|||APACHE20|||OPNsense|||devel/awscli azure-agent|||2.8.0.11|||Microsoft Azure Linux Agent|||3.14MiB
|||0|||0|||APACHE20|||OPNsense|||sysutils/azure-agent bandwidthd|||2.0.1_12|||Tracks bandwidth usage by IP address|||62.1KiB
|||0|||0|||GPLv3+|||OPNsense|||net-mgmt/bandwidthd bash|||5.2.15|||GNU Project's Bourne Again SHell|||2.19MiB
|||0|||0|||GPLv3+|||OPNsense|||shells/bash beats7|||7.17.9_3|||Send logs, network, metrics and heartbeat to elasticsearch or logstash|||155MiB
|||0|||0|||APACHE20|||OPNsense|||sysutils/beats7 beep|||1.0_1|||Beeps a certain duration and pitch out of the PC Speaker|||9.76KiB
|||0|||0|||BSD4CLAUSE|||OPNsense|||audio/beep bind-tools|||9.18.13|||Command line tools from BIND: delv, dig, host, nslookup...|||9.58MiB
|||0|||0|||MPL20|||OPNsense|||dns/bind-tools bind918|||9.18.13|||BIND DNS suite with updated DNSSEC and DNS64|||10.9MiB
|||0|||0|||MPL20|||OPNsense|||dns/bind918 bird2|||2.0.12|||Dynamic IP routing daemon|||1.05MiB
|||0|||0|||GPLv2|||OPNsense|||net/bird2 bison|||3.8.2,1|||Parser generator from FSF, (mostly) compatible with Yacc|||2.03MiB
|||0|||0|||GPLv3+|||OPNsense|||devel/bison boehm-gc|||8.2.2|||Garbage collection and memory leak detection for C and C++|||776KiB
|||0|||0|||BDWGC|||OPNsense|||devel/boehm-gc boehm-gc-threaded|||8.2.2|||Garbage collection and memory leak detection for C and C++|||661KiB
|||0|||0|||BDWGC|||OPNsense|||devel/boehm-gc-threaded boost-
2023-04-04T22:23:28 Notice configd.py [dcd5f4df-fbe9-4500-bbf6-1a4bc53f682d] Retrieve upgrade progress status
2023-04-04T22:23:27 Notice configd.py [5f730416-4c30-4a82-99f1-e0ce12c94da8] Retrieve firmware product info
2023-04-04T22:23:27 Notice configd.py [a1986260-42c1-4de6-9463-bad18a107dfe] Retrieve changelog index
2023-04-04T22:23:27 Notice configd.py [36c691f2-f7c5-4b3e-a358-7ad082ca903f] view local packages
2023-04-04T22:23:27 Error configd.py Timeout (120) executing : firmware remote
2023-04-04T22:21:26 Notice configd.py [7940535d-6ce8-4665-ad72-38c5f62fc42a] view remote packages

The Timeout (120) shows up more than a dozen times.

When it is "finished":

***GOT REQUEST TO CHECK FOR UPDATES***
Currently running OPNsense 23.1.4_1 at Tue Apr  4 20:18:33 CEST 2023
Fetching changelog information, please wait... done
Updating OPNsense repository catalogue...
Fetching meta.conf: . done
Fetching packagesite.pkg: .......... done
Processing entries: .......... done
OPNsense repository update completed. 817 packages processed.
All repositories are up to date.
Checking integrity... done (0 conflicting)
Your packages are up to date.
Checking for upgrades (73 candidates): .......... done
Processing candidates (73 candidates): .. done
The following 13 package(s) will be affected (of 0 checked):

Installed packages to be UPGRADED:
ca_root_nss: 3.88.1 -> 3.89
dnsmasq: 2.89,1 -> 2.89_1,1
glib: 2.74.6,2 -> 2.76.1,2
libcbor: 0.10.1 -> 0.10.2
libcjson: 1.7.15 -> 1.7.15_1
libfido2: 1.12.0 -> 1.13.0
libmspack: 0.10.1 -> 0.11alpha
libnghttp2: 1.51.0_1 -> 1.52.0
openssl: 1.1.1t,1 -> 1.1.1t_1,1
opnsense: 23.1.4_1 -> 23.1.5_4
opnsense-update: 23.1.2 -> 23.1.5
py39-ujson: 5.0.0 -> 5.7.0
radvd: 2.19_1 -> 2.19_2

Number of packages to be upgraded: 13

11 MiB to be downloaded.
***DONE***

But I am still on 23.1.4, and the health audit also seems to be fine.

I changed to another internal and then an external DNS; DNS tests always worked, both for the usual stuff and for the repository domain. Other repositories were also tested.
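
Not something tried in the thread, but to see whether the stall is in pkg itself rather than the GUI, the remote package listing behind the timed-out "firmware remote" action could be reproduced by hand:

# sketch: time the remote catalogue refresh and the remote package query directly
time pkg update -f
time pkg rquery -a '%n %v' | wc -l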

This is ONE update attempt - around 2.5k lines of the "same" info:

2023-04-05T08:44:16 Notice configd.py [0926b134-bcb3-40e1-a744-b61fb9e9e1b1] Retrieve upgrade progress status
2023-04-05T08:44:16 Notice configd.py [912e84b6-8069-4ff6-ad1c-e73f1b066cbf] Retrieve firmware product info
2023-04-05T08:44:16 Notice configd.py [bf2e343b-ee33-42d4-8333-297c20130481] Retrieve changelog index
2023-04-05T08:44:16 Notice configd.py [b7cf7006-8e0b-41ad-b5ae-c57c9fe7dd5a] view local packages
2023-04-05T08:44:16 Notice configd.py [d13d95e0-5368-43b1-a94d-bb3b6b669210] view remote packages
2023-04-05T08:44:16 Notice configd.py [388de048-86c5-4d72-ab8c-1011d785dbcf] Retrieve upgrade progress status
...
2023-04-05T08:21:01 Notice configd.py [e3a10092-058b-48fc-b84c-15b37012b97d] Retrieve upgrade progress status
2023-04-05T08:21:01 Notice configd.py [947cf331-2a45-4101-8fb6-ebc44c4bfa53] Retrieve firmware product info
2023-04-05T08:21:01 Notice configd.py [4b2c781c-243a-417c-b98a-9481147ac0a6] Retrieve changelog index
2023-04-05T08:21:01 Notice configd.py [358252d2-fb57-4655-9472-e2e7e319b4a5] view local packages
2023-04-05T08:21:01 Notice configd.py [d5b94475-f598-49f1-ae47-803e1737cd64] Retrieve upgrade progress status
...
2023-04-05T08:19:46 Notice configd.py [884b2ed6-79e8-4200-91fe-2208e9df36a4] Retrieve upgrade progress status
2023-04-05T08:19:46 Notice configd.py [4a6857da-afb9-4022-ba28-a11f005bace7] system status
2023-04-05T08:19:46 Informational configd.py message 2ee35bbf-d5ac-4665-85b5-0fc77d83c6dd [firmware.audit] returned OK
2023-04-05T08:19:46 Notice configd.py [2ee35bbf-d5ac-4665-85b5-0fc77d83c6dd] Retrieve vulnerability report
2023-04-05T08:19:46 Notice configd.py [0572ae8f-a6ad-4cf5-9411-2ec91e06f6d2] retrieve firmware execution status
2023-04-05T08:19:46 Notice configd.py [b33d5370-92eb-410e-8946-c92db88d0abd] view remote packages
...
2023-04-05T08:11:47 Notice configd.py [8a524bba-c9e8-441e-a109-9bbea480a20d] Retrieve upgrade progress status
2023-04-05T08:11:46 Notice configd.py [6f860a25-3e61-420b-8c78-748a255d8d50] Retrieve firmware product info
2023-04-05T08:11:46 Notice configd.py [dbe1933e-636d-45ae-855b-aaf78fb22830] Retrieve changelog index
2023-04-05T08:11:46 Notice configd.py [bbbb3055-5f53-4078-adac-80f0f03e3754] view local packages
2023-04-05T08:11:46 Error configd.py Timeout (120) executing : firmware remote
2023-04-05T08:11:46 Notice configd.py [b6ea7aa7-b661-498a-8f7f-38c825bbc5ca] Retrieve upgrade progress status
...
2023-04-05T08:09:45 Notice configd.py [0d49ecc6-1b57-4170-b48b-1693d9cf3860] Retrieve upgrade progress status
2023-04-05T08:09:45 Notice configd.py [0c88853e-e8d8-4e9f-b854-21fba4119904] system status
2023-04-05T08:09:45 Informational configd.py message 831f239c-37e3-44a3-9ad8-82ab45c83b4b [firmware.audit] returned OK
2023-04-05T08:09:45 Notice configd.py [831f239c-37e3-44a3-9ad8-82ab45c83b4b] Retrieve vulnerability report
2023-04-05T08:09:45 Notice configd.py [584344fd-d2b2-4d69-a706-41eb0031d69e] view remote packages
2023-04-05T08:09:45 Notice configd.py [0c71a470-356e-4932-8f58-4e778bbe87de] retrieve firmware execution status
2023-04-05T08:09:43 Notice configd.py [c2e21f5d-31ac-4a8b-bd8b-b7ebee16f1b0] Retrieve firmware product info
2023-04-05T08:09:43 Notice configd.py [39ceaacd-b574-4b37-abb0-a62bc845dfd3] request traffic stats
2023-04-05T08:09:42 Notice configd.py [d0b6c47f-2adc-484d-8410-daaf10a9b308] list gateway status
2023-04-05T08:09:38 Notice configd.py [37c37169-b1df-4b7c-918a-c46927486837] system status
2023-04-05T08:09:38 Notice configd.py [a7817e17-4a0d-44e3-a5ae-3878a9ff694c] request traffic stats
2023-04-05T08:09:37 Notice configd.py [3d9b3eac-028a-4c92-a71a-aa82e458bc41] request traffic stats
2023-04-05T08:09:37 Notice configd.py [15eb39c8-d6fd-48f7-933b-d4e1a27cebb9] Retrieve firmware product info
2023-04-05T08:09:37 Notice configd.py [3eabaf8f-d86d-4d0e-b9c4-1136a9d2aba1] list gateway status
2023-04-05T08:09:36 Notice configd.py [0e3b3368-ce4c-492f-9a2e-ae8ba4b725a4] Query OpenVPN status (client,server)
2023-04-05T08:01:00 Informational configd.py message 6b4d79c2-15b7-44ea-b1e8-e76135485a3b [syslog.archive] returned OK
2023-04-05T08:01:00 Notice configd.py [6b4d79c2-15b7-44ea-b1e8-e76135485a3b] Archive syslog files


This looks interesting:
***GOT REQUEST TO AUDIT CONNECTIVITY***
Currently running OPNsense 23.1.4_1 at Wed Apr  5 09:00:15 CEST 2023
Checking connectivity for host: pkg.opnsense.org -> 89.149.211.205
PING 89.149.211.205 (89.149.211.205): 1500 data bytes
1508 bytes from 89.149.211.205: icmp_seq=0 ttl=54 time=22.223 ms
1508 bytes from 89.149.211.205: icmp_seq=1 ttl=54 time=20.832 ms
1508 bytes from 89.149.211.205: icmp_seq=2 ttl=54 time=21.040 ms
1508 bytes from 89.149.211.205: icmp_seq=3 ttl=54 time=20.781 ms

--- 89.149.211.205 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 20.781/21.219/22.223/0.588 ms
Checking connectivity for repository (IPv4): https://pkg.opnsense.org/FreeBSD:13:amd64/23.1
Updating OPNsense repository catalogue...
Fetching meta.conf: . done
Fetching packagesite.pkg: .......... done
Processing entries: .......... done
OPNsense repository update completed. 817 packages processed.
All repositories are up to date.
Checking connectivity for host: pkg.opnsense.org -> 2001:1af8:4f00:a005:5::
ping: UDP connect: No route to host
Checking connectivity for repository (IPv6): https://pkg.opnsense.org/FreeBSD:13:amd64/23.1
Updating OPNsense repository catalogue...
pkg: https://pkg.opnsense.org/FreeBSD:13:amd64/23.1/latest/meta.txz: Non-recoverable resolver failure
repository OPNsense has no meta file, using default settings
pkg: https://pkg.opnsense.org/FreeBSD:13:amd64/23.1/latest/packagesite.pkg: Non-recoverable resolver failure
pkg: https://pkg.opnsense.org/FreeBSD:13:amd64/23.1/latest/packagesite.txz: Non-recoverable resolver failure
Unable to update repository OPNsense
Error updating repositories!
***DONE***


IPv6 is disabled router-side in our network, so there should be no way for OPNsense to get any IPv6 traffic to or from the outside.
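
An assumption on my side, not something done in the thread: if IPv6 is dead end-to-end anyway, pkg can be told to stop trying it at all, so the resolver failures above never occur:

# sketch: force pkg to IPv4 only (check first that IP_VERSION is not already set)
echo 'IP_VERSION = 4;' >> /usr/local/etc/pkg.conf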

----------------------------------------------------------------------------------------------------

"FIX"

Use the one DE LeaseWeb mirror; it worked on 04/05.04.2023. There might be more mirrors that work.

"/FIX"
#12
You would want to have an IP in said VLAN range so that you can contact the FWs in that VLAN segment directly and individually.
Of course, one could open up the LAN IP/VIP from any other VLAN, if your ruleset allows that.
#13
High availability / misc HA-questions
March 27, 2023, 09:44:20 AM
1. Do I only need to enable promiscuous mode on the vSwitch in ESXi, or additionally (or only) in the OPNsense FW?
2. Is it intended that it shows me TB of traffic over a weekend in a not-so-busy environment? Only 1 VLAN, via a VLAN-enabled vNIC from ESXi, with promiscuous mode on the vSwitch.
3. When I enable promiscuous mode on the vSwitch NICs, HA works in the event that I shut down the master, but not if I only disable the WAN or LAN vNIC. The affected VIP switches over, but the rest stays on the FW without WAN. Why is that?
#14
On the subject of VMware

For once I would actually like to defend VMware here; the problem is BSD/OPNsense.

On a reboot, the NICs in OPNsense's BSD, at least in 11/12/13, get sorted by MAC address. Why, what for, God may know.
The VMware default is a random MAC, and that "never" causes trouble elsewhere, but here the first shutdown is fatal.
The NICs, sorted by MAC, are then mapped back to vmx0-7 and end up on the wrong logical interfaces in OPNsense.
Now imagine a setup with 8 or more NICs, all separated by VLAN on the vSwitch, and suddenly WAN sits on management, LAN on WAN, etc. - total chaos.
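
An illustration, not something from the post: after a reboot, the vmx-to-MAC mapping can be listed from the OPNsense shell and compared against the vSwitch port groups to see which NIC ended up where:

# list every vmx interface together with its MAC address
for IF in $(ifconfig -l | tr ' ' '\n' | grep '^vmx'); do
    echo "$IF $(ifconfig "$IF" | awk '/ether/ {print $2}')"
done

Assigning static MAC addresses to the VM's NICs in VMware (instead of the default generated ones) should also keep the ordering stable across reboots.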

Ich bin gerade massiv am zweifeln ob ich OPNsense einsetzten möchte.