Messages - z0rk

#1
Quote from: Patrick M. Hausen on August 20, 2025, 09:25:08 PM
ZFS rules. The snapshot/rollback function alone. Accessible from the UI.

Damn, why didn't I know this? System > Snapshots
Alright, I will switch back.

Any thoughts on my original question, though? I don't mind SMART giving me a warning whenever the short test runs, but why was my log being spammed like that? Or is it being triggered by something else?

I would appreciate any insight you may have.

Thanks
#2
Quote from: BrandyWine on August 20, 2025, 09:07:38 PM
But in post #1 you said you installed on UFS.

That was my first ZFS install; then that NVMe failed, and I did a fresh UFS install on a new NVMe. I've been using OPNsense for a few years now and had always used UFS without performance issues, but about a month ago I wanted to try out ZFS, so I added some more RAM and did a fresh install. Sorry for the confusion.
#3
Quote from: pfry on August 20, 2025, 05:45:17 PM
Quote from: z0rk on August 20, 2025, 03:54:14 AM
[...]
Has anyone experienced a comparable situation and how to address it?

No, my SSDs all have low P/E cycles - I didn't find any with spare usage. A few have >90000 Power On Hours (10 years). Even so, I'm paranoid about endurance, so I would consider any SSD with high spare usage unreliable for any application other than temporary storage.

1310 POH with 26TBW is a heck of a write rate.

I had Zenarmor installed. RAM usage was consistently around 14GB. I will need to read up on it some more before I reinstall it to avoid excessive writes and memory usage.
#4
Quote from: BrandyWine on August 20, 2025, 06:41:23 AM
You have it on a decent fast UPS? ZFS is usually the choice filesystem for this fw application.

Maybe start here https://man.freebsd.org/cgi/man.cgi?smartd.conf%285%29

This was the first ZFS installation I ever did. I never had any performance issues in the past.
#5
I previously had a ZFS install of OPNsense (mirrored, 1x 2.5" SATA and 1x M.2 NVMe). I had the SMART plugin installed and enabled to run short self-tests twice a week. One day the widget reported that the NVMe drive had failed, and I started receiving failure notifications every second, spamming the log.
The following is the health information:

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x04
Temperature:                        35 Celsius
Available Spare:                    100%
Available Spare Threshold:          1%
Percentage Used:                    104%
Data Units Read:                    2,314,455 [1.18 TB]
Data Units Written:                 52,182,266 [26.7 TB]
Host Read Commands:                 36,689,077
Host Write Commands:                486,151,530
Controller Busy Time:               7,726
Power Cycles:                       18
Power On Hours:                     1,310
Unsafe Shutdowns:                   15
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               35 Celsius
Temperature Sensor 2:               35 Celsius

When I researched the issue, I came across the following.

https://forum.proxmox.com/threads/how-to-get-rid-of-smart-reliability-notifications.130103/

Quote
Dear Client,
'Critical Warning: 0x04' is caused by "Percentage Used" being above 100%. In its own right, this only indicates that the drive is now out of warranty by the manufacturer. However, as long as 'Available Spare' is greater than 'Available Spare Threshold', you can safely ignore this.
Unfortunately, tools like smartctl will report the disk as failed, so you might need some custom filters for your monitoring.
This topic has been investigated and analyzed with our vendors for a very long time. Unfortunately, it is not possible to disable this warning for our use case. If you insist on it nonetheless, we can offer to replace the SSD for you as a gesture of goodwill.
Thank you very much for your understanding.

I have since replaced the NVMe with a new one and did a fresh installation of OPNsense on UFS. I would like to reuse the old NVMe with OPNsense later, because there's nothing actually wrong with it.

My question:

I would like to keep using SMART, but a) avoid receiving a SMART notification every second filling up the log, and b) if possible, keep SMART from reporting the drive as failed in the first place.

Has anyone experienced a comparable situation, and how did you address it?
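
In the meantime, I've been sketching the kind of "custom filter" the vendor quote mentions: a wrapper that masks out only the wear-out bit (0x04) and still alarms on anything else. This is an untested sketch, and the device path is just an example:

#!/bin/sh
# Untested sketch: treat the drive as healthy when the only Critical Warning
# bit set is 0x04 (Percentage Used >= 100%) and the spare is above threshold.
DEV=/dev/nvme0    # example device path; adjust for your system

OUT=$(smartctl -a "$DEV")
WARN=$(echo "$OUT" | awk '/^Critical Warning:/ {print $3}')
SPARE=$(echo "$OUT" | awk '/^Available Spare:/ {gsub("%",""); print $3}')
THRESH=$(echo "$OUT" | awk '/^Available Spare Threshold:/ {gsub("%",""); print $4}')

# Mask out bit 0x04 and keep every other warning bit.
REST=$(( $(printf '%d' "$WARN") & ~0x04 ))

if [ "$REST" -ne 0 ] || [ "$SPARE" -le "$THRESH" ]; then
    echo "NVMe health: FAILED (warning=$WARN spare=${SPARE}% threshold=${THRESH}%)"
    exit 1
fi
echo "NVMe health: OK (ignoring the Percentage Used wear-out bit)"
exit 0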

Thank you
#6
Ah, my bad, I included the wrong screenshot. The settings are correct, as I also received an email.
Thank you for your feedback.
#7
OPNsense 25.1.5_5-amd64
os-apcupsd (installed) 1.2_3

I am using monit to restart apcupsd when the process fails (pasted_image.png)

I would also like to receive an email alert upon status changes. I came across the following post.
https://forum.opnsense.org/index.php?topic=23071.0

(pasted_image002.png, pasted_image003.png)

Script:

#!/bin/sh

STATUS=$(/usr/local/sbin/apcaccess -p STATUS)
OK='ONLINE'
if [ "$STATUS" != "$OK" ]; then
    echo "$STATUS"
    exit 1
else
    exit 0
fi

Email alert message content:

Status failed Service UPSStatusCheck

   Date:        Mon, 21 Apr 2025 13:00:54
   Action:      alert
   Host:        CPUUsage
   Description: status failed (1) -- ONLINE

Your faithful employee,
Monit

When I enable the service check, I get the email above reporting a failed status. Does anyone have a suggestion as to what I am doing wrong here?
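
While writing this up, it occurred to me that apcaccess might pad the value with trailing whitespace, which would make the string comparison fail even when the UPS reports ONLINE. If that's the cause, trimming before comparing should fix it (untested guess):

#!/bin/sh
# Untested variant: strip any padding from the apcaccess value before comparing.
STATUS=$(/usr/local/sbin/apcaccess -p STATUS | awk '{print $1}')
if [ "$STATUS" != "ONLINE" ]; then
    echo "$STATUS"
    exit 1
fi
exit 0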

Thank you
#8
General Discussion / NUT respawns old settings
August 24, 2024, 08:29:31 PM
I previously had NUT set up in service mode netclient. It connected to my NUT master server just fine, and the diagnostics page on OPNsense pulled the correct configuration settings.
Now I want to change the service mode to standalone. I uninstalled NUT from my master server and changed my NUT settings on the OPNsense end. This should be straightforward based on examples I've googled, such as this:

https://schnerring.net/blog/configure-nut-for-opnsense-and-truenas-with-the-cyberpower-pr750ert2u-ups/

Unfortunately, no matter how hard I've tried, I can't get it to work. My setup is as follows (see attached).

Yet OPNsense is not able to establish a connection to the UPS, and it pulls some old configuration that points to my defunct NUT master server. Also, the diagnostics page on OPNsense is blank, which makes sense since it's not working correctly. This is what I get on the terminal:

Broadcast Message from root@opnsense                               
        (no tty) at 10:25 PDT...                                               
                                                                               
UPS cyberpower@192.x.x.x:3493 is unavailable

I have uninstalled the plugin, rebooted, disconnected the UPS, and deleted the NUT folder at /usr/local/etc/nut several times.
After the latest plugin reinstall, the settings once again point to my defunct NUT master server.

/usr/local/etc/nut $ less upsmon.conf
# Please don't modify this file as your changes might be overwritten with
# the next update.
#
MONITOR cyberpower 1 monuser PWD master
SHUTDOWNCMD "/usr/local/etc/rc.halt"
POWERDOWNFLAG /etc/killpower
MONITOR cyberpower@192.x.x.x:3493 1 nutslave slave slave
SHUTDOWNCMD "/usr/local/etc/rc.halt"
POWERDOWNFLAG /etc/killpower

Where are these settings coming from? Why are they not being overwritten after I made my configuration changes?
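
My working assumption is that the plugin re-renders upsmon.conf from the backend configuration on every apply (the file header says not to edit it), which would explain why deleting /usr/local/etc/nut doesn't help. I tried searching the backend config for the stale entries (standard OPNsense config path):

# Look for leftovers pointing at the old master in the OPNsense config.
grep -n 'nutslave\|192\.x\.x\.x' /conf/config.xml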

Thank you
#9
OPNsense 23.7.12_5-amd64

I have several interfaces, but only the WLAN WAN interface is selected in vnstat for usage reporting. It's consistently off by hundreds of GB: my ISP enforces a data cap of ~1200 GB, and last month vnstat reported ~1790 GB of usage, yet I did not exceed the cap.
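For what it's worth, I've also been dumping vnstat's own monthly counters from the shell to compare against the ISP numbers (the interface name is an example; substitute your WAN device):

# Monthly traffic totals as recorded by vnstat for one interface.
vnstat -i igb0 -m
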
Any suggestions? Thank you
#10
I am attempting to export my ntopng configuration settings.

Web GUI
Settings > Configurations > Manage Configurations > Configurations
> select: Entire ntopng configuration (includes users, preferences, and all configurations below)
> select 'Export'

Browser download manager error (independent of browser type/version):
couldn't download - network issue

I've also noticed that no backups of configuration settings are being generated under
Settings > Configurations > Manage Configurations > Nightly Backups

Thank you
#11
@rkubes
Thanks for this.
I did some more research and found the following at https://forum.opnsense.org/index.php?topic=21898.msg103540#msg103540

Solution:
tunefs -t disable /

I ran this command a week ago and the CAM and TRIM errors disappeared.
The SSD in question is a Lexar NQ100, which was the least expensive SSD I could find on Amazon at the time.
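
For anyone else trying this: after a reboot you can confirm the flag stuck by printing the superblock settings (tunefs is in the FreeBSD base system); the TRIM line should now read disabled:

# Print current UFS tuning parameters for the root filesystem and
# check the "issue TRIM to the disk: (-t)" line.
tunefs -p /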
#12
OPNsense 23.7.7_1-amd64
FreeBSD 13.2-RELEASE-p3
OpenSSL 1.1.1w 11 Sep 2023

I've recently deployed a new instance of OPNsense. System > Log Files > General shows the following errors:

2023-10-27T11:15:55-07:00 Notice kernel (ada0:ahcich0:0:0:0): DSM TRIM. ACB: 06 01 00 00 00 40 00 00 00 00 01 00
2023-10-27T11:10:17-07:00 Notice kernel (ada0:ahcich0:0:0:0): DSM TRIM. ACB: 06 01 00 00 00 40 00 00 00 00 01 00
2023-10-27T10:06:04-07:00 Notice kernel (ada0:ahcich0:0:0:0): DSM TRIM. ACB: 06 01 00 00 00 40 00 00 00 00 01 00

2023-10-27T11:15:55-07:00 Notice kernel (ada0:ahcich0:0:0:0): CAM status: Command timeout
2023-10-27T11:10:17-07:00 Notice kernel (ada0:ahcich0:0:0:0): CAM status: Command timeout
2023-10-27T10:06:04-07:00 Notice kernel (ada0:ahcich0:0:0:0): CAM status: Command timeout
2023-10-27T09:37:45-07:00 Notice kernel (ada0:ahcich0:0:0:0): CAM status: Command timeout


There are more entries, but for brevity I've just included a sample.
Some searches indicate that this could relate to a hardware problem (SSD, SATA cable, or the like) or a FreeBSD bug.
My SSD is brand new, and SMART reports no errors. The BIOS is set to AHCI (rather than legacy SATA mode).
Does anyone have some initial impressions before I explore any other potential hardware issues?
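
One check I can still run in the meantime is whether the drive actually advertises DSM TRIM to the OS (camcontrol is part of the FreeBSD base system):

# Dump the ATA identify data for the SSD and filter for the TRIM capability line.
camcontrol identify ada0 | grep -i trim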

Thank you
#13
Thanks for clarifying, Franco. 👍
#14
I ran a security audit and got the following.

***GOT REQUEST TO AUDIT SECURITY***
Currently running OPNsense 23.1.9 at Mon Jun  5 19:21:32 PDT 2023
vulnxml file up-to-date
openssl-1.1.1t_2,1 is vulnerable:
  OpenSSL -- Possible DoS translating ASN.1 identifiers
  CVE: CVE-2023-2650
  WWW: https://vuxml.freebsd.org/freebsd/eb9a3c57-ff9e-11ed-a0d1-84a93843eb75.html

py39-setuptools-63.1.0 is vulnerable:
  py39-setuptools -- denial of service vulnerability
  CVE: CVE-2022-40897
  WWW: https://vuxml.freebsd.org/freebsd/1b38aec4-4149-4c7d-851c-3c4de3a1fbd0.html

2 problem(s) in 2 installed package(s) found.
***DONE***

I've seen posts dating back to 2021/2022 that discuss a similar or possibly the same issue. Is there any cause for concern?

Thank you
#15
23.1 Legacy Series / Re: Wireguard
January 28, 2023, 05:29:08 AM
After upgrading to 23.1 my WireGuard service broke. I noticed that the WireGuard interface (wg0) was down. After rebooting multiple times, I tried:

root:~ # service netif restart wg0
/etc/rc.d/netif: WARNING: wg0 does not exist.  Skipped.
Starting Network: wg0.
ifconfig: interface wg0 does not exist

Any idea what might be going on? Otherwise the upgrade went fine.
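
One thing I still want to verify is whether the WireGuard kernel module is present at all; I'm assuming the module name if_wg for the kernel implementation:

# Exit status 0 if the module is loaded or compiled in; -q suppresses output.
kldstat -q -m if_wg && echo "if_wg loaded" || echo "if_wg not loaded"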

Thank you