I modified a ZFS monitoring script a bit and use it on Opnsense. It will monitor your "zroot" ZFS pool if you have installed Opnsense on ZFS (you should; ZFS is amazing).
First, copy this script to your Opnsense install; I keep it in /root. Make sure it's executable.
```sh
#!/bin/sh
#
## ZFS health check script for monit.
## Original script from:
## Calomel.org
## https://calomel.org/zfs_health_check_script.html
#

# Parameters
maxCapacity=$1 # in percent

if [ -z "${maxCapacity}" ]; then
    printf "Missing arguments\n"
    printf "Usage: %s maxCapacityInPercent\n" "$0"
    exit 1
fi

# Output for the monit user interface
printf "==== ZPOOL STATUS ====\n"
printf "%s\n" "$(/sbin/zpool status)"
printf "\n==== ZPOOL LIST ====\n"
printf "%s\n" "$(/sbin/zpool list)"

# Health - Check if all zfs volumes are in good condition. We are looking for
# any keyword signifying a degraded or broken array.
condition=$(/sbin/zpool status | grep -E 'DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover')
if [ "${condition}" ]; then
    printf "\n==== ERROR ====\n"
    printf "One of the pools is in one of these states: DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover!\n"
    printf "%s\n" "${condition}"
    exit 1
fi

# Capacity - Make sure the pool capacity is below the threshold for best
# performance. The right percentage depends on how large your pool is: on a
# 128GB SSD, 80% is reasonable; on a 60TB raidz2 array you can probably set
# the warning closer to 95%.
#
# ZFS uses a copy-on-write scheme. The file system writes new data to
# sequential free blocks first, and once the uberblock has been updated the
# new inode pointers become valid. This only works well while the pool has
# enough sequential free blocks. If the pool is near capacity, ZFS has to
# write blocks wherever space is free, so it cannot create an optimal set of
# sequential writes and write performance is severely impacted.
capacity=$(/sbin/zpool list -H -o capacity | cut -d'%' -f1)
for line in ${capacity}
do
    if [ "$line" -ge "$maxCapacity" ]; then
        printf "\n==== ERROR ====\n"
        printf "One of the pools has reached its maximum capacity!\n"
        exit 1
    fi
done

# Errors - Check the READ, WRITE and CKSUM (checksum) columns for drive
# errors on all pools and all drives using "zpool status". If any non-zero
# errors are reported, an email will be sent out. You should then look to
# replace the faulty drive and run "zpool scrub" on the affected pool after
# resilvering.
errors=$(/sbin/zpool status | grep ONLINE | grep -v state | awk '{print $3 $4 $5}' | grep -v 000)
if [ "${errors}" ]; then
    printf "\n==== ERROR ====\n"
    printf "One of the pools contains errors!\n"
    printf "%s\n" "${errors}"
    exit 1
fi

# Finish - If we made it here, everything is fine
exit 0
```
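If you want to sanity-check the capacity and error parsing without a full or degraded pool handy, you can run the same pipelines against canned zpool output. This is only a sketch with made-up sample data; the pool names and numbers are invented for illustration:

```shell
#!/bin/sh
# Threshold, as passed to the real script as $1
maxCapacity=80

# Simulated output of `/sbin/zpool list -H -o capacity` for two pools
sample="42%
85%"

# Same parsing as the script: strip the trailing '%'
capacity=$(printf '%s\n' "$sample" | cut -d'%' -f1)

over=""
for line in $capacity; do
    if [ "$line" -ge "$maxCapacity" ]; then
        over="$line"
        echo "capacity alert: a pool is at ${line}%, threshold is ${maxCapacity}%"
    fi
done

# Simulated `zpool status` excerpt with a non-zero WRITE error count
status_sample="  pool: zroot
 state: ONLINE
  NAME        STATE     READ WRITE CKSUM
  zroot       ONLINE       0     3     0
    mirror-0  ONLINE       0     0     0"

# Same error pipeline as the script: concatenate READ/WRITE/CKSUM and
# keep any line that is not all zeros ("000")
errors=$(printf '%s\n' "$status_sample" | grep ONLINE | grep -v state | awk '{print $3 $4 $5}' | grep -v 000)
[ "$errors" ] && echo "error alert: non-zero READ/WRITE/CKSUM counts: $errors"
```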
Then add a new service to your monit configuration in Opnsense. The "80" is a parameter for one of the alerts, specifically triggering when the pool is 80% full. Of course, the script will also trigger on serious issues, such as a degraded pool when one of the disks in your mirror is offline.
![](https://i.ibb.co/5FTTPb2/image.png)
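Under the hood, the Opnsense GUI generates a plain monit service entry for this check. A rough hand-written equivalent (the service name and script path are my own illustration, matching where I put the script) would look like:

```
check program zfs_health with path "/root/zfs_health.sh 80"
    if status != 0 then alert
```

The "80" rides along inside the quoted path string, which is how monit passes arguments to a checked program.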
That's it, assuming you have configured monit correctly to send emails. For example, I am using:
![](https://i.ibb.co/gMLJTrW/image.png)
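For reference, those GUI email settings correspond to monit directives roughly like the following (the server name, credentials, and addresses here are placeholders, not my real values):

```
set mailserver smtp.example.com port 587
    username "monit@example.com" password "secret"
set alert admin@example.com
```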