Icinga2 Monitoring Agent

Started by pongafence, March 15, 2018, 07:28:36 AM

Hi guys,

Are there any plans to add the Icinga2 monitoring agent to the package list? We've made the decision to roll out Icinga2, so it'd be nice to have that included if possible.


Thanks,


I use Check_MK here (based on Nagios) with the SNMP plugin on the OPNsense.
Of course it doesn't offer all the features of a full monitoring client on the box, but it's fine enough to know that it is online and to get some info about the interfaces.

What's the exact name in the ports for this package?



Icinga2 does not really have a separate agent.
"Server" and "agent" are just configurations of the same package.

You then use external scripts to monitor whatever you need.
It's not mandatory, but usually the "nagios-plugins" are used as the default checks.
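
To sketch how that fits together: Icinga2 just wraps the installed plugins in CheckCommand definitions from its Icinga Template Library (ITL). A minimal, hypothetical host/service pair (object names and address are placeholders, not from this thread) could look like:

object Host "opnsense-fw" {
  check_command = "hostalive"   /* ping check from the ITL */
  address = "192.0.2.1"
}

object Service "load" {
  host_name = "opnsense-fw"
  check_command = "load"        /* runs check_load from the plugins package */
}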

I would also like to see an icinga2 package. Unfortunately, I had no luck compiling the source code myself :(

If you tell me the exact port and send me your config, I can build a plugin.

July 02, 2018, 08:40:54 PM #9 Last Edit: July 02, 2018, 08:50:36 PM by vita
As mentioned above, the port is called net-mgmt/icinga2. With it you can set up an icinga2 master instance (server), a satellite instance, or basically an agent node. The agent node is useful for running local checks, for example hardware environment checks, special squid checks, local filesystem checks, etc.

What exactly did you mean by config? Do you need an example icinga2 config or some kind of config to build the package? Excuse the question, I'm not really familiar with building a package from source.

@vita: I think Michael is asking you for a config of a server instance.
@Michael: Icinga logs into a remote system and executes a command via SSH. All files (server and client) are in the same package. For vita's use case no plugin is needed, but you can certainly build a server instance if you like. I am not sure, but there may be an exception with NRPE.
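
To illustrate the SSH approach: the ITL ships a by_ssh CheckCommand (wrapping the check_by_ssh plugin), so a server instance can execute plugins on a remote host without any agent there. A sketch with placeholder values:

object Service "disk-by-ssh" {
  host_name = "<REMOTE-FQDN>"
  check_command = "by_ssh"
  vars.by_ssh_command = "/usr/lib/nagios/plugins/check_disk -w 20% -c 10%"
  vars.by_ssh_logname = "icinga"   /* remote user the check logs in as */
}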

All right, here is an example config for an agent node scenario:

/etc/icinga2/constants.conf
/**
* This file defines global constants which can be used in
* the other configuration files.
*/

/* The directory which contains the plugins from the Monitoring Plugins project. */
const PluginDir = "/usr/lib/nagios/plugins"

/* The directory which contains the Manubulon plugins.
* Check the documentation, chapter "SNMP Manubulon Plugin Check Commands", for details.
*/
const ManubulonPluginDir = "/usr/lib/nagios/plugins"

/* The directory which you use to store additional plugins which ITL provides user contributed command definitions for.
* Check the documentation, chapter "Plugins Contribution", for details.
*/
const PluginContribDir = "/usr/lib/nagios/plugins"

/* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
* This should be the common name from the API certificate.
*/
const NodeName = "<AGENT-NODE-FQDN>"

/* Our local zone name. */
const ZoneName = "<AGENT-NODE-FQDN>"

/* Secret key for remote node tickets */
const TicketSalt = ""


/etc/icinga2/zones.conf
/*
* Generated by Icinga 2 node setup commands
* on 2017-11-17 18:56:55 +0100
*/

object Endpoint "<MASTER-FQDN>" {
        host = "<MASTER-IP>"
        port = "5665"
}

object Zone "master" {
        endpoints = [ "<MASTER-FQDN>" ]
}

object Zone "global-templates" {
        global = true
}

object Endpoint NodeName {
}

object Zone ZoneName {
        endpoints = [ NodeName ]
        parent = "master"
}


/etc/icinga2/icinga2.conf
/**
* Icinga 2 configuration file
* - this is where you define settings for the Icinga application including
* which hosts/services to check.
*
* For an overview of all available configuration options please refer
* to the documentation that is distributed as part of Icinga 2.
*/

/**
* The constants.conf defines global constants.
*/
include "constants.conf"

/**
* The zones.conf defines zones for a cluster setup.
* Not required for single instance setups.
*/
include "zones.conf"

/**
* The Icinga Template Library (ITL) provides a number of useful templates
* and command definitions.
* Common monitoring plugin command definitions are included separately.
*/
include <itl>
include <plugins>
include <plugins-contrib>
include <manubulon>

/**
* This includes the Icinga 2 Windows plugins. These command definitions
* are required on a master node when a client is used as command endpoint.
*/
include <windows-plugins>

/**
* This includes the NSClient++ check commands. These command definitions
* are required on a master node when a client is used as command endpoint.
*/
include <nscp>

/**
* The features-available directory contains a number of configuration
* files for features which can be enabled and disabled using the
* icinga2 feature enable / icinga2 feature disable CLI commands.
* These commands work by creating and removing symbolic links in
* the features-enabled directory.
*/
include "features-enabled/*.conf"

/**
* Although in theory you could define all your objects in this file
* the preferred way is to create separate directories and files in the conf.d
* directory. Each of these files must have the file extension ".conf".
*/
include_recursive "conf.d"
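
One step these files don't show is the certificate/ticket exchange with the master. The interactive icinga2 node wizard walks you through it on the agent; the ticket itself is generated on the master (which is also where TicketSalt must be set), roughly like this:

icinga2 pki ticket --cn '<AGENT-NODE-FQDN>'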

Sorry, I forgot to mention that you also need the package called monitoring-plugins (net-mgmt/monitoring-plugins). It contains all of the common check plugins for local and remote checks.
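
A quick way to verify the plugins before wiring them into Icinga2 is to run one by hand. On a FreeBSD-based system like OPNsense the port should install them under /usr/local/libexec/nagios (please double-check the path on your box), e.g.:

/usr/local/libexec/nagios/check_load -w 5,4,3 -c 10,8,6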

OK, the agent seems to be easy; how about the satellite? Anyone using this? Then this plugin would make more sense to me, like with the Zabbix Agent and Proxy.

It's nearly the same to configure a satellite as an agent node. Personally, I don't need the satellite feature on my OPNsense; I am fine with the basic agent functionality :)

If you want to create a satellite, you have to define a new satellite zone.

1. Put your chosen zone name in constants.conf:

...
const ZoneName = "<MY-SATELLITE-ZONE>"
...



2. zones.conf doesn't need any changes:

/*
* Generated by Icinga 2 node setup commands
* on 2017-11-17 18:56:55 +0100
*/

object Endpoint "<MASTER-FQDN>" {
        host = "<MASTER-IP>"
        port = "5665"
}

object Zone "master" {
        endpoints = [ "<MASTER-FQDN>" ]
}

object Zone "global-templates" {
        global = true
}

object Endpoint NodeName {
}

object Zone ZoneName {
        endpoints = [ NodeName ]
        parent = "master"
}



On your master you have to define the agent node as an Endpoint object in your chosen satellite zone. Store that config in /etc/icinga2/zones.d/<MY-SATELLITE-ZONE>/endpoints.conf:

object Endpoint "<AGENT-NODE-FQDN>" {
    host = "<AGENT-NODE-IP>"
    log_duration = 0s
}

object Zone "<AGENT-NODE-FQDN>" {
    parent = "<MY-SATELLITE-ZONE>"
    endpoints = [ "<AGENT-NODE-FQDN>" ]
}
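
Checks that should be executed from the satellite can then live in the same zone directory on the master. A minimal sketch (file name and values are placeholders), e.g. /etc/icinga2/zones.d/<MY-SATELLITE-ZONE>/hosts.conf:

object Host "<AGENT-NODE-FQDN>" {
    check_command = "hostalive"
    address = "<AGENT-NODE-IP>"
}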


If the key exchange for the encrypted connection was successful, the master will sync the satellite-specific configuration to the satellite node. You can check the received objects under /var/lib/icinga2/api/zones/*.
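
For example (the directory layout may vary by version):

ls /var/lib/icinga2/api/zones/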


3. Disable the include of local config objects in icinga2.conf to avoid deploying duplicate objects:

...
#include_recursive "conf.d"
...
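
After each change it's worth validating the configuration and restarting the daemon. Assuming the port installs a standard rc script, something like this should work:

icinga2 daemon -C
service icinga2 restart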