
Monitoring URLs with Nagios

I'm trying to monitor actual URLs with Nagios, not just hosts, because I operate a shared server with several websites, and I don't think it's enough to monitor only the basic HTTP service. (I'm including a small explanation of what I'm envisioning at the very bottom of this question.)

(Side note: I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and used yum to install all required dependencies into this root.)


I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
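
For context: Nagios reports "return code of 255 is out of bounds" whenever a plugin exits with a status above 3, which usually means the plugin crashed rather than producing a real result. A quick sanity check (paths match my install) is to run the plugin by hand as the nagios user and look at the exit status:

sudo -u nagios /usr/lib/nagios/libexec/check_url some-domain.com
echo "exit status: $?"   # 0-3 = OK/WARNING/CRITICAL/UNKNOWN; anything else is out of bounds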

After reviewing this question, which describes almost exactly the problem I'm having with check_url, I decided to open a new question on the subject because (a) I'm not using NRPE with this check, and (b) I tried the suggestions from that earlier question, but none of them worked. For example...

./check_url some-domain.com; echo $?

returns "0" (which indicates the check was successful)

I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):

#!/bin/sh
# Log each invocation's timestamp and arguments, then run the real plugin
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
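
While debugging, the command definition points at this wrapper instead of the plugin itself (the wrapper's location is an assumption; put it wherever $USER1$ resolves):

# Temporary definition while debugging; swap back to check_url when done
define command{
       command_name    check_url
       command_line    $USER1$/debug_check_url $ARG1$
}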

Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):

# 'check_url' command definition
define command{
       command_name    check_url
       command_line    $USER1$/check_url $ARG1$
}
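
For reference, Nagios only expands the numbered $ARGn$ macros from the !-separated arguments of check_command, so a service line of check_url!somedomain.com makes Nagios execute, roughly (assuming $USER1$ points at the libexec directory):

/usr/local/nagios/libexec/check_url somedomain.com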

(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)


Before publishing this question, however, I decided to take one more shot at finding a solution. I found the check_url_status plugin and decided to give it a try. To do that, here's what I did:

  1. mkdir /usr/lib/nagios/libexec/check_url_status/
  2. Downloaded both check_url_status and utils.pm
  3. Per the user comment / review on the check_url_status plugin page, I changed the "use lib" path to the proper directory, /usr/lib/nagios/libexec/.
  4. Ran ./check_url_status -U some-domain.com. When I ran that command, I kept getting the following error:

bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.


So at this point, I give up, and have a couple of questions:

  1. Which of these two plugins would you recommend? check_url or check_url_status? (After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
  2. Now, how would I fix my problem with whichever plugin you recommended?

At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg, which is where all of my service definitions live (imagine that!).

The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:

###
# Monitoring Individual URLs...
#
###
define service{
        host_name                       {my-shared-web-server}
        service_description             URL: somedomain.com
        check_command                   check_url!somedomain.com
        max_check_attempts              5
        check_interval                  3
        retry_interval                  1
        check_period                    24x7
        notification_interval           30
        notification_period             workhours
}
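
Whenever I touch these .cfg files, I validate the configuration before reloading (the binary path assumes a source install under /usr/local/nagios):

# Sanity-check the full configuration before restarting Nagios
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg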

— David W, asked Feb 12 '12

2 Answers

I was making things WAY too complicated.

The built-in check_http plugin, installed by default, can accomplish what I wanted and more. Here's how I did it:

My Service Definition:

define service{
        host_name                       myers
        service_description             URL: my-url.com
        check_command                   check_http_url!http://my-url.com
        max_check_attempts              5
        check_interval                  3
        retry_interval                  1
        check_period                    24x7
        notification_interval           30
        notification_period             workhours
}

My Command Definition:

define command{
        command_name    check_http_url
        command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
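
To see what Nagios will actually run, you can invoke the plugin by hand; with the definitions above, check_http_url!http://my-url.com expands roughly to the following (the first argument is whatever $HOSTADDRESS$ resolves to for the host):

# -I is the address to connect to, -u is the URL to request
/usr/local/nagios/libexec/check_http -I my-url.com -u http://my-url.com

On a shared server it may also be worth adding check_http's -H option, so the Host header matches the virtual host being tested.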

— answered by David W


A better way to monitor URLs is WebInject, which can be used with Nagios.
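
A minimal sketch of that integration, assuming WebInject is installed in the plugin directory and its config file sets the report type to nagios (the file names here are placeholders):

define command{
        command_name    check_webinject
        command_line    $USER1$/webinject.pl -c $ARG1$ $ARG2$
}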

The problem below occurs because you don't have the Perl module utils.pm where the plugin can find it; try installing it.

bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
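
For example (paths assumed from the setup described in the question), copy utils.pm into a directory Perl actually searches, or point Perl at it for a one-off run:

# utils.pm ships with the Nagios plugins; put it somewhere in @INC
cp /usr/lib/nagios/libexec/check_url_status/utils.pm /usr/lib/nagios/libexec/
# ...or set PERL5LIB just for this invocation
PERL5LIB=/usr/lib/nagios/libexec/check_url_status ./check_url_status -U mydomain.com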


— answered by Priya