Setup Dell OpenManage on CentOS 5.5 with the Nagios check_openmanage Plugin

This is another tutorial that I want to keep on record so I don't have to research it again every time I set up a new server. Basically, for any Dell server, you will most likely want to set up the Dell OpenManage software on your Linux server. The reason is pretty simple: Dell OpenManage allows you to monitor almost every piece of hardware in your server, which really pays off whenever some hardware fails.

Anyway, here we go. Firstly, we will need to install OpenManage on the CentOS 5.5 server. The shortest, least painful road is to key in the following commands.

wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
yum install srvadmin-all -y

P.S.: if you see a Missing Dependency: perl(LWP::UserAgent) error, check /etc/yum.conf (and the files under /etc/yum.repos.d) to see whether any repo is disabled!
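For example, you can list all repos (enabled and disabled) and pull in the missing Perl module directly, assuming your base repo provides the perl-libwww-perl package:

yum repolist all
yum install -y perl-libwww-perl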

The above commands will install Dell OpenManage on your CentOS 5.5 server. Next, you will need to start the Dell OpenManage services before Nagios can do any checking on them. Use the following commands to start or stop the Dell OpenManage services.

/opt/dell/srvadmin/sbin/srvadmin-services.sh start
/opt/dell/srvadmin/sbin/srvadmin-services.sh stop

The script also accepts other subcommands: enable|disable|help|start|stop|restart|status. Once Dell OpenManage is successfully installed and started, we will have to install the Nagios check_openmanage plugin. Here are the steps to install check_openmanage.

wget "http://folk.uio.no/trondham/software/files/nagios-plugins-check-openmanage-3.7.2-1.el5.x86_64.rpm"
rpm -ivh nagios-plugins-check-openmanage-3.7.2-1.el5.x86_64.rpm
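Before wiring it into NRPE, you may want to run the plugin by hand to confirm it can talk to OpenManage; with no arguments it checks everything, and the output (which will vary with your hardware) should start with something like:

/usr/lib64/nagios/plugins/check_openmanage
OK - System: [model and service tag details]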

Then open your server's nrpe.cfg, located at /usr/local/nagios/etc/nrpe.cfg, and add the commands below.

command[check_dell_storage]=/usr/lib64/nagios/plugins/check_openmanage --only storage
command[check_dell_fans]=/usr/lib64/nagios/plugins/check_openmanage --only fans
command[check_dell_memory]=/usr/lib64/nagios/plugins/check_openmanage --only memory
command[check_dell_power]=/usr/lib64/nagios/plugins/check_openmanage --only power
command[check_dell_temp]=/usr/lib64/nagios/plugins/check_openmanage --only temp
command[check_dell_cpu]=/usr/lib64/nagios/plugins/check_openmanage --only cpu
command[check_dell_voltage]=/usr/lib64/nagios/plugins/check_openmanage --only voltage
command[check_dell_batteries]=/usr/lib64/nagios/plugins/check_openmanage --only batteries
command[check_dell_amperage]=/usr/lib64/nagios/plugins/check_openmanage --only amperage
command[check_dell_intrusion]=/usr/lib64/nagios/plugins/check_openmanage --only intrusion
command[check_dell_sdcard]=/usr/lib64/nagios/plugins/check_openmanage --only sdcard
command[check_dell_esmhealth]=/usr/lib64/nagios/plugins/check_openmanage --only esmhealth
command[check_dell_esmlog]=/usr/lib64/nagios/plugins/check_openmanage --only esmlog
command[check_dell_alertlog]=/usr/lib64/nagios/plugins/check_openmanage --only alertlog
command[check_dell_critical]=/usr/lib64/nagios/plugins/check_openmanage --only critical
command[check_dell_warning]=/usr/lib64/nagios/plugins/check_openmanage --only warning

Then your monitoring server should be able to pick up these commands and start monitoring your Dell hardware 🙂
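On the Nagios monitoring server, each of these NRPE commands can then be referenced from a service definition. A minimal sketch, using the Centos5 host and check_nrpe command defined in the next article below:

define service{
          use                 generic-service
          host_name           Centos5
          service_description Dell Storage
          check_command       check_nrpe!check_dell_storage
          }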

Nagios Monitoring Server + Nagios Monitored Servers + MySQL Setup

Strictly speaking, this is not an article I wrote myself. I have simply combined the pieces here for my own convenience and for people who visit this blog [there are just too many articles flying around for me to search through each time 🙁]. I am using CentOS 5.5.

Easy way to install Nagios

Apparently there is a simple way via yum

yum install epel-release
yum install nagios nagios-devel nagios-plugins* gd gd-devel httpd php gcc glibc glibc-common

The above will install Nagios and all required plugins; the only thing left to do is bring up Apache and Nagios.
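A quick sketch of starting both and enabling them at boot, using standard CentOS 5 service handling:

service httpd start
service nagios start
chkconfig httpd on
chkconfig nagios on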

Installing Nagios on the Monitoring server

Please refer to the quick installation guide at http://nagios.sourceforge.net/docs/3_0/quickstart-fedora.html

Nagios Monitoring Server

Download and Install NRPE Plugin

# mkdir -p /opt/Nagios/Nagios_NRPE

# cd /opt/Nagios/Nagios_NRPE

Download the NRPE tarball into /opt/Nagios/Nagios_NRPE from:

http://www.nagios.org/download/download.php

As of this writing, NRPE 2.12 (Stable) is the current version.

Extract the Files:

# tar -xzf nrpe-2.12.tar.gz

# cd nrpe-2.12

Compile and Configure NRPE

# ./configure

# make all

# make install-plugin

Test Connection to NRPE daemon on Remote Server

Let's now make sure that the check_nrpe plugin on our Nagios server can talk to the NRPE daemon on the remote server we want to monitor. Replace the IP address below with the address of the remote server you wish to monitor (192.168.0.5 is the example host used later in this guide). At this point, you may not have any such server yet; if so, you can skip this step.

# /usr/local/nagios/libexec/check_nrpe -H 192.168.0.5
NRPE v2.12

Create NRPE Command Definition

A command definition needs to be created in order for the check_nrpe plugin to be used by Nagios.

# vi /usr/local/nagios/etc/objects/commands.cfg

Add the following:

###############################################################################
# NRPE CHECK COMMAND
#
# Command to use NRPE to check remote host systems
###############################################################################

define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

Create Linux Object template

In order to be able to add the remote Linux machine to Nagios, we need to create an object template file and add some object definitions.

Create new linux-box-remote object template file:

# vi /usr/local/nagios/etc/objects/linux-box-remote.cfg

Add the following, replacing the "host_name", "alias" and "address" values with values that match your setup:

** The "host_name" you set in the host definition must match the "host_name" in the service definitions **

define host{
          name                  linux-box-remote             ; Name of this template
          use                   generic-host          ; Inherit default values
          check_period          24x7
          check_interval        5
          retry_interval        1
          max_check_attempts    10
          check_command         check-host-alive
          notification_period   24x7
          notification_interval 30
          notification_options  d,r
          contact_groups        admins
          register              0          ; DONT REGISTER THIS - ITS A TEMPLATE
          }

define host{
          use       linux-box-remote     ; Inherit default values from a template
          host_name Centos5    ; The name we're giving to this server
          alias     Centos5 ; A longer name for the server
          address   192.168.0.5   ; IP address of the server
          }

define service{
          use                 generic-service
          host_name           Centos5
          service_description CPU Load
          check_command       check_nrpe!check_load
          }
define service{
          use                 generic-service
          host_name           Centos5
          service_description Current Users
          check_command       check_nrpe!check_users
          }
define service{
          use                 generic-service
          host_name           Centos5
          service_description /dev/hda1 Free Space
          check_command       check_nrpe!check_hda1
          }
define service{
          use                 generic-service
          host_name           Centos5
          service_description Total Processes
          check_command       check_nrpe!check_total_procs
          }
define service{
          use                 generic-service
          host_name           Centos5
          service_description Zombie Processes
          check_command       check_nrpe!check_zombie_procs
          }
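Note that check_load, check_users, check_hda1, check_total_procs and check_zombie_procs must exist as command definitions in the nrpe.cfg on the remote host. The sample nrpe.cfg that ships with NRPE contains entries like these (thresholds shown are the stock defaults):

command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200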

Activate the linux-box-remote.cfg template:

# vi /usr/local/nagios/etc/nagios.cfg

And add:

# Definitions for monitoring remote Linux machine
cfg_file=/usr/local/nagios/etc/objects/linux-box-remote.cfg

Next, if you installed Nagios via yum, the object files live under /etc/nagios instead; add the same check_nrpe command definition there if it is not already present.

[root@ns ~]# vi /etc/nagios/objects/commands.cfg

# add at the bottom
define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

This will allow you to use the check_nrpe command.

Verify Nagios Configuration Files:

# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Total Warnings: 0
Total Errors:   0

Restart Nagios:

# service nagios restart

Check the Nagios web interface on the monitoring server to confirm that the remote Linux box was added and is being monitored!

Nagios Monitored Server

This is the setup for ALL of your monitored servers (the servers that you want to be monitored by the central monitoring server).

Firstly, install the required packages.

yum install gcc glibc glibc-common gd gd-devel openssl-devel make

Setup the users

Set up a nagios user to execute the Nagios checks later.

useradd nagios
passwd nagios

add your own password.

Download and Install Nagios Plugins

Go to your src folder and download the required Nagios pieces: the Nagios plugins and NRPE. Both are located at http://www.nagios.org/download/download.php; find the links and wget them as shown below.
Here are the links for the two required downloads:

  • http://www.nagios.org/download/plugins/
  • http://exchange.nagios.org/directory/Addons/Monitoring-Agents/NRPE-%252D-Nagios-Remote-Plugin-Executor/details
cd /usr/local/src/
wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.15.tar.gz
wget http://prdownloads.sourceforge.net/sourceforge/nagios/nrpe-2.12.tar.gz

Once the downloads are complete, extract both tarballs.

tar xzf nagios-plugins-1.4.15.tar.gz
tar xzf nrpe-2.12.tar.gz

Compile and Configure Nagios Plugins

We will need to install the OpenSSL development library before compiling them.

yum install -y openssl-devel

With the prerequisites in place and the tarballs extracted, it's time to compile and install the plugins.

cd nagios-plugins-1.4.15
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install
chown nagios.nagios /usr/local/nagios
chown -R nagios.nagios /usr/local/nagios/libexec

Now we will need to install xinetd, which the NRPE daemon will run under.

yum install -y xinetd

We will configure xinetd to allow NRPE and its port after installing the daemon.

Install NRPE Daemon

Time to install NRPE Daemon!

cd nrpe-2.12
./configure
make all
make install-plugin
make install-daemon
make install-daemon-config
make install-xinetd

We will need to configure xinetd now.

Post NRPE Configuration

Edit Xinetd NRPE entry:

Add Nagios Monitoring server to the "only_from" directive

vi /etc/xinetd.d/nrpe

Find the only_from directive and append your Nagios monitoring server's IP address so that the monitoring server can reach this host (192.168.0.1 below is a stand-in for your monitoring server's address).

only_from = 127.0.0.1 192.168.0.1

Edit services file entry:

Add entry for nrpe daemon

vi /etc/services

add nrpe into the list.

nrpe      5666/tcp    # NRPE

Lastly, restart the service and make it start at boot time.

chkconfig xinetd on
service xinetd restart

Open Firewall port for NRPE

Next, we will need to open up the firewall

vi /etc/sysconfig/iptables

add 5666 to your whitelist

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5666 -j ACCEPT
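Then reload the firewall so the new rule takes effect:

service iptables restart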

It's time to test!

Test NRPE Daemon Install

Check NRPE daemon is running and listening on port 5666:

netstat -at |grep nrpe

Output should be:

tcp    0    0 *:nrpe    *:*    LISTEN

Check NRPE daemon is functioning:

/usr/local/nagios/libexec/check_nrpe -H localhost

Output should be NRPE version:

NRPE v2.12
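You can also exercise an individual command over NRPE, for example the stock check_load definition; the output should look something like this:

/usr/local/nagios/libexec/check_nrpe -H localhost -c check_load
OK - load average: 0.00, 0.00, 0.00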

Monitoring MySQL Server With Nagios
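The notes for this part are thin, but the usual approach is to define an NRPE command that calls the check_mysql plugin on the monitored server, then reference it from a service definition on the monitoring server. A minimal sketch, assuming check_mysql was built as part of nagios-plugins (the nagios/yourpassword credentials are placeholders):

# on the monitored server, in nrpe.cfg (credentials are placeholders):
command[check_mysql]=/usr/local/nagios/libexec/check_mysql -H localhost -u nagios -p yourpassword

# on the monitoring server, in linux-box-remote.cfg:
define service{
          use                 generic-service
          host_name           Centos5
          service_description MySQL
          check_command       check_nrpe!check_mysql
          }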

Nagios Ping /bin/ping Unknown status problem

This is simply a permission problem with /bin/ping. Hence, all you need to do is set the setuid bit:

chmod u+s /bin/ping

After a while, Nagios should be able to ping your server's IP.
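You can verify the setuid bit took effect; the "s" in the owner permissions is what you are looking for:

ls -l /bin/ping
-rwsr-xr-x 1 root root ... /bin/ping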

TroubleShooting

NRPE ./configure error:

checking for SSL headers... configure: error: Cannot find ssl headers

Solution:

You need to install the openssl-devel package

# yum -y install openssl-devel

CHECK_NRPE: Error - Could not complete SSL handshake

Solution:

This is most likely not a problem with SSL but rather with xinetd access restrictions.

Check the following files:

/etc/xinetd.d/nrpe

/etc/hosts.allow

/etc/hosts.deny
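For example, if tcp_wrappers is blocking the connection, an entry like this in /etc/hosts.allow (192.168.0.1 again standing in for your monitoring server) will permit it:

nrpe: 127.0.0.1 192.168.0.1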

no acceptable c compiler found in $PATH

When I ran the ./configure command, I got an error saying "no acceptable C compiler found in $PATH" and it stopped.

A quick Google search turned up a topic saying that I needed to install gcc, so I entered:

yum install gcc glibc glibc-common gd gd-devel

Hope it helps. If you need any web hosting solutions or have any questions, feel free to PM me 🙂

How to Set Up GFS2 or GFS in Linux CentOS

Setting up GFS2 with my 3 shared hosting servers and 1 SAN storage has been a nightmare. I have been reading all over the internet, and the solutions out there are either outdated or contain bugs that kept my SAN storage from working. Finally, I managed to set up GFS2 on my Dell MD3200i with 10TB of disk space.

GFS2/GFS Test Environment

Here is the test environment equipment I used for this setup.

  1. 3 CentOS web servers
  2. 1 Dell MD3200i SAN storage
  3. 1 switch to connect all this equipment together

Assumption

I will assume you have already set up all 3 CentOS servers to communicate with your SAN iSCSI storage. This means that all 3 CentOS servers can see your newly created LUN using iscsiadm, and that you have switched off iptables and SELinux. If your iSCSI storage hasn't been configured yet, you can do so following the guide at cyberciti.
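If you still need to do the iSCSI part, the discovery and login steps look roughly like this (111.111.111.100 is a stand-in for your SAN's iSCSI portal address):

iscsiadm -m discovery -t sendtargets -p 111.111.111.100
iscsiadm -m node -L all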

Setup GFS2/GFS packages

On all 3 of your CentOS servers, you must install the following packages:

  1. cman
  2. gfs-utils
  3. kmod-gfs
  4. kmod-dlm
  5. modcluster
  6. ricci
  7. luci
  8. cluster-snmp
  9. iscsi-initiator-utils
  10. openais
  11. oddjob
  12. rgmanager

Or you can simply run the following yum command on all 3 CentOS machines:

yum install -y cman gfs-utils kmod-gfs kmod-dlm modcluster ricci luci cluster-snmp iscsi-initiator-utils openais oddjob rgmanager

Or even simpler, you can just install the cluster groups via the following lines:

yum groupinstall -y Clustering
yum groupinstall -y "Storage Cluster"

Oh, remember to update CentOS before proceeding with any of the above.

yum -y check-update
yum -y update

After you have done all of the above, you should have all the packages needed to set up GFS2/GFS on all 3 CentOS machines.

Configuring GFS2/GFS Cluster on Centos

Once the required CentOS packages are installed, you need to configure each machine. Firstly, add all 3 server hostnames to the hosts file on every machine. Hence, each machine's /etc/hosts file gets the following additional lines:

111.111.111.1 gfs1.hungred.com
111.111.111.2 gfs2.hungred.com
111.111.111.3 gfs3.hungred.com

where *.hungred.com are the machine names and the IPs beside them are the addresses the machines use to communicate with each other.

Next, we need to set up the cluster configuration. On each CentOS machine, execute the following instructions to create a proper cluster configuration:

ccs_tool create HungredCluster
ccs_tool addfence -C node1_ipmi fence_ipmilan ipaddr=111.111.111.1 login=root passwd=machine_1_password
ccs_tool addfence -C node2_ipmi fence_ipmilan ipaddr=111.111.111.2 login=root passwd=machine_2_password
ccs_tool addfence -C node3_ipmi fence_ipmilan ipaddr=111.111.111.3 login=root passwd=machine_3_password

ccs_tool addnode -C gfs1.hungred.com -n 1 -v 1 -f node1_ipmi
ccs_tool addnode -C gfs2.hungred.com -n 2 -v 1 -f node2_ipmi
ccs_tool addnode -C gfs3.hungred.com -n 3 -v 1 -f node3_ipmi
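You can verify the node and fence definitions that were written to /etc/cluster/cluster.conf with:

ccs_tool lsnode
ccs_tool lsfence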

Next, you will need to start cman.

service cman start
service rgmanager start

cman should start without any errors. If cman fails to start, your GFS2/GFS will not work. If everything is fine, you should see the following when you type the command shown below:

[root@localhost ]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     16   2011-1-06 02:30:27  gfs1.hungred.com
   2   M     20   2011-1-06 02:30:02  gfs2.hungred.com
   3   M     24   2011-1-06 02:36:01  gfs3.hungred.com

If you see the above, your GFS2 cluster is properly set up. Next, we need to set up GFS2 itself!

Setting up GFS2/GFS on Centos

You will need to start the following services.

  • service gfs start
  • service gfs2 start

Once these two have been started, all you need to do is create a filesystem on your SAN storage LUN. If you want to use GFS2, format it with mkfs.gfs2:

/sbin/mkfs.gfs2 -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb

Likewise, if you prefer GFS, just use mkfs.gfs instead of mkfs.gfs2:

/sbin/mkfs.gfs -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb

A little explanation here. HungredCluster is the cluster we created while setting up the GFS2 cluster above. /dev/sdb is the SAN storage LUN that was discovered using iscsiadm. -j 10 is the number of journals; each machine in the cluster requires one journal, so it is good to decide up front how many machines you will place in this cluster. -p lock_dlm is the lock protocol we will be using; there are two other lock types besides lock_dlm, which you can look up online.
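If you under-provision journals and later add a machine to the cluster, journals can be appended to a mounted GFS2 filesystem; a sketch, assuming the filesystem is mounted at /home as below:

gfs2_jadd -j 1 /home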

P.S.: All of the servers that will belong to the GFS cluster need to be located in the same VLAN. Contact support if you need assistance regarding this.
If you are only configuring two servers in the cluster, you will need to manually edit /etc/cluster/cluster.conf on each server. After the <cluster> tag, add the following text (the standard two-node quorum override):

<cman two_node="1" expected_votes="1"/>

If you do not make this change, the servers will not be able to establish a quorum and will refuse to cluster, by design.

Setup GFS2/GFS run on startup

Key in the following to ensure that GFS2/GFS starts every time the system reboots.

chkconfig gfs on
chkconfig gfs2 on
chkconfig clvmd on  # if you are using LVM
chkconfig cman on
chkconfig iscsi on
chkconfig acpid off
chkconfig rgmanager on
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >>/etc/fstab
mount /dev/sdb

Once this is done, your GFS2/GFS volume will be mounted at /home. You can check whether it worked using the following command.

[root@localhost ~]# df -h

You should now be able to create files on one of the nodes in the cluster, and have the files appear right away on all the other nodes in the cluster.
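A quick way to confirm: touch a file on one node and list it from another.

[root@gfs1 ~]# touch /home/cluster-test
[root@gfs2 ~]# ls /home/cluster-test
/home/cluster-test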

Optimize clvmd

We can optimize clvmd by controlling the type of locking LVM uses. The locking settings live in /etc/lvm/lvm.conf:

vi /etc/lvm/lvm.conf

Find the variables below, change them as shown, then restart clvmd:

locking_type = 3
fallback_to_local_locking = 0

service clvmd restart

credit goes to http://pbraun.nethence.com/doc/filesystems/gfs2.html

Optimize GFS2/GFS

There are a few ways to optimize your gfs file system. Here are some of them.
Set your plock rate to unlimited and plock ownership to 1 in /etc/cluster/cluster.conf. Inside the <cluster> section this looks like the following (attribute names as given in the linuxdynasty write-up credited below):

<gfs_controld plock_rate_limit="0" plock_ownership="1"/>
Set noatime and nodiratime in your fstab.

echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >>/etc/fstab

Lastly, we can tune GFS2 directly by decreasing how often it demotes its locks, via this method:

echo "
gfs2_tool settune /GFS glock_purge 50
gfs2_tool settune /GFS scand_secs 5
gfs2_tool settune /GFS demote_secs 20
gfs2_tool settune /GFS quota_account 0
gfs2_tool settune /GFS statfs_fast 1
gfs2_tool settune /GFS statfs_slots 128
" >> /etc/rc.local

credit goes to linuxdynasty.

iptables and gfs2/gfs port

If you wish to keep iptables active, you will need to open up the following ports between the cluster nodes (adjust 10.10.10.0/24 to your cluster subnet):

-A INPUT -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 5404,5405 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 8084 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 11111 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 14567 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 16851 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 21064 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 41966,41967,41968,41969 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50006,50008,50009 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50007 -j ACCEPT

Once these ports are open in iptables, cman should be able to restart properly without getting stuck at the fencing step or at cman startup. Good luck!

Troubleshooting

You might face some problems setting up GFS2 or GFS. Here are a few, with fixes that might help.

CMAN fencing failed

You get something like the following when you start cman:

Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... failed

One possibility is that your GFS2 volume is already mounted on a drive. Hence, fencing fails. Try unmounting it and starting cman again.

mount.gfs2 error

If you are getting the following error:

mount.gfs2: can't connect to gfs_controld: Connection refused

you need to start the cman service first.

Making Mount DVD/CDROM Executable in Linux

Interestingly, if you mount a DVD or CD-ROM in Linux and try to run the files on it, chances are you will get an error stating that you don't have permission to perform the task. If mounting the media produces a warning that the disc is write-protected and will be mounted read-only, you will definitely get a permission-denied error when you try to execute any .sh file on the mounted drive.

The solution is pretty simple. All you need to do is open your fstab file:

vi /etc/fstab

and add/edit the following line so that you can mount the media and execute the files on it.

/dev/dvd        /mnt/dvd        auto        ro,user,noauto,exec      0 0

The trick to making your DVD/CD executable is to place "exec" after "user", because by default, once "user" is seen, the media is mounted "noexec", overriding any "exec" defined before it. Even an explicit

mount -o exec /dev/dvd

will still fail if, in your fstab, "user" is placed at the end or after the "exec" option. Cheers!

Changing SSH Port Fails To Log In on CentOS - No route to host

Recently I have been setting up my own CentOS server, playing around with it and learning more about Linux. It has been a challenging and interesting process for me. From a beginner's point of view, there is really a lot to learn and explore. One of the problems I faced was SSH giving me a headache when I changed the SSH port to something other than port 22.

The whole point of moving SSH off port 22 was to harden SSH security. However, who would have thought a problem would come from something as simple as changing the SSH port?

If you are getting the following message

connect to host xxx.xxx.xxx.xxx port 2222: No route to host

and you are sure you did everything correctly and have started staring at your hardware switch: don't. This should have nothing to do with your layer 3 switch if you haven't touched it.

The reason only port 22 is accessible via SSH and not the new port is that CentOS ships with its own firewall, iptables. If you, like me, suspect the CentOS firewall is causing the problem, you have found the right answer.

To determine whether the CentOS iptables rules are causing this problem, all you have to do is run the following command:

iptables -F

This will flush the iptables rules, temporarily clearing the CentOS defaults. Now try to SSH to your machine and see whether it works.

If it does, you have found the culprit of your headache. Next, we need to change the iptables rule so it stays permanent. Navigate to

/etc/sysconfig/iptables

look for the third-to-last line, where you will see --dport 22, and change it to your new SSH port.
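For example, if your new SSH port is 2222 (the port from the error above), the line becomes:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2222 -j ACCEPT

Then restart sshd and iptables: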

service sshd restart
service iptables restart

and you should be able to SSH to your server from another machine. Cheers!