I.T. Security and Linux Administration

March 21, 2011  2:35 PM

Installing Nagios on Linux Part 2: Installing the Plugins

Eric Hansen

In part one of this series, Installing Nagios Part 1: Handling the Core, I talked about how to install the Nagios core to get the fundamentals up and running.  This time, I will cover installing the plugins that are required to monitor systems and perform other essential tasks.

Step 1: Download and Extract

There’s only one download for the essential plugins, which makes this task a lot easier.

  1. Download nagios-plugins-1.4.15.tar.gz
    1. wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.15.tar.gz
  2. Extract the tar.gz file
    1. tar xf nagios-plugins-1.4.15.tar.gz
  3. Go to the newly created directory
    1. cd nagios-plugins-1.4.15/
Step 2: Configure and Make
This is a pretty short step, and it takes a bit less time than the core package did.
  1. ./configure --with-nagios-user=nagios --with-nagios-group=nagios
    1. There are more configuration options available, but this is the essential list that I feel is needed
  2. make && make install
    1. If you’ve never used “&&” on the CLI, it runs the first command (“make”) and, ONLY IF that succeeds, runs the second command (“make install”).
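The short-circuit behavior of “&&” described in the sub-step above can be checked with a quick demo:

```shell
# "&&" chains two commands; the second runs only if the first exits 0.
out1=$(false && echo "this never prints") || true   # false fails, so echo is skipped
out2=$(true && echo "ran because the first command succeeded")
echo "$out2"
```

This is exactly why make && make install won’t try to install a build that failed to compile.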
After this, the plugins will be ready to use.  We only have a couple more steps before Nagios is up and working as it should.
Step 3: Configure Contact Information and Web Server (Apache and Lighttpd)
The first step will be to configure the contact information for nagiosadmin.  This is done because alerts will be sent from this account, and you don’t want your SMTP server endlessly keeping bad e-mails in the queue.  After that, we’ll configure both Apache and Lighttpd to use Nagios.  They each do things differently, and since Lighttpd is used by some popular websites (i.e.: YouTube, various BitTorrent tracker sites, etc…), it’s worth covering here.
  1. cd /usr/local/nagios/etc/objects
  2. Open up “contacts.cfg” in your editor (I use nano personally)
  3. Edit line 35, where it asks for the e-mail.  I put in a generic alerts e-mail here, but you can put whatever you want in it really.
After that, save the file and exit the editor.  Since Apache is still the dominant web server, I’ll cover how to configure Nagios with it first, then with Lighttpd.  Before doing this, make sure that Nagios isn’t running (/etc/init.d/nagios stop).
  1. Make note of line 22 (“AuthUserFile”) in /etc/httpd/conf/extra/nagios.conf.  Mine says /usr/local/nagios/etc/htpasswd.users, but yours could be different.
    1. If /etc/httpd/conf/extra/nagios.conf doesn’t exist, check to see if nagios-3.2.3/sample-config/httpd.conf exists.  If it does, move it to /etc/httpd/conf/extra/nagios.conf
  2. Use htpasswd to generate an account that’ll be able to access Nagios
    1. htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
      1. nagiosadmin can be any username really, as the default nagios.conf file only asks for a valid user
  3. Edit httpd.conf to include nagios.conf
    1. Near the bottom of httpd.conf, there’ll be a bunch of “Include [...].conf” lines; somewhere in there, add “Include conf/extra/nagios.conf” (without quotes).
  4. Make sure /usr/local/nagios is owned by nagios:nagios by doing this: chown -R nagios:nagios /usr/local/nagios
  5. (Re)start httpd
    1. /etc/init.d/httpd restart
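For orientation, the sample nagios.conf that ships with the core boils down to something like the sketch below.  This is from memory and not a verbatim copy, so treat it only as a guide to what surrounds the AuthUserFile line from step 1; the paths are the ones used in this post.

```
ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
Alias /nagios "/usr/local/nagios/share"

<Directory "/usr/local/nagios/sbin">
    Options ExecCGI
    AuthType Basic
    AuthName "Nagios Access"
    AuthUserFile /usr/local/nagios/etc/htpasswd.users
    Require valid-user
</Directory>
```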
Here’s how to do this for Lighttpd:
  1. Edit your lighttpd.conf file, and make sure that you have the following modules enabled (uncommented):
    1. mod_alias
    2. mod_auth
    3. mod_cgi
  2. Configure alias.url to be like this:
    1. alias.url = ( "/nagios/cgi-bin" => "/usr/local/nagios/sbin", "/nagios" => "/usr/local/nagios/share" )
  3. Make sure Lighttpd doesn’t run Nagios’ compiled CGI programs through Perl
    1. $HTTP["url"] =~ "^/nagios/cgi-bin" { cgi.assign = ( "" => "" ) }
  4. Since Lighttpd doesn’t come with an htpasswd program, I put a script at the bottom of this post for a simple one you can use (it’s from the following post: Minimalistic HIDS: OSSEC)
    1. htpasswd "nagiosadmin" "Nagios Access" "nagiosadmin password" >> /etc/lighttpd/nagios.users
      1. Replace “nagiosadmin password” with the account’s password, but remember to keep the quotes there
  5. Configure Lighttpd for authentication
      $HTTP["url"] =~ "nagios" {
          auth.backend = "htpasswd"
          auth.backend.htpasswd.userfile = "/etc/lighttpd/nagios.users"
          auth.require = ( "" => (
              "method" => "basic",
              "realm" => "nagios",
              "require" => "user=nagiosadmin"
          ) )
      }
  6. Restart Lighttpd
    1. /etc/init.d/lighttpd restart
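The htpasswd helper script referenced in step 4 isn’t reproduced here, so here’s a minimal stand-in of my own (an assumption, not the original script).  It assumes openssl is installed and that lighttpd’s htpasswd backend accepts Apache apr1 hashes, which it generally does:

```shell
# Hypothetical htpasswd helper: prints "user:hash" in apr1 (Apache MD5) form
htpasswd_gen() {
    user="$1"
    pass="$2"
    printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")"
}

# Usage, appending an entry to the user file from this post:
#   htpasswd_gen nagiosadmin 'nagiosadmin password' >> /etc/lighttpd/nagios.users
htpasswd_gen nagiosadmin 'example-password'
```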
After you configure your web server, you should be able to access http://localhost/nagios/ with the username nagiosadmin and the password you assigned in the password file.  If you get an error saying that the documents can’t be found, make sure that /usr/local/nagios/ is owned by nagios (including the files inside share).  Also make sure PHP is set up for your server, and that the configuration files are correctly set.

March 20, 2011  11:42 PM

Installing Nagios on Linux Part 1: Handling the Core

Eric Hansen

When it comes to intrusion detection systems (IDSs), there’s more than a handful of choices out there, especially for a cross-platform system like Linux.  However, I don’t think any of them has taken the security world by storm the way Nagios has.

While I have covered Trend Micro’s IDS product, OSSEC, here, I believe one of the “kings of the domain” deserves its own place here as well.  While OSSEC claims compliance with various standards (including governmental and health ones [i.e.: HIPAA]), Nagios touts itself as the industry standard.  A drawback with Nagios, though, is that while you can install and configure the core, it doesn’t include the monitoring tools you need for it to be a useful IDS, so you need to install the plugins as well.  But if you look past this, you’ll see that Nagios is quite powerful at what it does: being a very robust host-based intrusion detection system.  In this guide, I’ll walk through how to install the Nagios core on Linux; following posts will go through installing the other components.

Step 1: Downloading and Extracting

Unlike my OSSEC posts, where I pre-installed the software and just walked through the steps of what I did, I’ll be documenting all the steps as I take them, so it’ll be as though you’re installing it along with me.  For those wondering, Nagios is native to Linux, but it does have ports and (limited) support for Windows systems (using NSClient++) as well.

But, first, you’ll need to download the tarball and extract the files:

  1. Visit http://www.nagios.org/download/core/thanks/
    1. Or directly download it: wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.3.tar.gz
  2. Extract the tarball
    1. tar xf nagios-3.2.3.tar.gz
  3. Change directories to Nagios
    1. cd nagios-3.2.3

In case you haven’t noticed in my previous posts, when I use tar, I don’t use the v or z flags like some guides direct you to.  Quite honestly, I don’t find the extra printout from “v” (which just shows you the files being extracted) useful, and “z” (which forces gzip) is unnecessary on extraction with a modern GNU tar, which detects the compression on its own; you mainly need it when creating a .tar.gz.
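A quick way to verify that claim about z being optional on extraction (done in a scratch directory so nothing real gets touched):

```shell
# Create a gzip-compressed tarball, then extract it without the "z" flag;
# a modern GNU tar detects the compression by itself.
dir=$(mktemp -d)
cd "$dir"
echo "hello" > file.txt
tar czf archive.tar.gz file.txt
rm file.txt
tar xf archive.tar.gz   # no "z", extraction still works
cat file.txt            # prints "hello"
```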

Step 2: Creating Nagios Users and Groups

After handling the downloading and extracting of Nagios, we need to prep our system as well.  To do this, we need to create a user and group for Nagios so that we can complete the install.  You can run ./configure and make just fine, but if you skip this step, make install will error out complaining that there’s no user for Nagios.  Please note that the username (“nagios”) and group name (“nagcmd”) can be whatever you like, but you need to stay consistent: if you change them, replace them wherever they appear below.

  1. Create the Nagios user account (and give it a password):
    1. useradd -m nagios
    2. passwd nagios
      1. A password is given to nagios because it’s bad sysadmin practice to leave an account without a password, even a dummy one.  You could also give it a non-login shell, but that’ll be up to the admin.
  2. Create the Nagios group
    1. groupadd nagcmd
  3. Assign usernames to nagcmd
    1. usermod -a -G nagcmd nagios
    2. usermod -a -G nagcmd www-data
      1. This is for the web UI.  If your web server’s files don’t run under their own specific user, make sure the user the web server runs as can access the Nagios files.  If your web server runs as a different user, replace www-data with that username.

Step 3: Running ./configure & make

This step requires g++ (or gcc) and make to be installed, but most systems I run into already have them.  If not, install them before moving on.  Depending on the system and how it’s set up, Nagios will install into different directories by default.

  1. Run this command (use ./configure --help for more switches):
    1. ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-command-user=nagcmd --with-command-group=nagcmd
      1. When this finishes, make sure you note the user and group names you used for both command and Nagios.  Also, make note of the URLs it supplies to you.
  2. Run make to complete the compilation process
    1. make all
      1. This doesn’t take long even on my old AMD Sempron 3100+ processor with 720 MB of RAM (about 5 minutes max).
      2. This makes sure everything fits together and compiles the CGI programs used by the web UI.
    2. After make all finishes without errors, you’ll have to run the following in this order:
      1. make install
        1. Installs the core Nagios files into the system
      2. make install-init
        1. Installs files wherever your init (boot-up) scripts are stored; usually /etc/init.d/ or /etc/rc.d
      3. make install-commandmode
        1. Sets up the external command directory and its permissions so Nagios (and things like the web UI) can submit commands
      4. make install-config
        1. Installs template configuration files that are needed
      5. make install-webconf
        1. Installs the Apache files for the web UI.  If your web configuration path isn’t /etc/httpd/conf.d, edit line 35 of nagios-3.2.3/Makefile.

After running all the make commands above, Nagios itself is finally installed.  From here, it’d be best not to start Nagios just yet.

The next guide will go over the Nagios plugins, and probably configuring the web server.

(Photo: Nagios.org)

March 17, 2011  11:26 PM

Installing the Linux OSSEC agent

Eric Hansen

In my previous post, Minimalistic HIDS: OSSEC, I gave a guide on how to install the OSSEC server on Linux. While the server has a built-in agent (so you can monitor the server’s own activity), if you have more than one Linux machine to monitor, you’ll need to install agents on those machines as well.

Step 1: Add the agent to the server

You’d think installing the agent on the machine would be first, right? Not with OSSEC. What you actually need to do, before anything else, is set up a new agent on the server. So, fire up your SSH session and get ready for some fun. For this, go to where you installed OSSEC (I’ll be using /var/ossec/ here), and go to the bin directory (/var/ossec/bin/). Inside, run the manage_agents file (“./manage_agents”).

From there, enter “a” (without quotes, and it’s not case sensitive) to add an agent to the server. The first thing it’ll prompt you for is a name of the agent. I put in the host name of the machine, as it makes things easier later on.

The next thing it’ll want is the IP of the agent. This field is a little tricky. If the machine gets its IP from a DHCP server, you’ll need to use CIDR notation for the network rather than a single address (for example, if the machine currently sits somewhere in the network, you’d enter This is due to how OSSEC works, and quite honestly, I can’t explain it beyond that…but I scratched my head a lot at first.

Lastly, it’ll prompt you for an ID for the host. This has to be an ID not currently in use. Also, please be aware that in any future OSSEC tasks requiring the ID of the agent, the leading 0s are required (so if you give the ID 003, you need to enter 003, not 3, for any ID requests). This makes sense, but is also kind of annoying at the same time. After that, it’ll ask you to confirm, so type “y” and hit enter.

Step 2: Get the authorization key for the agent

While you could restart the server (which is required for it to see new agents), we’ll do that later. Instead, we’ll get the authorization key for the agent. After adding the agent, it should return you to the main menu of the program. This time, hit “e” and enter. This will list all the agents registered to the server. Enter the ID of the agent, and hit enter. You’ll see text appear for the agent key. Copy and paste this into a document somewhere, as you’ll need it later, and exit out of the manage_agents program. From here, restart OSSEC before continuing, to make sure it picks up the new agent.

Step 3: Set up the agent

Here, I’m going to assume you already downloaded and untarred the compressed file (this step is #1 in my first guide linked above). So, I’m going to skip straight to the installation step.

Inside the downloaded and untarred OSSEC folder, run install.sh and choose your language preference again. If for some reason you’re installing an agent on a server that already has OSSEC installed (which wouldn’t be necessary, as all the other install types already include an agent), say no to upgrading OSSEC. Otherwise, choose agent as your install type. It’ll ask where you want to install it (by default /var/ossec/); I chose the default path, which is what I’ll be using in all my OSSEC guides anyway. After that, it’ll ask you for the IP address of the OSSEC server. Put in the actual IP address of the server (not a CIDR-notated network). For the sake of consistency, I accepted all the default options here as well. After this, it will compile OSSEC and install it into the path specified.

Step 4: Import the agent key

On the new agent, run manage_agents from /var/ossec/bin/, and choose “I”. From there, paste in the authentication key you received from the server when you added the agent there. Once you paste the key, it’ll ask you to confirm the addition; tell it yes. After this, restart the agent (/var/ossec/bin/ossec-control restart), and OSSEC should start monitoring it.

To make sure that the agent is working properly, check to make sure there’s no “cannot connect to server” type errors in /var/ossec/logs/ossec.log.

March 11, 2011  3:17 PM

Minimalistic HIDS: OSSEC

Eric Hansen

While this software is far from new in the world of IDSes, I’m not sure how many people actually know about it (especially since Snort takes the crown most of the time).  However, while I’m not a fan of Trend Micro and their products, they are backing what could easily be a perfect IDS in OSSEC.

What is a HIDS and OSSEC?

Without going into all the gritty details, a HIDS is a host-based intrusion detection system.  Basically, what it does is monitor systems on the network (instead of the network itself, which a NIDS [network intrusion detection system] does).  From what I’ve found, a NIDS is good if you want to monitor the whole network, and a HIDS is useful if you only want to monitor specific systems (such as just an e-mail and web server).  Generally, to compensate for the lack of network-wide detection, HIDS monitor logs as well, such as syslog.  Some also monitor file system activity and the like, but it’s not a priority.

As for OSSEC, it pretty much exemplifies what a HIDS is used for.  You install the server itself on a machine (if possible, not one you want to monitor), and agents (or sensors) on the machines you do want to monitor.  It will monitor the file system for changes, logs, and essentially anything else you tell it to (with plugins).  OSSEC also offers a notification system for those who want to be e-mailed about changes it detects.  Lastly, OSSEC runs on many systems (including Linux, Mac, Windows and Unix), so there is a wide range of support, not to mention that their mailing list is very active.

How Do I Download and Install OSSEC?

The first thing that should be pointed out is that Windows offers an agent only, so you will have to use a different operating system for the server.  This might change in the future, but for now, this is how it is.  This will be installed on my home server; here are the specs:

Linux (Arch Linux) running kernel 2.6.37 on a AMD Sempron 3100+ processor with about 720 MB of RAM (and 7 MB of swap)

I’ll also go through setting up the OSSEC Web UI, as it is quite helpful for monitoring the system.  However, this was done using Lighttpd running PHP 5.3.3 (FastCGI); the Apache instructions on OSSEC’s website should be enough for that web server.  For those wondering why I’m not using Apache: it’s always been a resource hog (even in its 1.x days), and I find Lighttpd easier to maintain and manage.  One last point: this post covers how to install the server itself.  Installing and adding sensors will come in a different article, as it has its own heartaches and breaks.

Before installing OSSEC, you need to have gcc (or some other C compiler) installed, htdigest (if you’re using Apache, it’s already there), md5sum and sha1sum.  If whereis md5sum or whereis sha1sum returns no path, see if your system has md5 or sha1 (most flavors have one or the other), and create a symlink to its sum partner.  In regards to htdigest, here’s a script that’ll work if you don’t want to install Apache just for this tool:


#!/bin/sh
# Arguments: username, realm, password
user="$1"
realm="$2"
pass="$3"

hash=`echo -n "$user:$realm:$pass" | md5sum | cut -b -32`

echo "$user:$realm:$hash"

How to use: htdigest "username" "name of OSSEC realm" "password" > ossec.htdigest (example: htdigest "bob" "OSSEC" "denver"). The quotes are required if you’re using a space in any of them.
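Inlining the same computation the script performs, the bob/OSSEC/denver example produces a line of this shape:

```shell
# Same md5-based digest line the htdigest script above emits
user="bob"; realm="OSSEC"; pass="denver"
hash=$(printf '%s' "$user:$realm:$pass" | md5sum | cut -b -32)
echo "$user:$realm:$hash"   # bob:OSSEC: followed by 32 hex characters
```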

Step 1: Download and untar

The latest build right now is 2.5.1, and according to their site, they send out a new release (including new rules and definitions) every 3-4 months.  While there’s no changelog or anything to indicate release dates, ls -liha shows most files being modified on Oct 12th (I untarred the file today).

First, download and extract the latest tarball:

wget http://www.ossec.net/files/ossec-hids-2.5.1.tar.gz && tar -xf ossec-hids-2.5.1.tar.gz

This will create an ossec-hids-2.5.1 folder.  Inside, it’ll have install.sh, which you need to run:

./install.sh

The first prompt will be which language you want to use.  The default is English (en); change it as needed.  After that, a message will say that a C compiler needs to be installed; just hit enter.  Here is where you choose what you want to install.

Server: If you are going to have more than one system to monitor, this is the choice to pick. Besides monitoring the system it’s installed on, it also offers remote administration of the agents.
Local: Same as server, minus the ability to monitor agents.
Agent: Basically a node on the network (useful if it’s a server that is used at least moderately). Installing OSSEC as an agent allows the computer to connect to the server and send various information.

My personal choice here was server, but local can work as well. If you choose local, you’ll see fewer options during the following steps; I’ll go with server just for the full experience.

Choosing the path is pretty simple; the default location (/var/ossec) is generally the best. If you change this, however, make note of it. E-mail notifications are enabled by default, which I kept on, as it’s needed. Another thing to note here is that if you are using GMail as your SMTP server, you have to use this: gmail-smtp-in.l.google.com

The integrity check daemon runs on the server and monitors important files; if they are modified (checksums change), it’ll send out a notice. When it comes to the rootkit detection, though, I turn that off: it’s a Linux machine that isn’t used much from the outside, so I’m not worried about it.

As for active response, it’s best to keep it enabled, especially if you decide to develop your own plugins for OSSEC later on. This feature basically lets OSSEC act as a firewall of sorts, depending on what you have it do (i.e.: if php.ini is modified by a user that’s not root, you can block access for that user). Firewall-drop events are enabled for me, as the server I have OSSEC installed on is heavily used over SSH, so I prefer to be safe rather than sorry; all it really does is allow OSSEC to add iptables rules. Remote syslog is disabled, as I don’t have any other Linux machines connecting to the server, but if you do, or plan to, it’s safe to enable this.

After this, OSSEC will run its Makefile, compiling all the files needed and the like. Even on my old server it doesn’t take longer than 5 minutes.

Step 2: Getting OSSEC to Run

For some systems, it’ll install an init script at the end of a successful compile. If this doesn’t happen for some reason, though, you can use this simple init script:


#!/bin/sh
/var/ossec/bin/ossec-control "$1"

I tried creating a simple symlink, but ossec-control doesn’t use hard-coded paths internally, so it would end up running from /etc/rc.d/ instead of /var/ossec. You can pass the generic init arguments to it (even status).

Step 3: Installing the Web UI

Once you have OSSEC running, it’s time to install the web UI. The process itself is easy, but some configuration is needed for it to work correctly. For Lighttpd modules, you need mod_fastcgi (for FastCGI use) and mod_auth (for authentication) enabled. The basic setup of Lighttpd + PHP is out of scope for this article. This assumes that the web root is in /srv/http; make the necessary changes to suit your server.

To begin, you need to download and untar the web UI file. For the sake of simplicity, the current directory will be in the web root (/srv/http).

wget http://www.ossec.net/files/ui/ossec-wui-0.3.tar.gz && tar -xf ossec-wui-0.3.tar.gz

To make things easier, you can rename the folder (mv ossec-wui-0.3 owui). If you changed the install path of OSSEC, make sure that $ossec_dir points to the correct path in ossec_config.php. For Apache, you can run setup.sh to create a user for logging into OSSEC (that’s all the file does); if you’re using Lighttpd, you can ignore that file.

Step 4: Modify PHP

You can do this step at any point, but since we’ll have to restart the web server anyway, we may as well do it now. Edit your php.ini file and add the OSSEC path to open_basedir. You can skip this step if you wish, but if you get an error later saying that the web UI can’t open the OSSEC directory, this is what fixed it for me.
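Assuming the paths used in this article (web root /srv/http, OSSEC in /var/ossec), the php.ini line would end up looking something like this; adjust both paths to your own setup:

```
; Allow PHP to read the web root and the OSSEC directory
open_basedir = "/srv/http:/var/ossec"
```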

Step 5: Configure Lighttpd

First, I’ll show what I have set in my config file for Lighttpd, and then explain the important parts.

$HTTP["url"] =~ "ossec" {
    auth.backend = "htdigest"
    auth.backend.htdigest.userfile = "/etc/lighttpd/ossec_auth"
    auth.require = ( "" => (
        "method" => "digest",
        "realm" => "ossec",
        "require" => "user=raevin"
    ) )
}

The authentication method is up to you; digest is what the OSSEC install guide recommends, however. You don’t even need authentication, but it’s highly recommended. The line:

auth.backend.htdigest.userfile = "/etc/lighttpd/ossec_auth"

should point to the authentication file (see my script above for more information). After that, make sure that the realm matches what you put into your authentication file, as does the user named on the require line below it.

Step 6: Testing the Web UI

Restart the Lighttpd daemon, clear the temp files that FastCGI creates (if your init script doesn’t do that automatically for you), and check whether you can access the web UI at http://localhost/owui/. If it works, you should see at least one monitored host (the server itself). If you have any issues, leave a comment and I’ll try to help.


To uninstall or update your OSSEC files, you will need to download the newest tar file and run install.sh again. After you choose your language, it’ll ask if you wish to update your setup; if you choose no, it’ll then ask if you wish to uninstall OSSEC.

Making plugins and such for OSSEC is kind of tricky. I’ll try to write a guide on that after I go through how to set up agents.

March 8, 2011  10:40 PM

On-the-fly Data Compression: Wingless Bird or Golden Savior?

Eric Hansen

With solid-state drives (SSDs) getting a lot of use now, especially with the adoption of netbooks over aging notebooks and laptops, people have been bringing up the idea of on-the-fly data compression.  The main question is: how efficient is this concept?  As it stands, I have yet to find a Linux file system that supports it out of the box.  What I want to address in this post is whether that should change.

What Is On-the-Fly Data Compression?

Basically, what this whole post is about is the ability to compress data on the hard drive and uncompress it when it’s accessed/requested.  The premise is that this saves disk space, and that’s it.  Since SSDs have come into the market with a punch, there has been a bit more pressure for this ability, especially with non-Linux file systems having been able to do this for years.
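The disk-space premise is easy to see with an ordinary compressor standing in for what the file system would do transparently:

```shell
# Compare a repetitive file against its gzip-compressed size
dir=$(mktemp -d)
yes "the quick brown fox jumps over the lazy dog" | head -n 1000 > "$dir/sample.txt"
gzip -c "$dir/sample.txt" > "$dir/sample.txt.gz"
ls -l "$dir"/sample.txt*   # the .gz file is a small fraction of the original
```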

Why Should I Use This?

To be unbiased, it’s not a clean-cut decision (similar to “which Linux distro/flavor should I use?”).  It has its moments, definitely, but it also has its limits, both of which I will do my best to address in this post.

How Do I Use This In Linux?

This is what draws me away from this technology/feature.  For Linux, ext2 and ext3 do support it, but you have to patch the kernel for the support to be there.  For ext2, there’s e2compr, which works with 2.2, 2.4 and 2.6 kernels (with the last 2.6 support being 2.6.22-25).  For ext3, there’s a ported version of e2compr called e3compr.  The problem with both of these solutions is that they haven’t been updated in some time (e2compr since 2009, e3compr since 2008).  As for the other file systems (including ext4 and ReiserFS), I’m not able to find any information on them supporting this feature.

Why Wouldn’t I Use This?

I really didn’t want to make a post starting out with the negatives on this from the get-go, as I love this concept.  But there are a few issues here that I see with it, that I haven’t brought up yet.

The first issue is disk performance.  SSDs already have a lower lifespan than their hard drive step-brothers, which makes me wonder why people think this technology would be great for SSDs to begin with.  Reading files isn’t where I see issues: in this regard there’s no difference between a compressed and an uncompressed file, and stat() will still show the same information.

The write path, however, is what scares me the most.  If you think there isn’t much to it, consider this: even with laptops carrying 2 or more GB of memory, where does the file get uncompressed?  If it’s uncompressed on the drive, then you’ll be using up to double the space (even if it is temporary, you’d have to make sure you have the space).  If you decide to uncompress the files to a tmpfs (RAM disk), you still have to make sure you have the RAM (which is even trickier, as RAM usage fluctuates a lot more).  Of course, there’s the possibility of swap helping out here, but it seems like a lost cause, given the bigger chance of data loss or corruption, especially if you end up running a program that gets caught in a buffer overflow/overrun.


Would I use this in my everyday life?  No, not even on my netbook.  I don’t feel that the possible risks justify the possible gain in disk space.  Also, while I love using Linux, I don’t like tinkering with the kernel, especially with patches that are more than 6 months old (let alone 2-3 years).

There are file systems out there (such as NTFS) that do offer this, but I feel there is a reason it isn’t enabled by default.  On an SSD you might enjoy the added space, as they are limited in what they can hold, but with hard drives at a good 500 GB to 1 TB or more, the sacrifice is too great.

March 3, 2011  2:18 PM

Nagios Checker [Bash Script]

Eric Hansen

While working on another post I’ll be publishing here later, I decided to venture back into bash scripting.  While I’m sure there are plenty of Nagios scripts out there (I haven’t really looked myself, but Nagios does have a big enough community), I decided to write my own basic one.  The only requirement (besides having Nagios installed) is having SSMTP installed as your sendmail replacement.  If you use a different mail setup, however, you should be able to simply modify the send-mail line with ease.  I’m going to post the code first, then go into detail about the important parts.


#!/bin/bash

NAGIOS=$(/etc/rc.d/nagios status | grep "is running")

if [ -z "$NAGIOS" ]; then

echo "To: admin@domain.com" > /tmp/nag_check

echo "From: nagiosadmin@domain.com" >> /tmp/nag_check

echo "Subject: Nagios system down on `hostname`" >> /tmp/nag_check

echo "" >> /tmp/nag_check

echo -e "`date \"+%D %r\"`: Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible." >> /tmp/nag_check

/usr/sbin/ssmtp -t < /tmp/nag_check

fi


First thing to note is the NAGIOS line:

NAGIOS=$(/etc/rc.d/nagios status | grep "is running")

The path (/etc/rc.d) applies to Arch Linux, which my server runs.  You will have to modify the path to fit your server if there isn’t an /etc/rc.d/nagios file.  All that file is is a basic init script, which also exists in /etc/nagios (/etc/nagios/daemon-init, to be exact).  Again, the path may vary depending on the flavor used, but that one should be more standardized.  All the entire line is doing is asking for the status of Nagios (is it running or not?).  If it’s running, the output will be something like “Nagios (pid: ###) is running…”, and if it’s not, it’ll say it can’t find the lock file.  Next, we check that the “is running” text was not found (the -z test in the if statement; $NAGIOS is quoted so the test behaves correctly even when the output is empty or contains spaces), and if it wasn’t found, the alert gets built and sent.  Besides this, there’s nothing else to cover about the code, as it’s pretty self-explanatory.  The “-t” switch for ssmtp just says to scan the /tmp/nag_check file for the To/From/BCC/Subject lines instead of having them passed via CLI (which is why we write them into the file).

There are two other ways to make this easier, though.  On my system, the lock file is located at /var/nagios/nagios.lock (which, again, varies with the flavor of your system…it’s best to run find /etc -iname nagios.cfg and scan that file for the lock file variable to get the correct path for you).  Instead of grepping the status output, you could drop the NAGIOS line completely and test for the lock file directly: if [ ! -e /var/nagios/nagios.lock ]; then.  This checks whether the file doesn’t exist (-e tests that the file exists, and the ! negates it: “if the file is not present”).  This is a little bit faster (not a noticeable difference, however), but I’m not sure what other performance differences there are.
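That lock-file check can be sketched like this (the path is the one from my system; find yours as described above):

```shell
# Alert when the Nagios lock file is missing, i.e. Nagios isn't running
LOCKFILE="/var/nagios/nagios.lock"   # path varies per flavor
if [ ! -e "$LOCKFILE" ]; then
    msg="$(date "+%D %r"): Nagios was found to not be running."
    echo "$msg"
    # ...from here, build /tmp/nag_check and send it with ssmtp -t as in the script above
fi
```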

Lastly (an option I just thought of while writing this post), and probably the most efficient in terms of processing: you can strip out the NAGIOS check entirely so the script just sends the e-mail, and move that e-mail code into /etc/nagios/daemon-init itself, where it detects that Nagios isn’t running, letting the init script handle it by itself.  The only difference in the cron job would be that instead of calling the script, you’d call the status check.

When sent out, the e-mail will look like this:

Subject: Nagios system down on *hostname*

Body: 03/03/11 01:49:16 PM: Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible.
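
Putting the lock-file variant together, the whole watchdog might look like this sketch. The lock-file path and e-mail addresses are assumptions to adapt (find your real lock file via the lock_file setting in nagios.cfg), and ssmtp is only invoked when it's actually installed, so the sketch is safe to dry-run:

```shell
#!/bin/sh
# Watchdog sketch for the lock-file variant described above.
# ASSUMPTIONS: the lock-file path and addresses are examples only.
LOCKFILE=${LOCKFILE:-/var/nagios/nagios.lock}

nagios_down() {
    # Success (0) when the lock file is absent, i.e. Nagios isn't running.
    [ ! -e "$LOCKFILE" ]
}

if nagios_down; then
    {
        echo "To: admin@example.com"
        echo "From: nagios@example.com"
        echo "Subject: Nagios system down on $(hostname)"
        echo ""
        echo "$(date '+%D %r'): Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible."
    } > /tmp/nag_check
    # -t makes ssmtp read the To/From/Subject lines from the message itself
    if command -v ssmtp >/dev/null 2>&1; then
        ssmtp -t < /tmp/nag_check
    fi
fi
```

Dropped into cron, this runs unattended; the same body could just as easily live inside the init script's status branch, as described above.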

February 25, 2011  1:04 PM

Software VPN vs. SSH: Which is better?

Eric Hansen

In the IT world lately there's been a lot of buzz about VPNs and how to use them effectively for remote administration.  Amid all this, it seems a lot of people are forgetting the roots of remote administration (at least within the last few years), from before VPN really started to get recognized.  Both have their pros and cons, so which one is better for administration?  Let's go a little in-depth with both and see.

Security

Hands down, in the general scheme of things, I would say VPN wins this area.  While SSH does in fact hold a high security standard, the way VPN handles encryption is more thorough.  This is mostly because of VPN's encapsulation method (covered more below), which adds a security layer that SSH doesn't have.

For those who aren't familiar with encapsulation, especially with VPNs, here's a general overview.  When you connect through a VPN tunnel, the TCP/IP packets carry more overhead.  This is because the VPN software wraps each packet in its own encryption headers.  While this does slow down traffic (also covered below), vendors have added measures to make sure the tunnel cannot be tampered with.  Authentication is often strengthened with a token (such as a 6-digit RSA SecurID code) instead of (just) a username and password.

In the realm of SSH, the protocol itself is quite strong on security.  The common weak point, though, is when a key isn't used.  In previous articles I gave a step-by-step guide on how to set keys up, and there's a major reason for that.  To be clear, SSH encrypts the entire session either way, so your login is never literally sent in plain text; but a password login can be guessed, brute-forced, or captured by a compromised endpoint, while a key-based login resists all of that.  If you're only connecting to a home computer that's sitting right next to you, it's almost never an issue.  But say you're at work and want to check the iostats on a production server: relying on a reusable password would not be the logical choice.  VPN setups sidestep much of this by pairing credentials with tokens, and the encapsulation header itself doesn't contain anything damaging to the user, as all the data is inside the packet underneath it.
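
A minimal key-setup sketch, assuming OpenSSH; the key path and server name below are placeholders, not real hosts:

```shell
# Generate a key pair (no passphrase here, purely for illustration --
# a passphrase-protected key is stronger in practice).
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 2048 -f /tmp/demo_key -N ''

# Install the public key on the server (hypothetical host), then log in
# with the key instead of the account password:
#   ssh-copy-id -i /tmp/demo_key.pub user@server.example.com
#   ssh -i /tmp/demo_key user@server.example.com
```

Once the public key is on the server, password authentication can be disabled entirely on the server side, which is where the real security gain comes from.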

Speed

This is the one that gets to me the most here.  Anyone who's used both protocols will, I'm sure, agree that SSH is the faster of the two.  Fortunately, the reason is simple.

In the last point, I talked about how VPN has that encapsulation header to provide an additional layer of security.  While it adds that extra protection, it comes at a price.  With a VPN, roughly speaking, the packet payload has to be segmented: a typical Ethernet frame tops out at a 1500-byte MTU, and the tunnel headers eat into that, so less payload fits in each packet and more packets have to be sent, increasing bandwidth usage.  On top of that, once an encapsulated packet reaches the destination, the destination has to strip the header off to get at the payload.  So a website that generally takes about 5 seconds to load (for example) can take 10–15 seconds (if you're lucky).
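
As a back-of-the-envelope sketch of that effect (the 100-byte tunnel overhead is an assumed, illustrative figure, not a measurement of any particular VPN):

```shell
# How per-packet tunnel overhead inflates the packet count for 1 MiB.
MTU=1500          # typical Ethernet MTU
OVERHEAD=100      # assumed per-packet tunnel overhead (illustrative)
PAYLOAD=$((MTU - OVERHEAD))
DATA=$((1024 * 1024))

# Ceiling division: packets needed without and with the tunnel.
PLAIN=$(( (DATA + MTU - 1) / MTU ))
TUNNELED=$(( (DATA + PAYLOAD - 1) / PAYLOAD ))
echo "plain: $PLAIN packets, tunneled: $TUNNELED packets"
# prints: plain: 700 packets, tunneled: 749 packets
```

Seven percent more packets is modest on its own; the larger real-world costs are the per-packet encrypt/decrypt work and any fragmentation the tunnel triggers.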

SSH, however, plays nicely with the data.  With or without a key, packets don't take as long to get from A to B.  There is no tunnel encapsulation (i.e., extra headers) to deal with, and while the encryption itself carries a slight performance hit, even when using SSH as a proxy that same page might take only 2 seconds longer to load.

Ease of Use

For the administrator setting up the service, this sort of depends.  Most Linux flavors have at least one VPN solution in their repos, and I have yet to run into one that doesn't ship an SSH server/client, so it's pretty easy for the administrator to run a single command and install it all.  This category, though, is really about the users the administrator will have to support.

When I worked on the Ford help desk, I got a lot of calls every day asking for help getting VPN to work.  While the calls were easy, there was never an easy "click here to solve it all" button we could push.  Granted, most of the time it was a transport/tunneling issue, or sometimes a port issue, but there were a lot more calls than we should have received.  Then there was also the problem of the previously mentioned SecurIDs desyncing.  Again, a very easy fix, but it seemed if it wasn't one thing it was another.  I can't completely fault the users, though, as the e-mail documentation (read: a couple of paragraphs) didn't go into any detail whatsoever to assist them.

SSH, while I never dealt with it in the same support capacity as VPN, is a lot easier to use, and using a key makes the task easier still.  There's not a lot to say here, as it also depends on how you use SSH (i.e., PuTTY, the OpenSSH client, another SSH client, etc.), but it all pretty much amounts to the same result.  It's all relatively easy to set up, and the administrator can simply pass out a config file (like a script file for Linux) to make the task even easier if it uses a generic key.

Maintenance

Every server is different, which makes this category a bit hard to judge.  In general, though, SSH is easier to manage, especially on a big network.  The reason I say this is that OpenSSH, at least, generally has only one or two configuration files you need to touch.  Probably the hardest task is setting it up to use keys instead of password authentication.  VPN does make its own configuration easier in places, though; OpenVPN, for example, offers web-based configuration, leaving you to worry mainly about the network side (i.e., making sure VPN access from outside is possible).
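
For a sense of what "one or two configuration files" looks like in practice, here are a few illustrative lines from sshd_config. The directive names are real OpenSSH options; treat the values as an example policy, not a recommendation for every network:

```
# /etc/ssh/sshd_config -- key-only logins
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
```

After editing, the SSH daemon has to be restarted (or reloaded) for the changes to take effect.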

Final Thoughts

When it comes down to it, I personally prefer SSH.  VPN would be great if I were running a medium or large network, and/or were super paranoid.  However, SSH with key authentication satisfies my needs just fine.  Basically, if you're using VPN, know that it might (probably will) need extra configuration compared to its sibling, but it adds a hefty amount of extra security that SSH doesn't (at a price).  If you prefer, you could also set up both, and use VPN when it's not time-critical, while SSH is used for those moments when you're against the clock.

February 21, 2011  4:17 PM

Drive Encryption (feat. TrueCrypt) Part 2

Eric Hansen


Unfortunately, TrueCrypt's official website does not offer a simple click-this-link download; instead, you have to choose between a GUI (x86 or x64) or CLI (x86 or x64) version. As my server's pretty old (read: very, very old), I had to use the x86 version, so I can't currently verify what differences there are between the two (if any). What I did was download the CLI/console x86 version, then use WinSCP to copy the tar.gz file over to my server.

Note that most Linux flavors have TrueCrypt in their repos, so if you'd rather not follow the manual installation procedure, you can easily go that way instead. Personally, though, I prefer the manual route, so that's what I'll cover here.

Once you download, transfer, and untar the TrueCrypt .tar.gz file, you’ll be left with one file:


All you have to do is run this file, and it’ll give you two options:

TrueCrypt 7.0a Setup

Installation options:

1) Install truecrypt_7.0a_console_i386.tar.gz
2) Extract package file truecrypt_7.0a_console_i386.tar.gz and place it to /tmp

To select, enter 1 or 2:

I chose option 1, since 2 is more of a "see what's involved in this" option. After choosing 1, you'll be prompted to acknowledge the EULA, where the Page Down key comes in handy quite a bit, as there's a lot to read. Once you reach the end, type either "y" or "yes" (without the quotes) to acknowledge it. After that, you'll see the following screen:

Uninstalling TrueCrypt:

To uninstall TrueCrypt, please run ‘truecrypt-uninstall.sh’.

Installing package…
usr/share/truecrypt/doc/TrueCrypt User Guide.pdf

Press Enter to exit…

That's it for the installation; pretty painless, I'd say.

Creating a Volume

I'm not going to cover hidden volumes in this post, so I'll just quote what the help output shows in regard to the hidden volume type:

Inexperienced users should use the graphical user interface to create a hidden
volume. When using the text user interface, the following procedure must be
followed to create a hidden volume:
1) Create an outer volume with no filesystem.
2) Create a hidden volume within the outer volume.
3) Mount the outer volume using hidden volume protection.
4) Create a filesystem on the virtual device of the outer volume.
5) Mount the new filesystem and fill it with data.
6) Dismount the outer volume.
If at any step the hidden volume protection is triggered, start again from 1).

In part one I covered what a hidden and normal volume is, as well as some other fundamental concepts for using this software. So, right now I’ll go through creating a TrueCrypt volume.

Before creating a volume, though, note that there is a switch (--random-source) you can use instead of typing in 320 random characters. While the help text says "Use file as source of random data", you can also use a device (such as /dev/urandom). You'll have to wait a little while for it to finish, as it won't output anything to let you know it's gathering data.

truecrypt -t -c

Very simple, yet very powerful. You'll be prompted with the following:

Volume type:
1) Normal
2) Hidden

In my own personal experience, I've never found a need for a hidden volume, so I always go with normal (1). Once you do that, it'll ask you for the volume path. It should be noted here that this is the full absolute path (/home/tcvolumes/…, not ~/…), including the filename of the volume itself. When I first worked with TrueCrypt, this threw me off a little. Then the real fun begins.

Now it'll ask for the size of the volume, with megabytes (M) being the default unit. If you want to specify kilobytes or gigabytes, type the size followed by K or G (i.e., for 500 kilobytes, type 500K). Please note, even though it probably shouldn't have to be said: make sure the partition or disk you use has enough space for the volume you're creating.

The first interesting bit during a volume set up is the encryption choice. Here’s what you’ll see next:

Encryption algorithm:
1) AES
2) Serpent
3) Twofish
4) AES-Twofish
5) AES-Twofish-Serpent
6) Serpent-AES
7) Serpent-Twofish-AES
8) Twofish-Serpent

While the default is #1, I prefer #5 or #7; outside of those, I recommend any of the AES-based choices. AES is a robust block cipher, among the hardest encryption algorithms to crack, and overall gives your volume a solid security foundation.

After that, you'll choose the hash algorithm, and the list offered seems to depend on the encryption algorithm you just chose. I generally choose Whirlpool. For the file system, the default is FAT, but I like to stay consistent, so I stick with whatever format the main partition uses (ext3 on my server). Whatever you pick, make sure your kernel supports it.

It'll ask for a password after you choose the file system. This is used when mounting the volume, to ensure wandering eyes aren't peeping into your files. While you don't have to provide a password, it's very highly recommended that you do, unless you're positive you're the only one who will ever access or mount the volume. If you enter a password shorter than 20 characters, it'll warn you that short passwords are generally easier to crack and ask whether you still want to use it. Alternatively, you can provide a keyfile (multiple keyfiles are possible) to use instead of a password. Note that the keyfile has to exist beforehand, or you'll receive an error saying it doesn't exist.

Then the real fun begins, as you finally get to type in random characters to generate the encryption key. It requires at least 320 characters; if you press Enter at any time, it'll show you how many are left. After that, the volume will be created, and then you just have to mount it.
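
For reference, the whole interactive dialogue above can be collapsed into a single command using the long options listed by `truecrypt -t --help`. The values shown are examples only, and the guard at the top just makes the sketch a no-op on machines without TrueCrypt installed:

```shell
# Skip gracefully when the truecrypt binary isn't present.
if command -v truecrypt >/dev/null 2>&1; then
    # One-shot volume creation; every prompt answered via a flag.
    # Path, password, and size (10 MiB in bytes) are example values.
    truecrypt -t -c --non-interactive --volume-type=normal \
        --encryption=AES --hash=Whirlpool --filesystem=FAT \
        --size=10485760 -p 'example-password' -k '' \
        --random-source=/dev/urandom /tmp/demo.tc
fi
```

Handy for scripting, but remember that putting the password on the command line exposes it to anyone who can read the process list or your shell history.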

Mounting a Volume

This is pretty simple. The command is as follows:

truecrypt {volume} {mount point}

Just as with the mount command, the mount point has to exist before you mount the volume, or else you'll get an error. If you gave your volume a password, it'll prompt you for it before mounting. You'll also be prompted for a keyfile (even if you didn't provide one during creation), so if you didn't set one up, you can just hit Enter. After that, it'll ask if you want to protect the hidden volume if there is one, which defaults to no. Then you're done with mounting the volume. At this point you can do pretty much whatever you want with it, as it'll act just like any other mounted drive.

For mounting, if you want to speed up the process a bit, you can use this:

truecrypt -t -k "" --protect-hidden=no {volume} {mount point}

This speeds up the mounting process by only asking for a password. The -t switch says to use a text-based user interface, -k specifies keyfiles ("" means no keyfiles), and --protect-hidden does the same as the prompt during the regular mount process. Another switch you can use, though it pretty much defeats the purpose of using passwords, is -p, which lets you specify the password for the volume on the command line. If you decide to use it, the command (to skip all user intervention) would look like this:

truecrypt -t -k "" -p "password here" --protect-hidden=no {volume} {mount point}

If you use the -m switch, you can set mount options (such as -m ro for a read-only mount), similar (again) to the mount command. Another operation that might come in handy is changing the password, which is done with the -C switch. To do this, simply use the following command:

truecrypt -C {volume}

It'll prompt you for the volume's current password (you can use the -p switch again here if you like), then the new password, and another 320-character string of random text.

Dismounting a Volume

I'll keep this short, as there are really only two commands for dismounting volumes.  First, to dismount a specific volume, type the following:

truecrypt -d {volume}

So, if your volume file was located at /root/tcv, you’d type truecrypt -d /root/tcv.

Lastly, if you would like to dismount all of your volumes, you would just use the following:

truecrypt -d

Conclusion

I’ll be covering more of TrueCrypt in future articles, as well as other programs that add to security and safety.  Keep checking back for more articles, as I’ll also be covering more about the IT world as well.

February 16, 2011  1:47 PM

Drive Encryption (feat. TrueCrypt) Part 1

Eric Hansen


"Who's going to look in this file named 'top secret details.xls'?" Does this type of question sound familiar? In the IT world, questions like these are practically a second language. Yet once that user gets the file stolen by some means, you're the first one they come to in order to save them. What if all of this could have been avoided in the first place?

This brings in my personal favorite, TrueCrypt. I'm sure most people have already heard of this software, as it's quite well known. This article covers the general aspects; later ones will go more in-depth with this, and other, software.

Good vs. Bad

Before getting into installing TrueCrypt, I want to cover some of the benefits of using it. What TrueCrypt does is create a virtual drive (or virtual volume) of sorts that it mounts and unmounts, where you can put any type of document. This may not sound amazing, especially in the Linux world, where you could just repartition a small hard drive for the same purpose. The added benefit of TrueCrypt, though, is that it encrypts the contents of its virtual drive; which encryption algorithm, strength, etc. will be used is decided during the set-up process.

Another benefit is that it works on multiple platforms (Linux, Windows, and Mac OS X so far), so it's quite easy to carry the virtual drive on a thumb or network drive and not have to worry about whether it will work. There's also the fact that it can encrypt an entire drive, requiring a password at boot for the hard drive to be decrypted and bootable. While I haven't done any personal studies on how effective this is against forensics, it's still a nice security feature to combat those who try to access your data without you knowing. This doesn't apply only to hard drives, but to flash storage like thumb drives as well.

Lastly, I'd point out that its entropy gathering and encryption algorithms are of top quality. In the GUI, the seed pool is filled by having the user move the mouse around a box to collect random data. As for the algorithms, the choices are mostly AES- and Twofish-based, which is quite nice; AES in particular is a robust block cipher and one of the more difficult schemes to break.

With the good, though, comes the bad. While it may not come as a surprise, TrueCrypt has to be installed on the host computer before a TrueCrypt-ed drive can be mounted. A remedy for this does exist in the form of a portable TrueCrypt, but last time I checked it was Windows-only and essentially required you to partition your thumb drive to use it anyway. You also need administrator rights to mount the drive (this goes for Windows and Linux, most likely Mac as well). If it's being used on your own PC/laptop, this doesn't pose much of a problem; but if you're trying to use it at work, on a friend's computer, etc., it might raise a security concern.

There's one more point I want to cover before getting into installation: TrueCrypt can create two different types of volumes. One is a regular volume, which is just like a partition, nothing special. The other is a hidden volume, which works in kind of a hierarchical fashion: it's embedded inside the free space of a regular volume and stays invisible until it's mounted. The two volumes should use separate passphrases as well, to further the secrecy the hidden volume is meant to provide.

Here is where this entry will leave off. Next will cover installation, usage and any other pros and cons I forgot to mention in this entry. Look forward to this next entry very soon.

January 19, 2011  9:40 PM

Dual-booting Linux and Windows 7: The 0xc0000225 Error

Eric Hansen

Disclaimer: This entry is focused on both Windows and Linux…but, I’m sure people will have this same issue, so I will detail my steps.

Recently, I decided to reinstall Linux on my desktop (I gave up after KDE 4.0 was released, and I loathe Gnome) after seeing how much KDE has improved, and went through the process. Mind you, I did all of this via a USB drive, as I have no working optical drive. I dual-booted Kubuntu 10.10 with Windows 7 Premium (x64 for both), and everything went fine. Or so I thought.

I could boot into Kubuntu just fine after the install. I installed programs, programmed a bit, and ventured around a little to get the feel of Ubuntu/Debian-based systems again (I'm too used to Arch Linux). Long story short, after re-enjoying KDE/Linux I rebooted into Windows to make sure I hadn't screwed anything up, and…this is where the story starts.

I got the following error whenever I booted Windows 7:

0xc0000225 Boot selection failed because a required device is inaccessible.

Here, I thought it would be a simple fix. I rebooted into Linux and searched high and low along the Interwebs' wall of information. I tried a bunch of fdisk tricks (most of which wouldn't work, as I was currently on the mounted drive and couldn't unmount it), and cfdisk would fail, saying that the partition table was invalid. Both fdisk and cfdisk complained about the partition table (I don't remember the exact errors off the top of my head, but basically invalid cylinder ranges).

Everything I read on the Internet said this was nothing to worry about…well, they apparently didn't want to boot back into Windows 7. I Googled the fdisk error (as I know cfdisk is very finicky about everything), and everyone just took the short/easy route: reinstalling both OSes. That wasn't very viable to me, mostly because I didn't feel like making a bootable Windows 7 USB for the nth time. So I took a different route: I Googled the Windows error I received. That is what solved my problem.

Step #1: Download the Windows 7 Rescue Disk

Microsoft was smart with Windows 7 and offered a rescue disk for those who dared to do silly things. Download the version of the disk that corresponds to your version of Windows 7 (i.e., x64 for x64 installs).

Step #2: Format the USB as NTFS (if it’s not already; if it is, then delete whatever is on it)

I’m not going to go into how to delete contents of a USB drive, but to format one as NTFS, use mkntfs. The command I used (my USB was /dev/sdc) is:

mkntfs /dev/sdc1

You have to specify the partition number to format. You could optionally use the -Q (quick format) switch, but I had things to occupy myself with anyway, so I let it do its own thing…it took about 45 minutes. It zeroes out the partition (which takes the longest) and then does the formatting.

Step #3: Mount the USB

I let KDE handle it by unplugging and replugging the USB device. But if you'd rather not, you can easily use the mount command again:

mount /dev/sdc1 /mnt/usb

Again, /dev/sdc1 is what it was for me; it might be different for you. The second argument is the path where the device will be mounted; just make sure it exists and is empty.

Step #4: Copy over the ISO content

This one is probably the easiest step, to me.

Now, I'm not gonna lie, I kind of cheated here and used UNetbootin, because it combines about 2–3 steps into one. You can get this software at http://unetbootin.sourceforge.net/ and it runs on Linux and Windows. For Ubuntu-based (and probably Debian) systems, apt-get install unetbootin will install it for you.

All you have to do is choose the disk image (ISO) option (the second radio button), select the Windows 7 rescue disk ISO you downloaded, check "Show All Drives", and choose USB as the type. The reason you mounted the drive earlier is that UNetbootin won't let you select the drive until it's mounted. So choose your drive partition (i.e., /dev/sdc1) and continue on. It'll copy over the files and install a boot loader.

Step #5: Reboot and Run System Repair

Reboot your computer and choose to boot from the USB (this varies from motherboard to motherboard; most of the time you'll choose HDD or USB-HDD). Let the rescue disk find your Windows partition, then choose "Startup Repair". Some people say you should do this 3 times, but I only had to do it once. It'll fix the partition table that got fubar'ed; then click "Finish" to reboot.

At this point, Windows 7 should be booting just fine now.

I'm not sure if this is only caused by Ubuntu or not, but most of the time it's really an issue with resizing a Windows partition (which is what I did to cause the partition-table boo-boo).
