I.T. Security and Linux Administration

March 28, 2011  2:49 PM

Going Green with Virtualization

Eric Hansen Eric Hansen Profile: Eric Hansen

For quite a while now, there has been a strong push for companies (and even individuals) to “go green”: to find ways of operating that don’t depend on consuming a ton of electricity, emitting more fumes into the atmosphere, and so on.  However, it wasn’t until I started working as a Support Analyst at Ford a few years ago that I truly understood the power behind this movement.

This is where virtualization comes in.  You can think of it much like a VPS (virtual private server): you allocate a portion of the host’s total system resources to each virtual machine, so everything runs on the same energy as the host, and only those who look closely will know the difference.  A VPS provider does exactly this, carving out a slice of a physical server’s resources to run a virtual server for each user.

Back to my Ford experience.  Near the end of my tenure there, I ended up helping a manager fix a problem in the server room, which led me to discover that we did not have nearly as many physical servers as it seemed.  For every server we had, there were about four virtual machines running on it.  I wasn’t able to inspect them beyond that, but it does make one wonder: just how powerful can this technology be?  For those servers to hold up a monster of a network like Ford Motor Company’s is pretty impressive.

Fast forward a year, and I’m now working at a web-hosting company.  While I didn’t get much experience with it, we did run XenServer for our VPS hosting solution.  It acted much the same as running a virtual machine, just with a harder-to-use management console.  It still carved the host’s system resources up among the subsystems (virtual servers), each of which looked like a single dedicated server to the end user.  This saved a lot of space and energy as well, since we didn’t need a new physical server for each VPS client.

My general thought is that if this is possible, then isn’t it just as possible to run a “dedicated server” in a virtual setting?  The only real constraints are disk space allocation and CPU power, but there is another way to think about this.  If a customer wants three dedicated servers (one for web, one for database and one for the payment gateway [PCI compliance]), you could build one physical server with roughly three times the resources of each (say, 32 GB of RAM, a 2 TB drive and an 8-core processor) and run all three as virtual machines.  The customer would not notice a difference, and you would save both rack space and energy.

The capability of virtualization to manage many different types of setups and networks is amazing.  While still relatively new, it is a fast-growing demand from businesses, especially with governmental pressure and tightening budgets.  If a company can run four servers with the power consumption of one, versus four times the power consumption for one physical server each, virtualization will be more cost-effective in the end.  The main hurdle, for companies that didn’t start out with virtualization, is migrating services and data over.  But it will be well worth having sysadmins who know how to deploy, maintain and understand this technology.

If you would like more information on this, a fellow blogger here, Ed Tittel, made a very helpful blog post about the VCP (VMware Certified Professional) certification: Lots of VMWare VCP-410 Cert Resources available.  I would like to thank Ed for the information he has provided, and I hope that other people who read these blogs will see the potential of, and need for, this knowledge in the future.

March 25, 2011  12:33 PM

Should You Use Google Apps as a Business E-mail Solution?

Eric Hansen Eric Hansen Profile: Eric Hansen

Back when I was working at my last employer, we had SMTP servers set up on all of our managed servers using qmail (running on CentOS). This worked great for the most part, as we could see when and why an e-mail would flop, who sent it, and other informative bits of information. However, a lot of the time, people would also use Google Apps for their own e-mail service, which gave users more space dedicated to e-mail. After a while, we would advise users who were having issues with our system to sign up for Google Apps instead and use their service.

Fast forward a couple of months: I was starting to venture into entrepreneurship, which included debating whether to host my own e-mail server or use Google Apps. In the end, I decided to use Google Apps (which is who I’m referring to when I say “third party” in this article), as management would be easier and I wouldn’t have to worry about disk space on my own server. But is this the best route for everyone, or are there times to use an in-house solution? While it is partly a matter of preference, here are some points for, and against, using third parties.


Management

Simply put, using a third-party solution makes e-mail management almost non-existent. Most of the work is done off-site, by their management systems. Granted, the sysadmin will still need to intervene in some cases (see: user forgets their password), but about 90-95% of the work a sysadmin would usually have to do is eliminated. This allows them to direct their resources and attention toward other pressing issues instead. Most (if not all by now) of these services also include an intuitive control panel for keeping track of users.

Storage Space

When it comes to space, buying hard drives is chump change to big companies, but if you’re just getting on your feet, it can be a big issue. While users are usually capped by the service provider, most providers offer plenty of space for all users. For example, Google Apps offers about 2-3 GB of space per user, and their free service allows up to 50 accounts. Even at a minimum of 2 GB x 50 users, that’s 100 GB total set aside for your company, for free, whereas a 100 GB drive will cost you money regardless. However, if you’re already an established company, having to buy an extra 100 GB hard drive every so often probably won’t faze you much.

Account Management

I’m not really sure how to classify this, so I’m just labeling it account management. By this, I mean keeping track of users. As said in my last point, Google Apps offers a total of 50 users (and more on their business plans) for your domain. While you can have an unlimited number of users when running your own server, you also have to administer all of those accounts. There are plenty of ways to handle this, and most mail servers have management consoles of some sort as well. With a small number of users (10-20 maximum), I can see running your own server for this. But even at 20, let alone 30, it can get very hectic, unless a majority of these accounts are for things such as PayPal transactions, for example.


Cost

Companies are always trying to find ways to save an extra penny. But there are times when putting some extra money into an external service is more cost-efficient. If you have the manpower to maintain in-house e-mail servers (figure one administrator for every 10-15 accounts), then using your own systems could be viable. However, if you are limited in the number of administrators you can delegate to these tasks, a third party, whether a paid package or a free service, would probably reduce the stress and workload for those involved. While this is completely up to the business, it’d probably be cheaper to pay $100/year for a service than to pay your sysadmin overtime every week trying to figure out e-mail issues.


Web Interface

Do you use an already-in-place web interface, or set one up yourself? While there are e-mail clients for phones, computers and virtually anything else with an Internet connection now, web-based e-mail interfaces are still highly valuable. None of them are without their flaws, but most of the ones people use day-to-day are proprietary, which means you can’t use them for your own purposes (with exceptions, of course): you can’t simply download the web interface and install it on your own server. Yet everyone is so used to these systems that when they use something such as SquirrelMail, for example, they get lost and frustrated, then turned off from the system. Not only does this tend to lead to wasted space, but also a loss of resources that could’ve been used on something else.


While this was a short article of sorts on the topic, it does address some of the important factors of ever-growing e-mail systems. Do I think Google Apps should be used? Everything considered, definitely. All it takes is signing up for the service and adding a few new DNS records to your name server. After that, most of the work is on Google, and you can spend your time fixing systems, solving problems and making things run more efficiently.
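As a sketch of that DNS step: pointing a domain’s mail at Google Apps meant adding MX records like the following to your zone file (the hostnames below were Google’s published values at the time; check Google’s setup documentation for your own account, and replace example.com with your domain):

```
example.com.    IN  MX  1   ASPMX.L.GOOGLE.COM.
example.com.    IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
example.com.    IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
example.com.    IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
example.com.    IN  MX  10  ASPMX3.GOOGLEMAIL.COM.
```

After reloading the zone, mail for the domain routes to Google’s servers and your own box never has to touch it.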

March 22, 2011  3:03 PM

IT Knowledge Exchange and its badge reward system

Eric Hansen Eric Hansen Profile: Eric Hansen

A few weeks ago, Melanie posted on ITKE’s community blog, “Earning Badges Pay Off – Literally”.  In it, she describes a new promotion of sorts they are running with the ITKE community.  Basically, the more active you are on the IT Questions section, the sooner you can get some nice swag or an Amazon.com gift card, with values ranging from $10-$100 depending on your contributions.

You might be wondering what the catch is, and it’s simple: be active (and live in America, Canada or Europe if you want the perks that come with being active).

Wondering what “be active” means?  Do you have to be a blogger to get these?  Should you treat the site like it’s your package-tracking website?  The answer is no to both.  You can go about your daily business, go to work, spend time with the family or what-have-you.  All you have to do is contribute to the questions people ask, and/or ask questions yourself.  Everything you do there rewards you with points.  The only perk we bloggers really get on ITKE is a badge saying we are writers for this community.

If you’re wondering how many points you need in order to get some of the nicer gifts (including a t-shirt), what the badges look like, what they mean, etc., please see the FAQ section.  Looking around the IT Questions section of ITKE, you’ll notice a lot of people already have these badges.

I look forward to seeing even more active members on ITKE.  There are plenty of questions already asked and answered, but never be afraid to ask your own.  ITKE is a very helpful and open community for all walks of IT life.

March 21, 2011  2:35 PM

Installing Nagios on Linux Part 2: Installing the Plugins

Eric Hansen Eric Hansen Profile: Eric Hansen

In part one of this series, Installing Nagios Part 1: Handling the Core, I talked about how to install the Nagios core and get the fundamentals up and running.  This time, I will cover installing the plug-ins that are required to monitor systems and do other essential things.

Step 1: Download and Extract

There’s only one download for the essential plugins, which makes this task a lot easier.

  1. Download nagios-plugins-1.4.15.tar.gz
    1. wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.15.tar.gz
  2. Extract the tar.gz file
    1. tar xf nagios-plugins-1.4.15.tar.gz
  3. Go to the newly created directory
    1. cd nagios-plugins-1.4.15/
Step 2: Configure and Make
This is a pretty short step, and takes a bit less time than the core package did.
  1. ./configure --with-nagios-user=nagios --with-nagios-group=nagios
    1. There’s more configuration options available, but this is the essential list that I feel is needed
  2. make && make install
    1. If you’ve never used “&&” on the command line, it runs the first command (“make”) and, ONLY IF that succeeds, runs the second (“make install”).
After this, the plug-ins will be ready to use.  We only have a couple more steps before Nagios is up and working as it should.
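As a quick illustration of that “&&” behavior, using true and false as stand-ins for real commands like make:

```shell
# '&&' runs the right-hand command only if the left-hand one exits 0
true && echo "first command succeeded, so this prints"
false && echo "first command failed, so this never prints"
echo "the script itself keeps going either way"
```

This is why `make && make install` is a safe one-liner: a failed build never gets installed.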
Step 3: Configure Contact Information and Web Server (Apache and Lighttpd)
The first step is to configure the contact information for nagiosadmin.  This is done because alerts will be sent to this account, and you don’t want your SMTP server continuously keeping bad e-mails in the queue.  After that, we’ll configure both Apache and Lighttpd to use Nagios.  Since they do things differently, and Lighttpd is used by popular websites (i.e.: YouTube, various BitTorrent tracker sites, etc.), both are worth covering here.
  1. cd /usr/local/nagios/etc/objects
  2. Open up “contacts.cfg” in your editor (I use nano personally)
  3. Edit line 35, where it asks for the e-mail address.  I put a generic alerts address here, but you can put whatever you want.
After that, save the file and exit the editor.  Since Apache is still the dominant web server to this date, I’ll cover how to configure Nagios with it first, then with Lighttpd.  Before doing this, make sure that Nagios isn’t running (/etc/init.d/nagios stop).
  1. Make note of line 22 (“AuthUserFile”) in /etc/httpd/conf/extra/nagios.conf.  Mine says /usr/local/nagios/etc/htpasswd.users, but yours could be different.
    1. If /etc/httpd/conf/extra/nagios.conf doesn’t exist, check whether nagios-3.2.3/sample-config/httpd.conf exists.  If it does, move it to /etc/httpd/conf/extra/nagios.conf
  2. Use htpasswd to generate an account that’ll be able to access Nagios
    1. htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
      1. nagiosadmin can be any username really, as the default nagios.conf file only asks for a valid user
  3. Edit httpd.conf to include nagios.conf
    1. Near the bottom of httpd.conf there’ll be a bunch of “Include […].conf” lines; somewhere among them, add “Include conf/extra/nagios.conf” (without quotes).
  4. Make sure /usr/local/nagios is owned by nagios:nagios by doing this: chown -R nagios:nagios /usr/local/nagios
  5. (Re)start httpd
    1. /etc/init.d/httpd restart
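For reference, the sample nagios.conf installed earlier looks roughly like this (treat it as a sketch, not a verbatim copy; the paths match the default /usr/local/nagios install used in this series):

```apache
ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"

<Directory "/usr/local/nagios/sbin">
   Options ExecCGI
   AllowOverride None
   Order allow,deny
   Allow from all
   AuthName "Nagios Access"
   AuthType Basic
   AuthUserFile /usr/local/nagios/etc/htpasswd.users
   Require valid-user
</Directory>

Alias /nagios "/usr/local/nagios/share"

<Directory "/usr/local/nagios/share">
   Options None
   AllowOverride None
   Order allow,deny
   Allow from all
   AuthName "Nagios Access"
   AuthType Basic
   AuthUserFile /usr/local/nagios/etc/htpasswd.users
   Require valid-user
</Directory>
```

If your htpasswd file lives elsewhere, the AuthUserFile lines are the ones to change.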
Here’s how to do this for Lighttpd:
  1. Edit your lighttpd.conf file, and make sure that you have the following modules enabled (uncommented):
    1. mod_alias
    2. mod_auth
    3. mod_cgi
  2. Configure alias.url to be like this:
    1. alias.url = ( "/nagios/cgi-bin" => "/usr/local/nagios/sbin", "/nagios" => "/usr/local/nagios/share" )
  3. Make sure Lighttpd doesn’t run Nagios’ compiled CGI programs through Perl
    1. $HTTP["url"] =~ "^/nagios/cgi-bin" { cgi.assign = ( "" => "" ) }
  4. Since Lighttpd doesn’t come with a htpasswd program, I put a script at the bottom of this post for a simple one you can use (it’s from the following post: Minimalistic HIDS: OSSEC)
    1. htpasswd "nagiosadmin" "Nagios Access" "nagiosadmin password" >> /etc/lighttpd/nagios.users
      1. Replace “nagiosadmin password” with the account’s password, but remember to keep the quotes there
  5. Configure Lighttpd for authentication
      $HTTP["url"] =~ "nagios" {
        auth.backend = "htpasswd"
        auth.backend.htpasswd.userfile = "/etc/lighttpd/nagios.users"
        auth.require = ( "" => (
          "method" => "basic",
          "realm" => "nagios",
          "require" => "user=nagiosadmin"
        ) )
      }
  6. Restart Lighttpd
      1. /etc/init.d/lighttpd restart
After you configure your web server, you should be able to access http://localhost/nagios/ with the username nagiosadmin and the password you set in the password file.  If you get an error saying that the documents can’t be found, make sure that /usr/local/nagios/ is owned by nagios (including the files inside share).  Also make sure PHP is set up for your server, and that the configuration files are correct.

March 20, 2011  11:42 PM

Installing Nagios on Linux Part 1: Handling the Core

Eric Hansen Eric Hansen Profile: Eric Hansen

When it comes to intrusion detection systems (IDSs), there’s more than a handful of choices out there, especially for cross-platform and variant systems like Linux.  However, I don’t think any of them have taken the security world by storm as Nagios has.

While I have covered Trend Micro’s IDS product, OSSEC, here, I do believe that one of the “kings of the domain” deserves its own place here as well.  While OSSEC claims compliance with various standards (including governmental and health [i.e.: HIPAA]), Nagios touts itself as the industry standard.  One drawback with Nagios, though, is that the core alone doesn’t include the monitoring tools you need for it to be useful, so you need to install the plug-ins as well.  But if you look past that, you’ll see that Nagios is quite powerful in what it does, and that is being a very robust host-based intrusion detection system.  In this guide, I’ll walk through how to install the Nagios core on Linux, and following posts will go through installing the other components.

Step 1: Downloading and Extracting

Unlike my OSSEC posts, where I pre-installed the software and just walked through the steps of what I did, I’ll be documenting each step as I take it, so it’ll be as though you’re installing it along with me.  For those wondering, Nagios is native to Linux, but has ports and (limited) support for Windows systems (using NSClient++) as well.

But, first, you’ll need to download the tarball and extract the files:

  1. Visit http://www.nagios.org/download/core/thanks/
    1. Or directly download it: wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.3.tar.gz
  2. Extract the tarball
    1. tar xf nagios-3.2.3.tar.gz
  3. Change directories to Nagios
    1. cd nagios-3.2.3

In case you haven’t noticed in my previous posts, when I use tar I don’t use the v or z flags like some guides direct you to.  Quite honestly, I don’t find the extra printout from “v” (which just lists the files being extracted) useful, and “z” is unnecessary in practice, since tar works out the compression on its own.
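A quick demonstration of that point, using a throwaway archive under /tmp:

```shell
# build a small gzipped tarball to play with
mkdir -p /tmp/tardemo/src
echo "hello" > /tmp/tardemo/src/file.txt
tar czf /tmp/tardemo/demo.tar.gz -C /tmp/tardemo src
rm -rf /tmp/tardemo/src

# extract with plain 'xf': no 'z' needed, tar detects the gzip compression
tar xf /tmp/tardemo/demo.tar.gz -C /tmp/tardemo
cat /tmp/tardemo/src/file.txt   # prints: hello
```

The same `tar xf` invocation works for .tar.gz, .tar.bz2 and plain .tar files alike.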

Step 2: Creating Nagios Users and Groups

After handling the downloading and extracting of Nagios, we need to prep our system as well.  To do this, we need to create a user and group for Nagios so that we can complete the install.  You can run ./configure and make just fine, but once you try make install, if you skipped this step, make will error out about there being no user for Nagios.  Please note that the username (“nagios”) and group name (“nagcmd”) can be anything you like, but you do need to be consistent: if you change them, replace them everywhere below.

  1. Create the Nagios user account (and give it a password):
    1. useradd -m nagios
    2. passwd nagios
      1. A password is given to nagios because it’s bad sysadmin practice to leave any account without a password, even a dummy one.  You could also set a non-login shell, but that’s up to the admin.
  2. Create the Nagios group
    1. groupadd nagcmd
  3. Assign usernames to nagcmd
    1. usermod -a -G nagcmd nagios
    2. usermod -a -G nagcmd www-data
      1. This is for the web UI.  If your web server doesn’t run under its own specific user, make sure whatever user it runs as can access the Nagios files.  If your web server runs as a different user, replace www-data with that username.

Step 3: Running ./configure & make

This step requires g++ and make to be installed, but most systems I run into already have them.  If not, install them before moving on.  Depending on the system and how it’s set up, Nagios will install into different directories by default.

  1. Run this command (use ./configure –help for more switches):
    1. ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-command-user=nagcmd --with-command-group=nagcmd
      1. When it finishes, make note of the user and group names you used for both command and Nagios, as well as the URLs it prints out.
  2. Run make to complete the compilation process
    1. make all
      1. This doesn’t take long even on my old AMD Sempron 3100+ processor with 720 MB of RAM (about 5 minutes max).
      2. This compiles everything that needs compiling, including the CGI programs used by the web UI.
    2. After make all finishes without errors, you’ll have to run the following in this order:
      1. make install
        1. Installs the core Nagios files into the system
      2. make install-init
        1. Installs the init (boot-up) script wherever your init scripts are stored; usually /etc/init.d/ or /etc/rc.d
      3. make install-commandmode
        1. Installs and configures directories and files for Nagios to use them (such as sendmail)
      4. make install-config
        1. Installs template configuration files that are needed
      5. make install-webconf
        1. Installs the Apache files for the web UI.  If your web configuration path isn’t /etc/httpd/conf.d, edit line 35 of nagios-3.2.3/Makefile.

After running all the make commands above, Nagios itself is finally installed.  From here, it’s best not to start Nagios just yet.

The next guide will go over the Nagios plug-ins, and probably configuring the web server.

(Photo: Nagios.org)

March 17, 2011  11:26 PM

Installing the Linux OSSEC agent

Eric Hansen Eric Hansen Profile: Eric Hansen

In my previous post, Minimalistic HIDS: OSSEC, I gave a guide on how to install the OSSEC server on Linux. While the server has a built-in agent (so you can monitor the server’s own activity), if you have more than one Linux machine to monitor, you’ll need to install agents on those machines as well.

Step 1: Add the agent to the server

You’d think installing the agent on the machine would come first, right? Not with OSSEC. What you actually need to do, before anything else, is register the new agent on the server. So fire up your SSH session and get ready for some fun. Go to where you installed OSSEC (I’ll be using /var/ossec/ here), and into the bin directory (/var/ossec/bin/). Inside, run the manage_agents program (“./manage_agents”).

From there, enter “a” (without quotes; it’s not case-sensitive) to add an agent to the server. The first thing it’ll prompt you for is a name for the agent. I put in the host name of the machine, as it makes things easier later on.

The next thing it’ll want is the IP of the agent. This field is a little tricky: if the machine gets its IP from a DHCP server, you’ll need to enter the address in CIDR format covering the range it could be assigned from (for example, 192.168.1.0/24 if the machine lives somewhere on the 192.168.1.x subnet). This is due to how OSSEC matches agents to addresses, and quite honestly, I can’t explain it beyond that…but I scratched my head a lot at first.

Lastly, it’ll prompt you for an ID for the host. This has to be an ID not currently in use. Also, please be aware that any future OSSEC task requiring the agent’s ID needs the leading zeros (so if you give the ID 003, you must use 003, not 3, for any ID requests). This makes sense, but is also kind of annoying at the same time. After that, it’ll ask you to confirm, so type “y” and hit enter.

Step 2: Get the authorization key for the agent

While you could restart the server now (a restart is required for it to see new agents), we’ll do that in a moment. First, we’ll get the agent’s authorization key. After adding the agent, you should be back at the program’s main menu. This time, hit “e” and enter. This lists all the agents registered with the server. Enter the ID of the agent and hit enter, and the agent’s key will appear as text. Copy and paste this into a document somewhere, as you’ll need it later, then exit the manage_agents program. From here, restart OSSEC before continuing so the server picks up the new agent.

Step 3: Set up the agent

Here, I’m going to assume you already downloaded and untar’ed the compressed file (this step is #1 in my first guide linked above). So, I’m going to skip straight to the installation step.

Inside the downloaded and untarred OSSEC folder, run install.sh and choose your language preference again. If for some reason you are installing an agent on a machine that already has OSSEC installed (which shouldn’t be necessary, as all the other install types already include an agent), say no to upgrading OSSEC. Otherwise, choose agent as your install type. It’ll ask where you want to install it (by default /var/ossec/); I chose the default path, which is what I use in all my OSSEC guides anyway. After that, it’ll ask for the IP address of the OSSEC server. Put in the actual IP address of the server (not a CIDR-notated range). For the sake of consistency, I accepted all the default options here as well. After this, it will compile OSSEC and install it into the path specified.

Step 4: Import the agent key

On the new agent, run manage_agents from /var/ossec/bin/, and choose “I”. From there, paste in the authentication key you received from the server when you added the agent there. Once you paste the key, it’ll ask if you want to confirm the addition; tell it yes. After this, restart the agent (/var/ossec/bin/ossec-control restart), and OSSEC should start monitoring it.

To make sure that the agent is working properly, check that there are no “cannot connect to server”-type errors in /var/ossec/logs/ossec.log.
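A quick way to do that check from the shell (the log path assumes the default install used above; the error strings are examples of what to look for, not an exhaustive list):

```shell
# scan the agent log for connection failures to the OSSEC server
LOG=/var/ossec/logs/ossec.log
if grep -qiE "unable to connect|connection refused" "$LOG" 2>/dev/null; then
    echo "agent cannot reach the server - check the key, IP and firewall"
else
    echo "no connection errors found"
fi
```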

March 11, 2011  3:17 PM

Minimalistic HIDS: OSSEC

Eric Hansen Eric Hansen Profile: Eric Hansen

While this software is far from new in the world of IDSes, I’m not sure how many people actually know about it (especially since Snort takes the crown most of the time).  And while I’m not a fan of Trend Micro and their products, they are backing what can easily be a near-perfect IDS in OSSEC.

What is a HIDS and OSSEC?

Without going into all the gritty details: a HIDS is a host-based intrusion detection system.  Basically, it monitors individual systems on the network (instead of the network itself, which a NIDS [network intrusion detection system] does).  From what I’ve found, a NIDS is good if you want to monitor the whole network, while a HIDS is useful if you only want to monitor specific systems (such as just an e-mail and web server).  Generally, to compensate for the lack of network-wide detection, a HIDS monitors logs as well, such as syslog.  Some also monitor file-system activity and the like, but that’s not the priority.

As for OSSEC, it pretty much exemplifies what a HIDS is used for.  You install the server itself on one machine (if possible, not the one you want to monitor), and agents (or sensors) on the machines you want to monitor.  It will monitor the file system for changes, watch logs, and do essentially anything else you tell it to (with plug-ins).  OSSEC also offers a notification system for those who want to be e-mailed about the changes it detects.  Lastly, OSSEC runs on many systems (including Linux, Mac, Windows and Unix), so there is a wide range of support, not to mention that their mailing list is very active.

How Do I Download and Install OSSEC?

The first thing to point out is that Windows has an agent only, so you will have to use a different operating system for the server.  This might change in the future, but for now, this is how it is.  I’ll be installing on my home server; here are the specs:

Linux (Arch Linux) running kernel 2.6.37 on an AMD Sempron 3100+ processor with about 720 MB of RAM (and 7 MB of swap)

I’ll also go through setting up the OSSEC web UI, as it is quite helpful for monitoring the system as well.  However, this was done using Lighttpd running PHP 5.3.3 (FastCGI); the Apache instructions on OSSEC’s website should be enough for that web server.  For those wondering why I’m not using Apache: it has always been a resource hog (even in its 1.x days), and I find Lighttpd easier to maintain and manage.  One last point: this article covers how to install the server itself.  Installing and adding sensors will come in a different article, as that has its own heartaches and headaches.

Before installing OSSEC, you need to have gcc (or g++, or some other C compiler), htdigest (if you’re using Apache, it’s already there), md5sum and sha1sum.  If whereis md5sum or whereis sha1sum returns no path, see if your system has md5 or sha1 (most flavors have one or the other), and create a symlink to its *sum counterpart.  As for htdigest, here’s a script that’ll work if you don’t want to install Apache just for this one tool:


#!/bin/sh
# minimal htdigest replacement: htdigest "username" "realm" "password"
user="$1"
realm="$2"
pass="$3"

hash=`echo -n "$user:$realm:$pass" | md5sum | cut -b -32`

echo "$user:$realm:$hash"

How to use: htdigest “username” “name of OSSEC realm” “password” > ossec.htdigest (example: htdigest “bob” “OSSEC” “denver” > ossec.htdigest). The quotes are required if any of the values contain a space.
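Before starting the install, it can be handy to sanity-check the prerequisites in one go (the tool names here are the common Linux ones mentioned above; adjust for your flavor):

```shell
# report any missing build/hash tools before starting the OSSEC install
for tool in gcc make md5sum sha1sum; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```

Anything it reports as missing should be installed (or symlinked, in the md5/sha1 case) first.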

Step 1: Download and untar

The latest build right now is 2.5.1, and according to their site, they put out a new release (including new rules and definitions) every 3-4 months.  While there’s no changelog to indicate release dates, ls -liha shows most files modified on Oct 12th (I untarred the file today).

First, download and extract the latest tarball:

wget http://www.ossec.net/files/ossec-hids-2.5.1.tar.gz && tar -xf ossec-hids-2.5.1.tar.gz

This will create an ossec-hids-2.5.1 folder.  Inside is install.sh, which you need to run:


The first prompt asks which language you want to use.  The default is English (en); choose as needed. After that, a message appears saying that a C compiler needs to be installed; just hit enter. Next, you choose what you want to install:

Server: If you are going to have more than one system to monitor, this is the choice to choose. Besides including monitoring the system its installed on, it also offers the ability for remote administration of the agents as well.
Local: Same as server, minus the ability to monitor agents.
Agent: Basically a node on the network (useful if it’s a server that is used at least moderately). Installing OSSEC as an agent allows the computer to connect to the server and send various information.

My personal choice here was server, but local can work as well. If you choose local you’ll see fewer options during the following steps, but I’ll go with server for the full experience.

Choosing the path is pretty simple; the default location (/var/ossec) is generally best. If you change it, however, make note of it. E-mail notifications are enabled by default, and I kept them on, as they’re needed. One more thing to note here: if you are using GMail as your SMTP server, you have to use this host: gmail-smtp-in.l.google.com

The integrity-check daemon runs on the server and monitors important files; if they are modified (their checksums change), it sends out a notice. The rootkit detection, though, I turn off: it’s a Linux machine that isn’t used much from the outside, so I’m not worried about it. As for active response, it’s best to keep it enabled, especially if you decide to develop your own plug-ins for OSSEC later on. This feature basically lets OSSEC act as a firewall of sorts, depending on what you have it do (i.e.: if php.ini is modified by a user that’s not root, you can block access for that user). Firewall-drop events are enabled for me, as the server I have OSSEC installed on is heavily used over SSH, so I prefer to be safe rather than sorry; all it really does is let OSSEC add iptables rules. Remote syslog is disabled, as I don’t have any other Linux machines connecting to the server, but if you do, or plan to, it’s safe to enable it. After this, OSSEC runs its makefile, compiling everything needed. Even on my old server it doesn’t take longer than 5 minutes.

Step 2: Getting OSSEC to Run

For some systems, the installer will create an init script at the end of a successful compile. If that doesn't happen for some reason, though, you can use this simple one:


#!/bin/sh
/var/ossec/bin/ossec-control "$1"

I tried creating a simple symlink, but the path to ossec-control isn't hard-coded, so it would cause ossec-control to run from /etc/rc.d/ instead of /var/ossec. You can pass the generic arguments to the script (even status).
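Put together, installing that wrapper looks something like this; the /etc/rc.d path is Arch-style, so adjust it for your distribution:

```shell
# Save the one-line wrapper as an init script and mark it executable.
# /etc/rc.d/ossec is the Arch-style location; adjust for your distribution.
cat > /etc/rc.d/ossec <<'EOF'
#!/bin/sh
/var/ossec/bin/ossec-control "$1"
EOF
chmod +x /etc/rc.d/ossec
/etc/rc.d/ossec start
```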

Step 3: Installing the Web UI

Once you have OSSEC running, it's time to install the web UI. The process itself is easy, but some configuration is needed for it to work correctly. For Lighttpd modules, you need mod_fastcgi (for FastCGI) and mod_auth (for authentication) enabled. The basic setup of Lighttpd + PHP is out of scope for this article. I'll assume the web root is /srv/http; make the necessary changes to suit your server.

To begin, you need to download and untar the web UI file. For the sake of simplicity, the current directory will be in the web root (/srv/http).

wget http://www.ossec.net/files/ui/ossec-wui-0.3.tar.gz && tar -xf ossec-wui-0.3.tar.gz

To make things easier you can rename the folder (mv ossec-wui-0.3 owui). If you changed OSSEC's install path, make sure $ossec_dir in ossec_config.php points to the correct location. For Apache, you can run setup.sh to create a user for logging into the web UI (that's all the file does); if you're using Lighttpd, you can ignore that file.
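If you did move OSSEC, the edit can be done in one line; the /opt/ossec path below is only an example, and owui is the renamed folder from above:

```shell
# Point $ossec_dir in the web UI's ossec_config.php at a non-default
# install path (/opt/ossec is an example path).
sed -i 's|^\$ossec_dir.*|$ossec_dir = "/opt/ossec";|' owui/ossec_config.php
```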

Step 4: Modify PHP

You can do this step at any point, but since we'll have to restart the web server anyway, we may as well do it now. Edit your php.ini file and add the path of OSSEC to open_basedir. You can skip this step if you wish, but if you later get an error saying the web UI can't open the OSSEC directory, this is what fixed it for me.
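As a sketch, assuming the default /var/ossec install path and the /srv/http web root used elsewhere in this article, the php.ini line would end up looking something like:

```ini
; php.ini: append OSSEC's path to open_basedir, colon-separated.
; /srv/http and /var/ossec are the paths assumed in this article.
open_basedir = /srv/http:/var/ossec
```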

Step 5: Configure Lighttpd

First, I’ll show what I have set in my config file for Lighttpd, and then explain the important parts.

$HTTP["url"] =~ "^/owui" {
    auth.backend = "htdigest"
    auth.backend.htdigest.userfile = "/etc/lighttpd/ossec_auth"
    auth.require = ( "" =>
        (
            "method"  => "digest",
            "realm"   => "ossec",
            "require" => "user=raevin"
        )
    )
}

The authentication method is up to you, though digest is what the OSSEC install guide recommends. You don't strictly need authentication at all, but it's highly recommended. The line:

auth.backend.htdigest.userfile = "/etc/lighttpd/ossec_auth"

should point to your authentication file (see my script above for more information). After that, make sure the realm matches what you put in your authentication file, as does the user named on the require line below it.
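For reference, a digest entry is just user:realm:md5(user:realm:password). A sketch of generating one by hand, where the username, realm and password are all examples to substitute:

```shell
# Build an htdigest line for lighttpd's mod_auth.
# user, realm and pass here are examples; substitute your own.
user="raevin"; realm="ossec"; pass="secret"
hash=$(printf '%s:%s:%s' "$user" "$realm" "$pass" | md5sum | cut -d' ' -f1)
printf '%s:%s:%s\n' "$user" "$realm" "$hash"   # append this line to /etc/lighttpd/ossec_auth
```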

Step 6: Testing the Web UI

Restart the Lighttpd daemon, clear the temp files that FastCGI creates (if your init script doesn't do that automatically for you), and check whether you can access the web UI at http://localhost/owui/. If it works, you should see at least one monitored host. If you have any issues, leave a comment and I'll try to help.


To update or uninstall your OSSEC files, download the newest tar file and run install.sh again. After you choose your language, it'll ask if you wish to update your setup; if you choose no, it'll then ask if you wish to uninstall OSSEC.

Making plug-ins and such for OSSEC is kind of tricky. I'll try to write a guide on that after I go through how to set up agents.

March 8, 2011  10:40 PM

On-the-fly Data Compression: Wingless Bird or Golden Savior?

Eric Hansen

With solid-state drives (SSDs) seeing a lot of use now, especially with the growing adoption of netbooks over aging notebooks and laptops, people have been bringing up the idea of on-the-fly data compression.  The main question is: how efficient is this concept?  As it stands, I have yet to find a Linux file system that supports it out of the box.  What I want to address in this post is whether that should change.

What Is On-the-Fly Data Compression?

Basically, this whole post is about the ability to compress data on the hard drive and decompress it when it's accessed or requested.  The premise is that this saves disk space, and that's it.  Since SSDs have come into the market with a punch, there has been a bit more pressure for this ability, especially with non-Linux file systems having been able to do it for years now.
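The space side of the trade-off can be seen with ordinary tools. A quick sketch, where the file name and its repetitive contents are just an example:

```shell
# Rough illustration of the space savings transparent compression is after:
# compress a repetitive file and compare the sizes (sample.txt is an example).
printf 'hello world %.0s' $(seq 1 1000) > sample.txt
orig=$(wc -c < sample.txt)
gzip -c sample.txt > sample.txt.gz
comp=$(wc -c < sample.txt.gz)
echo "original: $orig bytes, compressed: $comp bytes"
```

An on-the-fly scheme does this behind the file system's back, paying the compression cost on every write and the decompression cost on every read.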

Why Should I Use This?

To be unbiased, it's not a clear-cut question (similar to "which Linux distro/flavor should I use?").  It has its moments, definitely, but it also has its limits; I'll do my best to address both in this post.

How Do I Use This In Linux?

This is what draws me away from this technology/feature.  For Linux, ext2 and ext3 do support it, but you have to patch the kernel for it to be there.  For ext2, there's e2compr, which works with 2.2, 2.4 and 2.6 kernels (with the last 2.6 support being in 2.6.22-25).  For ext3, there's a port of e2compr called e3compr.  The problem with both solutions is that they haven't been updated in some time (e2compr since 2009, e3compr since 2008).  As for the other file systems (including ext4 and ReiserFS), I wasn't able to find any information about them supporting this feature.

Why Wouldn’t I Use This?

I really didn't want to start this post out with the negatives, as I love this concept.  But there are a few issues I see with it that I haven't brought up yet.

The first issue is disk performance.  SSDs already have a shorter life-span than their hard-drive step-brothers, which makes me wonder why people think this technology would be great for SSDs to begin with.  I don't see many issues on the read side; there's no difference between a compressed and an uncompressed file in this regard, and stat() will still show the same information.  The write side of things, however, is what scares me the most.  If you think there isn't much to it, consider this: even with laptops carrying 2 GB or more of memory, where does the file get decompressed?  If it's decompressed on the drive, then you'll be using up to double the space (even if only temporarily, you'd have to make sure you have that space).  If you decide to decompress files to a tmpfs (RAM disk), you still have to make sure you have the RAM for it (which is even trickier, as RAM usage fluctuates a lot more).  Of course, there's the possibility of swap helping out here, but it seems like a lost cause, given the bigger chance of data loss or corruption, especially if you end up running a program that hits a buffer overflow/overrun.


Would I use this in my everyday life?  No, not even on my netbook.  I don't feel that the possible risks justify the possible gain in disk space.  Also, while I love using Linux, I don't like tinkering with the kernel, especially with patches that are more than six months old (let alone two or three years).

There are file systems out there (such as NTFS) that do offer this, but I feel there's a reason it isn't enabled by default.  On an SSD setup you might enjoy the added space, as SSDs are limited in what they can hold, but with hard drives running a good 500 GB to 1 TB or more, the sacrifice is too great.

March 3, 2011  2:18 PM

Nagios Checker [Bash Script]

Eric Hansen

While working on another post I'll be publishing here later, I decided to venture back into bash scripting.  While I'm sure there are plenty of Nagios scripts out there (I haven't really looked myself, but Nagios has a big enough community), I decided to write my own basic one.  The only requirement (besides having Nagios installed) is having SSMTP installed as your mailer.  If you use a different mail program, you should be able to modify the send-mail line with ease.  I'm going to post the code first, then go into detail about the important parts.


#!/bin/bash
# Check whether Nagios is running; if not, e-mail an alert through ssmtp.

NAGIOS=$(/etc/rc.d/nagios status | grep "is running")

if [ -z "$NAGIOS" ]; then
    echo "To: admin@domain.com" > /tmp/nag_check
    echo "From: nagiosadmin@domain.com" >> /tmp/nag_check
    echo "Subject: Nagios system down on `hostname`" >> /tmp/nag_check
    echo "" >> /tmp/nag_check
    echo "`date \"+%D %r\"`: Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible." >> /tmp/nag_check
    /usr/sbin/ssmtp -t < /tmp/nag_check
fi


The first thing to note is the NAGIOS line:

NAGIOS=$(/etc/rc.d/nagios status | grep "is running")

The path (/etc/rc.d) applies to Arch Linux, which my server runs.  You'll have to modify the path to fit your server if there isn't an /etc/rc.d/nagios file.  That file is just a basic init script, which also exists at /etc/nagios/daemon-init; again, the path may vary depending on the flavor used, but that one should be more standard.  All the line does is ask for the status of Nagios (is it running or not?).  If it's running, the output will be something like "Nagios (pid: ###) is running...", and if it's not, it'll say it can't find the lock file.  Next, we check whether the "is running" text is missing (the -z test); note that $NAGIOS is kept in quotes so the test behaves correctly even when the variable is empty.  Beyond this there's nothing else to cover about the code, as it's pretty self-explanatory.  The "-t" switch for ssmtp just says to scan the /tmp/nag_check file for the To/From/BCC/Subject lines instead of having them passed via the CLI (which is why we write them into the file).

There are two other ways to make this easier, though.  On my system, the lock file is located at /var/nagios/nagios.lock (which, again, varies with the flavor of your system; it's best to do a find /etc -iname nagios.cfg and scan that file for the lock-file variable to find the correct path for you).  You could take the NAGIOS line out completely and replace the if test with "if [ ! -e /var/nagios/nagios.lock ]; then" instead.  This checks whether the file doesn't exist (-e tests that the file exists, and the ! negates the condition, i.e. "if the file is not present").  This is a little bit faster (not a noticeable difference, however), though I'm not sure what other performance differences there are.
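A sketch of that variant; the lock-file path is the one on my system, and the e-mail portion stays the same as in the script above:

```shell
#!/bin/bash
# Alert when the Nagios lock file is missing, i.e. Nagios is not running.
# /var/nagios/nagios.lock is the path on my system; adjust it to yours.
LOCKFILE=/var/nagios/nagios.lock
if [ ! -e "$LOCKFILE" ]; then
    echo "Nagios appears to be down on $(hostname)"
    # ...build /tmp/nag_check and send it with ssmtp -t, as before
fi
```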

Lastly (and this one I just thought of while writing this post) is probably the most efficient in terms of processing.  You can either strip out all of the NAGIOS checking, so the script just sends the e-mail, or move the e-mail code into /etc/nagios/daemon-init itself, where it detects that Nagios isn't running, and let the init script do it by itself.  The only difference in the cron job is that instead of calling the script, you'd call the status check.

When sent out, the e-mail will look like this:

Subject: Nagios system down on *hostname*

Body: 03/03/11 01:49:16 PM: Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible.

February 25, 2011  1:04 PM

Software VPN vs. SSH: Which is better?

Eric Hansen

In the IT world lately there's been a lot of buzz about VPNs, and how to use them effectively for remote administration.  Amid all this, it seems a lot of people are forgetting the roots of remote administration (at least within the last few years), from when VPNs were only just getting recognized.  Both have their pros and cons, but which one is better for administration?  Let's go a little in-depth with both and see.


Security

Hands down, in the general scheme of things, I would say VPN has this area won.  While SSH does have a high security standard, the way a VPN handles encryption is more thorough.  This is mostly because of the VPN's encapsulation method (which will be covered again later), which adds a security measure SSH doesn't have.

For those who aren't familiar with encapsulation, especially as VPNs use it, here's a general overview.  When you connect through a VPN tunnel, the TCP/IP packets carry more overhead, due in part to the VPN service placing an encryption header on top of the packet itself.  While this does slow traffic down (which will also be covered later), companies have gotten wise to this technology and added measures to make sure it can't be tampered with.  This is generally done with a token (such as a 6-digit RSA SecurID code) that authenticates the user, instead of (just) a username and password.

In the realm of SSH, the protocol itself is pretty rigorous when it comes to security.  The fundamental weakness, though, is when a key isn't used.  In previous articles I gave a step-by-step guide on how to set up key authentication, and there's a major reason for that: when you log in via SSH with just a password, you're open to brute-force guessing, and to having the password captured by a man-in-the-middle if you ever accept an unverified host key.  Now, if you're only connecting to a home computer sitting right next to you, that's almost never an issue.  But say you're at work and you want to check the iostats on the server; relying on a guessable password would not be the logical choice.  That's practically inviting everyone to log into your server.  VPN sidesteps much of this, as the encapsulation header contains nothing damaging to the user; all the data sits inside the packet underneath it.
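As a refresher, the key setup referenced above boils down to something like this; the key type and file paths are the OpenSSH defaults, and user@server is a placeholder:

```shell
# Generate a key pair locally, then push the public half to the server.
# The empty passphrase (-N "") is only for illustration; use a real one.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id user@server   # user@server is a placeholder
```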


Speed

This is the one that gets to me the most.  Anyone who has used both protocols will, I'm sure, agree that SSH is the faster of the two.  Fortunately, the reason is simple.

In the last point, I talked about how VPN's encapsulation header provides an additional layer of security.  While it adds that extra assurance, it comes at a price.  With a VPN, roughly speaking, the packet payload gets fragmented (a typical Ethernet frame can only carry about 1,500 bytes).  Less data fits in each packet, so more packets have to be sent, increasing bandwidth usage.  And once the encapsulated packet reaches its destination, the destination has to strip the header off again to get at the payload.  So, basically, a website that generally takes about 5 seconds to load can take 10-15 seconds (if you're lucky).

SSH, however, plays more nicely with the data.  With or without a key, packets don't take as long to get from A to B.  There is far less overhead (i.e. extra headers) to deal with, and while there is still a slight performance hit, even when using SSH as a proxy that same page may take only a couple of seconds longer to load.

Ease of Use

For the administrator setting up the service, this somewhat depends.  Since most Linux flavors ship at least one VPN solution in their repos, and I have yet to run into one that doesn't include an SSH server/client, it's pretty easy for the administrator to run a single command and install either.  This category, though, is really about the users the administrator will have to support with said software.

When I worked on the Ford help desk, I got a lot of calls every day asking for help getting the VPN to work.  While the calls were easy, there was never an easy "click here to solve it all" button we could push.  Granted, most of the time it was a transport/tunneling issue, or sometimes even a port issue, but we still received far more calls than we should have.  Then there was also the problem of the previously mentioned SecurIDs desynching.  Again, a very easy fix, but it seemed that if it wasn't one thing it was another.  I can't completely fault the users, though, as the e-mail documentation (read: a couple of paragraphs) didn't go into any detail whatsoever to assist them.

SSH, while I never dealt with it in quite the same way as VPN, is a lot easier to use, and using a key makes it easier still.  There's not a lot to say here, as it also depends on how you use SSH (PuTTY, the OpenSSH client, another SSH client, etc.), but they all amount to much the same result.  It's all relatively easy to set up, and the administrator can simply pass out a config file (like a script file for Linux) to make the task even easier if it uses a generic key.
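As a sketch, such a hand-out file could be as small as this ~/.ssh/config fragment; the host name, user and key path are all examples:

```
# ~/.ssh/config fragment; all values here are examples.
Host work
    HostName server.example.com
    User eric
    IdentityFile ~/.ssh/work_key
```

With this in place, the user just types "ssh work" and the client fills in the rest.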


Management

Every server is different, which makes this category a bit hard to judge.  In general, though, SSH is easier to manage, especially when you have a big network.  The reason I say this is that OpenSSH, at least, generally has only one or two configuration files you have to modify.  Probably the hardest task is switching it from password authentication to keys.  VPN does make its own setup easier, though, by offering an online configuration tool (at least OpenVPN does), which in the end leaves you with only the network configuration to worry about (i.e. making sure VPN access from outside is possible).


Conclusion

When it comes down to it, I personally prefer SSH.  VPN would be great if I were running a medium or large network, and/or was super paranoid.  However, SSH with key authentication satisfies my needs just fine.  Basically, if you're using a VPN, know that it will probably need extra configuration compared to its sibling, but adds a hefty amount of extra security SSH doesn't (at a price).  If you prefer, you could also set up both, and use the VPN when it's not time-critical, saving SSH for those moments when you're against the clock.
