I.T. Security and Linux Administration

June 29, 2012  3:39 PM

Stripping out information from SSH keypairs

Eric Hansen

Long time no post, hah.  But this post is about something involving SSH and the glorious ssh-keygen (keypair generator).  This won’t be beneficial to a lot of people, but for me it has proven to be quite a wonderful asset in my scripts.

Idea Behind This

Why would anyone want to strip data out of an SSH key pair?  For me, it’s because the keys were garbage anyway.  I wasn’t going to use the key pair for what it was intended for, but instead of writing my own key generator, this was the easiest method.  If you want random data for some purpose (noise on the network, testing data collection, etc…), this could prove to be a time saver.

What It Does

It does two things:

  1. Removes identifying information from the private key  (it states the key type, algorithm used, etc…)
  2. Keeps only the public key

A typical private key looks like this:

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,EE03FBC573CD3AAB6F1DA1740C60AD4A

<key snipped for clarity reasons>
-----END RSA PRIVATE KEY-----

While a public key will look like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCygEhgh2bCzhrD57EHMjOHNvpSE4zTUNyHwezkQgjer+m2qojlKxKouTbACMPxL6Q+ddtLAaJprwe4uCBNy5hTc6AkMFLkZ5ZZYYkBYMyxUX2UWU3zMUm7VOO9CeCMAiWf42jCbWuCEPXpgKgvQTEDIwG0zHk/mH1/USiWImR2kzDv1Kjmi8OGoy7sr6fKi8gWJWOW7lzNhbwCc0HrDhHX+G7GpVJ6mSyC32bVtFLjKc9oek+KZresuD493Dm7+XNNf+xw5Y/WuLcByGeSfSs7NKpRZaxTN4pqfh0W97LENDYfcmILsvlRYgYLuYYoY1SlCtEz6q0HyfF4i8P8KWIh comment-stamp

The end result, however, is that the private key contains only the key material itself, and the public key contains only what sat between “ssh-rsa” and the comment-stamp.

How This Is Done

As I didn’t want to manually enter the information into ssh-keygen each time, I automated the process (and without using expect!).  At the end of this post is the script in its entirety, but first I’m going to break it into parts a little.

dd status=noxfer if=/dev/urandom ibs=1 | head -c1024 > /tmp/key

As I don’t want to use a generic or standard passphrase, I use random data from /dev/urandom.  The “status=noxfer” argument is there because I didn’t want to see how much work dd did; the command won’t fail unless the user can’t read /dev/urandom.  The data is stored in a temp file because I didn’t feel like echoing it into the while block used next.  If you’re curious why I didn’t just use of=/tmp/key: I only wanted the first 1024 bytes.  That could also have been done with count=1024 in the dd command, but this was efficient enough for me (I won’t be running this every minute).
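For the curious, the count= variant mentioned above behaves the same; here’s a quick sketch (using 2>/dev/null here instead of status=noxfer to silence dd, since both just suppress the transfer stats):

```shell
# Pull exactly 1024 bytes of random data in one step with count=
# (bs=1 count=1024 copies 1024 single-byte blocks)
dd if=/dev/urandom of=/tmp/key bs=1 count=1024 2>/dev/null

# Sanity check: the file should be exactly 1024 bytes
wc -c < /tmp/key
```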

Now onto the while loop:

KEY=""
while IFS= read -r -n1 d; do
    TMP=$(printf "%02x" \'"$d")

    if [ "$TMP" != "00" ]; then
        KEY="$KEY$TMP"
    fi
done < "/tmp/key"

To sum up this entire block: we convert each character into a two-digit hex value, and only keep those hex values that aren’t 00.  Since read hands us raw characters, we need printf to emit each character’s numeric (ASCII) code, which is what the \' is for.  We then append the value to the passphrase (KEY) only if it isn’t a null/00 byte.  We do this for every character read from /dev/urandom (via /tmp/key).
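The \' behavior is easy to verify on its own; per POSIX, a leading single quote makes printf emit the numeric code of the character that follows it:

```shell
# Leading quote: printf prints the character's code, here as two hex digits
printf "%02x\n" "'A"   # prints 41 (the ASCII code of 'A')
printf "%02x\n" "'0"   # prints 30 (the code of the digit zero, not the value 0)
```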

ssh-keygen -o -q -t rsa -N "$KEY" -f `pwd`/$FILE

Pretty straightforward if you know your ssh-keygen switches.  -o automatically overwrites a keypair without prompting, -q makes it so ssh-keygen does not output anything (i.e.: the randomart image it shows at the end), -t rsa specifies the key to be RSA (instead of DSA), -N "$KEY" says to use KEY as the passphrase (so we’re not prompted for it) and -f specifies the location of the keypair.  $FILE is passed as an argument (FILE="$1") in the script.

If all goes well, you’ll get an automated SSH keypair, but it’s not stripped down yet.

Now, the stripping of the private key is a little bit involved, but here it is:

FRESH=$(cat `pwd`/$FILE | tail -n +5 | head -n -1 | sed -e ':a;N;$!ba;s/\n//g')

Basically this takes the private key, skipping the first four lines (the BEGIN marker, the two header lines, and an empty line) and the last line (the END marker; see the example above).  The sed command is there because I did not want newlines in the private key.

I want to take a moment to talk about the sed command, as I didn’t write it myself but I follow it pretty well now.  With sed, there’s no direct way to strip out newlines (\n), since sed parses one line at a time up to its terminator (\n, \r\n, etc… if you know programming).  So, what you need to do is tell sed to create a label (a), append the next line to the pattern space (N), and, while you haven’t reached the last line ($!), repeat the process (ba [b = branch, a = the label to go to]).  Then, since all the lines are joined in one pattern space, we can strip the newlines out by globally (g) substituting (s) \n with nothing.
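Here is the same sed one-liner run against three sample lines, just to show the join in action:

```shell
# Label (:a), append the next line (N), branch back until the last line ($!ba),
# then strip the embedded newlines globally
printf 'line1\nline2\nline3\n' | sed -e ':a;N;$!ba;s/\n//g'
# prints: line1line2line3
```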

tr, in its usual translate form, wouldn’t work here, as you can’t translate a character into nothing (though tr -d '\n' would delete the newlines outright).

FRESH=$(cat `pwd`/$FILE.pub | awk '{print $2}')

This simply takes the public key itself and stores it.
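Since a public key line is just whitespace-separated fields, awk’s $2 grabs the base64 blob; a tiny example with a made-up key:

```shell
# Field 1 = key type, field 2 = base64 key data, field 3 = comment
echo "ssh-rsa AAAAB3Nza...fake... user@host" | awk '{print $2}'
# prints: AAAAB3Nza...fake...
```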

After each FRESH assignment you need to store the data back in the file, so do echo -n "$FRESH" > `pwd`/$FILE (or $FILE.pub for the public key).  The -n ensures no trailing newline is inserted.


While this was a long post, it was worth writing this script.  Without further ado, here it is:




FILE="$1"
[ -z "$FILE" ] && echo "No file specified.  Exiting." && exit

# Grab 1024 bytes of randomness for the passphrase
dd status=noxfer if=/dev/urandom ibs=1 | head -c1024 > /tmp/key

# Convert each non-null byte to hex and append it to the passphrase
KEY=""
while IFS= read -r -n1 d; do
    TMP=$(printf "%02x" \'"$d")
    if [ "$TMP" != "00" ]; then
        KEY="$KEY$TMP"
    fi
done < /tmp/key

# Generate the keypair without any prompting
ssh-keygen -o -q -t rsa -N "$KEY" -f `pwd`/$FILE

# Strip the private key down to just the key material
FRESH=$(cat `pwd`/$FILE | tail -n +5 | head -n -1 | sed -e ':a;N;$!ba;s/\n//g')
echo -n "$FRESH" > `pwd`/$FILE

# Keep only the key portion of the public key
FRESH=$(cat `pwd`/$FILE.pub | awk '{print $2}')
echo -n "$FRESH" > `pwd`/$FILE.pub

May 25, 2012  2:47 PM

Unable to look up DNS records for a particular host? Look at your ISP

Eric Hansen

Over the past few days, I’ve been messing with CDN and Object Storage from SoftLayer.  Overall the experience has been pretty smooth sailing.  The web UI is very easy and fluid, their offering is pretty impressive, and they even have bindings to their API via PHP, which helps with my business.

One thing I did experience, though: some places (i.e.: Facebook, when sharing a photo from the CDN) would display the picture, but I could not view it myself.  Even after waiting two days for any possible DNS propagation to finish, I still had this issue.


May 17, 2012  1:15 PM

[Script] Linux Boot Finder + GRUB Menu Creator

Eric Hansen

I’ve been wanting to venture into the realm of scripting with Perl for quite a few years, but always found a better fit in PHP & Bash.  That changed not too long ago, when I was looking to write a script to find every Linux kernel on a machine.  Of course I was trying to do this in Bash, and there were two methods: 1) easy – use arrays to store the various information and loop through them, or 2) hard – do everything in one for/while/if block and make the code nearly unmanageable and unreadable.

Ultimately I chose option #1, but that posed another issue…for some reason I was unable to populate the array I needed.  The array was usable (no errors saying the variable was undefined), and the data I wanted to store was visible inside the while and if blocks, but no matter what I tried, I couldn’t store it.

Then, one day, a friend of mine suggested doing this in Perl.  Originally I was put off by this, because I wanted it to run on every Linux machine there is (Bash seems to be the most popular shell).  But when he mentioned that most systems come with Perl installed, it dawned on me.  So, I wrote this script to fetch every detected kernel & initrd (ramdisk) found on ext2, 3 and 4 partitions.

May 14, 2012  9:42 AM

Is Ubuntu Moving To a Rolling Release Cycle Slowly?

Eric Hansen

Linux Today posted an article entitled “Ubuntu 12.10 Daily ISO Images Are Now Available”.  Before the 11.x branch of releases came out, there was talk that Ubuntu was contemplating a move to a rolling release cycle, where there are no designated “new” releases, just updates delivered continuously.  However, talk on IRC back then was that it was just a rumor and not true.  I know 12.04 was just released back on April 26th, and these 12.10 daily ISOs are aimed at testers and Ubuntu developers more than anything, but it can still raise the question of whether this is leading to a rolling release option.

I definitely do not see this becoming the mainstream option for Ubuntu, as it has taken the stance of basically being the transitional Linux flavor between Windows and some of the more intricate flavors such as Gentoo & Fedora.  But that still doesn’t mean it has no chance of happening for those who want more frequent updates (i.e.: testing or unsupported repositories).

This could hurt Ubuntu, though, if it chooses to go this way.  While I like the way Unity has changed since its debut back in 11.04, Ubuntu seems more focused these days on the business market than the consumer one.  The feel is more commercialized and not really aimed at people who want to test new software.  Linux flavors such as Gentoo and Arch Linux have made a name for themselves as the versions to go to when you want that constant updating.  Businesses, however, tend to be afraid of updating unless there’s some known reason to do so.  Which makes sense, right?  The whole “don’t fix what’s not broken” mentality holds true.

Here is another option to think about, though: it could be great to offer as a secondary service of sorts.  There are those who are comfortable with Ubuntu and don’t want to use a different version of Linux.  Me, for example: I spent an entire weekend trying to get Gentoo to play nice with my system and had no luck, while Ubuntu works right out of the box.  But what I personally do not like about Ubuntu is the slowness of the updates, especially if there’s a new feature in software X and I’m a version or two behind.  This is where having the opportunity to use another repository for installing the latest (less thoroughly tested) software could come in handy.

If they ever decide to add this feature, it could complicate APT, because you won’t want to install all of the new software, just what you need/want.  I know the Ubuntu software package manager lets you choose which software to update, but you also have to consider the Ubuntu derivatives that aren’t so kind to the user.  Some, like Linux Mint, have really taken hold and given their users reasons to use their package manager, but it’s not always the case.

All in all, I think Ubuntu moving towards a rolling release cycle would be a good step forward for consumers, if given as an option.  I don’t feel forcing it on the user (à la Arch Linux & Gentoo) is a smart decision, given that Canonical wants to focus its product on businesses.  If I had to give a definite verdict, though, I would say not to do it.  Arch Linux and Fedora are not hard to set up and get running.

April 19, 2012  2:11 PM

NRPE: Could not complete SSL handshake

Eric Hansen

In setting up a server to remotely monitor various other servers I run, I decided to go the route of NRPE instead of SNMP (which I have set up for Cacti).  However, even after installing the SSL libs on both machines and compiling NRPE, I discovered one problem: the monitoring server could not connect to the remote host via NRPE.  When logged into the remote server, which is running Ubuntu 11.10 (32-bit), I could run check_nrpe -H localhost and it displayed NRPE v2.12.  However, the monitoring server running Debian Squeeze (32-bit) would give me this error:

root@hq:/tmp/nrpe-2.13# /usr/local/nagios/libexec/check_nrpe -H <ip address>

CHECK_NRPE: Error – Could not complete SSL handshake.

I ensured the permissions for /usr/local/nagios/* were set correctly, that both versions of NRPE were compiled with SSL, and that the IP address of the Debian machine was in both ALLOWED_HOSTS in the nrpe.cfg file and only_from in the xinetd.d/nrpe file.  Everything looked in order, yet I still kept receiving this error.  Then I looked at the output of ps aux | grep nrpe, which showed that the NRPE process was using the wrong configuration file.  So what I needed to do on the Ubuntu server was edit the init script (which was created during the apt-get install nagios-nrpe-server step I did earlier).  These are the two lines I had to edit:
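Going by the paths the post names, the edited variables in the init script would look roughly like this (the script path and exact filenames here are assumptions; adjust to your install):

```shell
# /etc/init.d/nagios-nrpe-server -- assumed layout, adjust to your install
DAEMON=/usr/local/nagios/libexec/nrpe
CONFIG=/usr/local/nagios/etc/nrpe.cfg
```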
Both of these had to point to /usr/local/nagios/libexec for DAEMON and /usr/local/nagios/etc for CONFIG.  I restarted nagios-nrpe-server on Ubuntu and then ran the check_nrpe command from Debian again, getting this:

root@hq:/tmp/nrpe-2.13# /usr/local/nagios/libexec/check_nrpe -H <ip>

NRPE v2.13

Everything is back in working order and NRPE is working just as planned.

April 5, 2012  11:25 PM

Apache 2 + mod_rewrite + Subdirectory confusion

Eric Hansen

Originally my business’ website was set up fine, with the structure being similar to:

/ – Root domain
/accounts/ – CRM
/webmail/ – Webmail access

What I decided to do was create a subdomain, mail.securityfor.us, to use instead of /webmail.  I’ve also wanted to set up a document resource section for my business, so I created a subdomain, docs.securityfor.us.  Ultimately I was able to get the subdomains to play nicely with each other, but I was having issues with /accounts/ redirecting to mail.securityfor.us.  Then the fun began…

March 29, 2012  1:51 PM

More Bash Alias Tips

Eric Hansen

So as of late I’ve been running into some issues on servers I manage.  A good example: today I ran into a situation where mail was stuck in the queue because Amavis wasn’t running.  While this is fine and dandy, and an easy fix (I had to change the system’s hostname), I quickly got bored of typing out the same long string each time.  So I decided to open up my .bashrc and start cracking, and here are some helpful functions and aliases to get you started!

March 25, 2012  2:42 PM

IP Banlist with Automagic Updating

Eric Hansen

First, let me start off by saying that this can be used with iptables with some minor tweaking, but I chose to implement it using tcp_wrappers instead (/etc/hosts.allow; hosts.deny).  The main reason is that I wrote this for Rob, to make his task of updating a list of banned IPs that much easier.

March 15, 2012  3:18 PM

Two-Factor Authentication in PHP Using SSH

Eric Hansen

For a good couple of years now I’ve wondered if there was a way to write an authentication system in PHP that utilized SSH instead of the widely-breakable database and flatfile methods.  After doing some research I found it’s possible after installing a PHP extension.  This guide details the methods used to do this, with the intent of hopefully making this a more versatile option.

March 14, 2012  11:09 AM

Custom Apache Directory Configuration with ISPConfig 3

Eric Hansen

I’ve started my own business, and have been working with a friend’s business to migrate his web hosting clients over to my servers.  For the most part the transition has been smooth, except for one client.  Due to how their directories were configured (and WP misconfigurations), instead of creating normal subdomains through ISPConfig, I had to create them as new domains.  This was fine until they changed their name servers to point to mine…then in came the 500 and 503 errors.  Luckily, I documented what I did; it should help others with similar issues who use Apache2 + PHP + ModFCGI.
