I.T. Security and Linux Administration


May 30, 2013  5:30 PM

First Ubuntu bug FINALLY fixed

Eric Hansen

Ubuntu, with its first release (and first bug) in 2004, has finally marked the first bug as fixed…about 9 years later.  What is the first bug, you ask?

“Microsoft has a majority market share”

If you compare and contrast 2004 with 2013, it makes sense, doesn’t it?  Windows has seen its fair share of flops (Vista and 8, anyone?), while Linux has, for the most part, seen a handsome (or beautiful) increase in both use and users.  We even have the best of both worlds in desktop environments such as GNOME and KDE.

A great quote regarding this comes from Mark Shuttleworth himself (frankly I don’t know who he is relative to Ubuntu, but then again I really don’t care for Ubuntu):

Personal computing today is a broader proposition than it was in 2004: phones, tablets, wearables and other devices are all part of the mix for our digital lives. From a competitive perspective, that broader market has healthy competition, with iOS and Android representing a meaningful share.

Windows will not, unless Google decides to really screw everyone over, come close to that kind of market share again.  I feel there really is little distance between Microsoft and RIM (now BlackBerry), the major difference being that BlackBerry’s whole model revolves around devices like phones, while Microsoft has a vast array of outlets.

Source: http://www.omgubuntu.co.uk/2013/05/mark-shuttleworth-marks-bug-1-fixed

May 30, 2013  5:11 PM

Gotta love captcha

Eric Hansen

[...] a recent We The People petition at the White House, asking the administration to support the treaty for the blind, which would make it easier to access creative works for the blind by creating a few small “exceptions” to copyright law (i.e., returning rights to the public) for the sake of sharing formats that are accessible to the blind across borders. However, some blind advocacy groups have discovered that, if you happen to be blind/visually impaired, it’s basically impossible to sign the petition.

The reason being?  Captcha.

You know, the (usually randomly trippy) letters you have to type to “verify you’re human”.  The problem, though, isn’t that blind users were forced to type in characters they couldn’t see, because that wasn’t the case.  The issue is that the audio captcha is unintelligible (like this video).  If you have ever tried the audio version of a captcha, you know it is.  Picture trying to talk to someone at a party while everyone is talking over you.  I had the displeasure of going through this with Outlook.com’s audio captcha not too long ago, and it frustrated me more than anything.

Captcha was a great design initially, and I do understand why they garble the audio, but it hurts the solution more than it helps.  It’s bad enough that the visual version distorts letters that already look similar in lower and upper case (“C” vs. “c”, “X” vs. “x”, etc.).  That alone is annoying, but to be blind and unable to sign a petition meant to make your life better, because someone thought distorting audio was a great idea?  No thanks.


April 30, 2013  10:23 PM

Web App/Vulnerability Scanner

Eric Hansen

I want to know something: what scanner(s) do you use to assess the security of your systems, programs, network, etc.?  For example, Metasploit and Nessus are two of the most popular in this field, but there are also tools such as OpenVAS, w3af, and Nikto.

Of the one(s) you use, why do you?  What draws you to them over the alternatives?

I’ll start.

I used Nexpose (from Rapid7, makers of Metasploit) for a long time.  The only reason I really stopped is its limitations and extreme resource requirements (it runs on Java, and you need at least 2 GB of RAM to even try to run it).

When I reinstalled BackTrack, I discovered OpenVAS, and have been using it since.  It’s a fork of Nessus from before Nessus went closed-source, and it has basically taken on a form all its own.

It’s resource-intensive as well, but at least I can run it on my old server (720 MB of RAM, 1.8 GHz single-core AMD processor, etc.).


April 30, 2013  7:38 PM

Usability Systems

Eric Hansen

While I’ve kind of put a hold on the monitoring solution (I’ve shifted gears a bit, so I’m toggling between it and another project), one thing is true regardless: usability needs to exist at the highest level.

Bluntly, when writing a UI, you have to write it with the mindset that people won’t know how to do anything with it.

Ask yourself whenever you add something: will Joe Schmoe know how to use this without documentation?  If the answer is “no”, then you need to figure out a way to make that happen.

Even before the age of the Internet, people wanted things fast and easy.  The transition to “work hard, grow fast” was in full force, and now it’s here to stay.  If you think your users will want to jump through 20 hoops to register an account, think again.

A great way to bring the UI to the end user immediately is Ajax.  However, another issue arises with those who don’t allow JavaScript to run.  If you prompt users to enable it, it’s a crap-shoot whether they’ll know how.  The problem stems from people convincing everyone that JavaScript is horrible.  JS itself is not horrible; how it’s used can be.  And with the world moving to HTML5, JS is a heavy component of that.

Making a system usable is easy; making it usable for everyone is what determines whether it’s successful or not.


April 24, 2013  1:54 PM

Quick Shell Trick – Find Memory Usage

Eric Hansen

Unlike Windows (hear me out), it’s not as easy to find out how much memory a process is using.  Tools like ps report the RSS (resident set size), which can “significantly overestimate memory usage”, while PSS (proportional set size) measures “each application’s ‘fair share’ of each shared area to give a realistic measure” (source: http://www.selenic.com/smem/).

When you’re trying to find bottlenecks (and don’t want to use tools like Valgrind), or to get an accurate picture of how much memory a process is using, this means ps is more or less out of the question.  There’s always top, but personally I’m not sure which type of memory usage it reports, and I’m too lazy to research that.

After doing some digging around, I discovered smem (linked above as the source).  It’s a Python script that reports different types of memory usage (including PSS).  Installing it is easy (it should be in your package manager), and using it is even easier.

I’d like to start off by saying that I’m fairly certain there’s an easier way of doing this than what I’m about to show, but I haven’t really felt like going through all of smem’s arguments.

To get the PSS of a process, just run this command:

smem | grep <name> | grep -v "grep" | awk '{print $6}'

Replace "<name>" with either the program name or the PID.  Just like with ps, the grep commands themselves will show up in the listing, so we exclude them and then print the sixth column (PSS).  Note that the column count assumes the command name is a single word, since awk splits fields on whitespace.

The output is in kilobytes by default, but if you want it more human-readable, just pass "-k" to smem to get unit suffixes.
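
If you’re curious where these numbers come from: smem reads them out of /proc/<pid>/smaps, where the kernel exposes a Pss value for each memory mapping.  Here is a minimal Python sketch (an illustration of the idea, not smem’s actual code) that sums those values for a given PID:

import sys

def pss_kib(pid):
    # Sum the "Pss:" fields from /proc/<pid>/smaps; values are in kB.
    total = 0
    with open("/proc/%s/smaps" % pid) as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])
    return total

if __name__ == "__main__":
    # Usage: python pss.py <pid>
    print("%d kB" % pss_kib(sys.argv[1]))

Run it against any PID you own (reading another user’s smaps requires root).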


April 21, 2013  10:33 PM

Writing a Full-Serviced Sysadmin App from Scratch – Part 3

Eric Hansen

Probably the last post I’m going to make for the night (not sure, though) is about presenting the monitoring data.

As mentioned, I was previously working on a backup solution.  In it, I used a Python module called Tornado to handle the HTTP requests the client would issue.  That was all backend, though.  During that time I learned about Tornado’s powerful templating engine, which looks to be closely modeled on Jinja2.  So, for this project, I decided to ditch the PHP interface for the end user and go directly with Python.

One of the toughest decisions I’ve had to make so far is how I want to present certain data (this was before I started working with graphs, mind you; that only came a couple of days ago as of this writing).  Tornado allows passing as many arguments as you want from Python to the templates, but what if I didn’t want to work that way?  Luckily, I didn’t have to.

Tornado templates let you do all sorts of things, including executing methods/functions.  All I had to do was pass a reference to the function, which is as easy as giving the name without parentheses (so func_to_execute=some_func and not func_to_execute=some_func()).  The parameters are passed when the function is called in the template itself:

{% set var = some_func(params) %}

Easy, you say?  You darn tootin’ it is.  I really wouldn’t recommend doing this for EVERYTHING, but I’ve used it to generate graphs on page view instead of when the graphs are updated (which I’ll try to get to in a different part).
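
If you want to play with this in isolation, here is a minimal, self-contained sketch using tornado.template directly (some_func and its doubling logic are made up purely for the example):

from tornado.template import Template

# A stand-in helper the template will call; nothing Tornado-specific here.
def some_func(x):
    return x * 2

# Pass the function itself (no parentheses); the template supplies the
# parameters when it calls it.
t = Template("{% set var = some_func(21) %}Result: {{ var }}")
print(t.generate(some_func=some_func))  # b'Result: 42' on Python 3

The same idea applies when rendering from a RequestHandler: hand the function to self.render() as a keyword argument.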

Now, to be honest, I’m horrible at designing pages.  I’m more of a backend developer kind of person.  With that said…

A few months ago (maybe longer?), a friend of mine wrote a small monitoring setup for his servers.  Originally he wasn’t going to share the source, but he ended up making a private repo of it on BitBucket and shared it with me.  Basically, with his permission, I used his design for mine.  It works for me and is clean, so I figured why not.  This posed an issue, though, as I also had to figure out how to present data it really wasn’t designed for (i.e., graphs).

Let’s just say I might use an HTML5 template to make things easier.


April 21, 2013  10:18 PM

Writing a Full-Serviced Sysadmin App from Scratch – Part 2

Eric Hansen

In Part 1 I touched a bit on why I’m writing my own monitoring solution.  This post will cover some of the design aspects I thought of initially and talk about how things have progressed since.

One of the first things that came to me was an authentication system.  Right now it’s still pretty bare (passwords are stored in plain text, for example), but this will be solved soon, as I have some ideas rolling around in my head right now.  Within the authentication system, I also had to consider how I wanted to structure the database.  Everything was (and is) tied to the user ID, and that is all I knew at the time.
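
As a teaser, one common approach (just a sketch using Python’s stdlib, not necessarily what I’ll land on; hashlib.pbkdf2_hmac needs a fairly recent Python) looks like this:

import hashlib
import os

def hash_password(password, salt=None, iterations=100000):
    # PBKDF2 with a random per-user salt; store both salt and digest.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100000):
    # Recompute the digest and compare against the stored one.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    return digest == expected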

The app will monitor servers and services, which I’ve cosmetically dubbed “units”.  Services are dependent on a server, though, so while I gave both their own unique IDs, I also tied the server’s ID to a column in the services table (a foreign key).

Next, there was the difference maker: how to notify the user.  There’s still the traditional email and SMS/text.  However, I took it one step further (and will take it even further in the future).  Details are top secret here, but the integrations are all tightly knit together.

Then there was the idea of storing alerts.  This is potentially a heavy burden on the database if it’s not handled properly (which I’ll try to discuss in a different part), or very effective in aiding sysadmins.  The idea is simple enough; there are three states: 1 – okay (no issues), 2 – warning (non-critical issue/unable to connect), 3 – alert (respond ASAP).  However, you don’t want to store an alert every time you check a server and see it’s down, just the first time.
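
To make that concrete, here is a rough sketch of the “only record the first occurrence” logic (the alerts table and its columns are hypothetical; the placeholders are psycopg2-style):

def record_check(db, unit_id, new_state):
    # Insert an alert row only when the unit's state actually changes,
    # not on every failed check.
    cur = db.cursor()
    cur.execute("SELECT state FROM alerts WHERE unit_id = %s "
                "ORDER BY created_at DESC LIMIT 1", (unit_id,))
    row = cur.fetchone()
    last_state = row[0] if row else 1  # no history yet: assume "okay"
    if new_state != last_state:
        cur.execute("INSERT INTO alerts (unit_id, state) VALUES (%s, %s)",
                    (unit_id, new_state))
        db.commit()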

There are some other tables I’ve created that I’ll discuss later, but a big thing to cover, since the topic is database design, is which server to use.  For me, the choice was between MySQL and PostgreSQL.

I used MySQL from early 2000, when I first started web development, until not too long ago.  I’ve seen it improve tremendously in terms of redundancy and preventing corruption, from the days of forced MyISAM to the near elimination of that engine.

PostgreSQL is one I haven’t had as much experience with.  I’ve used it since late 2011 or early 2012, when I first started development of a backup service I was working on.  A few things drew me to it over MySQL:

  • Not as heavy on resources (matters a lot when you’re running on low-end VPSes)
  • Has a cascade feature
  • Simpler to set up and use in Python
  • Has built-in “whoopsie” support

Cascade is a nice feature: when you set up a foreign key, if the “master” key of sorts is deleted, then any row whose FK is tied to that master key is deleted as well.  For example, say we have a user in the users table with an ID of 4.  In the servers table we have four servers (IDs 4, 5, 10, 42) that said user added.  If servers.user_id is a foreign key to users.id and is set to cascade on delete, then when user ID #4 is deleted, servers 4, 5, 10 and 42 are deleted as well (along with everything that has a FK tied to them).  MySQL’s InnoDB engine does support cascading foreign keys these days, but with MyISAM you would have had to fake it with triggers and deal with various issues, I’m sure (I’ve never written a trigger or procedure before, so I don’t know).
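
For illustration, here is roughly what that users/servers example looks like as PostgreSQL DDL, driven through psycopg2 (the table layout and connection string are made up for the sketch):

import psycopg2

conn = psycopg2.connect("dbname=monitor user=monitor")  # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    CREATE TABLE users (
        id serial PRIMARY KEY,
        name text NOT NULL
    )""")
cur.execute("""
    CREATE TABLE servers (
        id serial PRIMARY KEY,
        user_id integer NOT NULL REFERENCES users (id) ON DELETE CASCADE,
        hostname text NOT NULL
    )""")
conn.commit()
# DELETE FROM users WHERE id = 4 now removes that user's servers too.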

Another thing you learn quickly in Python is to use what is best supported.  For example, when dealing with a configuration file, it’s a lot easier to work with a JSON file than a PHP array file.  MySQL support in Python exists, but it’s not the greatest from what I’ve read and seen (it’s been a while, so this might be different now).  PostgreSQL, however, has tremendous support through the psycopg2 module.  I ended up writing a small wrapper around it to make things easier to deal with (I’ll post a link to the code once I make sure all the bugs are ironed out), but other than that it’s all based on said module.

Lastly, “whoopsie” support.  What I mean is that if either a) you insert data incorrectly or b) data would otherwise be corrupted (power failure or something), nothing is auto-committed or changed.  I’m sure this can be changed if you REALLY wanted it to be, but why?  PostgreSQL offers ROLLBACK/COMMIT, and psycopg2 runs with autocommit off by default.  If an error occurs or you otherwise can’t modify data, you just issue a ROLLBACK command; otherwise you COMMIT.  This was a little difficult for me to grasp at first, because I was so used to MySQL that I couldn’t figure out why my INSERT command would return correctly but no data was actually inserted, but it was still amazing once I caught on.
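
A minimal sketch of that commit/rollback dance in psycopg2 (connection details hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=monitor user=monitor")  # hypothetical DSN
cur = conn.cursor()
try:
    cur.execute("INSERT INTO servers (user_id, hostname) VALUES (%s, %s)",
                (4, "web01.example.com"))
    conn.commit()    # nothing actually hits the table until this runs
except psycopg2.Error:
    conn.rollback()  # undo the failed transaction, keep the connection usable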

PostgreSQL it was, and I really haven’t looked back.  Truthfully, I’ve completely ditched MySQL altogether and have been happier without it.  I think of it as an ex-girlfriend…you got rid of it/her for a reason, after all.


April 21, 2013  9:53 PM

Writing a Full-Serviced Sysadmin App from Scratch – Part 1

Eric Hansen

For the past week or two now I’ve been working on an application to help monitor systems and services.  Kind of a bastard child of Nagios and Cacti.  There are a few reasons why I’m “reinventing” the wheel, so to speak, and I’ve decided to post here explaining the details, trials and tribulations, and any other random tidbits of information worth tossing in.  Some of these will most likely also be cross-posted to www.securityfor.us, but the bulk of the material will be posted here.  Anyways, on to why I’m doing this…

Nagios is a great tool.  For years I used it myself.  It’s very powerful, integrates with virtually anything you can access via SNMP or scripts, and the documentation is pretty easy to follow.  I find two main points of failure in terms of usability, though.

First, there’s the fact that it’s written in compiled CGI (or C; I can’t remember now).  Coming from a strong “open source” background, it surprises me that it has lasted so long in the field in this form.  It’s akin to running Microsoft Office via WINE: it works and does its job, but there are better (or more suited) alternatives.  If something goes wrong, you have to wait for the Nagios team to fix it, unless it’s in a script that checks service/host status.

Next, there’s no ACL built in.  I’m not sure why, but Nagios essentially relies on you running Apache (or something else that provides an authentication method, like an LDAP module).  Writing even a moderately functional authentication system takes a day at most.  I’ve never been able to understand this aspect, and I feel it hurts the rest of the usability by not letting you segregate who can see what.

There are more reasons Nagios sparked my interest in writing my own monitoring setup, but I also want to address the software this project is more or less based on: Cacti.

Cacti, at least to me, is more of an RRDtool learning tool or assistant than an actual monitoring solution.  It helps you create RRDtool graphs (which can be a major pain at first), and that’s about it.  If you really look at it, about 99% of its functionality revolves around building RRD graphs.  Which is great; I even use RRD graphs in my setup as well.  But what else does it bring?  Products like Centreon, which usually rely on Nagios on the backend, offer graphing as well, on the same platform (PHP), and are more actively developed.

I more or less touched on this with Nagios too, but Cacti is developed in PHP (granted, open source).  I mention this because, from my standpoint, PHP and CGI/Perl are obsolete languages when it comes to web development.  They have their place, don’t get me wrong, but the overhead and slowness of PHP, and the over-abuse of extending a language beyond its intent in CGI/Perl, are just wrong.

These are a couple of the reasons I decided to roll my own solution.  It has what is currently out there, offers more, and more or less requires less overhead.  A normal Cacti installation is difficult to run efficiently on a 256 MB VPS with 256 MB of swap, and Nagios just causes more overhead the more it has to monitor.  While the same can be said for Python, its integrations with different tools make developing on that platform a lot easier.  That is what most of these articles will be covering as well.

As I’ve been working on this for a week or so now, I already have a couple more articles I’m going to write tonight, since I’m heavily fueled by caffeine and have nothing else to do, as I can’t program on laptops (they annoy me greatly).


March 31, 2013  3:15 PM

Kali: The new pentester?

Eric Hansen

Earlier this year (this month?), the BackTrack developers announced a new version of their distro, this time seemingly redeveloped from the ground up, called Kali.

It’s a nice distro, and is based on GNOME 2 (MATE??).  It has more tools than you can ever imagine, and from my experience so far it runs pretty smoothly.  Though getting VirtualBox’s Guest Additions to install can be a little annoying.

To be honest, there’s not a whole lot to say about it, other than the name seems cooler.  It still has a plethora of hacking tools to mess with, and they seem to have trimmed the selection down to more recent and viable ones.  But it doesn’t really add much besides disk space, which was also one of my biggest complaints about BackTrack.

It tries to be a one-size-fits-all solution to pentesting, which is nice for professionals; I’m talking about people who go out to various enterprises and charge $150/hour just to click a mouse a few times.  But for those more focused on the SMB side, you’re never even going to look at ~50–60% of the tools on the disc.  So you’re pretty much left with either remastering the distro or building your own.

Kali has strong potential, but I feel it’d be better if they left it more up to the community to choose what goes on it instead of trying to be a Swiss Army knife.


March 31, 2013  3:10 PM

CLI password manager

Eric Hansen

I’m a strong opponent of password managers.  To me, they’re nothing better than writing info down on a Post-it note: you know sensitive info is there, and they just rely on security by obscurity.  However, as I’m also a strong lover of the CLI, I thought it was interesting that a CLI-based password manager for Linux is out there, named Pass: https://liquidat.wordpress.com/2013/03/27/pass-a-perfect-shell-based-password-manager/

The basic rundown: on install, it has you set up a GPG key, and with that key it encrypts the data in its “store”.  It tries to store one login per file, but you can change this by either editing the file itself or running pass insert <folder>/<file>.  This can make things easier, or harder, for you and any eavesdroppers.

Another thing to like (or hate) about it is that it has built-in support for Git.  So, if you want a repo somewhere that stores your GPG-encrypted passwords, you don’t have to worry about much: once you edit a file via pass, it calls git add and git commit automatically to save to a local repo.

Other than that, there’s not a whole lot to write about: it saves your passwords and does so using a GPG key.  It’s a nice twist, but no different from any other password manager out there.

