I.T. Security and Linux Administration


May 30, 2013  6:43 PM

“Malware in your system? Good”



Posted by: Eric Hansen
security

Who really thinks that protecting IP (intellectual property) by booby-trapping it with malware is such a great idea?  Who?!

Basically, the idea is to stop software piracy and other theft of IP.  The law would give the creator the right to plant a backdoor on the violator’s machine and network, photograph the hacker, wreck their network or machine, etc…

This is a short post, but really, there’s not much to say about it.  Why?  Why would you want to put more money into something that is not going to stop “hackers” from hacking?  That’s like throwing a steak at a dog and telling him not to eat it.

If you want to read more information on this, go here: https://www.techdirt.com/articles/20130527/21352923220/dumb-idea-dumbest-idea-letting-companies-use-malware-against-infringers.shtml

May 30, 2013  5:54 PM

Outlook.com vs. GMail : Part 1 – Why



Posted by: Eric Hansen
security

A while ago Google announced the discontinuation of the free tier of Google Apps (basically Google providing email, storage space, etc… to businesses at no cost).  It also so happened that within the next couple of months, Microsoft announced Outlook.com, which was set to take on GMail (without specifically saying so).

Yes, I’m a Linux fan, but this isn’t a matter of Windows vs. Linux; it’s a matter of usability.  Which service am I happier with?  I can use either just fine regardless of the operating system I’m using.

GMail used to be the gold standard of email.  If you didn’t have a GMail invite, you weren’t cool.  Heck, people were selling them everywhere (even on eBay, I believe).  It was a mad rush, and everyone wanted in on the next big thing, just like Facebook vs. MySpace.  I loved the clean design, and the functionality was just what I was looking for.  I didn’t want to use my ISP’s SquirrelMail anymore, and GMail was the best ticket ever.

Then the Labs feature came out, along with themes.  While I wasn’t crazy about the themes, I thought they were a nice touch.  The Labs option was nice as well: basically a bleeding-edge preview of what you might expect next for your mailbox.  Some of the experiments were nice, others were pointless for my use.  At least it gave me the option of disabling the ones I didn’t want, however.

After that came IMAP support, which sent me into heaven.  No longer did I have to keep a backup copy of my entire inbox on my machine.  I could instead just mirror the message list locally and fetch emails as needed.  The threading was nicely done, too.

What’s that?  Oh yeah, ads.  Well, they weren’t bad at first.  A simple bar at the top showing some user-targeted ads, why not?  But wait…there’s more!  Now they get placed along the sides of the email, cutting into your reading space.  Not the end of the world, but I’d like as much of my screen as possible devoted to reading.

From there on, it just kept declining.  The more features GMail added, the slower the response times got.  So much so that they now have to preload things before you even get to view your email.  Why?  It takes a whole 2-3 seconds now to view my inbox, whereas when I first signed up it took less than a second.

Now, here I am, looking at Google’s competitor Microsoft as a saving grace.  I’ll be the first to admit I dislike the whole Metro/tiling design, and it’s one reason I steer clear of Windows 8, but Outlook.com looked nice.  Sure, it still has ad space and a folder list like GMail does, but more of my screen is devoted to why I’m even on the site: reading email.

Composing email is one of my biggest complaints against GMail.  If it weren’t for the fact that they now force you to type an email into what amounts to an instant-message box (and killed the free service), I’d highly consider waving the rest of the issues off.  But really, if you’re trying to run a business, why would you want to force yourself to squint and scroll up just to remember what you typed?

I’ve started using Outlook.com for my personal email as well as for my business, and I feel it’s a change for the better.  This series, however, will take you through my issues and enjoyment with the switch, and conclude with my final verdict.


May 30, 2013  5:30 PM

First Ubuntu bug FINALLY fixed



Posted by: Eric Hansen
security

Ubuntu, whose first release (and first bug report) came in 2004, has finally marked that first bug as fixed…about 9 years later.  What is the first bug, you ask?

“Microsoft has a majority market share”

If you compare and contrast 2004 with 2013, it makes sense, doesn’t it?  Windows has seen its fair share of flops (Vista and 8, anyone?), while Linux has, for the most part, seen a handsome (or beautiful) increase in both use and users.  We even have the best of both worlds in desktops such as GNOME and KDE.

A great quote regarding this comes from Mark Shuttleworth, Ubuntu’s founder, himself (though frankly I don’t much care for Ubuntu):

Personal computing today is a broader proposition than it was in 2004: phones, tablets, wearables and other devices are all part of the mix for our digital lives. From a competitive perspective, that broader market has healthy competition, with IOS and Android representing a meaningful share.

Unless Google decides to really screw everyone over, Windows will not come close to that market share again.  I feel there’s really little distance between Microsoft and RIM (now BlackBerry), the major difference being that BlackBerry’s whole model revolves around devices like phones, while Microsoft has a vast array of outlets.

Source: http://www.omgubuntu.co.uk/2013/05/mark-shuttleworth-marks-bug-1-fixed


May 30, 2013  5:11 PM

Gotta love captcha



Posted by: Eric Hansen
security

[...] a recent We The People petition at the White House, asking the administration to support the treaty for the blind, which would make it easier to access creative works for the blind by creating a few small “exceptions” to copyright law (i.e., returning rights to the public) for the sake of sharing formats that are accessible to the blind across borders. However, some blind advocacy groups have discovered that, if you happen to be blind/visually impaired, it’s basically impossible to sign the petition.

The reason being?  Captcha.

You know, the (usually randomly trippy) letters that you have to type to “verify you’re human”.  The problem, though, isn’t that the blind were forced to type in characters they couldn’t see; that wasn’t the case.  The issue is that the audio captcha is unintelligible (like this video).  If you have ever tried the audio version of a captcha, you know it is.  Picture trying to talk to someone at a party while everyone is talking over you.  I had the displeasure of going through this with Outlook.com’s audio captcha not too long ago, and it frustrated me more than anything.

Captcha is a great design in principle, and I do understand why they garble the audio, but it hurts the solution more than it helps.  It’s bad enough that they distort letters that already look similar across cases (“C” vs. “c”, “X” vs. “x”, etc…).  That alone is annoying, but to be blind and shut out because someone thought distorting audio was a great idea?  No thanks.


April 30, 2013  10:23 PM

Web App/Vulnerability Scanner



Posted by: Eric Hansen
security

I want to know something: what scanner(s) do you use to assess the security of your systems, programs, network, etc…? For example, Metasploit and Nessus are two of the most popular in this field, but there are also tools such as OpenVAS, w3af, and Nikto.

Of the one(s) you use, why do you? What draws you to them over the use of another?

I’ll start.

I used Nexpose (from Rapid7, makers of Metasploit) for a long time. The only reason I really stopped is its limitations and extreme resource requirements (it runs on Java, and you need at least 2 GB of RAM to even try to run it).

When I reinstalled BackTrack, I discovered OpenVAS, and have been using it since. It’s a fork of Nessus from before Nessus went closed-source, and it has basically taken on a form all its own.

It’s intensive as well, but at least I can run it on my old server (720 MB of RAM, a 1.8 GHz single-core AMD processor, etc…).


April 30, 2013  7:38 PM

Usability Systems



Posted by: Eric Hansen
security

While I’ve kind of put a hold on the monitoring solution (kind of shifted gears, so twiddling between that and another project), one thing is true regardless: usability needs to exist at the highest level.

Bluntly, when writing UI, you have to write it with the mindset of people not knowing how to do anything with it.

Ask yourself whenever you add something: will Joe Schmoe know how to use this without documentation?  If the answer is “no”, then you need to figure out a way to make that happen.

Even before the Internet age, people wanted things fast and easy.  The shift toward “work hard, grow fast” was in full force, and now it’s here to stay.  If you think your user will want to jump through 20 hoops to register an account, think again.

A great way to bring UI to the end user immediately is Ajax.  However, another issue comes from those who don’t allow JavaScript to run.  If you prompt such users to enable it, it’s a crap-shoot whether they’ll know how.  This problem stems from people telling everyone that JavaScript is horrible.  JS itself is not horrible; how it’s used can be.  And with the world moving to HTML5, JS is a heavy component of that.

Making a system usable is easy; making it usable for everyone is what determines whether it’s successful or not.


April 24, 2013  1:54 PM

Quick Shell Trick – Find Memory Usage



Posted by: Eric Hansen
security

Unlike on Windows (hear me out), it’s not as easy to find out how much memory a process is using.  Tools like ps report the RSS (resident set size), which can “significantly overestimate memory usage”, while PSS (proportional set size) measures “each application’s ‘fair share’ of each shared area to give a realistic measure” (source: http://www.selenic.com/smem/).

When you’re trying to find bottlenecks (and don’t want to use tools like Valgrind), or get an actual representation of how much memory a process is using, this means that ps is more or less out of the question.  There’s always “top”, but personally I’m not sure what type of memory usage it reports and I’m too lazy to research that.

After doing some digging around, I discovered smem (linked above as the source).  It’s a Python script that reports different types of memory (including PSS).  Installing it is easy (it should be in your package manager), and using it is even easier.

I’d like to start off by saying that I’m fairly certain there’s an easier way of doing this than what I’m about to show, but I haven’t really felt like going through the arguments.

To get the PSS of a process, just run this command:

smem | grep <name> | grep -v "grep" | awk '{print $6}'

Replace “<name>” with either the program name or the PID.  Just like with ps, the pipeline will generate an entry for the grep command itself, so we exclude all grep commands from the list and then print the 6th column (PSS).

The output is in kilobytes by default, but if you want it more human-readable, just pass “-k” to smem.
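For the curious, smem’s PSS numbers ultimately come from the kernel’s per-process smaps accounting, so you can also pull the figure straight from /proc. A minimal sketch (Linux-only; the pss_kb helper is my own, not part of smem):

```python
import os

def pss_kb(pid):
    # Sum the "Pss:" lines of /proc/<pid>/smaps; values are in kB.
    total = 0
    with open("/proc/%d/smaps" % pid) as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])
    return total

# Measure this Python process itself.
print(pss_kb(os.getpid()))
```

You need read access to the target process’s smaps file, so measuring other users’ processes generally requires root.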


April 21, 2013  10:33 PM

Writing a Full-Serviced Sysadmin App from Scratch – Part 3



Posted by: Eric Hansen
security

Probably the last post I’m going to make for the night (not sure, though) is about presenting the monitoring data.

As mentioned, I was previously working on a backup solution.  In it, I used a Python framework called Tornado to handle the HTTP requests that the client would issue.  That was all backend, however.  During that time I learned about its powerful templating engine, which looks to be closely modeled on Jinja2.  So, for this project, I decided to ditch the PHP interface for the end user and go directly with Python.

One of the toughest decisions I’ve had to make so far is how I want to present certain data (this was prior to my working with graphs, mind you; that only came a couple of days ago as of this writing).  Tornado allows you to pass as many arguments as you like from Python to the templates, but what if I didn’t want to work that way?  Luckily, I didn’t have to.

Tornado templates allow you to do all sorts of things, including executing methods/functions.  All I had to do was pass a reference to the function, which is as easy as giving the name without parentheses (so func_to_execute=some_func, not func_to_execute=some_func()).  The parameters are passed when it is called in the template itself:

{% set var = some_func(params) %}

Easy, you say?  You’re darn tootin’ it is.  I really wouldn’t recommend doing this for EVERYTHING, but I’ve done it for generating graphs on page views instead of when the graphs are updated (which I’ll try to get to in a different part).
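To make that concrete, here’s a minimal sketch using tornado.template directly (the fmt_kb helper and the template string are my own illustration, not from the monitoring app):

```python
from tornado import template

# Hypothetical helper the template will call; any callable works.
def fmt_kb(kb):
    return "%d kB" % kb

# Pass the function itself (no parentheses); the template supplies the args
# when it calls fmt(2048) inside the {% set %} block.
t = template.Template("{% set label = fmt(2048) %}PSS: {{ label }}")
print(t.generate(fmt=fmt_kb).decode())
```

The same keyword-argument namespace works with RequestHandler.render(), which is how you’d wire it up in a real Tornado handler.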

Now, to be honest, I’m horrible at designing pages.  I’m more of a backend developer kind of person.  With that said…

A few months ago (maybe longer?), a friend of mine wrote a small monitoring setup for his servers.  Originally he wasn’t going to share the source, but he ended up making a private repo of it on BitBucket and shared it with me.  With his permission, I basically used his design for mine.  It works for me and is clean, so I figured, why not.  This posed an issue, though, as I also had to figure out how to present data it really wasn’t designed for (i.e., graphs).

Let’s just say I might use an HTML5 template to make things easier.


April 21, 2013  10:18 PM

Writing a Full-Serviced Sysadmin App from Scratch – Part 2



Posted by: Eric Hansen
security

Part 1 touched a bit on why I’m writing my own monitoring solution.  This part covers some of the design aspects I thought of initially and talks about how things have progressed since.

One of the first things that came to me was an authentication system.  Right now it’s still pretty bare (passwords are stored in plain text, for example), but this will be solved soon, as I do have some ideas rolling around in my head.  Within the authentication system, I also had to consider how I wanted to structure the database.  Everything was (and is) tied to the user ID, and that is all I knew at the time.
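As a sketch of the obvious fix for the plain-text passwords (my own example using only the standard library, not necessarily the scheme the app will end up with), salted PBKDF2 goes a long way:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100000):
    # Derive a key from the password; store both salt and digest per user.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```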

This will monitor servers and services, which I’ve cosmetically dubbed “units”.  However, services are dependent on their server, so while I wanted to give both their own unique IDs, I also tied the server’s ID to a column in the services table (a foreign key).

Next, there was the difference-maker: how to notify the user.  There’s still the traditional email and SMS/text, but I took it one step further (and will take it further still in the future).  The details are top secret here, but the integrations are all tightly knit together.

Then there was the idea of storing alerts.  This is potentially a heavy burden on the database if it’s not handled properly (which I’ll try to discuss in a different part), or very effective in aiding sysadmins if it is.  The idea is simple enough; there are three states: 1 – okay (no issues), 2 – warning (non-critical issue/unable to connect), 3 – alert (respond ASAP).  However, you don’t want to store an alert every time you check a server and see it’s down, just the first time.
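That “only store the first alert” rule boils down to recording a row only when a unit’s state changes. A rough sketch (the AlertLog class and its in-memory list are stand-ins for the real alerts table):

```python
OK, WARNING, ALERT = 1, 2, 3

class AlertLog:
    def __init__(self):
        self.last_state = {}  # unit_id -> last recorded state
        self.rows = []        # stand-in for the alerts table

    def record(self, unit_id, state):
        # Insert a row only when the state actually changes, so a server
        # that is down for an hour yields one alert, not one per check.
        if self.last_state.get(unit_id, OK) != state:
            self.rows.append((unit_id, state))
            self.last_state[unit_id] = state

log = AlertLog()
for state in (OK, ALERT, ALERT, ALERT, OK):
    log.record("web-1", state)
print(log.rows)  # [('web-1', 3), ('web-1', 1)]
```

Five checks produce just two rows: one when the unit went down and one when it recovered.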

There are some other tables I’ve created that I’ll discuss later, but since the topic is database design, a big thing to discuss is which server to use.  For me, it came down to MySQL and PostgreSQL.

I used MySQL from early 2000, when I first started web development, until not too long ago.  I’ve seen it improve tremendously in terms of redundancy and corruption prevention, from the days of forced MyISAM to the near elimination of that engine.

PostgreSQL is one that I haven’t had that much experience with.  I’ve used it since early 2012 or late 2011 when I first started development of a backup service I was working on.  A few things drew me to it over MySQL:

  • Not as heavy on the resources (matters a lot when you’re running on low-end VPSes)
  • Has a cascade feature
  • Simpler to set up and use in Python
  • Has built-in “whoopsie” support

Cascade is a nice feature: when you set up a foreign key, if the “master” key of sorts is deleted, then any row whose FK is tied to that master key is deleted as well.  For example, say we have a user in the users table with an ID of 4, and four servers in the servers table (IDs 4, 5, 10, 42) that said user added.  If servers.user_id is a foreign key to users.id and set to cascade on delete, then when user ID #4 is deleted, servers 4, 5, 10 and 42 are deleted as well (along with everything that has an FK tied to them).  This makes it easier than in MySQL, where you would have to write a trigger and deal with various issues, I’m sure (I’ve never written a trigger or procedure before, so I don’t know).
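That users/servers example can be tried end to end. The sketch below uses sqlite3 (standard library) purely so it runs anywhere; in PostgreSQL the ON DELETE CASCADE clause on the foreign key is written the same way, and no PRAGMA is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs to opt in to FKs
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE servers (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE)""")

conn.execute("INSERT INTO users VALUES (4)")
conn.executemany("INSERT INTO servers VALUES (?, 4)",
                 [(4,), (5,), (10,), (42,)])

# Deleting user 4 cascades to all four of their servers.
conn.execute("DELETE FROM users WHERE id = 4")
print(conn.execute("SELECT COUNT(*) FROM servers").fetchone()[0])  # 0
```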

Another thing you learn quickly in Python is to use whatever is best supported.  For example, when dealing with a configuration file, it’s a lot easier to work with a JSON file than with a PHP array file.  MySQL support in Python exists, but it’s not the greatest from what I’ve read and seen (it’s been a while, so this might be different now).  PostgreSQL, however, has tremendous support through the psycopg2 module.  I ended up writing a small wrapper around it to make things easier to deal with (I’ll post a link to the code once I make sure all the bugs are ironed out), but other than that it’s all based on said module.

Lastly, “whoopsie” support.  What I mean is that if either a) you insert data incorrectly or b) data would be corrupted (by a power failure or something), nothing is auto-committed or changed.  I’m sure this can be changed if you REALLY want it to be, but why?  PostgreSQL offers ROLLBACK/COMMIT: if an error occurs or the data otherwise can’t be modified, you just issue a ROLLBACK command; otherwise, COMMIT.  This was a little difficult for me to grasp at first, because I was so used to MySQL that I couldn’t figure out why my INSERT command would return correctly but no data was actually inserted, but it was still amazing once I caught on.
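That all-or-nothing behavior is easy to demonstrate. Again sketched with sqlite3 so it’s self-contained; with psycopg2 you’d call conn.rollback()/conn.commit() the same way, since it also leaves autocommit off by default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE alerts (
    id INTEGER PRIMARY KEY,
    state INTEGER CHECK (state IN (1, 2, 3)))""")
conn.commit()

try:
    conn.execute("INSERT INTO alerts (state) VALUES (2)")
    conn.execute("INSERT INTO alerts (state) VALUES (9)")  # violates CHECK
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # the first INSERT is rolled back too

print(conn.execute("SELECT COUNT(*) FROM alerts").fetchone()[0])  # 0
```

The valid INSERT “returned correctly”, but because the transaction was rolled back, nothing landed in the table.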

PostgreSQL it was, and I really haven’t looked back.  Truthfully, I’ve completely ditched MySQL altogether and have been happier without it.  I think of it as an ex-girlfriend…you got rid of it/her for a reason, after all.


April 21, 2013  9:53 PM

Writing a Full-Serviced Sysadmin App from Scratch – Part 1



Posted by: Eric Hansen
security

For the past week or two now I’ve been working on an application to help monitor systems and services.  Kind of a bastard child of Nagios and Cacti.  There’s a few reasons why I’m “reinventing” the wheel, so to speak.  But, I’ve decided to post here explaining the details, trials and tribulations, as well as any other random tidbits of information to toss in as well.  Some of these will most likely also be cross-posted to www.securityfor.us as well, but a bulk of the stuff will be posted here.  Anyways, on to why I’m doing this…

Nagios is a great tool.  For years I used it myself.  It’s very powerful, integrates with virtually anything you can access via SNMP or scripts, and the documentation is pretty easy to follow.  I find two main points of failure in terms of usability, though.

First, there’s the fact that its web interface is compiled CGI written in C.  Given its strong “open source” background, it surprises me that this has lasted so long in the field.  It’s akin to running Microsoft Office via WINE: it works and does its job, but there are better (or better-suited) alternatives.  If something goes wrong, you have to wait for the Nagios team to fix it, unless it’s in a script that checks service/host status.

Next, there’s no ACL built in.  I’m not sure why this is, but essentially Nagios relies on you running Apache (or something else that provides an authentication method, like an LDAP module).  Writing even a moderately functional authentication system takes a day at most.  I’ve never been able to understand this aspect, and I feel it hurts the rest of the usability by not letting you segregate who can see what.

There are more reasons with Nagios that sparked my interest in writing my own monitoring set up, but I want to also address the software that this is more or less based on, Cacti.

Cacti, at least to me, is more of an RRDTool learning tool or assistant than an actual monitoring solution.  It helps you create RRDTool graphs (which can be a major pain at first), and that’s about it.  If you really look at it, about 99% of its functionality revolves around building RRD graphs.  Which is great; I even use RRD graphs in my setup as well.  But what else does it bring?  Products like Centreon, which usually rely on Nagios on the backend, offer graphing as well, on the same platform (PHP), and are more actively developed.

I also touched on this with Nagios, but Cacti is developed in PHP (granted, open source).  I mention this because, from my standpoint, PHP and CGI/Perl are obsolete languages when it comes to web development.  They have their place, don’t get me wrong, but the overhead and slowness of PHP, and the over-abuse of stretching a language beyond its intent with CGI/Perl, are just wrong.

These are a couple of the reasons I decided to roll my own solution.  It has what’s currently out there, offers more, and more or less requires less overhead.  A normal Cacti installation is difficult to run efficiently on a 256 MB VPS with 256 MB of swap, and Nagios just incurs more overhead the more it has to monitor.  While the same can be said for Python, its integrations with different tools and the like make developing on that platform a lot easier.  That is what most of these articles will be covering as well.

As I’ve been working on this for a week or so now, I already have a couple more articles I’m going to write tonight, since I’m heavily caffeinated and have nothing else to do, as I can’t program on laptops (they annoy me greatly).

