I.T. Security and Linux Administration


May 20, 2011  10:10 AM

Mass process kill using Bash, ps, and awk

Eric Hansen

I’ve been looking for something like this for a while and have never seen it. I know it exists somewhere, as most things do these days, but rather than dig through the archives of Google I wrote a script myself. What it does is take each line of output from ps and send a signal (kill -s) to the process ID to kill it. Continued »
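As a rough illustration of the idea (not the author’s exact script — see the full post for that), it boils down to a few lines of Bash; the process name “myproc” and the TERM signal below are just placeholders:

#!/bin/bash
# Sketch of the approach: list processes with ps, pull the PID column out
# with awk, and send each matching PID a signal with kill -s.
PATTERN="myproc"   # placeholder process name
SIGNAL="TERM"      # placeholder signal (TERM, KILL, etc.)

for pid in $(ps -eo pid,comm | awk -v p="$PATTERN" '$2 == p {print $1}'); do
    kill -s "$SIGNAL" "$pid"
done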

May 14, 2011  10:41 AM

Security Vulnerability in WHMCS 4.4.2

Eric Hansen

Recently I ventured into WHMCS and decided that I did not like that the “company title” was text instead of an image. With this in mind, I began experimenting with the “company title” setting in WHMCS’ admin panel and discovered that it’s prone to a potential security flaw. Continued »


May 10, 2011  9:21 AM

Two-Factor Authentication via SSH

Eric Hansen
Security

Security and smart phones: a great combination when used in the right situations.  A while ago, Google released its two-factor authentication mechanism, along with software to run on iPhones, BlackBerrys, and of course Android.  Ever since, I’ve been wondering how long it would take for this to gain real traction in IT systems (let’s face it, Google is trying to take over the IT world).  Then I stumbled upon (ironically, not on StumbleUpon) an article that shows how to integrate Google Authenticator with SSH.  That’s where this really takes an interesting turn. Continued »
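For anyone curious before clicking through, the integration described there comes down to a PAM module plus a couple of sshd settings. This is only a rough sketch from memory — package names, paths, and init scripts vary by distro:

# 1. Generate a secret for your user (creates ~/.google_authenticator and
#    prints the secret/QR code to scan with the phone app):
google-authenticator

# 2. Require the verification code for SSH logins by adding this line
#    to /etc/pam.d/sshd:
#      auth required pam_google_authenticator.so

# 3. Enable challenge-response prompts in /etc/ssh/sshd_config:
#      ChallengeResponseAuthentication yes
#      UsePAM yes

# 4. Restart sshd so the changes take effect (script name may differ):
/etc/init.d/sshd restart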


April 25, 2011  8:23 PM

Linux From Scratch Premier

Eric Hansen

I’m going to start a new series here, one that I hope is very informative. But first, I would like to give a little real-world context as to how this will be beneficial.

This series is going to focus on Linux From Scratch (LFS), and how to build up from LFS to BLFS (Beyond LFS), and then HLFS (Hardened LFS). While this may not be sustainable for companies already deploying pre-packaged distributions (i.e.: CentOS, Debian, Arch Linux, etc…), the experience and knowledge gained from it will be substantial. Not only can it help you provide a better service to your customers, but knowing how things actually work, what’s needed, and being able to build a Linux that fits your needs instead of everyone’s can make your life easier. Continued »


April 18, 2011  3:47 PM

Misconceptions of DoS Attacks

Eric Hansen

There was an article I read titled “Chrome Shields Websites From Denial-Of-Service Attacks”. Right away, the title intrigued me. While I know end users can cause a DoS (especially if they absolutely can’t let go of that F5 key), I was interested to see how Google went about this. Let’s just say the results were less than stellar, and here’s why. Continued »


April 14, 2011  9:59 PM

New Naming Scheme for Linux Devices

Eric Hansen

The May 2011 issue of Linux magazine has an article on page 64 about Fedora wanting to redo its device naming scheme, basing names on the device’s ID rather than its name (also referred to as biosdevname in Fedora’s documentation).

Now, first off, I will say that I have been a victim of this quite a few times during my time using Linux. Situations arise where I have to restart my computer, or I switch from ethernet to wireless for a while and then switch back, and my devices get renamed (i.e.: instead of eth0, it’ll now be eth1). This once happened a few times in a row, leading me all the way to eth4 before I stopped. It hasn’t happened only when switching Internet sources, but at other times as well that I can’t recall right now.
Continued »
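For reference, these are the sorts of names biosdevname produces — illustrative examples of the scheme, not taken from the article:

# em1, em2  - embedded (on-board) NICs, numbered by the BIOS
# p3p1      - add-in NIC in PCI slot 3, port 1
# Compare with the traditional eth0/eth1 names, which can shuffle between boots.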


April 11, 2011  10:56 AM

The Start of a Penguin

Eric Hansen

I wasn’t going to write about this just yet, because I don’t believe April is Linux’s birthday; Linus didn’t post to the MINIX newsgroup until August 25, 1991 (though he did start working on Linux in April 1991).  But since everyone is basically calling this Linux’s birthday month, I thought I would write a short blurb about it.

Even though Linus himself has said that Linux is free of MINIX code, it was announced on the official MINIX newsgroup, which I personally find funny.  If you’d like to read the posting yourself, it’s titled “What would you like to see most in minix?”  It isn’t just his post, though; it also includes all the comments people made, and it’s quite interesting to read how intrigued the world already was by this new project.

An interesting aspect of this is that the Linux Foundation is putting on a big celebration this year, as it marks the 20th anniversary/birthday of the penguin-to-be.  The big thing to note is that the Foundation’s 20th Anniversary celebration seems to be running non-stop for all of 2011.  In mid-August (August 17-19, to be exact), about a week before the anniversary of the infamous newsgroup posting, it will hold quite an extravagant celebration in honor of Linux’s birthday.  While that event is being held in Vancouver, I think it’s also worth taking a look at how far Linux has come since its conception 20 years ago.

Every year people think that Linux is going to take the commanding lead over Windows for various products, services, reasons, etc…, but instead of focusing on that, why not just focus on what we have been given? A (relatively) stable operating system that has opened the door for many more people to learn not only how an operating system works, but how to customize it to their needs (without the threat of lawsuits). Linux is a great and versatile operating system with many more years left in its tank, especially with the explosion of green computing and the like. Even if you can’t make it to the Linux Foundation’s convention later this year, celebrate it in your own way. I, personally, will have a penguin pin clipped to my shirt/jacket/whatever I may be wearing that day.


April 6, 2011  12:58 PM

Does Storage Virtualization Have a Purpose?

Eric Hansen

Earlier today, the IT Watch Blog posted an article, “Storage: Virtualization’s wallflower?”, which brought up how enterprise IT companies are neglecting virtualized storage solutions. In the article, Melanie stated:

The findings can be a little disturbing, especially to a company who creates a product that many medium and large enterprise IT orgs are leaving out of their virtualization plans: Storage. The study found that 43 percent had mistaken the impact storage would have on server and desktop virtualization or had shied away from a virtualization project because storage-related costs were too high.

One reason I can see for this is that storage virtualization is still a relatively new concept. When I first read the article, I thought storage virtualization meant storing data in RAM instead of on hard disks, especially with how DataCore presents the benefits. Instead, it still seems to rely on actual hard disks to store data, but essentially caches (or in some cases RAIDs) the data. Now, I might still be wrong on this, but these points from their site are what bring me to various conclusions (and why):

1. Lowers your capital expenses by avoiding the need to ‘Rip & Replace’ existing storage
Long rationalization short, doesn’t this sound an awful lot like RAID? I know RAID is the “rip & replace” approach, but the only benefit storage virtualization has here, from what I can tell, is that you can rebuild the virtualized hard disk; it really holds no advantage over RAID.

2. Maximize storage investment productivity, doubles your effective storage utilization rates to 90%
Caching data does use extra disk space, but it means the drive heads have to do less work (if the data is cached in memory), and fewer system resources are used (i.e.: it doesn’t have to call stat() all the time).

3. Accelerates performance by 200% to speed up of your workloads and achieve faster response times
Pretty much the same as the above, with the added point that caching is used in various server applications to fetch data that is constantly requested but rarely modified.

4. Reduce admin and provisioning times from days to minutes
This seems more of a RAID perk than a virtualization one (the two are only loosely similar). Reducing administration time on a system is done by proper documentation, not by virtualization. It can reduce the number of admins needed, but their jobs aren’t going to get shorter; if anything, they’ll get longer. Provisioning time is not really any different either, as a hot-swappable RAID setup takes the job from possibly hours to possibly seconds.

5. Boosts ‘Purchasing Power’ by freeing you from disk vendor lock-in: ‘only buy when & what you need’
I know companies usually go into some sort of agreement with a vendor to get discounts, but it seems virtualization locks you in more than physical devices would. Unless it’s an open-source solution, it looks like a proprietary product that other vendors simply re-brand at different prices.

6. Enables Non-stop Business Operations! Prevents storage related downtime & costly business disruptions
I was going to bring this up later, but their site also claims to provide 100% uptime. No product can provide 100% uptime unless there’s a redundant backup (i.e.: how hot-swap works with RAID). There’s really nothing else to note on this, sadly.

Their site does offer explanations for its claims, but it seems like an overreach to claim they take care of the physical disk requirement when the product appears to rely on physical disks itself. Volatile storage was never a good idea to begin with; that’s why RAM is only meant as a temporary solution (i.e.: mounting /tmp on tmpfs/a RAM disk). This technology could very well be the future, but from this manufacturer there’s no clear understanding of what it actually is.
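As a side note, the /tmp-on-tmpfs setup mentioned above is a one-liner in /etc/fstab; the size value here is an arbitrary example:

# /etc/fstab entry: mount /tmp as a RAM-backed tmpfs (size is arbitrary)
tmpfs   /tmp    tmpfs   defaults,size=512M   0 0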


March 31, 2011  1:58 PM

Sandbox 101: Polishing Skills

Eric Hansen

Continuing on with my virtualization fixation as of late, I would like to address something else for those who read my blog. As we all know, improving (and learning new) skills is a never-ending task in the IT world; the moment you stop is the moment you can easily be replaced. But how do you go about improving those skills? Since most (but not all) Linux flavors are free, you can easily install them in a virtual machine and learn new skills (or improve the ones you already have) that way. The install process is really no different than installing on a physical PC, so what I want to cover here are the benefits of doing this (the only real con being that it takes time, but in the end it’s well worth it).

Before going into this, though, I would like to explain what a sandbox is (if you already know, feel free to skip to the next paragraph). In the general sense, a sandbox is a highly restricted area in which to perform tasks. To make this clearer, think of an actual sandbox and how the sand doesn’t leave the box unless someone moves it out somehow. In IT, this is similar to running a program in a virtual machine that has no connectivity to the network. Ultimately, it adds a sense of safety, knowing that what you’re running won’t affect your actual desktop.
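As a concrete (hypothetical) example, you can spin up exactly that kind of network-less sandbox with QEMU; the image name, sizes, and install ISO below are placeholders:

# Create a disk image and boot a VM with 512 MB of RAM and no network
# device, so whatever runs inside can't touch your LAN:
qemu-img create -f qcow2 sandbox.img 8G
qemu-system-x86_64 -m 512 -net none -cdrom distro-install.iso -hda sandbox.img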

My first, and possibly biggest, argument for using a sandbox to build skills is productivity. This doesn’t just affect you, but your job as well. For example, if your job requires you to deal with RAID devices, but you’ve never dealt with RAID before, it can be daunting to just dive in head-first (a real-life example from me). What you can do is install an OS in a virtual machine and learn RAID there, as in the sketch below. While you won’t get any of the performance perks, it’s meant as a learning tool. This way, when you’re placed in a position where you have to use mdadm again, you’ll at least know some of the commands. The beauty of a sandbox here is that you can make as many mistakes as you want; at worst you reinstall the guest OS, not your actual OS (which, when you’re first learning, can happen a lot…).
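If you want to practice mdadm without even giving the guest extra virtual disks, one rough approach is to fake the disks with loopback files (file names, sizes, and devices below are just placeholders):

# Create two small files and attach them to loop devices:
dd if=/dev/zero of=/tmp/disk0.img bs=1M count=100
dd if=/dev/zero of=/tmp/disk1.img bs=1M count=100
losetup /dev/loop0 /tmp/disk0.img
losetup /dev/loop1 /tmp/disk1.img

# Build a RAID 1 array out of the two loop devices and watch it sync:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
cat /proc/mdstat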

Secondly, as I said earlier, if you’re not improving and learning new skills, you’re being left in the dust, especially with the rate at which technology is advancing. The trouble here is that most businesses are quite wary of migrating solutions, say from MySQL to MongoDB. But with a sandbox you can learn and evaluate which solution is best, and go from there. Another bonus is that while most people will still know MySQL, knowing the alternatives can put you in the lead and show you’re keeping up with the advances. As a side note that is irrelevant to this post (but an interesting fact), there has been a slight push lately toward migrating from relational (MySQL) database systems to document (MongoDB) systems… which I’ll cover in another article.

Lastly, and this is more of a security point than the rest: when you’re trying new software, it might be a good idea to try it in a limited-resource environment first. While pretty much all software in a distro’s repository is tested before it’s published, if you’re downloading software from an outside (i.e.: non-official) source, it’s wise to make sure there are no faults in it. This includes exploits and (to a lesser extent) viruses, but it’s not limited to that. Granted, this tip is more geared towards Windows, given how differently it and Linux operate, but it can definitely be applied to Linux as well. Especially with how fast software is being released, and how many forks there are in the wild these days, it’s better to be safe than sorry.

These are only a few (three, to be exact) ways to use a sandbox, and I do plan on expanding on this topic in the future. But this is a good place to start, and it will help you in the long run.


March 29, 2011  10:56 PM

Enable HTTPS By Default

Eric Hansen

With the recent issue of Microsoft removing HTTPS from Hotmail in some countries, and Comodo’s recent SSL breach, the end users of various software (web and otherwise) should really start considering how secure their data truly is. In the first article, it’s also reported that Yahoo only uses HTTPS for the initial log-in process unless you’re a paying customer. In my view, that’s not an acceptable practice… why pay to be secure?

Essentially, what I’m going to cover in this post is enabling HTTPS by default (i.e.: forcing HTTPS) on your web server. This will be done for Lighttpd and Apache, as well as in PHP. If you use LiteSpeed or some other web server, these steps should still apply to you, but some modifications might have to be made. Please note that this assumes you already have HTTPS set up (enabled on the server, certificates made, etc…). I’ll go over how to do that in a later article, but to keep things simple, assumptions are made here. First we’ll handle Lighttpd, as I’ve already set this up for personal projects.

HTTPS in Lighttpd

Open up your config file and add the following (unless it’s for a specific domain, I always add changes at the end). The redirect block below is scoped to a single host (secure.example.com); if you want it for your entire server instead, use the global variant shown a bit further down:

$HTTP["host"] =~ "secure.example.com" {
$SERVER["socket"] == ":80" {
url.redirect = ( "^/(.*)" => "https://secure.example.com/$1" )
server.name                 = "secure.example.com"
}
}

$SERVER["socket"] == ":443" {
ssl.engine = "enable"
ssl.pemfile = "/path/to/pem/file/for/example.com"
server.document-root = "/path/to/web/files/for/example.com/"
}

In actuality, you don’t need the $SERVER["socket"] == ":80" { … } conditional, but it’s there to reduce load and redundancy on the server (why keep forcing HTTPS if it’s already in use?). Taken from the Lighttpd wiki’s guide on this, you can also set it up globally like this:

$SERVER["socket"] == ":80" {
$HTTP["host"] =~ "(.*)" {
url.redirect = ( "^/(.*)" => "https://%1/$1" )
}
}

HTTPS with Apache (without mod_rewrite)

Personally, I think mod_rewrite is a pain to use in Apache, but it’s also very robust, so it’s something of a double-edged sword. Since I don’t personally use Apache, I can only go by the experience I had doing this at my last job. I’ll show both the easy way (this one) and the harder way (using mod_rewrite). First, the easy, but not very robust, approach.

Open up your config file (httpd.conf by default), and put this near the end:

Redirect permanent / https://secure.example.com/

Doing this is a bit finicky, as sometimes Apache will decide it doesn’t like the “/” at the end of the URL, for example. But this will redirect all of the traffic Apache handles to HTTPS. It sends the browser an HTTP 301 status code (a permanent redirect), which essentially tells the browser not to bother doing anything with the URL except redirect to https://secure.example.com/.

HTTPS with Apache (with mod_rewrite)

Not everyone has the ability to use mod_rewrite, but if you do, then .htaccess is your best friend here. This is essentially the same as the Lighttpd step above, but it doesn’t live in the main configuration file. Create a .htaccess file in the directory you want to be HTTPS-only, and put this in it:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://secure.example.com/$1 [R]

In short, this tells Apache we’re using mod_rewrite (RewriteEngine On) and checks whether HTTPS is not being used (RewriteCond %{HTTPS} off). If the condition is met (the domain is being accessed over plain HTTP), it rewrites/redirects the user to the HTTPS form. (Note that in a per-directory .htaccess context the leading slash is stripped before the pattern is matched, which is why the rule matches ^(.*)$ rather than ^/(.*).) If you want this globally, or if you have multiple domains pointing to the same path, just replace “secure.example.com” with %{HTTP_HOST}. The $1 at the end is whatever comes after the “secure.example.com/” part (i.e.: index.php, contacts.php?user=awesome, etc…). Lastly, if this is the last (or only) rewrite rule in the .htaccess file, replace “[R]” with “[L,R]” (without quotes); “L” designates the last rule, and “R” designates a redirection rule. mod_rewrite is a very powerful tool in Apache, and I might do a write-up on it in the future, after I brush off my rusty Apache skill set.

PHP Redirect

This last trick is nothing special, and there are millions (almost literally) of ways to do this in PHP alone, let alone other languages. Usually on the sites I design, I give the user the option to force SSL when they’re logged in. Then the page header always checks whether the user is browsing over HTTPS; if they aren’t, it redirects them using the header() function. Please note that for this to work, the code has to run before any output is sent (i.e.: above the opening <html> tag or anything else that outputs text; see PHP’s header() function for more information). Here’s the simple code I use on my sites:

if (empty($_SERVER['HTTPS']) && $_SESSION['force_https']) {
    header("Location: https://secure.example.com/" . $_SERVER['REQUEST_URI']);
    exit; // stop rendering the rest of the page once the redirect header is sent
}

It’s not the most robust, and you can leave out the "&& $_SESSION['force_https']" part as well (without it, it’s just a basic HTTPS check), but it gets the job done. $_SERVER['REQUEST_URI'] plays the same role as the “$1” in the Apache (mod_rewrite) example above.

I know a lot of sites have started to implement this kind of feature (forcing HTTPS), such as Facebook, Gmail, and Hotmail (their HTTPS issue is now resolved), among others, though perhaps it’s a little late in the game to be starting. Regardless, I hope these tips have helped, and that you start preparing your users for a more secure experience!

