I.T. Security and Linux Administration

January 1, 2011  6:07 PM

Deployment Pre-Cautions

Profile: Eric Hansen

First off, I would like to wish a happy New Year to everyone who reads my blog. I’m hoping to bring new content on a more frequent basis.

During deployment, things can get hectic, especially when you’re deploying something that’s still relatively new, such as offering a new operating system for website hosting. In the rush, it’s pretty common (if not practically a requirement, it seems) to forget the essentials that make a deployment go as smoothly as possible. These points won’t fit every rollout, nor is this an exhaustive list, but they cover things you might overlook…and if this post stops even one person in a hundred from making a mistake, then it has done its job.

1. Pre-plan the Development Cycle
There’s no doubt that something will go wrong at least once during deployment. That brings me to point #2, but first, before you even touch anything else, it’s best to plan out how, when and where things are going to happen.

If you plan on building a server in room A and mounting it in room B, reserve space for the server. Or, if you’re starting website development, know what the customer wants before you write a single line of code…at least enough to work from, and tweak what you need to as you go.

Pre-planning is something quite a few people who are just starting out skip over, which leads me to my second point…

2. Before Testing, Know What You Can and Cannot Do
After you figure out what needs to be done, see what you can do on your own and what you’ll need help with. Plain and simple: even though it’s another pre-planning step, this is more of a self-evaluation and a C.Y.O.A. moment.

If you’re not well-accustomed to hardware, see if a friend can teach you what you need to know, or even watch videos about it. Unsure how to write your own class in PHP? Search Google, or (better yet) PHP’s official website (www.php.net). This will be a life saver if, half-way through a project, the customer wants you to change something.

3. Evaluate Your Resources
See what you have and what you need. Do you need more memory for that 32 GB server, or an IDE for that new programming language you have to learn in the next 24 hours? It’s better to have more than you need than too little, especially when things come down to the last minute.

You should also, at this stage, work out how much time you can allocate to your project. If you know the first week is going to be full of family commitments, it’s probably best to tell the client it’ll have to wait a week. Distractions come easily, whether you have a job to commute to or you work from home. Know when things can get done and how long you can work on them.

4. Re-Pre-Plan Everything
This might sound a little counter-productive, but the logic is that you re-evaluated how everything should work in step #3…so now it’s time to see if you’re forgetting anything. Does the customer want more flair added to that next-gen website you’re building, or does the server now need 64 GB of RAM? As long as everything still matches up with step #1, you’re good to move on to the next step.

5. Take Your Time
This is probably where most mistakes are made, next to skipping pre-planning: not taking things slowly. With the way our lives work now, we’re always rushed to get things done as soon as possible. Maybe you do it hoping the customer will be impressed and request more work, or maybe you forgot about that honeymoon and are now trying to handle two things at once.

One word: STOP. Why, you might ask? You make more mistakes speeding through something than you do taking your time. The customer will better understand that you want to give them a bug-free product, rather than a product that only works when it wants to. During this phase, it’s also very important to test every time you implement something new. Throwing everything together at once, only to find that about 5% of what you made actually works, just discourages the customer more.

As I said, these points may not apply to every project or to everyone; this is more of a “learn it from someone who knows” perspective. I’ve made these mistakes, and they caused quite a ruckus when trouble hit. So, the next time you want to install that plug-in for WordPress, make sure you have a backup handy, and know that it will work with the rest of your setup.

December 24, 2010  1:43 PM

The dos and don’ts of making a SysAdmin’s life easier

Profile: Eric Hansen

ACM Queue recently posted an article (“What can software vendors do to make the lives of sysadmins a little easier?”, Dec. 22, 2010) listing 10 dos and don’ts for making a system administrator’s job easier. The article makes some interesting points, but I’m going to give my own perspective on them.

DO have a “silent install” option
- I agree with this 110%. Going to one computer to click a button or choose some options is fine; going to multiple computers to do the same task is annoying (as well as counter-productive). I’m sure this is a big reason why most (if not all) package managers, such as yum, apt and pacman, have an auto-yes/no-prompt option. The only downside I can foresee is if you need to customize the install for a subset of computers; but even then, it should be easy enough to put those machines in a separate queue.
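To make the silent-install point concrete, here’s a minimal dry-run sketch of pushing an unattended install to several machines. The host names and package are made up, and the `echo` keeps it from actually running anything; drop the `echo` (and use your package manager’s flag: `-y` for apt-get and yum, `--noconfirm` for pacman) to use it for real.

```shell
#!/bin/sh
# Dry-run sketch: push a silent install to a list of hosts.
# HOSTS and PKG are placeholder values.
HOSTS="web1 web2 db1"
PKG="htop"
for h in $HOSTS; do
    # -y answers "yes" to every prompt, so no one has to sit at a console
    echo ssh "$h" "apt-get install -y $PKG"
done
```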

DON’T make the administrative interface a GUI
- Personally, I see this one as common sense. For desktop users it’s great to have a GUI for programs, but it seems illogical to run a window manager on a server you’ll VNC into very, very rarely. Most sysadmins just use tools such as PuTTY or another SSH client.

Another angle on this, though: what about web interfaces? Granted, a web interface isn’t truly a GUI in the traditional sense, but would one be acceptable? My own taste says yes (a good example is SafeSquid’s administration…it’s all web-based). So ditch the GTK+ design work and build a web interface instead; your system resources will thank you.

DO create an API so that the system can be remotely administered
- This one is hit or miss for me. While I agree that being able to remotely administer a program is a grand idea, I don’t see how it’s much different from opening an SSH session and viewing logs. It’s a short comment, but the whole process feels overdone. APIs are meant to extend the functionality of a program (i.e., plugins), and while the article rightly notes that an API helps with that, I feel the author never truly addressed this point, and perhaps for good reason.

DO have a configuration file that is an ASCII file, not a binary blob
- Another point I believe in 110%. The main reason, as the article states, is the ability to use diff to see exactly what changes were made.

As an example, let’s compare the two most popular cross-platform browsers, Mozilla Firefox and Google Chrome. Chrome stores its browsing data in a database (SQLite, to be exact); try opening one of its database files in a text editor and you get a lot of unreadable text. Firefox’s configuration files (prefs.js, for example), on the other hand, are plain ASCII that you can read, and diff, directly. In short, one format is ASCII, the other is a binary blob.
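A tiny, self-contained illustration of why plain text wins (the file contents here are invented for the demo): diff pinpoints exactly which directives changed, something you simply can’t do against a binary blob.

```shell
#!/bin/sh
# Fake "before" and "after" copies of an ASCII config file.
printf 'Port 22\nPermitRootLogin yes\n' > before.conf
printf 'Port 2222\nPermitRootLogin no\n' > after.conf
# diff shows each changed line; against a binary blob this would be noise.
diff before.conf after.conf || true   # diff exits 1 when files differ
```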

DO include a clearly defined method to restore all user data, a single user’s data, and individual items
- This was a big problem where I used to work. We had a very well-developed backup system in place, but it wasn’t well managed (we once lost a good 2-3 months’ worth of backups and no one caught it for a month). The backup system covered both points, but you also have to put protocols in place for monitoring your systems, especially your backup systems. Unless there’s a reason not to, I fail to see why a simple cron job that runs a backup script and ships the result to a backup server wouldn’t do the task just fine. I’ve veered off topic a bit, but it’s still worth noting here.
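As a sketch of that cron-plus-script idea (all paths, host names, and the schedule here are hypothetical): a tiny script tars up a directory, and a crontab line runs it nightly. The scp is echoed rather than executed, since the backup host is imaginary.

```shell
#!/bin/sh
# Hypothetical nightly backup script; a crontab entry like
#   0 2 * * * /usr/local/bin/backup.sh /var/www
# would run it. Here we back up a throwaway demo directory.
SRC="${1:-/tmp/demo-data}"
mkdir -p "$SRC" && echo "hello" > "$SRC/file.txt"   # demo data only
STAMP=$(date +%Y%m%d)
ARCHIVE="/tmp/backup-$STAMP.tar.gz"
tar czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
# Dry run: drop the echo to actually ship the archive to the backup server
echo scp "$ARCHIVE" backup@backuphost:/backups/
```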

DO instrument the system so that we can monitor more than just, “Is it up or down?”
- Another situation I’ve experienced. Having a system that actively monitors services is very important and well worth the investment. My personal preference is the Nagios/Opsview route (Opsview appears to be a fork of Nagios). It monitors whatever system resources you ask it to, and it works for more than just *nix-based systems. If you have more than one computer and/or server, this is definitely worth setting up; you’ll be thankful the next time a server goes down, because this time you’ll actually know about it.
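I won’t walk through a Nagios install here, but as a toy illustration of “more than up/down” (the check and its threshold are my own invention, not Nagios’s): report an actual resource number instead of a mere ping.

```shell
#!/bin/sh
# Toy resource check: how full is the root filesystem?
# A real Nagios plugin would compare against warn/crit thresholds and
# exit 0/1/2 accordingly; the 90% threshold here is made up.
USED=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "root fs used: ${USED}%"
if [ "$USED" -ge 90 ]; then
    echo "CRITICAL"
else
    echo "OK"
fi
```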

DO tell us about security issues
- This one is a double-edged sword more than anything. Yes, you help the community and the people using your software, but you also tip off the enemy to a weakness. While I agree with the article about disclosing publicly, it’s always a tough call for a vendor to decide how much is too much. Still, it’s never a good idea to wait for a fix before announcing a flaw, because by then the enemy has almost always already won.

DO use the built-in system logging mechanism
- Yes, using syslog or Event Viewer (depending on the server’s OS) is great; better still, it makes everything run a lot smoother. There’s really no excuse not to use the system’s built-in facility for this, especially since it will always be better integrated with the system at hand.

DON’T scribble all over the disk
- This seems to be more of an issue on Windows servers than on any *nix one, but it’s still a good point. *nix systems follow a predefined filesystem structure; Windows neglects this in the name of user-friendliness, which makes it harder to track down hidden settings when something goes wrong.

DO publish documentation electronically on your Web site
- While this is whole-heartedly true for a Windows machine, for any system with manpages it shouldn’t be necessary. The only advantage over manpages is when the server is in an unbootable state of some sort; but at that point, I’m sure they’re still of no use unless you happen to have a LiveCD lying around.

All in all, the article was a good read, though in my opinion it made some mistakes in its points. Here’s a question for the readers of this blog: what do you feel is a do and/or don’t for making a system administrator’s job easier?

December 13, 2010  2:11 PM

Is Ad Revenue Worth User Insecurity?

Profile: Eric Hansen

In a vague sense, this does involve I.T. security. Over the past week, there have been a lot of drive-by downloads served through a fake advertising firm, which spread its payload through both Microsoft’s and DoubleClick’s ad services, affecting a very high number of computers and devices. Without going into too much detail, the general rundown is that someone registered the domain AdShufffle.com (intentionally three f’s; the real domain is AdShuffle.com, with two) and somehow tricked DoubleClick (owned by Google) and Microsoft’s own ad service into serving ads from that domain. Those ads then used JavaScript and iframe code to download software onto users’ computers and exploit various vulnerabilities to cause damage.

While I haven’t read anything about the extent of the damage, the pressing question is this: is an advertising-based business model worth risking your clients’ (personal) security? I don’t agree with using NoScript and ad-blocking programs extensively (websites do need to make revenue, especially if they offer free services to their users). But if these incidents keep arising, I see no alternative but to restrict ads.

The main issue I see with ads is that they try to grab the user rather than give them a reason to visit the advertiser’s site. By that I mean there are a lot of ads that use sound and flashiness to attract the eye, but serve no purpose beyond that. One nice thing about AdSense (Google’s own ad service) is that it’s text-only, which gives the user a real reason to visit the advertised site (if the ad is informative enough).

Stories like this one tend to push people further away from supporting the “starving artists” of the I.T. world, and push publishers toward more aggressive, and possibly more intrusive, ways of supporting themselves.

Source: Major Ad Networks Found Serving Malicious Ads

November 6, 2010  10:23 AM

SSH Security (Part 2)

Profile: Eric Hansen

In the last part, there was a lot of planning and preparation for setting up SSH to use keys instead of passwords to authenticate a user. Now comes the configuration and trial-and-error portion.

The first thing I’m going to cover is the sshd_config file (the config file for the SSH daemon), usually found at /etc/ssh/sshd_config. I’ll go through my changes one by one: what I changed, why I did so, and any suggestions I can offer.

Port ####

I highly suggest changing this, simply because it reduces the risk of automated attacks altogether. All this does is change the port that SSH listens on. I usually set mine higher than 1024, since a lot of well-known services listen on ports below that, but it’s up to you how high or low you go. One more note here: I’ve read a lot of talk over the years that you should also pin down the address a program listens on (i.e., the ListenAddress directive). I don’t believe in doing this unless the server has more than one NIC, especially if you have a switch in your network. I’ll probably write a separate article on that specific topic at some point, but not right now.

PermitRootLogin no

Another security measure; it doesn’t really deal with keys, but it’s a “better safe than sorry” setting. This stops “root” from logging in at all over SSH. I usually leave root login enabled until I know everything works, just because I tend to make mistakes; but in a production environment, I don’t know of any security-aware professional who would say to leave this set to “yes”.

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile     .ssh/authorized_keys

The first two enable key-based authentication: RSAAuthentication says an RSA key may be used, and PubkeyAuthentication allows a public key to authenticate the user. AuthorizedKeysFile simply tells SSH where to look for authorized public keys (relative to the user’s home directory). The reason for enabling the first two is that ssh-keygen generates an RSA key pair: both the private and public key are created on the user’s machine, and only the public key is copied to the server. I’ll get into this more in a bit.

PasswordAuthentication no

This is turned on by default, and it’s what makes it possible to log in using a password. If you want to completely disable password logins, set this to “no”. Otherwise, when you log in, SSH will ask for the key’s passphrase (if there is one; more on that in a bit), and if that fails, it will fall back to asking for the account’s password. I never see the point of leaving this enabled, but if you feel safer allowing both methods, keep it on.


If you remember my last SSH security post, I suggested creating an SSH-only group; the AllowGroups directive is what restricts logins to that group. If you didn’t create one, or want to allow users outside of it, AllowUsers helps as well. One problem I’ve seen is that you can have a long space-separated list of groups and/or users, but most editors tend to wrap long lines onto two or more lines. To keep things easier (and cleaner), you can instead use multiple Allow* lines in the config file. A personal recommendation: don’t enable both directives at once. What happens if you remove a user from AllowUsers but forget to remove them from the allowed group? They can still log in (assuming you didn’t delete the user or lock their account).
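For reference, a hedged sketch of what those directives might look like in sshd_config (the group and user names are placeholders; pick one mechanism rather than both):

```
# restrict SSH logins to members of one group (the SSH-only group idea)
AllowGroups sshusers
# ...or allow specific users; long lists can be split across multiple lines
AllowUsers alice bob
AllowUsers carol
```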

Phew. Okay, after you’ve made these changes, save the file and close it. Since we’ll be copying a key over before we can use any of this, I suggest NOT restarting the daemon until later. Now, on to generating the keys. First I’ll cover the server side, since we have to copy the client’s key over at the end.

On the server, there’s similar work to be done. Run this command from inside the home directory of the SSH account:

mkdir .ssh && touch .ssh/authorized_keys && chmod go= .ssh && chmod 600 .ssh/authorized_keys

This is similar to the client-side command you’ll run shortly, with the added “owner read and write only” permissions on authorized_keys. If those permissions aren’t in place, SSH tends to refuse the login entirely, since it treats anything other than 600 (i.e., 700 or 610) as a security risk. Now, to handle the client side of things.

Assuming AuthorizedKeysFile was left as the default, make the .ssh directory:

mkdir ~/.ssh && chmod go= ~/.ssh

This makes the directory in your home account and then chmods it to 700 (rwx------). Next, go into that directory and run the following command:

ssh-keygen -t rsa

By default, SSH uses RSA for keys (though you can use DSA as well). Next, you’ll be asked a few questions (most of which can be left at their defaults), such as:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.

Now, one thing to note here is the passphrase option. Put shortly: if you enter a passphrase, you’ll ALWAYS be asked for it when logging in. If you leave it empty, then as long as your key matches one in the server’s authorized_keys file, authentication succeeds. Personally, I go the extra mile and add this little bit of extra security. After that, you’ll have your private (id_rsa) and public (id_rsa.pub) keys ready to go. All that’s left is to copy the public key over to the server and restart the daemon. To copy the key, just use this command:

ssh-copy-id user@host

For example:

ssh-copy-id ssh_dummy@testing_machine

After this, restart the daemon on the server and try to log in from the client. If you set a passphrase, you should be asked for it; if you didn’t, you should land straight at the prompt.

If you have any issues, remarks, questions, etc. on this, feel free to leave a comment. It’s really a lot easier than it sounds. Another thing I like to do is use the dummy account’s home directory as a skeleton for new accounts (the -k option to useradd); that copies the entire directory over to the new account.

October 27, 2010  11:42 AM

Prelude to SSH Security (why I’m starting this series)

Profile: Eric Hansen

This probably should have been posted before my previous post, but I didn’t think of it until just now. Either way, I’d like to explain why I’m starting this mini-series of sorts.

After reading this article, it got me thinking: “I have SSH at home, why don’t I force my connections through it to make sure my data is secure?” While it’s true that nothing is ever 100% secure (if someone tells you otherwise, I’d love to hear how that’s possible), data leaks are finding a new home on everyone’s “fear list” of sorts.

The premise, if you’re not interested in following the link above, is that most websites don’t secure data beyond the login page. The article discusses a proof-of-concept Firefox extension that lets you sniff other users’ data while connected to the same hot spot.

Now, while my own logic is more of a “you get what comes to you” approach, I’d also like to help people avoid getting their accounts hacked. You can use programs like Tor (though even that is a pretty risky step), or just never log in to Facebook or Twitter anywhere besides your home computer (or your phone, if you’re not connecting to a wireless AP). But what if you don’t want to do either? Well, you can use SSH to open up a SOCKS proxy, so all your traffic is sent through SOCKS to your SSH server.
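That setup is what the series is building toward, but the core of it is a single OpenSSH flag. A hedged sketch (the host is a placeholder, and the echo makes this a dry run):

```shell
#!/bin/sh
# user@home.example.com is a placeholder for your own SSH server.
HOST="user@home.example.com"
# -D 1080: open a local SOCKS proxy on port 1080
# -N: don't run a remote command, just keep the tunnel open
# Dry run: drop the echo to actually open the tunnel.
echo ssh -D 1080 -N "$HOST"
# Then point your browser's SOCKS v5 proxy at localhost:1080 and its
# traffic rides the encrypted tunnel out through your home connection.
```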

That is what this series is ultimately building toward. While it’s a pretty short series all in all, I feel it’s a step in the right direction for this blog; with everything going on lately and all these scares, there’s no harm in bringing a new viewpoint to a topic that can always use another voice.

Also, please keep in mind that all of the steps and information I’m passing along here are my own advice; you can alter them or not take them at all. There’s always room to improve on what’s already been done, and I’m more than open to hearing what you choose to do.

In the meantime, I’m going to figure out where to take this series next (whether to do a part 1b or start on part 2) and continue from there. I doubt I’ll post a new entry this weekend, but I do plan on sparking some life and (hopefully) some debate in this blog and getting things rolling again. I’ll probably take breaks between series installments (I plan on at least one a month) to discuss other IT-related material as well, so we’ll see what the world brings us, one cycle at a time.

October 26, 2010  11:26 PM

SSH Security (Part 1 [perhaps a of b])

Profile: Eric Hansen

To kick-start a new life for this blog, I’ve decided to venture into the realm of SSH security, walking through the troubles I’ve experienced so far in securing my own SSH server and providing tips along the way. This first part will probably be one of the more boring ones (read: pre-planning ideas).

The first thing to cover is that you should think hard, ahead of time, about exactly what security you want. There’s a lot of buzz about key-based authentication instead of passwords. Having used both, the one practical difference I’ve noticed after switching to keys is that logging in is faster. Granted, my testing environment is home-based, so it isn’t exactly the best to base security conclusions on (I have no wireless router, for one). But with password authentication, it generally took me about 5-10 seconds from connecting to seeing my bash shell; with keys, it takes about 3-5 seconds at most.

Another issue is how secure you want to be. As I said, I set this up for myself mainly to learn how to break things so I can fix them again; there’s no wireless router, and I’m about the only one using the Internet unless my girlfriend comes over, so there aren’t many security warnings I need to heed. In a data center, however, it’s better safe than sorry: keep things as secure as possible without breaking everything. So for my environment, passwords would be fine (and were, for a long time), but in a production environment it’s wiser to use keys. Not just “for security’s sake”, but for the reason keys are more secure: while SSH encrypts the data stream regardless, keys add a second layer, so that someone who gets your passphrase still doesn’t have instant access to your server. Which brings me to my next point…

For the sake of preventing bad things from happening, create a dummy SSH account (preferably with /bin/rbash [restricted bash] as its shell). Yes, you should still set a password for the account (since sudo will still be accessible).

The fact is, your system will more than likely come under attack, and the more services you run or offer, the more likely it becomes. The reason for creating a dummy SSH account is basically to set up a honeypot of sorts. Would you want a stranger to walk straight into your home, even if you leave a key in a secret spot? I’d think not. The same philosophy applies here: let the stranger in at the porch door (the dummy account), but not through the door into your home (root or some other account). When I set my account up, I ran this:

useradd -s /bin/rbash -d /home/[account] -m -G [ssh only group] [account]

-s assigns a shell to the user (/bin/rbash), -d sets the user’s home directory, -m creates that directory, -G adds the user to a group (or list of groups), and “[account]” is the username.

From here, you’ll also want to assign a password to the account. NOTE: only do this if you’re not root; otherwise just run passwd [account] instead of these two lines.

su [account]
passwd

For my last piece of advice here (which kind of bleeds into another part that will be covered in more detail later), create an SSH-only group. I created one called “sshdude” and assigned my dummy account to that group only. The purpose, as I said will be shown in more detail later, is to restrict the SSH server to accepting logins only from accounts in that group. Of course you can specify more groups too; I only have the one set up.

All in all, this is mostly system-side preparation. I know I didn’t go into detail here about how to set up key-based authentication; that will come in part 2. The reason is that keys are something you set up after you’ve followed everything else here (well, first you need to make sure you even want to use this method). It’s pretty simple to set up, but it can be a bit tedious if you don’t automate the task.

Come to think of it, I might just come up with a script to automate this entire process…but, that’s for another time.

September 13, 2010  10:36 AM

SSH and the alias feature

Profile: Eric Hansen

When working with any *nix-based system, I’ve found that SSH becomes a part of your life…almost as if you’re married to it for the 9 (or 12…) hours you’re at work. But I see many people constantly typing the same command over and over again. I’m not going to get into why this is bad for productivity; I will, however, cover a simple and easy trick that makes using SSH faster and easier.

I discovered this on the job (I’d never messed with aliases before). Our sysadmins were kind enough to make it easy for us to log into the abundance of servers we house (both on-site and off). Essentially we have two accounts to log into before we can do anything: a generic support account that basically only has sudo access, and then root. (I don’t agree that root should be enabled, but that’s not the topic of this post.) Getting bored one day (weekends tend to be slow), I poked through the .bashrc file on “our” server (our SSH setup is strange…) and found that the short commands we use to reach the generic support account are just simple aliased functions.

This got me thinking, especially since I work from home when I’m bored and have two servers of my own to manage: how would I go about setting this up myself? While I liked how it was done at work, I like to keep things as close-knit as possible. So I developed this little gem (originally it had switches and was more advanced, but I’ve since reformatted, lost my .bashrc, and decided to recode it for the 10th time anyway):

function ssh_call () {
    case "$1" in
        # the host names below are placeholders; substitute your own
        vps)  ssh user@vps.example.com  ;;  # ssh to vps
        home) ssh user@home.example.com ;;
        work) ssh user@work.example.com ;;
    esac
}

alias vps='ssh_call vps'
alias home='ssh_call home'
alias work='ssh_call work'

You could hard-code the SSH commands directly in each alias; I did it this way ’cause I was learning how to use functions properly in shell scripting, so this was a sort of mini-project for me. Put that in your .bashrc, then either restart the terminal or run source ~/.bashrc, and you should be set (it shouldn’t give errors; it works for me). Now all you have to do is type any of the alias names (whatever is between “alias ” and “=”), and it’ll connect you. This is highly helpful when you have long commands to type.

I’m sure this will be helpful to at least a few people out there. It obviously works for more than just SSH, too (the starter aliases in a default .bashrc are for ls commands, for example), so if you have any nifty alias tricks, or any .bashrc tricks to show off, feel free to post them here.

August 27, 2010  7:44 PM

Using grep…one hell of a tool

Profile: Eric Hansen

I’m far from an expert when it comes to grep, or any other CLI tool for that matter, but I did discover one trick with grep that saved me a lot of time.

This command is essentially find, but it builds on it a little by not only displaying the file the grep’ed text is found in, but also showing you the matching line of text.

grep -H -r <text> <directory>

This does the following:
-H: prints the filename that the text is found in
-r: recursive

This is similar to doing something like:

find . -iname <text> -exec echo {} \;

I’m pretty bad at writing advanced find commands, so that’s most likely not very efficient (or even working, haha), but I hope you get the idea.

You don’t have to pipe the data through grep either (i.e., find … | grep -H -r …); grep alone can act as a “find in files” command. For example, if I run grep -H -r "H" `pwd` (search for “H” recursively [-H -r] in any files under the current directory [`pwd`]), the output looks something like this (for me):

/../.bashrc:# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
/../.bashrc:[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

I “..”-ed out the unimportant directory parts (I ran it from /home). As you can see, it’s pretty helpful if you want a quick “find in files” command. I’ve also written a bash script to automate this a bit more:


#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ]; then
  echo "No parameters passed."
  exit 1
fi

grep -H -r "$1" "$2"

exit 0

Not the most advanced script ever, but it’s pretty useful compared to typing grep -H -r … … over and over again.

That’s the end of this article, but I’d like to add that I’m planning on updating this blog a lot more now that school is out for a few weeks (yes, I go to college full-time and work full-time…it leaves me little time to enjoy the finer things in life, but I make do).

July 11, 2010  7:16 AM

Side note and beginner guide 101 (part 1)

Profile: Eric Hansen

First off, I’m proud to announce that people are starting to notice this blog (see http://www.linuxaffinity.com/?p=19692 ).

Now, for another Linux tip…don’t use Simple Script!!! Just kidding (although it does cause more issues than installing any of those products manually would).

This one is for the aspiring Linux administrators out there. I want to be clear that I am by far, without a doubt, not an experienced one, but I feel I can pass on advice that will help those who are just learning the CLI tricks of the trade.

Tip #1: Learn the shortcuts. Turning those folder corners and nano’ing (or vi’ing) it up gets a lot faster once you know you can Tab to auto-complete paths and commands.

Tip #2: Learn how to script. To group everything into one (since you’ll be incorporating a lot into scripting), this also means learning helpful tools like awk and sed. You’ll be completely amazed at how easy your job becomes once you can cut 10 minutes of typing down to a simple ./fix-my-problems…believe me.

Tip #3: Never stop reading. I’m currently going for my bachelor’s, and even my teachers who have been in the field for decades say this. There’s no way you’ll survive if you don’t keep reading and staying up to date with what’s out there. Flavors of the week come and go constantly.

Tip #4: Remember, the answer is always 42 (hey, why not have some humor?).

Tip #5: When in doubt, don’t assume. You have Google and (hopefully) co-workers at your fingertips…don’t assume you know how to fix something if you aren’t sure. The only thing standing between a user getting you fired and you coming in tomorrow is your ability to know and learn…which goes back to tip #3.

Tip #6: Set up a test server at home to become better at your job. This is essentially what got me my job in the first place: I gained most of my knowledge by doing this stuff at home. I learned how to set up web servers, handle e-mail issues, troubleshoot and rebuild computers, etc. Again, going back to tip #3.

Tip #7: Analyze issues correctly, but don’t over-analyze. This is a big mistake I’ve personally made in the past. When faced with an issue, don’t dive in head-first…write it down to better understand what’s going on if you have to. What I do when facing certain issues is open up Notepad (yes, I do Linux admin work from Windows…but hey, that’s what PuTTY and a personal laptop are for) and just break the problem down into parts. I find it’s a lot easier to handle situations that way, especially when it comes to e-mail, because there are so many variables involved. Basically: sit back, relax, breathe, and look at it in parts.
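To make tip #2 a bit more concrete, here’s a tiny sketch of the kind of ./fix-my-problems script it describes (the task and the 90% threshold here are just made-up examples):

```shell
#!/bin/sh
# Hypothetical example for tip #2: one awk line that replaces
# eyeballing "df" output by hand. Flags filesystems >= 90% full.
# -P forces POSIX single-line output; $5 is Capacity, $6 is mount point.
df -P | awk 'NR > 1 && $5+0 >= 90 {print $6 " is at " $5}'
```

The `$5+0` trick coerces the “95%” string to a number so awk can compare it, which is exactly the kind of small idiom that makes awk and sed worth learning.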

I know these aren’t the most helpful tips for everyone, but really, I don’t want to see people entering this field become exasperated by the complexity they can run into. Being a Linux administrator is both a rewarding and a thankless job. You don’t get much praise from your clients (usually), but you do get to feel like you’ve accomplished something every time you fix an issue. Just remember: things happen, and you can’t fix ‘em all. Most importantly, keep up with tip #3.

More tips to come, I plan on this being a never-ending series.

July 4, 2010  2:17 PM

Restarting services made easy!

Eric Hansen Eric Hansen Profile: Eric Hansen

Figured while I’m posting scripts that make my job easier, I’d share another, since we have servers here with services that sometimes need to be restarted quite frequently.

While this one is hard-coded for Apache/LiteSpeed, it will work for any service that creates a PID file.


#!/bin/bash

# stop()
# Stops the web server service
function stop {
    # Attempt to stop the service cleanly
    service httpd stop

    sleep 5

    # pid files only exist when either the program is running
    # or when it's a zombie process
    if [ -f /var/run/httpd.pid ]; then
        # If httpd.pid still exists, keep trying to stop the service cleanly
        echo "---> httpd.pid still exists...attempting stop again."

        # The pause is just used for gracefulness
        sleep 5

        # Recursiveness is awesome!
        stop
    else
        # pid file doesn't exist, so why worry?
        echo "----> httpd.pid no longer exists."
    fi
}

# start()
# Starts the web server service
# Everything is basically the same as above, so not commenting this also
function start {
    service httpd start

    sleep 5

    if [ -f /var/run/httpd.pid ]; then
        echo "----> httpd.pid exists...server has been restarted."
    else
        echo "----> httpd.pid doesn't exist...server not restarted, attempting to start again."
        sleep 5
        start
    fi
}

# Call the functions like a boss
stop

sleep 10

start

# Graceful exit...not needed but used in best practice
exit 0

All the sleep commands are in there because I’ve found service restart tends to rush through the restart process. Also, checking whether the PID file still exists is a pretty good indicator of whether the service is still running (of course, you could also awk the output of ps or some other process command).
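A rough sketch of that ps/awk alternative (assuming the service shows up as “httpd” in the process table; the function name is made up):

```shell
#!/bin/sh
# Alternative liveness check: grep the process table instead of
# trusting the PID file. With default "ps -e" output (PID TTY TIME CMD),
# the command name is the fourth column.
is_running() {
    ps -e | awk '{print $4}' | grep -q "^$1\$"
}

if is_running httpd; then
    echo "httpd appears to be running."
else
    echo "httpd does not appear to be running."
fi
```

This avoids the stale-PID-file problem: if the process crashed without cleaning up /var/run/httpd.pid, the PID-file check reports it as running while the process table does not.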
