I.T. Security and Linux Administration


February 21, 2011  4:17 PM

Drive Encryption (feat. TrueCrypt) Part 2

Eric Hansen
Security

Installation

Unfortunately, TrueCrypt’s official website does not offer a single click-this-link-to-download; instead, you have to choose between a GUI (x86 or x64) or CLI (x86 or x64) version. As my server is pretty old (read: very, very old), I had to use the x86 version, so I can’t currently verify what differences there are between the two (if any). What I did was download the CLI/console x86 version, then use WinSCP to copy the tar.gz file over to my server.

Note that most Linux flavors tend to carry TrueCrypt in their repos, so if you don’t want to follow the manual installation procedure, you can easily install it that way. Personally, though, I prefer the manual route, so that’s what I’ll cover here.

Once you download, transfer, and untar the TrueCrypt .tar.gz file, you’ll be left with one file:

truecrypt-7.0a-setup-console-x86

All you have to do is run this file, and it’ll give you two options:

TrueCrypt 7.0a Setup
____________________

Installation options:

1) Install truecrypt_7.0a_console_i386.tar.gz
2) Extract package file truecrypt_7.0a_console_i386.tar.gz and place it to /tmp

To select, enter 1 or 2:

I chose option 1, since 2 is more of a “see what’s involved in this”. After choosing 1, you’ll be prompted to acknowledge the EULA, where the Page Down key comes in handy, as there’s a lot to read. Once you reach the end, you can type either “y” or “yes” (without the quotes) to accept it. After that, you’ll see the following screen:

Uninstalling TrueCrypt:
-----------------------

To uninstall TrueCrypt, please run ‘truecrypt-uninstall.sh’.

Installing package…
usr/bin/truecrypt
usr/bin/truecrypt-uninstall.sh
usr/share/truecrypt/doc/License.txt
usr/share/truecrypt/doc/TrueCrypt User Guide.pdf

Press Enter to exit…

That’s it for the installation; it’s a pretty easy install, I’d say.

Usage

I’m not going to cover hidden volumes in this post, so I’ll simply quote what TrueCrypt’s help output says about creating the hidden volume type:

Inexperienced users should use the graphical user interface to create a hidden
volume. When using the text user interface, the following procedure must be
followed to create a hidden volume:
1) Create an outer volume with no filesystem.
2) Create a hidden volume within the outer volume.
3) Mount the outer volume using hidden volume protection.
4) Create a filesystem on the virtual device of the outer volume.
5) Mount the new filesystem and fill it with data.
6) Dismount the outer volume.
If at any step the hidden volume protection is triggered, start again from 1).

In part one I covered what hidden and normal volumes are, as well as some other fundamental concepts for using this software. So, right now I’ll go through creating a TrueCrypt volume.

Before going into creating a volume, though, there is a switch (--random-source) you can use instead of typing in 320 random characters. While the help menu says “Use file as source of random data”, you can also point it at a device (such as /dev/urandom); I’ll show this in the scripted example at the end of this section. You’ll have to wait a little bit for it to finish, though, as it won’t output anything to let you know it’s gathering data.

truecrypt -t -c

Very simple, yet very powerful. You’ll be prompted with the following:

Volume type:
1) Normal
2) Hidden

In my own experience, I’ve never found a need for a hidden volume, so I always go with normal (1). Once you do that, it’ll ask you for the volume path. Note that this must be the full absolute path (/home/tcvolumes/…, not ~/…), including the filename of the volume itself; when I first worked with TrueCrypt, this threw me off a little.

Now it’ll ask you for the size of the volume, with megabytes (M) being the default unit. If you want to specify kilobytes or gigabytes, type the size followed by K or G (e.g., for 500 kilobytes, type 500K). It probably shouldn’t have to be said, but make sure the partition or disk you use has enough free space for the volume you are creating.

The first interesting bit during a volume set up is the encryption choice. Here’s what you’ll see next:

Encryption algorithm:
1) AES
2) Serpent
3) Twofish
4) AES-Twofish
5) AES-Twofish-Serpent
6) Serpent-AES
7) Serpent-Twofish-AES
8) Twofish-Serpent

While the default is #1, I prefer #5 or #7, but beyond those I recommend any of the AES-based options. AES is a robust cipher, among the hardest encryption algorithms to crack, and overall gives your volume a solid security foundation.

After that, you’ll choose the hash algorithm, where the options offered seem to depend partly on the encryption algorithm you just chose; I generally choose Whirlpool. For the file system, the default is FAT, but I like to stay consistent, so I stick with whatever format I have set up for the main partition (ext3 on my server). Whatever you pick, make sure your kernel supports that file system.

It’ll ask you for a password after you choose the file system. This is used when mounting the volume, to ensure wandering eyes aren’t peering into your files. While you don’t have to provide a password, it is very highly recommended that you do, unless you’re positive you’re the only one who will ever be able to access or mount the volume. If you enter a password shorter than 20 characters, it’ll warn you that short passwords are generally easier to crack and ask whether you still want to use it. Alternatively, you can provide a keyfile (multiple keyfiles are possible) to use instead of a password. Note that the keyfile has to exist beforehand, or you’ll receive an error saying that the keyfile does not exist.

Then the real fun begins, as you finally get to type in random characters to generate the encryption key. It requires at least 320 characters, and if you press Enter at any time, it’ll show you how many characters are left. After that, the volume will be created, and then you just have to mount it.
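If you’d rather script the whole thing instead of answering the prompts, the switches can all be supplied on the command line. Here is a rough sketch of what that looks like; the path, password and size are placeholders, and you should double-check the exact option names and accepted values against truecrypt --help on your build:

truecrypt -t -c /home/tcvolumes/work.tc --volume-type=normal --size=524288000 --encryption=AES-Twofish-Serpent --hash=whirlpool --filesystem=ext3 -p "volume password here" -k "" --random-source=/dev/urandom --non-interactive

The --size value is in bytes (that one is roughly 500 MB), and --random-source=/dev/urandom replaces the 320 characters of keyboard mashing mentioned earlier.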

Mounting


This is pretty simple. The command is as follows:

truecrypt {volume} {mount point}

Just as with the mount command, though, the mount point has to exist before you mount the volume, or you’ll get an error. If you used a password for your volume, it’ll prompt you for it before mounting. You’ll also be prompted for a keyfile (even if you didn’t provide one during creation), so if you didn’t set one up, just hit Enter. After that, it’ll ask you whether you want to protect a hidden volume if there is one, which defaults to no. Then you’re done mounting the volume. At this point you can do pretty much whatever you want with it, as it’ll act just like any other mounted drive.

For mounting, if you want to speed up the process a bit, you can use this:

truecrypt -t -k "" --protect-hidden=no {volume} {mount point}

This speeds up the mounting process by only asking for a password. The -t switch says to use a text-based interface, -k specifies keyfiles ("" means no keyfiles), and --protect-hidden does the same as the prompt during the regular mount process. Another switch you can use, though it pretty much defeats the purpose of using passwords, is -p, which lets you specify the password on the command line. If you decide to use it, the command (if you want to skip all user interaction) would look like this:

truecrypt -t -k "" -p "password here" --protect-hidden=no {volume} {mount point}

If you use the -m switch, you can set mount options (such as -m ro for read-only mount), similar (again) to the mount command. Another option that might come in handy is changing the password, which can be done using the -C switch. To do this, you would simply use the following command:

truecrypt -C {volume}

It’ll prompt you for the volume’s current password (you can use the -p switch again here if you like), then the new password, and another 320-character string of random text.
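As an aside, here’s what a read-only mount would look like using the switches above (again just a sketch; /mnt/tc is a placeholder mount point, and /root/tcv is the example volume from below):

truecrypt -t -k "" -m ro --protect-hidden=no /root/tcv /mnt/tc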

Dismounting

I’ll keep this short, as there are really only two commands for dismounting a volume. First, if you want to dismount a specific volume, you would type the following:

truecrypt -d {volume}

So, if your volume file was located at /root/tcv, you’d type truecrypt -d /root/tcv.

Lastly, if you would like to dismount all of your volumes, you would just use the following:

truecrypt -d

Finish

I’ll be covering more of TrueCrypt in future articles, as well as other programs that add to security and safety.  Keep checking back for more articles, as I’ll also be covering more about the IT world as well.

February 16, 2011  1:47 PM

Drive Encryption (feat. TrueCrypt) Part 1

Eric Hansen

Preface

“Who’s going to look in this file named ‘top secret details.xls’?” Does this type of question sound familiar? In the IT world, questions like this should feel like a second language. However, once that user gets the file stolen by some means, you’re the first one they come to in order to save them. What if all of this could have been avoided to begin with, though?

This brings in my personal favorite, TrueCrypt. I’m sure most people have already heard of this software, as it is quite well known. This article will cover the general aspects; later ones will go more in-depth with this and other software.

Good vs. Bad

Before going into installing TrueCrypt, I want to cover some of the benefits of using it. What TrueCrypt does is basically create a virtual drive (or virtual volume) of sorts that it mounts and unmounts, where you can put any type of document. This may not sound amazing at all, especially in the Linux world, where you could just repartition a small hard drive for this purpose. The added benefit of TrueCrypt, though, is that it encrypts the contents of its virtual drive; the encryption algorithm, strength, etc. are chosen during the set-up process of the virtual drive.

Another benefit is that it works on multiple platforms (Linux, Windows and Mac OS X so far), so it’s quite easy to carry the virtual drive on a thumb or network drive and not have to worry about whether it’s going to work. There’s also the fact that it can encrypt an entire drive, requiring a password on boot for the hard drive to be decrypted and bootable. While I have not done any personal studies on how effective this is against forensics, it’s still a nice security feature to combat those who try to access your data without you knowing. This doesn’t just apply to hard drives, but also to flash-based drives such as thumb drives.

Lastly, I would like to point out that its seeding and encryption algorithms are of top quality. The seed pool in the GUI is filled by having the user move the mouse around a box to collect random data. As for the algorithms used, they’re mostly AES- and Twofish-based, which is really quite nice, especially since AES is one of the more difficult schemes to break.

With the good, though, comes the bad. While it may not come as a surprise, TrueCrypt does have to be installed on the host computer before a TrueCrypt-ed drive can be mounted. A remedy for this does exist in the form of a portable TrueCrypt, but last time I checked it was Windows-only and essentially required you to partition your thumb drive to use it anyway. You also need administrator rights to mount the drive (this goes for Windows and Linux, most likely Mac as well). If this is being used on your own PC/laptop, then it really doesn’t pose much of a problem; but if you are trying to use this at work, on a friend’s computer, etc., it might cause a security concern.

There’s one more point I want to cover before getting into installation: you can create two different types of volumes. One is a regular volume, which is the same as a partition, nothing special. The other is a hidden volume, which works in a kind of hierarchical fashion. It’s embedded inside a regular volume, but stays hidden until it’s mounted, as TrueCrypt tucks it into the free space of the volume it lives in. Both volumes should use separate encryption passphrases as well, to preserve the secrecy the hidden volume is meant to provide.

Here is where this entry will leave off. The next one will cover installation, usage and any other pros and cons I forgot to mention here. Look for it very soon.


January 19, 2011  9:40 PM

Dual-booting Linux and Windows 7: The 0xc0000225 Error

Eric Hansen

Disclaimer: This entry is focused on both Windows and Linux; I’m sure plenty of people will run into this same issue, so I’ll detail my steps.

Recently, I decided to reinstall Linux on my desktop (I gave up after KDE 4.0 was released, and I loathe Gnome) after seeing how much KDE has improved, and went through the process. Mind you, I did all of this via a USB drive, as I have no working optical drive. I dual-booted Kubuntu 10.10 with Windows 7 Premium (x64 for both), and everything went fine. Or so I thought.

I could boot into Kubuntu just fine after the install. I installed programs, programmed a bit, and ventured around a little to get the feel of Ubuntu/Debian-based systems again (I’m too used to Arch Linux). Long story short, after re-enjoying KDE/Linux, I rebooted to start Windows back up to make sure I hadn’t screwed anything up, and… this is where the story starts.

I got the following error whenever I booted Windows 7:

0xc0000225 Boot selection failed because a required device is inaccessible.

Here, I thought it would be a simple fix. I rebooted back into Linux and searched high and low along the Interwebs’ wall of information. I tried a bunch of fdisk tricks (most of which wouldn’t work, as I was currently on the mounted drive and couldn’t unmount it), and cfdisk would fail, saying the partition table was invalid. Both fdisk and cfdisk gave partition table errors (I don’t remember the exact messages off the top of my head, but basically invalid cylinder ranges).

Everywhere I read on the Internet said this was nothing to be worried about; well, they apparently didn’t want to boot back into Windows 7. I Googled the fdisk error (as I know cfdisk is very finicky about everything), and everyone just took the short/easy route: reinstall both OSes. That wasn’t very viable to me, simply because I didn’t feel like making a bootable Windows 7 USB for the nth time. So I took a different route: I Googled the Windows error I received. That is what solved my problem.

Step #1: Download the Windows 7 Rescue Disk

Microsoft was smart with Windows 7 and offers a rescue disk for those who dared to do silly things. Download the version of the disk that corresponds with your version of Windows 7 (i.e.: x64 for x64 installs).

Step #2: Format the USB as NTFS (if it’s not already; if it is, then delete whatever is on it)

I’m not going to go into how to delete the contents of a USB drive, but to format one as NTFS, use mkntfs. The command I used (my USB was /dev/sdc) is:

mkntfs /dev/sdc1

You have to specify the partition number to format. You could optionally use the -Q (quick format) switch, but I had things to occupy myself with anyway, so I let it do its own thing; it took about 45 minutes. It zeroes out the partition (which takes the longest) and then does the formatting.
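If you don’t want to wait for the zero-out, the quick-format variant would look something like this (same caveat as above: adjust the device to match your own USB drive):

mkntfs -Q /dev/sdc1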

Step #3: Mount the USB

I let KDE handle it by unplugging and replugging the USB device. But if you don’t want to do that, you can easily use the mount command again:

mount /dev/sdc1 /mnt/usb

Again, /dev/sdc1 was my device; it might be different for you. The second argument is the mount point; just make sure it exists and is empty.

Step #4: Copy over the ISO content

This one is probably the easiest step, to me.

Now, I’m not gonna lie, I kind of cheated here and used UNetbootin, the reason being that it combines about 2-3 steps into one. You can get this software at http://unetbootin.sourceforge.net/ and it runs on Linux and Windows. For those on Ubuntu-based (and probably Debian) systems, apt-get install unetbootin will install it for you.

All you have to do is choose to use a disk image (ISO) file (the second radio button). Choose the Windows 7 rescue disk ISO you downloaded, check “Show All Drives”, and choose USB as the type. The reason you mounted the drive earlier is that at this stage, if you don’t, it won’t let you use your drive until you do mount it. So choose your drive partition (i.e.: /dev/sdc1) and continue on. It’ll copy over the files and install a boot loader.
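If you’d rather do the copy by hand, here is a rough sketch (the ISO filename is just a placeholder); note that this alone does not install a boot loader, which is exactly why UNetbootin is handy here:

mount -o loop Windows7RepairDisc_x64.iso /mnt/iso
cp -r /mnt/iso/* /mnt/usb/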

Step #5: Reboot and Run System Repair

Reboot your computer and choose to boot from the USB (this varies from motherboard to motherboard; most of the time you’ll choose HDD or USB-HDD). Let the rescue disk find your Windows partition, then choose “Startup Repair”. Some people say you should run this 3 times, but I only had to do it once. It’ll fix the partition table that got fubar’ed; then click “Finish” to reboot.

At this point, Windows 7 should be booting just fine now.

I’m not sure if this is only caused by Ubuntu or not, but most of the time it’s really an issue with resizing a Windows partition (which is what I did to cause the partition table boo-boo).


January 11, 2011  2:02 PM

Anti-Virus on Cloud Nine?

Eric Hansen

On January 5th, 2011, Sourcefire, the creator of SNORT, put out a press release about a “partnership to deliver a free, Windows-based version of the ClamAV(R) antivirus solution.” In a world where buyouts and “partnerships” happen left and right, why is this so important, you ask? Further into the press release, it states that the product will use a cloud-based anti-virus scanning set-up to deliver more true positives.

While I’m not the most avid supporter of cloud computing, this seems like a more intelligent way to use the capabilities of the cloud. Compare it to Windows with its roaming profiles: that is essentially cloud computing, and while it’s convenient, it also makes logging into another computer a very slow process. Here, from what the press release says, the cloud platform is basically used to store various scan reports (what files are infected, what type of infection, etc.). To my eyes, this looks like an offline version of the various online multi-scanner websites.

Right now, one of the main problems I see with the anti-virus community is not the number of false positives, but that there’s no common protocol. Nothing’s perfect, and all these products do their job differently, but the result is truly nothing but a big whirlwind. Perhaps this is what is needed for the anti-virus vendors to finally see that working together, instead of against each other, is about the only way to truly be successful. Sharing virus definitions can do nothing but help the public and the vendors themselves; the greediness only hurts everyone with a computer connected to the Internet.

I honestly hope this idea picks up. My ideal vision would be one anti-virus program that uses this cloud. No more Norton, McAfee, Kaspersky, etc.; instead, they all form a committee of their own (similar to the IEEE). They develop a single anti-virus program that does what it’s supposed to and simply works. Let the vendors develop their own GUIs for the definitions and scanner if need be, but make the definitions themselves open to the public, and let the whole programming community write their own scanners as well.

This is 2011, not 2000; the I.T. world has to change to keep moving forward instead of backward. No longer can we see things only for ourselves; we have to start really looking out for others as well. If there is no competition, it is doomed to fail.

If you want proof of this, look at Microsoft. For many years they were on top of the I.T. world, and Windows was the most-used operating system available. Then, in 1991, Linus Torvalds decided to release a college project called Linux. Was it a threat back then? Definitely not. Over the years, though, the community came together and made it a very viable alternative to the once-king operating system. Since then, Microsoft has really stepped up its products and released a very solid system in Windows 7.

Only time will tell how well the cloud-AV solution will work. While ClamAV was never the greatest anti-virus scanner in my experience, it was still a good option if paying for a product was not possible. Given that ClamAV is also heavily used on servers, I’m hoping this will lead to a whole different ballpark for safety, security and care for these vendors’ customers.


January 1, 2011  6:07 PM

Deployment Pre-Cautions

Eric Hansen

First off, I would like to say happy New Year to everyone who reads my blog. I’m hoping to bring new content on a more frequent basis.

During deployment, things can get hectic, especially if you’re deploying something that’s still relatively new, such as offering a new operating system for website hosting. During the rush, it’s pretty common (if not seemingly a requirement) to forget the essentials that make the deployment go as smoothly as possible. These points won’t fit every rollout, nor is this an extensive list, but they’re things you might overlook; and if this post stops even one person in a hundred from making a mistake, it did its job.

1. Pre-plan the Development Cycle
There’s no doubt that things will go wrong at least once during deployment. That brings me to point #2, but first, before you even touch anything else, it’s best to plan out how, when and where things are going to happen.

If you plan on building a server in room A and mounting it in room B, reserve space for the server. Or, if you’re in the stages of website development, know prior to writing one line of code what the customer wants…at least to work off of, and tweak what you need to as you continue on.

Pre-planning is something quite a few people who are just starting out skip over, which leads me to my second point…

2. Before Testing, Know What You Can and Cannot Do
After you figure out what needs to be done, see what you can do on your own and what you will need help with. Plain and simple, even though it’s another pre-planning step, this is more of a self-evaluation and a C.Y.O.A. moment.

If you’re not well accustomed to hardware, see if a friend can teach you what you need to know, or even watch videos about it. Unsure how to write your own class in PHP? Search Google, or (better yet) PHP’s official website (www.php.net). This will be a life saver if, half-way through a project, the customer wants you to change something.

3. Evaluate Your Resources
See what you have and what you need. Do you need more memory for that 32 GB server, or an IDE for that new programming language you have to learn in the next 24 hours? It’s best to make sure you have more than enough of what you need, especially when things get cut down to the last minute.

You should also, during this time, look at how much time you can allocate to your project. If you know that during the first week you’re going to have a lot of family time to devote yourself to, then it’s probably best to tell the client it’ll have to wait a week. Distractions come very easily, whether you have a job to go to or you work from home. It’s best to know when things can get done and how long you can work on them.

4. Re-Pre-Plan Everything
This might sound a little counter-productive, but the logic behind it is that you’ve re-evaluated how everything should work in step #3, so it’s time to see if you’re forgetting anything. Does the customer want more flair added to that next-gen website you’re making, or does the server now need 64 GB of RAM? As long as everything still matches up with step #1, you’re good to go on to the next step.

5. Take Your Time
This is probably where most mistakes are made, next to skipping pre-planning: not taking things slowly. Especially with the way our lives work now, we’re always rushed to get things done as soon as possible. Maybe you do this in hopes that your customer will be impressed and request more work? Perhaps you forgot about that honeymoon and are now trying to handle two things at once?

One word: STOP. Why, you might ask? You tend to make more mistakes speeding through something than not. I’m sure the customer will understand that you want to give them a bug-free product, rather than a product that only works when it wants to. During this process, it’s also very important to test whenever you implement something new. Throwing everything together at once, only to realize that about 5% of what you made actually works, just discourages the customer more.

As I said, these may not apply to every project or to everyone, but this comes from a “learn it from someone who knows” perspective. I’ve made these mistakes, and they caused quite a ruckus when trouble happened. So, the next time you want to install that plug-in for WordPress, make sure you also have a backup handy, as well as confidence it will work with the rest of your set-up.


December 24, 2010  1:43 PM

The do and do-nots of making a SysAdmin’s life easier

Eric Hansen

ACM Queue recently posted an article (“What can software vendors do to make the lives of sysadmins a little easier?”, Dec. 22, 2010) that lists 10 dos and don’ts for making a system administrator’s job easier. The article itself has some interesting points, but I’m going to give my own perspective on the points it makes.

DO have a “silent install” option
– I agree with this 110%. Having to go to 1 computer to click a button or choose some options is fine; having to go to multiple computers to do the same task is annoying (as well as counter-productive). I’m sure this is a big reason why most (if not all) package managers like yum, apt and pacman have an auto-yes/no-prompt option. The only downside I can foresee is if you need to customize the install for a selection of computers; but even then, it should be easy enough to put those computers in a different queue.
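For reference, the non-interactive flags I have in mind look like this (the package name is just a placeholder):

apt-get -y install somepackage        # Debian/Ubuntu: assume "yes" to prompts
yum -y install somepackage            # Red Hat/CentOS
pacman -S --noconfirm somepackage     # Arch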

DON’T make the administrative interface a GUI
– Personally, I see this as a common-sense one. While it’s great for desktop users to have some sort of GUI for their programs, it seems rather illogical to run a window manager on a server you’re going to VNC into very, very rarely. Most system admins just use tools such as PuTTY or another SSH client.

Another viewpoint on this, though: what about web interfaces? Granted, it’s not truly a GUI set-up, but would those be acceptable? My own taste says yes (a good example is SafeSquid’s administration; it’s all web-based). So basically, ditch the GTK+ design work and try to build a web interface; your system resources will thank you.

DO create an API so that the system can be remotely administered
– This is kind of hit or miss for me. While I agree that being able to remotely administer a program is a grand idea, I don’t see how it’s much different from creating an SSH session and viewing logs. APIs are meant to expand the functionality of a program (i.e.: plugins), and while the article makes a good point that an API helps extend program functionality, I feel the author just didn’t truly address this point.

DO have a configuration file that is an ASCII file, not a binary blob
– Another one I believe in 110%. The main reason, as the article states, is the ability to use diff to see what changes were made.

For an example of this, let’s compare the two most popular browsers that aren’t tied to an operating system, Mozilla Firefox and Google Chrome. Each browser stores its data differently. Firefox does it similarly to Internet Explorer, in that it writes your URL history to disk. Chrome, on the other hand, uses a database to store its data (SQLite, I believe, to be exact). In short, Firefox stores its data in ASCII, and Chrome stores its data in a binary blob (trying to open the database in a text editor gives you a lot of unreadable text).
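To illustrate the diff point, reviewing a config change is a one-liner when the file is plain text (the file names here are just placeholders):

diff -u /etc/someapp/someapp.conf.orig /etc/someapp/someapp.conf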

DO include a clearly defined method to restore all user data, a single user’s data, and individual items
– This was a big problem where I used to work. We had a very well-developed backup system in place, but it wasn’t very well managed (we lost a good 2-3 months’ worth of backups one time and no one caught it for a month). The backup system did cover both points, but you also have to set up some protocol for monitoring your systems, especially your backup systems. Unless there’s a reason not to, I fail to see why a simple cron job that runs a backup script and sends the result to a backup server wouldn’t do the task just fine. I guess I kind of veered off topic here, but it’s still something worth noting.
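As a sketch of what I mean by a simple cron job (the script path and backup host are hypothetical):

0 2 * * * /usr/local/bin/backup.sh && rsync -az /var/backups/ backuphost:/srv/backups/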

DO instrument the system so that we can monitor more than just, “Is it up or down?”
– Another situation I’ve experienced. Having some system available to actively monitor services is very important and well worth the investment. My personal favorite is the Nagios/Opsview option (Opsview appears to be a fork of Nagios). It monitors all the system resources you ask it to, and works for more than just *nix-based systems. If you have more than one computer and/or server, this is definitely something to set up, and you’ll be thankful for it the next time a server shuts down without you knowing.

DO tell us about security issues
– This is more of a double-edged sword than anything. Yes, you help out the community and the people using your services, but you also let the enemies know of a weakness. While I agree with the article about letting people know publicly, it’s always a tough call for any vendor to decide how much is too much. However, it’s never a good idea to wait for a fix before releasing a notice of a flaw, because by then the enemy has almost always already won.

DO use the built-in system logging mechanism
– Yes, using syslog or Event Viewer (depending on the server’s OS) is great, and better still, it makes everything run a lot smoother. There’s really no excuse not to use a system’s already-available logging facility, especially since it will always be better integrated with the system at hand.
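On the *nix side, writing to syslog from a script or program is trivial; the tag and message below are made up, but the command is standard:

logger -t myapp -p user.err "could not open configuration file"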

DON’T scribble all over the disk
– This seems to be more of an issue with Windows servers than any *nix one, but it’s still a good point. *nix servers have a fairly well-defined filesystem layout, while Windows tends to scatter settings around in the name of user-friendliness, which makes it harder to track down the hidden settings when something goes wrong.

DO publish documentation electronically on your Web site
– While this is whole-heartedly true for a Windows machine, for any system using manpages it shouldn’t be necessary. The only advantage of web docs over manpages is if the server is in an unbootable state of some sort; but at that point, I’m sure they’re still going to be of no use unless you happen to have a LiveCD around somewhere.

All in all, the article was a good read, but in my opinion it made some mistakes in its points. Here’s a question to the readers of this blog, though: what do you feel is a do and/or don’t that would make a system administrator’s job easier?


December 13, 2010  2:11 PM

Is Ad Revenue Worth User Insecurity?

Eric Hansen

In a vague sense, this does involve I.T. security. Over the past week, there have been a lot of drive-by downloads carried out via a fake advertising firm, spreading its payload through both Microsoft’s and DoubleClick’s ad services, affecting a very high number of computers and devices. Without going into too much detail, the generic rundown is that people registered the domain AdShufffle.com (intentional 3 f’s; the actual domain is AdShuffle.com [2 f’s]) and somehow tricked the ad networks DoubleClick (owned by Google) and Microsoft’s own ad service into serving ads from this domain. The domain would then use JavaScript and iFrame code to download software onto a user’s computer and use various exploits to cause damage.

While I haven’t read anything about what the damage was, the pressing issue is this: is an advertising business model worth risking your clients’ (personal) security? I don’t agree with using NoScript and ad-block programs extensively (websites do need to make revenue, especially if they offer free services to their users). But if these issues keep arising, then I see no alternative but to restrict ads.

The main issue I see with ads is that they try to attract the user instead of giving them a reason to go to the website. By this I mean there are a lot of ads that use sound and flashiness to attract a person’s eyes, but serve no purpose beyond that. One thing about AdSense (Google’s own ad service) is that it’s text-only, which gives the user a real reason to go to the ad’s site (if it’s informative enough).

Articles like the one this post is about tend to push people further away from supporting the “starving artists” of the I.T. world, and lead to publishers needing to find more strategic, and possibly more intrusive, ways to support themselves.

Source: Major Ad Networks Found Serving Malicious Ads


November 6, 2010  10:23 AM

SSH Security (Part 2)

Eric Hansen

In the last part, there was a lot of planning, and preparation, for setting up SSH to use certificates instead of passwords to authenticate a user. Now comes the configuration and trial-and-error portion.

The first thing I’m going to cover is the sshd_config file (the config file for the SSH daemon), which is usually found in /etc/ssh/sshd_config. I’ll go through, one directive at a time, what I changed, why I did so, and any suggestions I can provide.

Port ####

I highly suggest changing this to something else, simply because it cuts down the number of automated attacks you’ll see against the default port. All this does is change the port that SSH listens on. I usually set mine higher than 1024, because a lot of well-known services listen on ports below that, but it’s up to you how high or low you want to go. Another note here: I’ve read a lot of talk over the years that you should specify the address a program listens on (i.e.: ListenAddress 192.168.1.59). I don’t believe in doing this unless you have more than one NIC in the server, especially if you have a switch in your network. I’ll probably devote another article to this specific topic at some point, but not right now.

PermitRootLogin no

Another security measure that doesn’t really deal with certificates, but is a “better safe than sorry” approach. This prevents “root” from logging in at all over SSH. I usually leave root login enabled until I know everything works, just because I tend to make mistakes; but in a production environment, I don’t know of any security-aware professional who would say to leave this set to “yes”.

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile     .ssh/authorized_keys

The first two are for enabling certificate-based authentication: RSAAuthentication basically says that an RSA key can be used, and PubkeyAuthentication enables public key authentication for the user. AuthorizedKeysFile simply tells SSH where to look for the authorized public keys (relative to the user’s home directory). With this set-up, ssh-keygen generates a private and public key pair on the user’s machine, and the public key is then copied into this file on the server. I’ll get into this more in a bit.

PasswordAuthentication no

This is turned on by default, and it’s what makes it possible to log in using a password. If you want to completely disable password log ins, disable this. Otherwise, when you go to log in, it will ask you for the certificate’s passphrase (if there is one; more on that in a bit), and if that fails, it’ll ask for the account’s password. I never see the point of leaving this enabled, but if you feel safer allowing both methods, keep it there.

AllowGroups [ssh only group]
AllowUsers [user list]

If you read my last SSH security post, you’ll remember I suggested creating an SSH-only group; that’s what AllowGroups is for. If you didn’t do that, or want to allow users outside of that group, then AllowUsers will help as well. One problem I’ve seen is that you can have a big list (separated by spaces) of groups and/or users you want to grant access to, but most editors tend to wrap long lines so the list ends up on two or more lines. To make things easier (and cleaner), if you have a big list, you can have multiple Allow* lines in the config file. A personal recommendation: don’t enable both of these. Reason being, what happens if you remove a user from AllowUsers but forget to remove them from the group? They’re still able to log in (assuming you didn’t delete the user or lock their account).
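Putting it all together, the relevant part of my sshd_config ends up looking roughly like this (the port number is a placeholder, and the group name is the example one from part 1; adjust both to your own set-up):

Port 2222
PermitRootLogin no
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
AllowGroups sshdude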

Phew, okay. After you’ve made these changes, save the file and close it. Since we’ll be copying the key over before we can use this, I suggest NOT restarting the daemon until later. Now, on to generating the keys. First, I’ll cover the server side, as we have to copy the client’s key over at the end.

On the server, there’s similar work that needs to be done. Run this command from inside the home directory of the SSH account:

mkdir .ssh && touch .ssh/authorized_keys && chmod go= .ssh && chmod 600 .ssh/authorized_keys

This does something similar to the client-side command you’ll run in a moment, with the added “read and write only” (600) permissions on authorized_keys. If those permissions aren’t in place, SSH tends to refuse the log in entirely, as it views anything more permissive than 600 (i.e.: 700 or 610) as a security risk. Now, to handle the client side of things.

Assuming AuthorizedKeysFile was left as the default, make the .ssh directory:

mkdir ~/.ssh && chmod go= ~/.ssh

This makes the directory in your home account and then chmods it to 700 (rwx------). Next, go into that directory and run the following command:

ssh-keygen -t rsa

By default, SSH uses RSA for certificates (though you can use DSA as well). Next, you’ll be asked a few questions (most of which can be left at their defaults), such as:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.

Now, one thing to note here is the passphrase option. To put it shortly: if you enter a passphrase, you’ll ALWAYS be asked for it when logging in. If you leave it empty, then as long as a key matches one in the authorized_keys file on the server, authentication is assumed good. Personally, I go the extra mile and add in this little bit of security. After that, you’ll have your private (id_rsa) and public (id_rsa.pub) keys ready to go. Now all that’s left is to copy the public key over to the server and restart the daemon. To copy the key over, just use this command:

ssh-copy-id user@host

For example:

ssh-copy-id ssh_dummy@testing_machine

After this, restart the daemon on the server and try to log in over SSH from the client. If you entered a passphrase, you should be asked for it (and if you didn’t, you should be dropped straight to a prompt).

If you have any issues, remarks, questions, etc. on this, feel free to leave a comment. This is really a lot easier than it sounds. Another thing I like to do is use the dummy account as a skeleton account for others (the -k switch in useradd), which copies the entire directory of that account over to the new one.
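As a rough example of that skeleton trick (the account names are placeholders, and the dummy’s home directory has to be readable for the copy to work):

useradd -s /bin/rbash -m -k /home/ssh_dummy -G sshdude newuser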


October 27, 2010  11:42 AM

Prelude to SSH Security (why I’m starting this series)

Eric Hansen

This probably should have been posted prior to my previous post, but I didn’t really think of posting this until just now. However, I’d like to make a note as to why I’m starting this mini-series of sorts.

After reading this article, it got me thinking: “I have SSH at home, why don’t I force connections through that to make sure my data is secure?” While it’s true nothing is ever 100% secure (if someone tells you otherwise, I’d love to hear how that’s possible), data leaks are finding a new home on everyone’s “fear list” of sorts.

The premise, if you’re not interested in going to the above link, is basically that most websites don’t secure data beyond the log in page. It talks about a proof-of-concept Firefox extension that lets you sniff other users’ data while connected to a hot spot.

Now, while my logic is more of a “you get what comes to you” approach, I’d also like to help people out and keep their accounts from getting hacked. You can use programs like Tor (though even that is a pretty dangerous step to take), or just not log in to your Facebook or Twitter anywhere besides your home computer (or phone, if you’re not connecting to a wireless AP). But what if you don’t want to do either of those? Well, you can use SSH to open up a SOCKS proxy so all your data gets sent through SOCKS to your SSH server.
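As a quick preview of where the series is headed, the SOCKS tunnel itself is a one-liner (the port and host are placeholders; you’d then point your browser’s SOCKS proxy setting at localhost:1080):

ssh -D 1080 -N user@your-home-server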

That is what this series is ultimately going to get to. While it’s a pretty short series all in all, I feel it’s a step in the right direction for this blog, and with what’s going on lately and all these scares, there’s no harm in bringing a new viewpoint to a topic that can always use a new voice.

Also, please keep in mind that all of the steps and information I’m passing along on this topic are my own advice; you can alter it and/or not take it at all. There’s always room to improve what’s already been done, and I’m more than open to hearing about what you choose to do.

In the mean time, I’m going to figure out where to take this series next (whether I should do a part 1b or start on part 2) and continue from there. While I doubt I’ll be posting a new entry this weekend, I do plan on sparking some life and (hopefully) debate in this blog and getting things rolling again. I’ll probably take breaks in between installments of my series (as I plan on doing at least one a month) to discuss other IT-related material as well, so we’ll see what the world brings us, one cycle at a time.


October 26, 2010  11:26 PM

SSH Security (Part 1 [perhaps a out of b])

Eric Hansen

To kick-start a new life into this blog, I have decided to venture into the realm of SSH security, going through the troubles I’ve experienced so far in securing my own SSH server and providing tips along the way. This first part is probably going to be one of the more boring parts (read: pre-planning ideas).

The first thing to cover here is that you should think ahead of time about exactly what security you want. There’s a lot of buzz about key-based authentication instead of passwords. While I’ve used both, there’s only one real difference I’ve noticed in switching to keys (certificates): logging in is faster. Granted, my testing environment is home-based, so it isn’t exactly the best to base security decisions on (I have no wireless router, for one). However, password authentication would generally take me about 5-10 seconds from connecting to seeing my bash shell, while with keys it takes about 3-5 seconds at most.

Another issue is how secure you want to be. As I said, I set this up for myself just to learn how to break things so I can fix them again; there is no wireless router, and I’m about the only one who uses the Internet unless my girlfriend comes over, so there are really no security warnings I should take heed of. However, in a data center, for example, it’s better safe than sorry to keep things as secure as possible without breaking everything. So, for my environment, I would be fine using passwords (which I did for a long time), but in a production environment it’s wiser to use keys. Not just because of the “do it for security” mantra, but because of why it’s more secure: while SSH does encrypt the data stream, a double layer of security is preferred, so that if someone does get your passphrase, they don’t have instant access to your server. Which brings me to my next point…

For the sake of preventing bad things from happening, create a dummy SSH account (preferably using /bin/rbash [restricted bash] for its shell). Yes, you should still set a password for the account (since sudo will still be accessible).

Fact is, your system will more than likely come under attack, especially the more services you run/offer. The reason for creating a dummy SSH account is basically to set up a honey pot of sorts. Do you want a stranger to just walk inside your home, even if you leave a key in a secret spot? I’d think not. The same philosophy applies here: let the stranger onto the porch (the dummy account), but not through the door into your home (root/some other account). When I set my account up, I ran this:

useradd -s /bin/rbash -d /home/[account] -m -G [ssh only group] [account]

-s assigns a shell to the user (/bin/rbash), -d says where the user’s home directory is, -m creates the home directory, -G adds the user to a group (or list of groups), and “[account]” is the username.

From here, you’ll also want to assign a password to the account. NOTE: Only do this if you’re not root, otherwise just do passwd [account] instead of these two lines.

su [account]
passwd

For my last piece of advice here (which will most likely make part 1 a two-parter), and this goes into another area that’ll be covered in more detail later: create an SSH-only group. I created one called “sshdude” and assigned my dummy account to that group only. The purpose of this, as will be shown in more detail later, is to restrict the SSH server to accepting log ins only from accounts in that group. Of course you can specify more groups too, but I only have one set up.
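If you haven’t created such a group yet, this short sketch covers it (the group and account names are placeholders; the sshd_config side of this comes in part 2):

groupadd sshdude
usermod -aG sshdude [account]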

All in all, this is more system-based preparation. I know I didn’t go into detail here about how to set up key-based authentication, but that will come in part 2. The reason is that it’s something that has to be set up after you have followed everything else here (well, first you need to make sure you even want to use this method). It’s pretty simple to set up, but can be a bit tedious if you don’t automate the task.

Come to think of it, I might just come up with a script to automate this entire process…but, that’s for another time.

