I.T. Security and Linux Administration


March 8, 2011  10:40 PM

On-the-fly Data Compression: Wingless Bird or Golden Savior?

Eric Hansen

With solid-state drives (SSDs) seeing a lot of use now, especially as netbooks continue to displace aging notebooks and laptops, people have been bringing up the idea of on-the-fly data compression.  The main question is: how efficient is this concept?  As it stands, I have yet to find a Linux file system that supports it out of the box.  What I want to address in this post is whether that should change.

What Is On-the-Fly Data Compression?

Basically, what this whole post is about is the ability to compress data on the hard drive and uncompress it when it’s accessed or requested.  The premise is that this saves disk space, and that’s it.  Since SSDs have come onto the market with a punch, there has been a bit more pressure for this ability, especially since non-Linux file systems have been able to do it for years.

Why Should I Use This?

In a non-biased manner, it’s not a clear-cut question (similar to “which Linux distro/flavor should I use?”).  The feature has its moments, definitely, but it also has its limits, both of which I will do my best to address in this post.

How Do I Use This In Linux?

This is what draws me away from this technology/feature.  For Linux, ext2 and ext3 do support it, but you have to patch the kernel for the support to be there.  For ext2, there’s e2compr, which works with 2.2, 2.4 and 2.6 kernels (with the last 2.6 support being in 2.6.22-25).  For ext3, there’s a ported version of e2compr called e3compr.  The problem with both of these solutions is that they haven’t been updated in some time (e2compr since 2009, e3compr since 2008).  As for the other file systems (including ext4 and ReiserFS), I’m not able to find any information on them supporting this feature.
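For context, wiring one of these patches in follows the usual patch-and-rebuild routine.  The sketch below assumes a 2.6.22 source tree, and the patch filename and the exact name of the config option are illustrative rather than taken from the project:

cd /usr/src/linux-2.6.22
patch -p1 < ~/e2compr-2.6.22.patch   # apply the compression patch to the kernel source
make menuconfig                      # enable the ext2 compression option the patch adds
make && make modules_install && make install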

Why Wouldn’t I Use This?

I really didn’t want to start this post out with the negatives, as I love the concept.  But there are a few issues I see with it that I haven’t brought up yet.

The first issue is disk performance.  SSDs already have a lower life span than their hard drive step-brothers, which makes me wonder why people think this technology would be great for SSDs to begin with.  I don’t see much of an issue when it comes to reading files; there’s no difference between a compressed and an uncompressed file in that regard, and stat() will still show the same information.  However, the write side of things is what scares me the most.  If you think there isn’t much to it, consider this: even with laptops able to have 2 or more GB of memory, how does the file get uncompressed?  If it’s uncompressed on the drive, then you’ll be using up to double the space (even if only temporarily, you’d still have to make sure you have that space).  If you decide to uncompress the files to a tmpfs (RAM drive), you still have to make sure you have the room in RAM (which is even trickier, as RAM usage fluctuates a lot more).  Of course, there’s the possibility of swap helping out here, but it seems like a lost cause given the bigger chance of data loss or corruption, especially if you end up running a program that gets caught in a buffer overflow/overrun.

Verdict

Would I use this in my everyday life?  No, not even on my netbook.  I don’t feel that the possible threats justify the possible gain in disk space.  Also, while I love using Linux, I don’t like tinkering with the kernel, especially with patches that are more than six months old (let alone two to three years).

There are file systems out there (such as NTFS) that do offer this, but I feel there is a reason it isn’t enabled by default.  If you have an SSD set up, you might enjoy the added space, as SSDs are limited in what they can hold; but with hard drives being a good 500 GB to 1 TB or more, the sacrifice is too great.

March 3, 2011  2:18 PM

Nagios Checker [Bash Script]

Eric Hansen

While working on another post I’ll be publishing here later, I decided to venture back into bash scripting.  While I’m sure there are plenty of Nagios scripts out there already (I haven’t really looked myself, but Nagios does have a big enough community), I decided to write my own basic one.  The only requirement (besides having Nagios installed) is having sSMTP installed to send the mail.  If you use a different mail program, though, you should be able to simply modify the line that sends the mail.  I’m going to post the code first, then go into detail about the important parts.

#!/bin/bash

# Grab the "is running" line from the Nagios init script's status output
NAGIOS=$(/etc/rc.d/nagios status | grep "is running")

# If that text wasn't found, Nagios is down; build and send the alert e-mail
if [ -z "$NAGIOS" ]; then
    echo "To: admin@domain.com" > /tmp/nag_check
    echo "From: nagiosadmin@domain.com" >> /tmp/nag_check
    echo "Subject: Nagios system down on `hostname`" >> /tmp/nag_check
    echo "" >> /tmp/nag_check
    echo -e "`date \"+%D %r\"`: Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible." >> /tmp/nag_check
    /usr/sbin/ssmtp -t < /tmp/nag_check
fi

The first thing to note is the NAGIOS line:

NAGIOS=$(/etc/rc.d/nagios status | grep "is running")

The path (/etc/rc.d) applies to Arch Linux, which my server runs.  You will have to modify the path to fit your server if there isn’t an /etc/rc.d/nagios file.  That file is just a basic init script, which also exists in /etc/nagios (/etc/nagios/daemon-init, to be exact).  Again, the path may vary depending on the flavor used, but that one should be more standardized.  All the line is doing is asking for the status of Nagios (is it running or not?).  If it’s running, the output will be something like “Nagios (pid: ###) is running…”, and if it’s not, it’ll say it can’t find the lock file.  Next, the if statement checks whether the “is running” text was not found (the -z test, with $NAGIOS quoted so the test behaves sanely whether the variable is empty or contains spaces); if the text is missing, Nagios is down and the e-mail goes out.  Besides that, there’s nothing else to cover about the code, as it’s pretty self-explanatory.  The -t switch for ssmtp just tells it to scan the /tmp/nag_check file for the To/From/BCC/Subject lines instead of having them passed via the CLI (which is why we add them into the file).

There are two other ways to make this easier, though.  On my system, the lock file is located at /var/nagios/nagios.lock (which, again, varies with the flavor of your system; it’s best to do a find /etc -iname nagios.cfg and scan that file for the lock file variable to find the correct path for you).  Instead of having the NAGIOS line, you could take it out completely and swap the if test for a file check: if [ ! -e /var/nagios/nagios.lock ]; then.  This checks whether the file doesn’t exist (-e tests if the file exists, and the ! negates it: “if the file is not present”).  It’s a little bit faster (not noticeably, however), though I’m not sure what other performance differences there are.  A quick sketch of that variant follows.
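Here’s that lock-file approach in sketch form (the lock file path is the one from my system; the grep just pulls the real path out of whichever nagios.cfg the find turns up):

grep -i lock_file $(find /etc -iname nagios.cfg)   # shows where your lock file actually lives

if [ ! -e /var/nagios/nagios.lock ]; then
    # ...same e-mail block as in the script above...
fi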

Lastly, and this is one I just thought of while writing this post, is probably the most efficient option in terms of processing: either strip out all of the NAGIOS checking so the script only builds and sends the e-mail, or move the e-mail code into /etc/nagios/daemon-init itself, where it detects that Nagios isn’t running, and let the init script handle it by itself.  The only difference in the cron job would be that instead of calling the script, you’d call the status check.
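For reference, the cron entries for the two approaches might look something like this (the five-minute interval and the script path are just assumptions on my part):

*/5 * * * * /usr/local/bin/nagios_check.sh
# or, if the e-mail code was moved into the init script itself:
*/5 * * * * /etc/rc.d/nagios status > /dev/null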

When sent out, the e-mail will look like this:

Subject: Nagios system down on *hostname*

Body: 03/03/11 01:49:16 PM: Nagios was found to not be running.  If this was found in error, please correct this issue as soon as possible.


February 25, 2011  1:04 PM

Software VPN vs. SSH: Which is better?

Eric Hansen

In the IT world lately there’s been a lot of buzz about VPNs and how to use them effectively for remote administration.  In all of this, it seems a lot of people are forgetting the roots of remote administration (at least within the last few years), back when VPNs were just starting to get recognized.  Both have their pros and cons, but which one is better for administration?  Let’s go a little in-depth with both and see.

Security

Hands down, in the general scheme of things, I would say VPN wins this area.  While SSH does in fact hold a high security standard, the way VPN handles encryption is more thorough.  This is largely because of VPN’s encapsulation method (covered again later), which adds a security measure that SSH doesn’t have.

For those who aren’t familiar with encapsulation, especially with VPNs, here’s a general overview.  When you connect through a VPN tunnel, the TCP/IP packets carry more overhead.  This is due in part to the VPN service placing an encryption header on top of the packet itself.  While this does slow down traffic (which will also be covered later), companies have built on this technology and added measures to make sure it cannot be tampered with.  This is generally done with a token (such as a 6-digit RSA SecurID) which authenticates the user, instead of (just) a username and password.

In the realm of SSH, the protocol itself is pretty strong when it comes to security.  The weak spot, though, is when a key isn’t used.  In previous articles I gave a step-by-step guide on how to set up key-based authentication, and there’s a major reason for that.  Although SSH encrypts the connection either way, logging in with just a password leaves you open to brute-force guessing and credential theft, and, if you blindly accept an unverified host key, to a man-in-the-middle grabbing your login.  If you’re only connecting to a home computer that’s sitting right next to you, it’s almost never an issue.  But say you’re at work and you want to check the I/O stats on your server; relying on a password alone would not be the logical choice.  VPN sidesteps a lot of this, as the encapsulation header doesn’t contain anything damaging to the user; all the data is inside the packet underneath the encapsulation header.

Speed

This is the one that gets to me the most.  Anyone who has used both protocols will, I’m sure, agree that SSH is the faster of the two.  Fortunately, the reason is simple.

In the last section, I talked about how VPN has that encapsulation header to provide an additional layer of security.  While it adds that extra assurance, it comes at a price.  With a VPN, what happens (not necessarily exactly like this, but the basic idea is there) is that the packet payload gets segmented, since a typical TCP/IP packet can only carry around 1,500 bytes.  This means less payload reaches the destination per packet, requiring more packets to be sent and increasing bandwidth usage.  Once the encapsulation header has been added and the packet reaches the destination, the destination then has to strip the header off to get at the payload.  So a website that generally takes about 5 seconds to load can take 10-15 seconds (if you’re lucky).

SSH, however, plays more nicely with the data.  With or without a key, packets don’t take as long to get from A to B.  There are no extra headers to deal with, and while there is a slight performance hit, even when using SSH as a proxy that same page might take only a couple of seconds longer to load.

Ease of Use

For the administrator setting up the service, this sort of depends.  Since most Linux flavors have at least one VPN solution in their repos, and I have yet to run into one that doesn’t ship an SSH server/client, it’s pretty easy for the administrator to run a single command and install it all.  This category, though, is really about the users the administrator will have to support with said software.

When I worked on the Ford help desk, I got a lot of calls every day asking for help getting the VPN to work.  While the calls were easy, there was never an easy “click here to solve it all” button we could push.  Granted, most of the time it was a transport/tunneling issue, or sometimes a port issue, but there were a lot more calls than we should have received over it.  Then there was also the problem of the previously mentioned SecurIDs desynching.  Again, a very easy fix, but it seemed like if it wasn’t one thing it was another.  I can’t completely fault the users, though, as the e-mail documentation (read: a couple of paragraphs) didn’t go into any detail whatsoever to assist them.

SSH, while I never dealt with it in quite the same support role as VPN, is a lot easier to use, and using a key makes the task even easier.  Unfortunately there’s not a lot to say here, as it also depends on how you use SSH (i.e.: PuTTY, the OpenSSH client, another SSH client, etc.), but it all pretty much amounts to the same result.  It’s all relatively easy to set up, and the administrator can simply pass out a config file (like a script file for Linux) to make the task even easier if a generic key is used.
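As a rough sketch of the key-based workflow on the OpenSSH side (the username and hostname are placeholders):

ssh-keygen -t rsa -b 4096              # generate the key pair; add a passphrase if you like
ssh-copy-id user@server.example.com    # copy the public key to the server
ssh user@server.example.com            # subsequent logins now use the key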

Configuration

Every server is different, which makes this category a bit hard to judge.  In general, though, SSH is easier to manage, especially when you have a big network.  The reason I say this is that OpenSSH, at least, generally has only one or two configuration files you have to modify.  Probably the hardest task is setting it up to use keys instead of password authentication.  VPN does make its side easier, though, by offering an online configuration tool (at least OpenVPN does), which in the end leaves you with only the network side to worry about (i.e.: making sure VPN access from outside is possible).
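For reference, on the OpenSSH side the keys-versus-passwords piece really does boil down to a couple of sshd_config directives; this is just a minimal example (restart sshd after editing):

# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no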

Conclusion

When it comes down to it, I personally prefer SSH.  VPN would be great if I were running a medium or large network, and/or were super paranoid.  However, SSH with key authentication satisfies my needs just fine.  Basically, if you’re using a VPN, know that it might (probably will) need extra configuration compared to its sibling, but it adds a hefty amount of extra security SSH doesn’t (at a price).  If you prefer, you could also set up both, and use the VPN when it’s not time-critical, while SSH is saved for those moments when you’re against the clock.


February 21, 2011  4:17 PM

Drive Encryption (feat. TrueCrypt) Part 2

Eric Hansen

Installation

Unfortunately, TrueCrypt’s official website does not offer a simple click-this-link-to-download; instead, you have to choose between a GUI (x86 or x64) or CLI (x86 or x64) version. As my server is pretty old (read: very, very old), I had to use the x86 version, so I can’t currently verify what differences there are between the two (if any). What I did was download the CLI/console x86 version, and then use WinSCP to copy the tar.gz file over to my server.

Note that most Linux flavors tend to have TrueCrypt in their repos, so if you don’t want to follow the manual installation procedure, you can easily do it that way. Personally, though, I prefer the manual route, so that’s what I’ll cover here.

Once you download, transfer, and untar the TrueCrypt .tar.gz file, you’ll be left with one file:

truecrypt-7.0a-setup-console-x86

All you have to do is run this file, and it’ll give you two options:

TrueCrypt 7.0a Setup
____________________

Installation options:

1) Install truecrypt_7.0a_console_i386.tar.gz
2) Extract package file truecrypt_7.0a_console_i386.tar.gz and place it to /tmp

To select, enter 1 or 2:

I chose option 1, since 2 is more of a “see what’s involved in this” option. After choosing 1, you’ll be prompted to acknowledge the EULA, where the Page Down key comes in handy quite a bit, as there’s a lot to read. Once you reach the end, you can type either “y” or “yes” (without the quotes) to acknowledge the EULA. After that, you’ll see the following screen:

Uninstalling TrueCrypt:
-----------------------

To uninstall TrueCrypt, please run 'truecrypt-uninstall.sh'.

Installing package…
usr/bin/truecrypt
usr/bin/truecrypt-uninstall.sh
usr/share/truecrypt/doc/License.txt
usr/share/truecrypt/doc/TrueCrypt User Guide.pdf

Press Enter to exit…

That’s it for the installation; it’s a pretty easy install, I’d say.

Usage

I’m not going to cover hidden volumes in this post, so I’ll just show what the help option says about the hidden volume type:

Inexperienced users should use the graphical user interface to create a hidden
volume. When using the text user interface, the following procedure must be
followed to create a hidden volume:
1) Create an outer volume with no filesystem.
2) Create a hidden volume within the outer volume.
3) Mount the outer volume using hidden volume protection.
4) Create a filesystem on the virtual device of the outer volume.
5) Mount the new filesystem and fill it with data.
6) Dismount the outer volume.
If at any step the hidden volume protection is triggered, start again from 1).

In part one I covered what a hidden and normal volume is, as well as some other fundamental concepts for using this software. So, right now I’ll go through creating a TrueCrypt volume.

Before creating a volume, though, note that there is a switch (--random-source) you can use instead of typing in 320 random characters. While the help text says “Use file as source of random data”, you can also point it at a device (such as /dev/urandom). You’ll have to wait a little bit for it to finish, though, as it won’t output anything to let you know it’s gathering data.
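As a sketch, assuming the console build accepts it alongside the creation switches shown next, that would look like:

truecrypt -t -c --random-source=/dev/urandom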

truecrypt -t -c

Very simple, yet very powerful. You’ll be prompted with the following:

Volume type:
1) Normal
2) Hidden

In my own personal experience, I’ve never found a need for a hidden volume, so I always go with normal (1). Once you do that, it’ll ask you for the volume path. It should be noted here that this is the full absolute path (/home/tcvolumes/…, not ~/…), including the filename of the volume itself. When I was first working with TrueCrypt, this threw me off a little bit. Then the real fun begins.

Next it’ll ask you for the size of the volume, with megabytes (M) being the default unit. If you want to specify kilobytes or gigabytes, type the size followed by K or G (i.e.: for 500 kilobytes, type 500K). Please note, even though it probably shouldn’t have to be said: make sure the partition or disk you use has the necessary space for the volume you are creating.

The first interesting bit during volume setup is the encryption choice. Here’s what you’ll see next:

Encryption algorithm:
1) AES
2) Serpent
3) Twofish
4) AES-Twofish
5) AES-Twofish-Serpent
6) Serpent-AES
7) Serpent-Twofish-AES
8) Twofish-Serpent

While the default is #1, I prefer using #5 or #7; outside of those, I recommend any of the AES options. AES is a pretty robust cipher, is among the hardest encryption algorithms to crack, and overall provides a better security foundation for your volume.

After that, you’ll choose the hash algorithm to use; the options offered seem to depend on the encryption algorithm you just chose. I generally go with Whirlpool. For the file system, the default is FAT, but I like to stay consistent, so I stick with whatever format I have set up for the main partition (which is ext3 for my server). Whatever you pick, make sure your kernel supports the file system format you want to use.

It’ll ask you for a password after you choose the file system. This is used when mounting the volume, to make sure wandering eyes aren’t peeking at your files. While you don’t have to provide a password, it is very highly recommended that you do, unless you’re positive you’re the only one who will be able to access/mount the volume. If you enter a password shorter than 20 characters, it’ll warn you that a short password is generally easier to crack and ask if you still wish to use it. Alternatively, you can provide a keyfile (multiple keyfiles are possible) to use instead of a password. Please note, though, that the keyfile has to exist beforehand, or else you’ll receive an error saying the keyfile does not exist.

Then the real fun begins, as you finally get to type in random characters to generate the encryption key. It wants at least 320 characters, and if you press Enter at any time, it’ll show you how many characters are left. After that, the volume will be created and you’ll just have to mount it.

Mounting


This is pretty simple. The command is as follows:

truecrypt {volume} {mount point}

Just as with the mount command, the mount point has to exist before you mount the volume, or else you’ll get an error. If you decided to use a password for your volume, it’ll prompt you for that password before mounting. You’ll also be prompted for a keyfile (even if you didn’t provide one during creation), so if you didn’t set one up, you can just hit Enter. After that, it’ll ask you if you want to protect a hidden volume if there is one, which defaults to no. Then you’re done mounting the volume. At this point, you can do pretty much whatever you want with it, as it’ll act just like any other mounted drive.

If you want to speed up the mounting process a bit, you can use this:

truecrypt -t -k "" --protect-hidden=no {volume} {mount point}

This basically speeds up the mounting process, only asking for a password. The -t switch says to use a text-based user interface, -k specifies keyfiles ("" means no keyfiles), and --protect-hidden does the same as the prompt during the regular mount process. Another switch you can use, though it pretty much defeats the purpose of using passwords, is -p, which lets you specify the password for the volume. If you decide to use that switch, the command (if you want to skip all user intervention) would look like this:

truecrypt -t -k "" -p "password here" --protect-hidden=no {volume} {mount point}

If you use the -m switch, you can set mount options (such as -m ro for read-only mount), similar (again) to the mount command. Another option that might come in handy is changing the password, which can be done using the -C switch. To do this, you would simply use the following command:

truecrypt -C {volume}

It’ll prompt you for the volume’s current password (for which you can use the -p switch again if you like), then the new password and another 320-character string of random text.

Dismounting

I’ll cover this briefly, as there are really only two commands for dismounting a volume.  First, if you want to dismount a specific volume, type the following:

truecrypt -d {volume}

So, if your volume file was located at /root/tcv, you’d type truecrypt -d /root/tcv.

Lastly, if you would like to dismount all of your volumes, you would just use the following:

truecrypt -d

Finish

I’ll be covering more of TrueCrypt in future articles, as well as other programs that add to security and safety.  Keep checking back, as I’ll also be covering more about the IT world in general.


February 16, 2011  1:47 PM

Drive Encryption (feat. TrueCrypt) Part 1

Eric Hansen

Preface

“Who’s going to look in this file named ‘top secret details.xls’?” Does this type of question sound familiar? In the IT world, questions like this should be like a second language to us. However, once that user gets the file stolen by some means, you’re the first one they come to in order to save them. What if all of this could have been avoided to begin with?

This brings in my personal favorite, TrueCrypt. I’m sure most people have already heard of this software, as it is quite well known. This article is going to cover the general aspects; later ones will go more in-depth with this and other software.

Good vs. Bad

Before going into installing TrueCrypt, I want to cover some of the benefits of using it. What TrueCrypt does is basically create a virtual drive (or virtual volume) of sorts that it mounts and unmounts, where you can put any type of document. This may not sound all that amazing, especially in the Linux world where you could just repartition a small hard drive for the same purpose. The added benefit of TrueCrypt, though, is that it encrypts the contents inside its virtual drive; which encryption algorithm, strength, etc. gets used is decided during the setup process of the virtual drive. Another benefit is that it works on multiple platforms (Linux, Windows and Mac OS X so far), so it’s quite easy to carry the virtual drive on a thumb or network drive and not have to worry about whether it’s going to work. There’s also the fact that it can encrypt an entire drive, requiring a password on boot for the hard drive to be decrypted and bootable. While I have not done any personal studies on how effective this is against forensics, it’s still a nice security feature to combat those who try to access your data without you knowing. This doesn’t just apply to hard drives, but also to flash-based drives like thumb drives. Lastly, I would like to point out that its random seeding and encryption algorithms are of top quality. The seed pool in the GUI is built by having the user move the mouse around a box to collect random data. As for the algorithms used, they’re mostly AES- and Twofish-based, which is really quite nice, especially since AES is a well-vetted block cipher and one of the more difficult schemes to break.

With the good, though, comes the bad. While it may not come as a surprise, TrueCrypt does have to be installed on the host computer before a TrueCrypt-ed drive can be mounted. A remedy for this does exist in the form of a portable TrueCrypt, but last time I checked it was only for Windows, and it essentially required you to partition your thumb drive to use it anyway. You also need administrator rights to mount the drive (this goes for Windows and Linux, and most likely Mac as well). If it’s being used on your own PC/laptop, this really doesn’t pose much of a problem; but if you are trying to use it at work, on a friend’s computer, etc., it might raise a security concern.

One more point I want to cover before getting into the installation is that you can create two different types of volumes. One is a regular volume, which is just like a partition, nothing special. The other type is a hidden volume, which works in kind of a hierarchical fashion: it’s embedded inside a regular volume, but stays hidden until it’s mounted, as TrueCrypt tucks it into the free space of the volume it lives in. The two volumes should also use separate encryption passphrases, to further the usefulness and secrecy the hidden volume is meant to provide.

Here is where this entry will leave off. The next one will cover installation, usage and any other pros and cons I forgot to mention here. Look for it very soon.


January 19, 2011  9:40 PM

Dual-booting Linux and Windows 7: The 0xc0000225 Error

Eric Hansen

Disclaimer: This entry covers both Windows and Linux, but I’m sure other people will run into this same issue, so I will detail my steps.

Recently, I decided to reinstall Linux on my desktop (I gave up after KDE 4.0 was released, and I loathe GNOME) after seeing how much KDE has improved, and went through the process. Mind you, I did all of this via a USB drive, as I have no working optical drive. I dual-booted Kubuntu 10.10 with Windows 7 Premium (x64 for both), and everything went fine. Or so I thought.

I could boot into Kubuntu just fine after the install. I installed programs, programmed a bit, and ventured around a little to get the feel of Ubuntu/Debian-based systems again (I’m too used to Arch Linux). Long story short, after re-enjoying KDE/Linux, I rebooted to start Windows back up to make sure I hadn’t screwed anything up, and… this is where the story starts.

I got the following error whenever I booted Windows 7:

0xc0000225 Boot selection failed because a required device is inaccessible.

Here, I thought it would be a simple fix. I rebooted into Linux and searched high and low along the Interwebs’ wall of information. I tried a bunch of fdisk tricks (most of which wouldn’t work, as I was currently on the mounted drive and couldn’t unmount it), and cfdisk would fail, saying that the partition table was invalid. Both fdisk and cfdisk gave a partition table error (I don’t remember the exact messages off the top of my head, but basically invalid cylinder ranges).

Everywhere I read on the Internet said this was nothing to be worried about; well, they apparently didn’t want to boot back into Windows 7. I Googled the fdisk error (as I know cfdisk is very finicky about everything), and everyone just took the short/easy route: reinstall both OSes. That wasn’t very viable to me, just because I didn’t feel like making a bootable Windows 7 USB for the nth time. So I took a different route: I Googled the Windows error I received. That is what solved my problem.

Step #1: Download the Windows 7 Rescue Disk

Microsoft was smart with Windows 7 and offered a rescue disk for those who dared to do silly things. Download the version of the disk that corresponds with your version of Windows 7 (i.e.: x64 for x64 installs).

Step #2: Format the USB as NTFS (if it’s not already; if it is, then delete whatever is on it)

I’m not going to go into how to delete the contents of a USB drive, but to format one as NTFS, use mkntfs. The command I used (my USB was /dev/sdc) is:

mkntfs /dev/sdc1

You have to specify the partition number to format. You could optionally use the -Q (quick format) switch, but I had things to occupy myself with anyway, so I let it do its own thing; it took about 45 minutes. It’ll zero out the partition (which takes the longest) and then do the formatting.
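For reference, the quick-format variant of the same command is simply:

mkntfs -Q /dev/sdc1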

Step #3: Mount the USB

I let KDE do it by itself by unplugging and replugging the USB device. But if you don’t want to do that, you can easily use the mount command:

mount /dev/sdc1 /mnt/usb

Again, /dev/sdc1 was my device; it might be different for you. The second option is the path where the device gets mounted; just make sure it exists and is empty.

Step #4: Copy over the ISO content

This one is probably the easiest step, to me. I’m not gonna lie, I kind of cheated here and used UNetbootin, the reason being that it combines about 2-3 steps into one. You can get this software at http://unetbootin.sourceforge.net/ and it runs on Linux and Windows. For those on Ubuntu-based (and probably Debian) systems, you can run apt-get install unetbootin and it’ll install for you.

All you have to do is choose to use a disk image (ISO) file (the second radio button). Choose the Windows 7 rescue disk ISO you downloaded, check “Show All Drives”, and choose USB as the type. The reason you mounted the drive earlier is that at this stage, if you don’t, it won’t let you use the drive until you mount it. So, choose your drive partition (i.e.: /dev/sdc1) and continue on. It’ll copy over the files and install a boot loader.
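If you’d rather do the copy by hand, a rough equivalent is below (the ISO filename and mount point are placeholders, and note that a plain copy does not install a boot loader, which is the part UNetbootin handles for you):

mkdir -p /mnt/iso
mount -o loop windows7-rescue-x64.iso /mnt/iso   # loop-mount the downloaded ISO
cp -a /mnt/iso/. /mnt/usb/                       # copy its contents onto the mounted USB
umount /mnt/iso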

Step #5: Reboot and Run System Repair

Reboot your computer and choose to boot from the USB (this varies from motherboard to motherboard; most of the time you’ll choose HDD or USB-HDD). Let the rescue disk find your Windows partition, then choose “Startup Repair”. Some people say you should do this 3 times, but I only had to do it once. Once it fixes the partition table that got fubar’ed, click “Finish” to reboot.

At this point, Windows 7 should be booting just fine now.

I’m not sure if this is only caused by Ubuntu or not, but most of the time it’s really an issue with resizing a Windows partition (which is what I did to cause the partition table boo-boo).


January 11, 2011  2:02 PM

Anti-Virus on Cloud Nine?

Eric Hansen

On January 5th, 2011, Sourcefire, the creator of SNORT, put out a press release about a “partnership to deliver a free, Windows-based version of the ClamAV(R) antivirus solution.” In a world where buyouts and “partnerships” happen left and right, why is this so important, you ask? Further into the press release, it states that the product is going to use a cloud-based anti-virus scanning setup to improve detection accuracy.

While I’m not the most avid supporter of cloud computing, this seems like a more intelligent way to use its capabilities. Case in point: Windows with its roaming profiles. That really is just cloud computing, and while it is convenient, it’s also a very slow process when logging into another computer. Here, from what the press release says, the idea is basically to use a cloud platform to store various scan reports (what files are infected, what type of infection, etc.). To my eyes, this looks more like an offline version of the various online multi-engine virus scanner websites.

Right now, one of the main problems I see with the anti-virus community is not that there are so many false positives and such, but that there’s no common protocol. Nothing’s perfect, and all these products do their job differently, but it’s truly nothing but a big whirlwind. Perhaps this is what’s needed for anti-virus vendors to finally see that working together, instead of against each other, is about the only way to truly be successful. Sharing virus definitions can do nothing but help the public and the vendors themselves, and the greediness only hurts everyone who has a computer connected to the Internet.

I honestly hope this idea picks up. My ideal vision would be one anti-virus program that uses this cloud. No more Norton, McAfee, Kaspersky, etc.; instead, they all form a sort of committee of their own (similar to the IEEE). They develop a single anti-virus engine that does what it’s supposed to and simply works. Let the vendors develop their own GUIs for the definitions and scanner if they want, but make the definitions themselves open to the public, and let the whole programming community write their own scanners as well.

This is 2011, not 2000; things have to change in the I.T. world for things to keep moving forward instead of backward. We can no longer look out only for ourselves; we have to start really looking out for others as well. At the same time, without any competition, the effort is doomed to fail.

If you want proof of this, look at Microsoft. For many years, they were on top of the I.T. world, and Windows was the most-used operating system available. Then 1991 came around and Linus Torvalds decided to release a hobby project, Linux. Was it a threat back then? Definitely not. Over the years, though, the community came together and made it a very viable alternative to the once-king operating system. Since then, Microsoft has really stepped up its products and released a very solid system in Windows 7.

Only time will tell how well the cloud-AV solution will work. While ClamAV was never the greatest anti-virus scanner in my experience, it was still a good solution when paying for a product wasn’t possible. Given that ClamAV is also heavily used on servers, I’m hoping this will lead to a whole different ballpark of safety, security and care for these vendors’ customers.


January 1, 2011  6:07 PM

Deployment Pre-Cautions

Eric Hansen

First off, I would like to wish a happy New Year to everyone who reads my blog. I’m hoping to bring more new content on a more frequent basis.

During deployment, things can get hectic, especially if you’re deploying something that’s (still relatively) new, such as offering a new operating system for website hosting. During the rush, it’s pretty common (if not seemingly a requirement in itself) to forget the essentials that make the deployment go as smoothly as possible. These points won’t fit every role, nor is this an extensive list, but they cover things you might not think about or might overlook. And if it stops even one person in a hundred from making a mistake, then this post did its job.

1. Pre-plan the Development Cycle
There’s no doubt that things are going to go wrong at least once during deployment. That brings me to point #2, but first, before you even touch anything else, it’s best to plan out how, when and where things are going to happen.

If you plan on building a server in room A and mounting it in room B, reserve space for the server. Or, if you’re in the early stages of website development, know what the customer wants before writing a single line of code, at least enough to work from, and tweak what you need to as you continue on.

Pre-planning is something quite a few people who are just starting out skip over, which leads me to my second point…

2. Before Testing, Know What You Can and Cannot Do
After you figure out what needs to be done, see what you can do on your own and what you will need help with. Plain and simple, even though it’s another pre-planning step, this is more of a self-evaluation and a C.Y.O.A. moment.

If you’re not well accustomed to hardware, see if a friend can teach you what you need to know, or even watch videos about it. Unsure how to write your own class in PHP? Search Google, or (better yet) PHP’s official website (www.php.net). This will be a lifesaver if, halfway through a project, the customer wants you to change something.

3. Evaluate Your Resources
See what you have and what you need. Do you need more memory for that 32 GB server, or an IDE for that new programming language you have to learn in the next 24 hours? It’s best to make sure you have more than enough of what you need, especially when things get cut down to the last minute.

You should also, during this time, look at how much time you can allocate to the project. If you know that during the first week you’re going to have a lot of family time to devote yourself to, then it’s probably best to tell the client it’ll have to wait a week. Distractions come very easily, whether you have a job to go to or you work from home. It’s best to know when things can get done and how long you can work on them.

4. Re-Pre-Plan Everything
This might sound a little counter-productive, but the logic behind it is that you’ve re-evaluated how everything should work in step #3, so it’s time to see if you’re forgetting anything. Does the customer want more flair added to that next-gen website you’re making for him, or does the server now need 64 GB of RAM? As long as everything still matches up with step #1, you’re good to go on to the next step.

5. Take Your Time
This is probably where most mistakes are made, next to pre-planning: not taking things slowly. Especially with the way our lives work now, we’re always rushed to get things done as soon as possible. Maybe you do this in hopes that your customer will be impressed and request more work? Perhaps you forgot about that honeymoon and are now trying to handle two things at once?

One word: STOP. Why, you might ask? You tend to make more mistakes speeding through something than not. I’m sure the customer will appreciate that you want to give them a bug-free product, rather than a product that only works when it wants to. During this process, it’s also very important to test whenever you implement something new. Throwing everything together at once, only to realize that about 5% of what you made actually works, just discourages the customer more.

As I said, these may not apply to every project, or to everyone, but this comes from a “learn it from someone who knows” perspective. I’ve made these mistakes, and they caused quite a ruckus when trouble happened. So, the next time you want to install that plug-in for WordPress, make sure you also have a backup handy, and know that it will work with the rest of your setup.


December 24, 2010  1:43 PM

The do and do-nots of making a SysAdmin’s life easier

Eric Hansen

ACM Queue recently posted an article (“What can software vendors do to make the lives of sysadmins a little easier?”, Dec. 22, 2010) that lists 10 dos and don’ts for making a system administrator’s job easier. The article itself has some interesting points, but I’m going to give my own perspective on the points it makes.

DO have a “silent install” option
- I agree with this 110%. Having to go to one computer to click a button or choose some options is fine; having to go to multiple computers to do the same task is annoying (as well as counter-productive). I’m sure this is at least a big reason why most (if not all) package managers like yum, apt and pacman have an auto-yes/no-prompt option in them. The only downside I can foresee is if you need to customize the install for a selection of computers. But, really, even if that’s the case, it should be rather easy to put those computers in a different queue.
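For instance, the non-interactive switches on the package managers mentioned above look like this (the package name is a placeholder):

apt-get install -y some-package
yum install -y some-package
pacman -S --noconfirm some-package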

DON’T make the administrative interface a GUI
- Personally, I see this as a common-sense one. While it’s great for desktop users to have some sort of GUI for their programs, it seems rather illogical to run any sort of window manager on a server you’re going to be VNC’d into very, very rarely. Most sysadmins seem to just use tools such as PuTTY or another SSH client.

Another viewpoint on this, though: what about web interfaces? Granted, it’s not truly a GUI setup, but would those be acceptable? My own taste says yes (a good example is SafeSquid’s administration; it’s all web-based). So basically, ditch the GTK+ design work and try to build a web interface; your system resources will thank you.

DO create an API so that the system can be remotely administered
- This is kind of hit or miss for me. While I agree that being able to remotely administer a program is a grand idea, I don’t see how it’s much different from creating an SSH session and viewing logs. It’s a short comment on a pretty heavily covered topic. APIs are meant to expand the functionality of a program (i.e.: plugins), and while the article makes a good point that an API helps extend program functionality, I feel the author just didn’t truly flesh this point out.

DO have a configuration file that is an ASCII file, not a binary blob
- Another one I believe in 110%. The main reason for this, as the article states, is the ability to use diff to know what changes were made.

For an example of this, let’s compare the two most popular browsers not tied to an operating system, Mozilla Firefox and Google Chrome. The way each browser stores its data is different. Firefox does it similarly to Internet Explorer, in that it writes your URL history to disk. Chrome, on the other hand, uses a database to store its data (SQLite, I believe, to be exact). In short, Firefox stores its data in ASCII, and Chrome stores its data as a binary blob (trying to open the database in a text editor gives you a lot of unreadable text).
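As a tiny illustration of why this matters (the filenames are hypothetical), an ASCII config lets you do this before and after a change:

cp smb.conf smb.conf.bak       # snapshot before editing
diff -u smb.conf.bak smb.conf  # see exactly what changed afterward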

DO include a clearly defined method to restore all user data, a single user’s data, and individual items
- This was a big problem where I used to work. While we had a very well-developed backup system in place, it wasn’t very well managed (we lost a good 2-3 months’ worth of backups one time and no one caught it for a month). The backup system did cover both points, but you also have to set some protocols for monitoring your systems, especially your backup systems. Unless there’s a reason not to, I fail to see why a simple cron job to run a backup script and ship the result to a backup server wouldn’t do the task just fine. I guess I kind of veered off topic on this, but it’s still something to note here.
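As a sketch of that simple approach (the schedule, paths and hostname are all placeholders):

# /etc/cron.d/nightly-backup: run the backup script at 2 AM as root
0 2 * * * root /usr/local/bin/backup.sh

# backup.sh, in essence:
tar czf /tmp/home-backup.tar.gz /home
rsync -a /tmp/home-backup.tar.gz backup@backupserver:/backups/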

DO instrument the system so that we can monitor more than just, “Is it up or down?”
- Another situation I’ve experienced. Having a system available to actively monitor services is very important and well worth the investment. My personal preference is the Nagios/Opsview option (Opsview appears to be a fork of Nagios). It monitors all the system resources you ask it to, and works for more than just *nix-based systems. If you have more than one computer and/or server, this is definitely something to set up, and you’ll be thankful the next time a server shuts down without you knowing.

DO tell us about security issues
- This is more of a double-edged sword than anything. Yes, you help out the community and the people using your services, but you also let the enemies know of a weakness. While I agree with the article about letting people know publicly, it’s always a tough call for any vendor to decide how much is too much. However, it’s never a good idea to wait for a fix before releasing a notice of a flaw, because by then the enemy has almost always already won.

DO use the built-in system logging mechanism
- Yes, using syslog or the Event Viewer (depending on the server’s OS) is great, and it makes everything run a lot smoother. There’s really no excuse not to use a system’s already-available functionality for this, especially since it will always be better integrated with the system at hand.
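On the *nix side, getting a message into syslog from a script or daemon wrapper is a one-liner (the tag and message here are made up):

logger -t backup-job -p daemon.err "nightly backup failed on $(hostname)"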

DON’T scribble all over the disk
- This seems to be more of an issue with Windows servers than any *nix one, but it’s still a good point. *nix servers tend to have a pre-defined directory structure that Windows neglects in favor of being more user friendly, which also makes it harder to track down hidden settings when something goes wrong.

DO publish documentation electronically on your Web site
- While this is whole-heartedly true for a Windows machine, for any system using manpages it shouldn’t be necessary. The only advantage over manpages is if the server is in an unbootable state of some sort; but at that point, the docs are still going to be of little use unless you happen to have a LiveCD around somewhere.

All in all, the article was a good read, but in my opinion it made some mistakes in its points. Here’s a question to the readers of this blog, though: what do you feel is a do and/or a don’t that makes a system administrator’s job easier?


December 13, 2010  2:11 PM

Is Ad Revenue Worth User Insecurity?

Eric Hansen

In a vague sense, this does involve I.T. security. Over the past week, there have been a lot of drive-by downloads carried out via a fake advertising firm, spreading its payload through both Microsoft’s and DoubleClick’s ad services, and affecting a very high number of computers and devices. Without going into too much detail, the general rundown is that people registered the domain AdShufffle.com (intentionally 3 f’s; the actual domain is AdShuffle.com, with 2 f’s), and somehow tricked the ad networks DoubleClick (owned by Google) and Microsoft’s own ad service into serving ads from this domain. The domain would then use JavaScript and iframe code to download software onto a user’s computer and use various exploits to cause damage.

While I haven’t read anything about what the damage was, the more pressing issue is: is an advertising business model worth risking your clients’ (personal) security? I don’t agree with using NoScript and ad-block programs extensively (websites do need to make revenue, especially if they offer free services to their clients). But if these issues keep arising, then I see no alternative but to restrict ads.

The main issue I see with ads is that they try to grab the user’s attention instead of giving them a reason to visit the website. By this I mean there are a lot of ads that use sound and flashiness to attract a person’s eyes, but serve no purpose beyond that. One thing about AdSense (Google’s own ad service) is that it’s text-only, which gives the user a real reason to go to the ad’s site (if it’s informative enough).

Articles like the one this post is about tend to push people further away from supporting the “starving artists” of the I.T. world, and lead to people needing to find more strategic, and possibly more intrusive, ways to support themselves.

Source: Major Ad Networks Found Serving Malicious Ads

