Open Source Software and Linux


November 3, 2008  5:00 PM

Using hosts.allow and hosts.deny aka tcpwrappers effectively

John Little Profile: Xjlittle

The hosts.allow and hosts.deny files, located under the /etc directory, are collectively known as tcpwrappers. tcpwrappers, along with iptables, is an effective solution for protecting your network and individual servers.

The hosts.allow and hosts.deny files follow some fairly simple syntax rules. Both accept the keywords, or wildcards, ALL and EXCEPT. Each line in these files contains two or more colon-separated fields. The first field is a comma-delimited list of executable names. Note that these must be executable names, not service names, and they can contain the wildcards previously mentioned.

The second field contains a comma-separated list of client specifications: IP addresses, hostnames, trailing-dot networks, leading-dot domains and network/netmask pairs. This field can also use the wildcards ALL and EXCEPT.

Executables that work with tcpwrappers are linked against the shared object library libwrap.so. Determining whether an application is "eligible" to work with tcpwrappers means finding out whether it references the libwrap.so library. To do so, issue a command like the following:

[root@centos5-dev ~]# strings `which sshd` |grep libwrap.so
libwrap.so.0
[root@centos5-dev ~]#

Here we can see that sshd contains the libwrap.so library. This means that we can use tcpwrappers to control access to the secure shell daemon.
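If you prefer to inspect the dynamic link table directly, ldd gives a similar answer. The library path and load address below are illustrative and will differ on your system:

[root@centos5-dev ~]# ldd `which sshd` | grep libwrap
        libwrap.so.0 => /usr/lib/libwrap.so.0 (0x00b1e000)

If neither strings nor ldd turns up libwrap, the daemon was not built against tcpwrappers and entries in these files will not affect it.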

Now we need to make sure that we have the proper executable name. Generally speaking this will be the name of the executable, unless it is a daemon run out of xinetd. In that case you must look in the service file under the /etc/xinetd.d directory. For instance, I have telnet set up to run under xinetd, and the binary name listed in /etc/xinetd.d/telnet is in.telnetd. That is the name you would use in tcpwrappers.
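For reference, here is roughly what the relevant part of /etc/xinetd.d/telnet looks like, trimmed to the lines that matter (your file may differ slightly depending on the telnet-server package):

service telnet
{
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        disable         = no
}

The basename of the server line, in.telnetd, is the name tcpwrappers matches on.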

tcpwrappers checks hosts.allow first and then hosts.deny, and it implements a stop-on-first-match policy. Therefore if you have in.telnetd in hosts.allow, the machine will allow a telnet connection. If it doesn't find a reference in either of the tcpwrappers files, it will also allow the connection. This is known as "by fault of omission," meaning that the connection request meets no rule restrictions. Note that changes made in either of the tcpwrappers files take effect immediately on any new connections.

Once you have all of the allowed connections that you want in your hosts.allow file, you should then make the following entry in your hosts.deny:

ALL : ALL

By making this entry you ensure that you haven't missed anything and that only the services mentioned in the hosts.allow file are allowed.

Now that we have the basics down, let's take a look at making some entries in the hosts.allow file. Our first entry should be for our local machine; after all, we don't want to lock ourselves out of our own machine. The entry looks like this:

ALL : 127.0.0.1 [::1]

The [::1] entry is the IPv6 loopback address.

For our next entry let's use sshd. We've already checked above that sshd will use tcpwrappers, so now we must decide from which machines or networks we will allow ssh connections. If I am on the 192.168.0.0/24 network I may want all of the machines on the network to be able to connect to my machine over ssh. In that case I would make an entry like the following:

sshd : 192.168.0.

Notice the dot at the end of the IP address. It must be in place to ensure that all of the machines on that network can connect.

Using the same scenario, say I want all of the machines on the network to connect except for 192.168.0.10 and 192.168.0.44. I would then make an entry like the following:

sshd : 192.168.0. EXCEPT 192.168.0.10,192.168.0.44

I am now letting everyone on the network log in via ssh except for those two machines.
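Putting it all together, a minimal pair of files for this scenario might look like the following. The in.telnetd line is just an illustration of allowing a single host; adjust the daemon names and addresses to your own setup:

# /etc/hosts.allow
ALL : 127.0.0.1 [::1]
sshd : 192.168.0. EXCEPT 192.168.0.10, 192.168.0.44
in.telnetd : 192.168.0.5

# /etc/hosts.deny
ALL : ALL

The two excluded hosts match no allow rule, fall through to hosts.deny and are refused; everything matched in hosts.allow never reaches the deny file.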

This should help you get started locking down your machines so that only the services you want are allowed. Don't forget that you can use hostnames and leading-dot domains such as .example.com.

-j

November 2, 2008  3:34 AM

Logging for BIND in a chroot environment

John Little Profile: Xjlittle

Today while setting up a new BIND server I decided that it might be a good time to start using the bind-chroot package to run BIND in a chroot environment. This presented a set of challenges that I wasn't quite expecting. While they weren't difficult to sort out, I thought I might help you save some time.

The first question that popped up was where to put the configuration and log files. Since we are in a chrooted environment they go under the etc and var directories below the chroot; for CentOS this is /var/named/chroot/{etc,var}. Don't let this confuse you when you write paths in named.conf. In named.conf the paths start, as normal, in the etc and var directories. Just remember that they refer to those directories under /var/named/chroot, not to /etc and /var/named, which are the places you would expect to see them in a non-chroot environment.
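On CentOS the bind-chroot package creates this tree for you; a quick listing shows the layout (your package version may differ slightly):

[root@centos5-dev ~]# ls /var/named/chroot
dev  etc  var

With that in mind, here is how the directory option reads in named.conf: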

options {
        directory "/var/named/pz"; ## this path actually starts under /var/named/chroot/
};

After sorting this out and getting my server running, I noticed that I was not getting any logging from my BIND server. BIND, when installed in a typical environment, sends its log messages to /var/log/messages by default. Setting up logs in the chroot environment requires a logging stanza in named.conf. You will also need to specify what you want to log as well as the severity level. There is a default category that gives you all logging except for queries; query logging must be set up in a separate stanza.

Following is an example of my finished logging stanzas in named.conf:

# Logging Configuration
#
logging {
        #
        # Define channels for the two log files
        #
        channel query_log {
                severity info;
                print-time yes;
                file "/var/log/query.log" versions 3 size 100M;
        };
        category queries {
                query_log;
        };
        channel default_log {
                severity info;
                print-time yes;
                print-category yes;
                print-severity yes;
                file "/var/log/activity.log" versions 3 size 100M;
        };
        category default {
                default_log;
        };
};

Notice that you have a primary logging stanza. Inside it you define your channel stanzas, named whatever you like; this is where you set the severity to log, along with various other options. Below the channels you place the category stanzas, which define what it is you are logging; in mine I am logging the default category and queries. Also notice the "file" paths in the channel stanzas. Again, this var is the one under /var/named/chroot, not the usual /var/log. The logs are created whenever you start BIND.
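Two practical notes before you restart named. The named user must be able to write the log files, so create the log directory inside the chroot if your package didn't (the paths assume the CentOS layout mentioned above). You can also validate the whole configuration against the chroot with named-checkconf and its -t option:

[root@centos5-dev ~]# mkdir -p /var/named/chroot/var/log
[root@centos5-dev ~]# chown named:named /var/named/chroot/var/log
[root@centos5-dev ~]# named-checkconf -t /var/named/chroot /etc/named.conf

named-checkconf prints nothing if the configuration is clean.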

You can read more about the logging syntax and the available entries in the file /usr/share/doc/bind-9.3.4/arm/Bv9ARM.ch06.html. Scroll down until you find the section titled "logging Statement Grammar" and start reading from there. The quickest and easiest way to read this is with a text-only browser such as elinks.

Enjoy your new secure BIND server!

-j


October 31, 2008  2:15 AM

Filesystem Labels – They are important

John Little Profile: Xjlittle

Labeling your filesystems can save you a major headache. Using those labels in /etc/fstab is key to keeping your disks mounted where they belong.

Disks in Linux are assigned special device files. Any time you replace a drive, Linux is liable to change that file based on the order in which it detects the disks. Filesystem labels provide an alternative way for Linux to identify partitions and drives and mount them where they belong.

On Red Hat systems, partitions are automatically labeled if they are created during install. You can check this with the e2label command:

[root@centos5-lt ~]# e2label /dev/sda1
/boot
[root@centos5-lt ~]#

You can also see this in /etc/fstab:

[root@centos5-router ~]# cat /etc/fstab
LABEL=/ / ext3 defaults 1 1
LABEL=/var /var ext3 defaults 1 2
LABEL=/home /home ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
/dev/router/backups /backups ext3 user,defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-hda6 swap swap defaults 0 0
[root@centos5-router ~]#

In day-to-day administration, though, we often create partitions without labels. We then put the path to the device file in /etc/fstab and point it at the mount point. This can present a problem when you replace a disk in your machine.

To label an existing partition use the e2label command.

[root@centos5-router ~]# e2label /dev/router/backups backups
[root@centos5-router ~]# e2label /dev/router/backups
backups
[root@centos5-router ~]#

Now we can change our entry in fstab to use the label for our backups filesystem and check that it works with mount -a:

LABEL=backups /backups ext3 user,defaults 1 2
[root@centos5-router ~]# mount -a
[root@centos5-router ~]#

mount will return without any output if you have entered the label correctly.
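If you ever need to go the other way and resolve a label back to its device, the findfs utility (shipped with e2fsprogs) does the lookup. The device path below is only illustrative; yours will reflect your own disks:

[root@centos5-router ~]# findfs LABEL=backups
/dev/mapper/router-backups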

The easiest way to label a new partition is to do it when you create the filesystem:

mke2fs -j -L /newDisk /dev/sdaX

This will give us a journaled ext3 filesystem with a label of /newDisk. If we want to mount this filesystem on the directory /test, our entry in /etc/fstab would look like this:

LABEL=/newDisk /test ext3 user,defaults 1 2
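Putting the whole sequence together for a hypothetical new disk, here assuming the new partition is /dev/sdb1 and the mount point is /test, after adding the fstab line above:

[root@centos5-router ~]# mke2fs -j -L /newDisk /dev/sdb1
[root@centos5-router ~]# mkdir -p /test
[root@centos5-router ~]# mount -a

As before, a silent mount -a means the label and the fstab entry line up.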

Now you know why you want to label your partitions. Save yourself a major headache and get them labeled!

-j


October 25, 2008  12:56 AM

Red Hat and the IBM Open Collaboration Client Solution

John Little Profile: Xjlittle

Have you heard about the Red Hat and IBM Open Collaboration Client Solution (OCCS)? This is a collaboration solution put together by Red Hat and IBM, built around the Notes Domino collaboration suite running on Red Hat.

Normally I try to stay away from proprietary solutions of this magnitude because there are plenty of open source solutions that can accomplish the same thing. But IBM is a staunch supporter of open source; at Lotusphere they go so far as to push the Notes Domino platform running on Linux rather than Windows.

Using Red Hat to host the Notes Domino platform pairs the stability of Red Hat with a very strong collaboration suite that should meet any company's needs. The suite can provide email, team rooms, document storage and much more. Starting with version 8 it ships with Symphony, a free office suite built by IBM on the OpenOffice core code.

IBM's Notes Domino platform provides the ability to build custom databases to fit your users' needs. These can pull data from MySQL and Microsoft SQL Server, among others. You can then build that data into a Notes database or an HTML page, with the HTML rendered on Apache or the Notes Domino web server.

Along with the typical security provided by the Red Hat platform Notes Domino provides group and individual ACLs. The platform allows for customized security for individuals that are in the same group.

Red Hat and IBM have released a worksheet that will help you estimate your cost savings from migrating Windows to Red Hat. It even includes the cost of the Domino servers and the Notes clients. You can find this worksheet here.

The Notes Domino platform lets you migrate from Windows to Red Hat at your own pace, since it can run in a mixed Red Hat and Windows environment. This gives you a smooth migration without any loss of production.

If you're looking for a strong collaboration suite that can meet all of your collaboration needs at a cost-effective price, the Notes Domino platform along with Red Hat Linux is the one to consider.

-j


October 21, 2008  12:06 PM

Why I don’t like Gnome Evolution

John Little Profile: Xjlittle

I really wanted to title this "Why Gnome Evolution sucks" but thought that was a little harsh, since the application itself doesn't suck. Why don't I like Gnome Evolution? In a word: printing. In more words: when it comes to printing a decent looking calendar, Gnome Evolution simply does not do the job.

Yes, I am irritated with this. I spent a couple of hours last night trying to get it sorted out. That's 1 hour and 55 minutes longer than it should have taken me to print a calendar.

You see, here's the deal: Gnome Evolution will not print in landscape mode. I needed landscape because in portrait all of the words were jammed together. In landscape mode they overlapped each other, which did not look nearly as bad as portrait, but neither was sufficient for business purposes.

When you set the calendar to print in landscape mode and look at the print preview, all seems fine. However, when you actually print the calendar it jumps back to portrait. I noticed that preview mode created a PDF under /tmp, so I thought I would go ahead and use that. It was after printing it that I discovered the words for the appointment reason were overlapping.

After a couple of hours of digging around on the web I found this bug; look specifically at comments 12 and 13. Are you kidding me? This has been going on for over a year and still isn't fixed. The only difference between the writer's comments and what I experienced is that I did not have a print button in print preview.

And before you ask: no, I am not using something old. The OS is CentOS 5.2 with all updates applied, and the Gnome Evolution version is 2.12.3, the version shipped with CentOS 5.2.

So how did I fix my problem? I made the entries that I needed in Yahoo Calendar and put it into printable view. Since this had the Yahoo logo on it, which I can't use (long story), I copied just the calendar and pasted it into OpenOffice as HTML. Then I right-clicked on the calendar and clicked Tables; that is where I set the column widths and table grid for the calendar. Voila! I now had a decent looking printed calendar that I could use for business purposes.

So if you know of a good calendar application, hopefully for GNOME but KDE is OK as well, please let me know.

-j


October 17, 2008  11:46 PM

Chalk one up for Linux and Health Care

John Little Profile: Xjlittle

When it comes to health care, it always seems as though it and Linux have a strange relationship. Doctors tend to purchase software without any thought to cost or licensing. This generally leads them into a one-size-fits-all solution, and they either just deal with the shortcomings of the software or go and buy another one-size-fits-all solution to cover the shortcomings. And so begins the "Endless Circle" of purchasing software, licensing and annual maintenance costs.

Now enter CRIX International (The Clinical Research Information Exchange), Red Hat, Alfresco and JBoss.

CRIX is a not-for-profit collaborative consortium that includes government agencies, members of the bio-pharmaceutical industry, academic researchers, healthcare providers, and other stakeholders in the development of new drug therapies. Typically, pharmaceutical research is slow and expensive. There is a tremendous amount of research data that has to be passed from company to company, and on to the FDA.

CRIX chose Red Hat, JBoss and Alfresco to make collaboration between all of the consortium members easier, faster and less expensive. Now, as soon as a document is submitted, all interested parties in the consortium have access to the information. If suggestions or edits need to be made, they can be handled very quickly, without having to send paper, CD-ROMs or email attachments to all of the parties. Alfresco's software takes care of the version control. Alfresco is a perfect choice for this project since it is compatible with all major office suites, including Microsoft Office, and if you are a Windows user you can still use shared drives.

The benefits of this marriage:

It will enable previously unmatched levels of collaboration among pharmaceutical companies, government agencies, academic institutions, and health care providers, making the drug development, testing, and approval process more secure and efficient while reducing costs and safeguarding the safety of the end consumer.

The software that they developed became known as the CRIX Collaborative Platform. It was developed using Red Hat, Alfresco ECM and several components of the JBoss middleware. The reason for this is simple: it allows participants in the project and ISVs to create whatever components or plugins they need to enhance the software for their particular use or need. Remember the "Endless Circle" above? Over and done with. Kaput. History. With the "Endless Circle" no longer in play, it is now quicker and considerably less expensive to bring a drug to market.

Kudos to the folks at CRIX, Red Hat and Alfresco for making this happen. Anyone in the medical field should pay attention to this type of solution; it will pay huge dividends in the future. Not only will you get software that does exactly what you want, at free to low cost, but if your needs change you can recode the software to meet those changing needs.

References: here

-j


October 17, 2008  2:19 AM

Virus shuts down sales of ASUS Eee PCs in Japan

John Little Profile: Xjlittle

The virus, known as recycled.exe, was put on the D: drive at the factory. When the user booted the ASUS Eee PC for the first time, the virus copied itself to the C: drive. According to ASUSTeK, 4,500 of the Eee PCs were made for Japan and only about 300 had been sold.

Now for me this begs the question: did this ever happen with any of the Eee PCs sold and shipped with Linux? While I don't officially know the answer, my guess is that it did not.

That brings me, and all Linux users, to the next question: why do manufacturers insist on putting Windows on their machines rather than Linux? Market share, or, translated: it's what everybody has.

So let's discuss the validity of "everybody has it" and see if we can get some of you users to switch to desktop Linux. Yes, I know all of the usual reasons why you don't want to: it won't do what I want it to do; it doesn't have software that lets me do thus and such; I may have to use the command line. Yada yada yada.

What exactly does it not do that you want it to do? It edits photos, plays music, plays DVDs, browses the internet and, wait for it, will even send and receive email. If you are a regular desktop user, the chances that you will have to use the command line are about as great as the chances that you would need to edit the Windows registry. In fact, I would say you would have to edit the registry before you would ever need to use the command line.

If you are a little more aggressive with your desktop, you already edit the registry. I can assure you that using the command line is much easier than editing the registry. Think about the fact that much of the configuration for any application you run on Windows resides in the registry. Compare that to Linux, where the configuration for any application you use lives in plain text files, typically under the /etc directory. I know from experience that editing a text file is considerably easier than editing the registry.

So what then is the problem? Are you afraid to learn something new? It costs you absolutely nothing to try, so it can't be the cost. If you are reading this, you have the intelligence to learn and run Linux.

Go ahead, think about it. Stop buying licenses that don't even let you own the software, let alone install it on as many machines as you need.

Download an easy-to-use distribution such as Ubuntu or CentOS and find the freedom of using and installing software on as many machines as you need, at no cost to you unless you opt to buy a pre-burned set of CDs for about $5. Ubuntu is more for a regular user, while CentOS is more for an administrator or power-user type who needs stability and likes to run servers and experiment with software on a local machine.

When it is all said and done you will be glad that you did.

-j


October 15, 2008  9:32 PM

LPIC 101

John Little Profile: Xjlittle

Well, I finally decided to go ahead and start the LPIC certification process. Today I took the LPIC 101 (first test) and passed with a score of 650.

For those of you who don't know, LPIC stands for Linux Professional Institute Certification. There are three levels of certification: Levels 1, 2 and 3. The test that I took was the first of two for the LPIC 1 certification.

I fully intend to follow through all the way to LPIC 3. You certainly learn many things studying for these tests.

The study guide that I used the most was Exam Cram 2: LPIC 1 by Ross Brunson. This book covers both the LPIC 101 and 102 exams. It also includes sample tests that can be loaded on either a Linux or Windows computer; the electronic version has both a training and a testing scenario. There are also 10 practice questions at the end of each chapter and a 60-question written test for each of LPIC 101 and 102. I highly recommend this book.

So rather than sitting in front of the TV or Xbox, take some time to earn yourself a certification. You'll be glad that you did.

-j


October 15, 2008  10:57 AM

Why use Linux?

John Little Profile: Xjlittle

I hear this question occasionally, along with the usual answers: because it's free, or because it's secure. While all of this is true and certainly plays a part in the decision to use Linux, it is not my primary reason for using Linux.

In a nutshell it comes down to a substantially better price:performance ratio. Take for example Red Hat's virtualization product. For starters, Red Hat integrates virtualization into the operating system at no additional cost. The real kicker, though, is the performance when compared to VMware.

Red Hat and Intel worked together to produce a virtualization package tightly integrated with the Caneland processor. Having completed the project, they asked the independent laboratory Principled Technologies to run some industry-standard benchmarks on these new capabilities. The results can be found here.

In their tests they used a standalone Red Hat 3 server, a Red Hat 5 server, and another Red Hat 3 server virtualized on the RH5 machine. The results are as follows:

* A Xeon system running Red Hat Enterprise Linux 3 achieved approximately 210,000 operations/second (4 socket, hyperthreaded, dual core allowing for 16 compute threads).
* A Caneland system running Red Hat Enterprise Linux 5 achieved approximately 380,000 operations/second (4 socket, quad core also allowing for 16 compute threads).
* A Red Hat Enterprise Linux 3 virtualized guest running on a Red Hat Enterprise Linux 5 host achieved approximately 340,000 operations/second. So Red Hat Enterprise Linux 3 delivered a performance increase of over 50 percent when running virtualized on the new Caneland system.

Regarding virtualization on VMware, there are some points to consider. The first is quite simply the added cost of VMware, regardless of which operating system you choose. Beyond that, Red Hat Enterprise Linux guests can utilize all the underlying hardware, so a full quad-core, 4-socket system can be virtualized and presented to Red Hat Enterprise Linux 3. VMware does not support guests with more than 4 execution threads, which means VMware cannot provide a virtual machine guest larger than one quarter of the new Caneland system's capacity.

Although I discussed only one technology, there are many examples to be found. And when I hear of places such as Indiana University (almost 200,000 faculty and staff), Amerada Hess Corporation (oil exploration supercomputing), Burlington Coat Factory (entire systems) and Conoco (oil exploration supercomputing), I have to believe these folks have some very smart engineers and CTOs on their IT staffs who decided that Linux is the best platform to be running. The complete list can be found here.

-j


October 13, 2008  12:48 AM

Two Linux utilities you should always have

John Little Profile: Xjlittle

Those two utilities are aria2 and screen.

Quite often as administrators we have to download large files. We may start this work from a workstation at work or over the VPN to a server at work. The best way that I’ve found to handle this task is using aria2.

The aria2 utility is command-line driven. It supports downloads via BitTorrent, HTTP(S), FTP and Metalink. It can download one file or multiple files from multiple sources or protocols simultaneously, and when handling multiple downloads it attempts to utilize all of your available download bandwidth.

Downloading via local bittorrent files is as simple as:

aria2c file1.torrent file2.torrent

or from an http site:

aria2c http://site/file.torrent

If you want to download the same file from two different locations at once, use:

aria2c -s2 http://host/image.iso http://mirror1/image.iso http://mirror2/image.iso

The -s2 indicates that you want to download from two sites simultaneously. If one of the sites fails, aria2 will attempt to use the third listed site.

I'll leave you to visit aria2's site and explore the many options it has to offer.

The next utility is screen. In many cases you will find that aria2 and screen work very well together.

Screen is a utility that lets you start a persistent session on a remote machine after you log in via ssh and begin your work there. If you lose your ssh connection, the job you started is still running in the screen session when you log in again. You can also detach the screen session from your current ssh session, go home or move to another workstation, log in again and reattach to your screen.

This is a fantastic utility that becomes a real lifesaver when running long jobs, downloading large files such as ISOs, or keeping documents open over ssh in an editor such as Vim. Imagine that you just started compiling a kernel and were called away from your office. Over a standard ssh session you have no way to check the progress; with screen, though, you can log in to the remote machine, attach to the screen session and check on the progress of your compile.

I say it works hand in hand with aria2 because of large downloads such as ISOs. Start your screen session and your aria2 job, detach from your screen session and go home or wherever you need to go. Once at your destination, log in to the remote machine and reattach to your screen session. If the ISO has finished downloading, mount it via a loop device and do whatever you need with it.
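A minimal sketch of that workflow; the host names, session name and download URL here are all hypothetical:

[user@laptop ~]$ ssh server
[user@server ~]$ screen -S iso                 # start a named screen session
[user@server ~]$ aria2c http://mirror/image.iso
(press Ctrl-a d to detach, then log out and head home)
[user@home ~]$ ssh server
[user@server ~]$ screen -r iso                 # reattach and check progress
[root@server ~]# mount -o loop image.iso /mnt  # once finished, mount the iso (as root)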

Both of these utilities are command-line driven, very powerful and helpful, and very easy to use. You can probably find them in your rpm or deb repositories. Give them a spin and you just might find a whole new way of computing, or at the very least two really great utilities to put in your toolbox.

-j

