Open Source Software and Linux


February 12, 2009  1:19 AM

How secure is your network? (Part 2)

John Little Profile: Xjlittle

In my last post I referred to an article about the number of security breaches in networks across the U.S. Those breaches have caused economic losses estimated at a trillion dollars.

As I mentioned in that post, my home network certainly doesn’t rank with those in the article, but it did give me pause to consider my own network’s security. I outlined the things I wanted to harden as follows:
1. Disallow ssh root logins
2. Disallow su to root except for certain users
3. Disallow internal ssh logins to any machine on the network. These logins must come from the “jump” machine

An overview of my network: I have a 1U server running CentOS 5.2 with its native virtualization. The guests include file, web, NAS frontend, database, DNS, DHCP and firewall machines. All of them are para-virtualized and run CentOS 5.2, with the exception of the NAS frontend, which comes from the Openfiler project at rPath. A NIC is imported to the firewall machine, which is directly connected to the internet. All of the machines share a common NFS mount, and inbound service requests from the internet are forwarded to the appropriate machine based on the port number.
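I haven't shown my actual firewall rules here, but as a rough sketch, port-based forwarding like this is typically done with iptables DNAT rules on the firewall machine. The interface name eth0 and the web server address 172.16.0.10 below are hypothetical placeholders; 172.16.0.201 matches the jump machine address used later in this post:

```shell
# Forward inbound web traffic (port 80) to the web server guest
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 172.16.0.10:80

# Forward inbound ssh (port 22) to the "jump" machine only
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
  -j DNAT --to-destination 172.16.0.201:22

# Allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -d 172.16.0.10 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -d 172.16.0.201 -p tcp --dport 22 -j ACCEPT
```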

I disallowed ssh root logins by editing the /etc/ssh/sshd_config file as shown below.

Protocol 2
SyslogFacility AUTHPRIV
PermitRootLogin no <==changed this to no
MaxAuthTries 2 <==changed this to 2
PasswordAuthentication yes
ChallengeResponseAuthentication yes <==changed this to yes
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UsePAM yes
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL
X11Forwarding yes
Subsystem sftp /usr/libexec/openssh/sftp-server

The above is the stock sshd_config file with the noted changes made.

I disallowed su to root by removing the comment on a line in /etc/pam.d/su. This file is shown below.

#%PAM-1.0
auth sufficient pam_rootok.so
# Uncomment the following line to implicitly trust users in the "wheel" group.
#auth sufficient pam_wheel.so trust use_uid
# Uncomment the following line to require a user to be in the "wheel" group.
auth required pam_wheel.so use_uid <==uncomment this line
auth include system-auth
account sufficient pam_succeed_if.so uid = 0 use_uid quiet
account include system-auth
password include system-auth
session include system-auth
session optional pam_xauth.so

After making this change I added my account to the wheel group so that I could su to root as necessary. I also modified the sudoers file and added the following line so that I could use sudo and not have to su to root for short administrative tasks:

jlittle ALL=(ALL) ALL

Again all of these files are the stock CentOS files except for the changes.

I then edited the tcpwrappers files, /etc/hosts.allow and /etc/hosts.deny, so that the machines would only accept ssh connections from the “jump” machine on the internal network.
hosts.allow:

sshd: 172.16.0.201

hosts.deny:

ALL: ALL

If you want to check to see if a binary is tcpwrappers aware such as sshd use the following command:

[root@fw0 ~]# ldd `which sshd` |grep libwrap
libwrap.so.0 => /usr/lib/libwrap.so.0 (0x00ddf000)
[root@fw0 ~]#

Substitute the binary that you want to check for sshd.

To speed the changes along I copied all of the modified files to the shared NFS mount. I then created a script to replace the existing files, add my username to all of the machines and enter my ssh public key into ~/.ssh/authorized_keys. All I had to do at this point was login to each machine and run the script to make the changes. The script follows. Make sure that you adjust it to fit your needs if you want to use it.

cp -af /srv/secure/hosts.* /etc
cp -af /srv/secure/dist.su /etc/pam.d/su
cp -af /srv/secure/sshd_config.root /etc/ssh/sshd_config
cp -af /srv/secure/sudoers.jlittle /etc/sudoers
useradd jlittle
usermod -a -G wheel jlittle
passwd jlittle
[ -d /home/jlittle/.ssh ] || mkdir /home/jlittle/.ssh
cat /srv/secure/id_rsa.pub.jlittle >> /home/jlittle/.ssh/authorized_keys
chmod 700 /home/jlittle/.ssh
chmod 600 /home/jlittle/.ssh/authorized_keys
chown -R jlittle:jlittle /home/jlittle/.ssh
service sshd restart

There you have it. An hour or two of work and I have hardened my network a little more. This, coupled with strong passwords, goes a long way toward securing your network from inside and outside attacks.
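On the subject of strong passwords, a quick way to generate one, assuming the openssl command-line tool is available (it ships with stock CentOS), is:

```shell
# 16 random bytes, base64-encoded, gives a 24-character password
openssl rand -base64 16
```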

-j

February 11, 2009  6:47 PM

How secure is your network? (Part 1)

John Little Profile: Xjlittle

After reading this article I began to wonder how secure my home network really is. After giving the article much thought I concluded that my home network is probably not as secure as I would want.

Sure it’s secure, probably above and beyond most home networks. I use iptables as my firewall. Connections from the internet are directed to a particular machine based on the inbound port. SSH connections from the outside are directed to one machine so that you must be able to get to that machine to reach the rest of the network. My web server uses standard apache security. Seems reasonably secure for a home network. Maybe.

After all I’m not a millionaire. I don’t have other people’s confidential information on my network. I’m not the FAA or a bank. No one in their right mind would try and extort money from me based on the information contained on my network. Besides, what little I could give them wouldn’t make it worth their time. However these justifications just don’t give me a warm and fuzzy feeling inside.

Crackers don’t necessarily just want those things. Sometimes it is just vandalism, tearing up someone’s machine. Or they may want to use a machine to set up a DoS attack. It could be that they want to use the mail server as a relay for spam. Whatever it is, I don’t want to have to take the time to clean up after them. After all, if they can break into the networks listed in the article, it would seem rather arrogant of me to think that they couldn’t break into mine.

The question then becomes what to do to make it more secure. Below I’ve created a scope sheet of sorts for the work that needs to be done.

1. Disallow ssh root logins
2. Disallow su to root except for certain users
3. Disallow internal ssh logins to any machine on the network. These logins must come from the “jump” machine

What else can I do? I’ll give that some thought. If you have suggestions post them in the comments. It is always interesting to hear how other people secure their networks above and beyond the norms.

In my next post I’ll describe the changes that I’ve made based on the scope of work above.

-j


February 11, 2009  1:14 AM

A response to Why are some open source people so adamant about doing a disservice to their users?

John Little Profile: Xjlittle

I want to respond to a post by one of my fellow bloggers titled Why are some open source people so adamant about doing a disservice to their users?

I will be the first to admit some people are close to fanatical about software that they use. Most notably I’ve seen this with Open Source users, developers and administrators and Lotus Notes developers and administrators. Fanaticism about anything causes people to do and say things without thinking them through. I am referring to the administrator in the aforementioned article. That, however, is not what this post is about.

In his post Mr. Denny states and asks the following: “However I recently saw a post on /. about how a university network admin wanted to start switching the university over to open source. The only thing that came to mind was why on earth would you want to do such a disservice to your students?” How is this a disservice?

The truth of the matter is that it is more of a disservice not to do something like this. It is more economically responsible than going out and blindly buying proprietary products (read: licenses) without a thought for less expensive open source alternatives that provide the same capabilities.

Then all of a sudden we switch gears to something more specific: “In the article he’s talking about replacing Office 2007 with Open Office. Which is a fine idea for home, or for a business; however an educational institution should be more concerned with making sure that the students have access to what they will be using in the real world when they get into the job market.”

First I want to point out that the statement contradicts itself. He says it is a “fine idea for home, or for a business” but then says that students should “have access to what they will be using in the real world when they get into the job market.” Most people go to work for a business, Mr. Denny.

Moving beyond that my personal experience with Open Office is that the learning curve is minimal with the word processor and non-existent with the slide show or spreadsheet modules. I know that Microsoft used to sell a pared down version of Office for around $10 or $15 to students. Isn’t it better for the students to have a full working version of whatever software that they are going to use?

To his credit he does state that in an ideal world students should have access to both versions. I agree with this. It is the only way for anyone to make a valid decision about any product that they use.

This statement is the one that really gets me though: “While open source is great, most large companies (which is where most university students want to end up) don’t use much if any open source applications.” I’m positive that what he means by this are companies like IBM, NASDAQ, Eli Lilly, Yahoo!, Wal-Mart and Lockheed. Let’s not leave out government agencies like the NSA, the U.S. Navy and the U.S. Army. I know these people use Open Source, Mr. Denny, from either personal experience or from talking with people who work at these places. Maybe you should get out more or, at the very least, take off the Microsoft blinders.

I could go on but I believe that I have made my point. By the way Mr. Denny, you say that Open Source is great. Have you ever actually used it consistently for longer than a week or two?

-j


February 6, 2009  2:49 PM

Solaris 10, ksh and root

John Little Profile: Xjlittle

Like many people, I have wondered whether I can set the default root shell on Solaris 10 to ksh. After some study and research, it apparently is OK to do this. The default shell for root is /sbin/sh. For those of us who use Linux, this is not a symlink to bash but a real Bourne Shell binary.

Solaris versions previous to 10 had the Bourne Shell statically linked so that, in case of a crash, root would still have access to a shell even when certain directories, namely /usr, were not mounted when booting into ‘rescue’ mode.

With version 10 of Solaris, Sun has dynamically linked the Bourne Shell. This means that it does in fact now use shared libraries. It also implies that, since other shells use shared libraries as well, it is OK to use the shell of your choice.

Sun has also built code into the OS so that, if for some reason the designated default shell is not accessible, it will fall back to /sbin/sh. This resolves the problem of changing the default shell while still being able to sleep at night. Still, experience says this is new and different; do I really want to rely on it?

If you are like me, a devout coward regarding such things, then you probably are reluctant to go this route. Because of my cowardice I have reached a compromise that works well, at least for me.

First I created a /root home directory and changed the home directory in /etc/passwd to reflect this. The directory mode is set to 700 and user/group ownership is set to root.

I then copied the contents of /etc/skel, including the hidden file .profile, into root’s home directory. After that I edited the .profile file so that it contains the following:

SHELL=/bin/ksh
export SHELL
HISTFILE=~/.history
HISTSIZE=1000

This gives me a login and working shell of ksh while leaving the default shell in /etc/passwd set to /sbin/sh.

hth.

John


February 5, 2009  5:18 PM

Setting the qualified and loghost name for Solaris 10

John Little Profile: Xjlittle

After installing Solaris 10 for the first time I noticed that I was getting messages on boot referring to the loghost and qualified host name as not resolvable.

After some digging around I came up with the solutions to these messages. As so often happens, the fix for the problem is quick and simple relative to the time it takes to find the answer.

The “loghost could not be resolved” problem arises from the /etc/inet/hosts file. You simply need to append loghost to the ::1 localhost and 127.0.0.1 localhost entries.
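For instance, after the change the relevant entries look something like this (additional hostname fields may also appear on these lines on your system):

```
::1         localhost loghost
127.0.0.1   localhost loghost
```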

The My unqualified host name (sol10) unknown; sleeping for retry message is resolved by putting the FQDN into the /etc/nodename file:

sol10.home.local

What puzzles me is why this is not set up during the install phase. The installer does ask for the hostname of the machine, which would fix the sendmail problem. Apparently loghost is a standard part of the Solaris 10 OS, so why not put it in the hosts file when it is created?

You should also be aware that when you open the hosts file it will be read-only even if you are root. If you are using vi to edit the file, make your edits and close it with :wq!. That way it will be saved without changing the permissions. Changing the default permissions on these files is not a good idea.

-j


January 27, 2009  8:51 PM

How many Top applications are in Linux?

John Little Profile: Xjlittle

I don’t know how many Top applications there are for Linux. However, Markus Feilner and Sascha Spreitzer over at Linux Pro Magazine have evaluated all that they could find and come up with a top ten for us to use.

As you might expect, the original top came in at number one. I was surprised at the range of uses the other Tops cover.

My favorite is PowerTOP, since all that I use is a notebook. Occasionally it gets hot but the fan doesn’t come on. Maybe PowerTOP can help me with that!

Another one that I like is MyTop for MySQL. You can use MyTop to analyze what is going on with your MySQL database. There’s also PTop for PostgreSQL users.

Head over to their website and check out the article. You may find a Top that you really like!

-j


January 27, 2009  8:01 PM

VirtualBox releases version 2.1.2 with wireless NIC support

John Little Profile: Xjlittle

The newest version of VirtualBox has the native ability to use host interface networking and attach to your host’s wireless NIC. This gives guests the ability to use a local IP address on your network while maintaining an internet connection. In other words, your virtual machines can now become an integral part of your LAN.

If you’ve ever tried to use wireless with a virtualized machine you know that it is a royal PITA. With this new version of VirtualBox this is no longer the case.

The 2.1.0 version was released in December of 2008. If you tried that version you probably ran into numerous problems in many areas. In my case it crashed my CentOS 5.2 installation, and an attempted upgrade failed to clean up files and in general wreaked havoc during the boot phase of the host machine.

This latest version appears to have most of this cleaned up. However, to be safe, if you are using a 2.0 version or lower I would uninstall it rather than upgrade. You should also do a manual search-and-destroy of any leftover files. You can make quick work of this by issuing the following commands after the uninstall:

locate -i vbox | xargs rm -Rf
updatedb
locate -i vbox

You should not get any return after the last command.

All of that said, this newest version of VirtualBox looks very promising with its ability to natively connect to the host’s wireless NIC and LAN. You can download it here. It is available for Linux, Solaris, Windows and Mac.

-j


January 27, 2009  7:30 PM

Broadcom releases open source wireless drivers

John Little Profile: Xjlittle

Broadcom has released an open source version of their wireless drivers.

The driver is available for 32 and 64 bit operating systems. It supports 802.11a/b/g/n wireless protocols so it should work for any network. It is designed for use on Broadcom’s BCM4311, 4312, 4321 and 4322 based hardware.

The software comes in tar files for download. This means that you will need your distribution’s kernel development package and kernel headers in order to compile the driver.

The binary driver files are designed to work with any version of the Linux kernel using operating system specific files. That should make it viable for any Linux distribution.

You can download the drivers here.

-j


January 27, 2009  2:08 PM

Open source software is literally making money...

John Little Profile: Xjlittle

Open source software is literally making money in Holland.

Stani Michiels, a Belgian artist and open source software developer, won the competition to design the new 5-Euro commemorative coin. The competition was set up by the Dutch Ministry of Finance.

His winning design was developed entirely with open source software, has now been minted and is a legal coin in Holland. The theme of the competition was “Holland and Architecture”. Entry was by invitation only, limited to a select group of architectural firms and individuals.

Michiels says in his blog “All the developing and processing was done on GNU/Linux machines which were running Ubuntu/Debian.”

How he went about designing his coin is wildly interesting as is the software and hardware that he used to generate the graphics. You can read all about it on his blog.

-j


January 25, 2009  1:22 AM

Using the Korn Shell with Linux

John Little Profile: Xjlittle

My current consulting gig requires that I use the Korn shell and modify Unix scripts so that they will work with Linux. While the Korn shell shares many characteristics with Bash, there are some distinct differences, or at least features that I’ve never seen in Bash.

The first difference that I noticed is tab completion. For example let’s say that I issue the command

ls /home/jlittle

and hit the tab key to see the files and directories. The output that you see will be in this format

ls /home/jlittle/
1) CentOS-5.2-x86_64-bin-DVD/
2) Desktop/
3) Documents/
4) Video call snapshot 8.png
5) bin/
6) ffmpeg.cfg

At this point you can either choose a number and hit the tab key or type in the first couple of letters of what you want to see or do. The complete output when using the number would look like this

ls /home/jlittle/<tab>
1) CentOS-5.2-x86_64-bin-DVD/
2) Desktop/
3) Documents/
4) Video call snapshot 8.png
5) bin/
6) ffmpeg.cfg
ls /home/jlittle/Desktop/<2tab>
Project-timeSheet.ods Skype.desktop

Typing 2 and then tab gives us the listing of /home/jlittle/Desktop/. Kind of a cool way of doing tab completion, don’t you think?

You should also not use the “test” built-in that is available in bash. In bash the test built-in is the same as the “[” built-in. In other words don’t use

if test $# -gt 0; then

instead use:

if [ $# -gt 0 ]; then

The Korn shell also prefers the double-bracket syntax “[[ ]]” to single brackets. This adds additional operators such as && and ||:

if [[ $# -gt 0 && $? -eq 0 ]]; then

You can use && and || to construct shorthand for an “if” statement in the case where the if statement has a single consequent line:

[ $# -eq 0 ] && exit 0
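To illustrate, here is a tiny runnable example of the shorthand form (plain POSIX syntax, which ksh shares):

```shell
#!/bin/sh
count=3
# && runs the right-hand side only when the test succeeds
[ "$count" -gt 0 ] && echo "count is positive"
# || runs the right-hand side only when the test fails
[ "$count" -lt 0 ] || echo "count is not negative"
```

Running this prints both messages, since the first test succeeds and the second fails.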

The Korn shell is a powerful tool that can make your job easier. Since its creation, several features have been added while maintaining backwards compatibility with the Bourne shell. The Korn shell can also be used as a programming language, which gives it a distinct advantage over typical Unix and Linux shells.
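As a small taste of using the shell as a programming language, here is a simple function; the syntax below is plain POSIX shell, which ksh accepts unchanged:

```shell
#!/bin/ksh
# Sum all of the numeric arguments and print the total
sum() {
  total=0
  for n in "$@"; do
    total=$((total + n))
  done
  echo "$total"
}

sum 1 2 3 4   # prints 10
```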

Give ksh a whirl. I haven’t even scratched the surface of what the Korn shell can do for your scripting. If you are used to scripting with Bash then learning the Korn shell should only have a mild learning curve while presenting you with additional scripting power and speed.

-j

