I.T. Security and Linux Administration


January 30, 2013  4:48 PM

PostgreSQL – Updating From 8.x to 9.x

Eric Hansen

I installed BackTrack recently, which is based on Ubuntu 10.04 (Lucid Lynx). As such, it doesn't have the most up-to-date software, even though it operates on its own repos. So, when I migrated a site that was used to PostgreSQL 9.1 over to the new server running PostgreSQL 8.4, the database wouldn't play nice. Some of the queries would work, but when it came to performing CONCAT() and similar operations, the server croaked.

After looking high and low on Google for how to solve this, I decided to just see what my options were on my system. This is where I discovered pg_upgradecluster. From what I can tell it's Debian's wrapper around the upgrade process, but for the life of me I can't figure out why PostgreSQL doesn't make this as easy in pg_upgrade itself. Anyways…

The first thing you should do (and I highly suggest doing it) is stop any running instances of both the old and new PostgreSQL servers. For me it was simply a matter of running these:

service postgresql-8.4 stop
service postgresql stop

Easy enough. Then I had to figure out where my 8.4 data was stored. Typically (as was the case here), it's stored in /var/lib/postgresql/8.4/main. It's important to make note of the last two parts of that path, too. You'll see why in a bit.

To keep things organized and nice, I decided to use the same structure for 9.1. What I ran was this:

pg_upgradecluster 8.4 main /var/lib/postgresql/9.1

This took me about 3 minutes on a small-scale database (49 MB according to du -h .). What happens is that pg_upgradecluster migrates all of the data from /var/lib/postgresql/<version>/<cluster> (in this case, version = 8.4 and cluster = main) to the new data directory, keeping the same cluster name (main). It also migrates the configuration files and any user accounts in the PostgreSQL system.

After that, just start up the PostgreSQL server instance and you should be running on 9.1 now.
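If you want to sanity-check the result before pointing your site at it, the postgresql-common tools that ship alongside pg_upgradecluster are handy. A quick sketch (cluster names match this article; adjust them to your setup):

```shell
# List clusters and their status; after the upgrade, 8.4/main should be
# "down" and 9.1/main "online"
pg_lsclusters

# Confirm the server answers and reports the new version
sudo -u postgres psql -c 'SELECT version();'

# Once you're happy with 9.1, the old cluster can be dropped
# pg_dropcluster 8.4 main
```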

January 28, 2013  3:03 PM

Getting Into Linux Security (Part 2)

Eric Hansen

Earlier this month I started a series about breaking into the Linux security field (part 1: http://itknowledgeexchange.techtarget.com/security-admin/getting-into-linux-security-part-1/). I’m going to continue this with more tools of the trade to start learning.

Shorewall

I wrote an article about some of the pros and cons of Shorewall, a front-end for iptables. But why am I listing it here? Because once you know how people can get into a network, you need to be able to keep them out.

Installing a firewall is one thing, but being able to manage it is another. While you won't always have to mess with the gritty details of firewalls, knowing how to configure various ones is definitely a good asset to have. And what you learn from Shorewall translates easily to iptables or some other solution, kind of like how learning the Linux CLI can help you navigate a Unix CLI, even though they aren't the same.

There are a few different solutions besides Shorewall you can dabble in, but I've found that for those who don't want to use native iptables, Shorewall is there. Its effectiveness and ease of configuration basically give you a 'set it and forget it' feel, and will definitely make life easier.

Metasploit

Venturing away from the network aspect of things, Metasploit is another one of those de-facto standard applications that you should have in your Swiss army knife. Used by many IT professionals and highly recognized in the application vulnerability assessment field, Metasploit will be a great asset during audits.

Similar to other tools that'll be listed, this isn't a hacker's delight, and it's not meant to be. It's very easy to be detected using this when it comes to automated runs. Instead, it's meant to give you a realistic look at how hackers view your network.

Metasploit is basically a huge database of known vulnerabilities in various services and in the systems themselves. For example, if a hacker detects you're running Apache CloudStack 4.0, they could attempt to exploit CVE-2012-5616. Metasploit has a vast community of developers and authors who write modules for it, which allow penetration testers to see whether their servers are vulnerable to a given CVE.
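For a flavor of the workflow, here's a rough msfconsole session. The module and target below are only examples (http_version is a simple banner-grabbing scanner); search your own install for modules matching the software you actually run:

```shell
msfconsole -q
# Then, inside the console:
#   search cloudstack                        # find modules for a given product
#   use auxiliary/scanner/http/http_version  # e.g. fingerprint a web server
#   set RHOSTS 192.168.1.10                  # a host you have permission to scan
#   run
```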

Unless you have permission from the server owner, I would advise against running this on anything outside of your LAN. Even as a professional, when I'm performing audits I contact the data center and inform them of the audit, what it will involve and the time frame. This is a very powerful tool, but as the saying goes, "with great power comes great responsibility".

Nessus

The last tool in our second part is going to be Nessus. This tool has been around about as long as Metasploit, if not longer, and has a similarly strong background in the audit world. However, it's more of a network auditing tool than an application one.

Nessus itself is very powerful. It will also create a lot of noise on the network, similar to Metasploit. So again, I advise keeping this on your LAN unless you have permission.

What this software does is probe any hosts you provide it and throw a bunch of attacks at them. You can, however, also create different profiles to fit different attacks. So if you only want to test Apache, you can create an Apache profile and add specific tests. This makes the tool very versatile and flexible.

I'll admit I haven't used Nessus in a few years, but the last time I did, the paid version only allowed PDF reports, and those had references to Tenable Security (Nessus' developer) in them. However, if you're looking for a strong contender with a very flexible engine that offers in-depth information about your network's security, this is definitely a great tool to have and understand.

Conclusion

There are so many other tools that I'll be touching on, mostly free ones that offer the same functionality at a better price tag. However, this is a good start on looking into the deeper aspects of security.


January 28, 2013  12:55 PM

Shorewall Firewall

Eric Hansen

Linux is well known for its networking capabilities. This includes turning an old, dusty machine in your house into a home-grown firewall or even a PBX (a fun weekend project, by the way). But as with just about everything else involving Linux, there are a million ways to solve one problem. Such is true with firewalls.

When you install your OS, you're most likely going to have iptables pre-installed (you should, anyway). This is mostly because iptables is the de-facto standard for managing a firewall on Linux. It hooks directly into netfilter to inspect packets in great depth and can even interface with IDSes like Snort when configured properly. But the problem with having such power is that it can become very troublesome to maintain and configure.

For example, to set up a simple logging rule, you have to do this:

iptables -A INPUT -p tcp --dport 22 ! -s 192.168.1.0/24 -j LOG --log-level 4 --log-prefix "[ssh daemon access attempt]: "

This is not very readable, and the fact that I know it by heart kind of speaks for itself, too. Mind you, this is a pretty simplistic rule, but it goes to show that it takes some effort to configure a firewall properly with the default tools. But what if there were an easier tool? A better tool? A safer tool?

Over the weekend I spent a few hours working with Shorewall, a front-end of sorts for iptables. Now, while the documentation for Shorewall says to disable the iptables service, it still requires iptables to be installed. This threw me off at first, until I started up Shorewall and realized why.

Shorewall does a lot to make firewall management easier without taking away the power of iptables; it just simplifies it. For a one-NIC firewall configuration, you're looking at about 4 or 5 files to edit, with 2 of them (the policies and rules files) being ones you'll edit more than once. Compare that to troublesome iptables rules and switches, remembering which rule does what, etc., which can make for a headache fast.

I'll be writing a tutorial on this soon, but as a starting point, Shorewall separates the power from the user. This lets you easily see what you're configuring, how it's configured, and when it'll take effect. Shorewall even allows for macros, which are further-simplified rulesets you can use for well-known services (i.e.: to deny ping attempts, just call the Ping macro like this: Ping(DROP)). This makes for quick and easy management and also makes rules that much simpler to follow.
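To give a flavor of what those files look like, here's a sketch of a minimal /etc/shorewall/rules for a box that only exposes SSH and a web server. The net and $FW zone names are the conventional defaults from the zones file, and Ping, SSH and Web are macros that ship with Shorewall:

```
# ACTION        SOURCE   DEST
Ping(DROP)      net      $FW      # silently drop pings from the internet
SSH(ACCEPT)     net      $FW      # allow inbound SSH
Web(ACCEPT)     net      $FW      # allow HTTP/HTTPS
```

Each macro expands to the full protocol/port rules you would otherwise write by hand in iptables.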

Now that I've talked Shorewall up like it's a godsend and unbeatable, let's level the playing field a bit. While a great solution, Shorewall also has its downfalls, the biggest of which is poor documentation.

The documentation on the website makes Shorewall sound like it takes at least a master's degree to get up and running. I got it going without issues, but I had to follow a different guide. There is a lot of technical jargon that, to me, is nothing more than fluff. It adds nothing toward helping users get a firewall set up, and if I hadn't been bored and wanting to configure it, I would've just stuck with direct iptables rules.

There is also a severe lack of tutorials on Shorewall, and most of what is out there is lacking in substance. A lot of the guides I looked through were either for older versions or for configuring 'the perfect firewall'. I just wanted a simple how-to, and that was that.


January 16, 2013  11:14 AM

Getting Into Linux Security (Part 1)

Eric Hansen

There's been a big increase lately in interest in Linux security and how to get into the field. Some positions can be had knowing only basic command-line usage; others require a CISSP to even be considered. But experience in the field shows more than anything, even if it's gained sitting at your desk working through a virtual machine. So what can really help someone who's passionate about security and Linux become even better? Help them get a job, or even start their own Linux security business?

Test Environments

You're not going to know the tricks and ropes of every situation. Even if you were to simulate your own DoS attack and see how it affects your home network, there are many ways to construct such an attack.

What you can do, however, is set up a virtual environment, or even a small virtual server farm. Get them to talk to each other, throw DoS attacks at them and see how they react. Having the knowledge of virtualization will get you further ahead than you may think, especially if you work for a “go green” company.

Scenario: I had a job interview late last year for a company. I did pretty well up until troubleshooting came into play. Safe to say I didn’t get the job, but knowing what they were looking for me to do on a day-to-day basis really opened my eyes. It made me realize what I need to focus more on, and how to do so. I ended up taking that knowledge, installing a small Linux distro on a server and trying to simulate various issues. With virtualization I can kill the network adapter without interrupting anything important. I’m able to run through various scenarios in practice that I was tested with in the interview, and try to solve them there.

It’s not a permanent solution, but it helps a lot. Heck, I have another scenario for you.

Scenario: Working for a web hosting company, I was placed in the position of rebuilding a RAID array. I had very minimal experience with the technology and only knew a bit of the basics. But I was forced to dive in blindfolded and do what I needed to do. I don't remember the details, but I can definitely tell you it didn't go well (I told them beforehand I wasn't sure what I was doing, but I sure learned what not to do, hah). Anyways, after I got off work, I went back to my place and again installed a server in a virtual machine. I spent a good week toying with RAID and how to work with it. I still practice it to this day, and have also written various scripts to make the process smoother.

Virtualization is probably going to be your biggest friend going into security. There are so many things that can break and go wrong that it's easier to reformat a virtual machine than it is to reformat your PC.

Knowing the Software (Part 1)

nmap

This is going to be at least a two-part topic, because in security there is just too much to cover. The biggest tool I see in every arsenal, however, is nmap. Especially if you're looking to set up or test a firewall, nmap should always be the first tool you go to. It will help you test against malformed packets, scan the network for devices and a whole lot more. The documentation on this tool speaks more volumes than I can, but you should definitely learn your way around this piece of software.

Scenario: You notice a lot of traffic being sent to a server but no requests being fulfilled. This can be either an attempted DoS attack or someone just sending bad packets. The best way to find out, if you don't have network monitoring software set up, is to test. While nmap won't be a viable way to test for a DoS attack, it does provide a vast array of methods for testing with bad packets. You can run the host against each known test in nmap and compare the results until you find a match.
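A few concrete invocations to start with. The targets below are placeholders for hosts on a network you own; the flags are standard nmap options:

```shell
nmap -sn 192.168.1.0/24                      # host discovery only: which machines are up
nmap -sS -p 1-1024 192.168.1.10              # TCP SYN scan of the low ports
nmap -sU -p 53,123,161 192.168.1.10          # probe common UDP services (DNS, NTP, SNMP)
nmap --scanflags SYNFIN -p 80 192.168.1.10   # send a malformed SYN+FIN flag combination
nmap -O 192.168.1.10                         # OS fingerprinting
```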


December 31, 2012  3:51 PM

Privacy Policy and Big Brother

Eric Hansen

Recently there's been a lot of hysteria about privacy and the controls around it. A good instance of this is Facebook, which has essentially been under the gun ever since (or even before) its IPO failure earlier this year.

The idea behind privacy policies is nice, but short of being something to sue a company over, they hold very little information. They don't tell you how the data is stored or distributed, just that it is. They don't say what happens after a privacy breach, or how to protect your data. They also don't explain how to protect your privacy, just that it's possible.

A lot of businesses, especially social media-based ones, are primarily focused on bringing in users, but they tend to assume all of their users are computer-conscious. In fact, the safest way to present this information is to assume that users are not savvy and to spell it out for them instead.


December 31, 2012  3:28 PM

Review: Snort GUI – Snorby

Eric Hansen

Typical Snort installs have you installing BASE as a graphical front-end to view packet information. While the UI is fluid, it's also very outdated. It has the coding standards of 1995-2000, with limited functionality (just enough to get what you want and get out).

As such, there have been advances in making Snort logs easier to view. Among them is Snorby (www.snorby.org). It's based on Ruby on Rails and has a pretty slick interface that brings Web 2.0 to Snort. But how good is it, really?

Personally, I can't stand RoR projects. They're about as resource-intensive as Java programs and have about the same performance. It's great if you have a server with 32 CPUs and 192GB of RAM, but if you're trying to operate it on a VPS, you'll need a pretty high-end one just to give it enough RAM (a Xen VPS might be better suited).

The UI is nice, but it feels a bit clunky in that it tries to present too much to you at once. Otherwise, the color scheme is nice, but the navigation feels like everything is clumped together.


December 31, 2012  2:45 PM

Shell Portscanner

Eric Hansen

Many, many people have heard of nmap, the infamous port scanner that does everything you can think of. It's great if you want to do recon on a network, but what if you just want to see which ports are open without all the extra special features? Easy: you use Bash!

Below is a simple Bash script that uses the shell's /dev/tcp pseudo-device to establish a connection and see if a port is open or closed:
#!/bin/bash
# Adapted from http://legroom.net/2010/05/02/port-testing-and-scanning-bash
#
# Usage: ./portscan.sh <host>
#
# Change the range in the {...} block as you please.

port() {
    # Bash's /dev/tcp pseudo-device: redirecting into it attempts a TCP connect
    if (echo > "/dev/tcp/$1/$2") &> /dev/null; then
        echo "$1:$2 open"
    else
        echo "$1:$2 closed"
    fi
}

for i in {22..80}; do
    port "$1" "$i"
done

This does a check for ports 22-80, but you can change the range to match your needs. A link to this can be found here: https://gist.github.com/4422216
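One caveat with the /dev/tcp trick: against a firewalled host, each connect can hang for the kernel's full timeout. A variant wrapped in timeout(1) avoids that (the 1-second cap is an assumption; tune it to your network):

```shell
#!/bin/bash
# Same idea as above, but each probe is capped at 1 second
scan_port() {
    if timeout 1 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 open"
    else
        echo "$1:$2 closed"
    fi
}

for p in 22 80 443; do
    scan_port 127.0.0.1 "$p"
done
```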


December 31, 2012  2:24 PM

IPv6 Startup Script

Eric Hansen

After writing this article: http://itknowledgeexchange.techtarget.com/security-admin/ipv6-and-linux/ I decided to write a simple script that lets you start and stop the IPv6 functionality pretty easily. The script is well commented and easy to use:

#!/bin/bash

# This script is modified from the one found here: https://wiki.archlinux.org/index.php/IPv6_-_Tunnel_Broker_Setup
# Notably, this script now runs like this:
#   $0 {start|stop} <interface>
#
# So if you want IPv6 traffic to be routed through wlan0, you would do:
#   $0 start wlan0

if [ "$EUID" -ne 0 ]; then
    echo "You must run this as root"
    exit 1
fi

# Edit these
if_name=he6in4
server_ipv4=''  # HE server endpoint IP
client_ipv6=''  # Your HE-assigned client IP

# Don't edit below this line
client_ipv4=''
link_mtu=1480
tunnel_ttl=255

if [ -n "$2" ]; then
    # Pull the IPv4 address off the interface given on the command line
    client_ipv4=$(ifconfig "$2" | grep netmask | awk '{print $2}')
fi

if [ -z "$client_ipv4" ]; then
    echo "Usage: $0 {start|stop} <interface>"
    exit 1
fi

echo "Tunneling IPv6 traffic through $2 (IP: $client_ipv4)"

daemon_name=6in4-tunnel

case "$1" in
start)
    echo "Starting $daemon_name"

    if ifconfig "$if_name" &> /dev/null; then
        echo "Interface $if_name already exists"
        exit 1
    fi

    ip tunnel add "$if_name" mode sit remote "$server_ipv4" local "$client_ipv4" ttl "$tunnel_ttl"
    ip link set "$if_name" up mtu "$link_mtu"
    ip addr add "$client_ipv6" dev "$if_name"
    ip route add ::/0 dev "$if_name"
    # Here is how you would add additional IPs, which should go on the eth0 interface:
    # ip addr add 2001:XXXX:XXXX:beef:beef:beef:beef:1/64 dev eth0
    # ip addr add 2001:XXXX:XXXX:beef:beef:beef:beef:2/64 dev eth0
    # ip addr add 2001:XXXX:XXXX:beef:beef:beef:beef:3/64 dev eth0
    ;;

stop)
    echo "Stopping $daemon_name"

    if ! ifconfig "$if_name" &> /dev/null; then
        echo "Interface $if_name does not exist"
        exit 1
    fi

    ip link set "$if_name" down
    ip tunnel del "$if_name"
    ;;

*)
    echo "Usage: $0 {start|stop} <interface>"
    ;;
esac
exit 0

The download link for this script can be found here: https://gist.github.com/4422119


December 31, 2012  2:11 PM

IPv6 and Linux

Eric Hansen

IPv6 isn't a new thing, nor is it really a wave of the future at this point. It's been in development since the late '90s, when the inherent flaw of IPv4 was finally considered important. Over the last few years it has made great strides in trying to take over from its older brother. But even to this day, being assigned an IPv6 address doesn't necessarily mean you can reach IPv6-accessible websites. A good way to test this is to try going to http://ipv6.google.com. If it works, you'll be welcomed by the standard Google search page. If it doesn't, you'll typically see a DNS lookup-related error. This is because IPv6 hosts are not accessible via IPv4.

Recently I took up the task of getting IPv6 to play nicely with my home network. I don't use anything fancy, and ultimately this cost me a grand total of $0.00. But there are endless possibilities once you do this.

First, I'm going to assume you have Linux installed and you know how to use your distro's repos, since we will be installing some software. I'm using 64-bit Arch Linux, but I'm 99% certain the steps are similar for other distros; you just might need some tweaks here and there.

Now, what you need to do is install iproute2. If your system allows you to run this command: “ip addr” (without quotes) then you already have it installed. Wikipedia has a good enough article about iproute2 here: http://en.wikipedia.org/wiki/Iproute2

Once you have that installed, you will need to register for an IPv6 address. You'll probably have one locally, but this one is used externally. While there are a few "tunnel brokers" out there that offer this functionality, I use Hurricane Electric's http://tunnelbroker.net/. I rent a VPS from them too, so I already knew their network was decent. Registration is simple: just click the "Register" button, fill out the information and submit it.

From here, you need to add a tunnel. This threw me off at first, so I'm going to assume your machine is acting as a router here, with a private IP address and everything. If not, the steps still apply.

Once you're logged in, go to the "User Functions" section on the left side and click on "Regular Tunnel". From there, fill out the info. It's all pretty self-explanatory, but what threw me off at first was the "IPv4 Endpoint": this is actually your network's public-facing IP. So if your ISP gave you an IP of 1.2.3.4, that's what you put there. It'll also suggest the closest data center to route you through, based on your IP's geographical location.

Once your tunnel is created, you'll see a tunnel info page. What you need to do is click on the "Example Configurations" tab and choose your method (i.e.: Linux-route2). It'll give you a list of commands to put into your shell (as root or via sudo). Once you do that, run ping6 -c 1 ipv6.google.com and you should be able to ping Google.
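For reference, the Linux-route2 commands it generates look roughly like the sketch below. The addresses here are documentation placeholders, not real endpoints; use the values from your own tunnel details page:

```shell
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 192.0.2.10 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1f0a:1234::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
ip -f inet6 addr    # verify the tunnel address is now present
```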

Note that while this allows you to have IPv6 availability, you can still browse IPv4 websites just fine.


December 14, 2012  12:06 PM

Snort on Low-End Servers

Eric Hansen

Recently I've been looking more in-depth into Snort.  I've had it in use for my business for a little while now, but I wanted to see how far down the spec chain I could get it to run.  I already had it running fine in a virtual machine environment with 4GB of RAM, so I worked from that machine.  While this proved to be quite interesting (who wouldn't love to run a Snort sensor on a Raspberry Pi?), it also proved to be a little stressful.

The documentation on how to get Snort to run on low-spec machines was for the most part out of date.  Most of it says to add 'low-mem' to the detection configuration.  With Snort 2.9.3, I found this isn't a working solution, as I kept receiving an error that 'low-mem' is not a valid option.  Another suggestion was to change the search-method option to something like 'lowmem-nq', which, in conjunction with what I eventually found to be the answer, can help, but you still have to dig a little deeper.

What I found I actually had to tweak were the 'max_*' settings for the stream5_global preprocessor.  When I tried to run Snort, I would always receive an error saying that the flowbits could not be allocated in stream5_global.c, and I dug for about an hour trying to figure out what was actually going on.  Since then I have also learned there's a 'config flowbits_size' option (commented out by default), but I did not want to mess with that, as I'm not sure what it would do.

Instead, here’s what my preprocessor looks like on a 4GB virtual machine:

preprocessor stream5_global: track_tcp yes, track_udp yes, track_icmp no, max_tcp 262144, max_udp 131072, max_active_responses 2, min_response_seconds 5

Not having the preprocessor track packets you're not interested in (i.e.: no ICMP if you don't care about it) will reduce memory usage as well, but what you really have to focus on are the max_* settings.  These tell the preprocessor how many sessions it can keep track of at a given time for each protocol, which in turn determines how much memory it allocates to handle that workload.  I disabled ICMP and UDP tracking, as my server only permits TCP anyway, and reduced max_tcp to a very small value of 1024 to see if it would run.  Lo and behold, it ran without issues and I can monitor traffic just fine!

max_tcp has to be within the range of 1 to 1048576 (max_tcp, max_udp and max_icmp have different ranges).  If you set the value higher than what your VM can handle, you'll receive an error similar to:

ERROR: snort_stream5_tcp.c(949) Could not initialize tcp session memory pool.
Fatal Error, Quitting..

As TCP likes to have everything in multiples of 32, I'm a fan of sticking with such multiples, but anything within the range mentioned earlier will be fine.  If you have 256MB of RAM, I've found that the highest setting that works (for me, at least) is 9999 for max_tcp, which for a small network should be just fine, if not overly abundant.
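Putting the pieces together, a snort.conf fragment for a 256MB box might look like the sketch below. The values are just the ones that worked for me here; treat them as starting points rather than gospel:

```
# Use the low-memory pattern matcher
config detection: search-method lowmem-nq

# Track TCP sessions only, with a small cap on concurrent sessions
preprocessor stream5_global: track_tcp yes, track_udp no, track_icmp no, max_tcp 9999
```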

Also, please note that limiting the rules, preprocessors, etc. that are running will reduce the memory footprint as well, so this is by no means the definitive way to get it running, but it's definitely a step in the right direction.

