I.T. Security and Linux Administration


December 31, 2012  2:24 PM

IPv6 Startup Script

Eric Hansen

After writing this article: http://itknowledgeexchange.techtarget.com/security-admin/ipv6-and-linux/, I decided to write a simple script that lets you start and stop the IPv6 tunnel pretty easily. The script is well commented and easy to use:

#!/bin/bash

# This script is modified from the one found here: https://wiki.archlinux.org/index.php/IPv6_-_Tunnel_Broker_Setup
# Noticeably, this script now runs like this:
# $0 {start|stop} <interface>
#
# So if you want IPv6 traffic to be routed through wlan0, you would do:
# $0 start wlan0

if [ "$EUID" -ne 0 ]; then
echo "You must run this as root"
exit 1
fi

# Edit these
if_name=he6in4
server_ipv4='' # HE Server Endpoint IP
client_ipv6='' # Your HE-assigned client IP

# Don't edit below this line
client_ipv4=''
link_mtu=1480
tunnel_ttl=255

if [ -n "$2" ]; then
client_ipv4=$(ifconfig eth0 | grep netmask | awk '{print $2}')
fi

if [ -z "$client_ipv4" ]; then
echo "Usage: $0 "
exit 1
fi

echo "Tunneling data from IPv6 to $2 (IP: $client_ipv4)"

daemon_name=6in4-tunnel

# . /etc/rc.conf
# . /etc/rc.d/functions

# If the Arch rc.d functions above are not sourced, fall back to simple
# stand-ins so the script still runs on other distros.
if ! type stat_busy &>/dev/null; then
    stat_busy()  { echo -n ":: $1 "; }
    stat_done()  { echo "[DONE]"; }
    stat_fail()  { echo "[FAIL]"; }
    add_daemon() { :; }
    rm_daemon()  { :; }
fi

case "$1" in
start)
# stat_busy "Starting $daemon_name daemon"

ifconfig $if_name &>/dev/null
if [ $? -eq 0 ]; then
stat_busy "Interface $if_name already exists"
stat_fail
exit 1
fi

ip tunnel add $if_name mode sit remote $server_ipv4 local $client_ipv4 ttl $tunnel_ttl
ip link set $if_name up mtu $link_mtu
ip addr add $client_ipv6 dev $if_name
ip route add ::/0 dev $if_name
# Here is how you would add additional ips....which should be on the eth0 interface
# ip addr add 2001:XXXX:XXXX:beef:beef:beef:beef:1/64 dev eth0
# ip addr add 2001:XXXX:XXXX:beef:beef:beef:beef:2/64 dev eth0
# ip addr add 2001:XXXX:XXXX:beef:beef:beef:beef:3/64 dev eth0

add_daemon $daemon_name
stat_done
;;

stop)
stat_busy "Stopping $daemon_name daemon"

ifconfig $if_name &>/dev/null
if [ $? -ne 0 ]; then
stat_busy "Interface $if_name does not exist"
stat_fail
exit 1
fi

ip link set $if_name down
ip tunnel del $if_name

rm_daemon $daemon_name
stat_done
;;

*)
echo "usage: $0 {start|stop}"
esac
exit 0
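To use it, save the script somewhere (I'll call it ipv6-tunnel.sh here just for illustration), make it executable, and pass it the action plus the interface to route through:

chmod +x ipv6-tunnel.sh
sudo ./ipv6-tunnel.sh start wlan0
# ...and later, to tear the tunnel back down:
sudo ./ipv6-tunnel.sh stop wlan0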

The download link for this script can be found here: https://gist.github.com/4422119

December 31, 2012  2:11 PM

IPv6 and Linux

Eric Hansen

IPv6 isn't a new thing, nor is it really a wave of the future at this point. It's been in development since the late '90s, when the inherent flaw of IPv4 was finally considered important. However, over the last few years it has made great strides in trying to take over from its older brother. But even to this day, being assigned an IPv6 address doesn't necessarily mean you can reach IPv6-accessible websites. A good way to test this is by trying to go to http://ipv6.google.com. If it works, you'll be welcomed by the standard Google search page. If it doesn't, you'll typically see a DNS lookup-related error. This is because an IPv6-only site is not accessible over IPv4.

Recently I took up the task of trying to get IPv6 to play nicely with my home network. I don't use anything fancy, and ultimately this cost me a grand total of $0.00. But there are endless possibilities once you do this.

First, I'm going to assume you have Linux installed and you know how to use your distro's repos, since we will be installing some software. I'm using Arch Linux 64-bit, but I'm 99% certain the steps are similar for other distros as well; they might just need some tweaks here and there.

Now, what you need to do is install iproute2. If your system lets you run the command "ip addr" (without quotes), then you already have it installed. Wikipedia has a good enough article about iproute2 here: http://en.wikipedia.org/wiki/Iproute2

Once you have that installed, you will need to register for an IPv6 address. You'll probably have one locally, but this one is used externally. While there are a few "tunnel brokers" out there that offer this functionality, I use Hurricane Electric's http://tunnelbroker.net/. I have a VPS rented from them too, so I already knew their network was decent. Registering is simple: just click on the "Register" button, fill out the information and submit it.

Now, from here you need to add a tunnel. This threw me off at first, so I'm going to assume you're behind a router here, with a private IP address and everything. If not, the steps still apply.

Once you're logged in, go to the "User Functions" section on the left side and click on "Regular Tunnel". From there, fill out the info. It's all pretty self-explanatory, but what threw me off at first was the "IPv4 Endpoint". This is actually your network's public-facing IP, so if your ISP gave you an IP of 1.2.3.4, you would put 1.2.3.4 in there. The page will also suggest the closest datacenter to route you through based on your IP's geographic location.

Once your tunnel is created, you'll be faced with a tunnel info page. What you need to do is click on the "Example Configurations" tab and choose your method (i.e.: Linux-route2). It'll give you a list of commands to put into your shell (as root or via sudo). Once you do that, run ping6 -c 1 ipv6.google.com and you should be able to ping Google.
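For reference, the commands HE generates for the Linux-route2 option look roughly like this; the placeholders stand in for the server IPv4 address, your public IPv4 address and your client IPv6 address from the tunnel details page:

ip tunnel add he-ipv6 mode sit remote $SERVER_IPV4 local $CLIENT_IPV4 ttl 255
ip link set he-ipv6 up
ip addr add $CLIENT_IPV6/64 dev he-ipv6
ip route add ::/0 dev he-ipv6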

Note that while this allows you to have IPv6 availability, you can still browse IPv4 websites just fine.


December 14, 2012  12:06 PM

Snort on Low-End Servers

Eric Hansen

Recently I've been looking more in-depth into Snort.  I've had it in use for my business for a little while now, but I wanted to see how far down the spec chain I could get it to run.  I already had it running fine in a virtual machine with 4GB of RAM, so I worked from that machine.  While this proved to be quite interesting (who wouldn't love to run a Snort sensor on a Raspberry Pi?), it also proved to be a little stressful.

The documentation on how to get Snort to run on low-spec'ed machines was, for the most part, out of date.  Most of it says to add 'low-mem' to the detection configuration.  With Snort 2.9.3, I found this wasn't a working solution, as I kept receiving an error that 'low-mem' is not a valid option.  Another thing I was told to try was changing the search-method option to something like 'lowmem-nq', which, in conjunction with what I found to be the actual answer, can help, but you still have to dig a little deeper.
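For reference, that tweak goes in snort.conf's detection settings; as a minimal sketch (your snort.conf layout may differ):

config detection: search-method lowmem-nq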

What I found I actually had to tweak are the 'max_*' settings for the stream5_global preprocessor.  When I was trying to run Snort, I would always receive an error saying that the flowbits could not be allocated in stream5_global.c, and I dug for about an hour trying to figure out what was actually going on.  Since then I have also learned there's a 'config flowbits_size' option (commented out by default), but I did not want to mess with that as I'm not sure what it would do.

Instead, here’s what my preprocessor looks like on a 4GB virtual machine:

preprocessor stream5_global: track_tcp yes, track_udp yes, track_icmp no, max_tcp 262144, max_udp 131072, max_active_responses 2, min_response_seconds 5

Not having the preprocessor track packets you're not interested in (i.e.: no ICMP if you don't care about it) will reduce memory usage as well, but what you actually have to focus on are the max_* settings.  These tell the preprocessor how many sessions it can keep track of at a given time for each protocol, which in short also leads it to allocate enough memory to handle such a workload.  I disabled tracking ICMP and UDP, as my server only permits TCP anyway, and reduced max_tcp to a very small value of 1024 to see if it would run.  Lo and behold, it ran without issues and I can monitor traffic just fine!

max_tcp has to be within the range of 1 to 1048576 (max_tcp, max_udp and max_icmp have different ranges).  If you set the value higher than what your VM can handle, you'll receive an error similar to:

ERROR: snort_stream5_tcp.c(949) Could not initialize tcp session memory pool.
Fatal Error, Quitting..

As TCP likes to have everything in multiples of 32, I'm a fan of sticking with such multiples, but anything within the range mentioned earlier will be fine.  If you have 256MB of RAM, I've found that the highest setting that works (for me at least) is 9999 for max_tcp, which for a small network should be just fine, if not overly abundant.
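Put together, the low-memory version of the stream5_global line above would look something like this (a sketch based on the settings I described, not a tuned recommendation):

preprocessor stream5_global: track_tcp yes, track_udp no, track_icmp no, max_tcp 9999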

Also, please note that limiting the rules, preprocessors, etc. that are running will reduce the memory footprint as well, so this is by no means the de-facto standard of how to get it to run, but it's definitely a step in the right direction.


November 30, 2012  8:27 PM

To Release, or Not Release Full Disclosures?

Eric Hansen

Wired posted an interesting article this month discussing the benefits and drawbacks of hackers releasing exploits out into the wild versus to vendors.  Some of the points I agree with, but some I do not.

I do feel that exploits should be released to vendors before disclosure.  Back in my heyday of finding exploits, I had a set of rules:

  1. Find exploit
  2. Send any/all contacts for vendor an e-mail outlining the exploit
  3. Wait 7 days
  4. If no response, release the exploit as live; otherwise publish it as is.  Either way it would be labelled as vendor-notified

This was simple: in the e-mail, I would provide the software name and version, the OS if needed along with any other system specifics, what the exploit is and does, and how to patch it.  I would also include a note saying that if no response was received within 7 days, the exploit would be released to the world.

My view was that it was up to the vendor at that point to either fix it or not.  None of the exploits I found were extensive (i.e.: sifting through the code of VirtualBox to find a memory leak that happens when some action occurs).  It was mostly beginner stuff, such as local/remote file inclusion and cross-site scripting.  Some vendors responded back; most didn't.  Of those who did, I had a long-lasting relationship with one, fixing exploits for him.

I do not, however, condone releasing such information to the public without properly informing the vendor first (unless of course they cannot be reached).  I never classified myself as any type of hat, but if I had to, it'd be grey.  I didn't find exploits to ruin people's lives; I found them because I love security.  I wanted to reach out to those who needed help, and do my best.  Withholding valuable information such as exploits for personal gain of any sort is far from beneficial to anyone, even yourself.  For every exploit you can find, there's someone out there who can find more, and they may give away your exploit before you have the chance.


November 30, 2012  8:03 PM

Security Precaution In Programming: Validate User Input

Eric Hansen

When most people think of validating user input, the first thing that comes to mind is making sure a string is a string, numbers are numbers and dates are proper.  But does it stop there?  Let's have Facebook decide.

It seems there's a new exploit available for their chat system, and it's not something most people would ever trigger, given the extreme nature of the scenario.  All you need to do is send an extremely long message via chat to Facebook's servers, which will then crash the end user's session (and yours).  This has further repercussions for Facebook apps that keep chat sessions alive (i.e.: tablet Facebook apps), as they will no longer be able to use Facebook chat on the tablet: the Messenger app keeps trying to load the too-long message and crashes.  This was posted on seclists.org by Chris Russo (http://seclists.org/fulldisclosure/2012/Nov/46).

While it does have a specific use case, and the average user would never reach the limits needed to cause this issue, it also shows that proper data validation is far from properly implemented, even at big-name corporations.  If it's as simple as sending a "malformed" request to Facebook's chat service, how easy would it be to do the same with GTalk, IRC, etc.?


November 30, 2012  6:19 PM

Proper Firewall Management: Part 1 – Introduction To fail2ban

Eric Hansen

As a short series, I will be showcasing some firewall tips and tricks on what to do (and not do) if you want to secure your network.  The first is going to be an overview of a very helpful log analyzer, fail2ban.  There are other programs out there, such as logwatch, that monitor logs and ensure nothing 'illegal' is occurring.  However, fail2ban is the best-known one that will also act on its findings.  To me, it is the IDS of logs.

fail2ban works off configuration files that specify which program's logs (i.e.: http, pop3) it's parsing, and then another file that specifies the rules that match restricted content.  This also makes fail2ban optimal for stopping those looking to use your mail server for relaying, your SSH daemon for proxying, or to flood your server with malformed HTTP requests.  Essentially all you do is throw in the rule(s) you want matched, and fail2ban will match the regular expression against data in the logs.  If anything is found, it will then add the offending IP to iptables for a given period of time.
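As a minimal sketch of what those two pieces look like (jail names, log paths and actions vary by distro and fail2ban version, so treat these as illustrative):

# /etc/fail2ban/jail.local -- ban an IP for 10 minutes after 5 failed SSH logins
[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600

# A filter file in /etc/fail2ban/filter.d/ is mostly just a failregex;
# <HOST> is fail2ban's token for the offending IP
[Definition]
failregex = authentication failure; .* rhost=<HOST>
ignoreregex =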

fail2ban is also useful for overseeing the network and handling Snort logs, automatically restricting offending IPs without you having to parse through each Snort log yourself.


November 30, 2012  5:52 PM

Proper Handling of Phishing

Eric Hansen

SANS recently put up an article involving handling phishing attacks within the network: https://isc.sans.edu/diary.html?storyid=14578

While most of the points are sensible, and should be what everyone follows, there is one that I actually disagree with: blocking the URL.

Most of the URLs provided in phishing e-mails are garbled text that no one actually looks at when the e-mail itself looks promising and legitimate.  Such sites also tend to get shut down by providers quickly for one reason or another.  This makes the effort of filtering URLs, blocking them and then unblocking them (so as not to clog up the firewall/DNS lookups) more of a hassle than anything else.

There is very little anyone can do beyond security awareness training to educate others not to click on unknown links.  What sysadmins should focus on, besides security awareness training, is proper ACLs.  As a good example, lock down machines so files are downloaded to a specific central server (i.e.: mount a remote directory onto each machine), feed each file through an AV or whatnot, and if everything is detected as clean, move it to the appropriate directory.  Using something like Fabric, this is far from difficult to accomplish.
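As a rough sketch of that pipeline (the paths here are assumptions, and clamscan stands in for whatever AV you run):

#!/bin/bash
# Scan everything landing in the central download share: quarantine
# anything flagged, then release the rest to the directory users mount.
INCOMING=/srv/downloads/incoming
QUARANTINE=/srv/downloads/quarantine
CLEAN=/srv/downloads/clean

clamscan -r --infected --move="$QUARANTINE" "$INCOMING"
# Whatever clamscan didn't move aside is considered clean
mv "$INCOMING"/* "$CLEAN"/ 2>/dev/null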

Sysadmins have enough on their plate with day-to-day tasks as it is; constantly adding and removing websites from the firewall and DNS zones should not be added to the list.


November 28, 2012  5:30 PM

The Flaws in New Designs

Eric Hansen

http://news.yahoo.com/windows-8-terrible-says-usability-expert-jakob-nielsen-174300612.html

While I normally don't nod my head with excitement at what 'experts' say, from my personal experience of previously using Windows 8, some of the points are valid.

On the PC, navigating through it is pretty horrid.  Before, shutting down was as simple as going to the start menu.  Now you have to jump through a billion (more realistically about 5-10) hoops to shut down properly.  There's also no start menu, and that will definitely confuse end users who, for decades, have been accustomed to going to the start menu for everything from a calculator to starting up a new game.

Then there's the 'widgets' (not sure what the technical term for them in Windows 8 is).  This is my biggest complaint.  I get that Microsoft is aiming to offer 'cross-platform compilation', meaning things that run on Windows 8 phones will also work on tablets and desktops, but why should I have to pass through two welcome screens just to get to my desktop?  I feel like I'm working my eyes and arms just to pop up a game of solitaire.

As the author says, it seems Windows 9 will have to be the savior of Windows 8.


November 28, 2012  5:09 PM

Be More Productive, Use Less Facebook

Eric Hansen

There’s a nifty extension to Chrome called Facebook Nanny: https://chrome.google.com/webstore/detail/facebook-nanny/gkpjofmdbabecniidggbbicfbcmfafmk

This is a nice little plugin in that, unless you have notifications from Facebook, a message will show up instead, disabling use of Facebook.  If you find yourself going on Facebook more than getting on with your productive day, you'll find great use for this.  Now, if there were a way to tweak it so it'd display the message no matter what, you could have some fun with people at your work.

This isn't something people should really depend on working, unless it uses the FB API to see if there are notifications (which I doubt).  As evidenced by the Social Fixer plugin, Facebook likes to break its design on purpose so things like Social Fixer no longer work.  But, regardless, this can be quite useful in trying to get back into the swing of things and get more work pumped out before your next deadline.


November 28, 2012  3:56 PM

The operating system of Call of Duty is….

Eric Hansen

…looking like it’s going to be Windows, according to Slashdot.

For those who aren't familiar with Call of Duty and its release cycle, a new game is put out every year, around the same time (mid-to-late November).  This has been the case for about 5 or so years now.  As such, it has gotten a lot of flak in the gaming community for being a rehash of previous years' titles.

How is this, gaming, relevant to Windows?  Why am I posting this in a primarily Linux-focused blog?  Because I see this becoming the trend for more than just Windows and a handful of Linux distros.  I also feel this is one of the worst possible mistakes that can be made in terms of operating system life cycle and development.

Some Linux distros are well-suited for it.  They forewarn users ahead of time and also make it known in many different places that things can break way too easily.  Let's think about this for a minute.

Windows has always been known as the "noob operating system": those who don't want to venture into the realm of actual operating system usage go with Windows.  Thus, Windows has also sort of solidified its place in I.T. as the safe and secure operating system.  There are a lot of reasons why it's really not a bad operating system in general, and this has made Windows pretty solid considering the inherent faults of being closed source.

With Microsoft releasing a new Windows operating system yearly, however, this will severely reduce the amount of effort they can put into solidifying the current operating system before a new one is put out.  This is an issue a lot of rolling-release systems suffer from: by the time a major issue/exploit/etc. is found, too many man-hours are already dedicated to the next release to be able to go back and fix the current product.

