I.T. Security and Linux Administration


November 30, 2012  8:27 PM

To Release, or Not Release Full Disclosures?

Eric Hansen

Wired posted an interesting article this month discussing the benefits and drawbacks of hackers releasing exploits into the wild versus to vendors.  Some of the points I agree with, but some I do not.

I do feel that exploits should be released to the vendors before public disclosure.  Back in my heyday of finding exploits, I had a set of rules:

  1. Find exploit
  2. Send any/all contacts for vendor an e-mail outlining the exploit
  3. Wait 7 days
  4. If no response, release the exploit as live; otherwise, publish it as is.  Either way, the disclosure would be labelled as vendor-notified

This was simple: in the e-mail, I would provide the software name and version, the OS and any other system specifics if needed, what the exploit is and does, and how to patch it.  I would also include a note saying that if no response was received within 7 days, the exploit would be released to the world.

My view was that it was then up to the vendor to either fix it or not.  None of the exploits I found were extensive (e.g., sifting through VirtualBox's code to find a memory leak that happens when some action occurs).  It was mostly beginner stuff, such as local/remote file inclusion and cross-site scripting.  Some vendors responded back; most didn't.  Of those who did, I built a long-lasting relationship with one, helping him fix exploits.

I do not, however, condone releasing such information to the public without properly informing the vendor first (unless, of course, they cannot be reached).  I never classified myself as any type of hat, but if I had to, it'd be grey.  I didn't find exploits to ruin people's lives; I found them because I love security.  I wanted to reach out to those who needed help, and do my best.  Withholding valuable information such as exploits for personal gain of any sort is far from beneficial to anyone, even yourself.  For every exploit you can find, there's someone out there who can find more, and they may give away your exploit before you have the chance.

November 30, 2012  8:03 PM

Security Precaution In Programming: Validate User Input

Eric Hansen

When most people think of validating user input, the first thing that comes to mind is making sure a string is a string, numbers are numbers and dates are proper.  But does it stop there?  Let's let Facebook decide.

It seems there's a new exploit available for their chat system, and it's not something most people would ever trigger, given how extreme the scenario is.  All you need to do is send an extremely long message via chat to Facebook's servers, which will then crash the end user's session (and yours).  This has further repercussions for Facebook apps that keep chat sessions alive (e.g., tablet Facebook apps): the Messenger app will constantly try to load the too-long message and crash, so the user can no longer use Facebook chat on their tablet at all.  This was posted on seclists.org by Chris Russo (http://seclists.org/fulldisclosure/2012/Nov/46).

While this is a specific use case, and the average user would never reach the limits needed to cause this issue, it also shows that proper data validation is far from universally implemented, even at big-name corporations.  If it's as simple as sending a "malformed" request to Facebook's chat service, how easy would it be to do the same with GTalk, IRC, etc.?
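The server-side fix is conceptually simple: length limits are part of input validation, just like type checks.  As a minimal sketch (the limit and function names here are my own illustrative choices, not Facebook's API):

import json

MAX_MESSAGE_LENGTH = 4096  # an assumed cap; pick one that fits your protocol

def handle_chat_message(raw_payload):
    # reject oversized input outright, before parsing or storing anything
    if len(raw_payload) > MAX_MESSAGE_LENGTH:
        raise ValueError("message exceeds %d characters" % MAX_MESSAGE_LENGTH)
    message = json.loads(raw_payload)
    # ...the usual type/format validation continues from here...
    return message

The point is that "is it a string?" checks and "how big is it?" checks belong together; skipping the latter is what turns a chat message into a denial of service.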


November 30, 2012  6:19 PM

Proper Firewall Management: Part 1 – Introduction To fail2ban

Eric Hansen

As a short series, I will be showcasing some firewall tips and tricks on what to do (and not do) if you want to secure your network.  The first is an overview of a very helpful log analyzer, fail2ban.  There are other programs out there, such as logwatch, that monitor logs and ensure nothing 'illegal' is occurring.  However, fail2ban is the best-known one that will also act on its findings.  To me, it is the IDS of logs.

fail2ban works based on configuration files: one specifies which program (e.g., http, pop3) it's parsing logs for, and another specifies the rules that match restricted content.  This makes fail2ban ideal against those looking to use your mail server for relaying or your SSH server for proxying, or to flood your server with malformed HTTP requests.  Essentially, all you do is throw in the rule(s) you want matched, and fail2ban will match the regular expressions against data in the logs.  If anything is found, it will then add the offending IP to iptables for a given period of time.
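To make the mechanism concrete, here is a toy Python sketch of roughly what fail2ban does under the hood (this is not fail2ban's own code; the log path, regex and threshold are illustrative assumptions):

import re
import subprocess
from collections import Counter

# a failregex-style pattern for SSH login failures
FAILREGEX = re.compile(r'Failed password .* from (?P<ip>\d{1,3}(?:\.\d{1,3}){3})')
failures = Counter()

with open('/var/log/auth.log') as log:
    for line in log:
        match = FAILREGEX.search(line)
        if match:
            failures[match.group('ip')] += 1

for ip, count in failures.items():
    if count >= 5:  # a maxretry-style threshold
        # fail2ban inserts a block rule much like this, then removes it later
        subprocess.call(['iptables', '-I', 'INPUT', '-s', ip, '-j', 'DROP'])

In the real tool, the regex lives in a filter file, while the log path, retry threshold (maxretry) and ban duration (bantime) live in the jail configuration.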

fail2ban is also useful for overseeing the network by handling Snort logs, automatically restricting offending IPs without you having to parse through each Snort log yourself.


November 30, 2012  5:52 PM

Proper Handling of Phishing

Eric Hansen

SANS recently put up an article on handling phishing attacks within the network: https://isc.sans.edu/diary.html?storyid=14578

While most of the points are sensible, and should be what everyone follows, there is one that I actually disagree with: blocking the URL.

Most of the URLs provided in phishing e-mails are garbled text that no one actually looks at when the e-mail itself seems promising and legitimate.  Providers also tend to shut these websites down quickly for one reason or another.  This makes the effort of filtering URLs, blocking them and then unblocking them (so as not to clog up the firewall/DNS lookups) more of a hassle than anything else.

Beyond security awareness training that educates people not to click on unknown links, there is very little anyone can do.  What sysadmins should focus on, besides that training, is proper ACLs.  As a good example, lock machines down so downloads land on a specific central server (e.g., mount a remote directory onto each machine), feed each file through an AV scanner, and only move it to the appropriate directory if it is detected as clean.  Using something like Fabric, this is far from difficult to accomplish.
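As a minimal sketch of that idea (assuming Fabric 1.x and ClamAV are installed; the hostname and directory paths are hypothetical):

from fabric.api import env, run, settings

env.hosts = ['fileserver.example.com']  # the central download server

def scan_and_release():
    # scan every quarantined file; release the clean ones, delete the rest
    files = run('find /srv/quarantine -type f').splitlines()
    for path in files:
        with settings(warn_only=True):
            # clamscan exits 0 when a file is clean, 1 when it is infected
            result = run('clamscan --no-summary "%s"' % path)
        if result.return_code == 0:
            run('mv "%s" /srv/clean/' % path)
        else:
            run('rm -f "%s"' % path)

Dropped into a fabfile.py and run with fab scan_and_release (or from cron), the users' mounted directory only ever sees files that came back clean.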

Sysadmins have a lot on their plate day to day as it is; constantly adding and removing websites from the firewall and DNS zones should not be part of it.


November 28, 2012  5:30 PM

The Flaws in New Designs

Eric Hansen

http://news.yahoo.com/windows-8-terrible-says-usability-expert-jakob-nielsen-174300612.html

While I normally don't nod my head with excitement at what 'experts' say, from personal experience using Windows 8, some of the points are valid.

On the PC, navigating through it is pretty horrid.  Before, shutting down was as simple as going to the start menu.  Now you have to jump through a billion (more realistically, about 5-10) hoops to shut down properly.  There's also no start menu, and that will definitely confuse end users who, for decades, have been accustomed to going to the start menu for everything from a calculator to starting up a new game.

Then there are the 'widgets' (I'm not sure what the technical term is in Windows 8).  This is my biggest complaint.  I get that Microsoft is aiming to offer 'cross-platform compilation', meaning things that run on Windows 8 phones will also work on tablets and desktops, but why should I have to pass through two welcome screens just to get to my desktop?  I feel like I'm working my eyes and arms just to pop up a game of solitaire.

As the author says, it seems Windows 9 will have to be the savior of Windows 8.


November 28, 2012  5:09 PM

Be More Productive, Use Less Facebook

Eric Hansen

There’s a nifty extension to Chrome called Facebook Nanny: https://chrome.google.com/webstore/detail/facebook-nanny/gkpjofmdbabecniidggbbicfbcmfafmk

This is a nice little plugin: unless you have notifications from Facebook, a message shows up instead, disabling use of Facebook.  If you find yourself going on Facebook more than going on with your productive day, you'll find great use for this.  Now, if there were a way to tweak it so it displayed the message no matter what, you could have some fun with people at your work.

This isn't something people should really depend on working, unless it uses the Facebook API to check for notifications (which I doubt).  As evidenced by the Social Fixer plugin, Facebook likes to break its design on purpose so that things like Social Fixer no longer work.  But, regardless, this can be quite useful for getting back into the swing of things and pumping out more work before your next deadline.


November 28, 2012  3:56 PM

The operating system of Call of Duty is….

Eric Hansen

…looking like it’s going to be Windows, according to Slashdot.

For those who aren't familiar with Call of Duty and its release cycle: a new game is put out every year, around the same time (mid-to-late November).  This has been the case for about 5 or so years now.  As such, it has gotten a lot of flak in the gaming community for being a rehash of previous years' titles.

How is gaming relevant to Windows, and why am I posting this in a primarily Linux-focused blog?  Because I see this becoming the trend for more than just Windows and a handful of Linux distros, and I feel it is one of the worst possible mistakes that can be made in operating system life cycle and development.

Some Linux distros are well suited to it.  They forewarn users ahead of time and make it known, in many different places, that things can break far too easily.  Let's think about this for a minute.

Windows has always been known as the "noob operating system": those who don't want to venture into the realm of real operating system usage go with Windows.  Yet Windows has also somewhat solidified its place in I.T. as a safe and secure operating system, and there are plenty of reasons why it's really not a bad operating system in general.  That slow pace has made Windows a pretty solid operating system, considering its inherent fault of being closed source.

If Microsoft releases a new Windows operating system yearly, however, that will severely reduce the effort it can put into solidifying each operating system before the next one is put out.  This is an issue a lot of rolling-release systems suffer from: by the time a major issue/exploit is found, they're already dedicating too many man-hours to the next release to be able to go back and fix the current product.


November 28, 2012  3:08 PM

IPv6 Transitioning

Eric Hansen

An interesting article was posted on Slashdot.org: http://tech.slashdot.org/story/12/11/28/1355225/ipv6-deployment-picking-up-speed

It discusses how the transition from IPv4 to IPv6 has been extremely slow, along with some other common statistics.  Overall, the information in it is hardly newsworthy, but the fact that IPv6 has yet to really become mainstream is disheartening in itself.

IPv6 has been out, according to the article, for 15 years now.  In those 15 years, IPv4 finally ran out of address blocks, in early 2011.  Here we are, nearly two years later, and IPv6 is still on the back burner.  Even most VPS providers are not offering many (if any) IPv6 addresses.  This creates a migration problem because, as I see it, it will be too late to do anything by the time we actually need to hand out IPv6 addresses.  It will take businesses a good year, if not more, to be fully rid of IPv4 addressing (thinking of medium-sized businesses).  There are the issues of not knowing which software will work fine, which won't work at all, which will need tweaks, etc., and then the internal migration of moving servers and services over to different addresses.

I know it's typically not that big of a deal when it comes to services, as they tend to bind to all addresses or to a specific IP, but if one of those pieces of software does not play nice with IPv6, your entire infrastructure can be torn down until a replacement or fix can be made.
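Checking this ahead of time is cheap.  As a minimal sketch (the port is arbitrary, and the exact behavior of IPV6_V6ONLY varies by platform, so treat this as illustrative rather than a drop-in fix):

import socket

# an AF_INET6 socket with IPV6_V6ONLY disabled accepts native IPv6
# connections as well as IPv4-mapped ones on most platforms
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(('::', 8080))  # '::' is the IPv6 wildcard address
sock.listen(5)
print('listening on [::]:8080, IPv4-mapped clients accepted too')

If a service hard-codes AF_INET in its own source, no amount of network-side migration will save it, and that is exactly the kind of tweak you want to discover a year before you need it.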

Now is the time for businesses to actually put a foot forward in migrating to IPv6.  This isn't a scenario that sprang up at random, either; it's been known from day one that IPv4 exhaustion would happen, it was just a matter of time.


November 24, 2012  11:20 AM

HSTS: HTTP Strict Transport Security

Eric Hansen

There's a new RFC that was published this month (http://tools.ietf.org/html/rfc6797) about an additional enforcement layer for HTTPS web browsing, called HSTS (HTTP Strict Transport Security).  The basic idea behind it is that the server tells the browser that only HTTPS is allowed, or where to find the secure version of the website, while browsers that don't support the feature simply browse the insecure version.
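Mechanically, the whole thing boils down to one response header plus a redirect for plain-HTTP requests.  A minimal sketch in Tornado (the handler name and max-age value are my own illustrative choices, not taken from the RFC):

import tornado.web

class SecureHandler(tornado.web.RequestHandler):
    def get(self):
        if self.request.protocol != 'https':
            # send insecure visitors to the HTTPS version of the same URI
            self.redirect('https://' + self.request.host + self.request.uri,
                          permanent=True)
            return
        # tell supporting browsers to use HTTPS only for the next year
        self.set_header('Strict-Transport-Security',
                        'max-age=31536000; includeSubDomains')
        self.write('served over HTTPS only')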

Now, there's really no difference I can find between this and simply using a rewrite rule in your favorite web server to force HTTPS, but it's still an interesting take.  It seems like an additional handshake for a feature that does nothing more than force security on a user.  The use cases (http://tools.ietf.org/html/rfc6797#section-2.1) further exemplify this fact.

Their threat model handles passive and active network attackers, as well as imperfect web developers.  However, it does not fix phishing or malware issues.  One has to wonder, then, what the point of HSTS is.  It basically does everything HTTPS does, but with the policy perhaps delivered over HTTP (which would nullify the security completely and leave a broken chain…).

I know RFCs are not intended to be super-awesome de-facto standards, and some are even jokes (the coffee pot protocol comes to mind), but this is just like saying "hey, I wrote a web server by compiling Apache's code!"  I'm just not following it, and while it has some interesting points of use (using the UA string and HTTP response headers), I'm not sold on this as a viable security solution.  All it sounds like, especially from the last threat-model use case, is a lazy man's way of forcing HTTPS on users.


November 24, 2012  11:07 AM

Starting with Tornado in Python: Setting Up Your Server

Eric Hansen

There's a good collection of Python modules that let you run a server from Python itself (think SimpleHTTPServer).  One that is commonly used behind Nginx proxies for handling API requests, however, is Tornado (http://www.tornadoweb.org).  Since creating my backup service, I have chosen Tornado as the backend to my API to provide an efficient and secure service that lets users write their own clients.

Installing Tornado
To install Tornado, all you have to do is run this with pip:

pip install tornado

If you don't have pip installed, you can instead install Tornado from its source tarball:

python setup.py install

This will install Tornado into your Python packages, so you can now simply import it into your scripts.  Now to cover some simple usage.

Initializing Tornado
The main thing we are going to focus on right now is setting up a simple 'web' server.  This will allow Tornado to listen for connections and handle them appropriately.  To start, import Tornado's HTTP web server code:

import tornado.httpserver

We also need a reference to the I/O handlers, so we import ioloop, along with tornado.web, which provides the application and request handler classes used below:

import tornado.ioloop
import tornado.web

To make this easy on us, we will define a main() function:

def main(port=8888):

The magic inside is what makes this work.

Let's say we wanted to serve content from our server at specific URIs only.  We'll make these URIs /get/sysinfo and /get/cpuinfo.  Our main() method will look like this:

def main(port=8888):
    ioloop = tornado.ioloop.IOLoop.instance()
    # route both URIs to InfoHandler; the capture group becomes its argument
    application = tornado.web.Application([
        (r"/get/(sysinfo|cpuinfo)", InfoHandler),
    ])
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(port)
    try:
        ioloop.start()
    except KeyboardInterrupt:
        pass

tornado.ioloop.IOLoop.instance() creates an instance of our I/O loop so we can send and receive network data.  application is our tornado.web.Application instance; in it we tell Tornado to listen for /get/sysinfo and /get/cpuinfo requests and forward them to our InfoHandler class, which I will show in a bit.  We then have our web server listen on the specified port and try to start the loop.  The exception handler is in place in case you are running this manually, so that no Traceback information is displayed when you press Ctrl+C.

Now that we have Tornado checking for requests, we need to be able to handle them.  This is where our InfoHandler class comes in.

Handling Requests

class InfoHandler(tornado.web.RequestHandler):
    def get(self, call):
        try:
            # look up the requested call (e.g. 'sysinfo') and run its function
            resp = valid_calls[call]()
            self.write(resp)
            self.finish()
        except KeyError:
            # log error message
            pass
We need to subclass the RequestHandler class so we can read and write data on the stream.  To write data back to the user, use self.write(); to get data from the user, you simply call self.get_argument().  So if someone sent the URI /get/cpuinfo?core=1, you would do core = self.get_argument('core').
What is valid_calls?  It is a dictionary of key/value pairs where the key is the request being made (e.g., 'sysinfo' or 'cpuinfo') and the value is a reference to the function that handles it.  For example:
def SysInfo():
    return "sysinfo printed from here"

valid_calls = {'sysinfo': SysInfo}

To get this up and running, we then just do a simple name check and call main():

if __name__ == "__main__":
    import sys
    try:
        # accept an optional port number on the command line
        main(int(sys.argv[1]))
    except (IndexError, ValueError):
        main()
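
Assuming you save all of the above as a single script (the filename is up to you), running it with python yourscript.py 8888 and visiting http://localhost:8888/get/sysinfo should return "sysinfo printed from here".  A request to /get/cpuinfo will hit the KeyError branch and return an empty response until you write a CpuInfo function and add it to valid_calls.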

