Hacking in America has gained a lot of attention this year already; February alone seems to be the month of hacking. There was an interesting article posted on Yahoo today (http://finance.yahoo.com/news/china-says-u-routinely-hacks-130252420.html) basically saying that China is trying to play the victim in these mind games.
I had a discussion with a friend earlier today about it, and here's one thing that, as an outsider, I don't understand: if China's "Great Firewall" exists, why is it allowing all of these hacking attempts in to begin with? Let's think about this for a moment.
Logically, most hacking attempts on the government, based on the article, are going to come in as inbound traffic. Now, there are two scenarios that can play out here: 1) the USA proxies attacks through other countries (possibly allies), and 2) the USA doesn't care and attacks with little or no proxying.
We'll make this a bit more difficult and go with #1, and for the sake of ease we'll assume the USA uses Tor. There is still no real way the attacks should be allowed through. Mind you, I'm thinking about this in a similar manner to having a firewall on my home network while Billie Joe down the street is trying to hack it. Proper configuration eliminates roughly 90-95% of the risks out there. One can then deduce that China's firewall is not properly configured.
Throw in the possibility that a 0-day attack is used. Most well-known software seems to be developed in America (Apache, IIS, MongoDB, MySQL, PostgreSQL, etc.), which would give America an advantage in that regard. It would also mean the risk now lies on the server where the software is installed. Again, proper configuration eliminates a large chunk of attacks. It is then possible to come to a logical conclusion: China, either knowingly or unknowingly, is leaving holes in its network.
They’re basically baiting other people to hack into them.
Next article, we’ll focus on the outbound traffic and see if we can come to any other conclusions.
Then: #16: Use A Centralized Authentication Service
A “centralized authentication service” is only useful if it’s not a single point of failure. It’s an issue I think most people overlook, too.
It’s nice having LDAP installed and being able to authenticate against it, but if you only have one instance of it running, what happens when the server goes down? No one can authenticate.
The safest bet would be to prioritize authentication: try the LDAP server (for example) first, then fall back to local system login if that doesn't work.
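One common way to get that fallback on a glibc system is through nsswitch.conf (a sketch; exact syntax and service names vary by distro): list LDAP first and local files second, so local accounts still work when the LDAP server is down.

```
# /etc/nsswitch.conf (sketch): try LDAP, fall back to local files
passwd: ldap files
group:  ldap files
shadow: ldap files
```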
Then: #17: Logging and Auditing
There are so many tools out there now to help with this. OSSEC is a good log and filesystem monitoring service, Nagios is the de-facto system monitoring and alerting tool, and auditing is becoming a huge field as well. This, in line with tighter security practices, will most likely be your most viable answer to staying secure.
Then: #18: Secure OpenSSH Server
The use of PAM, firewall configuration and protocol 2 will make this a piece of cake to accomplish.
Of course, adding in SSH keys is nice, too.
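As a sketch, a few sshd_config lines covering those points (the AllowUsers entry is a placeholder account name, not something from any particular setup):

```
# /etc/ssh/sshd_config (sketch)
Protocol 2
UsePAM yes
PermitRootLogin no
PasswordAuthentication no   # once SSH keys are distributed
AllowUsers admin            # placeholder account name
```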
Then: #19: Install And Use Intrusion Detection System
I love IDS and IPS solutions, but not every instance needs one.
If you have a firewall placed before the Intranet, you've already cut out a lot of the job. Continue that with proper auditing solutions and you have a pretty robust setup. Also, until something like OpenVAS for IDS/IPS comes along, I'm not sure how beneficial a free solution will be (it's not always easy talking your CFO into buying a license for something they don't understand).
Then: #20: Protecting Files, Directories and Email
This goes in line with all the auditing and securing above, but since I'm addressing each point, it gets its own mention.
Proper audit solutions and system management will make this a piece of cake.
Continuations are fun! Part 2 is here: https://itknowledgeexchange.techtarget.com/security-admin/how-have-security-practices-changed-2009-now-part-2/
Then: #11: Configure Iptables and TCPWrappers
Having a firewall properly configured will help both the network and the server be secure. You can perform better load balancing on the server as well as make sure requests going to/from the server are what it expects. This should be done once the server is set up properly, however, as it can cause major headaches if not.
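As a rough sketch of that kind of configuration (the subnet is a placeholder): a default-deny inbound policy with SSH allowed from the local network, plus the TCP Wrappers equivalent.

```
# iptables (sketch): default-deny inbound, allow loopback,
# established traffic, and SSH from the local subnet
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT

# TCP Wrappers for sshd, in /etc/hosts.allow:
#   sshd: 192.168.1.0/255.255.255.0
# and in /etc/hosts.deny:
#   ALL: ALL
```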
Then: #12: Linux Kernel /etc/sysctl.conf Hardening
I'm not an RPM-system person; I prefer deb-based distros, and I use Arch Linux at times. But I know that on my installs, /etc/sysctl.conf never exists.
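For reference, a few commonly suggested hardening entries, whichever file they end up in on your distro (a sketch, not a complete policy):

```
# /etc/sysctl.conf (sketch); apply with `sysctl -p`
net.ipv4.tcp_syncookies = 1               # SYN-flood protection
net.ipv4.icmp_echo_ignore_broadcasts = 1  # ignore broadcast pings
net.ipv4.conf.all.accept_redirects = 0    # drop ICMP redirects
net.ipv4.conf.all.rp_filter = 1           # reverse-path filtering
kernel.randomize_va_space = 2             # full ASLR
```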
Then: #13: Separate Disk Partitions
Unless there’s some dire reason not to, this is always a good idea. Being lazy isn’t dire, by the way.
This helps in a few ways. One, it makes backing up information easier (instead of backing up folders on the same partition, you can just back up the entire partition). And if you want to set up RAID for /home and /var but not /tmp, this is about the only way I know of to do it safely.
It also makes disk management easier. Need to resize /home without worrying about corrupting data on the / partition? This will let you do it!
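A sketch of what that looks like in /etc/fstab (device names are placeholders); separate partitions also let you apply restrictive mount options per filesystem:

```
# /etc/fstab (sketch)
/dev/sda2  /      ext4  defaults                0  1
/dev/sda3  /home  ext4  defaults,nodev          0  2
/dev/sda4  /var   ext4  defaults,nosuid         0  2
/dev/sda5  /tmp   ext4  defaults,noexec,nosuid  0  2
```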
Then: #14: Turn Off IPv6
As much as I enjoy tinkering with it, and as much as I hate to say it, IPv6 has no real benefits right now.
There have been reports that disabling IPv6 improves network performance by lowering the overhead on the networking drivers, but I'm not sure if that's still true. Whenever I did it, I noticed very little difference anyway.
IPv6 is really like 64-bit processors…unless you have a hardware requirement for it, it’s not going to benefit you any.
The transition to IPv6 is taking forever, and it's safe to say it has almost stalled. There's nothing natively supporting it that would make it beneficial, and tools like ping6 exist for testing purposes more than as a "this is why you should have IPv6!" argument.
Then: #15: Disable Unwanted SUID and SGID Binaries
I'm not knowledgeable enough about the SUID/SGID bits to make a judgement. However, I rarely seem to find an exploit that utilizes these.
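For anyone who wants to look for themselves, finding these binaries is a one-liner with find. A small demonstration in a scratch directory (so it's safe to run anywhere), followed by the system-wide scan as a comment:

```shell
# Set the SUID bit on a throwaway file, then find it the same way
# you would scan a real system.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod u+s "$tmp/demo"
found=$(find "$tmp" -perm -4000 -type f)
echo "$found"

# On a real system, scan everything (run as root for full coverage):
#   find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null
```

To strip the bit from a binary you decide you don't need, `chmod u-s` (or `g-s`) does the job.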
In continuing with my series (started here)…
Then: #6: User Accounts and Strong Password Policy
Do I agree? Definitely. However, the concept of a "strong password policy" has changed a lot. I've touched on it before, but I'll say it again: I do not believe in "randomized" passwords. You know, the passwords that look like you just mashed a bunch of keys together.
I had a debate with a friend not too long ago on this. With crackers (e.g., hashcat) that brute-force against raw bytes instead of ASCII character sets, even throwing in UTF-8 or other characters outside of the 0-127 decimal range still gets covered by the search. The easiest defense is to enforce the use of spaces and use phrases.
Brute-forcers check against random strings, and while phrases are just that, they're not typically going to have odd characters like @ or $. Dictionary attacks typically assume the passphrase only has one dictionary word in it, and even if not, the attacker has to run through the entire dictionary n times for however many words are in the passphrase.
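Some rough numbers to back this up, a sketch in Python (the word-list size of 100,000 is an assumption, not a measured attacker dictionary):

```python
# Compare the brute-force search space of a "mashed" 8-character
# password against a 4-word passphrase.
charset = 95              # printable ASCII characters
mashed = charset ** 8     # 8 random printable characters

wordlist = 100_000        # assumed attacker dictionary size
phrase = wordlist ** 4    # 4 words drawn from that list

print(f"random 8-char password: {mashed:.2e} guesses")
print(f"4-word passphrase:      {phrase:.2e} guesses")
```

Even under these generous assumptions, the four-word phrase is several orders of magnitude larger a search space, and it's the one you can actually remember.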
Then: #7: Disable root Login
If you're using root for everything, you're not sysadmining properly. There is a reason sudo was created, which means there's no reason for you to log in as root directly.
Then: #8: Physical Server Security
Let me explain. Yes, physical server security is extremely important. However, going as far as "all production boxes must be locked in IDCs (Internet Data Center) and all persons must pass some sort of security checks before accessing your server" is a bit of a stretch. Call me silly, but I don't really know what an IDC is (is it the same as a regular data center?), and having all of your employees go through a security background check before working on the server should be reserved for extreme cases (e.g., medical record storage).
Then: #9: Disable Unwanted Services
This really seems to be redundant with #2 & #3, but at this stage you shouldn’t have to worry about unwanted services. Especially if you make your own images and/or repos.
Then: #10: Delete X Windows
Even though I have BackTrack running on my home server, there's really no reason to have Xorg running. Again, this goes more in line with #2/#3/#9, but since I'm addressing each suggestion, it gets its own point.
Back in late 2009, an article was published by CyberCiti detailing 20+ tips on how to secure your Linux machine. How have things changed between then and now (especially since we're nearing Linux kernel 4.0)?
Then: #1: Encrypt Data Communication
Especially with the advent of more sophisticated tools at everyone’s disposal, ensuring data communication is encrypted should be a top priority. This includes using SSH instead of Telnet, SFTP over FTP, HTTPS over HTTP, etc…
These steps have also become easier. Instead of purchasing a new SSL cert for every Intranet site, just create your own. It saves $30+/year per certificate and is more manageable.
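Creating your own cert is a single openssl command. A sketch (the filenames and the CN value are placeholders, not from any real site):

```shell
# Self-signed cert for an internal site, valid for one year,
# with no passphrase on the key (-nodes).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout intranet.key -out intranet.crt \
    -days 365 -subj "/CN=intranet.example.local"
```

Point your web server at the resulting key and cert, and import the cert into your internal clients' trust stores so they stop warning about it.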
Then: #2: Minimize Software to Minimize Vulnerability
#3: One Network Service Per System or VM Instance
It's no secret that having 100 different programs listening on different ports makes you 100x more at risk than 1 program listening on 1 port (or even 1 program listening on multiple ports). This also helps with "go green" initiatives, as it requires less power to constantly run those applications.
Virtualization is a hot subject right now, and it has gained a lot of steam since around 2006 in my experience. Now, everywhere you look online there's more talk about new virtualization, cloud solutions, etc.
Then: #4: Keep Linux Kernel and Software Up to Date
Similar to the above, this one is no secret either. However, one thing I feel is lacking in every package manager I've seen is the ability to update just a single package. There's not always a reason to update the entire system when you just need PHP updated, for example. It also puts sysadmins between a rock and a hard place: if they don't update, it could be catastrophic to their network, but if they update everything, the entire server could end up broken.
Then: #5: Use Linux Security Extensions
I'm a firm believer in not using "security extensions" such as SELinux. They tend to cause more of a headache than they're worth and just add extra load on the server.
While they’re good for a catch-all approach, proper sysadmin and monitoring solutions should be a better approach.
I’ll continue with the next batch of 5 in another part.
While running python setup.py install is simple and easy, it doesn't always work when you want to install some things (such as pip, in my case), especially when you have multiple versions of Python and you're not using virtualenv.
To install pip on Python 2.7, this is what will make your life a lot easier:
1. Install setuptools:
wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg -O setuptools.egg
sh setuptools.egg
2. Install pip via easy_install:
easy_install-2.7 pip
This will install pip as pip-2.7 for you and make the rest of your adventures easier.
Typically a binary of /usr/bin/pip might exist as well. To see what Python version it’s currently linked to, do this:
# pip --version
The output is pretty self-explanatory:
pip 0.3.1 from /usr/lib/python2.6/dist-packages (python 2.6)
To change this to 2.7, do this:
rm -rf /usr/bin/pip
ln -s /usr/local/bin/pip-2.7 /usr/bin/pip
This will now link pip to your 2.7 installation.
So, I ran into the issue of msfupdate not updating Metasploit on my BackTrack installation. It worked fine on my laptop, but wouldn't on my home server. I didn't even get the typical SSL library issues that occur on an initial install. Heck, even the usual svn update didn't work for me.
What I found, however, is a relatively easy way to upgrade Metasploit without issues.
First, we need to remove our current installation:
rm -rf /pentest/exploits/framework2/
This will completely remove Metasploit from the system. Next, we need to download the newest data from Metasploit’s servers:
svn co https://www.metasploit.com/svn/framework3/trunk/ /pentest/exploits/framework2
This will download the most recent snapshot of the Metasploit base into our new folder. Then you should be able to run svn update or msfupdate just fine!
I installed BackTrack recently, which is based on Ubuntu 10.04 (Lucid Lynx). As such, it doesn't have the most up-to-date software, even operating on its own repos. So when I migrated over a site that was used to PostgreSQL 9.1 to the new server running PostgreSQL 8.4, the database wouldn't play nice. Some of the queries would work, but when it came to performing CONCAT() and similar operations, the server croaked.
After looking high and low on Google for how to solve this, I decided to just see what my options were on my system. This is where I discovered pg_upgradecluster. From what I can tell, it's the older version of pg_upgrade, but for the life of me I can't figure out why Postgres would make this more difficult in pg_upgrade. Anyways…
The first thing you should do (and I highly suggest it) is stop any instances of both the old and new PgSQL servers. For me, it was simply running these:
service postgresql-8.4 stop
service postgresql stop
Easy enough. Then I had to figure out where my 8.4 data was stored. Typically (as was the case here), it was stored in /var/lib/postgresql/8.4/main. It’s important to make note of the last two parts of that directory structure, too. You’ll see why in a bit.
To keep things organized and nice, I decided to use the same structure for 9.1. What I ran was this:
pg_upgradecluster 8.4 main /var/lib/postgresql/9.1
This took me about 3 minutes on a small-scale database (49 MB according to du -h .). What happens is pg_upgradecluster migrates all of the data from /var/lib/postgresql/<version>/<cluster> (in this case version = 8.4 and cluster = main) to the new data directory, keeping the same cluster name (main). It will also migrate over the configuration files and any user accounts in the PgSQL system.
After that, just start up the PostgreSQL server instance and you should be running on 9.1 now.
Earlier this month I started a series about breaking into the Linux security field (part 1: https://itknowledgeexchange.techtarget.com/security-admin/getting-into-linux-security-part-1/). I’m going to continue this with more tools of the trade to start learning.
I wrote an article about some of the pros and cons of Shorewall, a supplement to iptables. But why am I listing it here? Because once you know how people can get into a network, you need to be able to prevent them.
Installing a firewall is one thing, but being able to manage it is another. While sometimes you won't have to mess with the gritty details of firewalls, knowing how to configure various firewalls is definitely a good asset to have. As such, learning how to use Shorewall translates easily to iptables or some other solution. Kind of like how learning the Linux CLI can help you navigate a Unix CLI, even though they aren't the same.
There are a few different solutions besides Shorewall you can dabble in, but I've found that for those who don't use native iptables, Shorewall is there. Its effectiveness and ease of configuration will basically give you a "set it and forget it" feel, and will definitely make life easier.
Venturing away from the network aspect of things, Metasploit is another one of those de-facto standard applications that you should have in your Swiss army knife. Used by many IT professionals and highly recognized in the application vulnerability assessment field, Metasploit will be a great asset during audits.
Similar to other tools that'll be listed, this isn't a hacker's delight, and it's not meant to be. It's very easy to be detected using this when it comes to automation. Instead, this is supposed to give you a realistic look at how hackers view your network.
Metasploit is basically a huge database of known vulnerabilities in various services and in systems themselves. For example, if a hacker detects you're running Apache CloudStack 4.0, they could attempt to exploit CVE-2012-5616. Metasploit has a vast community of developers and authors who write plugins for it that allow penetration testers to see if their servers are vulnerable to this CVE.
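To give a feel for the workflow (a sketch; the exact search syntax and output vary by Metasploit version), inside msfconsole you can search for modules covering a CVE and then inspect whatever the search turns up:

```
msf > search 2012-5616
msf > info <module name shown by the search>
```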
Unless you have permission from the server owner, I would advise against running this on anything outside of your LAN. Even as a professional, when I'm performing audits I contact the data center and inform them of the audit, what it will involve, and the time frame. This is a very powerful tool, but as the saying goes, "with great power comes great responsibility".
The last tool in our second part is Nessus. This tool has been around about as long as Metasploit, if not longer, and also has a strong background in the audit world. However, it is more of a network auditing tool than an application one.
Nessus itself is very powerful. It will also create a lot of noise on the network, similar to Metasploit. So again, unless you have permission, I advise only doing this on the LAN.
What this software does is probe any hosts you provide it, and throw a bunch of attacks at it. You can, however, also create different profiles to fit different attacks. So if you want to only test Apache, you can create an Apache profile and add specific tests. This makes the tool very versatile and flexible.
I'll admit I haven't used Nessus in a few years, but the last time I checked, the paid version only allowed PDF reports, and those had references to Tenable Security (Nessus' developer) in them. However, if you're looking for a strong competitor and a very flexible engine that offers in-depth information about your network's security, this is definitely a great tool to have and understand.
There are so many other tools that I'll be touching on, mostly free ones that offer the same functionality with a better price tag. However, this is a good start into the deeper security aspects.
Linux is well known for its networking capabilities. This includes turning an old dusty machine in your house into a homegrown firewall or even a PBX (a fun weekend project, by the way). But as with just about everything else involving Linux, there are a million ways to solve one problem. Such is true with firewalls.
When you install your OS, you’re most likely going to have iptables pre-installed (you should, anyways). This is mostly due to the fact that iptables is the de-facto standard application for Linux when managing a firewall. It allows direct access via netfilter to inspect packets to great depths and even interface with IDSes like Snort when configured properly. But, the problem with having such power is that it can become very troublesome to maintain and configure.
For example, to set up a simple logging rule, you have to do this: iptables -A INPUT -p tcp --dport 22 -s !192.168.1.0/24 -j LOG --log-level 4 --log-prefix "[ssh daemon access attempt]: "
This is not very readable, and the fact that I know it by heart kind of speaks for itself. Mind you, this is a pretty simplistic rule, but it goes to show that it takes a bit to configure a firewall properly with the default tools. But what if there was an easier tool? A better tool? A safer tool?
Over the weekend I spent a few hours working with Shorewall, a front-end of sorts for iptables. Now, while the documentation for Shorewall says to disable the iptables service from running, it still requires iptables to be installed. This threw me off at first, until I realized why when I started Shorewall up.
Shorewall makes firewall management easier without taking away the power of iptables; it just simplifies it. For a one-NIC firewall configuration, you're looking at about 4 or 5 files to edit, with 2 of them (the policies and rules files) needing to be edited more than once. Compare that to troublesome iptables rules and switches, remembering which rule does what, etc., which can make for a headache fast.
I'll be writing a tutorial on this soon, but as a starting point, Shorewall separates the power from the user. This lets you easily know what you're configuring, how it's configured, and when it'll take effect. Shorewall even allows macros, which are further-simplified rulesets you can use for known services (i.e., to deny ping attempts, just call the Ping macro like this: Ping(DROP)). This makes for quick and easy management and also makes rules that much simpler to follow.
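To give a taste (a sketch of a one-NIC setup, using Shorewall's usual net/$FW zone convention), the rules file reads almost like a table, macros included:

```
# /etc/shorewall/rules (sketch)
#ACTION       SOURCE   DEST
SSH(ACCEPT)   net      $FW
Web(ACCEPT)   net      $FW
Ping(DROP)    net      $FW
```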
Now that I've talked Shorewall up like it's a godsend and unbeatable, let's raise the bar a bit. While a great solution, Shorewall also has its downfalls, the biggest of which is poor documentation.
The documentation on the website makes Shorewall sound like it takes at least a master's degree to get up and running. In reality, I got it going without issues, though I had to follow a different guide. There is a lot of technical jargon that, to me, is nothing more than fluff. It adds nothing to helping users get a firewall set up, and if I hadn't been bored and wanting to configure it, I would've just stuck with direct iptables rules.
There is also a severe lack of tutorials on Shorewall, and most of what is out there lacks substance. A lot of the guides I looked through were either for older versions or for configuring "the perfect firewall". I just wanted a simple how-to, and that was that.