According to a recent article on eWeek, Amazon’s US-EAST-1 DC (or “AZ”) failed…again. This isn’t the first time the DC has had issues, and it won’t be the last. However, what struck me as funny was this:
> The purpose of the AZ concept is to have geographically disparate fault tolerance and stability on a global basis. Amazon currently operates eight AZs in total, including three in the Asia Pacific region, one in Western Europe, one in South America and three AZs in the United States. US-EAST-1 is the only Amazon AZ on the East Coast; the other two AZs are US-WEST-1 located in Northern California and US-WEST-2 located in Oregon.
So, basically, what it’s telling me is that US-EAST is supposed to be fault tolerant in the event of an outage while operating out of only one DC? That’s like going to a car dealer and them convincing you to buy two cars, with the second one just having a picture of an engine in place of the actual engine.
I recently signed up for AWS’ free tier. While I haven’t toyed with it much, seeing their logic just makes me feel unsettled. How can I have redundancy when there’s only one DC in the zone/area? Shoot, Asia Pacific (Japan and the like) has three DCs. Then again, the East Coast is no different from South America or Europe, because they too have only one DC each.
Redundancy is meant to be a real activity, not just a buzzword to sell pancakes at the price of a tire.
Two common threats a network administrator will deal with from people trying to circumvent content-filtering proxies are standard proxies and Tor. While fundamentally they are the same, there are also some distinct differences between the two.
The purpose of Tor is to share information securely and confidentially. Tor also has its own darknet of sorts, where you get a random onionified URL/domain that is only accessible via Tor. Most people, though, use it to try to get past network devices and filters without revealing what they are transmitting.
It’s really how Tor works, though, that causes the most concern for me. From a network admin’s standpoint, you want to keep your network secure. Most users who would use Tor discovered it by Googling or via word of mouth, and just set it and forgot it. That alone can pose an issue, but what about those users who want to dig deeper, and even potentially run an exit node from within your network?
That is the threat I’m talking about. It would leave your network open to various attacks, especially if the exit node is not configured properly. In light of this, you would also have to filter outbound traffic at that point and make sure no sensitive data was stolen or tampered with in any way. Such a pleasant thought, isn’t it?
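On the detection side, it isn’t hopeless: the Tor Project publishes a list of known exit-node IP addresses, and checking your logs against it is straightforward. A minimal sketch in Python — the one-IP-per-line format with `#` comments mirrors the published bulk exit list, and actually fetching the list over the network is left out:

```python
def parse_exit_list(text: str) -> set:
    """Parse a Tor exit list: one IP address per line, '#' starts a comment."""
    ips = set()
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            ips.add(line)
    return ips

def is_tor_exit(ip: str, exit_ips: set) -> bool:
    """True if the given address appears in the parsed exit list."""
    return ip in exit_ips
```

Run that over your firewall logs periodically and you at least know when Tor traffic is touching your network, even if you can’t see inside it.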
While I’ve not found any resources on how to start your own Tor network, the source code for the project is open.
There are different types of proxies, each with their pros and cons. Some have authentication, some don’t. Most standard proxies don’t have encryption, though, which is Tor’s biggest advantage. However, standard proxies also have advantages of their own:
- Improved speed compared to Tor
With Tor, traffic is routed through various relays before hitting the exit node, each adding a bit more latency to the flow, for obvious reasons. It’s also why it’s not uncommon to see your IP geolocate to South Africa when you’re actually in Toronto, Canada.
Unlike Tor, a standard proxy is easy to set up and maintain. It doesn’t offer the encryption and security that Tor does, but it can have benefits of its own if you like to get fancy with firewall rules.
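As an example of how painless the client side of a standard proxy is, here’s Python’s standard library pointed at a plain HTTP proxy (the address and port are made up for illustration):

```python
import urllib.request

# Hypothetical proxy address, for illustration only.
proxy = urllib.request.ProxyHandler({
    "http": "http://10.0.0.5:3128",
    "https": "http://10.0.0.5:3128",
})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
# From here on, urllib.request.urlopen(...) calls go through the proxy.
```

Pair that with a firewall rule that only lets the proxy box out on ports 80/443, and clients have no choice but to play along.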
It’s always important to know how your enemy works. If you wanted to be really mean to someone on the LAN who is using Tor, you could also throttle their switch port, but that’s just for fun. 😉
The month of August has apparently been a busy one for the Tor network.
For those unfamiliar with what Tor is, in the shortest sense possible it acts as a multiple-endpoint VPN service.
It operates on what is called onion routing, in that there are various layers of security implemented into the protocol/network. Similar to a proxy, you connect to a server and handle Internet requests through that endpoint, while the results get transmitted back to you. However, unlike a normal proxy, Tor bounces your traffic through multiple endpoints (relays), and the final endpoint your connection appears to come from (the exit node) changes every 10 minutes.
Really, it strips out a lot of the overly complex and convoluted aspects of being secure: all you generally have to do is connect to the Tor network (the client fetches relay information from directory servers and builds circuits for you), then tell any services you want protected to use the Tor client as a SOCKS proxy.
What makes this an interesting read, though, is that August has also seen wars, Snowden revelations, and just extreme unrest around the world.
To see what I mean, Tor’s statistics can be viewed here: https://metrics.torproject.org/users.html?graph=direct-users&start=2013-05-30&end=2013-08-29&country=all&events=off#direct-users — compared to most of the rest of the year, this month’s usage has more than doubled.
A lot of companies and even nations (China and some Middle Eastern states) are blocking the use of Tor at the ISP level, so that does hamper things as well. However, in the broad scheme of things, Tor has been around since the early 2000s (I used to use it in high school) and is still going strong.
After reading an interesting article posing the question of why we are still using RC4, it got me thinking: why not?
Now, the article itself states that while RC4 hasn’t gone the route of XOR encryption just yet, it’s rapidly getting close to that point. A big part of RC4’s appeal is its portability and lack of any need for CPU extensions. RC4 was invented in 1987 and made public (well, as public as it can be) in 1994; since then, all hell has broken loose.
While there are no official documents from RSA on how the algorithm works, many people have been able to replicate it pretty easily (the unofficial clones go by ARC4/ARCFOUR), and have even written variants of it to improve on some of its downfalls (e.g., RC4+). While the article suggests RC4 should go extinct soon, we are, after all, still using WEP in some of our networks as well (which also uses RC4 for its keystream).
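To that point, the whole algorithm fits in a few lines. Here’s a straightforward Python version of the leaked (ARC4) design — key scheduling, then the keystream generator XORed against the data — checked against the well-known “Key”/“Plaintext” test vector:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream into data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because it’s a pure XOR stream cipher, running the ciphertext back through with the same key decrypts it — which is exactly why keystream biases and key reuse hurt it so badly.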
Are there better options when we’re talking about SSL/TLS? Always. You can use encryption that requires hardware (fobs), use symmetric block ciphers like AES, or even write your own (which will most likely not be a better option in practice, but is fun to devise regardless).
When it comes to IT, everything will be broken. Everything is meant to fail, or else we’d still be content with using bit-shifting to hide our secret love letters (even rot13 is a wiser choice in that regard).
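For the curious, here’s how little stands between those shift “ciphers” and the plaintext — a generic Caesar shift in Python, of which rot13 is just the self-inverse special case (shift of 13):

```python
import codecs

def caesar(text: str, shift: int) -> str:
    """Shift letters through the alphabet, leaving everything else alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

# A shift of 13 is rot13: apply it twice and you're back to the original.
```

Twenty-five guesses breaks any Caesar shift by hand, which is the whole point: ciphers are supposed to fail, and we’re supposed to move on.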
I got into setting up OpenVPN for my business needs and looked into what I could do to make it more secure. Granted, there’s no IPsec setup going on (thankfully, for my needs), but OpenVPN still allows a lot of options: PAM, certificates, user/password pairs, etc… Even more so when you consider that OpenVPN’s authentication system is plugin-based, so in theory you can have unlimited options here.
What I ended up doing is setting up two OpenVPN servers, one a little less strict than the other. One runs on the standard port (1194) while the other runs on 1195. The standard-port install is what I like to call the “security dungeon”. The current setup consists of:
- Server certificate
- User certificate and key
- System username
- System password
- Google Authenticator
So, in a way, I have 5-factor authentication for an OpenVPN setup. The user needs the first four to even be considered, and the authenticator token is prepended to the password when you type it in. The instance on port 1195, however, requires only the first two.
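For reference, the server-side half of the security-dungeon stack looks roughly like this. This is a sketch only: the plugin path and PAM service name vary by distro, the certificate file names are placeholders, and the Google Authenticator piece lives in the PAM stack (via the pam_google_authenticator module) rather than in this file.

```
# server.conf fragment (illustrative)
port 1194
proto udp
# Factor 1: server certificate presented to clients
ca ca.crt
cert server.crt
key server.key
# Factor 2: clients must present their own signed certificate and key
# (enforced by the TLS handshake against the CA above)
# Factors 3-5: system username/password checked through PAM; with
# pam_google_authenticator in that PAM stack, the TOTP token is
# entered combined with the password
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so openvpn
```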
Why did I set these up this way? Well, the standard-port instance has sort of become a dev install. It’s easy enough to work on and edit, but it’s mostly intended for testing purposes. The 1195 instance was spawned off to allow my mobile phone to connect (using the authenticator on it is just too much hassle). So for those times when I’m connecting my phone to a local McDonald’s WiFi, I have nothing to worry about.
Is there really a point to having this much in-depth security, though? Who is going to sit there and try to hijack an OpenVPN server that isn’t really connected to anything within my business’ network? It’s just sitting there looking pretty, running in an LXC container.
There’s a plethora of scripting languages available…Shell, Perl, Python and Ruby, to name a few. While I’ve used all but one of those (Ruby), I’ve always wondered what makes me use one over another.
Shell I use when I want something quick and dirty with easy access to the OS. Instead of having to build loops in Python to browse through a directory recursively, I just have to do a simple for loop in Shell. It also allows retrieval of certain data without requiring extra libraries or modules (the next best thing in both these cases would be Perl).
Next there is Perl, which comes standard on most Linux distros. It used to be the de facto scripting language next to Shell (and really seemed to surpass it). I never used it much, to be honest, as the language was far too different from what I was used to at the time (C and C++ mostly), and I was in high school as well. From what I’ve seen of it, though, it has a similar structure to Python, where there is core functionality and everything else is a module. While this isn’t horrible, it can drag development.
Python, which I have only used for about two years now, has become my go-to scripting language for these needs. It offers a lot of power and ease of use without sacrificing much. The only real issue is that it’s based on indentation (which is why I use Sublime Text). While a lot of things can be done via modules, you can also write your own methods, which can be more efficient.
Microsoft has a program called MAPP (Microsoft Active Protections Program). Basically, it’s a program Microsoft started where participants get early access to Patch Tuesday releases, being able to update their Windows systems before others (sort of like VIP treatment).
From my experience, a lot of environments in the business world that run Linux also run Windows, whether server or desktop. Thus, this news does have a big effect on those environments, because Microsoft is now opening the program up to a broader scope. When the change takes place, participants can access Windows updates three days prior to Patch Tuesday and still have all the same perks. This expansion is mostly intended for incident response teams (i.e., CIRTs), which every business should have to some degree.
The MAPP program is intended to share information about issues, vulnerabilities, etc. among its participants. Opening it up to security-centric businesses focused on Windows machines can only mean better resources available to everyone.
While I don’t actively support Windows, I do know some who do and are heavily involved in the day-to-day tasks of maintaining those systems. What I would like to see, though, is a registry on the MAPP website where one can browse the participants of the program to better know who takes part in it.
I love security: the cat-and-mouse game, the endless ventures of finding ways to thwart your best friend gone rogue. You’d figure all of the events circulating around the NSA would at least raise a hair or two on my scalp, wouldn’t you? Well, not really…
First and foremost, it just doesn’t surprise me. This isn’t the first time the NSA has been involved in these types of scenarios, and I’m not really sure why this is any different. The government has never really been for or against its people; it’s been for itself. Just like a business, the government wants to protect its IP; however, it has fewer mediums to do so, given the risks if word got out. Case in point: now.
Edward Snowden isn’t a hero in this, either, as far as I’m concerned. He held an interview about it, yes, but stuff like this has been portrayed in movies for a long, long time. Yeah, I know, Hollywood is fake…but really, how fake is it? Think about this: Antitrust came out in 2001. It’s basically a movie about a big corporation that creates something called Synapse, where everything is linked together (the cloud). What did we get a couple years later? “The Cloud”…even though it’s just a buzzword for technology that’s existed for a long time (see: roaming profiles in Windows).
Don’t get me wrong, I think it’s pretty horrible what we’ve got going on here. This sort of stuff shouldn’t have happened in the first place, but the NSA isn’t as much to blame as we are for thinking this wasn’t real. There’s no reason to wear a tin-foil hat all the time, but there is a reason to be more aware of your surroundings.
Lastly, as a small note regarding our freedoms: we lost those when we blindly allowed 9/11 to happen… Let’s face it, we can’t fix the past, but we can fix things only when we know where we went wrong.
I found an interesting piece on Slashdot that covers a curious take on all of the hoopla over the recent scares in IT regarding data theft and storage (looking at you, Mr. NSA): don’t try to implement more encryption.
The basic idea is to not look for solutions that add more security to your environment, because, with the way things are now, it’s not far-fetched that the government or some other body will persuade businesses to reduce the security in their products. A good example of this is HTTPS and browsers. Take the three biggest browsers in the market (Firefox, Chrome and IE), and have the government pay them a large lump sum to generate HTTPS keys from a known dictionary.
This wouldn’t be your normal 300-word dictionary, however. It would span millions and millions of lines, and with a lot of products introducing cloud and *-as-a-service offerings, there’s no real way to tell this isn’t already occurring.
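To see why a key drawn from any known dictionary is game over, here’s a toy demonstration in Python. XOR stands in for the real cipher and the three-word dictionary is obviously made up, but the attack shape is the same at any scale: a linear scan over the dictionary plus a plausibility check on each trial decryption — millions of lines is nothing to a machine.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def brute_force(ciphertext: bytes, dictionary, known_prefix: bytes):
    """Try every key in the known dictionary. A real attacker would test
    for valid protocol structure instead of a known plaintext prefix."""
    for key in dictionary:
        if xor_cipher(ciphertext, key).startswith(known_prefix):
            return key
    return None
```

The security of the whole scheme collapses to the size of the dictionary, which is exactly the point of the article: the math can be perfect and the product still rotten.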
I’m not a conspiracy theorist either, so whether it is or isn’t happening is a different playing field entirely, but it should make you re-evaluate what these scares and controversies are really bringing to the table. My biggest complaint is that I’ve always lived by the mantra of “if you have nothing to hide, then don’t be afraid”.
There’s a plethora of IDSes out for Linux, and a fair share of them for Windows as well. While I’m not a fan of running Mac OS as a server, and I’m not sure what software it already has in this regard, I found this little gem today called 4Shadow: http://4shadowapp.com/
Given that a lot of people use Macs at their local coffee shop, bakery, etc., it does make sense to think about this, especially if you’re there coding away at a website and testing it locally.
I can’t say very much about it, as I don’t use a Mac, don’t have access to one, and don’t really want one (except for OSx86), but it is something for those who do use the OS to look into.
If you do try it please leave a comment so I know how it is, at least.