Just like everything else in IT security, once a gem is found everyone jumps on it. Originally I was going to offer this as a service for my business (and still might, to a degree), but instead I thought I’d share with the Internet how to build yourself a two-factor authentication system. In the end, you’ll be able to validate authentication requests via either SMS or voice and be proud of the extra power!
While there are only a few requirements for this, they should still be addressed:
- SMS & Voice Provider: Twilio
They have been around for a while and give you $20 of credit to use with your test number when you sign up. This should be far more than enough, as 1 SMS = 0.75 cents and 1 outbound call = 1 cent. While it also costs to keep a number ($1/month/number), it’s still a valuable service in a small market. Also note that pricing is current as of this writing.
- Language: Python
I am not a pro at Python, but it is the easiest language I have worked with thus far, and it’s easy to deploy on various systems.
- Help Libraries: Flask, Peewee and Twilio
Flask is used to process web requests (i.e., it makes our Python script act like a minified web server), and Twilio has a Python client/helper library that ties very easily into Twilio’s REST API. Peewee makes database connections and queries A LOT easier; I won’t go into its mechanics, but it’s basically an ORM (object-relational mapper).
- Database: SQLite
Any database will do, and I’ll just be covering the table layout here (fields, types, etc.). I’ve since moved to PostgreSQL after I stopped writing this code, and if I did it again I would choose PostgreSQL in a heartbeat. But for a starter guide SQLite is great; there’s not a lot of fancy work that needs to be done here.
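As a rough sketch of the table layout (the table and column names here are my own picks, not gospel), something like this gets the job done in SQLite:

```python
import sqlite3

# Open (or create) the database file.
conn = sqlite3.connect("2fa.db")

# One table is enough: who the user is, where to send the token,
# the current token, and when it stops being valid.
conn.execute("""
CREATE TABLE IF NOT EXISTS auth_tokens (
    id       INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE,
    phone    TEXT NOT NULL,   -- E.164 format, e.g. +12345678900
    token    TEXT,            -- current one-time token (NULL when unused)
    expires  INTEGER          -- Unix timestamp the token dies at
)
""")
conn.commit()
```

With Peewee you’d express the same thing as a model class, but the schema is identical either way.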
A sort of pain-in-the-butt aspect of Twilio, however, is that numbers have to be in E.164 format. Essentially this means that any number (even your own Twilio one) passed through the Twilio client must be in the format “+[country code][area code][rest of digits]”, so if your Twilio number is 1.234.567.8900, then when you use the API it’d be +12345678900. While it’s not a big deal, it can be annoying to work with at first.
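If you want to sanitize numbers yourself, a naive, US-centric helper like this works (the function name and default country code are my own assumptions; a real app should lean on a proper phone-number library instead):

```python
import re

def to_e164(number, country_code="1"):
    """Strip punctuation from a phone number and prefix it with
    '+<country code>' so the Twilio client will accept it."""
    digits = re.sub(r"\D", "", number)  # keep digits only
    # Drop a leading country code if the caller already included it.
    if digits.startswith(country_code) and len(digits) > 10:
        digits = digits[len(country_code):]
    return "+{}{}".format(country_code, digits)
```

So `to_e164("1.234.567.8900")` and `to_e164("(234) 567-8900")` both come back as `+12345678900`.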
I don’t suggest using the code as-is in production. It was originally intended for that, but back then I was still in the infancy stage. However, this code can be used as a foundation: I intend for readers to take what’s given and run with it.
This guide assumes you have all the mentioned required stuff installed and ready to go.
Twilio does have a tutorial and code for creating a two-factor authentication system using their backend. The major difference between my code and theirs, in terms of purpose anyway, is that mine generates a random token that has to be entered. You’ll see more of what I’m talking about as we get deeper into everything. The next part will cover setting up Twilio for use.
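Since the random token is the whole point of my version, here’s a minimal sketch of generating one with Python’s `secrets` module (the function name and 6-digit default are just my choices):

```python
import secrets

def generate_token(length=6):
    """Generate a cryptographically random numeric token of
    `length` digits, suitable for sending via SMS or voice."""
    return "".join(secrets.choice("0123456789") for _ in range(length))
```

The token gets stored alongside the user’s record with an expiry, texted or spoken to them, and then compared against what they type in.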
A recent article on Slashdot discussed Google’s request to start using dotless TLDs. While ICANN ultimately denied the request, it’s interesting to see where the future of the Internet could very well go.
Most filtering systems use wildcards, but not to a scalable extent (in a way that takes care of future concerns as well). That does make sense in some respects, though: how is anyone to know whether dotless domains will ever happen, for example? Another issue is that filtering against a long list of futuristic ideas adds more overhead to each request, which even when cached can pose some annoyances.
Should filtering be done on a whim, when new techniques/resources become available, or when it best suits the network? It’s tough to say.
When building a LAN you could easily tell the filtering system to deny any requests to *.onion, but then you have to consider how likely it’d be for someone to even set up a Tor node within the network, have it connect successfully, and allow applications like web servers. If all of these are possible, then you have more concerns than just filtering which domains are accessible.
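For what it’s worth, that kind of wildcard check is trivial to sketch. Assuming a hypothetical deny-list (the patterns here are mine), shell-style matching via `fnmatch` covers it:

```python
from fnmatch import fnmatch

# Hypothetical deny-list; shell-style wildcards like most filters use.
BLOCKED_PATTERNS = ["*.onion", "*.i2p"]

def is_blocked(hostname):
    """Return True if the hostname matches any deny-list pattern."""
    return any(fnmatch(hostname.lower(), pat) for pat in BLOCKED_PATTERNS)
```

The scalability question raised above is exactly this list: every future-proofing pattern you add is one more comparison on every single request.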
TrueCrypt is a very popular option for encrypting data, while dm-crypt+LUKS (LUKS is an on-disk format used with dm-crypt) is an unsung hero of sorts for those who don’t want to install a lot of software.
- TrueCrypt allows encrypting an entire hard disk, while I haven’t found a way for dm-crypt to do this
- Both allow you to create containers to store data
- Neither allows expandable (resizable) containers (unless TC has changed that)
- The process of creating a container with dm-crypt is more troublesome than with TC
- TC allows for more encryption options out of the box, while dm-crypt may allow more with additional effort
- dm-crypt does not require administrative rights (except for probably mounting)
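To illustrate the “more troublesome” point, creating a file-backed LUKS container with dm-crypt goes roughly like this (the file and mapping names are examples, and these commands need root):

```shell
# Allocate a 100 MB file to act as the container
dd if=/dev/zero of=container.img bs=1M count=100

# Format it as a LUKS volume (prompts for a passphrase)
cryptsetup luksFormat container.img

# Open it, creating /dev/mapper/secret
cryptsetup luksOpen container.img secret

# Put a filesystem on it and mount it
mkfs.ext4 /dev/mapper/secret
mount /dev/mapper/secret /mnt/secret

# When finished: unmount and close
umount /mnt/secret
cryptsetup luksClose secret
```

Compare that to TC’s wizard walking you through the same steps with a GUI.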
There will always be people who make a mountain out of a molehill. However, it hits me in a sore spot when people make a simple issue seem like Armageddon.
The Register posted an article about an ISP monitoring mouse traffic on its support pages. The headline makes it seem like they’re monitoring all traffic going through their wires, but when you read the article itself, it’s just about them monitoring mouse clicks and activity on their support pages. Why should this be considered a security threat?
I don’t know how credible The Register usually is, but this has greatly hurt its credibility for me personally.
While I can see some uproar simply because the end user’s activities are being monitored, it’s not as if they are logging keystrokes and sending them to the NSA (oh, wait…). They’re seeing what pages are being read, what bounces and exits happen, etc. Hell, Google Analytics does this for you for free. It’s becoming a little too much for me to handle at the moment, with everyone crying wolf without even knowing what a sheep looks like.
“Being compliant” is a big buzzword as of late that often adds nothing to the company pursuing it. Chances are people will be able to tell you how they can make you compliant, but not why you should be. Granted, the flip side is that if you’re looking into compliance you should already know why you want it done, but still.
PCI and HIPAA compliance are probably the most common ones, covering credit card processing and medical records respectively. The main case for PCI is that more and more people are using plastic instead of paper to pay for things, and if you’re doing business online it’s virtually a necessity somewhere down the line. HIPAA, while part of me feels it has seen its day as fewer and fewer people can afford to see doctors/medical professionals, still holds a strong place in government regulation (PCI isn’t governmentally regulated).
I don’t know the fundamentals of HIPAA regulations (I was never really concerned with them), but PCI is a tricky little fella. Its self-assessment questionnaires come in 4 classes/levels — A, B, C and D — ranging from the simplest to the most comprehensive. SAQ A, the shortest checklist, covers merchants who fully outsource card handling; it’s common for online stores on shared hosting plans that never store CC information. Physical merchants with card terminals tend to fall under B or C, while D is the full checklist for anyone who actually stores cardholder data — it goes not only into virtual security but physical security as well.
While this won’t fit the mold for every SMB (small and medium business) out there, it will still give others an idea of what should be considered. This will assume the SMB wants to expand in the future.
1. Scalability
Most SMBs do not want to stay in that classification forever. If the company knows its end goal in this regard early, it should be able to plan into the future in terms of software like security applications (AV, IDS/IPS, etc.). Should you expect your company to go beyond the 5–50 employee mark (or however you define an SMB), then knowing that the software can handle both a small business and an enterprise should be part of the concern.
2. Ease of Use
Software should be easy to use from the time you look at the pretty packaging or install file until it’s time to uninstall it for good. This is one area where a lot of vendors make their critical mistake.
If you need to write documentation on how to do everything then you’re doing something wrong. Documentation should be there for clarification, not how to use the software itself.
3. Easy Authentication
Typically there’s going to be some level of authentication, whether direct (the program requiring a username and password) or indirect (logging into the system to use it). If the program does require its own login system, it usually makes more sense to tie it into the system itself as well. SSH is a prime example: it prompts the user for a username and password (or whatnot), but authenticates the user against the system’s own accounts.
4. Automated Updates
No one likes having to remember to update anything, especially when it’s supposed to be set-and-forget. Automatically updating a program can pose issues of its own, but most people don’t consider them until it’s too late. The convenience of not having to remember to hit the update button or run a script makes their life much easier, which is what you want the end result to be.
At this year’s USENIX talks, an interesting presentation was given describing how two people reverse engineered Dropbox’s client. This project, by Dhiru Kholia of Openwall and Przemyslaw Wegrzyn of CodePainters, showed how to both intercept SSL traffic (and thus manipulate the API calls) and bypass two-factor authentication. The authors note, however, that for this attack to be effective you need to have already compromised the machine:
Kholia concurred that hijacking a Dropbox client first requires hacking an existing vulnerability on the target user’s machine, which can be executed remotely.
So if you’re wanting to peek at your friend’s Dropbox account, you’ll have to dig deeper into the architecture to even attempt it. In the end they still proclaim Dropbox is a viable and efficient tool for its purpose; they were looking to open the eyes of the IT security community, not to devalue the usefulness of Dropbox.
From what I’m able to gather, being able to intercept the SSL traffic opens up the floodgates of possibilities. You’ll be able to see the data both before encryption and after decryption, and snoop out the details you want/need.
According to a recent article on eWeek, Amazon’s US-EAST-1 DC (or “AZ”) failed… again. This isn’t the first time the DC has had issues, and it won’t be the last. However, what struck me as funny was this:
The purpose of the AZ concept is to have geographically disparate fault tolerance and stability on a global basis. Amazon currently operates eight AZs in total, including three in the Asia Pacific region, one in Western Europe, one in South America and three AZs in the United States. US-EAST-1 is the only Amazon AZ on the East Coast; the other two AZs are US-WEST-1 located in Northern California and US-WEST-2 located in Oregon.
So, basically, what it’s telling me is that US-EAST is supposed to have fault tolerance in the event of an outage while operating in only one DC? That’s like going to a car dealer and them convincing you to buy two cars, with the second one just having a picture of an engine in place of the actual engine.
I recently signed up for AWS’ free tier. While I haven’t toyed with it much, seeing their logic just makes me feel unsettled. How can I have redundancy when there’s only one DC in the zone/area? Shoot, Asia Pacific (Japan and the like) has three DCs. Then again, the East Coast is no different from South America or Europe, because they too only have one DC each.
Redundancy is meant to be a real activity, not just a buzzword to sell pancakes at the price of a tire.
Two common threats a network administrator will deal with from people trying to circumvent content-filtering proxies are standard proxies and Tor. While fundamentally the two are the same, there are also some distinct differences between them.
The purpose of Tor is to share information securely and confidentially. Tor also has a darknet of sorts, where you get a random onionified URL/domain that is only accessible via Tor. Most people also use it to try to get past network devices and filters without what they’re transmitting being seen.
It’s really how Tor works, though, that causes the most concern for me. From a network admin’s standpoint, you want to keep your network secure. Most users who would use Tor discovered it by Googling or via word of mouth, and just set it and forgot it. That alone can pose an issue, but what about the users who want to dig deeper, and even potentially run an exit node from your own network?
That is the threat I’m talking about. It would leave your network open to various attacks, especially if the exit node is not configured properly. In light of this, you would also have to filter outbound traffic at that point and make sure no sensitive data was stolen or tampered with in any way. Such a pleasant thought, isn’t it?
While I’ve not found any resources on how to start your own Tor network, the source code for the project is open.
There are different kinds of proxies, each with their pros and cons. Some have authentication, some don’t. Most standard proxies don’t offer encryption, though, which is Tor’s biggest advantage. However, standard proxies have advantages of their own:
- Improved speed compared to Tor
With Tor, traffic is routed through various relays before hitting the exit node, each adding a bit more latency to the traffic flow for obvious reasons. On top of that, it’s not uncommon to see your IP claiming you’re in South Africa when in actuality you’re in Toronto, Canada.
Unlike Tor a standard proxy is easy to set up and maintain. It doesn’t offer the encryption and security that Tor does, but a standard proxy can have its own benefits if you like to get fancy with firewall rules.
It’s always important to know how your enemy works. If you wanted to be really mean to someone on the LAN who is using Tor, you could also throttle their switch port, but that’s just for fun.
The month of August has apparently been a busy one for the Tor network.
For those unfamiliar with what Tor is, in the shortest sense possible it acts as a multiple-endpoint VPN service.
It operates on what is called onion routing, in that there are various layers of security implemented into the protocol/network. Similar to a proxy, you connect to a server and handle Internet requests through that endpoint while the results get transmitted back to you. However, unlike a normal proxy, Tor bounces your traffic through multiple endpoints (relays), and the final endpoint your connection appears to come from (the exit node) changes every 10 minutes.
Really, it strips out a lot of the overly complex and convoluted aspects of being secure: the Tor client fetches a list of relays from the directory servers, builds a circuit through them, and you tell any service you want protected to use the Tor client as a SOCKS proxy.
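For example, assuming a Tor client is running locally on its default SOCKS port (9050), pointing a tool like curl at it is a one-liner:

```shell
# --socks5-hostname also pushes DNS resolution through Tor,
# so lookups don't leak to the local resolver.
curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/
```

Any application with SOCKS support can be pointed at the same port.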
What makes this an interesting read about Tor, though, is that August has also seen a popular uprising of wars, Snowden conspiracies, and extreme unrest around the world.
To see what I mean, Tor’s statistics can be viewed here: https://metrics.torproject.org/users.html?graph=direct-users&start=2013-05-30&end=2013-08-29&country=all&events=off#direct-users — compared to most of the rest of the year, this month’s usage has more than doubled.
A lot of companies and even nations (China and some Middle Eastern countries) block the usage of Tor at the ISP level, so that hampers some things as well. However, in the broad scheme of things, Tor has been around since the early 2000s (I used to use it in high school) and is still going strong.