The time required to break an eight-character password has dropped to two minutes. A seven-character password, the minimum currently required by PCI-DSS for retailers to protect stored payment-card information, can be compromised in seconds. (Read more: http://www.storefrontbacktalk.com/securityfraud/kill-all-the-passwords/) That’s why I have moved to 10 characters as a minimum password length. But there’s a caveat: 10 characters is fine if you can use special characters; I would go to 12 if you can only use upper/lower case letters and numerals.
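To put rough numbers behind that advice, compare the keyspaces. (I’m assuming 95 printable ASCII characters when specials are allowed and 62 for letters plus digits; actual site rules vary.)

```python
# Keyspace comparison: password length vs. character-set size.
# 95 = printable ASCII (upper/lower, digits, specials); 62 = alphanumeric only.
full_10 = 95 ** 10    # 10 characters drawn from the full printable set
alnum_12 = 62 ** 12   # 12 characters drawn from letters and digits only

print(f"10 chars, full set: {full_10:.2e} combinations")
print(f"12 chars, alnum:    {alnum_12:.2e} combinations")

# The 12-character alphanumeric keyspace is actually the larger of the two,
# which is why losing special characters calls for the extra length.
```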
That might work for a while, but processors just keep getting faster and faster. Before too long, even passwords like H4*$.ndl_@@I1~nRfCsI()&^%$# won’t be secure enough. It’s time for a second factor. Yes, I know there are sites that use them. PayPal is one of them (I use their security key, essentially a time-synchronized one-time password generator). It’s also integrated with eBay. Banks and other financial institutions seem to be slow on the uptake, however.
When I log into PayPal or eBay, I’m not the least bit worried that someone could hack me. Even if there is a keylogger on my system, the fact that my strong, 10-character password is augmented with a random, non-repeating six-digit token makes it highly unlikely that anyone in any known universe is going to hack me within any human’s lifetime. After all, even if the hacker knows my password (factor 1: something you know), he still won’t be able to enter the security key token (factor 2: something you have) because only I have that.
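For the curious, the six-digit token such a security key produces can be sketched in a few lines. This follows the published HOTP/TOTP algorithms (RFC 4226 and RFC 6238); I’m assuming PayPal’s key works along these lines, though their exact implementation isn’t public.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # "dynamic truncation": low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    # Time-synchronized variant (RFC 6238): the counter is the
    # current 30-second window, so token and server stay in step.
    return hotp(secret, int(time.time()) // period)
```

With the RFC 4226 test secret, `hotp(b"12345678901234567890", 0)` produces the published vector "755224", so you can check an implementation against the spec.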
I’m not saying for a minute that passwords are completely dead, only that they are no longer sufficient as a single-factor authentication method. I’ll explore alternatives such as sequential one-time passwords and other methods in a future post.
Well, according to Sophos Security, that is. But why not? It’s Halloween, a day dedicated to all things creepy. What’s more creepy than a zombie, especially one that spews out nasty spam that infects PCs with all manner of creepy, crawly, slimy stuff? So, tomorrow, make it a point to “Kill a Zombie!”
[kml_flashembed movie="http://www.youtube.com/v/C6Jm_wAl668" width="425" height="350" wmode="transparent" /]
In Part 2, I showed how the EFF recommends building location systems that don’t collect the data in the first place. How is that accomplished? Cryptographic protocols. One of these is electronic cash. Electronic cash refers to a means by which an individual can pay for something using a special digital signature that is anonymous but guarantees the recipient that they can redeem it for money; it acts just like cash! Transfer of money at places like toll booths and fuel pumps would not be tied to any specific individual.
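As a rough illustration of the cryptography involved, here is a Chaum-style RSA blind signature with a toy key. The idea is that the “bank” signs a token without ever seeing it, so the signed cash can’t be linked back to the customer who requested it. (The tiny primes and the specific values below are for demonstration only; real electronic cash schemes are far more elaborate and would never use numbers this small.)

```python
# Toy RSA key (demonstration only -- primes this small offer no security).
p, q = 61, 53
n = p * q          # modulus: 3233
e, d = 17, 2753    # public / private exponents (e*d = 1 mod phi(n))

def blind(m: int, r: int) -> int:
    # Customer blinds token m with a random factor r before sending it to the bank.
    return (m * pow(r, e, n)) % n

def sign(blinded: int) -> int:
    # Bank signs blindly -- it never learns m.
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    # Customer strips the blinding factor, leaving a valid signature on m:
    # (m * r^e)^d = m^d * r  (mod n), so multiplying by r^-1 yields m^d.
    return (blinded_sig * pow(r, -1, n)) % n

m, r = 42, 5                            # token value and random blinding factor
sig = unblind(sign(blind(m, r)), r)
assert pow(sig, e, n) == m              # anyone can verify the signature,
                                        # yet the bank can't tie it to the request
```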
Another approach would involve the use of anonymous credentials for certain types of passes and access cards. The EFF document provides an explanation:
These give [a person] a special set of digital signatures with which he can prove that he is entitled to enter the [restricted location] (i.e., prove you’re a paying customer) or get on the bus. But the protocols are such that these interactions can’t be linked to him specifically, and moreover repeated accesses can’t be correlated with one another. That is, the [restricted location] knows that someone authorized to enter has come by, but it can’t tell who it was, and it can’t tell when this individual last came by. Combined with electronic cash, there is a wide range of card-access solutions that preserve locational privacy.
Of course, these aren’t the only solutions (though they may become the only reliable ones). There is also good old data retention and erasure. If there is no real need to keep location data beyond a short period of time, then it should be deleted. The problem with that approach is that companies that acquire locational data have incentives to keep it. Picture a third-party advertising service that automatically feeds you ads for local businesses based on where you are logged in. The data about your movements around town and across the planet is valuable demographic input for highly targeted ad campaigns.
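A retention-and-erasure policy is simple to express in code. Here is a minimal sketch, assuming location records are plain (timestamp, payload) pairs and a hypothetical 30-day policy window; a real system would purge at the database layer and log the deletions.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed policy window

def purge_stale(records, now=None):
    """Keep only location records newer than the retention window.

    Each record is assumed to be a (timestamp, payload) tuple with a
    timezone-aware timestamp.
    """
    now = now or datetime.now(timezone.utc)
    return [(ts, data) for ts, data in records if now - ts <= RETENTION]
```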
In the end, the real concern is with government:
…there’s no guarantee that a government won’t suddenly pass a law requiring … companies and government agencies to keep all of their records for years, just in case the records are needed for “national security” purposes. This last concern isn’t just idle paranoia: this has already happened in Europe, and the [United States Government] has toyed with the same idea…
In the long run, the decision about when we retain our location privacy (and the limited circumstances under which we will surrender it) should be set by democratic action and lawmaking.
In my last post, I outlined the concept of location privacy and gave some examples of how you can be tracked when you’re out and about. You may say, “So, what? What do I care if people know where I’m going? I’m not doing anything wrong.” Maybe so, in your eyes. But in the post-9/11 climate, there’s a hyper-sensitivity toward anything that could be construed as terrorist activity. Not only that, but anyone who may have it in for you could cause you no end of trouble. The EFF document provides this insight:
The systems discussed [in my previous post] have the potential to strip away locational privacy from individuals, making it possible for others to ask (and answer) the following sorts of questions by consulting the location databases:
- Did you go to an anti-war rally on Tuesday?
- A small meeting to plan the rally the week before?
- At the house of one “Bob Jackson”?
- Did you walk into an abortion clinic?
- Did you see an AIDS counselor?
- Have you been checking into a motel at lunchtimes?
- Why was your secretary with you?
- Did you skip lunch to pitch a new invention to a VC? Which one?
- Were you the person who anonymously tipped off safety regulators about the rusty machines?
- Did you and your VP for sales meet with ACME Ltd on Monday?
- Which church do you attend? Which mosque? Which gay bars?
- Who is my ex-girlfriend going to dinner with?
Are you beginning to get the idea? Pretty scary, if you ask me. So what do you do?
We can’t stop the cascade of new location-based digital services. Nor would we want to — the benefits they offer are impressive. What urgently needs to change is that these systems need to be built with privacy as part of their original design…
Our contention is that the easiest and best solution to the locational privacy problem is to build systems which don’t collect the data in the first place.
How is that possible? More in Part 3.
You’ve never heard the term before? Well, here’s what it is according to the Electronic Frontier Foundation (EFF): “Locational privacy (also known as “location privacy”) is the ability of an individual to move in public space with the expectation that under normal circumstances their location will not be systematically and secretly recorded for later use.”
In what ways could you be located and your location recorded? For one thing, security cameras have become ubiquitous; they’re in every parking garage, convenience store, liquor store, bank and ATM, you name it. In some cities your passage is recorded by taking a snapshot of your vehicle license plate as you move through traffic intersections. The EFF notes that “…systems which create and store digital records of people’s movements through public space [are being] woven inextricably into the fabric of everyday life. We are already starting to see such systems now, and there will be many more in the near future.
“Here are some examples you might already [be using] or have read about:
- Monthly transit swipe-cards
- Electronic tolling devices (FastTrak, EZpass, congestion pricing)
- Services telling you when your friends are nearby
- Searches on your PDA for services and businesses near your current location
- Free Wi-Fi with ads for businesses near the network access point you’re using
- Electronic swipe cards for doors
- Parking meters you can call to add money to, and which send you a text message when your time is running out”
Perhaps you’ve heard about the new rage in apps that post your location to Twitter or Facebook. One of those is My Latitude, an application that lets you publish your Google Latitude position on your profile page. This is accomplished using the Google Latitude Public Badge. There’s another called Android Location Services for Android phones.
If you’re using any of those, you’re losing your locational privacy. What to do about it? I’ll cover that in Part 2.
[kml_flashembed movie="http://www.youtube.com/v/2-34Iyz7EYk" width="425" height="350" wmode="transparent" /]
“A 15-year-old Californian caught with a stolen scooter while high on drugs has been banned from using encryption – despite the lack of any computer crime element to his alleged offences. In fact, there was actually no computer involved in the commission of the crime at all.” So begins this article in The Register.
What idiocy–or paranoia–is this? It never ceases to amaze me that otherwise educated people, like lawyers and judges, can be so stupid when it comes to technology. Encryption has nothing to do with the theft of a piece of physical property by any stretch of the imagination. Sure, if the kid was stealing money out of bank accounts or hacking debit card machines or something like that, it would make sense. But this crime had nothing to do with computers and banning him from using encryption isn’t going to prevent him from committing a similar crime in the future.
At first, the kid was completely banned from using a computer except for doing schoolwork. That meant no social networking, Facebook, etc. Here’s an excerpt from the ruling:
[J.J.] shall not use a computer that contains any encryption, hacking, cracking, scanning, keystroke monitoring, security testing, steganography, Trojan or virus software.
[J.J.] is prohibited from participating in chat rooms, using instant messaging such as ICQ, MySpace, Facebook, or other similar communication programs.
[J.J.] shall not have a MySpace page, a Facebook page, or any other similar page and shall delete any existing page. [J.J.] shall not use MySpace, Facebook, or any similar program.
[J.J.] is not to use a computer for any purpose other than school related assignments. [J.J.] is to be supervised when using a computer in the common area of [his] residence or in a school setting.
What? Did the judge think that he was going to contact his scooter chop shop crime syndicate co-conspirators? Fortunately, some reason prevailed and an appellate judge lifted most of these restrictions as being in violation of First Amendment rights:
Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. . . . Two hundred years after the framers ratified the Constitution, the Net has taught us what the First Amendment means.
Score a point for that judge. However, the restriction not to use “encryption, hacking, cracking, scanning, keystroke monitoring, security testing, steganography, Trojan or virus software” wasn’t completely lifted and was only modified to prohibit him from “knowingly” using a computer with these things.
That someone can be so completely clueless about technology as to rob someone of their ability to use their Gmail account (it uses SSL) or to even log into Yahoo! mail or Hotmail (both use SSL during login) is disturbing. The appellate judge, regardless of the position he took above, still doesn’t have a clue as to what the First Amendment really means: He has completely taken away J.J.’s ability to communicate via those particular webmail accounts. Moreover, he has forced J.J. to be totally insecure with any login to any account he may have on any server that requires SSL.
That’s not acceptable.
I just got this from a friend of mine, Arindam Chakraborty, who is also a fellow Internet marketer: Warning About EFTPS Tax Phishing Emails! Like me and many, many other marketers, he uses the AWeber Communications email marketing service to manage his subscriber lists. It seems that AWeber was hacked last Saturday. Here is their official notice: Email Subscriber Data Accessed; What We’re Doing About It. Here’s an excerpt.
Over the weekend, AWeber was the target of a deliberate and successful attempt to mine email addresses.
On Saturday, October 16th, an unknown person gained unauthorized access to databases containing email subscriber information.
This incident appears to be part of a broader series of similar successful attacks on a number of email service providers (ESPs).
This happened in December 2009 as well:
December 21, 2009
AWeber was recently the victim of an intentional attack to mine email addresses.
We’d like to take this opportunity to share what happened, what was (and was not) affected and what we’re doing as a result of this attack.
Apparently, the attackers found a zero-day vulnerability in AWeber’s systems, though they’re not saying exactly what that was:
On a daily basis, a few thousand attempts are made to attack AWeber. This sounds like a lot (and it is), but it’s no different at any other sizable web-based application.
We use a combination of in-house and third-party security solutions to scan our network for possible “holes” in security, and to monitor, block and analyze the many attempts made to gain unauthorized access to AWeber. On the whole, these solutions are very good at what they do and this approach serves us well. Unfortunately, both the in-house and third-party solutions failed to detect or stop this particular attack.
I’d sure like to know what those “third-party solutions” are so I can patch them if they exist on any of my clients’ systems!
One of the services I provide to clients is proactive server and network maintenance. Part of my monthly routine involves checking to make sure that the security measures remain effective and haven’t been compromised. For the longest time, I had a series of five things I checked. One day, while researching a security issue, I stumbled upon SANS’ excellent cheat sheet, Intrusion Discovery Cheat Sheet v1.4 for Windows. I noticed that they specified two additional things to check, so I added those to my list as well. (It’s gratifying when such a respected authority as SANS Institute publishes something that validates what you have been doing.) Here are the checks and the order I do them in:
- Event logs: Anything unusual or suspicious in any log gets my attention. I am particularly sensitive to entries in the security log.
- Running processes and services: I sort Task Manager processes by user name and look for anything unusual (SANS recommends checking performance for anything unusual). Then I examine the services using both the GUI and the command line.
- Network usage: I look for unusual shares, open sessions, listening ports and NetBIOS over TCP/IP activity. Anything that doesn’t look normal is suspect.
- Registry keys: Strange entries in HKLM\Software\Microsoft\Windows\CurrentVersion\Run, Runonce and RunonceEx are suspect.
- File system: Unusually large files and sudden disk space changes can indicate system compromise.
- User manager: The SANS cheat sheet says to look for new, unexpected accounts in the Administrators group.
- Scheduled tasks: SANS recommends using both the command line and GUI to look for unusual scheduled tasks, especially those that run with Administrator privileges, as SYSTEM, or with a blank user name. The cheat sheet also recommends checking autostart items in msconfig.exe.
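Parts of these checks lend themselves to scripting. As one example, the file-system check can be sketched like this; the threshold and starting directory below are arbitrary choices, and on a real system you’d baseline the results and alert only on changes between runs.

```python
import os

def find_large_files(root: str, threshold: int):
    """Walk the tree under root and return (path, size) pairs for files
    at or above the size threshold, largest first.

    Unusually large files (or sudden growth between runs) can indicate
    staged stolen data or dropped tools.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is inaccessible; skip it
            if size >= threshold:
                hits.append((path, size))
    return sorted(hits, key=lambda pair: -pair[1])
```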
This is by no means a comprehensive list of security checks, but if there has been a system intrusion, some indication is likely to be found in one or more of the above items. Sys admins generally get a feel for how their systems operate and often simply “get the feeling” that something isn’t right. It certainly happens to me sometimes; that’s when I start looking for unusual behavior. Often it turns out to be nothing; sometimes I catch something before it becomes an issue.
These checks can be applied to any system including workstations. You can even do them on your personal computers. If you’re not already doing checks like these, I highly recommend you start. You’ll enjoy even greater peace of mind if you do.
I have seen it happen time and again; I educate the people I support about proper security practices and they go on and do dumb things anyway. Trusting users with security is a bad idea. It’s a bad idea because it doesn’t work. Security is hard. It takes thought and effort. People don’t want to have to think about it. They want instant gratification and they want it to be easy.
So, what’s the solution? Do we lock everything down so it’s impossible to get in trouble? That has been proven unworkable. Do we switch to dumb terminals for mission-critical apps? Perhaps, but that’s cost prohibitive for small businesses.
The solution that works for my clients is a simple one:
- There is an Internet usage policy in place and incorporated into the employee’s employment agreement; it is strictly enforced.
- Server-based anti-malware with real time threat monitoring and notification is in place.
- Proven anti-spam filtering is in place.
- URL filtering is in place to block known malicious and prohibited sites.
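At its core, a URL filter is a hostname blocklist lookup. Commercial products use vendor-maintained category feeds, but the essential check looks something like this (the blocklist entries below are made up for illustration):

```python
from urllib.parse import urlparse

# Hypothetical blocklist; in practice this would be a vendor feed
# refreshed on a schedule.
BLOCKLIST = {"malware.example", "phish.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain of it,
    appears on the blocklist."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check "a.b.c", then "b.c", then "c" against the blocklist.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

Matching parent domains matters because attackers rotate hostnames under a bad domain faster than any list of exact hosts can keep up.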
In the last five years, where the above is implemented, I have had to respond to a security incident on only one occasion and that one was an internal breach by an employee who attempted to steal a customer list.