The latest, from IdentityTheftBlog.info:
Thanks to a more recent credit union notice that Jai Vijayan of Computerworld uncovered from the Alabama Credit Union, we now know that it is not just credit cards that have been affected: the breach also appears to involve “long lists” of compromised ATM/debit cards. Visa and MasterCard remain mute about the source of the breach, although once the notice surfaced, Visa confirmed to Computerworld that a processor “experienced a compromise of payment card account information from its systems,” and MasterCard’s statement referred to the processor as being in the U.S.
The fact that the breach includes ATM cards is scary and disheartening. The fact that another large processor has been breached tells me Heartland and Hannaford were not anomalies – they represent the tip of the iceberg. Cybercriminals have developed a way to capture streaming card data that’s being transmitted unencrypted on internal networks.
We need to start encrypting card data at every point in the transaction process, whether it’s running across internal networks or sitting in databases.
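A complementary approach to blanket encryption is tokenization: keep the raw card number (PAN) out of internal traffic entirely, so a sniffer on the network captures nothing usable. Here is a minimal sketch using a keyed HMAC – the function names are hypothetical, and a real deployment would pull the key from an HSM and manage rotation, which is deliberately out of scope here:

```python
import hmac
import hashlib

# Hypothetical illustration: instead of passing raw card numbers (PANs)
# around internal systems, replace them with keyed HMAC tokens. Only a
# system holding the secret key can reproduce the mapping. Key storage,
# rotation and HSM integration are out of scope for this sketch.

SECRET_KEY = b"replace-with-key-from-your-hsm"  # placeholder, not a real key

def tokenize_pan(pan: str) -> str:
    """Return a one-way token that can stand in for the PAN internally."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()

def mask_pan(pan: str) -> str:
    """Show only the last four digits, per common display rules."""
    return "*" * (len(pan) - 4) + pan[-4:]

token = tokenize_pan("4111111111111111")
print(mask_pan("4111111111111111"))  # ************1111
```

The point isn’t this particular scheme – it’s that card data that never crosses the wire in the clear can’t be harvested from the wire.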
Next, let’s start monitoring outbound transmissions on our firewalls and get more granular about firewall rules. Servers sitting in stores don’t need to be able to access the Internet. Or set up critical servers in a group and monitor ALL their outbound and inbound transmissions.
Wireless? OK, you say you don’t have any, but what’s to stop an employee from plugging in an access point? Rogue access point detectors should alert and shut down the port.
How about physical security? Servers installed in stores are the weakest link – I’ve found servers in closets, break rooms, and once, in the Gift Wrap department.
It’s much more expensive to retrofit than to install secure systems in the first place – and we are now paying the price.
One of my Rules of Thumb: You can pay now, or you can pay later, but if you pay later, you will always pay more.
I guess we’re paying more now, don’t you think?
All too often, executives make software purchases without any regard for regulatory requirements, security best practices or implementation costs. It’s my job (and yours) to educate and ask the hard questions, preferably BEFORE they shell out the big bucks, then bang their heads against cost overruns driven by regulatory requirements, add-ons and product customization.
Once you’ve signed the software contract, it’s way too late to ask questions. You need the sort of answers that give you some assurance that the software vendor is committed to writing secure code, invests in the time and training of its staff, and can tell you what its product actually does “under the hood.”
This should be a part of the “due diligence” organizations perform before making software purchases. It’s more important to your company, in the long run, than the financial solvency of the software company. A great company writing bad code is a higher risk than a shaky company writing excellent code.
It’s also a great time to see how mature the software company is in terms of software development practices. “Real” software companies have source code control, written documentation of processes and practices, as well as defined QA testing. Beware of a company that says it has all these things but has nothing in writing. Some questions to ask:
1. Where is security in your software development life cycle? (If they don’t know, or have to go look, be careful)
2. Do you test your software for security vulnerabilities? What do you use to test your software? Can I see the latest report? (kudos to them if you can see it. If they don’t test, you’re buying a pig in a poke)
3. If there is a database, do you have a security design for it? (Try to find out if they’ve basically opened up the database with one application ID that can do everything – cheap and easy coding. Cheap and easy to hack, too)
4. If this is a web-based application, do you provide documentation on how to install your application securely? (Watch for the ones that use a “default installation” of a web server to load their application on. It’s likely that if you secure the web server, you’ll break their application.)
5. What kind of logging and reporting does your application do? (You’re looking for good reporting on WHO, WHAT and WHEN. You’ll need this for your auditors, too)
6. How quickly will you test Microsoft patches so that we can patch? (This is critical; you don’t want to have a vulnerable server hacked because they don’t want to test, or be put off with promises of the next upgrade. Make sure they test for IIS, SQL and/or any other specialty the application uses.)
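Question 5 – the WHO, WHAT and WHEN – can be made concrete. Here’s a minimal sketch of the kind of structured audit trail you’d want an application to produce (the field names and helper are hypothetical, chosen only to illustrate the shape of a usable record):

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit-trail sketch: every security-relevant action is recorded
# as a structured WHO / WHAT / WHEN event that auditors can query later.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("audit")

def audit_event(who: str, what: str, target: str) -> dict:
    """Build and log a structured audit record."""
    event = {
        "who": who,        # authenticated user ID performing the action
        "what": what,      # action performed (e.g. UPDATE, DELETE, LOGIN)
        "target": target,  # object acted upon
        "when": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
    }
    logger.info(json.dumps(event))
    return event

record = audit_event("jsmith", "UPDATE", "customer/4412")
```

If a vendor can’t show you output at least this informative, your auditors will eventually be asking you why not.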
These are the kinds of questions software vendors need to keep hearing, so they know customers care about security and won’t buy a product that isn’t secure. Plus you’ll save yourself hundreds of hours of pain and aggravation. It’s a “win-win.” For you.
Knowing what to look for is half the battle. And kudos to the author, James Heary, a Cisco Security Expert. He’s just gotten added to my Blog Roll!
A lot of banks that I audit use contracted time and space on mainframes as a standard part of business. From what I’ve seen of this, there are both pluses and minuses:
On the plus side: no mainframe in the basement that requires at least two technically trained engineers.
On the minus side: you are entirely reliant on the third party for coding changes, reporting and security implementations. They will most definitely charge you for every little and big thing they can. It’s death by a thousand fees. You are also at their mercy for when they are willing to make a change for you. “Security flaw? We’ll fix it in the next release.”
Is there actually a cost savings? It varies from bank to bank. A tiny regional bank may find it difficult to acquire technically skilled employees, in which case it can make a lot of sense and save money. Consider, however, that the larger the organization, and the more IT functions are needed, the more complex management of that third-party relationship is going to be.
You rely on a SAS 70 for assessing the security of the service provider.
You rely on a SAS 70 for assessing the security of the service provider.
Yes, I repeated myself. Right now we have only the SAS 70 as a way to assess service providers, and that applies ONLY if the service bureau is handling financial services for the company. The SAS 70 is meant to provide assurance for the financial auditors of the client companies, NOT to test against a standard of any kind.
And even then, a SAS 70 tests only the controls that the service bureau itself says are in place.
There is no independent standard for testing “cloud computing” environments for secure practices.
Cloud computing vendors tout the possibility of security: “Cloud computing can be as secure, if not more secure, than the traditional environment,” said Eran Feigenbaum, director of security for Google Apps. Which, in my mind, means that it will be an additional cost to the business.
Eigen’s Rule of Thumb: you get what you pay for. How many businesses will pay for security beyond what the vendor offers as basic services? How many businesses will skimp because they can’t afford it and there is no requirement for it?
Short answer: too many.
There are three items to consider, and they are the same ones we must always deal with:
Confidentiality – WHO has access to your health records? Right now hospitals, doctors, pharmaceutical companies and the government have access to your health records. And probably a lot of marketing companies have pieces of that information as well. An online pharmacy clerk in West Overshoe knows all your prescription medications and is paid minimum wage.
Integrity – Is your data accurate? Or has someone stolen your medical information to get health care, died, and left you with a rolling disaster?
Availability – Can you inspect and correct your data – ALL your data, including any diagnoses? What if you don’t agree with one? Can you delete it?
If you compare the answers, it looks remarkably similar to where your (and my) credit record is right now – in the hands of the data miners. All my data belong to….them.
From a regulatory perspective, the Feds are not providing any real consequences for medical data breaches, or lack of HIPAA compliance. They are waving a large carrot around, instead. Only one or two organizations have actually been fined for non-compliance, despite a large uptick in data breaches. It is left to the outraged patient to sue for damages. There are no clear statistics for medical identity theft, because the appropriate agency isn’t tracking them.
It’s one thing to get information online, another thing to get it online safely. It seems to be a pattern in every industry that data goes electronic before any thought is given to security.
Posting information about oneself has definite perils. I thought long and hard about doing a blog, and I think (or try to) carefully about what I write and who I write about. When I “google” myself, (you have, haven’t you? I know you have) I still see posts from the year 2000. So consider that what you posted five years ago about your problem with your Exchange server using your work email address is probably still out there. How detailed was your post? If somebody read it today, what would it tell them about your network?
So I read with considerable interest a blog posting detailing the use of Facebook as the social research part of penetration testing, and I’d suggest you read it too, especially if your company is using Facebook as a Team tool.
I guess it’s another way of saying that Facebook isn’t just for identity thieves, stalkers and pedophiles anymore. Considering such articles as “Facebook Killed My Career,” a woman being killed due to her Facebook update, and now using it for hacking, I’m a bit dismayed by the ingenuity of “bad people.”
I’d also like to recommend an article, “Ten Settings Every Facebook User Should Know,” as a good starting point for adults and kids. And take the hacking article to your team if you’re using Facebook/MySpace for team building.
What differentiates this report from the study provided by McAfee? Well, for starters, it’s not a security company telling us we should buy more security products. I have learned to tune out reports from vendors over the years; there’s just a little too much self-interest at play.
The other interesting thing is that the Ponemon study looks at the activities of companies that have admitted a data breach. So their study uses harder data and is based on corporate activity (or lack of it, as it turns out) in response to a breach.
Here are a couple of quotes that rocked me:
More than 84 percent of all cases examined by Ponemon were repeat data breach offenders.
Hello? When did losing data become repeatable? And acceptable? And what about responding to the breach? Here’s the other statement:
Only 49 percent of respondents are creating additional manual procedures and control processes
So the other 51% are doing the same things they did that got them hacked in the first place. No wonder there are repeat offenders.
It is time to acknowledge that these breaches are not isolated incidents that happen by chance, but more likely a pattern of poor controls.
Where’s a really big stick when I need one?