During the “sales romance,” when software vendors are showing off the bells and whistles of their product to the ooohs and aahhhs of management, it’s a challenge sometimes to be the “wet blanket” of security reality.
All too often, executives make software purchases without regard for regulatory requirements, security best practices or implementation costs. It’s my job (and yours) to educate and ask the hard questions, preferably BEFORE they shell out the big bucks, then bang their heads against cost overruns driven by regulatory requirements, add-ons and product customization.
Once you’ve signed the software contract, it’s way too late to ask questions. You need the sort of answers that give you some assurance that the software vendor is committed to writing secure code, cares enough to invest in the time and training of their staff, and can tell you what their software actually does “under the hood.”
This should be a part of the “due diligence” organizations perform before making software purchases. It’s more important to your company, in the long run, than the financial solvency of the software company. A great company writing bad code is a higher risk than a shaky company writing excellent code.
It’s also a great time to see how mature the software company is in terms of software development practices. “Real” software companies have source code control, written documentation of their processes and practices, and defined QA testing. Beware of a company that says it has all these things but has nothing in writing. Here are the questions I ask:
1. Where is security in your software development life cycle? (If they don’t know, or have to go look, be careful)
2. Do you test your software for security vulnerabilities? What do you use to test your software? Can I see the latest report? (kudos to them if you can see it. If they don’t test, you’re buying a pig in a poke)
3. If there is a database, do you have a security design for it? (Try to find out if they’ve basically opened up the database with one application ID that can do everything – cheap and easy coding. Cheap and easy to hack, too)
4. If this is a web-based application, do you provide documentation on how to install your application securely? (Watch for the ones that use a “default installation” of a web server to load their application on. It’s likely that if you secure the web server, you’ll break their application.)
5. What kind of logging and reporting does your application do? (You’re looking for good reporting on WHO, WHAT and WHEN. You’ll need this for your auditors, too)
6. How quickly will you test Microsoft patches so that we can patch? (This is critical; you don’t want a vulnerable server hacked because they don’t want to test, or to be put off with promises of the next upgrade. Make sure they test against IIS, SQL Server and/or any other platform software the application depends on.)
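On question 5, it helps to know what a good answer looks like. Here’s a minimal sketch of WHO/WHAT/WHEN audit logging in Python – my own illustration with made-up field names, not any vendor’s actual format:

```python
import datetime
import json

def audit_event(user, action, target):
    """Build one audit record answering WHO did WHAT to which target, and WHEN."""
    record = {
        "who": user,        # the authenticated user ID, never a shared application ID
        "what": action,     # e.g. "VIEW", "UPDATE", "DELETE"
        "target": target,   # the record or resource that was touched
        "when": datetime.datetime.utcnow().isoformat() + "Z",
    }
    return json.dumps(record)

# One line per event, machine-parseable, so your auditors can answer
# "who changed this account, and when?" without guesswork.
print(audit_event("jdoe", "UPDATE", "customer/4412"))
```

If a vendor’s “logging” can’t be mapped onto those three fields, you’ll be reconstructing events by hand when the auditors come calling.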
These are the kinds of questions software vendors need to keep hearing, so they know customers care about security and won’t buy a product that isn’t secure. Plus you’ll avoid hundreds of hours of pain and aggravation. It’s a “win-win.” For you.
If you want to know what to look for in the growing cybercrime market of ATM card skimming, read the article and check out the pictures.
Knowing what to look for is half the battle. And kudos to the author, James Heary, a Cisco Security Expert. He’s just gotten added to my Blog Roll!
I know I keep harping on this “new” concept. The only “new” thing about it is the marketing around the name. It’s still off-site data storage and third-party management of corporate hardware and data. It’s got a prettier face than the old green-screen connection to the mainframe, but the concept of thin client/thick client is exactly the same.
A lot of banks that I audit use contracted time and space on mainframes as a standard part of business. From what I’ve seen of this, there are both pluses and minuses:
On the plus side: no mainframe in the basement that requires at least two technically trained engineers.
On the minus side: you are entirely reliant on the third party for coding changes, reporting and security implementations. They will most definitely charge you for every little and big thing they can. It’s death by a thousand fees. You are also at their mercy for when they are willing to make a change for you. “Security flaw? We’ll fix it in the next release.”
Is there actually a cost savings? It varies from bank to bank. A tiny regional bank may find it difficult to acquire technically skilled employees, in which case it can make a lot of sense and save money. Consider, however, that the larger the organization, and the more IT functions are needed, the more complex management of that third-party relationship is going to be.
You rely on a SAS 70 for assessing the security of the service provider.
You rely on a SAS 70 for assessing the security of the service provider.
Yes, I repeated myself. Right now we have only the SAS 70 as a way to assess service providers, and it applies ONLY if the service bureau is handling financial services for the company. The SAS 70 is meant to provide assurance for the financial auditors of the client companies, NOT to test against a standard of any kind.
And even then, the only controls tested in a SAS 70 are the controls the service bureau itself says are in place.
There is not an independent standard to test “cloud computing” environments for secure practices.
Cloud computing vendors tout the possibility of security: “Cloud computing can be as secure, if not more secure, than the traditional environment,” said Eran Feigenbaum, director of security for Google Apps. Which, in my mind, means that it will be an additional cost to the business.
Eigen’s Rule of Thumb – you get what you pay for. How many businesses will pay for security beyond what the vendor offers as basic services? How many businesses will skimp because they can’t afford it and there is no requirement for it?
Short answer: too many.
What happens when we build a national database, with everyone’s health records? Will everyone get better, less expensive healthcare? That’s the impetus for funding a portion of the stimulus bill to push more health providers into the electronic age.
There are three items to consider, and they are the same ones we must always deal with:
Confidentiality – WHO has access to your health records? Right now hospitals, doctors, pharmaceutical companies and the government have access to your health records, and plenty of marketing companies probably have pieces of that information as well. An online pharmacy clerk in West Overshoe knows all your prescription medications and is paid minimum wage.
Integrity – Is your data accurate? Or has someone stolen your medical information to get health care, died, and left you with a rolling disaster?
Availability – Can you inspect and correct your data – ALL your data, including any diagnoses? What if you don’t agree with one? Can you delete it?
If you compare the answers, it looks remarkably similar to where your (and my) credit record is right now – in the hands of the data miners. All my data belong to….them.
From a regulatory perspective, the Feds are not providing any real consequences for medical data breaches, or lack of HIPAA compliance. They are waving a large carrot around, instead. Only one or two organizations have actually been fined for non-compliance, despite a large uptick in data breaches. It is left to the outraged patient to sue for damages. There are no clear statistics for medical identity theft, because the appropriate agency isn’t tracking them.
It’s one thing to get information online, another thing to get it online safely. It seems to be a pattern in every industry that data becomes electronic before any thought of security.
I don’t have a Facebook profile. I’ve never even been ON Facebook. There’s something about posting one’s life constantly that I just don’t find all that appealing. I’ve got too much to do online as it is. I admit to being on LinkedIn, mostly because my University dean pushed the entire graduating class from Norwich to get connected, but I find it is of limited value. I often get people I don’t know trying to connect into my network. If I don’t know you personally, I’m not about to do any connecting.
Posting information about oneself has definite perils. I thought long and hard about starting a blog, and I try to think carefully about what I write and whom I write about. When I “google” myself (you have, haven’t you? I know you have), I still see posts from the year 2000. So consider that what you posted five years ago about your problem with your Exchange server, using your work email address, is probably still out there. How detailed was your post? If somebody read it today, what would it tell them about your network?
So I read with considerable interest a blog posting detailing the use of Facebook as the social research part of penetration testing, and I’d suggest you read it too, especially if your company is using Facebook as a Team tool.
I guess it’s another way of saying that Facebook isn’t just for identity thieves, stalkers and pedophiles anymore. Considering such articles as “Facebook Killed My Career,” a woman being killed due to her Facebook update, and now using it for hacking, I’m a bit dismayed by the ingenuity of “bad people.”
I’d also like to recommend an article, “Ten Settings Every Facebook User Should Know,” as a good starting point for adults and kids. And take the hacking article to your team if you’re using Facebook/MySpace for team building.
The Ponemon Institute (I keep wanting to say Pokemon, don’t you?) is about to release its fourth annual study on data breach activity.
What differentiates this report from the study provided by McAfee? Well, for starters, it’s not a security company telling us we should buy more security products. I have learned to tune out reports from vendors over the years; there’s just a little too much self-interest at play.
The other interesting thing is that the Ponemon study looks at the activities of companies that have admitted a data breach. So their study uses harder data and is based on corporate activity (or lack of it, as it turns out) in response to a breach.
Here are a couple of quotes that rocked me:
More than 84 percent of all cases examined by Ponemon were repeat data breach offenders.
Hello? When did losing data become repeatable? And acceptable? And what about responding to the breach? Here’s the other statement:
Only 49 percent of respondents are creating additional manual procedures and control processes
So the other 51% are doing the same things they did that got them hacked in the first place. No wonder there are repeat offenders.
It is time to acknowledge that these breaches are not isolated incidents that happen by chance, but more likely a pattern of poor controls.
Where’s a really big stick when I need one?
Sometimes you just have to laugh. Hackers edited roadside signs in Texas.
I am willing to bet that the padlock was flimsy and the password even flimsier (IF it had one). Nice of them not to use naughty words and REALLY embarrass the Public Works Department. And when was the last time that password was changed? (Oops, I must remember I’m talking about Texas.)
The head of Public Works got all huffy, but really should have been considering what the sign might have said, and thanking his lucky stars he got off so lightly. Check out the KXAN spoofings of the Zombie alert.
It goes to show you that the low-tech attack on high-tech trumps fancy attack code every time.
Some interesting information is coming forward about the break-in at Heartland Payment Systems. The Secret Service has identified an overseas suspect, according to StoreFront BackTalk.
What’s more interesting (to me, at least) is that the sniffer software installed on Heartland’s systems had been deactivated by the time it was found. That could mean any number of things: it might not be the malware involved in the data theft, it might have been waiting to be re-activated, or it might have been turned off because the thieves knew they had been spotted.
From an audit perspective, this makes me return to the challenge of how we monitor changes to our systems. How do we know when something has been installed or deleted? There are a number of software packages that purport to be able to monitor and report on changes (Tripwire comes to mind), but as an engineer I know that changes happen on a server architecture all the time.
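To make the Tripwire idea concrete, here’s a minimal file-integrity sketch in Python – my own illustration of the general technique, not how any particular product actually works. Hash everything once to form a baseline, then diff a later snapshot against it:

```python
import hashlib
import os

def snapshot(root):
    """Hash every file under root; this forms the integrity baseline."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def diff(old, new):
    """Report what was added, removed, or changed since the baseline."""
    added   = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, changed
```

The code is the easy part. The hard part is exactly the problem above: legitimate changes happen on server architectures all the time, so every diff has to be reconciled against approved changes or the alerts turn into noise nobody reads.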
Do we simply monitor traffic to and from the systems? I can’t imagine that this would be feasible with payment systems that have 100 million transactions a month, like Heartland.
Do we look for anomalies in the traffic? Even tougher and more CPU intensive. We can watch outbound firewall traffic to block lists of known malware servers, but that list would change constantly.
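For the outbound blocklist idea, the matching mechanics are trivial – keeping the list current is the real work. A sketch in Python (the networks shown are reserved documentation ranges, not a real threat feed; in practice the list would be refreshed constantly from an external source):

```python
import ipaddress

# Hypothetical feed of known malware-server networks. These are the
# TEST-NET documentation ranges, used here purely as placeholders.
BLOCKLIST = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/24")]

def is_blocked(dest_ip):
    """Check an outbound destination address against the blocklist."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in BLOCKLIST)
```

Even this simple check has to run on every outbound connection, which is why the CPU cost and the churn of the list itself make it a harder problem than the ten lines suggest.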
Ideas? Suggestions? I’m shaking my head.
If you haven’t heard by now, the “downadup” worm (renamed various other things by competing vendors) is propagating itself like crazy across the Internet. Various software vendors have added some artificial hype about how fast it is spreading, but I didn’t get sweaty palms until I read that US-CERT is now saying that the patch/Technote Microsoft released to address the issue doesn’t work.
Here’s how it’s going so far – the worm installs itself via the “autorun” feature that is triggered whenever a removable device is connected to a computer. This includes, but is not limited to, inserting a CD or DVD, connecting a USB or FireWire device, or mapping a network drive. The connection can result in code execution without any additional user interaction.
So Microsoft issued an out-of-cycle patch that wasn’t really a patch or a fix – just a workaround. The patch/fix/workaround involves disabling the autorun function inside the Windows registry. The instructions in the Technet article 91525 were incorrect, and did not disable autorun.
So if you’ve done this on your network and think you are safe… you’re not.
A newer Microsoft Technet article is available here.
At first I was confused, because the article provides instructions for disabling autorun as a “workaround” against the worm propagating itself, but does not address the vulnerability the worm is actually designed to exploit.
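For reference, the workaround boils down to registry changes. Here’s a sketch as a .reg fragment – test before deploying, and treat this as my reading of the guidance rather than gospel. The first value is the Microsoft-documented way to tell Windows to ignore autorun on all drive types (0xFF covers everything); the second is the stronger fix US-CERT recommends, which remaps Autorun.inf processing to a registry key that doesn’t exist, so autorun files are never parsed at all – useful precisely because the first value has historically not been honored correctly:

```
Windows Registry Editor Version 5.00

; Disable autorun on all drive types (0xFF = all)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff

; US-CERT's workaround: point Autorun.inf handling at a nonexistent key
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\Autorun.inf]
@="@SYS:DoesNotExist"
```

Remember, though: this only slows propagation. It does nothing about the underlying vulnerability.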
After some more digging, the actual vulnerability we should be concerned about is that the worm employs an attack against the “server” service listed as a Bulletin in October 2008. The exact details from the Security Bulletin MS08-067 are as follows:
“This is a remote code execution vulnerability. An attacker who successfully exploited this vulnerability could take complete control of an affected system remotely. On Microsoft Windows 2000-based, Windows XP-based, and Windows Server 2003-based systems, an attacker could exploit this vulnerability over RPC without authentication and could run arbitrary code. If an exploit attempt fails, this could also lead to a crash in Svchost.exe. If the crash in Svchost.exe occurs, the Server service will be affected. The Server service provides file, print, and named pipe sharing over the network. The vulnerability is caused by the Server service, which does not correctly handle specially crafted RPC requests.”
It seems the only “solution” we are offered from Microsoft for users of anything other than Server 2008 is a manual fix to try and stop propagation.
Where’s the real fix? Not the workaround (which didn’t work). Am I missing something? “Where’s the beef?”
Heartland Payment Systems, the sixth-largest US credit card payment processor, has just acknowledged that its payment systems have been breached. The discovery of malware on the system last week by forensic auditors led to this announcement.
Credit card payment processors have to jump through enormous hoops to keep their systems secure. Their systems and applications must be compliant with Payment Card Industry (PCI) data security standards, and they must pass an external compliance audit every year.
According to the CFO, the forensic teams found that hackers “were grabbing numbers with sniffer malware as it went over our processing platform.” I immediately thought of Hannaford and the same issue of sniffer capture.
Heartland processes over 100 million credit card transactions a month. That’s far more than the 2 million processed by Hannaford. The FBI and Secret Service are involved. The discovery was brought about not by Heartland finding it, but by the folks at Visa, who noted a pattern of suspicious activity with Heartland as the common denominator.
This is really not surprising. There is obviously a group of talented coders who have figured out how to drop this code on critical servers to capture data as it “goes by.”
I’m sure the Payment Card Consortium does not want to add “encrypt all your data streams, inside and out, on your network” to the PCI standard, but I believe it’s inevitable. Internal networks are no longer inviolate territory where significant data can safely travel unencrypted.