I’ve written before about how the Payment Card Industry’s (PCI) Data Security Standard (DSS) has some loopholes that make it easy to look “compliant” and therefore “secure.” To comply with the DSS requirements, merchants have three options:
1. write their own self-assessment report, signed off on by a C-level officer;
2. hire auditors to do a self-assessment and produce a report the C-level can sign off on;
3. or hire an independent, PCI-accredited QSA firm to provide an independent assessment and report to their acquiring bank.
Consider that the QSA firm is required to have liability insurance, pay a hefty yearly fee to the Consortium, and provide an independent assessment. Chances are the independent firm will work harder to make sure its report is accurate, given the stick of ultimate liability that the PCI holds over its head.
It costs merchants a lot more to have the report done independently, and they can’t hide security problems as easily. Just the fact that they have the do-it-yourself option in two out of three doesn’t give me confidence in their reporting. Any time organizations “self-assess,” there can be an enormous opportunity for fraud by the unscrupulous.
Interestingly enough, MasterCard has decided to remove the “self-serve” option for Level 2 merchants: under the new rules, they must hire a PCI-approved auditor to complete an annual on-site data security assessment by December 31, 2010.
Previously, those merchants were only required to complete an annual self-assessment questionnaire in order to comply with MasterCard’s Site Data Protection Program. The Payment Card Industry Data Security Standard (PCI DSS) forms the baseline for MasterCard’s Site Data Protection Program.
Will PCI’s DSS begin requiring it as well? Is this a move to test the waters? I hope so. Even though VISA says it has no plans to do so, the PCI standards board has a lot more power to remove poor QSAs than to try to assess an internal team.
I recently attended a seminar at a well known southwestern school on building an Incident Response Team. During the discussion about Team membership, management oversight of the Team and related responsibilities, I noticed that the membership of the Team and the Oversight Committee was lacking some critical input.
An area often overlooked, especially when being developed by those in the Information Technology field, is the aspect of physical security. The campus police and the maintenance department were the two members lacking in this particular seminar. When I brought up this issue, it was dismissed with the equivalent of: “Oh, them.”
(They may never be getting into their offices again, or have decent air conditioning. And keys? forget it.)
Considering an “IT event” to be the only worthy event included in the IRT criteria for action is truly shortsighted. Physical events such as a string of burglaries on campus, flooding or water damage can have just as much impact on communications as a network outage. Not to mention the idea that those events would be a great shield for someone intent on attacking the network. If the IRT is unaware of these events, they become ineffective.
Not only that. Bringing physical security to the common IRT table is important for those folks, as well. They may be unaware of events in the IT world that would impact securing the overall physical environment. Having all parties educate each other provides a unified response, and that’s a much better incident response overall.
I just finished reading an absolutely terrific article from a sister auditor who is now on my short-list of must-reads. She’s got a great name (Gunn) and a killer sense of humor (sorry, I could NOT resist).
“Why Suing Auditors Won’t Solve the Problem” is worth a read for her point of view on what it’s really like in Audit-Land.
A bank that was impacted by a data breach at a merchant is suing the QSA firm that performed the PCI exam and signed off that the merchant was compliant. The bank wants to recoup the money it lost reissuing all of its customers’ credit cards and dealing with related fraud from the breach.
Her point of view presents the difficulties auditors have in providing reports and doing exams, as well as the foibles of various firms.
It’s a painful, but absolutely true, description of how clients can respond to auditors when they don’t get the exam results they like – “Throw the bums out, and hire better (meaning cheaper AND more cooperative) ones!” – as well as pushing a report documenting problems into the circular file.
What is equally painful is that there are certainly “security auditors” out there who are more than willing to do the “check box” report, collect their check, and hit the door. They are usually the cheapest bidder, by the way.
She makes an interesting point about PCI auditors, however. To be compliant, merchants can do one of three things: write their own report; hire auditors to do a report they can sign off on; or hire an independent, licensed QSA firm to provide an independent report, on their behalf, to their acquiring bank (which, until recently, did not have to forward the report to the Credit Card Consortium).
Consider that the QSA firm is required to have liability insurance, pay a hefty yearly fee to the Consortium, and provide an independent assessment. This requires a firm with pretty deep pockets (a juicy candidate for a lawsuit) and a staff with a good skillset. A QSA firm’s staff must have at least 10 years of experience, with a CISSP running the assessment. As a result, the field of QSA firms is limited to large audit/accounting firms and security companies.
The challenge is that the client they are assessing is also paying their bill. And most of the security companies doing PCI exams also sell security products. Two fundamental conflicts with true independence, don’t you think?
Most merchants tend to do the internal self-exam, where they can manage their own report or hire a firm to do the report they can then sign off on. This means they may hire firms that do not have the same level of experience to get the job done more cheaply. See Eigen’s Rules of Thumb numbers 1 and 6.
The second challenge is that merchants can change the configuration that was tested a week after the QSA firm issues a report.
Perhaps the most fundamental issue is the public’s expectation that PCI compliance = a secure architecture that protects their information. Given that a large percentage of merchants are only partially compliant (meaning that they have met some, but not all, of the requirements and have a plan in place to be compliant at some point soon – e.g., TJMaxx, and we can see how that worked out) and most merchants are doing the internal exam, there is generally a recipe for chaos.
Acquiring banks, of course (meaning those banks that have acquired, and are supposed to manage, merchant accounts), are placed in the role of security monitor by the Credit Card Consortium. They also levy the fines handed down by the CCC and set timeline requirements for PCI compliance.
Can they cut off a merchant who is making the Bank loads of money for not being compliant? Yes. Are they likely to? Probably not.
Consider that if a merchant is not fully compliant, their level of security is below the minimum. Would I want to give that merchant my credit card info? Probably not. The merchant would start to lose business based on that poor reputation, which is why PCI doesn’t publish a list of merchants who are fully compliant.
Confused yet? Me, too. Use cash and checks. Preferably cash.
So what is a poor admin to do? Focus on securing the systems under your purview and documenting your efforts. If you’re doing the job you know you should be doing, sooner or later, when the auditors show up at your door, your efforts will be validated and you can sleep at night.
I have a series of Google Alerts set up to alert me daily on such interesting topics as data theft, data breach, etc., etc., and I have one set up for my full name, or any two parts thereof. I have, as it happens, a very uncommon name, and should someone decide to post my name and information for sale on any of “those” forums, or otherwise post as me, I will be notified.
It is common these days for HR staff to run a search engine query on potential employees. Searches still turn up emails I sent out in 1999 about a technical issue at a former workplace, archived in various places.
So you have a terribly common name – no big deal? Try using your full name in quotes, with a plus sign, then your city and state. So, for example, John Smith might start out with Google results of “about 66,500,000,” but the use of quotes narrows the results to “about 5,730,000.”
Now add a city, say Atlanta, and the results draw down further to around 130,000. Paradoxically, if you add Atlanta, Georgia, the results go up to 150,000, but if you add the state as GA, the results drop further to around 39,000.
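The narrowing steps above can be sketched in a few lines of Python. The helper names and URL format here are just an illustration (and the result counts Google reports will drift over time, so none are hard-coded):

```python
# Build successively narrower search queries for a name, as described
# above: quote the full name, then append city and state terms.
from urllib.parse import urlencode

def build_query(name, *extra_terms):
    """Put the full name in quotes and append any narrowing terms."""
    terms = ['"%s"' % name] + list(extra_terms)
    return " ".join(terms)

def search_url(query):
    """Turn a query string into a Google search URL."""
    return "https://www.google.com/search?" + urlencode({"q": query})

broad = build_query("John Smith")                  # "John Smith"
city = build_query("John Smith", "Atlanta")        # "John Smith" Atlanta
state = build_query("John Smith", "Atlanta", "GA") # "John Smith" Atlanta GA
print(search_url(state))
```

Paste each URL into a browser (or just type the quoted query into the search box) and watch the result count drop as the terms get more specific.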
If you’re on the web on a regular basis, do yourself a favor and keep an eye on yourself.
As you may know, one of my favorite posting topics has to do with ATMs. I call them Automatic Theft Machines because there are way too many stories of equipment being hacked, and/or swiping hardware being installed, or people just driving away with them.
Well, along comes a story about the progression of this issue: In Eastern Europe, the bad guys have perfected the art of getting the machine to spit out all its money on demand.
According to the article (linked above), authorities say there must be some sort of inside access to allow the software to be installed. The article claims that after unlocking the security, the inside equipment is quite vulnerable.
Hmmm, hard on the outside, yummy and soft on the inside… where have we heard that before? And something else interesting to note: many of the ATMs appear to be Diebolds, the same company that makes voting machines for us, and one that was implicated in another attack earlier this year, also in Eastern Europe.
The ATMs utilize a scaled-down version of Windows XP, which actually doesn’t make me feel any better at all.
I’m a big advocate of disabling HTML in email messages. The marketing people scream because they can’t run their pretty code to sell products and convey appealing images. Other folks love being able to use those nice fonts you can’t use with Rich Text for signatures.
But a pretty face can’t justify the dangers in accepting HTML email. I’m not talking about Gmail, Yahoo or Hotmail – those are web clients for email. By default they accept HTML email, but you can turn that off.
Virus writers have known for years that the auto-open of Microsoft Outlook will “run” HTML code, including activating embedded web bugs.
The recent report on Web Tracking by the University of California at Berkeley spotlights the use of web bugs on web pages and HTML email messages. HTML Email can be written to:
1. Use a web bug to find out if a particular message has been read by someone and if so, when the message was read.
2. Use a web bug to provide the IP address of the recipient.
3. Use a web bug to report how often a message is being forwarded and read.
Spammers love web bugs, because they can be invisibly embedded in the email HTML code to do the following:
1. To detect whether someone has viewed a junk email message. People who do not view a message are removed from the list for future mailings.
2. To synchronize a Web browser cookie to a particular email address. This trick allows a Web site to know the identity of people who come to the site at a later date.
Plus, phishers can use HTML to disguise the link that they send in the email so that it “looks” like a legit site.
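The disguised-link trick can be sketched with a few lines of Python: the visible anchor text looks like one site while the href quietly points somewhere else. This is an illustration of the idea (the hostnames are made up), not a production phishing filter:

```python
# Flag HTML links whose visible text is a URL that differs from the
# link's real destination - the classic phishing disguise.
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None        # destination of the <a> tag being read
        self.text = ""          # visible text inside the <a> tag
        self.suspicious = []    # (shown_text, real_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            # The displayed text claims to be a URL, but the real
            # destination doesn't contain it: suspicious.
            if shown.startswith("http") and shown not in self.href:
                self.suspicious.append((shown, self.href))
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">http://www.mybank.com</a>')
print(checker.suspicious)
```

This is exactly why hovering over a link (or reading the message source) before clicking is worth the habit.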
HTML in email can be turned off, and frankly, should be turned off.
It seems like every big vendor is pushing for business to “use the cloud.” Only now are we starting to see some questions arise in the general media about how secure cloud computing is.
The short answer is: it’s not. Intrinsically, whoever has physical ownership of your hardware has your data. It’s all very nice to say you will save money by outsourcing, but there are no hard and fast statistics to support that. What you save in outsourcing may come back in the form of increased costs for securing your data outside of your data center.
And you do know, of course, that the Feds can look at your data in that cloud without a warrant, don’t you?
So what CAN you do to save money and justify the “real costs” of keeping your data local to higher management?
First: Explore virtualization – Many organizations have realized enormous hard savings in electricity, storage space, UPS, etc by utilizing Virtual Machines to run their applications. The added bonus is that you can have immediate full backups stored elsewhere. It’s also marvelously easy to test a patch on a virtual machine, without having to worry about breaking something in production.
Second: Renegotiate contracts – If a vendor isn’t meeting your standards, now is the time to switch. There is enormous competition going on in this economic downturn. If nothing else, get a better deal than the contracts you have.
There’s quite a bit on the web that can help you justify costs internally. But when the discussion about clouds comes up, make sure you ask the questions needed, such as:
1. How will we provide audit information from the cloud?
2. How do we control access to our data? (This will be the real question, because ultimately, the cloud vendor will control access, not your company. You may be able to control application access, but that does not address the server OS or underlying database controls.)
3. How will we monitor access to our data? Because there is no standard for thin-client computing security, the answers will be all over the map, and usually cost you more money.
The PCI standards council is currently looking at cloud computing with an eye to evaluating the security of credit card data. I’ll be interested to hear what they come up with. In the meantime, consider one of my Rules of Thumb: You can outsource data, but you can’t outsource data responsibility.
If you do find a vendor that says they can help you stay compliant, make sure you understand the contract very, very well. Your job could depend on it. I suspect the cost savings will be small, but it’s worth examining just for comparison’s sake with what your organization is doing now.
A study just released by the University of California at Berkeley details just how much big business uses web tracking, and how little they appear to care about the privacy of users.
This really is not new information. The biggest businesses use it constantly to track visitors, and even Google gives you quite a lot of information via Google Analytics. The issue, I believe, is how much is really being tracked and how well it is hidden.
Heard of “web bugs”? A web bug is a graphic on a web page or in an (HTML) email message that is designed to monitor who is reading the web page or email message. Web bugs are almost always invisible because they are typically only 1-by-1 pixel in size. They are represented as HTML IMG tags, which don’t show up on the page at all – you only see them if you look at the source code. And how many of us do that?
The report goes into some very relevant detail about how web bugs are the predominant tool used by businesses because they are simple and “invisible” to the visiting user. For example, if you look at the source code of a web page, you’ll see a tiny IMG tag pointing at a third-party server.
They are easy to identify because they contain pointers to another IP address.
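That “easy to identify” observation can even be automated. Here is a rough Python sketch that scans a page’s IMG tags for 1x1 images hosted somewhere other than the page itself; the hostnames are made up for illustration, and real trackers can be sneakier than this:

```python
# Look for the classic web-bug signature: a 1x1 pixel image whose
# source lives on a different host than the page being viewed.
from html.parser import HTMLParser
from urllib.parse import urlparse

class BugFinder(HTMLParser):
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.bugs = []

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <img ... /> tags the same as <img ...>.
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        tiny = a.get("width") == "1" and a.get("height") == "1"
        host = urlparse(a.get("src", "")).netloc
        if tiny and host and host != self.page_host:
            self.bugs.append(a["src"])

finder = BugFinder("www.shop.example")
finder.feed('<img src="http://ads.tracker.example/b.gif" width="1" height="1">')
print(finder.bugs)
```

A regular image served from the page’s own host, or one with normal dimensions, would pass through untouched; only the tiny third-party image gets flagged.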
So, why should we care? After all, marketing people watch what we do all the time in the retail marketplace so they can target their products to the right audience. Benignly, ad networks can use web bugs to add information to a personal profile of what sites a person is visiting. The personal profile is identified by the browser cookie of an ad network. At some later time, this personal profile, which is stored in a database server belonging to the ad network, determines what banner ad one is shown.
It’s rather like having someone shadow everywhere you go during the course of the day. They just follow you around, writing down everything you look at and/or buy. Then they sell that information to someone else, but they won’t tell you what information they’ve written down or who they’re selling it to.
That seems pretty intrusive, when it’s put that way, doesn’t it?
Do you want to be able to SEE web bugs when you’re surfing? There used to be a nifty piece of software, BugNosis, but it is no longer available. It’s hard to complain about what you can’t see. So the guy following you is now invisible.
“In our analysis of the privacy policies, we found that 46 of the top 50 companies affirmatively state that they share data with affiliates, and the four remaining were unclear,” the researchers report. “We sent each company a request via email or an online Web form for a list of each affiliate they may share data with. We received 14 replies, but none included the lists we asked for. Most stated that they do not disclose corporate information. Based on our experience, it appears that users have no practical way of knowing with whom their data will be shared.”
I run into an awful lot of engineers who hate paperwork (I feel the same way). They are busy fixing problems, building new application support and dealing with upper managers who have no idea what they’re asking for, plus clueless users – and then I come along to top it off, asking for a bunch of documentation.
Been there, done that.
I gently explain, after I have corrected their misapprehension that auditors know nothing about IT, that if it’s not written down, it doesn’t exist. I know some engineers who believe in job security that way, but the fact is it just makes it harder for the next person to step into that role. That role will always exist. So why not make it easier for the next person? Sooner or later, that next person will be you.
Why write down how a server should be built? Why write down how the servers get patched? Why bother changing the administrator password on all the servers and a different one on all the workstations? Why check to make sure that the anti virus server is actually updating all those machines? Why test to confirm that the group policy for downloading patches is actually working, and how to do that?
It’s part of being a professional engineer. It’s part of all the certifications we have signed off on; that pesky ethical paragraph that asks us to be responsible, dedicated and at the top of our game whether the job asks for that, or more commonly, does not.
It’s also a really great way of showing just how much work you do.
“Good Enough” is short for “Good Enough to Get Hacked.”
Bottom line? When you are sitting in front of a judge testifying as to what steps were taken to secure your organization, you WILL be asked what policies, standards and procedures you were following. If you have none to give the judge, you will be roasted by the jury, and your company will lose its case.
We can blame the company for not “making” us do it, but that’s not the real deal, is it?
I see a LOT of firewall configuration files and router configuration files. It’s the bane of this auditor’s existence to read through a PIX firewall config (up to 500 pages of text). After the 35th page, you could drive a truck through that firewall while I tried to wake up.
Plus, I can’t just log on to the firewall and look at it, oh no. I’m an auditor, and we aren’t trusted with such things (probably just as well). So, when I find a tool that will look at the configuration text file, analyze it and give me a nice HTML report, I want to throw a party.
Allow me to introduce Nipper. It takes a microsecond to turn out an absolutely superb report (and found things I missed!). AND it doesn’t just do Cisco, it also handles Nortel, Sonicwall, Juniper and Nokia. I’m in love. AND I gave the guy $50.00. I hope he had a party for himself. What an awesome piece of work.
It runs in Linux or Windows, and somebody else built a GUI front end, if command line makes your eyes cross. Grab your config files and see what you might have missed.
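Nipper itself does far more, of course, but the core idea of scanning a config text file for risky lines can be sketched in a few lines of Python. The rule patterns below are illustrative examples, not Nipper’s actual checks:

```python
# Scan a firewall/router config text for a few classic risky lines,
# reporting the line number and a short warning for each hit.
import re

RISKY_PATTERNS = [
    (re.compile(r"permit\s+ip\s+any\s+any"),
     "ACL permits any-to-any IP traffic"),
    (re.compile(r"no\s+service\s+password-encryption"),
     "passwords stored unencrypted"),
]

def scan_config(text):
    """Return (line_number, warning) pairs for risky config lines."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings

sample = ("access-list outside_in permit ip any any\n"
          "no service password-encryption\n")
for lineno, warning in scan_config(sample):
    print(f"line {lineno}: {warning}")
```

A toy like this catches only the patterns you thought to list; a real analyzer such as Nipper understands the device’s rule semantics, which is exactly why it found things I missed.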