Right after one of the debates I found myself knee deep in an argument about Dodd-Frank. A close personal friend of mine, a very bright bulb with whom I’ve never found a reason to disagree, brought up Dodd-Frank as an example of horrible legislation that’s crippling banks and contributing to our dismal economic conditions. Whoa, whoa, whoa… rail against taxes, complain about government spending, assail the current administration for the dramatic escalation of our national debt. But leave Dodd-Frank out of it, because that’s not one of our bigger problems. I could offer a five-thousand-word defense of the best parts of Dodd-Frank without even pausing to organize my thoughts, but I don’t need to go that far. I can sell its virtues in a single, simple sentence: any legislation that created the Consumer Financial Protection Bureau is instantly more effective than anything that’s come before it in my lifetime.
No, seriously… in my lifetime.
I’ve already screamed from the rooftops about how much I like the CFPB. In my own geeky, nerdy way I’m proud to admit that I look forward to getting their regular updates and announcements, because they always seem either ridiculously relevant or illuminating about how they’re hot on the heels of yet another predatory business practice. In barely a year’s time they’ve pushed deeper into the heart of the issues that crashed Wall Street in 2008 than anyone could have hoped (that’s my opinion, but one I’m willing to defend). And their examiners appear to be freaky efficient. I’ve been hearing from our banking clients that they’re drilling in on details, covering more territory than was expected, and discussing issues much closer to protecting customers (and members). Our practice recently issued a bulletin to our clients alerting them to the fact that CFPB examiners expect related oversight to be pushed down to external business partners and vendors. This is not a new consideration; it’s exactly the same thing that’s supposed to happen with regard to GLBA (and one of the reasons we developed our related software and services for same), but still, we anticipated this would take several exam cycles to surface. The CFPB cut right to the chase in a heartbeat, which is stunning for such things. It’s almost like someone told them where to look and what to look for, which to a certain extent is true.
The CFPB didn’t start as most new agencies do. They didn’t recruit green examiners and place them under the management of a few practiced hands. What they apparently have done is hire well-seasoned examiners from related regulatory agencies (e.g., FDIC, FRB, OCC), have them contribute to creating the necessary procedures, and then send them out to bring it all to life. So on Day One they already know where the bodies are likely to be buried and what to do about it. It’s brilliant, it’s efficient and it’s the very best example of your government doing its job.
Here are some snippets from my in-box:
And the kicker about these three items? This was all issued this month (December 2012) and we’re not even quite halfway through it.
Which is why I don’t much care for any manner of compliance-based assessments that are self-administered.
Companies have had this crazy notion for more than a decade now that the best way to identify and address risks inherent within the infrastructure is to ask key stakeholders a somewhat generic set of questions and use their responses to figure out what’s what. Most of the time the people driving these initiatives are either information security professionals or corporate compliance people who either believe they already know where the problems are or are looking for the simplest and easiest way to satisfy some requirement. But what they often fail to grasp is that it’s almost impossible to draft a common set of questions that applies to the vast majority of stakeholders, much less one that will be interpreted consistently across the stakeholder population. Plus the perceived benefit of using a self-assessment approach to reduce effort and required support resources is almost always an illusion. Most of the time saved by not having someone ask the questions and record the answers is instead consumed by needing to explain the format, explain the questions or clarify and clean up the responses. While supporting one such program recently, each assessment required a kick-off meeting, a follow-up meeting to review the status of the assessment, a third meeting to review the initial draft of the questionnaire, a fourth meeting to review the resulting report(s) and a largely untracked number of hours to help generate all of the related support documentation. Regardless of the size of the entity being assessed, each one consumed somewhere close to eight hours. While that might seem like a scary large number, the really scary part was that depending on which risk analyst was responsible for the assessment and the personality/mindset of the stakeholder completing it, the results looked very different from one another. It was almost impossible to generate meaningful metrics across the assessment population because a “Yes” answer for one question might mean the same as an “N/A” in another; there was no way to know that.
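To make that last point concrete, here’s a minimal sketch (every assessment name, question and answer label below is made up) of the after-the-fact mapping you’d need just to roll inconsistent responses up onto a common scale – which is exactly the cleanup work the self-assessment was supposed to avoid:

```python
# Illustration only: two completed self-assessments use different labels
# ("Yes" vs. "N/A") to mean the same thing for the same question.
raw_responses = {
    "assessment_a": {"Q7": "Yes"},   # here "Yes" means the control doesn't apply
    "assessment_b": {"Q7": "N/A"},   # same meaning, different label
}

# A canonical scale has to be reconstructed per assessment, per question --
# hand-built after the fact by whoever is stuck cleaning up the responses.
canonical_map = {
    ("assessment_a", "Q7"): {"Yes": "not_applicable", "No": "control_present"},
    ("assessment_b", "Q7"): {"N/A": "not_applicable", "Yes": "control_present"},
}

def normalize(assessment: str, question: str, answer: str) -> str:
    """Translate a raw answer onto the canonical scale, or flag it."""
    return canonical_map.get((assessment, question), {}).get(answer, "UNMAPPED")

for name, answers in raw_responses.items():
    for q, a in answers.items():
        print(f"{name} {q}: {a!r} -> {normalize(name, q, a)}")
```

Both rows normalize to the same value, but only because someone reverse-engineered what each respondent meant. Without that, any metric built across the population is noise.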
Another issue I’ve always had with the self-assessment approach is that while some stakeholders take it seriously and do a remarkably thorough job, others race through it with little hesitation just to fill in the blanks and get it off their desk. Sometimes you can detect which is which, sometimes you can’t. Plus the approach fails to capture much of the rich and relevant information related to each question and the underlying risk behind it. I recall conducting a team-driven risk assessment years ago where one stakeholder after the next, covering a very broad sampling of the infrastructure, kept lamenting the lack of a proper disaster recovery plan. They had something to show auditors/examiners, but to a person no one believed it was a truly viable plan. All but the CIO brought it up as a concern, and when pressed a bit about why, they all shared a common fear: if their main office was closed unexpectedly for twenty-four hours, regardless of the reason, they were likely out of business. A related self-assessment question would ask “Do you have a current and recently tested DR plan?” – most respondents on that engagement would simply have selected “Yes” and moved on to the next question without ever being challenged to share their concerns. Where’s the value in having a repository of questions and answers when it fails to capture the true essence or dimension of risk?
And the biggest issue I’ve always had with self-assessment questionnaires and their related templates is that they’re so often poorly designed. I can guarantee you that each of them has at least one question which makes zero sense to anyone who reads it. Respondents either answer it based on what they think it’s asking, answer with an “N/A” or require follow-up with the people managing the process to have it explained. And you’d be amazed how many times even the author is challenged to provide a meaningful answer (including this guy). One thing’s for certain: a self-anything needs to be designed and written so that everyone understands what they need to do without having their hand held. Plus it’s rare that questionnaires are customized so that each stakeholder is only asked those questions that truly make sense. An application owner should never be asked if their anti-virus solution is current and up to date. A business process owner should never be asked about software change management. Yet seldom have I encountered a self-assessment process which does anything like this, and so the audience is burdened with time-consuming yet unnecessary questions.
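For what it’s worth, that kind of customization isn’t hard. Here’s a rough sketch (the roles, question IDs and wording are all hypothetical, not from any real template) of tagging each question with the stakeholder roles it applies to and filtering accordingly:

```python
# Hypothetical question bank: each item carries the roles it applies to.
QUESTIONS = [
    {"id": "AV-01", "text": "Is your anti-virus solution current and up to date?",
     "roles": {"desktop_support"}},
    {"id": "CM-04", "text": "Is software change management documented and enforced?",
     "roles": {"application_owner"}},
    {"id": "DR-02", "text": "Do you have a current and recently tested DR plan?",
     "roles": {"application_owner", "business_process_owner"}},
]

def questionnaire_for(role: str) -> list:
    """Return only the questions tagged as relevant to this stakeholder role."""
    return [q for q in QUESTIONS if role in q["roles"]]

# A business process owner never sees the anti-virus or change-management items:
for q in questionnaire_for("business_process_owner"):
    print(q["id"], "-", q["text"])
```

A few minutes of tagging up front spares every respondent the irrelevant questions and spares the analysts the flood of “N/A” answers that come back otherwise.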
Really though, in the end my overriding problem with the self-assessment approach is that it fails to capture the expertise and guiding hand of true risk and assurance people. The process is often supported by analysts who don’t really have a feel for conducting assessments and are satisfied once all of the blanks are filled in. I have a nose for when there’s something beyond a simple answer and know when to scratch at the surface to bring it to light. By not allowing expert hands to guide the process, potentially huge amounts of valuable and possibly critical details are missed, undermining any perceived value of the process. When you consider that, all told, the self-assessment approach versus the guided assessment approach doesn’t really save you much time (if any) and that it results in a weaker finished product, why would you elect to use it? One answer is that regulators push for it because perhaps it’s better than nothing (I can’t get any of those I know to comment). Another is that the people sponsoring these initiatives lack the fundamental comprehension to understand their options and choose what they perceive as the less complicated approach (again, I don’t know for sure; it’s just a theory). What I do know is that when done right a risk assessment is management’s best friend, a fundamental belief behind the recent spike in ERM activity.
While recently having my car serviced the mechanic discovered a nest of some sort in the engine block; he thinks it was probably squirrels. Because of this discovery he went searching for all the wired connections to make sure they weren’t chewed up and destroyed, and quite a few were, as it turns out (the car had been idle for several months). The bill only added the cost of the replacement wires but nothing significant for the time it took to first find which were affected and then replace them. Had I attempted the repair myself I might have noticed the nest and likely would’ve cleared it, but I know for certain I never would’ve thought to check the wires, where to look for them or what to look for. I was smart enough to rely on a professional with a nose for that sort of thing and it saved me time, money and best of all the aggravation of having the car break down somewhere unexpectedly. Good thing I didn’t go the self-repair route.
Now, I’ve designed and supported more than my fair share of metrics reporting. I understand that sometimes the best way to tell a story is to paint it in the form of a picture; I get that part. But way too many times I’ve witnessed such initiatives spiral out of control to the point where it takes an army of people working ridiculous hours to pull together a deck of metrics that either fails to answer anyone’s questions or, even worse, generates requests for more metrics to provide clarity. And once a metric becomes a standard part of any reporting package it often stays there until management changes, and sometimes even beyond.
But I think there’s a bigger issue with metrics that goes beyond my simply not thinking they’re “all that and a bag of chips”: where are the controls around generating them?
Seriously, we have this vastly complex framework wrapped around financial reporting (SOX) to provide reasonable assurance that what management is reporting to its investors is accurate. We have industry, federal and state legislation requiring all manner of controls around sensitive information. There are auditors (internal and external) and regulators from all over the place who go over everything with a fine-tooth comb (or at least claim to) to make sure everything being done is done right – but in my nearly fifteen years in the audit/assurance industry I have never heard of a finding or issue regarding the veracity of metrics. Which is only a problem if the people running an institution or company rely on them to make decisions.
The reason it’s a problem is that so many of the metrics in circulation are pulled together from disparate sources, cobbled together in spreadsheets or non-production databases and manually generated. There’s no easy way to verify the source data, to know that it hasn’t been altered in any way, or even to know that it’s the right information. And even when the data comes from a secured, production-like environment, there’s still no real auditing conducted to ensure it’s accurate or, better yet, that it’s even the right information.
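A lightweight control wouldn’t be hard to bolt on, either. As a sketch (the file and metric names are hypothetical, and this is one possible approach rather than any standard practice), you could record a cryptographic fingerprint of the source extract alongside every generated number, so an auditor could later confirm the report was built from unaltered data:

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of the extract that feeds a metric."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_metric(name: str, value: float, source_path: str) -> dict:
    """Bundle a reported number with evidence of what it was computed from."""
    return {
        "metric": name,
        "value": value,
        "source": source_path,
        "source_sha256": fingerprint(source_path),  # re-hash later to detect tampering
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage -- "releases_q4.csv" is a stand-in file name:
# evidence = record_metric("on_time_pct", 95.4, "releases_q4.csv")
```

It doesn’t prove the number measures the right thing, but at least someone could verify what it was measured from.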
I once took over a change management process and assumed responsibility for a series of reports generated for the Managing Director, who in turn used them as part of his reporting package shared with the CIO. One of the key metrics being reported was scheduled releases and the IT department’s on-time implementation percentage. The numbers looked great, showing they were on time more than ninety-five percent of the time over a two-year period. The only problem I could see with the metric was that it was misleading to the point where it was almost a lie. The scheduled release date was being pulled from the system used to migrate changes into production, and that date was only set once the development team had completed all of their work. So the scheduled implementation date was chosen once they already knew they were ready to move into production. Of course the on-time numbers looked great; they always knew they were ready before committing to a date. The Managing Director incorrectly assumed that there was a legitimate release schedule with forecasted dates and that the on-time numbers reflected a well-run process; wrong. No one ever questioned the numbers or their source, and had I not inserted myself into what was described as a well-honed, efficient process the problem might never have been identified; and there were a few more just like it. My trust in metrics was permanently altered after that.
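Here’s a toy reconstruction of that skew (all the dates are invented): measured against the migration tool’s late-stamped date the process looks flawless, but measured against an honest forecast it doesn’t.

```python
from datetime import date

# (original forecast, date "scheduled" in the migration tool, actual implementation)
releases = [
    (date(2012, 3, 1),  date(2012, 4, 10), date(2012, 4, 10)),
    (date(2012, 5, 15), date(2012, 7, 2),  date(2012, 7, 2)),
    (date(2012, 6, 1),  date(2012, 6, 1),  date(2012, 6, 1)),
]

def on_time_pct(baseline: int) -> float:
    """Percent of releases whose actual date met the chosen baseline column."""
    hits = sum(1 for r in releases if r[2] <= r[baseline])
    return 100.0 * hits / len(releases)

print(f"vs. migration-tool date: {on_time_pct(1):.0f}%")  # 100% -- the near-lie
print(f"vs. original forecast:   {on_time_pct(0):.0f}%")  # 33% -- closer to reality
```

Same releases, same data, and the “on-time percentage” swings from 100% to 33% depending entirely on which date you call the baseline. That’s the kind of question no one was asking.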
Metrics are an excellent way for decision makers to quickly understand status and identify problems. I’ve quoted here before how someone I respect quite a bit was fond of asking her team “If you can’t measure it, how can you manage it?” and she’s absolutely right. Metrics are the ultimate means for management to measure the key activities and issues within their world. But how far do you go and how much effort do you expend pulling the related reports together? And even if you’re able to automate the process and shorten the time necessary to generate the reports, how do you know that you’re either measuring the right things or that the underlying data is unaltered? Ultimately I think that senior managers should be provided with something akin to a cost-benefit analysis for each metric they’re given. Have them understand the degree of complexity and the amount of effort required to generate a number before deciding whether or not it’s worth it. Perhaps I’m being naive but I’d like to think that most C-level executives would eliminate a significant amount of their reporting if they could see how much it was really costing them.
Here’s the part that should really concern you the most, though: metrics are a key component of Board reporting, and boards make all sorts of decisions based on what these reports tell them. How can that be allowed unless the process used to generate them is locked down and audited? Where are the regulators in all of this?
Things are looking up a bit because I have a new favorite regulatory agency to follow: the Consumer Financial Protection Bureau (CFPB). And here’s why: they focus on things that impact my day-to-day life (and yours as well).
I started tracking what the CFPB was doing about five months ago, by accident. Someone I know who used to be an examiner for the FRB switched over to the newer agency right in its infancy, and I noticed this courtesy of a LinkedIn update. Because I consider the Fed to be the Big Kahuna of the regulatory agencies I was surprised (you don’t leave the Yankees to sign with an expansion team unless you have to, or so I thought). Compelled a bit by the update, I started poking around the CFPB website. For the first few months of this year it seemed to have potential but was little more than brochure-ware. But last month that all changed.
The first CFPB update that caught my attention was labeled 12 CFR Part 1070 and it was all about the protection of consumer data, only with a slight twist. Basically it said that any information they received as part of their field work would be protected exactly the same way any third-party vendor would be required to protect it. Despite being a Federal agency, they weren’t going to hide behind that status as a means to simplify their lives. They spearheaded an update to the underlying regulation that frames their charter so that consumers and their institutions can be assured that all PII and NPPI will be protected. For me it was a rare win-win topic: protection of PII and NPPI combined with a reference to vendor management (these are a few of my favorite things). And it was that much more significant because I’ve known of a few situations where representatives of Federal and State regulatory agencies were responsible for the outright loss of confidential and/or restricted data. Beyond a slap on the wrist there wasn’t much else done to the offending examiner or their agency. And the affected institution couldn’t really complain too loudly because it’s always a bad idea to challenge your regulators, even when you’re in the right. So I thought this was at once a compelling and remarkably sensible update from a regulator, not something I’d expect to see. Those were the first points on the board for the CFPB.
The second set of points was scored almost the same day. I wanted to check one of the details related to the aforementioned update and noticed this one: “Consumer Financial Protection Bureau report finds confusion in reverse mortgage market”. Because I have a parent who is a senior citizen and who I think might one day soon be open to at least exploring a reverse mortgage, I read with great interest. The report was in plain English, was oriented in such a way that I could share it with my family and have them understand the issues and concerns detailed within, and most importantly it made sense. Reverse mortgages are growing in popularity and their main audience is the senior-citizen segment of society. Seniors tend to be more easily misled, and they’re under greater pressure to find new money sources (courtesy of our recession) at a time in their lives when going back to work is often not an option. And because a parent would do almost anything rather than turn to their children for financial assistance, they see a reverse mortgage as a way out of their predicament. So for me having this content available was quite the relief. I can caution and advise all day and night, but the risks presented by a reverse mortgage are much more credible coming from an authorized source. And so I celebrated July 4th this year by declaring the CFPB my new FDIC (the Sheila Bair inspired version, not the current blah one).
Here’s my really bizarro advice to any of you with even the slightest interest in regulatory oversight: if you haven’t already done so, visit www.cfpb.gov and take a look around. It’s oriented towards lay people, not just lawyers and regulators (and practitioners like me), and addresses topics and concerns that affect the majority of our population. Basically it’s what I would expect from a regulator that still has that new agency smell, but nothing like I’ve come to know from those that preceded it. To those who have had a hand in defining its charter and organizing its content, great job! Now repay my kind words by going out and getting me some juicy enforcement stories to write about.
Before you read any further, please fire up Netflix or hit up Redbox and rent “Catch Me If You Can”, the DiCaprio-Hanks movie about Frank Abagnale Jr., the infamous check forger. The movie covers in sufficient detail how Mr. Abagnale figured out how to forge checks and stay one step ahead of the law for years. Take sufficient notes and then consider remote deposit capture and how it solves so many of the issues he had to figure out workarounds for.
I’ve written in the past about how insane I think it is that we send unsecured documents via the mail that contain all of our bank account information, including name and address, without so much as a second thought. When you consider how relatively pervasive ACH payments are these days (I pay at least a half-dozen of my monthly bills that way), I’m amazed that hasn’t become the newest criminal hot spot. And now we’ve gone and made it that much easier to exploit this antiquated and poorly designed system of moving our money around. You no longer need to steal a person’s checkbook; you only need to make copies of their blank checks so that later on you can fill in the appropriate details and use remote capture to process them. When you consider the amount of time it would take to even figure out what just happened, the thieves will be long gone. First a person has to get their monthly statement and figure out that a rogue check was presented against their account (and if you keep the amount small enough that might not even happen). Then they’d need to contact the bank, which would have to investigate and pull up check images to try and verify the customer’s claim. By the time that all happens it’s potentially been at least a month, plenty of time for the perpetrators to close the account where the funds were deposited and move on. And with bank accounts being set up online all the time you wouldn’t even have video footage or images of the people behind the theft. And that’s only one possible way to use remote deposit capture to rig the system (I’ll keep the other ideas I have to myself lest this post become a self-fulfilling prophecy).
Seriously, if the banks introduced a new service offering where you could pay for purchases by simply sending a copy of your credit card, you’d all think it insane and no one would use it. How is this any different? If the stores and restaurants we frequent required back-and-front photocopies of our credit cards for their records, we’d stop using our credit cards. But with checks it’s not so big a deal?
With regards to remote deposit capture: just because you can doesn’t mean you should.
That first experience has arguably tainted my opinion of the role played by internal audit for nearly twenty years. Subsequent to that first encounter I’ve been audited a few more times, assisted clients in preparing for internal audits many times and have had hundreds of interactions, either directly or indirectly, with a variety of companies’ internal audit functions. And despite all of this experience, and having eventually become an auditor myself, I’m not sure I could present a credible argument as to where real value is being generated by the process beyond maintaining appearances.
The first problem is that for most companies there’s an unhealthy fear of auditors. There’s often real concern that if any major issues are uncovered, someone’s head will roll. At the aforementioned Fortune 100 company it was widely believed that if your group was found to have a material finding (or anything remotely resembling one), the highest ranking person in the group was doomed. To their credit the company also had a mechanism in place so that if you figured out you had a problem before anyone else and self-reported it, you were allowed appropriate time to remediate. But that wasn’t always effective enough, because most application and business managers weren’t auditors and couldn’t always recognize when a control was either missing or failing, and so there was still an enormous amount of work and panic leading up to a scheduled audit. I remember thinking that the company should remove the threat of termination and encourage both auditor and auditee to work openly and honestly together so that in the end issues were surfaced, defined and repaired. In the two decades since, I’ve worked with and for a few companies who believed they had this healthier sort of dynamic in place between their internal audit department and its business and technology functions, but really in the end it’s almost always the same problem. Internal audit is viewed as an unforgiving and punishing agent and no one ever wants them snooping around.
The second problem is that there’s a degree of incompetence found within many internal audit functions. While conducting my first technical audit back in 1997 (my company was managing an outsourced audit plan), I identified a significant issue with the methodology used to make production changes in a certain database environment. It resulted in there being virtually no clear or simple way for the DBA to back out a change if it didn’t work. If a change failed, restoring things to the previous state would require bringing down production for several hours. The first person who challenged my finding was the internal auditor who had audited the same platform for years and either didn’t understand or didn’t agree with the finding. It took me nearly an hour to educate him as to why the technical issue existed, prove that it did, and finally get him to agree with the associated risks. He had worked there for years, had never had the chance to see how other companies managed similar infrastructures and was way more concerned with his authority and capabilities being challenged than with the fact that his company had a significant risk to be repaired. In the time since, I’ve met many more people just like that one: auditors who stay at one company for years, fall into bad habits and fail to keep their skills relevant. They wind up relying too much on the Internet to try and update their knowledge base, don’t have the perspective of understanding how other companies are managing similar challenges and are happy enough to bring out the same whipping stick and a feeling of empowerment to scare the daylights out of internal control owners while conducting their audits. It results in poorly formed and often irrelevant findings that waste everyone’s time. I wish I had a ten dollar bill for every instance I knew of where something was being fixed because it was easier to appease the auditor than it was to convince them their finding was flawed or even wrong.
Now I’m not saying all internal auditors are incompetent, they’re not. I’ve met some brilliant and extremely effective internal auditors along the way. And in those environments audits weren’t feared because there was a high degree of confidence that if an issue was identified it was something worth knowing about. But in almost all of those cases the auditors involved had only been with their company for a few years, not decades.
The third problem is that audit needs to be seen as adding value, not creating unnecessary delays or work. Practically speaking, internal audit is playing for the same team as the control owners whose processes they assess. Their primary goal shouldn’t be to notch as many findings as possible on the board but rather to identify weaknesses and deficiencies so that they can be remediated, further hardening the infrastructure and reducing risks. I understand the need for the function to maintain independence and separation, but only so they can remain objective, not so they can operate as though they’re the ultimate authority on right and wrong and beyond reproach. If they’re invited to participate early in a project and find issues, they should issue interim findings so that small problems don’t become bigger problems further on down the project road. If you wait for the post-implementation audit to document early-stage issues you’re not really helping anyone. If they abuse being granted access to meetings and documentation long before the audit function is typically engaged, the only predictable outcome is that access will be denied until someone forces the issue. And one more major issue I routinely find with internal audit: no matter how strong or weak a finding may be, no matter how poorly or strongly worded, no matter how relevant or irrelevant, they all too often defend it as though it’s gospel that’s beyond reproach. Why is that? Why can’t the control owner question the finding, demand clarity or try to frame its relevance? All auditors should feel an obligation to issue a final report which resonates with everyone involved as being accurate and, hopefully, fair.
Until internal audit is seen as part of the solution, not part of the problem, it’s going to remain, well, a problem. Until control owners gain a sense that developing a healthy dialogue with their auditors will only help things and not hurt them, it will continue to be a problem. And until all involved parties working for the company feel as though they’re working towards a common goal, it will remain a problem.
Doesn’t anyone remember the great Heartland breach of 2009? Seriously, anyone?
I’ve never tried to quantify what percentage of the work we do within the regulatory compliance domain is focused on the safeguarding of customer data, but off the top of my head I’m thinking it’s high. And when you factor in that there’s an entire industry focused exclusively on protecting credit card information (PCI), you’d think that not only are breaches getting harder to pull off but that we’re becoming less tolerant of them as a society. But there’s a general lack of outrage exhibited when these incidents occur, the media doesn’t much care to cover them properly, and really in the end they wind up being something of a non-issue. And as I learned recently when my own bank card was compromised, the banking industry seems to simply accept that these things are going to happen. Instead of getting better at preventing breaches they’ve instead managed to streamline the process whereby they shut down the accounts in question and reissue new ones.
You often hear that any security solution is only as good as its weakest link. It seems to me that financial institutions are no closer to figuring out how to truly lock everything down, and with the constant evolution of technology, where we’re always adjusting to new exposures, new threats and new challenges, we’ll never actually get there. There’s never a point where an infrastructure is truly hardened and where the weakest link is something so obscure as to not even present a credible threat. Despite regulatory and industry requirements and sometimes intense scrutiny, we’ve reached a point where the only thing that’s improved is how quickly we repair the damage. PCI hasn’t stopped things from happening (it hasn’t, and don’t debate me on its merits, because every time there’s an issue with a PCI-certified company there’s an excuse). GLBA hasn’t stopped things from happening (too many moving parts and not enough pressure applied by the enforcement divisions). It’s just not getting better and I can’t see that improving anytime soon.
I decided long ago that vigilance on my part is my only true defense against identity theft. I’ve written previously about how I check every physical detail of every ATM I ever use to make sure the equipment is legitimate and that there are no hidden cameras recording my PIN, and how I never use the privately leased machines you find all over the place. I also double-check gas pumps to make sure a portable device isn’t scanning my credit card (I get strange looks all the time when I wiggle the card scanner to see if it’s loose). And I’ve turned on every email alert possible to track activity on my checking account (much to my wife’s chagrin). I almost never use a smartphone app or web-based solution to conduct my banking because I don’t completely trust the technologies (or rather the people who can exploit them). And to be clear, none of my concerns stem from what I see while doing my day-to-day fieldwork. It’s all based on what I know happens out in the real world.
Until breaches are treated as a true threat to our personal security and receive the scrutiny they so richly deserve, none of this is going to get better. When a breach of over one million credit card accounts is prefaced with the word “only” and that’s perfectly acceptable to all involved, we’re still obviously a long way off from solving the problem.
Now we have GRC software solutions that oddly enough promise to automate GRC-related tasks.
The first problem with any such assertion is that GRC is too broad a spectrum of activities and disciplines – most solutions are focused on addressing subsets therein. On one end you have the security-centric solutions, on the other end you have the risk-centric platforms, and somewhere in the middle is a crowd of offerings that try and touch on everything but nothing particularly deeply. So the first thing a stakeholder needs to understand is what they’re looking to accomplish before they set out to select a product. You can select ten different GRC vendors and discover ten different interpretations of the discipline. And within those ten solutions there are vastly different approaches. Some are similar to ERP packages in that their approach is somewhat hard-coded and you have to do things their way (or spend big bucks to customize). Some are remarkably configurable and can be made to fit your processes like a glove (but that requires a steep learning curve and expanded time frames).
The second problem is that while most vendors selling to the GRC market tend to use common terms, their internal definitions can be quite different. Some solutions pitch risk assessments which are little more than questionnaires (i.e., with little to no risk-related elements such as inherent and residual risk), whereas others provide what look like plain questionnaires but turn out to be genuine risk assessments upon closer inspection. If you’re looking for a true risk-oriented solution you might go with the former when it’s the latter you truly need. But the terminology is so similar it’s hard to differentiate, and the only way you’ll get to realizing that is after you take the software out for a test drive, not something every vendor is willing to provide (and I’m not talking about a two-hour demo, I’m talking about a true trial period). You think you’re comparing apples to apples and it may turn out you were comparing apples to car batteries without knowing it.
The third problem is that after a while it’s easy to become snow-blind during the selection and evaluation process. Because of the common language and the apparently similar functionality, you start looking for factors unrelated to what you really need to focus on as a way to separate the solutions from one another. You’ll consider solutions prequalified because a competitor is using them, thinking that their needs are similar to yours. But they may be focused on information security activities where your institution is looking for automated risk assessment capabilities. You’ll start shopping on price and contract terms, thinking that competing solutions are so similar it really comes down to who offers the best deal. But software vendors usually know their market and the correct price points based on what their solutions offer – if two or more products appear evenly matched on functionality but one is much cheaper, there’s usually a reason. The more expensive solution may come pre-loaded with all the related content you’ll need to effectively use it, whereas the cheaper solution might require you to obtain your own licenses. It’s not intentionally misleading, but that’s a detail easy to overlook during the vetting process.
GRC is an awesome concept working towards one day becoming an awesome discipline, but it’s not quite there just yet (a point I routinely beat to death, I know). It’s spread out too far and wide, and depending on who you’re talking to you can get widely (if not wildly) varying definitions of what it is. So it’s no wonder that trying to find an automated GRC solution is equally challenging; the vendors are trying hard to figure out what nail to hammer as well. They all do some things remarkably well but at the expense of doing other things either partially or not at all. Thus the reason it’s not uncommon in larger companies to find multiple GRC solutions installed; different business functions have unique needs and they purchase whichever is closest to meeting those needs. It’s an expensive approach but for the foreseeable future a necessary evil.
I think we’re getting closer to a point in time where a common dialogue will be accepted by the audit and compliance community. The OCEG folks have poured the foundation; it just needs a little more time to harden in terms of broad acceptance. When I see their content displayed prominently next to all the COBIT binders at my clients, I’ll know that time has come. I predicted in 2007 that once we’re in the midst of a full-blown economic recovery, GRC will quickly rise in prominence due to increasing regulatory pressures, much the way COBIT soared into the forefront of the industry fueled by SOX. I see no reason to alter that prediction; I’m just not sure when the recovery will officially begin.
In the meantime keep participating in the dialogue, keep trying to define what GRC means to you and to your organization, and every now and again share those ideas with some of the decision makers who are shaping the discipline; they need to hear from everyone as they mature the thing. As long as we in the audit and compliance domain keep moving things forward we’ll get GRC to where we need it to be, I’m certain of it.
So I called up in an attempt to resolve things and was informed that it wasn’t my spending that caused a problem; it was the fact that one of the vendors I completed a transaction with reported a breach. Because my card number was potentially included in that breach, I was shut down. I was fortunate that my bank is set up to help customers manage these situations fairly effortlessly (I don’t love them most of the time, but this event won them some points with me) and after a brief stop at a local branch I had a temporary card and was able to continue on my trip.
A few items of note surfaced as a result of this experience. The first is that my bank would not reveal the vendor that reported the breach. The customer service representative I spoke with claimed that she didn’t have access to the information, which I sort of believed. But when I asked how I could find that information out, she replied that they typically don’t share it. I thought that a bit odd. Shouldn’t I as a consumer be able to make informed decisions about who I do business with? I should be able to find out who the vendor is so that I can decide whether or not I’ll continue to give them any of my hard-earned dollars. The second thing I found curious was how seamless the replacement process was. They had a stack of temporary cards about five inches thick and a process so well defined and efficient that it almost seemed like I was asking to borrow a pen so I could sign something. When I returned to the car, my son who had been waiting for me assumed they weren’t able to help me because I was out so fast. How often does this sort of thing happen? And to make their degree of efficiency that much more notable, a friend of mine experienced something similar and it took her bank over a week to get a new piece of plastic into her hands.
I recognize that this is a sign of the times we now live in. We use plastic everywhere, our sensitive account information is digitized all over the place and the security controls protecting that information are only as strong as their weakest link. It’s why you’ve heard me say many a time that requirements like PCI are an excellent starting point but by no means the be-all and end-all for securing the perimeter. All it takes is one USB storage device to go missing, one new appliance added to a network with default values unchanged, one person printing off a report with NPPI and forgetting to pick it up from the printer and voilà, a breach is born.
I’m frequently onsite at clients of wildly varying sizes and I find something every day that makes me realize that sometimes the best weapon against a company being embarrassed by some sort of exposure is just dumb luck. Regardless of whether they have a well-formed team of risk and compliance folks working hard to protect information assets or just a single person serving in a related function, it comes down to human nature, both in terms of those not following the rules and those who are ready to exploit that fact. A prime example: when I find sensitive information left exposed, I collect it and either dispose of it properly or lock it up to share with the appropriate party as a “for instance”. However, in those places where less honest people make similar discoveries, that same information becomes a commodity to be sold to those who indulge in things like identity theft. Like I said, it comes down to pure dumb luck.
And so I’m left wondering if my now deactivated and defunct bank card was the victim of human nature, a sophisticated scheme to access otherwise properly secured sensitive information or just plain incompetence. And while I’m glad that my bank was swift to react and protect me, I wish they’d extend that to inform and educate me as well. I mean honestly, if I’m going to be forced to memorize a whole new series of numbers, shouldn’t I at least be allowed to know who’s to blame?
My first thought was that it was just like what drug dealers do – they give you free product until you’re hopelessly addicted and then start making you pay to feed that addiction. My second thought was that I couldn’t imagine anyone actually wanting to pay for the content. While it’s better than nothing as a framework it’s not that much better. I’m sure there are certain pockets in the GRC industry who think that the Shared Assessment is to vendor management what COBIT is to IT governance but I certainly don’t.
Since first encountering the Shared Assessment a few years back I’ve always thought of it as bloated, difficult to effectively apply and all at once redundant and oddly vague. The very first time I reviewed the content I immediately thought that whoever was behind creating it must get paid by the hour, because any attempt at relying on it was going to be major-league time consuming. And of course once I started investigating the companies behind developing the questionnaire(s) I realized I was spot on. I once commented to a colleague that the questionnaire looked as though the purpose of the collective assignment was to think of every possible question you might ever want to ask a vendor, throw it into a spreadsheet and then try and organize it after the fact. If I’ve ever truly liked it in any meaningful way, it’s as a reference source when considering questions to include in customized questionnaires and assessments.
The folks running the show have made strides to truly turn the questionnaire into a framework with an accompanying methodology, but in my experience most companies simply want to leverage the content of the questionnaires and use it how they see fit. Some have made the effort to dig through the massive pile of questions and whittle it down to something more manageable, while others pretty much ship it out as is to their vendors, including both the lite and full versions. As someone whose practice often has to complete due diligence questionnaires, I have to tell you that if we needed to fill out even the lite version it might be a deal breaker due to time constraints.
As I alluded to earlier, I think many practitioners who use the Shared Assessment think of it as something more like COBIT. I know COBIT, and you, sir, are no COBIT. It’s really intended to be used by large vendors who provide services to multiple clients as something akin to a SAS 70/SSAE 16 report. They pay someone to complete it for them and sign off on it, and when their customers look for annual proof that they’re properly controlled they can send along a copy of the completed questionnaire with management’s approval stamped on the cover. In theory it’s a good idea, but I’d still prefer a proper audit instead.
And it’s heavily geared towards technology vendors and, to a lesser extent, those who host services. When you try and use the Shared Assessment for non-technology vendors it becomes that much more difficult to apply and sort of forces your hand into coming up with something else. Trying to whittle 900+ questions down to something smaller, only to discover you need to write a bunch of new questions on top of that, has to be somewhere between depressing and outrageous, I would think.
What I really don’t understand is why this was even needed to begin with. My vendor management experience goes back several years and I’ve always been satisfied working with content from existing sources. I think that when you combine content from COBIT and FFIEC you can adequately cover what needs to be covered to assess vendors. I would go so far as to say that most examiners would agree with me based mostly on the fact that there are more than 100 institutions using some version of a vendor management program my practice has designed and they always do well on that front, always.
For those of you who are going to stay the course, cough up the money and continue along with the Shared Assessment, I wish you good luck. I hope you’re able to glean something meaningful from the process and I pray you never wind up working for a vendor that needs to complete one of the resulting questionnaires.