In the days since that conversation I’ve put some thought into the frameworks, because in the end the aforementioned CISO was committed to finding that NIST experience and eventually did. But what did that really mean? Having recently had both NIST 800-53 and the ISO 27000 documents in front of me, I was struck by how similar they are, with only a few obvious distinctions between the two. Essentially the differences reflect more on the cultures that created them than on the risk factors they address (NIST = U.S.A. and ISO = European). But information technology architectures are fundamentally identical the world over, so despite formatting and spelling they are both addressing the same challenges, whether or not they realize it. And for those of us familiar with both, to know one is to know both, even if those committed to either one disagree. If you’ve worked on audit/assessment projects leveraging ISO 27000 material you’re immediately qualified to work on projects using the corresponding NIST framework, and vice versa. And if you have experience working with PCI standards, guess what? You can pretty much step in and work with either NIST or ISO content (except of course you have to expand your sights to include the entire infrastructure, not just whatever touches PAN data).
My preference would be that we consolidate globally into the ISO frameworks where applicable, and maybe even fit that into the SSAE 16 process. I’ve read enough toothless SAS 70/SSAE 16 reports to know that it’s easy enough to rig the system to your advantage. And unless you’re a government agency that has to comply with NIST there’s little meaningful value in using NIST, whereas being ISO 27000 certified carries a great deal of weight within the audit/assurance community. Plus there’s the added benefit of having InfoSec practitioners trained and practiced both at building out ISO 27000-compliant solutions and at testing the related controls. Think about that: a single global security standard regardless of where you enter the profession. Having run a few practices in my career and way more than my fair share of engagements, I can tell you that has great appeal. Plus it would help eliminate awkward dialogues where my sixteen years of real and relevant experience is at least partially marginalized because it hasn’t all been with one particular standard.
Ultimately a framework’s only meaningful advantage is that it theoretically ensures consistency in how controls are identified and assessed. If you have someone who knows a framework but doesn’t really understand the details within it, that sort of defeats the process anyway, no matter how robust or thorough the framework may be. Perhaps that’s why I consider it a non-issue which frameworks a practitioner has used. I’d much rather work with someone who understands the technology and has a good feel for the details than someone who knows that SDLC is addressed in SA-3 for NIST or Section 12.5 for ISO 27002. But then again, I’ve always been more concerned with real risk, not perceived risk, so this shouldn’t be surprising to anyone who’s read my content in the past.
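The "to know one is to know both" point can be made concrete with a tiny control crosswalk. Only the SDLC mapping (SA-3 ↔ ISO 27002 Section 12.5) comes from the discussion above; the dictionary structure and lookup function are just an illustrative sketch, not any official mapping.

```python
# Minimal sketch of a framework crosswalk: one underlying control topic,
# multiple framework-specific references. The only mapping shown here is
# the SDLC example mentioned above; everything else is hypothetical.
CROSSWALK = {
    "sdlc": {"nist_800_53": "SA-3", "iso_27002": "12.5"},
}

def equivalent_control(topic: str, framework: str) -> str:
    """Look up the control reference for a topic in a given framework."""
    return CROSSWALK[topic][framework]

print(equivalent_control("sdlc", "nist_800_53"))  # SA-3
print(equivalent_control("sdlc", "iso_27002"))    # 12.5
```

The point of the structure is that a practitioner who can assess the topic can work from either column; the framework reference is just an address.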
A security framework by any other name would be just as comprehensive, you know what I mean?
Right after one of the debates I found myself knee-deep in an argument about Dodd-Frank. A close personal friend of mine, a very bright bulb with whom I’ve never found reason to disagree, brought up Dodd-Frank as an example of horrible legislation that’s crippling banks and contributing to our horrible economic conditions. Whoa, whoa, whoa… rail against taxes, complain about government spending, assail the current administration for the dramatic escalation of our national debt. But leave Dodd-Frank out of it, because that’s not one of our bigger problems. I can offer a five-thousand-word defense of the best parts of Dodd-Frank without even pausing to organize my thoughts, but I don’t need to go that far. I can sell its virtues in a single, simple sentence: Any legislation that created the Consumer Financial Protection Bureau is instantly more effective than anything that’s come before it in my lifetime.
No, seriously… in my lifetime.
I’ve already screamed from the rooftops about how much I like the CFPB. In my own geeky, nerdy way I’m proud to admit that I look forward to getting their regular updates and announcements because they always seem either ridiculously relevant or illuminate how they’re hot on the heels of yet another predatory business practice. In barely a year’s time they’ve pushed deeper into the heart of the issues that crashed Wall Street in 2008 than anyone could have hoped (that’s my opinion, but one I’m willing to defend). And their examiners appear to be freaky efficient. I’ve been hearing from our banking clients that they’re drilling in on details, covering more territory than was expected, and discussing issues much closer to protecting customers (and members). Our practice recently issued a bulletin to our clients alerting them to the fact that CFPB examiners expect related oversight to be pushed down to external business partners and vendors. This is not a new consideration; it’s exactly the same as what’s supposed to happen with regard to GLBA (and one of the reasons we developed our related software and services for same), but still, we anticipated this would take several exam cycles to surface. CFPB cut right to that chase in a heartbeat, which is stunning for such things. It’s almost like someone told them where to look and what to look for, which to a certain extent is true.
The CFPB didn’t start as most new agencies do. They didn’t recruit green examiners and place them under the management of a few practiced hands. What they apparently have done is hire well-seasoned examiners from related regulatory agencies (e.g., FDIC, FRB, OCC), have them contribute to creating the necessary procedures, and then send them out to bring it all to life. So on Day One they already know where the bodies are likely to be buried and what to do about it. It’s brilliant, it’s efficient and it’s the very best example of your government doing its job.
Here are some snippets from my in-box:
And the kicker about these three items? This was all issued this month (December 2012) and we’re not even quite halfway through it.
I’ve personally reviewed and/or audited somewhere close to fifty business continuity/disaster recovery (BCP/DR) plans over the past decade. I’ve also written or edited several of those in the past five years since moving into professional services for financial institutions. Furthermore, I participated in roughly a half-dozen tests while still working within the infrastructure during the first part of my career. Suffice it to say I have at least an informed opinion regarding the viability of any such BCP/DR strategy.
Fundamentally there are a few varieties of BCP/DR plans: those that are current and viable, those that convince your examiner that they’re current and viable, and those that may have been viable years ago but bear no resemblance to your current business profile. And beyond those there’s the worst of BCP/DR realities, the non-existent one. But really, in the end, what your current state of preparedness comes down to is this – either you’re ready for an event or you’re not. And in the past forty-eight hours that’s been made abundantly clear by how many of my clients affected by Hurricane Sandy have navigated through what’s now clearly one of the worst weather events in my lifetime.
Around noontime yesterday (October 29, 2012), as weather conditions worsened and major metropolitan areas were literally shutting down for business, I started checking up on a few clients. The first thing I did was visit the website of every client that my practice has assisted with their BCP/DR strategy – each of them had updated their website to announce that branches in the affected areas were closed. Some had a pop-up window with the update, others had a message displayed in bright red letters, bold font or both. As a standard design consideration each of them also had phone numbers clearly displayed, and when I called a sampling, real people answered and were available to assist me. I inquired of a few of them where they were physically located and they were all located remotely, not on site in affected areas (much to their credit they were reluctant to share too much information). The second thing I did was visit the website of a few clients whose BCP/DR plans were tagged during an audit/assessment as either deficient or missing. Those websites were not updated, and in all but one case I only learned that they were closed for the day after calling into a branch (one had an 800 number that redirected to a real person).
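The spot check described above can be sketched as a simple scan of a homepage for a closure banner – the kind of notice a current plan would have prompted someone to post. The keyword list below is my own guess at what such a banner contains, not anything from a standard, and the function operates on already-fetched HTML rather than hitting live sites.

```python
# Rough sketch of the website spot check: given a client's homepage HTML,
# look for evidence of a posted closure/outage notice. The keywords are
# invented for illustration.
CLOSURE_KEYWORDS = ("closed", "branch closures", "weather advisory", "service interruption")

def has_closure_notice(homepage_html: str) -> bool:
    """Return True if the page appears to carry a closure/outage banner."""
    text = homepage_html.lower()
    return any(keyword in text for keyword in CLOSURE_KEYWORDS)

prepared = "<div class='alert'><b>All branches in affected areas are CLOSED today.</b></div>"
unprepared = "<p>Welcome! Open an account today.</p>"

print(has_closure_notice(prepared))    # True
print(has_closure_notice(unprepared))  # False
```

It’s a shallow test, exactly as the text says – but a plan that works tends to leave visible traces like this one.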
Now I know this wasn’t a very deep or meaningful test of anyone’s ability to continue operations in the event of a disaster. But what it did prove is that those institutions with plans that were current, and whose management teams knew to rely on them, had already thought through the little things that make a difference. Someone knew to update the website; management knew to reroute calls away from unmanned branch locations. I can only assume that the appropriate parties designated to do so also contacted their regulators to inform them of the closing, and that a phone chain was initiated informing staff, thus keeping them off the roads and safe. And because an important part of the plan creation/update process is both training and testing, stakeholders are able to navigate the decision tree and take the appropriate steps without having to think it all through – one of the biggest challenges confronting management during a crisis. The very best part of having a viable and current plan is that all the thinking has been done in advance and has been reviewed and validated, which greatly reduces the chances that something (or someone) will be missed.
Here’s a sanity test: If you didn’t know exactly where to begin the decision-making process or who to engage you’re in need of a new plan. And if you did know but can’t be absolutely certain that others would be able to do the same in your absence, you’re in need of a new plan. One of the rebuttals I’ve heard all too often when identifying a deficient or missing BCP is that management knows what to do should some manner of disaster strike. That may be true but what happens if key people are unavailable or can’t be reached?
Seriously, when something like Hurricane Sandy occurs it’s the best time to consider how your institution would fare when navigating such an event. Block off an hour within the next week with your key people, pull out your BCP/DR documentation and try to step through how you’d handle things under similar circumstances. In a very short time you’ll gain a sense of whether or not you’re prepared and, if necessary, you’ll have the opportunity to improve.
Trust me on this – you don’t want to be in the middle of a disaster scenario and find out that your plan doesn’t work.
Truth be told, while I’ve spent somewhere near seventy-five percent of my time over the past ten years working for financial institutions, I’ve also done a fair amount of work for insurance companies, mostly centered on SOX with occasional diversions into general risk assessment work. The drivers in the insurance industry are different in terms of oversight and requirements, and so the volume of work isn’t nearly the same. But that by itself raises a question: Why isn’t the insurance industry as regulated as financial institutions?
I’ve now done major audit and assurance work for financial institutions, insurance companies and health care providers and for most of them the risk profile is almost identical in terms of non-public personal information. So why isn’t the level of scrutiny equal across all three of them? While some might start spouting about how it is, about how states routinely audit insurance companies and how the health care industry has to comply with HIPAA the truth is that banks and credit unions are held to a much higher degree of accountability than any other vertical. Why is that?
I’m fond of routinely, almost incessantly beating the drum about how it’s all about the risk. I get my initial client opportunities because I have a deep resume with relevant experience but I generate repeat business because I tend to whittle things down to what matters most both to my clients and to their oversight providers (auditors and examiners alike). Compliance exists because risks need to be addressed – if the risks aren’t credible or likely the work should be adjusted to reflect that. But where the risks are real they’re really real. The type of data shared with an insurance company is in many ways even more sensitive than anything shared with a bank and most of what’s shared with insurance companies is also shared with health care providers. Yet there’s no true Federal oversight for the insurance industry and HIPAA is about as much of a toothless tiger as anything I’ve ever encountered.
I recently completed a boatload of documentation to get my family on a new health insurance plan. I turned over every piece of sensitive information I have for every member of my family minus my bank account information because that’s what was required. I had to provide all of this online and follow that up by sending them an impressive array of hard-copy documents with even more sensitive information that should never be kicking around in the public domain. In the past I’ve also been required to provide my bank account information because one plan in particular would only provide coverage if they could automatically deduct monthly premiums via ACH drafts. So now the insurance industry has access to it all; name, address, social security number, date-of-birth, maiden name, medical history and banking information. And yet there’s no true oversight agency that’s responsible for making sure they’re protecting all of MY information.
To compound my frustration, of the four insurance companies I’ve conducted work for since 2006 (two of which are Fortune 500s), exactly none of them has something akin to a Chief Information Security Officer. They all have risk people focused on the business side of things (because that’s necessary to protect profitability) but that’s it. There’s typically an information security manager who’s part of the infrastructure team but who almost never reports directly to the senior-most technology person (e.g., CIO, CTO). Any audit work that occurs is coordinated across multiple IT managers, and on rare occasions there will be an audit/assurance manager. However, in the one example I personally know of where that position exists, the person in the role was really just a converted IT manager who obtained a CISA designation – no fundamental audit or assessment experience.
The question has to be asked: Why is it that banks and credit unions are heavily regulated regarding protection of non-public personal information but other industries with similar risk profiles are not? Why aren’t insurance companies required to comply with FFIEC-type guidance? Why isn’t there a Federal regulatory agency that is responsible for keeping an eye on the insurance industry the way the FDIC, OCC, FRB and NCUA do so for their financial institutions? And trust me, whatever oversight exists for the insurance and health care industry is largely ineffective. Why is my sensitive information considered more at risk within a banking infrastructure than it is within an insurance infrastructure? Having been on site for both and examined their internal controls I can’t answer that question, that’s for certain.
Which is why I don’t much care for any manner of compliance-based assessments that are self-administered.
Companies have had this crazy notion for more than a decade now that the best way to identify and address risks inherent within the infrastructure is to ask key stakeholders a somewhat generic set of questions and use their responses to figure out what’s what. Most of the time the people driving these initiatives are either information security professionals or corporate compliance people who either believe they already know where the problems are or are looking for the simplest and easiest way to satisfy some requirement. But what they often fail to grasp is that it’s almost impossible to draft a common set of questions that applies to the vast majority of stakeholders, let alone one that will be interpreted consistently across the stakeholder population. Plus the perceived benefit of using a self-assessment approach to reduce effort and required support resources is almost always an illusion. Most of the time saved by not having someone ask the questions and record the answers is instead consumed by needing to explain the format, explain the questions, or clarify and clean up the responses. While supporting one such program recently, each assessment required a kick-off meeting, a follow-up meeting to review the status of the assessment, a third meeting to review the initial draft of the questionnaire, a fourth meeting to review the resulting report(s) and a largely untracked number of hours to help generate all of the related support documentation. Regardless of the size of the entity being assessed, each one consumed somewhere close to eight hours. While that might seem like a scary large number, the really scary part was that depending on which risk analyst was responsible for the assessment and the personality/mindset of the stakeholder completing it, the results looked very different from one another.
It was almost impossible to generate meaningful metrics across the assessment population because a “Yes” answer for one question might mean the same as an “N/A” in another; there was no way to know that.
Another issue I’ve always had with the self-assessment approach is that while some stakeholders take it seriously and do a remarkably thorough job, others race through it with little hesitation just to fill in the blanks and get it off their desk. Sometimes you can detect which is which, sometimes you can’t. Plus the approach fails to capture much of the rich and relevant information related to each question and the underlying risk behind it. I recall conducting a team-driven risk assessment years ago where one stakeholder after the next, covering a very broad sampling of the infrastructure, kept lamenting the lack of a proper disaster recovery plan. They had something to show auditors/examiners, but to a person no one believed it was a truly viable plan. All but the CIO brought it up as a concern, and when pressed a bit about why, they all shared a common fear: if their main office was closed unexpectedly for twenty-four hours, regardless of the reason, they were likely out of business. A related self-assessment question would ask “Do you have a current and recently tested DR plan?” – most respondents on that engagement would simply have selected “Yes” and moved on to the next question without ever being challenged to share their concerns. Where’s the value in having a repository of questions and answers when it fails to capture the true essence or dimension of risk?
And the biggest issue I’ve always had with self-assessment questionnaires and their related templates is that they’re so often poorly designed. I can guarantee you that each of them has at least one question which makes zero sense to anyone who reads it. They either answer it based on what they think it’s asking, answer with an “N/A” or require follow-up with the people managing the process to have it explained. And you’d be amazed how many times even the author is challenged to provide a meaningful answer (including this guy). One thing’s for certain, a self-anything needs to be designed and written so that everyone understands what they need to do without having their hand held. Plus it’s rare that questionnaires are customized so that each stakeholder is only asked those questions that truly make sense. An application owner should never be asked if their anti-virus solution is current and up-to-date. A business process owner should never be asked about software change management. Yet seldom have I encountered a self-assessment process which does anything like this and so the audience is burdened with time consuming yet unnecessary questions.
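The customization the paragraph above argues for is straightforward to sketch: tag each question with the stakeholder roles it actually applies to, then generate a per-role questionnaire instead of one generic template. The question text and role names below are invented for illustration, drawn from the examples in the text (anti-virus, change management, DR).

```python
# Hedged sketch of role-based questionnaire filtering: each question
# carries the set of roles it applies to, so an application owner is
# never asked about anti-virus currency and a business process owner
# is never asked about software change management.
QUESTIONS = [
    {"text": "Is your anti-virus solution current and up-to-date?",
     "roles": {"infrastructure"}},
    {"text": "Is software change management formally documented?",
     "roles": {"application_owner"}},
    {"text": "Do you have a current and recently tested DR plan?",
     "roles": {"infrastructure", "application_owner", "business_process_owner"}},
]

def questionnaire_for(role: str) -> list:
    """Return only the questions relevant to the given stakeholder role."""
    return [q["text"] for q in QUESTIONS if role in q["roles"]]

for question in questionnaire_for("business_process_owner"):
    print(question)  # only the DR question applies to this role
```

Even this trivial filter removes the burden of unnecessary questions; the harder problem the text raises – inconsistent interpretation of the questions that remain – still needs a guiding hand.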
Really though, in the end my overriding problem with the self-assessment approach is that it fails to capture the expertise and guiding hand of true risk and assurance people. The process is often supported by analysts who don’t really have a feel for conducting assessments and are satisfied once all of the blanks are filled in. I have a nose for when there’s something beyond a simple answer and know when to scratch at the surface to bring it to light. By not allowing expert hands to guide the process, potentially huge amounts of valuable and possibly critical detail are missed, undermining any perceived value of the process. When you consider that, all told, the self-assessment approach doesn’t really save you much time over the guided approach (if any) and results in a weaker finished product, why would you elect to use it? One answer is that regulators push for it because perhaps it’s better than nothing (I can’t get any of those I know to comment). Another is that the people sponsoring these initiatives lack the fundamental comprehension to understand their options and choose what they perceive as the less complicated approach (again, I don’t know for sure, it’s just a theory). What I do know is that when done right a risk assessment is management’s best friend, a fundamental belief behind the recent spike in ERM activity.
While recently having my car serviced the mechanic discovered a nest of some sort in the engine block; he thinks it was probably squirrels. Because of this discovery he went searching for all the wired connections to make sure they weren’t chewed up and destroyed – quite a few were, as it turns out (the car had been idle for several months). The bill only added the cost of the replacement wires, nothing significant for the time it took to first find which were affected and then replace them. Had I attempted the repair myself I might have noticed the nest and likely would’ve cleared it, but I know for certain I never would’ve thought to check the wires, where to look for them or what to look for. I was smart enough to rely on a professional with a nose for that sort of thing and it saved me time, money and, best of all, the aggravation of having the car break down somewhere unexpectedly. Good thing I didn’t go the self-repair route.
Our family has had a PayPal account almost since PayPal has offered them. It’s remarkably convenient, it provides us great flexibility to shop online using a single payment source and I love that we’ve been able to change funding sources several times over the years. It’s always conveyed a certain sense of security; I’ve just always felt safe using PayPal. I’ve even gone so far as to suggest that at some point, if PayPal management grows things just right I could see a future state where paper currency and maybe even actual physical credit cards go away and are replaced by some version of their services. When I discovered this past year that Home Depot already allows you to use PayPal to make in-store purchases I was convinced I was right. Now I’m not so sure.
Over the past year or so I’ve been getting the occasional email ping from PayPal regarding our reaching a spending limit. It’s a fairly high limit for most, but considering that we’ve been using PayPal to make purchases going back nearly a decade, maybe not as much. But the message has been quite clear: if we didn’t verify our account before reaching this limit, we would hit “the maximum amount of money you can send or use for purchases before you need to become Verified”. And how you become Verified is quite simple – either give up your bank account information or apply for a privately owned credit card. No, seriously, those are the only two options.
My first thought was that although I liked having the protective layer of a credit card product buffering my PayPal account from my actual money, I was okay with providing bank account information. It’s not like I don’t use that in other places to make payments, so there wouldn’t be any enhanced risk in doing so again. I wasn’t going to apply for a PayPal-based credit card because I don’t want one or need one and I wasn’t looking for a new credit source anyway; I just wanted to continue using PayPal. So I clicked on the option to provide my bank account information, and after the initial screen where they ask for the routing and account details, I clicked “Submit” and was presented with a screen that I still can’t believe exists. Right there before my eyes was a screen from PayPal asking me to provide my online banking user-id and password so they could verify a series of PayPal-generated payments, thus confirming my banking details. Let me repeat that one more time: PayPal asked me to provide them with my online banking user-id and password.
Has PayPal lost its collective mind? Seriously, have they?
I was stunned, almost to the point where I couldn’t get coherent words to flow. I immediately fired off an email to PayPal customer support asking them how they could do something so outrageous. Within minutes I received an automatically generated reply, which I always find insulting, as though I’m not worth an actual intelligent and personal response. It was a complete regurgitation of everything stated on their website and completely ignored the gist of my email. I fired off a second email missive, this time way more specific. Here’s what I wrote:
Now I’ve designed and supported more than my fair share of related content. I understand that sometimes the best way to tell a story is to paint it in the form of a picture; I get that part. But way too many times I’ve witnessed such initiatives spiral out of control to the point where it takes an army of people working ridiculous hours to pull together a deck of metrics that either fails to answer anyone’s questions or, even worse, generates requests for more metrics to provide clarity. And once a metric becomes a standard part of any reporting package it often stays there until management changes, and sometimes even beyond.
But I think there’s a bigger issue with metrics that exceeds my simply not thinking they’re “all that and a bag of chips”. Where are the controls around generating them?
Seriously, we have this vastly complex framework wrapped around financial reporting (SOX) to provide reasonable assurances that what management is reporting to its investors is accurate. We have industry, federal and state legislation requiring all manner of controls around sensitive information. There are auditors (internal and external) and regulators from all over the place who go over everything with a fine-tooth comb (or at least claim to) to make sure everything being done is done right – but in my nearly fifteen years in the audit/assurance industry I have never heard of a finding or issue regarding the veracity of metrics. Which is only a problem if the people running an institution or company rely on them to make decisions.
The reason it’s a problem is that so many of the metrics in circulation are pulled together from disparate sources, cobbled together in spreadsheets or non-production databases, and generated manually. There’s no easy way to verify the source data, to know that it’s unaltered in any way, or even to know that it’s the right information. And even if the data source used is a secured, production-like environment, there’s still no real auditing conducted to ensure the information is accurate or, for that matter, that it’s even the right information.
I once took over a change management process and assumed responsibility for a series of reports generated for the Managing Director, who in turn used them as part of his reporting package shared with the CIO. One of the key metrics being reported was scheduled releases and the IT department’s on-time implementation percentage. The numbers looked great, showing that they were on time more than ninety-five percent of the time over a two-year period. The only problem I could see with the metric was that it was misleading to the point where it was almost a lie. The scheduled release date was being pulled from the system used to migrate changes into production, and that date was only determined once the development team had completed all of their work. So the scheduled implementation date was chosen once they knew they were ready to move into production. Of course the on-time numbers looked great; they always knew they were ready before committing to a date. The Managing Director incorrectly assumed that there was a legitimate release schedule with forecasted dates and that the on-time numbers reflected a well-run process; wrong. No one ever questioned the numbers or their source, and had I not inserted myself into what was described as a well-honed, efficient process the problem might never have been identified; and there were a few more just like it. My trust in metrics was permanently altered after that.
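The inflation in that on-time metric is easy to demonstrate: if the "scheduled" date is only recorded once the work is done, every release trivially counts as on time. The dates below are invented for illustration; the comparison of the two percentages is the point.

```python
# Sketch of the misleading on-time metric: the same releases score 100%
# when measured against the date recorded at migration time, and 0% when
# measured against the date promised up front. Dates are hypothetical.
from datetime import date

releases = [
    # forecast = date promised up front; recorded = date entered into the
    # migration system after development finished; actual = release date
    {"forecast": date(2012, 3, 1),  "recorded": date(2012, 4, 10), "actual": date(2012, 4, 10)},
    {"forecast": date(2012, 5, 15), "recorded": date(2012, 6, 20), "actual": date(2012, 6, 20)},
]

def on_time_pct(items, scheduled_key):
    """Percentage of releases delivered on or before the chosen 'scheduled' date."""
    hits = sum(1 for r in items if r["actual"] <= r[scheduled_key])
    return 100.0 * hits / len(items)

print(on_time_pct(releases, "recorded"))  # 100.0 -- the number in the report
print(on_time_pct(releases, "forecast"))  # 0.0  -- the honest number
```

Same data, same formula; the only thing that changed is which field gets to call itself the schedule. That choice is exactly the kind of thing no one was auditing.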
Metrics represent an excellent way for decision makers to quickly understand status and identify problems. I’ve quoted here before how someone I respect quite a bit was fond of asking her team “If you can’t measure it, how can you manage it?” and she’s absolutely right. Metrics are the ultimate means for management to measure key activities and issues within their world. But how far do you go, and how much effort do you expend pulling the related reports together? And even if you’re able to automate the process and shorten the time necessary to generate the reports, how do you know that you’re measuring the right things or that the underlying data is unaltered? Ultimately I think that senior managers should be provided with something akin to a cost-benefit analysis for each metric they’re given. Have them understand the degree of complexity and the amount of effort required to generate a number before deciding whether or not it’s worth it. Perhaps I’m being naive, but I’d like to think that most C-level executives would eliminate a significant amount of their reporting if they could see how much it was really costing them.
Here’s the part that should really concern you the most though: metrics are a key component of Board reporting, and boards make all sorts of decisions based on what these reports tell them. How can that be allowed unless the process used to generate them is locked down and audited? Where are the regulators in all of this?
Here’s the sequence of events:
Wednesday morning I received an email alert from a company I use that my automatic monthly payment was declined. Knowing full well it wasn’t a balance issue, I assumed correctly that my bank had cancelled the card. As I travel extensively and rely on the card exclusively, I made my way to a local branch later that morning. Along the way I called into the service center and confirmed my suspicions: Visa had informed the bank that my card was part of a range of numbers possibly exposed via a breach. I asked if it was possible to learn the name of the offending vendor and was told (same as last time) that Visa doesn’t share that information. As I am now a two-time victim it’s easy to spot the trend and hard to ignore the possibility that it might have involved the same vendor both times. It wound up taking three visits to a branch to straighten me out and actually get a functioning card in my wallet. The inconvenience is anything but benign, as I use the card in several places and will now need to make manual, one-off payments with the temporary card while awaiting the permanent card so that I can update the affected accounts. By the time this is all said and done it will have cost me more than a half day of billable time trying to fix a problem I didn’t create.
A few things need to change.
We collectively as an industry and a society need to accept that both identity and card theft are mainstream occurrences and adjust accordingly. Legislation is needed to further insulate victims (like me) from any extended damage or inconvenience and ensure as smooth a process as possible to allow us to continue living our lives. Because right now I don’t just feel like a victim, I feel like I’m being punished for being one and treated like I simply don’t matter.
Hey Washington, make the industry tell us what’s going on and treat consumers better!
Oh, and PCI Security Standards Council, how’s that framework working out for you? I’m thinking the only ones benefiting from your content are the practitioners making money by supporting it.
Seriously, something needs to change.
Things are looking up a bit because I have a new favorite regulatory agency to follow, the Consumer Financial Protection Bureau (CFPB). And here’s why: They focus on things that impact my day-to-day life (and yours as well).
I started tracking what the CFPB was doing about five months ago by accident. Someone I know who used to be an examiner for the FRB switched over to the newer agency in its infancy, and I noticed this courtesy of a LinkedIn update. Because I consider the Fed to be the Big Kahuna of the regulatory agencies I was surprised (you don’t leave the Yankees to sign with an expansion team unless you have to, or so I thought). Compelled a bit by the update, I started poking around the CFPB website. For the first few months of this year it seemed to have potential but was little more than brochure-ware. But last month that all changed.
The first CFPB update that caught my attention was labeled 12 CFR Part 1070, and it covered the protection of consumer data, with a slight twist: any information the Bureau receives as part of its field work will be protected exactly as any third-party vendor would be required to protect it. Despite being a Federal agency, they weren’t going to hide behind that status as a means to simplify their lives. They spearheaded an update to the underlying regulation that frames their charter so that consumers and their institutions can be assured that all PII and NPPI would be protected. For me it was a rare win-win topic: protection of PII and NPPI combined with a reference to vendor management (these are a few of my favorite things). And really, for me it was that much more significant because I’ve known of a few situations where representatives of Federal and State regulatory agencies were responsible for the outright loss of confidential and/or restricted data. Beyond a slap on the wrist there wasn’t much else done to the offending examiner or their agency. And the affected institution couldn’t really complain too loudly, because it’s always a bad idea to challenge your regulators, even when you’re in the right. So I thought this was an at once compelling and remarkably sensible update by a regulator, not something I’d expect to see. Those were the first points on the board for the CFPB.
The second set of points was scored almost on the same day. I wanted to check one of the details related to the aforementioned update and noticed this one: “Consumer Financial Protection Bureau report finds confusion in reverse mortgage market”. Because I have a parent who is a senior citizen and who I think might one day soon be open to at least exploring a reverse mortgage, I read with great interest. The report was in plain English, was oriented in such a way that I could share it with my family and have them understand the issues and concerns detailed within, and most importantly it made sense. Reverse mortgages are growing in popularity, and their main audience is the senior citizen segment of society. Seniors tend to be more easily misled, and they’re under greater pressure to find new money sources (courtesy of our recession) at a time in their lives when going back to work is often not an option. And because a parent would do almost anything rather than turn to their children for financial assistance, they see a reverse mortgage as a way out of their predicament. So for me having this content available was quite the relief. I can caution and advise all day and night, but the risks presented by a reverse mortgage are much more credible coming from an authorized source. And so I celebrated July 4th this year by declaring the CFPB my new FDIC (the Sheila Bair inspired version, not the current blah one).
Here’s my really bizarro advice to any of you with even the slightest interest in regulatory oversight; if you haven’t already done so visit www.cfpb.gov and take a look around. It’s oriented towards lay people, not just lawyers and regulators (and practitioners like me) and addresses topics and concerns that affect the majority of our population. Basically it’s what I would expect from a regulator that still has that new agency smell but nothing like I’ve come to know from those that preceded it. To those who have had a hand in defining its charter and organizing its content, great job! Now repay my kind words by going out and getting me some juicy enforcement stories to write about.
Of course the truth is much more complicated. I don’t just focus on computers; my scope expands to include anything that involves sensitive information. While that always includes a variety of devices, it also includes paper-based and people processes as well. I frequently share stories about the enormous amount of printed content that’s to be found throughout an institution’s physical locations. I occasionally tell stories about how careless people can be when on the phone or in conversation, sharing all manner of sensitive information. It’s never just about computers; it is, however, always about information and how it needs to be protected.
Truthfully, though, what I really do is search for controls that protect information, identify those that I find, try to measure their effectiveness, and, more importantly, identify where controls are missing and work with my clients to remedy that. At the heart of the regulatory requirements I focus on, it’s all about the risk introduced by the presence of information, from personally identifiable information (PII) to non-public personally identifiable information (NPPI). Risk: it’s what drives every single project I work on, and it’s what drives every product and process I help develop. And really, if you take the time to read through the literature, it’s what’s behind just about every piece of regulation known to the banking world. Risk, risk, risk and risk.
One of the reasons I’ve enjoyed spending so much time working with the community banking and credit union sector over the past few years is that it’s a simple enough argument to make with fewer people to convince: everything you do to comply with the regulations should be risk-based. It doesn’t really make a difference if it’s complicated or time consuming; you prioritize based on where the risks are found and make decisions accordingly. But that gets much more difficult to do as the institutions grow in size and complexity. Over the fifteen years I’ve been building and supporting compliance initiatives I’ve worked with Fortune 50s, 100s, and 500s, and a whole lot of financial institutions that merely read Fortune magazine. But while their overall size varies widely, risk is still risk and that never changes.
I wish more practitioners embraced this simple concept. While some do, many still don’t. There’s often a rush to come up with a standard set of decision criteria to drive the work based on factors not necessarily aligned with risk. Those who have worked with or for me will tell you that when presented with questions about which vendors or applications to assess, or what to look for when conducting any type of assessment, my first line of logic is to figure out where the greatest possible exposures are to be found. Assessing a low-risk application yields little value no matter how complete the assessment may be. And reviewing a vendor where the dollar spend is high but the risk factors are low does little to protect the institution.
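The triage logic above can be sketched in a few lines of Python. To be clear, the scoring scheme, field names, and example vendors are all my own hypothetical assumptions for illustration; the point is only that the sort key is exposure, not spend:

```python
# A minimal sketch of risk-based triage: rank vendors or applications for
# assessment by exposure to sensitive data, not by dollar spend.
# The sensitivity weights and breadth scale are illustrative assumptions.

def exposure(item: dict) -> int:
    """Crude exposure score: data sensitivity times breadth of access."""
    sensitivity = {"public": 0, "internal": 1, "PII": 3, "NPPI": 4}[item["data"]]
    return sensitivity * item["access_breadth"]  # breadth: 1 (narrow) .. 5 (wide)

candidates = [
    {"name": "office supplies vendor", "data": "internal", "access_breadth": 1, "spend": 900_000},
    {"name": "core banking processor", "data": "NPPI", "access_breadth": 5, "spend": 250_000},
    {"name": "marketing analytics app", "data": "PII", "access_breadth": 3, "spend": 40_000},
]

# Assess the highest-exposure items first, regardless of what they cost.
plan = sorted(candidates, key=exposure, reverse=True)
for item in plan:
    print(f"{item['name']}: exposure {exposure(item)}")
```

Notice that the big-spend office supplies vendor lands last: spend is in the data but deliberately ignored by the sort, which is exactly the decision criterion argued for above.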
Beware the practitioner who wields a hammer for they only know to look for nails.
Your regulator doesn’t want you to blindly implement compliance programs; they want you to identify and manage risks, real risks. They want to be able to understand the logic and approach being used and find credible evidence that you’re focusing your efforts on the right things. Go back and read through the library of FFIEC documentation and pay close attention to the hooks inserted throughout, where they talk about conducting assessments using approaches appropriate for the size and complexity of your institution. Then scan through your related program inventory and figure out if you’ve designed things accordingly. Are they actually protecting your institution from credible threats and risks, or are they just filling binders on your compliance officer’s shelves?
For me, professionally, I’d prefer to do only meaningful work, and in the audit and assurance world meaningful is code for risk-based.