Regulatory Reality


July 8, 2011  3:16 AM

Cloud Computing – at what price?



Posted by: David Schneier
cloud, cloud computing, compliance, regulatory, Regulatory Compliance

Years ago, while working on SOX in its early days, the team I managed started getting a little tired of hearing that very term.  It seemed that everything was “SOX-this” or “SOX-that” as everyone tried to attach themselves to the massively intrusive new regulation and establish that they were in the know.  One of the members of my team started playing with the concept and began using terms such as “Good SOX morning” and “Excellent SOX point”.  And while it seemed sophomoric, it was actually quite fitting and helped people in our environment start pulling back a bit on their SOX-isms.

I’m reminded of this phenomenon courtesy of the latest and greatest technology to revolutionize the business world – Cloud Computing.

Over the past three months every industry rag has had something Cloud-related on its cover.  Every respectable technology website I know of has Cloud-something splashed all across its pages.  Almost every audit and compliance source I frequent seems to only want to talk about Cloud Computing.  But I knew I’d reached SOX-ish proportions when I recently visited a community website for my former hometown and right there amongst the neighborhood goings-on and local news bits was a blog post by someone offering to explain the phenomenon of Cloud Computing.  Right next to the story “Main Street Bistro offers live music this Friday” was a blog post titled “What is Cloud Computing?”.

Wow!  I haven’t seen such lunacy over a new technology since the iPad 2 came out earlier this year.  Seriously, this isn’t just about corporate data centers and hosted business solutions, this is also about small-town Main Street (or so I’d have to think based on the blog post).

Am I the only one finding this all just a bit odd?  Remember when Microsoft started running those commercials touting Windows 7 software and the Cloud (every problem people in the commercials were confronted with segued to a solution elsewhere; “To the Cloud” went the line)?  What exactly was that supposed to even mean?  There’s a huge difference between accessing your home desktop via a remote PC connection and having all of your digital world stored in some amorphous conglomeration of servers.  My kids would stare at the TV and ask aloud if we could use the Cloud.  Holy Hype!

I’m not sure who exactly is behind this awesome marketing strategy but I have to tip my hat to them, they’ve outdone themselves this time.  When children are asking their parents if they can use the Cloud you know something went very, very right.

But as someone who makes his living trying to build controls around things and testing them to make sure they’re working properly I have to tell you, when I think Cloud I don’t think Computing, I think storm.  I think of huge thunderstorms and heavy, ear-splitting rain.  I think of hail the size of baseballs smashing down on everything and winds whipping up and destroying anything in their path.  When it comes to Cloud Computing you may see fluffy, white, pillowy images but I see nasty dark skies ahead.

How do you secure the Cloud?  How do you back up the Cloud?  How do you know where your data passes through or resides in the Cloud?  You don’t, that’s the thing.  If one server in the Cloud configuration gets hacked, if one virus somehow slips past the anti-virus filters, if someone with ill intent gains access to that one server or is able to install a sniffer of some sort, how would you know if it affects you?  Again, you wouldn’t.  You might know that you’re at some risk but you wouldn’t know for certain.  It’s a controls nightmare for people like me.

And to be fair, I’m not as concerned about private cloud configurations as I am about those offered out in the public domain; I’m still concerned, just not as much.  But for those Cloud offerings that promise cheaper storage, email and web-hosting I have to ask, how can you assure me that my data is safe?  What happens if one component in the Cloud configuration is compromised, how would you know who is impacted and who to contact?  And what if some or all of your infrastructure is seized by law enforcement as part of an investigation, who’s impacted and how do you know for certain (that was a question raised on a LinkedIn board I read)?

There have always been too many moving parts in a heterogeneous network design and the industry has never been able to completely build out solutions to lock it down sufficiently (thus the reason for the “Breach of the Week” announcements we read about).  Now we’re being told to migrate what’s on that network to an ever-changing virtual infrastructure where many hands make light work.  Where your digital world resides today is potentially (even likely) different from where it will be tomorrow and you have no real control over that.  How does that appeal to anyone?  Seriously, how much money do you think you’re saving by rolling these dice?

You know who I think is behind the push for Cloud Computing?  The criminal element.  Seriously, think about it.  They offer a big virtual sandbox where you can host all your files and applications on the cheap.  They sexy it up by running creative ads and getting vendors to back them up and voila, people are running to it like moths to the light.  And for what, to save a few bucks?  How much is your sensitive data worth?  Probably a bit more than what you’d be saving by running in the Cloud.  Now that I think about it I’m certain I’m on to something.  I think that people who are looking for easier and more efficient ways to gain access to sensitive data that isn’t theirs are behind this.  If your information is compromised how can you ever prove it happened in the Cloud?  This may be the perfect crime.

If you want to have your head (and data) in the Cloud, so be it.  For my money… no, wait, for my personally identifiable information, I’m taking a pass on the Cloud.  I know too much about how technology works and doesn’t work and I can’t even begin to figure out how these configurations can be properly secured.

June 24, 2011  2:43 PM

Is new guidance really new or worth waiting for?



Posted by: David Schneier
cloud, compliance, compliant, FDIC, FFIEC, guidance, NCUA, PCI, regulatory, Regulatory Compliance, regulatory guidance

Oh how the times have changed.  Once upon a time I was part of a group of peers who waited for new album releases, camped out overnight for concert tickets and once even waited on line for the annual release of Strat-O-Matic’s baseball set (perhaps the nerdiest thing I’ve ever done).  And all of this was done with genuine anxious anticipation.  Now I’m part of a group that has been nervously drumming its fingers on the virtual table waiting for the FFIEC to release its new guidance on Internet-based application authentication.

Seriously, it’s a big deal.  And so far it’s been much ado about nothing.

I don’t know what the actual hold-up has been.  A draft of the new guidance was leaked online last year (ironic, don’t you think?) and heavily circulated a while back, but no one in any position of authority has offered word one as to whether or not that’s close to what the official document will look like.  But here’s my question to stakeholders throughout the banking industry: Why are you waiting for the FFIEC to spell out what you need to do?

I suppose if you’re committed to doing the bare minimum expected by the examiners and not interested in extending your solutions to adequately protect your customers that’s a sound strategy.  But why do you need anyone to tell you what to do?  Shouldn’t you be continually assessing your environment, keeping current with existing and emerging threats and designing controls to rein them in?  That’s not only a solid business practice, it’s also heavily implied by, wait for it, FFIEC guidance.  That’s right folks, if you’re supervised by any of the FFIEC sponsoring agencies they’re already expecting you to conduct periodic assessments and modify your infrastructure to mitigate and manage identified risks.  But that’s really more theory than practice.  All too often management is willing to wait and see what their annual exam reveals and only address those things that the examiner cares about.  And because examiners typically operate under the constraints of limited hours they look at what they can and the rest just has to wait (and sometimes wait and wait and wait).  So while a key requirement may not be satisfied, if the examiner didn’t have time to look into it the gap remains unchanged.  Again, why does that happen?

I recently brought up this very topic during an internal meeting within my practice and one of our subject matter experts laughed at my naiveté.  As he pointed out so matter-of-factly, the only reason most FFIEC-centric activities ever really happen is because financial institutions don’t want to fail an exam.  Rare is the management team that builds out its controls in an attempt to address the so-called “industry best practices”; most instead do what they believe necessary to keep their examiners happy.  And so if the FFIEC doesn’t spell out minimum requirements to authenticate and protect online banking solutions there’s little chance the industry will move in the right direction.

But what if the guidance falls short of what’s necessary to get the job done?  What if it only frames the problem but doesn’t actually tell you how to solve it?  Remember, the primary purpose of guidance is to raise awareness of the issue, not necessarily to explain how to fix it.

I offer as a for-instance the most recent publication from the PCI folks.  They just released a new document providing guidance for virtualized infrastructures (which is really a fancy term for cloud computing).  I’ve been somewhat outspoken on this very topic because I’m not confident that in-scope infrastructures have done enough to address traditional PCI guidance in a somewhat homogeneous environment – now these same companies are chomping at the bit to move things into the Cloud.  If you couldn’t properly secure and monitor a configuration where each device could be identified and configured, how are you going to do it on a platform where you never really know where your information passes through?  But the leadership atop the PCI council at least decided to try to frame not only the challenge but also provide some direction on what to do about it.  And their guidance boiled down to this: No one can tell you how to secure relevant parts of the Cloud configuration, so the only way to be properly compliant is to make the entire configuration compliant.  I’m sure that when the audience first downloaded the document they were hoping to find a clear path to leveraging the latest and greatest technology without having to boil the ocean.  Instead they were told that you have to assess the environment and introduce PCI-related controls anywhere there’s a possibility in-scope data might pass.  With that one broad stroke of a digital pen they pretty much made Cloud computing a much more costly investment for those who need to comply.  Their guidance didn’t solve the problem, it just defined it more clearly and delivered the bad news that there would be no shortcuts available in effort or cost.  And while it may not be popular guidance it is, ultimately, right.

As for the FFIEC guidance I’d offer this as food for thought: If you have weak or deficient controls around online authentication your examiner is not going to give you a free pass because the new guidance is delayed.  They’re not going to let you off the hook if you’re missing something significant simply because no one told you it was missing.  You’re supposed to figure these things out for yourself, they’ve told you that time and time again.  And while I won’t know for sure until I know for sure, I’m expecting their guidance will be somewhat similar to the PCI Cloud publication where they frame the problem and summarize by telling you that you need to figure things out based on your own unique infrastructure.

Seriously, don’t wait for the industry to tell you what you need to do when you should already know what that is.  As Dr. Seuss advised many years ago in the great children’s book “Oh, the Places You’ll Go!”: Your mountain is waiting, so get on your way!


June 15, 2011  4:52 PM

The trouble with ineffective controls



Posted by: David Schneier
assess, assessment, Audit, bank, banking, community bank, compliance, credit union, CU, data center, GLBA, NCUA, regulation, regulatory, Regulatory Compliance, Security

I’ve been visiting with my mother who lives in a gated retirement community. In order for me to gain access to the development I need to pass through a security checkpoint at the main gate. They ask me who I’m visiting, I provide my mother’s name and either they find my name on the pre-approved persons list or they have to call her to authorize my entry, or at least that’s what they’re supposed to do. Ever the auditor, I’m always amazed that they never ask me to provide any form of proof that I am who I say I am. I’m further amazed by how inconsistently this very basic control is applied. Some of the security guards wave me in without ever checking that it’s OK to let me in. Some look up her name on their system to make sure she exists but never ask me who I am, and just a very small handful of the guards follow protocol and check my name against the list (but still without ever knowing if I’m me). For the purpose of this blog post, let’s ignore the fact that I could park on the street outside the development and simply walk across the lawn to gain access to her apartment, completely bypassing security. Let’s also look past the fact that all I would ever need to do is have someone elderly sitting next to me and tell the guard that I’m returning that person to their apartment in order for them to let me in. Generally speaking, despite having security guards, a secured entry and a documented process to control who is allowed access, they might as well have nothing because net-net that’s what they really have. This visually impressive control fails miserably and anyone with ill intent would know that in a heartbeat.

Which begs the question, why bother supporting ineffective controls when they fail to control anything?

I wish it was rare that I encountered similar situations with my clients but it’s not. My favorite ineffective control is the manual visitor sign-in sheet I often find when auditing/assessing my clients’ physical data center controls. My hosts often make a big deal out of asking me to sign in before allowing me access to their data center or server room and I typically play along. However, I’m fond of using an alias to see if they validate the information I provide (usually they don’t). The manual sign-in sheet falls under the category of “better than nothing” but in its own special sub-category I call “but not by much.” The list is always a bit light and is often missing sufficient evidence to prove that it’s consistently relied upon. Another favorite of mine centers on production change control. Some of my clients have fairly robust processes to track changes to application software, but ask them for evidence of system software updates or hardware configuration changes and I’m met with blank stares as they try to figure out how to tell me they don’t really track those things formally. So you have to wonder: why even bother to track some of the changes if you’re not tracking all of them? If something went wrong within a client’s infrastructure, how would they know whether any recent changes might explain it if they don’t know about everything that changed?

Here’s a bit of a radical thought: stop supporting ineffective controls and save the time and effort required to support them.

Seriously, even though a control might appear critical in nature, if it’s poorly designed, poorly supported or just flat out ineffective, kill it altogether. No decent examiner or auditor is going to be tricked into thinking it’s providing value and it’s likely going to call into question the validity and reliability of all your other (hopefully) effective controls. If you feel strongly that the control needs to be in place and doing its job then do something about it. Either redesign things so that it’s viable and effective or scramble like crazy to identify compensating controls that render the control unnecessary.

We live in an age where compliance rules all. There are all manner of controls required to satisfy our oversight agencies and auditors, and that’s a list that will only continue to grow. No one has the luxury of wasting time or the precious few resources they have to work with, so it’s that much more critical that these things be thought through and validated. Expecting people to support control-related activities that ultimately fail to satisfy their objective is flat out wrong. And because this is the age of regulatory enlightenment, those who toil within the financial services industry are a bit more savvy about how these things work. They have an idea of whether or not what they’re being asked to do makes sense and will resist or defer participating if they think it’s a waste of time. The only thing worse than an ineffective control is one that’s poorly supported.

It’s why I often wonder what would happen if I simply drove across the lawn closest to my Mom’s building and completely avoided the main gate. I’m thinking that if it’s after sunset when there are no golfers walking the links I could probably pull it off. Of course I’d have to deal with the compensating control of an angry mother once she figured out what I did but perhaps, just to prove a point it might be worth it.


June 3, 2011  3:18 PM

What does the “E” stand for in ERM?



Posted by: David Schneier
assess, assessment, Audit, compliance, enterprise risk, enterprise risk management, ERM, GLBA, NCUA, regulations, regulatory, Regulatory Compliance, risk management

Last week while attending a banking conference I found myself in a conversation about Enterprise Risk Management (ERM).  I had made the comment that I was tired of constantly hearing different definitions of what the discipline is and how it should be applied.  It’s the latest hot buzzword, fueled in large part by the banking crisis that’s still unfolding and by industry experts’ belief that much of the mess could have been avoided had management been better at measuring and managing risk.  And while in theory I agree that ERM would certainly have helped, that’s only true if the concept is applied effectively.

While I’m no expert on the subject I have plenty of experience working in organizations that either have successfully implemented ERM practices or are attempting to do so.  I am routinely amazed by what I discover along the way, how some have a firm grasp on what needs to be done and how others simply rebrand a group or function as Enterprise Risk but do little else to further the initiative.  There are some core activities that need to be in place and functioning in order for management to achieve any measure of success and ultimately they’re either there or they’re not.

My entire perspective is a bit tainted.  I first learned about ERM at the hands of a master practitioner.  A few years back I attended a two-day workshop taught by Tim Leech, who has often been referred to within my circle of associates as the “Godfather of ERM”.  Tim has been assisting companies of all sizes, complexities and verticals for decades in building out programs to first define their risks and then figure out what to do about them.  His approach is so effective that to the casual observer it would almost seem simple, even easy to implement.  However, what I learned after two days of listening and learning was that there is a right way and a wrong way to build out ERM programs, that the approach needs to be tailored to the organization and that if the wrong people are leading the way it can cause more harm than good.  A few months later I had the pleasure of participating in an actual ERM engagement where Tim applied his theories to assist a healthcare company in designing the foundation of an ERM program.  It was fascinating how much information he was able to collect from an audience comprised entirely of the “C” suite, how he engaged them in a lively dialogue which resulted in a frank and honest identification of risks and allowed them to begin building out a framework.  What was key to the early success of the newly forming program was that it resonated with management because it echoed their sentiments and incorporated their thoughts and concerns.  The risks that they were being asked to address made sense to them because they were the ones who helped identify them to begin with.

Unfortunately what I didn’t realize at the time was that I might never have a reason to feel as good about ERM ever again.

In the time since my introduction to Enterprise Risk I have routinely encountered approaches where a team of really smart people sit in relative isolation and come up with a list of risks they perceive as being relevant to the organization.  They then try to figure out what to do about those risks so that they can provide direction to management.  Management often receives the information more as a directive than as guidance and attempts to operationalize it.  People in the proverbial trenches are attached to the resulting activities, struggle to implement new controls and associated processes and report on progress.  And in the end no one really knows if the work was either necessary or useful.  I’ve encountered several variations of this approach, sometimes where external experts are brought in, sometimes where internal audit leads the way, but almost always with the same general limitation: no one talks to the people who live with the risks.

I’m reminded of a phrase I encountered years ago and have always liked: “It is not the same to speak of bulls as to be in the ring”.

Rare is the ERM approach that incorporates an active dialogue with the people “in the ring”.  How can you identify any measure of risk without first talking to the people who have to deal with it every day?  Sure, there are frameworks that predefine some of what needs to be done, but any regulation, any guidance that currently exists advocates for an organization to identify and measure risk as it exists within their world, their infrastructure.  And the only reliable way to begin such an exercise is to talk to the experts, the people who know how things really get done.  And how do you even know which people to talk to without first starting atop the organization and finding out what management is committed to doing, what their business goals and objectives are?  This information isn’t found in a document or in a spreadsheet, it only exists in the minds of the people who help run the organization.  Without knowing what they know and understanding what they’re concerned about you don’t really have a meaningful clue about what or where the risks are.

Doing what I do for a living you get pretty good at forming qualifying questions that quickly frame a client’s environment.  Recently I asked a network manager about his institution’s vendor management program and he wasn’t sure if they had one; I told him they didn’t because in his role, if they did, he would have to know about it.  And so when I ask someone serving in a critical role about their institution’s ERM efforts I can more or less assess its effectiveness based on the answer.  If I hear anything along the lines of “I’m not really involved” or “I think it’s covered as part of our audit process” (a popular misconception) I know that ERM is just a buzzword and not a true discipline.  Because if it existed in any meaningful way they would have needed to contribute somehow.  And thus my reason for being frustrated.

I really don’t want to hear about ERM or talk about it unless it’s contextually effective.  If the “E” in ERM doesn’t represent “Enterprise” but rather “Executive” or “Existential” or some other unrelated perspective I’ll take a pass.  It’s like when COBIT became the de facto standard and everyone wanted to know if your framework was COBIT-based, as if that somehow conveyed something akin to pedigree.  There’s real work behind the discipline and just because you label a group or project ERM doesn’t mean it is.  And I’ll promise you this: if you’re applying some or all of ERM’s most basic tenets there’s no way you’ll ever speak of it in conceptual terms again.  Because once you’ve been in that ring you’ll never look at a bull the same way.


May 20, 2011  3:29 AM

Does the banking industry understand what risk-based means?



Posted by: David Schneier
compliance, FFIEC, GLBA, regulation, regulations, regulatory, Regulatory Compliance, risk, risk assessment, risk-based

Years ago I added an addition to my first house. After my second child arrived, we had simply run out of room and decided it was easier to expand our current living space rather than trying to find a bigger one. Plans were drawn up, work scheduled and money deposited. Two days before the first shovel was due to hit the ground, our contractor called to inform us that a recent change in town ordinances required that our crawl space be deeper than what was originally there. As a result, they would need to rip up what was in place, dig another eighteen inches deeper and pour a new foundation. Day One minus two days and the blueprints were scuttled, the schedule changed and the project under-funded (concrete ain’t cheap). But that’s just the way things tend to happen in the real world.

It is why, when I recently heard a fellow practitioner describe a popular industry framework as a turnkey solution, I cringed. Not only can you not use a framework as is, you can’t even accurately whittle it down and right-size it until you take it out for a test drive. Life happens, the world is imperfect and things don’t always align the way they should. Which is why the banking industry really needs to adjust its approach to compliance and take advantage of one of its greatest weapons in the never-ending battle to comply with the overwhelming amount of regulations – risk management.

Seriously, it amazes me how so many of my clients overlook this valuable discipline when setting out to build their controls frameworks. FFIEC guidance is very clear that every solution, every process, every procedure should be designed based on the size and complexity of your institution. What they’re telling you is that what might make sense for a $500 billion bank might not make sense for a $100 million credit union; you need to determine what you should have in place, and how you determine what you need ultimately comes from conducting a variety of risk assessments.

There’s all manner of risk (e.g. enterprise, operational, financial, information security, etc.) and an even longer list of sub-categories that belong to each. By identifying those myriad risk factors and assessing them properly, management is able to decide what needs to be managed, what can be mitigated, what can be eliminated and what they just don’t care about and are willing to live with. That’s how you decide what controls need to be in place and that’s when you’re ready to start leveraging the various frameworks, but that almost never happens.
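
To make the “measure first” idea a bit more concrete, here’s a minimal sketch of the likelihood-times-impact arithmetic a basic risk assessment boils down to. It’s not drawn from any FFIEC document; the risk names, ratings and thresholds are purely illustrative assumptions on my part.

# Illustrative risk-scoring sketch: rate likelihood and impact (1-5 each),
# multiply them, and use the score to decide how to treat each risk.
# Risk names and thresholds below are hypothetical, not a prescribed scale.

RISKS = {
    # risk name: (likelihood 1-5, impact 1-5)
    "Online banking credential theft": (4, 5),
    "Data center power failure": (2, 4),
    "Branch signage vandalism": (3, 1),
}

def risk_score(likelihood, impact):
    # Simple qualitative score: higher means more attention required.
    return likelihood * impact

def disposition(score):
    # Map a score to a treatment decision using illustrative thresholds.
    if score >= 15:
        return "manage with dedicated controls"
    if score >= 8:
        return "mitigate and monitor"
    return "accept and document"

for name, (likelihood, impact) in sorted(
        RISKS.items(), key=lambda item: risk_score(*item[1]), reverse=True):
    score = risk_score(likelihood, impact)
    print(f"{name}: score {score} -> {disposition(score)}")

The point isn’t the math, which is trivial; it’s that the ranked output, not the downloaded framework, is what should drive which controls you commit to.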

Typically, when an institution decides to build out a new procedure they download the appropriate framework and either try to use it as is or make what basically boil down to arbitrary decisions about what should be included. It’s why I’ll often come across an information security policy that prohibits the use of company equipment to browse the Internet for non-business purposes even though the institution neither prevents it via web filtering nor ever enforces it. Or why policy and web filtering both prohibit access to Facebook yet the institution has a Facebook page to support its marketing efforts. It’s how so many modest-sized banks wind up committing to a rigorous change management process despite being two-man IT shops that are just about always out of compliance. No one bothered to determine what they really needed before committing to it. A risk assessment would have helped.

None of the requirements are intended to be literal. Your regulators want you to measure twice before cutting once. They want you to gain an understanding of where you’re at risk, where you’re not, and then do something about it. Finally, they want you to periodically repeat the process. One of the sharpest people I ever worked for, who has since ascended to become the company’s CIO, was fond of asking “If you can’t measure it, how can you manage it?” and she was right. That’s exactly what risk assessments do; they allow you to measure the problem so you can design the appropriate solutions to manage it. This is why we hear Enterprise Risk Management (ERM) used increasingly in conversation and how it’s matured from some sort of seemingly mystical voodoo magic into a fixture of boardrooms and C-suites.

Honestly, it’s difficult enough to keep up with everything these days; why do more than you need to?  Why commit to conducting work without first knowing that you need to?  The banking industry wants you to work smarter, not harder (measure twice, cut once) so why not embrace it?


May 8, 2011  4:46 AM

Another data breach? What else is new?



Posted by: David Schneier
breach, compliance, data breach, FDIC, NCUA, regulations, regulatory, Regulatory Compliance

The other day I was watching my cat attempt to catch his own tail. Now I know that by itself it’s not unusual for cats or dogs to attempt such a feat but for this one in particular it was unusual as I’ve never seen him do it before. He’s a remarkably athletic animal and so what I witnessed turned out to be something a bit different. He started spinning so fast that at one point he actually gained altitude and spun more than a complete rotation without the benefit of legs. At the same time, he somehow managed to extend his forepaws just enough to grab the tip of his tail and once done, dropped back to the ground to enjoy his success. He went on to do the same exact thing twice more before calling it quits.

Why I bring this up is because sometimes I feel that my industry does the same exact thing only in writing.

After staying up late last Sunday night to follow the developing story regarding Osama Bin Laden, I remember quite clearly what was going through my mind.  It was a delicate blend of relief, national pride and something that can best be described as detached ambivalence. I also experienced a touch of concern wondering if those aligned with the terrorist leader would attempt some measure of revenge and wishing that I wasn’t traveling this week. I also remember wondering if my children were going to remember this moment in any measurable way so that perhaps one day they might tell their children the story about where they were when they heard the news. But what I didn’t think at all about was how this turn of events was going to impact the banking industry. Apparently I was missing something.

When I had a chance to scan the industry sites on Monday, a number of them had lead stories about how important it was for banks to step up their monitoring efforts in the wake of Bin Laden’s death to detect the movement of monies used to fund terrorist organizations. Several rehashed the impact that 9/11 had on the banking industry discussing AML and BSA. One even had a story that sort of spun things in a way that might make the reader think the banking industry was at increased risk of disruption due to malicious efforts.

Really? I mean, really?

The only silver lining to any of this was that it sort of pushed the Sony data breach to the back of the line, which was another hot topic that had me scratching my head. Many industry experts were clamoring about the enormity of the breach (no one actually knows how big it is; it’s all speculative at this point).  Several articles were thinking aloud about how significant this incident could be if it also included credit/debit card information. Some were estimating that the potential cost of the breach could set records. If I didn’t do what I do for a living this would have had me freaking out a bit. But really, in the end, I know better and by putting things in perspective could see that this wasn’t another Heartland but something more closely resembling the Epsilon breach.  Sony clearly stated that while there was the potential that credit card information might have been exposed, it was less than one percent of the total number of accounts involved and all were exclusively outside of the U.S.A.  So for most of the tens of millions of PlayStation users who were affected, it was pretty much a minor event.

At the end of my workday on Monday, after reading all the blaring headlines and posts dissecting the Bin Laden and Sony stories, I came to the conclusion that my banking clients had nothing new to worry about that wasn’t already on their radar when they left for the weekend the previous Friday. All of the institutions whose operations I have knowledge of were already doing what they needed to do to address AML/BSA requirements and none of them had any new exposures due to the Sony breach (unless of course they had a Sony PlayStation at home). All those headlines and so little to learn from any of it.

Really?  I mean, really?

There are legitimate news stories that naturally extend themselves to banking and regulatory compliance, but not all of them do. And not every recurrence of a now all-too-common affliction (data breaches) requires a “stop the presses” mindset. I remember shortly after the Heartland breach was announced back in 2009 being onsite at a credit union client. I was amazed by how much it impacted their operational area, but only until their COO shared with me that this was only the most recent such event and something they had to deal with fairly regularly – what I was witnessing was, sadly, a new type of business as usual. Here I was thinking Heartland had been a game changer but all it was in the end was an unusually large incident. Some banking media sites at the time rode that story for months despite the fact it was only big in scope, not in impact.

And so in the end I wonder what exactly is the difference between publishing content about an event that isn’t really an event and my cat chasing his tail.


April 26, 2011  6:00 AM

Is compliance moving too fast?



Posted by: David Schneier
assessment, Audit, compliance, exam, examiner, exams, GLBA, governance, GRC, NCUA, oversight, regulations, regulatory, Regulatory Compliance, risk

I joined a new group on LinkedIn last week focusing on compliance within the banking space and during my first visit answered a forum question that started with “How do you manage the flow of compliance information?”  It was a relevant question and I was happy enough to offer my two cents (never a problem for me, I assure you).

Here’s my reply:

“It’s no longer even a matter of whether or not your institution has time to track the various activities and statuses, it’s quickly becoming a measurable practice of its own within the oversight circles. We’ve recently encountered several exam comments addressing the concept of compliance management which focuses on how an institution demonstrates a working knowledge of and compliance with the broad spectrum of requirements.

I think the days of last minute program (policy and procedure) updates and testing in the days leading up to an exam are near an end; the examiners are quickly losing their appetite to allow such flexibility and are expecting management to clearly establish that they’re taking compliance seriously.”

I’m sharing this exchange with you for a couple of reasons.  First, my reply was one of four and the four answers quite literally seemed to address four separate questions, which I found both curious and concerning.  One person interpreted the question to be about keeping up with newly emerging and changing laws, one person replied as though it was about keeping track of what needs to be done internally and one person thought it was more about governance and engaging stakeholders.  And while I’m not sure which, if any, of us answered the question correctly, I am certain that all four brought out into the open the bigger issue, which is: how does anyone keep up with the speed at which compliance is evolving?

Which brings me to my second reason for bringing up the exchange.  Are you prepared to demonstrate to an examiner how you manage all of your compliance initiatives?  If not, you’d better get busy because it’s something you’re likely going to need to do in the near future.  At least two clients my practice works with have recently shared with us that their examiners have been carving off time to review what’s being called “compliance management”.  Simply put, it’s the overall approach an institution takes to tracking the various regulations and ensuring that it’s complying where applicable.

What that means to you is that it’s no longer enough to present the various program artifacts to the examiner upon request; you now have to demonstrate how you track each of those elements and determine their status.  It also means that you have to demonstrate an awareness of new and/or changing requirements and maintain some measure of program change management.  Gone are the days of pulling a new program together in the days leading up to the exam just so you have something to show for it.  Gone too are the days of scrambling to bring everything up to date via herculean efforts, logging long nights and weekends in the weeks leading up to the kick-off meeting.
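
To make “tracking each of those elements” a little more tangible, here’s a minimal sketch of the kind of register such a process implies. The requirement names, artifacts, dates and review cycle are my own illustrative assumptions, not an examiner’s template.

# Illustrative compliance register: for each requirement, record the
# supporting artifact, its status, and when it was last reviewed, then
# flag anything past an assumed annual review cycle.
from datetime import date

compliance_register = [
    {
        "requirement": "GLBA 501(b) information security program",
        "artifact": "Information Security Policy v3.2",
        "status": "board approved",
        "last_reviewed": date(2011, 3, 15),
    },
    {
        "requirement": "Identity Theft Red Flags program",
        "artifact": "Red Flags Program document",
        "status": "update in progress",
        "last_reviewed": date(2009, 11, 2),
    },
]

REVIEW_INTERVAL_DAYS = 365  # assumed annual review cycle

def overdue(entries, today):
    # Return entries whose last review is older than the assumed cycle.
    return [e for e in entries
            if (today - e["last_reviewed"]).days > REVIEW_INTERVAL_DAYS]

for entry in overdue(compliance_register, today=date(2011, 4, 26)):
    print("Overdue for review:", entry["requirement"], "-", entry["status"])

Whether it lives in a GRC tool or a spreadsheet matters far less than being able to show the examiner that something like this exists and is actually kept current.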

I remember when Red Flags was about to go live back in 2008, I asked an audience I was presenting to how many had their programs board-approved and in place, and only a few hands went up.  I asked how many expected to have their program at least finalized by the go-live date and again only a few hands went up.  But when I asked how many planned to wait until two weeks before their next exam to get around to designing something, almost the entire room laughed and then sadly raised their hands.  But those days are about to come to an end.

Ultimately, what I think is going to happen is that this important shift in oversight strategy will accelerate the adoption of the principles of GRC.  I’ve been beating that drum quite a bit lately (even more than usual) and am all the more confident that my thinking is right.  An important element of GRC is the ongoing monitoring (governance) of the various risk and compliance activities and that’s what your examiners are going to be looking for.  My best guess is that we’re about a decade away from widespread acceptance and that GRC will follow a growth curve similar to the one recently charted by ERM.  Right now GRC seems a bit exotic to senior management and more theoretical than practical but that will continue to change.  As more practitioners incorporate elements of the methodology into how they meet the various challenges it will become increasingly commonplace.  And when the economy finally starts to rebound and funding isn’t as hard to come by, institutions will accelerate the pace and GRC will become part of the everyday vernacular for compliance professionals and their management.

For now, though, practitioners like me will simply have to keep introducing elements of GRC into the solutions we develop for our clients without identifying it as such.  For those of us fortunate enough to know there’s a better way there’s no reason to wait, and it’s a win-win for the institutions we work with.  As I recently advised a client in regard to an upcoming exam: have a plan, collect evidence that the plan is being followed and prove that there’s a process to periodically assess the plan for accuracy, viability and relevance.  That they liked, but had I introduced it as a component of GRC I wonder if it would have appealed to them as much.

How else can you keep pace with compliance?


April 18, 2011  6:22 PM

Epsilon: Why vendor management is critical.



Posted by: David Schneier
Audit, bank, banking, compliance, FDIC, FFIEC, GLBA, NCUA, regulatory, Regulatory Compliance, requirements, risk, SAS 70, vendor, Vendor Management

A few years back we hired a local painting contractor to do some work around my house.  Upon completing his sales spiel he announced that he often relies upon subcontractors for the less skilled work and wanted to be upfront about that before we entered into any sort of deal with him.  Anyone he used was both legal and covered under his insurance and so he assured us we needn’t worry that we were relying on illegal immigrants or exposing ourselves to any unusual risks.  The first day of the project one of those subcontractors cracked the expensive glass top of our brand new oven and true to his word the contractor completely covered the cost of repair.  What was interesting in hindsight was how much value the contractor placed upon being able to issue such guarantees up front and how he felt it was important to illuminate his dependency upon what we in the banking industry call third-party vendors.  I wish all my business partners felt the same way.

Over the past few weeks I was stunned by the number of email mea culpas I received from a long list of companies I conduct business with who were affected by the recent Epsilon email breach.  For those not already in the know, Epsilon is a third-party vendor that specializes in email and digital marketing services for thousands of businesses and as a result has one of the largest collections of valid email addresses in the world.  At some undisclosed point last month an undisclosed number of personal accounts were breached in a, yup, you guessed it, undisclosed manner.  And because of the breach it’s quite possible that your name and email address are now in the hands of someone who plans to use them for unauthorized or unwanted purposes.

I find it truly amazing how many companies I choose to conduct business with in turn choose to conduct business with Epsilon.  The breach by itself doesn’t overly concern me as my cadre of email addresses is already in widespread circulation and I can throttle what makes it all the way through to my in-box anyway.  What does concern me is how many companies used this one outfit and how, despite holding such a rich repository of personal information, it still allowed conditions to develop that resulted in the loss of data.  How could this happen and why didn’t the nearly dozen companies I do business with who were affected by the breach make absolutely certain that my information was safe?

But here’s the bigger question: Who else are they doing business with that I need to worry about?

Seriously, think about all the information you entrust to your business partners, be it a credit card company, a utility company, a doctor’s office, your bank, your financial services firm or even your grocery store.  Think about how many times you’ve filled out forms either online or in writing and turned them over to the long list of companies you routinely engage with.  They all make a big deal about security and issue disclaimer after disclaimer about how they protect your information.  But along comes a third-party vendor that they conduct business with and you no longer get to decide how your information is used or protected.  They negotiate deals, conduct varying degrees of due diligence (and by varying it could range from almost none to remarkably extensive – but usually closer to none) and typically go with the deals that best serve their interests.  And you haven’t a clue.

This is not a new type of risk either.  Vendor management has long been a regulatory requirement and over the past few years has been receiving greater scrutiny from the examiners.  But you’d be amazed by how many business entities and financial institutions I’ve encountered who either don’t do enough or don’t do anything meaningful at all to address this properly.  I often encounter vendor management programs that are really just spreadsheet repositories with pitifully thin information and a lack of supporting documentation.  And the majority of financial institutions tend to focus what efforts they do make on those vendors they deem critical – whose numbers can usually be counted on one hand.  I wonder how many of the companies affected by the Epsilon breach either had a vendor management program in place to manage that relationship or had Epsilon listed as a critical vendor.  And if they did, what information did they collect to assess the related (and required) controls and how did they arrive at the conclusion that they were properly managing sensitive data?
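
For what it’s worth, here’s a minimal sketch of what a vendor record could capture beyond a thin spreadsheet row: criticality, the data being shared, and the due-diligence evidence actually on file. The vendor, field names and required evidence are purely hypothetical assumptions on my part, not a regulatory template.

# Illustrative vendor management record: flag high-criticality vendors
# whose required due-diligence evidence isn't actually on file.
vendors = [
    {
        "name": "Example Email Marketing Co.",   # hypothetical vendor
        "criticality": "high",                   # touches customer NPPI
        "data_shared": ["customer names", "email addresses"],
        "evidence_on_file": ["contract", "SAS 70 report"],
        "evidence_required": ["contract", "SAS 70 report",
                              "incident response contact",
                              "breach notification clause"],
    },
]

for vendor in vendors:
    missing = set(vendor["evidence_required"]) - set(vendor["evidence_on_file"])
    if vendor["criticality"] == "high" and missing:
        print(f"{vendor['name']}: missing due diligence -> {sorted(missing)}")

It’s deliberately simple; the point is that even this much structure forces the question of what evidence you actually hold for each vendor that touches your customers’ data.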

Remember, GLBA requires that the rules governing how a bank manages non-public personal information (NPPI) also extend to the vendors that bank conducts business with.  And so the Epsilon breach cannot be considered a separate and distinct breach; the institutions that use its services are directly responsible for what happened.  What will likely occur, should the issue be pressed, is that Epsilon’s business partners will wave copies of a recent SAS 70 in the air and claim they did everything reasonable to protect their customers’ data.  But the truth is that reports such as SAS 70s are more subjective than we’re led to believe and typically only prove that functioning controls are functioning – it’s rare to encounter a SAS 70 that details failed controls.  And so you have to question who your business partner is in turn doing business with, because as a byproduct of that relationship you’re now also doing business with them even if you’ve never heard of them before.

Ultimately what we need is for financial institutions and Corporate America to step up and adhere to the same standards as my aforementioned painting contractor.  They need to offer full disclosure up front when they share your information with another business entity (and not just via veiled references poorly detailed in the fine print) and need to extend protection of that information in a way that’s more explicit than tacit.  We should be able to trust that the handshakes we make and the relationships we enter into protect us in a seamless fashion.  And this shouldn’t be something that’s done simply because a regulatory oversight agency makes them do it but rather because it’s the right way to manage their relationships.

How is it that my painting contractor understands the value of full disclosure and extending trust to every facet of his business relationships but the Ivy League-ish educated leaders of America don’t?


April 8, 2011  10:45 AM

GRC is about to see its future.



Posted by: David Schneier
Audit, compliance, GLBA, governance, GRC, HIPAA, PCI, regulations, regulatory, Regulatory Compliance, risk, SOX, UCF

After nearly a quarter century of working in and around the corporate IT domain I have a grand total of four bold predictions I’ve made that stand out.  Three of them I nailed dead on and one never panned out, a fact that confounds me to this day.

The very first prediction was that the Iomega Zip Drive was going to accelerate the push into portable mass storage devices.  For about two years it blazed the trail soon followed by others but I knew the first time I laid eyes on the device I was looking at the future.

The second prediction was that Borland was going to be bought by either Microsoft or IBM.  They had launched their new Delphi development software, which was blindingly fast and easy to use and clearly set them apart from the competition in the client-server domain.  For reasons still unknown it never happened, and so while I was wrong I still think I read things correctly (it’s my ego, it won’t let me be wrong for too long).

The third prediction changed my career direction.  As Y2K was nearing I outlined a concept where companies could leverage all the repositories they developed and maintained to ensure a smooth transition into the new millennium and convert it into an ongoing management tool.  It was a discipline that eventually matured into what we now call portfolio management.  While I wasn’t in a position to pursue my theory I knew I was onto something and as it turned out I was right.  Why this prediction changed my career is because it gave me the confidence to both trust my instincts and pursue new ideas even when no one else thought it would work.

Which leads me to my fourth prediction.  Back in 2002, while with MetLife, I was put in charge of a bizarre project that came to be referred to as “Server Consolidation”.  After working with a vendor not of my choosing for six months and with nothing to show for my time, I discovered VMware about ten minutes after they went public and knew this was what the company needed.  I immediately brought it to my boss’s attention and instead of trusting me to make us all look brilliant I was admonished for not doing what I was told, and VMware had to wait another five years before the company embraced the technology.  But while it indirectly cost me my job (I was laid off six months later) I knew I was right and still believe it was worth taking the risk.

My instincts are screaming at me again and so allow me to share my fifth bold prediction.

My readers know that I’m a huge believer in GRC as a concept.  I write about it at least quarterly, often monthly, and track its progress closely.  I’ve participated in several related projects and constantly try to insinuate myself into newly emerging GRC-based initiatives.  The idea that each of the three core disciplines break out of their silos and work together is just flat out the right approach.  But that’s not the prediction.

Almost all GRC-related activity now is driven by regulatory and/or industry compliance requirements.  While most companies would publicly reject that statement and insist that their approach is based on risks that they identify and manage, the truth is most of those risks are already being targeted by one of the many compliance requirements they operate under and need to comply with.  And after nearly a decade of dealing with one new set of requirements after another, quite literally every company I’ve encountered has multiple frameworks and related initiatives to ensure compliance.  It’s resulted in massive duplication of effort and wasted time, money and bandwidth.  And because those same companies can barely keep up with supporting these activities there’s little chance they’ll ever find a way to reorganize and consolidate their efforts so that they can reuse steps to satisfy multiple requirements.

And so here comes the prediction: Network Frontiers’ Unified Compliance Framework will become to GRC what COBIT became to SOX.

For those of you who aren’t familiar with the UCF, it’s a series of documents that basically maps every single regulation, requirement and framework known to man (including, coincidentally, COBIT) and reveals the many points of intersection that exist but are almost impossible to identify while on the ground.  While there’s more to their library than just the mapping, that’s really where their bread gets buttered.  I first discovered the UCF in 2009 while working on a governance project and have been a fan ever since, continuing to follow their progress and trying to spread the word about what they’re doing.

Here’s what they’re doing: They examine every regulation and requirement and map them to a set of generic control activities so that they can identify where one activity satisfies multiple requirements.  They follow a fairly extensive process in doing so and all of their work is vetted through legal review to ensure they’re not overreaching.  And they’re constantly updating the framework to make sure that as existing regulations change and newer ones emerge the UCF captures it.  Considering the accelerated pace at which regulations are being enacted these days, that’s no small task.  The way the framework is leveraged is by finding the appropriate control activity that matches what you’re working on and reading across the line (it’s delivered in spreadsheet format) to find out which regulations or requirements it satisfies.  So if you’re reviewing application access in support of SOX it’s possible that same test would also satisfy GLBA requirements.  Imagine how much time and effort can be reclaimed if your GRC program were whittled down to testing a control only once and using it many times.  Also imagine how that might look to senior management.
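
To illustrate just the “test once, use many times” idea, here’s a minimal sketch of the kind of lookup a unified mapping enables. The control names and regulation mappings below are illustrative assumptions on my part, not actual UCF content.

# Illustrative control-to-requirement mapping: given the control tests you've
# already completed, report which regulations each test counts toward instead
# of re-testing the same control once per regulation.
CONTROL_MAP = {
    "Review application access rights quarterly": ["SOX", "GLBA", "PCI DSS"],
    "Encrypt backup media in transit": ["GLBA", "PCI DSS", "HIPAA"],
    "Maintain a visitor log for the data center": ["PCI DSS"],
}

def requirements_satisfied(completed_tests):
    # Build a regulation -> list-of-controls view from the completed tests.
    coverage = {}
    for control in completed_tests:
        for regulation in CONTROL_MAP.get(control, []):
            coverage.setdefault(regulation, []).append(control)
    return coverage

tested = ["Review application access rights quarterly",
          "Encrypt backup media in transit"]
for regulation, controls in sorted(requirements_satisfied(tested).items()):
    print(f"{regulation}: covered by {len(controls)} control test(s)")

The real framework is orders of magnitude bigger and legally vetted, of course; the sketch only shows why reading “across the line” saves so much duplicated testing effort.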

So why am I making my bold prediction now?  Last week I learned that Network Frontiers is making their content more readily available in an online format, and for free.  This will allow a broader audience to begin accessing their impressive content without first having to get someone in their management food chain to approve its purchase.  I’ve tinkered with it a bit and while I still prefer the spreadsheet format (I’m a geeky kind of guy) I love knowing that someone can read this blog post, immediately sign up at their website and begin exploring.  By making it easier for the masses to access their content it will likely accelerate broader acceptance throughout the corporate world – and once that happens, once program offices start relying on the content provided, there will be no turning back.

I realize that GRC is way more than testing controls, but consider that the UCF will also allow a company to identify where risk assessments, policies, procedures and programs hit multiple targets as well.  It truly allows for economies of scale to be realized in ways that were just never as easy to pursue in the past.  While the framework doesn’t tell you how to build or manage a GRC initiative it will become one of its primary tools, I’m certain of it.  I’ve pointed several people in the direction of the UCF over these past two years and almost to a person their initial reaction is “wow”.  They all immediately saw its value and started considering how best to exploit its offerings.  And until I meet someone who, upon viewing the framework, shrugs their shoulders and says something along the lines of “I don’t get it”, you’ll find me standing behind my prediction.


March 25, 2011  2:48 PM

A Hard Lesson Learned in Japan’s Disaster



Posted by: David Schneier
business continuity, business continuity plan, business continuity planning, disaster, disaster recovery, FFIEC, GLBA, NCUA, regulations, regulatory, Regulatory Compliance, Security

There will be no shortage of industry articles and analysis that will emerge from the horrific events in Japan over these past few weeks, that’s for certain.  This is arguably the most significant event to hit a major regional economy since World War II and it’s important to learn as many lessons from this tragedy as is possible.  My family are fans of the television show “Seconds from Disaster” and one thing it strives to illuminate is that by understanding what went wrong it’s often possible to make sure it won’t happen again.

Japan’s tragedy will serve as a fertile source for both proving and disproving the myriad business continuity and disaster recovery techniques being used around the world today.  The most prepared and best trained companies will very likely have fared about as well as could be expected, while those who weren’t prepared, those who had either partially baked plans or no plans at all, will be lucky to survive in any measurable way.  And it’s hard to imagine that most companies didn’t have plans to deal with earthquakes and tsunamis because they’re credible and consistent threats in the region.  But after a quarter century in corporate life, little more than half those years focused on audit and compliance, I’m no longer surprised by anything I encounter.

However, there was one story to emerge from Japan this week that I found to be quite shocking.  It was about how a bank’s vault came open during the series of events and someone stole forty million yen (about $500k USD).  It happened in the Miyagi Prefecture town of Kesennuma, and police said that between the wave’s power and the ensuing power outages, the vault came open.  What with all the flooding and chaos it took more than a week for someone to get back into the building and discover what had happened.

For many the story seemed plausible if not mildly amusing, because who wouldn’t love to wander into a bank and be able to scoop up all the cash floating around?  And because in this particular situation no one died or was hurt as a result, it’s benign enough to be more entertaining than tragic.  It sort of reminded me of a scene in the movie “Groundhog Day” where Bill Murray’s character figured out the perfect timing to be able to steal a bag of cash out of the back of an armored truck.

But I sort of have a problem with this story because I don’t think it happened the way it’s being portrayed.  My very first thought upon reading the details was that either someone left the vault door open as they were fleeing the bank or someone who knows a thing or two about how to open a vault went back in after the fact and exploited the situation to their advantage.  The idea that a vault door simply flew open due to what was really a massive flood at that point just doesn’t hold up under scrutiny.

Have you ever actually seen what a door on a bank vault looks like?  I have, and I’ve probably seen three dozen or more since I started working in the banking sector, and I can’t think of how any one of them, if closed properly, would ever just come open due to rushing water over a relatively short period of time.  First of all, they’re all seated within a metal frame, so for the rods or pistons that create the seal to come undone the metal itself would need to have been bent or twisted.  Second, they weigh a ton (not as much of an exaggeration as you might think).  Even the weakest vaults I’ve encountered have doors with some serious density to them that would not likely bend under most natural forces.  I would sooner believe that the walls the door and its frame were attached to failed than believe that the door simply “flew open”.

If I had to put my most skeptical mindset to use I would venture a guess that the person responsible for making sure the vault was properly closed before safely exiting the building rushed through the procedure, didn’t properly lock the vault and in their heightened state of panic just didn’t think about it.  While that’s the most likely scenario, the second most likely version is that someone who knows how to open the vault door, and who knew after a day or so that no one would be concerned with theft while there were still lives to save, made their way into the crippled building with its security systems down, manually opened the door and had at it.  But under either scenario it’s almost entirely likely that the person(s) who stole the money had an idea about what to do and took advantage of the situation.  I mean, they obviously entered the bank after the disasters struck and they weren’t likely looking for survivors if they were of a mindset to grab what had to be a sizable physical haul.

And the thing is that there’s no viable lesson to be learned from a story such as this.  I’m certain the bank had a procedure in place that specified how all cash drawers were to be placed in the vault and that the vault itself should be locked upon exiting during a disaster.  While in certain physical disaster scenarios it’s possible to post an individual to monitor the facility during and after the event, this wasn’t one of those times as everyone needed to flee the area.  And having someone come back the next day to keep an eye on things was probably the last thing anyone associated with the bank was concerned with (and rightfully so) as they had lives to save and keep safe.

So no usable lesson to learn and probably no way to ever find out what really happened.  For my money I hope they find the people behind this because it makes me angry to think that while so many people struggled to search for survivors or to recover bodies there were people looking to profit from the situation.

And if there’s anything for the BCP community to glean from this story it’s that no plan can truly account for every possible scenario.  It’s a hard lesson to learn but perhaps one that serves a purpose if for no other reason than to underscore the need for adequate insurance coverage.

