My practice recently wrapped up an engagement in which we conducted a tabletop test of a client’s business continuity plan. As always with such exercises, it’s interesting to find out how much distance exists between what’s documented in an institution’s policy/program and how business is actually conducted. In this particular case, it turned out that the client’s responses tracked fairly closely with what was specified in their plan. But what was interesting was that most of their answers were extemporaneous; they barely referenced the plan itself and instead were relying on common sense. This raised the question: What’s the practical value of a policy or procedure if no one relies on it?
Everyone who participated in the test knew the nature of the exercise. Almost everyone had recently been involved in the rewrite of their current plan. In total there were approximately a dozen participants spread out over multiple test scenarios, and of all of them, only one showed up with a printed copy of the plan. In what can best be described as an open book exam, only one person thought to bring the book along.
It’s like when I’m conducting an ITGC audit and ask for the institution’s password policy in order to determine whether the applicable systems are configured in compliance with it. You’d think that would be about as basic a control as possible, right? You write down that the minimum password length is X and the reset frequency is Y and then you configure each applicable system accordingly. This may be the purest example of low-hanging fruit in the compliance domain, and yet you’d be amazed by how many times I find significant disconnects between what’s documented and what’s done. When you consider how much effort goes into first creating and then maintaining the broad library of documentation all financial institutions have, it’s sort of a breathtaking waste of time. It makes you think that for the most part, the only time anyone ever references anything is when someone external from the company asks for it.
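A disconnect check like that is trivial to automate. Here’s a minimal sketch in Python; the policy values, system names, and settings dictionaries are all hypothetical illustrations, not drawn from any real institution, and in practice the actual settings would be pulled from each applicable system.

```python
# Hypothetical documented policy values ("minimum length is X, reset frequency is Y").
DOCUMENTED_POLICY = {
    "min_length": 8,      # minimum password length
    "max_age_days": 90,   # forced reset frequency
}

def find_policy_gaps(system_name, actual_settings, policy=DOCUMENTED_POLICY):
    """Compare one system's actual configuration against the documented policy.

    `actual_settings` uses the same keys as the policy; in a real audit it
    would be harvested from each applicable system (directory, core, etc.).
    """
    gaps = []
    if actual_settings["min_length"] < policy["min_length"]:
        gaps.append(f"{system_name}: min length {actual_settings['min_length']} "
                    f"is below documented {policy['min_length']}")
    if actual_settings["max_age_days"] > policy["max_age_days"]:
        gaps.append(f"{system_name}: max age {actual_settings['max_age_days']} "
                    f"exceeds documented {policy['max_age_days']}")
    return gaps

# One compliant system, and one with exactly the kind of disconnect described above.
print(find_policy_gaps("core-banking", {"min_length": 8, "max_age_days": 90}))   # []
print(find_policy_gaps("teller-app", {"min_length": 6, "max_age_days": 365}))
```

Running the comparison for every in-scope system turns the “open book exam” into a repeatable test rather than a once-a-year surprise.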
This is why when conducting a risk assessment I always throw a question into each interview asking if the person knows what the related documented policy states. Do you know what the BCP directs you to do if you cannot safely access your office location? What do you do according to your Red Flags program if you receive a suspicious phone call about a customer account? If someone tries to access a secured area of your facility, what should you do and who should you contact according to your incident response plan? Care to guess how many times the reply is somewhere along the lines of “I have no clue”? But every bank and credit union has these artifacts in place; why is it that no one either knows to use them or knows that they’re even there?
If a policy exists on the intranet but no one knows that it’s there, or how or when to use it, does it really serve a purpose? And if a policy exists on the intranet but no one ever tests to measure its effectiveness, do they need to have it at all? Until we as an industry find a more reliable method of assessing the viability of an institution’s documentation and connecting it to actual activities, we’re falling far short of realizing its true potential. And in a time of unprecedented financial stress, can anyone really afford to waste even a single dollar on something for nothing?
I was in the midst of writing my weekly blog post focusing on threadbare compliance efforts when I was distracted by news of a potential terrorist incident. As you likely know by now, it appears that Al-Qaeda was either attempting to send explosive devices onto airplanes or was conducting a dry run to see if it would be able to do so at some point in the future. Either way, authorities reached the conclusion fairly quickly that something was definitely amiss and found packages containing explosive materials on two separate airplanes.
Honestly? Bombs on airplanes? How could this even be possible? Anyone who has traveled in the past decade knows all too well exactly how insane airport security has become. I’ve had nail files broken off of nail clippers, toiletries confiscated, water bottles thrown away and have had to empty the contents of my laptop bags so often I wouldn’t dare even attempt to count the number of times. But a bomb makes it through?
Sadly it’s the perfect example of why controls and their related activities aren’t nearly as effective as any of us would like to believe. They’re a starting point and not much else. Just like in any controlled environment, we try to identify as many risk factors as possible and then design controls to either manage or mitigate them. But risk factors continue to change, evolve, mature and move on. And those who would exploit them to their advantage understand this and seek out the opportunities created in that gap between when new risks emerge and when the world catches on to them.
It’s why compliance by itself is never enough. It’s why risk assessments are vital to the safety and soundness of your institution. You can’t manage what you can’t measure, and when it comes to risk factors the only way to measure them is via an assessment process. Ever wonder why just about every piece of banking guidance makes reference to your “most recently completed risk assessment”? And trust me, ignorance isn’t bliss; it’s a bloody nightmare when it comes to the financial domain.
While financial institutions don’t typically have to worry about bombs, they do need to understand threats presented by the ever shifting technology and business landscape. They need to monitor their employees’ activities and assess risks presented either by newly emerging business practices (e.g. mobile banking) or growing dependencies on existing ones (e.g. ACH). Waiting for your regulator to tell you what to do will definitely result in there being a gap that someone is poised to exploit. Are you OK with that? As a banking customer, I know I’m not.
I read a blog post last week from my friend Ed Moyle in which he discussed a story about how a professor at the University of North Carolina-Chapel Hill was demoted because a server used in her research project was hacked. A committee had concluded that the improperly configured server was the professor’s fault and that she should be held accountable. She was knocked down a rank and had her salary cut pretty much in half (this after the committee first recommended she be fired). The assignment of blame and the punishment that was levied is a story by itself. But this story has all kinds of other juicy angles associated with it.
The data on the server included mammogram results from across the state, patient information that was harvested without the patients’ knowledge and included their Social Security numbers (can someone say HIPAA breach?). The vulnerabilities on the server that allowed the breach had existed since 2006. The breach occurred sometime in 2007 but wasn’t discovered until 2009. Although the IT team could determine that a breach had occurred, they had no way of knowing if any information had actually been stolen.
So UNC didn’t know for at least three years that it had a vulnerable box plugged into the network and was in possession of illegally obtained information. It turns out the only thing UNC did know was who to blame. But in the end they got that wrong too.
There’s no worse precedent to set than to make business owners, regardless of the vertical, responsible for their own technology. They don’t know anything about ports, settings, patches or upgrades; they only know they sign on and use what they use. And because of economies of scale, it doesn’t ever really make sense for an individual department to hire its own resources. It’s why IT became a centralized resource decades ago and why it makes sense still today.
So why didn’t UNC’s IT department do its job? Why didn’t the group responsible for plugging servers into the network configure the machine properly? How did IT let the machine sit out there for not one, not two, but three years without detecting there was a problem? What sort of scanning tools do they use? Don’t they have antivirus or anti-malware software installed? I mean honestly, how did UNC’s IT people let this situation not only come into existence but also persist for so long?
I don’t always go out on a limb like this, but UNC is wrong for blaming anyone other than the IT staff responsible for configuring and securing the network. What UNC has right now is a scapegoat, which just seems silly for so esteemed an institution.
Oh and the university also justified its punitive actions by claiming that the data on the server was obtained improperly. UNC is right; it was. But what it failed to realize is that the HIPAA violation falls mostly on the shoulders of the doctors who provided that information. They’re the ones who assume the obligation of protecting their patients’ data and while the professor should have been more on top of that element, it wasn’t her primary obligation; it was the original caregivers’.
Really in the end what this whole mess boils down to is a great big bowl of wrong. Wrong person blamed, wrong handling of the server, and wrong message sent. Wrong, wrong, wrong!
Early last week I downloaded some fresh content covering vendor management. It turned out that the new information wasn’t really new; it was guidance that has been circulating in one form or another for years and tracks closely with guidance ripped from the pages of the Santa Fe Group/BITS Shared Assessment methodology and generally tied back to FFIEC guidance. It’s an approach that turns out to be a recipe for “boiling the ocean” – it makes the work seem too big and unwieldy for all but the largest organizations to tackle, and tends to scare the small and midsized institutions into a state of paralysis. But there’s more than one way to skin this particular cat and not enough practitioners bring that to the surface.
One of the files I downloaded reminded me of an exercise I participated in two years ago focused on vendor management. I was asked to develop a “how to” webinar on establishing a new program and created a PowerPoint deck that encapsulated the approach I’ve been using successfully for years. Much like everything I’ve had a hand in developing, I spent considerable time up front firming up what’s minimally required, what makes sense for the organization, and designing the various tasks so that they reflect the capabilities of the staff. Telling a community bank that it needs to conduct an on-site audit of its hosted platform provider or review a software vendor’s SDLC methodology is both irresponsible and unrealistic; community banks typically don’t have the staff or expertise to do so. And so my presentation didn’t attempt to boil the ocean but rather boiled vendor management down to something effective and manageable. The owner of the sponsoring website rejected my final draft because he felt it wasn’t detailed enough and would fall short of audience expectations. He wanted the Shared Assessment rehash; I had provided something simpler and more realistic that was much more likely to appeal to the audience. Unwilling to compromise my standards, I decided to separate from the project.
Popular vendor management rhetoric tends to inspire inertia for too many financial institutions. Some admit that they’re delaying pursuing vendor management activities for sizable periods of time (anywhere from six months to five years – no joking). Some claim they’re only managing their critical vendors and are using spreadsheets or hard-copy documentation to prove compliance (possible but unlikely). And a frighteningly high number simply defer making any plans or decisions at all because their examiners don’t pay it any attention. Let me run that last one by you again: Their examiners don’t actually examine their vendor management programs (you know, the ones that don’t exist).
So on one hand we have a group of regulatory industry leaders shouting from the rooftops that third-party oversight is critical and needs to address a suffocating amount of information, and on the other hand no one really seems to care if anything is being done. Anyone else see the problem with all of this?
As with all compliance initiatives, only your organization can determine what makes sense. Almost all FFIEC guidance specifies that your program needs to take into consideration the size and complexity of your institution. So what might work for Citigroup or Bank of America would never make sense for 1st Community National Bank with its two branches and $100 million in assets. There will certainly be commonalities – you still need to risk rate each vendor, you still need to perform a periodic review – but the depth and breadth of the program will vary wildly. However, one thing is certain: You have a fiduciary responsibility to protect your customers’ personal data and that extends to any business relationship you maintain in which it’s exposed. Doing nothing isn’t an option, and neither is doing too little, not just because it’s the law but because it’s the right thing to do.
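Those commonalities can be made concrete without boiling the ocean. Here’s a minimal sketch of a vendor risk-rating function; the criteria, tiers, and review cadences in the comments are illustrative assumptions on my part, not regulatory requirements, and a real program would tune all of them to the institution’s size and complexity.

```python
def risk_rate_vendor(handles_customer_data, critical_to_operations, has_system_access):
    """Assign a simple risk tier based on three yes/no criteria.

    The criteria and thresholds here are illustrative; each institution
    defines its own based on its size and complexity.
    """
    score = sum([handles_customer_data, critical_to_operations, has_system_access])
    if handles_customer_data or score >= 2:
        return "high"      # e.g. annual review plus due-diligence refresh
    if score == 1:
        return "moderate"  # e.g. review every 18 to 24 months
    return "low"           # e.g. confirm the relationship at contract renewal

# A core processor touches customer data: high risk, no further debate needed.
print(risk_rate_vendor(True, True, True))    # high
# The landscaping company: low risk, minimal oversight.
print(risk_rate_vendor(False, False, False)) # low
```

Even a simple tiering like this, recorded consistently, gives a two-branch bank a defensible answer to “how do you risk rate your vendors?” without a Shared Assessment-sized apparatus.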
The industry pundits are right that the threats in conducting business with third-party vendors are real and increasing every day. Where they go wrong is in not educating you on the many options available to manage those threats. One size does not fit all in the regulatory space and that’s a concept you need to hear more frequently.
But trust me on this: Doing nothing is not an option. Waiting until the examiners force the issue is not a strategy, and being caught without a viable program in place after your institution has been involved in a breach is a train wreck waiting to happen. Don’t be scared off by what you don’t know or can’t manage; start simple and move from there. No matter what, do something and do it now!
Growing up I was a huge fan of the sitcom “The Odd Couple.” Some of my favorite catch phrases have in some part been influenced by lines of dialogue that I memorized. One in particular serves as the best pure definition for a phenomenon I encounter frequently enough in my audit/compliance career: “What you don’t know can hurt you a whole lot.” I can still hear the line being uttered and remember laughing because even as a child I thought the phrase that inspired the line, “What you don’t know can’t hurt you,” was pretty dumb. All these years later, I’ve collected an impressive body of evidence to support my opinion.
So when the FDIC recently issued new guidance titled “Guidance on Mitigating Risk Posed by Information Stored on Photocopiers, Fax Machines and Printers” (FIL-56-2010), I was reminded once again of this favorite phrase of mine.
It’s important to explain that my first foray into audit allowed me to work with arguably the best auditor I’ve ever met. I was taught to question everything and assume it was in scope until proven otherwise, and I was encouraged to trust and follow my instincts. And so fairly early in my regulatory career, when I first started to search out the myriad threats to personally identifiable information (PII), all sorts of things landed on my radar screen. Accordingly, for nearly a decade I’ve been advising clients on the threats posed by what are typically thought of as secondary devices or peripherals. Financial institutions will spend all sorts of crazy money to protect servers and storage devices but completely ignore multifunction devices that copy, scan, fax and email just about any document imaginable and often retain those images in memory. They’ll have surprise desktop audits where someone will spot check work spaces to see if PII has been properly secured but will walk past the copier room time and again and ignore what lies in the output trays. Our practice has long advocated for related control activities to remove this remarkable blind spot, but year over year we return to our clients and find that little has changed.
And so the question needs to be asked: Why?
The answer is very likely found in the fact that no known breaches or cases of identity theft have ever been tied back to information gleaned from a peripheral device. We’ll read about huge PCI-related disasters where millions of credit card numbers were potentially stolen. We’ll see stories on the news about how a laptop has gone missing with hundreds of thousands of accounts containing Social Security numbers. We’ll read about how criminals are piggy-backing card reader devices on legitimate ATMs to grab your credit and bank card data. But no one can ever recall hearing about any identity theft cases where the information involved was found to be harvested from just such a device. And odds are you’re never going to.
The amount of information to be gleaned from peripheral devices is relatively small. All but a few of them can only retain a modest amount of data, and so you’re not going to find much more than a few dozen opportunities per device. If someone within an office is aware of this treasure trove of information and is skimming it off and either using it or selling it, how would you know? How would you be able to develop the trend? (Remember that very few people file police reports when they discover that their identity has been stolen or their accounts accessed.) So there isn’t a whole lot of investigating going on. And if someone at either the equipment reseller or company warehouse is collecting the information and using it for illegal purposes, how would anyone know? We’re not talking about thousands of accounts or individuals from any one company or institution; it’s more like a patchwork collection. You would only be able to find a trend if you went looking for it, and you would only go looking for it if you had a credible reason to do so.
But here’s the thing; I’ve thought about this information being readily available and difficult to trace and I’m an honest man and one of the good guys. Don’t you think the bad guys have this figured out as well?
So it will be interesting to see how or if the banking industry reacts to this bulletin. It’s been my experience that these things go largely unheeded until an examiner applies a little pressure. I suppose way too many financial institutions are happy enough to apply the “what you don’t know can’t hurt you” logic. Not me.
I stumbled upon an old nemesis of mine recently and the bad taste it left in my mouth continues to offend my senses.
In an industry where there are standards that define how standards should be written and websites dedicated to dissecting each standard so that everyone can understand what the definition of “the” is, I’m often amazed by the broad range of solutions employed to achieve compliance. You’d think that from one solution to the next there would be obvious commonalities that would immediately identify what you’re looking at. But that’s generally not the case. I’ve seen so many flavors of SOX solutions you’d be amazed (at one client they have two very different and unconnected systems installed, one for IT and one for the business). At least in general terms, all of the systems I’ve encountered more or less wind up at the same logical meeting point, which is where a determination is offered as to whether or not the necessary controls are in place and functioning as expected.
What drives me crazy, though, isn’t the disparity in compliance solutions but rather in the broader interpretation of compliance. It’s somewhat binary; you’re either compliant or you’re not. Regardless of the method you use to attain and maintain it, once you reach the point where you’re compliant with a regulation you can stop adding to the framework. With SOX, I remember how the very first challenge that every company faced was in figuring out scope. The loosely defined guidance specified that only those controls directly connected to the most significant financial processes and applications needed to be tested. It was a constant struggle to get this done and one in which a clear dividing line eventually was drawn: those who used common-sense and those who didn’t.
It’s a dynamic I’ve encountered many times over my career that is regulation-agnostic. At what point do you stop implementing controls because all significant risks have been properly addressed? The common-sense side of the debate will identify areas that are at greatest risk either because of design or content; the other side, the literal side, will identify everything that’s even remotely related and pull it into scope. I’m a common-sense kind of practitioner. I prefer compact, efficient and relevant compliance frameworks. I hate “made” work; I hate doing something simply because someone said I had to even though there’s no apparent value or need for it. I’m of the “work smarter, not harder” mindset and that shows itself in all my work.
However, I encounter all too often the other side of the debate: Those controls and related activities inspired by people who will test everything and anything even remotely in-scope simply because it’s there. Some of my best and most significant regulatory work is found not in framework design but rather in my redesign. I have a bit of a reputation for coming into an institution and effectively ripping out all the waste, the redundancies and overlaps and whittling things down to a size that makes more sense for the size of the operation and then selling it to the examiner/auditor. I take great satisfaction in this sort of work because I’m saving my clients time and effort which translates into money at some point. One of the nicest compliments I can be paid is to be told that the most recent round of testing took only a fraction of the time and the auditor/examiner was happy with the results.
And so it’s maddening to me to find out that a solution I’ve worked on, one that was working beautifully, running lean and efficient, has been tinkered with because a new set of people has been brought in and is reexamining things in the absence of a credible reason to do so. That’s what happened recently and that’s what has left a bad taste in my mouth.
It wasn’t a solution of my design; I was simply supporting it to conduct the current year’s testing. It had been created previously by a team of people who had intimate knowledge of what was necessary to prove compliance with the regulation that required it, was well thought out and extended itself just far enough without going too far. It was one of those rare instances where I didn’t really see a need to make changes myself and instead only looked for opportunities to consolidate testing with related compliance activities. Along the way, I validated the work by bouncing things off associates of mine who are responsible for testing similar frameworks to make sure it made sense to them. They confirmed what I’d suspected: that the framework was solid, the scope appropriate and the degree of testing more than sufficient. So how is it that a year later that’s no longer true?
The justification appears to be that there was residual risk that wasn’t originally in-scope and should have been. Knowing what I do about the client’s infrastructure, the regulation requiring the work, and the way things happen in the industry, it just doesn’t hold up under scrutiny. Nothing significant changed in their environment; there was nothing from the industry side of things driving a reevaluation and in two-plus years of having their framework there wasn’t a single reported incident that would have made them non-compliant. I’m sure if you get all those involved into a room they’ll point to some relatively benign residual risk factors and pitch a really scary “what if” scenario. Because even in a sufficiently secured and controlled environment there remains risk. But someone needs to challenge how likely that risk is of ever reaching the point where it could actually cause harm to the organization. If they don’t, management will feel the need to take action and fund work that probably doesn’t need to be done.
Remember that despite the best efforts associated with SOX, most of the final financial numbers are crunched in spreadsheets where just about anyone can fudge or transpose a number without detection (or maybe even by accident). Regardless of all the work undertaken to beef up security and application controls, map all manner of business processes and document and test the heck out of all of it, the risk still remains that someone can step in at the very last minute and pretty much do as they wish.
I just think that there’s a better way to spend time and money, all the more so in this economy. It’s an old adage but it’s still very true today: “If it ain’t broke don’t fix it.”
A few months back, the big blinking light in the middle of the information security radar was a story about how someone had harvested all sorts of personal information from Facebook accounts and made the resulting files available for download. The file (actually it was a series of files) offered varying degrees of detail on nearly 100 million user accounts and it rocked the security industry for what turned out to be about five minutes. I downloaded the information out of curiosity and spent an hour or so sifting through the massive collection. I came away with a sense that the story was more interesting in the abstract; once you started really examining the risks introduced by the breach, it amounted to much ado about nothing.
I’ve posted before about such things: about how you need to exercise good judgment when online and when sharing potentially sensitive information (always avoid those Facebook “about me” quizzes). While something like the Facebook breach might make it a little easier for the bad guys, the truth is the sheer volume likely rendered the information useless. I couldn’t find a Social Security number, bank account number or anything else remotely resembling a true digital prize. And I looked, believe me, I looked. I should qualify what that means; I have a well-earned reputation for being able to develop fairly extensive dossiers on people by using a variety of techniques, all based upon readily accessible online resources. It’s sort of a hobby of mine and I find new and better ways all the time to improve my techniques. But other than using the skimmed Facebook data for marketing activities, I wouldn’t think it to be too big of a deal.
However, if you’re looking for a really neat way to access social network sites in such a way that you get to work smarter, not harder, when up to no good, there are far more effective methods available. My newest favorite threat to all of our privacy and sensitive information is a recent add-on to Outlook that allows me to instantly access Facebook and LinkedIn information directly connected to an email account. The way it works is that you send me an email, the Outlook add-on then scans Facebook and LinkedIn for activity linked to that email account and displays it all nice and neat in a sub-window below the message. I installed the add-on on Wednesday out of curiosity, expecting little if anything useful. The first email I received after the fact was from an associate in the banking industry. This person must use a business email for Facebook and LinkedIn because the aforementioned sub-window quickly filled with nearly a dozen different bits of information from the two sites. I can view family photos, a scheduled event detailing an upcoming vacation and several LinkedIn updates including new connections. That by itself is scary enough, but what makes it worse for me is that I’m not connected to this person on either site. I was able to see all of this information without even wanting to. In one neat little bundle, I have the person’s email address, access to personal information, a clear indication of when they plan to be away from the office, and a simple way to track the individual’s whereabouts. Oddly enough, if I searched either site directly I couldn’t see much of the same information, but the Microsoft utility apparently removes such obstacles and gets me to where I want to be.
What would you rather have: a monstrous database of relatively benign Facebook user information, or an email containing all forms of PII combined with the person’s title and position at a bank or credit union? With the latter I know who they are and whether they’re likely to have broad access capabilities within their institution, information that could let me reset passwords, and there’s close to no possible way to trace any of it back to me.
As if this isn’t enough to cause all you security-minded folks to lose sleep, there’s one more new wrinkle to worry about. Facebook now has its new “Places” functionality working, in which mobile users can indicate where they are at a given point in time. It reminded me of the TripIt utility that people started using on LinkedIn last year. Essentially, both tools allow you to provide specific information to everyone you’re connected to and many of the people they’re connected to, letting them know when you’re out of the office or away from home. Think about it: You go to the beach for the day and update your location on Facebook. You’re thinking that it’s no big deal if your friends and family know where you are and you may be right. But on the day I tried it out, I tagged a family member who was with me. He has nearly 600 Facebook friends, of which he knows less than a third. So 400 relative strangers knew that not only was he away from home but so was his family. Any one of those connections instantly knew there was a reasonable chance that if they broke into our house they could get in and out with little chance of detection. For a society where people have their mail collected daily and their newspaper service suspended when away on vacations to avoid the appearance that the house is empty, this is a stunning turn of events. And you can’t stop the kids from using the newest and latest capabilities, so now we have potentially tens of millions of people advertising when they’re away from home and for how long.
It’s amazing, really, how we react to a threat framed for us by the media but almost completely miss out on another that’s way more likely to hurt us. The first thing I would do as a CISO would be to have a script written that checked every corporate email account against all popular social network sites to see if anyone is showing up. The second thing I would do (and already advise clients to do) is to update all of my related policies and training curriculum to address mixing business with pleasure: Never use your corporate email, never advertise travel plans, and never disclose anything even remotely resembling sensitive data on any of the social networking sites. And I would incorporate activities that check to see if these new policies are being followed. Remember, the right way to manage this new evolutionary twist in technology isn’t to prevent it but rather to manage it appropriately.
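The first check described above can be sketched in a few lines. The `found_on_social_network` lookup below is deliberately a stub, since how you query each site varies (and any real version must respect each site’s terms of service); the site list, email addresses, and function names are all my own illustrative assumptions.

```python
# Sketch of the CISO check described above: flag corporate email addresses
# that surface on social network sites so they can be followed up on.
SOCIAL_SITES = ["facebook.com", "linkedin.com"]

def found_on_social_network(email, site):
    """Placeholder lookup; replace with a real, terms-of-service-compliant
    query per site. This stub pretends one known account exists."""
    return email == "jsmith@examplebank.com" and site == "linkedin.com"

def audit_corporate_emails(addresses):
    """Return {address: [sites where it appears]} for policy follow-up."""
    hits = {}
    for addr in addresses:
        sites = [s for s in SOCIAL_SITES if found_on_social_network(addr, s)]
        if sites:
            hits[addr] = sites
    return hits

print(audit_corporate_emails(["jsmith@examplebank.com", "cfo@examplebank.com"]))
# → {'jsmith@examplebank.com': ['linkedin.com']}
```

Run on a schedule against the full corporate address book, a check like this turns the “never use your corporate email” policy into something you can actually verify rather than merely proclaim.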
Oh and just in case anyone needs to be reminded of the fundamental rule of security, make sure out-of-office replies are restricted to internal communications only. I can’t believe how many of them I still receive, and with this new Outlook capability it’s just a recipe for disaster.
Summer at home officially ended this morning as my children returned to school. Beyond the fact that I consider it cruel and inhuman punishment to resume academic activities before Labor Day, it also serves as a wake-up call that we’re well past mid-year on the traditional calendar and eyeing the home stretch for 2010; before we know it we’ll be moving into Q4. So why is that on my mind today? Because I’m mindful of all those institutions that have yet to address their obligations specific to GLBA and NCUA regulations.
This is something of an annual post that I’ve been issuing over the years where I bang the proverbial spoon on the proverbial pot trying to warn everyone that there’s work to be done. I’m not talking about running through the paces to prepare for an exam but rather having work done that ensures the protection of your customer/member information. I used to work for a company whose primary sales approach was to tell current and prospective clients that they had to conduct all manner of tests and assessments because of the regulations. The firm’s angle was that in order to be compliant you “must do this work,” which not coincidentally dovetailed with services we offered.
I always thought that the “because I said so” logic was flawed. My thinking then and now was that we should educate clients on why they need regular audits and assessments: how scheduling the work at proper intervals and coordinating activities so that they flow naturally into one another greatly reduces their risk of exposure and improves their reputation as a bank or credit union that can be trusted. But what if an institution’s basic strategy is to wait until an exam is a week away and then pull long hours and work all weekend to update what needs updating?
The regulatory compliance trinity is fairly simple and straightforward at its highest level: You document your controls and related activities (the infamous policies and procedures collection), periodically assess your risk factors to determine if you need to add or modify those controls and related activities, and then test the controls to determine if they’re in place and effective. GLBA at its core is actually that simple and really quite effective. It’s GRC 101 and there’s no doubt that by complying with its basic tenets you’re doing the right thing to protect your account holders.
And yet you’d be surprised by how many financial institutions routinely reach this point in the calendar year having deferred scheduling much (if not all) of their compliance work. You can’t go an entire year without having conducted both an audit and a risk assessment. No business infrastructure goes through a 12-month period without something significant changing, without new risk factors emerging that need to be managed. By stretching your compliance work to align with your exam cycle, you’re opening up a huge gap through which a truckload of problems is likely going to drive. Depending on the size and complexity of your institution, you can arrange your compliance program so that not everything needs to occur annually. I’ve worked with clients whose program called for the risk assessment and audit to occur in alternate years, with only the ongoing programs (e.g. vendor management, penetration testing, business continuity planning) needing to be addressed and validated annually. And while it’s true that you don’t need to shoehorn everything into a 12-month period, you do need a clearly defined plan for how your institution complies with the various regulations. You simply can’t get two-thirds of the way through the year without having conducted or scheduled any manner of testing or assessments.
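An alternating-year program like the one I just described is easy to pin down on paper. Here’s a small sketch of such a calendar (the activity names and the even/odd split are illustrative assumptions, not a prescription):

```python
# Sketch of an alternating-year compliance calendar: risk assessment and
# audit alternate years, while ongoing programs are validated every year.
# The specific split between even and odd years is arbitrary/illustrative.

ANNUAL = {"vendor management", "penetration testing", "business continuity planning"}
EVEN_YEARS = {"risk assessment"}
ODD_YEARS = {"audit"}

def activities_due(year):
    """Return the set of compliance activities an institution on this
    alternating schedule should plan for a given calendar year."""
    due = set(ANNUAL)
    due |= EVEN_YEARS if year % 2 == 0 else ODD_YEARS
    return due
```

The real value of writing the schedule down, in whatever form, is that nothing silently falls off the calendar while you wait for the next exam notice.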
We’re about to turn another page on the calendar and enter September. While you may count that as four months to year-end and think there’s plenty of time to get things done, you need to consider that it’s more like three months. Between the major holidays, the minor holidays, and people taking time off as the year winds down, you’re going to find it hard to secure resources to conduct the work and even harder to have them complete tasks while people are constantly out of the office. So with three effective months of working time left in the year, you need to move quickly to come up with a plan. What are you committed to accomplishing by year end, and how are you going to succeed? Remember, there’s no more obvious red flag to an examiner than finding a pile of documentation where the ink is still wet or the update/completion dates are suspiciously recent.
And don’t come back at me with the argument that nowhere in the GLBA/NCUA regulations does it clearly state that you need to conduct an audit, a risk assessment, or any manner of security-based testing. As I’ve stated here in my blog several times, FFIEC guidance clearly indicates a need to have a recently conducted risk assessment available. FFIEC guidance also clearly specifies the need to conduct an audit at a frequency appropriate for the size and complexity of an institution. All you need to do is look at the Master Table of Contents in the FFIEC examination handbooks to see which parts of your infrastructure need to be tested periodically (why do you think the agency authored the handbooks?). Considering that both the FDIC and NCUA rely on FFIEC guidance to support their examination process, there’s little doubt (actually, no doubt) that’s where you need to look to figure out what work to schedule.
Three months to go, what’s your plan?
Earlier this month, I blogged about my concerns regarding a drop-off in information security oversight by banking regulators. In this age of safety and soundness first, everything else comes second, if it comes at all. It’s more than a week later and I’m not feeling any better about things; as a matter of fact, I’m feeling measurably worse.
I participated in several conversations in which a recurring theme was the challenges presented by a surge in merger and acquisition activity. It’s the other side of the banking crisis that doesn’t get as much press as it probably should. Think about how this plays out: An institution acquires the assets of another institution and, in a remarkably short period of time, has to absorb that information into its own infrastructure so that it can properly service the accounts. In a normal merger, this is an activity that would be planned out over several months, with all forms of testing involved before the official cut-over. But we’re in an age where on Friday your account belongs to Bank A but on Monday it’s being managed by Bank B. How much time is allowed to cut things over between the two separate infrastructures? And when you consider that it’s rare for the two institutions involved to share a common banking platform, how do you seamlessly and accurately convert the customer data?
Back in my infrastructure days, I recall all too well the various activities that were involved with figuring, configuring and reconfiguring elements from disparate systems in order to determine the best way to bring them together. There were delimited files extracted, spreadsheets created and all manner of repositories generated to analyze the data. Back then we didn’t have CD/DVD burners as standard equipment to easily offload full repositories (we were handcuffed by 3.5″ floppies with a max of just over one megabyte of storage) or USB storage devices attached to our key chains. Laptops weren’t yet pervasive, and it just wasn’t as easy to walk entire databases of customer data out the door without detection. Circa 2010, it’s just so darn easy to take huge digital piles of sensitive information outside of the secured infrastructure. Be it the result of overworked IT workers trying to meet deadlines, careless employees not realizing the sensitivity of the data on their laptops, or people with actual ill intent, it’s rather simple for non-public personally identifiable information (NPPI) to find its way into the wrong hands. And with the remarkable spike in all the merging and acquiring going on, the likelihood of a breach or data theft skyrockets.
And that’s only one part of the risk equation.
Every week, the industry publications are full of stories about cloud computing: the aggregation of multiple computing resources, of which you use only the slices you need. In this new age of mass storage and processing, you don’t build out an isolated subset of your infrastructure to handle specific processes; rather, you plug your process into the cloud and it simply uses what it needs. I remember back in the ’90s working for MetLife when they launched their first true e-commerce sites, and how the company struggled to find ways to monitor all of the components necessary to deliver secure content to its customers. There were typically a half-dozen handshakes required to process a request in either direction, and they all existed on different platforms running different software. It was impossible to accurately measure each transaction, estimate load and response time, and calculate capacity needs. At that time, I wasn’t yet much concerned with security, but that would’ve been equally impossible to manage. At least back then you could isolate each tier in the infrastructure and identify where the transaction was flowing. Now with the cloud, you don’t even have that degree of control. And when you consider that almost everyone I talk to about technology within the banking sector wants everything to run on the Web, even if it’s an application that requires only internal users, the risk factors increase exponentially.
So now I’m wondering how secure all this NPPI really is, with the constant rush to merge account information combined with corporate America’s push to move things onto the Web and into the cloud.
When I first started out in corporate IT well over 20 years ago, one of the managers had a sign hanging in his office that read “If you don’t have time to do it right, when will you have time to do it over?” Fast forward to 2010 and the same logic applies. The only difference is that this isn’t about application programming but rather about data loss, and once that cat’s out of the bag you can’t simply put it back in.
We were watching a baseball game the other night when one of Microsoft’s recent IE8 security commercials aired. It’s the one where a fictitious bank is set up and people off the street, deceived by its appearance, wind up turning over boatloads of personally identifiable information (PII) with little apparent concern. My son loves the commercial (in one bit, they ask a man whether he prefers boxers or briefs), and it occurred to me that my family finds it entertaining. Not so much for me. Quite frankly it sort of freaks me out, because I know that sort of thing happens every day for real (remember, I’m the guy who checks for hidden cameras over ATMs and tugs at the card reader to make sure it’s a permanent part of the machine).
But lately I’ve been wondering if it’s even the criminal element that presents the greatest threat to my PII. Based on news from the field, I worry that the banks themselves may be slipping just a bit in keeping up with their regulatory obligations regarding my privacy.
Our practice routinely calls on financial institutions with our services. We’ve spent an enormous amount of time and energy paring things down to what we believe are the most relevant areas, based on guidance from the oversight agencies and on practical experience. And so when we engage a current or prospective client in dialog, we’re typically cutting right to the chase in order to make the most efficient use of their time. We’ll hear a wide range of responses when asking how they’re managing a variety of key control activities (e.g. it’s managed internally, we use a software solution, our audit department does that), and for the most part it rings true. However, lately we’re being greeted with a noticeable uptick in one response in particular: “The examiners didn’t even look at that, so we’re not worrying about it right now.”
Not to belabor the point but as I’ve already mentioned we’re not offering exotic services. Quite literally everything we have to offer to our clients should make the short list of must-haves for any CISO or compliance officer. How can the examiners not cover any of these things?
To be fair, it’s typically not a reflection on ability but rather on available hours. I’ve blogged before that when things are missed, it’s almost always because the fieldwork only allows for so many hours; you start with the riskiest areas first and work your way down from there. So if an examiner needs 80 hours to cover the landscape but only has 40 hours to get it done, they have to focus where they think they most need to. But still, how do you not make sure that there’s a current business continuity plan in place, or check that the infrastructure has been tested recently to ensure there aren’t significant vulnerabilities present? Internally, we’ve been very forgiving of the entire examination process over the past year or so, because safety and soundness has really needed to be at the forefront of regulatory efforts. So we balance our concern about what’s being overlooked with an understanding that the examiners are likely doing the very best with what they have to work with. But still…
I was reminded recently that the FDIC budget for 2009 included a 30% increase in the number of examiners. At the time it was announced, I figured it was a move intended to ensure that compliance was being properly enforced across all areas during a very turbulent period in our banking history. However, nearly two years later, I wonder what’s happened. How can I reconcile an increase in the number of examiners with an apparent decrease in information security oversight?
If you think I’m exaggerating, consider that over the past decade the FDIC released three or more Financial Institution Letters (FILs) addressing information technology guidance every year, right up until mid-2009. Since then there have been no updates at all relating to IT or information security. After never going more than a few months without offering updated guidance over a 10-year period, they’ve had nothing new to publish in 14 months. How is that even possible?
On one hand, I’m hearing that examiners aren’t always looking at key compliance activities; on the other, I’m seeing an apparent drop-off in IT guidance from the chief banking oversight body. For someone like me, who worries about these things on both a personal and professional level, this is not good. When I watch that IE8 commercial I’m not laughing; I’m wondering whether anyone would even know if that sort of thing were going on for real right now.