Regulatory Reality


May 21, 2012  1:47 PM

Remote Deposit Capture is probably a very, very bad innovation.

David Schneier

Before I even get into the nitty-gritty of the post I have to point out that in the time it took me to choose the topic and start writing I'd already thought of three perfect ways to steal your money via remote deposit capture.  Seriously, this is a hugely bad idea that will lead to hundreds of millions (if not billions) in stolen funds before someone finally pulls the plug or figures out a far more secure way of doing this sort of thing.

Before you read any further, please fire up Netflix or hit up Redbox and rent “Catch Me If You Can,” the DiCaprio-Hanks movie about Frank Abagnale Jr., the infamous check forger.  The movie covers in sufficient detail how Mr. Abagnale figured out how to forge checks and stay one step ahead of the law for years.  Take notes, and then consider remote deposit capture and how it eliminates so many of the obstacles he had to devise workarounds for.

I’ve written in the past about how insane I think it is that we send unsecured documents through the mail containing all of our bank account information, including name and address, without so much as a second thought.  When you consider how pervasive ACH payments are these days (I pay at least a half-dozen of my monthly bills that way) I’m amazed that hasn’t become the newest criminal hot spot.  And now we’ve gone and made it that much easier to exploit this antiquated and poorly designed system of moving our money around.  You no longer even need to steal a person’s checkbook; you only need to make copies of their blank checks so that later on you can fill in the appropriate details and use remote capture to process them.  When you consider the amount of time it would take to even figure out what just happened, the thieves will be long gone.  First a person has to get their monthly statement and figure out that a rogue check was presented against their account (and if you keep the amount small enough that might never happen).  Then they’d need to contact the bank, which would have to investigate and pull up check images to try and verify the customer’s claim.  By the time that all happens it’s potentially been at least a month, plenty of time for the perpetrators to close the account where the funds were deposited and move on.  And with bank accounts being set up online all the time, you wouldn’t even have video footage or images of the people behind the theft.  And that’s only one possible way to use remote deposit capture to rig the system (I’ll keep the other ideas I have to myself lest this post become a self-fulfilling prophecy).

Seriously, if the banks introduced a new service where you could pay for purchases by simply sending a copy of your credit card, you’d all think it insane and no one would use it.  How is this any different?  If the stores and restaurants we frequented required front-and-back photocopies of your credit card for their records, you’d stop using your credit card.  But with checks it’s not so big a deal?

With regards to remote deposit capture: just because you can doesn’t mean you should.

April 29, 2012  7:43 PM

Internal Audit: Whose side are they on anyway?

David Schneier

My first encounter with an auditor was back in the mid-’90s while working as an application project manager for a Fortune 100 company.  The group responsible for change management was going through an audit of their process and one of the changes selected for review happened to belong to my team.  I remember the insane amount of activity that went into preparing for the audit, how every folder was pulled in advance of turning it over to the audit team and how every document was checked and double-checked to make sure everything that should have been done at the time was.  And when issues were identified that could be fixed, they were fixed; missing forms were completed, backdated and inserted into the folder, missing signatures were obtained, and by the time the auditors showed up everything looked perfect.  It all seemed such a waste of time to me because we never figured out why things weren’t done right the first time, the auditors seemed happy enough to check off that they received everything they expected, and in the end an enormous amount of work went into making sure nothing really happened.

That first experience has arguably tainted my opinion of the role played by internal audit for nearly twenty years.  Since that first encounter I’ve been audited a few more times, assisted clients in preparing for internal audits many times and have had hundreds of interactions, directly or indirectly, with a variety of companies’ internal audit functions.  And despite all of this experience, and having eventually become an auditor myself, I’m not sure I could present a credible argument as to where the process generates real value beyond maintaining appearances.

The first problem is that at most companies there’s an unhealthy fear of auditors.  There’s often real concern that if any major issues are uncovered, someone’s head will roll.  At the aforementioned Fortune 100 company it was widely believed that if your group was found to have a material finding (or anything remotely resembling one), the highest-ranking person in the group was doomed.  To their credit the company also had a mechanism in place so that if you discovered a problem before anyone else and self-reported it, you were allowed appropriate time to remediate.  But that wasn’t always effective because most application and business managers weren’t auditors and couldn’t always recognize when a control was either missing or failing, and so there was still an enormous amount of work and panic leading up to a scheduled audit.  I remember thinking that the company should remove the threat of termination and encourage both auditor and auditee to work openly and honestly together so that issues were surfaced, defined and repaired.  In the two decades since, I’ve worked with and for a few companies who believed they had this healthier dynamic in place between their internal audit department and its business and technology functions, but in the end it’s almost always the same problem: internal audit is viewed as an unforgiving and punishing agent, and no one ever wants them snooping around.

The second problem is that there’s a degree of incompetence found within many internal audit functions.  While conducting my first technical audit back in 1997 (my company was managing an outsourced audit plan) I identified a significant issue with the methodology used to make production changes in a certain database environment.  It left the DBA with virtually no clear or simple way to back out a change if it didn’t work; if a change failed, restoring things to the previous state would require bringing down production for several hours.  The first person to challenge my finding was the internal auditor who had audited the same platform for years and neither understood nor agreed with the finding.  It took me nearly an hour to educate him as to why the technical issue existed, prove that it did, and finally get him to agree on the associated risks.  He had worked there for years, had never had the chance to see how other companies managed similar infrastructures and was far more concerned with his authority and capabilities being challenged than with the fact that his company had a significant risk to repair.  In the time since, I’ve met many more people just like him: auditors who stay at one company for years, fall into bad habits and fail to keep their skills relevant.  They wind up relying too much on the Internet to try and update their knowledge base, lack the perspective of understanding how other companies are managing similar challenges, and are happy enough to bring out the same whipping stick and a feeling of empowerment to scare the daylights out of internal control owners while conducting their audits.  The result is poorly formed and often irrelevant findings that waste everyone’s time.  I wish I had a ten-dollar bill for every instance I knew of where something was being fixed because it was easier to appease the auditor than to convince them their finding was flawed or even wrong.
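
To make the finding concrete, here’s a minimal sketch of the control that was missing; every name, table and change here is hypothetical, not the client’s actual environment.  The idea is simply that no change deploys without a scripted backout:

```python
# Hypothetical sketch: every production change is paired with a scripted
# backout, and deployment is refused without one. SQLite stands in for
# the real database (DROP COLUMN needs SQLite 3.35+).
import sqlite3

CHANGES = {
    "add_customer_email": {
        "apply":   "ALTER TABLE customer ADD COLUMN email TEXT",
        "backout": "ALTER TABLE customer DROP COLUMN email",
    },
}

def deploy(conn: sqlite3.Connection, change_id: str) -> None:
    change = CHANGES[change_id]
    if not change.get("backout"):
        # The missing control: without this check, a failed change means
        # hours of downtime restoring the previous state.
        raise ValueError(f"{change_id}: no scripted backout; deploy blocked")
    conn.execute(change["apply"])
    conn.commit()

def backout(conn: sqlite3.Connection, change_id: str) -> None:
    conn.execute(CHANGES[change_id]["backout"])
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
deploy(conn, "add_customer_email")
backout(conn, "add_customer_email")  # clean reversal, no production outage
```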

Now I’m not saying all internal auditors are incompetent; they’re not.  I’ve met some brilliant and extremely effective internal auditors along the way.  And in those environments audits weren’t feared because there was a high degree of confidence that if an issue was identified it was something worth knowing about.  But in almost all of those cases the auditors involved had only been with their company for a few years, not decades.

The third problem is that audit needs to be seen as adding value, not creating unnecessary delays or work.  Practically speaking, internal audit is playing for the same team as the control owners whose processes they assess.  Their primary goal shouldn’t be to notch as many findings as possible on the board but rather to identify weaknesses and deficiencies so they can be remediated, helping to further harden the infrastructure and reduce risk.  I understand the need for the function to maintain independence and separation, but only so they can remain objective, not so they can operate as though they’re the ultimate authority on right and wrong and beyond reproach.  If they’re invited to participate early in a project and find issues, they should issue interim findings so that small problems don’t become bigger problems further down the project road.  If you wait for the post-implementation audit to document early-stage issues you’re not really helping anyone.  If they abuse being granted access to meetings and documentation long before the audit function is typically engaged, the only predictable outcome is that access will be denied until someone forces the issue.  And one more major issue I routinely find with internal audit: no matter how strong or weak a finding may be, no matter how poorly or strongly worded, no matter how relevant or irrelevant, they all too often defend it as though it’s gospel beyond reproach.  Why is that?  Why can’t the control owner question the finding, demand clarity or try to frame its relevance?  All auditors should feel an obligation to issue a final report that resonates with everyone involved as accurate and, hopefully, fair.

Until internal audit is seen as part of the solution, not part of the problem, it’s going to remain, well, a problem.  Until control owners gain a sense that developing a healthy dialogue with their auditors will only help things and not hurt them, it will continue to be a problem.  And until all involved parties working for the company feel as though they’re working towards a common goal, it will remain a problem.


April 14, 2012  2:23 PM

Anyone remember the Heartland breach?

David Schneier

Two weeks ago news broke about a massive leak of credit card information from a processor called Global Payments, and I braced for the firestorm of media coverage that was sure to follow.  Two weeks later it’s pretty much a non-event.  A few days ago the State of Utah reported a breach of nearly one million social security numbers and again I waited for it to hit the front page.  It was a blurb for about an hour and then disappeared, findable only via search engines.

Doesn’t anyone remember the great Heartland breach of 2009?  Seriously, anyone?

I’ve never tried to quantify what percentage of the work we do within the regulatory compliance domain is focused on safeguarding customer data, but off the top of my head I’m thinking it’s high.  And when you factor in that there’s an entire industry focused exclusively on protecting credit card information (PCI), you’d think that not only are breaches getting harder to pull off but that we’re becoming less tolerant of them as a society.  But there’s a general lack of outrage when these incidents occur, the media doesn’t much care to cover them properly and in the end they wind up being something of a non-issue.  And as I learned recently when my own bank card was compromised, the banking industry seems to simply accept that these things are going to happen.  Instead of getting better at preventing breaches, they’ve instead managed to streamline the process of shutting down the accounts in question and reissuing new ones.

You often hear that any security solution is only as good as its weakest link.  It seems to me that financial institutions are no closer to figuring out how to truly lock everything down, and with the constant evolution of technology, where we’re always adjusting to new exposures, new threats and new challenges, we’ll never actually get there.  There’s never a point where an infrastructure is truly hardened and where the weakest link is something so obscure as to not even present a credible threat.  Despite regulatory and industry requirements and sometimes intense scrutiny, we’ve reached a point where the only thing that’s improved is how quickly we repair the damage.  PCI hasn’t stopped things from happening (it hasn’t, and don’t debate me on its merits, because every time there’s an issue with a PCI-certified company there’s an excuse).  GLBA hasn’t stopped things from happening (too many moving parts and not enough pressure applied by the enforcement divisions).  It’s just not getting better and I can’t see that improving anytime soon.

I decided long ago that vigilance on my part is my only true defense against identity theft.  I’ve written previously about how I check every physical detail of every ATM I use to make sure the equipment is legitimate, that there are no hidden cameras recording my PIN and that I never use the privately leased machines you find all over the place.  I also double-check gas pumps to make sure a portable device isn’t skimming my credit card (I get strange looks all the time when I wiggle the card reader to see if it’s loose).  And I’ve turned on every email alert possible to track activity on my checking account (much to my wife’s chagrin).  I almost never use a smartphone app or web-based solution to conduct my banking because I don’t completely trust the technologies (or rather the people who can exploit them).  And to be clear, none of my concerns stem from what I see while doing my day-to-day fieldwork.  It’s all based on what I know happens out in the real world.

Until breaches are treated as a true threat to our personal security and receive the scrutiny they so richly deserve, none of this is going to get better.  When a breach of over one million credit card accounts is prefaced with the word “only” and that’s perfectly acceptable to all involved, we’re still obviously a long way off from solving the problem.


March 23, 2012  3:24 PM

GRC presents a broad spectrum; is it too broad?

David Schneier

In early 2004 I co-authored my first Sarbanes-Oxley (SOX) controls framework for a client.  Just about the entire thing required manual testing that, if everything worked as planned, would require a full-time resource to support.  About thirty seconds after submitting the framework draft to the client, my in-box started filling up with all sorts of ticklers from software vendors promoting automated SOX testing.  Anxious to identify efficiencies to shorten the testing cycle, I eagerly read through all of the offerings.  It didn’t take long to realize that most of the products were either regurgitated Y2K scanning solutions retooled to use SOX-oriented terms or flat-out security scanning software that addressed a relatively minor fraction of the testing required.  The promise of automated testing was but an illusion because in the end even the best of breed would only reduce the workload so much; a full-time resource would still be needed.

Now we have GRC software solutions that oddly enough promise to automate GRC-related tasks.

The first problem with any such assertion is that GRC is too broad a spectrum of activities and disciplines, and most solutions focus on subsets therein.  On one end you have the security-centric solutions, on the other end you have the risk-centric platforms, and somewhere in the middle is a crowd of offerings that try to touch on everything but nothing particularly deeply.  So the first thing a stakeholder needs to understand is what they’re looking to accomplish before they set out to select a product.  You can select ten different GRC vendors and discover ten different interpretations of the discipline.  And within those ten solutions there are vastly different approaches.  Some are similar to ERP packages in that their approach is somewhat hard-coded and you have to do things their way (or spend big bucks to customize).  Some are remarkably configurable and can be made to fit your processes like a glove (but that requires a steep learning curve and expanded time frames).

The second problem is that while most vendors selling into the GRC market use common terms, their internal definitions can be quite different.  Some solutions pitch risk assessments that are little more than questionnaires (i.e., with few or no risk-related elements such as inherent and residual risk) whereas others provide what look like mere questionnaires but turn out, on inspection, to be genuine risk assessments.  If you’re looking for a true risk-oriented solution you might go with the former when it’s the latter you truly need.  But the terminology is so similar that it’s hard to differentiate, and the only way you’ll realize it is after you take the software out for a test drive, not something every vendor is willing to provide (and I’m not talking about a two-hour demo, I’m talking about a true trial period).  You think you’re comparing apples to apples and it may turn out you were comparing apples to car batteries without knowing it.
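
To show what I mean by those missing risk elements, here’s a hedged sketch of the arithmetic a genuine risk assessment captures and a bare questionnaire doesn’t; the scales and formula are illustrative assumptions, not any vendor’s actual model:

```python
# Illustrative only: a true risk assessment scores inherent risk, applies
# control effectiveness and derives residual risk. A yes/no questionnaire
# captures none of these numbers.
def residual_risk(likelihood: int, impact: int, effectiveness: float) -> float:
    """likelihood and impact on a 1-5 scale; effectiveness from 0.0 to 1.0."""
    inherent = likelihood * impact            # exposure before controls
    return inherent * (1.0 - effectiveness)   # exposure after controls

# A plausible vendor-risk line item: likely, high impact, decent controls.
print(residual_risk(likelihood=4, impact=5, effectiveness=0.75))  # 5.0
```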

The third problem is that after a while it’s easy to become snow-blind during the selection and evaluation process.  Because of the common language and apparently similar functionality, you start looking for factors unrelated to what you really need to focus on as a way to separate the solutions from one another.  You’ll consider a solution prequalified because a competitor is using it, thinking their needs are similar to yours.  But they may be focused on information security activities where your institution is looking for automated risk assessment capabilities.  You’ll start shopping on price and contract terms, thinking that competing solutions are so similar it really comes down to who offers the best deal.  But software vendors usually know their market and the correct price points based on what their solutions offer; if two or more products appear evenly matched on functionality but one is much cheaper, there’s usually a reason.  The more expensive solution may come pre-loaded with all the related content you’ll need to use it effectively whereas the cheaper solution might require you to obtain your own licenses.  It’s not intentionally misleading, but that’s a detail easy to overlook during the vetting process.

GRC is an awesome concept working towards one day becoming an awesome discipline, but it’s not quite there just yet (a point I routinely beat to death, I know).  It’s spread out too far and wide, and depending on who you’re talking to you can get widely (if not wildly) varying definitions of what it is.  So it’s no wonder that trying to find an automated GRC solution is equally challenging; the vendors are trying hard to figure out what nail to hammer as well.  They all do some things remarkably well but at the expense of doing other things either partially or not at all.  That’s why it’s not uncommon in larger companies to find multiple GRC solutions installed; different business functions have unique needs and each purchases whichever product comes closest to meeting them.  It’s an expensive approach but, for the foreseeable future, a necessary evil.

I think we’re getting closer to a point in time where a common dialogue will be accepted by the audit and compliance community.  The OCEG folks have poured the foundation and it just needs a little more time to harden in terms of broad acceptance.  When I see their content displayed prominently next to all the COBIT binders at my clients I’ll know that time has come.  I predicted in 2007 that once we’re in the midst of a full-blown economic recovery GRC will quickly rise in prominence due to increasing regulatory pressures, almost identical to the way COBIT soared into the forefront of the industry fueled by SOX.  I see no reason to alter that prediction, I’m just not sure when the recovery will officially begin.

In the meantime keep participating in the dialogue, keep trying to define what GRC means to you and to your organization, and every now and again share those ideas with some of the decision makers who are shaping the discipline; they need to hear from everyone as they mature the thing.  As long as we in the audit and compliance domain keep moving things forward we’ll get GRC to where we need it to be, I’m certain of it.


March 6, 2012  6:00 PM

My bank card was compromised.

David Schneier

Two weeks ago, about two hours before departing on a long weekend trip to welcome back baseball in Florida, I received an email from my bank indicating that there had been suspicious activity on my Visa check card and that it had been suspended.  Considering that even under normal conditions my family’s spending is a bit unusual, I figured it was just a mix-up.  I mean, during most weeks I can fill up my car in four different states, make purchases in five and buy an impressive assortment of merchandise spanning the full range of the consumer spectrum.

So I called in an attempt to resolve things and was informed that it wasn’t my spending that caused the problem; it was the fact that one of the vendors I’d completed a transaction with had reported a breach.  Because my card number was potentially included in that breach, I was shut down.  I was fortunate that my bank is set up to help customers manage these situations fairly effortlessly (I don’t love them most of the time but this event won them some points with me) and after a brief stop at a local branch I had a temporary card and was able to continue on my trip.

A few items of note surfaced as a result of this experience.  The first is that my bank would not reveal the vendor that reported the breach.  The customer service representative I spoke with claimed she didn’t have access to the information, which I sort of believed.  But when I asked how I could find it out, she replied that they typically don’t share it.  I thought that a bit odd.  Shouldn’t I as a consumer be able to make informed decisions about who I do business with?  I should be able to find out who the vendor is so that I can decide whether or not to continue giving them any of my hard-earned dollars.  The second thing I found curious was how seamless the replacement process was.  They had a stack of temporary cards about five inches thick and a process so well defined and efficient that it almost seemed like I was asking to borrow a pen so I could sign something.  When I returned to the car, my son, who had been waiting for me, assumed they weren’t able to help me because I was out so fast.  How often does this sort of thing happen?  And to make their degree of efficiency that much more notable, a friend of mine experienced something similar and it took her bank over a week to get a new piece of plastic into her hands.

I recognize that this is a sign of the times we now live in.  We use plastic everywhere, our sensitive account information is digitized all over the place and the security controls protecting that information are only as strong as their weakest link.  It’s why you’ve heard me say many a time that requirements like PCI are an excellent starting point but by no means the be-all and end-all for securing the perimeter.  All it takes is one USB storage device to go missing, one new appliance added to a network with default values unchanged, one person printing off a report with NPPI and forgetting to pick it up from the printer and voilà, a breach is born.

I’m frequently onsite at clients of wildly varying sizes and I find something every day that makes me realize that sometimes the best weapon against a company being embarrassed by some sort of exposure is just dumb luck.  Regardless of whether they have a well-formed team of risk and compliance folks working hard to protect information assets or just a single person serving in a related function, it comes down to human nature, both in terms of those not following the rules and those who are ready to exploit that fact.  A prime example: when I find sensitive information left exposed, I collect it and either dispose of it properly or lock it up to share with the appropriate party as a “for instance.”  However, in those places where less honest people make similar discoveries, that same information becomes a commodity to be sold to those who indulge in things like identity theft.  Like I said, it comes down to pure dumb luck.

And so I’m left wondering whether my now deactivated and defunct bank card was the victim of human nature, a sophisticated scheme to access otherwise properly secured sensitive information, or just plain incompetence.  And while I’m glad that my bank was swift to react and protect me, I wish they’d also inform and educate me.  I mean honestly, if I’m going to be forced to memorize a whole new series of numbers, shouldn’t I at least be allowed to know who’s to blame?


February 16, 2012  5:49 PM

BITS Shared Assessment – No Free Lunch.

David Schneier

On Monday the BITS Shared Assessment was free, on Tuesday it cost $5,000 per year (at a minimum).

My first thought was that it was just like what drug dealers do – they give you free product until you’re hopelessly addicted and then start making you pay to feed that addiction.  My second thought was that I couldn’t imagine anyone actually wanting to pay for the content.  While it’s better than nothing as a framework it’s not that much better.  I’m sure there are certain pockets in the GRC industry who think that the Shared Assessment is to vendor management what COBIT is to IT governance but I certainly don’t.

Since first encountering the Shared Assessment a few years back I’ve always thought of it as bloated, difficult to apply effectively, and somehow both redundant and oddly vague at once.  The very first time I reviewed the content I immediately thought that whoever created it must get paid by the hour, because any attempt at relying on it was going to be major-league time-consuming.  And of course once I started investigating the companies behind the questionnaire(s) I realized I was spot on.  I once commented to a colleague that the questionnaire looked as though the collective assignment had been to think of every possible question you might ever want to ask a vendor, throw it all into a spreadsheet and then try to organize it after the fact.  If I’ve ever truly liked it in any meaningful way, it’s as a reference source when considering questions to include in customized questionnaires and assessments.

The folks running the show have made strides to turn the questionnaire into a true framework with an accompanying methodology, but in my experience most companies simply want to leverage the content of the questionnaires and use it how they see fit.  Some have made the effort to dig through the massive pile of questions and whittle it down to something more manageable while others pretty much ship it out as-is to their vendors, both the lite and full versions.  As someone whose practice often has to complete due diligence questionnaires, I have to tell you that if we needed to fill out even the lite version it might be a deal breaker due to time constraints.
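
For those who do attempt the whittling, the exercise itself is mundane; here’s a hypothetical sketch assuming the questions have been exported to a CSV with a category column (the real questionnaire layout differs, and the file name and categories here are invented):

```python
# Hypothetical sketch: filter the full question set down to the categories
# relevant to a given vendor. File name, column and categories are all
# assumptions for illustration.
import csv

RELEVANT = {"information_security", "business_continuity", "privacy"}

def tailor(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row["category"] in RELEVANT]

questions = tailor("shared_assessment_full.csv")
print(f"kept {len(questions)} of the original questions")
```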

As I alluded to earlier, I think many practitioners who use the Shared Assessment think of it as something more like COBIT.  I know COBIT, and you, sir, are no COBIT.  It’s really intended to be used by large vendors who provide services to multiple clients as something akin to a SAS 70/SSAE 16 report.  They pay someone to complete it for them and sign off on it, and when their customers look for annual proof that they’re properly controlled, they can send along a copy of the completed questionnaire with management’s approval stamped on the cover.  In theory it’s a good idea, but I’d still prefer a proper audit instead.

And it’s heavily geared towards technology vendors and, to a lesser extent, those who host services.  When you try to use the Shared Assessment for non-technology vendors it becomes that much more difficult to apply and sort of forces your hand into coming up with something else.  Trying to whittle 900+ questions down to something smaller only to discover you need to write a bunch of new questions on top of that has to be somewhere between depressing and outrageous, I would think.

What I really don’t understand is why this was even needed to begin with.  My vendor management experience goes back several years and I’ve always been satisfied working with content from existing sources.  I think that when you combine content from COBIT and the FFIEC you can adequately cover what needs to be covered to assess vendors.  I would go so far as to say that most examiners would agree with me, based mostly on the fact that more than 100 institutions use some version of a vendor management program my practice has designed and they always do well on that front, always.

For those of you who are going to stay the course, cough up the money and continue along with the Shared Assessment I wish you good luck.  I hope you’re able to glean something meaningful from the process and I pray you never wind up working for a vendor that needs to complete one of the resulting questionnaires.


February 3, 2012  5:58 PM

Governance, risk and compliance – related but not the same.

David Schneier

I was sitting in a meeting this week listening to a group of very bright people talk about an initiative centered on installing a software solution when I realized something rather disturbing: somewhere along the way in our industry, governance, risk and compliance have started melting together into something known simply as GRC.  I say “disturbing” for a very simple reason: they’re related but not one and the same.  And so I started thinking about a wide range of recent conversations spanning services work, software sales and solutions development, and there it was right in front of me; most of the people who throw around the term GRC just think of it as a massive catch-all for everything even remotely related to any of the three disciplines and not as a rallying point for coordinating their points of intersection.  Uh oh!

Is it possible that this incredibly important and still developing concept known as GRC can be hijacked and used instead to all but marginalize the sum total of its related parts?  Until this week I would never have thought something like this possible, but there it was, right in front of me and a bit of a shock.

There are likely two main drivers behind this disturbing trend: GRC software and the overwhelming volume of compliance-based activities.  Many of the GRC solutions currently on the market tend to be rather broad in scope.  While most of them are oriented towards one particular point within the GRC spectrum, they have all expanded to try and touch on as much as they can justify.  So whereas you have a product that may have been designed to manage policy content, it now also offers risk assessment, audit and overarching governance features.  But what it still does best is manage policy content.  The license for the product isn’t cheap, and senior management has been sold to some degree on the promise of automating much of the required work via this new and costly solution.  Thus we have the first driver behind the blurring of GRC lines: “We paid a lot for it so we’d better use the heck out of it.”  And so there’s a slow but steady march through the organization looking for things that can be brought into the fold.  However, not everything belongs in every GRC solution because, as noted previously, each offering no matter how effective tends to favor one specific location within the GRC spectrum.

But even when you have a solution broad enough to accommodate most of what you need to accomplish, the other driver comes into play: massive compliance requirements.  I’ve had clients who don’t much care whether what they need to do to comply makes sense for them but will do anything to pass an exam.  And so there’s this mad, lemming-like dash in a single direction to shoehorn anything and everything into this thing called GRC that might be even remotely related.  There’s little thought put into how best to get the work done, the primary concern being “we have to have something to show the examiner.”  The result is a hodgepodge of seemingly related activities coordinated under a single function or initiative but with almost zero effort made to normalize the workload and gain the efficiencies that GRC promises.  How thoroughly depressing for us practitioners.

And it’s fantasy to think that once things are set up to be done a certain way they’ll ever change.  Unless an examiner or auditor tells you something needs to change, everything stays the same.  So a poorly designed GRC function remains poorly designed forever.  And an unnecessary GRC activity continues because no one typically cares if you’re doing too much, only too little.  It’s almost like people just want to stuff everything remotely related to the discipline into the GRC closet and then make sure guests never open that door.

I know we’re still early in the GRC life cycle (Michael Rasmussen recently noted in an article that it’s been ten years since he first conceived of the acronym and concept) but what if this trend isn’t derailed sometime soon?  What if because of the weak economy (I’m being polite, I should swap “weak” for “horrible”) companies continue to just sweep everything under the GRC rug and don’t exploit the benefits of the concept?

I’m reminded of the old joke about the immigrant who decides he’s going to use his lumberjack skills in the U.S.A. to make a living and invests his life savings in a chainsaw.  After repeatedly failing to achieve any appreciable gains in his productivity, he finally returns to the store to find out what’s wrong with the machine.  Once they pull the ripcord and fire it up, he jumps back in surprise, asking, “What’s that noise?”  I have this image in my head of some internal controls manager running his or her company’s GRC program ten years from now, stumbling across an OCEG document, reading it and jumping back in surprise, exclaiming, “What a great idea, why aren’t we doing this sort of thing?”  Don’t laugh, I can all but guarantee it’s going to happen at this rate.


January 8, 2012  9:27 PM

Maintaining compliance is often the Missing Link.

David Schneier

I’ve been in the solutions-selling business on and off for about a decade, but exclusively so these past four years.  Up until becoming a partner in my current practice I was pretty much only involved in helping sell the solution and usually implementing it before moving on.  Seldom did I have the occasion or opportunity to loop back with the client much beyond the initial six months after getting everything set up to find out how things were going and how well the solution was functioning.

But these past four years have allowed me more than ample opportunity to rectify that heretofore unknown blind spot in my career.  We don’t just sell a solution, we support it, and that involves the establishment and maintenance of what can most aptly be classified as a relationship.  While we have a large number of clients we seldom hear from, there are some who call us all the time.  Often it’s to ask how best to exploit functionality, sometimes it’s because they forgot how to do something (and we advocate calling to ask rather than reading through the user guides) and on more than one occasion it’s because they have an exam looming large on the horizon and they still haven’t quite finished setting everything up.  It’s the latter that has proven to be a revelation.

The entire reason for purchasing a solution is so that you don’t have to first figure out what needs to be done.  If the solution is designed right, there should be a series of relatively basic steps that are clearly outlined and that, once followed, have you up and running.  Instead of wasting precious time and effort getting started you can pretty much start focusing on conducting the related work so that everything is kept current.  That’s not to say it’s easy, only simple.  And because most compliance-based work is spread out over the course of a full year, it should never require herculean efforts to maintain.  Our vendor management solution pretty much requires a few hours of setup time and then roughly a few hours per week on average going forward.  And when properly supported it works, it actually works the way it’s intended to.

But here’s the problem: Developing or purchasing the right solution to comply with any regulation or mandate is just the very first part of what’s necessary.  You actually have to properly implement and use that solution.  All too often that part is missed.

It’s not just with my current collection of clients but also with those I’ve provided consultative support to over the years.  I have one client with somewhere close to $2M in purchased software sitting locked in a file cabinet, never implemented due to shifting priorities among management.  Shocking?  Yes, but also frustrating, because some of the very problems that software was intended to address still existed.  I have another client I conducted a risk assessment for that had multiple solutions that were near-identical to each other but were subsequently replaced by something different because as management changed they wanted only the solutions they already knew.  The result was hundreds of thousands of dollars per year being spent on maintenance because they needed to keep the data contained in each solution and there was no straightforward way of extracting it from one and merging it into another.

Whatever solution you decide to go with, from a simple spreadsheet all the way through to a seven-figure software package, it makes little difference if nothing happens beyond setting it up.  Our advice to clients is that when they purchase one of our solutions they can often get a one-year pass with their examiners as long as they can actually display the solution and provide a real and credible plan for how they’re going to use it.  Typically the examiner will give you points for taking a step in the right direction and will allow you the additional time necessary to get it up and running.  But that’s Year One; by Year Two you’d better be able to show progress.

It’s why whenever I’m engaged with any implementation, be it one of our own solutions or when I’m serving in a pure consulting role, I often caution that it’s a good first step but only the first of many needed to be successful.  Everyone gets caught up in the potential of the project and starts seeing their better selves once it’s fully implemented.  But I’ve witnessed too many projects where, after the initial success fades and resources start getting pulled onto newer initiatives, momentum is lost and progress stalls.  I was on one business continuity project where they all but had the plan updated to address an examination finding.  I left right before they submitted the BCP for Board of Director approval and found out a year later that although that part had been properly completed, they never actually deployed the plan.  Someone in senior management felt that the plan itself would satisfy the examiners and, because of resource constraints, decided to delay the necessary implementation and training.  Management gambled and they were tattooed by their examiner the following year.  How frustrating is it to know that the hardest part of the project was already done, but not enough so to make the finding go away?  It happens all the time.

I understand the pressures in play for most institutions, honestly I do.  Too few resources, too little time and trying to figure out the right balance between running a business and meeting regulatory requirements.  But that doesn’t explain why you’d implement a solution and not maintain it.  And does it ever make business sense to invest in anything but not leverage the benefits associated with that investment?  Besides, who wants to be the one standing in front of the CEO explaining that while it’s true the money was spent to solve the problem, the problem still exists?

Seriously, go the distance, finish what was started and then put someone in charge of keeping the thing current.  In the end you’re going to have to anyway, so why wait?  Oh, and before you run out and purchase a brand new solution, check the file cabinets and make certain you don’t already own one.


December 22, 2011  9:44 PM

Why I don’t trust hosted or SaaS solutions.

David Schneier

Let me begin by sharing a story from the way-back files.  In the mid-’80s, when I was first starting out in my career, I was working as a junior programmer in Manhattan.  Courtesy of playing on the corporate softball team I became acquainted with a fairly diverse group of people, ranging from those in the trenches where I plied my trade all the way up to the executive suite.  One of the people I came to know well was a senior member of the internal audit department.  One day I learned that he had been fired rather suddenly earlier in the day, something that definitely came out of nowhere.  I came to find out that under the guise of conducting audit work he had gained access to the company’s compensation data file and was logged browsing employee records from the CEO on down.  The problem was that he wasn’t conducting any audit that would explain his actions; he was doing it simply because he was curious what certain executives were being paid.  Having been caught red-handed and without a viable explanation, he was terminated on the spot and escorted out of the building.

This was someone who for all intents and purposes had nothing to gain from doing something so blatantly stupid.  As an auditor he was likely aware of the logging capabilities available on the host (a mainframe system).  He also had direct knowledge of the audit culture and the degree of scrutiny placed on certain internal artifacts and repositories.  But in the end his basic human nature created an override, allowing him to indulge his curiosity.  For me that meant you could never assume that any manner of stored information was ever truly safe and secure.
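
What likely caught him was nothing more exotic than a periodic review along these lines; a minimal sketch under assumed log and ticket formats, with every name hypothetical:

```python
# Illustrative sketch: flag reads of sensitive datasets that carry no
# approved engagement (audit ticket) authorizing the access.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AccessEvent:
    user: str
    dataset: str
    engagement_id: Optional[str]  # authorizing ticket, if any

APPROVED_ENGAGEMENTS = {"ENG-2041", "ENG-2042"}  # hypothetical open audits

def unexplained_access(events: List[AccessEvent]) -> List[AccessEvent]:
    return [e for e in events
            if e.dataset == "compensation"
            and e.engagement_id not in APPROVED_ENGAGEMENTS]

log = [AccessEvent("auditor7", "compensation", None)]
print(unexplained_access(log))  # surfaces the curiosity browsing
```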

Thus began my basic mistrust of storing sensitive information in electronic repositories.

With that in mind, imagine my horror as technology began a rapid progression away from centralized storage, spreading out first within the infrastructure to distributed applications and eventually breaching the walls of the data center to find new homes in other companies’ so-called data centers.  Beyond the fact that you never truly know how secure your data is (notwithstanding reports and attestations to the contrary), it now also has to traverse communication lines that, despite what you may want to believe, are vulnerable in a number of very real ways.  And we’re not just talking business data; we’re talking social security numbers, bank account numbers, credit card numbers and on, and on.

I simply don’t trust that any sensitive data is ever truly protected anymore.  I operate under the assumption that there are two common states with regards to data security: known breaches and those yet to be discovered.  When I’m challenged with the logic that we’re always told about confirmed breaches eventually, and so we know exactly how much has been exposed, I laugh.  All that means is that the hackers and criminal element slipped up along the way; a confirmed breach indicates someone made a mistake.  I truly believe that a successful breach is never detected, that the perpetrators behind it figure out the proper balance between skimming data and moving it around for illicit gains so that it never hits the radar.

And I think the threat comes from all over the map.  I think it’s often internal: someone on the inside behind the firewall and locked doors, or someone with legitimate access to databases.  I think it’s sometimes along the way between a transmission’s point of origin and its destination.  And I think it’s often at points of exposure along the way.  I just don’t believe there aren’t rogue employees at offsite storage facilities who know how to rig the system and grab media with all manner of PII and NPPI with no one ever the wiser.  I reject the notion that it’s impossible for employees of the popular SaaS companies to gain undetected access to a wide variety of information typically considered private and secured.  I think this happens regularly (if not often) and that as long as we remain blissfully ignorant it will continue to happen indefinitely.

I use only one rule when it comes to how best to protect sensitive data: if the human element is involved in any way your data is at risk.

And if you’re not truly at risk yet, if there have been no concerning or inappropriate attempts to access your choice data, that’s either because they haven’t gotten to you on their to-do list yet or your choice data isn’t as choice as you might think.

If I had it my way everything would be moved back to Big Iron in an internal data center and I’d go hog-wild slapping every conceivable monitoring tool and detection device wherever possible.  Short of that, I’d select solutions that could only be run behind my firewall and on telecom pipes I directly controlled to further minimize my exposure.  Oh, and I’d probably fire anyone who ever even mentioned migrating to the cloud, just to set an example.


December 5, 2011  11:54 PM

The trouble with GRC.

David Schneier

I love GRC, at least the concept.  I’ve gotten way more than my fair share of print time expounding on its many virtues and how it continues to make inroads into so many organizations.  It’s the next and necessary step in the evolution of audit and compliance, a fact (yes, fact) of which I’m certain.

But why is it that no one can ever truly and honestly agree on what exactly GRC is?  I first wrote about this very issue in 2008, then again in 2009 and 2010 and once again earlier this year.  Beyond the too few thought leaders on the topic there’s very little clarity.  And most of the credible GRC sources seldom extend from the theoretical to the practical (my opinion, but let’s remember, this is my blog).  For those in the trenches who have a vested interest in trying to apply the most basic elements of the discipline, there’s very little out there to help them figure out where to begin and what to do.  When you throw the dozens of vendor-spawned interpretations into the hodgepodge of concepts, it becomes nearly impossible for any two people to ever agree on something close to a common definition.

What a shame.

It’s a shame because conceptually GRC is too important an evolutionary step within the audit and compliance space to be botched.  The pile of industry and government requirements seems to grow almost daily, the resources available to manage the work only seem to shrink, and these trends show no sign of slowing down.  The blueprint for a better and more efficient approach is right there before us practitioners and yet we can’t quite see the forest for the trees.  I’m not sure what the primary reasons are or whether they can even be boiled down to just a few, but I’m gonna give it a try just the same.

First, stop listening to the software vendors explain what GRC is or isn’t.  They have solutions to sell, and while some of them are truly impressive, they’re going to align their GRC definition with the capabilities of their product.  Second, stop reading white papers and frameworks.  There is some very important content published by some very, very bright people, but in the abstract much of it is at best daunting to internalize and at worst suffocating to the point where you’ll just get frustrated and put it away for another time.  Third, don’t think you can simply bring in an advisory firm to define or develop a related program.  My experience working on GRC-inspired projects is that the corporate culture and its capabilities are far too important to either overlook or underestimate.  An external perspective often can’t detect these nuances, and so what they design is doomed to failure because it can’t be sustained in the real world.

What can you do to overcome these all too common pitfalls?  Your homework.

There’s no real shortcut to identifying where to begin laying down the most fundamental steps of a GRC program.  Only you and others in your institution can identify both the pain points and the most obvious opportunities.  All too often the first step involves forming a committee, which is usually a recipe for delay (someone I worked with years ago advised that if you want to make sure a project never happens, bury it under a committee).  But what you can do is seek out and enlist the support of partners who share a like mind or a common goal.  I don’t usually recommend engaging internal audit at the onset, but you might want to include a trusted member of its team.  You might even consider reaching out to your examiner for suggestions on where to begin.  Perhaps you have control owners within your infrastructure who spend way too much time generating content to satisfy compliance requirements and are willing to lend a hand if it means easing their burden at some future point.  But whatever you do to start forming a team and outlining ideas, you need to think it through with your expert knowledge and understanding of your institution’s capabilities.

Once you begin forming that plan with some deliverables and goals, you can consider augmenting your efforts with an expert GRC hand to guide you.  Once you firm up what you think your organization is capable of and have had the chance to vet that plan with key stakeholders, you can research GRC products that are closely aligned with what you’re looking to accomplish (the right vendor will want to learn about what you’re trying to do rather than tell you how to do it, trust me.  And yes, I’m biased).  And once you have a stronger sense of what you’re looking to accomplish, you can engage the structured approach of a framework.

Oh, and as for a single definition of GRC, I’m clinging to the one I’ve been using since first reading about it several years ago.  GRC harmonizes efforts across previously detached disciplines that existed in their own silos within an organization (this is my fancy version).  In layman’s terms, it’s the point of integration between related functions, reducing redundant activities and allowing the left and right hands to work together.  And the only wrong way to try and implement some of its elements is to not even try.

No matter its size, your institution is neither too small nor too complex to benefit from GRC.  No matter how many times you may have tried to build something unsuccessfully, it’s entirely possible to accomplish.  No matter how overwhelming or confusing you’ve found the concept at the point where you tried to get the rubber to meet the road, there’s a simpler, more viable approach.

Make it your corporate New Year’s resolution for 2012, I implore you.

