In an unusual risk management analysis methodology, payroll and human resource services provider Paychex breaks down its risk factors into four regions and pits them against one another.
Frank Fiorille, director of risk management for Paychex, presented an overview of the company’s risk analysis process at the 18th Edition SOX Compliance & Evolution to GRC conference in Boston on May 17.
In this version of March Madness, the Final Four comprises financial risks, hazard risks, strategic risks and operational risks. In the example Fiorille presented, each region is assigned 16 risks, which then compete against each other in live votes among the company's leaders to determine the risk champions in each region.
The brackets are whittled down to a Sweet 16, four in each region, before being put through more rigorous tests, vetting and quantitative analysis, and then ranked again on heat maps.
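The bracket mechanics described above reduce to a simple tournament tally. Here is a minimal sketch of how one voting round might be scored; the risk names and vote counts are invented for illustration and are not Paychex data.

```python
# Illustrative sketch of one voting round in a risk "bracket".
# Each matchup pairs two risks with the vote totals they received;
# the risk with more votes advances to the next round.
def run_round(matchups):
    return [a if a[1] >= b[1] else b for a, b in matchups]

# Hypothetical matchups from one region's round of 16.
round_of_16 = [
    (("data breach", 14), ("vendor failure", 9)),
    (("payroll error", 11), ("system outage", 17)),
    (("fraud", 20), ("key-person loss", 6)),
    (("regulatory change", 8), ("pricing pressure", 12)),
]

winners = run_round(round_of_16)
print([name for name, _ in winners])
# → ['data breach', 'system outage', 'fraud', 'pricing pressure']
```

Advancing risks would then be re-voted round by round, and the survivors subjected to the quantitative vetting and heat-map ranking the process describes.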
“Risk management is part art and part science,” Fiorille said, “but the business units know their risks the best.”
I got a primer on principal agent risk a couple months ago from risk expert Ali Samad-Khan, the CEO of Stamford Risk Analytics and a keynote speaker at the Risk Management Association’s GCOR IV operational risk seminar in Boston. Principal agent risk speaks to the reality that employees of a company are sometimes in the position to do things that are not in the best interest of the organization. Managers who are agents don’t always do things that are in the best interest of the stakeholders or the principals.
For example, if an employee’s bonus is based on the amount of money he makes, as opposed to the amount of money to be made on a risk-adjusted basis, Samad-Khan explained, that employee can end up making a huge amount of money by taking a huge amount of risk. When thinking about principal agent risk in terms of payout metrics, he advised his audience of risk managers to ask two questions: Who is the intended beneficiary, and who is the intended loss sufferer? In a criminal event, where the perpetrator is intent on benefiting himself, there is a clear intended loser. It’s a zero-sum game. So, how is that situation different from principal agent risk? When an employee of a firm takes excessive risk, the intent is usually not to harm the firm, he said. The intent is not to harm anyone, but to benefit the firm and himself.
“But if you look at the distribution of all potential outcomes of this excessive risk-taking, the expected value of all the outcomes is negative. What does this mean? It means that you know full well that what you are doing is not in the best interest of the firm on a risk-adjusted basis. It means this transaction you are engaging in destroys value, and yet you go ahead and do it because it benefits you,” Samad-Khan said.
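Samad-Khan's point can be made concrete with a toy calculation. In this sketch the probabilities, payoffs and bonus rate are all invented for illustration: the trade has a negative expected value for the firm, yet a positive expected payout for the agent, because the agent shares in gains but not in losses.

```python
# Hypothetical outcome distribution for one excessive-risk trade.
# Probabilities and payoffs (in $M) are invented for illustration only.
outcomes = [
    (0.60, 10.0),   # 60% chance: trade pays off, firm gains $10M
    (0.40, -20.0),  # 40% chance: trade blows up, firm loses $20M
]

# Expected value to the firm: sum of probability-weighted payoffs.
firm_ev = sum(p * payoff for p, payoff in outcomes)

# The agent's bonus is a share of gains only ("heads I win"),
# with no clawback on losses ("tails somebody else loses").
bonus_rate = 0.10
agent_ev = sum(p * max(payoff, 0.0) * bonus_rate for p, payoff in outcomes)

print(f"firm EV:  {firm_ev:+.2f}M")   # negative: the trade destroys value
print(f"agent EV: {agent_ev:+.2f}M")  # positive: the agent still wants it
```

With these numbers the firm's expected value is -$2M while the agent's is +$0.6M, which is exactly the misalignment Samad-Khan describes.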
“In an environment where compensation is frequently on a ‘heads-I-win-tails-somebody-else-loses’ basis, you wind up with a lot of principal agent risk,” Samad-Khan said. Principal agent risk — still not even part of the standard risk taxonomy — must be factored into risk management models.
Making piles of money on a risk-adjusted basis is fine, provided there is no intent to do harm. Making piles of money by taking excessive risk that could harm the principals and the shareholders is not okay.
Of course, when the agents are the principals, and the shareholder being harmed is the American taxpayer, that’s a matter for the courts.
“Security in the Cloud” was the title for the session at the Olin Innovation Lab, an annual event at the Olin College of Engineering in Needham, Mass. As the four panelists made clear, however, there is precious little of that in most cloud computing arrangements.
For attorney Ieuan Mahony, a partner at Holland & Knight LLP, the opacity or transparency of the cloud computing engagement is determined by the contract. “But the problem now is that these contracts are fairly impenetrable,” he said. Indeed, the ability of the potential consumer of cloud computing services to actually do the due diligence required to write a contract that provides transparency is a “huge problem,” agreed David Escalante, the director of computer policy and security at Boston College, because the market is so immature.
In fact, when Boston College’s legal and contract people have been asked to look into cloud computing, they tend to “pull out the old paradigms,” used for the typical earthbound vendor for whom service-level agreements, software escrow, and spelling out where the data is stored is standard operating procedure. Pinning down cloud providers on such cloud computing security issues, or on how employees are vetted, or on legal protections if that provider goes belly up, is very tough.
“All those provisions of a customer contract, these cloud computing providers can’t do and still utilize their economies of scale,” Escalante said.
(CEO Matt Moynahan, of the cloud-based application security provider Veracode, for example, said his company offers service-level objectives rather than SLAs, in part because his customers’ demands are morphing so quickly. The Veracode promise to return an application within 72 hours is harder to guarantee when customers are now sending 100 million lines of code to vet, compared with earlier days, when customers were sending 50,000 lines of code.)
The solution will be to move to a set of agreed-upon cloud computing security standards, and the legal and technology communities are inching toward them. Meantime, Escalante has leaned on the due diligence being done by Shared Assessments, a member organization in the banking industry that was formed to evaluate the security controls of service providers.
Dave Sherry, CISO at Brown University, where student email is now on Google Apps, said he consulted the nonprofit Cloud Security Alliance, which puts out free guidance and recently issued its Top Threats to Cloud Computing report, for his “many, many rounds” of contract negotiations with Google. Sherry had one big advantage going into negotiations, namely, Google’s strong desire to add a name-brand Ivy League university to its roster of academic customers. So the 800-pound gorilla proved flexible on some counts, he said, including allowing Brown to look at its hiring practices. Still, when it came to a formal risk assessment, Sherry added, Google “was going to give us precious little.”
On the other hand, Brown gets four years of Google Apps for free, and it’s hard to argue with free, Sherry said.
We’ll be writing more about the conundrums CIOs face in deciding whether and exactly how to utilize the cloud in an upcoming ezine for our sister site, SearchCIO.com. Meantime, I’d like to hear from you about how your organizations are weighing the risks and benefits of cloud computing.
As senior writer Linda Tucci recently reported, IT is increasingly turning to enterprise risk management as uncertainty in the macroeconomic climate continues. Even as some enterprises have held off on further investments in GRC software, she observed, “the more budgets tightened, the more imperative it became that both IT and the business target their biggest exposures and eliminate redundant controls and audits.” For instance, in some areas, like carbon compliance, specialized GRC software has the potential to help turn carbon footprint management into cost savings.
Given continued interest in the potential of GRC software, we published a new governance, risk and compliance FAQ yesterday. If you know of neutral, useful governance, risk and compliance resources online that should be added to the FAQ, please let us know in the comments or by sending an email to firstname.lastname@example.org. As we add more resources to SearchCompliance.com, you’ll be able to find them at our IT governance, risk and compliance topic page. Also, make sure to check in throughout the week here on the IT Knowledge Exchange, which features two GRC blogs: “Regulatory Compliance, Governance and Security,” by Charles Denyer, and “IT Governance, Risk, and Compliance,” by Robert E. Davis.
The most immediate impact is the provision for an additional 60 days to comply with the regulations. The deadline for implementation is now March 1, 2010.
Individuals and municipalities have expressly been removed from guideline jurisdiction, with a clarification that the “regulation applies to those engaged in commerce.” Guidelines on the requirement for a written information security plan are now simplified.
A new definition for the term service provider was added, and the Office of Consumer Affairs and Business Regulation amended its third-party vendor rules: existing contracts now have a two-year grace period to incorporate requirements that those third parties be in compliance.
Encryption requirements have been clarified. The apparently strict but, practically speaking, vague 128-bit specification from the prior version was replaced by “technology-neutral language.”
Further, a “technical feasibility” standard has been incorporated, acknowledging that methods to securely encrypt data on portable devices may not yet be available. Email encryption now falls under the technical feasibility standard. Additionally, encryption of backup tapes has been clarified to include prospective encryption. So you may safely cancel your firm’s plans to encrypt existing backup tapes. Encrypting new backup tapes will still be required, along with any personal data that travels over the public Internet or wireless network.
In another change that I believe will ultimately enhance consumer protection, 201 CMR 17.00 has been brought in line with certain federal regulations. Specifically, the Massachusetts data protection act now cedes authority to the Federal Trade Commission’s (FTC) standards established under the Gramm-Leach-Bliley Act (GLBA). GLBA utilizes a risk management approach to data security.
The patchwork of 44 different state health data protection laws has delayed electronic automation of, and therefore overall security for, health records. Adopting a federal standard, starting with the FTC’s risk-based approach to data protection, avoids this pitfall and may make widespread compliance both more feasible and more likely in the near future.
On one hand, a risk management approach should be familiar to IT professionals. It shifts resources from “check-the-box” controls that may or may not address a particular organization’s specific risks to controls that make more sense in context. On the other hand, given the concrete definition of the personal information in scope, it is difficult to see a case where risk management would not apply whenever such personal data is stored.
“Mandating every component of a program and requiring its adoption, regardless of size and the nature of the business and the amount of information that requires security, makes little sense in terms of consumer protection,” said Bradley MacDougall, of Associated Industries of Massachusetts. Risk management and assessment will afford more consumer protection by matching a given business’ actual risks with required security investments.
The 2009 ISACA International Conference held in Los Angeles had a much different feel than those of the past. While IT controls were consistently a primary talking point, the emphasis was on how to better align business and IT goals. Even though theoretical concepts like the risk and value of information technology were discussed at length, many of the presenters addressed real-world issues with respect to advancing along the compliance spectrum.
Oracle representatives Mark Sunday, CIO and SVP, and Gail Coury, VP, kicked off the festivities with a detailed and insightful keynote address that outlined the challenges of compliance amid heavy acquisition periods. Attendees then proceeded to presentations along one of four tracks.
While useful information was abundant and widespread, here are some of the more interesting discussion points:
If controls are the key, governance is the lock
Much discussion was held about progressing beyond creating a control environment and moving toward overall governance. With compliance budgets decreasing at a record pace, governance is the only way that auditors will be able to show the value of audit activities.
Risk was the real elephant in the room. Discussions concluded that, while we cannot fully eliminate risk in a cost-effective manner, implementing a monitoring or review process provides an eye-opening set of data for many businesses.
Even though attendance appeared to be down, the group was very diverse and included representatives from all over the globe. ISACA members from international companies enlightened the group with unique and challenging regional issues.
Overall, the conference delivered as promised. It had legacy theory, risk management theory, international diversity, and real-world solutions for almost any IT compliance issue. ISACA continues to be on the cutting edge of IT governance.
The two management consultants are proponents of expanding the definition and practice of risk management to include business model risk — that is, risk introduced into your company by new or changed capital ventures or business processes. In their view, business process automation has run amok, leaving the business (as well as the IT organization) exposed to risks that it might not be aware of.
“What we need to understand is where we are introducing risks, and whether the risk is understood and planned and not a byproduct of a lack of knowledge or visibility into what actually goes on in the organization,” said Sen in the following podcast, recorded this week. In the podcast, Sen and Vaughan explain what business model risk is, how to measure and understand it, and how to make business model risk a key part of any risk management and IT governance strategy.
Over the course of the wide-ranging interview, recorded on-site at RSA Conference 2009 in San Francisco, Jones discusses the challenges he faces as the CISO for a global multinational company. Listen to the podcast to learn:
Steve Ross, a director in the Security & Privacy practice of Deloitte & Touche LLP, has some thoughts in this IT Compliance Advisor podcast about the privacy and compliance risks associated with bringing in these “vetted” users.
Ross, a former international president of ISACA and IS Security Matters columnist for the ISACA Journal, explains to SearchCompliance.com Executive Editor Scot Petersen what constitutes a vetted user, what compliance risks come with a vetted user, and what some best practices are for ensuring the privacy of the vetted user.
Implementing a risk assessment that will align the COBIT control framework with risks is a valuable undertaking and a smart way to approach the challenge. If approached with a working knowledge of COBIT, it should take no longer than any other risk assessment approach.
In the long run, it will likely shorten the overall cycle:
Risk assessment -> Recommendation -> Solution implementation -> Audit
This is because COBIT provides a thorough checklist of potential risk areas that might otherwise be missed; missing them can require multiple passes, or waste effort implementing solutions to lower-priority risks while ignoring those with a higher priority.
One thing to keep in mind is that COBIT controls are not just “in an IT department.” They include controls for business interruption and other business problems that have traditionally fallen to IT to deal with, rightly or wrongly.
The first step is to obtain a copy of COBIT controls, which you can do from ISACA.org or other sources on the Web.
The second step is to provide education, if necessary. Make sure key individuals in your organization have heard of COBIT and understand it is an internationally accepted standard. No need to worry anyone will know it better than you. Even auditors and CISA professionals achieve only a moderate level of memorization of all aspects of COBIT. COBIT changes all the time, and technology in some areas moves beyond it. In general, COBIT is too far-reaching for even the most seasoned IT professional to avoid re-reading and referring to it frequently when working with it.
After obtaining a copy and getting buy-in, the third step is to put it away. You need to ask yourself and others where the known risks to IT and business lie. This bottom-up approach is critical to avoiding “over-COBITING,” a common affliction.
Once you have carefully listened to IT professionals and others with respect to control weaknesses and the risks that actually “keep them up at night,” you are ready to pull out your COBIT framework again. Review a fuller set of risks with those same individuals. See if that uncovers risks they may have missed the first time. This checkpoint is one benefit of COBIT.
Finally, you should document your risk assessment and note areas listed in COBIT that individuals in your organization did not consider worthy of note. Each COBIT area should be covered. If the risk included in COBIT is not prioritized in the risk assessment, a specific reason should be noted, along with the individual who decided to assume or dismiss that risk. This will come in handy later, trust me.
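The documentation step above amounts to a coverage record: every COBIT area gets an entry, and any area that was not prioritized must name who assumed or dismissed the risk and why. A minimal sketch of such a record follows; the process names are drawn from COBIT 4.1, but the owners and reasons are invented for illustration.

```python
# Sketch of the final documentation step: for every COBIT area
# considered, record whether it was prioritized and, if not, who
# decided to assume or dismiss the risk and why. Owners and reasons
# below are hypothetical; use your organization's real assessment.
assessment = {
    "PO9 Assess and Manage IT Risks": {
        "prioritized": True, "owner": "CISO", "reason": None,
    },
    "DS4 Ensure Continuous Service": {
        "prioritized": True, "owner": "IT ops", "reason": None,
    },
    "AI6 Manage Changes": {
        "prioritized": False, "owner": "J. Smith",
        "reason": "Change volume is low; risk accepted for this cycle",
    },
}

# Coverage check: every deprioritized area must name an owner and a reason,
# so the decision to assume the risk is traceable later.
gaps = [area for area, rec in assessment.items()
        if not rec["prioritized"] and not (rec["owner"] and rec["reason"])]
assert not gaps, f"undocumented risk dismissals: {gaps}"
```

A record like this is what makes the assessment defensible in a later audit: the dismissed areas are as visible as the prioritized ones.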
If you follow these steps, you will be further ahead than 99% of professionals and IT departments in your shoes. Good luck, and happy documentation!
Sarah Cortes is a senior technology manager with extensive experience in all aspects of delivering information technology systems and services to Fortune 500 firms in the financial services industry, as well as biotechnology, media and higher education. She has managed numerous major Code Red business and system interruptions, including the 9/11 failover of trading, accounting and other critical business systems during Marsh McLennan’s WTC data center collapse. You can learn more about her work at InmanTechnologyIT.