IT Governance, Risk, and Compliance


August 27, 2009  8:16 PM

Synchronizing Balanced Scorecards – Part III

Robert Davis

The Balanced Scorecard is a strategic planning and management system that for-profit and not-for-profit entities can use to align business activities with the organizational mission, improve communication, and monitor performance against strategic goals. By adding strategic non-financial performance measures, the Balanced Scorecard is considered a ‘value-added’ measurement framework that gives management an expanded view of organizational performance. Methodologically, the Balanced Scorecard builds on several previously established management concepts, including customer-defined quality, continuous improvement, employee empowerment, and ‘measurement-based’ management and feedback.

Balanced Scorecard deployment integrates feedback from internal business process outputs with feedback from business strategy outcomes, creating a “double-loop feedback” system within the implementation. The standard Balanced Scorecard perspectives are Learning and Growth, Business Process, Customer, and Financial. This general theory can be adapted to measure achievement of information security objectives using Business Contribution, Future Orientation, Operational Excellence, and Customer Orientation categories, enabling continuous improvement of strategic performance and results.
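To make the mapping concrete, here is a minimal Python sketch of how the four security-oriented categories might be represented as a scorecard data structure; the class, measure names, and targets are hypothetical illustrations, not metrics prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """A single scorecard measure: an actual value tracked against a target."""
    name: str
    target: float
    actual: float = 0.0

    def attainment(self) -> float:
        """Target attainment as a ratio (1.0 = on target)."""
        return self.actual / self.target if self.target else 0.0

# Illustrative security scorecard keyed by the four categories named above;
# every measure below is a hypothetical example, not a prescribed metric.
security_scorecard = {
    "Business Contribution":  [Measure("audit findings closed (%)", 95, 88)],
    "Future Orientation":     [Measure("staff with current certifications (%)", 80, 72)],
    "Operational Excellence": [Measure("patches applied within SLA (%)", 98, 97)],
    "Customer Orientation":   [Measure("user satisfaction score (1-5)", 4.0, 4.2)],
}

for category, measures in security_scorecard.items():
    for m in measures:
        print(f"{category}: {m.name} -> {m.attainment():.0%} of target")
```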

View Part I of the Synchronizing Balanced Scorecards series here

August 24, 2009  7:13 PM

Synchronizing Balanced Scorecards – Part II

Robert Davis

Typically, measures or indicators should be selected from factors that lead to improved employee, customer, operational, and/or financial performance. Performance measures or indicators are assessable characteristics of products or services used to track and improve organizational results. Most modern entities depend on performance measurement and analysis to stay directionally attentive. Measurements should be derived from the entity’s strategy and provide critical data and information about key processes, systems, and programs. Correspondingly, a major consideration in performance improvement is the creation and use of performance measures or indicators. By analyzing the data that deployed tracking processes generate, adopted measures or indicators can be evaluated and adjusted to better support managerial goals, as sketched below.
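As a simple illustration of evaluating an adopted indicator against tracking data, the following Python sketch flags whether the latest reading sits within a tolerance band of its target; the indicator, readings, and threshold are all hypothetical.

```python
def evaluate_indicator(readings: list[float], target: float, tolerance: float = 0.05) -> str:
    """Flag whether the latest reading sits within a tolerance band of its target."""
    latest = readings[-1]
    if abs(latest - target) <= target * tolerance:
        return "on track"
    return "below target - review measure" if latest < target else "above target"

# Hypothetical indicator: percentage of incidents resolved within the service window.
sla_resolution_pct = [88.0, 90.5, 93.0, 96.0]   # successive periods, most recent last
print(evaluate_indicator(sla_resolution_pct, target=95.0))   # -> "on track"
```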

View Part I of the Synchronizing Balanced Scorecards series here


August 20, 2009  7:58 PM

Synchronizing Balanced Scorecards – Part I

Robert Davis

With the introduction of ‘Balanced Scorecard’ theory, management gained the option to view the entity from four perspectives and to develop metrics, collect data, and perform analyses at standardized abstraction levels. Organizational balanced scorecarding prescribes what an entity should measure in order to balance the predominantly financial approach that has overshadowed holistic management. By definition, the Balanced Scorecard is a management system that crystallizes vision and associated strategy for focused execution. The Balanced Scorecard also drives feedback from internal business processes and external outcomes in order to continuously improve strategic performance and results. When managerially integrated, the Balanced Scorecard transforms strategic planning from a periodic documentation drill into addressable governance items.


August 17, 2009  8:26 PM

Preserving Electronically Encoded Evidence – Part IV

Robert Davis

Whether target data is in transit or at rest, it is critical that measures are in place to prevent the sought information from being destroyed, corrupted, or rendered unavailable for forensic investigation. When evidence is at rest, adequate procedures should be followed to ensure evidential non-repudiation. Volatile data capture assists investigators in determining the system state during the incident or event. Consequently, the use of functionally sound imaging software and practices is essential to maintaining evidential continuity.
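One common way to support evidential integrity for data at rest is to record a cryptographic digest at seizure time and re-verify it before analysis. The following is a minimal Python sketch using the standard hashlib module; the file path is hypothetical, and full non-repudiation would additionally require a digital signature or trusted timestamp over the recorded digest.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large evidence files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest when evidence is seized...
baseline = sha256_of("evidence/disk_image.dd")   # hypothetical path
print("Recorded at seizure:", baseline)

# ...then recompute and compare before analysis; a mismatch signals alteration.
if sha256_of("evidence/disk_image.dd") != baseline:
    raise RuntimeError("evidence digest mismatch - possible tampering")
```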

View Part I of the Preserving Electronically Encoded Evidence series here

Post Note: An expanded version of this blog entry is available through the ISACA Journal.


August 13, 2009  9:04 PM

Preserving Electronically Encoded Evidence – Part III

Robert Davis

Creating evidential copies through routine backup procedures permits replicating only specific files; it recovers neither files flagged with delete indicators nor the designated ‘free space’ between files. To remediate this limitation, a ‘forensic image’ should be obtained using task-oriented software. Appropriate forensic imaging software reproduces an exact working copy of the original media’s content. Technologically, media content imaging can be carried out without launching the computer’s operating system, thereby avoiding tampering allegations. Functionally, the applied imaging software should be capable of exactly replicating every encoded bit contained on the target media.

Residual data includes deleted files, fragments of deleted files, and other data that still exist on the disk surface. Forensic imaging software can capture residual data on targeted drives. Effective imaging replicates the disk surface sector by sector rather than file by file. With appropriate tools, even data commonly considered destroyed can be recovered from a disk’s surface. Furthermore, imaging software can also generate a log file recording IT parameters, such as disk configuration, interface status, and data checksums, that are critical for supportable conclusions regarding an incident or event.

After creating at least two media images, one replication can be inserted as a target system substitute for the original while the second replication is used for forensic analysis. Lastly, once imaged, the original media should be sealed in a sterilized container, labeled, and stored as evidence.
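As a rough illustration of these practices, the Python sketch below copies a source device bit for bit to two replica files while writing a checksum log. The device path and file names are hypothetical; reading raw media requires administrative privileges and, in practice, a hardware write-blocker, and a sketch like this is no substitute for validated forensic imaging tools.

```python
import hashlib

SECTOR = 512  # classic sector size; modern media may use 4096-byte sectors

def image_media(source: str, targets: list[str], log_path: str) -> None:
    """Copy `source` bit for bit to each target file, logging a SHA-256 checksum."""
    digest = hashlib.sha256()
    outs = [open(t, "wb") for t in targets]
    try:
        with open(source, "rb") as src:
            # Read whole sectors at a time (1 MiB blocks) until end of media.
            while block := src.read(SECTOR * 2048):
                digest.update(block)
                for out in outs:
                    out.write(block)
    finally:
        for out in outs:
            out.close()
    with open(log_path, "w") as log:
        log.write(f"source={source}\nsha256={digest.hexdigest()}\n")

# Hypothetical invocation: one replica substitutes for the original media,
# the other is reserved for forensic analysis.
image_media("/dev/sdb", ["replica_working.dd", "replica_analysis.dd"], "imaging_log.txt")
```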

View Part I of the Preserving Electronically Encoded Evidence series here

Post Note: An expanded version of this blog entry is available through the ISACA Journal.


August 10, 2009  7:59 PM

Preserving Electronically Encoded Evidence – Part II

Robert Davis

If the target system is turned off, simply turning the technology on and permitting a ‘boot’ can introduce content changes to files directly or indirectly connected through operating system procedures. Some files interacting with the IT boot process may not be of interest to an investigation. Nevertheless, IT boot configuration modifications can cause previously deleted files containing pertinent information to become irretrievable.

When circumstances will not permit maintaining the original operational state and site until law enforcement authorities arrive, or when management accepts lawful extraction risks, data acquisition procedures may be invoked for evidence preservation. Data acquisition is the process of transferring encoded content, including the electronic media types associated with an incident or event, to a controlled location. Upon commitment to this course of action, all earmarked hardware media, as well as the target content, should be protected during transfer to another medium through an approved methodology. Capturing volatile data (such as open ports, open files, active processes, user logons, and other random access memory information) is also critical in most situations where evidence integrity can become an issue. By definition, volatile data consists of transient electronic bits; without adequate precautions, it ceases to exist when an information technology is shut down.
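For instance, a volatile-state snapshot might be captured before any shutdown with the third-party psutil library; psutil is an assumption here (the post prescribes no specific tooling), and the output file name is hypothetical.

```python
import json
import time

import psutil  # third-party library; an assumption - the post prescribes no tooling

def snapshot_volatile_state(out_path: str) -> None:
    """Capture open ports, running processes, and logged-on users to a JSON file."""
    state = {
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "boot_time": psutil.boot_time(),
        # Listing sockets may require elevated privileges on some platforms.
        "connections": [
            {"laddr": str(c.laddr), "raddr": str(c.raddr), "status": c.status}
            for c in psutil.net_connections(kind="inet")
        ],
        "processes": [p.info for p in psutil.process_iter(["pid", "name", "username"])],
        "logons": [u._asdict() for u in psutil.users()],
    }
    with open(out_path, "w") as f:
        json.dump(state, f, indent=2, default=str)

snapshot_volatile_state("volatile_snapshot.json")  # hypothetical output file name
```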

View Part I of the Preserving Electronically Encoded Evidence series here

Post Note: An expanded version of this blog entry is available through the ISACA Journal.


August 6, 2009  8:39 PM

Preserving Electronically Encoded Evidence – Part I

Robert Davis

Seeking to preserve electronically encoded evidence implies an incident or event has occurred whose facts must be extracted and presented as proof of an irregular, if not illegal, act. Anticipating this potential scenario requires that information security management proactively construct incident response and forensic investigation capabilities with legal imperatives in mind. Consequently, procedures addressing the infrastructure and processes for incident handling should exist within the security response documentation inventory.

Cardinally, all potential electronically captured evidence should be protected (as soon as possible) from deletion, contamination, modification and inaccessibility. When dealing with stored data, prudent information security management dictates informing appropriate parties that evidence will be sought through electronic discovery from the target IT; establishing specific protocols that address preserving electronically encoded evidence; and enforcing eradication restrictions for data residing within the target IT. Furthermore, when feasible, electronically captured evidence should be stabilized in the environment that existed during the suspected inappropriate activity.

Post Note: An expanded version of this blog entry is available through the ISACA Journal.


August 3, 2009  6:16 PM

Critical Incident Response Elements – Part IV

Robert Davis

Managing an appropriate security incident response is typically a crucial business requirement. To enable effective management, a security MIS should correlate data with its intended usage to determine the repercussions of a security failure. Considering that the primary contingency management objective is providing solutions through an understanding of risk, an adequate IT security incident response depends on timely, reliable information to assess risks and subsequently apply resources.

View Part I of the Critical Incident Response Elements series here


July 30, 2009  6:25 PM

Critical Incident Response Elements – Part III

Robert Davis

Various theories exist concerning how to manage employees during a crisis scenario. Nevertheless, security incident response tactics should be viewed as a unique application of contingency management theory that can be coupled with sound risk management practices to enable appropriate situational resolution. Contingency management practitioners assume that finding and applying relevant available resources suited to a circumstantial answer will render the appropriate solution to an incident. Conjoined, risk management incorporates a systematic approach for identifying risk and defining its impact on an entity’s ability to provide goods and/or services. Therefore, security incident response applicability can be found in the availability, responsibility, and authority attributes of contingency management directed toward ‘at risk’ information assets.

View Part I of the Critical Incident Response Elements series here


July 27, 2009  8:31 PM

Critical Incident Response Elements – Part II

Robert Davis

By definition, an entity’s management information system (MIS) represents an aggregation of personnel, computer hardware and software, and procedures that process data in order to generate usable information for decision-making. Data elements, activity, function operation, and system are the pyramided classifications that delineate information requirements. When the notification process is properly designed, an entity’s security MIS can become the catalyst for superior incident resolution through timely and reliable incident response data.

Gathering evidence that inappropriate or malicious activity has occurred is a control objective for threat management. Information security threat management controls should be configured to identify inappropriate or malicious activity within a computing environment. Since absolute computer security is impossible, management must classify misuse based on organizational impact. Categorically, security misuse can be designated as intentional or unintentional. In this regard, when constructing intentional misuse information asset records, field titling should address incident descriptions such as exploited vulnerability details (including unauthorized reading, modification, or destruction of data), as well as affected information assets and attack sources.
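To illustrate such field titling, here is a hypothetical Python record layout mirroring the incident description fields named above; the class, field names, and sample values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IntentionalMisuseRecord:
    """Record layout mirroring the field titles discussed above; the names
    and sample values are illustrative, not a prescribed schema."""
    incident_id: str
    detected_at: datetime
    exploited_vulnerability: str   # e.g., unauthorized reading, modification, or destruction of data
    affected_assets: list[str] = field(default_factory=list)
    attack_source: str = "unknown"

record = IntentionalMisuseRecord(
    incident_id="IR-2009-041",                 # hypothetical identifier
    detected_at=datetime(2009, 7, 27, 14, 32),
    exploited_vulnerability="unauthorized modification of payroll data",
    affected_assets=["HR database", "payroll application server"],
    attack_source="internal workstation 10.0.4.17",
)
print(record)
```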

View Part I of the Critical Incident Response Elements series here

