IT Governance, Risk, and Compliance


September 22, 2012  12:07 AM

A Few Fundamentals of Networking Electronically Encoded Data – Part IV

Robert Davis

In TDM (typically used for digital signals), each device is given a specific time slot during which it can use the shared channel. In contrast, with FDM (typically used for analog signals), the channel is subdivided into sub-channels, each occupying a different frequency band that is assigned to a specific signal.
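
To make the contrast concrete, here is a minimal sketch (in Python, with made-up data streams) of how TDM interleaves several devices' data into fixed time slots on one channel and how the receiving end recovers each stream; FDM would instead give each signal its own frequency band.

```python
# Minimal TDM sketch: three hypothetical devices share one channel by
# taking turns in fixed, repeating time slots (round-robin interleaving).

from itertools import zip_longest

def tdm_multiplex(streams):
    """Interleave one unit from each stream per frame; '-' marks an idle slot."""
    channel = []
    for frame in zip_longest(*streams, fillvalue='-'):
        channel.extend(frame)         # one time slot per device, every frame
    return channel

def tdm_demultiplex(channel, device_count):
    """The receiver recovers each stream by reading every Nth time slot."""
    return [channel[i::device_count] for i in range(device_count)]

streams = [list("AAAA"), list("BB"), list("CCC")]   # devices A, B, C
channel = tdm_multiplex(streams)
print(channel)                        # ['A', 'B', 'C', 'A', 'B', 'C', ...]
print(tdm_demultiplex(channel, 3))    # each device's data recovered in order
```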

Building on these multiplexing principles, optical-fiber networks can use wavelength-division multiplexing (WDM), and its higher-capacity form, dense wavelength-division multiplexing (DWDM), in which different data signals are sent over different wavelengths of light in the fiber-optic medium.

How networking services are maintained by enterprises

Network administration is the function designated to maintain a secure and reliable online communications network and to serve as liaison with user departments in resolving network needs and problems. Specifically, this function is generally responsible for maintaining network security, sustaining optimum system performance, and providing technical assistance to users. Thus, much like a telephone service technician, the network administrator should be considered the specialist capable of reestablishing communication when service quality is diminished.

View Part I of the A Few Fundamentals of Networking Electronically Encoded Data series here

 

Post Note: “A Few Fundamentals of Networking Electronically Encoded Data – Part IV” was originally published through Suite101.com under the title “A Few Fundamentals of Networking Electronically Encoded Data”

September 20, 2012  1:09 AM

A Few Fundamentals of Networking Electronically Encoded Data – Part III

Robert Davis

Packet assembly and disassembly between telecommunication links

Input/output (I/O) channels are paths along which data are transmitted to and from primary storage. These communication channels also handle the transfer of data to and from I/O devices. As a result, this function can relieve the central processing unit (CPU) of responsibility for data transfers to and from I/O devices, increase the number of input and output operations that can be performed simultaneously, and reduce the time the CPU must wait for data to arrive from, or be sent to, an I/O device.
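
As a rough software analogy (a sketch only, not how channel hardware is actually built), the Python fragment below hands slow transfers to a worker thread so the main flow keeps computing instead of waiting on the "device":

```python
# Illustrative analogy to an I/O channel: a worker thread performs the
# "device transfer" so the main flow (the CPU) keeps computing instead
# of waiting on I/O.

import threading, queue, time

io_requests = queue.Queue()

def io_channel():
    """Drain I/O requests independently of the main computation."""
    while True:
        payload = io_requests.get()
        if payload is None:          # sentinel: no more transfers
            break
        time.sleep(0.1)              # stand-in for a slow device transfer
        print(f"channel: transferred {payload!r}")

worker = threading.Thread(target=io_channel)
worker.start()

for block in ("record-1", "record-2", "record-3"):
    io_requests.put(block)           # hand the transfer to the "channel"
    print(f"cpu: issued {block!r}, continuing computation")

io_requests.put(None)                # signal completion
worker.join()
```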

A common IT transmission technique for telecommunications is multiplexing. Multiplexing is the process of transmitting multiple, separate signals simultaneously over a single channel or line. The two main multiplexing methods are time-division multiplexing (TDM) and frequency-division multiplexing (FDM). Because the signals are sent as one combined transmission, the receiving end has to separate the individual signals through de-multiplexing.

View Part I of the A Few Fundamentals of Networking Electronically Encoded Data series here

 

Post Note: “A Few Fundamentals of Networking Electronically Encoded Data – Part III” was originally published through Suite101.com under the title “A Few Fundamentals of Networking Electronically Encoded Data”


September 15, 2012  12:24 AM

A Few Fundamentals of Networking Electronically Encoded Data – Part II

Robert Davis

All of the computers and other hardware connected to a particular network must follow the same rules of operation, commonly called protocols. Operational rules are typically divided into layers so that a programmer or network administrator need only be concerned with the layer with which the software is communicating. Reference models have been created to define these layers. Available reference models for classifying communication protocols and protocol suites include the U.S. Department of Defense (DoD) model, the Open Systems Interconnection (OSI) model, and the Berkeley framework.
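
As an informal illustration (layer groupings vary by source), the snippet below lines up the four layers of the DoD (TCP/IP) model against the seven OSI layers:

```python
# Informal mapping of the four-layer DoD (TCP/IP) model to the seven-layer
# OSI model; the groupings are approximate and vary by source.

dod_to_osi = {
    "Application":    ["Application", "Presentation", "Session"],
    "Transport":      ["Transport"],
    "Internet":       ["Network"],
    "Network Access": ["Data Link", "Physical"],
}

for dod_layer, osi_layers in dod_to_osi.items():
    print(f"{dod_layer:<15} -> {', '.join(osi_layers)}")
```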

In order to send data over a network, the necessary programs must be executed; however, the data must also be organized for transmission. Consequently, a very important networking concept is the packet. Packets are telecommunication units transmitted from a sending device to a receiving device with an appended header. Substantively, a packet header contains the information required to transfer the attached data across the network.
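
A minimal sketch of the idea in Python follows; the header fields chosen here (source, destination, sequence number, and length) are illustrative and do not represent any particular protocol's format.

```python
# Minimal packet sketch: prepend a fixed-format header to the payload,
# then parse it back out at the receiving end. The fields are illustrative,
# not those of any particular protocol.

import struct

HEADER_FORMAT = "!HHHH"              # source, destination, sequence, length
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)

def build_packet(src, dst, seq, payload: bytes) -> bytes:
    header = struct.pack(HEADER_FORMAT, src, dst, seq, len(payload))
    return header + payload

def parse_packet(packet: bytes):
    src, dst, seq, length = struct.unpack(HEADER_FORMAT, packet[:HEADER_SIZE])
    return {"src": src, "dst": dst, "seq": seq,
            "payload": packet[HEADER_SIZE:HEADER_SIZE + length]}

pkt = build_packet(src=10, dst=20, seq=1, payload=b"encoded data")
print(parse_packet(pkt))
```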

View Part I of the A Few Fundamentals of Networking Electronically Encoded Data series here

 

Post Note: “A Few Fundamentals of Networking Electronically Encoded Data – Part II” was originally published through Suite101.com under the title “A Few Fundamentals of Networking Electronically Encoded Data”


September 13, 2012  12:42 AM

A Few Fundamentals of Networking Electronically Encoded Data – Part I

Robert Davis

Any distributed IT infrastructure requires a complex set of communication functions for proper operation. Many of these functions, such as reliability mechanisms, are common across IT architectures. Thus, the communication task is generally viewed as having a modular architecture, in which the various elements of the IT configuration perform designated functions.

Standard network telecommunication approaches

There are two major concepts regarding how communication occurs on networks: successive links and direct links. Successive links exist when a connection is made from one computer to the next, hop by hop, until the destination is reached. Alternatively, networks may establish a direct link between the sending point and the destination point.
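
A toy Python sketch of the contrast (node names are hypothetical): with successive links the message is handed from node to node until it reaches the destination, while a direct link delivers it over a single connection.

```python
# Toy contrast between successive links (hop-by-hop forwarding) and a
# direct link (one connection from sender to destination). Node names
# are hypothetical.

def send_successive_links(path, message):
    """Forward the message across each intermediate connection in turn."""
    for sender, receiver in zip(path, path[1:]):
        print(f"{sender} -> {receiver}: {message}")
    return f"delivered to {path[-1]} in {len(path) - 1} hops"

def send_direct_link(source, destination, message):
    """A single connection carries the message end to end."""
    print(f"{source} -> {destination}: {message}")
    return f"delivered to {destination} in 1 hop"

print(send_successive_links(["A", "B", "C", "D"], "hello"))
print(send_direct_link("A", "D", "hello"))
```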

 

Post Note: “A Few Fundamentals of Networking Electronically Encoded Data – Part I” was originally published through Suite101.com under the title “A Few Fundamentals of Networking Electronically Encoded Data”


September 8, 2012  12:14 AM

Data Communications Risk in Distributed Computing – Part V

Robert Davis

Reducing data communication error risk for teleprocessing systems

Teleprocessing is the handling of data through a communications channel, such as telephone lines, microwave towers, or artificial satellites. It permits data entered in one location to be posted to files in a second location, with the processing results being printed in a third location.

A major problem created by teleprocessing capabilities is the potential devaluation of information assets based on data communication errors affecting information reliability. Consequently, technology owners must evaluate the ability of teleprocessing systems to resist such data corruption to ensure information asset devaluation is minimized and information reliability is maximized.
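
One common control over that exposure is to attach a checksum to each transmitted record so the receiving location can detect corruption. The sketch below (Python, with an illustrative record format) uses a CRC-32 value for that purpose.

```python
# Illustrative control for teleprocessed data: append a CRC-32 checksum to
# each record before transmission so the receiving location can detect
# corruption introduced along the communications channel.

import zlib

def prepare_record(record: bytes) -> bytes:
    checksum = zlib.crc32(record)
    return record + checksum.to_bytes(4, "big")

def verify_record(received: bytes) -> bool:
    record, checksum = received[:-4], int.from_bytes(received[-4:], "big")
    return zlib.crc32(record) == checksum

sent = prepare_record(b"post 150.00 to account 4471")
print(verify_record(sent))                         # True: arrived intact
corrupted = bytearray(sent); corrupted[5] ^= 0x01  # one bit changed in transit
print(verify_record(bytes(corrupted)))             # False: corruption detected
```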

Sources:

Davis, Robert E. IT Auditing: IT Service Delivery and Support. Mission Viejo, CA: Pleier Corporation, 2008. CD-ROM.

Watne, Donald A. and Peter B. B. Turney. Auditing EDP Systems. Englewood Cliffs, NJ: Prentice-Hall, 1984. 6, 236-7, 467

Davis, Robert E. “IT Hardware Risks.” Suite101.com. Retrieved on 10/03/2010

Strangio, Christopher E. “Data Communications Basics: A Brief Introduction to Digital Transfer.” Camiresearch.com. Retrieved on 10/03/2010

View Part I of the Data Communications Risk in Distributed Computing series here

 

Post Note: “Data Communications Risk in Distributed Computing – Part V” was originally published through Suite101.com under the title “Data Communications Risk in Distributed Computing”



September 5, 2012  7:56 PM

Data Communications Risk in Distributed Computing – Part IV

Robert Davis

Signal fading is a decline in transmission strength. Fading can occur when a signal is transmitted by microwaves. Under certain conditions, the signal picked up by the receiving unit can be quite weak. A weak signal is more susceptible to transmission noise and error.

Signal distortion can result from a lack of synchronization between the time data are sent and the time they are received. Lack of synchronization typically occurs when a signal travels several paths, with a different delay in each path. Distortion results when there is overlap in the receipt of data from the different signal paths.
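
A rough numeric sketch of the effect (Python; the pulse shape, delay, and path gain are made up for illustration): when copies of the same signal arrive over two paths with different delays, the receiver sees their sum, and the delayed echo smears into the samples that follow.

```python
# Rough numeric sketch of multipath distortion: the receiver sees the sum of
# the same signal arriving over two paths with different delays, so a delayed
# echo overlaps the samples that follow. Pulse shape, delay, and path gain
# are made up for illustration.

signal = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0]    # transmitted samples

def delayed(samples, delay, gain):
    """Copy of the signal arriving 'delay' samples late at reduced strength."""
    return [0.0] * delay + [s * gain for s in samples]

path_a = delayed(signal, delay=0, gain=1.0)           # direct path
path_b = delayed(signal, delay=2, gain=0.6)           # reflected, later path

received = [a + b for a, b in
            zip(path_a, path_b + [0.0] * len(path_a))]

print("sent:    ", signal)
print("received:", received[:len(signal)])            # overlapped, distorted
```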

Sources:

Davis, Robert E. IT Auditing: IT Service Delivery and Support. Mission Viejo, CA: Pleier Corporation, 2008. CD-ROM.

Watne, Donald A. and Peter B. B. Turney. Auditing EDP Systems. Englewood Cliffs, NJ: Prentice-Hall, 1984. 6, 236-7, 467

Davis, Robert E. “IT Hardware Risks.” Suite101.com. Retrieved on 10/03/2010

Strangio, Christopher E. “Data Communications Basics: A Brief Introduction to Digital Transfer.” Camiresearch.com. Retrieved on 10/03/2010

View Part I of the Data Communications Risk in Distributed Computing series here

 

Post Note: “Data Communications Risk in Distributed Computing – Part IV” was originally published through Suite101.com under the title “Data Communications Risk in Distributed Computing”


August 31, 2012  11:31 PM

Data Communications Risk in Distributed Computing – Part III

Robert Davis

The electrical signal may be changed by a mechanical component failure in the communications network, or by a problem related to the characteristics of data communications. Of these two error sources, the characteristics of data communications are by far the more important.

Root causes of most data communication errors

Errors associated with the communication of data, rather than a mechanical failure, are generally due to noise, fading, or distortion.

Communication noise is electrical interference with the signal. It may be background noise or impulse noise, and it may be random or cyclical. Background noise usually has little effect on the transmission of a signal. Impulse noise, such as a sudden voltage surge, is more likely to mask or distort a signal. As long as the noise occurs randomly, it is usually easy to detect an error. Conversely, cyclical noise, such as voltage oscillation, can create compensating errors that are difficult to detect.
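
A simple even-parity check illustrates the point (a Python sketch): a single, randomly placed bit error flips the parity and is caught, while two compensating errors, the kind cyclical noise can produce, leave the parity unchanged and go undetected.

```python
# Parity-check sketch: one random bit error changes the parity and is caught,
# but two compensating errors cancel out and slip through undetected, the
# kind of failure cyclical noise can produce.

def add_parity(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1, 0, 0, 1])

single_error = word.copy(); single_error[2] ^= 1              # one bit flipped
double_error = word.copy(); double_error[1] ^= 1; double_error[4] ^= 1

print(parity_ok(word))          # True  — transmitted correctly
print(parity_ok(single_error))  # False — random error detected
print(parity_ok(double_error))  # True  — compensating errors missed
```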

Sources:

Davis, Robert E. IT Auditing: IT Service Delivery and Support. Mission Viejo, CA: Pleier Corporation, 2008. CD-ROM.

Watne, Donald A. and Peter B. B. Turney. Auditing EDP Systems. Englewood Cliffs, NJ: Prentice-Hall, 1984. 6, 236-7, 467

Davis, Robert E. “IT Hardware Risks.” Suite101.com. Retrieved on 10/03/2010

Strangio, Christopher E. “Data Communications Basics: A Brief Introduction to Digital Transfer.” Camiresearch.com. Retrieved on 10/03/2010

View Part I of the Data Communications Risk in Distributed Computing series here

 

Post Note: “Data Communications Risk in Distributed Computing – Part III” was originally published through Suite101.com under the title “Data Communications Risk in Distributed Computing”


August 29, 2012  2:00 PM

Data Communications Risk in Distributed Computing – Part II

Robert Davis

A common type of data communications risk

According to Christopher E. Strangio, the distance over which data move within IT can vary from a few thousandths of an inch, as is the case within a single integrated circuit (IC) chip, to several feet along the main circuit board’s backplane connections. However, data frequently must be sent beyond the local circuitry constituting an IT configuration.

In data communications, electronically encoded content is transmitted in the form of electrical signals. An inadvertent change in a signal will result in the data received differing in some way from the data sent. Typically, this information reliability risk is based on the probability of change in an electrical pulse due to the data communications facilities utilized for moving data from one location to another.
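
To put a number on that risk, the sketch below (Python; the per-bit error probability is a made-up figure) estimates the chance that at least one bit of a message is changed in transit, both analytically and by simulating a simple bit-flip channel.

```python
# Sketch of transmission risk as a probability: if each bit is flipped
# independently with probability p, the chance a message of n bits arrives
# altered is 1 - (1 - p)**n. The value of p here is made up for illustration.

import random

p = 1e-4                   # hypothetical per-bit error probability
n = 8 * 1_500              # bits in a 1,500-byte message

analytic = 1 - (1 - p) ** n
print(f"analytic  P(message altered) = {analytic:.3f}")

random.seed(1)
trials = 2_000
altered = sum(
    any(random.random() < p for _ in range(n)) for _ in range(trials)
)
print(f"simulated P(message altered) = {altered / trials:.3f}")  # estimate
```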

Sources:

Davis, Robert E. IT Auditing: IT Service Delivery and Support. Mission Viejo, CA: Pleier Corporation, 2008. CD-ROM.

Watne, Donald A. and Peter B. B. Turney. Auditing EDP Systems. Englewood Cliffs, NJ: Prentice-Hall, 1984. 6, 236-7, 467

Davis, Robert E. “IT Hardware Risks.” Suite101.com. Retrieved on 10/03/2010

Strangio, Christopher E. “Data Communications Basics: A Brief Introduction to Digital Transfer.” Camiresearch.com. Retrieved on 10/03/2010

View Part I of the Data Communications Risk in Distributed Computing series here

 

Post Note: “Data Communications Risk in Distributed Computing – Part II” was originally published through Suite101.com under the title “Data Communications Risk in Distributed Computing”


August 25, 2012  12:09 AM

Data Communications Risk in Distributed Computing – Part I

Robert Davis

As the distance between the message source and the designated destination increases, accurate transmission becomes increasingly risky.

Data communication systems are designed with the transmission speed and capacity to meet the timeliness and volume needs of defined users. Nonetheless, data delivery speed and capacity are determined by the choice of equipment and channels, modulation technique, transmission mode, and transmission direction.
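
A back-of-the-envelope Python sketch (all figures hypothetical) shows how two of those choices, line speed and transmission mode, translate into delivery time, with asynchronous framing assumed to add a start and a stop bit per character.

```python
# Back-of-the-envelope sketch of how channel capacity and transmission mode
# affect delivery time. Figures are hypothetical: a 100,000-character message,
# with asynchronous framing assumed to add a start and a stop bit per character.

characters = 100_000
data_bits_per_char = 8

def transfer_seconds(line_speed_bps, framing_bits_per_char):
    total_bits = characters * (data_bits_per_char + framing_bits_per_char)
    return total_bits / line_speed_bps

for speed in (9_600, 56_000, 1_000_000):             # bits per second
    sync = transfer_seconds(speed, framing_bits_per_char=0)
    asyn = transfer_seconds(speed, framing_bits_per_char=2)
    print(f"{speed:>9} bps: synchronous {sync:7.2f} s, asynchronous {asyn:7.2f} s")
```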

As discussed in Computer Hardware Risks – Part II, a failure in an electronic element of an information technology (IT) configuration can cause an error by affecting the frequency, timing, strength, or shape of an electrical pulse utilized to convey data.

 

Sources:

Davis, Robert E. IT Auditing: IT Service Delivery and Support. Mission Viejo, CA: Pleier Corporation, 2008. CD-ROM.

Watne, Donald A. and Peter B. B. Turney. Auditing EDP Systems. Englewood Cliffs, NJ: Prentice-Hall, 1984. 6, 236-7, 467

Davis, Robert E. “IT Hardware Risks.” Suite101.com. Retrieved on 10/03/2010

Strangio, Christopher E. “Data Communications Basics: A Brief Introduction to Digital Transfer.” Camiresearch.com. Retrieved on 10/03/2010

 

Post Note: “Data Communications Risk in Distributed Computing – Part I” was originally published through Suite101.com under the title “Data Communications Risk in Distributed Computing”


August 22, 2012  12:49 AM

IT Hardware Validity Checks – Part IV

Robert Davis

Some IT configurations are capable of assigning whole sections of memory for prescribed operations, programs, and/or data. These assigned sections of memory can be protected by a hardware address validity check. This type of control is also known as storage protection.

Address validity checks are also used in disk drives. When employed, firmware commonly compares the disk pack address requested in a write instruction against the set of valid disk storage locations.

Verification constraint of an IT hardware validity check

Where installed, an IT hardware validity check compares each action against the set of rules to ensure that it is indeed appropriate. Nevertheless, the limitation of an IT hardware validity check is that it will not detect an error when one valid symbolic representation is improperly recorded in place of another during data entry or transmission.
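
A minimal sketch of both the check and its limitation (Python; the protected address range is hypothetical): a write outside the assigned storage is rejected, but a write that lands on the wrong yet valid address passes the check.

```python
# Minimal sketch of a hardware-style address validity check (storage
# protection) and its limitation. The valid address range is hypothetical.

VALID_ADDRESSES = set(range(0x1000, 0x2000))   # storage assigned to this program

def write(address, value, memory):
    if address not in VALID_ADDRESSES:         # the validity check
        raise PermissionError(f"invalid address {hex(address)}")
    memory[address] = value

memory = {}
write(0x1004, "payroll record", memory)        # appropriate: accepted
try:
    write(0x3004, "payroll record", memory)    # outside assigned storage: rejected
except PermissionError as err:
    print("blocked:", err)

# Limitation: a transposition during data entry (0x1040 instead of 0x1004)
# still lands on a *valid* address, so the check cannot detect the error.
write(0x1040, "payroll record", memory)
print(sorted(hex(a) for a in memory))
```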

Sources:

Davis, Robert E. IT Auditing: Assuring Information Assets Protection. Mission Viejo, CA: Pleier Corporation, 2008. CD-ROM.

Boritz, Efrim J. IS Practitioners’ Views on Core Concepts of Information Integrity. Rev. ed. Waterloo, ON: University of Waterloo, 2004. 9

Gleim, Irvin N. CIA Examination Review. 3rd ed. Vol. 1. Gainesville, FL: Accounting Publications, 1989. 284

Watne, Donald A. and Peter B. B. Turney. Auditing EDP Systems. Englewood Cliffs, NJ: Prentice-Hall, 1984. 232-3

View Part I of the IT Hardware Validity Checks series here

 

Post Note: “IT Hardware Validity Checks – Part IV” was originally published through Suite101.com under the title “IT Hardware Validity Checks”

