Patrick C Davis
24 Nov 2014
University of Phoenix
Coding Theory Case Study
When information is represented, manipulated, and transmitted, it is stored as sequences of zeros and ones. It is often difficult, and sometimes impossible, to prevent errors when retrieving, processing, transmitting, or storing data. Errors arise from many sources: human operators, faulty equipment, and communication and electrical interference. Errors also creep into data that has been stored for a long period, most often on magnetic tape as the tape deteriorates. Ensuring reliable transmission is especially important when large computer files are transmitted rapidly (Rosen, 2012), and it must be prioritized when data is sent over long distances, for instance from space probes billions of miles away.

This essay discusses both error-detecting and error-correcting codes. It introduces an important family of codes useful for correcting errors, and it covers current applications of coding theory as well as recent technical developments.

Recovering data that has degraded during long storage on tape is always important, and coding theory offers several techniques that guarantee reliable transmission of data and recovery of degraded data. Messages that occur in the form of bit strings can be encoded by translating them into code words, which are somewhat longer bit strings; a code is a set of code words (Rosen, 2012). With suitable codes it is possible to detect errors: as long as not too many errors have occurred, one can determine whether one or more errors were introduced when a bit string was transmitted. Further, codes with enough redundancy make it possible to correct such errors. The study of these error-detecting and error-correcting codes is known as coding theory, and it has been studied comprehensively for the past forty years.
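To make the encoding idea above concrete, here is a minimal sketch in Python (the function names are my own, not from the essay) of the simplest error-detecting code: appending a single parity bit, so that every valid code word contains an even number of ones and any single flipped bit is detectable.

```python
def encode_with_parity(bits):
    """Append an even-parity bit: every valid code word has an even number of 1s."""
    parity = sum(bits) % 2
    return bits + [parity]

def detect_error(codeword):
    """Return True if the even-parity check fails (an odd number of bits were flipped)."""
    return sum(codeword) % 2 == 1

message = [1, 0, 1, 1]
codeword = encode_with_parity(message)   # [1, 0, 1, 1, 1]
assert not detect_error(codeword)        # transmitted intact: check passes

corrupted = codeword.copy()
corrupted[2] ^= 1                        # one bit flipped in transit
assert detect_error(corrupted)           # single error is detected
```

Note that a lone parity bit can only detect an odd number of errors, and it cannot correct any; correction requires the additional redundancy discussed later in the essay.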
This theory is becoming more important as new technologies for data storage and communication develop. Error detection is the process of identifying errors introduced by noise or other impairments on the path from transmitter to receiver. Error-detection and error-correction codes are techniques that enable reliable delivery of digital data over unreliable communication channels (Roman, 1996). Because most communication channels are subject to channel noise, errors can occur during transmission from the source to the receiver. Error detection makes those errors visible; error correction is the process of detecting errors and reconstructing the original data.

Error correction is carried out in two ways: forward error correction (FEC) and automatic repeat request (ARQ). With FEC, the transmitter encodes the data using an error-correcting code prior to transmission. The receiver then exploits the redundancy, the additional information carried by the code, to recover the original data; the reconstructed data is taken to be the most likely original data (Roman, 1996). ARQ, also referred to as backward error correction, is a technique in which an error-detection scheme is combined with retransmission of erroneous data. The error-detection code checks all the data the receiver gets, and if the check fails, the data in question is retransmitted. This process is usually repeated until the data is verified.

The Hamming distance between two strings of equal length measures the minimum number of substitutions required to change one string into the other. In other words, it is the number of positions at which the corresponding symbols of the two strings differ.
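The Hamming distance defined above can be sketched in a few lines of Python (an illustrative helper, not code from the essay):

```python
def hamming_distance(s, t):
    """Count the positions at which two equal-length strings differ."""
    if len(s) != len(t):
        raise ValueError("strings must have equal length")
    return sum(a != b for a, b in zip(s, t))

# "10110" and "10011" differ in positions 2 and 4, so the distance is 2.
print(hamming_distance("10110", "10011"))  # 2
```

The distance matters because it governs a code's power: if every pair of code words in a code is at Hamming distance at least d, the code can detect up to d - 1 errors and correct up to (d - 1) // 2 of them.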