29-12-2012, 03:20 PM
LINEAR ERROR DETECTION AND CORRECTION USING HAMMING CODE
INTRODUCTION
In the late 1940s Claude Shannon was developing information theory and coding as a mathematical model for communication. At the same time, Richard Hamming, a colleague of Shannon's at Bell Laboratories, found a need for error correction in his work on computers. Parity checking was already being used to detect errors in the calculations of the relay-based computers of the day, and Hamming realized that a more sophisticated pattern of parity checking allowed the correction of single errors along with the detection of double errors. The codes that Hamming devised, the single-error-correcting binary Hamming codes and their single-error-correcting, double-error-detecting extended versions, marked the beginning of coding theory. These codes remain important to this day, for theoretical and practical reasons as well as historical ones.
In data transmission, the ability of a receiving station to correct errors in the received data is called forward error correction (FEC), and it can increase throughput on a data link when a lot of noise is present. To enable this, a transmitting station must add extra data (called error-correction bits) to the transmission. However, the correction may not always represent a cost saving over simply resending the information. Hamming codes make FEC less expensive to implement through the use of a block parity mechanism.
OBJECTIVE:
To transmit data through a noisy environment without corruption; this can be used in various communication systems such as mobile communication.
Efficient transmission of data (bandwidth efficiency).
More powerful than detection alone: capable of correcting errors as well as detecting them.
ORGANISATION OF THE REPORT:
In this mini project, Chapter 1 deals with the introduction, methodology, errors, types of errors, and error detection and correction methods. Chapter 2 deals with the Hamming code, the general Hamming code algorithm, and the Hamming(7,4) code. Chapter 3 deals with encoding in the Hamming code. Chapter 4 deals with decoding in the Hamming code. Chapter 5 deals with flow charts of the encoder and decoder. Chapter 6 deals with the C-language code for the encoder and decoder. Chapter 7 deals with the results of the mini project. Chapter 8 deals with applications of the mini project. Chapter 9 deals with the conclusion and future enhancements of the mini project.
Error Detection
Error detection is the ability to detect the presence of errors caused by noise or other impairments during transmission from the transmitter to the receiver.
Error detection schemes
In telecommunication, a redundancy check is extra data added to a message for the purposes of error detection.
Error detection schemes are:
Repetition schemes.
Parity schemes.
Checksums.
Hamming distance based checks.
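Of these schemes, a single even-parity bit is the simplest to sketch. The following C fragment is illustrative only (the function name `parity_bit` is an assumption, not code from this report): it returns the bit that makes the total number of 1s in the transmitted byte even.

```c
/* Compute the even-parity bit for a 7-bit word: the bit that makes
 * the total number of 1s in the resulting 8-bit unit even. */
static int parity_bit(unsigned char data7)
{
    int ones = 0;
    for (int i = 0; i < 7; i++)
        ones += (data7 >> i) & 1;   /* count the 1s in the low 7 bits */
    return ones & 1;                /* 1 when the count of 1s is odd */
}
```

The receiver recomputes the same bit over the received data; a mismatch signals that an odd number of bits flipped, but gives no indication of which bit.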
Error Correction
Error correction is the additional ability to reconstruct the original, error-free data.
Error Correcting Codes
Error correcting codes enable data to be sent through a noisy communication channel without corruption.
To accomplish this, the sender appends redundant information to the message, so that even if some of the original data is corrupted during transmission, the receiver can still recover the original message intact.
HAMMING CODE
If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits can be identified. In a 7-bit message there are seven possible single-bit errors, so three error-control bits (eight possible values) could specify not only that an error occurred but also which bit is in error.
Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts. To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming 7-bit ASCII words, Hamming described this as an (8,7) code: eight bits in total, of which 7 are data. A repetition code that sends each bit three times would be (3,1), following the same logic. The code rate is the second number divided by the first; for the (3,1) repetition example, 1/3.
Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, as any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping two bits can be detected, but not corrected. When three bits flip in the same group there can be situations where the code corrects towards the wrong code word.
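In this nomenclature, the Hamming(7,4) code covered in the following chapters places parity bits at the power-of-two positions 1, 2 and 4, and data bits at positions 3, 5, 6 and 7. A minimal C sketch of the encoder follows; the function name `hamming74_encode` and the choice to pack position 1 into the most significant of seven bits are illustrative assumptions, not this report's code.

```c
/* Encode 4 data bits into a Hamming(7,4) codeword.
 * Positions 1..7: parity at 1,2,4; data d1..d4 at 3,5,6,7. */
static unsigned char hamming74_encode(unsigned char d /* 4 data bits */)
{
    int d1 = (d >> 3) & 1, d2 = (d >> 2) & 1;
    int d3 = (d >> 1) & 1, d4 = d & 1;

    /* Each parity bit gives even parity over the positions it covers:
     * p1 covers 1,3,5,7; p2 covers 2,3,6,7; p4 covers 4,5,6,7. */
    int p1 = d1 ^ d2 ^ d4;
    int p2 = d1 ^ d3 ^ d4;
    int p4 = d2 ^ d3 ^ d4;

    /* Pack as p1 p2 d1 p4 d2 d3 d4 (position 1 = most significant bit). */
    return (unsigned char)((p1 << 6) | (p2 << 5) | (d1 << 4) |
                           (p4 << 3) | (d2 << 2) | (d3 << 1) | d4);
}
```

For example, the data nibble 1011 (0x0B) encodes to the codeword 0110011 (0x33).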
DECODING
The first step is to check the parity bits to determine if there is an error. Parity may also be validated using matrix operations. A 3x7 parity-check matrix [H] may be constructed such that row 1 contains 1s in the positions of the first parity bit and all of the data bits included in its parity calculation; row 2 contains 1s in the positions of the second parity bit and all of the data bits included in its parity calculation; and row 3 contains 1s in the positions of the third parity bit and all of the data bits included in its parity calculation.
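Recomputing these three parity checks over a received word yields a 3-bit syndrome; read as a binary number, the syndrome is the 1-based position of a single flipped bit, or zero when no error is detected. A hedged C sketch of this step, assuming the standard Hamming(7,4) layout with parity bits at positions 1, 2 and 4 and position 1 stored in the most significant of seven bits (the function name `hamming74_correct` is an assumption):

```c
/* Check a received 7-bit Hamming(7,4) word and correct a single-bit error. */
static unsigned char hamming74_correct(unsigned char r)
{
    /* Extract the bit at 1-based position p (position 1 = bit 6). */
    #define BIT(p) ((r >> (7 - (p))) & 1)

    /* Recompute each parity check; a failed check sets its syndrome bit. */
    int s1 = BIT(1) ^ BIT(3) ^ BIT(5) ^ BIT(7);   /* row 1 of [H] */
    int s2 = BIT(2) ^ BIT(3) ^ BIT(6) ^ BIT(7);   /* row 2 of [H] */
    int s4 = BIT(4) ^ BIT(5) ^ BIT(6) ^ BIT(7);   /* row 3 of [H] */
    #undef BIT

    int syndrome = (s4 << 2) | (s2 << 1) | s1;    /* 0 means no error detected */
    if (syndrome != 0)
        r ^= (unsigned char)(1 << (7 - syndrome)); /* flip the offending bit */
    return r;
}
```

Flipping position 5 of the codeword 0110011 (0x33) gives 0110111 (0x37); the syndrome comes out as binary 101 = 5, so the decoder flips that bit back.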
FLOW CHART
A flowchart is a type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds, and their order by connecting these with arrows. This diagrammatic representation can give a step-by-step solution to a given problem. Process operations are represented in these boxes, and arrows connecting them represent flow of control. Data flows are not typically represented in a flowchart, in contrast with data flow diagrams; rather, they are implied by the sequencing of operations. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.
A typical flowchart from older basic computer science textbooks uses symbols such as terminals (start/end), process boxes, decision diamonds, input/output parallelograms, and arrows showing the flow of control.
C- LANGUAGE CODE
C is a general-purpose computer programming language developed in 1972 by Dennis Ritchie at the Bell Telephone Laboratories for use with the Unix operating system. Although C was designed for implementing system software, it is also widely used for developing portable application software. C is one of the most popular programming languages of all time, and there are very few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which began as an extension to C.
C is an imperative (procedural) systems implementation language. It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support. C was therefore useful for many applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code. The language has become available on a very wide range of platforms, from embedded microcontrollers to supercomputers.
Conclusion:
The Hamming code algorithm was successfully implemented in the C language. The application works for any binary stream of data. Single-bit errors that may occur during the transmission of data are efficiently detected and, where present, corrected. Using the Hamming algorithm, we can not only detect single-bit errors in a code word but also correct them, with fast and simple encoding and decoding algorithms.