01-01-2013, 11:46 AM
Error Detection and Correction
Basic concepts
Networks must be able to transfer data from one device to another with complete accuracy.
Data can be corrupted during transmission.
For reliable communication, errors must be detected and corrected.
Error detection and correction are implemented either at the data link layer or the transport layer of the OSI model.
Single-bit error
Single-bit errors are the least likely type of error in serial data transmission, because the noise must have a very short duration, which is rare. However, this kind of error can happen in parallel transmission.
Example:
If data is sent at 1Mbps then each bit lasts only 1/1,000,000 sec. or 1 μs.
For a single-bit error to occur, the noise must have a duration of only 1 μs, which is very rare.
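The bit-duration arithmetic above can be sketched in a few lines (the 1 Mbps rate is the example's; the variable names are just for illustration):

```python
# Duration of one bit at a given data rate.
rate_bps = 1_000_000            # 1 Mbps, as in the example
bit_duration_s = 1 / rate_bps   # seconds per bit
print(bit_duration_s)           # 1e-06 s, i.e. 1 microsecond
```

A noise spike must last no longer than this to corrupt exactly one bit, which is why single-bit errors are rare in serial links.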
Burst error
The term burst error means that two or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
Burst errors do not necessarily occur in consecutive bits; the length of the burst is measured from the first corrupted bit to the last corrupted bit. Some bits in between may not have been corrupted.
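This measurement (first corrupted bit to last corrupted bit, inclusive) can be sketched as follows; the function name and the use of integers as bit patterns are my own illustration, not from the original text:

```python
def burst_length(sent: int, received: int) -> int:
    """Burst length: distance from the first to the last corrupted bit.

    XOR of sent and received has a 1 exactly at each corrupted position,
    so the burst spans from its highest set bit to its lowest set bit.
    """
    diff = sent ^ received
    if diff == 0:
        return 0  # no bits were corrupted
    highest = diff.bit_length()             # position of first corrupted bit
    lowest = (diff & -diff).bit_length()    # position of last corrupted bit
    return highest - lowest + 1

# Two bits flipped four positions apart: burst length is 4,
# even though the two bits in between are intact.
print(burst_length(0b10110010, 0b10010110))  # 4
```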
Error detection
Error detection means deciding whether the received data is correct or not, without having a copy of the original message.
Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination.
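The simplest form of this redundancy is a single parity bit; a minimal sketch (function names are mine, and bit strings are used for readability):

```python
def add_even_parity(bits: str) -> str:
    """Append one redundant bit so the codeword has an even number of 1s."""
    parity = bits.count("1") % 2
    return bits + str(parity)

def check_even_parity(codeword: str) -> bool:
    """Receiver's check: an even count of 1s suggests no error occurred."""
    return codeword.count("1") % 2 == 0

codeword = add_even_parity("1011")   # "10111": the extra 1 makes four 1s
print(check_even_parity(codeword))   # True
print(check_even_parity("00111"))    # False: one bit was flipped in transit
```

Note that a single parity bit detects any odd number of flipped bits but misses an even number, which is one motivation for the stronger CRC scheme described next.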
Cyclic Redundancy Check
Given a k-bit frame or message, the transmitter generates an n-bit sequence, known as a frame check sequence (FCS), so that the resulting frame, consisting of (k+n) bits, is exactly divisible by some predetermined number.
The receiver then divides the incoming frame by the same number and, if there is no remainder, assumes that there was no error.
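The division here is modulo-2 (XOR, no carries), and the "predetermined number" is the generator. A minimal sketch of both sides, using the textbook example of message 100100 with divisor 1101 (function names are my own):

```python
def mod2_remainder(bits: str, divisor: str) -> str:
    """Remainder of modulo-2 (XOR) division of a bit string by the divisor."""
    n = len(divisor) - 1          # remainder/FCS width
    work = list(bits)
    for i in range(len(work) - n):
        if work[i] == "1":        # XOR the divisor in at each leading 1
            for j, d in enumerate(divisor):
                work[i + j] = str(int(work[i + j]) ^ int(d))
    return "".join(work[-n:])

def make_frame(message: str, divisor: str) -> str:
    """Transmitter: append the n-bit FCS so the (k+n)-bit frame divides evenly."""
    fcs = mod2_remainder(message + "0" * (len(divisor) - 1), divisor)
    return message + fcs

def frame_ok(frame: str, divisor: str) -> bool:
    """Receiver: divide the incoming frame; zero remainder => assume no error."""
    return set(mod2_remainder(frame, divisor)) <= {"0"}

frame = make_frame("100100", "1101")
print(frame)                         # 100100001 (FCS is 001)
print(frame_ok(frame, "1101"))       # True: no remainder
print(frame_ok("100100101", "1101")) # False: a flipped bit leaves a remainder
```

In practice the divisor is specified as a generator polynomial (e.g. CRC-16, CRC-32) and implemented with shift registers or table lookups, but the division above is the same idea.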