28-08-2014, 10:34 AM
A NEW Approach For FEC Decoding Seminar Report
INTRODUCTION
Decoding of convolutional codes was traditionally based either on algebraically calculating the error pattern or on trellis representations, as in the MAP and Viterbi algorithms. With the advent of turbo coding, a third decoding principle appeared: iterative decoding. Iterative decoding was also introduced in Tanner's pioneering work, a general framework based on bipartite graphs for describing LDPC codes and decoding them via the belief propagation (BP) algorithm. In many respects, convolutional codes are similar to block codes. For example, if we truncate the trellis by which a convolutional code is represented, we obtain a block code whose codewords correspond to all trellis paths up to the truncation depth.
However, this truncation causes a problem in error performance, since the last bits lack error protection. The conventional solution is to encode a fixed number of message blocks L followed by m additional all-zero blocks, where m is the constraint length of the convolutional code. In a tail-biting convolutional code, the zero-tail bits are not needed and are replaced by payload bits, so there is no rate loss due to the tails and the spectral efficiency of the channel code is improved.
Because of the advantages of the tail-biting method over zero-tail termination, it has been adopted, alongside the turbo code, as the FEC for data and overhead channels in many wireless communication systems such as IS-54, EDGE, WiMAX and LTE. Both turbo and LDPC codes have been studied extensively for more than fifteen years, yet the formal relationship between the two classes remained unclear until MacKay claimed that turbo codes are LDPC codes. Wiberg made another attempt to relate the two classes by developing a unified factor-graph representation for both families. Finally, Colavolpe demonstrated the use of the BP algorithm to decode convolutional and turbo codes. That work, however, is limited to specific classes of convolutional codes, such as convolutional self-orthogonal codes (CSOCs), and the turbo codes considered there use the serial structure, while the parallel structure is more prevalent in practical applications.
LITERATURE SURVEY
A literature survey is an important step in the software development process. Before developing the tool it is necessary to determine the time factor, economy and company strength. Once these are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool they need a lot of external support, which can be obtained from senior programmers, from books or from websites. The above considerations are taken into account before building the proposed system.
AVERAGING NOISE TO REDUCE ERRORS
FEC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.
• Because of this "risk-pooling" effect, digital communication systems that use FEC tend to work perfectly above a certain minimum signal-to-noise ratio and not at all below it.
• This all-or-nothing tendency becomes more pronounced as stronger codes are used that more closely approach the theoretical limit imposed by the Shannon limit.
• Interleaving FEC coded data can reduce the all or nothing properties of transmitted FEC codes. However, this method has limits. It is best used on narrowband data.
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate ever becomes worse. However, some systems adapt to the given channel error conditions: hybrid automatic repeat-request (HARQ) uses a fixed FEC method as long as the FEC can handle the error rate, then switches to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of FEC rates, adding more error-correction bits per packet when the channel error rate is high, and removing them when they are not needed.
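The adaptive behaviour described above can be sketched as a simple rate-selection policy. The thresholds and rate labels below are hypothetical, chosen only to illustrate the idea of trading redundancy against channel quality; real systems negotiate these values per standard.

```python
# Hypothetical thresholds and code rates, for illustration only.
def pick_scheme(measured_ber):
    """Adaptive-coding style selection: use less redundancy on a clean
    channel, more on a noisy one, and fall back to ARQ retransmission
    when even the strongest code cannot cope."""
    if measured_ber < 1e-5:
        return "rate-5/6 FEC"        # nearly clean channel, few parity bits
    if measured_ber < 1e-3:
        return "rate-1/2 FEC"        # moderate noise, half the bits are parity
    if measured_ber < 1e-2:
        return "rate-1/3 FEC"        # heavy noise, maximum redundancy
    return "ARQ retransmission"      # FEC overwhelmed: request a resend

print(pick_scheme(1e-4))  # rate-1/2 FEC
```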
INTERLEAVING IN DISK STORAGE
Low-level format utility performing interleave speed tests on a 10-megabyte IBM PC XT hard drive.
Historically, interleaving was used in ordering block storage on disk-based storage devices such as the floppy disk and the hard disk. The primary purpose of interleaving was to adjust the timing differences between when the computer was ready to transfer data, and when that data was actually arriving at the drive head to be read. Interleaving was very common prior to the 1990s, but faded from use as processing speeds increased. Modern disk storage is not interleaved.
Interleaving was used to arrange the sectors in the most efficient manner possible, so that after reading a sector, time would be available for processing, and the next sector in sequence would arrive under the head just as the computer was ready to read it. Matching the sector interleave to the processing speed therefore accelerates the data transfer, but an incorrect interleave can make the system perform markedly slower.
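The sector-ordering idea can be sketched in a few lines. The function below is an illustrative reconstruction, not taken from any particular controller: it lays out logical sectors around a track, skipping physical slots according to the interleave factor.

```python
def interleaved_layout(num_sectors, factor):
    """Return a track layout for a given interleave factor:
    layout[physical_slot] = logical sector stored there.
    With factor 1 sectors are consecutive; with factor 3 two slots
    are skipped between consecutive logical sectors, giving the
    controller time to process one sector before the next arrives."""
    layout = [None] * num_sectors
    slot = 0
    for logical in range(num_sectors):
        while layout[slot] is not None:       # find the next free slot
            slot = (slot + 1) % num_sectors
        layout[slot] = logical
        slot = (slot + factor) % num_sectors
    return layout

# 9 sectors per track with a 3:1 interleave
print(interleaved_layout(9, 3))   # [0, 3, 6, 1, 4, 7, 2, 5, 8]
```

A fast machine would use a 1:1 layout (`[0, 1, 2, ...]`); choosing too large a factor forces extra disk revolutions per track, which is why a mismatched interleave slows the system down.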
INTERLEAVING IN DATA TRANSMISSION
Interleaving is used in digital data transmission technology to protect the transmission against burst errors. These errors overwrite a lot of bits in a row, so a typical error correction scheme that expects errors to be more uniformly distributed can be overwhelmed. Interleaving is used to help stop this from happening.
Data is often transmitted with error control bits that enable the receiver to correct a certain number of errors that occur during transmission. If a burst error occurs, too many errors can be made in one code word, and that codeword cannot be correctly decoded. To reduce the effect of such burst errors, the bits of a number of codewords are interleaved before being transmitted. This way, a burst error affects only a correctable number of bits in each codeword, and the decoder can decode the codewords correctly.
This method is popular because it is a less complex and cheaper way to handle burst errors than directly increasing the power of the error correction scheme.
Let's look at an example. We apply an error-correcting code so that each channel codeword has four bits and one-bit errors can be corrected. The channel codewords are put into a block like this: aaaabbbbccccddddeeeeffffgggg
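A small sketch of the idea: the bits of several codewords are transmitted column-by-column, so a burst that wipes out consecutive symbols in transit lands as a single erasure in each codeword, which a one-error-correcting code can repair. The helper names below are our own.

```python
def interleave(codewords):
    """Transmit column-by-column: the first bit of every codeword,
    then the second bit of every codeword, and so on."""
    return "".join("".join(col) for col in zip(*codewords))

def deinterleave(stream, n_codewords, cw_len):
    """Invert interleave(): rebuild the original codewords."""
    cols = [stream[i:i + n_codewords] for i in range(0, len(stream), n_codewords)]
    return ["".join(col[k] for col in cols) for k in range(n_codewords)]

codewords = ["aaaa", "bbbb", "cccc", "dddd"]
sent = interleave(codewords)               # "abcdabcdabcdabcd"
# a burst error wipes out four consecutive symbols in transit
received = sent[:4] + "____" + sent[8:]
print(deinterleave(received, 4, 4))        # ['a_aa', 'b_bb', 'c_cc', 'd_dd']
```

Without interleaving the same burst would have destroyed an entire codeword; after deinterleaving each codeword carries only one damaged symbol, which is within the code's correction capability.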
TECHNIQUE USED OR ALGORITHM USED
FEC is accomplished by adding redundancy to the transmitted information using a predetermined algorithm. Each redundant bit is invariably a complex function of many original information bits. The original information may or may not appear in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are nonsystematic.
An extremely simple example would be an analog to digital converter that samples three bits of signal strength data for every bit of transmitted data. If the three samples are mostly all zero, the transmitted bit was probably a zero, and if three samples are mostly all one, the transmitted bit was probably a one. The simplest example of error correction is for the receiver to assume the correct output is given by the most frequently occurring value in each group of three.
This allows an error in any one of the three samples to be corrected by "democratic voting". This is a highly inefficient FEC, and in practice would not work very well, but it does illustrate the principle. In practice, FEC codes typically examine the last several dozen, or even the last several hundred, previously received bits to determine how to decode the current small handful of bits (typically in groups of 2 to 8 bits).
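The triple-repetition scheme above can be written out directly; this is a minimal sketch of the "democratic voting" decoder, with illustrative function names of our own.

```python
def encode_repetition(bits):
    """Repeat every information bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(samples):
    """Majority vote over each group of three received samples."""
    return [1 if sum(samples[i:i + 3]) >= 2 else 0
            for i in range(0, len(samples), 3)]

msg = [1, 0, 1, 1]
tx = encode_repetition(msg)        # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
tx[1] ^= 1                         # flip one sample in the first group
tx[5] ^= 1                         # and one in the second group
print(decode_repetition(tx))       # [1, 0, 1, 1] -- both errors corrected
```

One flipped sample per group is always corrected, but two flips in the same group defeat the vote, and the code spends three transmitted bits per information bit, which is why this rate-1/3 repetition code is the instructive but inefficient baseline the text describes.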
ADVANTAGES
Forward error correction (FEC) is a system of error control for data transmission whereby the sender adds redundant data, known as an error correction code, to its messages. This allows the receiver to detect and correct errors (within some bound) without needing to ask the sender for additional data. The advantage of forward error correction is that a back-channel is not required, or that retransmission of data can often be avoided, at the cost of higher bandwidth requirements on average. FEC is therefore applied in situations where retransmissions are relatively costly or impossible. In particular, FEC information is usually added to most mass storage devices to protect against damage to the stored data.
SYSTEM ANALYSIS
PROBLEM DEFINITION
Truncation causes a problem in error performance, since the last bits lack error protection. The conventional solution is to encode a fixed number of message blocks L followed by m additional all-zero blocks, where m is the constraint length of the convolutional code. This method provides uniform error protection for all information digits, but reduces the rate of the resulting block code relative to the convolutional code by the multiplicative factor L/(L+m). In the tail-biting convolutional code, the zero-tail bits are not needed and are replaced by payload bits, resulting in no rate loss due to the tails. Therefore, the spectral efficiency of the channel code is improved.
ADVANTAGES OF PROPOSED SYSTEM
We propose the BP decoder for these codes in a combined architecture which is advantageous over a solution based on two separate decoders due to efficient reuse of computational hardware and memory resources for both decoders. In fact, since the traditional turbo decoders have a higher complexity, the observed loss in performance with BP is more than compensated by a drastically lower implementation complexity.
FUNCTIONAL REQUIREMENTS
Functional requirements specify which outputs should be produced from the given inputs; they describe the relationship between the input and output of the system. For each functional requirement, a detailed description of all data inputs, their sources, and the range of valid inputs must be specified.
SYSTEM DESIGN
Data Flow Diagram / Use Case Diagram / Flow Diagram
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system.
DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.
INPUT DESIGN
The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed to provide security and ease of use while retaining privacy. Input design considered the following things:
What data should be given as input?
How the data should be arranged or coded?
The dialog to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when errors occur.
OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed so that all data manipulations can be performed; it also provides record-viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens, and appropriate messages are provided as and when needed so that the user is never left in a maze. Thus, the objective of input design is to create an input layout that is easy to follow.
CONCLUSION
In this paper, the feasibility of decoding arbitrary tail-biting convolutional and turbo codes using the BP algorithm was demonstrated. Using this algorithm to decode the tail-biting convolutional code in WiMAX systems speeds up the error-correction convergence and reduces the decoding computational complexity with respect to the ML-Viterbi-based algorithm. In addition, the BP algorithm is a non-trellis-based, forward-only algorithm with only an initial decoding delay, thus avoiding the intermediate decoding delays that usually accompany the traditional MAP and SOVA components in LTE turbo decoders.