28-08-2014, 02:29 PM
A NEW APPROACH FOR FEC DECODING BASED ON THE BP ALGORITHM IN LTE AND WIMAX SYSTEMS PROJECT REPORT
ABSTRACT
Many wireless communication systems such as IS-54, Enhanced Data rates for GSM Evolution (EDGE), Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE) have adopted low-density parity-check (LDPC), tail-biting convolutional, and turbo codes as the forward error correction (FEC) schemes for data and overhead channels. Consequently, many efficient algorithms have been proposed for decoding these codes. However, the different decoding approaches for these two families of codes usually lead to different hardware architectures. Since these codes work side by side in these new wireless systems, it is attractive to introduce a universal decoder that handles both families. The present work exploits the parity-check matrix (H) representation of tail-biting convolutional and turbo codes, thus enabling decoding via a unified belief propagation (BP) algorithm. Indeed, the BP algorithm provides a highly effective general methodology for devising low-complexity iterative decoding algorithms for all convolutional code classes as well as turbo codes. While a small performance loss is observed when decoding turbo codes with BP instead of MAP, it is offset by the lower complexity of the BP algorithm and the inherent advantage of a unified decoding architecture.
PROJECT PURPOSE
Since tail-biting convolutional and turbo codes work side by side in these new wireless systems, the purpose of this project is to introduce a universal decoder that handles both families of codes. The work exploits the parity-check matrix (H) representation of tail-biting convolutional and turbo codes, enabling decoding via a unified belief propagation (BP) algorithm. The BP algorithm provides a highly effective general methodology for devising low-complexity iterative decoding algorithms for all convolutional code classes as well as turbo codes. The small performance loss observed when decoding turbo codes with BP instead of MAP is offset by the lower complexity of the BP algorithm and the inherent advantage of a unified decoding architecture.
INTRODUCTION
Decoders for convolutional codes were originally based on either algebraically calculating the error pattern or on trellis graphical representations, as in the MAP and Viterbi algorithms. With the advent of turbo coding, a third decoding principle appeared: iterative decoding. Iterative decoding was also introduced in Tanner's pioneering work, a general framework based on bipartite graphs for the description of LDPC codes and their decoding via the belief propagation (BP) algorithm. In many respects, convolutional codes are similar to block codes. For example, if we truncate the trellis by which a convolutional code is represented, we obtain a block code whose codewords correspond to all trellis paths up to the truncation depth.
However, this truncation causes a problem in error performance, since the last bits lack error protection. The conventional solution is to encode a fixed number of message blocks L followed by m additional all-zero blocks, where m is the constraint length of the convolutional code. In a tail-biting convolutional code, the zero-tail bits are not needed; they are replaced by payload bits, so there is no rate loss due to the tails and the spectral efficiency of the channel code is improved.
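The tail-biting idea can be sketched in code. The toy encoder below uses the classic rate-1/2 feed-forward generators (7, 5) in octal with memory m = 2; these generators and the tap convention are illustrative assumptions of this sketch, not the actual WiMAX or LTE code parameters. The shift register is preloaded with the last m message bits, so the trellis starts and ends in the same state and no zero-tail is appended.

```python
def tailbiting_encode(msg, g1=0o7, g2=0o5, m=2):
    """Toy tail-biting rate-1/2 convolutional encoder (illustrative only).

    The shift register is initialized with the last m message bits, so the
    final state equals the initial state and no zero-tail is transmitted:
    the rate stays len(msg) / (2 * len(msg)) with no tail overhead.
    """
    state = list(reversed(msg))[:m]      # circular init: [msg[-1], msg[-2], ...]
    out = []
    for bit in msg:
        reg = [bit] + state              # reg[0] is the newest input bit
        # Generator polynomial bit (m - i) taps register position i
        out.append(sum(reg[i] for i in range(m + 1) if (g1 >> (m - i)) & 1) % 2)
        out.append(sum(reg[i] for i in range(m + 1) if (g2 >> (m - i)) & 1) % 2)
        state = reg[:m]                  # shift-register update
    return out

print(tailbiting_encode([1, 0, 1, 1]))   # 8 coded bits for 4 message bits
```

Because the final state equals the initial one, every codeword corresponds to a circular trellis path, which is exactly the structure that a parity-check matrix representation can capture for BP decoding.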
Due to the advantages of the tail-biting method over the zero-tail, it has been adopted as the FEC, in addition to the turbo code, for data and overhead channels in many wireless communication systems such as IS-54, EDGE, WiMAX and LTE. Both turbo and LDPC codes have been extensively studied for more than fifteen years. However, the formal relationship between these two classes of codes remained unclear until MacKay claimed that turbo codes are LDPC codes. Wiberg's work marked another attempt to relate these two classes by developing a unified factor-graph representation for them. Finally, Colavolpe demonstrated the use of the BP algorithm to decode convolutional and turbo codes. That approach, however, is limited to specific classes of convolutional codes, such as convolutional self-orthogonal codes (CSOCs), and the turbo codes therein are based on the serial structure, while the parallel structure is more prevalent in practical applications.
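Decoding via the H representation means running standard belief propagation over some binary parity-check matrix. The sketch below implements the common min-sum approximation of BP in the log domain; the (7,4) Hamming matrix used in the demo is a stand-in for illustration, not the H of an actual tail-biting or turbo code, and the decoder assumes nonzero channel LLRs and checks of degree at least two.

```python
import numpy as np

def bp_decode(H, llr, iters=20):
    """Min-sum belief propagation over a binary parity-check matrix H.

    H   : (m, n) array of 0/1 parity checks (each row of weight >= 2)
    llr : length-n channel log-likelihood ratios (positive favours bit 0)
    Returns a hard-decision codeword estimate (0/1 per bit).
    """
    m, n = H.shape
    c2v = np.zeros((m, n))                        # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        total = llr + c2v.sum(axis=0)             # posterior belief per bit
        v2c = np.where(H == 1, total - c2v, 0.0)  # exclude the incoming edge
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = v2c[i, idx]
            signs = np.prod(np.sign(v)) * np.sign(v)  # product of the OTHER signs
            mag = np.abs(v)
            order = np.argsort(mag)
            m1, m2 = mag[order[0]], mag[order[1]]
            # min of the other magnitudes: the overall-min edge sees the 2nd min
            other_min = np.where(np.arange(len(idx)) == order[0], m2, m1)
            c2v[i, idx] = signs * other_min
        hard = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if not np.any(H @ hard % 2):              # all parity checks satisfied
            break
    return hard

# Demo: (7,4) Hamming H, all-zero codeword sent, bit 3 received in error.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 2.0, 2.0, -1.0, 2.0, 2.0, 2.0])
print(bp_decode(H, llr))                          # the flipped bit is corrected
```

The same routine runs unchanged on any H, which is the essence of the unified-decoder argument: only the matrix, not the decoding hardware, differs between the code families.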
PROBLEM DEFINITION
Truncating the trellis causes a problem in error performance, since the last bits lack error protection. The conventional solution is to encode a fixed number of message blocks L followed by m additional all-zero blocks, where m is the constraint length of the convolutional code. This method provides uniform error protection for all information digits, but reduces the rate of the resulting block code relative to the convolutional code by the multiplicative factor L/(L+m). In the tail-biting convolutional code, the zero-tail bits are not needed; they are replaced by payload bits, so there is no rate loss due to the tails and the spectral efficiency of the channel code is improved.
LIMITATIONS OF EXISTING SYSTEM
The conventional solution to this problem is to encode a fixed number of message blocks L followed by m additional all-zero blocks, where m is the constraint length of the convolutional code. This method provides uniform error protection for all information digits, but reduces the rate of the resulting block code relative to the convolutional code by the multiplicative factor L/(L+m).
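The rate penalty is easy to quantify. Assuming a mother code of some base rate, appending m all-zero blocks to L message blocks scales the rate by L/(L+m), while tail-biting leaves it untouched:

```python
def effective_rate(base_rate, L, m, tailbiting=False):
    """Effective code rate for L message blocks.

    Zero-tail termination appends m all-zero blocks, scaling the
    rate by L / (L + m); tail-biting adds no tail, so no rate loss.
    """
    return base_rate if tailbiting else base_rate * L / (L + m)

# Rate-1/2 code, L = 40 message blocks, constraint length m = 6:
print(effective_rate(0.5, 40, 6))                   # zero-tail: ~0.435
print(effective_rate(0.5, 40, 6, tailbiting=True))  # tail-biting: 0.5
```

The gap shrinks for long frames but is significant for the short overhead-channel blocks these systems carry, which is why tail-biting was adopted there.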
ADVANTAGES OF PROPOSED SYSTEM
We propose a BP decoder for these codes in a combined architecture, which is advantageous over a solution based on two separate decoders because computational hardware and memory resources are reused efficiently across both code families. In fact, since traditional turbo decoders have a higher complexity, the observed loss in performance with BP is more than compensated by the drastically lower implementation complexity.
LITERATURE SURVEY
A literature survey is an important step in the software development process. Before developing the tool it is necessary to determine the time factor, economy and company strength. Once these requirements are satisfied, the next step is to determine which operating system and language should be used for developing the tool. Once the programmers start building the tool, they need a lot of external support, which can be obtained from senior programmers, from books or from websites. All of the above considerations are taken into account before building the proposed system.
ADVANTAGES
Forward error correction (FEC) is a system of error control for data transmission whereby the sender adds redundant data, known as an error correction code, to its messages. This allows the receiver to detect and correct errors (within some bound) without the need to ask the sender for additional data. The advantage of forward error correction is that a back-channel is not required, or that retransmission of data can often be avoided, at the cost of higher bandwidth requirements on average. FEC is therefore applied in situations where retransmissions are relatively costly or impossible. In particular, FEC information is usually added to most mass storage devices to protect against damage to the stored data.
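The add-redundancy-then-correct principle can be seen in the simplest possible FEC scheme, a rate-1/3 repetition code with majority-vote decoding (purely illustrative; the codes discussed in this report are far stronger):

```python
def rep3_encode(bits):
    """Rate-1/3 repetition code: send each bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(rx):
    """Majority vote over each group of three received bits."""
    return [int(sum(rx[i:i + 3]) >= 2) for i in range(0, len(rx), 3)]

tx = rep3_encode([1, 0, 1])          # [1,1,1, 0,0,0, 1,1,1]
rx = tx[:]
rx[1] ^= 1                           # channel flips one bit
assert rep3_decode(rx) == [1, 0, 1]  # single error per group is corrected
```

Each information bit costs three channel bits, yet any single flip per group is corrected without a back-channel, which is exactly the FEC trade-off described above.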
APPLICATIONS
FEC can be used in ATM networks for data transmission. Thus by using FEC for data transmission in ATM networks we can recover the lost packets. Forward Error Correction (FEC) allows recovery from loss without retransmission. In Digital communication systems, particularly for military use, need to perform accurately and reliably in the presence of noise and interference. Among many possible ways to achieve this goal, forward error-correction coding is the most effective and economical. Forward error-correction coding represents the most efficient, economical, and predictable way of improving the reliability of transmitted or stored data. In particular it offers designers a powerful tool for ensuring robust communications despite adverse conditions. Forward error-correcting codes play a larger role in helping military communication satellites to keep up with increasing system demands.
INPUT DESIGN
The input design is the link between the information system and the user. It comprises the specifications and procedures for data preparation, that is, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed to provide security and ease of use while retaining privacy.
OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. Output is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with its users and supports their decision-making.
Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people find the system easy and effective to use. When analysts design computer output, they should:
1. Identify the specific output that is needed to meet the requirements.
2. Select methods for presenting the information.
3. Create documents, reports, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
Convey information about past activities, current status or projections of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.
CONCLUSION
In this paper, the feasibility of decoding arbitrary tail-biting convolutional and turbo codes using the BP algorithm was demonstrated. Using this algorithm to decode the tail-biting convolutional code in WiMAX systems speeds up the error-correction convergence and reduces the decoding computational complexity with respect to the ML-Viterbi-based algorithm. In addition, the BP algorithm is a non-trellis-based, forward-only algorithm with only an initial decoding delay, thus avoiding the intermediate decoding delays that usually accompany the traditional MAP and SOVA components in LTE turbo decoders.