CONVOLUTIONAL CODES
INTRODUCTION
Convolutional codes represent one technique within the
general class of channel codes. Channel codes (also called
error-correction codes) permit reliable communication
of an information sequence over a channel that adds
noise, introduces bit errors, or otherwise distorts the
transmitted signal. Elias [1,2] introduced convolutional
codes in 1955. These codes have found many applications,
including deep-space communications and voiceband
modems. Convolutional codes continue to play a role in
low-latency applications such as speech transmission and
as constituent codes in Turbo codes. Two reference books
on convolutional codes are those by Lin and Costello [3]
and Johannesson and Zigangirov [4].
Section 2 introduces the shift-register structure of convolutional
encoders including a discussion of equivalent
encoders and minimal encoders. Section 3 focuses on the
decoding of convolutional codes. After a brief mention
of the three primary classes of decoders, this section
delves deeply into the most popular class, Viterbi decoders.
This discussion introduces trellis diagrams, describes the
fundamental add-compare-select computation, compares
hard and soft decoding, and describes the suboptimal (but
commonly employed) finite traceback version of Viterbi
decoding.
Equivalent Encoders
Convolutional encoders are finite-state machines. Hence,
state diagrams provide considerable insight into the
behavior of convolutional codes. Figures 3 and 4 provide
the state diagrams for the encoders of Figs. 1 and 2,
respectively. The states are labeled so that the least
significant bit is the one residing in the leftmost memory
element of the shift register. The branches are labeled with
the 1-bit (single-bit) input and the 2-bit output separated
by a comma. The most significant bit (MSB) of the two-bit
output is the bit labeled MSB in Figs. 1 and 2.
If one erases the state labels and the single-bit input
labels, the remaining diagrams for Figs. 3 and 4 (labeled
with only the 2-bit outputs) would be identical.
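The state-transition behavior described above can be enumerated programmatically. The sketch below assumes a common textbook feedforward encoder with two memory elements and generator taps 7 and 5 in octal (1 + D + D^2 and 1 + D^2); the article's Figs. 1 and 2 are not reproduced here, so these taps are an illustrative assumption rather than the figures' actual encoder.

```python
# Enumerate the state diagram of a rate-1/2 feedforward convolutional
# encoder. Taps (octal 7, 5) and two memory elements are assumed for
# illustration; they need not match the article's Figs. 1 and 2.
G0 = 0b111  # taps producing the MSB output bit: 1 + D + D^2
G1 = 0b101  # taps producing the LSB output bit: 1 + D^2
MEMORY = 2  # two memory elements -> four states

def step(state, bit):
    """Return (next_state, 2-bit output) for one shift-register update."""
    reg = (bit << MEMORY) | state            # input bit enters the register
    out_msb = bin(reg & G0).count("1") & 1   # modulo-2 sum of tapped bits
    out_lsb = bin(reg & G1).count("1") & 1
    return reg >> 1, (out_msb << 1) | out_lsb

# Print every branch of the state diagram: 4 states x 2 input bits.
for state in range(1 << MEMORY):
    for bit in (0, 1):
        nxt, out = step(state, bit)
        print(f"state {state:02b} --{bit},{out:02b}--> state {nxt:02b}")
```

Each printed branch carries the "input, output" label described in the text; erasing the state and input labels leaves only the 2-bit output labels on the branches.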
ENCODER STRUCTURE
Like any binary code, convolutional codes protect information
by adding redundant bits. A rate-k/n convolutional
encoder processes the input sequence of k-bit information
symbols through one or more binary shift registers
(possibly employing feedback). The convolutional encoder
computes each n-bit symbol (n > k) of the output sequence
from linear operations on the current input symbol and
the contents of the shift register(s). Thus, a rate k/n
convolutional encoder processes a k-bit input symbol and
computes an n-bit output symbol with every shift register
update. Figures 1 and 2 illustrate feedforward and feedback
encoder implementations of a rate-1/2 code.
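The one-input-bit-in, two-output-bits-out operation described above can be sketched as a short encoding routine. As before, the generator taps (octal 7 and 5) and the two-element shift register are assumptions chosen for illustration, not taken from the article's figures.

```python
# Sketch of a rate-1/2 feedforward convolutional encoder: each k = 1 input
# bit produces n = 2 output bits per shift-register update. Generator taps
# (octal 7, 5) and memory = 2 are illustrative assumptions.
def encode(bits, g=(0b111, 0b101), memory=2):
    state = 0          # shift register starts in the all-zero state
    out = []
    for u in bits:
        reg = (u << memory) | state   # current input plus register contents
        # each output bit is a modulo-2 sum (XOR) of the tapped bits
        out.extend(bin(reg & gi).count("1") & 1 for gi in g)
        state = reg >> 1              # shift-register update
    return out

codeword = encode([1, 0, 1, 1])       # 4 input bits -> 8 output bits
```

Note that the output is computed purely from linear (XOR) operations on the input bit and the register contents, as the text states; a feedback encoder would additionally feed a tapped sum back into the register input.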
DECODING CONVOLUTIONAL CODES
Convolutional code decoding algorithms infer the values of
the input information sequence from the stream of received
distorted output symbols. There are three major families
of decoding algorithms for convolutional codes: sequential,
Viterbi, and maximum a posteriori (MAP). Wozencraft
proposed sequential decoding in 1957 [7]. Fano in 1963 [8]
and Zigangirov in 1966 [9] further developed sequential
decoding. See the book by Johannesson and Zigangirov [4]
for a detailed treatment of sequential decoding algorithms.
Viterbi originally described the decoding algorithm that
bears his name in 1967 [10]. See also Forney’s work [11,12]
introducing the trellis structure and showing that Viterbi
decoding is maximum-likelihood in the sense that it
selects the transmitted sequence that makes the received
sequence most likely.
The Basic Viterbi Algorithm
The Viterbi algorithm uses the trellis diagram to compute
the accumulated distances (called the path metrics)
from the received sequence to the possible transmitted
sequences. The total number of such trellis paths grows
exponentially with the number of stages in the trellis,
causing potential complexity and memory problems.
However, the Viterbi algorithm takes advantage of the
fact that the number of paths truly in contention to have
the minimum distance is limited to the number of states
in a single column of the trellis, assuming that ties may
be arbitrarily resolved.
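The survivor-per-state idea above can be made concrete with a minimal hard-decision Viterbi decoder. The trellis below is that of the assumed rate-1/2 encoder with taps (octal 7, 5) and two memory elements used in the earlier sketches; at each trellis stage, every state performs the add-compare-select step: add the branch metric (Hamming distance) to the path metric, compare the paths merging into the state, and select the survivor.

```python
# Minimal hard-decision Viterbi decoder for a rate-1/2 feedforward code.
# Taps (octal 7, 5) and memory = 2 are illustrative assumptions.
def viterbi(received, g=(0b111, 0b101), memory=2):
    n_states = 1 << memory
    INF = float("inf")
    pm = [0.0] + [INF] * (n_states - 1)   # path metrics; start in state 0
    history = []                          # survivor (prev_state, bit) lists
    pairs = [received[i:i + 2] for i in range(0, len(received), 2)]
    for r in pairs:
        new_pm = [INF] * n_states
        surv = [None] * n_states
        for s in range(n_states):
            if pm[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << memory) | s
                out = [bin(reg & gi).count("1") & 1 for gi in g]
                # add: path metric + Hamming branch metric
                metric = pm[s] + sum(a != b for a, b in zip(out, r))
                nxt = reg >> 1
                if metric < new_pm[nxt]:  # compare and select the survivor
                    new_pm[nxt] = metric
                    surv[nxt] = (s, u)
        pm = new_pm
        history.append(surv)
    # trace back along the survivor path from the best final state
    state = pm.index(min(pm))
    bits = []
    for surv in reversed(history):
        state, u = surv[state]
        bits.append(u)
    return bits[::-1]
```

Only one survivor per state is stored at each stage, so memory and computation grow linearly with the number of stages rather than exponentially with the number of paths, which is precisely the saving the text describes.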
Hard versus Soft Decoding
The integer branch and path metrics of the bit error
channel facilitate a relatively simple example of the
Viterbi algorithm. However, the AWGN channel is far
more common than the bit error channel. For the AWGN
channel, binary phase shift keying (BPSK) represents
binary 1 with 1.0 and binary 0 with −1.0. These two
transmitted values are distorted by additive Gaussian
noise, so that the received values will typically be neither
1.0 nor −1.0. A novice might choose to simply quantize
each received value to the closest of 1.0 and −1.0 and
assign the appropriate binary value. This quantization
would effectively transform the AWGN channel to the
bit error channel, facilitating Viterbi decoding exactly
as described above. This method of decoding is called
hard decoding, because the receiver makes a binary (hard)
decision about each bit before Viterbi decoding.
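The hard-decision step described above amounts to a one-line quantizer. In the sketch below, the noisy sample values are made up purely for illustration; each received real value is mapped to the nearer of +1.0 and −1.0 and then back to a bit, which turns the AWGN channel into a bit error channel ahead of Viterbi decoding.

```python
# Hard-decision demapping of noisy BPSK samples: quantize each received
# value to the closest of +1.0 (binary 1) and -1.0 (binary 0).
def hard_decide(samples):
    # r >= 0 is closer to +1.0 -> bit 1; r < 0 is closer to -1.0 -> bit 0
    return [1 if r >= 0.0 else 0 for r in samples]

received = [0.8, -1.3, 0.1, -0.2, 1.7]   # hypothetical channel outputs
bits = hard_decide(received)              # -> [1, 0, 1, 0, 1]
```

A soft decoder would instead keep the real-valued samples and use, for example, squared Euclidean distance as the branch metric, avoiding the information lost in this quantization.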