Project Report On Facsimile Image Compression using Slepian-Wolf Coding Principles and Context Modelling
Abstract
In this paper, Slepian-Wolf coding (SWC) is considered for the compression of facsimile images. The correlation between the image scan lines is exploited at the decoder using SWC. The side information is generated at the decoder from previously decoded image lines using context modelling. A novel weighted context-based iterative decoding of low-density parity-check (LDPC) codes is proposed, in addition to context-based initialization of the log-likelihood ratio values in LDPC iterative decoding. On average, an increase of 0.60 in the compression ratio is obtained with context modelling. As the proposed method is based on SWC principles, it is inherently error resilient and affords a system with a low-complexity encoder.
INTRODUCTION
The Slepian-Wolf (SW) theorem [1] is generally applied to uniform binary sources to exploit the correlation between two binary sources. Slepian-Wolf coding is also known as distributed source coding (DSC), or compression of correlated sources with side information. DSC schemes offer considerable scope for a flexible trade-off between encoder and decoder complexities. The basic theory and principles of DSC can be found in [2]. DSC principles have been applied to remote sensing images and to multi-view images to realize a low-complexity encoder by exploiting inter-band correlation and the correlation between the images, respectively. However, the literature on single-image compression that utilizes spatial redundancy in accordance with DSC principles is limited. In [3], a still image was divided into two sources, and both sources were lossy coded: one source was encoded using the low-pass component of a wavelet decomposition, and the other was encoded using a modulo-based binning scheme in tune with DSC principles. The authors of [4] considered single-source compression within the DSC framework using 'virtually' created side information. In [5], the authors constructed distributed image compression techniques for binary text images that operate in the pixel domain and exploit the correlation only at the decoder.
BUILDING BLOCKS OF ENCODING
DSC principles are combined with classical facsimile image coding techniques to achieve low-complexity facsimile image encoding. The different building blocks used in the proposed scheme are presented in this section. The block diagram of our scheme is shown in Fig.
Variable Rate LDPC Codes
Each line in an image has a different source distribution and, consequently, a different compression ratio. Hence, a matching LDPC code must be selected for each line, which necessitates the use of variable rate LDPC codes. To choose the correct LDPC code for each image line and thereby achieve lossless decoding, the simulation threshold value (q#) [9] of each code must be known. For this purpose, a set of LDPC codes of different code rates is pre-designed, and their threshold values are computed.
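As an illustration only (not the paper's code), the selection of a matching code can be sketched as follows, assuming a pre-computed table of (rows, threshold) pairs for the pre-designed codes; the function and table names are hypothetical:

def select_ldpc_code(q_estimate, code_table):
    """Pick the cheapest pre-designed LDPC code that can still decode
    losslessly. code_table holds (num_syndrome_rows, q_threshold) pairs
    sorted by increasing num_syndrome_rows; a code is usable whenever
    the estimated crossover probability q does not exceed its
    simulation threshold q#."""
    for num_rows, q_threshold in code_table:
        if q_estimate <= q_threshold:
            return num_rows      # fewest syndrome bits that suffice
    return None                  # no code works: fall back to VLC/raw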
Generally, each line in the image is encoded independently
using LDPC syndrome codes according to (1).
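Equation (1) is not reproduced above; in standard Slepian-Wolf syndrome coding, which matches this description, an image line is compressed to its syndrome under the parity-check matrix of the selected code. A minimal sketch (NumPy and the helper name are our own):

import numpy as np

def encode_syndrome(H, x):
    """Compress a binary image line x (length n, here n = 1728) to its
    syndrome s = H x mod 2, where H is the (m, n) parity-check matrix
    of the selected LDPC code. Since m < n, the line is represented by
    m bits; the decoder recovers x from s and the side information."""
    return (np.asarray(H) @ np.asarray(x)) % 2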
DECODING
As discussed in Sec. II-D, the decision to encode an image line using an LDPC syndrome code is based on the data of the previous line, i.e., the (i-1)-th line. Hence, an obvious way to decode LDPC coded lines is to take the (i-1)-th line as side information and decode the i-th line using SW decoding. Such simple decoding may not be adequate to achieve good compression, since the spatial redundancy in the image is not exploited effectively. Hence, we propose context modelling at the decoder to make use of this spatial redundancy. In the following sections, we describe the details of context modelling.
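By way of illustration only, a decoder-side context model can be built from counts over previously decoded pixels; the small causal template below is an assumption, not necessarily the template used in the paper:

def context_index(decoded, i, j):
    """Form a context number from already-decoded pixels: three pixels
    of line i-1 and two left neighbours on line i (an illustrative
    5-pixel causal template; borders wrap via Python's negative
    indexing, purely for brevity)."""
    up, cur = decoded[i - 1], decoded[i]
    bits = (up[j - 1], up[j], up[(j + 1) % len(up)], cur[j - 2], cur[j - 1])
    idx = 0
    for b in bits:
        idx = (idx << 1) | int(b)
    return idx

# Running (zeros, ones) counts per context yield the conditional
# probability P(x = 1 | context), which refines the side information
# taken from the previously decoded lines.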
The facsimile image scan lines can be treated as non-uniform sources, and hence the decoding strategy presented in [9] can be followed. In [9], the authors modified the decoding algorithm proposed in [16] for non-uniform sources and incorporated the source distribution while initializing the edges originating from the variable nodes of the bipartite graph [17]. They modelled the side information as the output of an equivalent BSC with crossover probability equal to q.
Let x_i, y_i ∈ {0, 1} be the realizations of the non-uniform source X to be compressed and of the side information Y, respectively. Both x_i and y_i belong to the i-th variable node, and the edges emanating from that variable node are initialized as follows.
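For a non-uniform source with prior p = P(X = 1) and side information modelled as the output of a BSC with crossover probability q, the standard initialization consistent with this description (a reconstruction, not necessarily the paper's exact expression) assigns each variable node the log-likelihood ratio

$$ L(x_i) = \log\frac{P(x_i = 0 \mid y_i)}{P(x_i = 1 \mid y_i)} = \log\frac{1-p}{p} + (1 - 2y_i)\,\log\frac{1-q}{q}, $$

where the first term carries the source distribution and the second term the BSC side information.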
EXPERIMENTAL SETUP AND SIMULATION RESULTS
In this section, we provide additional details, practical aspects, and the realization of the building blocks described in Sec. II. The CCITT test images are used for testing the algorithm and for comparison purposes. The test images are of size 2376 × 1728 and are available online [18]. Thumbnails of the CCITT test images are shown in Fig. 4.
A. Variable Rate LDPC Codes
Different LDPC matrices (codes) are designed with a variable number of rows and a fixed number of columns; the number of columns is 1728, equal to the image width. The (3, p) regular variable rate LDPC matrices are constructed using the PEG algorithm [19]. For simulation purposes, we have designed 61 LDPC codes for the LDPC code set. This amounts to 6 bits per image line in the header, which conveys the mode selection information: mode 0 is used when q = 0 (i.e., when the present image line is identical to the previous one); modes 1-61 select the different LDPC codes; mode 62 indicates a VLC coded image line; and mode 63 indicates raw data. The LDPC codes are constructed with 96 to 1056 rows, with the number of rows increasing in steps of 16. Therefore, the bit rates range from 0.056 to 0.61 bits/pixel. The computed simulation threshold (q#) values range from 0.000579 for the
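To make the header semantics concrete, a hedged sketch of the 6-bit mode mapping described above (the function name and arguments are illustrative):

def line_mode(same_as_previous, ldpc_code_index, vlc_is_cheaper):
    """Map a line's coding decision onto the 6-bit header (values 0-63).
    Mode 0: line identical to the previous one (q = 0);
    modes 1-61: index of the selected variable rate LDPC code;
    mode 62: VLC coded line; mode 63: raw (uncompressed) line."""
    if same_as_previous:
        return 0
    if ldpc_code_index is not None:          # in 1..61
        return ldpc_code_index
    return 62 if vlc_is_cheaper else 63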
B. Simulation Results
The proposed algorithm is applied to the CCITT images; each image is encoded for different overloading factors and decoded. The overloading factors listed in Table I represent the highest overloading factor for which near-lossless decoding is achieved. The first column lists the compression ratios obtained when the overloading factor is zero and no context model is used at the decoder. Obviously, no overloading can be used when context modelling at the decoder is disabled. On average, an increase of 0.60 in the compression ratio is obtained with context modelling (combined context-based LLR initialization and context-based LDPC decoding) as compared to that obtained with no context. Context-based LDPC iterative decoding by itself increases the compression ratio by 0.42 in comparison with no context.
CONCLUSION
We investigated the compression of facsimile images under the DSC paradigm. The proposed scheme is a single-source near-lossless image compression method with side information (for Slepian-Wolf decoding) derived from the previously decoded image lines. The side information was generated using context-based modelling at the decoder. Further, we introduced a novel weighted context-based LDPC iterative decoding in addition to context-based initialization of the log-likelihood ratio (LLR) values in LDPC decoding. The combination of these techniques improved the compression performance considerably: on average, an increase of 0.60 in the compression ratio is obtained with context modelling. The performance of the overall system is very competitive with fax3 (CCITT Group 3) compression. As the proposed method is based on DSC principles, it is inherently error resilient and affords a system with a low-complexity encoder.