22-11-2012, 04:34 PM
Image Compression Based on Wavelet and Quantization with Optimal Huffman Code
ABSTRACT
This paper proposes a method for the compression of color images
by Haar transform, quantization and construction of minimum
redundancy code using conventional Huffman coding. Wavelet
compression is accomplished by decomposing the rows and
columns of the image matrix using the Haar transform. The image
can be reconstructed from just one quarter of the decomposed
image, and even one sixteenth is sufficient, with the quality
depending on the nature of the image. A fast and effective
histogram-based quantization is then applied to the decomposed
image. The weighted minmax quantization incorporates activity
weighting, thereby obtaining a high-quality quantized image with
significantly less visual distortion. Partition-based Huffman
coding divides the symbols, sorted by probability, into two equal
halves and generates codes for each partitioned group. Analytical and
experimental results suggest that the optimum code can be
generated for images with balanced binary partition Huffman
coding, which is not only decodable but also offers the possibility
of realizing an average code-word length that can be made
arbitrarily close to the source entropy.
INTRODUCTION
A common characteristic of image data is that it contains a
significant amount of redundant information. The amount of data
associated with visual information is so large that its storage
requires enormous capacity, and transmitting this redundant data
wastes primary communication resources. For efficient data
transmission, the redundant information should be removed from
the signal prior to transmission [1]. Image coding may be lossless
or lossy, depending on the application. Huffman coding has been
widely used in lossless
image compression. The well-known Huffman code is an
instantaneous, uniquely decodable block code that assigns shorter
code words to frequent source symbols and longer code words to
rare source symbols of the image to be encoded for
transmission and storage. The goal of this paper is to provide a
complete coding system which includes wavelet transform and
quantization plus balanced binary partition Huffman coding, and
propose a novel context-based compression approach to provide
high compression performance with reduced memory utilization.
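By way of illustration, a conventional Huffman code of this kind can be built with a priority queue; the following Python sketch (names are illustrative, not from the paper) merges the two least probable symbol groups at each step:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    # Count symbol frequencies and seed the heap with one
    # single-symbol group per distinct symbol.
    freq = Counter(symbols)
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate one-symbol source
        return {s: "0" for s in freq}
    i = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent groups, prefixing '0'/'1'.
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]
```

For the string "aaabbc", the frequent symbol a receives a one-bit code while the rarer b and c receive two-bit codes, and the result is prefix-free, i.e., instantaneously decodable.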
HAAR WAVELET
Since we are dealing with the compression of images, manipulating
the raw image directly is impractical; if the image is represented
in a mathematical form, the manipulation becomes simpler and
easier. Hence, the raw image needs to be transformed. The extent
to which a particular transform supports data compression depends
on both the transform and the nature of the images being
compressed. The practicality of an image coding scheme depends on
the computational workload of the encoding and decoding steps, as
well as the degree of compression obtained. The availability of a
fast implementation algorithm can greatly enhance the appeal of a
particular transform. Common transforms include the sine, cosine,
Haar, and slant transforms [2].
The Haar transform is a very fast transform. It is a compact,
dyadic, orthonormal wavelet transform. The basic wavelet is
progressively narrowed (reduced in scale) by powers of two. Each
smaller wavelet is then translated by increments equal to its
width, so that the complete set of wavelets at any scale
completely covers the interval. (As the basis wavelet is scaled
down by powers of two, its amplitude is scaled up by powers of √2
to maintain orthonormality.) [3]
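As an illustration, one level of the orthonormal Haar decomposition applied to the rows and columns of an image matrix can be sketched in Python as follows (a minimal sketch, not the paper's implementation; it assumes even image dimensions):

```python
import numpy as np

def haar_step(x):
    # One Haar level along axis 0: pairwise averages (low-pass)
    # and differences (high-pass), scaled by 1/sqrt(2) for orthonormality.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d], axis=0)

def haar2d(img):
    # One decomposition level: transform along one axis, then the other.
    return haar_step(haar_step(img).T).T

def inv_haar_step(y):
    # Invert one Haar level along axis 0.
    n = y.shape[0] // 2
    a, d = y[:n], y[n:]
    x = np.empty_like(y)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def inv_haar2d(coeffs):
    # Undo the second pass first, then the first pass.
    return inv_haar_step(inv_haar_step(coeffs.T).T)
```

Keeping only the top-left quadrant of the coefficient matrix corresponds to reconstructing from one quarter of the decomposed image; repeating the decomposition on that quadrant gives the one-sixteenth case mentioned above.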
QUANTIZATION
In color quantization a true color image is transformed into a
color-mapped image consisting of K carefully selected
representative colors. The goal of this quantization is to discard
information, which is not visually significant. Quantization is a
many-to-one mapping and is therefore fundamentally lossy. A fast
and effective method for color quantization, which uses a
histogram to weight each color in proportion to its frequency
(thereby improving image quality), is presented here. It is a
modified version of the simple minmax algorithm proposed by
Gonzalez. The quantization is applied to the transformed image;
the quantized image is then subjected to entropy coding, and the
coded image is stored on a storage medium.
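To make the idea concrete, a frequency-weighted minmax selection of K representative levels can be sketched on a single channel as follows (a simplified sketch under assumed details; the paper's exact weighting and color-space handling may differ):

```python
import numpy as np

def weighted_minmax_levels(pixels, k):
    # Greedy farthest-point (minmax) selection of k representative
    # levels, with each candidate's distance weighted by its histogram
    # frequency so that common colors get greater precedence.
    values, counts = np.unique(pixels, return_counts=True)
    freq = counts / counts.sum()
    reps = [values[np.argmax(freq)]]  # start from the most frequent value
    while len(reps) < k:
        # Distance from each value to its nearest chosen representative.
        dist = np.min(np.abs(values[:, None] - np.array(reps)), axis=1)
        reps.append(values[np.argmax(dist * freq)])
    return np.sort(np.array(reps))

def quantize(pixels, reps):
    # Map every pixel to its nearest representative level.
    idx = np.argmin(np.abs(pixels[..., None] - reps), axis=-1)
    return reps[idx]
```

Dropping the `freq` factor in the selection step recovers a plain minmax choice that treats all colors equally, which is what causes the contouring discussed in the conclusion.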
EXPERIMENTAL RESULT
The compressed file stores an index table containing the unique
symbols and their codes, together with the maximum and minimum
lengths of the Huffman code [10,11]. The image can be
reconstructed from this compressed file. The reconstructed images
for the complete coding system with simple minmax (MM), weighted
minmax (WMM), wavelet with WMM, and WMM with partitioned Huffman
coding are shown in the figure. The quality and compression ratio
of the image codec are comparable with those of different image
compression standards.
CONCLUSION
The compression ratio is good for the wavelet transform method,
and even when only 1/16th of the image is stored, the
reconstruction of the original image is found to be good. A
comprehensive comparison between the MinMax algorithm and the
weighted MinMax algorithm is presented. The image quantized by
simple MinMax shows considerable contouring because the algorithm
treats all colors equally. Weighted MinMax is a high-quality
quantization technique that uses a histogram to weight each color
in proportion to its frequency, i.e., colors that occur more often
in the image are given greater precedence.
The Weighted MinMax algorithm runs more slowly than the MinMax
algorithm, but the quality of the quantized output was shown to be
superior. The generalized activity weighting method was shown to
be a fast and effective way to enhance the quality of quantized
images. After inserting weighted minmax quantization between the
wavelet transform and the partitioned Huffman coding, the
compression ratio and the quality of the reconstructed image are
found to be better than with wavelet-based image compression
alone. The partitioned Huffman code yields an optimum code for the
weighted-minmax-quantized pixel values by reducing the number of
bits for the least frequent symbols.
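The balanced binary partition coding referred to above can be sketched as a Shannon–Fano-style recursion: sort symbols by probability, split them into two groups of roughly equal total probability, and prepend one bit per side (an illustrative reading of the paper's description, not its exact algorithm):

```python
def partition_codes(probs):
    # probs: {symbol: probability}. Returns a prefix-free code table.
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    codes = {s: "" for s, _ in items}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        acc, cut = 0.0, 1
        for i, (_, p) in enumerate(group[:-1], start=1):
            acc += p
            if acc >= total / 2:  # balance the two halves' probability mass
                cut = i
                break
        for s, _ in group[:cut]:  # left half: append '0'
            codes[s] += "0"
        for s, _ in group[cut:]:  # right half: append '1'
            codes[s] += "1"
        split(group[:cut])
        split(group[cut:])

    split(items)
    return codes
```

For a dyadic source such as {a: 1/2, b: 1/4, c: 1/8, d: 1/8}, the average code-word length equals the source entropy (1.75 bits), illustrating the claim that the average length can be made close to the entropy.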