17-11-2012, 02:21 PM
A NEW LOSSLESS METHOD OF IMAGE COMPRESSION AND DECOMPRESSION USING HUFFMAN CODING TECHNIQUES
ABSTRACT
The need for an efficient technique for image compression is ever increasing, because raw images require large
amounts of disk space, which is a significant disadvantage for transmission and storage. Even though many
compression techniques already exist, a technique that is faster, more memory efficient, and simpler would better suit the
requirements of the user. In this paper we propose a lossless method of image compression and decompression
using a simple coding technique called Huffman coding. This technique is simple to implement and utilizes little
memory. A software algorithm has been developed and implemented to compress and decompress a given image
using Huffman coding techniques on a MATLAB platform.
INTRODUCTION
A digital image obtained by sampling and quantizing a
continuous-tone picture requires enormous storage.
For instance, a 24-bit colour image with 512x512 pixels
occupies 768 Kbyte of storage on a disk, and a picture
twice this size will not fit on a single floppy disk.
Transmitting such an image over a 28.8 Kbps modem would
take almost 4 minutes. The purpose of image
compression is to reduce the amount of data required to
represent sampled digital images and therefore to reduce
the cost of storage and transmission. Image compression
plays a key role in many important applications,
including image databases, image communications, and
remote sensing (the use of satellite imagery for weather
and other earth-resource applications). The images to be
compressed here are grayscale, with pixel values between 0 and
255. There are different techniques for compressing
images. They are broadly classified into two classes,
called lossless and lossy compression techniques. As the
name suggests, in lossless compression techniques no
information regarding the image is lost. In other words,
the image reconstructed from the compressed data is
identical to the original image in every sense, whereas in
lossy compression some image information is discarded in
exchange for a higher compression ratio.
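The storage and transmission figures quoted above follow from simple arithmetic; the short sketch below reproduces them (illustrative only, using 1 Kbyte = 1024 bytes):

```python
# Worked example of the figures above: storage for a 24-bit 512x512
# colour image, and its transfer time over a 28.8 Kbps modem.
width, height = 512, 512
bits_per_pixel = 24

storage_bytes = width * height * bits_per_pixel // 8
storage_kbytes = storage_bytes / 1024              # -> 768.0 Kbyte

modem_bps = 28_800                                 # 28.8 Kbps
transfer_seconds = storage_bytes * 8 / modem_bps   # -> ~218 s, close to 4 minutes

print(storage_kbytes, transfer_seconds)
```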
Interpixel Redundancy
Another important form of data redundancy is
interpixel redundancy, which is directly related to the
interpixel correlations within an image. Because the value of
any given pixel can be reasonably predicted from the
values of its neighbors, the information carried by
individual pixels is relatively small. Much of the visual
contribution of a single pixel to an image is redundant; it
could have been guessed on the basis of its neighbors'
values. A variety of names, including spatial redundancy,
geometric redundancy, and interframe redundancy,
have been coined to refer to these interpixel
dependencies. In order to reduce the interpixel
redundancies in an image, the 2-D pixel array normally
used for human viewing and interpretation must be
transformed into a more efficient but usually non-visual
format. For example, the differences between adjacent
pixels can be used to represent an image.
Transformations of this type are referred to as mappings.
They are called reversible if the original image elements
can be reconstructed from the transformed data set [1, 2].
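The difference mapping mentioned above can be sketched in a few lines. The function names below are illustrative, not from the paper; a row is represented by its first pixel plus the differences between adjacent pixels, and the mapping is reversible:

```python
def forward_mapping(row):
    """Map a row of pixel values to (first pixel, adjacent differences)."""
    diffs = [row[i] - row[i - 1] for i in range(1, len(row))]
    return row[0], diffs

def inverse_mapping(first, diffs):
    """Reconstruct the original row exactly from the transformed data."""
    row = [first]
    for d in diffs:
        row.append(row[-1] + d)
    return row

row = [100, 102, 101, 101, 105]      # smooth data -> small differences
first, diffs = forward_mapping(row)  # diffs = [2, -1, 0, 4]
assert inverse_mapping(first, diffs) == row   # the mapping is reversible
```

Because neighboring pixels are correlated, the differences cluster near zero and can be coded with fewer bits than the raw pixel values.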
TYPES OF COMPRESSION
Compression can be divided into two categories:
lossless and lossy compression. In lossless
compression, the reconstructed image after decompression
is numerically identical to the original image [3]. In a lossy
compression scheme, the reconstructed image contains
degradation relative to the original; each compression/
decompression step causes some loss of image quality.
In general, lossy techniques provide
greater compression ratios than lossless techniques.
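The distinction can be demonstrated on a toy pixel row. Here zlib stands in for a generic lossless coder and coarse quantization stands in for a lossy step; neither is the scheme proposed in the paper:

```python
import zlib

pixels = bytes([10, 12, 12, 13, 200, 201, 199, 10])

# Lossless: decompression returns a numerically identical image.
restored = zlib.decompress(zlib.compress(pixels))
assert restored == pixels

# Lossy: quantizing to multiples of 16 discards information permanently.
quantized = bytes((p // 16) * 16 for p in pixels)
assert quantized != pixels   # degradation relative to the original
```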
Huffman decoding
After the code has been created, coding and/or
decoding is accomplished in a simple look-up-table
manner [9]. The code itself is an instantaneous, uniquely
decodable block code. It is called a block code because
each source symbol is mapped into a fixed sequence of
code symbols. It is instantaneous because each code
word in a string of code symbols can be decoded without
referencing succeeding symbols. It is uniquely
decodable because any string of code symbols can be
decoded in only one way. Thus, any string of Huffman-
encoded symbols can be decoded by examining the
individual symbols of the string in a left-to-right manner.
For the binary code of Table 3.2, a left-to-right scan of the
encoded string 010100111100 reveals that the first valid
code word is 01010, which is the code for symbol a3.
The next valid code word is 011, which corresponds to symbol
a1; the valid code for symbol a2 is 1, and the valid code for
symbol a6 is 00.
Continuing in this manner reveals the completely
decoded message a5 a2 a6 a4 a3 a1. In this manner the
original image or data can be decompressed using
Huffman decoding as explained above.
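The paper's implementation is in MATLAB; the sketch below shows the same idea in Python. It builds a Huffman code from symbol frequencies, encodes a message, and decodes it with the left-to-right scan described above (the message and symbols are illustrative, not the table from the paper):

```python
import heapq
from collections import Counter

def build_codes(data):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    # Each heap entry: [frequency, unique tie-breaker, {symbol: code-so-far}].
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol source
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Merging two subtrees prepends one more bit to every code inside them.
        codes = {s: "0" + c for s, c in lo[2].items()}
        codes.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], codes])
    return heap[0][2]

def encode(data, codes):
    return "".join(codes[s] for s in data)

def decode(bits, codes):
    """Left-to-right scan: emit a symbol as soon as a valid code word matches."""
    inverse = {c: s for s, c in codes.items()}
    out, word = [], ""
    for bit in bits:
        word += bit
        if word in inverse:        # instantaneous: no look-ahead is needed
            out.append(inverse[word])
            word = ""
    return out

message = "abracadabra"
codes = build_codes(message)
bits = encode(message, codes)
assert decode(bits, codes) == list(message)   # decoding is lossless
```

Because a Huffman code is prefix-free, the greedy "first valid code word wins" scan in `decode` always recovers the original symbol sequence.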
CONCLUSION
The experiments show that higher data redundancy
helps to achieve more compression. The above presented
a new compression and decompression technique based
on Huffman coding and decoding; for scan testing, it
reduces test data volume and test application time.
Experimental results show that a compression ratio of up to
0.8456 is obtained for the above image; hence
we conclude that Huffman coding is an efficient technique
for image compression and decompression to some
extent. As future work, the compression of images for
storage and transmission can be carried out by other
lossless methods of image compression, since, as
concluded above, the decompressed image
is almost the same as the input image, which indicates
that there is no loss of information during transmission.
Other methods of image compression that can be explored
include the JPEG method, entropy coding, etc.