Still Image Compression Using Wavelet Transform
Still Image Compression .pdf (Size: 526.44 KB / Downloads: 83)
Abstract
With the growth of multimedia technology over the past decades, the demand for
digital information has increased dramatically. This enormous demand is difficult for
current technology to handle. One approach to this problem is to compress
the information by removing the redundancies present in it. Such lossy compression
schemes are often used to compress information such as digital images.
The main objective of this thesis is to study and implement the operations used in a lossy
compression scheme to compress two-dimensional images. This scheme consists of three
operations: transform, quantization and entropy encoding. First, a wavelet transform is
used to analyze the non-stationary phenomena that are often found in two-dimensional
digital images. Then, scalar quantization is applied to remove redundancies in the images
with no visible loss under normal viewing. Finally, an entropy encoder packs the quantized
data efficiently, producing a large reduction in image size. To reconstruct the compressed
image, the operations are reversed.
Image Compression
The Need for Compression
With the rapid development of Internet and multimedia technologies, the
amount of information handled by computers has grown exponentially over
the past decades. This information requires large amounts of storage space and
transmission bandwidth that current technology cannot handle technically
and economically. One possible solution to this problem is to compress the
information so that the storage space and transmission time can be reduced.
Objective
This thesis aims to study and understand the general operations used to compress
two-dimensional gray-scale images and to develop an application that allows the
compression and reconstruction to be carried out on such images. The application
developed aims to achieve:
• Minimum distortion
• High compression ratio
• Fast computation time
To compress an image, the operations used are a linear transform, quantization and
entropy encoding. This thesis will study the wavelet-based transform and
discuss the superior features that it has over the standard Fourier transform. It will
also look into quantization and discuss how this operation helps to reduce the
volume of the image data before it is packed efficiently by the entropy coding
operation. To reconstruct the image, an inverse operation is performed at every
stage of the system, in the reverse order of the image decomposition.
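As a rough illustration of how these three stages fit together (a sketch written for this post, not the implementation described in the thesis), the following Python example uses the PyWavelets package for the transform stage; the 'haar' wavelet, the quantization step size and the use of zlib as a stand-in for the entropy coder are all illustrative assumptions.

import numpy as np
import pywt
import zlib

def compress(image, wavelet='haar', step=16.0):
    # 1. Linear transform: single-level 2-D wavelet decomposition.
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    # 2. Scalar quantization: map coefficients to integer indices.
    bands = [np.round(b / step).astype(np.int16) for b in (cA, cH, cV, cD)]
    shapes = [b.shape for b in bands]
    # 3. Entropy coding: zlib stands in for the Huffman coder here.
    payload = zlib.compress(np.concatenate([b.ravel() for b in bands]).tobytes())
    return payload, shapes

def reconstruct(payload, shapes, wavelet='haar', step=16.0):
    # Reverse the operations: entropy decode, dequantize, inverse transform.
    flat = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    bands, offset = [], 0
    for shape in shapes:
        size = shape[0] * shape[1]
        bands.append(flat[offset:offset + size].reshape(shape) * step)
        offset += size
    cA, cH, cV, cD = bands
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

img = np.random.randint(0, 256, (64, 64)).astype(float)
payload, shapes = compress(img)
out = reconstruct(payload, shapes)
print('compressed bytes:', len(payload), 'max error:', np.abs(img - out).max())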
Linear Transform
Introduction
In recent years, the wavelet transform has gained the attention of many people in
the field of image compression. What makes this transform a better choice than the
Fourier transform is its ability to localize in frequency and time simultaneously.
This is extremely useful when analyzing time-varying or non-stationary
phenomena that are commonly found in images [5]. In images, fine information
content is generally found in the high frequencies, whereas coarse information
content is found in the low frequencies. The wavelet transform uses its multi-resolution
capability to decompose the image into multiple frequency bands.
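The short Python sketch below illustrates this multi-resolution decomposition using the PyWavelets package; the 'db2' wavelet, the two decomposition levels and the random placeholder image are assumptions made for illustration, not choices taken from the thesis.

import numpy as np
import pywt

image = np.random.rand(128, 128)  # placeholder gray-scale image
coeffs = pywt.wavedec2(image, 'db2', level=2)

# coeffs[0] is the coarse low-frequency approximation; each following tuple
# holds the horizontal, vertical and diagonal detail (high-frequency) bands
# of one resolution level, ordered from coarsest to finest.
cA2 = coeffs[0]
print('approximation band:', cA2.shape)
for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print('detail bands, level', level, ':', cH.shape, cV.shape, cD.shape)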
The wavelet transform has its roots in the Fourier transform. To serve as
background information, we will briefly review the Fourier transform and how
it led researchers to develop the Short-Time Fourier Transform (STFT), which
in turn led to the birth of the wavelet transform. We will also present an
intuitive understanding of the wavelet transform and its usefulness in image
compression.
Huffman Encoding [Ref. 15, 17]
Huffman encoding is a statistical compression technique developed by David
Huffman. It uses the probability of occurrence of the symbols to determine the
codeword that represents each symbol. The codeword lengths are variable:
symbols with a higher probability of occurrence are assigned shorter codewords,
while symbols with a lower probability of occurrence are assigned longer
codewords. As a result, the average codeword length per symbol is reduced,
which leads to a smaller output data size.
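The following Python sketch builds a Huffman code table from symbol frequencies in the way described above; it is a generic illustration written for this post, not the encoder implemented in the thesis.

import heapq
from collections import Counter

def huffman_codes(symbols):
    freq = Counter(symbols)
    # Each heap entry: (weight, tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {s: '0' for s in heap[0][2]}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # two least probable nodes
        w2, i2, t2 = heapq.heappop(heap)
        # Prepend one bit to every codeword in each merged subtree.
        merged = {s: '0' + c for s, c in t1.items()}
        merged.update({s: '1' + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

codes = huffman_codes('this is an example of huffman encoding')
for sym, code in sorted(codes.items(), key=lambda kv: len(kv[1])):
    print(repr(sym), code)

Note how the most frequent symbols (the space character in this example) end up with the shortest codewords, which is what reduces the average codeword length per symbol.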
Uniform Dead-Zone Quantization
In this quantization scheme, a threshold is specified and applied to the decomposed image.
Any wavelet coefficients whose magnitude falls within the specified threshold
are set to zero, as shown in Eq. 3.5. As a result, there is a large number of zero values
in the quantized image, which this quantizer can take advantage of. Figure 5.7
shows the algorithm used to implement the uniform dead-zone
quantization.
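Since Eq. 3.5 and Figure 5.7 are in the attached thesis and are not reproduced in this post, the Python sketch below shows one common form of dead-zone quantization; the threshold and step values and the bin-centre reconstruction rule are assumptions made for illustration.

import numpy as np

def deadzone_quantize(coeffs, threshold=10.0, step=8.0):
    # Coefficients whose magnitude falls inside the dead zone become zero;
    # the remaining coefficients are uniformly quantized to signed indices.
    indices = np.sign(coeffs) * np.floor((np.abs(coeffs) - threshold) / step + 1)
    return np.where(np.abs(coeffs) < threshold, 0, indices).astype(int)

def deadzone_dequantize(indices, threshold=10.0, step=8.0):
    # Reconstruct each nonzero index at the centre of its quantization bin.
    magnitude = threshold + (np.abs(indices) - 0.5) * step
    return np.where(indices == 0, 0.0, np.sign(indices) * magnitude)

coeffs = np.array([-25.0, -9.0, 3.0, 12.0, 40.0])
q = deadzone_quantize(coeffs)
print(q)                        # quantized indices: -2, 0, 0, 1, 4
print(deadzone_dequantize(q))   # reconstructed values: -22, 0, 0, 14, 38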