18-12-2012, 04:36 PM
A STUDY OF VARIOUS IMAGE COMPRESSION TECHNIQUES
Abstract
This paper addresses the area of image compression as it
applies to various fields of image processing. On the basis
of an evaluation and analysis of current image compression
techniques, the paper presents the Principal Component
Analysis (PCA) approach applied to image compression. The PCA
approach is implemented in two ways: a statistical
approach and a neural network approach. The paper also covers
various benefits of using image compression techniques.
INTRODUCTION
IMAGE
An image is essentially a 2-D signal processed by the human
visual system. The signals representing images are usually in
analog form. However, for processing, storage and
transmission by computer applications, they are converted
from analog to digital form. A digital image is basically a
two-dimensional array of pixels.
Images form a significant part of data, particularly in
remote sensing, biomedical, and video-conferencing
applications. As the use of and dependence on information and
computers continues to grow, so too does the need for efficient
ways of storing and transmitting large amounts of data.
IMAGE COMPRESSION
Image compression addresses the problem of reducing the
amount of data required to represent a digital image. It is a
process intended to yield a compact representation of an
image, thereby reducing the image storage/transmission
requirements. Compression is achieved by the removal of one
or more of the three basic data redundancies:
1. Coding Redundancy
2. Interpixel Redundancy
3. Psychovisual Redundancy
Coding redundancy is present when less than optimal code
words are used. Interpixel redundancy results from
correlations between the pixels of an image. Psychovisual
redundancy is due to data that is ignored by the human visual
system (i.e. visually non-essential information).
Image compression techniques reduce the number of bits
required to represent an image by taking advantage of these
redundancies.
LOSSLESS COMPRESSION TECHNIQUES
Run Length Encoding
This is a very simple compression method for sequential
data, and it is especially useful for repetitive data. The technique
replaces sequences of identical symbols (pixels), called runs,
with shorter codes. The run-length code for a gray-scale
image is a sequence of pairs { Vi , Ri }, where Vi is the
intensity of a pixel and Ri is the number of consecutive
pixels with intensity Vi, as shown in the figure. If both Vi
and Ri are represented by one byte, a span of 12 pixels containing
four runs is coded using eight bytes, yielding a compression ratio of 1.5.
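The { Vi , Ri } pairing can be sketched in a few lines of Python; the 12-pixel row below is a hypothetical example with four runs, not the row from the paper's figure:

```python
def rle_encode(pixels):
    """Encode a sequence of pixel intensities as (value, run-length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, r) for v, r in runs]

def rle_decode(runs):
    """Expand (value, run-length) pairs back into the pixel sequence."""
    out = []
    for v, r in runs:
        out.extend([v] * r)
    return out

# Hypothetical 12-pixel gray-scale row with four runs.
row = [52] * 5 + [60] * 3 + [75] * 2 + [80] * 2
pairs = rle_encode(row)
```

Four runs stored as four (value, length) pairs take 8 bytes against 12 bytes of raw pixels, a ratio of 1.5; note that for non-repetitive data the pairs can be *longer* than the raw pixels.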
Huffman Encoding
This is a general technique for coding symbols based on their
statistical occurrence frequencies (probabilities). The pixels in
the image are treated as symbols. The symbols that occur
more frequently are assigned a smaller number of bits, while
the symbols that occur less frequently are assigned a
relatively larger number of bits. Huffman code is a prefix
code. This means that the (binary) code of any symbol is not
the prefix of the code of any other symbol. Most image
coding standards use lossy techniques in the earlier stages of
compression and use Huffman coding as the final step.
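A minimal sketch of Huffman code construction using Python's heapq; the pixel values and frequencies below are made up for illustration, and the sketch assumes at least two distinct symbols:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code: frequent symbols get shorter bit strings."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical pixel stream: intensity 0 dominates, so it gets the shortest code.
code = huffman_code([0] * 7 + [255] * 2 + [128])
```

Because the result is a prefix code, a bit stream of concatenated code words can be decoded unambiguously left to right.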
LZW Coding
LZW (Lempel-Ziv-Welch) is a dictionary-based coding scheme.
Dictionary-based coding can be static or dynamic. In static
dictionary coding, the dictionary is fixed during the encoding and
decoding processes. In dynamic dictionary coding, the
dictionary is updated on the fly. LZW is widely used in the computer
industry and is implemented as the compress command on UNIX.
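The dynamic-dictionary idea can be sketched with a minimal LZW encoder; real implementations such as UNIX compress additionally pack the codes into variable-width bit fields:

```python
def lzw_encode(data):
    """LZW: grow the dictionary on the fly, emitting one code per
    longest-already-seen string of bytes."""
    dictionary = {bytes([i]): i for i in range(256)}  # static seed: all single bytes
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # keep extending the current match
        else:
            out.append(dictionary[w])   # emit code for the longest match
            dictionary[wc] = len(dictionary)  # dynamic update: add new string
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

On repetitive input the dictionary quickly learns longer strings, so the output code count falls below the input byte count (e.g. 7 bytes of b"ABABABA" encode as 4 codes).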
LOSSY COMPRESSION TECHNIQUES
Transformation Coding
In this coding scheme, transforms such as DFT (Discrete
Fourier Transform) and DCT (Discrete Cosine Transform) are
used to change the pixels in the original image into frequency
domain coefficients (called transform coefficients). These
coefficients have several desirable properties. One is the
energy-compaction property, which results in most of the energy
of the original data being concentrated in only a few
significant transform coefficients. This is the basis for
achieving compression: only those few significant
coefficients are selected and the remaining ones are discarded. The
selected coefficients are then considered for further quantization
and entropy encoding. DCT coding has been the most
common approach to transform coding, and it is adopted in
the JPEG image compression standard.
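Energy compaction can be seen with a naive orthonormal 1-D DCT-II; the 8-pixel ramp below is a hypothetical smooth block (JPEG itself applies a 2-D DCT to 8x8 blocks):

```python
import math

def dct(block):
    """Orthonormal 1-D DCT-II: N samples -> N frequency coefficients."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(block))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse of dct(): reconstruct the samples from the coefficients."""
    N = len(coeffs)
    out = []
    for n in range(N):
        s = 0.0
        for k, c in enumerate(coeffs):
            scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += scale * c * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
        out.append(s)
    return out

# A smooth 8-pixel block (a gentle intensity ramp): after the DCT almost all
# of its energy sits in the first coefficient, so the rest can be discarded.
block = [100, 102, 104, 106, 108, 110, 112, 114]
coeffs = dct(block)
kept = coeffs[:2] + [0.0] * 6   # discard the six smallest coefficients
approx = idct(kept)
```

Because the transform is orthonormal it preserves energy, so discarding small coefficients discards only a small fraction of the signal; that is the compression step, before quantization and entropy coding.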
Fractal Coding
The essential idea here is to decompose the image into segments
by using standard image processing techniques such as
color separation, edge detection, and spectrum and texture
analysis. Then each segment is looked up in a library of
fractals. The library actually contains codes called iterated
function system (IFS) codes, which are compact sets of
numbers. Using a systematic procedure, a set of codes for a
given image is determined such that, when the IFS codes are
applied to a suitable set of image blocks, they yield an image
that is a very close approximation of the original. This scheme is
highly effective for compressing images that have good
regularity and self-similarity.
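As a minimal illustration of what an IFS code is, consider the classic three-map Sierpinski IFS (a textbook example, not a code from the paper's library): three contractive affine maps suffice to regenerate an intricately detailed image.

```python
import random

# An IFS code of three contractive maps: each halves the distance to one
# corner of a triangle. Iterating them reproduces the Sierpinski attractor.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def ifs_attractor(n_points=1000, seed=42):
    """Chaos game: apply a randomly chosen map at each step; the visited
    points converge onto the attractor encoded by the IFS."""
    rng = random.Random(seed)
    x, y = 0.3, 0.3
    pts = []
    for i in range(n_points + 20):   # skip the first 20 transient points
        cx, cy = rng.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2
        if i >= 20:
            pts.append((x, y))
    return pts
```

The compression leverage is that the whole attractor is stored as the handful of numbers defining the maps, not as pixels.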
CONCLUSION
This paper presents various types of image compression
techniques. There are basically two classes of compression
techniques: lossless compression and lossy
compression. Comparing the performance of
compression techniques is difficult unless identical data sets
and performance measures are used. Some of these
techniques are well suited to certain applications, such as
security technologies. Some techniques perform well for
certain classes of data and poorly for others. PCA (Principal
Component Analysis) has also found application in image
compression. PCA can be implemented in two forms,
either a statistical approach or a neural network approach. The
PCA neural network provides a new way of generating a
codebook based on the statistical features of the PCA transform
coefficients, leading to lower memory requirements and less
computation.
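The statistical PCA approach can be sketched in miniature: project correlated 2-D samples onto their first principal component, found here by power iteration. The data below are hypothetical pairs of adjacent pixel intensities; this is an illustrative sketch, not the paper's implementation.

```python
import math

def pca_compress(points):
    """Statistical PCA in miniature: project 2-D points onto their first
    principal component, storing one number per point instead of two."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # 2x2 covariance matrix of the centered data.
    cxx = sum(a * a for a, _ in centered) / n
    cxy = sum(a * b for a, b in centered) / n
    cyy = sum(b * b for _, b in centered) / n
    # Power iteration: repeated multiplication by the covariance matrix
    # converges to its dominant eigenvector, the first principal component.
    vx, vy = 1.0, 0.0
    for _ in range(100):
        wx, wy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = math.hypot(wx, wy)
        vx, vy = wx / norm, wy / norm
    scores = [a * vx + b * vy for a, b in centered]  # 1 value per point
    return (mx, my), (vx, vy), scores

def pca_decompress(mean, axis, scores):
    """Approximate reconstruction of the 2-D points from the 1-D scores."""
    (mx, my), (vx, vy) = mean, axis
    return [(mx + s * vx, my + s * vy) for s in scores]
```

Because adjacent pixels are highly correlated, one component captures nearly all the variance, so halving the stored data costs very little reconstruction error.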