18-08-2014, 03:03 PM
Digital Image Compression Comparisons using DPCM and
DPCM with LMS Algorithm
Digital Image.pdf (Size: 571.43 KB / Downloads: 13)
ABSTRACT
Image processing modifies pictures to improve them, extract information,
or change their structure (composition, image editing, image
compression, etc.). Images can be processed by optical, photographic,
and electronic means, but processing with digital computers is the
most common method because digital methods are fast, flexible, and
precise. Compression reduces the redundancy in the image data in
order to optimize transmission and storage. DPCM and the LMS
algorithm may be used to remove unused bits from the image for
compression. In this paper we compare the compressed-image results
for 1- and 3-bit DPCM quantization and for DPCM with the LMS
algorithm, and we also compare their histograms and prediction
mean-square error at approximately equal distortion levels. The LMS
algorithm may provide a reduction of almost 2 bits per pixel in
transmitted bit rate compared to DPCM when the distortion levels of
the two methods are approximately the same. The LMS algorithm may be
used to adapt the coefficients of an adaptive prediction filter for
image source coding. With the method used in this paper we decrease
the compressed image size, the distortion, and the estimation error.
INTRODUCTION
1.1 The basic idea of the image compression process
In general, the reduction of image data is achieved by the removal
of redundant [1] data. In mathematical terms, compression may be
defined as transforming the two-dimensional pixel array into a
statistically uncorrelated data set. Image compression is usually
applied prior to the storage or transmission of the image data;
later, the compressed image is decompressed to recover the original
image, or an image close to the original.
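The predict-quantize-reconstruct loop that DPCM uses to remove this redundancy can be sketched as follows. This is a minimal illustrative 1-bit DPCM on a single row of pixels, not the paper's exact scheme; the quantizer step size and the initial predictor value of 128 are our own assumptions.

```python
# Minimal 1-D, 1-bit DPCM sketch (illustrative assumptions: step size 16,
# initial predictor 128). Each pixel is predicted from the previously
# reconstructed pixel, and only the sign of the prediction error is "sent".

def dpcm_1bit(row, step=16):
    """Encode/decode a row of 8-bit pixel values with a 1-bit quantizer."""
    recon = []
    pred = 128                            # assumed initial prediction
    for px in row:
        err = px - pred                   # prediction error
        q = step if err >= 0 else -step   # 1-bit quantization of the error
        rec = max(0, min(255, pred + q))  # decoder-side reconstruction
        recon.append(rec)
        pred = rec                        # next prediction uses reconstruction
    return recon

row = [120, 124, 130, 129, 140, 180, 181, 179]
print(dpcm_1bit(row))  # → [112, 128, 144, 128, 144, 160, 176, 192]
```

Note that the encoder predicts from the *reconstructed* value, not the original pixel, so encoder and decoder stay in lockstep; transmitting only one bit per pixel is what makes this a lossy scheme.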
Compressing an image is significantly different from
compressing raw binary data. General-purpose compression programs
can of course be used on images, but the result is less than
optimal, because images have statistical properties that can be
exploited by encoders specifically designed for them. In addition,
some of the finer details in the image can be sacrificed to save a
little more bandwidth or storage space, which means lossy
compression techniques can be used in this area. Lossless
compression involves compressing data that, when decompressed, is
an exact replica of the original. This is required for binary data
such as executables and documents, which must be reproduced exactly
when decompressed. Images (and music), on the other hand, need not
be reproduced exactly: an approximation of the original image is
enough for most purposes, as long as the error between the original
and the compressed image is tolerable.
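The "tolerable error" above is commonly measured as mean-square error (MSE), the distortion measure the paper compares, or equivalently as PSNR. A small sketch (function names and sample values are ours):

```python
# Hedged sketch: mean-square error and PSNR between an original image
# (flattened to a list of pixel values) and a lossy reconstruction.
import math

def mse(orig, recon):
    """Mean-square error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, peak=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(orig, recon)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

orig  = [120, 124, 130, 129]
recon = [118, 125, 128, 131]
print(mse(orig, recon))   # → 3.25
print(psnr(orig, recon))
```

Two codecs are then compared fairly by matching their MSE (or PSNR) and asking which one needs fewer bits per pixel, which is exactly the comparison this paper makes between DPCM and DPCM with LMS.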
SIMULATION ENVIRONMENT
This paper uses DPCM with fixed weight coefficients and the LMS
algorithm with adaptive tap-weight coefficients. These parameters,
shown in the table below, are the ones that resulted from our
simulation of the DPCM and LMS algorithms.
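The difference between the two configurations can be sketched as follows: fixed-coefficient DPCM keeps its predictor weights constant, whereas LMS nudges the tap weights after every sample in proportion to the prediction error. The tap count, step size `mu`, and function names below are illustrative assumptions, not the simulation parameters from the paper's table.

```python
# Hedged sketch of an LMS-adapted linear predictor for DPCM-style coding.
# Assumptions (ours, not the paper's): 2 taps, step size mu = 1e-5,
# weights initialized to zero.

def lms_predict(samples, taps=2, mu=1e-5):
    """Run an LMS linear predictor over `samples`; return final weights
    and the per-sample prediction errors."""
    w = [0.0] * taps        # adaptive tap weights
    hist = [0.0] * taps     # the last `taps` input samples
    errors = []
    for x in samples:
        pred = sum(wi * hi for wi, hi in zip(w, hist))
        err = x - pred                  # prediction error (what DPCM quantizes)
        errors.append(err)
        # LMS update: move each weight along the error gradient
        w = [wi + mu * err * hi for wi, hi in zip(w, hist)]
        hist = [x] + hist[:-1]          # shift the input history
    return w, errors

weights, errs = lms_predict([100.0] * 20)
print(errs[0], errs[-1])  # error shrinks as the weights adapt
```

As the weights converge, the prediction errors shrink, so they can be coded with fewer bits at the same distortion, which is the mechanism behind the bit-rate reduction reported in this paper.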
Conclusion
This paper has used fixed weight coefficients for DPCM, while the
LMS algorithm uses adaptive coefficients, for image compression at
the same distortion level. A comparison of DPCM against DPCM with
the LMS algorithm, with respect to image compression, has been
carried out based on their coefficients and the number of bits.
Results are presented which show that LMS may provide a greater
reduction in transmitted bit rate than DPCM when the distortion
levels of the two methods are approximately the same. The results
also show that the LMS algorithm has the lowest computational
complexity while achieving a greater reduction in compressed image
size than DPCM at the same distortion. The LMS algorithm can be
used in fixed-bit-rate environments to decrease both the
reconstructed image size and the distortion.