18-12-2012, 04:06 PM
Lossless Image Compression Algorithm For Transmitting Over Low Bandwidth Line
INTRODUCTION
Lossless or reversible compression refers to compression techniques in which the reconstructed data exactly matches the original. Near-lossless compression denotes compression methods that give quantitative bounds on the nature of the loss introduced. Such techniques provide the guarantee that no pixel difference between the original and the compressed image exceeds a given value. Both lossless and near-lossless compression find potential applications in remote sensing, medical and space imaging, and multispectral image archiving. In these applications the volume of the data would call for lossy compression for practical storage or transmission. However, the need to preserve the validity and precision of data for subsequent diagnosis, forensic analysis, and scientific or clinical measurements often imposes strict constraints on the reconstruction error. In such situations near-lossless compression becomes a viable solution: on the one hand, it provides significantly higher compression gains than lossless algorithms, and on the other hand it provides guaranteed bounds on the nature of the loss introduced by compression. Another way to deal with the lossy-lossless dilemma faced in applications such as medical imaging and remote sensing is to use a successively refinable compression technique that provides a bit stream leading to a progressive reconstruction of the image. Using wavelets, for example, one can obtain an embedded bit stream from which various levels of rate and distortion can be obtained. In fact, with reversible integer wavelets, one gets a progressive reconstruction capability all the way to lossless recovery of the original.
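The near-lossless guarantee above is a hard bound on the per-pixel error, not an average. A minimal sketch of how that bound could be checked (the function name and the sample arrays are illustrative, not from the paper):

```python
import numpy as np

def is_near_lossless(original, reconstructed, delta):
    """Return True if no pixel differs from the original by more than delta.

    Casting to a signed type avoids uint8 wrap-around when subtracting.
    """
    diff = np.abs(original.astype(np.int32) - reconstructed.astype(np.int32))
    return int(diff.max()) <= delta

original = np.array([[10, 200], [55, 128]], dtype=np.uint8)
reconstructed = np.array([[12, 199], [54, 130]], dtype=np.uint8)

print(is_near_lossless(original, reconstructed, delta=2))  # True: max error is 2
print(is_near_lossless(original, reconstructed, delta=1))  # False
```

Note the cast to a signed integer type before subtracting: subtracting `uint8` arrays directly would wrap around and hide large errors.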
Such techniques have been explored for potential use in tele-radiology, where a physician typically requests portions of an image at increased quality (including lossless reconstruction) while accepting initial renderings and unimportant portions at lower quality, thus reducing the overall bandwidth requirement. In fact, the new still-image compression standard, JPEG 2000, provides such features.
Volume 2, issue 2, February 2012 www.ijarcsse.com
© 2012, IJARCSSE All Rights Reserved
METHODS FOR LOSSY COMPRESSION
Reducing the color space: the image is restricted to its most common colors. The selected colors are specified in the color palette in the header of the compressed image, and each pixel simply references the index of a color in the palette. This method can be combined with dithering to avoid posterization.
Chroma subsampling: this takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image.
Transform coding: this is the most commonly used method. A Fourier-related transform, such as the DCT or the wavelet transform, is applied, followed by quantization and entropy coding.
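The transform-coding pipeline (forward transform, quantization, inverse transform) can be sketched on a single 8x8 block. This is an illustrative toy, not the paper's method: the DCT is built by hand with NumPy, the quantizer is a single uniform step rather than a JPEG quantization table, and entropy coding is omitted.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix, as used in JPEG-style transform coding.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def transform_code_block(block, q_step=16):
    """Toy transform-coding round trip on one square block:
    forward 2-D DCT -> uniform quantization -> dequantization -> inverse DCT."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T               # forward 2-D DCT (separable)
    quantized = np.round(coeffs / q_step)  # quantization: the lossy step
    dequantized = quantized * q_step       # what the decoder can recover
    return C.T @ dequantized @ C           # inverse 2-D DCT

block = np.arange(64, dtype=np.float64).reshape(8, 8)
recon = transform_code_block(block, q_step=16)
print(np.abs(block - recon).max())  # small, nonzero reconstruction error
```

All the information loss happens in the `np.round` call; the DCT itself is invertible, which is why the same transform family can also serve lossless coding when an integer-reversible variant is used.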
HUFFMAN CODING
Huffman coding is based on the frequency of occurrence of a data item (a pixel, in images). The principle is to use fewer bits to encode the data that occurs more frequently. Codes are stored in a code book, which may be constructed for each image or for a set of images. In all cases the code book plus the encoded data must be transmitted to enable decoding.
Decoding is trivial as long as the coding table (the statistics) is sent before the data. (There is a small bit overhead for sending the table, negligible if the data file is big.) Huffman codes have the unique prefix property: no code is a prefix of any other code (all symbols sit at the leaf nodes of the code tree), so the decoder can parse the bit stream unambiguously. If prior statistics are available and accurate, then Huffman coding performs very well.
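A minimal sketch of code-book construction, assuming Python's standard library (the function name and the sample pixel values are illustrative): low-frequency symbols are merged first, so the most frequent symbol ends up closest to the root and gets the shortest code.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code book {symbol: bit string} from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Merging prepends one more bit to every code in each subtree.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

pixels = [0, 0, 0, 0, 1, 1, 2, 3]
book = huffman_code(pixels)
encoded = "".join(book[p] for p in pixels)
print(book)  # the most frequent pixel value (0) gets the shortest code
print(len(encoded), "bits vs", len(pixels) * 2, "bits fixed-length")  # 14 vs 16
```

Because no code in `book` is a prefix of another (the unique prefix property above), a decoder can walk the bit string symbol by symbol without any separators.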