
Basics on Image Compression



Why Do We Need Compression?


Savings in storage and transmission
- Multimedia data (especially image and video) have large volume
- It is difficult to send real-time uncompressed video over current networks

Accommodate relatively slow storage devices
- They cannot play back uncompressed multimedia data in real time


Entropy Coding


Limit of compression => "entropy"
- Measures the uncertainty, i.e. the average amount of information, of a source
- Definition: H = Σ_i p_i log2(1 / p_i) bits per symbol (a Python sketch follows this list)
- e.g., the entropy of the earlier example source is 1.75 bits

No code can represent a source perfectly with fewer than H bits per sample on average
A code can represent a source perfectly with an average of H + ε bits per sample, for any ε > 0 (Shannon's lossless source coding theorem)

"Compressibility" depends on the source
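
To make the definition concrete, here is a minimal Python sketch of the entropy computation. The distribution (1/2, 1/4, 1/8, 1/8) is an assumption standing in for the "earlier example" (which is not shown in this excerpt), chosen because it reproduces the quoted value of 1.75 bits.

```python
import math

def entropy(probs):
    # Shannon entropy: H = sum_i p_i * log2(1 / p_i), in bits per symbol
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Assumed example distribution (not from the slides): 1/2, 1/4, 1/8, 1/8
# H = 0.5*1 + 0.25*2 + 0.125*3 + 0.125*3 = 1.75 bits/symbol
print(entropy([0.5, 0.25, 0.125, 0.125]))  # -> 1.75
```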



Huffman Coding: Pros & Cons



Pros
- Simple to implement (table lookup)
- For a given symbol size, Huffman coding gives the best coding efficiency among symbol-by-symbol prefix codes
Cons
- Source statistics must be known or estimated
- Codeword lengths must be integers, which leaves a gap between the average codeword length and the entropy
Improvements (a construction sketch follows this list)
- Code a group of symbols as a whole, allowing a fractional number of bits per symbol
- Arithmetic coding: fractional number of bits per symbol
- Lempel-Ziv (LZW) coding: "universal", no need to estimate source statistics; fixed-length codewords for variable-length strings of source symbols
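
As a sketch of the Huffman construction (assumed code, not the slide author's), the following Python builds a code table with a heap of partial trees; the integer symbol counts are made up for illustration and mirror the distribution assumed above.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    # Build {symbol: bitstring} from symbol frequencies.
    tiebreak = count()  # keeps the heap from comparing non-comparable nodes
    heap = [(f, next(tiebreak), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least-frequent subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):   # internal node: descend both branches
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                         # leaf: record the symbol's codeword
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_code({"a": 4, "b": 2, "c": 1, "d": 1}))
# e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because every probability here is a power of 1/2, the average codeword length (0.5*1 + 0.25*2 + 0.125*3 + 0.125*3 = 1.75 bits) equals the entropy exactly; for other distributions the integer-length constraint leaves the gap noted above.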



Quantization in Coding Correlated Sequences


Basic principle: remove the redundancy between successive pixels and encode only the residual between the actual and predicted values (see the sketch below)
- The residual usually has a much smaller dynamic range
- Fewer quantization levels suffice for the same MSE => compression
- Compression efficiency depends on the inter-sample redundancy
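
The following is a minimal first-order DPCM sketch under assumed parameters (previous-pixel predictor, uniform quantizer with step 1.0); practical codecs use better predictors and entropy-code the resulting indices.

```python
def dpcm_encode(pixels, step):
    # First-order DPCM: predict each pixel by the previously *reconstructed*
    # pixel, quantize the residual uniformly, and return the quantizer indices.
    indices = []
    pred = 0.0  # assumed initial prediction for the first pixel
    for x in pixels:
        residual = x - pred          # small for smooth (correlated) data
        q = round(residual / step)   # uniform quantizer index
        indices.append(q)
        pred = pred + q * step       # the decoder's reconstruction; predicting
                                     # from it keeps encoder and decoder in sync
    return indices

# A smooth ramp: large pixel values, but tiny residuals after the first sample
print(dpcm_encode([100, 102, 103, 105, 104, 106], step=1.0))
# -> [100, 2, 1, 2, -1, 2]; the small indices need far fewer bits
```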