27-06-2012, 12:50 PM
Basics on Image Compression
compressBasic.ppt (Size: 378 KB / Downloads: 44)
Why Need Compression?
Savings in storage and transmission
multimedia data (esp. images and video) have a large data volume
it is difficult to send uncompressed video in real time over current networks
Accommodate relatively slow storage devices
they cannot play back uncompressed multimedia data in real time
Entropy Coding
Limit of compression => “Entropy”
Measures the uncertainty or avg. amount of information of a source
Definition: H = Σ_i p_i log2(1 / p_i) bits
e.g., the entropy of the previous example is 1.75 bits/symbol
Can't represent a source perfectly with fewer than H bits per sample on average
Can represent a source perfectly with an average of H + ε bits per sample, for any ε > 0 (Shannon Lossless Coding Theorem)
"Compressibility" depends on the source
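A minimal sketch of the entropy formula above. The four-symbol source with probabilities {1/2, 1/4, 1/8, 1/8} is an assumed example chosen because it yields exactly H = 1.75 bits/symbol, matching the figure quoted from the slides:

```python
import math

def entropy(probs):
    """Shannon entropy H = sum_i p_i * log2(1/p_i), in bits per symbol."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Hypothetical source with probabilities {1/2, 1/4, 1/8, 1/8}:
# H = 0.5*1 + 0.25*2 + 0.125*3 + 0.125*3 = 1.75 bits/symbol
print(entropy([0.5, 0.25, 0.125, 0.125]))  # → 1.75
```

No code shorter than an average of 1.75 bits/symbol can represent this source losslessly.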
Huffman Coding: Pros & Cons
Pro
Simplicity in implementation (table lookup)
For a given symbol alphabet, Huffman coding gives the best coding efficiency among prefix codes
Con
Need to obtain source statistics
Codeword lengths must be integers => leads to a gap between the average codelength and the entropy
Improvement
Code a group of symbols as a whole: allow fractional # bits/symbol
Arithmetic coding: fractional # bits/symbol
Lempel-Ziv coding or LZW algorithm
“universal”, no need to estimate source statistics
fixed-length codewords for variable-length source strings
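To make the Huffman pros and cons concrete, here is a small heap-based Huffman coder, a sketch rather than a production implementation. It reuses the assumed {1/2, 1/4, 1/8, 1/8} source; because those probabilities are dyadic (powers of 1/2), Huffman's average codelength hits the entropy exactly, while non-dyadic sources show the integer-length gap mentioned above:

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman prefix code from a {symbol: frequency} map.
    Returns {symbol: bitstring}."""
    # Heap entries: (weight, tiebreak, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol source
        return {s: "0" for s in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two lowest-weight subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # left branch
        merged.update({s: "1" + c for s, c in c2.items()})  # right branch
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Assumed source: a:1/2, b:1/4, c:1/8, d:1/8 (frequencies out of 8)
freqs = {"a": 4, "b": 2, "c": 1, "d": 1}
codes = huffman_code(freqs)
avg_len = sum(freqs[s] * len(c) for s, c in codes.items()) / 8
# avg_len == 1.75, equal to the source entropy (dyadic probabilities)
```

For a source like {0.9, 0.1}, the shortest possible codewords are still 1 bit each, so the average codelength (1 bit) far exceeds the entropy (~0.47 bits); grouping symbols or arithmetic coding closes that gap.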
Quantization in Coding Correlated Sequences
Basic principle: remove redundancy between successive pixels and encode only the residual between actual and predicted values
The residual usually has a much smaller dynamic range
Allows fewer quantization levels for the same MSE => compression
Compression efficiency depends on inter-sample redundancy
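The predict-and-quantize idea above can be sketched as a first-order DPCM loop. The predictor (previous reconstructed sample), the uniform quantizer step, and the sample values are all assumptions for illustration; note the encoder predicts from the *reconstructed* value so it stays in sync with the decoder:

```python
def dpcm_encode(samples, step):
    """First-order DPCM: predict each sample as the previous reconstructed
    value, quantize the residual uniformly, emit quantizer indices."""
    indices, recon = [], []
    pred = 0
    for x in samples:
        residual = x - pred
        q = round(residual / step)   # uniform quantizer index
        indices.append(q)
        pred = pred + q * step       # decoder-matched reconstruction
        recon.append(pred)
    return indices, recon

def dpcm_decode(indices, step):
    """Rebuild the sequence by accumulating dequantized residuals."""
    out, pred = [], 0
    for q in indices:
        pred = pred + q * step
        out.append(pred)
    return out

# Hypothetical slowly varying scanline: after the first sample, the
# residuals (and hence the indices) have a much smaller dynamic range
# than the raw 0-255 pixel values, so fewer levels suffice.
line = [100, 102, 104, 103, 105, 108, 110]
idx, recon = dpcm_encode(line, step=2)
```

With step=2 the reconstruction error is at most 1 per sample, while every index after the first fits in a couple of bits — the source of the compression gain when inter-sample redundancy is high.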