14-11-2012, 04:33 PM
DIGITAL VIDEO AND IMAGE COMPRESSION
Introduction
Reducing the amount of data needed to reproduce images or video (compression) saves storage space, increases access speed, and is the only way to achieve digital motion video on personal computers.
This seminar presents general techniques for video and image compression, and then describes several standardized compression systems, including JPEG, MPEG, p*64, and DVI Technology.
EVALUATING A COMPRESSION SYSTEM
In order to compare video compression systems, one must have ways to evaluate compression performance. Three key parameters need to be considered:
Amount or degree of compression
Image quality
Speed of compression or decompression
In addition, we must also look at the hardware and software required by each compression method.
How Much Compression?
Compression performance is often specified by giving the ratio of input data to output data for the compression process (the compression ratio). This measure is a dangerous one unless you are careful to specify the input data format in a way that is truly comparable to the output data format. For example, the compressor might have used a 512 x 480, 24 bits-per-pixel (bpp) image as the input to the compression process, which then delivered a bitstream of 15,000 bytes. In that case, the input data was 737,280 bytes, which gives a compression ratio of 737,280/15,000 ≈ 49:1. However, the output display has only 256 x 240 pixels, so a factor of 4 of that compression was achieved simply by reducing the resolution. Therefore, the compression ratio with equal input and output resolutions is more like 12:1. A similar argument can be made for the bpp relationship between input and output; the output quality may not be anywhere near 24 bpp.
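The arithmetic in this example is easy to verify; the short sketch below simply redoes the calculation with the figures quoted above (512 x 480 at 24 bpp compressed to 15,000 bytes, displayed at 256 x 240).

```python
# Recompute the compression-ratio example from the text.
input_w, input_h, input_bpp = 512, 480, 24
compressed_bytes = 15_000

input_bytes = input_w * input_h * input_bpp // 8   # 737,280 bytes
naive_ratio = input_bytes / compressed_bytes       # ~49:1

# The decompressed display is only 256 x 240, so a factor of 4 of that
# "compression" is really just resolution reduction.
output_w, output_h = 256, 240
resolution_factor = (input_w * input_h) / (output_w * output_h)  # 4.0
fair_ratio = naive_ratio / resolution_factor       # ~12:1

print(round(naive_ratio, 1), resolution_factor, round(fair_ratio, 1))
```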
How Good Is the Picture?
In discussing the picture-quality performance of a compression system, it is helpful to divide the world of compression into two parts: lossless compression and lossy compression. Lossless compression means that the reproduced image is not changed in any way by the compression/decompression process; therefore, we do not have to worry about picture quality for a lossless system, because the output picture will be exactly the same as the input picture. Lossless compression is possible because we can use more efficient methods of data transmission than the pixel-by-pixel PCM format that comes from a digitizer.
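A quick way to see losslessness in practice is a general-purpose entropy coder such as Python's zlib module. This is only an illustration (not one of the image-specific methods discussed here), but it demonstrates the defining property: the decompressed data is bit-for-bit identical to the input.

```python
import zlib

# A synthetic 8-bit "image" with large flat areas, which compresses well.
pixels = bytes([30] * 4000 + [200] * 4000)   # 8,000 bytes of pixel data

compressed = zlib.compress(pixels, level=9)
restored = zlib.decompress(compressed)

assert restored == pixels                    # bit-for-bit identical
print(len(pixels), len(compressed))          # far fewer bytes than the input
```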
On the other hand, lossy compression systems by definition do make some change to the image; something is different. The trick is making that difference hard for the viewer to see. Lossy compression systems may introduce any of the digital video artifacts, or they may even create some unique artifacts of their own. None of these effects is easy to quantify, and final decisions about compression systems, or about any specific compressed image, will usually have to be made after a subjective evaluation.
There is no good alternative to looking at test pictures. The various measures of analog picture quality (signal-to-noise ratio, resolution, color errors, etc.) may be useful in some cases, but only after viewing real pictures to make sure that the right artifacts are being measured.
How Fast Does It Compress or Decompress?
In most cases of storing still images, compression speed is less critical than decompression speed; since we are compressing the image ahead of time to store it, we can usually take our time in that process. On the other hand, decompression usually takes place while the user is waiting for the result, so speed is much more important. With motion video compression, there is also a need for fast compression in order to capture motion video in real time as it comes from a camera or VCR. In any case, compression and decompression speed is usually easy to specify and measure.
What Hardware and Software Does It Take?
Some amount of compression and decompression can be done in software using standard PC hardware. Except with very simple algorithms, this approach quickly runs into speed problems: the process takes too long, and simple algorithms do not provide the best compression. This is a moving target because of the continued advance in the processing power of PCs; however, at present, most systems will benefit from some hardware to speed up or accelerate compression/decompression.
Interpolative Techniques
Interpolative compression at the pixel level consists of transmitting a subset of the pixels and using interpolation to reconstruct the intervening pixels. Within our definition of compression, this is not a valid technique for use on entire pixels because we are effectively reducing the number of independent pixels contained in the output, and that is not compression. The interpolation in that case is simply a means for reducing the visibility of pixellation, but the output pixel count is still equal to the subset. However, there is one case where interpolation is a valid technique. It can be used just on the chrominance part of the image while the luminance part is not interpolated. This is called color subsampling, and it is most valuable with luminance-chrominance component images (YUV, YIQ, etc.).
The color components I and Q of the YIQ format (in NTSC color television) were carefully chosen by the developers so that they could be transmitted at reduced resolution. This works because a viewer has poor acuity for color changes in an image, so the lower resolution of the color components really is not noticed. The same is true for YUV components, which are used in PAL television systems.
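The color-subsampling idea can be sketched in a few lines. This is a minimal illustration (not any particular standard's sampling pattern), assuming the simplest scheme: each chrominance plane (U or V) is averaged over 2x2 blocks, and the samples are replicated back by nearest-neighbor interpolation on reconstruction, while the luminance plane is carried at full resolution, untouched.

```python
# Sketch of chroma subsampling on one chrominance plane of a YUV image.
def subsample_2x2(plane):
    """Average each 2x2 block of a plane (list of equal-length rows)."""
    return [
        [(plane[r][c] + plane[r][c + 1] +
          plane[r + 1][c] + plane[r + 1][c + 1]) // 4
         for c in range(0, len(plane[0]), 2)]
        for r in range(0, len(plane), 2)
    ]

def upsample_2x2(plane):
    """Replicate each sample into a 2x2 block (nearest-neighbor)."""
    out = []
    for row in plane:
        expanded = [v for v in row for _ in (0, 1)]
        out.append(expanded)
        out.append(list(expanded))
    return out

# A 4x4 chrominance plane with four nearly flat regions.
u = [[100, 102,  50,  52],
     [104, 106,  54,  56],
     [ 10,  12, 200, 202],
     [ 14,  16, 204, 206]]

u_small = subsample_2x2(u)      # 2x2: one quarter of the chroma data
u_back = upsample_2x2(u_small)  # back to 4x4, slightly smoothed
print(u_small)                  # [[103, 53], [13, 203]]
```

Because the viewer's acuity for color detail is low, the smoothing introduced in `u_back` goes largely unnoticed, while the chrominance data is cut to one quarter.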
Statistical Coding
Another means of compression is to take advantage of the statistical distribution of the pixel values of an image, or of the statistics of the data created by one of the techniques discussed above. These are called statistical coding techniques, or sometimes entropy coding, and they may be contained either in the compression algorithm itself, or applied separately as part of the bit assignment following another compression technique. The usual case for image data is that all possible values are not equally probable; there will be some kind of nonuniform distribution of the values. Another way of saying that is: some data values will occur more frequently than other data values. We can set up a coding technique which codes the more frequently occurring values with words using fewer bits, while the less frequently occurring values are coded with longer words. This results in a reduced number of bits in the final bitstream, and it can be a lossless technique. One widely used form of this coding is called Huffman coding.
The above type of coding has some overhead, however, in that we must tell the decompression system how to interpret a variable-word-length bitstream. This is normally done by transmitting a table (called a code book) ahead of time. This is simply a table which tells how to decode the bitstream back to the original values. The code book may be transmitted once for each individual image, or it may even be transmitted for individual blocks of a single image. On the compression side, there is overhead needed to figure out the code book—the data statistics must be calculated for an image or for each block.
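A minimal Huffman coder, sketched below, illustrates both points: frequent values get shorter code words, and the code book must accompany the bitstream so the decoder can interpret the variable-word-length data. The data values and helper names are illustrative only.

```python
import heapq
from collections import Counter

def huffman_code_book(data):
    """Build a code book {symbol: bitstring} from symbol frequencies."""
    counts = Counter(data)
    if len(counts) == 1:                      # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    # Heap entries: (frequency, unique tiebreaker, tree node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:                      # merge the two rarest subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    book = {}
    def walk(node, prefix):                   # assign 0/1 along tree paths
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            book[node] = prefix
    walk(heap[0][2], "")
    return book

def huffman_decode(bits, book):
    """Decode a bitstring using the transmitted code book."""
    inverse = {code: sym for sym, code in book.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:                    # Huffman codes are prefix-free
            out.append(inverse[cur])
            cur = ""
    return out

# Pixel value 5 dominates, so it should get the shortest code word.
data = [5] * 8 + [9] * 4 + [2] * 2 + [7] * 2
book = huffman_code_book(data)
encoded = "".join(book[s] for s in data)
assert huffman_decode(encoded, book) == data  # lossless round trip
print(book, len(encoded))                     # 28 bits vs 128 at 8 bpp
```

Note that `book` is exactly the per-image overhead described above: without it, the decoder cannot parse the bitstream.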
Motion Video Compression Techniques
In the still-image compression techniques that we discussed above, we gave little consideration to the matter of compression or decompression speed. With still images, processing only needs to be fast enough that the user does not get bored waiting for things to happen. However, when one begins to think about motion video compression systems, the speed issue becomes overwhelming. Processing of a single image in one second or less is usually satisfactory for stills. However, motion video implies a high enough frame rate to produce subjectively smooth motion, which for most people is 15 frames per second or higher. Full-motion video as used here refers to normal television frame rates: 25 frames per second for European systems, and 30 frames per second for North America and Japan. These numbers mean that our digital video system must deliver a new image every 33-40 milliseconds. If the system cannot do that, motion will be slow or jerky, and the system will quickly be judged unacceptable.
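The timing budget can be made concrete with a quick calculation; the 512 x 480, 24-bpp frame size is borrowed from the earlier compression-ratio example.

```python
# Per-frame time budget at full-motion television frame rates.
for fps in (25, 30):
    print(fps, "fps ->", round(1000 / fps, 1), "ms per frame")

# Uncompressed data rate for a 512 x 480, 24-bpp frame at 30 frames/s:
# roughly 22 MB every second, which is why compression is the only
# practical route to digital motion video on personal computers.
bytes_per_frame = 512 * 480 * 24 // 8
rate_mb_per_s = bytes_per_frame * 30 / 1_000_000
print(round(rate_mb_per_s, 1), "MB/s uncompressed")
```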