21-02-2012, 11:25 AM
sir
I want a complete seminar report on image compression.
my Email-id mukeshkumarroy432[at]yahoo.in
20-08-2012, 12:51 PM
Image Compression Techniques
image compression report.docx (Size: 169.22 KB / Downloads: 72)

INTRODUCTION

Compression converts data into a representation that requires less storage space.

Why Compression?

It is useful to revisit the reasons why data compression is needed. One obvious reason is to save the cost of disk space. This may seem strange given that disks are cheap; anyone can buy hundreds of gigabytes of disk for less than $100. So what is all the fuss about? First, the disks used for high-end systems are not cheap. Second, there is rarely a single copy of production data: if you use high-availability features such as replication, log shipping, or mirroring, you have at least one more copy, and a test environment adds another. All of these add to the hardware cost. Third, consider backups: if you keep many backups of your database over time, the cost multiplies. Backups can be stored on less expensive media, but there is still a cost associated with them. Another reason is the cost of managing the data. The larger the database, the longer it takes to run backups and recovery, and likewise DBCC commands, index rebuilds, and bulk import/export. If these operations are I/O-bound, reducing the size of the data makes them run faster and lowers their impact on the concurrent workload. A useful side effect is that backups of a compressed database are automatically smaller.

Statistical encoding is another important approach to lossless data reduction. The term sounds complex, but a similar trick in information coding was used by the American inventor Samuel Morse more than 150 years ago for his electromagnetic telegraph. A frequently occurring letter such as 'e' is transmitted as a single dot '.', while an infrequent 'x' requires four Morse symbols '-..-'.
In this way the mean data rate required to transmit an English text is decreased compared with a scheme in which every letter of the alphabet is coded with the same number of basic symbols. Accordingly, in image transmission, short code words or bit sequences (one to four bits) are used for the frequently occurring small gray-level differences (0, +1, -1, +2, -2, etc.), while long code words are used for the large differences (for instance the 212 in our example), which occur very infrequently. Statistical encoding can be especially successful if the gray-level statistics of the image have already been changed by predictive coding. The overall result is redundancy reduction, that is, a reduction of the repetition of the same bit patterns in the data. When reading the reduced image data, these processes can be performed in reverse order without any error, and thus the original image is recovered exactly. Lossless compression is therefore also called reversible compression.

Different File Types

For the purposes of this study, we will focus on the three file types most commonly found in web design: PNG, JPEG, and GIF. While there are other image formats that take advantage of compression (TIFF, PCX, TGA, etc.), you are unlikely to run across them in most digital design work.

GIF

GIF stands for Graphics Interchange Format; it is a bitmap image format introduced in 1987 by CompuServe. It supports up to 8 bits per pixel, meaning an image can have up to 256 distinct RGB colors. One of the biggest advantages of the GIF format is that it allows animated images, something neither of the other formats mentioned here supports.

JPEG

JPEG (Joint Photographic Experts Group) is an image format that uses lossy compression to create smaller file sizes. One of JPEG's big advantages is that it lets the designer fine-tune the amount of compression used.
This results in better image quality when used correctly, while also producing the smallest reasonable file size. Because JPEG uses lossy compression, images saved in this format are prone to "artifacting," where you can see pixelation and strange halos around certain sections of an image. These are most common in areas of an image where there is a sharp contrast between colors. Generally, the more contrast in an image, the higher the quality setting needed to produce a decent-looking final image.

Choosing a File Format

Each of the file formats above is appropriate for different types of images, and choosing the proper format results in higher-quality images and smaller file sizes. Choosing the wrong format means your images will not be as high-quality as they could be, and their file sizes will likely be larger than necessary. For simple graphics such as logos or line drawings, GIF often works best. Because of GIF's limited color palette, graphics with gradients or subtle color shifts often end up posterized; while this can be overcome to some extent by dithering, it is usually better to use a different file format. For photos or images with gradients where GIF is inappropriate, JPEG is usually better suited. JPEG works well for photos with subtle shifts in color and without sharp contrasts; in areas with a sharp contrast, artifacts (a multi-colored halo around the area) are more likely. Adjusting the compression level of your JPEGs before saving them can often give a much higher-quality image while maintaining a small file size.

Image Compression in Print Design

While the bulk of this article has focused on image compression in web design, it is worth mentioning the effect compression has in print design. For the most part, lossy image compression should be avoided entirely in print design: print graphics are much less forgiving of artifacting and low image quality than on-screen graphics.
Where a JPEG saved at medium quality might look fine on your monitor, the loss in quality (and the artifacting) is noticeable when printed, even on an inkjet printer. For print design, file types with lossless compression are preferable. TIFF (Tagged Image File Format) is often the preferred format when compression is necessary, as it offers a number of lossless compression methods (including LZW). Then again, depending on the image and where it will be printed, it is often better to use a file type with no compression at all (such as an original application file); talk to your printer about which they prefer.

Coding

To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time-domain sampled waveforms into a transform domain, typically the frequency domain. Once transformed, component frequencies can be allocated bits according to how audible they are. Audibility of spectral components is determined by first calculating a masking threshold, below which sounds are estimated to be beyond the limits of human perception. The masking threshold is calculated using the absolute threshold of hearing and the principles of simultaneous masking (the phenomenon wherein a signal is masked by another signal separated in frequency) and, in some cases, temporal masking (where a signal is masked by another signal separated in time). Equal-loudness contours may also be used to weight the perceptual importance of different components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models. Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders.
These coders use a model of the sound's generator (such as the human vocal tract with LPC) to whiten the audio signal (i.e., flatten its spectrum) prior to quantization.
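As a rough illustration of the whitening step described above (not taken from the report, and far simpler than a real speech codec), linear prediction can be sketched in a few lines of numpy: fit prediction coefficients to a signal by least squares, then subtract the prediction, leaving a near-flat residual. The prediction order and the test tone below are arbitrary choices.

```python
import numpy as np

def lpc_residual(signal, order=4):
    """Fit linear-prediction coefficients by least squares and
    return (coeffs, residual). The residual (the "whitened" signal)
    is what a source-based coder would go on to quantize."""
    # Build a matrix of lagged samples: row t holds the `order`
    # previous samples used to predict signal[t].
    X = np.column_stack([signal[order - k - 1 : len(signal) - k - 1]
                         for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coeffs
    return coeffs, residual

# A strongly correlated test tone: prediction removes almost all of its energy.
t = np.arange(200)
sig = np.sin(2 * np.pi * t / 40)
_, res = lpc_residual(sig)
print(res.var() < 0.01 * sig.var())  # residual energy is a tiny fraction
```

A pure tone satisfies a low-order linear recurrence exactly, so nearly all of its energy ends up in the predictor rather than the residual; on real speech the reduction is smaller but still substantial.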
13-09-2012, 10:50 AM
IMAGE COMPRESSION

IMAGE COMPRESSION.ppt (Size: 288 KB / Downloads: 41)

INTRODUCTION

Image compression is the application of data compression to digital images. The objective is to reduce the redundancy of the image data in order to store or transmit the data in an efficient form. Image compression can be lossy or lossless. Lossless compression is sometimes preferred for artificial images such as technical drawings, icons, or comics. Lossy methods are especially suitable for natural images such as photos, in applications where a minor (sometimes imperceptible) loss of fidelity is acceptable in exchange for a substantial reduction in bit rate. Lossy compression methods, especially at low bit rates, introduce compression artifacts. Methods for lossless image compression include run-length encoding, entropy coding, and adaptive dictionary algorithms. Methods for lossy compression include chroma subsampling, transform coding, and fractal compression.

COMPRESSION PRINCIPLE

Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, three types of redundancy can be identified:
Spatial redundancy: correlation between neighboring pixel values.
Spectral redundancy: correlation between different color planes or spectral bands.
Temporal redundancy: correlation between adjacent frames in a sequence of images (in video applications).

DIFFERENT CLASSES OF COMPRESSION TECHNIQUES

Lossless vs. lossy compression: In lossless compression, the reconstructed image is numerically identical to the original, but only a modest amount of compression can be achieved. An image reconstructed after lossy compression contains degradation relative to the original, because the scheme discards information outright; in return, lossy schemes achieve much higher compression.

Predictive vs. transform coding: In predictive coding, information already sent or available is used to predict future values, and only the difference is coded. Since this is done in the image (spatial) domain, it is relatively simple to implement and is readily adapted to local image characteristics. Transform coding first transforms the image from its spatial-domain representation to a different representation using some well-known transform, and then codes the transformed values (coefficients). Transform coding provides greater data compression than predictive methods, although at the expense of greater computation.

DCT-BASED CODING

The DCT-based encoder compresses a stream of 8x8 blocks of image samples. Each 8x8 block passes through every processing step and yields its output, in compressed form, into the data stream. Because adjacent image pixels are highly correlated, the forward DCT (FDCT) step lays the foundation for data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8x8 block from a typical source image, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded. The DCT itself introduces no loss; it merely transforms the samples to a domain in which they can be encoded more efficiently.

QUANTIZATION

Each of the 64 DCT coefficients is uniformly quantized using a carefully designed 64-element quantization table (QT). At the decoder, the quantized values are multiplied by the corresponding QT elements to approximate the original unquantized values. After quantization, the coefficients are ordered into the "zig-zag" sequence. This ordering facilitates entropy encoding by placing low-frequency non-zero coefficients before high-frequency ones. The DC coefficient, which contains a significant fraction of the total image energy, is encoded differentially.
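The forward-DCT, quantization, and zig-zag steps described above can be sketched in numpy. This is illustrative only, not JPEG-compliant: real JPEG uses a full 64-entry quantization table and then entropy-codes the result, whereas here a single quantizer step of 16 and a flat sample block are arbitrary choices.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so coeffs = C @ block @ C.T."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)   # DC row gets the extra 1/sqrt(2) factor
    return C

def zigzag_order(n=8):
    """Index pairs of an n x n block in JPEG zig-zag order: walk the
    anti-diagonals, alternating direction on odd/even diagonals."""
    idx = [(r, c) for r in range(n) for c in range(n)]
    return sorted(idx, key=lambda rc: (rc[0] + rc[1],
                                       rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

C = dct_matrix()
block = np.full((8, 8), 128.0)        # a flat (constant) sample block
coeffs = C @ (block - 128.0) @ C.T    # level-shift, then forward DCT
quant = np.round(coeffs / 16.0)       # uniform quantization, one step size
scan = [quant[r, c] for r, c in zigzag_order()]
# For this flat block every coefficient quantizes to zero, so the whole
# scanned list would collapse to a single end-of-block code downstream.
```

Note that the transform itself is invertible (C is orthonormal, so C.T undoes it); all of the loss comes from the rounding in the quantization step.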
VARIABLE-LENGTH CODING

The list of values produced by scanning is entropy coded using a variable-length code (VLC). Each VLC code word denotes a run of zeros followed by a non-zero coefficient of a particular level. VLC coding exploits the fact that short runs of zeros are more likely than long ones, and that small coefficients are more likely than large ones: the VLC allocates code words whose lengths depend on the probability with which they are expected to occur.
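The run/level idea can be sketched as follows (illustrative only; the code-length assignment below is a toy frequency ranking, not an actual JPEG or MPEG code table): walk the scanned coefficients, emit (zero-run, level) pairs, and give shorter code words to the pairs that occur more often.

```python
from collections import Counter

def run_level_pairs(coeffs):
    """Encode a scanned coefficient list as (run-of-zeros, level) pairs,
    ending with an end-of-block marker once only zeros remain."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    pairs.append("EOB")  # end of block: trailing zeros are implicit
    return pairs

def assign_code_lengths(pairs):
    """Toy variable-length assignment: the more frequent a (run, level)
    pair, the shorter its code word -- the principle behind VLC tables."""
    freq = Counter(pairs)
    ranked = [p for p, _ in freq.most_common()]
    return {p: 1 + ranked.index(p) for p in ranked}  # rank ~ code length

scan = [35, 0, 0, -3, 0, 2, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0]
pairs = run_level_pairs(scan)
print(pairs)  # [(0, 35), (2, -3), (1, 2), (0, 2), (3, 1), 'EOB']
```

The five trailing zeros cost nothing: the "EOB" marker tells the decoder the rest of the block is zero, which is where most of the bit savings in a DCT coder comes from.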