06-02-2013, 03:39 PM
Analysis of JPEG Encoder for Image Compression
Analysis of JPEG Encoder.pdf (Size: 183.61 KB / Downloads: 37)
Abstract
This paper presents an analysis of the JPEG standard,
which is based on the discrete cosine transform (DCT). In the
standard, the DCT plays the key role of separating image
information into different frequency components; compression
is then achieved by quantization and coding. Common fast
DCT algorithms are analysed, as these can contribute to an
efficient JPEG encoder. The remaining procedures, quantization
and coding, are also demonstrated; since they are driven
mainly by standard processing tables, there is little room to
improve the efficiency of a JPEG codec by redesigning the
quantization and coding algorithms. Finally, a compression
encoder was built following JPEG, and the quality of the
compressed images is presented.
INTRODUCTION
As applications of digital images grow dramatically,
desktop publishing, multimedia, teleconferencing and
high-definition television (HDTV) bring impressive visuals
to people. The more images are viewed, the more memory
and the wider the bandwidth required to store and transmit
their large amounts of data. For example, consider a picture
210 mm wide and 297 mm high: scanned in colour at
300 dpi, it takes about 26 megabytes to store.
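The 26-megabyte figure follows directly from the page dimensions, the scan resolution, and 3 bytes per 24-bit RGB pixel. A quick sketch of that arithmetic (the constant names are illustrative, not from the paper):

```python
# Raw storage for a 210 mm x 297 mm page scanned in 24-bit colour
# at 300 dpi, matching the figure quoted in the text.
MM_PER_INCH = 25.4

width_px = round(210 / MM_PER_INCH * 300)   # ~2480 pixels
height_px = round(297 / MM_PER_INCH * 300)  # ~3508 pixels
raw_bytes = width_px * height_px * 3        # 3 bytes per RGB pixel

print(raw_bytes / 1e6)  # ~26.1 megabytes, uncompressed
```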
Fortunately, several mature image compression
technologies are now available. The most popular are
JPEG and JPEG 2000, both published by the Joint
Photographic Experts Group. Both are standards for image
compression, but they employ different technologies to
achieve it. The JPEG standard is mainly applied to the
compression of still images [1].
JPEG ENCODER IN IMAGE COMPRESSION
There are three purposes in image coding, also known
as image compression: first, to compress the image to fit
limited storage; second, to reduce the amount of
information to be transmitted; and third, to remove
redundant information in preparation for pattern
recognition. The JPEG standard defines two compression
processes: lossy compression and lossless compression.
The DCT-based process is the lossy one. It achieves high
compression ratios, and the image reconstructed by the
decoder is visually similar to the original. In addition,
there are extended coding methods built on the baseline
sequential DCT process; they are more widely applicable
than the baseline encoder, and an extended encoder
includes the baseline sequence.
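The two lossy steps the paper centres on, the forward DCT and quantization, can be sketched for a single 8x8 block as follows. This is a minimal, unoptimized Python illustration, not the paper's fast-DCT implementation; the quantization table is the well-known example luminance table from Annex K of the JPEG standard, and the input block is assumed to be already level-shifted (sample values minus 128).

```python
import math

# Example luminance quantization table (JPEG standard, Annex K).
Q_LUMA = [
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
]

def fdct_8x8(block):
    """Forward 2-D DCT of one 8x8 block of level-shifted samples."""
    def c(k):  # normalisation factor for the DC term
        return 1 / math.sqrt(2) if k == 0 else 1.0
    return [[0.25 * c(u) * c(v) * sum(
                 block[x][y]
                 * math.cos((2 * x + 1) * u * math.pi / 16)
                 * math.cos((2 * y + 1) * v * math.pi / 16)
                 for x in range(8) for y in range(8))
             for v in range(8)]
            for u in range(8)]

def quantize(coeffs, table=Q_LUMA):
    """Divide each DCT coefficient by its table entry and round."""
    return [[round(coeffs[u][v] / table[u][v]) for v in range(8)]
            for u in range(8)]

# A flat block has only a DC component; every AC coefficient
# quantizes to zero, which is where the compression comes from.
flat = [[100.0] * 8 for _ in range(8)]
quantized = quantize(fdct_8x8(flat))
print(quantized[0][0])  # DC term: round(800 / 16) = 50
```

The quantization step is where information is actually discarded: high-frequency coefficients are divided by large table entries and mostly round to zero, so the subsequent entropy coding of long zero runs is very effective.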
EXPERIMENT RESULT
The experimental codec system was developed from the
public JPEG code. The comparison of PSNRs of the
reconstructed images is shown in Table II, which lists the
PSNRs of compressed images at different compression
ratios. As can be seen from the table, first, in all three
columns the PSNR decreases as the compression ratio
increases: more information is lost at higher compression
ratios, and the quality of the compressed image worsens.
Second, at the same compression ratio the three images
show different PSNRs; the diverse frequency components
in the images account for this.
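The PSNR values in Table II are not reproduced in this excerpt, but the metric itself is straightforward. A minimal sketch in Python, assuming 8-bit greyscale images stored as equal-sized nested lists (the function name is illustrative):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images."""
    n = 0
    squared_error = 0.0
    for row_o, row_r in zip(original, reconstructed):
        for o, r in zip(row_o, row_r):
            squared_error += (o - r) ** 2
            n += 1
    mse = squared_error / n
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

A higher PSNR means a smaller mean squared error against the original, which is why the table's values fall as the compression ratio, and hence the quantization loss, rises.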
CONCLUSION
As JPEG is a public image compression standard, this
paper analysed the main procedures of a JPEG-based
encoder. Compression is achieved through the DCT, which
decomposes an image into different frequency components;
the redundant high-frequency information in an image can
then be removed by the quantization and coding procedures.
This means the DCT is the foundation of compression in
the lossy JPEG codec. As the compression ratio increases,
more information is lost, as the experimental results
showed. Therefore, fast DCT algorithms contribute to high
encoder efficiency, while the coding and quantization
procedures offer little opportunity to conserve processor
resources.