AUDIO CODING: BASICS AND STATE OF THE ART
ABSTRACT
High quality audio coding based on perceptual models has found its way to widespread application
in broadcasting and Internet audio (e.g. mp3). After a brief presentation of the basic ideas, a short
overview over current high quality perceptual coders will be given. Algorithms defined by the
MPEG group (MPEG-1 Audio, e.g. MPEG Layer-3 (mp3), MPEG-2 Advanced Audio Coding,
MPEG-4 Audio including its different functionalities) still define the state of the art. While there has
been some saturation in the progress for highest quality (near transparent) audio coding,
considerable improvements have been achieved at lower bit-rates ("near CD-quality" coding). One
technology able to push the bit-rates for good quality audio lower is Spectral Band Replication
(SBR), as used in mp3PRO or improved AAC codecs.
INTRODUCTION
High quality audio compression has found its way from research to widespread applications within
a couple of years. Early research of 15 years ago was translated into standardization efforts of
ISO/IEC and ITU-R 10 years ago. Since the finalization of MPEG-1 in 1992, many applications
have been devised. In the last couple of years, Internet audio delivery has emerged as a powerful
category of applications. These techniques made headline news in many parts of the world
because of their potential to change the music industry's way of doing business. Currently,
applications employing low bit-rate audio coding techniques include, among others:
- Digital Audio Broadcasting (EUREKA DAB, WorldSpace, ISDB, DRM)
- ISDN transmission of high quality audio for broadcast contribution and distribution
purposes
- Archival storage for broadcasting
- Accompanying audio for digital TV (DVB, ATSC, Video CD, ARIB)
- Internet streaming (RealAudio, Microsoft Netshow, Apple Quicktime and others)
- Portable audio (mpman, mplayer3, Rio, Lyra, YEPP and others)
- Storage and exchange of music files on computers
THE BASICS OF HIGH QUALITY AUDIO CODING
The basic task of a perceptual audio coding system is to compress the digital audio data in a way
that
- the compression is as efficient as possible, i.e. the compressed file is as small as
possible and
- the reconstructed (decoded) audio sounds identical (or as close as possible) to the original
audio before compression.
Other requirements for audio compression techniques include low complexity (to enable software
decoders or inexpensive hardware decoders with low power consumption) and flexibility to cope
with different application scenarios. The technique to do this is called perceptual encoding and
uses knowledge from psychoacoustics to reach the target of efficient but inaudible compression.
Perceptual encoding is a lossy compression technique, i.e. the decoded file is not a bit-exact
replica of the original digital audio data.
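As a toy illustration of what "lossy" means here, the following sketch uses plain uniform quantization (not any standardized codec) to show that the reconstructed signal is close to, but not a bit-exact replica of, the original:

```python
import math

STEP = 0.1  # quantizer step size (illustrative choice)

def quantize(samples, step=STEP):
    """Map each sample to the index of its nearest quantization level."""
    return [round(s / step) for s in samples]

def dequantize(indices, step=STEP):
    """Reconstruct approximate sample values from the indices."""
    return [i * step for i in indices]

# A short synthetic "audio" signal: one cycle of a sine wave.
original = [math.sin(2 * math.pi * n / 16) for n in range(16)]
decoded = dequantize(quantize(original))

max_error = max(abs(a - b) for a, b in zip(original, decoded))
print(decoded == original)            # False: lossy, not bit-exact
print(max_error <= STEP / 2 + 1e-12)  # True: error bounded by half a step
```

A perceptual coder differs from this sketch in that the quantization error is shaped in frequency so that it stays below the masking threshold, rather than being spread uniformly.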
STANDARDIZED CODECS
MPEG (formally ISO/IEC JTC1/SC29/WG11, mostly known by its nickname, Moving
Pictures Experts Group) was set up by the ISO/IEC standardization body in 1988 to develop
generic (to be used for different applications) standards for the coded representation of moving
pictures, associated audio, and their combination. Since 1988 ISO/MPEG has been undertaking
the standardization of compression techniques for video and audio. The original main topic of
MPEG was video coding together with audio for Digital Storage Media (DSM). From the
beginning, audio-only applications have been part of the charter of the MPEG audio subgroup.
Since the finalization of the first standard in 1992, MPEG Audio in its different flavors (mostly
Layer-2, Layer-3 and Advanced Audio Coding) has delivered on the promise to establish
universally applicable standards.
MPEG-1
MPEG-1 is the name for the first phase of MPEG work, started in 1988, and finalized with the
adoption of ISO/IEC IS 11172 in late 1992. The audio coding part of MPEG-1 (ISO/IEC IS 11172-3,
see [1]) describes a generic coding system, designed to fit the demands of many applications.
MPEG-1 audio consists of three operating modes called layers with increasing complexity and
performance from Layer-1 to Layer-3. Layer-3 (in recent years nicknamed MP3 because of the use
of .mp3 as a file extension for music files in Layer-3 format) is the highest complexity mode,
optimized to provide the highest quality at low bit-rates (around 128 kbit/s for a stereo signal).
Filterbank
The filterbank used in MPEG-1 Layer-3 belongs to the class of hybrid filterbanks. It is built by
cascading two different kinds of filterbank: first a polyphase filterbank (as used in Layer-1 and
Layer-2) and then an additional Modified Discrete Cosine Transform (MDCT). The polyphase
filterbank serves to keep Layer-3 structurally similar to Layer-1 and Layer-2. The subdivision
of each polyphase frequency band into 18 finer subbands increases the potential for redundancy
removal, leading to better coding efficiency for tonal signals. Another positive result of better
frequency resolution is the fact that the error signal can be controlled to allow a finer tracking of
the masking threshold. The filterbank can be switched to a lower frequency resolution to avoid
pre-echoes.
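The MDCT stage and its time-domain aliasing cancellation (TDAC) property can be sketched as follows. This is a direct, unoptimized transform (real encoders use fast algorithms, and the block length N = 8 here is far smaller than the 18-subband split described above); with a sine window and 50% overlap, the aliasing of adjacent blocks cancels on overlap-add:

```python
import math

def mdct(block):
    """Forward MDCT: 2N windowed time samples -> N frequency coefficients."""
    N = len(block) // 2
    return [sum(block[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(coeffs):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples."""
    N = len(coeffs)
    return [(2.0 / N) * sum(coeffs[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                            for k in range(N))
            for n in range(2 * N)]

N = 8
# Sine window: satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1.
window = [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]

# Two 50%-overlapping blocks of a test signal of length 3N.
signal = [math.sin(0.3 * n) + 0.5 * math.cos(0.7 * n) for n in range(3 * N)]
blocks = [signal[0:2 * N], signal[N:3 * N]]

# Analysis: window + MDCT.  Synthesis: IMDCT + window + overlap-add.
out = [0.0] * (3 * N)
for p, blk in enumerate(blocks):
    windowed = [w * s for w, s in zip(window, blk)]
    aliased = imdct(mdct(windowed))
    for n in range(2 * N):
        out[p * N + n] += window[n] * aliased[n]

# In the overlap region the aliasing terms cancel: the middle N samples
# are reconstructed exactly (up to floating-point error).
err = max(abs(out[n] - signal[n]) for n in range(N, 2 * N))
print(err < 1e-9)  # True
```

This perfect-reconstruction property is why the filterbank itself adds no distortion; all coding noise comes from quantizing the coefficients between the two transforms.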
Perceptual Model
The perceptual model largely determines the quality of a given encoder implementation. A lot of
additional work has gone into this part of an encoder since the original informative part of [1] was
written. The perceptual model either uses a separate filterbank, as described in [1], or
combines the calculation of energy values (for the masking calculations) and the main filterbank.
The output of the perceptual model consists of values for the masking threshold or allowed noise
for each coder partition. In Layer-3, these coder partitions are roughly equivalent to the critical
bands of human hearing. If the quantization noise can be kept below the masking threshold for
each coder partition, then the compression result should be indistinguishable from the original
signal.
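The idea of an allowed-noise value per coder partition can be sketched as follows. This is a toy model under stated assumptions: the function name, the arbitrary band edges, and the fixed 13 dB signal-to-mask offset are illustrative choices, not the informative model of [1], which additionally models inter-band spreading and tonality:

```python
import cmath
import math

def allowed_noise(samples, band_edges, smr_db=13.0):
    """Toy masking estimate: per-band signal energy reduced by a fixed
    signal-to-mask ratio (smr_db). band_edges lists DFT-bin boundaries,
    which in a real coder would roughly follow the critical bands."""
    n = len(samples)
    # Naive O(n^2) DFT, magnitudes up to the Nyquist bin (fine for a demo).
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    thresholds = []
    for lo, hi in zip(band_edges, band_edges[1:]):
        energy = sum(spectrum[k] ** 2 for k in range(lo, hi))
        # Quantization noise up to this fraction of the band energy
        # is assumed to stay below the masking threshold.
        thresholds.append(energy * 10.0 ** (-smr_db / 10.0))
    return thresholds

# 32-sample test block with a single tone in the lowest band.
signal = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
noise_budget = allowed_noise(signal, band_edges=[0, 4, 8, 16])
print(len(noise_budget))  # one allowed-noise value per coder partition
```

The encoder's job is then exactly the condition stated above: keep the quantization noise in each partition below its entry in this noise budget.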
Quantization and Coding
A system of two nested iteration loops is the common solution for quantization and coding in a
Layer-3 encoder. Quantization is done via a power-law quantizer. In this way, larger values are
automatically coded with less accuracy and some noise shaping is already built into the
quantization process. The quantized values are coded by Huffman coding. To adapt the coding
process to different local statistics of the music signals the optimum Huffman table is selected
from a number of choices. The Huffman coding works on pairs or quadruples of quantized values.
To get an even better adaptation to the signal statistics, different Huffman code tables can be
selected for different parts of the spectrum. Since Huffman coding is basically a variable-code-length method and noise shaping has
to be done to keep the quantization noise below the masking threshold, a global gain value
(determining the quantization step size) and scalefactors (determining noise shaping factors for
each scalefactor band) are applied before actual quantization.
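A minimal sketch of such a power-law quantizer follows. The 3/4-power law is the one used in Layer-3, but the step-size mapping 2^(gain/4) and the omission of the small rounding bias used in real encoders are simplifying assumptions of this sketch:

```python
def power_quantize(values, global_gain):
    """Power-law quantization: magnitudes are raised to the 3/4 power
    before rounding, so larger values get coarser effective steps
    (built-in noise shaping)."""
    step = 2.0 ** (global_gain / 4.0)  # assumed step-size mapping
    return [(-1 if v < 0 else 1) * round((abs(v) / step) ** 0.75)
            for v in values]

def power_dequantize(indices, global_gain):
    """Inverse: undo the 3/4-power law and the global gain."""
    step = 2.0 ** (global_gain / 4.0)
    return [(-1 if i < 0 else 1) * (abs(i) ** (4.0 / 3.0)) * step
            for i in indices]

spectrum = [0.5, -2.0, 10.0]
ix = power_quantize(spectrum, global_gain=-40)   # step = 2^-10
decoded = power_dequantize(ix, global_gain=-40)
# Reconstruction is close but not exact, with roughly constant
# *relative* accuracy across small and large values.
print(all(abs(d - v) < 0.01 * abs(v) for d, v in zip(decoded, spectrum)))
```

In a real encoder the two nested loops would adjust the global gain (rate loop, until the Huffman-coded result fits the bit budget) and the scalefactors (noise loop, until the per-band noise falls below the masking threshold) around exactly this kind of quantizer.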
CONCLUSIONS
High quality audio coding remains, after 14 years of standardization, an active research area. New
systems will use new paradigms, probably of parametric coding similar to techniques already in
some use in MPEG-4 audio and the Spectral Band Replication (SBR) technology. For high
qualities near to the limit of detectability of audible differences, the state of the art is defined by
MPEG-2 Advanced Audio Coding and its close relatives such as MPEG-4 general audio coding.
It will be interesting to see whether more knowledge about psychoacoustics will enable further
progress in this area.