29-12-2012, 02:38 PM
Source Encoding
Introduction
Source encoding is the first step in information delivery through wireless communication. Whenever a sensor node has information to transmit, the information source is first encoded with a source encoder. In effect, source coding exploits the information statistics to represent the source with fewer bits, i.e., a source codeword. Therefore, source coding is also referred to as data compression.
Accordingly, redundant information is compressed to decrease the data volume while preserving some or all of the information content.
The basic compression solutions can be classified into two categories, based on whether the information is preserved after coding:
1) lossless compression and
2) lossy compression
Lossless Compression
Lossless compression reduces the data content, i.e., packet size, without hampering the information integrity. Much better compression efficiency can be achieved by lossy compression, which allows information loss in exchange for a higher reduction in data. Because of the higher complexity required for lossy compression, lossless compression techniques are generally adopted in WSNs. Several lossless compression algorithms have been developed for both digital communication and computation. However, the applicability of these algorithms to WSNs is limited. Since the programming memory of many sensor nodes is small, higher-complexity algorithms such as LZW or gzip cannot be implemented directly. Furthermore, several compression algorithms require a static dictionary to be stored for compression/decompression, which demands frequent memory access. Since read/write operations result in high energy consumption on wireless sensor nodes, these solutions are not energy efficient. In addition to the inefficiency of the compression algorithms in terms of energy consumption, the information collected at each individual sensor may not contain enough redundancy to allow efficient compression at the source. However, the high-density deployment of WSNs results in highly correlated information being collected by multiple, closely located nodes.
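To make the dictionary-based approach concrete, the following is a minimal LZW sketch in Python. It is an illustrative toy, not the memory-constrained S-LZW variant used in WSNs: it grows an unbounded dictionary of previously seen substrings and emits their integer codes.

```python
def lzw_compress(data):
    """Minimal LZW: grow a dictionary of seen substrings, emit their codes."""
    table = {bytes([i]): i for i in range(256)}  # start with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # learn the new substring
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"ababab")  # repeated patterns collapse into dictionary codes
```

On repetitive input the output code list is shorter than the input, which is exactly the redundancy exploitation the text describes; on data with little redundancy (e.g., a single sensor's nearly random readings), LZW gains little.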
Distributed Source Coding
Node-centric compression schemes such as S-LZW improve energy efficiency by decreasing the data content at each sensor node locally. The temporal relations between consecutive observations can be utilized for local compression. However, most applications do not require the very high sampling rates that would lead to high temporal correlation. Moreover, because of the high density of sensor deployments, the information observed by closely located sensors is highly correlated. This correlation can be exploited to significantly decrease the data content. Distributed source coding (DSC), which dates back to the work of Slepian and Wolf and of Wyner and Ziv in the 1970s, stands as a promising solution for this type of application. Distributed source coding refers to the compression of multiple correlated sensor outputs from nodes that do not communicate with each other. More specifically, the information at a sensor node is compressed assuming correlation with another node, without exchanging any information. This compressed information is sent to the sink, and joint decoding is performed by the sink, which receives data independently compressed by different sensors.
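The DSC idea can be illustrated with a toy syndrome-based sketch in Python, under the simplifying assumption that two 3-bit sensor readings differ in at most one bit. Node A transmits only a 2-bit syndrome (the coset index of its reading under the 3-bit repetition code) instead of all 3 bits, and the sink recovers the reading jointly with node B's correlated observation; the correlation model and matrix here are illustrative, not a standard from the literature.

```python
from itertools import product

# Parity-check matrix of the 3-bit repetition code {000, 111}
H = [(1, 1, 0), (1, 0, 1)]

def syndrome(x):
    """2-bit syndrome s = H.x (mod 2) of a 3-bit word x."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def dsc_encode(x):
    # The sensor sends only the 2-bit syndrome, never the full reading.
    return syndrome(x)

def dsc_decode(s, y):
    """At the sink: pick the member of coset s closest to the side reading y."""
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1)          # reading at node A
y = (1, 1, 1)          # correlated reading at node B (differs in <= 1 bit)
recovered = dsc_decode(dsc_encode(x), y)
```

Decoding always succeeds here because the two members of any coset differ in all three positions, so at most one of them can lie within Hamming distance 1 of the side information.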
Applications
For Audio
Audio data compression, as distinguished from dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. Lossy audio compression algorithms provide higher compression at the cost of fidelity and are used in numerous audio applications. These algorithms almost all rely on psychoacoustics to eliminate less audible or meaningful sounds, thereby reducing the space required to store or transmit them.
In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, pattern recognition and linear prediction to reduce the amount of information used to represent the uncompressed data.
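As one concrete instance of the linear prediction mentioned above, the sketch below uses the simplest possible predictor (first-order differencing: each sample is predicted by the previous one); real codecs use higher-order predictors, but the principle is the same. Smooth signals yield small residuals, which can then be stored in fewer bits.

```python
def predict_residuals(samples):
    """First-order linear prediction: keep the first sample, then store
    each sample as its difference from the previous one."""
    residuals = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        residuals.append(cur - prev)
    return residuals

def reconstruct(residuals):
    """Invert the prediction exactly (the scheme is lossless)."""
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples

x = [100, 102, 104, 103, 101]     # a smooth waveform segment
r = predict_residuals(x)          # residuals are small: [100, 2, 2, -1, -2]
```

Because reconstruction is exact, this kind of prediction belongs on the lossless side; lossy codecs additionally quantize the residuals.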
Coding methods
In order to determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain. Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how audible they are. Audibility of spectral components is determined by first calculating a masking threshold, below which it is estimated that sounds will be beyond the limits of human perception.
The masking threshold is calculated using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency, and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weight the perceptual importance of different components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.
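As an illustration of transform-domain bit allocation, the following Python sketch substitutes a plain DFT for the MDCT and a hypothetical flat masking threshold for the psychoacoustic model (real codecs compute a frequency-dependent threshold from the effects described above). A loud tone survives thresholding; a much quieter tone is judged inaudible and its bin is zeroed, so no bits need be spent on it.

```python
import cmath
import math

def dft(x):
    """Normalized discrete Fourier transform of a sample block."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
            for k in range(N)]

# 64-sample block: a loud tone at bin 4 plus a much quieter tone at bin 9
N = 64
signal = [math.sin(2 * math.pi * 4 * n / N) + 0.05 * math.sin(2 * math.pi * 9 * n / N)
          for n in range(N)]

spectrum = dft(signal)
threshold = 0.03  # hypothetical flat masking threshold (real ones vary per band)
# Zero every bin estimated to be inaudible; only surviving bins get bits.
kept = [c if abs(c) > threshold else 0 for c in spectrum]
nonzero = sum(1 for c in kept if c != 0)  # only the loud tone's two bins remain
```

Only the two conjugate bins of the loud tone exceed the threshold, so the 64-sample block reduces to two coefficients worth coding, at the cost of discarding the quiet tone (lossy compression).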
Speech encoding
Speech encoding is an important category of audio data compression. The perceptual models used to estimate what a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate. If the data to be compressed is analog (such as a voltage that varies with time), quantization is employed to digitize it into numbers (normally integers). This is referred to as analog-to-digital (A/D) conversion. If the integers generated by quantization are 8 bits each, then the entire range of the analog signal is divided into 256 intervals and all the signal values within an interval are quantized to the same number. If 16-bit integers are generated, then the range of the analog signal is divided into 65,536 intervals.
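The interval arithmetic above can be sketched as a uniform quantizer in Python; the 0–1 V range and the function name are illustrative choices, not part of any particular codec.

```python
def quantize(v, vmin, vmax, bits):
    """Uniformly quantize an analog value v in [vmin, vmax) to a
    bits-wide integer: 8 bits -> 256 intervals, 16 bits -> 65,536."""
    levels = 1 << bits
    step = (vmax - vmin) / levels
    idx = int((v - vmin) / step)
    return min(idx, levels - 1)  # clamp the top edge of the range

# A voltage of 0.5 V in a 0-1 V range with 8-bit quantization
code = quantize(0.5, 0.0, 1.0, 8)
```

Every voltage inside the same interval maps to the same integer, which is precisely the information loss introduced by A/D conversion; doubling the bit width halves the interval size and so halves the quantization error.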
Encoding theory
Video data may be represented as a series of still image frames. The sequence of frames contains spatial and temporal redundancy that video compression algorithms attempt to eliminate or code in a smaller size. Similarities can be encoded by only storing differences between frames, or by using perceptual features of human vision. For example, small differences in color are more difficult to perceive than changes in brightness. Compression algorithms can average a color across these similar areas to reduce space, in a manner similar to the techniques used in JPEG image compression. Some of these methods are inherently lossy while others may preserve all relevant information from the original, uncompressed video.
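The frame-differencing idea can be sketched in a few lines of Python; frames are modeled here as flat lists of pixel values, a deliberate simplification of real motion-compensated video coding.

```python
def encode_delta(prev, curr):
    """Store only the pixels that changed between consecutive frames,
    as (index, new_value) pairs."""
    return [(i, p) for i, (q, p) in enumerate(zip(prev, curr)) if p != q]

def decode_delta(prev, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    frame = list(prev)
    for i, p in delta:
        frame[i] = p
    return frame

frame1 = [10, 10, 10, 200, 200, 10]
frame2 = [10, 10, 12, 200, 200, 10]   # only one pixel changed
delta = encode_delta(frame1, frame2)  # far smaller than the full frame
```

When consecutive frames are nearly identical, as in a static scene, the delta is a tiny fraction of the frame size, and the reconstruction is exact; lossy video codecs build on the same idea but also quantize the differences.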