10-05-2012, 05:00 PM
Computational complexity
ABSTRACT
Computational complexity is a burden in applications that require long tapped-delay adaptive structures, such as echo cancellation, where filters with hundreds or even thousands of taps are needed to model the echo path. In such applications (noise and echo cancellation and channel equalization, to name a few in the communications field), the filter order must be large in order to obtain acceptable performance. Consequently, an excessive number of multiplications is required, and the filter becomes unfeasible to implement even on the most powerful digital signal processors. The problem grows worse in adaptive filtering: besides the computational complexity, the convergence rate and tracking capability of the algorithms also deteriorate as the number of coefficients to be updated increases.
Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix. Transform-domain LMS algorithms, based on transforms such as the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity. In general, these algorithms combine an orthogonal transform with power normalization to speed up the convergence of the LMS algorithm.
Other efficient approaches have also been proposed in the literature, but they still present the same trade-off between performance and complexity. A great deal of recent work on adaptive filtering has been devoted to the search for algorithms that converge faster than LMS-type algorithms (e.g., NLMS) at a lower computational cost than RLS-type algorithms. Another alternative for overcoming the aforementioned drawbacks of non-recursive adaptive systems is the split processing technique.
This project proposes a new structure for split transversal filtering and introduces the optimum split Wiener filter. The approach consists of combining the idea of split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multi-split filter structure, is considered.
The multi-split transform is not an input-whitening transformation; rather, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the structure of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multi-split approach is extended to linear-phase adaptive filtering and linear prediction, and the optimum symmetric and antisymmetric linear-phase Wiener filters are presented. Simulation results enable us to evaluate the performance of the multi-split LMS algorithm.
This thesis focuses on the analysis and performance comparison of two adaptive filtering algorithms, namely the Least Mean Squares (LMS) algorithm and the multi-split LMS (MS-LMS) algorithm. A brief theoretical development of both methods is given, as well as the advantages of the LMS algorithm. Both algorithms are then implemented on a real-time Digital Signal Processing (DSP) system used for noise cancellation.
INTRODUCTION
ABOUT PROJECT
Non-recursive systems are frequently used in digital signal processing, mainly in adaptive filtering. Such finite impulse response (FIR) filters have the desirable properties of guaranteed stability and absence of limit cycles. However, in some applications (e.g., noise and echo cancellation and channel equalization, to name a few in the communications field), the filter order must be large in order to obtain acceptable performance. Consequently, an excessive number of multiplications is required, and the implementation of the filter becomes unfeasible even on the most powerful digital signal processors. The problem grows worse in adaptive filtering: besides the computational complexity, the convergence rate and tracking capability of the algorithms also deteriorate as the number of coefficients to be updated increases.
In the field of signal processing, there is a significant need for a special class of digital filters known as adaptive filters. Adaptive filters are commonly used in many different configurations, and for certain applications they have a great advantage over standard digital filters: they can adapt their coefficients to the environment according to preset rules. Such filters learn from the statistics of current conditions and change their coefficients in order to achieve a given goal. They are used whenever it is impossible or inconvenient to use a pre-designed filter, since designing a fixed filter requires a priori knowledge of the desired response. When such knowledge is not available, owing to the changing nature of the filter's requirements, a standard digital filter cannot be designed. In such situations, adaptive filters are desirable: they continuously change their impulse response in order to satisfy the given conditions, thereby changing the very characteristics of their response.
There are certain rules that filters use in order to adapt. The rules depend upon the configuration in which the filter is used and on the desired goal of the filter. Applications of adaptive filters include noise canceling, echo canceling, adaptive beamforming, and equalization for telecommunication systems, to mention a few. The algorithm used to perform the adaptation and the configuration of the filter depend directly on the application. However, the basic computational engine that performs the adaptation of the filter coefficients can be the same for different algorithms, and it is based on the statistics of the input signals to the system.
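As an illustration of such a computational engine, the sketch below shows an LMS-based adaptive noise canceller in Python. It is a minimal sketch under assumed parameters (filter length, step size, and signal model are illustrative, not taken from the thesis): the filter sees a noise reference x, the primary input d contains the desired signal plus a filtered version of that noise, and the error output is the cleaned signal.

```python
import numpy as np

def lms_cancel(x, d, num_taps=8, mu=0.02):
    """Adaptive noise canceller: x is the noise reference,
    d = signal + filtered noise; returns the cleaned error signal."""
    w = np.zeros(num_taps)        # adaptive filter coefficients
    buf = np.zeros(num_taps)      # tapped delay line (newest sample first)
    e = np.zeros(len(x))
    for k in range(len(x)):
        buf = np.r_[x[k], buf[:-1]]
        y = w @ buf               # estimate of the noise component in d
        e[k] = d[k] - y           # error = cleaned signal estimate
        w += mu * e[k] * buf      # LMS coefficient update
    return e, w
```

Because the desired signal is uncorrelated with the reference, the filter can only cancel the noise, so the error output converges toward the signal alone.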
The two main classes of adaptive filtering algorithms, namely Recursive Least Squares (RLS) and Least Mean Squares (LMS), are both capable of performing the adaptation of the filter coefficients. Each algorithm presents certain advantages and disadvantages with regard to performance and implementation. In most cases, the ultimate choice of algorithm depends on trade-offs between performance gains and implementation costs.
Therefore, there is a constant need and effort across the signal processing community to improve the current algorithms, both by increasing performance and by reducing computational complexity. This thesis describes techniques that increase the performance of certain classes of adaptive filters while keeping their computational demands reasonably low.
MOTIVATION
As mentioned above, in the field of adaptive signal processing there are two main classes of algorithms used to make a filter adapt: LMS and RLS. Their implementations and adaptation properties are the determining factors in the choice of application. The main requirements and performance measures for adaptive filters are the convergence speed and the asymptotic error. The convergence speed measures how quickly the filter converges to the desired solution; it is the filter's primary property. Since each adaptive filter, by its structure, learns the properties of the signal from its past samples, the convergence speed measures how quickly the filter is learning. It is a major requirement and a limiting factor for most applications of adaptive filters. The asymptotic error represents the amount of error that the filter introduces at steady state, after it has converged to the desired solution.
The RLS filters, due to their computational structure, have considerably better properties than the LMS filters, in terms of both convergence speed and asymptotic error. The RLS filters, which outperform the LMS filters, obtain their weight updates directly from the Mean Square Error (MSE). However, they are very computationally demanding and also very dependent upon the precision of the input signal. Both of these requirements make the RLS filters impractical for real-time applications: their computational cost is significant, implying the use of expensive, power-demanding high-speed processors, and for systems lacking the appropriate dynamic range the adaptation algorithm can become unstable.
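The per-sample cost difference can be seen in a conventional exponentially weighted RLS recursion, sketched below (illustrative Python; the forgetting factor and initialization constant are assumed values). Every iteration performs matrix-vector products and a rank-one update on an N x N matrix, i.e., O(N^2) operations per sample, versus O(N) for LMS:

```python
import numpy as np

def rls(x, d, num_taps, lam=0.99, delta=100.0):
    """Exponentially weighted RLS; lam is the forgetting factor."""
    w = np.zeros(num_taps)
    P = delta * np.eye(num_taps)   # inverse-correlation estimate: O(N^2) storage
    buf = np.zeros(num_taps)
    e = np.zeros(len(x))
    for k in range(len(x)):
        buf = np.r_[x[k], buf[:-1]]
        Pu = P @ buf                      # O(N^2) matrix-vector product
        g = Pu / (lam + buf @ Pu)         # gain vector
        e[k] = d[k] - w @ buf             # a priori error
        w += g * e[k]
        P = (P - np.outer(g, Pu)) / lam   # O(N^2) rank-one update
    return w, e
```

The P matrix is also where the precision sensitivity mentioned above enters: round-off in the rank-one update can accumulate and destabilize the recursion on fixed-point hardware.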
In most cases, regular 16-bit Analog-to-Digital (A/D) converters do not provide enough dynamic range (or precision) for the filters to operate properly. These properties make the RLS filters unusable for most applications. The LMS class of algorithms provides a robust and computationally acceptable option for most applications, and most of the current research on adaptive filters is performed on the LMS class. However, the most significant problem with LMS algorithms is their slow convergence speed, which depends directly upon the choice of certain parameters in the algorithm.
There is a trade-off between the convergence speed and the asymptotic error: any increase in speed produces a larger steady-state error. The problem of slow convergence is especially evident in cases with a large number of filter coefficients, which are often necessary for systems with long time-delay spreads. In order to compensate for these problems while retaining the simplicity and robustness of the LMS class of algorithms, many attempts have been made to upgrade the algorithm so that its convergence speed is improved.
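This trade-off is easy to reproduce numerically. In the sketch below (illustrative Python; the step sizes and the unknown system are assumptions for the demo), a large step size converges quickly but settles at a higher error floor, while a small step size does the opposite:

```python
import numpy as np

def lms_ident(x, d, num_taps, mu):
    # Plain LMS system identification; returns the error history.
    w = np.zeros(num_taps)
    buf = np.zeros(num_taps)
    e = np.zeros(len(x))
    for k in range(len(x)):
        buf = np.r_[x[k], buf[:-1]]
        e[k] = d[k] - w @ buf
        w += mu * e[k] * buf
    return e

rng = np.random.default_rng(3)
x = rng.standard_normal(20000)
h = np.array([0.5, -0.3, 0.2, 0.1])                       # assumed unknown system
d = np.convolve(x, h)[:20000] + 0.1 * rng.standard_normal(20000)

e_fast = lms_ident(x, d, 4, mu=0.1)     # fast convergence, larger error floor
e_slow = lms_ident(x, d, 4, mu=0.002)   # slow convergence, smaller error floor
```

Comparing the mean squared error over the first few hundred samples against the last several thousand shows the effect in both directions: the large step size wins early, the small one wins at steady state.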
The basic idea is to introduce the optimum split Wiener filter. The approach consists of combining split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multi-split filter structure, is considered. It is shown that the multi-split transform is not an input-whitening transformation; instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the nature of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multi-split approach is extended to linear-phase adaptive filtering and linear prediction, and the optimum symmetric and antisymmetric linear-phase Wiener filters are presented.
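The diagonalization claim can be illustrated numerically. For a filter length that is a power of two, repeated symmetric/antisymmetric splitting amounts to a Hadamard-type orthonormal transform; applying it to the correlation matrix of a strongly correlated input concentrates energy on the diagonal while, being orthonormal, it leaves the eigenvalues (and hence the eigenvalue spread) untouched. (Illustrative Python; the AR(1) input model and the energy-ratio definition of the diagonalization factor are assumptions made for this demo.)

```python
import numpy as np

def multisplit_transform(N):
    # Repeated symmetric/antisymmetric splits for N a power of two
    # reduce to a Sylvester-type Hadamard matrix (orthonormal scaling).
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(N)

def diag_factor(M):
    # Fraction of the matrix energy carried by the diagonal.
    return np.sum(np.diag(M) ** 2) / np.sum(M ** 2)

N, rho = 8, 0.9
idx = np.arange(N)
R = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) input correlation matrix
T = multisplit_transform(N)
Rt = T @ R @ T.T                                  # transformed correlation matrix
```

Since T is orthonormal, Rt is a similarity transform of R, so the eigenvalue spread is preserved exactly; only the distribution of energy between diagonal and off-diagonal entries changes.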
Two algorithms, the LMS and the multi-split LMS (MS-LMS), are presented, analyzed, and compared. The results obtained from MATLAB simulations are presented and analyzed in detail.
OBJECTIVE
An appropriate formulation of the split filtering problem has yet to be provided, and such a formulation would give more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, and concurrency. That is the purpose of the present work.
LITERATURE SURVEY
Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix. Transform-domain LMS algorithms, based on transforms such as the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity.
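A transform-domain LMS filter of this kind can be sketched as follows (illustrative Python; the orthonormal DCT-II matrix, step size, and power-smoothing constant are assumptions for the demo, not any particular published variant). The input tap vector is rotated by the DCT, and each transformed coefficient is updated with a step size normalized by a running estimate of that coefficient's power, which is what compensates for the eigenvalue spread of a colored input:

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix.
    m = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * m / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_lms(x, d, num_taps, mu=0.05, beta=0.95, eps=1e-6):
    T = dct_matrix(num_taps)
    w = np.zeros(num_taps)           # transform-domain weights
    p = np.ones(num_taps)            # running power estimate per bin
    buf = np.zeros(num_taps)
    e = np.zeros(len(x))
    for k in range(len(x)):
        buf = np.r_[x[k], buf[:-1]]      # tapped delay line
        u = T @ buf                      # DCT of the input vector
        p = beta * p + (1 - beta) * u ** 2
        e[k] = d[k] - w @ u
        w += mu * e[k] * u / (p + eps)   # power-normalized update
    return w, e
```

The extra cost relative to plain LMS is visible in the per-sample transform (a matrix-vector product here; fast DCT algorithms reduce it to O(N log N)), which is the complexity penalty the text refers to.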