19-04-2014, 04:24 PM
The Levenberg-Marquardt method for nonlinear least squares curve-fitting problems
Attachment: The Levenberg-Marquardt.pdf (524.28 KB)
Abstract
The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic in the parameters, and finding the minimum of that quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.
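To make the blending of the two methods concrete, here is a minimal sketch of a Levenberg-Marquardt loop in Python/NumPy. The function names (`levenberg_marquardt`, `f`, `jac`) and the choice of Marquardt-style damping (scaling the diagonal of JᵀJ by λ) are assumptions for illustration, not code from the paper:

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, t, y, lam=1e-2, max_iter=100, tol=1e-10):
    """Minimal Levenberg-Marquardt loop for unweighted residuals.

    f(t, p)  : model function
    jac(t, p): Jacobian of f w.r.t. p, shape (len(t), len(p))
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = y - f(t, p)                       # residuals
        J = jac(t, p)
        JtJ = J.T @ J
        # Marquardt damping: lam scales the diagonal of J^T J.
        # Large lam -> small steps along the gradient (gradient descent);
        # small lam -> the Gauss-Newton step.
        h = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), J.T @ r)
        r_new = y - f(t, p + h)
        if r_new @ r_new < r @ r:             # step reduces the squared error
            p, lam = p + h, lam / 10          # accept; act more like Gauss-Newton
        else:
            lam *= 10                         # reject; act more like gradient descent
        if np.linalg.norm(h) < tol:
            break
    return p

# Example: fit y = p[0] * exp(p[1] * t) to synthetic (noise-free) data
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)

f = lambda t, p: p[0] * np.exp(p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(p[1] * t),
                                    p[0] * t * np.exp(p[1] * t)])
p_fit = levenberg_marquardt(f, jac, [1.0, -1.0], t, y)
```

The accept/reject logic on λ is what produces the behavior described in the abstract: far from the optimum, rejected steps inflate λ and the update shrinks toward a scaled gradient step; near the optimum, accepted steps deflate λ and the update approaches the Gauss-Newton step.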
Introduction
In fitting a function ŷ(t; p) of an independent variable t and a vector of n parameters p to a set of data points (t_i, y_i), it is customary and convenient to minimize the sum of the weighted squares of the errors (or weighted residuals) between the measured data y(t_i) and the curve-fit function ŷ(t_i; p). This scalar-valued goodness-of-fit measure is called the chi-squared error criterion.
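As a quick illustration, the chi-squared criterion can be computed directly from the definition above. The function name `chi_squared` and the linear example model are assumptions for illustration; the weights w_i are typically 1/σ_i² for measurement standard deviations σ_i:

```python
import numpy as np

def chi_squared(p, model, t, y, w):
    """Weighted sum-of-squares (chi-squared) goodness-of-fit measure.

    w holds the per-point weights, typically 1/sigma_i**2.
    """
    r = y - model(t, p)            # residuals between data and model
    return float(np.sum(w * r**2))

# Tiny example with a linear model y = p[0] + p[1]*t and unit weights
t = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])
model = lambda t, p: p[0] + p[1] * t
print(chi_squared([1.0, 2.0], model, t, y, np.ones_like(t)))  # exact fit -> 0.0
```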
The Gauss-Newton Method
The Gauss-Newton method is a method of minimizing a sum-of-squares objective function. It presumes that the objective function is approximately quadratic in the parameters near the optimal solution [1]. For moderately-sized problems the Gauss-Newton method typically converges much faster than gradient-descent methods [5].
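The Gauss-Newton step follows from that local quadratic assumption: linearizing the residuals gives the normal equations (JᵀJ) h = Jᵀ r, whose solution h jumps to the minimum of the quadratic model. A minimal sketch, with an assumed model and starting point chosen for illustration:

```python
import numpy as np

def gauss_newton(f, jac, p0, t, y, max_iter=50, tol=1e-12):
    """Undamped Gauss-Newton iteration for minimizing sum (y - f(t, p))^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = y - f(t, p)
        J = jac(t, p)
        # Normal equations of the local quadratic model: (J^T J) h = J^T r
        h = np.linalg.solve(J.T @ J, J.T @ r)
        p = p + h
        if np.linalg.norm(h) < tol:
            break
    return p

# Fit y = p[0]*t / (p[1] + t) (a saturation-type model) to noise-free data
t = np.array([0.5, 1.0, 2.0, 4.0])
f = lambda t, p: p[0] * t / (p[1] + t)
jac = lambda t, p: np.column_stack([t / (p[1] + t),
                                    -p[0] * t / (p[1] + t)**2])
y = f(t, np.array([3.0, 1.0]))
p_fit = gauss_newton(f, jac, [2.5, 1.2], t, y)
```

Note the step is undamped: far from the optimum the quadratic assumption can fail and the iteration may overshoot or diverge, which is exactly the weakness the Levenberg-Marquardt damping term addresses.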