Big O notation
Introduction
In mathematics, big O notation is used to describe the limiting behavior of a
function when the argument tends towards a particular value or infinity, usually in
terms of simpler functions. It is a member of a larger family of notations that is
called Landau notation, Bachmann–Landau notation (after Edmund Landau
and Paul Bachmann), or asymptotic notation. In computer science, big O
notation is used to classify algorithms by how they respond (e.g., in their processing
time or working space requirements) to changes in input size.
Big O notation characterizes functions according to their growth rates: different
functions with the same growth rate may be represented using the same O notation.
A description of a function in terms of big O notation usually only provides an
upper bound on the growth rate of the function. Associated with big O notation are
several related notations, using the symbols o, Ω, ω, and Θ, to describe other
kinds of bounds on asymptotic growth rates.
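For reference, the usual formal definition behind these statements (standard in the literature, though not quoted explicitly above) is: f(x) = O(g(x)) as
x → ∞ means that there exist constants M > 0 and x₀ such that |f(x)| ≤ M·|g(x)| for all x ≥ x₀; that is, beyond some point, f is bounded above by a
constant multiple of g.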
Usage
Big O notation has two main areas of application. In mathematics, it is commonly used to describe how closely a finite series approximates a given function,
especially in the case of a truncated Taylor series or asymptotic expansion. In computer science, it is useful in the analysis of algorithms. In both
applications, the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
There are two formally close, but noticeably different, usages of this notation: infinite asymptotics and infinitesimal asymptotics. This distinction is only in
application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.
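To make the infinitesimal usage concrete, the following minimal Python sketch (not part of the original text; exp and its degree-2 Taylor polynomial are
chosen purely for illustration) checks numerically that the truncation error of 1 + x + x^2/2 behaves like O(x^3) as x → 0, i.e. that the ratio
error/x^3 stays bounded:

import math

def taylor2(x):
    # Degree-2 Taylor polynomial of exp around 0
    return 1.0 + x + x * x / 2.0

for x in (1e-1, 1e-2, 1e-3):
    err = abs(math.exp(x) - taylor2(x))
    # The ratio stays bounded (it approaches 1/6, the next Taylor
    # coefficient), confirming exp(x) = 1 + x + x^2/2 + O(x^3) as x -> 0.
    print(f"x = {x:g}: error/x^3 = {err / x**3:.4f}")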
Infinite asymptotics
Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n
might be found to be T(n) = 4n² − 2n + 2. As n grows large, the n² term will come to dominate, so that all other terms can be neglected; for instance,
when n = 500, the term 4n² is 1,000 times as large as the 2n term. Ignoring the latter would have a negligible effect on the expression's value for most
purposes. Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n³ or n⁴.
Even if T(n) = 1,000,000n², if U(n) = n³, the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000³ =
U(1,000,000)). Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines
typically vary by only a constant factor in the number of steps needed to execute an algorithm.
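As a quick illustration of the dominance argument above, here is a minimal Python sketch (the functions are taken from the text; the sample values of n
are arbitrary):

def T(n):
    # Example running-time function from the text: T(n) = 4n^2 - 2n + 2
    return 4 * n**2 - 2 * n + 2

for n in (10, 500, 10_000):
    # The ratio T(n)/n^2 approaches the leading coefficient 4, so the
    # lower-order terms become negligible as n grows.
    print(f"n = {n}: T(n)/n^2 = {T(n) / n**2:.4f}")

# Crossover between 1,000,000*n^2 and n^3: the difference is positive
# below n = 1,000,000, zero at it, and negative beyond it.
for n in (999_999, 1_000_000, 1_000_001):
    print(f"n = {n}: 1,000,000*n^2 - n^3 = {1_000_000 * n**2 - n**3}")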
Extensions to the Bachmann–Landau notations
Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) logᵏ g(n)) for some k. Essentially, it
is big O notation with logarithmic factors ignored: for large inputs, the super-logarithmic part of the bound dominates the running time, and the
logarithmic factors contribute only lower-order detail. The notation is therefore used to avoid "nitpicking" over growth rates that are stated more tightly
than the matter at hand requires (since logᵏ n is always o(n^ε) for any constant k and any ε > 0).
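The final claim can be checked numerically; in the following Python sketch the particular choices k = 2 and ε = 0.5 are arbitrary, and a smaller ε merely
delays the decay to larger n:

import math

k, eps = 2, 0.5
for e in (3, 6, 12, 24):
    n = 10 ** e
    # log(n)**k / n**eps tends to 0, which is why soft-O may safely
    # discard logarithmic factors.
    print(f"n = 10^{e}: log(n)**{k} / n**{eps} = {math.log(n)**k / n**eps:.3g}")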
History (Bachmann–Landau, Hardy, and Vinogradov notations)
The symbol O was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie ("analytic
number theory"), the first volume of which (not yet containing big O notation) was published in 1892.[5] The number theorist Edmund Landau adopted it,
and was thus inspired to introduce in 1909 the notation o;[6] hence both are now called Landau symbols. The former was popularized in computer science
by Donald Knuth, who re-introduced the related Omega and Theta notations.[7] Knuth also noted that the Omega notation had been introduced by Hardy
and Littlewood[8] under a different meaning "≠o" (i.e. "is not an o of"), and proposed the above definition. Hardy and Littlewood's original definition (which
was also used in one paper by Landau[9]) is still used in number theory (where Knuth's definition is never used).