
BigONotation is used to describe the complexity of Algorithms (see ComplexityTheory) and is the standard way to describe how the execution time of a function scales with its input size. Big O notation is generally written with, well, a big O, and specifically gives an asymptotic upper bound on an algorithm's running time; it is most often quoted for the average or worst case.

In the following examples, the symbol n denotes the size of the input and the symbol k denotes an arbitrary constant.

!O(1) i.e. constant
The running time is independent of the input size. Such an algorithm always executes the same sequence of steps, no matter how large the input is.
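A minimal Python sketch (illustrative only; the function name is made up for this example): indexing into a list is a classic constant time operation.

  def first_element(items):
      # Touches exactly one element, whether the list holds
      # ten entries or ten million: O(1).
      return items[0]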
!O(n) i.e. linear
The amount of processing increases linearly as the size of the input increases. Searching an unsorted list by reading each element is a linear time algorithm.
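A minimal sketch of such a linear search in Python (the function name is invented for this example):

  def contains(items, target):
      # Reads each element at most once, so the worst case
      # grows linearly with len(items): O(n).
      for item in items:
          if item == target:
              return True
      return False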
!O(n log n) i.e. log linear
Many good algorithms that must compare or combine elements of their input with one another run in log-linear time. QuickSort is the canonical example (in the average case).
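A simple (not in-place) QuickSort sketch in Python, added here for illustration, not as an authoritative implementation:

  def quicksort(items):
      if len(items) <= 1:
          return items
      pivot = items[len(items) // 2]
      smaller = [x for x in items if x < pivot]
      equal   = [x for x in items if x == pivot]
      larger  = [x for x in items if x > pivot]
      # O(n) partitioning work per level, and about log n
      # levels of recursion on average: O(n log n).
      return quicksort(smaller) + equal + quicksort(larger)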
!O(n^k) i.e. polynomial
This is going to be quite slow for large input sizes. Many naive algorithms that must compare elements of their input with one another run in polynomial time. k is commonly 2, usually the result of a nested loop where both the outer and inner loop iterate over the entire input, for a total of n * n iterations. The canonical example is BubbleSort, sketched below.
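A BubbleSort sketch in Python showing the nested loop shape described above (illustrative only):

  def bubble_sort(items):
      items = list(items)  # sort a copy
      n = len(items)
      # Outer and inner loop both range over the input,
      # giving on the order of n * n comparisons: O(n^2).
      for i in range(n):
          for j in range(n - 1 - i):
              if items[j] > items[j + 1]:
                  items[j], items[j + 1] = items[j + 1], items[j]
      return items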
!O(k^n) i.e. exponential
Such an algorithm will probably not terminate within the lifetime of the Universe for large inputs. Consider: with k := 2, if the time it takes to process an input of size n = 1 is one microsecond (1/1,000,000 sec), it would take over a year to process an input of size n = 45, and over 35 years to process just 5 more elements at n = 50.
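With k = 2, a concrete example is enumerating every subset of the input: each element is either in a subset or not, so there are 2^n subsets to produce. A minimal Python sketch (the function name is invented for this example):

  def all_subsets(items):
      subsets = [[]]
      for item in items:
          # Each new element doubles the number of subsets,
          # so the total work is O(2^n).
          subsets += [subset + [item] for subset in subsets]
      return subsets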

BigONotation is complemented by little-o notation, Theta notation and others. However, they are rarely used outside of CS theory. These express, for example, lower bounds and tight bounds on an algorithm's growth, and distinguish the average case from the worst case. For example, for degenerate input, QuickSort scales as n^2 instead of n log n.
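The degenerate QuickSort case can be made visible by counting comparisons. The sketch below (assumed for illustration; it uses a naive first-element pivot) performs roughly n^2 / 2 comparisons on already-sorted input, versus on the order of n log n on shuffled input:

  import random

  def quicksort_comparisons(items):
      # Counts comparisons made by a first-element-pivot QuickSort.
      if len(items) <= 1:
          return 0
      pivot, rest = items[0], items[1:]
      smaller = [x for x in rest if x < pivot]
      larger = [x for x in rest if x >= pivot]
      return (len(rest) + quicksort_comparisons(smaller)
                        + quicksort_comparisons(larger))

  print(quicksort_comparisons(list(range(100))))  # 4950, i.e. ~ 100^2 / 2
  shuffled = list(range(100))
  random.shuffle(shuffled)
  print(quicksort_comparisons(shuffled))          # typically around 900, ~ n log n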

BigONotation is concerned purely with the theoretical aspects of an Algorithm and ignores RealWorld efficiency issues that arise from its implementation. Only the fastest growing component of the complexity of an Algorithm is used to describe the complexity, and constant factors are discarded. For example, if an Algorithm scales as 5n^2 + 6n - 2, it is described as being of complexity O(n^2). In terms of BigONotation it is therefore in the same class as an Algorithm that scales as n^2 + 10n, even though the latter would perform much better with growing input sizes.
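This can be checked numerically; the following Python snippet (using the two growth functions from the example above) shows their ratio settling toward the discarded constant factor 5, while both remain O(n^2):

  for n in (10, 100, 1000, 10000):
      f = 5 * n**2 + 6 * n - 2   # described as O(n^2)
      g = n**2 + 10 * n          # also O(n^2)
      print(n, round(f / g, 2))  # ratio approaches 5 as n grows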

See also: