BigONotation is used to describe the complexity of algorithms (see ComplexityTheory). Big Oh specifically defines an upper bound on the complexity of an algorithm. Big Oh notation is generally written with, well, a big O. For example
O(n)
Means an algorithm is linear: the amount of processing increases linearly as the size of the input increases. Here, n denotes the size of the input. Hence
O(n log n)

Means the algorithm is log-linear. Quicksort is an example of a log-linear algorithm (in the average case; its worst case is O(n*n)).
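
For instance, a function that sums a list does O(n) work, while Quicksort does O(n log n) work on average. Here is a minimal Python sketch of both (the names linear_sum and quicksort are just illustrative, not from any library):

    def linear_sum(items):
        """O(n): touches each element exactly once."""
        total = 0
        for x in items:       # n iterations, constant work each
            total += x
        return total

    def quicksort(items):
        """O(n log n) on average: each level of recursion does O(n)
        partitioning work, and there are about log n levels."""
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        smaller = [x for x in items if x < pivot]
        equal   = [x for x in items if x == pivot]
        larger  = [x for x in items if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)

    print(linear_sum([3, 1, 4, 1, 5]))   # 14
    print(quicksort([3, 1, 4, 1, 5]))    # [1, 1, 3, 4, 5]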

If you write a function that is O(n*n) it is probably going to be quite slow with a large input size. An O(1) function is independent of the input size: it executes the same amount of work no matter how large the input is.
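
To make the contrast concrete, here is a hedged Python sketch (has_duplicate and first_element are made-up names for illustration only). The first function compares every element against every other element, so it is O(n*n); the second does the same small amount of work regardless of input size, so it is O(1).

    def has_duplicate(items):
        """O(n*n): for each element, scan every other element."""
        for i in range(len(items)):
            for j in range(len(items)):
                if i != j and items[i] == items[j]:
                    return True
        return False

    def first_element(items):
        """O(1): constant work, however long the list is."""
        return items[0]

    print(has_duplicate([3, 1, 4, 1, 5]))   # True
    print(first_element([3, 1, 4, 1, 5]))   # 3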

BigONotation ignores significant, RealWorld efficiency issues, mainly because it "throws away" constant factors. Constant factors include: starting a virtual machine (Java), forking, linking in external libraries (C/C++) and compiling to intermediate code (as many InterpretedLanguages? do). BigONotation is hardly the only way to describe the complexity of an algorithm; there are also little oh notation, theta notation and others. However, they are seen far less often in practice. BigONotation is the standard way to describe how a function scales with its input size.
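
The point about constant factors can be shown with a toy cost model (the numbers below are made up purely for illustration): an O(n) algorithm that pays a large fixed startup cost can lose to an O(n*n) algorithm for small inputs, even though Big Oh says the linear one is "better".

    def quadratic_cost(n):
        return n * n             # O(n*n), but a tiny constant factor

    def linear_cost(n):
        return 100_000 + 10 * n  # O(n), but a large fixed startup cost

    for n in (10, 100, 1_000, 100_000):
        winner = "quadratic" if quadratic_cost(n) < linear_cost(n) else "linear"
        print(n, winner)
    # Up to a few hundred elements the quadratic algorithm is cheaper;
    # Big Oh only tells you which one wins as n grows large.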

Also, see An informal introduction to O(N) notation.

Someone might want to merge this and BigOhNotation?.