Differences between current version and previous revision of BigONotation.
Newer page: | version 5 | Last edited on Sunday, August 24, 2003 8:15:36 am | by AristotlePagaltzis | |
Older page: | version 4 | Last edited on Sunday, August 24, 2003 12:11:49 am | by StuartYeates | Revert |
@@ -1,13 +1,17 @@
-[BigONotation] is used to describe the complexity of algorithms (aka ComplexityTheory). Big Oh specifically defines the __upper bound__ of the complexity of an algorithm. Big Oh notation is generally written with, well, a big O. For example:
- O(n)
-Means an algorithm is ''linear'': the amount of processing increases linearly as the size of the input increases. The symbol ''n'' here is the symbol denoting the size of the input. Hence:
- O(n log n)
-Means the algorithm is ''log linear''. Quicksort is an example of a log linear algorithm.
+[BigONotation] is used to describe the complexity of [Algorithm]s (aka ComplexityTheory) and is the standard way to describe how the execution time of a function scales to its input size. Big O notation is generally written with, well, a big O, and specifically defines the __upper bound__ of the complexity of an algorithm in the average case.
-If you write a function that is O(n*n) it is probably going to be quite slow with a large input size. An O(1) function is independent of the input size. This would be something that only ever executes the same amount of code, no matter the input size.
+In the following examples, the symbol ''n'' denotes the size of the input and the symbol ''k'' denotes an arbitrary constant.
-[BigONotation] ignores significant, RealWorld efficiency issues, mainly because it ``throws away'' constant factors. Constant factors include: starting a virtual machine ([Java]), forking, linking in external libraries ([C]/[C++]) and compiling to intermediate code (as many InterpretedLanguages do). [BigONotation] is hardly the only way to describe the complexity of an algorithm; there is also little oh notation, theta notation and others. However, they are rarely, if ever, used. BigONotation is the standard way to describe how a function scales to its input size.
+; ''O(1)'' ie ''constant'' : Independent of the input size. Such an algorithm always executes the same sequence of steps no matter what the input size.
+; ''O(n)'' ie ''linear'' : The amount of processing increases linearly as the size of the input increases. Searching an unsorted list by reading each element is a linear time algorithm.
+; ''O(n log n)'' ie ''log linear'' : Many good algorithms that need to process each part of their input in relation to other parts of it are log linear. QuickSort is the canonical example.
+; ''O(n^k)'' ie ''polynomial'' : This is going to be quite slow with a large input size. Many naive algorithms that need to process each part of their input in relation to other parts of it execute in polynomial time. ''k'' is commonly 2, and is usually the result of a nested loop where both the outer and inner loop iterate over the entire input, for a total of ''n * n'' iterations. The canonical example would be BubbleSort.
+; ''O(k^n)'' ie ''exponential'' : Such an algorithm will probably not terminate within the lifetime of the Universe for large inputs. Consider: with ''k := 2'', if the time it takes to process input of size ''n = 1'' is one microsecond (1/1,000,000 sec), it would take over a year to process an input of size ''n = 45'', and over 35 years to process just 5 more elements for ''n = 50''.
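The contrast between linear and quadratic growth above can be made concrete by counting basic operations. The following is an illustrative sketch (function names are invented for this example, not part of the page): the comparison count for an unsorted linear search grows as ''n'', while naive BubbleSort grows as ''n * (n - 1) / 2''.

```python
def linear_search_ops(lst, target):
    """Search an unsorted list; return the number of comparisons made (O(n))."""
    ops = 0
    for item in lst:
        ops += 1
        if item == target:
            break
    return ops

def bubble_sort_ops(lst):
    """Naive bubble sort; return the number of comparisons made (O(n^2))."""
    data = list(lst)
    ops = 0
    n = len(data)
    for i in range(n):
        for j in range(n - 1 - i):
            ops += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return ops

for n in (10, 100, 1000):
    search_cost = linear_search_ops(list(range(n)), -1)  # absent target: worst case, n comparisons
    sort_cost = bubble_sort_ops(list(range(n, 0, -1)))   # reversed input: n*(n-1)/2 comparisons
    print(n, search_cost, sort_cost)
```

At ''n = 1000'' the search needs 1,000 comparisons but the sort needs 499,500, which is the practical meaning of the gap between O(n) and O(n^2).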
-Also, see [An informal introduction to O(N) notation|http://www.perlmonks.org/index.pl?node_id=227909].
+[BigONotation] is complemented by little o notation, theta notation and others. However, they are rarely used outside of CS theory. These deal with cases such as the lower bound of the average case and the worst case. For example, for degenerate input, QuickSort scales as ''n^2'' instead of ''n log n''.
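The degenerate QuickSort case can be observed by counting comparisons. This sketch assumes a naive first-element pivot (real implementations often pick pivots more carefully precisely to avoid this): on already-sorted input every partition is maximally lopsided, so the comparison count reaches the worst case ''n * (n - 1) / 2''.

```python
import random

def quicksort_comparisons(lst):
    """Naive QuickSort with a first-element pivot.
    Return (sorted_list, number_of_comparisons)."""
    if len(lst) <= 1:
        return list(lst), 0
    pivot = lst[0]
    left, right = [], []
    comparisons = 0
    for x in lst[1:]:
        comparisons += 1          # one comparison against the pivot per element
        (left if x < pivot else right).append(x)
    sorted_left, c_left = quicksort_comparisons(left)
    sorted_right, c_right = quicksort_comparisons(right)
    return sorted_left + [pivot] + sorted_right, comparisons + c_left + c_right

random.seed(0)
shuffled = list(range(200))
random.shuffle(shuffled)
_, average_case = quicksort_comparisons(shuffled)            # roughly n log n
_, degenerate = quicksort_comparisons(list(range(200)))      # sorted input: n*(n-1)/2 = 19900
print(average_case, degenerate)
```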
-''Someone might want to merge this and [BigOhNotation].''
+[BigONotation] is concerned purely with the theoretical aspects of an [Algorithm] and ignores RealWorld efficiency issues that arise from its implementation. Only the fastest growing component of the complexity of an [Algorithm] is used to describe the complexity, and constant factors are discarded. For example, if an [Algorithm] scales by ''5n^2 + 6n - 2'', it would be described as being of complexity ''O(n^2)''. In terms of [BigONotation] it is therefore in the same class as an [Algorithm] that scales by ''n^2 + 10n'', even though the latter would perform much better with growing input sizes.
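The discarding of constant factors and lower-order terms can be checked numerically. Using the two hypothetical scaling functions from the example above, the ratio of each to ''n^2'' settles to a constant (5 and 1 respectively) as ''n'' grows, which is exactly why both are classed as ''O(n^2)'':

```python
def f(n):
    """First hypothetical scaling function from the example: 5n^2 + 6n - 2."""
    return 5 * n**2 + 6 * n - 2

def g(n):
    """Second hypothetical scaling function from the example: n^2 + 10n."""
    return n**2 + 10 * n

for n in (10, 1000, 100000):
    # Dividing by n^2 exposes the constant factor once lower-order terms fade.
    print(n, f(n) / n**2, g(n) / n**2)
```

Both ratios flatten out, so both functions belong to the same Big O class, even though ''g'' does roughly a fifth of the work of ''f'' at every large input size.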
+
+See also:
+* [A rough guide to big-oh notation|http://www.cs.strath.ac.uk/~mdd/teaching/alg&comp/big_oh.html]
+* [An informal introduction to O(N) notation|http://www.perlmonks.org/index.pl?node_id=227909]