BigONotation is used to describe the complexity of Algorithms (see ComplexityTheory) and is the standard way to describe how the execution time of a function scales with the size of its input. Big O notation is generally written with, well, a big O, and gives an upper bound on how fast the running time (or space) of an algorithm grows; it is most often quoted for the worst case, though it can also be applied to the average case.
In the following examples, the symbol n denotes the size of the input and the symbol k denotes an arbitrary constant.
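As a rough illustration (a small Python sketch, not part of the original text), the functions below take one, about n, and about n^2 basic steps respectively for an input of size n:

  # Illustrative only: how the number of basic steps grows with the input size n.
  def first_element(items):
      return items[0]                        # O(1): one step, independent of n

  def contains(items, target):
      for x in items:                        # O(n): at most one pass over the input
          if x == target:
              return True
      return False

  def has_duplicate(items):
      for i in range(len(items)):            # O(n^2): nested passes over the input
          for j in range(i + 1, len(items)):
              if items[i] == items[j]:
                  return True
      return False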
BigONotation is complemented by Little O notation, Theta notation and others, but these are rarely used outside of CS theory. They express related bounds, such as lower bounds and tight bounds, and allow distinguishing, say, the average case from the worst case. For example, for degenerate input QuickSort scales as n^2 instead of n log n.
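To make the QuickSort remark concrete, here is a hedged Python sketch (not from the original text) of a QuickSort that always takes the first element as pivot; on already sorted, i.e. degenerate, input every partition is maximally unbalanced, giving on the order of n^2 comparisons instead of n log n:

  # Naive QuickSort sketch: first element as pivot. On already sorted input
  # each call peels off only the pivot, so there are about n recursive levels
  # doing about n comparisons each -- roughly n^2 work overall.
  def quicksort(items):
      if len(items) <= 1:
          return items
      pivot = items[0]
      smaller = [x for x in items[1:] if x < pivot]
      larger = [x for x in items[1:] if x >= pivot]
      return quicksort(smaller) + [pivot] + quicksort(larger)

  # quicksort(already_sorted_list) exhibits the degenerate n^2 behaviour;
  # picking a random pivot avoids it in the average case.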
BigONotation is concerned purely with the theoretical aspects of an Algorithm and ignores RealWorld efficiency issues that arise from its implementation. Only the fastest growing component of an Algorithm's complexity is used to describe it, and constant factors are discarded. For example, if an Algorithm scales as 5n^2 + 6n - 2, it is described as being of complexity O(n^2). In terms of BigONotation it is therefore in the same class as an Algorithm that scales as n^2 + 10n, even though the latter performs noticeably better as the input grows.
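To illustrate the discarded constants with a hypothetical sketch (not from the original text): both functions below are O(n^2), yet one does roughly five times the work of the other, a difference BigONotation simply does not record:

  # Both are O(n^2); the factor of roughly 5 vanishes in Big O terms.
  def count_steps(items):
      n = len(items)
      steps = 0
      for i in range(n):
          for j in range(n):
              steps += 1                     # about n^2 steps in total
      return steps

  def count_steps_padded(items):
      n = len(items)
      steps = 0
      for _ in range(5):                     # five full n^2 passes
          for i in range(n):
              for j in range(n):
                  steps += 1                 # about 5*n^2 steps in total
      return steps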