< FrankJS />

What is Big O Notation?

Big O notation is an important topic, one that we deal with regularly, knowingly or not.

Big O is a mathematical notation that describes the limiting behavior of a function as its argument grows.

The O means "order of", which generally equates to "rate of growth". Let's say we have an algorithm with O(n): the order of growth is n, where n is the size of the input, so the complexity is linear. Any aspect of our function that does not grow with the input, i.e. a constant, gets dropped from the notation. For example, O(n + 3) becomes O(n). We "just don't care" about the constant when looking at the big picture of how our function behaves at scale.
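As a rough sketch (the function name sumValues is just an illustration, not something from a specific library): the statements outside the loop are constant work, while the loop body runs n times. That's roughly n + 3 operations in total, which Big O records simply as O(n).

function sumValues(values: number[]): number {
  let total = 0;               // constant work
  const count = values.length; // constant work
  for (let i = 0; i < count; i++) {
    total += values[i];        // runs n times; this is the term that grows
  }
  return total;                // constant work
}

No matter how many constant-time statements we add around the loop, the loop is what dominates as the input grows, so the notation keeps only that term.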

This is relevant to us because it gives us a way to measure and describe our work in terms of scaling. Scaling matters to us as engineers because we generally must care about how our processing and data grow, i.e. time and space complexity.

Big O gives us a way to formalize how we discuss the runtime of an algorithm in terms of how it grows as the input grows. In particular, we are concerned with the worst-case scenario for our algorithms.

Some common notations:

O(n), or linear complexity, means the runtime increases, at most, linearly with the size of the input.
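A minimal sketch of a linear operation, a simple search through an unsorted array (findIndexOfValue is an illustrative name): in the worst case the loop has to touch every element once, so the runtime grows in step with n.

function findIndexOfValue(values: number[], target: number): number {
  for (let i = 0; i < values.length; i++) {
    if (values[i] === target) return i; // best case: found early
  }
  return -1; // worst case: checked all n elements
}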

O(1), or constant complexity, means the runtime stays the same regardless of the size of the input.
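A small sketch of constant time (firstElement is an illustrative name): reading a single array slot takes the same time whether the array holds ten elements or ten million.

function firstElement<T>(values: T[]): T | undefined {
  return values[0]; // one operation, independent of input size
}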

O(log n), or logarithmic complexity, means the runtime grows with the logarithm of the input size. Examples where this may be found are operations on binary trees, binary search, or binomial heaps. O(log n) is considered rather efficient: the ratio of the number of operations to the size of the input tends toward zero as n increases.
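A hedged sketch of binary search over a sorted array (binarySearch is an illustrative name): each pass through the loop discards half of the remaining range, so the number of iterations grows with log n rather than n.

function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // discard the lower half
    else high = mid - 1;                     // discard the upper half
  }
  return -1; // not found
}

Doubling the size of the sorted array only adds one more iteration, which is why logarithmic algorithms hold up so well at scale.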

Please refer to these resources for more of the common variations of Big O and further details:

Big-O Algorithm Complexity Cheat Sheet

Big O notation (Wikipedia)

Frank J Santaguida, 2022