# Everything you Need to Know About Big-O Notation

**By Marco Wong**

(Published on 28 September 2021)

If you are taking computer science or algorithm-related courses, you have probably come across the term Big O notation. It is a common way to describe the efficiency and complexity of your code. Even though it is not required in some syllabuses, learning Big O notation can be beneficial when analysing your algorithms. In college interviews, it can sometimes be the edge you need to make the cut.

**What is Big-O?**

Big O notation describes the complexity of your code using algebraic terms. The notation is actually a function, in both mathematical and computational terms: O( ). The letter O is used as it represents the “order of the function”.

Mathematically, O( ) describes the growth rate of a function *f(x)* as x increases from 0 to positive infinity. To put it simply, it is an estimation of how the value of the function changes as x increases. At the end of the day, we want to know how fast a function can grow when x is large. Generally, the term with the largest power of x is kept and the terms with smaller powers are omitted. For example, if *f(x)* = 9x^{2} - 5x + 10, then *f(x)* = O(x^{2}). This means the function grows quadratically as x approaches positive infinity.

In computer science, big O notation is used to describe computational complexity. Here, big O gives the asymptotic upper bound of a function, that is, a bound on how fast the function can grow. Formally, 𝑓(𝑁) = O(𝑔(𝑁)) if there are constants c and N₀ such that 𝑓(𝑁) ≤ c * 𝑔(𝑁) for all 𝑁 ≥ N₀. In other words, 𝑓(𝑁) grows at most as fast as 𝑔(𝑁).
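To make the definition concrete, here is a small sketch checking the earlier example f(x) = 9x^{2} - 5x + 10 against g(x) = x^{2}. The constant c = 10 and the threshold x ≥ 2 are just one choice that happens to work; they are not the only valid ones.

```python
def f(x):
    return 9 * x**2 - 5 * x + 10

def g(x):
    return x**2

# Verify f(x) <= c * g(x) for all tested x >= 2 with c = 10,
# which is exactly what f(x) = O(x^2) requires.
c = 10
print(all(f(x) <= c * g(x) for x in range(2, 100_000)))  # True
```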

For an algorithm, 𝑓( ) is the function/method and n represents the input size. You will often see the complexity of an algorithm described as O(𝑛 log 𝑛), O(𝑛), etc. This expresses how the complexity grows with respect to the input size. For instance, quicksort has a worst-case complexity of O(n^{2}), which means that when the input size doubles (2n), the number of iterations grows by at most a factor of 4 ((2𝑛)^{2} = 4𝑛^{2}).
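As a quick sanity check of that doubling argument, counting the iterations of a plainly quadratic nested loop shows the fourfold growth directly (`count_pairs` is a made-up helper for illustration, not from any library):

```python
def count_pairs(n):
    # A deliberately O(n^2) pattern: one loop nested inside another
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

print(count_pairs(10))  # 100
print(count_pairs(20))  # 400 -- doubling the input quadruples the iterations
```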

**How do I find out the complexity?**

Complexity comes in two main types: space and time. Space complexity is the amount of memory the algorithm occupies when running; time complexity is the time required to run the algorithm. Both space and time matter if programs are to be efficient, given that computers, and we ourselves, have only finite resources, at least for now.

**Space complexity**

Typically, space complexity is only a concern in recursive functions. In a simple algorithm without recursion, the number of variables is fixed, so the space complexity is O(1), which means it does not grow with the input size. (Algorithms that build data structures proportional to the input, of course, use more than constant space.)

In the case like this:

```
def sum_of_seq(num):
    if num < 1:
        return 0
    else:
        return sum_of_seq(num - 1) + num
```

This Python code computes 𝑓(𝑥) = 𝑓(𝑥-1) + 𝑥, with 𝑓(𝑛) = 0 for 𝑛 < 1, which means each call for 𝑛 > 0 creates a new call. The calls stack up until the base case is reached, giving a space complexity of O(𝑛).

The space complexity of a recursive function is proportional to the maximum depth of the recursion: essentially O(1) when there is no recursion and O(𝑛) when the recursion goes 𝑛 levels deep.
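For contrast, here is a sketch of the same sum written without recursion (`sum_of_seq_iter` is a hypothetical name for this illustration). It keeps only one accumulator variable, so its space complexity stays O(1) no matter how large the input is:

```python
def sum_of_seq_iter(num):
    # One accumulator, no call stack growth: O(1) space
    total = 0
    for i in range(1, num + 1):
        total += i
    return total

print(sum_of_seq_iter(5))  # 15, the same result as the recursive version
```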

**Time complexity**

Something perhaps even more important for an algorithm is the time complexity. There is usually a trade-off between time and space efficiency, but since space complexity typically does not go beyond O(n), most questions about big O notation revolve around time complexity.

Time complexity essentially counts how many iterations are needed to reach the end of the algorithm. Anything written inside a loop or a recursion is the focus of time complexity analysis.

The most typical uses of loops are “searches and sorts”.

**Insertion sort:**

```
import bisect

def insertion_sort(arr):
    for i in range(1, len(arr)):
        val = arr[i]
        # find where val belongs in the already-sorted prefix arr[:i]
        j = bisect.bisect_left(arr, val, 0, i)
        arr = arr[:j] + [val] + arr[j:i] + arr[i+1:]
    return arr
```

**Selection sort:**

```
def selection_sort(arr):
    for i in range(len(arr)):
        min_idx = i
        for j in range(i + 1, len(arr)):
            if arr[min_idx] > arr[j]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

In the insertion sort function there is one visible for-loop that runs n times, but each pass also rebuilds the list by slicing, which itself takes up to n steps, so the overall worst-case complexity is still O(n^{2}). In selection sort, a for-loop is nested inside another for-loop, giving O(n^{2}) in every case. The practical difference is that insertion sort finishes in close to O(n) time on input that is already nearly sorted, while selection sort always does the full O(n^{2}) work. So when an interviewer asks why you prefer insertion sort over selection sort, the answer can simply be time complexity, with reference to the big O notation.

Another common example is recursion.

Take a look at the following implementation of the Fibonacci sequence:

```
def fibo(n):
    if n < 2:
        return n
    else:
        return fibo(n - 1) + fibo(n - 2)
```

In each call of the function fibo(n), if n is 2 or larger, two further calls are created, and each of those creates two more in turn. This produces exponential growth in the number of calls, where increasing the input by 1 roughly doubles the work. The time complexity is O(2^{n}).
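You can observe the blow-up by instrumenting the function with a call counter, as in this rough sketch (`fibo_with_count` is a made-up wrapper; the exact counts follow the recurrence calls(n) = calls(n-1) + calls(n-2) + 1):

```python
def fibo_with_count(n):
    # Returns (fibonacci value, number of calls made)
    calls = [0]

    def fibo(k):
        calls[0] += 1
        if k < 2:
            return k
        return fibo(k - 1) + fibo(k - 2)

    return fibo(n), calls[0]

print(fibo_with_count(10))  # (55, 177)
print(fibo_with_count(11))  # (89, 287) -- one step up, nearly double the calls
```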


**How do I improve my algorithm?**

By now, you should know how to deduce the time and space complexity of the algorithm. Here are some tips to save some memory and time for you and your computer:

- Reduce the number of times a loop has to run by adjusting its conditions
- If recursion is not a must, replace it with a direct calculation or an iterative approach to save memory
- Use another technique, such as memoisation, to reduce the recursion depth
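Applying the second tip to the Fibonacci example above, an iterative sketch (`fibo_iter` is a hypothetical name) brings the time complexity down from O(2^{n}) to O(n) and the space down to O(1):

```python
def fibo_iter(n):
    # Two rolling variables replace the whole recursion tree
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibo_iter(10))  # 55, the same answer as the recursive fibo(10)
```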

Marco’s teaching style is interactive and communication-focused; he focuses on motivating students to be active learners. Sign up for a trial lesson with Marco now!

**About The Edge**

The Edge Learning Center is Hong Kong’s premier Test Preparation, Academic Tutoring, and Admissions Consulting services provider. Founded in 2008, The Edge has helped thousands of students improve their ACT and SAT scores as well as their IB and AP grades. The AC team has just finished off another successful period in which students gained acceptance to schools such as Columbia, Yale, UChicago, and more! Check out our latest Admissions Results!