How to Use the Big O Notation in Data Structures


Big O notation is a scary concept for many programmers because the academic explanation makes very little sense at first sight. In short, this concept helps us determine how efficient and/or limited a piece of code is. Here's a detailed explanation of Big O notation, its purpose, and how it can help programmers improve their code.

What is the Big O Notation in Java?

Big O notation is an expression used to categorize algorithms and data structures based on how they respond to changes in input size. Specifically, it describes how the processing time of an algorithm changes as the size of the problem changes. This expression approximates how much slower an algorithm will get if you double the number of items to process, or how much faster it will get if you cut that number in half.

Big O is written as O(n). O represents the growth rate of a function, also known as the order of the function, while n is the quantity the complexity is measured against (typically the size of a collection).

Types of Complexities in Big O Notation

  1. O(1)/Constant Complexity - The algorithm always takes the same amount of time, irrespective of the size of the data set.
  2. O(log n)/Logarithmic Complexity - The time taken increases as the data set increases, but not proportionately.
  3. O(n)/Linear Complexity - The time taken grows proportionately with the size of the data set.
  4. O(n log n)/Linearithmic Complexity - A combination of linear and logarithmic complexity: the algorithm performs an O(log n) operation for each of n items, which combine to create O(n log n).
  5. O(n^2)/Quadratic Complexity - The time taken grows with the square of the size of the data set, so doubling the data set roughly quadruples the running time.
  6. O(2^n)/Exponential Complexity - The running time doubles for every new component added to the data set.
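Each of the first few classes above can be illustrated with a small, self-contained Java sketch. The class and method names below (ComplexityExamples, binarySearch, hasDuplicate, and so on) are our own illustrative choices, not standard library APIs:

```java
import java.util.Arrays;

public class ComplexityExamples {

    // O(1): a single array access takes the same time regardless of length
    static int first(int[] group) {
        return group[0];
    }

    // O(log n): binary search halves the remaining range on every step
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    // O(n): one pass over the data, so work grows in step with the input
    static long sum(int[] group) {
        long total = 0;
        for (int value : group) total += value;
        return total;
    }

    // O(n^2): every element is compared against every other element
    static boolean hasDuplicate(int[] group) {
        for (int x = 0; x < group.length; x++)
            for (int y = x + 1; y < group.length; y++)
                if (group[x] == group[y]) return true;
        return false;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 3, 1};
        int[] sorted = data.clone();
        Arrays.sort(sorted); // Arrays.sort itself is an O(n log n) operation

        System.out.println(first(data));             // prints 7
        System.out.println(binarySearch(sorted, 9)); // prints 4
        System.out.println(sum(data));               // prints 23
        System.out.println(hasDuplicate(data));      // prints true
    }
}
```

Note how only the loop structure matters for the classification: the single loop in sum gives O(n), while the nested loops in hasDuplicate give O(n^2).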

Big O Notation Examples in Java

Let's suppose we want to write a function that, given a group of integers greater than 0, produces the smallest integer in the group. To illustrate how to use Big O notation, we will formulate two different solutions to this problem.

The first function will produce the smallest integer in the group. It simply iterates over all values in the group and keeps the smallest integer seen so far in a variable called curMin. Let's assume that the group being used in our function has 10 components.

The CompareSmallestNumber Java Function

int CompareSmallestNumber(int group[]) {
    int x, curMin;

    // set the smallest value to the first item in the group
    curMin = group[0];

    /* iterate through the group to find the smallest value,
       presuming the group has 10 components */
    for (x = 1; x < 10; x++) {
        if (group[x] < curMin) {
            curMin = group[x];
        }
    }

    // return the smallest value in the group
    return curMin;
}

The CompareToAllNumbers Java Function

int CompareToAllNumbers(int group[]) {
    int x, y;
    boolean isMin;

    /* iterate through each component in the group,
       presuming there are only 10 parts */
    for (x = 0; x < 10; x++) {
        isMin = true;

        for (y = 0; y < 10; y++) {
            /* compare the value in group[x] to the other values;
               if group[x] is greater than any of the values in group[y],
               then group[x] is not the minimum; keep in mind that x and y
               index the same group - we are only taking out one value with
               index x and comparing it to the other values with index y */
            if (group[x] > group[y]) {
                isMin = false;
            }
        }

        // stop as soon as we find the minimum
        if (isMin) {
            break;
        }
    }

    return group[x];
}

The Big O Notation in Time Complexity

In the first example, each input item is examined only once, when it is compared to the current minimum value. In Big O notation, this is expressed as O(n). In the second example, each item is compared to every other item, so for n inputs the function performs n × n comparisons, which is expressed as O(n^2).

In the first example, we also set the curMin variable to the first value in the input group. That counts as one extra "part" of work. For this reason, many programmers would consider our Big O to be O(n + 1). But Big O deals with how running time grows with the number of inputs, which is n in our case. As n approaches infinity, the constant 1 becomes insignificant, so we drop it. Therefore, the CompareSmallestNumber function is O(n) instead of O(n + 1).

Similarly, if we have n^3 + n and n approaches infinity, the "+ n" term becomes irrelevant. We therefore drop it and, instead of O(n^3 + n), we write O(n^3), or order n^3 time complexity.
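To see why the lower-order term drops out, here is a small sketch (the class name DropLowerOrderTerms is our own) that prints the ratio of n^3 + n to the dominant n^3 term as n grows:

```java
public class DropLowerOrderTerms {
    public static void main(String[] args) {
        // For growing n, compare n^3 + n against the dominant n^3 term.
        for (long n = 10; n <= 10_000; n *= 10) {
            double full = Math.pow(n, 3) + n;
            double dominant = Math.pow(n, 3);
            // The ratio approaches 1.0: the "+ n" term stops mattering.
            System.out.printf("n=%d ratio=%.9f%n", n, full / dominant);
        }
    }
}
```

Already at n = 1000 the ratio is 1.000001, which is why Big O keeps only the fastest-growing term.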


Big O notation is very broad, but this article only provides the basics to enable you to understand the concepts. If you need an in-depth look at the case complexities of the most common search and sorting algorithms, we highly recommend the Big-O complexity chart cheat sheet.

Be sure to let us know your thoughts about this article in the comments section below.
