All algorithms have two main characteristics:
- Amount of required memory.
- Execution time.
These two are used to compare algorithms with each other. Some algorithms are faster but require more memory, while others are slower but need less memory.
For most algorithms, the execution time and the amount of memory cannot be expressed as a single number, because they depend on the size of the input. The more elements there are in the input array, the longer searching and sorting will take.
So, Big-O notation was invented to describe how an algorithm's efficiency scales with the size of its input.
Constant complexity - O(1)
The most efficient algorithm, in theory, runs in constant time and consumes a constant amount of memory. No matter how you increase the amount of incoming data, the complexity of the algorithm will not increase. The efficiency (or complexity) of such an algorithm is called constant and is written as O(1).
An example of an algorithm with constant complexity:
const getArrayElement = (arr, i) => arr[i];
As input, we get the array arr and the index i, and we return the array element at position i.
The execution speed and the amount of memory required by this algorithm don't depend on the length of the input array, so the complexity is considered constant.
Linear complexity - O(n)
The complexity of many algorithms grows in direct proportion to the amount of incoming data.
A good example is linear search:
const findArrayElement = (arr, x) => {
  // Check every element in order until we find x.
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === x) {
      return i;
    }
  }
  return -1; // x is not in the array
};
If a search in an array of 10 elements takes 5 seconds, then in an array of 100 elements it will take 50 seconds - 10 times longer.
This complexity (efficiency) is called linear and is written as O(n).
Note, however, that the linear search algorithm needs no additional memory. For this characteristic, it gets the best possible score - O(1).
Quadratic complexity - O(n²)
Simple sorting algorithms like bubble sort or selection sort have O(n²) complexity.
This is not very efficient: if the length of the array grows 10 times, the execution time of the algorithm increases 10² (100) times!
This happens because the algorithm uses a nested loop. We estimate the complexity of one pass as O(n), but we make not one pass, rather n of them. Therefore, the total execution time is estimated as O(n · n) = O(n²).
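To make the nested loop concrete, here is a minimal bubble sort sketch; the function name bubbleSort and the choice to sort a copy of the input are just for this example:
const bubbleSort = (arr) => {
  // Copy the input so the original array is not mutated.
  const a = [...arr];
  // Outer loop: up to n passes over the array.
  for (let i = 0; i < a.length - 1; i++) {
    // Inner loop: each pass also costs O(n),
    // so the total work grows as O(n²).
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        // Swap adjacent elements that are out of order.
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
      }
    }
  }
  return a;
};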
Logarithmic complexity - O(log n)
Algorithms that can reduce the amount of data they examine at each iteration have logarithmic complexity - O(log n).
Binary search is a good example of an algorithm with logarithmic complexity. At each iteration, we discard half of the data and work only with the other half.
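Here is a minimal binary search sketch, assuming the input array is sorted in ascending order (the name binarySearch is just for this example):
const binarySearch = (arr, x) => {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    // Look at the middle of the current range.
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === x) {
      return mid;
    }
    // Discard the half that cannot contain x.
    if (arr[mid] < x) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }
  return -1; // x is not in the array
};
Doubling the length of the array adds only one extra iteration of the loop, which is why the running time grows as O(log n).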