Every algorithm has two main characteristics:

1. Amount of required memory.
2. Execution time.

These two characteristics are used to compare algorithms with each other. Some algorithms are faster but require more memory, while others are the opposite: slower, but more memory-efficient.

For most algorithms, the execution time and amount of memory cannot be expressed as a single number, because they depend on the size of the input. For example, the more elements there are in the input array, the longer a search or a sort will take.

This is why Big-O notation was invented: it describes how an algorithm's efficiency grows with the size of its input.

## Constant complexity - O(1)

The most efficient algorithm, in theory, runs in constant time and consumes a constant amount of memory. No matter how you increase the amount of incoming data, the complexity of the algorithm will not increase. The efficiency (or complexity) of such an algorithm is called constant and is written as O(1).

An example of an algorithm with constant complexity:

```js
const getArrayElement = (arr, i) => arr[i];
```

As input, we take the array `arr` and an index `i`, and we return the array element at position `i`.

Neither the execution speed nor the amount of memory required by this algorithm depends on the length of the input array, so its complexity is considered constant.

## Linear complexity - O(n)

The complexity of many algorithms grows in direct proportion to the amount of incoming data.

A good example is linear search:

```js
// Linear search: check each element in turn until we find a match.
const findArrayElement = (arr, x) => {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === x) {
      return i;
    }
  }

  return -1;
};
```

If a search in an array of 10 elements takes 5 seconds, then in an array of 100 elements it will take 50 seconds: 10 times longer.

This complexity (efficiency) is called linear and is written as O(n).

But note that the linear search algorithm needs no additional memory beyond a few variables, regardless of the array's size. So for this characteristic it gets the best possible score: O(1).
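The trade-off between time and memory mentioned at the start can be sketched with the same search problem. This is a minimal illustration, not part of the original examples: by building a `Set` up front, we spend O(n) additional memory, but each subsequent lookup is O(1) on average instead of O(n). The function name `makeFastFinder` is made up for this sketch.

```js
// Sketch: trading memory for speed. The Set costs O(n) extra memory,
// but Set.prototype.has runs in O(1) time on average, unlike the
// O(n) linear search above.
const makeFastFinder = (arr) => {
  const seen = new Set(arr); // O(n) additional memory
  return (x) => seen.has(x); // O(1) average-time lookup
};

const contains = makeFastFinder([3, 1, 4, 1, 5]);
console.log(contains(4)); // true
console.log(contains(9)); // false
```

Which variant is better depends on the situation: if you search the same array many times, paying the memory cost once is usually worth it; for a single search, plain linear search wins.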