About Time Complexity

#1 Time and Space Complexity

An algorithm's complexity is a measure of its performance, which can be evaluated through two metrics: time complexity and space complexity.

Time complexity measures how the number of operations an algorithm performs grows with the size of its input, while space complexity measures how much memory the algorithm uses as the input grows.

In other words, time complexity assesses the speed of an algorithm, and space complexity evaluates the memory usage.
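
To make the distinction concrete, here is a minimal sketch (the function names are my own, chosen for illustration) of two ways to check a list for duplicates: one spends extra memory to run faster, the other uses constant memory but performs far more operations.

```python
def has_duplicate_fast(items):
    # O(n) time, O(n) space: a set trades extra memory for speed.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False


def has_duplicate_small(items):
    # O(n^2) time, O(1) space: compares every pair, using no extra memory.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```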

#2 Asymptotic Notations

Algorithm complexity is often expressed using asymptotic notation, which includes three common notations: O (Big-O), Ω (Omega), and Θ (Theta).

  1. Big-O Notation (O)
    This notation represents the upper bound or worst-case scenario of an algorithm's time complexity.

  2. Omega Notation (Ω)
    This notation represents the lower bound or best-case scenario of an algorithm's time complexity.

  3. Theta Notation (Θ)
    This notation represents a tight bound, where the upper and lower bounds coincide up to constant factors. It is often informally associated with the average-case scenario, but strictly it means the running time is bounded both above and below by the same function.

Typically, the worst-case scenario of an algorithm's performance is evaluated using Big-O notation. It provides a standardized and easy-to-read way of expressing an algorithm's time complexity, making it a widely used notation in algorithm analysis.
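
For reference, these are the standard textbook definitions of the three notations, written out in LaTeX (here f(n) is the algorithm's running time and g(n) the bounding function):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
```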

#3 Types of Big-O Notation

  1. O(1) - Constant time complexity
    The running time of the algorithm remains constant regardless of the input size. This is the most efficient time complexity an algorithm can have. (Several of these classes are illustrated in the sketch after this list.)

  2. O(log n) - Logarithmic time complexity
    The running time of the algorithm grows logarithmically with the input size, so it increases only slowly as the input grows. Binary search, which halves the search range at each step, is the classic example.

  3. O(n) - Linear time complexity
    The running time of the algorithm grows linearly with the input size. This means that the algorithm's running time increases at the same rate as the input size.

  4. O(n log n) - Log-linear time complexity
    The running time of the algorithm grows in proportion to n multiplied by the logarithm of n. This is typical of efficient comparison-based sorting algorithms such as merge sort and heap sort.

  5. O(n^2) - Quadratic time complexity
    The running time of the algorithm grows quadratically with the input size, so it increases much faster than the input size. This typically arises from two nested loops over the input.

  6. O(n^3) - Cubic time complexity
    The running time of the algorithm grows cubically with the input size. This is commonly found in algorithms with three levels of nested loops, such as naive matrix multiplication.

  7. O(2^n) - Exponential time complexity
    The running time of the algorithm roughly doubles with each additional input element. This is commonly found in algorithms that exhaustively solve combinatorial problems, such as enumerating every subset of a set.

  8. O(n!) - Factorial time complexity
    The running time of the algorithm grows factorially with the input size. This is the least efficient class listed here, and is commonly found in brute-force algorithms such as generating every permutation.
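
The sketch below (Python, with illustrative functions of my own) shows a recognizable code shape for several of these classes:

```python
def get_first(items):
    # O(1): a single operation, regardless of input size.
    return items[0]


def binary_search(sorted_items, target):
    # O(log n): the search range is halved at every step.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


def total(items):
    # O(n): touches every element exactly once.
    return sum(items)


def all_pairs(items):
    # O(n^2): nested loops over the same input.
    return [(a, b) for a in items for b in items]


def fib(n):
    # O(2^n): each call branches into two more recursive calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```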

#4 Tips for Identifying Time Complexity

  • Single loop
    A single loop over one collection of n elements is O(n).
    A loop that visits only part of the collection, say half of it, is O(n/2), which simplifies to O(n) because constant factors are dropped.
    Two separate (sequential) loops over two different collections give O(n + m); when both collections have the same size n, this is O(2n) -> O(n). (Both patterns appear in the sketch after this list.)

  • Nested loops
    Two nested loops over a single collection give O(n^2).
    Two nested loops over two separate collections give O(n * m), which is O(n^2) when both collections are the same size.
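
A short sketch of these loop patterns (Python, with made-up collection names):

```python
def sum_one(ns):
    # Single loop over one collection: O(n).
    total = 0
    for n in ns:
        total += n
    return total


def sum_both(ns, ms):
    # Two sequential (non-nested) loops: O(n + m).
    total = 0
    for n in ns:
        total += n
    for m in ms:
        total += m
    return total


def count_pairs(ns, ms):
    # Nested loops over two collections: O(n * m),
    # which is O(n^2) when both have the same size.
    count = 0
    for n in ns:
        for m in ms:
            count += 1
    return count
```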

#5 Time Complexity of a Few Sorting Algorithms

Name            Best Case     Average Case  Worst Case
Bubble Sort     O(n)          O(n^2)        O(n^2)
Selection Sort  O(n^2)        O(n^2)        O(n^2)
Merge Sort      O(n log n)    O(n log n)    O(n log n)
Quick Sort      O(n log n)    O(n log n)    O(n^2)
Heap Sort       O(n log n)    O(n log n)    O(n log n)
Insertion Sort  O(n)          O(n^2)        O(n^2)
Counting Sort   O(n)          O(n)          O(n)

(Counting sort's O(n) assumes the key range k is O(n); more precisely, it runs in O(n + k).)
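
To see why the best and worst cases in the table can differ, here is a minimal insertion sort sketch: on already-sorted input the inner loop exits immediately, giving the O(n) best case, while on reverse-sorted input it shifts every earlier element, giving the O(n^2) worst case.

```python
def insertion_sort(items):
    # Sorts in place. Best case O(n) on sorted input,
    # worst case O(n^2) on reverse-sorted input.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]  # shift larger elements right
            j -= 1
        items[j + 1] = key
    return items
```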

#6 Time Complexity of a Few Data Structures

The time complexity of each data structure is covered separately in its respective chapter.
