About Time Complexity
#1 Time, Space Complexity
Algorithm complexity is a measure of its performance, which can be evaluated through two metrics: time complexity and space complexity.
Time complexity refers to how the number of operations an algorithm performs grows with the size of its input, while space complexity measures the total amount of memory the algorithm uses.
In other words, time complexity assesses the speed of an algorithm, and space complexity evaluates the memory usage.
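As a small illustration (the function and data names below are made up for this example), the following sketch runs in O(n) time, since it performs one addition per element, but uses O(1) extra space, since only a single accumulator variable is allocated:

```python
def sum_list(values):
    """O(n) time: one addition per element.
    O(1) extra space: only the accumulator is allocated."""
    total = 0
    for v in values:  # runs n times for n elements
        total += v
    return total

print(sum_list([3, 1, 4, 1, 5]))  # 14
```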
#2 Asymptotic Notations
Algorithm complexity is often expressed using asymptotic notation, which includes three common notations: O (Big-O), Ω (Omega), and Θ (Theta).
Big-O Notation (O)
This notation represents the upper bound, or worst-case scenario, of an algorithm's time complexity.
Omega Notation (Ω)
This notation represents the lower bound, or best-case scenario, of an algorithm's time complexity.
Theta Notation (Θ)
This notation represents a tight bound: the running time is bounded both above and below by the same function, so the algorithm's complexity is both O and Ω of that function.
Typically, the worst-case scenario of an algorithm's performance is evaluated using Big-O notation. It provides a standardized and easy-to-read way of expressing an algorithm's time complexity, making it a widely used notation in algorithm analysis.
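For instance, consider a linear search (a minimal sketch; the function and data names are illustrative): the best case is Ω(1) when the target is the first element, while the worst case is O(n) when the target is last or absent.

```python
def linear_search(values, target):
    """Best case Ω(1): target is the first element.
    Worst case O(n): target is last or not present."""
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

data = [7, 2, 9, 4]
print(linear_search(data, 7))  # best case: found at index 0
print(linear_search(data, 5))  # worst case: scans all n elements, returns -1
```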
#3 Types of Big-O Notation
O(1) - Constant time complexity
The running time of the algorithm remains constant regardless of the input size. This is the most efficient time complexity an algorithm can have.
O(log n) - Logarithmic time complexity
The running time of the algorithm grows logarithmically with the input size, so it increases only slowly as the input grows. Binary search is the classic example.
O(n) - Linear time complexity
The running time of the algorithm grows linearly with the input size, increasing at the same rate as the input.
O(n log n) - Log-linear time complexity
The running time of the algorithm grows in proportion to n multiplied by the logarithm of n. This is typical of efficient comparison-based sorting algorithms such as merge sort and heap sort.
O(n^2) - Quadratic time complexity
The running time of the algorithm grows quadratically with the input size, so it increases much faster than the input. This usually comes from two nested loops over the same collection.
O(n^3) - Cubic time complexity
The running time of the algorithm grows cubically with the input size. This is commonly found in algorithms with three nested loops, such as naive matrix multiplication.
O(2^n) - Exponential time complexity
The running time of the algorithm doubles with each additional element of input. This is commonly found in algorithms that solve combinatorial problems by enumerating subsets.
O(n!) - Factorial time complexity
The running time of the algorithm grows factorially with the input size. This is the least efficient complexity listed here and is commonly found in brute-force algorithms that enumerate permutations.
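To make a few of these classes concrete, here is a minimal sketch (all function names are illustrative) pairing them with the code shapes they typically come from:

```python
def constant(values):  # O(1): one index access, independent of n
    return values[0]

def logarithmic(sorted_values, target):  # O(log n): binary search halves the range each step
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear(values):  # O(n): touches every element once
    return max(values)

def quadratic(values):  # O(n^2): nested loops over the same collection
    pairs = []
    for a in values:
        for b in values:
            pairs.append((a, b))
    return pairs

def exponential_subsets(values):  # O(2^n): enumerates every subset
    subsets = [[]]
    for v in values:
        subsets += [s + [v] for s in subsets]
    return subsets
```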
#4 Tips for Identifying Time Complexity
Single Loop
If you use a single loop to iterate over a single collection of n elements, the time complexity is O(n).
Likewise, iterating over more than half of a collection is still O(n), since constant factors are dropped: O(n/2) -> O(n).
Using two separate, sequential loops over two different collections gives O(n + m); when both collections have roughly the same size n, this simplifies to O(n). A minimal sketch of these cases follows.
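```python
# The collections below are made up for illustration.
names = ["a", "b", "c", "d"]
scores = [10, 20, 30]

# One loop over n elements: O(n)
for name in names:
    print(name)

# Looping over half the collection is still linear: O(n/2) -> O(n)
for name in names[: len(names) // 2]:
    print(name)

# Two separate, sequential loops over different collections: O(n + m)
for name in names:    # O(n)
    print(name)
for score in scores:  # O(m)
    print(score)
```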
Nested for loop
If you use two nested loops to iterate over a single collection, the time complexity is O(n^2).
Similarly, iterating over two separate collections with two nested loops gives O(n * m); when both collections have roughly the same size n, this simplifies to O(n^2). A sketch follows.
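```python
# Again, the collections are made up for illustration.
names = ["a", "b", "c"]
scores = [10, 20]

# Two nested loops over the same collection: O(n^2)
for x in names:
    for y in names:
        print(x, y)

# Two nested loops over two different collections: O(n * m)
for name in names:
    for score in scores:
        print(name, score)
```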
#5 Time Complexity of a Few Sorting Algorithms
| Name | Best Case | Average Case | Worst Case |
| --- | --- | --- | --- |
| Bubble Sort | O(n) | O(n^2) | O(n^2) |
| Selection Sort | O(n^2) | O(n^2) | O(n^2) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) |
| Quick Sort | O(n log n) | O(n log n) | O(n^2) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) |
| Insertion Sort | O(n) | O(n^2) | O(n^2) |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) |

(For Counting Sort, k is the range of the input values.)
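As one concrete entry from the table, below is a minimal merge sort sketch (illustrative, not a production implementation). Its O(n log n) bound holds in every case because the list is always split in half, giving log n levels of recursion, and each level does O(n) merging work:

```python
def merge_sort(values):
    """O(n log n) in best, average, and worst case."""
    if len(values) <= 1:  # base case: already sorted
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])   # halving gives log n levels
    right = merge_sort(values[mid:])
    # Merge the two sorted halves: O(n) work per level
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```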
#6 Time Complexity of a Few Data Structures
The time complexity of each data structure is covered separately in its respective chapter.