IMPORTANT:
Before problem solving: know what you need to know.
To solve questions about time complexity, you'll need a good understanding of the common sorting algorithms and their time complexity characteristics. Here are the main topics and subtopics you should study; they will be covered in this series:
Main Topics:
a. Basics of Sorting Algorithms
b. Time Complexity Analysis
c. Best-case, Average-case, and Worst-case Complexity
Bubble Sort Algorithm
Insertion Sort Algorithm
Selection Sort Algorithm
Quick Sort Algorithm
Counting Sort [Advanced]
«Let’s get started»
Intro: Why Big O?
Big O notation is a way of measuring how fast or slow a computer program is. It tells you how much time or memory a program needs to solve a problem, depending on how big the problem is. For example, if you want to find the largest number in a list of numbers, you might have to look at every number in the list and compare them. This would take more time if the list is longer, right? So we can say that the time it takes to find the largest number is proportional to the length of the list. We can write this as O(n), where n is the length of the list.
Big O notation helps us compare different programs and choose the best one for a given problem. It also helps us identify the parts of a program that are slow or use too much memory and try to improve them.
What is O? What is N?
N and O are two symbols used in Big O notation, which is a way of describing how the efficiency or complexity of an algorithm changes as the size of the input grows.
N usually represents the number of input elements, such as the length of an array or the number of nodes in a graph. O stands for "order of" or "on the order of", which means "approximately equal to" or "proportional to". For example, O(n) means that the algorithm’s complexity is approximately equal to n, or proportional to n.
Basics of Sorting Algorithms: O(n^2)
"Bubbles rise to the top." : Repeatedly swapping adjacent elements until larger elements eventually "rise to the top" of the array.
Complexity Analysis of Bubble Sort:
The time complexity of bubble sort is O(n^2), where n is the number of elements in the array.
What does it mean to say O(n^2)?
If an algorithm has a time complexity of O(n^2), it means that the number of operations required to perform the algorithm on an input of size n is proportional to n^2. In other words, if you double the size of the input, the number of operations required to perform the algorithm will quadruple.
Bubble Sort Worst & Average Case: O(n^2). The time it takes to sort an array of n elements is proportional to n squared; in other words, the sorting time roughly quadruples as the number of elements in the array doubles.
Best case: O(N) occurs when the array is already sorted in ascending order. In this case, bubble sort only needs to make one pass through the array and compare each pair of elements once. No swaps are needed, and the number of comparisons is N-1, where N is the size of the array. Therefore, the best case time complexity of bubble sort is O(N).
Note that this best case requires an early-exit check: stop as soon as a full pass makes no swaps, to avoid O(N^2) time on an already-sorted array.
Bubble sort is not very efficient for sorting large arrays. However, it is a good algorithm to learn as a beginner because it is easy to understand and implement.
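The ideas above can be sketched in Python (a minimal illustration, not an optimized implementation); the swapped flag implements the early-exit check, so an already-sorted array takes a single pass:

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest unsorted element has "risen" to the end,
        # so each pass can stop one position earlier.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps in a full pass: the array is sorted (best case O(n))
    return arr
```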
Bubble sort is considered a stable sorting algorithm, which means that elements with equal values keep their relative order. For example, suppose we have an input of integers {3, 2, 2, 1}. A stable sorting algorithm sorts this input to {1, 2, 2, 3}, preserving the original order of the two elements with the value 2. An unstable sorting algorithm would produce the same sorted values, but might order the two 2s differently relative to each other.
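Stability is easiest to see when equal keys carry distinguishing labels. Python's built-in sorted() is guaranteed stable, so it can demonstrate the idea (the labels here are just illustrative):

```python
# Pairs of (value, label); we sort by value only, so the two 2s have equal keys.
data = [(3, 'a'), (2, 'b'), (2, 'c'), (1, 'd')]

# sorted() is stable: equal keys keep their original relative order,
# so (2, 'b') stays ahead of (2, 'c').
result = sorted(data, key=lambda pair: pair[0])
```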
If you see a nested loop, think that it might be O(n^2):
A nested loop is a loop inside another loop. When the inner loop executes for every iteration of the outer loop, the number of total iterations becomes the product of the number of iterations of the outer loop and the number of iterations of the inner loop.
def nested_loop(n):
    for i in range(n):
        for j in range(n):
            print(i, j)
But it is not the case if:
def nested_loop(n):
    for i in range(n):
        for j in range(3):
            print(i, j)
This code iterates through the outer loop n times, and the inner loop 3 times for each iteration of the outer loop. This means that the total number of iterations is 3n, which is O(n): "drop constants."
Practice: Analysing the time complexity of a given function.
Question 1: What is the time complexity of a function that finds the maximum element in an unsorted array?
Question 2: What is the time complexity of a function that checks if a given number is present in an unsorted array?
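The functions these questions refer to are not included in the notes; the Python versions below are plausible stand-ins (assumed, for concreteness), so you can reason about how many elements each one touches:

```python
def find_max(arr):
    # Question 1: track the largest value seen so far, scanning each element once.
    maximum = arr[0]
    for x in arr[1:]:
        if x > maximum:
            maximum = x
    return maximum

def contains(arr, target):
    # Question 2: linear search; in the worst case every element is checked.
    for x in arr:
        if x == target:
            return True
    return False
```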
How to implement bubble sort?
Revision: Access an array using a pointer to array:
To access the element at index i of an array using a pointer, you can use pointer arithmetic to calculate the memory address of the element. Here's an example code snippet that demonstrates how to access the element at index i of an array using a pointer:
#include <stdio.h>

int main() {
    int array[] = {1, 2, 3, 4, 5};
    int *ptr = array;

    // Access the element at index i using a pointer
    int i = 2; // Index of the element to access
    int value = *(ptr + i);

    printf("The element at index %d is %d\n", i, value);
    return 0;
}
Practice : Bubble sort an array using pointer to an array and print array at each swap.
Insertion sort: 🃏
Time Complexity in Insertion Sort:
The best case time complexity of insertion sort is O(n), where n is the number of elements in the array. This occurs when the array is already sorted.
The average case time complexity of insertion sort is O(n^2), where n is the number of elements in the array. This occurs when the elements in the array are in a random order.
The worst case time complexity of insertion sort is O(n^2), where n is the number of elements in the array. This occurs when the elements in the array are in reverse sorted order, or in nearly reverse sorted order.
In terms of space complexity, insertion sort requires O(1) additional space, as it sorts the array in-place without using any extra memory.
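A minimal Python sketch of insertion sort on an array: the sorted prefix grows one element per iteration, and each new element is shifted backwards into place.

```python
def insertion_sort(arr):
    # Grow a sorted prefix arr[:i]; insert arr[i] into it on each iteration.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements larger than key one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # drop key into its correct slot
    return arr
```

On an already-sorted array the inner while loop never runs, which is where the O(n) best case comes from.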
Insertion Sort Practice in Linked List:
We need to loop over the linked list and check whether a preceding node contains a number greater than the next node; if so, swap the nodes.
How to swap nodes? ("the nodes themselves, not their data")
To sort a doubly linked list using the insertion sort algorithm, we need to compare each node with the nodes that precede it, and move the node backwards until it is in the correct place. The steps are:
1. Initialize a pointer to the second node in the list.
2. Iterate through the list using this pointer, and for each node, compare it with the nodes before it.
3. If the node is smaller than its previous node, swap the two nodes using your swapNodes helper function.
4. Move the pointer to the next unsorted node.
5. Repeat steps 2-4 until the pointer reaches the end of the list.
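The steps above can be sketched in Python. The Node class and the names swap_adjacent, from_list, and to_list are illustrative, not from the original; swap_adjacent plays the role of the swapNodes helper and relinks the nodes themselves, not their data:

```python
class Node:
    """Doubly linked list node (illustrative)."""
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

def swap_adjacent(head, a, b):
    """Swap adjacent nodes a and b (where a.next is b), relinking pointers."""
    a.next = b.next
    if b.next:
        b.next.prev = a
    b.prev = a.prev
    if a.prev:
        a.prev.next = b
    b.next = a
    a.prev = b
    return b if head is a else head  # the head changes if a was the head

def insertion_sort_list(head):
    ptr = head.next if head else None  # step 1: start at the second node
    while ptr:
        nxt = ptr.next  # remember the next unsorted node before ptr moves back
        # Steps 2-3: bubble ptr backwards while it is smaller than its previous node.
        while ptr.prev and ptr.prev.data > ptr.data:
            head = swap_adjacent(head, ptr.prev, ptr)
        ptr = nxt  # step 4
    return head

def from_list(values):
    """Build a doubly linked list from a Python list (for testing)."""
    head = tail = None
    for v in values:
        node = Node(v)
        if tail:
            tail.next, node.prev, tail = node, tail, node
        else:
            head = tail = node
    return head

def to_list(head):
    """Collect node data back into a Python list (for testing)."""
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out
```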
Selection sort:
Repeatedly find the minimum element from the unsorted part of the list and swap it with the element at the beginning of the unsorted part, effectively placing it in its correct position in the sorted section of the list.
The time complexity of selection sort is O(n^2), which means that it takes quadratic time to sort an array of size n. This makes it inefficient for large arrays.
Here are some properties of selection sort:
It is a simple algorithm to understand and implement.
It is an in-place algorithm, which means that it does not require any additional memory.
It is not stable in its typical implementation: the long-distance swap can change the relative order of elements with equal values.
It is not a very efficient algorithm for sorting large arrays.
Once the loop has finished iterating, the minimum element is swapped with the element at the current index. This process is repeated until the entire array has been sorted.
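The process just described can be sketched in Python (a minimal illustration):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the minimum element in the unsorted part arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it to position i, extending the sorted section by one element.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```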
Quick Sort:
“Divide the mountain into smaller hills, each conquered with ease.”
Depends on: the Divide and Conquer technique.
Divide and Conquer is a powerful problem-solving technique used in computer science, algorithms, and mathematics. The strategy involves
breaking down a complex problem into smaller, more manageable subproblems,
solving each subproblem independently,
and then combining the solutions to obtain the final solution to the original problem.
Quick sort and merge sort both follow this technique.
How does it work?
It works by selecting a pivot element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. The process is then recursively applied to the sub-arrays until the entire array is sorted.
Note: “(any element can be chosen to be a pivot)”
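The description above can be sketched in Python. This minimal version picks the last element as the pivot (any element would work) and partitions in place; the function names are ours, not from any library:

```python
def partition(arr, low, high):
    # Partition arr[low..high] around the last element as pivot.
    pivot = arr[high]
    i = low
    for j in range(low, high):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]  # move smaller element left
            i += 1
    # Place the pivot between the two partitions.
    arr[i], arr[high] = arr[high], arr[i]
    return i

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # recurse on elements < pivot
        quick_sort(arr, p + 1, high)  # recurse on elements >= pivot
    return arr
```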
Lomuto partition scheme:
A partitioning technique used in the QuickSort algorithm to partition an array into two parts based on a chosen pivot element. It was introduced by Nico Lomuto. This partitioning scheme is simpler and easier to understand compared to other partitioning schemes like Hoare partition scheme.
The Lomuto partition scheme works as follows:
1. Choose the last element of the array as the pivot element.
2. Initialize two pointers, i and j, both pointing to the first element of the array.
3. Iterate j over the array from the beginning to the second-to-last element. If the element at index j is less than the pivot, swap the elements at indices i and j, and increment i.
4. After the loop ends, swap the pivot element (at the last index) with the element at index i.
5. The array is now partitioned into two parts: all elements to the left of i are less than the pivot, and all elements to the right of i are greater than or equal to the pivot.
6. Return the index i as the new pivot index.
Counting Sort: (Advanced)
Counting sort works well for a range of non-negative integer values. It doesn't use comparisons like other sorting algorithms (e.g., bubble sort, merge sort), which makes it faster in some cases.
The time complexity of counting sort is O(n + k), where k is the range of the input values. However, counting sort is generally only used if k isn't larger than n; in other words, if the range of input values isn't greater than the number of values to be sorted.
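A minimal Python sketch of counting sort, assuming non-negative integer input:

```python
def counting_sort(arr):
    # Assumes all values are non-negative integers.
    if not arr:
        return arr
    k = max(arr) + 1          # k = range of input values
    counts = [0] * k
    for x in arr:             # O(n): tally how many times each value occurs
        counts[x] += 1
    out = []
    for value in range(k):    # O(k): emit each value as many times as it occurred
        out.extend([value] * counts[value])
    return out                # total work: O(n + k), with no comparisons
```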
Resources:
http://www.michaelfxu.com/algorithm%20series/algorithms-with-python-pt1/
https://www.tutorialspoint.com/data_structures_algorithms/quick_sort_algorithm.htm
Go to part 2…