In previous post **“Introduction to data structures and algorithms”**, we learnt what an algorithm is and how data structures can be used in an algorithm to make the work easier.

In our day-to-day life, any task can be completed in many ways. Similarly, in computer science, any computational problem can be solved using many different solutions, in other words, using many algorithms. Since we have a one-to-many relationship between a problem and its solutions, we can easily get confused while choosing a solution for a particular problem. This is when we need to understand how to analyze algorithms and choose the one that suits our problem best.

Usually we analyze an algorithm based on its time complexity and space complexity.

**Time Complexity**

Time complexity of an algorithm is the time it takes to produce the output for a given input. The better the time complexity, the better the performance of the algorithm.

**Space complexity**

Space complexity of an algorithm is the space (memory) it consumes to produce the output for a given input. In other words, an algorithm with better space complexity requires less memory to execute.
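To make the idea concrete, here is a small illustration (not from the original post; sketched in Java, while the post's own snippets are C#). Both methods compute the sum 1 + 2 + … + n, but the first needs only a constant amount of extra memory, while the second first allocates an array whose size grows with the input:

```java
public class SpaceExample {
    // O(1) extra space: a single accumulator, regardless of n.
    static long sumConstantSpace(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    // O(n) extra space: materializes all n values before summing them.
    static long sumLinearSpace(int n) {
        long[] values = new long[n];      // memory grows with the input size
        for (int i = 0; i < n; i++) {
            values[i] = i + 1;
        }
        long sum = 0;
        for (long v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumConstantSpace(100));  // 5050
        System.out.println(sumLinearSpace(100));    // 5050
    }
}
```

Both produce the same output, so by the time-complexity measure they are similar; it is the memory requirement that separates them.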

**Types of Algorithm Analysis and Algorithmic Notations**

An algorithm can behave differently based on the size of the input given to it. And if you are building software, your algorithm should be prepared for input of any size. So it is obvious that we need to analyze the complexity for every possible case. Once the complexity of an algorithm is analyzed, there should be some way to represent it. For that we usually use different notations.

The complexity of any algorithm can be broadly analyzed using the three categories below:

- **Best Case Analysis**: Best case defines the minimum time required by an algorithm to produce the output. It is represented using Omega notation (Ω).
- **Average Case Analysis**: Average case defines the average time required by an algorithm to produce the output for different sized inputs. It is represented using Theta notation (θ).
- **Worst Case Analysis**: Worst case defines the maximum time required by an algorithm to produce the output. It is represented using Big O notation (O).

**How to Analyze Complexity?**

Before going ahead with examples explaining the way to analyze an algorithm, let’s understand how to count the number of instructions present in a particular code.

```csharp
int x = 5;
int c = x + 10;

for (int i = 0; i < 100; i++)
{
    if (i % 5 == 0)
    {
        Console.WriteLine("{0} is divisible by 5", i);
    }
    else
    {
        Console.WriteLine("{0} is not divisible by 5", i);
    }
}
```

Let us break down the code to understand it better, as follows:

- The first line, int x = 5 requires 1 instruction, that is, assigning the value 5 to x.
- The second line, int c = x + 10 requires 3 instructions. First instruction is to look up for x, second instruction is to add it with 10 and the third instruction is to assign the result to c.
- The third line, for (int i = 0; i < 100; i++), runs the assignment i = 0 once at the start, and then the comparison i < 100 and the increment i++ on every iteration. So the loop header contributes approximately 2n + 2 instructions.
- From the fourth line onward (the body of the for loop), every iteration runs a few more instructions: the check i % 5 == 0 (a mod and a comparison), and then, depending on its result, one of the print statements. So the body contributes roughly 2n (mod and comparison) + 2n (format and print) = 4n instructions. Note that this counting is not exact; we are doing it only to understand the process. For example, formatting the string might not be a single instruction, and printing to the console almost certainly involves several. But we count each as one instruction to keep things simple.

So we can represent the complexity of the preceding algorithm as *f(n) = 1 + 3 + (2n + 2) + 4n = 6n + 6*.

Also, since our main intention behind analyzing the complexity of an algorithm is to identify how it behaves when the input grows or shrinks, we can ignore the parts that never change.

Above, it is clear that the coefficient *6* and the standalone term *6* do not depend on the input, so by ignoring them we can conclude *f(n) = n*. Here we ignored all the factors that don't change based on the input and kept only those that do. This is what we call asymptotic behavior.
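The linear growth of the instruction count can also be checked empirically. Below is a small sketch in Java (the post's snippets are C#) that charges the same kind of rough, admittedly approximate costs with a counter. The exact charges per statement are a judgment call, but whatever reasonable accounting you pick, the total grows linearly with the number of iterations:

```java
// Rough instruction counter for the snippet above. The per-statement
// charges are simplified, as in the text; the point is the linear growth.
public class InstructionCount {
    static long count(int n) {
        long ops = 0;
        ops += 1;              // int x = 5 (one assignment)
        ops += 3;              // int c = x + 10 (lookup, add, assign)
        ops += 1;              // i = 0 (runs once)
        for (int i = 0; i < n; i++) {
            ops += 2;          // i < n comparison and i++ each iteration
            ops += 2;          // i % 5 and the == 0 comparison
            ops += 2;          // string format and console print, charged as two
        }
        ops += 1;              // the final failed comparison that exits the loop
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(count(100));   // 606: linear in n
        System.out.println(count(200));   // 1206: doubling n roughly doubles it
    }
}
```

Doubling the input roughly doubles the count, while the fixed setup cost stays the same, which is exactly the behavior the asymptotic simplification keeps.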

Let’s see some examples:

- *f(n) = 3n + 5* can be considered as *f(n) = n*
- *f(n) = 5* can be considered as *f(n) = 1*
- *f(n) = n² + 3n + 5* can be considered as *f(n) = n²*

In simple words we can summarize asymptotic behavior of an algorithm as follows:

- Any program that doesn't have any loop has *f(n) = 1*
- Any program with one for loop has *f(n) = n*
- Any program with one nested for loop (a for loop inside another for loop) has *f(n) = n²*
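The nested-loop rule can be verified by counting how many times the innermost body actually runs. A quick Java sketch (ours, for illustration):

```java
// For a nested loop, the body executes n * n times: hence f(n) = n^2.
public class NestedLoop {
    static long bodyRuns(int n) {
        long runs = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                runs++;            // body executes once per (i, j) pair
            }
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(bodyRuns(10));   // 100, i.e. n * n
        System.out.println(bodyRuns(20));   // 400: doubling n quadruples it
    }
}
```

Note how doubling the input quadruples the work, the signature of quadratic growth.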

**Complexity Analysis Example**

With the preceding theory, let's try to analyze the worst, best, and average case of an algorithm. Let's consider the example of searching for an element in a given array. The code will look as follows:

```csharp
public void find(int[] arr, int item)
{
    // Note: the condition must be i < arr.Length (not arr.Length - 1),
    // otherwise the last element would never be checked.
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] == item)
        {
            Console.WriteLine("Item found at index - {0}", i);
        }
    }
}
```

Let’s say the input array is [1,5,67,56,38,97,34,50,81] and we are trying to search the following numbers:

- 1: This can be found in the first iteration itself. This is an example of the best case.
- 38: Since it is somewhere in the middle of the input, we get the result in approximately *n/2* iterations (*n* is the length of the input array). This is an example of the average case.
- 81: Since it is the last item of the input array, the result comes in the nth iteration. This is an example of the worst case.
- 102: Since it is not present in the array, we know the result only after the nth iteration. This is also a worst case.
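These iteration counts can be made concrete by returning the number of comparisons the search performs. Below is a Java rendering of the C# method above (the `comparisons` name and the return value are our own additions for illustration):

```java
public class LinearSearch {
    // Returns the number of comparisons made before the search finishes,
    // whether or not the item is found.
    static int comparisons(int[] arr, int item) {
        int comps = 0;
        for (int i = 0; i < arr.length; i++) {
            comps++;
            if (arr[i] == item) {
                return comps;     // found: best case 1, worst case arr.length
            }
        }
        return comps;             // absent: always arr.length comparisons
    }

    public static void main(String[] args) {
        int[] arr = {1, 5, 67, 56, 38, 97, 34, 50, 81};
        System.out.println(comparisons(arr, 1));    // 1  (best case)
        System.out.println(comparisons(arr, 38));   // 5  (about n/2, average case)
        System.out.println(comparisons(arr, 81));   // 9  (last item, worst case)
        System.out.println(comparisons(arr, 102));  // 9  (not found, worst case)
    }
}
```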

As we can see, the number of iterations varies based on the item we are searching for. So we should consider all cases during our analysis. Since it is a best practice to be prepared for the worst, we most often analyze the worst case. Let's see the final analysis for the preceding search algorithm:

- Best case: Ω(1)
- Average case: θ(n/2)
- Worst case: O(n)

Since we don't consider constant factors during complexity analysis, we can write θ(n/2) as θ(n). But please note that the worst case still performs about twice as many iterations as the average case; the notation simply drops that constant factor.

**Summary**

So, in this post, you have learned the following concepts:

- How to analyze the complexity of an algorithm.
- How to represent complexity of an algorithm.

So, it was all about **Complexity Analysis of algorithms**, if you have any query then please comment below and let us know. If you liked this article, then please follow us on Facebook to get notification for new posts.

Happy Learning 🙂

Rahul is a Data Geek, technology enthusiast, a passionate writer, thinker with passion for computer programming. He loves to explore technology and finds ultimate joy when writing about trending technology, geek stuff and web development.