**Divide and Conquer vs Greedy Algorithms.**

Co-written by Vishwesh Meher, Siddhi Mundada, Daanish Shaikh, Tejas Murkya and Siddharth Nahar

If you’re familiar with British-Indian history, you know what “Divide and Conquer” means and it doesn’t sound so good from that perspective. But fear not, this is not a history lesson :)

**Introduction**

This blog will brief you on ways of approaching a programming problem using the algorithmic concepts of **“Divide and Conquer”** and **“Greedy”**, along with some differences between them. We will also go through some terminology and the different algorithms that fall under these approaches.

Problems exist in all shapes and forms. Big problems in particular can be challenging to solve if you don’t know how to break them into small subsets, solve those individually, and then build an optimal solution. Building a solution for a major problem by solving a set of sub-problems is the main objective of both the **Divide and Conquer** and the **Greedy** approach. How these sub-problems are solved is the difference between the two. **We assume that the reader has a basic knowledge of programming and of its underlying logic and syntax, such as control loops, conditional statements and function calls.** So! Let’s have a look at the general idea behind these approaches.

# Divide and Conquer Approach

What seems easier? To break 10 sticks together at once, or to break them one by one? The answer is obvious unless you are a person with immense brute strength. Here, breaking the 10 sticks is the big task. To achieve it, we break it down into smaller subsets, i.e., breaking the sticks one by one in order to achieve the larger goal.

Divide and conquer algorithms work in a similar manner. They break down the larger problems into sub-problems and solve them individually to attain a solution. To do this, these algorithms make use of a programming technique called **recursion**.

The process in which a function calls itself directly or indirectly is called recursion and the corresponding function is called as recursive function.
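As a minimal illustration of recursion (sketched here in Python, since the blog doesn’t fix a language), consider the classic factorial function, which calls itself on a smaller input until it hits a base case:

```python
def factorial(n: int) -> int:
    """Compute n! recursively: the function calls itself on a smaller input."""
    if n <= 1:                       # base case: stops the chain of recursive calls
        return 1
    return n * factorial(n - 1)      # recursive call on a smaller sub-problem

print(factorial(5))  # 120
```

Every recursive function follows this shape: a base case that can be answered directly, and a recursive case that reduces the problem toward it.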

This technique can be divided into the following three parts:

- **Divide:** Divide the problem into smaller sub-problems.
- **Conquer:** Solve the sub-problems by calling them recursively until solved.
- **Combine:** Combine the solutions of the sub-problems to get the final solution of the whole problem.

Let us look at an example of sorting any integer array using merge sort, a Divide and Conquer Algorithm.

## MERGE SORT

This sorting algorithm divides an input array into two halves and calls itself again to divide those halves further. This goes on until the sub-arrays reach size 1. It then compares each size-1 element with another size-1 element and merges them into one sorted array. This process is repeated until we get a fully sorted array back. The following figure shows how the algorithm works. The numbers depict the steps and the flow of action, i.e., which array is operated on and when.

Now you must be thinking: why go through such a tedious process for sorting an array when you can just use simple-to-understand methods such as insertion sort? The difference is the number of steps required to get to the final result. A concept called **time complexity** is used to determine or compare which algorithm is better than another for the same task, which here is sorting. The worst-case time complexity of merge sort is O(n log n), where ‘n’ is the number of elements in the input array.
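The Divide, Conquer and Combine steps of merge sort can be sketched as follows (a minimal Python version for illustration; production implementations usually merge in place to save memory):

```python
def merge_sort(arr):
    """Sort a list using Divide and Conquer."""
    if len(arr) <= 1:                  # an array of size 1 is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # Divide + Conquer: sort the left half
    right = merge_sort(arr[mid:])      # Divide + Conquer: sort the right half

    # Combine: merge the two sorted halves into one sorted array
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```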

Another sorting algorithm that uses Divide and Conquer paradigm is **Quick Sort.**

# Greedy Approach

To understand the Greedy approach let’s look at the game of badminton. One player has to score a point against another player to win the match. Moreover they have to win 2 out of 3 matches to win the game. Over here the bigger problem is to win the game while the smaller problem is to win a point against the opponent. You play according to the immediate situation and don’t look back on the lost points.

This means that in some cases making a decision that looks right at that moment gives the best solution. The Greedy technique is best suited for looking at the immediate situation. Greedy algorithms work in stages. In each stage, a decision is made that is good at that point, without bothering about the future. This means that some local best is chosen. It assumes that a local good selection makes for a global optimal solution.

The two basic properties of optimal Greedy algorithms are:

- **Greedy choice property:** This property says that a globally optimal solution can be obtained by making locally optimal (Greedy) choices. The choice made by a Greedy algorithm may depend on earlier choices but never on future ones. It iteratively makes one Greedy choice after another, reducing the given problem to a smaller one.
- **Optimal substructure:** A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems. That means we can solve the subproblems and build up their solutions to solve the larger problem.
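A simple way to see the Greedy choice property in action is making change with coins: at every step, take the largest coin that still fits. This sketch (hypothetical denominations; note that greedy change-making is only guaranteed optimal for canonical coin systems like this one) shows the pattern:

```python
def greedy_coin_change(amount, denominations):
    """Make change by repeatedly taking the largest coin that fits (the local best)."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:          # greedy choice: biggest coin possible, never revisited
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_coin_change(48, [1, 2, 5, 10, 20]))  # [20, 20, 5, 2, 1]
```

Each choice depends only on the amount left over from earlier choices, never on the future, which is exactly the Greedy choice property described above.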

Let’s take a sorting algorithm that uses a Greedy approach for the sake of comparison.

## SELECTION SORT

Selection sort is a sorting algorithm that, in each iteration, selects the smallest element from the unsorted part of the list and places it at the beginning of that unsorted part.

Here we set the first element as the minimum and compare it with the second element. If the second element is smaller, it becomes the new minimum. We then compare the minimum with the third element: again, if the third element is smaller, it becomes the minimum; otherwise nothing changes. The process goes on until the last element. After each iteration, the minimum is placed at the front of the unsorted part of the list, and the next iteration starts indexing from the first unsorted element. These steps are repeated until all the elements are placed at their correct positions.
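The steps above can be sketched in Python as follows; the inner loop finds the minimum of the unsorted part (the Greedy choice), and the swap places it at the front:

```python
def selection_sort(arr):
    """Sort in place: each pass greedily selects the minimum of the unsorted part."""
    for i in range(len(arr)):
        min_idx = i                        # assume the first unsorted element is the minimum
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:      # found a smaller element in the unsorted part
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]  # place the minimum at the front
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

Note the contrast with merge sort: there is no recursion here, just one locally best pick per iteration, with a worst-case time complexity of O(n²).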

Since we have an idea on how both the approaches work using the same problem statement, let us compare them from a broad perspective.

# Divide and Conquer vs Greedy approach

- The D&C approach is a solution-based technique, whereas Greedy is an optimization approach.
- D&C works efficiently on lower-complexity problems such as sorting, whereas the Greedy approach becomes effective as the complexity of the problem increases, for example, the Fractional Knapsack problem and Huffman coding compression.
- D&C uses recursion to arrive at a solution, whereas Greedy takes an iterative approach to solving the sub-problems.
- D&C uses a top-down approach, i.e., it breaks the larger problem into smaller sub-problems and then solves them to build a solution. Greedy uses a bottom-up approach, solving the sub-problems first, which leads to an optimal solution.
- The D&C approach is recursive in nature, so it is often slower than an iterative Greedy approach.
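The Fractional Knapsack problem mentioned above is a good example of Greedy shining on a harder problem: given items with values and weights and a capacity limit, always take the item with the best value-to-weight ratio first, splitting the last item if needed. A minimal sketch (item data is hypothetical, chosen for illustration):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Maximize value within the weight capacity,
    allowing fractions of items. Greedy choice: best value-per-unit-weight first."""
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)       # take the whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Because fractions are allowed, this Greedy strategy is provably optimal here; for the 0/1 variant (no splitting), Greedy can fail and other paradigms are needed.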

# Conclusion

Divide and Conquer and Greedy are both widely used algorithm paradigms that find use in many different problem statements. We cannot say that one is better than the other, since it depends entirely on the problem. We hope this blog gave you an insight into how these paradigms work!