Monday, April 13, 2020

A Few Algorithmic Paradigms


Written by:
Fagun Shadi D-53
Tejas Shantaram D-71
Shruti Vasave D-76
Vidhi Raghvani D-77 

Greedy Algorithmic Paradigm

The greedy algorithm first appeared in the combinatorial optimization literature in a 1971 article by Edmonds, but its foundation was laid in 1935 by Whitney's work on the theory of matroids. The greedy paradigm is a simple and intuitive way to solve optimization problems. Such a problem is typically solved in a sequence of steps, with a set of choices available at each step, and a greedy algorithm always takes the choice that is best locally. It is important to note that this local strategy may fail to find a globally optimal solution. Despite this, the greedy method proves to be quite powerful and works for a wide range of problems, which this blog will subsequently cover.
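As a small illustration (a minimal sketch of our own, not from any particular library), consider making change with the fewest coins, where the greedy choice is to always take the largest coin that still fits. For denominations like {25, 10, 5, 1} this happens to be globally optimal, but for {1, 3, 4} and an amount of 6 it returns three coins (4 + 1 + 1) where two (3 + 3) suffice.

```python
def greedy_change(coins, amount):
    """Greedy coin change: repeatedly take the largest coin that fits.

    Locally optimal at every step, but not guaranteed to use the
    fewest coins for every denomination system.
    """
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result if amount == 0 else None  # None: no exact change possible

print(greedy_change([25, 10, 5, 1], 63))  # [25, 25, 10, 1, 1, 1] -- optimal
print(greedy_change([1, 3, 4], 6))        # [4, 1, 1] -- but [3, 3] is better
```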

In any optimization problem, time complexity plays an important role in determining the efficiency of an algorithm. Greedy algorithms are generally faster because they commit to locally optimal choices. Another important factor is memory: greedy algorithms need little extra memory because they never revise or return to earlier choices, whereas dynamic programming needs a table to store the answers to subproblems, which may later be revisited. Thus, when a greedy strategy is applicable, it is usually more efficient than dynamic programming.
A greedy algorithm always makes the selection that looks best at that moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
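To see when that hope pays off, here is a sketch of interval scheduling (activity selection), a classic case where the locally best choice, always keeping the activity that finishes earliest, provably yields a globally optimal answer. The function and variable names are ours, chosen for illustration.

```python
def max_activities(intervals):
    """Select the maximum number of non-overlapping intervals.

    Greedy rule: always keep the interval that finishes earliest
    among those compatible with what has been picked so far.
    """
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with previous picks
            selected.append((start, finish))
            last_finish = finish
    return selected

print(max_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```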

How do you decide which choice is optimal?

Assume that you have an objective function that must be optimized (either maximized or minimized). A greedy algorithm makes a greedy choice at each step to ensure that the objective function is optimized. The greedy algorithm has only one shot to compute the optimal solution: it never goes back and reverses a choice.
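A concrete objective to maximize is the total value packed into a knapsack of capacity W when items may be taken fractionally. The greedy choice of taking the highest value-to-weight ratio first is optimal for this fractional variant, and the algorithm indeed never revisits a choice. This is a hedged sketch with our own names, assuming items are (value, weight) pairs.

```python
def fractional_knapsack(items, capacity):
    """Maximize total value; items are (value, weight) and may be split.

    Greedy choice: highest value per unit of weight first. Once an
    item (or fraction of it) is packed, the decision is never revisited.
    """
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)     # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0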
There will be times when we have to make a decision that affects the state of the system, and its consequences may or may not be known to us in advance. These decisions or changes are like transformations of state variables, and the results of earlier decisions help us make the later ones.
What do we conclude from this? We need to cut the main problem into a series of overlapping sub-problems and build up solutions to larger and larger sub-problems. If you are given a problem statement that can be broken down into smaller sub-problems, and these smaller sub-problems can themselves be broken down further, and if you manage to discover some overlapping sub-problems, then you have encountered a DP problem.
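The textbook litmus test for overlapping sub-problems is the Fibonacci recurrence: a naive fib(n) recomputes fib(n-2) and everything below it many times over, even though only n distinct sub-problems exist. A minimal memoized sketch (our own, for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naively, fib(n) spawns an exponential tree of calls, yet only n
    distinct sub-problems exist -- the classic overlapping-sub-problem
    sign. Memoization caches each one so it is solved exactly once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed instantly thanks to the cache
```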

Dynamic Programming 

The core idea of Dynamic Programming is to avoid repeated work by remembering partial results, and this idea finds its application in many real-life situations.
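Revisiting the coin example from earlier (same illustrative denominations): a bottom-up DP table remembers the best answer for every smaller amount, so it finds the two-coin solution that the greedy rule missed. The sketch below is our own.

```python
def dp_min_coins(coins, amount):
    """Bottom-up DP: best[a] = fewest coins summing to a.

    The table stores each partial result once, trading memory for the
    guarantee of a globally optimal answer.
    """
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount] if best[amount] < INF else None

print(dp_min_coins([1, 3, 4], 6))  # 2, i.e. 3 + 3, beating greedy's 3 coins
```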
By contrast, designing and justifying a greedy algorithm typically proceeds in three steps (a worked instance follows this list):
1. Frame the optimization problem so that after a locally optimal choice is selected, only one sub-problem remains.
2. Show that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
3. Show that, once the greedy choice is made, combining it with an optimal solution to the remaining sub-problem yields an optimal solution to the original problem.
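For concreteness (our addition, following the standard textbook treatment), take the interval-scheduling sketch shown earlier. Step 1 frames the problem so that after picking one activity, the only remaining sub-problem is the set of activities starting after it finishes. Step 2 is the exchange argument: any optimal schedule can be rewritten to begin with the earliest-finishing activity, so that choice is safe. Step 3 observes that this greedy pick, combined with an optimal solution to the remaining activities, is optimal for the whole input.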
