Dynamic programming, one of the most difficult algorithmic paradigms, takes consistent practice to master. AlgoMonster will first outline the principles of dynamic programming (DP). Then we’ll work through an example, solving it first with plain recursion and then with DP.

**Introduction to DP**

Dynamic programming is used to optimize recursive algorithms, because naive recursion often scales exponentially. The idea behind dynamic programming is to break a complicated problem with many recursive calls into smaller subproblems, then save their results in memory so we don’t have to recalculate them each time.

Dynamic programming refers to a method of programming that lets you split a complex problem into smaller subproblems. Although this principle is similar to recursion in many ways, it has one key difference: each subproblem is solved only once.

This is why it is important to first understand recurrence relations. Each complex problem can be broken down into subproblems that are very similar, which allows us to construct a relationship between them.

**How to know if you can solve it with DP?**

It is important to know whether a given problem can be solved with dynamic programming; if we can’t tell, we won’t be in a position to enjoy the benefits of this approach. A problem must have two main properties, which we will first define and then put into action through an example.

- Overlapping subproblems: the same subproblems recur as parts of larger problems, so their results can be reused.
- Optimal substructure: an optimal solution to the problem can be composed of optimal solutions to its subproblems.

**A dynamic programming example for Fibonacci numbers**

Let’s look at a familiar example, the Fibonacci sequence. This recurrence relationship defines the Fibonacci sequence:

Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2)

Recurrence relations are equations that define a sequence recursively: the next term is a function of the terms before it. The Fibonacci sequence illustrates this well.

If we want to find the n-th Fibonacci number, we must know the numbers that precede it.

We make duplicate recursive calls every time we calculate another element of the Fibonacci sequence. As you can see in the image, this is what happens when Fibonacci(5) is calculated.

F(4) and F(3) are prerequisites for F(5). To calculate F(4), however, we must again calculate F(3) and F(2); F(3) in turn requires F(2) and F(1).

This results in repeated calculations that are redundant and slow down the algorithm. Dynamic Programming is our solution to this problem.
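The duplication is easy to see in a direct recursive implementation. This is a minimal sketch; the optional call counter is added here purely to expose the repeated work:

```python
def fib(n, calls=None):
    """Naive recursive Fibonacci; recomputes the same subproblems repeatedly."""
    if calls is not None:
        calls[n] = calls.get(n, 0) + 1   # record how often each F(n) is requested
    if n <= 2:
        return 1                         # base cases: F(1) = F(2) = 1
    return fib(n - 1, calls) + fib(n - 2, calls)

calls = {}
print(fib(5, calls))  # 5
print(calls)          # F(3) is computed twice, F(2) three times
```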

This approach models the solution as though it were solved recursively, but builds it from the bottom up, noting the steps that lead to the top.

For the Fibonacci sequence we first solve and record F(1) and F(2), then use those two memorized values to calculate F(3), and so on. Because the two preceding elements are always known, calculating each element in the sequence takes O(1).
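A bottom-up sketch of that idea (function name is mine) keeps only the last two values, so each element costs O(1) to produce:

```python
def fib_dp(n):
    """Bottom-up Fibonacci: build from F(1), F(2) upward instead of recursing down."""
    if n <= 2:
        return 1
    prev, curr = 1, 1               # F(1) and F(2), the memorized starting values
    for _ in range(3, n + 1):
        prev, curr = curr, prev + curr  # each new element is O(1)
    return curr

print(fib_dp(10))  # 55
```

Unlike the naive recursion, this runs in O(n) time and O(1) extra space.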

**How dynamic programming solves problems**

Three steps to solve a dynamic programming problem:

- Find the recurrence relationship that applies to this problem
- Initialize the memory/array/matrix’s starting values
- Ensure that whenever we make a “recursive call” (access the memorized solution to a subproblem), that subproblem has already been solved

Let’s follow these guidelines and take a look at some examples that use dynamic programming.
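Applied to Fibonacci, the three steps might look like this top-down (memoized) sketch; the comments map each part of the code back to a step:

```python
def fib_memo(n, memo=None):
    # Step 2: initialize the memory with the starting values F(1) = F(2) = 1.
    if memo is None:
        memo = {1: 1, 2: 1}
    # Step 3: before recursing, check whether the subproblem is already solved.
    if n in memo:
        return memo[n]
    # Step 1: apply the recurrence relation F(n) = F(n-1) + F(n-2).
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(50))  # 12586269025 -- each subproblem is solved exactly once
```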

**Problems that are applicable to DP**

- Partition Problem
  - Find out whether a set of integers can be divided into two subsets with equal sums.
- Subset Sum Problem
  - Given a set of positive integers and a sum value, determine whether there is a subset whose sum equals that value.
- Coin Change Problem (total ways to obtain an amount from coin denominations)
  - Find the number of different ways you can make the desired change from coins in the given denominations.
- Linear equation with k variables: total solutions possible
  - Find the total number of solutions to a linear equation with k variables.
- Drunkard’s walk
  - Given a drunkard stumbling near a cliff, find the exact probability that he won’t fall off.
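As an illustration, the Coin Change counting problem from the list above can be sketched with a one-dimensional table. This assumes the common formulation with an unlimited supply of each denomination; the function name and denominations are just for the example:

```python
def count_change(coins, amount):
    """Number of ways to make `amount` from unlimited coins of the given denominations."""
    ways = [0] * (amount + 1)
    ways[0] = 1                      # one way to make 0: use no coins
    for coin in coins:               # coins in the outer loop so orderings aren't double-counted
        for total in range(coin, amount + 1):
            ways[total] += ways[total - coin]
    return ways[amount]

print(count_change([1, 2, 5], 5))  # 4 ways: 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
```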

**A simple example to show how we can tell a problem is suited to dynamic programming**

This section will focus on Rod Cutting, a classic DP problem. Let’s first get to know the problem.

**Rod Cutting problem explained**

Steve has a steel rod with a length of N inches. Steve is also given a list of prices for each rod piece of length 1 up to N inches.

Let’s say that the rod has a length of 10 and that the price list is 1, 5, 8, 9, 10, 17, 17, 20, 24, 30. The following table shows the relationship between prices and piece lengths.

| length | 1 | 2 | 3 | 4 | 5  | 6  | 7  | 8  | 9  | 10 |
|--------|---|---|---|---|----|----|----|----|----|----|
| price  | 1 | 5 | 8 | 9 | 10 | 17 | 17 | 20 | 24 | 30 |

Therefore, a rod piece of length 2 sells for 5, and a piece of length 3 sells for 8.

Let’s now look at the question itself: how should we cut the rod into smaller pieces to earn the maximum revenue? Say we want to calculate the maximum revenue for a rod of length 4. First, let’s see how many combinations of pieces a length-4 rod can be cut into, shown here.

It’s now easy to determine which combination brings Steve the most profit: the one whose pieces have the highest total value. Combination number 3 cuts the rod into two pieces of length 2, which gives us a revenue of 10:

P2 + P2 = 5 + 5 = 10

It sounds good. We have so far understood the Rod Cutting problem and what it is asking.
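The same reasoning extends to any length. Here is a bottom-up sketch following the three DP steps (function name is mine; prices are indexed from length 1, as in the table above):

```python
def rod_cutting(prices, n):
    """Maximum revenue for a rod of length n, where prices[i-1] is the price of a piece of length i."""
    revenue = [0] * (n + 1)          # starting value: a rod of length 0 earns nothing
    for length in range(1, n + 1):
        best = 0
        # Try every size for the first piece; the remainder is an already-solved subproblem.
        for cut in range(1, length + 1):
            best = max(best, prices[cut - 1] + revenue[length - cut])
        revenue[length] = best
    return revenue[n]

prices = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(rod_cutting(prices, 4))   # 10 (two pieces of length 2)
print(rod_cutting(prices, 10))  # 30 (selling the whole rod is best here)
```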

**Conclusion**

Dynamic programming can save a great deal of time in exchange for greater space complexity. However, this trade-off only goes so far.

It all depends on the type of system you are working on. If CPU time is scarce, you choose a memory-consuming solution. On the other hand, if memory is limited, you can opt for a slower solution that uses less space.

For more professional and helpful information, you can read books about dynamic programming, or, even better, take online courses. Google algo.monster to see what you can find.