Dynamic Programming: The Basics

What is dynamic programming?

Dynamic programming is an algorithmic strategy that solves a problem by remembering the answers to its subproblems, so you never have to solve the same subproblem more than once! It's a concept that I think is best understood by applying it to examples. The classic example is calculating Fibonacci numbers.

A naive approach to calculating Fibonacci numbers

Maybe the most straightforward method to calculate Fibonacci numbers is as follows: 

public static int method1(int n) {
    if (n <= 2) return 1;
    else return method1(n - 1) + method1(n - 2);
}

However, this method is slow: if T(n) is the amount of time it takes, then T(n) = T(n-1) + T(n-2) + O(1), because we pay for both recursive subproblems plus some constant time for the comparison and addition. Since T(n-1) >= T(n-2), we have T(n) > 2T(n-2) > 4T(n-4) > ... > 2^(n/2). Thus, this method takes exponential time!

Why is this method so slow? If we look at the tree of subproblems that we're calculating, we see that we are calculating the same subproblems over and over again! 
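To see the duplication concretely, here is a small sketch that counts every invocation of the naive recursion (the call-counting wrapper is my addition, not part of the original code):

```java
public class CallCounter {
    static long calls;

    // Same recursion as method1, but tallies every invocation.
    static int fib(int n) {
        calls++;
        if (n <= 2) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 30; n += 10) {
            calls = 0;
            fib(n);
            System.out.println("fib(" + n + ") made " + calls + " calls");
        }
    }
}
```

The call count satisfies C(n) = C(n-1) + C(n-2) + 1, which works out to 2*fib(n) - 1, so it grows exactly as fast as the Fibonacci numbers themselves.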


[Figure: Fibonacci tree with color-coded subproblems]
How can we do better?

Memoization (top down)

One type of dynamic programming is called memoization. In this top-down approach, we store the answer to each subproblem the first time we solve it; every later call just looks the answer up instead of recomputing it.
Using this approach to solve our Fibonacci example, we don't have to calculate the same subproblem more than once, so our tree becomes a lot smaller!
Written in code, this solution looks like...

static int[] memo; // must be sized at least n+1 and filled with -1 before the first call

public static int method2(int n) {
    if (memo[n] != -1) return memo[n];
    else {
        if (n <= 2) return 1;
        else {
            memo[n] = method2(n - 1) + method2(n - 2);
            return memo[n];
        }
    }
}

This method is a lot faster! Each of the n values is computed at most once, so there are only O(n) non-memoized calls, and each one does a constant amount of work outside its recursive calls, so the overall time is linear.
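One detail the snippet leaves implicit is the setup of the memo array: it has to be allocated and filled with a sentinel value before the first call. A minimal driver might look like this (the Arrays.fill setup is my assumption about how the array gets initialized):

```java
import java.util.Arrays;

public class MemoDemo {
    static int[] memo;

    public static int method2(int n) {
        if (memo[n] != -1) return memo[n];
        if (n <= 2) return 1;
        memo[n] = method2(n - 1) + method2(n - 2);
        return memo[n];
    }

    public static void main(String[] args) {
        int n = 30;
        memo = new int[n + 1];          // indices 0..n
        Arrays.fill(memo, -1);          // -1 marks "not yet computed"
        System.out.println(method2(n)); // fib(30) = 832040
    }
}
```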

Tabulation (bottom up)

Another type of dynamic programming is tabulation. This bottom up approach is perhaps even more straightforward than memoization, because we build up from the base cases until we calculate our final answer. Here is the code:

static int[] tab; // must be sized at least n+1 before the call

public static int method3(int n) {
    for (int i = 1; i <= n; i++) {
        if (i <= 2) tab[i] = 1;
        else tab[i] = tab[i - 1] + tab[i - 2];
    }
    return tab[n];
}
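As with the memoized version, the table needs to be allocated before the call; a quick sketch of the setup (the size n+1 leaves index 0 unused, which matches the 1-based loop above):

```java
public class TabDemo {
    static int[] tab;

    public static int method3(int n) {
        for (int i = 1; i <= n; i++) {
            if (i <= 2) tab[i] = 1;
            else tab[i] = tab[i - 1] + tab[i - 2];
        }
        return tab[n];
    }

    public static void main(String[] args) {
        int n = 30;
        tab = new int[n + 1];           // entries 1..n hold Fibonacci values
        System.out.println(method3(n)); // 832040
    }
}
```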

We can actually do better and save space if we keep only the previous two values instead of the whole table, since each Fibonacci number depends only on the two numbers before it:

public static int better_method3(int n) {
    if (n <= 2) return 1;

    int prev = 1, curr = 1, next;
    for (int i = 3; i <= n; i++) {
        next = prev + curr;
        prev = curr;
        curr = next;
    }

    return curr;
}
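One caveat worth flagging: a Java int overflows at fib(47) (fib(47) = 2,971,215,073, which exceeds Integer.MAX_VALUE), so computing the 50th Fibonacci number with any of the int-based methods silently wraps around. A long-based variant of the same two-variable loop, as a sketch, sidesteps this up to fib(92):

```java
public class LongFib {
    // Same rolling two-variable loop, but with long so fib(50) doesn't overflow.
    public static long fib(int n) {
        if (n <= 2) return 1;
        long prev = 1, curr = 1;
        for (int i = 3; i <= n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025
    }
}
```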

Putting it together

If we time all the methods, we can see the power of dynamic programming. If we want to calculate the 50th Fibonacci number, the naive approach took around 37.97 seconds. All the other approaches took much less than that - around 0.000016133, 0.000008461, and 0.000005634 seconds! (I got these numbers by running each of the methods 5 times, and taking the average.)
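A minimal sketch of how such timings can be taken with System.nanoTime, averaging over several runs as described above (the harness below is my own illustration, not the exact code used for the numbers):

```java
public class TimeIt {
    // Runs a task several times and returns the mean wall-clock time in seconds.
    static double averageSeconds(Runnable task, int runs) {
        long total = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();
            total += System.nanoTime() - start;
        }
        return total / (double) runs / 1e9;
    }

    public static void main(String[] args) {
        // Time the space-saving bottom-up loop for n = 50 over 5 runs.
        double t = averageSeconds(() -> {
            long prev = 1, curr = 1;
            for (int i = 3; i <= 50; i++) {
                long next = prev + curr;
                prev = curr;
                curr = next;
            }
        }, 5);
        System.out.println("avg seconds: " + t);
    }
}
```

For micro-benchmarks this small, JIT warm-up can skew the first runs, so the exact numbers will vary from machine to machine.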

All code on my Github - try running it yourself! 

Source(s): MIT OCW
