Posts

Showing posts from November, 2015

Dynamic Programming: The Knapsack Problem!

One fun problem is the Knapsack problem. In this post, I'm going to talk about the following version:

The problem: Given a set of items with specific values and integer weights, and a knapsack of capacity C, find the maximum value of a subset of items that we can fit into the knapsack.

The approach: Let's call K(i, j) the max value of a subset of items 1 through i that we can fit into a knapsack of capacity j. We can reduce this problem into subproblems in several ways...
1) If i = 0, then we don't have any items, so the max value will be 0.
2) If j = 0, then we can't put any items into our knapsack because it has zero capacity, so the max value will be 0.
3) If the weight of the ith item is bigger than our capacity j, then we can just chuck the ith item, since we can't put it into our knapsack at all, and work with the first i-1 items. Then K(i, j) is reduced to K(i-1, j).
4) If we can fit the ith item into our knapsack (in other words, its weight is at most j), we take the better of two options: leave it out, which gives K(i-1, j), or put it in, which gives its value plus K(i-1, j - weight_i). So K(i, j) = max(K(i-1, j), value_i + K(i-1, j - weight_i)).
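The recurrence above can be filled in bottom-up. Here's a minimal sketch (the class and parameter names are my own; values and weights are assumed to be parallel arrays):

```java
public class Knapsack {
    // Computes the max value K(n, capacity) using the four cases above.
    public static int knapsack(int[] values, int[] weights, int capacity) {
        int n = values.length;
        // K[i][j] = max value using items 1..i with capacity j.
        // Row 0 and column 0 stay 0, covering cases 1 and 2.
        int[][] K = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int j = 0; j <= capacity; j++) {
                if (weights[i - 1] > j) {
                    K[i][j] = K[i - 1][j];  // case 3: item i doesn't fit
                } else {
                    // case 4: take the better of skipping or taking item i
                    K[i][j] = Math.max(K[i - 1][j],
                                       values[i - 1] + K[i - 1][j - weights[i - 1]]);
                }
            }
        }
        return K[n][capacity];
    }
}
```

For example, with values {60, 100, 120}, weights {10, 20, 30}, and capacity 50, the best choice is the second and third items, for a value of 220.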

Dynamic Programming: Longest Increasing Subsequence

In this post, I'm going to talk about a short but interesting example where dynamic programming is useful.

The problem: Find the length of the longest strictly increasing subsequence of an array. (Note: Subsequences, unlike substrings, do not have to be contiguous! Also, multiple subsequences may have the max length.)

For example, in { 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15 }, the length of the LIS is 6, and three of the possible LIS's are { 0, 2, 6, 9, 11, 15 }, { 0, 4, 6, 9, 11, 15 }, and { 0, 4, 6, 9, 13, 15 }.

The approach: Let's call L(i) the length of the longest increasing subsequence that ends at index i. To compute L(i), we consider all the elements at indices 0 through i-1: L(i) equals 1 plus the maximum L(j), where j is between 0 and i-1 and the jth element is smaller than the ith element (or just 1 if no such j exists). The answer is the maximum L(i) over all i.

This algorithm is O(n^2). My code for the dynamic programming approach to this problem begins below...

static int[] tab;

public static int lis(int[] a) {
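A complete version of the truncated lis method might look like the following minimal sketch. It computes the L(i) table bottom-up rather than via the memoized tab array the snippet hints at; the approach is otherwise the same recurrence:

```java
public class Lis {
    // Returns the length of the longest strictly increasing subsequence of a.
    public static int lis(int[] a) {
        if (a.length == 0) return 0;
        int[] L = new int[a.length];  // L[i] = length of LIS ending at index i
        int best = 0;
        for (int i = 0; i < a.length; i++) {
            L[i] = 1;  // the ith element by itself
            for (int j = 0; j < i; j++) {
                // Extend the best increasing subsequence ending at a smaller element.
                if (a[j] < a[i] && L[j] + 1 > L[i]) {
                    L[i] = L[j] + 1;
                }
            }
            best = Math.max(best, L[i]);  // the answer is the max over all i
        }
        return best;
    }
}
```

On the example array above, this returns 6, as expected.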

Dynamic Programming with Strings

In this post, I'm going to discuss the "Edit Distance" problem, along with a specific case of it - the "Longest Common Subsequence" problem.

Edit Distance
The problem: Given two strings "a" and "b," find the smallest number of edits we need to transform "a" into "b." The possible types of edits are "insertion," "deletion," and "replacement." Assume all the edits have the same cost of 1.

The approach: We can break this problem down into prefix or suffix subproblems. (It doesn't matter which one we pick - the approaches are equivalent.) If we think in terms of prefixes, we can call E(i, j) the minimum cost of transforming the first i characters of "a" into the first j characters of "b." There are four ways we can reduce E(i, j) into smaller subproblems...
1) If the ith character of "a" and the jth character of "b" are already equal, then no edit is needed: E(i, j) = E(i-1, j-1).
2) We can insert the jth character of "b," giving E(i, j) = 1 + E(i, j-1).
3) We can delete the ith character of "a," giving E(i, j) = 1 + E(i-1, j).
4) We can replace the ith character of "a" with the jth character of "b," giving E(i, j) = 1 + E(i-1, j-1).
When the characters differ, E(i, j) is the minimum of options 2, 3, and 4.
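The prefix recurrence above can be sketched as a bottom-up table. This is a minimal version with names of my own choosing; the base cases (transforming a prefix to or from the empty string) just count insertions or deletions:

```java
public class EditDistance {
    // Returns the minimum number of insertions, deletions, and replacements
    // needed to transform a into b, each with cost 1.
    public static int editDistance(String a, String b) {
        int[][] E = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) E[i][0] = i;  // delete all i chars
        for (int j = 0; j <= b.length(); j++) E[0][j] = j;  // insert all j chars
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    E[i][j] = E[i - 1][j - 1];  // case 1: characters already equal
                } else {
                    // cases 2-4: insert, delete, or replace, each costing 1
                    E[i][j] = 1 + Math.min(E[i][j - 1],
                                  Math.min(E[i - 1][j], E[i - 1][j - 1]));
                }
            }
        }
        return E[a.length()][b.length()];
    }
}
```

For example, transforming "kitten" into "sitting" takes 3 edits (two replacements and one insertion).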

Dynamic Programming: The Basics

What is dynamic programming? It's an algorithmic strategy for solving a problem by remembering the answers to its subproblems, so you don't have to solve any of them more than once! It's a concept that I think is best understood by applying it to examples. The classic example is calculating Fibonacci numbers.

A naive approach to calculating Fibonacci numbers
Maybe the most straightforward method to calculate Fibonacci numbers is as follows:

public static int method1(int n) {
    if(n <= 2) return 1;
    else return method1(n-1) + method1(n-2);
}

However, this method is slow: if T(n) is the amount of time it takes, then T(n) = T(n-1) + T(n-2) + O(1), because we add the time it takes to calculate each subproblem as well as some constant time for comparisons, etc. Since T(n-1) >= T(n-2), we get T(n) > 2T(n-2), and unrolling that inequality n/2 times gives T(n) > 2^(n/2). Thus, this method takes exponential time!

Why is this method so slow? If we look at the tree of subproblems that we're calculating, we see that the same subproblems appear over and over again...
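The fix is to remember each answer the first time we compute it. Here's a minimal memoized sketch of method1 (the memo array and the method2 wrapper are my own naming; I've switched the return type to long so larger Fibonacci numbers don't overflow an int):

```java
public class Fib {
    static long[] memo;  // memo[n] holds fib(n) once computed; 0 means "not yet"

    static long fib(int n) {
        if (n <= 2) return 1;
        if (memo[n] != 0) return memo[n];   // subproblem already solved: reuse it
        memo[n] = fib(n - 1) + fib(n - 2);  // solve once and remember the answer
        return memo[n];
    }

    public static long method2(int n) {
        memo = new long[n + 1];  // fresh table sized for this call
        return fib(n);
    }
}
```

Each of the n subproblems is now solved at most once, so the running time drops from exponential to linear. For example, method2(10) returns 55.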