Posts

Showing posts from 2015

Happy Holidays!

Just got back home from my first term at college! This fall, I took 6.01 (Introduction to EECS), 6.042 (Math for Computer Science), 8.02 (Physics II) and 21W.747 (Rhetoric). (Unfortunately, 8.02 met 9-11 am three days a week, which required a lot of coffee for me...) In addition to continuing squash, I joined a couple of clubs, SBC (Sloan Business Club, mitsbc.mit.edu) and Code for Good (codeforgood.mit.edu). They're both a fun way to learn/volunteer and to meet some pretty inspiring upperclassmen! Finally, I just started coding solutions to the puzzles on adventofcode.com yesterday, and am on day 16 now. The advent calendar is currently on day 21, so I have a lot of catching up to do! I'm pushing my code to GitHub here: https://github.com/lizziew/playground/tree/master/AdventOfCode. I'll be back at MIT pretty soon (Jan 4), but in the meantime, Happy Holidays (and watch the new Star Wars movie)! Update: Merry xmas! Advent of Code is officially over!

Dynamic Programming: The Knapsack Problem!

One fun problem is the Knapsack problem. In this post, I'm going to talk about the following version:

The problem: Given a set of items with specific values and integer weights, and a knapsack of capacity C, find the maximum value of the subset of items we can fit into the knapsack.

The approach: Let's call K(i, j) the max value of the subset of items from 1 through i that we can fit into a knapsack of capacity j. We can reduce this problem into subproblems in several ways...

1) If i = 0, then we don't have any items, so the max value will be 0.

2) If j = 0, then we can't put any items into our knapsack because it has zero capacity. Then, the max value will be 0.

3) If the weight of the ith item is bigger than our capacity j, then we can just chuck our ith item since we can't put it into our knapsack at all. We can just work with the first i-1 items. Then K(i, j) is reduced to K(i-1, j).

4) If we can fit our ith item into our knapsack (in other words, its weight is at most j), then we take the better of two options: skip the item, which gives K(i-1, j), or include it, which gives K(i-1, j - w_i) + v_i, where w_i and v_i are the ith item's weight and value.
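A bottom-up Java sketch of this recurrence (the class and variable names here are my own, not from the original post; the fourth case takes the max of skipping or including the ith item):

```java
public class Knapsack {
    // table[i][j] = max value using items 1..i with a knapsack of capacity j
    public static int knapsack(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        int[][] table = new int[n + 1][capacity + 1]; // row 0 and column 0 stay 0

        for (int i = 1; i <= n; i++) {
            for (int j = 0; j <= capacity; j++) {
                if (weights[i - 1] > j) {
                    table[i][j] = table[i - 1][j]; // item i can't fit: chuck it
                } else {
                    // max of skipping item i vs. including it
                    table[i][j] = Math.max(table[i - 1][j],
                        table[i - 1][j - weights[i - 1]] + values[i - 1]);
                }
            }
        }
        return table[n][capacity];
    }

    public static void main(String[] args) {
        int[] weights = {2, 3, 4, 5};
        int[] values  = {3, 4, 5, 6};
        // Best subset with capacity 5 is items with weights 2 and 3: value 3 + 4 = 7
        System.out.println(knapsack(weights, values, 5)); // prints 7
    }
}
```

This runs in O(nC) time and space, one table entry per subproblem.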

Dynamic Programming: Longest Increasing Subsequence

In this post, I'm going to talk about a short but interesting example where dynamic programming is useful.

The problem: Find the length of the longest strictly increasing subsequence (LIS). (Note: Subsequences, unlike substrings, do not have to be contiguous! Also, multiple subsequences may have the max length.)

For example, in { 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15 }, the length of the LIS is 6, and three of the possible LIS's are { 0, 2, 6, 9, 11, 15 }, { 0, 4, 6, 9, 11, 15 }, and { 0, 4, 6, 9, 13, 15 }.

The approach: Let's call L(i) the length of the LIS that ends at index i. To compute L(i), we need to consider all the elements at indices 0 through i-1. Then, L(i) is equal to 1 plus the maximum L(j), where j is between 0 and i-1, and the jth element is smaller than the ith element. (If no such j exists, L(i) is just 1.) The answer is the maximum L(i) over all i.

This algorithm is O(n^2). My code for the dynamic programming approach to this problem is below...

static int[] tab;

public static int lis(int[] a) {
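The excerpt cuts off here, but a self-contained O(n^2) version consistent with this recurrence might look like the sketch below (the tab array plays the role of L; the original post's full code may differ):

```java
public class LIS {
    static int[] tab; // tab[i] = length of the LIS ending at index i

    public static int lis(int[] a) {
        if (a.length == 0) return 0;
        tab = new int[a.length];
        int best = 1;

        for (int i = 0; i < a.length; i++) {
            tab[i] = 1; // the ith element by itself
            for (int j = 0; j < i; j++) {
                // extend an increasing subsequence ending at j if a[j] < a[i]
                if (a[j] < a[i] && tab[j] + 1 > tab[i]) {
                    tab[i] = tab[j] + 1;
                }
            }
            best = Math.max(best, tab[i]);
        }
        return best;
    }

    public static void main(String[] args) {
        int[] a = {0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15};
        System.out.println(lis(a)); // prints 6
    }
}
```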

Dynamic Programming with Strings

In this post, I'm going to discuss the "Edit Distance" problem, along with a specific case of it - the "Longest Common Subsequence" problem.

Edit Distance

The problem: Given two strings "a" and "b," find the smallest number of edits we need to transform "a" into "b." The possible types of edits are "insertion," "deletion," and "replacement." Assume all the edits have the same cost of 1.

The approach: We can break this problem down into prefix or suffix subproblems. (It doesn't matter which one we pick - the approaches are equivalent.) If we think in terms of prefixes, we can call E(i, j) the minimum cost of transforming the first i characters of "a" into the first j characters of "b." There are four ways we can reduce E(i, j) into smaller subproblems...

1) If the ith character of "a" and the jth character of "b" are already equal, then no edit is needed for them: E(i, j) = E(i-1, j-1).

2) We can replace the ith character of "a" with the jth character of "b": E(i, j) = E(i-1, j-1) + 1.

3) We can delete the ith character of "a": E(i, j) = E(i-1, j) + 1.

4) We can insert the jth character of "b": E(i, j) = E(i, j-1) + 1.

E(i, j) is the minimum over the applicable cases, with base cases E(i, 0) = i and E(0, j) = j.
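Here's an illustrative Java sketch of the E(i, j) table built bottom-up (my own implementation, not the post's code):

```java
public class EditDistance {
    // e[i][j] = min cost to transform the first i chars of a into the first j chars of b
    public static int editDistance(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] e = new int[m + 1][n + 1];

        for (int i = 0; i <= m; i++) e[i][0] = i; // delete all i characters
        for (int j = 0; j <= n; j++) e[0][j] = j; // insert all j characters

        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    e[i][j] = e[i - 1][j - 1]; // characters already equal: no edit
                } else {
                    e[i][j] = 1 + Math.min(e[i - 1][j - 1],       // replacement
                               Math.min(e[i - 1][j],              // deletion
                                        e[i][j - 1]));            // insertion
                }
            }
        }
        return e[m][n];
    }

    public static void main(String[] args) {
        System.out.println(editDistance("kitten", "sitting")); // prints 3
    }
}
```

Each of the m*n table entries takes constant time, so this is O(mn) overall.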

Dynamic Programming: The Basics

What is dynamic programming? Use this algorithmic strategy to solve a problem by remembering its subproblems so you don't have to solve them more than once! It's a concept that I think is best understood by applying it to examples. The classic example is calculating Fibonacci numbers.

A naive approach to calculating Fibonacci numbers

Maybe the most straightforward method to calculate Fibonacci numbers is as follows:

public static int method1(int n) {
  if(n <= 2) return 1;
  else return method1(n-1) + method1(n-2);
}

However, this method is slow: if T(n) is the amount of time it takes, then T(n) = T(n-1) + T(n-2) + O(1), because we add the amount of time it takes to calculate each subproblem as well as some constant time for comparisons, etc. Then T(n) = T(n-1) + T(n-2) + O(1) > 2T(n-2) > 4T(n-4) > ... > 2^(n/2).

Thus, this method takes exponential time!

Why is this method so slow? If we look at the tree of subproblems that we're calculating, we see that the same subproblems show up over and over again.
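To see the fix in code, here's one way to memoize the naive method, i.e. remember each Fibonacci number the first time we compute it (the class and map names here are my own, for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static Map<Integer, Long> memo = new HashMap<>();

    // Top-down dynamic programming: each subproblem is solved at most once
    public static long fib(int n) {
        if (n <= 2) return 1;
        if (memo.containsKey(n)) return memo.get(n); // already solved: just look it up
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints 55
        System.out.println(fib(50)); // fast, thanks to memoization
    }
}
```

Each of the n subproblems is computed once, so the exponential blowup becomes O(n) time. (Using long instead of int also avoids overflow for larger n.)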

My first iOS app!

WWDC

When I went to WWDC for the first time a couple months ago, I was inspired by all the cool demos, people, labs, and apps (shoutout to Shadowmatic!) to start developing for iOS. I had already started to teach myself Swift through the Stanford course on iTunes U, but during WWDC, I decided to just dive in and build a game. One of my favorite games is Dots and Boxes: I played it obsessively with friends on the whiteboard every day in middle school, and even created an AI to play the game in high school. Naturally, I wanted to make a Dots and Boxes app. After half an hour of StackOverflow and Ray Wenderlich, I got two dots on the screen, and a line connecting them appeared when I tapped. It was exciting! By the end of WWDC, I had cobbled together a basic game called Tessature that was a variation on Dots and Boxes.

My 0th app, Tessature

After WWDC, I continued to work on Tessature. I hit a lot of obstacles: the overall structure of the project was disorganized and growing w