Intro to Dynamic Programming with the Fibonacci Sequence

Matthew Aquino
3 min read · Sep 2, 2021


In my foray into the world of Data Structures and Algorithms, I’ve come across various problem types and patterns, but one that has seemed extremely daunting is Dynamic Programming. The name alone elicits some intimidation, yet the two words themselves give no clue as to what the actual problem pattern is.

Dynamic programming is really just an optimization over plain recursion. A more fitting name would probably be Recursion Optimization, as it is more descriptive, though those two words are a bit daunting as well.

Identifying a Dynamic Programming Problem:

The two identifying markers of a dynamic programming problem are:

  1. Overlapping sub-problems
  2. Optimal sub-structure

What do the two terms above mean, though? I was confused as well, but I looked into it and found an example that demystified Dynamic Programming a little.

If you’re familiar with the Fibonacci sequence, then you know it’s a series of numbers where you get the next number by adding the two previous numbers, e.g. [1, 1, 2, 3, 5, 8, 13, ...].

Solving for the nth number in the Fibonacci sequence is a classic recursion problem and we can do so with the following code:
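A minimal sketch of that plain recursive approach, written here in TypeScript, might look like this (the function name fib is my own choice; the base case returns 1 whenever n is less than or equal to 2):

```typescript
// Plain recursive Fibonacci: returns the nth number in the sequence.
function fib(n: number): number {
  // Base case: the first two numbers in the sequence are both 1.
  if (n <= 2) return 1;
  // Every other number is the sum of the two numbers before it.
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55, returns almost instantly
console.log(fib(50)); // 12586269025, takes a very long time with plain recursion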

Try the above code for yourself. The recursive approach is very intuitive but far from optimized. As the comments note, calculating lower values of n returns almost instantaneously, but when we attempt to calculate n = 50 the program runs for quite a while. Now, why does this occur?

Let’s take a look at an expanded tree of the Fibonacci sequence.

Recursion tree for fib(7), courtesy of freeCodeCamp.org

Due to the recursive approach of our initial solution, we drill down to every leaf node, because we only return a value once n is less than or equal to 2. We effectively visit each node, so the runtime scales exponentially. We can see above how many calculations have to be done for fib(7); now imagine it for fib(50) or even larger.
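To make that exponential growth concrete, here is a small illustrative sketch (the fibCounted name and the callCount counter are mine, added purely for demonstration) that tallies how many recursive calls the plain approach makes:

```typescript
// Counts how many times the plain recursive function gets invoked.
let callCount = 0;

function fibCounted(n: number): number {
  callCount++; // tally every call, including repeated ones
  if (n <= 2) return 1;
  return fibCounted(n - 1) + fibCounted(n - 2);
}

fibCounted(7);
console.log(callCount); // 25 calls just to compute fib(7)

callCount = 0;
fibCounted(30);
console.log(callCount); // 1664079 calls for fib(30), growing exponentially with n
```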

Dynamic Programming: Memoization

Remember how I dropped the terms sub-problems and sub-structures as the way to identify DP problems? Well, here’s the payoff.

Fibonacci sequence sub-structures and sub-problems (diagram)

If you take a closer look at the original tree, fib(3) is recalculated 4 extra times, fib(4) 2 extra times, and fib(5) 1 extra time before reaching a solution for fib(7). The recursive approach, while intuitive, is very inefficient.

This is where memoization comes in. Using memoization, we pass a map along with the integer n. For every new value of n we save a new key, and its corresponding value is the calculated Fibonacci number. We also insert a check beforehand that looks up that value of n in our map, so instead of repeating calculations like in the example above, we only perform the calculation for fib(3) once and simply refer to our map every time going forward.

In this case, we will drill down the leftmost branch, saving each value as we go, and then only need to add the value of fib(5) from our map to the value of fib(6) to get fib(7).

The code would look something like the below:
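Here is a sketch of the memoized version, passing the map along with n as described above (the memo parameter name is my own choice for illustration):

```typescript
// Memoized Fibonacci: the map remembers every value we have already computed.
function fib(n: number, memo: Map<number, number> = new Map()): number {
  // Check the map first: if we have seen this n before, reuse the stored value.
  if (memo.has(n)) return memo.get(n)!;
  // Base case: the first two numbers in the sequence are both 1.
  if (n <= 2) return 1;
  // Compute the value once, save it in the map, and reuse it from then on.
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);
  return result;
}

console.log(fib(50)); // 12586269025, returned almost instantly
```

Because each value of n is computed only once and then read from the map, fib(50) now returns immediately instead of hanging.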

By making use of a map, we effectively cut both the time and space complexity of this problem down to O(n). After covering a simple problem like this and applying DP problem-solving patterns, the topic of DP became much clearer to me. DP problems come in many shapes and forms, and this was a very simple example, but I hope we can build on this foundational problem and grow our knowledge base to solve more complex problems down the line.

Thank you for joining me this week!
Please check out this video for a more in-depth guide on Dynamic Programming.
