Dynamic Programming Introduction – Lecture by Rashid Bin Muhammad, PhD.

## Dynamic Programming Algorithms

Dynamic programming is a fancy name for using the divide-and-conquer technique with a table. Compared to divide-and-conquer, dynamic programming is a more powerful and subtle design technique. Let me repeat: it is not a specific algorithm, but a meta-technique (like divide-and-conquer). This technique was developed back in the days when “programming” meant “tabular method” (as in linear programming); it does not really refer to computer programming. Here in our advanced algorithms course, we’ll likewise think of “programming” as a “tableau method” and certainly not as writing code.

Dynamic programming is a stage-wise search method suitable for optimization problems whose solutions may be viewed as the result of a sequence of decisions. The most attractive property of this strategy is that during the search for a solution it avoids full enumeration by pruning early partial decision solutions
…
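As a concrete sketch of this stage-wise, table-driven search, consider the minimum-coin-change problem (an illustrative example chosen here, not one from the lecture). Each amount is a stage, each coin denomination is a decision, and the table keeps only the best partial solution per amount, so inferior decision sequences are pruned rather than enumerated in full:

```python
def min_coins(coins, amount):
    """Fewest coins summing to `amount`, or -1 if impossible."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins for amount a
    for a in range(1, amount + 1):       # one stage per amount
        for c in coins:                  # one decision per coin denomination
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else -1

print(min_coins([1, 5, 12], 15))  # 3  (5 + 5 + 5)
```

Note that a full enumeration would examine every sequence of coins summing to 15, while the table does constant work per (amount, coin) pair.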

*Approximate Dynamic Programming*

Solving the curses of dimensionality

### The Second Edition

(c) John Wiley and Sons

Dynamic programming has often been dismissed because it suffers from “the curse of dimensionality.” In fact, there are up to three curses of dimensionality: the state space, the outcome space, and the action space.

This book brings together dynamic programming, math programming, simulation, and statistics to solve complex problems using practical techniques that scale to real-world applications. Even more so than the first edition, the second edition forms a bridge between the foundational work in reinforcement learning, which focuses on simpler problems, and the more complex, high-dimensional applications that typically arise in operations research. Our work is motivated by many industrial projects undertaken by CASTLE Lab, including freight transportation, military logistics, finance, health, and energy.

The book is written at a level
…


### Introduction

Dynamic programming (usually referred to as **DP**) is a very powerful technique for solving a particular class of problems. It demands an elegant formulation of the approach and clear thinking, while the coding itself is usually easy. The idea is simple: if you have solved a problem with a given input, save the result for future reference, so as to avoid solving the same problem again — in short, *‘Remember your Past’* 🙂. If the given problem can be broken up into smaller subproblems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, that is a big hint for DP. Also, the optimal solutions to the subproblems should contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).
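To see what “overlapping subproblems” looks like in practice, here is a small instrumented sketch (an illustration added here, not part of the original text): the naive recursive Fibonacci solves the same subproblems over and over, which is exactly the hint for DP described above.

```python
# Count how many times each subproblem fib(n) is solved by naive recursion.
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1            # record every time subproblem n is solved
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(calls[2])  # fib(2) is recomputed 34 times during a single fib(10) call
```

The counts themselves grow like Fibonacci numbers, which is why the naive recursion is exponential.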

There are two ways of doing this.

**1.)**

…

**Figure 1.** Finding the shortest path in a graph using optimal substructure; a straight line indicates a single edge; a wavy line indicates a shortest path between the two vertices it connects (among other paths, not shown, sharing the same two vertices); the bold line is the overall shortest path from start to goal.
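The optimal-substructure idea in Figure 1 — a shortest path from start to goal is a first edge plus a shortest path from the new vertex onward — can be sketched as memoized recursion. The small graph below is an invented example (vertex names and weights are assumptions, not taken from the figure), and the sketch assumes the graph is acyclic:

```python
# Shortest start-to-goal distance via optimal substructure on a small DAG.
from functools import lru_cache

graph = {                      # vertex -> {neighbor: edge weight}
    "start": {"a": 2, "b": 5},
    "a": {"b": 1, "goal": 7},
    "b": {"goal": 2},
    "goal": {},
}

@lru_cache(maxsize=None)
def shortest(v):
    """Length of the shortest path from v to 'goal'."""
    if v == "goal":
        return 0
    # Try each outgoing edge; the tail of an optimal path is itself optimal.
    return min(w + shortest(u) for u, w in graph[v].items())

print(shortest("start"))  # 5  (start -> a -> b -> goal: 2 + 1 + 2)
```

The `lru_cache` memoization ensures each vertex’s subproblem is solved once, mirroring the wavy “shortest path between two vertices” segments in the figure.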

**Dynamic programming** is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then

…

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization reduces time complexity from exponential to polynomial. For example, if we write a simple recursive solution for Fibonacci numbers, we get exponential time complexity, whereas if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.
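A minimal sketch of the optimization just described: store each Fibonacci value the first time it is computed, turning the exponential recursion into a linear-time one.

```python
def fib_memo(n, memo=None):
    """Fibonacci with memoization: each subproblem is solved at most once."""
    if memo is None:
        memo = {0: 0, 1: 1}    # base cases seed the table
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(50))  # 12586269025 -- instant, vs. billions of calls naively
```

Each value of `n` triggers at most one real computation, so the running time is linear in `n` rather than exponential.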

### Recent Articles on Dynamic Programming


**Quick Links :**

- Top 20 Dynamic Programming Interview Questions
- ‘Practice Problems’ on Dynamic Programming
- ‘Quiz’ on Dynamic Programming
