Dynamic Programming Introduction – Lecture by Rashid Bin Muhammad, PhD.

 

Dynamic Programming Algorithms

 

Dynamic programming is a fancy name for using the divide-and-conquer technique with a table. Compared to divide-and-conquer, dynamic programming is a more powerful and subtle design technique. Let me repeat: it is not a specific algorithm, but a meta-technique (like divide-and-conquer). This technique was developed back in the days when “programming” meant “tabular method” (as in linear programming). It does not really refer to computer programming. Here in our advanced algorithms course, we’ll also think of “programming” as a “tableau method” and certainly not as writing code. Dynamic programming is a stage-wise search method suitable for optimization problems whose solutions may be viewed as the result of a sequence of decisions. The most attractive property of this strategy is that during the search for a solution it avoids full enumeration by pruning early partial decision solutions that cannot possibly lead to an optimal solution. In many practical situations, this strategy hits the optimal solution in a polynomial number of decision steps. However, in the worst case, such a strategy may end up performing full enumeration.

Dynamic programming takes advantage of the duplication of subproblems and arranges to solve each subproblem only once, saving the solution (in a table or in a globally accessible place) for later use. The underlying idea of dynamic programming is: avoid calculating the same thing twice, usually by keeping a table of known results of subproblems. Unlike divide-and-conquer, which solves the subproblems top-down, dynamic programming is a bottom-up technique. The dynamic programming technique is related to divide-and-conquer in the sense that it breaks the problem down into smaller problems and solves them recursively. However, because of the somewhat different nature of dynamic programming problems, standard divide-and-conquer solutions are not usually efficient.
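To make this idea concrete, here is a minimal sketch (Fibonacci numbers are not part of the lecture; they are used purely as an illustration) contrasting a naive recursive solution, which recomputes the same subproblems over and over, with a version that keeps a table of known results.

    # Naive recursion: fib_naive(n - 2) is recomputed inside fib_naive(n - 1),
    # so the number of calls grows exponentially with n.
    def fib_naive(n):
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    # Table-based version: each subproblem is solved exactly once and its
    # result is saved in a table for later reuse, so the work is linear in n.
    def fib_table(n):
        table = [0, 1] + [None] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_naive(10), fib_table(10))  # both print 55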

Dynamic programming is among the most powerful techniques for designing algorithms for optimization problems. This is true for two reasons. Firstly, dynamic programming solutions are based on a few common elements. Secondly, dynamic programming problems are typical optimization problems, i.e., find the minimum or maximum cost solution, subject to various constraints.

In other words, this technique is used for optimization problems:

  • Find a solution to the problem with the optimal value.
  • Then perform minimization or maximization. (We’ll see examples of both in CLRS.)

 

Dynamic programming is a paradigm of algorithm design in which an optimization problem is solved by a combination of caching subproblem solutions and appealing to the “principle of optimality.”

 

There are three basic elements that characterize a dynamic programming
algorithm:

1. Substructure

Decompose the given problem into smaller (and hopefully simpler) subproblems. Express the solution of the original problem in terms of solutions for the smaller problems. Note that, unlike divide-and-conquer problems, it is not usually sufficient to consider just one decomposition, but many different ones.

2. Table-Structure

After solving the subproblems, store the answers (results) to
the subproblems in a table. This is done because (typically) subproblem
solutions are reused many times, and we do not want to repeatedly solve the same
problem over and over again.

3. Bottom-up Computation

Using the table, combine the solutions of smaller subproblems to solve larger subproblems, and eventually arrive at a solution to the complete problem. The idea of bottom-up computation is as follows:

Bottom-up means

  1. Start with the smallest subproblems.
  2. Combining their solutions, obtain the solutions to subproblems of increasing size.
  3. Continue until you arrive at the solution of the original problem (see the sketch after this list).
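As an illustration, here is a minimal bottom-up sketch of the assembly-line scheduling recurrence discussed later in these notes; the input representation (lists a1, a2 of station times, t1, t2 of transfer times, and entry/exit times e1, e2, x1, x2) is an assumption made for this sketch.

    def fastest_way(a1, a2, t1, t2, e1, e2, x1, x2):
        """Bottom-up assembly-line scheduling with two lines and n stations.
        a1[j], a2[j]   : assembly time at station j on line 1 / line 2 (0-indexed)
        t1[j], t2[j]   : time to transfer away from line 1 / line 2 after station j
        e1, e2, x1, x2 : entry and exit times for the two lines"""
        n = len(a1)
        f1 = [0] * n   # f1[j] = fastest time through station j ending on line 1
        f2 = [0] * n   # f2[j] = fastest time through station j ending on line 2
        f1[0] = e1 + a1[0]            # smallest subproblems: the first station
        f2[0] = e2 + a2[0]
        for j in range(1, n):         # combine solutions of smaller subproblems
            f1[j] = min(f1[j - 1] + a1[j], f2[j - 1] + t2[j - 1] + a1[j])
            f2[j] = min(f2[j - 1] + a2[j], f1[j - 1] + t1[j - 1] + a2[j])
        return min(f1[n - 1] + x1, f2[n - 1] + x2)   # the complete problem

    # Data in the style of the textbook instance (values assumed):
    print(fastest_way([7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7],
                      [2, 3, 1, 3, 4], [2, 1, 2, 2, 1],
                      2, 4, 3, 2))   # prints 38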

Once we have decided to attack the given problem with the dynamic programming technique, the most important step is the formulation of the problem. In other words, the most important question in designing a dynamic programming solution to a problem is how to set up the subproblem structure.

 

If I can’t apply dynamic programming to every optimization problem, then the question is: what should I look for in order to apply this technique? Well, the answer is that there are two important elements that a problem must have in order for the dynamic programming technique to be applicable (look for these!).

1. Optimal Substructure

Show that a solution to the problem consists of making a choice, which leaves one or more subproblems to solve. Now suppose that you are given this last choice leading to an optimal solution. [Students often have trouble understanding the relationship between optimal substructure and determining which choice is made in an optimal solution. One way to understand optimal substructure is to imagine that “God” tells you what the last choice made in an optimal solution was.] Given this choice, determine which subproblems arise and how to characterize the resulting space of subproblems. Show that the solutions to the subproblems used within the optimal solution must themselves be optimal (optimality principle). You usually use a cut-and-paste argument:

  • Suppose that one of the subproblem solutions is not optimal.
  • Cut it out.
  • Paste in an optimal solution.
  • Get a better solution to the original problem. This contradicts the optimality of the problem’s solution.

That was optimal substructure.

You need to ensure that you consider a wide enough range of choices and subproblems so that you get them all. [“God” is too busy to tell you what that last choice really was.] Try all the choices, solve all the subproblems resulting from each choice, and pick the choice whose solution, along with the subproblem solutions, is best.

 

We have used the “Optimality Principle” a couple of times. Now a word about this beast: the optimal solution to the problem contains within it optimal solutions to subproblems. This is sometimes called the principle of optimality.

The Principle of Optimality

Dynamic programming relies on a principle of optimality. This principle states that in an optimal sequence of decisions or choices, each subsequence must also be optimal. For example, in the matrix-chain multiplication problem, not only is the value we are interested in optimal, but all the other entries in the table also represent optimal solutions. The principle can be restated as follows: the optimal solution to a problem is a combination of optimal solutions to some of its subproblems. The difficulty in turning the principle of optimality into an algorithm is that it is not usually obvious which subproblems are relevant to the problem under consideration. A bottom-up sketch of the matrix-chain computation appears below.
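Here is a minimal sketch of the bottom-up matrix-chain computation in which every table entry m[i][j] is itself an optimal subproblem value; the dimension-list convention (matrix Ai has dimensions p[i−1] × p[i]) follows CLRS, while the function name is assumed for this sketch.

    def matrix_chain_order(p):
        """Minimum number of scalar multiplications to compute A1 A2 ... An,
        where Ai has dimensions p[i-1] x p[i]. Every entry m[i][j] is the
        optimal cost of the subchain Ai..Aj."""
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]   # 1-indexed; row/col 0 unused
        for length in range(2, n + 1):              # length of the subchain
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                              for k in range(i, j))
        return m[1][n]

    # Example: matrices of dimensions 10x100, 100x5, 5x50.
    print(matrix_chain_order([10, 100, 5, 50]))   # prints 7500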

 

Now the question is: how do we characterize the space of subproblems?

  • Keep the space as simple as possible.
  • Expand it as necessary.

As an example, consider assembly-line scheduling. In this problem, the space of subproblems was the fastest way from the factory entry through stations S1,j and S2,j; clearly, there was no need to try a more general space of subproblems. On the other hand, consider optimal binary search trees. Suppose we had tried to constrain the space of subproblems to subtrees with keys k1, k2, . . . , kj. An optimal BST would have root kr, for some 1 ≤ r ≤ j, giving subproblems k1, . . . , kr−1 and kr+1, . . . , kj. Unless we could guarantee that r = j, so that the subproblem kr+1, . . . , kj is empty, this subproblem is not of the form k1, k2, . . . , kj. Thus, we needed to allow the subproblems to vary at both ends, i.e., to allow both i and j to vary. A sketch of that two-ended subproblem space in code follows.
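Here is a minimal sketch of the optimal BST computation over that two-ended subproblem space, assuming a simplified model in which only the key probabilities p[1..n] are given (CLRS also tracks dummy-key probabilities, which are omitted here to keep the illustration short).

    def optimal_bst_cost(p):
        """Expected search cost of an optimal BST over keys k1 < ... < kn,
        where p[i-1] is the probability of searching for ki.
        Subproblems e[i][j] cover keys ki..kj, so BOTH ends i and j vary."""
        n = len(p)
        # e[i][j] = optimal expected cost for keys ki..kj; e[i][i-1] = 0 (empty).
        e = [[0.0] * (n + 2) for _ in range(n + 2)]
        w = [[0.0] * (n + 2) for _ in range(n + 2)]  # w[i][j] = p[i] + ... + p[j]
        for i in range(1, n + 1):
            w[i][i] = p[i - 1]
            for j in range(i + 1, n + 1):
                w[i][j] = w[i][j - 1] + p[j - 1]
        for length in range(1, n + 1):
            for i in range(1, n - length + 2):
                j = i + length - 1
                # Try every key kr (i <= r <= j) as the root of this subtree.
                e[i][j] = min(e[i][r - 1] + e[r + 1][j]
                              for r in range(i, j + 1)) + w[i][j]
        return e[1][n]

    # Example with assumed probabilities:
    print(optimal_bst_cost([0.25, 0.20, 0.05, 0.20, 0.30]))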

 

Optimal substructure varies across problem domains:

  1. How many subproblems are used in an optimal solution.
  2. How many choices there are in determining which subproblem(s) to use.

In the Assembly-Line Scheduling Problem: we have 1 subproblem and 2 choices (for Si,j use either S1,j−1 or S2,j−1). In the Longest Common Subsequence Problem: we have 1 subproblem but, as far as choices are concerned, either 1 choice (if xi = yj, the LCS of Xi−1 and Yj−1) or 2 choices (if xi ≠ yj, the LCS of Xi−1 and Y, and the LCS of X and Yj−1). Finally, in the case of the Optimal Binary Search Tree Problem: we have 2 subproblems (ki, . . . , kr−1 and kr+1, . . . , kj) and j − i + 1 choices for kr in ki, . . . , kj. Once we determine optimal solutions to the subproblems, we choose from among the j − i + 1 candidates for kr. The LCS choice structure is sketched in code below.
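Here is a minimal bottom-up sketch of the longest-common-subsequence length, showing the 1-choice versus 2-choice structure directly in the recurrence (function and variable names are assumed for this sketch).

    def lcs_length(X, Y):
        """Length of the longest common subsequence of X and Y.
        c[i][j] = LCS length of the prefixes X[:i] and Y[:j]."""
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    # 1 choice: extend the LCS of X_{i-1} and Y_{j-1}.
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    # 2 choices: LCS of X_{i-1} and Y, or LCS of X and Y_{j-1}.
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        return c[m][n]

    print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4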

 

Informally, the running time of a dynamic programming algorithm depends on the overall number of subproblems times the number of choices. For example, in the assembly-line scheduling problem there are Θ(n) subproblems and 2 choices for each, implying a Θ(n) running time. In the case of the longest common subsequence problem, there are Θ(mn) subproblems and at most 2 choices for each, implying a Θ(mn) running time. Finally, in the case of the optimal binary search tree problem, we have Θ(n²) subproblems and Θ(n) choices for each, implying a Θ(n³) running time.

 

Dynamic programming uses optimal substructure in a bottom-up fashion:

  • First find optimal solutions to subproblems.
  • Then choose which to use in an optimal solution to the problem.

 

When we look at greedy algorithms, we’ll see that they work in a top-down fashion:

  • First make a choice that looks best.
  • Then solve the resulting subproblem.

 

Warning!
You’ll surely make an ass out of yourself by assuming that optimal substructure applies to all optimization problems. IT DOES NOT. Let me repeat: dynamic programming is not applicable to all optimization problems. To see this point clearly, go through pages 341 − 344 of CLRS, where the authors discuss two problems that look similar: the Shortest Path Problem and the Longest Simple Path Problem. In both problems, we are given an unweighted, directed graph G = (V, E), and our job is to find a path (a sequence of connected edges) from vertex u in V to vertex v in V.

 

Subproblem Dependencies

It is easy to see that the subproblems in our examples above are independent subproblems. For example, in the assembly-line problem there is only 1 subproblem, so it is trivially independent. Similarly, in the longest common subsequence problem we again have only 1 subproblem, so it is automatically independent. On the other hand, in the optimal binary search tree problem we have two subproblems, ki, . . . , kr−1 and kr+1, . . . , kj, which are clearly independent.

 

2. Polynomially Many (Overlapping) Subproblems

An important aspect of the efficiency of dynamic programming is that the total number of distinct subproblems to be solved should be at most polynomial. Overlapping subproblems occur when a recursive algorithm revisits the same problem over and over. A good divide-and-conquer algorithm, for example the merge-sort algorithm, usually generates a brand-new problem at each stage of the recursion. Our textbook, CLRS, has a good example, matrix-chain multiplication, to depict this idea. CLRS also discusses an alternative approach, so-called memoization (sketched in code after the following list). It works as follows:

  • Store, don’t recompute.
  • Make a table indexed by subproblem.
  • When solving a subproblem:
    • Look it up in the table.
    • If the answer is there, use it.
    • Otherwise, compute the answer, then store it.
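For contrast with the bottom-up table used earlier, here is a minimal sketch of the memoized (top-down) form of the matrix-chain computation, following the same dimension-list convention; the dictionary plays the role of the table indexed by subproblem.

    def memoized_matrix_chain(p):
        """Top-down matrix-chain cost with memoization: look the subproblem up
        first, and only compute (and then store) it if it is not in the table."""
        memo = {}                      # table indexed by the subproblem (i, j)

        def cost(i, j):
            if i == j:
                return 0
            if (i, j) in memo:         # lookup: if the answer is there, use it
                return memo[(i, j)]
            best = min(cost(i, k) + cost(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))
            memo[(i, j)] = best        # otherwise compute the answer, then store it
            return best

        return cost(1, len(p) - 1)

    print(memoized_matrix_chain([10, 100, 5, 50]))  # prints 7500, as before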

In dynamic programming, we go one step further. We determine
in what order we would want to access the table, and fill it in that way.

 

Four-Step Method of CLRS

Our text suggests that the development of a dynamic programming algorithm can be broken into the following sequence of four steps (a small sketch applying them appears after this list):

  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in a bottom-up fashion.
  4. Construct an optimal solution from computed information.
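For instance, here is a minimal sketch that applies the four steps to the longest common subsequence problem from earlier, this time also constructing an actual LCS from the computed table (step 4); the names are assumed for this sketch.

    def lcs(X, Y):
        # Step 1 (structure): an LCS of X and Y either extends an LCS of the two
        #   shorter prefixes (when the last symbols match) or equals an LCS of one
        #   prefix and the other full sequence.
        # Step 2 (recursive value): c[i][j] = c[i-1][j-1] + 1 if X[i-1] == Y[j-1],
        #   else max(c[i-1][j], c[i][j-1]).
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        # Step 3: compute the values bottom-up.
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        # Step 4: construct an optimal solution from the computed information.
        out = []
        i, j = m, n
        while i > 0 and j > 0:
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1])
                i, j = i - 1, j - 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    print(lcs("ABCBDAB", "BDCABA"))  # prints "BCBA" (an LCS of length 4)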

 

 

Updated: March 18, 2010.
