
THE STEPS IN DEVELOPING A DYNAMIC PROGRAMMING ALGORITHM

1) The Set of Subinstances

The set of subinstances that need to be solved by the dynamic programming algorithm
are obtained by tracing the recursive backtracking algorithm through the tree of stack
frames starting with the given instance I.

The set consists of the initial instance I , its subinstances, their subinstances, and so
on.

Redundancy

The recursive backtracking algorithm can be sped up only when it solves the same subinstance many times.
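
To see the redundancy concretely, the following is a minimal Python sketch (my own illustration, not from the text): a naive recursive search for the cheapest path to t in a small layered graph, with a counter showing that some subinstances are solved more than once. The graph, its node names, and the function names are all assumptions made for the example.

# Naive recursive backtracking on a small layered graph: node -> list of
# (neighbor, edge weight).  The counter records how often each subinstance
# "best path from v to t" is solved.
from collections import defaultdict

calls = defaultdict(int)

def naive_best_cost(graph, v, t):
    calls[v] += 1
    if v == t:
        return 0
    return min(w + naive_best_cost(graph, u, t) for (u, w) in graph[v])

graph = {
    's': [('a', 1), ('b', 2)],
    'a': [('c', 1), ('d', 3)],
    'b': [('c', 2), ('d', 1)],
    'c': [('t', 5)],
    'd': [('t', 1)],
    't': [],
}

print(naive_best_cost(graph, 's', 't'))  # 4
print(dict(calls))  # 'c' and 'd' are each solved twice, 't' four times: redundancy
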
2) Count the Subinstances

The running time of the dynamic programming algorithm is proportional to the number of subinstances.

At this point in the design of the algorithm, the designer should count how many subinstances an instance I has as a function of the size n = |I| of the instance. For the leveled-graph example in step 8, there is one subinstance <G, vi, t> for each node vi, so there are Theta(n) of them.

If there are too many of them, then start at the very beginning, designing a new recursive backtracking algorithm with a different question for the little bird.
3) Construct a Table Indexed by Subinstances

The algorithm designer constructs a table. The table must have one entry for each subinstance and will have one dimension for each parameter used to specify a particular subinstance.

To be consistent, we will always use the letter i, and if necessary j, to index the subinstances.

Each entry in the table is used to store an optimal solution for the subinstance along
with its cost.

Often this table is split into two tables: one for the solution and one for the cost.
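
A short Python sketch of this step follows (my own illustration; only the names optSol and optCost come from the pseudocode in step 8). Here the subinstances are assumed to be specified by a single parameter i ranging over 0..n, so each table has one dimension.

# Step 3 sketch: one table entry per subinstance, split into a solution
# table and a cost table, both indexed by i.
n = 6  # number of non-base subinstances; chosen arbitrarily for illustration

optSol = [None] * (n + 1)           # optSol[i]: an optimal solution for subinstance i
optCost = [float("inf")] * (n + 1)  # optCost[i]: the cost of that solution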

4) Solution from Subsolutions

The dynamic programming algorithm for finding an optimal solution to a given instance from an
optimal solution to a subinstance is identical to that within the recursive backtracking algorithm,
except that instead of recursing to solve a subinstance, the algorithm finds its optimal solution in the
table.
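
The following small Python sketch (hypothetical names and graph, not the book's code) shows the point on a tiny layered graph with nodes 0..3, where node 3 plays the role of t: the combining logic is the same minimum over bird answers, but the recursive call is replaced by a lookup into already-filled table entries.

# Step 4 sketch: entry i is computed from entries k > i by table lookup.
edges = {0: [(1, 1), (2, 4)], 1: [(3, 2)], 2: [(3, 1)], 3: []}  # i -> [(k, weight)]
n = 3

optCost = [None] * (n + 1)
optCost[n] = 0                       # base case (step 5)
for i in range(n - 1, -1, -1):       # fill order discussed in step 6
    # Same combining step as backtracking, but optCost[k] is read from
    # the table instead of being recomputed by a recursive call.
    optCost[i] = min(w + optCost[k] for (k, w) in edges[i])

print(optCost[0])  # 3, via the path 0 -> 1 -> 3
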
5) Base Cases

The base case instances are exactly the same as with the recursive backtracking
algorithm.

The dynamic programming algorithm starts its computation by storing in the table an optimal solution for each of these and their costs.

6) The Order in Which to Fill the Table

When a friend in the recursive backtracking algorithm needs help from a friend, the
algorithm recurses, and the stack frame for the first friend waits until the stack frame
for the second friend returns.

This forms a tree of recursive stack frames, keeping track of which friends are waiting
for answers from which friends.

In contrast, in a dynamic programming algorithm, the friends solve their subinstances in an order such that nobody has to wait.

When allocating the table, be clear what subinstance each entry of the table
represents.
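
A small Python sketch of this point (the dependency table and names are assumptions for illustration): a fill order is safe exactly when every entry an iteration reads has already been written, which is what "nobody has to wait" means here. For the leveled-graph pattern, entry i reads only entries k > i, so looping i from n - 1 down to 0 is safe.

# Step 6 sketch: check whether a proposed fill order is safe.
deps = {0: [1, 2], 1: [3], 2: [3], 3: []}   # entry i reads the entries in deps[i]

def order_is_safe(order):
    filled = set()
    for i in order:
        if any(k not in filled for k in deps[i]):
            return False                     # i would have to wait for an unfilled entry
        filled.add(i)
    return True

print(order_is_safe([3, 2, 1, 0]))  # True: nobody has to wait
print(order_is_safe([0, 1, 2, 3]))  # False: entry 0 reads entries not yet filled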

7) The Final Solution

The original instance will be the last subinstance to be solved. When it is complete, the dynamic programming algorithm simply returns this answer.

8) Code

From steps 1-7, the code can always be put together using the same basic structure:
algorithm LeveledGraph (G, s, t)
  <pre-cond>: G is a weighted directed layered graph, and s and t are nodes.
  <post-cond>: optSol is a path with minimum total weight from s to t, and optCost is its weight.
begin
  % Table: optSol[i] stores an optimal path from vi to t, and optCost[i] its cost.
  table[0..n] optSol, optCost
  % Base case: The only base case is for the best path from t to t.
  % Its solution is the empty path with cost zero.
  optSol[n] = ∅
  optCost[n] = 0
  % General cases: Loop over subinstances in the table.
  for i = n - 1 to 0
    % Solve instance <G, vi, t> and fill in table entry <i>.
    % Try each possible bird answer.
    for each of the d edges <vi, vk>
      % The bird-and-friend algorithm: The bird tells us that the first edge in an
      % optimal path from vi to t is <vi, vk>. We ask the friend for an optimal path
      % from vk to t. He gives us optSol[k], which he had stored in the table. To this
      % we add the bird's edge. This gives us optSolk, which is a best path from vi to t
      % from among those paths consistent with the bird's answer.
      optSolk = <vi, vk> + optSol[k]
      optCostk = w<vi,vk> + optCost[k]
    end for
    % Having the best, optSolk, for each bird's answer k, we keep the best of these best.
    kmin = a k that minimizes optCostk
    optSol[i] = optSolkmin
    optCost[i] = optCostkmin
  end for
  return <optSol[0], optCost[0]>
end algorithm
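
For comparison, the following is one possible Python rendering of the pseudocode above (a sketch under assumptions of my own, not the book's code): nodes are numbered 0..n with node 0 playing the role of s and node n the role of t, every node other than t has at least one outgoing edge, and edges[i] lists the outgoing edges of node i as (k, weight) pairs with k > i.

def leveled_graph(edges, n):
    """Return (optSol, optCost): a minimum-weight path from node 0 to node n
    and its total weight, filling the table from entry n down to entry 0."""
    optSol = [None] * (n + 1)    # optSol[i]: an optimal path from node i to node n
    optCost = [None] * (n + 1)   # optCost[i]: its cost

    # Base case: from t to t the pseudocode stores the empty path with cost
    # zero; here a path is represented as the list of nodes visited, so [n].
    optSol[n] = [n]
    optCost[n] = 0

    # General cases: loop over subinstances in the table.
    for i in range(n - 1, -1, -1):
        best_k, best_cost = None, float("inf")
        # Try each possible bird answer: the first edge (i, k) of the path.
        for (k, w) in edges[i]:
            if w + optCost[k] < best_cost:
                best_k, best_cost = k, w + optCost[k]
        # Keep the best of these best.
        optSol[i] = [i] + optSol[best_k]
        optCost[i] = best_cost

    return optSol[0], optCost[0]

# Example use on a small layered graph.
edges = {0: [(1, 1), (2, 2)], 1: [(3, 1), (4, 3)], 2: [(3, 2), (4, 1)],
         3: [(5, 5)], 4: [(5, 1)], 5: []}
print(leveled_graph(edges, 5))  # ([0, 2, 4, 5], 4)
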
9) Running Time

It can be seen that the code loops over each subinstance and, for each, loops over each bird answer. From this, the running time is the number of subinstances in the table times the number K of answers to the bird's question. For the leveled-graph code above, this is Theta(n * d): there are n + 1 table entries, and each is filled by trying at most d edges.
