Dynamic Programming

We began our study of algorithmic techniques with greedy algorithms, which in some sense form the most natural approach to algorithm design. Faced with a new computational problem, we've seen that it's not hard to propose multiple possible greedy algorithms; the challenge is then to determine whether any of these algorithms provides a correct solution to the problem in all cases.

6.1 Weighted Interval Scheduling: A Recursive Procedure

We have seen that a particular greedy algorithm produces an optimal solution to the Interval Scheduling Problem, where the goal is to accept as large a set of nonoverlapping intervals as possible. The Weighted Interval Scheduling Problem is a strictly more general version, in which each interval has a certain value (or weight), and we want to accept a set of maximum value.

Designing a Recursive Algorithm

Since the original Interval Scheduling Problem is simply the special case in which all values are equal to 1, we know already that most greedy algorithms will not solve this problem optimally. But even the algorithm that worked before (repeatedly choosing the interval that ends earliest) is no longer optimal in this more general setting.

Indeed, no natural greedy algorithm is known for this problem, which is what motivates our switch to dynamic programming. As discussed above, we will begin our introduction to dynamic programming with a recursive type of algorithm for this problem, and then in the next section we'll move to a more iterative method that is closer to the style we use in the rest of this chapter.

We use the notation from our discussion of Interval Scheduling. We have $n$ requests labeled $1, \ldots, n$, with each request $i$ specifying a start time $s_i$ and a finish time $f_i$. Each interval $i$ now also has a value, or weight, $v_i$. Two intervals are compatible if they do not overlap. The goal of our current problem is to select a subset $S \subseteq \{1, \ldots, n\}$ of mutually compatible intervals, so as to maximize the sum of the values of the selected intervals, $\sum_{i \in S} v_i$.

Let's suppose that the requests are sorted in order of nondecreasing finish time: $f_1 \le f_2 \le \cdots \le f_n$. We'll say a request $i$ comes before a request $j$ if $i < j$. This will be the natural left-to-right order in which we'll consider intervals. To help in talking about this order, we define $p(j)$, for an interval $j$, to be the largest index $i < j$ such that intervals $i$ and $j$ are disjoint. In other words, $i$ is the rightmost interval that ends before $j$ begins. We define $p(j) = 0$ if no request $i < j$ is disjoint from $j$.
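Once the requests are sorted, each $p(j)$ can be found by binary search over the finish times, for $O(n \log n)$ preprocessing overall. The following Python sketch is not from the text; it assumes intervals arrive as (start, finish, value) tuples and treats intervals that merely touch at an endpoint as compatible.

import bisect

def preprocess(intervals):
    """Sort intervals by finish time and compute p(j) for j = 1..n (1-based)."""
    jobs = sorted(intervals, key=lambda iv: iv[1])  # nondecreasing finish time
    finishes = [iv[1] for iv in jobs]
    n = len(jobs)
    p = [0] * (n + 1)  # p[0] is unused; p[j] = 0 means "no disjoint interval"
    for j in range(1, n + 1):
        start_j = jobs[j - 1][0]
        # Largest 1-based index i < j with f_i <= s_j: the count of
        # qualifying finish times among the first j - 1 intervals.
        p[j] = bisect.bisect_right(finishes, start_j, 0, j - 1)
    return jobs, p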

Now, given an instance of the Weighted Interval Scheduling Problem, let's consider an optimal solution $O$, ignoring for now that we have no idea what it is. Here's something completely obvious that we can say about $O$: either interval $n$ (the last one) belongs to $O$, or it doesn't. Suppose we explore both sides of this dichotomy a little further. If $n \in O$, then clearly no interval indexed strictly between $p(n)$ and $n$ can belong to $O$, because by the definition of $p(n)$, we know that intervals $p(n) + 1, p(n) + 2, \ldots, n - 1$ all overlap interval $n$. Moreover, if $n \in O$, then $O$ must include an optimal solution to the problem consisting of requests $\{1, \ldots, p(n)\}$ - for if it didn't, we could replace $O$'s choice of requests from $\{1, \ldots, p(n)\}$ with a better one, with no danger of overlapping request $n$.

On the other hand, if $n \notin O$, then $O$ is simply equal to the optimal solution to the problem consisting of requests $\{1, \ldots, n - 1\}$. This is by completely analogous reasoning: we're assuming that $O$ does not include request $n$; so if it does not choose the optimal set of requests from $\{1, \ldots, n - 1\}$, we could replace it with a better one.

All this suggests that finding the optimal solution on intervals $\{1, 2, \ldots, n\}$ involves looking at the optimal solutions of smaller problems of the form $\{1, 2, \ldots, j\}$. Thus, for any value of $j$ between $1$ and $n$, let $O_j$ denote the optimal solution to the problem consisting of requests $\{1, \ldots, j\}$, and let $OPT(j)$ denote the value of this solution. (We define $OPT(0) = 0$, based on the convention that this is the optimum over an empty set of intervals.) The optimal solution we're seeking is precisely $O_n$, with value $OPT(n)$. For the optimal solution $O_j$ on $\{1, 2, \ldots, j\}$, our reasoning above (generalizing from the case in which $j = n$) says that either $j \in O_j$, in which case $OPT(j) = v_j + OPT(p(j))$, or $j \notin O_j$, in which case $OPT(j) = OPT(j - 1)$. Since these are precisely the two possible choices ($j \in O_j$ or $j \notin O_j$), we can further say that

(1)   $OPT(j) = \max(v_j + OPT(p(j)),\ OPT(j - 1))$.

And how do we decide whether $n$ belongs to the optimal solution $O_n$? This too is easy: it belongs to the optimal solution if and only if the first of the options above is at least as good as the second; in other words,

(2)   Request $j$ belongs to an optimal solution on the set $\{1, 2, \ldots, j\}$ if and only if $v_j + OPT(p(j)) \ge OPT(j - 1)$.

These facts form the first crucial component on which a dynamic programming solution is based: a recurrence equation that expresses the optimal solution (or its value) in terms of the optimal solutions to smaller subproblems.

Despite the simple reasoning that led to this point, (1) is already a significant development. It directly gives us a recursive algorithm to compute $OPT(n)$, assuming that we have already sorted the requests by finishing time and computed the values of $p(j)$ for each $j$.


Compute-Opt(j)
  If j = 0 then
    Return 0
  Else
    Return max(v_j + Compute-Opt(p(j)), Compute-Opt(j - 1))
  Endif
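As a sketch of how this pseudocode translates into a real language, here is a direct Python transcription; it assumes the jobs list and p array produced by the preprocessing sketch above, and is deliberately as naive as the pseudocode.

def compute_opt(j, jobs, p):
    """Direct transcription of Compute-Opt; exponential time as written."""
    if j == 0:
        return 0
    v_j = jobs[j - 1][2]  # value of interval j (indices are 1-based)
    return max(v_j + compute_opt(p[j], jobs, p),
               compute_opt(j - 1, jobs, p))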

The correctness of the algorithm follows directly by induction on $j$:

Compute-Opt($j$) correctly computes $OPT(j)$ for each $j = 1, 2, \ldots, n$.

Proof. By definition $OPT(0) = 0$. Now, take some $j > 0$, and suppose by way of induction that Compute-Opt($i$) correctly computes $OPT(i)$ for all $i < j$. By the induction hypothesis, we know that Compute-Opt($p(j)$) $= OPT(p(j))$ and Compute-Opt($j - 1$) $= OPT(j - 1)$; and hence from (1) it follows that Compute-Opt($j$) $= \max(v_j + OPT(p(j)),\ OPT(j - 1)) = OPT(j)$. ∎

Unfortunately, if we really implemented the algorithm as just written, it would take exponential time to run in the worst case: the tree of recursive calls can branch very widely. On instances where $p(j) = j - 2$ for every $j$, for example, each call on $j$ spawns separate calls on $j - 1$ and $j - 2$, so the total number of calls grows like the Fibonacci numbers - exponentially in $n$.
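A minimal counting sketch of this blowup, assuming the degenerate family of instances just described (where $p(j) = j - 2$ for every $j$):

def num_calls(j):
    """Calls made by Compute-Opt(j) when p(j) = j - 2 for every j.

    T(j) = 1 + T(j - 1) + T(j - 2), which grows like the Fibonacci
    numbers, i.e., exponentially in j.
    """
    if j <= 0:
        return 1
    return 1 + num_calls(j - 1) + num_calls(j - 2)

# num_calls(30) is already over four million.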

Memoizing the Recursion

In fact, though, we're not so far from having a polynomial-time algorithm. A fundamental observation, which forms the second crucial component of a dynamic programming solution, is that our recursive algorithm is really only solving $n + 1$ different subproblems: Compute-Opt($0$), Compute-Opt($1$), $\ldots$, Compute-Opt($n$). The fact that it runs in exponential time as written is simply due to the spectacular redundancy in the number of times it issues each of these calls.

How could we eliminate all this redundancy? We could store the value of Compute-Opt($j$) in a globally accessible place the first time we compute it and then simply use this precomputed value in place of all future recursive calls. This technique of saving values that have already been computed is referred to as memoization.

We implement the above strategy in the more “intelligent” procedure M-Compute-Opt. This procedure will make use of an array $M[0 \ldots n]$; $M[j]$ will start with the value “empty”, but will hold the value of Compute-Opt($j$) as soon as it is first determined. To determine $OPT(n)$, we invoke M-Compute-Opt($n$).


M-Compute-Opt(j)
  If j = 0 then
    Return 0
  Else if M[j] is not empty then
    Return M[j]
  Else
    Define M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j - 1))
    Return M[j]
  Endif
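Again as an illustrative Python sketch (same assumed data layout as the earlier snippets), with None standing in for the “empty” marker:

def m_compute_opt(j, jobs, p, M):
    """Memoized version: M[j] caches OPT(j), so each entry is computed once."""
    if j == 0:
        return 0
    if M[j] is not None:  # "M[j] is not empty" in the pseudocode
        return M[j]
    v_j = jobs[j - 1][2]
    M[j] = max(v_j + m_compute_opt(p[j], jobs, p, M),
               m_compute_opt(j - 1, jobs, p, M))
    return M[j]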

Analyzing the Memoized Version

Clearly, this looks very similar to our previous implementation of the algorithm; however, memoization has brought the running time way down.

The running time of M-Compute-Opt($n$) is $O(n)$ (assuming the input intervals are sorted by their finish times). A single call takes constant time apart from its recursive calls, and each call either returns an already-stored value immediately or fills in one previously empty entry of $M$; since $M$ has only $n + 1$ entries, the total number of calls is $O(n)$.
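To see the pieces working together, here is a hedged end-to-end sketch with made-up data; the traceback helper find_solution is my own name, and it applies condition (2) to the filled memo array to recover an actual optimal set of intervals rather than just its value.

def find_solution(j, jobs, p, M):
    """Walk back through the memo array, applying condition (2) at each j."""
    chosen = []
    while j > 0:
        v_j = jobs[j - 1][2]
        left = M[p[j]] if p[j] > 0 else 0  # OPT(p(j))
        right = M[j - 1] if j > 1 else 0   # OPT(j - 1)
        if v_j + left >= right:            # (2): j is in an optimal solution
            chosen.append(jobs[j - 1])
            j = p[j]
        else:
            j -= 1
    return chosen[::-1]

intervals = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]  # made-up instance
jobs, p = preprocess(intervals)
n = len(jobs)
M = [None] * (n + 1)
print(m_compute_opt(n, jobs, p, M))  # 11
print(find_solution(n, jobs, p, M))  # [(1, 5, 4), (5, 8, 7)]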
