
http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Greedy/greedyIntro.htm

Greedy Introduction

Greedy algorithms are simple and straightforward. They are shortsighted in their approach in the sense that they make decisions on the basis of the information at hand, without worrying about the effect these decisions may have in the future. They are easy to invent, easy to implement, and most of the time quite efficient. Greedy algorithms are used to solve optimization problems, but many problems cannot be solved correctly by the greedy approach.
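To see why the greedy approach can fail, here is a minimal sketch (not from the original article) using a hypothetical coin system {1, 3, 4}: for amount 6, greedy takes the largest coin first and uses 3 coins, while the optimal answer 3 + 3 uses only 2.

```python
def greedy_change(n, coins):
    """Greedy change-making: repeatedly take the largest coin that still fits.
    Returns the list of coins used, or None if the amount cannot be reached."""
    out = []
    for c in sorted(coins, reverse=True):
        while sum(out) + c <= n:
            out.append(c)
    return out if sum(out) == n else None

# Coin system {1, 3, 4}, amount 6:
# greedy picks 4, then 1, 1  -> 3 coins, but 3 + 3 -> 2 coins is optimal.
print(greedy_change(6, [1, 3, 4]))   # [4, 1, 1]
```

For the US coin system used below, greedy does happen to give the optimal answer; the point is that this is a property of the coin denominations, not of the greedy method itself.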

Greedy Approach

A greedy algorithm works by making the decision that seems most promising at any moment; it never reconsiders this decision, whatever situation may arise later.

As an example, consider the problem of "Making Change".

Coins available are:

  • dollars (100 cents)
  • quarters (25 cents)
  • dimes (10 cents)
  • nickels (5 cents)
  • pennies (1 cent)

Problem    Make change for a given amount using the smallest possible number of coins.

Informal Algorithm

  • Start with nothing.
  • At every stage, without passing the given amount:
    • add the largest coin available to the coins already chosen.

Formal Algorithm

Make change for n units using the least possible number of coins.

MAKE-CHANGE (n)
    C ← {100, 25, 10, 5, 1}    // constant set of coin denominations
    S ← {}                     // set that will hold the solution
    sum ← 0                    // sum of items in the solution set
    WHILE sum ≠ n
        x ← largest item in C such that sum + x ≤ n
        IF no such item THEN
            RETURN "No Solution"
        S ← S ∪ {x}
        sum ← sum + x
    RETURN S
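The pseudocode above can be sketched directly in Python. This is a minimal translation, with the one difference that a list (rather than a set) holds the solution, so repeated coins of the same denomination can coexist:

```python
def make_change(n, coins=(100, 25, 10, 5, 1)):
    """Greedy change-making: repeatedly take the largest coin
    that does not push the running sum past n."""
    solution = []
    total = 0
    while total != n:
        # largest coin x such that total + x <= n
        candidates = [c for c in coins if total + c <= n]
        if not candidates:
            return None  # "No Solution"
        x = max(candidates)
        solution.append(x)
        total += x
    return solution

print(make_change(289))  # [100, 100, 25, 25, 25, 10, 1, 1, 1, 1]
```

Because the denominations include a 1-cent coin, "No Solution" can only occur for coin systems without a unit coin (e.g. `make_change(3, coins=(25, 10, 5))`).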

Example

Make change for $2.89 (289 cents); here n = 289, and the solution contains 2 dollars, 3 quarters, 1 dime and 4 pennies. The algorithm is greedy because at every stage it chooses the largest coin without worrying about the consequences. Moreover, it never changes its mind: once a coin has been included in the solution set, it remains there.

Characteristics and Features of Problems Solved by Greedy Algorithms

To construct the solution in an optimal way, the algorithm maintains two sets: one contains chosen items and the other contains rejected items.

The greedy algorithm consists of four functions:

  1. A function that checks whether a chosen set of items provides a solution.
  2. A function that checks the feasibility of a set.
  3. The selection function, which tells which of the candidates is the most promising.
  4. An objective function, which does not appear explicitly, but gives the value of a solution.

Structure of a Greedy Algorithm

  • Initially the set of chosen items (the solution set) is empty.
  • At each step:
    • an item is added to the solution set using the selection function.
    • IF the set would no longer be feasible
      • reject the item under consideration (it is never considered again).
    • ELSE IF the set is still feasible THEN
      • add the current item.
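The structure above can be sketched as a generic loop. The names `is_solution`, `is_feasible` and `select` are illustrative stand-ins for the four functions described earlier (the objective function stays implicit, as the text notes):

```python
def greedy(candidates, is_solution, is_feasible, select):
    """Generic greedy loop over a pool of candidate items."""
    chosen = []
    remaining = list(candidates)
    while remaining and not is_solution(chosen):
        x = select(remaining)          # most promising candidate
        remaining.remove(x)
        if is_feasible(chosen + [x]):
            chosen.append(x)           # keep the item
        # otherwise the item is rejected and never considered again
    return chosen if is_solution(chosen) else None

# Change-making for 41 cents from a fixed pool of coins:
pool = [25, 25, 10, 10, 10, 5, 1, 1, 1, 1, 1]
result = greedy(pool,
                is_solution=lambda s: sum(s) == 41,
                is_feasible=lambda s: sum(s) <= 41,
                select=max)
print(result)  # [25, 10, 5, 1]
```

Note how the second 25-cent coin is selected but rejected (it would make the set infeasible) and never reconsidered, exactly as the structure above prescribes.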

Definition of Feasibility

A feasible set (of candidates) is promising if it can be extended to produce not merely a solution, but an optimal solution to the problem. In particular, the empty set is always promising, because an optimal solution always exists.

Unlike Dynamic Programming, which solves the subproblems bottom-up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each problem to a smaller one.

Greedy-Choice Property

The "greedy-choice property" and "optimal substructure" are the two ingredients in a problem that lend themselves to a greedy strategy.

The greedy-choice property says that a globally optimal solution can be arrived at by making locally optimal choices.
