Introducing Deep Reinforcement Learning

The manuscript of Deep Reinforcement Learning is available now! It makes significant improvements over Deep Reinforcement Learning: An Overview, which has received 100+ citations, extending that version, last updated more than one year ago, from 70 pages to 150 pages.

It draws a big picture of deep reinforcement learning (RL) with many details, and covers contemporary work in its historical context. It endeavours to answer the following questions: 1) Why deep? 2) What is the state of the art? and 3) What are the issues, and potential solutions? It attempts to help those who want to become more familiar with deep RL, and to serve as a reference for people interested in this fascinating area, such as professors, researchers, students, engineers, managers, and investors. Shortcomings and mistakes are inevitable; comments and criticisms are welcome.

The manuscript briefly introduces AI, machine learning, and deep learning, and provides a mini tutorial on reinforcement learning. The following figure illustrates the relationships among these concepts, together with the major topics of machine learning and AI. Deep reinforcement learning is reinforcement learning integrated with deep learning, i.e., with deep artificial neural networks. A separate blog post is dedicated to Resources for Deep Reinforcement Learning.
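
To make the reinforcement learning setting concrete, here is a minimal sketch of the agent-environment interaction loop that the mini tutorial describes; the `env` and `agent` objects and their methods are hypothetical placeholders, not an API from the manuscript:

```python
# Minimal sketch of the RL loop (hypothetical env/agent interfaces).
# At each step the agent observes a state, selects an action, and receives
# a reward; in deep RL, the agent's policy and/or value estimates are
# represented by deep neural networks.
def run_episode(env, agent):
    state = env.reset()                    # initial observation
    done, total_reward = False, 0.0
    while not done:
        action = agent.act(state)          # policy maps state -> action
        next_state, reward, done = env.step(action)
        agent.learn(state, action, reward, next_state, done)  # update from experience
        state = next_state
        total_reward += reward
    return total_reward
```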

The manuscript covers six core elements: value function, policy, reward, model, exploration vs. exploitation, and representation; six important mechanisms: attention and memory, unsupervised learning, hierarchical RL, multi-agent RL, relational RL, and learning to learn; and twelve applications: games, robotics, natural language processing (NLP), computer vision, finance, business management, healthcare, education, energy, transportation, computer systems, and science, engineering, and art.
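
As a hedged illustration of several of these core elements at once (value function, policy, reward, and exploration vs. exploitation), here is a minimal tabular Q-learning sketch; the environment interface (`reset`, `step`, `actions`) and the hyperparameter values are assumptions for illustration, not taken from the manuscript:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning over a hypothetical env with reset(), step(), and actions."""
    Q = defaultdict(float)  # value function: (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # exploration vs. exploitation: epsilon-greedy over current value estimates
            if random.random() < epsilon:
                action = random.choice(env.actions)                     # explore
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = env.step(action)
            # TD target: reward plus discounted best next value (zero at terminal states)
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```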

Deep reinforcement learning has made exceptional achievements, e.g., DQN applied to Atari games ignited this wave of deep RL, and AlphaGo (Zero) and DeepStack set landmarks for AI. Deep RL has produced many newly invented algorithms/architectures, e.g., DQN, A3C, TRPO, PPO, DDPG, Trust-PCL, GPS, UNREAL, DNC, etc. Moreover, deep RL has been enjoying abundant and diverse applications, e.g., Capture the Flag, Dota 2, StarCraft II, robotics, character animation, conversational AI, neural architecture design (AutoML), data center cooling, recommender systems, data augmentation, model compression, combinatorial optimization, program synthesis, theorem proving, medical imaging, music, and chemical retrosynthesis, and so on. A separate blog post is dedicated to Reinforcement Learning applications.

In general, RL can probably help if a problem can be regarded as, or transformed into, a sequential decision-making problem in which states, actions, and possibly rewards can be constructed; sometimes the problem may not appear to be an RL problem on the surface, as the sketch below illustrates. Roughly speaking, if a task involves some manually designed "strategy", then there is a chance for reinforcement learning to help. Creativity would push the frontiers of deep RL further with respect to core elements, important mechanisms, and applications.
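
As a sketch of such a transformation, consider a problem that does not look like RL on the surface, e.g., recommendation: the designer constructs the states, actions, and rewards. Everything below (the class, its method names, and the user simulator) is invented for illustration:

```python
# Hypothetical sketch: a recommendation task framed as sequential decision making.
# The states, actions, and rewards are design choices; user_model stands in for
# whatever simulator or logged data is actually available.
class RecommendationEnv:
    def __init__(self, user_model, catalog):
        self.user_model = user_model
        self.catalog = catalog
        self.actions = list(range(len(catalog)))  # action: index of item to recommend

    def reset(self):
        self.user = self.user_model.new_session()  # hypothetical user simulator
        return self.user.context()                 # state: user context/history features

    def step(self, action):
        clicked = self.user.react(self.catalog[action])  # simulated feedback
        reward = 1.0 if clicked else 0.0                 # reward: engagement signal
        done = self.user.session_over()
        return self.user.context(), reward, done
```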

Despite these successes, deep RL encounters many issues, such as credit assignment, sparse rewards, sample efficiency, instability, divergence, interpretability, and safety; even reproducibility is an issue.

Six research directions are proposed as both challenges and opportunities. There is already some progress in these directions, e.g., Dopamine, TStarBots, MOREL, GQN, visual reasoning, neural-symbolic learning, UPN, causal InfoGAN, and meta-gradient RL, along with the many applications above.

  1. systematic, comparative study of deep RL algorithms
  2. “solve” multi-agent problems
  3. learn from entities, but not just raw inputs
  4. design an optimal representation for RL
  5. AutoRL
  6. develop killer applications for (deep) RL

It is desirable to integrate RL more deeply with AI, with more intelligence in the end-to-end mapping from raw inputs to decisions: incorporating knowledge, having common sense, being more efficient and more interpretable, and avoiding obvious mistakes, rather than working as a black box.

Deep learning and reinforcement learning, selected among the MIT Technology Review 10 Breakthrough Technologies in 2013 and 2017 respectively, will play crucial roles in achieving artificial general intelligence. David Silver proposed a conjecture: artificial intelligence = reinforcement learning + deep learning (AI = RL + DL). We will see both deep learning and reinforcement learning prospering in the coming years and beyond. Deep learning is exploding; it is the right time to nurture, educate, and lead the market for reinforcement learning.

Deep learning, in this third wave of AI, will have deeper influences, as we have already seen from its many achievements. Reinforcement learning, as a more general learning and decision making paradigm, will deeply influence deep learning, machine learning, and artificial intelligence in general.

Original post: https://www.cnblogs.com/alan-blog-TsingHua/p/9968595.html
