Stanford Recommended Reading List

The reading list recommended on the Stanford deep learning website:

UFLDL Recommended Readings

If you're learning about UFLDL (Unsupervised Feature Learning and Deep Learning), here is a list of papers to consider reading. We're assuming you're already familiar with basic machine learning at the level of [CS229 (lecture notes available)].

The basics:

  • [CS294A] Neural Networks/Sparse Autoencoder Tutorial. (Most of this is now in the UFLDL Tutorial, but the exercise is still on the CS294A website. A minimal autoencoder sketch follows this list.)
  • [1] Natural Image Statistics book, Hyvarinen et al.
    • This is long, so just skim or skip the chapters that you already know.
    • Important chapters: 5 (PCA and whitening; you'll probably already know the PCA stuff), 6 (sparse coding), 7 (ICA), 10 (ISA), 11 (TICA), 16 (temporal models).
  • [2] Olshausen and Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images Nature 1996. (Sparse Coding)
  • [3] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer and Andrew Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML 2007
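
To make the basics concrete, here is a minimal sparse-autoencoder sketch in the spirit of the CS294A tutorial: one sigmoid hidden layer trained to reconstruct its input, with a sparsity penalty on the hidden activations. All sizes and hyperparameters are illustrative, and the L1 penalty here is a simple stand-in for the tutorial's KL-divergence penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "patches" of 64 pixels each (think 8x8 image patches).
X = rng.standard_normal((200, 64))

n_hidden, lr, lam = 25, 0.01, 1e-3               # illustrative hyperparameters
W1 = rng.standard_normal((64, n_hidden)) * 0.1   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 64)) * 0.1   # decoder weights
b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward pass: encode, then reconstruct.
    H = sigmoid(X @ W1 + b1)
    Xhat = H @ W2 + b2
    R = Xhat - X

    # Loss = mean squared reconstruction error + sparsity penalty.
    loss = (R ** 2).mean() + lam * np.abs(H).mean()

    # Backward pass (plain chain rule).
    dXhat = 2 * R / R.size
    dW2, db2 = H.T @ dXhat, dXhat.sum(axis=0)
    dH = dXhat @ W2.T + lam * np.sign(H) / H.size
    dZ1 = dH * H * (1 - H)                       # sigmoid derivative
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)

    # Gradient-descent step on all parameters.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g
```

Trained on real image patches (and with the tutorial's KL penalty instead of L1), the columns of W1 tend to come out looking like localized edge detectors.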

Autoencoders:

  • [4] Hinton, G. E. and Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 2006.

    • If you want to play with the code, you can also find it at [5].
  • [6] Bengio, Y., Lamblin, P., Popovici, P., Larochelle, H. Greedy Layer-Wise Training of Deep Networks. NIPS 2006
  • [7] Pascal Vincent, Hugo Larochelle, Yoshua Bengio and Pierre-Antoine Manzagol. Extracting and Composing Robust Features with Denoising Autoencoders. ICML 2008.
    • (They have a nice model, but then backwards-rationalize it into a probabilistic model. Ignore the backwards-rationalized probabilistic model [Section 4]. A minimal denoising sketch follows this list.)
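
The core idea of the denoising autoencoder in [7] fits in a few lines: corrupt the input, then train the network to reconstruct the clean version, so the features must capture structure rather than copy pixels. A hedged sketch of just that step, with an illustrative tied linear encoder/decoder (the paper uses sigmoid layers; the masking noise matches the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupt(X, p=0.3):
    # Masking noise: zero each entry independently with probability p.
    return X * (rng.random(X.shape) >= p)

X = rng.standard_normal((100, 64))               # toy data
W = rng.standard_normal((64, 25)) * 0.1          # tied encoder/decoder weights

X_tilde = corrupt(X)                             # corrupt the input ...
X_hat = (X_tilde @ W) @ W.T                      # ... encode and decode it ...
loss = ((X_hat - X) ** 2).mean()                 # ... but reconstruct the CLEAN X
print(f"denoising reconstruction loss: {loss:.3f}")
```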

Analyzing deep learning/why does deep learning work:

  • [8] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation. ICML 2007.

    • (Someone read this and let us know if it is worth keeping. [Most model-related material is already covered by other papers, and not many impactful conclusions can be drawn from the results, but it can serve as reinforcing reading for deep models.])
  • [9] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR 2010
  • [10] Ian J. Goodfellow, Quoc V. Le, Andrew M. Saxe, Honglak Lee and Andrew Y. Ng. Measuring invariances in deep networks. NIPS 2009.

RBMs:

  • [11] Tutorial on RBMs.

    • But ignore the Theano code examples.
    • (Someone tell us if this should be moved later. Useful for understanding some of the DL literature, but not needed for many of the later papers? [Seems fine to leave in; it is a useful introduction if the reader has no idea about RBMs and has to deal with Hinton's 06 Science paper or 3-way RBMs right away.]) A minimal CD-1 training sketch follows this list.
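
For readers who want to see what RBM training actually computes, here is a minimal CD-1 (one-step contrastive divergence) sketch for a binary RBM. Biases are omitted for brevity and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 64, 32, 0.1                       # illustrative sizes
W = rng.standard_normal((n_vis, n_hid)) * 0.01
V = (rng.random((100, n_vis)) > 0.5).astype(float)   # toy binary data

for _ in range(100):
    # Positive phase: hidden probabilities given the data.
    ph = sigmoid(V @ W)
    # Negative phase (one Gibbs step): sample hiddens, reconstruct, re-infer.
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T)
    ph2 = sigmoid(pv @ W)
    # CD-1 update: data statistics minus reconstruction statistics.
    W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
```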

Convolutional Networks:

  • [12] Tutorial on Convolutional Neural Networks. (A naive convolution sketch follows this list.)

    • But ignore the Theano code examples.
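
If the tutorial's Theano code gets in the way, the underlying operation is simple. A naive "valid" 2-D convolution sketch (strictly, cross-correlation, which is what most deep learning code computes; names are illustrative):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image and take dot products; O(H*W*k*k).
    H, W = image.shape
    k = kernel.shape[0]                          # assumes a square kernel
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + k, j:j + k] * kernel).sum()
    return out

image = np.random.default_rng(3).random((8, 8))
edge = np.array([[1.0, 0.0, -1.0]] * 3)          # crude vertical-edge filter
print(conv2d_valid(image, edge).shape)           # (6, 6)
```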

Applications:

  • Computer Vision

    • [13] Jianchao Yang, Kai Yu, Yihong Gong, Thomas Huang. Linear Spatial Pyramid Matching using Sparse Coding for Image Classification, CVPR 2009
    • [14] A. Torralba, R. Fergus and Y. Weiss. Small codes and large image databases for recognition. CVPR 2008.
  • Audio Recognition
    • [15] Honglak Lee, Yan Largman, Peter Pham and Andrew Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. NIPS 2009.

Natural Language Processing:

  • [16] Yoshua Bengio, Réjean Ducharme, Pascal Vincent and Christian Jauvin. A Neural Probabilistic Language Model. JMLR 2003. (A forward-pass sketch of this model follows this list.)
  • [17] R. Collobert and J. Weston. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. ICML 2008.
  • [18] Richard Socher, Jeffrey Pennington, Eric Huang, Andrew Y. Ng, and Christopher D. Manning. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. EMNLP 2011
  • [19] Richard Socher, Eric Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. NIPS 2011
  • [20] Mnih, A. and Hinton, G. E. Three New Graphical Models for Statistical Language Modelling. ICML 2007
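
As a rough sketch of what [16] computes: concatenate the embeddings of the previous words, pass them through a tanh hidden layer, and softmax over the vocabulary. Forward pass only, with all sizes illustrative; the actual model also has direct embedding-to-output connections and is trained end to end by backprop.

```python
import numpy as np

rng = np.random.default_rng(4)

V, d, n_ctx, n_hid = 1000, 30, 3, 50   # vocab, embedding dim, context, hidden
C = rng.standard_normal((V, d)) * 0.1            # shared embedding table
Wh = rng.standard_normal((n_ctx * d, n_hid)) * 0.1
Wo = rng.standard_normal((n_hid, V)) * 0.1

def next_word_probs(context_ids):
    # P(w_t | previous n_ctx words): concat embeddings -> tanh -> softmax.
    x = C[context_ids].reshape(-1)
    h = np.tanh(x @ Wh)
    logits = h @ Wo
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

p = next_word_probs([12, 7, 401])                # three arbitrary word ids
print(p.shape, round(p.sum(), 6))                # (1000,) 1.0
```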

Advanced stuff:

  • Slow Feature Analysis:

    • [21] Berkes, P. and Wiskott, L. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 2005. (A minimal linear-SFA sketch follows this list.)
  • Predictive Sparse Decomposition
    • [22] Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun, "Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition", Computational and Biological Learning Lab, Courant Institute, NYU, 2008.
    • [23] Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun, "What is the Best Multi-Stage Architecture for Object Recognition?", ICCV 2009.
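
For a feel of what linear SFA in [21] does: whiten the signal, then take the unit-variance direction whose temporal derivative has the least variance. A minimal sketch on a toy mixture (the test signal and all names are illustrative; the paper applies quadratic SFA to image sequences):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy time series: one slow sine linearly mixed with five fast noise channels.
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)
X = np.column_stack([rng.standard_normal((2000, 5)), slow])
X = X @ rng.standard_normal((6, 6))              # unknown linear mixing

# 1. Whiten: zero mean, identity covariance.
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(Xc))
Z = (Xc @ evecs) / np.sqrt(evals)

# 2. Slowest feature = unit-variance direction minimizing the variance of
#    the temporal difference (smallest eigenvector of cov(diff(Z))).
dZ = np.diff(Z, axis=0)
w = np.linalg.eigh(dZ.T @ dZ / len(dZ))[1][:, 0]
slow_feature = Z @ w

# Up to sign, this recovers the hidden slow sine.
print(f"|corr| = {abs(np.corrcoef(slow_feature, slow)[0, 1]):.2f}")
```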

Mean-Covariance models:

  • [24] M. Ranzato, A. Krizhevsky, G. Hinton. Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images. AISTATS 2010.
  • [25] M. Ranzato, G. Hinton, Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines. CVPR 2010
    • (Someone tell us if you need to read the 3-way RBM paper before the mcRBM one. [I didn't find it necessary; in fact, the CVPR paper seemed easier to understand.])
  • [26] Dahl, G., Ranzato, M., Mohamed, A. and Hinton, G. E. Phone Recognition with the Mean-Covariance Restricted Boltzmann Machine. NIPS 2010.
  • [27] Y. Karklin and M. S. Lewicki, Emergence of complex cell properties by learning to generalize in natural scenes, Nature, 2008.
    • (Someone tell us if this should be here. Interesting algorithm + nice visualizations, though maybe slightly hard to understand. [Seems a good reminder that other models exist.])

Overview:

  • [28] Yoshua Bengio. Learning Deep Architectures for AI. FTML 2009.

    • (A broad landscape description of the field, but the technical details are hard to follow, so skip those. This is also easier to read after you've gone over some of the literature of the field.)

Practical guides:

  • [29] Geoff Hinton. A practical guide to training restricted Boltzmann machines. UTML TR 2010-003.

    • A practical guide (read if you're trying to implement an RBM; otherwise skip, since this is not really a tutorial).
  • [30] Y. LeCun, L. Bottou, G. Orr and K. Muller. Efficient Backprop. Neural Networks: Tricks of the trade, Springer, 1998
    • Read if you're trying to run backprop; otherwise skip, since these are very low-level engineering/hackery tricks and not that satisfying to read.

Also, for other lists of papers:

  • [31] Honglak Lee's Course
  • [32] from Geoff's tutorial
