[Reposted] Project on Learning Deep Belief Nets

Deep Belief Nets (DBNs) will be explained in the lecture on Oct 29. Instead of learning layers of features by backpropagating errors, they learn one layer at a time by trying to build a generative model of the data, or of the activities of the feature detectors in the layer below. After the features have been learned in this way, they can be fine-tuned with backpropagation. The main advantage of DBNs is that they can learn their layers of features from large sets of unlabelled data.
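The greedy layer-wise procedure described above can be sketched in a few lines. The course code is in Matlab; the following is a minimal NumPy illustration (not the course implementation) of training a stack of RBMs with one-step contrastive divergence (CD-1), where each RBM models the feature activities of the layer below. All sizes and the toy data are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """Train one RBM layer with CD-1; returns (W, b_vis, b_hid)."""
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: sample hidden units given the data.
        p_h = sigmoid(data @ W + b_hid)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # Negative phase: one step of Gibbs sampling (CD-1).
        p_v = sigmoid(h @ W.T + b_vis)
        p_h2 = sigmoid(p_v @ W + b_hid)
        # Contrastive-divergence updates: data statistics minus
        # reconstruction statistics.
        W += lr * (data.T @ p_h - p_v.T @ p_h2) / len(data)
        b_vis += lr * (data - p_v).mean(axis=0)
        b_hid += lr * (p_h - p_h2).mean(axis=0)
    return W, b_vis, b_hid

def pretrain_dbn(data, layer_sizes):
    """Greedy layer-wise pre-training: each RBM models the layer below."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_vis, b_hid = train_rbm(x, n_hidden)
        layers.append((W, b_vis, b_hid))
        x = sigmoid(x @ W + b_hid)  # feature activities feed the next RBM
    return layers

# Toy binary "images" stand in for MNIST here.
toy = (rng.random((100, 64)) < 0.3).astype(float)
dbn = pretrain_dbn(toy, [32, 16])
```

Note that no labels are used anywhere in this loop; that is why the pre-training can exploit large amounts of unlabelled data, with backpropagation only needed afterwards for fine-tuning.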

The data and the code

For this project you should use the MNIST dataset. The data and the code for training DBNs are available here. You want the code for classifying images, not the code for deep autoencoders.

The main point of the project

You should investigate how the relative performance of three learning methods changes as you vary the relative amounts of labelled and unlabelled data. The three methods are DBNs, SVMlight, and SVMlight applied to the features learned by DBNs. SVMlight is explained in several places on the web. We will soon add more information on the easiest way to use SVMlight with Matlab for multiclass classification (as opposed to two-class classification, which is easier). Support vector machines will be explained in the lecture on Nov 12, but you don't need to understand much about them to run SVMlight.
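To feed either raw pixels or DBN features to SVMlight, the examples must be written in its sparse text format: one example per line, a label followed by 1-indexed `feature:value` pairs for the non-zero features. A small helper like the following (a sketch, not part of the course code) produces those lines; for multiclass MNIST you would write one such file per one-vs-rest binary problem, with labels +1/-1.

```python
def to_svmlight_lines(X, y):
    """Format examples in SVMlight's sparse 'label idx:val' format.

    Feature indices are 1-based and zero-valued features are omitted,
    as SVMlight expects.
    """
    lines = []
    for xi, yi in zip(X, y):
        feats = " ".join(f"{j + 1}:{v:g}" for j, v in enumerate(xi) if v != 0)
        lines.append(f"{yi} {feats}")
    return lines

# Two toy examples: a positive and a negative one.
lines = to_svmlight_lines([[0.0, 0.5, 1.0], [1.0, 0.0, 0.0]], [+1, -1])
```

The same formatting works whether the columns of `X` are pixel intensities or the hidden-unit activities of a pre-trained DBN, which is what the third method in the comparison requires.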

Choosing the data

You have to decide how much labelled and unlabelled data to use. Starting with a small amount of labelled data is a good idea: it makes the supervised learning fast, and it produces a high error rate, which makes comparisons between the methods easier.
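One simple way to create these conditions is to randomly designate a small subset of the training set as labelled and treat the remainder as unlabelled (its labels are simply withheld from the supervised learners). A sketch, with hypothetical array shapes standing in for MNIST:

```python
import numpy as np

def split_labelled(X, y, n_labelled, seed=0):
    """Keep labels for n_labelled random examples; the rest become unlabelled.

    The unlabelled pool is still usable for DBN pre-training, which
    never looks at labels.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    lab, unlab = idx[:n_labelled], idx[n_labelled:]
    return (X[lab], y[lab]), X[unlab]

# Toy stand-in for MNIST: 10 examples, 2 features each.
X = np.arange(20).reshape(10, 2).astype(float)
y = np.arange(10) % 10
(X_lab, y_lab), X_unlab = split_labelled(X, y, n_labelled=3)
```

Fixing the random seed keeps the labelled subset identical across the three methods, so that differences in error rate reflect the methods rather than the draw of labelled examples.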

Designing the Deep Belief Network

You have to decide how many hidden layers the DBN should have, how many units each layer should have, and how long the pre-training should last. This project does not involve writing your own code from scratch, so we expect you to run sensible experiments to choose these numbers (as well as the numbers of labelled and unlabelled examples), and your project will be evaluated on how well you do this.
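A sensible way to organise these experiments is to enumerate the configurations you intend to compare up front, so every run is logged against an explicit setting. The grid values below are purely illustrative, not recommendations:

```python
import itertools

# Hypothetical search ranges; choose your own based on pilot runs.
layer_counts = [1, 2, 3]       # number of hidden layers
layer_widths = [250, 500]      # units per hidden layer
pretrain_epochs = [10, 50]     # length of RBM pre-training

# Every combination becomes one experiment to run and record.
grid = [dict(n_layers=d, n_units=w, epochs=e)
        for d, w, e in itertools.product(layer_counts,
                                         layer_widths,
                                         pretrain_epochs)]
```

Evaluating each configuration on a held-out validation set (rather than the test set) is what makes the selection defensible when you report the final comparison.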

Extra work for teams

If you are working as a pair, you should also compare the three methods above with a single feedforward network trained with backpropagation on the labelled data.
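This baseline is an ordinary network trained end-to-end with backpropagation, with no generative pre-training. A minimal NumPy sketch of such a network (one hidden layer, sigmoid units, squared error; all sizes and data are toy values, not the course setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_mlp(X, y, n_hidden=8, epochs=200, lr=0.5):
    """One-hidden-layer net trained with plain backpropagation."""
    n_in = X.shape[1]
    W1 = 0.5 * rng.standard_normal((n_in, n_hidden))
    W2 = 0.5 * rng.standard_normal((n_hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        # Forward pass.
        h = sig(X @ W1)
        out = sig(h @ W2)
        # Backward pass: squared-error gradient through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        W1 -= lr * X.T @ d_h / len(X)
    return W1, W2

# Toy data just to exercise the training loop.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1, W2 = train_mlp(X, y)
```

Because this baseline sees only the labelled examples, comparing it against the DBN-based methods isolates how much the unlabelled data actually helps at each labelled/unlabelled ratio.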

Posted: 2024-10-20 06:52:36
