UFLDL Tutorial
Description: This tutorial will teach you the main ideas of Unsupervised Feature Learning and Deep Learning. By working through it, you will also implement several feature learning/deep learning algorithms, see them work for yourself, and learn how to apply/adapt these ideas to new problems.
This tutorial assumes a basic knowledge of machine learning (specifically, familiarity with the ideas of supervised learning, logistic regression, and gradient descent). If you are not familiar with these ideas, we suggest you first complete sections II, III, and IV (up to Logistic Regression) of the Machine Learning course.
Sparse Autoencoder
- Neural Networks
- Backpropagation Algorithm
- Gradient checking and advanced optimization
- Autoencoders and Sparsity
- Visualizing a Trained Autoencoder
- Sparse Autoencoder Notation Summary
- Exercise:Sparse Autoencoder
Vectorized implementation
- Vectorization
- Logistic Regression Vectorization Example
- Neural Network Vectorization
- Exercise:Vectorization
Preprocessing: PCA and Whitening
Softmax Regression
Self-Taught Learning and Unsupervised Feature Learning
Building Deep Networks for Classification
- From Self-Taught Learning to Deep Networks
- Deep Networks: Overview
- Stacked Autoencoders
- Fine-tuning Stacked AEs
- Exercise: Implement deep networks for digit classification
Linear Decoders with Autoencoders
Working with Large Images
Note: The sections above this line are stable. The sections below are still under construction and may change without notice. Feel free to browse around, however; feedback and suggestions are welcome.
Miscellaneous
Miscellaneous Topics
Advanced Topics
Sparse Coding
ICA Style Models
Others
- Convolutional training
- Restricted Boltzmann Machines
- Deep Belief Networks
- Denoising Autoencoders
- K-means
- Spatial pyramids / Multiscale
- Slow Feature Analysis
- Tiled Convolutional Networks
Material contributed by: Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline Suen