RNN and LSTM, Saliency Prediction, Scene Labeling

http://handong1587.github.io/deep_learning/2015/10/09/rnn-and-lstm.html // RNN and LSTM

http://handong1587.github.io/deep_learning/2015/10/09/saliency-prediction.html // Saliency Prediction

http://handong1587.github.io/deep_learning/2015/10/09/scene-labeling.html // Scene Labeling

RNN and LSTM

Published: 09 Oct 2015  Category: deep_learning

Types of RNN

1) Plain Tanh Recurrent Neural Networks

2) Gated Recurrent Unit (GRU) Networks

3) Long Short-Term Memory (LSTM)
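
To make the list above concrete, here is a minimal one-time-step sketch of each cell type in plain NumPy. It is illustrative only: the weight names and the parameter-dictionary layout are placeholders, not taken from any work cited on this page.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 1) Plain tanh RNN: the new state is a squashed mix of the input and the
    # previous state; gradients must pass through tanh at every step.
    def tanh_rnn_step(x, h, W, U, b):
        return np.tanh(W @ x + U @ h + b)

    # 2) GRU: an update gate z and a reset gate r decide how much of the old
    # state to keep versus overwrite with the candidate state h_tilde.
    def gru_step(x, h, p):
        z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])
        r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])
        h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
        return (1 - z) * h + z * h_tilde

    # 3) LSTM: input/forget/output gates route information through an additive
    # cell state c, which is what helps gradients survive long sequences.
    def lstm_step(x, h, c, p):
        i = sigmoid(p["Wi"] @ x + p["Ui"] @ h + p["bi"])
        f = sigmoid(p["Wf"] @ x + p["Uf"] @ h + p["bf"])
        o = sigmoid(p["Wo"] @ x + p["Uo"] @ h + p["bo"])
        g = np.tanh(p["Wg"] @ x + p["Ug"] @ h + p["bg"])
        c_new = f * c + i * g
        return o * np.tanh(c_new), c_new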

Tutorials

A Beginner’s Guide to Recurrent Networks and LSTMs

http://deeplearning4j.org/lstm.html

A Deep Dive into Recurrent Neural Nets

http://nikhilbuduma.com/2015/01/11/a-deep-dive-into-recurrent-neural-networks/

Long Short-Term Memory: Tutorial on LSTM Recurrent Networks

http://people.idsia.ch/~juergen/lstm/index.htm

LSTM implementation explained

http://apaszke.github.io/lstm-explained.html

Recurrent Neural Networks Tutorial

Understanding LSTM Networks

Recurrent Neural Networks in DL4J

http://deeplearning4j.org/usingrnns.html

Train RNN

A Simple Way to Initialize Recurrent Networks of Rectified Linear Units

Sequence Level Training with Recurrent Neural Networks
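
As a rough illustration of the first title above, a sketch assuming the "IRNN" recipe it describes: a plain ReLU RNN whose recurrent matrix is initialized to the identity and whose biases start at zero, so that at initialization the hidden state is simply copied forward. All names here are placeholders.

    import numpy as np

    def init_irnn(input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        W_in = 0.01 * rng.standard_normal((hidden_dim, input_dim))  # small input weights
        U_rec = np.eye(hidden_dim)   # identity recurrence: state is copied forward
        b = np.zeros(hidden_dim)     # zero biases
        return W_in, U_rec, b

    def irnn_step(x, h, W_in, U_rec, b):
        # ReLU in place of tanh, per the initialization trick in the title.
        return np.maximum(0.0, W_in @ x + U_rec @ h + b)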

Papers

Generating Sequences With Recurrent Neural Networks

DRAW: A Recurrent Neural Network For Image Generation

Unsupervised Learning of Video Representations using LSTMs (ICML 2015)

LSTM: A Search Space Odyssey

Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

A Critical Review of Recurrent Neural Networks for Sequence Learning

Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks (Winner of MSCOCO image captioning challenge, 2015)

Visualizing and Understanding Recurrent Networks (Andrej Karpathy, Justin Johnson, Fei-Fei Li)

Grid Long Short-Term Memory

Depth-Gated LSTM

Deep Knowledge Tracing

Top-down Tree Long Short-Term Memory Networks

Alternative structures for character-level RNNs (INRIA & Facebook AI Research)

Pixel Recurrent Neural Networks (Google DeepMind)

Long Short-Term Memory-Networks for Machine Reading

Lipreading with Long Short-Term Memory

Associative Long Short-Term Memory

Representation of linguistic form and function in recurrent neural networks

Architectural Complexity Measures of Recurrent Neural Networks

Easy-First Dependency Parsing with Hierarchical Tree LSTMs

Training Input-Output Recurrent Neural Networks through Spectral Methods

Learn To Execute Programs

Learning to Execute

Neural Programmer-Interpreters (Google DeepMind)

A Programmer-Interpreter Neural Network Architecture for Prefrontal Cognitive Control

Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data

Attention Models

Recurrent Models of Visual Attention (Google DeepMind, NIPS 2014)

Recurrent Model of Visual Attention (Google DeepMind)

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

A Neural Attention Model for Abstractive Sentence Summarization (EMNLP 2015, Facebook AI Research)

Effective Approaches to Attention-based Neural Machine Translation (EMNLP 2015)

Generating Images from Captions with Attention

Attention and Memory in Deep Learning and NLP

Survey on the attention based RNN model and its applications in computer vision

Train RNN

Training Recurrent Neural Networks (PhD thesis)

Deep learning for control using augmented Hessian-free optimization

Hierarchical Conflict Propagation: Sequence Learning in a Recurrent Deep Neural Network

Recurrent Batch Normalization

Optimizing Performance of Recurrent Neural Networks on GPUs

Codes

NeuralTalk (Deprecated): a Python+numpy project for learning Multimodal Recurrent Neural Networks that describe images with sentences

NeuralTalk2: Efficient Image Captioning code in Torch, runs on GPU

char-rnn in Blocks

Project: pycaffe-recurrent

Using neural networks for password cracking

Recurrent neural networks for decoding CAPTCHAs

torch-rnn: Efficient, reusable RNNs and LSTMs for torch

Deploying a model trained with GPU in Torch into JavaScript, for everyone to use

LSTM implementation on Caffe

Blog

Survey on Attention-based Models Applied in NLP

http://yanran.li/peppypapers/2015/10/07/survey-attention-model-1.html

Survey on Advanced Attention-based Models

http://yanran.li/peppypapers/2015/10/07/survey-attention-model-2.html

Online Representation Learning in Recurrent Neural Language Models

http://www.marekrei.com/blog/online-representation-learning-in-recurrent-neural-language-models/

Fun with Recurrent Neural Nets: One More Dive into CNTK and TensorFlow

http://esciencegroup.com/2016/03/04/fun-with-recurrent-neural-nets-one-more-dive-into-cntk-and-tensorflow/

Materials to understand LSTM

https://medium.com/@shiyan/materials-to-understand-lstm-34387d6454c1#.4mt3bzoau

Understanding LSTM and its diagrams (★★★★★)

Persistent RNNs: 30 times faster RNN layers at small mini-batch sizes (Greg Diamos, Baidu Silicon Valley AI Lab)

http://svail.github.io/persistent_rnns/

All of Recurrent Neural Networks

https://medium.com/@jianqiangma/all-about-recurrent-neural-networks-9e5ae2936f6e#.q4s02elqg

Resources

Awesome Recurrent Neural Networks - A curated list of resources dedicated to RNN

Jürgen Schmidhuber’s page on Recurrent Neural Networks

http://people.idsia.ch/~juergen/rnn.html

Reading and Questions

Are there any recurrent convolutional neural network implementations out there?


Saliency Prediction

Published: 09 Oct 2015  Category: deep_learning

This task involves predicting the salient regions of an image, as indicated by human eye fixations.
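
A standard way the models below are scored against recorded fixations is Normalized Scanpath Saliency (NSS); here is a minimal NumPy sketch, illustrative rather than taken from any specific paper in this list:

    import numpy as np

    def nss(saliency_map, fixation_mask):
        # Normalized Scanpath Saliency: standardize the predicted map to zero
        # mean and unit variance, then average it at the fixated pixels.
        # saliency_map: float array (H, W); fixation_mask: bool array (H, W).
        s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
        return float(s[fixation_mask].mean())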

Large-scale optimization of hierarchical features for saliency prediction in natural images

Predicting Eye Fixations using Convolutional Neural Networks

DeepFix: A Fully Convolutional Neural Network for predicting Human Eye Fixations

DeepSaliency: Multi-Task Deep Neural Network Model for Salient Object Detection

SuperCNN: A Superpixelwise Convolutional Neural Network for Salient Object Detection

Shallow and Deep Convolutional Networks for Saliency Prediction

Scene Labeling

Published: 09 Oct 2015  Category: deep_learning

Papers

Learning hierarchical features for scene labeling

  • intro: “Their approach comprised of densely computing multi-scale CNN features for each pixel and aggregating them over image regions upon which they are classified. However, their method still required the post-processing step of generating over-segmented regions, like superpixels, for obtaining the final segmentation result. Additionally, the CNNs used for multi-scale feature learning were not very deep with only three convolution layers.” (a minimal sketch of this pipeline appears below)
  • paper: http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf
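
A minimal sketch of that pipeline, assuming a stand-in feature extractor (fake_cnn_features below is a hypothetical placeholder, not the paper's three-convolution network) and precomputed superpixel labels:

    import numpy as np

    def fake_cnn_features(image, n_feats=8):
        # Placeholder for a shallow CNN: random per-pixel projections of the
        # intensities, just enough to demonstrate the data flow.
        H, W = image.shape
        proj = np.random.default_rng(0).standard_normal((n_feats, 1))
        return (proj @ image.reshape(1, H * W)).reshape(n_feats, H, W)

    def resize_nearest(image, out_h, out_w):
        H, W = image.shape
        rows = np.arange(out_h) * H // out_h
        cols = np.arange(out_w) * W // out_w
        return image[np.ix_(rows, cols)]

    def multiscale_pixel_features(image, scales=(1.0, 0.5, 0.25)):
        # Compute features at several scales, upsample each back to full
        # resolution, and concatenate them per pixel.
        H, W = image.shape
        feats = []
        for s in scales:
            small = resize_nearest(image, max(1, int(H * s)), max(1, int(W * s)))
            f = fake_cnn_features(small)  # (C, h, w)
            feats.append(np.stack([resize_nearest(c, H, W) for c in f]))
        return np.concatenate(feats, axis=0)  # (C * len(scales), H, W)

    def aggregate_over_superpixels(features, superpixels):
        # Average the per-pixel features inside each superpixel region; each
        # region descriptor would then be fed to a classifier.
        out = []
        for lab in np.unique(superpixels):
            mask = superpixels == lab
            out.append(features[:, mask].mean(axis=1))
        return np.stack(out)  # (n_regions, C * len(scales))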

Indoor Semantic Segmentation using depth information

Multi-modal unsupervised feature learning for rgb-d scene labeling

Using neon for Scene Recognition: Mini-Places2

Attend, Infer, Repeat: Fast Scene Understanding with Generative Models

Challenges

Large-scale Scene Understanding Challenge
