Several Difficulties of RNNs (Recurrent Neural Networks)

1. Vanishing gradients

The gradient of the RNN's error at a given time step t with respect to \(W\) is:

\(\frac{\partial E_t}{\partial W}=\sum_{k=1}^{t}\frac{\partial E_t}{\partial y_t}\frac{\partial y_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W}\)

Here \(h\) is the output of the hidden nodes, \(y_t\) is the network's output at time t, and \(W\) is the hidden-to-hidden weight matrix. The factor \(\frac{\partial h_t}{\partial h_k}\) is the chain-rule expansion of the derivative over the interval [k, t]; this interval can be very long, and that is what causes the gradient to vanish or explode. Unrolling \(\frac{\partial h_t}{\partial h_k}\) over time:

\(\frac{\partial h_t}{\partial h_k}=\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}=\prod_{j=k+1}^{t}W^T \times diag\left[\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}\right]\)

So what on earth is that diag matrix? An example will make it clear. Suppose we want \(\frac{\partial h_5}{\partial h_4}\). Recall how \(h_5\) is obtained in the forward pass: \(h_5=W\sigma(h_4)+W^{hx}x_5\), so \(\frac{\partial h_5}{\partial h_4}=W\frac{\partial \sigma(h_4)}{\partial h_4}\). Since \(\sigma(h_4)\) and \(h_4\) are both vectors, \(\frac{\partial \sigma(h_4)}{\partial h_4}\) is a Jacobian matrix:

\(\frac{\partial \sigma(h_4)}{\partial h_4}=\begin{bmatrix} \frac{\partial\sigma(h_{41})}{\partial h_{41}}&\cdots&\frac{\partial\sigma(h_{41})}{\partial h_{4D}} \\ \vdots&\ddots&\vdots \\ \frac{\partial\sigma(h_{4D})}{\partial h_{41}}&\cdots&\frac{\partial\sigma(h_{4D})}{\partial h_{4D}}\end{bmatrix}\)

Clearly every off-diagonal entry is 0, because the logistic sigmoid \(\sigma\) is an element-wise operation; that is exactly why the Jacobian collapses to the diag matrix above.
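To make this concrete, here is a minimal numpy sketch (my own illustration, not from the lecture notes; the hidden size D, weight scale, and 50-step horizon are arbitrary choices) that builds each Jacobian \(W^T \times diag[\sigma'(h_{j-1})]\) for a toy RNN and multiplies them over a long interval, printing the norm of the running product:

```python
import numpy as np

# Toy demonstration of the product prod_j W^T diag[sigma'(h_{j-1})].
# All sizes and scales are arbitrary choices for illustration.
rng = np.random.default_rng(0)
D = 10                                   # hidden dimension
W = rng.normal(scale=0.5, size=(D, D))   # hidden-to-hidden weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = rng.normal(size=D)   # hidden state h_k at the start of the interval
J = np.eye(D)            # running product, i.e. dh_t/dh_k accumulated so far
for step in range(1, 51):
    s = sigmoid(h)
    # sigma'(h) = sigma(h) * (1 - sigma(h)), element-wise, so
    # diag(s * (1 - s)) is exactly the diag matrix from the text.
    J = W.T @ np.diag(s * (1.0 - s)) @ J
    h = W @ s            # next hidden state; the input term W^{hx} x_j is
                         # omitted since it does not affect dh_j/dh_{j-1}
    if step % 10 == 0:
        # 2-norm (largest singular value) of the accumulated Jacobian
        print(f"t - k = {step:2d}: ||dh_t/dh_k|| = {np.linalg.norm(J, 2):.3e}")
```

With these weights the printed norms decay by many orders of magnitude over 50 steps, exactly the vanishing behavior; a \(W\) with a spectral norm large enough relative to the \(\sigma'\) factors would instead let the product grow.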

The rest of the vanishing/exploding-gradient derivation is straightforward, so I won't write it out here; see equation (14) and onward in http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf.
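For reference, the key step of that derivation bounds the norm of each factor: if \(\beta_W\) and \(\beta_h\) are upper bounds on \(\|W^T\|\) and \(\left\|diag\left[\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}\right]\right\|\) respectively, then

\(\left\|\frac{\partial h_t}{\partial h_k}\right\|=\left\|\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}\right\|\le\prod_{j=k+1}^{t}\left\|W^T\right\|\left\|diag\left[\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}\right]\right\|\le(\beta_W\beta_h)^{t-k}\)

so when \(\beta_W\beta_h<1\) the factor \(\frac{\partial h_t}{\partial h_k}\) shrinks exponentially in the gap \(t-k\) (vanishing), while \(\beta_W\beta_h>1\) permits exponential growth (exploding).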

2. Summing derivatives over nodes

To be continued...
