『cs231n』Assignment 3, Problem 1 Notes: Understanding RNNs and Image-Captioning Training Through Code
PyTorch's RNN-related classes come in two forms: the full layer (e.g. nn.RNN) and the Cell (e.g. nn.RNNCell). The difference is that the layer processes an entire sequence in one call, while the Cell processes a single time step at a time; the layer is more fully encapsulated and easier to use, while the Cell is more flexible. In fact, one backend implementation of the RNN layer is simply a loop over an RNNCell. A minimal sketch contrasting the two call patterns is given below.
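A minimal sketch of the two call patterns (written for modern PyTorch, where Variable is no longer needed; shapes follow the examples in this post):

import torch
from torch import nn

seq_len, batch, in_dim, hidden = 2, 3, 4, 3
x = torch.randn(seq_len, batch, in_dim)

# nn.RNN: the whole sequence in one call; the time loop is internal
rnn = nn.RNN(in_dim, hidden)
out, hn = rnn(x)              # out: (seq_len, batch, hidden)

# nn.RNNCell: one time step per call; the time loop is written by hand
cell = nn.RNNCell(in_dim, hidden)
hx = torch.zeros(batch, hidden)
steps = []
for x_t in x:
    hx = cell(x_t, hx)        # hx: (batch, hidden)
    steps.append(hx)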
1. nn.RNN
import torch as t
from torch import nn
from torch.autograd import Variable as V

layer = 1
t.manual_seed(1000)
# input: 2 time steps, batch 3, feature dim 4
input = V(t.randn(2, 3, 4))
# 1 layer, hidden size 3, input feature dim 4
lstm = nn.LSTM(4, 3, layer)
# initial states: 1 layer, batch 3, hidden size 3
h0 = V(t.randn(layer, 3, 3))
c0 = V(t.randn(layer, 3, 3))
out, hn = lstm(input, (h0, c0))
print(out, hn)
Variable containing:
(0 ,.,.) =
  0.0545 -0.0061  0.5615
 -0.1251  0.4490  0.2640
  0.1405 -0.1624  0.0303

(1 ,.,.) =
  0.0168  0.1562  0.5002
  0.0824  0.1454  0.4007
  0.0180 -0.0267  0.0094
[torch.FloatTensor of size 2x3x3]
 (Variable containing:
(0 ,.,.) =
  0.0168  0.1562  0.5002
  0.0824  0.1454  0.4007
  0.0180 -0.0267  0.0094
[torch.FloatTensor of size 1x3x3]
, Variable containing:
(0 ,.,.) =
  0.1085  0.1957  0.9778
  0.5397  0.2874  0.6415
  0.0480 -0.0345  0.0141
[torch.FloatTensor of size 1x3x3]
)
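Notice in the printout that the last time step of out coincides with the returned hidden state: for an LSTM, hn is the tuple (h_n, c_n), and h_n holds the hidden state after the final step. A quick check, continuing the snippet above:

# for an LSTM the returned state is the tuple (h_n, c_n);
# h_n's top layer equals the last time step of out
h_n, c_n = hn
print(t.equal(out[-1].data, h_n[-1].data))   # expected: True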
2. nn.RNNCell
import torch as t
from torch import nn
from torch.autograd import Variable as V

t.manual_seed(1000)
# input: 2 time steps, batch 3, feature dim 4
input = V(t.randn(2, 3, 4))
# a Cell is always a single layer: hidden size 3, input feature dim 4
lstm = nn.LSTMCell(4, 3)
# initial states: batch 3, hidden size 3 (no layer dimension for a Cell)
hx = V(t.randn(3, 3))
cx = V(t.randn(3, 3))
out = []
# feed one time step (a batch of 4-dim vectors) at a time
for i_ in input:
    print(i_.shape)
    hx, cx = lstm(i_, (hx, cx))
    out.append(hx)
t.stack(out)
torch.Size([3, 4])
torch.Size([3, 4])
Variable containing:
(0 ,.,.) =
  0.0545 -0.0061  0.5615
 -0.1251  0.4490  0.2640
  0.1405 -0.1624  0.0303

(1 ,.,.) =
  0.0168  0.1562  0.5002
  0.0824  0.1454  0.4007
  0.0180 -0.0267  0.0094
[torch.FloatTensor of size 2x3x3]
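The stacked Cell outputs are exactly the values nn.LSTM produced in section 1: with t.manual_seed(1000), both runs draw the same random input, weights, and initial states. A sketch that makes the equivalence explicit by sharing parameters instead of relying on the seed (modern PyTorch; weight_ih_l0 etc. are the layer's actual parameter names):

import torch
from torch import nn

x = torch.randn(2, 3, 4)          # (seq_len, batch, input_size)

lstm = nn.LSTM(4, 3, 1)
cell = nn.LSTMCell(4, 3)
# copy the layer's (layer-0) parameters into the cell
cell.weight_ih.data = lstm.weight_ih_l0.data
cell.weight_hh.data = lstm.weight_hh_l0.data
cell.bias_ih.data = lstm.bias_ih_l0.data
cell.bias_hh.data = lstm.bias_hh_l0.data

h0 = torch.zeros(1, 3, 3)
c0 = torch.zeros(1, 3, 3)
out_full, _ = lstm(x, (h0, c0))

# unroll the cell by hand, one time step per iteration
hx, cx = h0[0], c0[0]
steps = []
for x_t in x:
    hx, cx = cell(x_t, (hx, cx))
    steps.append(hx)
out_unrolled = torch.stack(steps)

print(torch.allclose(out_full, out_unrolled))   # expected: True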
3. nn.Embedding
nn.Embedding converts words represented as scalar indices (hence the LongTensor input) into vectors. Here is a small simulation: embed the scalar word indices, then feed the result through an RNN (an LSTM here) to transform the dimensionality.
# 5 words, each represented by a 4-dim vector
embedding = nn.Embedding(5, 4)
# initialize with "pretrained" word vectors (here just 0..19 for readability)
embedding.weight.data = t.arange(0, 20).view(5, 4)
# Embedding maps scalar indices to vectors, so its input must be a LongTensor
input = V(t.arange(3, 0, -1)).long()
# after embedding and unsqueeze(1): 3 time steps, batch 1, feature dim 4
input = embedding(input).unsqueeze(1)
print(input)
# 1 layer, hidden size 3, input feature dim 4
layer = 1
lstm = nn.LSTM(4, 3, layer)
# initial states: 1 layer, batch 3, hidden size 3
# (note the mismatch with the input batch of 1: the old PyTorch used here
#  silently broadcasts, which is why out below has batch size 3; recent
#  versions raise a size-mismatch error instead)
h0 = V(t.randn(layer, 3, 3))
c0 = V(t.randn(layer, 3, 3))
out, hn = lstm(input, (h0, c0))
print(out)
Variable containing:
(0 ,.,.) =
  12  13  14  15

(1 ,.,.) =
   8   9  10  11

(2 ,.,.) =
   4   5   6   7
[torch.FloatTensor of size 3x1x4]

Variable containing:
(0 ,.,.) =
 -0.6222 -0.0156  0.0266
  0.1910  0.0026  0.0061
 -0.5823 -0.0042  0.0932

(1 ,.,.) =
  0.3199 -0.0243  0.1561
  0.8229  0.0051  0.1269
  0.3715 -0.0043  0.1704

(2 ,.,.) =
  0.7893 -0.0398  0.2963
  0.8835  0.0113  0.2767
  0.8004 -0.0044  0.2982
[torch.FloatTensor of size 3x3x3]
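As the first printout shows, embedding(input) simply gathers rows 3, 2, 1 of the 5x4 weight matrix. A short sketch making the lookup-table view explicit (modern PyTorch):

# nn.Embedding is a learnable lookup table: calling it is equivalent to
# indexing its weight matrix with the integer input
import torch
from torch import nn

emb = nn.Embedding(5, 4)
emb.weight.data = torch.arange(0, 20, dtype=torch.float).view(5, 4)
idx = torch.tensor([3, 2, 1])
print(torch.equal(emb(idx), emb.weight[idx]))   # expected: True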
Original article: https://www.cnblogs.com/hellcat/p/8485258.html