PyTorch -- CNN text classification -- 《Convolutional Neural Networks for Sentence Classification》

The paper 《Convolutional Neural Networks for Sentence Classification》 implements text classification with a CNN.

Paper link: https://arxiv.org/abs/1408.5882

Model diagram: (figure not reproduced here; see Figure 1 of the paper: sentence embeddings, parallel convolutions with multiple filter widths, max-over-time pooling, and a fully connected softmax output.)

For a full explanation of the model, see the paper; the code, with comments, is given below:

# -*- coding: utf-8 -*-
# @time : 2019/11/9  13:55

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# Text-CNN parameters
embedding_size = 2
sequence_length = 3
num_classes = 2          # 0 or 1
filter_sizes = [2, 2, 2] # n-gram window sizes
num_filters = 3

# 3-word sentences (sequence_length = 3)
sentences = ["i love you", "he loves me", "she likes baseball", "i hate you", "sorry for that", "this is awful"]
labels = [1, 1, 1, 0, 0, 0]  # 1 is good, 0 is not good.

word_list = list(set(" ".join(sentences).split()))
word_dict = {w: i for i, w in enumerate(word_list)}
vocab_size = len(word_dict)

inputs = [[word_dict[n] for n in sen.split()] for sen in sentences]

input_batch = torch.LongTensor(inputs)   # [batch_size, sequence_length]
target_batch = torch.LongTensor(labels)  # [batch_size]; class indices for CrossEntropyLoss, not one-hot


class TextCNN(nn.Module):
    def __init__(self):
        super(TextCNN, self).__init__()

        self.num_filters_total = num_filters * len(filter_sizes)
        # Embedding matrix, initialized uniformly in [-1, 1]
        self.W = nn.Parameter(torch.empty(vocab_size, embedding_size).uniform_(-1, 1))
        # Output projection: concatenated pooled features -> class scores
        self.Weight = nn.Parameter(torch.empty(self.num_filters_total, num_classes).uniform_(-1, 1))
        self.Bias = nn.Parameter(0.1 * torch.ones([num_classes]))
        # The conv layers must be created here rather than inside forward(),
        # so their weights are registered as parameters and actually get trained.
        self.filter_list = nn.ModuleList(
            [nn.Conv2d(1, num_filters, (size, embedding_size)) for size in filter_sizes]
        )

    def forward(self, X):
        embedded_chars = self.W[X]  # [batch_size, sequence_length, embedding_size]
        embedded_chars = embedded_chars.unsqueeze(1)  # add channel(=1): [batch, 1, sequence_length, embedding_size]

        pooled_outputs = []
        for i, filter_size in enumerate(filter_sizes):
            # conv output: [batch_size, num_filters(=3), sequence_length - filter_size + 1, 1]
            h = F.relu(self.filter_list[i](embedded_chars))
            # max-over-time pooling collapses the height;
            # pooled after permute: [batch_size(=6), 1, 1, num_filters(=3)]
            pooled = F.max_pool2d(h, (sequence_length - filter_size + 1, 1)).permute(0, 3, 2, 1)
            pooled_outputs.append(pooled)

        # concatenate along the last axis: [batch_size(=6), 1, 1, num_filters * len(filter_sizes)(=9)]
        h_pool = torch.cat(pooled_outputs, dim=3)
        h_pool_flat = torch.reshape(h_pool, [-1, self.num_filters_total])  # [batch_size, num_filters_total]

        model = torch.mm(h_pool_flat, self.Weight) + self.Bias  # [batch_size, num_classes]
        return model


model = TextCNN()

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training
for epoch in range(5000):
    optimizer.zero_grad()
    output = model(input_batch)

    # output : [batch_size, num_classes], target_batch : [batch_size] (LongTensor, not one-hot)
    loss = criterion(output, target_batch)
    if (epoch + 1) % 1000 == 0:
        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss.item()))

    loss.backward()
    optimizer.step()

# Test
test_text = 'sorry hate you'
tests = [[word_dict[n] for n in test_text.split()]]
test_batch = torch.LongTensor(tests)

# Predict
predict = model(test_batch).data.max(1, keepdim=True)[1]
if predict[0][0] == 0:
    print(test_text, "is Bad Mean...")
else:
    print(test_text, "is Good Mean!!")
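To make the shape bookkeeping concrete, here is a minimal standalone sketch of a single conv/pool branch, using the same hyperparameters as above (batch=6, sequence_length=3, embedding_size=2, filter_size=2, num_filters=3); the variable names x, h, p are my own, not from the original code:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(6, 1, 3, 2)     # [batch, channel=1, sequence_length, embedding_size]
conv = nn.Conv2d(1, 3, (2, 2))  # one branch: filter_size=2 spanning the full embedding width
h = F.relu(conv(x))             # [6, 3, 2, 1]: output height = 3 - 2 + 1 = 2
p = F.max_pool2d(h, (2, 1))     # max-over-time pooling: [6, 3, 1, 1]
print(h.shape, p.shape)

Each of the three branches produces such a [6, 3, 1, 1] tensor; concatenating and flattening them yields the [6, 9] feature matrix fed into the output layer. Note also that in modern PyTorch one would typically use nn.Embedding(vocab_size, embedding_size) rather than indexing a raw parameter matrix, though the lookup self.W[X] is equivalent here.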

Original article: https://www.cnblogs.com/dhName/p/11826039.html
