Natural Language Processing (NLP) Resources

Reposted from: https://github.com/andrewt3000/DL4NLP

Deep Learning for NLP resources

State of the art resources for NLP sequence modeling tasks such as machine translation, image captioning, and dialog.

My notes on neural networks, RNNs, and LSTMs

Deep Learning for NLP

Stanford Natural Language Processing
Intro NLP course with videos. This has no deep learning, but it is a good primer for traditional NLP. Covers topics such as sentence segmentation, word tokenization, word normalization, n-grams, named entity recognition, and part-of-speech tagging. Currently not available.

Stanford CS 224D: Deep Learning for NLP class
Richard Socher. (2016) Class with syllabus and slides.
Videos: 2015 lectures / 2016 lectures

A Primer on Neural Network Models for Natural Language Processing
Yoav Goldberg. October 2015. A 75-page summary of the state of the art; no new information.

Oxford Deep Learning for NLP class
Phil Blunsom. (2017) Class by the DeepMind NLP group.
Lecture slides, videos, and practicals: Github Repository
Currently ongoing

Word Vectors

Resources about word vectors, aka word embeddings, and distributed representations for words.
Word vectors are numeric representations of words where similar words have similar vectors. Word vectors are often used as input to deep learning systems. This process is sometimes called pretraining.
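As a rough illustration of "similar words have similar vectors," here is a minimal Python sketch that compares made-up vectors with cosine similarity; the numbers are purely hypothetical, not taken from any trained model.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 4-d embeddings; real word vectors typically have 100-300 dimensions.
vectors = {
    "king":  np.array([0.8, 0.3, 0.1, 0.7]),
    "queen": np.array([0.7, 0.4, 0.2, 0.8]),
    "apple": np.array([0.1, 0.9, 0.8, 0.0]),
}

print(cosine_similarity(vectors["king"], vectors["queen"]))  # relatively high
print(cosine_similarity(vectors["king"], vectors["apple"]))  # relatively low
```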

A neural probabilistic language model.
Bengio 2003. Seminal paper on word vectors.



Efficient Estimation of Word Representations in Vector Space
Mikolov et al. 2013. Word2Vec generates word vectors in an unsupervised way by attempting to predict words from a corpus. Describes Continuous Bag-of-Words (CBOW) and Continuous Skip-gram models for learning word vectors.
Skip-gram takes the center word and predicts the surrounding (outside) words. Skip-gram is better for large datasets.
CBOW takes the outside words and predicts the center word. CBOW is better for smaller datasets.
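A minimal sketch of training both variants, assuming gensim 4.x is installed; the toy corpus and hyperparameters are illustrative only. `sg=1` selects skip-gram, `sg=0` (the default) selects CBOW.

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences. A real corpus would be far larger.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

print(skipgram.wv["cat"].shape)              # (50,)
print(cbow.wv.most_similar("cat", topn=2))   # nearest neighbors in the toy model
```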

Distributed Representations of Words and Phrases and their Compositionality
Mikolov et al. 2013. Learns vectors for phrases such as "New York Times." Includes optimizations for skip-gram: hierarchical softmax and negative sampling. Also subsamples frequent words (i.e. frequent words like "the" are skipped periodically to speed up training and to improve the vectors of less frequent words).
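These three optimizations map onto gensim Word2Vec parameters; a hedged sketch assuming gensim 4.x, with illustrative values.

```python
from gensim.models import Word2Vec

sentences = [["the", "new", "york", "times"], ["the", "new", "york", "post"]]

model = Word2Vec(
    sentences,
    vector_size=50,
    min_count=1,
    sg=1,          # skip-gram
    hs=0,          # 1 = hierarchical softmax; with hs=0, negative sampling is used
    negative=5,    # number of negative samples drawn per positive example
    sample=1e-3,   # subsampling threshold for frequent words like "the"
)
```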

Linguistic Regularities in Continuous Space Word Representations
Mikolov et al. 2013. Performs well on word similarity and analogy tasks. Expands on the famous example: King - Man + Woman = Queen
Word2Vec source code
Word2Vec tutorial in TensorFlow
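A quick sketch of the analogy task using gensim's downloader; "glove-wiki-gigaword-100" is a pretrained model name from the gensim-data catalog and is downloaded on first use.

```python
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")  # pretrained 100-d word vectors

# king - man + woman ≈ queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```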

word2vec Parameter Learning Explained
Rong 2014

Articles explaining word2vec: Deep Learning, NLP, and Representations and The amazing power of word vectors



GloVe: Global vectors for word representation
Pennington, Socher, Manning. 2014. Creates word vectors and relates word2vec to matrix factorization. The evaluation section drew criticism from Yoav Goldberg.
Glove source code and training data
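Pretrained GloVe vectors are distributed as plain text, one word followed by its floats per line; here is a minimal loader sketch ("glove.6B.100d.txt" is one of the files from the GloVe download page).

```python
import numpy as np

def load_glove(path):
    # Each line: "<word> <float> <float> ...", space-separated.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove("glove.6B.100d.txt")
print(glove["language"].shape)  # (100,)
```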



Enriching Word Vectors with Subword Information
Bojanowski, Grave, Joulin, Mikolov 2016
FastText Code
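Because fastText builds word vectors from character n-grams, it can produce a vector for a word never seen in training. A minimal sketch using gensim's FastText implementation (assumes gensim 4.x; toy corpus for illustration):

```python
from gensim.models import FastText

sentences = [["natural", "language", "processing"], ["language", "modeling"]]
model = FastText(sentences, vector_size=50, window=3, min_count=1)

# "processed" never appears in training, but shares character n-grams
# with "processing", so a subword-based vector can still be assembled.
print(model.wv["processed"].shape)           # (50,)
print("processed" in model.wv.key_to_index)  # False: not in the vocabulary
```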

Sentiment Analysis

Thought vectors are numeric representations for sentences, paragraphs, and documents. This concept is used for many text classification tasks such as sentiment analysis.

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
Socher et al. 2013. Introduces Recursive Neural Tensor Network and dataset: "sentiment treebank." Includes demo site. Uses a parse tree.

Distributed Representations of Sentences and Documents
Le, Mikolov. 2014. Introduces Paragraph Vector. Concatenates and averages pretrained, fixed word vectors to create vectors for sentences, paragraphs and documents. Also known as paragraph2vec. Doesn't use a parse tree.
Implemented in gensim. See doc2vec tutorial
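A minimal Doc2Vec sketch with gensim (the doc2vec tutorial linked above goes into more depth); the data and sizes are illustrative, assuming gensim 4.x.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    TaggedDocument(words=["great", "movie", "loved", "it"], tags=[0]),
    TaggedDocument(words=["terrible", "plot", "bad", "acting"], tags=[1]),
]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

# Infer a vector for a new, unseen document.
vec = model.infer_vector(["really", "great", "acting"])
print(vec.shape)  # (50,)
```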

Deep Recursive Neural Networks for Compositionality in Language
Irsoy & Cardie. 2014. Uses Deep Recursive Neural Networks. Uses a parse tree.

Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Tai et al. 2015. Introduces Tree-LSTM. Uses a parse tree.

Semi-supervised Sequence Learning
Dai, Le 2015
Approach: "We present two approaches that use unlabeled data to improve sequence learning with recurrent networks. The first approach is to predict what comes next in a sequence, which is a conventional language model in natural language processing. The second approach is to use a sequence autoencoder..."
Result: "With pretraining, we are able to train long short term memory recurrent networks up to a few hundred timesteps, thereby achieving strong performance in many text classification tasks, such as IMDB, DBpedia and 20 Newsgroups."

Bag of Tricks for Efficient Text Classification
Joulin, Grave, Bojanowski, Mikolov 2016 Facebook AI Research.
"Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation."
FastText blog
FastText Code
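A sketch of supervised classification with the official `fasttext` Python bindings; "reviews.train" is a hypothetical training file where each line starts with a label, e.g. "__label__positive loved this movie".

```python
import fasttext

# Train a supervised classifier; hyperparameters here are illustrative.
model = fasttext.train_supervised(
    input="reviews.train", epoch=5, lr=0.5, wordNgrams=2
)

labels, probs = model.predict("this film was a waste of time")
print(labels, probs)
```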

Neural Machine Translation

In 2014, neural machine translation (NMT) performance became comparable to state-of-the-art statistical machine translation (SMT).

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (abstract)
Cho et al. 2014. Breakthrough deep learning paper on machine translation. Introduces the basic sequence-to-sequence model, which consists of two RNNs: an encoder for the input and a decoder for the output.
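A conceptual encoder-decoder sketch in tf.keras, in the spirit of the paper: one GRU encodes the source sentence into a fixed-length state, which initializes a second GRU that decodes the target sentence. Vocabulary and hidden sizes are illustrative, and training/inference loops are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers

src_vocab, tgt_vocab, dim = 8000, 8000, 256

# Encoder: embed source tokens, keep only the final hidden state.
src = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(src_vocab, dim)(src)
_, enc_state = layers.GRU(dim, return_state=True)(enc_emb)

# Decoder: generate target tokens, conditioned on the encoder's final state.
tgt = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(tgt_vocab, dim)(tgt)
dec_out = layers.GRU(dim, return_sequences=True)(dec_emb, initial_state=enc_state)
logits = layers.Dense(tgt_vocab)(dec_out)

model = tf.keras.Model([src, tgt], logits)
model.summary()
```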

Neural Machine Translation by jointly learning to align and translate (abstract)
Bahdanau, Cho, Bengio 2014.
Implements attention mechanism. "Each time the proposed model generates a word in a translation, it (soft-)searches for a set of positions in a source sentence where the most relevant information is concentrated"
Result: "comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation."
English to French Demo
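A numpy sketch of the soft-search idea: score every encoder state against the current decoder state with an additive (Bahdanau-style) scoring function, softmax the scores into attention weights, and take a weighted sum as the context vector. All shapes and matrices here are random placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, d = 6, 8                          # source length, hidden size
enc_states = np.random.randn(T, d)   # one annotation vector per source position
dec_state = np.random.randn(d)       # current decoder hidden state

# Additive scoring: v^T tanh(W_a s + U_a h_j)
W_a, U_a, v = np.random.randn(d, d), np.random.randn(d, d), np.random.randn(d)
scores = np.array([v @ np.tanh(W_a @ dec_state + U_a @ h) for h in enc_states])

weights = softmax(scores)        # where to "look" in the source sentence
context = weights @ enc_states   # context vector fed to the decoder
print(weights.round(2), context.shape)
```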

On Using Very Large Target Vocabulary for Neural Machine Translation
Jean, Cho, Memisevic, Bengio 2014.
"we try replacing each [UNK] token with the aligned source word or its most likely translation determined by another word alignment model."
Result: English -> German bleu score = 21.59 (target vocabulary of 50,000)

Sequence to Sequence Learning with Neural Networks
Sutskever, Vinyals, Le 2014. (NIPS presentation). Uses seq2seq to generate translations.
Result: English -> French bleu score = 34.8 (WMT’14 dataset)
A key contribution is improvements from reversing the source sentences.
seq2seq tutorial in TensorFlow.

Addressing the Rare Word Problem in Neural Machine Translation (abstract)
Luong, Sutskever, Le, Vinyals, Zaremba 2014
Replace UNK words with dictionary lookup.
Result: English -> French BLEU score = 37.5.

Effective Approaches to Attention-based Neural Machine Translation
Luong, Pham, Manning. 2015
2 models of attention: global and local.
Result: English -> German 25.9 BLEU points

Context-Dependent Word Representation for Neural Machine Translation
Choi, Cho, Bengio 2016
"we propose to contextualize the word embedding vectors using a nonlinear bag-of-words representation of the source sentence."
"we propose to represent special tokens (such as numbers, proper nouns and acronyms) with typed symbols to facilitate translating those words that are not well-suited to be translated via continuous vectors."

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Wu et al. 2016
blog post
"WMT’14 English-to-French, our single model scores 38.95 BLEU"
"WMT’14 English-to-German, our single model scores 24.17 BLEU"

Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Johnson et al. 2016
blog post
Translations between untrained language pairs.

Google has started rolling out NMT to its production system, and it is a significant improvement.

Image Captioning

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Xu et al. 2015. Creates captions by feeding the image into a CNN, whose features feed into the hidden state of an RNN that generates the caption. At each time step the RNN outputs the next word and the next location to pay attention to, via a probability distribution over grid locations. Uses two types of attention: soft and hard. Soft attention is a deterministic weighted average over locations and is trained with backprop and gradient descent. Hard attention stochastically samples a single location and is trained with reinforcement learning rather than backprop.

Open source implementation in TensorFlow
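A numpy sketch contrasting the two attention types over a grid of image features (random data, illustrative shapes): soft attention takes a differentiable weighted average of all locations, while hard attention samples a single location, which is why it needs reinforcement learning rather than plain backprop.

```python
import numpy as np

rng = np.random.default_rng(0)
num_locations, feat_dim = 14 * 14, 512
features = rng.standard_normal((num_locations, feat_dim))  # CNN feature grid
scores = rng.standard_normal(num_locations)                # produced from the RNN state

probs = np.exp(scores - scores.max())
probs /= probs.sum()

soft_context = probs @ features                  # weighted sum, trainable by backprop
hard_index = rng.choice(num_locations, p=probs)  # stochastic sample of one location
hard_context = features[hard_index]

print(soft_context.shape, hard_context.shape)
```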

Conversation modeling / Dialog

Neural Responding Machine for Short-Text Conversation
Shang et al. 2015. Uses a Neural Responding Machine. Trained on a Weibo dataset. Achieves one-round conversations with 75% appropriate responses.

A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Sordoni et al. 2015. Generates responses to tweets.
Uses Recurrent Neural Network Language Model (RLM) architecture of (Mikolov et al., 2010). source code: RNNLM Toolkit

Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
Serban, Sordoni, Bengio et al. 2015. Extends hierarchical recurrent encoder-decoder neural network (HRED).

Attention with Intention for a Neural Network Conversation Model
Yao et al. 2015. The architecture is three recurrent networks: an encoder, an intention network, and a decoder.

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues
Serban, Sordoni, Lowe, Charlin, Pineau, Courville, Bengio 2016
Proposes novel architecture: VHRED. Latent Variable Hierarchical Recurrent Encoder-Decoder
Compares favorably against LSTM and HRED.



A Neural Conversation Model
Vinyals, Le 2015. Uses LSTM RNNs to generate conversational responses within the seq2seq framework. Seq2seq was originally designed for machine translation; it "translates" a single sentence of up to around 79 words into a single-sentence response and has no memory of previous dialog exchanges. Used in Google's Smart Reply feature for Inbox.

Incorporating Copying Mechanism in Sequence-to-Sequence Learning
Gu et al. 2016 Proposes CopyNet, builds on seq2seq.

A Persona-Based Neural Conversation Model
Li et al. 2016 Proposes persona-based models for handling the issue of speaker consistency in neural response generation. Builds on seq2seq.

Deep Reinforcement Learning for Dialogue Generation
Li et al. 2016. Uses reinforcement learning to generate diverse responses. Trains two agents to chat with each other. Builds on seq2seq.



Deep learning for chatbots
Article summary of state of the art, and challenges for chatbots.
Deep learning for chatbots. part 2
Implements a retrieval-based dialog agent using a dual encoder LSTM with TensorFlow, based on the Ubuntu dataset [paper]; includes source code.

ParlAI A framework for training and evaluating AI models on a variety of openly available dialog datasets. Released by Facebook.

Memory and Attention Models

Attention mechanisms allow the network to refer back to the input sequence, instead of forcing it to encode all information into one fixed-length vector. - Attention and Memory in Deep Learning and NLP

Memory Networks Weston et al. 2014, and End-To-End Memory Networks Sukhbaatar et al. 2015.
Memory networks are implemented in MemNN. Attempts to solve the task of reasoning with attention and memory.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
Weston 2015. Classifies QA tasks like single factoid, yes/no etc. Extends memory networks.
Evaluating prerequisite qualities for learning end to end dialog systems
Dodge et al. 2015. Tests Memory Networks on 4 tasks, including a Reddit dialog task.
See Jason Weston lecture on MemNN

Neural Turing Machines
Graves, Wayne, Danihelka 2014.
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples."
Olah and Carter blog on NTM

Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets
Joulin, Mikolov 2015. Stack RNN source code and blog post

Reasoning, Attention and Memory RAM workshop at NIPS 2015. slides included
