AAAI 2018 Analysis

word embedding

Learning Sentiment-Specific Word Embedding via Global Sentiment Representation

Context-based word embedding learning approaches can model rich semantic and syntactic information.

However, such approaches are problematic for sentiment analysis because words with similar contexts but opposite sentiment polarities, such as good and bad, are mapped to nearby word vectors in the embedding space.

Recently, some sentiment embedding learning methods have been proposed, but most of them are designed to work well on sentence-level texts.

Directly applying those models to document-level texts often leads to unsatisfactory results.

To address this issue, we present a sentiment-specific word embedding learning architecture that utilizes local context information as well as a global sentiment representation.

The architecture is applicable for both sentence-level and document-level texts.

We take the global sentiment representation to be a simple average of the word embeddings in the text, and use a corruption strategy as a sentiment-dependent regularizer.
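
To make those two ingredients concrete, here is a minimal sketch, with hypothetical names, that averages the word vectors of a text and randomly drops words before averaging; the paper's actual corruption scheme is sentiment-dependent, which this toy version does not model:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_sentiment_rep(word_ids, embeddings, corruption_p=0.3):
    # Average the embeddings of the words in a text, after randomly
    # dropping each word with probability corruption_p (the corruption).
    vecs = embeddings[word_ids]                       # (n_words, dim)
    keep = rng.random(len(word_ids)) >= corruption_p  # Bernoulli keep-mask
    if not keep.any():                                # avoid an empty average
        keep[rng.integers(len(word_ids))] = True
    return vecs[keep].mean(axis=0)                    # (dim,)

# Toy usage: a 10-word vocabulary with 4-dimensional embeddings.
emb = rng.normal(size=(10, 4))
doc = np.array([1, 3, 3, 7, 2])
print(global_sentiment_rep(doc, emb).shape)  # (4,)
```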

Extensive experiments conducted on several benchmark datasets demonstrate that the proposed architecture outperforms the state-of-the-art methods for sentiment classification.

Using k-Way Co-Occurrences for Learning Word Embeddings

Co-occurrences between two words provide useful insights into the semantics of those words. Consequently, numerous prior works on word embedding learning have used co-occurrences between two words as the training signal. However, in natural language texts it is common for multiple words to be related and co-occurring in the same context. We extend the notion of co-occurrences to cover k(≥2)-way co-occurrences among a set of k words. Specifically, we prove a theoretical relationship between the joint probability of k(≥2) words and the sum of the l_2 norms of their embeddings. Next, we propose a learning objective, motivated by our theoretical result, that utilizes k-way co-occurrences for learning word embeddings. Our experimental results show that the derived theoretical relationship does indeed hold empirically and that, despite data sparsity, for some smaller values of k(≤5), k-way embeddings perform comparably to or better than 2-way embeddings on a range of tasks.
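
As a rough illustration of what a k-way co-occurrence is, the following sketch counts every size-k set of distinct words that falls inside a sliding window; the windowing scheme is illustrative, and the paper's exact extraction may differ:

```python
from collections import Counter
from itertools import combinations

def k_way_cooccurrences(tokens, k=3, window=4):
    # Count k-way co-occurrences: each size-k set of distinct words
    # appearing together inside a sliding window of the given width.
    counts = Counter()
    for i in range(len(tokens) - window + 1):
        span = sorted(set(tokens[i:i + window]))   # distinct words in window
        for combo in combinations(span, k):
            counts[combo] += 1
    return counts

tokens = "the cat sat on the mat near the cat".split()
for combo, c in k_way_cooccurrences(tokens).most_common(3):
    print(combo, c)
```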

Semantic Structure-Based Word Embedding by Incorporating Concept Convergence and Word Divergence

Representing the semantics of words is a fundamental task in text processing.

Several research studies have shown that text and knowledge bases (KBs) are complementary sources for word embedding learning.

Most existing methods consider only relationships within word pairs when using KBs.

We argue that the structural information of well-organized words within the KBs can convey more effective and stable knowledge for capturing the semantics of words.

In this paper, we propose a semantic structure-based word embedding method, and introduce concept convergence and word divergence to reveal semantic structures in the word embedding learning process.
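
A minimal sketch of what the two terms might look like as regularizers, assuming concept groups come from WordNet synsets; the names and the hinge form are illustrative, not the paper's exact objective:

```python
import numpy as np

def structure_loss(emb, concept_groups, margin=1.0):
    # Concept convergence: pull words under one concept toward the
    # concept centroid. Word divergence: push distinct words under
    # that concept at least `margin` apart (hinge on distance).
    converge, diverge = 0.0, 0.0
    for word_ids in concept_groups:    # one group of word ids per concept
        vecs = emb[word_ids]
        centroid = vecs.mean(axis=0)
        converge += ((vecs - centroid) ** 2).sum()
        for i in range(len(word_ids)):
            for j in range(i + 1, len(word_ids)):
                d = np.linalg.norm(vecs[i] - vecs[j])
                diverge += max(0.0, margin - d) ** 2
    return converge + diverge

emb = np.random.default_rng(1).normal(size=(6, 4))
print(structure_loss(emb, [[0, 1, 2], [3, 4]]))
```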

To assess the effectiveness of our method, we use WordNet for training and conduct extensive experiments on word similarity, word analogy, text classification and query expansion.

The experimental results show that our method outperforms state-of-the-art methods, including the methods trained solely on the corpus, and others trained on the corpus and the KBs.

Spectral Word Embedding with Negative Sampling

In this work, we investigate word embedding algorithms in the context of natural language processing. In particular, we examine the notion of "negative examples", the unobserved or insignificant word-context co-occurrences, in spectral methods. We provide a new formulation of the word embedding problem by proposing a new, intuitive objective function that perfectly justifies the use of negative examples. In fact, our algorithm not only learns from the important word-context co-occurrences, but also from the abundance of unobserved or insignificant co-occurrences, to improve the distribution of words in the latent embedded space. We analyze the algorithm theoretically and provide an optimal solution to the problem using spectral analysis. We have trained various word embedding algorithms on Wikipedia articles comprising 2.1 billion tokens and show that negative sampling can boost the quality of spectral methods. Our algorithm provides results as good as the state of the art, but in a much faster and more efficient way.
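
For context, a standard spectral construction in which negative examples enter as a shift of the PMI matrix is the SVD of shifted positive PMI (Levy and Goldberg, 2014); the paper proposes a different objective, but the sketch below shows the general flavor:

```python
import numpy as np

def shifted_ppmi_embeddings(cooc, dim=2, neg=5):
    # SVD of the shifted positive PMI matrix; the shift log(neg)
    # plays the role of `neg` negative examples per observed pair.
    total = cooc.sum()
    pw = cooc.sum(axis=1, keepdims=True) / total   # P(word)
    pc = cooc.sum(axis=0, keepdims=True) / total   # P(context)
    with np.errstate(divide="ignore"):
        pmi = np.log((cooc / total) / (pw * pc))   # -inf where cooc == 0
    sppmi = np.maximum(pmi - np.log(neg), 0.0)     # shift, then clip at 0
    u, s, _ = np.linalg.svd(sppmi)
    return u[:, :dim] * np.sqrt(s[:dim])           # one row per word

cooc = np.array([[10., 2., 0.], [2., 8., 1.], [0., 1., 6.]])
print(shifted_ppmi_embeddings(cooc).shape)  # (3, 2)
```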

Chinese LIWC Lexicon Expansion via Hierarchical Classification of Word Embeddings with Sememe Attention

Linguistic Inquiry and Word Count (LIWC) is a word-counting software tool that has been used for quantitative text analysis in many fields.

Due to its success and popularity, the core lexicon has been translated into Chinese and many other languages.

However, the lexicon contains only several thousand words, which is deficient compared with the number of common words in Chinese.

Current approaches typically expand the lexicon manually, but this takes too much time and requires linguistic experts.

To address this issue, we propose to expand the LIWC lexicon automatically.

Specifically, we consider it as a hierarchical classification problem and utilize the Sequence-to-Sequence model to classify words in the lexicon.

Moreover, we use sememe information with an attention mechanism to capture the exact meanings of a word, so that the expanded lexicon is more precise and comprehensive.
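
To illustrate how hierarchical classification can be cast as sequence decoding, here is a toy decoder that emits a category path level by level, conditioned on a word vector; the architecture and names are hypothetical, and the paper's model additionally attends over the word's sememes:

```python
import torch
import torch.nn as nn

class PathDecoder(nn.Module):
    # Decodes a word's category path (root -> ... -> leaf) one level
    # at a time, conditioned on the word's embedding.
    def __init__(self, emb_dim, n_labels, hidden=64):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, hidden)
        self.init_h = nn.Linear(emb_dim, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, word_vec, max_depth=3, bos=0):
        h = torch.tanh(self.init_h(word_vec)).unsqueeze(0)   # (1, B, H)
        label = torch.full((word_vec.size(0),), bos, dtype=torch.long)
        path = []
        for _ in range(max_depth):                           # greedy decoding
            x = self.label_emb(label).unsqueeze(1)           # (B, 1, H)
            o, h = self.gru(x, h)
            label = self.out(o.squeeze(1)).argmax(dim=-1)    # next level's label
            path.append(label)
        return torch.stack(path, dim=1)                      # (B, max_depth)

decoder = PathDecoder(emb_dim=50, n_labels=20)
print(decoder(torch.randn(4, 50)).shape)  # torch.Size([4, 3])
```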

The experimental results show that our model has a better understanding of word meanings with the help of sememes and achieves significant and consistent improvements compared with the state-of-the-art methods.

The source code of this paper can be obtained from https://github.com/thunlp/Auto_CLIWC.

Training and Evaluating Improved Dependency-Based Word Embeddings

Word embeddings have been widely used in many natural language processing tasks. In this paper, we focus on learning word embeddings through selective higher-order relationships in sentences, making the embeddings less sensitive to local context and more accurate in capturing semantic compositionality. We present a novel multi-order dependency-based strategy to compose and represent the context under several essential constraints. To realize selective learning from word contexts, we automatically assign strengths to the different dependencies between co-occurring words during stochastic gradient descent. We evaluate and analyze our proposed approach on several direct and indirect tasks for word embeddings. Experimental results demonstrate that our embeddings are competitive with or better than state-of-the-art methods and significantly outperform other methods in terms of context stability. The output weights and dependency representations obtained by our embedding model conform to most linguistic characteristics and are valuable for many downstream tasks.
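
A small sketch of dependency-based context extraction in the spirit of this line of work, walking up to second-order (grandparent) arcs; the input format and path encoding are illustrative, and the paper additionally learns per-dependency strengths during SGD:

```python
def dependency_contexts(tokens, heads, labels, max_order=2):
    # heads[i] is the index of token i's head (-1 for the root);
    # labels[i] is the dependency relation of token i to its head.
    # A word's contexts are the words reached by walking up to
    # `max_order` arcs, annotated with the arc path.
    pairs = []
    for i, word in enumerate(tokens):
        j, path = i, []
        for _ in range(max_order):
            if heads[j] == -1:
                break
            path.append(labels[j])
            j = heads[j]
            pairs.append((word, tokens[j] + "/" + "|".join(path)))
    return pairs

# "the cat chased mice": chased is the root; the <-det- cat <-nsubj- chased.
tokens = ["the", "cat", "chased", "mice"]
heads  = [1, 2, -1, 2]
labels = ["det", "nsubj", "root", "dobj"]
for pair in dependency_contexts(tokens, heads, labels):
    print(pair)
```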

word representation

Learning Multimodal Word Representation via Dynamic Fusion Methods

Multimodal models have been proven to outperform text-based models at learning semantic word representations. Almost all previous multimodal models treat the representations from different modalities equally. However, it is obvious that information from different modalities contributes differently to the meaning of words. This motivates us to build a multimodal model that can dynamically fuse the semantic representations from different modalities according to different types of words. To that end, we propose three novel dynamic fusion methods that assign importance weights to each modality, where the weights are learned under the weak supervision of word association pairs. Extensive experiments demonstrate that the proposed methods outperform strong unimodal baselines and state-of-the-art multimodal models.
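
One minimal form such a dynamic fusion could take is a learned scalar gate over the two modalities; the function and parameter names below are placeholders, not the paper's exact architecture:

```python
import numpy as np

def dynamic_fusion(text_vec, image_vec, W, b):
    # A learned scalar gate decides, per word, how much the textual
    # modality should dominate: alpha = sigmoid(W [t; v] + b).
    x = np.concatenate([text_vec, image_vec])
    alpha = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return alpha * text_vec + (1.0 - alpha) * image_vec

rng = np.random.default_rng(2)
t, v = rng.normal(size=50), rng.normal(size=50)
W, b = rng.normal(size=100), 0.0          # placeholders for learned parameters
print(dynamic_fusion(t, v, W, b).shape)   # (50,)
```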

Learning Multi-Modal Word Representation Grounded in Visual Context

Representing the semantics of words is a long-standing problem for the natural language processing community.

Most methods compute word semantics given their textual context in large corpora.

More recently, researchers attempted to integrate perceptual and visual features.

Most of these works consider the visual appearance of objects to enhance word representations but they ignore the visual environment and context in which objects appear.

We propose to unify text-based techniques with vision-based techniques by simultaneously leveraging textual and visual context to learn multimodal word embeddings.

We explore various choices for what can serve as a visual context and present an end-to-end method to integrate visual context elements in a multimodal skip-gram model.
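
A hedged sketch of how a visual context element might enter a skip-gram-style objective, treating the image feature as one more context to score; this illustrates the general idea only, not the paper's exact loss:

```python
import numpy as np

def multimodal_sg_term(w_vec, ctx_vecs, vis_vec, lam=0.5):
    # Skip-gram log-sigmoid scores against the textual context words,
    # plus one extra term that treats the visual-context feature as
    # another context to score; returns a loss to minimize.
    def log_sigmoid(x):
        return -np.logaddexp(0.0, -x)   # log(sigmoid(x)), numerically stable
    text_term = sum(log_sigmoid(w_vec @ c) for c in ctx_vecs)
    visual_term = log_sigmoid(w_vec @ vis_vec)
    return -(text_term + lam * visual_term)

rng = np.random.default_rng(3)
w = rng.normal(size=20)
ctx = [rng.normal(size=20) for _ in range(4)]
vis = rng.normal(size=20)
print(multimodal_sg_term(w, ctx, vis))
```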

We provide experiments and extensive analysis of the obtained results.

Original post: https://www.cnblogs.com/fengyubo/p/11067707.html
