Natural Language 13: Stop words with NLTK

https://www.pythonprogramming.net/stop-words-nltk-tutorial/?completed=/tokenizing-words-sentences-nltk-tutorial/

Stop words with NLTK

The idea of Natural Language Processing is to do some form of
analysis, or processing, where the machine can understand, at least to
some level, what the text means, says, or implies.

This is an obviously massive challenge, but there are steps to
doing it that anyone can follow. The main idea, however, is that
computers simply do not, and will not, ever understand words directly.
Humans don't either *shocker*. In humans, memory is stored as
electrical signals in the brain, in the form of neural groups that fire
in patterns. Much about the brain remains unknown, but the more we
break it down into its basic elements, the more basic those elements
turn out to be. Well, it turns out computers store information in a
very similar way! We need to get as close to that as possible if we're
going to mimic how humans read and understand text. Generally,
computers use numbers for everything; even in programming we work
directly with binary signals (True or False, which translate to 1 or 0,
originating from the presence of an electrical signal (True, 1) or its
absence (False, 0)). To do this, we need a way to convert words to
numeric values or signal patterns. The process of converting data into
something a computer can understand is referred to as
"pre-processing." One of the major forms of pre-processing is
filtering out useless data. In natural language processing, useless
words (data) are referred to as stop words.

Immediately, we can recognize that some words carry more meaning
than others, and that some words are just plain useless filler. We use
them in the English language, for example, to sort of "fluff" up a
sentence so it does not sound so strange. One of the most common
unofficial useless words is "umm." People stuff in "umm" frequently,
some more than others. This word means nothing, unless of course we're
searching for someone who is perhaps lacking confidence, confused, or
hasn't practiced much speaking. We all do it; you can hear me saying
"umm" or "uhh" in the videos plenty of ...uh... times. For most
analysis, these words are useless.

We would not want these words taking up space in our database, or
taking up valuable processing time. As such, we call these words "stop
words" because they are useless, and we wish to do nothing with them.
Another version of the term "stop words" can be more literal: Words we
stop on.

For example, you may wish to cease analysis immediately if you
detect words that are commonly used sarcastically. Sarcastic words or
phrases will vary by lexicon and corpus. For now, we'll consider stop
words to be words that simply carry no meaning, and we want to remove
them.

You can do this easily by storing a list of words that you consider
to be stop words. NLTK ships with a set of words that it considers to
be stop words; you can access it via the NLTK corpus with the import
below. (If you have not downloaded the NLTK corpora yet, you may first
need to run nltk.download('stopwords').)

from nltk.corpus import stopwords

Here is the list:

>>> set(stopwords.words('english'))

{'ourselves', 'hers', 'between', 'yourself', 'but', 'again', 'there',
'about', 'once', 'during', 'out', 'very', 'having', 'with', 'they',
'own', 'an', 'be', 'some', 'for', 'do', 'its', 'yours', 'such', 'into',
'of', 'most', 'itself', 'other', 'off', 'is', 's', 'am', 'or', 'who',
'as', 'from', 'him', 'each', 'the', 'themselves', 'until', 'below',
'are', 'we', 'these', 'your', 'his', 'through', 'don', 'nor', 'me',
'were', 'her', 'more', 'himself', 'this', 'down', 'should', 'our',
'their', 'while', 'above', 'both', 'up', 'to', 'ours', 'had', 'she',
'all', 'no', 'when', 'at', 'any', 'before', 'them', 'same', 'and',
'been', 'have', 'in', 'will', 'on', 'does', 'yourselves', 'then',
'that', 'because', 'what', 'over', 'why', 'so', 'can', 'did', 'not',
'now', 'under', 'he', 'you', 'herself', 'has', 'just', 'where', 'too',
'only', 'myself', 'which', 'those', 'i', 'after', 'few', 'whom', 't',
'being', 'if', 'theirs', 'my', 'against', 'a', 'by', 'doing', 'it',
'how', 'further', 'was', 'here', 'than'}
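Note that everything in this list is lowercase. Since it is a plain Python set, you can also extend it with your own domain-specific filler words, such as the "umm" and "uhh" from earlier. The sketch below uses a small hand-picked sample set as a stand-in for the full stopwords.words('english') list, so it runs without NLTK:

```python
# Sketch: extending a stop-word set with custom filler words.
# base_stop_words is a small stand-in for the full NLTK list
# returned by stopwords.words('english').
base_stop_words = {"is", "a", "the", "off", "this"}

# union() returns a new set; the original set is left untouched.
custom_stop_words = base_stop_words.union({"umm", "uhh"})

sentence = ["umm", "this", "is", "uhh", "a", "test"]
filtered = [w for w in sentence if w not in custom_stop_words]
print(filtered)  # ['test']
```

The same union works against the real NLTK set once the corpus is downloaded.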

Here is how you might use the stop_words set to remove the stop words from your text:

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

example_sent = "This is a sample sentence, showing off the stop words filtration."

stop_words = set(stopwords.words('english'))

word_tokens = word_tokenize(example_sent)

# One-line version with a list comprehension:
filtered_sentence = [w for w in word_tokens if w not in stop_words]

# Equivalent expanded version with a plain loop:
filtered_sentence = []
for w in word_tokens:
    if w not in stop_words:
        filtered_sentence.append(w)

print(word_tokens)
print(filtered_sentence)

Our output here:
['This', 'is', 'a', 'sample', 'sentence', ',', 'showing', 'off', 'the', 'stop', 'words', 'filtration', '.']

['This', 'sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']
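Notice that "This" survived the filter: the NLTK stop words are all lowercase, and set membership is case-sensitive. A common fix is to lowercase each token before the comparison. Here is a minimal sketch, using a small hand-picked stand-in for the NLTK stop-word set so it runs without NLTK:

```python
# Sketch: case-insensitive stop-word filtering.
# A small stand-in for set(stopwords.words('english')), all lowercase.
stop_words = {"this", "is", "a", "the", "off"}

word_tokens = ["This", "is", "a", "sample", "sentence"]

# Case-sensitive filtering misses capitalized stop words like "This":
case_sensitive = [w for w in word_tokens if w not in stop_words]

# Lowercasing each token before the membership test catches them:
case_insensitive = [w for w in word_tokens if w.lower() not in stop_words]

print(case_sensitive)    # ['This', 'sample', 'sentence']
print(case_insensitive)  # ['sample', 'sentence']
```

Whether you want this behavior depends on the analysis; lowercasing also discards capitalization information that some tasks (like named entity recognition, covered later) rely on.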

Our database thanks us. Another form of data pre-processing is "stemming," which is what we're going to be talking about next.

