Stop words with NLTK
The idea of Natural Language Processing is to do some form of
analysis, or processing, where the machine can understand, at least to
some level, what the text means, says, or implies.
This is obviously a massive challenge, but there are steps to
doing it that anyone can follow. The main idea, however, is that
computers simply do not, and will not, ever understand words directly.
Humans don't either *shocker*. In humans, memory is broken down into
electrical signals in the brain, in the form of neural groups that fire
in patterns. There is a lot about the brain that remains unknown, but
the more we break the human brain down into its basic elements, the
more basic we find those elements really are. Well, it turns out
computers store information in a very similar way!
If we're going to mimic how humans read and understand text, we need
a way to get as close to that as possible. Generally, computers use
numbers for everything, and we see this directly in programming whenever
we use binary signals: True or False translate to 1 or 0, which
originate directly from either the presence of an electrical signal
(True, 1) or its absence (False, 0). To do this, we need a way to
convert words to values, as numbers or signal patterns. The process of
converting data into something a computer can understand is referred to
as "pre-processing." One of the major forms of pre-processing is
filtering out useless data. In natural language processing, useless
words (data) are referred to as stop words.
Immediately, we can recognize for ourselves that some words carry more
meaning than others. We can also see that some words are just
plain useless filler words. We use them in the English language, for
example, to sort of "fluff" up the sentence so it is not so strange
sounding. An example of one of the most common, unofficial, useless
words is the phrase "umm." People stuff in "umm" frequently, some more
than others. This word means nothing, unless of course we're searching
for someone who is maybe lacking confidence, is confused, or hasn't
practiced much speaking. We all do it; you can hear me saying "umm" or
"uhh" in the videos plenty of ...uh... times. For most analysis, these
words are useless.
We would not want these words taking up space in our database, or
taking up valuable processing time. As such, we call these words "stop
words" because they are useless, and we wish to do nothing with them.
Another version of the term "stop words" can be more literal: Words we
stop on.
For example, you may wish to cease analysis immediately if you
detect words that are commonly used sarcastically.
Sarcastic words or phrases are going to vary by lexicon and corpus.
For now, we'll be considering stop words as words that simply contain
no meaning, and we want to remove them.
You can do this easily by storing a list of words that you
consider to be stop words. NLTK starts you off with a default set of
stop words, which you can access via the NLTK corpus with:
from nltk.corpus import stopwords
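If the stopwords corpus has not been downloaded yet, the import itself will succeed but calling stopwords.words() will raise a LookupError. A one-time download fixes that (the same goes for the punkt tokenizer models that word_tokenize relies on, covered in the previous tutorial):

import nltk
nltk.download('stopwords')   # the stop word lists
nltk.download('punkt')       # tokenizer models used by word_tokenize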
Here is the list:
>>> set(stopwords.words('english'))
{'ourselves', 'hers', 'between', 'yourself', 'but', 'again', 'there',
'about', 'once', 'during', 'out', 'very', 'having', 'with', 'they',
'own', 'an', 'be', 'some', 'for', 'do', 'its', 'yours', 'such', 'into',
'of', 'most', 'itself', 'other', 'off', 'is', 's', 'am', 'or', 'who',
'as', 'from', 'him', 'each', 'the', 'themselves', 'until', 'below',
'are', 'we', 'these', 'your', 'his', 'through', 'don', 'nor', 'me',
'were', 'her', 'more', 'himself', 'this', 'down', 'should', 'our',
'their', 'while', 'above', 'both', 'up', 'to', 'ours', 'had', 'she',
'all', 'no', 'when', 'at', 'any', 'before', 'them', 'same', 'and',
'been', 'have', 'in', 'will', 'on', 'does', 'yourselves', 'then',
'that', 'because', 'what', 'over', 'why', 'so', 'can', 'did', 'not',
'now', 'under', 'he', 'you', 'herself', 'has', 'just', 'where', 'too',
'only', 'myself', 'which', 'those', 'i', 'after', 'few', 'whom', 't',
'being', 'if', 'theirs', 'my', 'against', 'a', 'by', 'doing', 'it',
'how', 'further', 'was', 'here', 'than'}
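English is not the only list NLTK ships; the stopwords corpus bundles lists for many languages, all loaded the same way. As a quick sketch (the exact set of languages depends on the version of the NLTK data you have installed):

from nltk.corpus import stopwords

# See which languages are available in your installed corpus,
# e.g. ['arabic', 'danish', ..., 'english', ..., 'spanish', ...]
print(stopwords.fileids())

# Load any listed language with the same call used for English:
print(set(stopwords.words('spanish')))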
Here is how you might use the stop_words set to remove the stop words from your text:
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

example_sent = "This is a sample sentence, showing off the stop words filtration."

stop_words = set(stopwords.words('english'))

word_tokens = word_tokenize(example_sent)

# Filter with a one-line list comprehension:
filtered_sentence = [w for w in word_tokens if w not in stop_words]

# The same filter written out as a loop (equivalent result):
filtered_sentence = []
for w in word_tokens:
    if w not in stop_words:
        filtered_sentence.append(w)

print(word_tokens)
print(filtered_sentence)
Our output here:

['This', 'is', 'a', 'sample', 'sentence', ',', 'showing', 'off', 'the', 'stop', 'words', 'filtration', '.']
['This', 'sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']
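Note that 'This' survived the filter: the NLTK stop word list is all lowercase, and the membership test is case-sensitive. A minimal tweak (a sketch, not part of the original code) is to lowercase each token before checking it:

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

example_sent = "This is a sample sentence, showing off the stop words filtration."
stop_words = set(stopwords.words('english'))

# Compare the lowercased token against the set, but keep the original token.
filtered_sentence = [w for w in word_tokenize(example_sent)
                     if w.lower() not in stop_words]

print(filtered_sentence)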
Our database thanks us. Another form of data pre-processing is "stemming," which is what we're going to be talking about next.