Natural Language 15: Part of Speech Tagging with NLTK

https://www.pythonprogramming.net/part-of-speech-tagging-nltk-tutorial/?completed=/stemming-nltk-tutorial/

One of the more powerful aspects of the NLTK module is the Part of Speech tagging that it can do for you. This means labeling words in a sentence as nouns, adjectives, verbs, etc. Even more impressive, it also labels by tense, and more. Here's a list of the tags, what they mean, and some examples:

POS tag list:

CC	coordinating conjunction
CD	cardinal digit
DT	determiner
EX	existential there (like: "there is" ... think of it like "there exists")
FW	foreign word
IN	preposition/subordinating conjunction
JJ	adjective	'big'
JJR	adjective, comparative	'bigger'
JJS	adjective, superlative	'biggest'
LS	list marker	1)
MD	modal	could, will
NN	noun, singular	'desk'
NNS	noun, plural	'desks'
NNP	proper noun, singular	'Harrison'
NNPS	proper noun, plural	'Americans'
PDT	predeterminer	'all the kids'
POS	possessive ending	parent's
PRP	personal pronoun	I, he, she
PRP$	possessive pronoun	my, his, hers
RB	adverb	very, silently
RBR	adverb, comparative	better
RBS	adverb, superlative	best
RP	particle	give up
TO	to	go 'to' the store
UH	interjection	errrrrrrrm
VB	verb, base form	take
VBD	verb, past tense	took
VBG	verb, gerund/present participle	taking
VBN	verb, past participle	taken
VBP	verb, sing. present, non-3rd	take
VBZ	verb, 3rd person sing. present	takes
WDT	wh-determiner	which
WP	wh-pronoun	who, what
WP$	possessive wh-pronoun	whose
WRB	wh-adverb	where, when
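
As an informal illustration of how these tags pair with words, here is a small pure-Python sketch. Note the assumptions: the lookup table covers only a handful of the tags above, and the sample sentence is hand-tagged here the way nltk.pos_tag would typically tag it, so no NLTK call or downloaded model is needed:

```python
# A small lookup table for a few of the Penn Treebank tags listed above,
# handy for printing human-readable labels next to pos_tag output.
PENN_TAGS = {
    "DT": "determiner",
    "JJ": "adjective",
    "NN": "noun, singular",
    "NNS": "noun, plural",
    "VBD": "verb, past tense",
}

def describe(tagged):
    """Attach a description to each (word, tag) pair; unknown tags get '?'."""
    return [(word, tag, PENN_TAGS.get(tag, "?")) for word, tag in tagged]

# Hand-tagged sample (illustrative; nltk.pos_tag would produce these pairs):
sample = [("The", "DT"), ("quick", "JJ"), ("fox", "NN"), ("jumped", "VBD")]
print(describe(sample))
```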

How might we use this? While we're at it, we're going to cover a new sentence tokenizer, called the PunktSentenceTokenizer. This tokenizer is capable of unsupervised machine learning, so you can actually train it on any body of text that you use. Note that the corpus and models used below are downloadable NLTK data; if you haven't fetched them yet, run nltk.download('state_union'), nltk.download('punkt'), and nltk.download('averaged_perceptron_tagger') first. With that in place, let's get some imports out of the way that we're going to use:

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

Now, let's create our training and testing data:

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

One is George W. Bush's State of the Union address from 2005, and the other is his address from 2006.

Next, we can train the Punkt tokenizer on the 2005 address:

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)

Then we can actually tokenize, using:

tokenized = custom_sent_tokenizer.tokenize(sample_text)

Now we can finish up this part of speech tagging script by creating a function that will run through and tag all of the parts of speech per sentence like so:

def process_content():
    try:
        # Tag the first five sentences of the 2006 address
        for i in tokenized[:5]:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            print(tagged)

    except Exception as e:
        print(str(e))

process_content()

The output should be a list of tuples, where the first element in the tuple is the word, and the second is the part of speech tag. It should look like:

[('PRESIDENT', 'NNP'), ('GEORGE', 'NNP'), ('W.', 'NNP'), ('BUSH', 'NNP'), ("'S", 'POS'), ('ADDRESS', 'NNP'), ('BEFORE', 'NNP'), ('A', 'NNP'), ('JOINT', 'NNP'), ('SESSION', 'NNP'), ('OF', 'NNP'), ('THE', 'NNP'), ('CONGRESS', 'NNP'), ('ON', 'NNP'), ('THE', 'NNP'), ('STATE', 'NNP'), ('OF', 'NNP'), ('THE', 'NNP'), ('UNION', 'NNP'), ('January', 'NNP'), ('31', 'CD'), (',', ','), ('2006', 'CD'), ('THE', 'DT'), ('PRESIDENT', 'NNP'), (':', ':'), ('Thank', 'NNP'), ('you', 'PRP'), ('all', 'DT'), ('.', '.')]
[('Mr.', 'NNP'), ('Speaker', 'NNP'), (',', ','), ('Vice', 'NNP'), ('President', 'NNP'), ('Cheney', 'NNP'), (',', ','), ('members', 'NNS'), ('of', 'IN'), ('Congress', 'NNP'), (',', ','), ('members', 'NNS'), ('of', 'IN'), ('the', 'DT'), ('Supreme', 'NNP'), ('Court', 'NNP'), ('and', 'CC'), ('diplomatic', 'JJ'), ('corps', 'NNS'), (',', ','), ('distinguished', 'VBD'), ('guests', 'NNS'), (',', ','), ('and', 'CC'), ('fellow', 'JJ'), ('citizens', 'NNS'), (':', ':'), ('Today', 'NN'), ('our', 'PRP$'), ('nation', 'NN'), ('lost', 'VBD'), ('a', 'DT'), ('beloved', 'VBN'), (',', ','), ('graceful', 'JJ'), (',', ','), ('courageous', 'JJ'), ('woman', 'NN'), ('who', 'WP'), ('called', 'VBN'), ('America', 'NNP'), ('to', 'TO'), ('its', 'PRP$'), ('founding', 'NN'), ('ideals', 'NNS'), ('and', 'CC'), ('carried', 'VBD'), ('on', 'IN'), ('a', 'DT'), ('noble', 'JJ'), ('dream', 'NN'), ('.', '.')]
[('Tonight', 'NNP'), ('we', 'PRP'), ('are', 'VBP'), ('comforted', 'VBN'), ('by', 'IN'), ('the', 'DT'), ('hope', 'NN'), ('of', 'IN'), ('a', 'DT'), ('glad', 'NN'), ('reunion', 'NN'), ('with', 'IN'), ('the', 'DT'), ('husband', 'NN'), ('who', 'WP'), ('was', 'VBD'), ('taken', 'VBN'), ('so', 'RB'), ('long', 'RB'), ('ago', 'RB'), (',', ','), ('and', 'CC'), ('we', 'PRP'), ('are', 'VBP'), ('grateful', 'JJ'), ('for', 'IN'), ('the', 'DT'), ('good', 'NN'), ('life', 'NN'), ('of', 'IN'), ('Coretta', 'NNP'), ('Scott', 'NNP'), ('King', 'NNP'), ('.', '.')]
[('(', 'NN'), ('Applause', 'NNP'), ('.', '.'), (')', ':')]
[('President', 'NNP'), ('George', 'NNP'), ('W.', 'NNP'), ('Bush', 'NNP'), ('reacts', 'VBZ'), ('to', 'TO'), ('applause', 'VB'), ('during', 'IN'), ('his', 'PRP$'), ('State', 'NNP'), ('of', 'IN'), ('the', 'DT'), ('Union', 'NNP'), ('Address', 'NNP'), ('at', 'IN'), ('the', 'DT'), ('Capitol', 'NNP'), (',', ','), ('Tuesday', 'NNP'), (',', ','), ('Jan', 'NNP'), ('.', '.')]
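
Once each sentence comes back as a list of (word, tag) tuples like the output above, filtering by tag is a one-liner. A minimal sketch (the helper name and the hand-picked sample tuples are illustrative, not part of the tutorial's script):

```python
def words_with_tag(tagged, prefix):
    """Return the words whose tag starts with the given prefix (e.g. 'NN', 'VB')."""
    return [word for word, tag in tagged if tag.startswith(prefix)]

# A few tuples lifted from the tagged output above:
sample = [("Coretta", "NNP"), ("Scott", "NNP"), ("King", "NNP"),
          ("called", "VBD"), ("America", "NNP"), ("to", "TO"),
          ("its", "PRP$"), ("ideals", "NNS")]

print(words_with_tag(sample, "NN"))  # all noun forms (NN, NNS, NNP, NNPS)
print(words_with_tag(sample, "VB"))  # all verb forms (VB, VBD, VBG, ...)
```

Because the Penn Treebank tags are prefixed consistently, matching on "NN" or "VB" collects every noun or verb variant at once.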

At this point, we can begin to derive meaning, but there is still some work to do. The next topic that we're going to cover is chunking, which is where we group words, based on their parts of speech, into hopefully meaningful groups.

Date: 2024-10-13 03:21:33
