Natural Language Processing 16: Chunking with NLTK

Now that we know the parts of speech, we can do what is called
chunking: grouping words into hopefully meaningful chunks. One of the
main goals of chunking is to group words into what are known as "noun
phrases." These are phrases of one or more words that contain a noun,
maybe some descriptive words, maybe a verb, and maybe something like an
adverb. The idea is to group nouns together with the words that relate
to them.

In order to chunk, we combine the part-of-speech tags with regular expressions. From regular expressions, we will mainly use the following:

+ = match 1 or more repetitions
? = match 0 or 1 repetitions
* = match 0 or more repetitions
. = any character except a newline

See a regular-expressions tutorial if you need a refresher. The last thing to note is that part-of-speech tags are denoted with "<" and ">", and we can also place regular expressions within the tags themselves, to account for things like "all nouns" (<N.*>).
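For instance, here is a minimal sketch of a grammar that uses <N.*> to chunk runs of any noun tag (the sentence is made up for illustration, not from the tutorial's corpus):

import nltk

# Hypothetical example sentence, chosen only to illustrate <N.*>.
sentence = "The little yellow dog barked at the park gate"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# <N.*>+ matches one or more of any noun tag (NN, NNS, NNP, NNPS).
parser = nltk.RegexpParser(r"""NounChunk: {<N.*>+}""")
print(parser.parse(tagged))

With the notation in hand, here is the tutorial script itself: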

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

# Train the Punkt sentence tokenizer on one State of the Union address,
# then use it to split another address into sentences.
train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)

tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            # Chunk grammar: optional adverbs and verbs, then one or
            # more proper nouns, then an optional singular noun.
            chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)
            chunked.draw()  # opens a new tree window for every sentence

    except Exception as e:
        print(str(e))

process_content()

The result is a tree diagram, drawn in a new window for each sentence, with the matched words grouped under "Chunk" nodes.

The main line here in question is:

chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""

This line, broken down:

<RB.?>* = "zero or more adverbs of any form (RB, RBR, RBS)," followed by:

<VB.?>* = "zero or more verbs of any form (VB, VBD, VBG, and so on)," followed by:

<NNP>+ = "one or more proper nouns," followed by:

<NN>? = "zero or one singular noun."
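To see this pattern in action on a single sentence, here is a quick sketch (the sentence is invented for illustration and is independent of the State of the Union corpus):

import nltk

# Invented sentence, used only to exercise the pattern above.
tagged = nltk.pos_tag(nltk.word_tokenize("Today President George W. Bush addressed Congress"))
parser = nltk.RegexpParser(r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}""")
print(parser.parse(tagged))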

Try playing around with combinations to group various instances until you feel comfortable with chunking.

Not covered in the video, but also a reasonable task, is to actually access the chunks specifically. This is something rarely talked about, but it can be an essential step depending on what you're doing. If you print the chunks out, you will see output like:

(S
  (Chunk PRESIDENT/NNP GEORGE/NNP W./NNP BUSH/NNP)
  'S/POS
  (Chunk
    ADDRESS/NNP
    BEFORE/NNP
    A/NNP
    JOINT/NNP
    SESSION/NNP
    OF/NNP
    THE/NNP
    CONGRESS/NNP
    ON/NNP
    THE/NNP
    STATE/NNP
    OF/NNP
    THE/NNP
    UNION/NNP
    January/NNP)
  31/CD
  ,/,
  2006/CD
  THE/DT
  (Chunk PRESIDENT/NNP)
  :/:
  (Chunk Thank/NNP)
  you/PRP
  all/DT
  ./.)

Cool, that helps us visually, but what if we want to access this data from our program? What is happening here is that our "chunked" variable is an NLTK tree. Each "chunk" and "non-chunk" is a "subtree" of the tree. We can access these with chunked.subtrees(). We can then iterate through the subtrees like so:

            for subtree in chunked.subtrees():
                print(subtree)

Next, we might be interested in getting just the chunks, ignoring the rest. We can use the filter parameter of the chunked.subtrees() call.

            for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
                print(subtree)

Now, we're filtering to show only the subtrees with the label "Chunk." Keep in mind, this isn't "Chunk" as in some NLTK chunk attribute; it is "Chunk" literally because that's the label we gave it here: chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""

Had we said instead something like chunkGram = r"""Pythons: {<RB.?>*<VB.?>*<NNP>+<NN>?}""", then we would filter by the label of "Pythons." The result here should be something like:

(Chunk PRESIDENT/NNP GEORGE/NNP W./NNP BUSH/NNP)
(Chunk
  ADDRESS/NNP
  BEFORE/NNP
  A/NNP
  JOINT/NNP
  SESSION/NNP
  OF/NNP
  THE/NNP
  CONGRESS/NNP
  ON/NNP
  THE/NNP
  STATE/NNP
  OF/NNP
  THE/NNP
  UNION/NNP
  January/NNP)
(Chunk PRESIDENT/NNP)
(Chunk Thank/NNP)
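If you want the chunk text itself rather than Tree objects, here is a small sketch (not part of the original tutorial): each subtree's leaves() method returns its (word, tag) pairs, so you can join just the words:

            for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
                # leaves() yields (word, tag) pairs; keep just the words.
                print(" ".join(word for word, tag in subtree.leaves()))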

Full code for this would be:

import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer

train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")

custom_sent_tokenizer = PunktSentenceTokenizer(train_text)

tokenized = custom_sent_tokenizer.tokenize(sample_text)

def process_content():
    try:
        for i in tokenized:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)

            # Print the full tree, then just the subtrees we labeled "Chunk".
            print(chunked)
            for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
                print(subtree)

            chunked.draw()  # opens a new tree window for every sentence

    except Exception as e:
        print(str(e))

process_content()

If you get particular enough, you may find that you would be better off if there were a way to chunk everything except some things. That process is known as chinking, and it's what we're going to cover next.
