Spider example: crawling 1,000 Python-related Baidu Baike entries

Scheduler (spider_main.py) - drives the crawl loop and wires the other four components together:

import url_manager,html_downloader,html_parser,html_outputer

class SpiderMain(object):
    """docstring for SpiderMain"""

    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self,root_url):
        count = 1
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                html_cont = self.downloader.download(new_url)
                print('craw %d : %s' % (count, new_url))

                new_urls,new_data = self.parser.parse(new_url,html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 1000:
                    break
                count = count + 1
            except Exception as e:
                print('craw failed:', e)
        self.outputer.output_html()

if __name__=="__main__":
    root_url = "https://baike.baidu.com/item/Python"
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
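The scheduler expects the four helper classes to live in sibling modules on the import path. A minimal file layout, with names inferred from the import statements above, would be:

spider_main.py       # the scheduler above; run with: python spider_main.py
url_manager.py       # UrlManager
html_downloader.py   # HtmlDownloader
html_parser.py       # HtmlParser
html_outputer.py     # HtmlOutputer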

URL manager (url_manager.py) - keeps separate sets of pending and already-crawled URLs so no page is visited twice:

class UrlManager(object):
    """docstring for UrlManager"""
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    def add_new_url(self,url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self,urls):
        if urls is None or len(urls)==0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
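For illustration only (this snippet is not part of the original project), the de-duplication behaviour can be checked like this:

manager = UrlManager()
manager.add_new_url("https://baike.baidu.com/item/Python")
manager.add_new_url("https://baike.baidu.com/item/Python")   # already pending, ignored
url = manager.get_new_url()      # pops the URL and records it in old_urls
manager.add_new_url(url)         # already crawled, ignored
print(manager.has_new_url())     # False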

HTML downloader (html_downloader.py) - fetches a page with requests and returns its text:

import requests
import requests.packages.urllib3.util.ssl_
# Loosen urllib3's default cipher list; without this the HTTPS handshake
# with baike.baidu.com can fail in some environments.
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'ALL'

class HtmlDownloader(object):
    """docstring for HtmlDownloader"""

    def download(self,url):
        if url is None:
            return None
        #f.write("In downloader,url is %s" % (url))
        #print("In downloader,url is %s" % (url))
        response = requests.get(url)
        response.encoding = "utf-8"
        #f.write("In downloader")
        #print("In downloader,res is %s" % (response.status_code))
        if response.status_code != 200:
            return None

        return response.text
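Baidu Baike may reject requests that carry no browser-like headers. A slightly more defensive variant of download(), sketched here with an illustrative User-Agent string and a 10-second timeout (neither is part of the original code), would be:

    def download(self, url):
        if url is None:
            return None
        headers = {
            # any mainstream browser UA string works; this one is only an example
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
        }
        try:
            response = requests.get(url, headers=headers, timeout=10)
        except requests.RequestException:
            # network error or timeout: treat the page as unavailable
            return None
        response.encoding = 'utf-8'
        if response.status_code != 200:
            return None
        return response.text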

HTML parser (html_parser.py) - extracts new /item/ links plus the entry title and summary with BeautifulSoup:

from bs4 import BeautifulSoup
import re
import urllib.parse

class HtmlParser(object):
    """docstring for HtmlParser"""

    def _get_new_urls(self,page_url,soup):
        new_urls = set()
        links = soup.find_all('a', href=re.compile(r"/item/"))
        for link in links:
            new_url = link['href']
            new_full_url = urllib.parse.urljoin(page_url,new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self,page_url,soup):
        res_data = {}

        #url
        res_data['url'] = page_url

        #<dd class="lemmaWgt-lemmaTitle-title">  <h1 >Python</h1>

        title_node = soup.find('dd', class_='lemmaWgt-lemmaTitle-title').find("h1")
        res_data['title'] = title_node.get_text()

        #div class="lemma-summary" label-module="lemmaSummary"
        summary_node = soup.find('div', class_='lemma-summary')
        res_data['summary'] = summary_node.get_text()

        return res_data

    def parse(self,page_url,html_cont):
        #print("in parse")
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont,‘html.parser‘)
        new_urls = self._get_new_urls(page_url,soup)
        new_data = self._get_new_data(page_url,soup)
        return new_urls,new_data
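Not every Baike page contains both the title <dd> and the lemma-summary <div>; when find() returns None, get_text() raises AttributeError and the whole page is skipped by the scheduler's except clause. A more tolerant _get_new_data(), sketched under the same page-structure assumptions, could fall back to empty strings instead:

    def _get_new_data(self, page_url, soup):
        res_data = {'url': page_url, 'title': '', 'summary': ''}
        # title lives in <dd class="lemmaWgt-lemmaTitle-title"><h1>...</h1>
        title_node = soup.find('dd', class_='lemmaWgt-lemmaTitle-title')
        if title_node is not None and title_node.find('h1') is not None:
            res_data['title'] = title_node.find('h1').get_text()
        # summary lives in <div class="lemma-summary">
        summary_node = soup.find('div', class_='lemma-summary')
        if summary_node is not None:
            res_data['summary'] = summary_node.get_text()
        return res_data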

Outputer (html_outputer.py) - collects the parsed entries and writes them to output.html as a simple table:

class HtmlOutputer(object):
    """docstring for HtmlOutputer"""

    def __init__(self):
        self.datas = []

    def collect_data(self,data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        fout = open('output.html', 'w', encoding='utf-8')

        fout.write("<html>")
        fout.write("<body>")
        fout.write("<table>")

        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>"%data[‘url‘])
            fout.write("<td>%s</td>"%data[‘title‘])
            fout.write("<td>%s</td>"%data[‘summary‘])
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
