34. Solving pagination in a Scrapy crawler

The main problems tackled here:

1. To turn the page you have to pick up two parameters that the page itself loads:
   '__VIEWSTATE': '{}'.format(response.meta['data']['__VIEWSTATE']),
   '__EVENTVALIDATION': '{}'.format(response.meta['data']['__EVENTVALIDATION']),

   Another thing to note is dont_filter=False:
   yield scrapy.FormRequest(url=response.url, callback=self.parse, formdata=data, method="POST", dont_filter=False)
   This works because Scrapy's duplicate filter hashes the request body as well as the URL, so POSTs with different formdata are not treated as duplicates even without dont_filter=True.
2. Dates: I crawled the data for 2008-2018.
3. The item fields were going into the database in a messy order.
4. A few issues, such as index-out-of-range errors, are not handled properly; the exceptions are simply caught and ignored.

This one is a bit more involved. First, a quick look at the site: http://www.nbzj.net/MaterialPriceList.aspx (宁波造价网, the Ningbo construction cost website). What I am after here is the material price information data. A minimal sketch of the pagination mechanism comes first, followed by the full project files.
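The list page is an ASP.NET WebForm, so every page turn is a POST back to the same URL that must carry the hidden __VIEWSTATE and __EVENTVALIDATION fields plus the pager target and page number. The following is only a minimal sketch of that pattern (the spider name is made up; the form field names mirror the ones used in the real spider below):

# -*- coding: utf-8 -*-
# Minimal sketch of ASP.NET postback pagination with Scrapy.
import scrapy

class PagerSketchSpider(scrapy.Spider):
    name = 'pager_sketch'  # illustrative name only
    start_urls = ['http://www.nbzj.net/MaterialPriceList.aspx']

    def parse(self, response):
        # The two hidden fields the server expects on every postback.
        viewstate = response.xpath('//input[@id="__VIEWSTATE"]/@value').extract_first()
        validation = response.xpath('//input[@id="__EVENTVALIDATION"]/@value').extract_first()

        # ... parse the rows of the current page here ...

        # Request page 2 by posting the hidden fields back to the same URL.
        yield scrapy.FormRequest(
            url=response.url,
            formdata={
                '__VIEWSTATE': viewstate,
                '__EVENTVALIDATION': validation,
                '__EVENTTARGET': 'ctl00$ContentPlaceContent$Pager',
                '__EVENTARGUMENT': '2',
            },
            callback=self.parse,
        )

FormRequest already defaults to POST, and because the duplicate filter hashes the request body, each page's distinct formdata keeps these requests from being dropped.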

nbzj.py

# -*- coding: utf-8 -*-
import scrapy
import re
from nbzj_web.items import NbzjWebItem

class NbzjSpider(scrapy.Spider):
    name = 'nbzj'
    allowed_domains = ['www.nbzj.net']
    start_urls = ['http://www.nbzj.net/MaterialPriceList.aspx']
    custom_settings = {
        "DOWNLOAD_DELAY": 1,
        "ITEM_PIPELINES": {
            'nbzj_web.pipelines.MysqlPipeline': 300,
        },
        "DOWNLOADER_MIDDLEWARES": {
            'nbzj_web.middlewares.NbzjWebDownloaderMiddleware': 500,
        },
    }
    def parse(self, response):
        _response = response.text
        # print(_response)

        # extract the pagination parameters
        __VIEWSTATE = re.findall(r'id="__VIEWSTATE" value="(.*?)" />', _response)
        A = __VIEWSTATE[0]
        # print(A)
        __EVENTVALIDATION = re.findall(r'id="__EVENTVALIDATION" value="(.*?)" />', _response)
        B = __EVENTVALIDATION[0]
        # print(B)

        # max page number, scraped from the pager's "go to page N" (转到第N页) link
        page_num = re.findall(r'>下页</a><a title="转到第(.*?)页"', _response)
        # print(page_num[0])
        max_page = page_num[0]
        # print(max_page)

        content = {
            '__VIEWSTATE': A,
            '__EVENTVALIDATION': B,
            'page_num': max_page,
        }

        # Grab every <td> cell of the price table. Extracting the text
        # directly gave me trouble, so I take the raw tag strings here and
        # strip the markup with string replaces below.
        tag_list = response.xpath("//div[@class='fcon']/table[@class='mytable']//tr/td").extract()
        # print(tag_list)

        # Each data row has 9 <td> cells, so split the flat cell list into rows of 9.
        rows = [tag_list[i:i + 9] for i in range(0, len(tag_list), 9)]
        try:
            # print(rows)
            for tag in rows:

                item = NbzjWebItem()
                # print(tag)
                # code
                code = tag[0].replace('<td style="text-align: center">', '').replace('</td>', '')
                item['code'] = code
                # name
                name = tag[1].replace('<td>', '').replace('</td>', '')
                item['name'] = name
                # district
                district = tag[2].replace('<td style="text-align: center">', '').replace('</td>', '')
                item['district'] = district
                # model / specification
                _type = tag[3].replace('<td>', '').replace('</td>', '')
                item['_type'] = _type
                # unit
                unit = tag[4].replace('<td style="text-align: center">', '').replace('</td>', '')
                item['unit'] = unit
                # price excluding tax
                except_tax_price = tag[5].replace('<td style="text-align: right">', '').replace('</td>', '')
                item['except_tax_price'] = except_tax_price
                # price including tax
                tax_price = tag[6].replace('<td style="text-align: right">', '').replace('</td>', '')
                item['tax_price'] = tax_price
                # publication date
                time = tag[7].replace('<td style="text-align: center">', '').replace('</td>', '')
                print(time)
                item['time'] = time

                # print('-' * 100)
                yield item
            # print('*' * 100)
        except Exception:
            # As noted above, errors (e.g. short rows / index errors) are
            # simply swallowed instead of being handled properly.
            pass

        yield scrapy.Request(url=response.url, callback=self.parse_detail, meta={"data": content})

    def parse_detail(self, response):
        months = ['01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
        for h in range(2008, 2019):  # years 2008-2018
            for j in months:
                try:
                    max_page = response.meta['data']['page_num']
                    # print(max_page)
                    # Page 1 was already parsed above; post back for pages 2..max_page.
                    for i in range(2, int(max_page) + 1):
                        data = {
                            '__VIEWSTATE': '{}'.format(response.meta['data']['__VIEWSTATE']),
                            '__VIEWSTATEGENERATOR': 'E53A32FA',
                            '__EVENTTARGET': 'ctl00$ContentPlaceContent$Pager',
                            '__EVENTARGUMENT': '{}'.format(i),
                            '__EVENTVALIDATION': '{}'.format(response.meta['data']['__EVENTVALIDATION']),
                            'HeadSearchType': 'localsite',
                            'ctl00$ContentPlaceContent$txtnewCode': '',
                            'ctl00$ContentPlaceContent$txtMaterualName': '',
                            'ctl00$ContentPlaceContent$ddlArea': '',
                            # months already carry a leading zero
                            'ctl00$ContentPlaceContent$txtPublishDate': '{}-{}'.format(h, j),
                            'ctl00$ContentPlaceContent$ddlCategoryOne': '',
                            'ctl00$ContentPlaceContent$hidCateId': '',
                            'ctl00$ContentPlaceContent$txtSpecification': '',
                            'ctl00$ContentPlaceContent$Pager_input': '{}'.format(i - 1),
                            'ctl00$foot$ddlsnzjw': '0',
                            'ctl00$foot$ddlswzjw': '0',
                            'ctl00$foot$ddlqtxgw': '0',
                        }
                        yield scrapy.FormRequest(url=response.url, callback=self.parse, formdata=data, method="POST", dont_filter=False)
                except Exception:
                    pass
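A side note on the chained .replace() calls in parse(): stripping the <td> markup by hand is brittle. An alternative not used in the code above, but available because w3lib ships with Scrapy, is remove_tags; a tiny sketch with made-up sample cells:

# Strip tag markup from the raw <td> strings instead of chained replaces.
from w3lib.html import remove_tags

row = ['<td style="text-align: center">A001</td>', '<td>Cement 42.5</td>']
cells = [remove_tags(td).strip() for td in row]
print(cells)  # ['A001', 'Cement 42.5']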
items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class NbzjWebItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    code=scrapy.Field()
    name=scrapy.Field()
    district=scrapy.Field()
    _type=scrapy.Field()
    unit=scrapy.Field()
    except_tax_price=scrapy.Field()
    tax_price =scrapy.Field()
    time=scrapy.Field()
middlewares.py

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

class NbzjWebSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

class NbzjWebDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymysql

class NbzjWebPipeline(object):
    def process_item(self, item, spider):
        return item

# Save the scraped data to MySQL
class MysqlPipeline(object):

    def open_spider(self, spider):
        # Read the connection parameters from the project settings.
        self.host = spider.settings.get('MYSQL_HOST')
        self.port = spider.settings.get('MYSQL_PORT')
        self.user = spider.settings.get('MYSQL_USER')
        self.password = spider.settings.get('MYSQL_PASSWORD')
        self.db = spider.settings.get('MYSQL_DB')
        self.table = spider.settings.get('TABLE')
        self.client = pymysql.connect(host=self.host, user=self.user, password=self.password, port=self.port, db=self.db, charset='utf8')

    def process_item(self, item, spider):
        item_dict = dict(item)
        cursor = self.client.cursor()
        values = ','.join(['%s'] * len(item_dict))
        keys = ','.join(item_dict.keys())
        sql = 'INSERT INTO {table}({keys}) VALUES ({values})'.format(table=self.table, keys=keys, values=values)
        try:
            # execute() takes the SQL statement and a tuple of the values
            if cursor.execute(sql, tuple(item_dict.values())):
                print('Row inserted.')
                self.client.commit()
        except Exception as e:
            print(e)
            print('Row already exists.')
            self.client.rollback()
        return item

    def close_spider(self, spider):
        self.client.close()
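The pipeline assumes the web_nbzj table already exists. The post does not show the schema, so the column layout below is only an assumption derived from the item fields; a one-off helper to create it with pymysql could look like this:

# One-off table creation; the column types are assumptions, adjust as needed.
import pymysql

conn = pymysql.connect(host='172.16.10.197', user='root', password='123456',
                       port=3306, db='web_datas', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS web_nbzj (
            id INT AUTO_INCREMENT PRIMARY KEY,
            code VARCHAR(64),
            name VARCHAR(255),
            district VARCHAR(64),
            _type VARCHAR(255),
            unit VARCHAR(32),
            except_tax_price VARCHAR(32),
            tax_price VARCHAR(32),
            time VARCHAR(32)
        ) DEFAULT CHARSET=utf8
    """)
conn.commit()
conn.close()

Without a UNIQUE key the pipeline's "Row already exists" branch will rarely fire; whether to add one (for example on code, district and time) is a design choice left open here.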
settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for nbzj_web project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'nbzj_web'

SPIDER_MODULES = ['nbzj_web.spiders']
NEWSPIDER_MODULE = 'nbzj_web.spiders'

# MySQL connection settings
MYSQL_HOST = "172.16.10.197"
MYSQL_PORT = 3306
MYSQL_USER = "root"
MYSQL_PASSWORD = "123456"
MYSQL_DB = 'web_datas'
TABLE = "web_nbzj"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'nbzj_web (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'nbzj_web.middlewares.NbzjWebSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'nbzj_web.middlewares.NbzjWebDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'nbzj_web.pipelines.NbzjWebPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Run scrapy crawl nbzj to start the crawl.
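To spot-check that the rows actually landed in MySQL, a quick ad-hoc query (using the same connection parameters and the assumed table/column names from above) might look like:

# Quick sanity check of the scraped data.
import pymysql

conn = pymysql.connect(host='172.16.10.197', user='root', password='123456',
                       port=3306, db='web_datas', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute("SELECT COUNT(*), MIN(`time`), MAX(`time`) FROM web_nbzj")
    total, first, last = cursor.fetchone()
    print('rows: %s, from %s to %s' % (total, first, last))
conn.close()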

Original article: https://www.cnblogs.com/lvjing/p/9706509.html

