Crawler (17): Scrapy Framework (4): Integrating Selenium to Scrape JD Product Data

1. Integrating Selenium with Scrapy

Scrapy fetches pages much like the requests library: it directly issues HTTP requests, so it cannot render pages whose content is generated by JavaScript. Earlier posts covered two ways to scrape JavaScript-rendered pages. One is to analyze the Ajax requests and scrape the underlying API, which works in Scrapy as well. The other is to drive a browser with Selenium: we no longer care what requests happen behind the page or how rendering works, only the final rendered result. What you can see, you can scrape. So if Scrapy can be hooked up to Selenium, Scrapy can handle scraping virtually any site.

1.1 Creating the Project

First, create a new project named scrapyseleniumtest.

scrapy startproject scrapyseleniumtest

Then generate a new Spider.

scrapy genspider jd www.jd.com

Set ROBOTSTXT_OBEY to False.

ROBOTSTXT_OBEY = False

1.2 Defining the Item

We won't use an Item here.

Start by implementing the Spider's start_requests() method.

# -*- coding: utf-8 -*-
from scrapy import Request, Spider
from urllib.parse import quote
from bs4 import BeautifulSoup

class JdSpider(Spider):
    name = 'jd'
    allowed_domains = ['www.jd.com']
    base_url = 'https://search.jd.com/Search?keyword='

    def start_requests(self):
        for keyword in self.settings.get('KEYWORDS'):
            for page in range(1, self.settings.get('MAX_PAGE') + 1):
                url = self.base_url + quote(keyword)
                # dont_filter=True disables duplicate filtering,
                # since every page of one keyword shares the same URL.
                yield Request(url=url, callback=self.parse, meta={'page': page}, dont_filter=True)

We first define base_url, the product-list URL: appending a search keyword to it yields the JD search-results page for that keyword.

The search keywords are given by KEYWORDS, defined as a list, and the maximum page number by MAX_PAGE. Both are defined in settings.py.

KEYWORDS = ['iPad']
MAX_PAGE = 2

In start_requests() we iterate over the keywords and over the page numbers, constructing and yielding a Request for each combination. Since the URL is identical for every page of one search, the page number is passed through the meta parameter, and dont_filter=True disables duplicate filtering. When the crawler starts, it therefore generates one request per page of each keyword's result list.
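As a quick sanity check, the URL construction can be reproduced on its own: urllib.parse.quote percent-encodes non-ASCII keywords (as UTF-8) so they are safe to append to the search URL.

```python
from urllib.parse import quote

base_url = 'https://search.jd.com/Search?keyword='

# ASCII keywords pass through unchanged;
# Chinese keywords are UTF-8 percent-encoded.
print(base_url + quote('iPad'))  # https://search.jd.com/Search?keyword=iPad
print(quote('手机'))             # %E6%89%8B%E6%9C%BA
```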

1.3 Integrating Selenium

Next we need to handle the fetching of these requests. This time we fetch them with Selenium, implemented as a Downloader Middleware. The middleware drives Selenium, takes the rendered page source, wraps it in an HtmlResponse object, and returns it straight to the spider for parsing and extraction; the downloader's own download step is never executed.

class SeleniumMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    def __init__(self, timeout=None):
        self.logger = getLogger(__name__)
        self.timeout = timeout
        self.browser = webdriver.Chrome()
        self.browser.set_window_size(1400, 700)
        self.browser.set_page_load_timeout(self.timeout)
        self.wait = WebDriverWait(self.browser, self.timeout)

    def __del__(self):
        # quit() shuts down the driver process; close() would only close the window
        self.browser.quit()

    @classmethod
    def from_crawler(cls, crawler):
        return cls(timeout=crawler.settings.get('SELENIUM_TIMEOUT'))

    def process_request(self, request, spider):
        '''
        Render the page with Selenium, then wrap the page source in an
        HtmlResponse object and return it directly to the spider for
        parsing and extraction. Returning a response here means the
        downloader never downloads the page itself.
        :param request:
        :param spider:
        :return:
        '''
        self.logger.debug('Chrome is rendering %s', request.url)
        page = request.meta.get('page', 1)
        try:
            self.browser.get(request.url)
            if page > 1:
                # EC.presence_of_element_located: wait until the page-jump
                # input box has been loaded, then assign it to input.
                input = self.wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '#J_bottomPage > span.p-skip > input')))
                # EC.element_to_be_clickable: wait until the confirm button
                # is clickable, then assign it to submit.
                submit = self.wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#J_bottomPage > span.p-skip > a')))
                input.clear()
                input.send_keys(page)
                submit.click()  # click the confirm button
                time.sleep(5)

                # EC.text_to_be_present_in_element: confirm the requested
                # page number is now shown as the current page.
                self.wait.until(EC.text_to_be_present_in_element((By.CSS_SELECTOR, '#J_bottomPage > span.p-num > a.curr'), str(page)))
                # Wait for the product list (#J_goodsList) to load before
                # returning the page source.
                self.wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '#J_goodsList .gl-item')))
            return HtmlResponse(url=request.url, body=self.browser.page_source, request=request, encoding='utf-8', status=200)
        except TimeoutException:
            return HtmlResponse(url=request.url, status=500, request=request)

In __init__() we initialize a few objects, including the WebDriverWait instance, and set the window size and the page-load timeout. In process_request() we read the page number to crawl from the Request's meta attribute, locate the page-jump input box (input) and its confirm button (submit), type the page number into the box, wait for the page to load, and return an HtmlResponse straight to the spider for parsing. The downloader is never involved: we construct the Response subclass HtmlResponse ourselves. (When a downloader middleware returns a Response object, lower-priority process_request() methods are no longer executed; Scrapy moves on to the process_response() chain instead. Since this project defines no other process_response(), the result goes directly to the spider.)
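That short-circuit can be sketched in a few lines of plain Python. This is a simplified illustration of the control flow only, not Scrapy's actual engine code:

```python
# Simplified sketch of the downloader-middleware short circuit
# (illustrative only, not Scrapy's real implementation).
def fetch(request, middlewares, downloader):
    for process_request in middlewares:
        response = process_request(request)
        if response is not None:
            # A middleware produced a response: skip the downloader
            # and any lower-priority process_request() methods.
            return response
    return downloader(request)

selenium_mw = lambda req: 'rendered HTML for ' + req  # stands in for SeleniumMiddleware
passive_mw = lambda req: None                         # returns None, so the chain continues
downloader = lambda req: 'raw HTML for ' + req

print(fetch('https://search.jd.com', [passive_mw, selenium_mw], downloader))
# rendered HTML for https://search.jd.com
```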

1.4 Parsing the Page

The Response object is passed back to the Spider's callback for parsing, so the next step is to implement that callback and parse the page.

def parse(self, response):
    soup = BeautifulSoup(response.text, 'lxml')
    lis = soup.find_all(name='li', class_="gl-item")
    for li in lis:
        proc_dict = {}
        dp = li.find(name='span', class_="J_im_icon")
        if dp:
            proc_dict['dp'] = dp.get_text().strip()
        else:
            continue
        id = li.attrs['data-sku']
        title = li.find(name='div', class_="p-name p-name-type-2")
        proc_dict['title'] = title.get_text().strip()
        price = li.find(name='strong', class_="J_" + id)
        proc_dict['price'] = price.get_text()
        comment = li.find(name='a', id="J_comment_" + id)
        proc_dict['comment'] = comment.get_text() + '条评论'
        url = 'https://item.jd.com/' + id + '.html'
        proc_dict['url'] = url
        proc_dict['type'] = 'JINGDONG'
        yield proc_dict

Here we parse with BeautifulSoup: match every product item, then iterate over the results and pick out each product's fields in turn.
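To see how those extraction rules map onto the markup, the same lookups can be run against a hand-written mock of a single list item. The HTML below is a simplified stand-in, not real JD page source, and html.parser is used so the snippet has no lxml dependency:

```python
from bs4 import BeautifulSoup

# Simplified mock of one JD search-result item (illustrative only).
html = '''
<ul id="J_goodsList">
  <li class="gl-item" data-sku="100012345">
    <span class="J_im_icon">Apple京东自营旗舰店</span>
    <div class="p-name p-name-type-2"><em>Apple iPad 10.2英寸</em></div>
    <strong class="J_100012345">2499.00</strong>
    <a id="J_comment_100012345">50万+</a>
  </li>
</ul>
'''
soup = BeautifulSoup(html, 'html.parser')
li = soup.find('li', class_='gl-item')
sku = li.attrs['data-sku']                              # the product id
print(sku)                                              # 100012345
print(li.find('strong', class_='J_' + sku).get_text())  # 2499.00
print('https://item.jd.com/' + sku + '.html')           # the detail-page URL
```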

1.5 Storing the Results

Once the page data is extracted, it is sent to the item pipeline for processing: cleaning, storage, and so on. So we now need to define a pipeline; here we store the data in MongoDB.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo

class MongoPipeline(object):

    def __init__(self, mongo_url, mongo_db, collection):
        self.mongo_url = mongo_url
        self.mongo_db = mongo_db
        self.collection = collection

    @classmethod
    # from_crawler is a class method (marked by @classmethod) and a form of
    # dependency injection: through its crawler argument we can read every
    # option defined in the global settings.py. Its main job here is to pull
    # configuration out of settings.py.
    def from_crawler(cls, crawler):
        return cls(
            mongo_url=crawler.settings.get('MONGO_URL'),
            mongo_db=crawler.settings.get('MONGO_DB'),
            collection=crawler.settings.get('COLLECTION')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_url)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        # name = item.__class__.collection
        name = self.collection
        self.db[name].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()

1.6 Configuring settings.py

Configure the settings file with every option the project uses: KEYWORDS, MAX_PAGE, SELENIUM_TIMEOUT (the page-load timeout), MONGO_URL, MONGO_DB, and COLLECTION.

KEYWORDS = ['iPad']
MAX_PAGE = 2

MONGO_URL = 'localhost'
MONGO_DB = 'test'
COLLECTION = 'ProductItem'

SELENIUM_TIMEOUT = 30

Also update the configuration to activate the downloader middleware and the item pipeline.

DOWNLOADER_MIDDLEWARES = {
   'scrapyseleniumtest.middlewares.SeleniumMiddleware': 543,
}

ITEM_PIPELINES = {
   'scrapyseleniumtest.pipelines.MongoPipeline': 300,
}

1.7 Running the Project

With all the project's code and configuration in place, run it.

scrapy crawl jd

After the run, check the data in MongoDB: the crawl has succeeded.

1.8 Complete Code

items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
from scrapy import Item, Field

class ProductItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # dp = Field()
    # title = Field()
    # price = Field()
    # comment = Field()
    # url = Field()
    # type = Field()
    pass

jd.py:

# -*- coding: utf-8 -*-
from scrapy import Request, Spider
from urllib.parse import quote
from bs4 import BeautifulSoup

class JdSpider(Spider):
    name = 'jd'
    allowed_domains = ['www.jd.com']
    base_url = 'https://search.jd.com/Search?keyword='

    def start_requests(self):
        for keyword in self.settings.get('KEYWORDS'):
            for page in range(1, self.settings.get('MAX_PAGE') + 1):
                url = self.base_url + quote(keyword)
                # dont_filter=True disables duplicate filtering,
                # since every page of one keyword shares the same URL.
                yield Request(url=url, callback=self.parse, meta={'page': page}, dont_filter=True)

    def parse(self, response):
        soup = BeautifulSoup(response.text, 'lxml')
        lis = soup.find_all(name='li', class_="gl-item")
        for li in lis:
            proc_dict = {}
            dp = li.find(name='span', class_="J_im_icon")
            if dp:
                proc_dict['dp'] = dp.get_text().strip()
            else:
                continue
            id = li.attrs['data-sku']
            title = li.find(name='div', class_="p-name p-name-type-2")
            proc_dict['title'] = title.get_text().strip()
            price = li.find(name='strong', class_="J_" + id)
            proc_dict['price'] = price.get_text()
            comment = li.find(name='a', id="J_comment_" + id)
            proc_dict['comment'] = comment.get_text() + '条评论'
            url = 'https://item.jd.com/' + id + '.html'
            proc_dict['url'] = url
            proc_dict['type'] = 'JINGDONG'
            yield proc_dict

middlewares.py:

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from urllib.parse import urlencode
from scrapy.http import HtmlResponse
from logging import getLogger
from selenium.common.exceptions import TimeoutException
import time

class ScrapyseleniumtestSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)

class SeleniumMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    def __init__(self, timeout=None):
        self.logger = getLogger(__name__)
        self.timeout = timeout
        self.browser = webdriver.Chrome()
        self.browser.set_window_size(1400, 700)
        self.browser.set_page_load_timeout(self.timeout)
        self.wait = WebDriverWait(self.browser, self.timeout)

    def __del__(self):
        # quit() shuts down the driver process; close() would only close the window
        self.browser.quit()

    @classmethod
    def from_crawler(cls, crawler):
        return cls(timeout=crawler.settings.get('SELENIUM_TIMEOUT'))

    def process_request(self, request, spider):
        '''
        Render the page with Selenium, then wrap the page source in an
        HtmlResponse object and return it directly to the spider for
        parsing and extraction. Returning a response here means the
        downloader never downloads the page itself.
        :param request:
        :param spider:
        :return:
        '''
        self.logger.debug('Chrome is rendering %s', request.url)
        page = request.meta.get('page', 1)
        try:
            self.browser.get(request.url)
            if page > 1:
                # EC.presence_of_element_located: wait until the page-jump
                # input box has been loaded, then assign it to input.
                input = self.wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '#J_bottomPage > span.p-skip > input')))
                # EC.element_to_be_clickable: wait until the confirm button
                # is clickable, then assign it to submit.
                submit = self.wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#J_bottomPage > span.p-skip > a')))
                input.clear()
                input.send_keys(page)
                submit.click()  # click the confirm button
                time.sleep(5)

                # EC.text_to_be_present_in_element: confirm the requested
                # page number is now shown as the current page.
                self.wait.until(EC.text_to_be_present_in_element((By.CSS_SELECTOR, '#J_bottomPage > span.p-num > a.curr'), str(page)))
                # Wait for the product list (#J_goodsList) to load before
                # returning the page source.
                self.wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '#J_goodsList .gl-item')))
            return HtmlResponse(url=request.url, body=self.browser.page_source, request=request, encoding='utf-8', status=200)
        except TimeoutException:
            return HtmlResponse(url=request.url, status=500, request=request)

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)

pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo

class MongoPipeline(object):

    def __init__(self, mongo_url, mongo_db, collection):
        self.mongo_url = mongo_url
        self.mongo_db = mongo_db
        self.collection = collection

    @classmethod
    # from_crawler is a class method (marked by @classmethod) and a form of
    # dependency injection: through its crawler argument we can read every
    # option defined in the global settings.py. Its main job here is to pull
    # configuration out of settings.py.
    def from_crawler(cls, crawler):
        return cls(
            mongo_url=crawler.settings.get('MONGO_URL'),
            mongo_db=crawler.settings.get('MONGO_DB'),
            collection=crawler.settings.get('COLLECTION')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_url)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        # name = item.__class__.collection
        name = self.collection
        self.db[name].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for scrapyseleniumtest project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'scrapyseleniumtest'

SPIDER_MODULES = ['scrapyseleniumtest.spiders']
NEWSPIDER_MODULE = 'scrapyseleniumtest.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapyseleniumtest (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'scrapyseleniumtest.middlewares.ScrapyseleniumtestSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'scrapyseleniumtest.middlewares.ScrapyseleniumtestDownloaderMiddleware': 543,
#}
DOWNLOADER_MIDDLEWARES = {
   'scrapyseleniumtest.middlewares.SeleniumMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'scrapyseleniumtest.pipelines.ScrapyseleniumtestPipeline': 300,
#}
ITEM_PIPELINES = {
   'scrapyseleniumtest.pipelines.MongoPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

KEYWORDS = ['iPad']
MAX_PAGE = 2

MONGO_URL = 'localhost'
MONGO_DB = 'test'
COLLECTION = 'ProductItem'

SELENIUM_TIMEOUT = 30

Original article (Chinese): https://www.cnblogs.com/liuhui0308/p/12150489.html

