Installing scrapy and scrapy_redis

Install sqlite-devel first; Python's sqlite module (which Scrapy relies on) needs these headers to be present before Python is built from source:

yum install sqlite-devel

Python 3.5

Download the source package and compile and install it yourself:

./configure

make

make install
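
For reference, a complete build might look like the sketch below. The exact 3.5.x version and download URL are assumptions; adjust them to whichever release you actually download:

# Hypothetical example: build Python 3.5.x from source (version/URL assumed)
wget https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tgz
tar -xzf Python-3.5.2.tgz
cd Python-3.5.2
./configure
make
make install
# Verify that the sqlite module was built in (requires sqlite-devel from above)
python3 -c "import sqlite3; print(sqlite3.sqlite_version)"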

Python 3 ships with pip; upgrade it to the latest version:

pip3 install --upgrade pip

Install the MySQL module for Python 3:

pip3 install pymysql
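
To confirm the module works, a minimal connection test such as this sketch can be used (the host, user, password, and database names are placeholders, not values from this guide):

# quick pymysql smoke test -- connection parameters are hypothetical
import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', password='yourpassword',
                       db='test', charset='utf8mb4')
with conn.cursor() as cursor:
    # Ask the server for its version to prove the connection works.
    cursor.execute('SELECT VERSION()')
    print(cursor.fetchone())
conn.close()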

Install Twisted, the asynchronous networking framework that Scrapy is built on:

wget https://pypi.python.org/packages/6b/23/8dbe86fc83215015e221fbd861a545c6ec5c9e9cd7514af114d1f64084ab/Twisted-16.4.1.tar.bz2#md5=c6d09bdd681f538369659111f079c29d

Unpack it:

tar -jxvf Twisted-16.4.1.tar.bz2

Enter the directory:
cd Twisted-16.4.1

Install it:

python3 setup.py install
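
A quick way to confirm that Twisted is importable from Python 3:

python3 -c "import twisted; print(twisted.version)"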

Install Scrapy:

pip3 install scrapy
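
Once installed, Scrapy's own command line can confirm the version and the versions of its dependencies:

scrapy version -v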

Install Redis:

yum install redis
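
After installing, start the service and check that it responds (the exact service command depends on the init system; on systemd hosts it is usually systemctl):

service redis start        # or: systemctl start redis
redis-cli ping             # should reply with PONG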

Install scrapy-redis:

git clone https://github.com/rolando/scrapy-redis.git

cd scrapy-redis/

python3 setup.py install
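
To actually use scrapy-redis in a project, the scheduler and dupefilter have to be switched over in the project's settings.py. A minimal sketch follows; the Redis address is an assumption, so point REDIS_URL at your own server:

# settings.py -- minimal scrapy-redis configuration sketch
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER_PERSIST = True          # keep the queue and dupefilter between runs
REDIS_URL = 'redis://127.0.0.1:6379'
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,   # optional: store items in Redis
}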

There is a bug caused by the bytes/str differences between Python 2 and Python 3; the temporary workaround posted on GitHub (patching the utils and spiders modules of scrapy-redis) is shown below:

# utils.py
import six

def bytes_to_str(s, encoding='utf-8'):
    """Returns a str if a bytes object is given."""

    if six.PY3 and isinstance(s, bytes):
        return s.decode(encoding)

    return s

  

# spiders.py
import six

from scrapy import signals
from scrapy.exceptions import DontCloseSpider
from scrapy.spiders import Spider, CrawlSpider

from . import connection
from .utils import bytes_to_str

# Default batch size matches default concurrent requests setting.
DEFAULT_START_URLS_BATCH_SIZE = 16
DEFAULT_START_URLS_KEY = '%(name)s:start_urls'

class RedisMixin(object):
    """Mixin class to implement reading urls from a redis queue."""
    # Per spider redis key, default to DEFAULT_KEY.
    redis_key = None
    # Fetch this amount of start urls when idle. Default to DEFAULT_BATCH_SIZE.
    redis_batch_size = None
    redis_encoding = 'utf-8'
    # Redis client instance.
    server = None

    def start_requests(self):
        """Returns a batch of start requests from redis."""
        return self.next_requests()

    def setup_redis(self, crawler=None):
        """Setup redis connection and idle signal.

        This should be called after the spider has set its crawler object.
        """
        if self.server is not None:
            return

        if crawler is None:
            # We allow optional crawler argument to keep backwards
            # compatibility.
            # XXX: Raise a deprecation warning.
            crawler = getattr(self, 'crawler', None)

        if crawler is None:
            raise ValueError("crawler is required")

        settings = crawler.settings

        if self.redis_key is None:
            self.redis_key = settings.get(
                'REDIS_START_URLS_KEY', DEFAULT_START_URLS_KEY,
            )

        self.redis_key = self.redis_key % {'name': self.name}

        if not self.redis_key.strip():
            raise ValueError("redis_key must not be empty")

        if self.redis_batch_size is None:
            self.redis_batch_size = settings.getint(
                'REDIS_START_URLS_BATCH_SIZE', DEFAULT_START_URLS_BATCH_SIZE,
            )

        try:
            self.redis_batch_size = int(self.redis_batch_size)
        except (TypeError, ValueError):
            raise ValueError("redis_batch_size must be an integer")

        self.logger.info("Reading start URLs from redis key '%(redis_key)s' "
                         "(batch size: %(redis_batch_size)s)", self.__dict__)

        self.server = connection.from_settings(crawler.settings)
        # The idle signal is called when the spider has no requests left,
        # that's when we will schedule new requests from redis queue
        crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)

    def next_requests(self):
        """Returns a request to be scheduled or none."""
        use_set = self.settings.getbool('REDIS_START_URLS_AS_SET')
        fetch_one = self.server.spop if use_set else self.server.lpop
        # XXX: Do we need to use a timeout here?
        found = 0
        while found < self.redis_batch_size:
            data = fetch_one(self.redis_key)
            if not data:
                # Queue empty.
                break
            req = self.make_request_from_data(data)
            if req:
                yield req
                found += 1
            else:
                self.logger.debug("Request not made from data: %r", data)

        if found:
            self.logger.debug("Read %s requests from '%s'", found, self.redis_key)

    def make_request_from_data(self, data):
        # By default, data is a URL.

        if not isinstance(data, six.string_types):
            # XXX: Shall we log and continue?
            self.logger.error("Wrong type for data: %s" % type(data))
            url = bytes_to_str(data, self.redis_encoding)
        else:
            url = data

        # FIXME: This is a naive guard against using a wrong redis_key where
        # data are not string URLs.
        if '://' not in url:
            # XXX: Shall this be an exception?
            self.logger.error("Missing scheme in URL: '%s'", url)

        return self.make_requests_from_url(url)

    def schedule_next_requests(self):
        """Schedules a request if available"""
        for req in self.next_requests():
            self.crawler.engine.crawl(req, spider=self)

    def spider_idle(self):
        """Schedules a request if available, otherwise waits."""
        # XXX: Handle a sentinel to close the spider.
        self.schedule_next_requests()
        raise DontCloseSpider

class RedisSpider(RedisMixin, Spider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        obj = super(RedisSpider, cls).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj

class RedisCrawlSpider(RedisMixin, CrawlSpider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        obj = super(RedisCrawlSpider, cls).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj
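
With the patch applied, a spider can read its start URLs from Redis instead of hard-coding them. A minimal sketch follows; the spider name and Redis key are made up for illustration:

# myspider.py -- hypothetical spider based on RedisSpider
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'
    # Start requests are popped from this Redis key instead of start_urls.
    redis_key = 'myspider:start_urls'

    def parse(self, response):
        yield {
            'url': response.url,
            'title': response.css('title::text').extract_first(),
        }

Seed the queue from redis-cli before or while the spider is running; the idle signal will pick the new URLs up:

redis-cli lpush myspider:start_urls http://example.com/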

  
