Scrapy Command Line Explained

Official documentation: https://doc.scrapy.org/en/latest/

Global commands (can be run from anywhere):
  startproject, genspider, settings, runspider, shell, fetch, view, version, bench

Project-only commands (can only be run inside a project directory):
  crawl, check, list, edit, parse

startproject

  • Syntax: scrapy startproject <project_name> [project_dir]
  • Requires project: no

Creates a new Scrapy project named project_name, under the project_dir directory. If project_dir wasn’t specified, project_dir will default to project_name.

Usage example:

$ scrapy startproject myproject
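
For orientation, the generated project layout looks roughly like the following; the exact set of files may vary slightly between Scrapy versions:

myproject/
    scrapy.cfg            # deploy configuration file
    myproject/            # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where spiders live
            __init__.py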

genspider

  • Syntax: scrapy genspider [-t template] <name> <domain>
  • Requires project: no

Create a new spider in the current folder or in the current project’s spiders folder, if called from inside a project. The <name> parameter is set as the spider’s name, while <domain> is used to generate the spider’s allowed_domains and start_urls attributes.

Usage example:

$ scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

$ scrapy genspider example example.com
Created spider 'example' using template 'basic'

$ scrapy genspider -t crawl scrapyorg scrapy.org
Created spider 'scrapyorg' using template 'crawl'

This is just a convenience shortcut command for creating spiders based on pre-defined templates, but certainly not the only way to create spiders. You can just create the spider source code files yourself, instead of using this command.
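
For reference, a spider created with scrapy genspider example example.com and the basic template looks roughly like this (the exact boilerplate depends on your Scrapy version):

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    def parse(self, response):
        # The template leaves the callback empty; add extraction logic here.
        pass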

crawl

  • Syntax: scrapy crawl <spider>
  • Requires project: yes

Start crawling using a spider.

Usage examples:

$ scrapy crawl myspider
[ ... myspider starts crawling ... ]
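
crawl also accepts a few commonly used options, for example -a to pass arguments to the spider and -o to export the scraped items to a file; the argument and output file name below are hypothetical:

$ scrapy crawl myspider -a category=electronics -o items.json
[ ... myspider starts crawling and items are written to items.json ... ]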

check

  • Syntax: scrapy check [-l] <spider>
  • Requires project: yes

Run contract checks.

Usage examples:

$ scrapy check -l
first_spider
  * parse
  * parse_item
second_spider
  * parse
  * parse_item

$ scrapy check
[FAILED] first_spider:parse_item
>>> 'RetailPricex' field is missing

[FAILED] first_spider:parse
>>> Returned 92 requests, expected 0..4
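
The checks come from contracts declared in the docstrings of spider callbacks. A minimal sketch, with hypothetical spider and field names, might look like this:

import scrapy


class FirstSpider(scrapy.Spider):
    name = 'first_spider'

    def parse_item(self, response):
        """Contracts are the @-prefixed lines in this docstring.

        @url http://www.example.com/some/page.html
        @returns items 1 1
        @scrapes name price
        """
        # Yield a single item with the fields the @scrapes contract expects.
        yield {'name': response.css('h1::text').extract_first(),
               'price': response.css('.price::text').extract_first()}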

list

  • Syntax: scrapy list
  • Requires project: yes

List all available spiders in the current project. The output is one spider per line.

Usage example:

$ scrapy list
spider1
spider2

edit

  • Syntax: scrapy edit <spider>
  • Requires project: yes

Edit the given spider using the editor defined in the EDITOR environment variable or (if unset) the EDITOR setting.

This command is provided only as a convenience shortcut for the most common case; the developer is, of course, free to choose any tool or IDE to write and debug spiders.

Usage example:

$ scrapy edit spider1

fetch

  • Syntax: scrapy fetch <url>
  • Requires project: no

Downloads the given URL using the Scrapy downloader and writes the contents to standard output.

The interesting thing about this command is that it fetches the page the way the spider would download it. For example, if the spider has a USER_AGENT attribute which overrides the user agent, that one will be used.

So this command can be used to “see” how your spider would fetch a certain page.

If used outside a project, no particular per-spider behaviour would be applied and it will just use the default Scrapy downloader settings.

Supported options:

  • --spider=SPIDER: bypass spider autodetection and force use of specific spider
  • --headers: print the response’s HTTP headers instead of the response’s body
  • --no-redirect: do not follow HTTP 3xx redirects (default is to follow them); use this option when the URL involves a redirect

Usage examples:

$ scrapy fetch --nolog http://www.example.com/some/page.html
[ ... html content here ... ]

$ scrapy fetch --nolog --headers http://www.example.com/
{'Accept-Ranges': ['bytes'],
 'Age': ['1263   '],
 'Connection': ['close     '],
 'Content-Length': ['596'],
 'Content-Type': ['text/html; charset=UTF-8'],
 'Date': ['Wed, 18 Aug 2010 23:59:46 GMT'],
 'Etag': ['"573c1-254-48c9c87349680"'],
 'Last-Modified': ['Fri, 30 Jul 2010 15:30:18 GMT'],
 'Server': ['Apache/2.2.3 (CentOS)']}
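
If the URL redirects, --no-redirect can be combined with --headers to inspect the original 3xx response instead of the final page (the URL below is hypothetical):

$ scrapy fetch --nolog --no-redirect --headers http://www.example.com/old-page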

view

  • Syntax: scrapy view <url>
  • Requires project: no

Opens the given URL in a browser, as your Scrapy spider would “see” it. Sometimes spiders see pages differently from regular users, so this can be used to check what the spider “sees” and confirm it’s what you expect.

Supported options:

  • --spider=SPIDER: bypass spider autodetection and force use of specific spider
  • --no-redirect: do not follow HTTP 3xx redirects (default is to follow them)

Usage example:

$ scrapy view http://www.example.com/some/page.html
[ ... browser starts ... ]

shell

  • Syntax: scrapy shell [url]
  • Requires project: no

Starts the Scrapy shell for the given URL (if given) or empty if no URL is given. Also supports UNIX-style local file paths, either relative with ./ or ../ prefixes or absolute file paths. See Scrapy shell for more info.

Supported options:

  • --spider=SPIDER: bypass spider autodetection and force use of specific spider
  • -c code: evaluate the code in the shell, print the result and exit
  • --no-redirect: do not follow HTTP 3xx redirects (default is to follow them); this only affects the URL you may pass as argument on the command line; once you are inside the shell, fetch(url) will still follow HTTP redirects by default.

Usage example:

$ scrapy shell http://www.example.com/some/page.html
[ ... scrapy shell starts ... ]

$ scrapy shell --nolog http://www.example.com/ -c '(response.status, response.url)'
(200, 'http://www.example.com/')

# shell follows HTTP redirects by default
$ scrapy shell --nolog http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F -c '(response.status, response.url)'
(200, 'http://example.com/')

# you can disable this with --no-redirect
# (only for the URL passed as command line argument)
$ scrapy shell --no-redirect --nolog http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F -c '(response.status, response.url)'
(302, 'http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F')
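
As mentioned above, the shell also accepts local file paths, which is convenient for testing selectors against a saved copy of a page; the file name below is a placeholder:

# open a local HTML file in the shell
$ scrapy shell ./saved_page.html
[ ... scrapy shell starts with the local file loaded as the response ... ]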

parse

  • Syntax: scrapy parse <url> [options]
  • Requires project: yes

Fetches the given URL and parses it with the spider that handles it, using the method passed with the --callback option, or parse if not given.

Supported options:

  • --spider=SPIDER: bypass spider autodetection and force use of specific spider
  • -a NAME=VALUE: set spider argument (may be repeated)
  • --callback or -c: spider method to use as callback for parsing the response
  • --meta or -m: additional request meta that will be passed to the callback request. This must be a valid JSON string. Example: --meta='{"foo" : "bar"}'
  • --pipelines: process items through pipelines
  • --rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response
  • --noitems: don’t show scraped items
  • --nolinks: don’t show extracted links
  • --nocolour: avoid using pygments to colorize the output
  • --depth or -d: depth level for which the requests should be followed recursively (default: 1)
  • --verbose or -v: display information for each depth level

Usage example:

$ scrapy parse http://www.example.com/ -c parse_item
[ ... scrapy log lines crawling example.com spider ... ]

>>> STATUS DEPTH LEVEL 1 <<<
# Scraped Items  ------------------------------------------------------------
[{'name': 'Example item',
 'category': 'Furniture',
 'length': '12 cm'}]

# Requests  -----------------------------------------------------------------
[]
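
The options can be combined; for example, to force a specific spider, pass it an argument, attach request meta, and name the callback in one call (the spider name and values are hypothetical):

$ scrapy parse http://www.example.com/ --spider=myspider -a category=books -m '{"foo": "bar"}' -c parse_item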

settings

  • Syntax: scrapy settings [options]
  • Requires project: no

Get the value of a Scrapy setting.

If used inside a project it’ll show the project setting value, otherwise it’ll show the default Scrapy value for that setting.

Example usage:

$ scrapy settings --get BOT_NAME
scrapybot
$ scrapy settings --get DOWNLOAD_DELAY
0


runspider

  • Syntax: scrapy runspider <spider_file.py>
  • Requires project: no (can be run globally)

Run a spider self-contained in a Python file, without having to create a project.

Example usage:

$ scrapy runspider myspider.py
[ ... spider starts crawling ... ]
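
A file passed to runspider just needs to contain a spider class; a minimal self-contained sketch (URL and item fields are placeholders) could be:

import scrapy


class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        # Yield one item with the page title; replace with real extraction logic.
        yield {'title': response.css('title::text').extract_first()}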


version

  • Syntax: scrapy version [-v]
  • Requires project: no

Prints the Scrapy version. If used with -v it also prints Python, Twisted and Platform info, which is useful for bug reports.

bench

New in version 0.17.

  • Syntax: scrapy bench
  • Requires project: no

Run a quick benchmark test; see the Benchmarking section of the official documentation for details.

