Crawler: Crawling and Downloading Tuchong Images with the Scrapy Framework

items.py — define the fields your scraped data needs

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class TodayScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass


class TuchongItem(scrapy.Item):
    title = scrapy.Field()      # gallery title
    views = scrapy.Field()      # number of views
    favorites = scrapy.Field()  # number of favorites (likes)
    img_url = scrapy.Field()    # image URL

    # def get_insert_sql(self):
    #     # SQL statement used when storing the item
    #     sql = 'insert into tuchong(title,views,favorites,img_url)' \
    #           ' VALUES (%s, %s, %s, %s)'
    #     # the data to store
    #     data = (self['title'], self['views'], self['favorites'], self['img_url'])
    #     return (sql, data)

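The commented-out get_insert_sql method hints at storing items in MySQL. If you want that, one way to drive it is a small extra pipeline like the sketch below. This is only an illustration, not part of the original project: it assumes pymysql is installed, that a today_scrapy database with a matching tuchong table already exists, that get_insert_sql has been uncommented, and the connection parameters are placeholders.

import pymysql  # assumed dependency, not used elsewhere in this project

class MysqlPipeline(object):
    def open_spider(self, spider):
        # placeholder credentials - adjust to your own MySQL setup
        self.conn = pymysql.connect(host='localhost', user='root', password='root',
                                    db='today_scrapy', charset='utf8mb4')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # only items that define get_insert_sql() are stored
        if hasattr(item, 'get_insert_sql'):
            sql, data = item.get_insert_sql()
            self.cursor.execute(sql, data)
            self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()

To use it, register it in ITEM_PIPELINES in settings.py alongside (or instead of) TuchongPipeline.
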
settings.py — set the request headers and enable the item pipeline

# -*- coding: utf-8 -*-

# Scrapy settings for today_scrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'today_scrapy'

SPIDER_MODULES = ['today_scrapy.spiders']
NEWSPIDER_MODULE = 'today_scrapy.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'today_scrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'today_scrapy.middlewares.TodayScrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'today_scrapy.middlewares.TodayScrapyDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'today_scrapy.pipelines.TodayScrapyPipeline': 300,
    'today_scrapy.pipelines.TuchongPipeline': 200,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
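
The headers above are merged into every request by Scrapy's DefaultHeadersMiddleware; note that the header name must be spelled 'User-Agent' exactly (the original post had 'User-Agnet', which Scrapy would send as a meaningless extra header while still using its default user agent). A quick way to check what is actually sent is the Scrapy shell, run from the project directory so the project settings are loaded:

scrapy shell "https://tuchong.com/rest/tags/自然/posts?page=1&count=20"
>>> request.headers   # should list the Accept, Accept-Language and User-Agent values configured above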

pipelines.py — download each image into a folder named after its gallery

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import os
import requests

class TodayScrapyPipeline(object):
    def process_item(self, item, spider):
        return item

class TuchongPipeline(object):
    def process_item(self, item, spider):
        img_url = item['img_url']   # image URL taken from the item
        img_title = item['title']   # gallery title, used as the folder name
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
            'cookie': 'webp_enabled=1; bad_ide7dfc0b0-b3b6-11e7-b58e-df773034efe4=78baed41-a870-11e8-b7fd-370d61367b46; _ga=GA1.2.1188216139.1535263387; _gid=GA1.2.1476686092.1535263387; PHPSESSID=4k7pb6hmkml8tjsbg0knii25n6'
        }
        if not os.path.exists(img_title):
            os.mkdir(img_title)
        filename = img_url.split('/')[-1]
        # the with statement closes the file automatically
        with open(img_title + '/' + filename, 'wb+') as f:
            f.write(requests.get(img_url, headers=headers).content)
        return item
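
One thing to note: this pipeline downloads each image with the blocking requests library, which sidesteps Scrapy's asynchronous downloader and its throttling settings. If that becomes a bottleneck, Scrapy's built-in ImagesPipeline can do the downloading instead. The sketch below is an alternative under stated assumptions, not the original author's code: it mirrors the <gallery title>/<file name> layout used above, and additionally requires Pillow to be installed and IMAGES_STORE to be set in settings.py.

import scrapy
from scrapy.pipelines.images import ImagesPipeline

class TuchongImagesPipeline(ImagesPipeline):
    # register in ITEM_PIPELINES instead of TuchongPipeline,
    # and add e.g. IMAGES_STORE = 'images' to settings.py
    def get_media_requests(self, item, info):
        # let Scrapy's own downloader fetch the image asynchronously
        yield scrapy.Request(item['img_url'], meta={'title': item['title']})

    def file_path(self, request, response=None, info=None, *, item=None):
        # save as <gallery title>/<original file name>, mirroring TuchongPipeline
        return '{}/{}'.format(request.meta['title'], request.url.split('/')[-1])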

The spider file

tuchong.py

The image URLs can be assembled by simple string concatenation from fields in the JSON response.
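
For reference, each request to the /rest/tags/<tag>/posts endpoint returns JSON whose relevant shape, as inferred from the parse() method below, looks roughly like this (field values elided):

{
    "postList": [
        {
            "title": "...",
            "views": ...,
            "favorites": ...,
            "images": [
                {"user_id": ..., "img_id": ...},
                ...
            ]
        },
        ...
    ]
}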

# -*- coding: utf-8 -*-
import scrapy
import json
from today_scrapy.items import TuchongItem


class TuchongSpider(scrapy.Spider):
    name = 'tuchong'
    allowed_domains = ['tuchong.com']
    start_urls = ['http://tuchong.com/']

    def start_requests(self):
        for pag in range(1, 20):
            # the tag in the URL ('自然', i.e. "nature") can be swapped for any other tag
            referer_url = 'https://tuchong.com/rest/tags/自然/posts?page={}&count=20'.format(pag)
            form_req = scrapy.Request(url=referer_url, callback=self.parse)
            form_req.headers['referer'] = referer_url
            yield form_req

    def parse(self, response):
        tuchong_info_html = json.loads(response.text)
        # print(tuchong_info_html)
        postList_c = len(tuchong_info_html['postList'])
        # print(postList_c)
        for c in range(postList_c):
            print(c)
            # print(tuchong_info_html['postList'][c])
            title = tuchong_info_html['postList'][c]['title']
            print('Gallery title: ' + title)
            views = tuchong_info_html['postList'][c]['views']
            print(str(views) + ' views')
            favorites = tuchong_info_html['postList'][c]['favorites']
            print('Favorites: ' + str(favorites))
            images_c = len(tuchong_info_html['postList'][c]['images'])
            for img_c in range(images_c):
                user_id = tuchong_info_html['postList'][c]['images'][img_c]['user_id']
                img_id = tuchong_info_html['postList'][c]['images'][img_c]['img_id']
                img_url = 'https://photo.tuchong.com/{}/f/{}.jpg'.format(user_id, img_id)
                item = TuchongItem()
                item['title'] = title
                item['img_url'] = img_url
                # hand the item to the pipeline
                yield item

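With items.py, settings.py, pipelines.py and tuchong.py in place, the spider is started from the project root in the usual way; each gallery's images end up in a folder (named after the gallery title) created in the directory the command is run from:

scrapy crawl tuchong
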
Original post: https://www.cnblogs.com/pantom0122/p/9540299.html
