Python crawler with proxies: a scraping example using BeautifulSoup + requests + MySQL

Implementation approach:

Because the target site has anti-scraping measures, the crawler rotates through proxies while fetching pages; the responses are parsed with BeautifulSoup and the results are finally written to MySQL.

1. Fetch proxy IPs from the Xici free proxy site (www.xicidaili.com) and check each one for availability.

2. Send requests through the requests module with one of the verified proxies attached.

3. Store the parsed content in the MySQL database.

Note: the log_config logging module was covered in the previous post.
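
Since that module isn't reproduced here, the sketch below is only a minimal stand-in: it assumes nothing beyond what the two scripts below rely on, namely that getlogger(name, log_file) returns a standard logging.Logger writing to the given file.

# -*- coding: utf-8 -*-
# log_config.py -- minimal stand-in; the real module from the previous post may differ
import logging

def getlogger(name, log_file):
    # Return a named logger that appends timestamped records to log_file
    logger = logging.getLogger(name)
    if not logger.handlers:
        logger.setLevel(logging.INFO)
        handler = logging.FileHandler(log_file)
        handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s'))
        logger.addHandler(handler)
    return logger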

The code is attached below.

1. Collecting working proxies

# -*- coding: utf-8 -*-
import random
import time
import requests
from bs4 import BeautifulSoup
import log_config
logger = log_config.getlogger('ip_pool', 'ip_pool.log')

class IPProxyPool:
    # Initialize an empty list, ip_list, that will hold the verified proxies
    def __init__(self):

        # Pages of the free proxy site to harvest proxy IPs from
        self.proxy_url_list = ['http://www.xicidaili.com', 'http://www.xicidaili.com/nn', 'http://www.xicidaili.com/nn/2']
        self.ip_list = []
        self.headers = {'Host': 'www.xicidaili.com',
                        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36'
                        }
        self.user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]

    def get_xici_all_ip(self):
        ip_lists = []
        for proxy_url in self.proxy_url_list:
            html = requests.get(proxy_url, headers=self.headers)
            content = html.text
            soup = BeautifulSoup(content, "lxml")
            ip_body = soup.find("table", attrs={"id": "ip_list"})
            ip_page_lists = ip_body.find_all("tr", attrs={"class": "odd"})
            ip_lists = ip_lists + ip_page_lists
        return ip_lists

    def get_ip_list(self):
        ip_lists = self.get_xici_all_ip()
        ip_test_pool = []
        for ip in ip_lists:
            tds = ip.find_all("td")
            http_type = tds[5].get_text()
            if http_type == 'HTTP':
                ip_test_account = tds[1].get_text()
                ip_test_port = tds[2].get_text()
                ip_port_dict = {ip_test_account: ip_test_port}
                ip_test_pool.append(ip_port_dict)

        for ipn in ip_test_pool:
            ip_addr = "http://"
            for ip, port in ipn.items():
                ip_addr = ip_addr + ip + ':' + port
            # Check that the proxy actually works
            status = self.check_ip(ip_addr)
            if status:
                # Keep only the working proxies in ip_list
                self.ip_list.append(ip_addr.strip())

    def check_ip(self, ip):
        return self.microbell_proxy_ip(ip)

    def microbell_proxy_ip(self, ip):
        try:
            test_url = 'http://www.microbell.com/elitelist.html'
            proxy = {'http': ip}
            user_agent = self.random_agent()
            headers_agent = {'User-Agent': user_agent}
            response_body = requests.get(test_url, headers=headers_agent, proxies=proxy, timeout=5)
            if response_body.status_code == 200:
                # Even a 200 response may come from the proxy itself rather than the target page,
                # so verify that the expected content is actually present
                # response_body.encoding = 'gbk'
                content = response_body.text
                soup = BeautifulSoup(content, "lxml")
                body = soup.find("div", attrs={"class": "index_docmain"})
                if body is None:
                    return False
                if body.get_text() != "":
                    logger.info("ok proxy ip %s" % ip)
                    return True
                else:
                    return False

            else:
                return False

        except Exception as e:
            logger.exception("proxy check failed for %s: %s" % (ip, e))
            time.sleep(1)
            return False

    def random_agent(self):
        user_agent = random.choice(self.user_agent_list)
        return user_agent

if __name__ == "__main__":
    ip_proxy_pool = IPProxyPool()
    ip_proxy_pool.get_ip_list()
    print ip_proxy_pool.ip_list

    # proxies = {
    #     "http": "http://118.190.95.35:9001"  # proxy IP
    # }
    #
    # headers = {
    #     'Host': 'www.4399.com',
    #     'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36'
    #
    # }
    #
    # http_url = "http://www.4399.com/"
    # try:
    #     res = requests.get(url=http_url, headers=headers, proxies=proxies, timeout=3)
    #     if res.status_code == 200:
    #         print "page fetched successfully"
    #     else:
    #         print "failed"
    # except Exception as e:
    #     print e
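
As an aside, here is a rough sketch (not part of the original scripts) of how the pool could be plugged into requests: save the script above as agent_and_proxy_ip_pool.py (the module name imported by the second script), pick a random entry from ip_list, and pass it through the proxies argument.

# Sketch: fetch a URL through a randomly chosen proxy from the pool
import random
import requests
from agent_and_proxy_ip_pool import IPProxyPool

pool = IPProxyPool()
pool.get_ip_list()

def fetch(url):
    # Each entry in ip_list is already a full "http://ip:port" string
    proxies = {'http': random.choice(pool.ip_list)}
    headers = {'User-Agent': pool.random_agent()}
    return requests.get(url, headers=headers, proxies=proxies, timeout=5)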

2. Parsing the pages, extracting the desired content, and inserting it into the database
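
The script reads its MySQL connection settings from a mysql.conf file through ConfigParser. That file isn't shown in the original post; judging from the conf.get() calls it needs a [mysql] section roughly like the one below (all values are placeholders).

# mysql.conf -- placeholder values, adjust to your own environment
[mysql]
user = root
password = your_password
database = spider
host = 127.0.0.1
port = 3306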

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import ConfigParser
import datetime
import sys
import pymysql
import requests
from bs4 import BeautifulSoup
import log_config
import time
import random
from agent_and_proxy_ip_pool import IPProxyPool

logger = log_config.getlogger('report', 'report.log')

# Whether to fetch only today's data: 0 -> today's data only, 1 -> all data
# (elsewhere in the code, 2 is used as a flag meaning a page returned no data)
get_today_data = 1
if len(sys.argv) != 1:
    if sys.argv[1] in ('0', '1'):
        get_today_data = int(sys.argv[1])
    else:
        print 'input error, please input 0->today, 1->all data'
        exit()

class research_report:
    def __init__(self):
        conf = ConfigParser.ConfigParser()
        conf.read("mysql.conf")
        self.ip_proxy_pool = IPProxyPool()
        # self.ip_proxy_pool.get_ip_list()
        # self.ip_pool = self.ip_proxy_pool.ip_list
        # logger.info('You can currently use IP %s' % self.ip_pool)
        # Pool of reusable cookies (more can be added)
        self.cookies_pool = [
            'c=; ASPSESSIONIDQCQRQQCR=LEOOBOJCBAMFFDHMFBHFJKEE; __guid=188006958.3779224451650617000.1539657585525.2588; ASPSESSIONIDSATRTTDQ=MCDEIPFDLLKBNHPBBEMGBGFC; safedog-flow-item=C07B93F771; UM_distinctid=16680b1e9e411f-0674a4c85ccc2-454c092b-1fa400-16680b1e9e539d; CNZZDATA1752123=cnzz_eid%3D2075545357-1539752826-%26ntime%3D1539752826; Hm_lvt_d554f0f6d738d9e505c72769d450253d=1539757436; robih=vXuWjYMDvV6XmNxOuNmP; MBpermission=0; MBname=sunyue1993; did=67A671BFE; monitor_count=6; Hm_lpvt_d554f0f6d738d9e505c72769d450253d=1539757719'
        ]
        self.get_today = get_today_data
        self.user = conf.get("mysql", "user")
        self.mysql_password = conf.get("mysql", "password")
        self.database_name = conf.get("mysql", "database")
        self.host = conf.get("mysql", "host")
        self.port = conf.get("mysql", "port")
        self.site_url = 'http://www.microbell.com/'
        self.page_url = 'http://www.microbell.com/elitelist_1_0.html'
        self.headers = {'Host': 'www.microbell.com',
                        'Accept': 'application/json, text/javascript, */*; q=0.01',
                        'Accept-Encoding': 'gzip, deflate, sdch', 'Accept-Language': 'zh-CN,zh;q=0.8',
                        'Connection': 'keep-alive'
                        }

    # Set a random User-Agent and a random cookie on the request headers
    def get_random_headers(self):
        # self.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36'
        self.headers['User-Agent'] = self.ip_proxy_pool.random_agent()
        self.headers['Cookie'] = random.choice(self.cookies_pool)

    # Pick a random proxy from the pool (requires self.ip_pool to be populated in __init__ first)
    def get_random_proxy(self):
        proxy_ip = random.choice(self.ip_pool)
        proxies = {'http': proxy_ip}
        return proxies

    # Fetch one listing page and collect the title and URL of every report on it
    def get_html_content(self, page_num_url):
        try:
            self.get_random_headers()
            req = requests.get(page_num_url, headers=self.headers, timeout=5)
            req.encoding = 'gbk'
            text = req.text
            soup = BeautifulSoup(text, "lxml")
            # soup = body.prettify  # pretty-print, for debugging only
            report_list = soup.find_all("div", attrs={"class": "classbaogao_sousuo_list"})
            list_data = []
            logger.info("%s has %s reports" % (page_num_url, len(report_list)))
            if len(report_list) == 0:
                return 2
            for report_item in report_list:
                url = self.site_url + report_item.table.tr.find_all("td")[1].a["href"]
                title = report_item.table.tr.find_all("td")[1].a["title"]
                item_data = {"url": url, "title": title}
                list_data.append(item_data)
            end_flag = self.get_list_page_data(list_data)
            return end_flag
        except Exception as e:
            logger.exception("get list %s page fail error info : %s" % (page_num_url, e))
            return 2

    # Fetch the detail page for each report (up to 38 per listing page)
    def get_list_page_data(self, list_data):
        try:
            # Accumulate one listing page's worth of rows and insert them in a single batch
            page_datas = []
            now_date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            self.get_random_headers()
            #proxy_ip = self.get_random_proxy()
            for item_data in list_data:
                retry_num = 0
                while retry_num < 3:
                    try:
                        # Sleep 3-5 seconds between requests to avoid being blocked
                        t = random.uniform(3, 5)
                        time.sleep(t)
                        req = requests.get(item_data["url"], headers=self.headers, timeout=5)
                        req.encoding = 'gbk'
                        text = req.text
                        soup = BeautifulSoup(text, "lxml")
                        detail_div = soup.find("div", attrs={"class": "leftn2"})
                        tr_s = detail_div.table.find_all("tr")
                        public_time = tr_s[0].find_all("td")[2].span.get_text()
                        if self.get_today == 0:
                            # When scraping only today's data: if the report currently being
                            # fetched was published before today, break out of the loop
                            logger.info("now spider today data")
                            today = datetime.date.today()
                            today_time = int(time.mktime(today.timetuple()))
                            time_array = time.strptime(public_time, "%Y-%m-%d %H:%M:%S")
                            pub_time = int(time.mktime(time_array))
                            if pub_time < today_time:
                                break
                        abstract_br_replace = soup.find("div", attrs={"class": "p_main"}).p.span
                        str1 = str(abstract_br_replace).replace("<br/>", r"\r\n")
                        abstract_object = BeautifulSoup(str1, "lxml")
                        [s.extract() for s in abstract_object("font")]
                        abstract = abstract_object.get_text()
                        sec_name = tr_s[0].find_all("td")[0].span.get_text()
                        sec_code = tr_s[0].find_all("td")[1].span.get_text()
                        report_type = tr_s[1].find_all("td")[0].span.get_text()
                        doc_type = tr_s[1].find_all("td")[1].span.get_text()
                        author = tr_s[1].find_all("td")[2].span.get_text()
                        provenance = tr_s[2].find_all("td")[0].span.get_text()
                        pages = tr_s[2].find_all("td")[1].span.get_text()
                        rec_rate = tr_s[2].find_all("td")[2].span.get_text()
                        doc_size = tr_s[3].find_all("td")[0].span.get_text()
                        promulgator = tr_s[3].find_all("td")[1].span.get_text()
                        # doc_url_str = soup.find("div", attrs={"class": "anniu_main"}).a["onclick"]
                        doc_url_str = ""
                        doc_url_list = doc_url_str.split(",")
                        # Guard: with the onclick extraction commented out, doc_url_str is empty and
                        # doc_url_list has a single element, so avoid an IndexError here
                        doc_url = self.site_url + doc_url_list[2] if len(doc_url_list) > 2 else ""
                        title = item_data["title"]
                        create_time = now_date
                        update_time = now_date
                        page_data = [title, sec_name, sec_code, public_time, report_type, doc_type, author, provenance,
                                     pages,
                                     rec_rate, doc_size, doc_url, promulgator, abstract, create_time, update_time]
                        page_datas.append(page_data)
                        break
                    except Exception as e:
                        retry_num += 1
                if retry_num == 3:
                    logger.warning("failed to fetch item after 3 retries: %s" % item_data)
            if len(page_datas) > 0:
                self.set_data_mysql(page_datas)
                if self.get_today == 0:
                    if len(page_datas) < 38:
                        return 0
                return 1
            else:
                return 2
        except Exception as e:
            logger.exception("get detail page fail for %s: %s" % (list_data, e))
            return 2

    # Batch-insert one page of rows into MySQL
    def set_data_mysql(self, page_datas):
        # Create the connection
        conn = pymysql.connect(host=self.host, port=int(self.port), user=self.user, passwd=self.mysql_password,
                               db=self.database_name)
        try:
            # Create a cursor
            cursor = conn.cursor()
            sql = ("INSERT INTO report(title,sec_name,sec_code,public_time,report_type,doc_type,author,"
                   "provenance,pages,rec_rate,doc_size,doc_url,promulgator,abstract,create_time,update_time) "
                   "VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)")
            effect_row = cursor.executemany(sql, page_datas)
            # Commit, otherwise nothing is written to MySQL
            conn.commit()
            logger.info("inserted %s rows into the database" % effect_row)
        finally:
            conn.close()

    # Log in to obtain cookies; not used for now
    # def login_in(self):
    #     data = {
    #         'namelogin': self.user_name,
    #         'pwdlogin': self.password
    #     }
    #     req = requests.post(self.login_url, headers=self.headers, data=data)
    #     req.encoding = req.apparent_encoding
    #     cookies = req.cookies.get_dict()
    #     print cookies

    # http://www.microbell.com/elitelist_1_0.html is the first listing page; only the "1" in the URL changes for later pages
    def process(self):
        # Walk the listing pages (the site has no more than 360 pages in total)
        if get_today_data == 0:
            for i in range(1, 20):
                base_url = "http://www.microbell.com/elitelist_%s_0.html" % i
                logger.info("fetching page url=%s" % base_url)
                end_flag = self.get_html_content(base_url)
                if end_flag == 0:
                    logger.info("The page %s is already the last page" % base_url)
                    break
        else:
            for i in reversed(range(1, 107)):
                base_url = "http://www.microbell.com/elitelist_%s_0.html" % i
                logger.info("fetching page url=%s" % base_url)
                self.get_html_content(base_url)

if __name__ == "__main__":
    research_class = research_report()
    research_class.process()
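
The INSERT in set_data_mysql expects a report table with sixteen columns. The original post doesn't include the schema, so the snippet below is only a guess at plausible column types matching the fields the script extracts; the connection values are the same placeholders as in the mysql.conf example above.

# -*- coding: utf-8 -*-
# One-off helper to create the report table; the column names come from the INSERT above,
# the column types and connection values are assumptions
import pymysql

DDL = """
CREATE TABLE IF NOT EXISTS report (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255),
    sec_name VARCHAR(64),
    sec_code VARCHAR(32),
    public_time VARCHAR(32),
    report_type VARCHAR(64),
    doc_type VARCHAR(64),
    author VARCHAR(64),
    provenance VARCHAR(128),
    pages VARCHAR(16),
    rec_rate VARCHAR(32),
    doc_size VARCHAR(32),
    doc_url VARCHAR(512),
    promulgator VARCHAR(64),
    abstract TEXT,
    create_time DATETIME,
    update_time DATETIME
) DEFAULT CHARSET=utf8
"""

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='your_password', db='spider')
try:
    cursor = conn.cursor()
    cursor.execute(DDL)
    conn.commit()
finally:
    conn.close()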

Original article: https://www.cnblogs.com/keepMoveForevery/p/9815456.html
