22. Tianyancha (天眼查): collecting data with a cookie-based simulated login

Log in with an account to obtain cookies and simulate a logged-in session (a Tianyancha account is required). A VIP account can view up to 5,000 companies, while a regular account is limited to 100, and you also need some anti-crawl measures so the account does not get banned. The idea is to take the cookies from an authorized account and use them when requesting pages. This approach still feels less than ideal: in earlier scraping work I avoided login and captcha entirely, because whether and how you may take this data is ultimately up to the site. This is only a quick test; real-world collection will run into all kinds of problems, so treat it as one possible solution idea for reference. Without a valid login cookie the requests get detected, as shown in the figure (figure not included here):
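
Before the full script, here is a minimal sketch of the cookie step, assuming you have copied the Cookie header from your own logged-in browser session; the cookie value below is a placeholder, not a working token. It loads the pairs into a requests.Session and checks that the homepage request goes through.

# Minimal sketch: turn a browser-copied Cookie header into a requests session.
# The cookie string is a placeholder -- paste the one from your logged-in browser.
import requests

raw_cookie = 'TYCID=xxx; auth_token=xxx'  # placeholder values
session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/70.0.3538.67 Safari/537.36',
    'Referer': 'https://www.tianyancha.com/',
})
# Split the raw header into name=value pairs and put them in the session's cookie jar
for pair in raw_cookie.split('; '):
    name, _, value = pair.partition('=')
    session.cookies.set(name, value, domain='www.tianyancha.com')

resp = session.get('https://www.tianyancha.com/')
print(resp.status_code)  # 200 plus your account name in the HTML means the cookie is accepted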


# coding:utf-8

import requests
from lxml import etree

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Connection': 'keep-alive',
    'Cookie': 'TYCID=b9583550959d11e897e06dcb73cfa6e2; undefined=b9583550959d11e897e06dcb73cfa6e2; _ga=GA1.2.442696904.1533136553; ssuid=9383237588; aliyungf_tc=AQAAAI5OlAkhwQcA7nGBd5KVOLSC9NYt; csrfToken=-56Od6hSl_S1CmBVCzLLBYEI; _gid=GA1.2.928876903.1541388137; _gat_gtag_UA_123487620_1=1; Hm_lvt_e92c8d65d92d534b0fc290df538b4758=1541393330,1541393799,1541394021,1541394112; Hm_lpvt_e92c8d65d92d534b0fc290df538b4758=1541394115; tyc-user-info=%257B%2522myQuestionCount%2522%253A%25220%2522%252C%2522integrity%2522%253A%25220%2525%2522%252C%2522state%2522%253A%25220%2522%252C%2522vipManager%2522%253A%25220%2522%252C%2522onum%2522%253A%25220%2522%252C%2522monitorUnreadCount%2522%253A%25226%2522%252C%2522discussCommendCount%2522%253A%25220%2522%252C%2522token%2522%253A%2522eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiIxODIzNjUzMTkwNiIsImlhdCI6MTU0MTM5NDEzMSwiZXhwIjoxNTU2OTQ2MTMxfQ.biIMiqd7l2LBwARywkoJ4J-dFh7zT-SSzz0V-GKc9r4EENomkv-1SA68RvVn0sZUzN_3wHbrw-Sl0ksedBgNGA%2522%252C%2522redPoint%2522%253A%25220%2522%252C%2522pleaseAnswerCount%2522%253A%25220%2522%252C%2522vnum%2522%253A%25220%2522%252C%2522bizCardUnread%2522%253A%25220%2522%252C%2522mobile%2522%253A%252218236531906%2522%257D; auth_token=eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiIxODIzNjUzMTkwNiIsImlhdCI6MTU0MTM5NDEzMSwiZXhwIjoxNTU2OTQ2MTMxfQ.biIMiqd7l2LBwARywkoJ4J-dFh7zT-SSzz0V-GKc9r4EENomkv-1SA68RvVn0sZUzN_3wHbrw-Sl0ksedBgNGA',
    'Host': 'www.tianyancha.com',
    'Referer': 'https://www.tianyancha.com/',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.67 Safari/537.36',
}

# Search keywords
keywords = ['佛山']

for page in range(1, 6):  # search result pages 1-5
    for keyword in keywords:
        url = 'https://www.tianyancha.com/search/p{}?key={}'.format(page, keyword)
        # GET the search results page (the keyword is already part of the URL)
        response = requests.get(url, headers=headers)
        # print(response.text)
        html = etree.HTML(response.text)
        # Links to the company detail pages on the current results page
        link_urls = html.xpath("//div[@class='content']/div[@class='header']/a/@href")
        for link_url in link_urls:
            # print(link_url)
            response = requests.get(link_url, headers=headers)
            # print(response.text)
            html2 = etree.HTML(response.text)
            # Company name: lxml xpath returns a list, so take the first match if any
            names = html2.xpath("//h1[@class='name']/text()")
            company = names[0].strip() if names else None
            print(company)
        print('*' * 100)
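
The script above fires requests back to back. As a hedged sketch of the anti-crawl measures mentioned in the intro (not part of the original post), the helper below reuses one session, sleeps a random interval after each page, and backs off on non-200 responses; the delay values are illustrative assumptions, not values tuned against the real site.

# Hedged sketch of simple request throttling to reduce the risk of the account being flagged.
import random
import time

import requests

def polite_get(session, url, min_delay=2.0, max_delay=5.0, retries=3):
    """GET url with the given session, sleeping a random delay after a successful fetch."""
    resp = None
    for attempt in range(retries):
        resp = session.get(url, timeout=10)
        if resp.status_code == 200:
            time.sleep(random.uniform(min_delay, max_delay))
            return resp
        # Non-200 (e.g. redirected to a verification page): back off and retry
        time.sleep(10 * (attempt + 1))
    return resp

# Usage with the headers dict from the script above:
# session = requests.Session()
# session.headers.update(headers)
# response = polite_get(session, url)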

Original article: https://www.cnblogs.com/lvjing/p/9909088.html

