Today I came across an interesting module on a blog, called trip (pip install trip).
It is compatible with Python 2.7, and built on two major dependencies, as the name spells out: TRIP: Tornado & Requests In Pair.
Let's look at some simple code first:
import trip

@trip.coroutine
def main():
    r = yield trip.get('http://www.baidu.com/')
    print(r.content)

trip.run(main)
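The key idea is that `yield trip.get(...)` suspends the coroutine until the response arrives, instead of blocking the whole process. Since trip targets Python 2.7, here is a minimal sketch of the same coroutine pattern using only Python 3's standard-library asyncio; it is an illustration of the suspend-and-resume flow, not trip's API, and the `asyncio.sleep(0)` call is a hypothetical stand-in for an awaitable HTTP request.

```python
import asyncio

async def main():
    # In trip, `yield trip.get(...)` suspends the coroutine until the
    # response arrives; `await` plays the same role in asyncio.
    await asyncio.sleep(0)  # hypothetical stand-in for an awaitable HTTP request
    return 'request finished'

result = asyncio.run(main())
print(result)
```

While one coroutine is suspended on I/O, the event loop is free to run others; that is where the speedup in the benchmark below comes from.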
So I ran a comparison:
import time, functools

import requests
import trip

def timeit(fn):
    start_time = time.time()
    fn()
    return time.time() - start_time

url = 'https://www.baidu.com/'
times = 100

def fetch():
    r = [requests.get(url) for i in range(times)]
    return r

@trip.coroutine
def async_fetch():
    r = yield [trip.get(url) for i in range(times)]
    raise trip.Return(r)

print("[+]Non-trip cost: %ss" % timeit(fetch))
print("[+]Trip cost: %ss" % timeit(functools.partial(trip.run, async_fetch)))

# Result:
# [+]Non-trip cost: 14.9129998684s
# [+]Trip cost: 1.83399987221s
14.9 seconds versus 1.8 seconds: the benefit is obvious!
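The speedup exists because the requests are I/O-bound: sequential fetching pays the network latency once per request, while concurrent fetching overlaps the waits so the total cost is close to a single round trip. A self-contained sketch of that effect, using asyncio and a hypothetical `fake_get` that simulates latency with `asyncio.sleep` instead of a real HTTP call:

```python
import asyncio
import time

DELAY = 0.05   # simulated per-request network latency, in seconds
N = 10         # number of simulated requests

async def fake_get(url):
    await asyncio.sleep(DELAY)  # stands in for waiting on the network
    return url

async def fetch_all():
    # Launching all requests at once means the total wait is roughly
    # one DELAY, not N * DELAY as in the sequential version.
    return await asyncio.gather(*[fake_get('http://example.com') for _ in range(N)])

start = time.time()
results = asyncio.run(fetch_all())
concurrent_cost = time.time() - start
print('concurrent: %.3fs for %d requests' % (concurrent_cost, N))
```

Sequentially, the same ten simulated requests would take about N * DELAY = 0.5 s; concurrently they finish in roughly one DELAY, which mirrors the 14.9 s vs 1.8 s gap above.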
A comparison in a crawler setting. The ordinary crawler:
import requests

url = 'http://httpbin.org'
s = requests.Session()

def fetch(times=10):
    s.get('%s/cookies/set?name=value' % url)
    r = [s.get('%s/get' % url) for i in range(times)]
    print r

fetch()
And the version optimized with trip:
import trip

url = 'http://httpbin.org'
s = trip.Session()

@trip.coroutine
def fetch(times=10):
    yield s.get('%s/cookies/set?name=value' % url)
    r = yield [s.get('%s/get' % url) for i in range(times)]
    print r

trip.run(fetch)
The changes from the original code are minimal.
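Note the ordering in the trip crawler: the cookie-setting request is yielded on its own first, so it completes before the batch of GETs is fired off concurrently. The same sequencing can be sketched with asyncio; `fake_get` here is a hypothetical stand-in that records call order instead of hitting the network:

```python
import asyncio

calls = []  # records the order in which requests are issued

async def fake_get(path):
    calls.append(path)
    await asyncio.sleep(0)
    return path

async def fetch(times=3):
    # Awaited on its own first: the cookie-setting request finishes
    # before the batch of GETs starts, just as in the trip version.
    await fake_get('/cookies/set')
    # Awaiting the whole list fires the GETs concurrently.
    return await asyncio.gather(*[fake_get('/get') for _ in range(times)])

r = asyncio.run(fetch())
print(r)
```

If the cookie request were placed inside the batch instead, some GETs could run before the cookie was set, so keeping it as a separate yield/await preserves the session semantics.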
Time: 2024-10-04 04:04:49