How Scrapy's Request Deduplication Works
Scrapy ships with built-in request deduplication: the scheduler passes every request through a duplicate filter before queuing it.
In the Scrapy source, this filter lives in dupefilters.py (the RFPDupeFilter class). It keeps a set of request fingerprints and drops any request whose fingerprint it has already seen.
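To make the flow concrete, here is a minimal sketch of how such a filter plugs into the fingerprint function shown below. It mirrors the logic of RFPDupeFilter.request_seen but omits the real class's support for persisting fingerprints to disk (JOBDIR) and its debug logging, so treat it as an illustration rather than the actual source:

from scrapy.utils.request import request_fingerprint

class SimpleDupeFilter:
    """Illustrative stand-in for scrapy.dupefilters.RFPDupeFilter."""

    def __init__(self):
        # In-memory set of SHA1 hex digests of requests seen so far.
        self.fingerprints = set()

    def request_seen(self, request):
        # Called by the scheduler for every request (unless the request
        # sets dont_filter=True); returning True means "duplicate, drop it".
        fp = request_fingerprint(request)
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        return False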
The fingerprinting algorithm from the source (scrapy/utils/request.py):
# The SHA1 hex digest returned here is what the dupe filter stores in its
# set; a new request hashing to a fingerprint already in the set is a duplicate.
import hashlib
import weakref

from w3lib.url import canonicalize_url

from scrapy.utils.python import to_bytes

# Per-request cache so the fingerprint is only computed once per request.
_fingerprint_cache = weakref.WeakKeyDictionary()

def request_fingerprint(request, include_headers=None):
    if include_headers:
        include_headers = tuple(to_bytes(h.lower()) for h in sorted(include_headers))
    cache = _fingerprint_cache.setdefault(request, {})
    if include_headers not in cache:
        fp = hashlib.sha1()
        # Hash the method, the canonicalized URL (query parameters sorted,
        # fragments dropped) and the body...
        fp.update(to_bytes(request.method))
        fp.update(to_bytes(canonicalize_url(request.url)))
        fp.update(request.body or b'')
        # ...plus any explicitly requested headers.
        if include_headers:
            for hdr in include_headers:
                if hdr in request.headers:
                    fp.update(hdr)
                    for v in request.headers.getlist(hdr):
                        fp.update(v)
        cache[include_headers] = fp.hexdigest()
    return cache[include_headers]
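A quick check of the canonicalization step (my own example, not from the original post): because canonicalize_url sorts query parameters before hashing, two requests whose URLs differ only in parameter order yield identical fingerprints.

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

r1 = Request('http://example.com/page?a=1&b=2')
r2 = Request('http://example.com/page?b=2&a=1')
# Same method, same canonical URL, same (empty) body -> same SHA1 digest.
print(request_fingerprint(r1) == request_fingerprint(r2))  # True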
Original article (Chinese): https://www.cnblogs.com/cq146637/p/9077500.html