The code below scrapes second-hand housing listing data. It is for reference only, and it is simple, super simple.

#encoding:utf8
import requests
import re

spider() fetches the page for a given URL:

def spider(url):
    html = requests.get(url).content.decode('utf8')

Next comes the regex for the data. If you feel you cannot match it with one regex, match the pieces separately and then stitch the parts you need back together. That is what I did.

    title = re.compile('<h2><a href=".*?" target="_blank">(.*?)</a></h2>.*?<li>.*?<a target="_blank" href=".*?">(.*?)</a>.*?<a target="_blank" href=".*?">(.*?)</a>.*?</li>.*?<li class="font-balck"><span>(.*?)</span><span>(.*?)</span><span>(.*?)</span><span>(.*?)</span></li>.*?<div class="list-info-r">.*?<h3>(.*?)</h3>.*?<p>(.*?)</p>.*?<!-- <span class="btn-contrast">.*?<input name="" type="checkbox" value="" />.*?</span> -->.*?</div>', re.S)
    title_zheng = title.findall(html)

Then loop over the extracted data, use replace() to strip the junk <em> tags, and build the insert statement for the database:

    for a1, a2, a3, a4, a5, a6, a7, aa8, a9 in title_zheng:
        a8 = aa8.replace('<em>', '').replace('</em>', '')
        sql = "insert into wojia(a1,a2,a3,a4,a5,a6,a7,a8,a9) VALUES('" + a1 + "','" + a2 + "','" + a3 + "','" + a4 + "','" + a5 + "','" + a6 + "','" + a7 + "','" + a8 + "','" + a9 + "')"
        print(sql)

Below is the loop over the URLs, covering pages 1 through 7:

for p in range(1, 8):  # pages 1 to 7
    url = "http://bj.5i5j.com/exchange/n" + str(p)
    spider(url)

That is all there is to it. Done in a couple of minutes.
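As an aside on the tip about matching the pieces separately: here is a minimal sketch of building the same pattern from smaller fragments and concatenating them before compiling. The fragment names (PART_TITLE and so on) are just illustrative; the fragments themselves are cut from the full pattern used in spider().

import re

# Each field gets its own fragment; the pieces are joined back together
# before compiling, which is easier to debug one piece at a time.
PART_TITLE = '<h2><a href=".*?" target="_blank">(.*?)</a></h2>'
PART_LINKS = '.*?<li>.*?<a target="_blank" href=".*?">(.*?)</a>.*?<a target="_blank" href=".*?">(.*?)</a>.*?</li>'
PART_SPANS = '.*?<li class="font-balck"><span>(.*?)</span><span>(.*?)</span><span>(.*?)</span><span>(.*?)</span></li>'
PART_RIGHT = '.*?<div class="list-info-r">.*?<h3>(.*?)</h3>.*?<p>(.*?)</p>'
PART_TAIL = '.*?<!-- <span class="btn-contrast">.*?<input name="" type="checkbox" value="" />.*?</span> -->.*?</div>'

# The concatenated pattern has the same nine capture groups as the one-line
# version, so findall() returns the same 9-field tuples.
title = re.compile(PART_TITLE + PART_LINKS + PART_SPANS + PART_RIGHT + PART_TAIL, re.S)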
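The loop above only prints the INSERT statement; to actually write the rows you need a database connection, which the post does not show. Below is a minimal sketch assuming a local MySQL database reachable through pymysql and a wojia table whose columns a1 through a9 match the statement above. The connection parameters and the save_row() helper are placeholders, not part of the original code; parameterized %s placeholders also avoid the quoting problems of building SQL by string concatenation.

import pymysql

# Connection details are placeholders; adjust them for your own database.
conn = pymysql.connect(host='localhost', user='root', password='secret',
                       database='house', charset='utf8mb4')

def save_row(row):
    # row is one 9-field tuple from the regex, with the <em> tags already stripped
    sql = ("insert into wojia(a1,a2,a3,a4,a5,a6,a7,a8,a9) "
           "values (%s,%s,%s,%s,%s,%s,%s,%s,%s)")
    with conn.cursor() as cur:
        cur.execute(sql, row)
    conn.commit()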
Posted: 2024-11-05 22:31:57