A Node.js Crawler System
0. Overview
A crawler is a program that automatically fetches web content. It is a key component of search engines, which is why search engine optimization is largely optimization aimed at crawlers.
robots.txt is a plain-text file; it is a convention, not a command. It is the first file a crawler should check: it tells the crawler which files on the server may be fetched, and well-behaved bots limit their crawling scope accordingly.
To find a site's robots.txt, request it from the domain root. For example, for www.qq.com it lives at www.qq.com/robots.txt.
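To make the idea concrete, here is a minimal sketch of how a polite crawler might consult robots.txt before fetching a path. This is a simplified illustration only: the `isAllowed` helper and the sample rules are made up, and a real parser must also handle `Allow` lines, per-agent groups, and wildcards.

```javascript
// Minimal robots.txt check: does any Disallow rule under "User-agent: *"
// prefix-match the path? (Simplified sketch; not a full implementation
// of the Robots Exclusion Protocol.)
function isAllowed(robotsTxt, path) {
  var disallowed = [];
  var inStarGroup = false;
  robotsTxt.split('\n').forEach(function (line) {
    line = line.trim();
    if (/^User-agent:/i.test(line)) {
      inStarGroup = line.slice('User-agent:'.length).trim() === '*';
    } else if (inStarGroup && /^Disallow:/i.test(line)) {
      var rule = line.slice('Disallow:'.length).trim();
      if (rule) disallowed.push(rule); // empty Disallow means "allow all"
    }
  });
  return !disallowed.some(function (rule) {
    return path.indexOf(rule) === 0;
  });
}

// Made-up example rules:
var robots = 'User-agent: *\nDisallow: /private/\nDisallow: /tmp/\n';
console.log(isAllowed(robots, '/course/'));   // true
console.log(isAllowed(robots, '/private/x')); // false
```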
1. Setting Up the Crawler and Development Environment
Required Node modules:
- Express
- Request
- Cheerio
Create the spider project right on the desktop with the Express generator:

```
[KANO@kelvin 桌面]$ express spider
bash: express: command not found...
Install package 'nodejs-express' to provide command 'express'? [N/y] y

 * Waiting in queue...
The following packages have to be installed:
 nodejs-buffer-crc32-0.2.1-8.fc21.noarch  A pure JavaScript CRC32 algorithm that plays nice with binary data
 nodejs-commander-2.2.0-2.fc21.noarch     Node.js command-line interfaces made easy
 ... (omitted) ...
 nodejs-vhost-1.0.0-2.fc21.noarch         Virtual domain hosting middleware for Node.js and Connect
 nodejs-compressible-1.0.1-2.fc21.noarch  Compressible Content-Type/MIME checking for Node.js
 nodejs-negotiator-0.4.3-2.fc21.noarch    An HTTP content negotiator for Node.js
   create : spider/app.js
   create : spider/public
   create : spider/public/images
   create : spider/routes
   create : spider/routes/index.js
   create : spider/routes/user.js
   create : spider/public/stylesheets
   create : spider/public/stylesheets/style.css
   create : spider/views
   create : spider/views/layout.jade
   create : spider/views/index.jade
   create : spider/public/javascripts

   install dependencies:
     $ cd spider && npm install

   run the app:
     $ node app
```
Then cd into the directory and install the dependencies:

```
[KANO@kelvin 桌面]$ cd spider/
[KANO@kelvin spider]$ sudo npm install
[sudo] password for KANO:
npm http GET https://registry.npmjs.org/express/3.5.2
npm http GET https://registry.npmjs.org/jade
... (omitted) ...
npm http 200 https://registry.npmjs.org/negotiator/-/negotiator-0.3.0.tgz
jade@1.11.0 node_modules/jade
├── character-parser@1.2.1
├── void-elements@2.0.1
├── commander@2.6.0
├── mkdirp@0.5.1 (minimist@0.0.8)
├── jstransformer@0.0.2 (is-promise@2.1.0, promise@6.1.0)
├── clean-css@3.4.8 (commander@2.8.1, source-map@0.4.4)
├── constantinople@3.0.2 (acorn@2.6.4)
├── with@4.0.3 (acorn@1.2.2, acorn-globals@1.0.9)
├── transformers@2.1.0 (promise@2.0.0, css@1.0.8, uglify-js@2.2.5)
└── uglify-js@2.6.1 (uglify-to-browserify@1.0.2, async@0.2.10, source-map@0.5.3, yargs@3.10.0)

express@3.5.2 node_modules/express
├── methods@0.1.0
├── merge-descriptors@0.0.2
├── cookie@0.1.2
├── debug@0.8.1
├── cookie-signature@1.0.3
├── range-parser@1.0.0
├── fresh@0.2.2
├── buffer-crc32@0.2.1
├── mkdirp@0.4.0
├── commander@1.3.2 (keypress@0.1.0)
├── send@0.3.0 (debug@0.8.0, mime@1.2.11)
└── connect@2.14.5 (response-time@1.0.0, pause@0.0.1, connect-timeout@1.0.0, method-override@1.0.0, vhost@1.0.0, qs@0.6.6, basic-auth-connect@1.0.0, bytes@0.3.0, static-favicon@1.0.2, raw-body@1.1.4, errorhandler@1.0.0, setimmediate@1.0.1, cookie-parser@1.0.1, morgan@1.0.0, serve-static@1.1.0, express-session@1.0.2, csurf@1.1.0, serve-index@1.0.1, multiparty@2.2.0, compression@1.0.0)
```
Once installation finishes, start the app:

```
[KANO@kelvin spider]$ node app
Express server listening on port 3000
GET / 200 793ms - 170b
GET /stylesheets/style.css 200 20ms - 110b
```

It listens on port 3000 by default.
kelvin is my hostname; if you don't know yours, check with:

```
[KANO@kelvin spider]$ hostname
kelvin
```
Next, install request:

```
[KANO@kelvin spider]$ sudo npm install request --save-dev
[sudo] password for KANO:
npm http GET https://registry.npmjs.org/request
npm http 200 https://registry.npmjs.org/request
... (omitted) ...
npm http 200 https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.0.0.tgz
request@2.67.0 node_modules/request
├── is-typedarray@1.0.0
├── aws-sign2@0.6.0
├── forever-agent@0.6.1
├── caseless@0.11.0
├── stringstream@0.0.5
├── tunnel-agent@0.4.2
├── oauth-sign@0.8.0
├── isstream@0.1.2
├── json-stringify-safe@5.0.1
├── extend@3.0.0
├── node-uuid@1.4.7
├── qs@5.2.0
├── tough-cookie@2.2.1
├── form-data@1.0.0-rc3 (async@1.5.0)
├── mime-types@2.1.8 (mime-db@1.20.0)
├── combined-stream@1.0.5 (delayed-stream@1.0.0)
├── bl@1.0.0 (readable-stream@2.0.5)
├── hawk@3.1.2 (cryptiles@2.0.5, sntp@1.0.9, boom@2.10.1, hoek@2.16.3)
├── http-signature@1.1.0 (assert-plus@0.1.5, jsprim@1.2.2, sshpk@1.7.1)
└── har-validator@2.0.3 (commander@2.9.0, pinkie-promise@2.0.0, is-my-json-valid@2.12.3, chalk@1.1.1)
```
The request module is now installed. Install cheerio the same way:

```
[KANO@kelvin spider]$ sudo npm install cheerio --save-dev
[sudo] password for KANO:
npm http GET https://registry.npmjs.org/cheerio
npm http 200 https://registry.npmjs.org/cheerio
npm http GET https://registry.npmjs.org/css-select
... (omitted) ...
npm http 304 https://registry.npmjs.org/isarray/0.0.1
npm http 304 https://registry.npmjs.org/core-util-is
cheerio@0.19.0 node_modules/cheerio
├── entities@1.1.1
├── lodash@3.10.1
├── css-select@1.0.0 (boolbase@1.0.0, css-what@1.0.0, nth-check@1.0.1, domutils@1.4.3)
├── dom-serializer@0.1.0 (domelementtype@1.1.3)
└── htmlparser2@3.8.3 (domelementtype@1.3.0, domutils@1.5.1, entities@1.0.0, domhandler@2.3.0, readable-stream@1.1.13)
```
With both cheerio and request in place, the development environment is ready.
2. Crawler in Practice
The Express documentation (Chinese): www.expressjs.com.cn
Copy its example code into app.js, replacing what the generator produced.
To watch the Node process and restart it on changes, use supervisor (installed globally with npm install -g supervisor):

```
[KANO@kelvin spider]$ supervisor start app.js

Running node-supervisor with
  program 'app.js'
  --watch '.'
  --extensions 'node,js'
  --exec 'node'

Starting child process with 'node app.js'
Watching directory '/home/KANO/桌面/spider' for changes.
Express server listening on port 3000
```
Refresh the browser window to confirm it still works.
The request documentation: https://www.npmjs.com/package/request
Copy its usage example into app.js and point it at Guokr MOOC's course page:

```javascript
var express = require('express');
var app = express();
var request = require('request');

app.get('/', function(req, res){
    request('http://mooc.guokr.com/course/', function (error, response, body) {
        if (!error && response.statusCode == 200) {
            console.log(body); // dump the HTML of the Guokr course page
            res.send('hello world');
        }
    });
});

app.listen(3000);
```
Refresh kelvin:3000, and the fetched page's HTML is printed to the terminal.
Next, use cheerio to select just the content we want from the page.
The cheerio documentation: https://www.npmjs.com/package/cheerio
Analyzing the page shows that the course names we want to crawl sit in a <span> inside <h3 class="course-title">.
```javascript
var express = require('express');
var app = express();
var request = require('request');
var cheerio = require('cheerio');

app.get('/', function(req, res){
    request('http://mooc.guokr.com/course/', function (error, response, body) {
        if (!error && response.statusCode == 200) {
            var $ = cheerio.load(body); // $ is now a jQuery-style selector over the whole body
            res.json({
                'course': $('.course-title span').text()
            });
        }
    });
});

app.listen(3000);
```
Refresh again, and the page now returns the course names as JSON. A simple crawler is done.
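For intuition about what `$('.course-title span').text()` does, here is a dependency-free (and far more fragile) regex sketch over a made-up HTML fragment. Note two assumptions: the `extractCourseTitles` helper is invented for illustration, and cheerio's .text() concatenates all matched text into one string, while this sketch keeps the titles as an array.

```javascript
// Fragile regex sketch of $('.course-title span').text():
// collect the text inside <span> tags within a course-title <h3>.
// Real pages need a proper parser like cheerio; this is only for intuition.
function extractCourseTitles(html) {
  var re = /<h3 class="course-title">\s*<span>([^<]*)<\/span>/g;
  var titles = [];
  var match;
  while ((match = re.exec(html)) !== null) {
    titles.push(match[1]);
  }
  return titles;
}

// Made-up HTML fragment shaped like the course list:
var sample =
  '<h3 class="course-title"><span>Course A</span></h3>' +
  '<h3 class="course-title"><span>Course B</span></h3>';
console.log(extractCourseTitles(sample)); // [ 'Course A', 'Course B' ]
```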
But supporting asynchronous requests would require further changes to the code, and the scraped data still needs processing... There is plenty left to polish; that's it for today's first look at Node.js crawlers.
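The asynchronous-request direction mentioned above can be sketched by wrapping each fetch in a Promise and gathering the results with Promise.all. The `fakeFetch` below is a stand-in for a real request() call (no network involved); it exists only to show the control flow.

```javascript
// fakeFetch stands in for request(url, callback): it "returns" a body
// asynchronously via a callback, like a real HTTP fetch would.
function fakeFetch(url, callback) {
  setTimeout(function () {
    callback(null, '<html>' + url + '</html>');
  }, 10);
}

// Wrap the callback-style fetch in a Promise...
function fetchPage(url) {
  return new Promise(function (resolve, reject) {
    fakeFetch(url, function (error, body) {
      if (error) reject(error);
      else resolve(body);
    });
  });
}

// ...then crawl several pages concurrently and wait for all of them.
var urls = ['/course/1', '/course/2', '/course/3'];
Promise.all(urls.map(fetchPage)).then(function (bodies) {
  console.log(bodies.length); // 3: every page was fetched
});
```

In a real crawler you would also cap concurrency rather than firing every request at once, to avoid hammering the target site.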
Posted: 2024-07-30 10:15:27