Logstash log collection and analysis system
Logstash provides a powerful pipeline for storing, querying, and analyzing your logs. When using Elasticsearch as a backend data store and Kibana as a frontend reporting tool, Logstash acts as the workhorse. It includes an arsenal of built-in inputs, filters, codecs, and outputs, enabling you to harness some powerful functionality with a small amount of effort.
http://semicomplete.com/files/logstash/    Logstash collects the logs; it requires a Java runtime
    logstash-1.4.2.tar.tar  jdk-7u67-linux-x64.rpm
http://www.elasticsearch.org/overview/elkdownloads    Elasticsearch is the search engine; this page also links to the documentation
http://www.elasticsearch.org/overview/kibana/installation/    Kibana provides the web UI
http://redis.io/download    Redis (redis-2.8.19.tar.gz)
Documentation:
http://www.elasticsearch.org/guide/
http://logstash.net/docs/1.4.2/
https://github.com/elasticsearch/kibana/blob/master/README.md
#Install Java and Redis
# rpm -ivh jdk-7u67-linux-x64.rpm
# /usr/java/jdk1.7.0_67/bin/java -version
# vim ~/.bashrc
export JAVA_HOME=/usr/java/jdk1.7.0_67
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
# . ~/.bashrc
# java -version    #verify Java
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
# tar -xzf redis-2.8.19.tar.gz
# cd redis-2.8.19
# make
# make install
# ./utils/install_server.sh
Port           : 6379
Config file    : /etc/redis/6379.conf
Log file       : /var/log/redis_6379.log
Data dir       : /var/lib/redis/6379
Executable     : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
# service redis_6379 restart    #start Redis
# redis-cli ping
PONG
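Logstash will later use Redis as a list-based broker (the "logstash" key in the final config), so it is worth confirming that list operations work. A minimal sanity check, assuming redis-server is up on the default port; the key name "demo" is arbitrary:
# redis-cli rpush demo "hello"    #push one element onto a list, returns (integer) 1
# redis-cli llen demo             #the list length is now 1
# redis-cli lpop demo             #pop the element back off, prints "hello"
# redis-cli del demo              #clean up the test key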
#Install Logstash and Elasticsearch
# mkdir /var/www/logstash
# unzip elasticsearch-1.4.2.zip -d /var/www/logstash
# cd /var/www/logstash
# ln -s elasticsearch-1.4.2/ elasticsearch
# cd elasticsearch
# ./bin/elasticsearch -f    #start Elasticsearch with the default config
getopt: invalid option -- 'f'
[2015-02-09 16:15:24,502][INFO ][node                     ] [Amergin] version[1.4.2], pid[4718], build[927caff/2014-12-16T14:11:12Z]
[2015-02-09 16:15:24,502][INFO ][node                     ] [Amergin] initializing ...
[2015-02-09 16:15:24,518][INFO ][plugins                  ] [Amergin] loaded [], sites []
[2015-02-09 16:15:27,945][INFO ][node                     ] [Amergin] initialized
[2015-02-09 16:15:27,945][INFO ][node                     ] [Amergin] starting ...
[2015-02-09 16:15:28,232][INFO ][transport                ] [Amergin] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.10.1:9300]}
[2015-02-09 16:15:28,300][INFO ][discovery                ] [Amergin] elasticsearch/mvrxUfixSPKQKzb3s_nFug
[2015-02-09 16:15:32,091][INFO ][cluster.service          ] [Amergin] new_master [Amergin][mvrxUfixSPKQKzb3s_nFug][manager][inet[/192.168.10.1:9300]], reason: zen-disco-join (elected_as_master)
[2015-02-09 16:15:32,143][INFO ][http                     ] [Amergin] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.10.1:9200]}
[2015-02-09 16:15:32,143][INFO ][node                     ] [Amergin] started
[2015-02-09 16:15:32,162][INFO ][gateway                  ] [Amergin] recovered [0] indices into cluster_state
# curl -X GET http://localhost:9200    #or open http://192.168.10.1:9200/ in a browser
{
  "status" : 200,
  "name" : "Amergin",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
# tar -xzf logstash-1.4.2.tar.tar
# cd logstash-1.4.2
# ./bin/logstash -h    #show the help
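Besides the root endpoint, the _cluster/health API gives a quick view of cluster state. A single node holding logstash-style indices (which default to one replica) will report status yellow, because the replica shards have no second node to live on:
# curl 'http://localhost:9200/_cluster/health?pretty'    #shows status, node count, and shard allocation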
#The tests below show how Logstash works
# echo "`date` hello world"
Mon Feb 9 16:36:15 CST 2015 hello world
#Test Logstash's stdin and stdout, as follows:
# bin/logstash -e 'input { stdin { } } output { stdout {} }'
Mon Feb 9 16:36:15 CST 2015 hello world    #enter this line by pasting it rather than typing it
2015-02-09T08:36:23.190+0000 manager Mon Feb 9 16:36:15 CST 2015 hello world    #the event after Logstash processing
#Next, test stdin events as stored by Elasticsearch, as follows:
# /var/www/logstash/elasticsearch/bin/elasticsearch -f    #start Elasticsearch at the same time
# bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'
you know, for logs    #enter this line
# curl 'http://localhost:9200/_search?pretty'    #show the data as stored by Elasticsearch
{
  "took" : 64,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "logstash-2015.02.09",
      "_type" : "logs",
      "_id" : "IFmPqi0dQjSNZR5-94NuHg",
      "_score" : 1.0,
      "_source":{"message":"you know, for logs","@version":"1","@timestamp":"2015-02-09T08:48:48.747Z","host":"manager"}
    } ]
  }
}
#You've successfully stashed logs in Elasticsearch via Logstash
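The _search endpoint also accepts a Lucene query string via the q parameter, which is a quick way to check that a specific event arrived. The index name below follows the logstash-YYYY.MM.DD convention and changes daily:
# curl 'http://localhost:9200/logstash-2015.02.09/_search?pretty&q=message:logs'    #find the event by a word in its message field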
#Install an Elasticsearch plugin and try it out
# cd /var/www/logstash/elasticsearch/bin/
# ./plugin -install lmenezes/elasticsearch-kopf    #install the kopf plugin
#Test the kopf plugin:
# /var/www/logstash/elasticsearch/bin/elasticsearch -f
# bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } stdout { } }'
hello world
2015-02-09T09:07:35.590+0000 manager hello world
hello logstash
2015-02-09T09:09:26.981+0000 manager hello logstash
# curl 'http://localhost:9200/_search?pretty'       #shows the log events just entered, already structured
# curl 'http://localhost:9200/_plugin/kopf/'        #returns the plugin page, though there is little to see from curl
#Visiting 192.168.10.1:9200/_plugin/kopf/ in a browser opens the kopf interface
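Apart from the kopf UI, the _cat APIs (available since Elasticsearch 1.0) answer the same questions from the command line:
# curl 'http://localhost:9200/_cat/indices?v'    #one line per index: health, doc count, size
# curl 'http://localhost:9200/_cat/nodes?v'      #the nodes in the cluster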
#Processing logs into Elasticsearch
# /var/www/logstash/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid    #start Elasticsearch as a daemon
#Have Logstash process the Apache error log, as follows:
# vi logstash-apache.conf
input {
  file {
    path => "/var/log/httpd/error_log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "error" {
    mutate { replace => { "type" => "apache_error" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
# bin/logstash -f logstash-apache.conf
#Wait about twenty seconds. If there is no output, open the log in vim, copy the last line and paste it again to simulate a new log entry.
#Logstash then reads the Apache error log and prints the events on this command line; they also show up at http://192.168.10.1:9200/_search?pretty in a browser.
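If the error log stays quiet, you can also generate a real entry instead of editing the file by hand: with httpd running locally, requesting a page that does not exist makes Apache append a "File does not exist" line to error_log for Logstash to pick up:
# curl -s http://localhost/no-such-page > /dev/null    #trigger a 404
# tail -1 /var/log/httpd/error_log                     #confirm a new line arrived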
Continuing the tests
#Have Logstash process all of the Apache logs, as follows:
# vi logstash-apache.conf
input {
  file {
    path => "/var/log/httpd/*_log"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
# bin/logstash -f logstash-apache.conf
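Once these events are indexed, the type field set by mutate lets you slice the data per log file:
# curl 'http://localhost:9200/_search?pretty&q=type:apache_access'    #only access-log events
# curl 'http://localhost:9200/_search?pretty&q=type:apache_error'     #only error-log events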
#The examples above make Logstash's working model clear; next, combine it with Kibana to view the data in a web page
#Kibana is already bundled with Logstash, under the vendor/kibana/ directory; you can also download kibana-3.1.2.zip and unpack it yourself
# unzip kibana-3.1.2.zip -d /var/www/logstash/kibana
# ln -s /var/www/logstash/kibana/kibana-3.1.2 /var/www/logstash/kibana/kibana
# vim /var/www/logstash/kibana/kibana/config.js
 32     /* elasticsearch: "http://"+window.location.hostname+":9200",
 33     */
 34     elasticsearch: "http://192.168.10.1:9200",
# vim /etc/httpd/conf.d/kibana.conf
<VirtualHost *:80>
    DocumentRoot /var/www/logstash/kibana/kibana
    ServerName 192.168.10.1
    <Directory "/var/www/logstash/kibana/kibana">
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
        php_value max_execution_time 300
        php_value memory_limit 128M
        php_value post_max_size 16M
        php_value upload_max_filesize 2M
        php_value max_input_time 300
        php_value date.timezone Asia/Shanghai
    </Directory>
</VirtualHost>
# vim logstash.conf
input {
  file {
    type => "syslog"
    # path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    path => [ "/var/log/messages", "/var/log/syslog" ]
    sincedb_path => "/var/sincedb"
  }
  redis {
    host => "192.168.10.1"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
  syslog {
    type => "syslog"
    port => "5544"
  }
}

filter {
  grok {
    type => "syslog"
    match => [ "message", "%{SYSLOGBASE2}" ]
    add_tag => [ "syslog", "grokked" ]
  }
}

output {
  elasticsearch { host => "192.168.10.1" }
}
# service httpd restart
# vim /etc/redis/6379.conf
bind 192.168.10.1
# service redis_6379 restart
# ps aux | grep redis | grep -v grep
root 8340 0.1 0.7 40536 7448 ? Ssl 07:23 0:00 /usr/local/bin/redis-server 192.168.10.1:6379
# vim /var/www/logstash/elasticsearch/config/elasticsearch.yml
http.cors.enabled: true    #add this line so the browser-based Kibana can query Elasticsearch
# /var/www/logstash/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid    #restart this service as well
# ./bin/logstash --configtest -f logstash.conf    #test the config file
Configuration OK
# ./bin/logstash -v -f logstash.conf &
#With all services running, open http://192.168.10.1 in a browser to get the default Kibana page
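To check the redis input end to end, you can push an event onto the "logstash" list yourself. In Logstash 1.4 the redis input decodes with the json codec by default, so a JSON document should come through as an event with its fields intact; the message text here is arbitrary:
# redis-cli -h 192.168.10.1 rpush logstash '{"message":"test event via redis"}'
# curl 'http://192.168.10.1:9200/_search?pretty&q=message:redis'    #the event should show up within a few seconds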
As logs are written, the data on the http://192.168.10.1/index.html#/dashboard/file/guided.json page updates accordingly. The next step is to study Elasticsearch's search capabilities.
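Besides q= query strings, _search accepts full query DSL bodies, which is where that research leads. As a small taste, a match query against the message field of the syslog events indexed above:
# curl 'http://192.168.10.1:9200/_search?pretty' -d '{
    "query" : { "match" : { "message" : "error" } },
    "size" : 5
  }'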