Logstash & Kibana notes

Part 1: Logstash basics

master1 acts as the logstash agent and runs the web site; master2 is the logstash server; master3 runs Elasticsearch.

1. Installing logstash

1.1 Installing via yum

Set the Java environment variable:
[root@master1 ~]# vim /etc/profile.d/java.sh

export JAVA_HOME=/usr

Logstash is now part of Elastic (the company behind Elasticsearch), so the RPM can be downloaded straight from the Elastic website:

[root@master1 ~]# ls
logstash-1.5.4-1.noarch.rpm

Install it:
[root@master1 ~]# yum install logstash-1.5.4-1.noarch.rpm

Add logstash to the PATH:
[root@master1 ~]# vim /etc/profile.d/logstash.sh

export PATH=/opt/logstash/bin:$PATH

Reload the profile:
[root@master1 ~]# source /etc/profile.d/logstash.sh

1.2 Creating a config file

[root@master1 ~]# vim /etc/logstash/conf.d/sample.conf

input {
    stdin {}
}

output {
    stdout {
        codec   => rubydebug
    }
}

Syntax check:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/sample.conf --configtest
Configuration OK

1.3 Running logstash

[root@master1 ~]# logstash -f /etc/logstash/conf.d/sample.conf
Logstash startup completed

Test it by typing a line on stdin:
Hello Logstash
{
       "message" => "Hello Logstash",
      "@version" => "1",
    "@timestamp" => "2018-04-15T16:59:04.136Z",
          "host" => "master1.com"
}

2. Examples (input and filter plugins)

2.1 A simple example: a system log file

[root@master1 ~]# vim /etc/logstash/conf.d/filesample.conf

input {
    file {
        path => ["/var/log/messages"]
        type => "system"
        start_position => "beginning"
    }
}

output {
    stdout {
        codec   => rubydebug
    }
}

Syntax check:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/filesample.conf --configtest
Configuration OK

Run it:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/filesample.conf

file input plugin documentation:
https://www.elastic.co/guide/en/logstash/1.5/plugins-inputs-file.html

Stop it with Ctrl+C.
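One pitfall when re-running this example: the file input records how far it has read in a "sincedb" file, so a second run will not re-read /var/log/messages from the top — `start_position => "beginning"` only applies to files logstash has never seen before. A sketch of pinning the sincedb somewhere disposable while testing (`sincedb_path` is a real file-input option; the /tmp path here is just an example):

```conf
input {
    file {
        path => ["/var/log/messages"]
        type => "system"
        start_position => "beginning"
        # example path only: keep the read-position state somewhere throwaway
        sincedb_path => "/tmp/messages.sincedb"
    }
}
```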

2.2 udp

Install collectd on master2 and configure its network plugin to ship metrics out.

[root@master2 ~]# yum install collectd

Configure collectd:
[root@master2 ~]# vim /etc/collectd.conf

# set the host name
Hostname    "master2.com"

# enable a few collection plugins
LoadPlugin cpu
LoadPlugin df
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin network

Point the network plugin at the logstash host (master1) and the port it will listen on:
<Plugin network>
    <Server "10.201.106.131" "25826" >
    </Server>
</Plugin>

Start the service:
[root@master2 ~]# systemctl start collectd.service

Configuration on the logstash side (master1):
[root@master1 ~]# vim /etc/logstash/conf.d/udpsample.conf

input {
    udp {
        port    => 25826
        codec   => collectd {}
        type    => "collectd"
    }
}

output {
    stdout {
        codec   => rubydebug
    }
}

Syntax check:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/udpsample.conf --configtest
Configuration OK

Start it:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/udpsample.conf
Logstash startup completed

2.3 httpd

[root@master1 ~]# yum install httpd
[root@master1 ~]# systemctl start httpd

Grok patterns for turning text into structured data ship with logstash:
[root@master1 ~]# rpm -ql logstash | grep "patterns$"
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/mcollective-patterns

[root@master1 ~]# vim /etc/logstash/conf.d/groksample.conf

input {
    stdin {}
}

filter {
    grok {
        match   => { "message" => "%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}

output {
    stdout {
        codec   => rubydebug
    }
}

Syntax check:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/groksample.conf --configtest
Configuration OK

Run and test:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/groksample.conf
Logstash startup completed
1.1.1.1 GET /index.html 30 0.23
{
       "message" => "1.1.1.1 GET /index.html 30 0.23",
      "@version" => "1",
    "@timestamp" => "2018-04-17T01:41:09.951Z",
          "host" => "master1.com",
      "clientip" => "1.1.1.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "30",
      "duration" => "0.23"
}
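Grok expressions like %{IP:clientip} are just named regular expressions bundled with logstash. As a rough sketch of what the filter above does — an illustration in Python, not logstash's actual implementation, and with hand-simplified regexes:

```python
import re

# Simplified stand-ins for the grok patterns used above:
# %{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
LOG_RE = re.compile(
    r'(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) '
    r'(?P<method>\w+) '
    r'(?P<request>\S+) '
    r'(?P<bytes>\d+) '
    r'(?P<duration>\d+\.?\d*)'
)

# Each named capture group becomes a field on the event,
# exactly as in the rubydebug output above.
event = LOG_RE.match('1.1.1.1 GET /index.html 30 0.23').groupdict()
print(event)
```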

2.4 apachelogs

[root@master1 ~]# vim /etc/logstash/conf.d/apachelogssample.conf

input {
    file {
        path    => ["/var/log/httpd/access_log"]
        type    => "apachelog"
        start_position  => "beginning"
    }
}

filter {
    grok {
        match   => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}

output {
    stdout {
        codec   => rubydebug
    }
}

[root@master1 ~]# logstash -f /etc/logstash/conf.d/apachelogssample.conf --configtest
Configuration OK

Run it:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/apachelogssample.conf

Then request the Apache default page: http://10.201.106.131

2.5 nginxlog

Add an nginx pattern to the grok pattern file:
[root@master1 ~]# vim /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns

# Nginx Logs
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
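Underneath, the NGINXACCESS definition is one large regular expression with named captures over nginx's combined log format. A simplified Python sketch of the same parse — hand-simplified, dropping the rawrequest fallback and other permissive alternatives, with a shortened user-agent string:

```python
import re

# Simplified stand-in for the NGINXACCESS pattern above;
# the grok version is more permissive.
NGINX_RE = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# A line in nginx's default combined format (agent shortened).
line = ('10.201.106.1 - - [17/Apr/2018:13:51:38 +0800] '
        '"GET /nginx-logo.png HTTP/1.1" 200 368 '
        '"http://10.201.106.131/" "Mozilla/5.0"')
fields = NGINX_RE.match(line).groupdict()
print(fields['clientip'], fields['verb'], fields['response'])
```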

Install and start nginx (stop httpd first):
[root@master1 ~]# systemctl stop httpd.service
[root@master1 ~]# yum install nginx
[root@master1 ~]# systemctl start nginx.service

logstash configuration:
[root@master1 ~]# cd /etc/logstash/conf.d/
[root@master1 conf.d]# cp apachelogssample.conf nginxlogsample.conf
[root@master1 conf.d]# vim nginxlogsample.conf

input {
    file {
        path    => ["/var/log/nginx/access.log"]
        type    => "nginxlog"
        start_position  => "beginning"
    }
}

filter {
    grok {
        match   => { "message" => "%{NGINXACCESS}" }
    }
}

output {
    stdout {
        codec   => rubydebug
    }
}

Run it:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/nginxlogsample.conf
Logstash startup completed

3. Output plugins

3.1 Writing data into redis

Install redis:
[root@master1 ~]# yum install redis

Configure it to listen on all local interfaces:
[root@master1 ~]# vim /etc/redis.conf

bind 0.0.0.0

Start it:
[root@master1 ~]# systemctl start redis.service

logstash configuration:
[root@master1 ~]# cd /etc/logstash/conf.d/
[root@master1 conf.d]# cp nginxlogsample.conf nglogredissample.conf

[root@master1 conf.d]# vim nglogredissample.conf

input {
    file {
        path    => ["/var/log/nginx/access.log"]
        type    => "nginxlog"
        start_position  => "beginning"
    }
}

filter {
    grok {
        match   => { "message" => "%{NGINXACCESS}" }
    }
}

output {
    redis {
        port    => "6379"
        host    => ["127.0.0.1"]
        data_type   => "list"
        key     => "logstash-%{type}"
    }
}

Syntax check:
[root@master1 conf.d]# logstash -f ./nglogredissample.conf --configtest
Configuration OK

Run it:
[root@master1 ~]# logstash -f /etc/logstash/conf.d/nglogredissample.conf
Logstash startup completed

Visit the nginx page again: http://10.201.106.131

Check redis:
[root@master1 ~]# redis-cli
127.0.0.1:6379> LLEN logstash-nginxlog
(integer) 20

Look at one element of the list (LINDEX is zero-based, so index 1 is the second entry):
127.0.0.1:6379> LINDEX logstash-nginxlog 1
"{\"message\":\"10.201.106.1 - - [17/Apr/2018:13:51:38 +0800] \\\"GET /nginx-logo.png HTTP/1.1\\\" 200 368 \\\"http://10.201.106.131/\\\" \\\"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36\\\" \\\"-\\\"\",\"@version\":\"1\",\"@timestamp\":\"2018-04-17T05:51:39.579Z\",\"host\":\"master1.com\",\"path\":\"/var/log/nginx/access.log\",\"type\":\"nginxlog\",\"clientip\":\"10.201.106.1\",\"remote_user\":\"-\",\"timestamp\":\"17/Apr/2018:13:51:38 +0800\",\"verb\":\"GET\",\"request\":\"/nginx-logo.png\",\"httpversion\":\"1.1\",\"response\":\"200\",\"bytes\":\"368\",\"referrer\":\"\\\"http://10.201.106.131/\\\"\",\"agent\":\"\\\"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36\\\"\",\"http_x_forwarded_for\":\"\\\"-\\\"\"}"

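The list element above is simply the logstash event serialized as JSON, so any downstream consumer can decode it. A sketch using a shortened copy of that document (fields trimmed for readability; the real document carries them all):

```python
import json

# Shortened version of the event stored in redis above.
raw = ('{"message":"10.201.106.1 - - [17/Apr/2018:13:51:38 +0800] ...",'
       '"@version":"1","@timestamp":"2018-04-17T05:51:39.579Z",'
       '"host":"master1.com","type":"nginxlog",'
       '"clientip":"10.201.106.1","response":"200"}')

event = json.loads(raw)
print(event['type'], event['clientip'], event['response'])
```
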
3.2 Reading data out of redis to stdout

master1 is the logstash agent and master2 is the logstash server; make sure their clocks are synchronized.

Set the Java environment variable on master2:
[root@master2 ~]# vim /etc/profile.d/java.sh

export JAVA_HOME=/usr

Install logstash:
[root@master2 ~]# yum install logstash-1.5.4-1.noarch.rpm

Add logstash to the PATH:
[root@master2 ~]# vim /etc/profile.d/logstash.sh

export PATH=/opt/logstash/bin:$PATH

Reload the profile:
[root@master2 ~]# source /etc/profile.d/logstash.sh

Configure logstash to pull events out of redis:
[root@master2 ~]# vim /etc/logstash/conf.d/server.conf

input {
    redis {
        port    => "6379"
        host    => "10.201.106.131"
        data_type   => "list"
        key     => "logstash-nginxlog"
    }
}

output {
    stdout {
        codec   => rubydebug
    }
}

Syntax check:
[root@master2 ~]# logstash -f /etc/logstash/conf.d/server.conf --configtest
Configuration OK

Run it:
[root@master2 ~]# logstash -f /etc/logstash/conf.d/server.conf
Logstash startup completed

3.3 Reading data from redis into Elasticsearch

3.3.1 Elasticsearch setup

master3 runs Elasticsearch.

Use the system JDK, and install the java devel package:
[root@master3 ~]# yum install java-1.7.0-openjdk-devel

Set the Java environment variable:
[root@master3 ~]# vim /etc/profile.d/java.sh

export JAVA_HOME=/usr

Install Elasticsearch:
[root@master3 ~]# yum install elasticsearch-1.7.2.noarch.rpm

Configure:
[root@master3 ~]# vim /etc/elasticsearch/elasticsearch.yml

cluster.name: loges
node.name: "master3.com"

Start it:
[root@master3 ~]# systemctl daemon-reload
[root@master3 ~]# systemctl start elasticsearch

Install a plugin to make cluster state easy to inspect:
[root@master3 ~]# /usr/share/elasticsearch/bin/plugin -i bigdesk -u file:///root/bigdesk-latest.zip
[root@master3 ~]# /usr/share/elasticsearch/bin/plugin -l
Installed plugins:
    - bigdesk

The plugin can then be opened in a browser to verify.

3.3.2 Kibana (the web front end)

Download link: https://www.elastic.co/downloads/past-releases

[root@master3 ~]# ls
kibana-4.1.2-linux-x64.tar.gz

Extract it into /usr/local:
[root@master3 ~]# tar xf kibana-4.1.2-linux-x64.tar.gz -C /usr/local/
[root@master3 local]# ln -sv kibana-4.1.2-linux-x64 kibana
‘kibana’ -> ‘kibana-4.1.2-linux-x64’

Configure:
[root@master3 config]# pwd
/usr/local/kibana/config
[root@master3 config]# vim kibana.yml

# point this at one of the Elasticsearch nodes; use localhost if Elasticsearch runs on this host
elasticsearch_url: "http://10.201.106.133:9200"

Run it (append & to the command to put it in the background):
[root@master3 ~]# /usr/local/kibana/bin/kibana

Browse to:
http://10.201.106.133:5601

3.3.3 Pointing logstash at Elasticsearch

[root@master2 ~]# vim /etc/logstash/conf.d/server.conf

input {
    redis {
        port    => "6379"
        host    => "10.201.106.131"
        data_type   => "list"
        key     => "logstash-nginxlog"
    }
}

output {
    elasticsearch {
        cluster => "loges"
        index   => "logstash-%{+YYYY.MM.dd}"
    }
}
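The `%{+YYYY.MM.dd}` in the index name is a date pattern expanded from each event's @timestamp (in UTC), so every day's events land in their own index. A Python sketch of the expansion, with strftime codes standing in for the Joda-style ones:

```python
from datetime import datetime, timezone

# Event @timestamp from the redis example earlier; logstash expands
# "logstash-%{+YYYY.MM.dd}" from this value, one index per day.
ts = datetime(2018, 4, 17, 5, 51, 39, tzinfo=timezone.utc)
index = ts.strftime('logstash-%Y.%m.%d')
print(index)
# -> logstash-2018.04.17
```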

Syntax check (Java 8 is recommended):
[root@master2 ~]# logstash -f /etc/logstash/conf.d/server.conf --configtest
[2018-04-18 01:42:55.146]  WARN -- Concurrent: [DEPRECATED] Java 7 is deprecated, please use Java 8.
Java 7 support is only best effort, it may not work. It will be removed in next release (1.0).
Configuration OK

Start it (logstash discovers the Elasticsearch nodes automatically):
[root@master2 ~]# logstash -f /etc/logstash/conf.d/server.conf
[2018-04-18 01:44:19.274]  WARN -- Concurrent: [DEPRECATED] Java 7 is deprecated, please use Java 8.
Java 7 support is only best effort, it may not work. It will be removed in next release (1.0).
Apr 18, 2018 1:44:21 AM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-master2.com-2679-11622] version[1.7.0], pid[2679], build[929b973/2015-07-16T14:31:07Z]
Apr 18, 2018 1:44:21 AM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-master2.com-2679-11622] initializing ...
Apr 18, 2018 1:44:22 AM org.elasticsearch.plugins.PluginsService <init>
INFO: [logstash-master2.com-2679-11622] loaded [], sites []
Apr 18, 2018 1:44:27 AM org.elasticsearch.bootstrap.Natives <clinit>
WARNING: JNA not found. native methods will be disabled.
Apr 18, 2018 1:44:29 AM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-master2.com-2679-11622] initialized
Apr 18, 2018 1:44:29 AM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-master2.com-2679-11622] starting ...
Apr 18, 2018 1:44:30 AM org.elasticsearch.transport.TransportService doStart
INFO: [logstash-master2.com-2679-11622] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.201.106.132:9300]}
Apr 18, 2018 1:44:30 AM org.elasticsearch.discovery.DiscoveryService doStart
INFO: [logstash-master2.com-2679-11622] loges/xZYxFmKDSu6ziX8wtt2TSQ
Apr 18, 2018 1:44:33 AM org.elasticsearch.cluster.service.InternalClusterService$UpdateTask run
INFO: [logstash-master2.com-2679-11622] detected_master [master3.com][89ejQ2cHQzC-RlTMCRnd3g][master3.com][inet[/10.201.106.133:9300]], added {[master3.com][89ejQ2cHQzC-RlTMCRnd3g][master3.com][inet[/10.201.106.133:9300]],}, reason: zen-disco-receive(from master [[master3.com][89ejQ2cHQzC-RlTMCRnd3g][master3.com][inet[/10.201.106.133:9300]]])
Apr 18, 2018 1:44:33 AM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-master2.com-2679-11622] started
Logstash startup completed

Check the Elasticsearch indices on master3:
[root@master3 ~]# curl -XGET 'localhost:9200/_cat/indices'
yellow open .kibana             1 1 1 0 2.5kb 2.5kb
yellow open logstash-2018.04.17 5 1 0 0  575b  575b 

Inspect the documents in an index:
[root@master3 ~]# curl -XGET 'localhost:9200/_search?pretty'
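A side benefit of one-index-per-day naming is that retention becomes simple list arithmetic. A sketch of selecting expired indices — the 5-day window and the index list here are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical retention sweep: keep 5 days of logstash-* indices.
indices = ['.kibana', 'logstash-2018.04.01', 'logstash-2018.04.17']
today = date(2018, 4, 17)
cutoff = today - timedelta(days=5)

def index_date(name):
    # 'logstash-2018.04.17' -> date(2018, 4, 17)
    y, m, d = name.split('-', 1)[1].split('.')
    return date(int(y), int(m), int(d))

expired = [n for n in indices
           if n.startswith('logstash-') and index_date(n) < cutoff]
print(expired)
# -> ['logstash-2018.04.01']; each could then be removed with DELETE /<index>
```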

3.3.4 Configuring Kibana

Create an index pattern in the Kibana UI, after which the data can be searched.

3.3.5 Running the services in the background

logstash:
After removing the unused config files from /etc/logstash/conf.d, logstash can be started as a normal service:
service logstash start

kibana:
[root@master3 ~]# /usr/local/kibana/bin/kibana -l /var/log/kibana.log &

3.4 A lighter-weight agent

Because logstash is fairly heavyweight, a lumberjack-based shipper can be used on the agent side to collect the data instead, reducing the resource footprint on the web server.

Original post: http://blog.51cto.com/zhongle21/2104507
