ELK Log Analysis Configuration (Filebeat + Logstash + Elasticsearch)

Filebeat reads the log files and ships them to Logstash; Logstash processes the events and then sends them on to Elasticsearch.

I. Filebeat

  1. Project log files:

Filebeat reads the log files. The file locations are configured under paths; with the setting below, Filebeat automatically picks up /data/share/business_log/TA-*/debug.log.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /usr/local/server/openresty/nginx/logs/*.log
    - /data/share/business_log/TA-*/debug.log
    #- c:\programdata\elasticsearch\logs\*

How Filebeat handles multi-line log entries:

multiline:
    pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
    negate: true
    match: after

The configuration above means: any line that does not start with a time-of-day prefix is appended to the end of the previous line (the regex is rough, but it does the job).

pattern: the regular expression tested against each line

negate: true or false; the default is false, meaning lines that match pattern are merged into the previous line; with true, lines that do not match pattern are merged into the previous line

match: after or before, i.e. merge onto the end or onto the beginning of the previous line

There are two more options, also commented out by default; you can leave them alone unless you have special requirements:

max_lines: 500

timeout: 5s

max_lines: the maximum number of lines merged into one event; defaults to 500

timeout: how long Filebeat waits before flushing a multiline event; defaults to 5s, which prevents a merge from taking too long or hanging
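
Putting these options together, here is a minimal sketch of a prospector that merges multi-line entries, written in the dotted-key form (this assumes Filebeat 6.x syntax and is not part of the original configuration file):

- type: log
  enabled: true
  paths:
    - /data/share/business_log/TA-*/debug.log
  # lines that do not start with HH:MM:SS get appended to the previous line
  multiline.pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 5s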

  2. Nginx log files
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/server/openresty/nginx/logs/access.log
    - /usr/local/server/openresty/nginx/logs/error.log
    #- /data/share/business_log/TA-*/debug.log
    #- c:\programdata\elasticsearch\logs\*
  3. Output configuration

    Comment out the settings under the Elasticsearch output and fill in the Logstash output instead; Filebeat will then ship everything it reads to the Logstash servers listed under hosts.

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
  index: "logstash-%{+yyyy.MM.dd}"

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Filebeat startup command: nohup ./filebeat -e -c filebeat-TA.yml >/dev/null 2>&1 &
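
Before pushing it into the background, it can be worth validating the configuration and the connection to Logstash first. A sketch, assuming Filebeat 6.x where the test subcommands are available:

./filebeat test config -c filebeat-TA.yml   # check that the YAML parses and the settings are valid
./filebeat test output -c filebeat-TA.yml   # check that the configured Logstash hosts are reachable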

II. Logstash

  1. Basic configuration

Logstash itself cannot form a cluster. When Filebeat is given several Logstash hosts, it checks which servers are reachable and sends its data to an available one.
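
As far as I know, with several hosts configured Filebeat by default picks one of them and only switches when it becomes unreachable; to actually spread events across all three Logstash servers you can enable load balancing in the Filebeat output. A sketch (loadbalance is not part of the configuration shown above):

output.logstash:
  hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
  # distribute batches across all listed hosts instead of sticking to one
  loadbalance: true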

The Logstash configuration listens on port 5044 for the logs shipped by Filebeat, filters them with grok, assigns a different type depending on the log, and stores the events in the Elasticsearch cluster.

The project logs and the Nginx logs are handled by the same configuration. Note that the index configured for Elasticsearch must not contain uppercase letters, otherwise strange bugs show up.

input {
  beats {
    port => "5044"
  }
}

filter {

  date {
      match => ["@timestamp", "yyyy-MM-dd HH:mm:ss"]
  }
  grok {
    match => {
      "source" => "(?<type>([A-Za-z]*-[A-Za-z]*-[A-Za-z]*)|([A-Za-z]*-[A-Za-z]*)|access|error)"
    }
  }

}

output {
  # different projects need their own conditional branch here
  if [type] == "MS-System-OTA"{
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
      index => "logstash-ms-system-ota-%{+YYYY.MM.dd}"
    }
  }else if [type] == "access" or [type] == "error"{
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
      index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
  }else{
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
    }
  }
  stdout {
    codec => rubydebug
  }
}
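
Once events start flowing, a quick way to confirm that the expected indices are being created is the _cat API on any Elasticsearch node. A sketch; the index names follow the output configuration above and the date is only an example:

curl -XGET 'http://172.18.1.152:9200/_cat/indices?v'
# expect entries such as logstash-nginx-2018.09.18 and logstash-ms-system-ota-2018.09.18
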
  2. Logstash's built-in grok-patterns
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b

POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}

# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT %{IPORHOST}:%{POSINT}

# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?

# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHNUM2 (?:0[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])

# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)

# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}

# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}

# Shortcuts
QS %{QUOTEDSTRING}

# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

# Log Levels
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
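
The type pattern used in the filter above is written inline. If you prefer to maintain custom patterns alongside the built-in ones, grok's patterns_dir option can load them from a file; a sketch, using a hypothetical ./patterns/custom file and a made-up PROJECT pattern name:

# ./patterns/custom (hypothetical file)
# PROJECT (([A-Za-z]*-[A-Za-z]*-[A-Za-z]*)|([A-Za-z]*-[A-Za-z]*)|access|error)

filter {
  grok {
    patterns_dir => ["./patterns"]              # directory holding the custom pattern file
    match => { "source" => "%{PROJECT:type}" }  # equivalent to the inline regex used earlier
  }
}
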
  3. Grok demos for a few different message formats
    1. Handling the message of nginx error.log (upstream error)
    # message:   2018/09/18 16:33:51 [error] 15003#0: *545757 no live upstreams while connecting to upstream, client: 39.108.4.83, server: dev-springboot-admin.tvflnet.com, request: "POST /instances HTTP/1.1", upstream: "http://localhost/instances", host: "dev-springboot-admin.tvflnet.com"
    
    filter {
      # define the data format
      grok {
        match => { "message" => "%{DATA:timestamp}\ \[%{DATA:level}\] %{DATA:nginxmessage}\, client: %{DATA:client}\, server: %{DATA:server}\, request: \"%{DATA:request}\", upstream: \"%{DATA:upstream}\", host: \"%{DATA:host}\""}
      }
    }
    2. Handling the message of nginx error.log (missing file, with referrer)
    # message:    2018/04/19 20:40:27 [error] 4222#0: *53138 open() "/data/local/project/WebSites/AppOTA/theme/js/frame/layer/skin/default/icon.png" failed (2: No such file or directory), client: 218.17.216.171, server: dev-app-ota.tvflnet.com, request: "GET /theme/js/frame/layer/skin/default/icon.png HTTP/1.1", host: "dev-app-ota.tvflnet.com", referrer: "http://dev-app-ota.tvflnet.com/theme/js/frame/layer/skin/layer.css"
    
    filter {
      # define the data format
      grok {
        match => { "message" => "%{DATA:timestamp}\ \[%{DATA:level}\] %{DATA:nginxmessage}\, client: %{DATA:client}\, server: %{DATA:server}\, request: \"%{DATA:request}\", host: \"%{DATA:host}\", referrer: \"%{DATA:referrer}\""}
      }
    }
    3. Handling the message of the Lua error.log
    # message:    2018/09/05 18:02:19 [error] 2325#0: *17083157 [lua] PushFinish.lua:38: end push statistics, client: 119.137.53.205, server: dev-system-ota-statistics.tvflnet.com, request: "POST /upgrade/push HTTP/1.1", host: "dev-system-ota-statistics.tvflnet.com"
    
    filter {
      # define the data format
      grok {
        match => { "message" => "%{DATA:timestamp}\ \[%{DATA:level}\] %{DATA:luamessage}\, client: %{DATA:client}\, server: %{DATA:server}\, request: \"%{DATA:request}\", host: \"%{DATA:host}\""}
      }
    }
    4. Handling the message of the TV-client API logs
    # message:    traceid:[Thread:943-sn:sn-mac:mac] 2018-09-18 11:07:03.525 DEBUG com.flnet.utils.web.log.DogLogAspect 55 - Params-参数(JSON):{"backStr":"{\"groupid\":5}","build":201808310938,"ip":"119.147.146.189","mac":"mac","modelCode":"SHARP_0_50#SHARP#IQIYI#LCD_50SUINFCA_H","sn":"sn","version":"modelCode"}
    
    filter {
      # define the data format
      grok {
        match => { "message" => "traceid:%{DATA:traceid}\[Thread:%{DATA:thread}\-sn:%{DATA:sn}\-mac:%{DATA:mac}\]\ %{TIMESTAMP_ISO8601:timestamp}\ %{DATA:level}\ %{GREEDYDATA:message}"}
      }
    }
    5. Handling the message of the project logs
    # message:    traceid:[] 2018-09-14 02:14:48.209 WARN  de.codecentric.boot.admin.client.registration.ApplicationRegistrator 115 - Failed to register application as Application(name=ta-system-ota, managementUrl=http://TV-DEV-API01:10005/actuator, healthUrl=http://TV-DEV-API01:10005/actuator/health, serviceUrl=http://TV-DEV-API01:10005/, metadata={startup=2018-09-10T10:20:41.812+08:00}) at spring-boot-admin ([https://dev-springboot-admin.tvflnet.com/instances]): I/O error on POST request for "https://dev-springboot-admin.tvflnet.com/instances": connect timed out; nested exception is java.net.SocketTimeoutException: connect timed out. Further attempts are logged on DEBUG level
    
    filter {
      # define the data format
      grok {
        match => { "message" => "traceid:\[%{DATA:traceid}\] %{TIMESTAMP_ISO8601:timestamp}\ %{DATA:level}\ %{GREEDYDATA:message}"}
      }
    }

    When several different message formats need to be matched, configure multiple grok patterns; a sketch of one way to do this follows below.
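
    One way to do that, sketched here with illustrative field names, is to give a single grok filter a list of patterns; grok tries them in order and, since break_on_match defaults to true, stops at the first one that matches:

    filter {
      grok {
        match => {
          "message" => [
            "traceid:\[%{DATA:traceid}\] %{TIMESTAMP_ISO8601:timestamp}\ %{DATA:level}\ %{GREEDYDATA:logmessage}",
            "%{DATA:timestamp}\ \[%{DATA:level}\] %{GREEDYDATA:logmessage}"
          ]
        }
      }
    }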

    Logstash startup command: nohup ./bin/logstash -f ./config/conf.d/logstash-simple.conf >/dev/null 2>&1 &
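
    You can also ask Logstash to check a configuration file without starting the pipeline; a sketch, assuming Logstash 5.x/6.x where this flag is available:

    ./bin/logstash -f ./config/conf.d/logstash-simple.conf --config.test_and_exit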

  4. Logstash online validation tool

Original source: https://www.cnblogs.com/lasse1897/p/9680212.html
