Logstash + Elasticsearch + Kibana + Nginx: building our private log query system

  • System structure

  • How to set up

    • Logstash-indexer (Logstash server)

      • Install JDK 1.7

        yum -y install java-1.7.0-openjdk

    • Install and configure Elasticsearch (Logstash 1.4.2 recommends Elasticsearch 1.1.1)

      rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

      vi /etc/yum.repos.d/elasticsearch.repo (Add the following repository configuration)

      [elasticsearch-1.1]

      name=Elasticsearch repository for 1.1.x packages

      baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos

      gpgcheck=1

      gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch

      enabled=1

    • yum -y install elasticsearch-1.1.1

    • vi /etc/elasticsearch/elasticsearch.yml (Add the following line somewhere in the file to disable dynamic scripts)

      script.disable_dynamic: true
    • You will also want to restrict outside access to your Elasticsearch instance, so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:

      network.host: localhost

    • Then disable multicast by finding the discovery.zen.ping.multicast.enabled item and uncommenting it so it looks like this:

      discovery.zen.ping.multicast.enabled: false
    • chkconfig --add elasticsearch
    • service elasticsearch restart
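Before moving on, it is worth confirming Elasticsearch actually answers on its HTTP port. A minimal sketch, assuming the default port 9200: on the live host you would run the curl shown in the comment, while the parsing below is demonstrated against a canned sample response so it works anywhere:

```shell
# On the live host you would check health directly:
#   curl -s http://localhost:9200/_cluster/health
# A canned sample response is used here so the parsing works offline:
response='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'

# Pull the "status" field out with grep/cut (no jq needed on CentOS 6)
status=$(printf '%s' "$response" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"   # cluster status: green
```

A status of green (or yellow on a fresh single-node install, once real indices exist) means the node is up and reachable.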
  • Install Kibana (Logstash 1.4.2 recommends Kibana 3.0.1; download and extract the Kibana 3.0.1 tarball to ~/kibana-3.0.1 first)

    • mkdir -p /usr/share/nginx/kibana3
    • cp -R ~/kibana-3.0.1/* /usr/share/nginx/kibana3/
  • Install Nginx

    • rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    • yum -y install nginx
    • cd ~; curl -OL https://github.com/elasticsearch/kibana/raw/master/sample/nginx.conf
    • vi nginx.conf (Find and change the values of server_name to your FQDN (or localhost if you aren't using a domain name) and root to where we installed Kibana, so they look like the following entries)

server_name FQDN;
root  /usr/share/nginx/kibana3;

  • cp ~/nginx.conf /etc/nginx/conf.d/default.conf
  • htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd linuxblind
  • Password: linuxblind
  • chkconfig --levels 235 nginx on
  • service nginx restart
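htpasswd comes from the httpd-tools package and may not be present yet (yum -y install httpd-tools if it is missing). If you prefer not to install it, an equivalent user:hash line can be produced with openssl. A sketch, where the username and password match the example above and the fixed salt 12345678 is illustrative only, used so the output is reproducible:

```shell
# Apache-MD5 ($apr1$) hashes are what nginx's auth_basic understands;
# the salt is fixed here only so the output is reproducible
user=linuxblind
hash=$(openssl passwd -apr1 -salt 12345678 linuxblind)
echo "$user:$hash"
```

The printed line can be appended to /etc/nginx/conf.d/kibana.myhost.org.htpasswd by hand instead of running htpasswd -c.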
  • Install and configure Logstash

  • vi /etc/yum.repos.d/logstash.repo (Add the following repository configuration)

[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

  • yum -y install logstash-1.4.2
  • vi /etc/logstash/conf.d/logstash.conf (see below)
  • service logstash start
  • Generate SSL Certificate

    • cd /etc/pki/tls; openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 -subj "/C=CN/ST=Beijing/L=Beijing/O=IT/CN=logstash.linuxblind.com" -keyout logstash.linuxblind.com.key -out logstash.linuxblind.com.cert

    • Logstash agent

      • Install JDK 1.7

        • yum -y install java-1.7.0-openjdk

      • Install and configure Logstash

        • vi /etc/yum.repos.d/logstash.repo (Add the following repository configuration)

[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

      • yum -y install logstash-1.4.2
      • service logstash start
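The certificate generated on the indexer must carry the hostname the forwarders will dial (logstash.linuxblind.com here), or the TLS handshake will fail. A quick sanity check is to print the subject back out of the .cert file. The sketch below generates a throwaway certificate the same way (with a smaller key and /tmp paths, for speed and so it can run anywhere) and then inspects it:

```shell
# Re-create a certificate with the same subject (2048-bit key for speed)
# and read the subject back; the CN must match the name the forwarder dials
openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=IT/CN=logstash.linuxblind.com" \
  -keyout /tmp/test.key -out /tmp/test.cert 2>/dev/null
openssl x509 -in /tmp/test.cert -noout -subject
```

On the real indexer, run the x509 line against /etc/pki/tls/logstash.linuxblind.com.cert instead.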

=============== Logstash Collect maillog (sendmail) =====================

Step1:

  • Copy SSL certificate from logstash-indexer to logstash-agent

    • scp /etc/pki/tls/certs/logstash.linuxblind.com.cert user@logstash_agent_IP:/tmp

Step2:

  • In logstash-agent
  • Install and Configure logstash-forwarder

    • cd ~; curl -O http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm
    • rpm -ivh ~/logstash-forwarder-0.3.1-1.x86_64.rpm
    • cd /etc/init.d/; curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
    • chmod +x logstash-forwarder
    • curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig
    • vi /etc/sysconfig/logstash-forwarder (Change the content like below)

LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"

  • cp /tmp/logstash.linuxblind.com.cert /etc/pki/tls/certs/
  • vi /etc/logstash-forwarder

{

"network": {

"servers": [ "logstash.linuxblind.com:5000" ],

"timeout": 15,

"ssl certificate": "/etc/pki/tls/certs/logstash.linuxblind.com.cert",

"ssl key": "/etc/pki/tls/private/logstash.linuxblind.com.key",

"ssl ca": "/etc/pki/tls/certs/logstash.linuxblind.com.cert"

},

"files": [

{

"paths": [

"/var/log/message"

],

"fields": { "type": "syslog" }

},

{

"paths": [

"/var/log/maillog"

],

"fields": { "type": "maillog" }

}

]

}
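logstash-forwarder will refuse to start if /etc/logstash-forwarder is not valid JSON, so validating the file first saves a debugging round-trip. A sketch using python3's built-in json.tool; the heredoc mirrors the config above into a temp file so the check is self-contained (on the agent you would simply feed /etc/logstash-forwarder to json.tool):

```shell
# logstash-forwarder reads /etc/logstash-forwarder; validate a copy of it
# (the heredoc mirrors the config above so the check is self-contained)
cat <<'EOF' > /tmp/logstash-forwarder.sample
{
  "network": {
    "servers": [ "logstash.linuxblind.com:5000" ],
    "timeout": 15,
    "ssl certificate": "/etc/pki/tls/certs/logstash.linuxblind.com.cert",
    "ssl key": "/etc/pki/tls/private/logstash.linuxblind.com.key",
    "ssl ca": "/etc/pki/tls/certs/logstash.linuxblind.com.cert"
  },
  "files": [
    { "paths": [ "/var/log/messages" ], "fields": { "type": "syslog" } },
    { "paths": [ "/var/log/maillog" ],  "fields": { "type": "maillog" } }
  ]
}
EOF
if python3 -m json.tool < /tmp/logstash-forwarder.sample > /dev/null; then
  echo "config OK"
else
  echo "config INVALID"
fi
```

Note that keys with spaces ("ssl certificate", "ssl key", "ssl ca") are the forwarder's own 0.3.x syntax, not a typo.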

  • chkconfig logstash-forwarder on
  • /etc/init.d/logstash-forwarder start
  • Ctrl+C  # exit

Step3:

  • In logstash-indexer

    • Modify /opt/logstash/patterns/grok-patterns and append the following content

# Match for sendmail

#EMAIL [.a-zA-Z0-9_-]+@%{IPORHOST}

LOGIN [.a-zA-Z0-9_-]+

EMAIL %{LOGIN}@%{IPORHOST}

DSN [0-9][.][0-9][.][0-9]

QID [A-Za-z0-9]{14}

# Match a relay that gives us a QID in the return status.

SENDMAIL_TO_1 %{SYSLOGBASE} %{QID:qid}: to=<%{EMAIL:to}>, (%{WORD}=%{DATA},)+ relay=%{IPORHOST:relay} \[%{IP}\], dsn=%{DSN:dsn}, stat=%{DATA:status} \(%{QID:qid} %{GREEDYDATA:status_message}\)

# Match a relay that does NOT give us a QID in the return status.

SENDMAIL_TO_2 %{SYSLOGBASE} %{QID:qid}: to=<%{EMAIL:to}>, (%{WORD}=%{DATA},)+ relay=%{IPORHOST:relay} \[%{IP}\], dsn=%{DSN:dsn}, stat=%{DATA:status} \(%{GREEDYDATA:status_message}\)

# Match a message with no relay IP address or status message.

SENDMAIL_TO_3 %{SYSLOGBASE} %{QID:qid}: to=<%{EMAIL:to}>, (%{WORD}=%{DATA},)+ relay=%{IPORHOST:relay}, dsn=%{DSN:dsn}, stat=%{GREEDYDATA:status}

# Match a message with no relay info at all.

SENDMAIL_TO_4 %{SYSLOGBASE} %{QID:qid}: to=<%{EMAIL:to}>, (%{WORD}=%{DATA},)+ stat=%{GREEDYDATA:status}

### TODO - match multiple recipients in To: field.

SENDMAIL_TO_5 %{SYSLOGBASE} %{QID:qid}: to=(<%{EMAIL:to}>,)+ (%{WORD}=%{DATA},)+ %{GREEDYDATA:status}

SENDMAIL_TO (%{SENDMAIL_TO_1}|%{SENDMAIL_TO_2}|%{SENDMAIL_TO_3}|%{SENDMAIL_TO_4})

SENDMAIL_FROM %{SYSLOGBASE} %{QID:qid}: from=<%{EMAIL:from}>, (%{WORD}=%{DATA},)+ relay=%{IPORHOST:relay} \[%{IP}\]

SENDMAIL_OTHER_1 %{SYSLOGBASE} %{QID:qid}: %{GREEDYDATA:message}

SENDMAIL_OTHER_2 %{SYSLOGBASE} STARTTLS=(client|server), relay=(\[)?%{IPORHOST:relay}(\])?%{GREEDYDATA:message}

SENDMAIL_OTHER_3 %{SYSLOGBASE} STARTTLS: %{GREEDYDATA:message}

SENDMAIL_OTHER_4 %{SYSLOGBASE} ruleset=tls_server, arg1=SOFTWARE, relay=%{IPORHOST:relay}, %{GREEDYDATA:message}

SENDMAIL_OTHER_5 %{SYSLOGBASE} STARTTLS=client, error: %{GREEDYDATA:message}

SENDMAIL_RELAY %{SYSLOGBASE} ruleset=check_relay, arg1=(\[)?%{IPORHOST}(\])?, arg2=%{IP:ip}, relay=(\[)?%{IPORHOST:relay}(\])??%{GREEDYDATA:message}

SENDMAIL_AUTH_1 %{SYSLOGBASE} AUTH=server, relay=%{IPORHOST:relay} \[%{IP}\]( \(may be forged\))?, authid=%{LOGIN:user}(@%{IPORHOST})?, %{GREEDYDATA:message}

SENDMAIL_AUTH_2 %{SYSLOGBASE} AUTH=server, relay=\[%{IP}\], authid=%{LOGIN:user}(@%{IPORHOST})?, %{GREEDYDATA:message}

SENDMAIL_AUTH (%{SENDMAIL_AUTH_1}|%{SENDMAIL_AUTH_2})

SENDMAIL_OTHER (%{SENDMAIL_OTHER_1}|%{SENDMAIL_OTHER_2}|%{SENDMAIL_OTHER_3}|%{SENDMAIL_OTHER_4}|%{SENDMAIL_OTHER_5})

SENDMAIL (%{SENDMAIL_TO}|%{SENDMAIL_FROM}|%{SENDMAIL_OTHER}|%{SENDMAIL_AUTH}|%{SENDMAIL_RELAY})
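The custom patterns above can be smoke-tested outside Logstash with ordinary grep, using regexes equivalent to %{QID} and the to=<...> capture. A sketch against a fabricated sendmail log fragment (the queue ID and address below are made up for illustration):

```shell
# A fabricated maillog fragment in the shape SENDMAIL_TO_3 expects
line='s9VFqwJv029415: to=<admin@mail.linuxblind.com>, delay=00:00:01, relay=mail.linuxblind.com, dsn=2.0.0, stat=Sent'

# %{QID} is 14 alphanumeric characters at the start of the message
qid=$(printf '%s' "$line" | grep -oE '^[A-Za-z0-9]{14}')
echo "qid=$qid"       # qid=s9VFqwJv029415

# %{EMAIL} is login@host; pull the recipient out of to=<...>
to=$(printf '%s' "$line" | grep -oE 'to=<[^>]+>' | sed 's/^to=<//; s/>$//')
echo "to=$to"         # to=admin@mail.linuxblind.com
```

If a real line from /var/log/maillog fails to yield a qid this way, the grok match in the filter below will fail on it too.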

  • Modify /etc/logstash/conf.d/10-syslog.conf and add the following content in the right place (inside the filter { } block).

if [type] == "maillog" {

grok {

match => [ "message", "%{SENDMAIL_TO_1}",

"message", "%{SENDMAIL_TO_2}",

"message", "%{SENDMAIL_TO_3}",

"message", "%{SENDMAIL_TO_4}",

"message", "%{SENDMAIL_TO_5}",

"message", "%{SENDMAIL_TO}",

"message", "%{SENDMAIL_FROM}",

"message", "%{SENDMAIL_OTHER_1}",

"message", "%{SENDMAIL_OTHER_2}",

"message", "%{SENDMAIL_OTHER_3}",

"message", "%{SENDMAIL_OTHER_4}",

"message", "%{SENDMAIL_OTHER_5}",

"message", "%{SENDMAIL_RELAY}",

"message", "%{SENDMAIL_AUTH_1}",

"message", "%{SENDMAIL_AUTH_2}",

"message", "%{SENDMAIL_AUTH}",

"message", "%{SENDMAIL_OTHER}",

"message", "%{SENDMAIL}"

]

}

}

  • service logstash restart

============= Configuration For Logstash with Logstash-forwarder =============

  • On Logstash master

    • cat 01-lumberjack-input.conf

input {

lumberjack {

port => 5000

type => "logs"

ssl_certificate => "/etc/pki/tls/certs/logstash.linuxblind.com.cert"

ssl_key => "/etc/pki/tls/private/logstash.linuxblind.com.key"

}

}

  • cat 10-syslog.conf

filter {

if [type] == "syslog" {

grok {

match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }

add_field => [ "received_at", "%{@timestamp}" ]

add_field => [ "received_from", "%{host}" ]

}

syslog_pri { }

date {

match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]

}

}

if [type] == "maillog" {

grok {

match => [ "message", "%{SENDMAIL_TO_1}",

"message", "%{SENDMAIL_TO_2}",

"message", "%{SENDMAIL_TO_3}",

"message", "%{SENDMAIL_TO_4}",

"message", "%{SENDMAIL_TO_5}",

"message", "%{SENDMAIL_TO}",

"message", "%{SENDMAIL_FROM}",

"message", "%{SENDMAIL_OTHER_1}",

"message", "%{SENDMAIL_OTHER_2}",

"message", "%{SENDMAIL_OTHER_3}",

"message", "%{SENDMAIL_OTHER_4}",

"message", "%{SENDMAIL_OTHER_5}",

"message", "%{SENDMAIL_RELAY}",

"message", "%{SENDMAIL_AUTH_1}",

"message", "%{SENDMAIL_AUTH_2}",

"message", "%{SENDMAIL_AUTH}",

"message", "%{SENDMAIL_OTHER}",

"message", "%{SENDMAIL}"

]

}

}

if [type] == "apache" {

grok {

pattern => "%{COMBINEDAPACHELOG}"

}

}

}

  • cat 30-lumberjack-output.conf

    output {

    elasticsearch { host => localhost }

    stdout { codec => rubydebug }

    }
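A note on the file names: Logstash loads every file under /etc/logstash/conf.d and concatenates them in lexical order, which is why the numeric prefixes (01-, 10-, 30-) matter: they keep the input, filter, and output sections in the intended order. The ordering can be seen with plain sort:

```shell
# The three pipeline files sort into input -> filter -> output order:
ordered=$(printf '%s\n' 30-lumberjack-output.conf 01-lumberjack-input.conf 10-syslog.conf | sort)
echo "$ordered"
```

Any new filter file should get a prefix between 01 and 30 so it lands between the input and output definitions.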

  • On Logstash agent
    • cat /etc/logstash-forwarder

{

"network": {

"servers": [ "logstash.linuxblind.com:5000" ],

"timeout": 15,

"ssl certificate": "/etc/pki/tls/certs/logstash.linuxblind.com.cert",

"ssl key": "/etc/pki/tls/private/logstash.linuxblind.com.key",

"ssl ca": "/etc/pki/tls/certs/logstash.linuxblind.com.cert"

},

"files": [

{

"paths": [

"/var/log/message"

],

"fields": { "type": "syslog" }

},

{

"paths": [

"/var/log/maillog"

],

"fields": { "type": "maillog" }

}

]

}

================================================================

  • How to package an RPM for logstash-forwarder (32-bit) and then install logstash-forwarder on Amazon Linux (32-bit) - the first time. The build environment's OS release must itself be 32-bit (i686/i386).

    • yum -y install gcc ruby-devel golang git rpm-build
    • gem install fpm
    • git clone git://github.com/elasticsearch/logstash-forwarder.git
    • cd logstash-forwarder
    • go build
    • Make packages, either:

      • make rpm  (choose this)
      • make deb

    • rpm -ivh logstash-forwarder-0.3.1-1.i686.rpm
    • wget https://raw.githubusercontent.com/jamtur01/logstashbook-code/master/code/4/logstash_forwarder_redhat_init --no-check-certificate
    • mv logstash_forwarder_redhat_init logstash-forwarder
    • chmod +x logstash-forwarder
    • cp -f logstash-forwarder /etc/init.d/logstash-forwarder
  • The second time (later 32-bit hosts can reuse the RPM built above)

    • vi /etc/sysconfig/logstash-forwarder (set the following)

LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"

    • Modify /etc/logstash-forwarder to match the log paths to ship
    • cd /var/log/; /etc/init.d/logstash-forwarder start

Author's note: this write-up is rough and a bit unpolished; I will tidy it up properly when I have time. Also, if you cannot download the files linked above, you can ask me for them.

QQ: 1037447289

Date: 2024-10-01 06:41:51
