Log Collection, Part 3: rsyslog + elasticsearch + kibana

rsyslog v8

The configuration-file syntax changed substantially in v8, and many new input and output modules were added, making rsyslog much easier to extend. Building on the v5 setup from the previous post, this part adds Elasticsearch output.
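To illustrate the syntax change, here is a minimal sketch contrasting the legacy directive style with the v8 RainerScript form (the module and port match the config below; everything else is illustrative):

```
# legacy (v5) directive style
$ModLoad imudp
$UDPServerRun 514

# v8 RainerScript equivalent
module(load="imudp")
input(type="imudp" port="514")
```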

Complete v8 collector-side configuration

module(load="imuxsock") # provides support for local system logging (e.g. via logger command)
module(load="imklog")   # provides kernel logging support (previously done by rklogd)
module(load="omelasticsearch")
module(load="imfile" mode="inotify" readtimeout="5" )
#input(type="imfile" file="/var/log/zabbix_*.log" tag="C+eebo.test.test+CD" ruleset="fileinput" facility="local6" PersistStateInterval="100" startmsg.regex="^[[:digit:]]{5}:[[:digit:]]{8}:[[:digit:]]{6}.[[:digit:]]{3}" maxsubmitatonce="1024000" readMode="2")

module(load="imudp" threads="2") # needs to be done just once
input(type="imudp" port="514" address="127.0.0.1" rcvbufSize="1m" Ruleset="udpinput")

### server side only: the receiving log server needs to load and configure these
#module(load="imtcp" keepalive="on" keepalive.time="30" KeepAlive.Probes="1") # needs to be done just once
#input(type="imtcp" port="514" Ruleset="tcpinput")

#### GLOBAL DIRECTIVES ####

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

# Spool files
#$WorkDirectory /var/spool/rsyslog
$WorkDirectory /var/lib/rsyslog

# Filter duplicate messages
$RepeatedMsgReduction on

$EscapeControlCharactersOnReceive off

#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
#*.info;mail.none;authpriv.none;cron.none;local0.none;local7.none                /var/log/messages
auth.info;authpriv.info;daemon.info;kern.info;syslog.info;user.info;lpr.info;mark.info                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  /var/log/maillog

# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

template(name="msg" type="string" string="%rawmsg%\n")

template(name="timemsg" type="string" string="%timegenerated:8:15% %msg%\n")

template(name="udppfile" type="string" string="/data/rsyslogs/%HOSTNAME%/%syslogtag%/%$year%-%$month%-%$day%.log")
template(name="tcppfile" type="string" string="/data/rsyslogs/%HOSTNAME%/%syslogtag%/%$year%-%$month%-%$day%_%fromhost-ip%.log")
template(name="tcpnfile" type="string" string="/data/rsyslogs/nginx/%syslogtag:F,66:1%/%$year%-%$month%-%$day%_%fromhost-ip%_%syslogtag:F,66:2%.log")

### forward local application (code) logs received via syslog to elasticsearch
template(name="udppname" type="string" string="%HOSTNAME%-%$year%.%$month%.%$day%")
template(name="udpelastic" type="list" option.json="on"){
    constant(value="{")
    constant(value="\"PROGRAM_NAME\":\"") property(name="hostname")
    constant(value="\",\"ENV\":\"") property(name="syslogtag")
    constant(value="\",\"LOG_TYPE\":\"code_syslog\",")
    constant(value="\"HOSTNAME\":\"") property(name="$!myhostname") #get host ip
    constant(value="\",\"TIME\":\"") property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"MSG\":\"") property(name="msg")
    constant(value="\"}")
}
### forward local nginx logs received via syslog to elasticsearch
#ACCESS:access_log syslog:server=127.0.0.1:514,facility=local1,tag=m_test_2haohr_com,severity=info main;
#ERROR:error_log syslog:server=127.0.0.1:514,facility=local1,tag=m_test_2haohr_com,severity=debug;
template(name="websitename" type="string" string="%syslogtag:F,66:1%-%$year%.%$month%.%$day%")
template(name="nginxelastic" type="list" option.json="on"){
    constant(value="{")
    constant(value="\"PROGRAM_NAME\":\"") property(name="syslogtag")
    constant(value="\",\"ENV\":\"") property(name="$!env")
    constant(value="\",\"LOG_TYPE\":\"") property(name="$!logtype")
    constant(value="\",\"HOSTNAME\":\"") property(name="hostname") #get host ip
    constant(value="\",\"TIME\":\"") property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"MSG\":\"") property(name="msg")
    constant(value="\"}")
}
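For a hypothetical nginx access line tagged m_test_2haohr_com, the nginxelastic template above would emit a JSON document along these lines (all field values here are made up for illustration):

```json
{
  "PROGRAM_NAME": "m_test_2haohr_com",
  "ENV": "test",
  "LOG_TYPE": "nginx_access",
  "HOSTNAME": "web01",
  "TIME": "2018-06-14T10:00:00+08:00",
  "MSG": "1.2.3.4 - - GET /index.html 200"
}
```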

ruleset(name="udpinput") {
    if prifilt("local0.*") then {
        action(type="omfile" dynaFile="udppfile" template="timemsg")
        action(type="omfwd" protocol="tcp" target="10.10.10.10" port="514" zipLevel="5" keepalive="on")
        set $!myhostname = "10-10-3-215";
        action(type="omelasticsearch" server="10.10.166.166" serverport="9200" template="udpelastic" dynSearchIndex="on" searchIndex="udppname" searchType="events")
    }else if $syslogfacility-text == "local1" then {
        set $!sitename = $syslogtag;
        if $syslogtag contains "_dev" then {
            set $!env = "dev";
        } else if $syslogtag contains "_test" then {
            set $!env = "test";
        } else {
            set $!env = "production";
        }
        if $syslogseverity-text == "info" then {
            set $!logtype = "nginx_access";
        } else if $syslogseverity-text == "debug" then {
            set $!logtype = "nginx_error";
        }
        action(type="omfwd" protocol="tcp" target="10.10.10.10" port="514" zipLevel="5" keepalive="on")
        action(type="omelasticsearch" server="10.10.166.166" serverport="9200" template="nginxelastic" dynSearchIndex="on" searchIndex="websitename" bulkmode="on" Action.ResumeRetryCount="-1")
    } else{
        action(type="omfile" file="/var/log/test.test.log")
        stop
    }
}
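To exercise the udpinput ruleset by hand, you can send a hand-built RFC 3164 datagram to the local UDP listener. A minimal sketch, assuming rsyslog is listening on 127.0.0.1:514 as configured above (the tag and message text are made up):

```python
import socket

def build_syslog_msg(facility, severity, tag, msg):
    """Build a minimal RFC 3164 datagram; PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    return "<%d>%s: %s" % (pri, tag, msg)

# local0 = facility 16, info = severity 6 -> PRI 134, so this message
# is routed by the prifilt("local0.*") branch of the udpinput ruleset
message = build_syslog_msg(16, 6, "myapp", "hello from the local0 path")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.sendto(message.encode("utf-8"), ("127.0.0.1", 514))
except OSError:
    pass  # harmless if nothing is listening or sockets are restricted
finally:
    sock.close()
```

After sending, the message should appear under /data/rsyslogs/ via the udppfile template, and in the Elasticsearch index named by udppname.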

template(name="filepname" type="string" string="%syslogtag:F,43:2%-%$year%.%$month%.%$day%")
template(name="fileselasticsearch" type="list" option.json="on"){
    constant(value="{")
    constant(value="\"PROGRAM_NAME\":\"") property(name="$!programname")
    constant(value="\",\"ENV\":\"") property(name="$!env")
    constant(value="\",\"LOG_TYPE\":\"") property(name="$!logtype")
    constant(value="\",\"HOSTNAME\":\"") property(name="hostname") #get host ip
    constant(value="\",\"TIME\":\"") property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"MSG\":\"") property(name="msg")
    constant(value="\"}")
}
ruleset(name="fileinput"){
    if prifilt("local6.*") then {
        set $!programname = field($syslogtag,"+",2);
        if re_match($syslogtag,"G+.*+.*E") then {
            set $!logtype = "gunicorn_error";
        } else if re_match($syslogtag,"G+.*+.*A") then {
            set $!logtype = "gunicorn_access";
        } else if re_match($syslogtag,"S+.*+.*G") then {
            set $!logtype = "gunicorn_supervisor";
        } else if re_match($syslogtag,"C+.*+.*C") then {
            set $!logtype = "celery_log";
        } else if re_match($syslogtag,"S+.*+.*C") then {
            set $!logtype = "celery_supervisor";
        }

        if $syslogtag contains "D" then {
            set $!env = "dev";
        } else if $syslogtag contains "T" then {
            set $!env = "test";
        } else if $syslogtag contains "P" then {
            set $!env = "production";
        }

        action(type="omfwd" protocol="tcp" target="10.10.10.10" port="514" zipLevel="5" keepalive="on")
        action(type="omelasticsearch" server="10.10.166.166" serverport="9200" template="fileselasticsearch" dynSearchIndex="on" searchIndex="filepname" bulkmode="on" Action.ResumeRetryCount="-1")
    }
    stop
}
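The tag convention the fileinput ruleset relies on (e.g. C+eebo.test.test+CD from the commented imfile input near the top) can be sketched in plain code. This is a hypothetical re-implementation, not part of rsyslog; note that the original checks D/T/P against the whole tag, which can misfire on letters in the program name, so this sketch checks only the suffix:

```python
def parse_tag(tag):
    """Mirror the fileinput ruleset's logic for tags like 'C+eebo.test.test+CD'.

    field($syslogtag, "+", 2) is the program name; the prefix letter plus a
    suffix letter select the log type; D/T/P in the suffix pick the env.
    """
    prefix, program, suffix = tag.split("+")
    if prefix == "G" and "E" in suffix:
        logtype = "gunicorn_error"
    elif prefix == "G" and "A" in suffix:
        logtype = "gunicorn_access"
    elif prefix == "S" and "G" in suffix:
        logtype = "gunicorn_supervisor"
    elif prefix == "C" and "C" in suffix:
        logtype = "celery_log"
    elif prefix == "S" and "C" in suffix:
        logtype = "celery_supervisor"
    else:
        logtype = None
    if "D" in suffix:
        env = "dev"
    elif "T" in suffix:
        env = "test"
    elif "P" in suffix:
        env = "production"
    else:
        env = None
    return program, logtype, env
```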

Rolling the configuration out to every machine with ansible-playbook

---
- hosts: GZtest
  remote_user: root
  vars:
    rsyslog_port: 514
    protocol: udp
  tasks:
  - name: add rsyslogd repository
    yum_repository:
     name: rsyslog_v8
     description: Adiscon CentOS-$releasever - local packages for $basearch
     baseurl: http://rpms.adiscon.com/v8-stable/epel-$releasever/$basearch
     enabled: yes
     gpgcheck: no
     gpgkey: http://rpms.adiscon.com/RPM-GPG-KEY-Adiscon
     file: rsyslog
     protect: yes 

  - name: install rsyslog-main
    yum:
     name: rsyslog
     state: latest

  - name: install rsyslog-elastic module
    yum:
     name: rsyslog-elasticsearch
     state: latest
  - name: copy rsyslog.conf
    copy:
     backup: yes
     src: /etc/rsyslog.d/rsyslog_v8_client.conf
     dest: /etc/rsyslog.conf

#    shell: sed -i "s/222/{{ansible_hostname}}/g" /root/shell/test.txt
  - name: change localhost
    replace:
      dest: /etc/rsyslog.conf
      regexp: '10-10-3-215'
      replace: '{{ansible_hostname}}'
    notify:
    - restart rsyslogd

  handlers:
  - name: restart rsyslogd
    service:
      name: rsyslog
      enabled: yes
      state: restarted

Other notes:

Elasticsearch installation and configuration

A single Elasticsearch node can only hold so much log data; consider keeping just the last week of indices and deleting older ones, since anything older can still be looked up in the raw log files.
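One way to implement the one-week retention is to compute which daily index names (the HOSTNAME-YYYY.MM.DD pattern used by the templates above) fall outside the window. A hypothetical sketch; the actual deletion would be an HTTP DELETE against Elasticsearch and is omitted here:

```python
from datetime import date, timedelta

def expired_indices(index_names, today, keep_days=7):
    """Return index names whose trailing YYYY.MM.DD part is older than keep_days."""
    cutoff = today - timedelta(days=keep_days)
    expired = []
    for name in index_names:
        try:
            host, datepart = name.rsplit("-", 1)
            y, m, d = (int(x) for x in datepart.split("."))
            if date(y, m, d) < cutoff:
                expired.append(name)
        except ValueError:
            continue  # skip names that don't match the HOSTNAME-YYYY.MM.DD pattern
    return expired
```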
Consider putting a queue in front of Elasticsearch to improve real-time indexing performance.
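The queue can be added directly on the omelasticsearch action with rsyslog's action-queue parameters; a sketch, with the server address taken from the config above and the queue sizes purely illustrative:

```
action(type="omelasticsearch" server="10.10.166.166" serverport="9200"
       template="nginxelastic" dynSearchIndex="on" searchIndex="websitename"
       bulkmode="on" action.resumeRetryCount="-1"
       queue.type="linkedList" queue.filename="es_queue"
       queue.size="100000" queue.saveOnShutdown="on")
```

A disk-assisted queue like this also buffers messages while Elasticsearch is down, instead of blocking or dropping them.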

Kibana installation and configuration

Access control is hard to manage: for now there is no plugin that restricts developers to the Discover module only. Security still matters, though; at a minimum, protect Kibana with Basic HTTP authentication.
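A common way to get that Basic auth (not covered in the original post) is an nginx reverse proxy in front of Kibana; a minimal sketch, assuming Kibana on its default port 5601, a hypothetical hostname, and an htpasswd file you create yourself:

```
server {
    listen 80;
    server_name kibana.example.com;  # hypothetical hostname

    location / {
        auth_basic           "Kibana";
        auth_basic_user_file /etc/nginx/kibana.htpasswd;  # created with htpasswd
        proxy_pass           http://127.0.0.1:5601;
    }
}
```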

Results

Indices stored in Elasticsearch

Viewing logs in Kibana:

Original article: http://blog.51cto.com/11424123/2126745

Posted: 2024-08-13 13:44:57
