Logstash's multiline plugin: matching multi-line logs

When processing logs, besides access logs you also have to deal with runtime logs, which are mostly written by applications through logging frameworks such as log4j. The biggest difference between runtime logs and access logs is that runtime logs are multi-line: several consecutive lines together express a single event.

To handle this, add the multiline plugin to the filter section:

filter {
  multiline {
    # pattern and what are required settings; shown here with the
    # values used in the example later in this article
    pattern => "^\["
    negate => true
    what => "previous"
  }
}

Once the consecutive lines are merged into one event, splitting them into fields becomes straightforward, for example with a grok filter (sketched below).
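
For instance, a grok filter can pull the timestamp, log level, and logger name out of a merged log4j-style event. A minimal sketch, assuming the entry format used later in this article; the field names (log_time, level, logger, msg) are my own illustrations, not from the original:

filter {
  grok {
    # (?m) makes . match newlines, so GREEDYDATA also captures the
    # continuation lines that multiline merged into the message
    match => {
      "message" => "(?m)^\[%{DATA:log_time} %{LOGLEVEL:level}\] %{NOTSPACE:logger}:- %{GREEDYDATA:msg}"
    }
  }
}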

Settings:

For the multiline plugin, three settings matter most: negate, pattern, and what.

negate: boolean, defaults to false. When true the match is inverted, i.e. lines that do NOT match the pattern are the ones treated as matches.

pattern: required, no default; a string holding the regular expression to match each line against.

what: required, no default; either previous or next, saying whether affected lines are folded into the previous event or the next one (both combinations are sketched after this list).
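
Taken together, negate and what decide which side of a match stray lines attach to. Two common combinations, as hedged sketches (the trailing-backslash pattern in the second is an illustrative assumption, not from the original article):

# negate => true, what => "previous": any line NOT starting with "["
# is glued onto the event before it (the case used in this article)
codec => multiline {
    pattern => "^\["
    negate => true
    what => "previous"
}

# negate => false, what => "next": a line that DOES match (here, one
# ending in a backslash) is glued onto the line that follows it
codec => multiline {
    pattern => "\\$"
    negate => false
    what => "next"
}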

Now let's look at a complete example:

# cat logstash_multiline_shipper.conf
input {
    file {
        path => "/apps/logstash/conf/test/c.out"
        type => "runtimelog"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
        start_position => "beginning"
        sincedb_path => "/apps/logstash/logs/sincedb-access"
        ignore_older => 0
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
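
Before wiring the codec to a real file, it can help to test the pattern interactively. A quick sketch using the stdin input instead, so you can paste lines and watch how they are grouped:

input {
    stdin {
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}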

Explanation: lines beginning with "[" start a new event; any line that does not match the pattern is appended to the previous line's event.

The test data is as follows:

[16-04-12 03:40:01 DEBUG] model.MappingNode:- ['/store/shopclass'] matched over.
[16-04-12 03:40:02 DEBUG] impl.JdbcEntityInserter:- from product_category product_category
where product_category.PARENT_ID is null and product_category.STATUS = ? and product_category.DEALER_ID is null
order by product_category.ORDERS asc
[16-04-12 03:40:03 DEBUG] model.MappingNode:- ['/store/shopclass'] matched over.
[16-04-12 03:40:04 DEBUG] model.MappingNode:- ['/store/shopclass'] matched over.
[16-04-12 03:40:05 DEBUG] impl.JdbcEntityInserter:- from product_category product_category
where product_category.PARENT_ID is null and product_category.STATUS = ? and product_category.DEALER_ID is null
order by product_category.ORDERS desc
[16-04-12 03:40:06 DEBUG] impl.JdbcEntityInserter:- from product_category product_category
where product_category.PARENT_ID is null and product_category.STATUS = ? and product_category.DEALER_ID is null
order by product_category.ORDERS asc
[16-04-12 03:40:07 DEBUG] model.MappingNode:- ['/store/shopclass'] matched over.

Start Logstash:

# ./../bin/logstash -f logstash_multiline_shipper.conf
Sending Logstash's logs to /apps/logstash/logs which is now configured via log4j2.properties
[2016-12-09T15:16:59,173][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2016-12-09T15:16:59,192][INFO ][logstash.pipeline        ] Pipeline main started
[2016-12-09T15:16:59,263][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}

After appending test data to the monitored log, check the output (judging by the message fields below, what actually got appended to c.out in this run was the console text shown above rather than the sample entries):

{
          "path" => "/apps/logstash/conf/test/c.out",
    "@timestamp" => 2016-12-09T07:21:15.403Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "# ./../bin/logstash -f logstash_multiline_shipper.conf \nSending Logstash‘s logs to /apps/logstash/logs which is now configured via log4j2.properties",
          "type" => "runtimelog",
          "tags" => [
        [0] "multiline"
    ]
}
{
          "path" => "/apps/logstash/conf/test/c.out",
    "@timestamp" => 2016-12-09T07:21:15.409Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "[2016-12-09T15:16:59,173][INFO ][logstash.pipeline        ] Starting pipeline {\"id\"=>\"main\", \"pipeline.workers\"=>4, \"pipeline.batch.size\"=>125, \"pipeline.batch.delay\"=>5, \"pipeline.max_inflight\"=>500}",
          "type" => "runtimelog",
          "tags" => []
}
{
          "path" => "/apps/logstash/conf/test/c.out",
    "@timestamp" => 2016-12-09T07:21:15.410Z,
      "@version" => "1",
          "host" => "ofs1",
       "message" => "[2016-12-09T15:16:59,192][INFO ][logstash.pipeline        ] Pipeline main started",
          "type" => "runtimelog",
          "tags" => []
}
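
One caveat: with what => "previous", a buffered event is only emitted once the next line starting with "[" arrives, so the last entry in a quiet file can stay unflushed indefinitely. The codec's auto_flush_interval setting forces pending lines out after the given number of seconds; a minimal sketch (the 5-second value is an arbitrary illustration):

codec => multiline {
    pattern => "^\["
    negate => true
    what => "previous"
    # flush a buffered event if no new line arrives within 5 seconds
    auto_flush_interval => 5
}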