Collecting nginx logs with Logstash

1. First, configure nginx to write the access log in a custom pipe-delimited format:

    log_format  main  "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | $request_body | $content_length | $http_referer | $http_user_agent |"
                      "$http_cookie | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time" ;

    access_log  /var/log/nginx/access.log  main;
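
After changing the configuration, validate it and reload nginx so the new log format takes effect (standard nginx commands; adapt the reload step to your service manager):

    nginx -t && nginx -s reload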

2. Next, create the Logstash config file that processes the nginx log:

input {
        file {
                path => ["/var/log/nginx/access.log"]
        }
}

filter {
        ruby {
                # split the pipe-delimited message into the fields named in @kname
                init => "@kname = ['http_x_forwarded_for','time_local','request','status','body_bytes_sent','request_body','content_length','http_referer','http_user_agent','http_cookie','remote_addr','hostname','upstream_addr','upstream_response_time','request_time']"
                code => "
                        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('message').split('|'))])
                        new_event.remove('@timestamp')
                        event.append(new_event)
                "
        }

if [request] {
        ruby {
                # split the request field ('GET / HTTP/1.1') into method, uri and verb
                init => "@kname = ['method','uri','verb']"
                code => "
                        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
                        new_event.remove('@timestamp')
                        event.append(new_event)
                "
        }
}
if [uri] {
        ruby {
                # split the uri into the path and the query string
                init => "@kname = ['url_path','url_args']"
                code => "
                        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('uri').split('?'))])
                        new_event.remove('@timestamp')
                        event.append(new_event)
                "
        }
}
kv {
        # keep only the uid and cip query parameters, prefixed with url_
        prefix => "url_"
        source => "url_args"
        field_split => "&"
        include_keys => ["uid","cip"]
        remove_field => ["url_args","uri","request"]
}
mutate {
        convert => {
                "body_bytes_sent" => "integer"
                "content_length" => "integer"
                "upstream_response_time" => "float"
                "request_time" => "float"
        }
}
date {
        # time_local still carries spaces from the split, so this currently fails; see the note below
        match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
        locale => "en"
}
}
output{stdout{}}

This example is adapted from the one in the ELKstack权威指南 (see pages 39 and 66); the version in the book contains errors, which have been fixed here.

3. Finally, run it and check the result, shown below.
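
One way to start Logstash with this configuration (assuming it was saved as nginx-access.conf; adjust the path to your installation):

    bin/logstash -f nginx-access.conf

The stdout output's default codec is rubydebug, so each parsed event is printed in the form below: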

{
                  "url_path" => "/",
           "body_bytes_sent" => 0,
                  "@version" => "1",
                   "message" => "- | 05/Mar/2019:16:21:40 +0800 | GET / HTTP/1.1 | 304 | 0 | - | - | - | Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0 |- | 172.16.0.10 | elk-chaofeng07 | - | - | 0.000",
                      "host" => "ELK-chaofeng07",
               "http_cookie" => "- ",
             "upstream_addr" => " - ",
    "upstream_response_time" => 0.0,
                "@timestamp" => 2019-03-05T08:21:41.352Z,
                       "uri" => "/",
                   "request" => " GET / HTTP/1.1 ",
                      "path" => "/var/log/nginx/access.log",
                  "url_args" => nil,
                  "hostname" => " elk-chaofeng07 ",
                      "verb" => "HTTP/1.1",
           "http_user_agent" => " Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0 ",
                "time_local" => " 05/Mar/2019:16:21:40 +0800 ",
              "request_body" => " - ",
               "remote_addr" => " 172.16.0.10 ",
                    "status" => " 304 ",
              "request_time" => 0.0,
                    "method" => "GET",
              "http_referer" => " - ",
                      "tags" => [
        [0] "_dateparsefailure"
    ],
            "content_length" => 0,
      "http_x_forwarded_for" => "- "
}

The only remaining flaw is the _dateparsefailure tag in the output above: splitting the message on "|" leaves leading and trailing spaces around fields such as time_local (note the value " 05/Mar/2019:16:21:40 +0800 "), so the date filter cannot parse it.
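
A minimal sketch of a fix (my suggestion, not part of the original post): strip the whitespace with the mutate filter's strip option, placed inside the filter block before the date filter:

    mutate {
            # remove the spaces left around the value by the split on "|"
            strip => ["time_local"]
    }

With time_local cleaned up, the date filter parses it and the _dateparsefailure tag disappears; the other fields can be stripped the same way if needed.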

Original article: https://www.cnblogs.com/FengGeBlog/p/10477829.html
