Logstash parses, filters, and transforms the nginx logs; this configuration can be used in production.
The architecture: filebeat reads the log files and pushes them into Redis, and Logstash pulls the events from Redis and processes them.
The user-agent string and the client IP are also parsed, which makes later statistics easier.
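The filter below expects each nginx log line to be a single JSON object whose keys match the field names used later (access_time, remote_addr, agent, body_bytes_sent, up_response_time, request_time). The original post does not show the nginx side; a log_format along the following lines (escape=json requires nginx 1.11.8+) would produce matching input, so treat it as a sketch rather than the exact format used:

log_format json_log escape=json '{"access_time":"$time_local",'
                                '"remote_addr":"$remote_addr",'
                                '"request":"$request",'
                                '"status":"$status",'
                                '"body_bytes_sent":"$body_bytes_sent",'
                                '"request_time":"$request_time",'
                                '"up_response_time":"$upstream_response_time",'
                                '"agent":"$http_user_agent"}';

access_log /var/log/nginx/access.log json_log;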
input {
  redis {
    host => "192.168.1.109"
    port => 6379
    db => "0"
    data_type => "list"
    key => "test"
  }
}
filter {
  # the nginx log line shipped by filebeat is itself JSON: parse it and drop the raw string
  json {
    source => "message"
    remove_field => "message"
  }
  # split the user-agent string into browser/OS fields, dropping the pieces we do not need
  useragent {
    source => "agent"
    target => "agent"
    remove_field => ["[agent][build]","[agent][os_name]","[agent][device]","[agent][minor]","[agent][patch]"]
  }
  # use the nginx access time as the event @timestamp
  date {
    match => ["access_time", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  # drop filebeat bookkeeping fields and cast the numeric fields
  mutate {
    remove_field => ["beat","host","prospector","@version","offset","input","source","access_time"]
    convert => {"body_bytes_sent" => "integer"}
    convert => {"up_response_time" => "float"}
    convert => {"request_time" => "float"}
  }
  # resolve the client IP to a location; longitude/latitude are kept only inside the coordinates array
  geoip {
    source => "remote_addr"
    target => "geoip"
    remove_field => ["[geoip][country_code3]","[geoip][location]","[geoip][longitude]","[geoip][latitude]","[geoip][region_code]"]
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
  }
  mutate {
    convert => ["[geoip][coordinates]","float"]
  }
}
output {
  if [tags][0] == "newvp" {
    elasticsearch {
      hosts => ["192.168.1.110:9200","192.168.1.111:9200","192.168.1.112:9200"]
      index => "%{type}-%{+YYYY.MM.dd}"
    }
    stdout {
      codec => rubydebug
    }
    # stdout is only for debugging and can be removed in production
  }
}
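Before relying on the pipeline in production, it can be checked and run by hand. The file paths below are assumptions (typical package-install locations), so adjust them to your environment:

# check the configuration for syntax errors only, then exit
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit

# run the pipeline in the foreground; the stdout/rubydebug output above prints each parsed event
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf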
The filebeat configuration used to ship the logs:
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  tags: ["newvp"]
  fields:
    type: newvp
  fields_under_root: true

output.redis:
  hosts: ["192.168.1.109"]
  key: "test"
  datatype: list
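To confirm that filebeat is actually shipping events into Redis, the output can be tested and the Redis list inspected directly; the host and key below are the ones used in the configs above:

# verify the filebeat configuration and connectivity to the Redis output
filebeat test config
filebeat test output

# number of log lines currently queued in the "test" list
# (0 can simply mean Logstash is consuming as fast as filebeat produces)
redis-cli -h 192.168.1.109 llen test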
Original article: http://blog.51cto.com/liuzhengwei521/2141244