ELK Lab Experiment 019: Using Redis as a Cache in the ELK Stack

1 Install a Redis service

[root@node4 ~]# yum -y install redis

Start it directly:

[root@node4 ~]# systemctl restart redis

[root@node4 ~]# systemctl status redis

[root@node4 ~]# redis-cli -h 127.0.0.1

2 Configure Filebeat to send data to Redis

[root@node4 ~]# vim /etc/filebeat/filebeat.yml

filebeat.inputs:
#####################################################
## Nginx log
#####################################################
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  json.key_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

#- type: log
#  enabled: true
#  paths:
#    - /usr/local/nginx/logs/error.log
#  tags: ["error"]

#####################################################
## tomcat  log
#####################################################
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  json.key_under_root: true
  json.overwrite_keys: true
  tags: ["tomcat"]

#####################################################
## java  log
#####################################################
- type: log
  enabled: true
  paths:
    - /usr/local/elasticsearch/logs/my-elktest-cluster.log
  tags: ["es-java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: "after"

#####################################################
## docker  log
#####################################################
- type: docker
  containers.ids:
    - '*'
  json.key_under_root: true
  json.overwrite_keys: true
  tags: ["docker"]

#####################################################
## output redis
#####################################################
output.redis:
  hosts: ["127.0.0.1"]
  key: "filebeat"
  db: 0
  timeout: 5

[root@node4 ~]# systemctl restart filebeat

Access the services to generate some log entries.

3 Check Redis

127.0.0.1:6379> keys *
1) "filebeat"
127.0.0.1:6379> type filebeat         # check the key's type
list
127.0.0.1:6379> llen filebeat         # check the list length
(integer) 22
127.0.0.1:6379> LRANGE filebeat 1 22  # note: list indices start at 0, so LRANGE filebeat 0 -1 lists every element
 1) "{\"@timestamp\":\"2020-01-20T14:22:15.291Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.4.2\"},\"agent\":{\"hostname\":\"node4\",\"id\":\"bb3818f9-66e2-4eb2-8f0c-3f35b543e025\",\"version\":\"7.4.2\",\"type\":\"filebeat\",\"ephemeral_id\":\"663027a7-1bdc-4a9f-b9d3-1297ef06c0b0\"},\"log\":{\"offset\":21185,\"file\":{\"path\":\"/usr/local/nginx/logs/error.log\"}},\"message\":\"2020/01/20 09:22:08 [error] 2790#0: *32 open() \\\"/usr/local/nginx/html/favicon.ico\\\" failed (2: No such file or directory), client: 192.168.132.1, server: localhost, request: \\\"GET /favicon.ico HTTP/1.1\\\", host: \\\"192.168.132.134\\\", referrer: \\\"http://192.168.132.134/\\\"\",\"tags\":[\"error\"],\"input\":{\"type\":\"log\"},\"ecs\":{\"version\":\"1.1.0\"},\"host\":{\"name\":\"node4\"}}"

Pretty-printed as JSON:

{
    "@timestamp": "2020-01-20T14:22:15.293Z",
    "@metadata": {
        "beat": "filebeat",
        "type": "_doc",
        "version": "7.4.2"
    },
    "log": {
        "offset": 21460,
        "file": {
            "path": "/usr/local/nginx/logs/access.log"
        }
    },
    "json": {
        "size": 612,
        "xff": "-",
        "upstreamhost": "-",
        "url": "/index.html",
        "domain": "192.168.132.134",
        "upstreamtime": "-",
        "@timestamp": "2020-01-20T09:22:08-05:00",
        "clientip": "192.168.132.1",
        "host": "192.168.132.134",
        "status": "200",
        "http_host": "192.168.132.134",
        "Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36",
        "responsetime": 0,
        "referer": "-"
    },
    "tags": ["access"],
    "input": {
        "type": "log"
    },
    "ecs": {
        "version": "1.1.0"
    },
    "host": {
        "name": "node4"
    },
    "agent": {
        "id": "bb3818f9-66e2-4eb2-8f0c-3f35b543e025",
        "version": "7.4.2",
        "type": "filebeat",
        "ephemeral_id": "663027a7-1bdc-4a9f-b9d3-1297ef06c0b0",
        "hostname": "node4"
    }
}
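
Each element stored in the Redis list is one self-contained JSON document, so any JSON library can decode it. A minimal sketch in Python (the event below is a trimmed-down copy of the LRANGE output above, with string values shortened; only the field layout matters):

```python
import json

# Trimmed-down Filebeat event as it sits in the Redis list
# (structure matches the output above; values abbreviated).
raw = ('{"@timestamp":"2020-01-20T14:22:15.291Z",'
       '"log":{"offset":21185,"file":{"path":"/usr/local/nginx/logs/error.log"}},'
       '"message":"2020/01/20 09:22:08 [error] ...",'
       '"tags":["error"],"input":{"type":"log"},'
       '"host":{"name":"node4"}}')

event = json.loads(raw)
print(event["tags"])                 # ['error']
print(event["log"]["file"]["path"])  # /usr/local/nginx/logs/error.log
print(event["host"]["name"])         # node4
```

The `tags` field is what the later sections use to route events to different Redis keys and Elasticsearch indices.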

4 Use Logstash to collect and consume the data from Redis

Install Logstash on node4:

[root@node4 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.5.1.rpm

[root@node4 ~]# rpm -ivh logstash-7.5.1.rpm

[root@node4 ~]# vim /etc/logstash/conf.d/logsatsh.conf

input {
  redis {
    host => "127.0.0.1"
    port => "6379"
    db => "0"
    key => "filebeat"
    data_type => "list"
  }
}
filter{
  mutate {
    # Note: in the events above the nginx fields are nested under [json] and
    # named upstreamtime/responsetime, so these converts only take effect if
    # matching top-level fields exist.
    convert => ["upstream_time","float"]
    convert => ["request_time","float"]
  }
}

output{
  stdout {}
  elasticsearch {
    hosts => "192.168.132.131:9200"
    manage_template => false
    index => "nginx_access-%{+yyyy.MM.dd}"
  }
}

[root@node4 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh.conf

The index now exists in Elasticsearch.

Generate traffic:

[[email protected] ~]# ab -n 20000 -c 20 http://192.168.132.134

Check Redis:

127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> keys *
1) "filebeat"
127.0.0.1:6379> LLEN filebeat
(integer) 16875
127.0.0.1:6379> LLEN filebeat
(integer) 16000
127.0.0.1:6379> LLEN filebeat
(integer) 15125
127.0.0.1:6379> LLEN filebeat
(integer) 14375
127.0.0.1:6379> LLEN filebeat
(integer) 13625
127.0.0.1:6379> LLEN filebeat
(integer) 13000
127.0.0.1:6379> LLEN filebeat
(integer) 12375
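
The falling LLEN values show the Redis list acting as a FIFO buffer: Filebeat pushes events onto one end while Logstash pops them from the other, so the burst from ab is absorbed and then drained at Logstash's own pace. The semantics can be sketched with an in-memory stand-in (plain Python, no Redis client; `FakeRedisList` is purely illustrative):

```python
from collections import deque

class FakeRedisList:
    """In-memory stand-in for one Redis list key (illustrative only)."""
    def __init__(self):
        self.items = deque()

    def rpush(self, value):
        # Producer side: append to the tail, as Filebeat's redis output does.
        self.items.append(value)

    def lpop(self):
        # Consumer side: pop from the head, as the Logstash redis input does.
        return self.items.popleft() if self.items else None

    def llen(self):
        return len(self.items)

buf = FakeRedisList()
buf.rpush('{"tags":["access"],"message":"GET /index.html"}')
buf.rpush('{"tags":["error"],"message":"open() failed"}')
print(buf.llen())   # 2 -- events queued while the consumer is busy
print(buf.lpop())   # oldest event first (FIFO)
print(buf.llen())   # 1
```

If Logstash goes down, events simply accumulate in the list and are consumed once it restarts, which is the point of putting Redis between Filebeat and Logstash.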

View in Kibana:

{
  "_index": "nginx_access-2020.01.20",
  "_type": "_doc",
  "_id": "A1J-w28BOF7DoSFdyQr8",
  "_version": 1,
  "_score": null,
  "_source": {
    "host": {
      "name": "node4"
    },
    "tags": [
      "access"
    ],
    "input": {
      "type": "log"
    },
    "ecs": {
      "version": "1.1.0"
    },
    "log": {
      "file": {
        "path": "/usr/local/nginx/logs/access.log"
      },
      "offset": 12386215
    },
    "json": {
      "host": "192.168.132.134",
      "upstreamtime": "-",
      "xff": "-",
      "status": "200",
      "referer": "-",
      "http_host": "192.168.132.134",
      "Agent": "ApacheBench/2.3",
      "url": "/index.html",
      "responsetime": 0,
      "domain": "192.168.132.134",
      "size": 612,
      "clientip": "192.168.132.135",
      "upstreamhost": "-",
      "@timestamp": "2020-01-20T10:07:11-05:00"
    },
    "agent": {
      "hostname": "node4",
      "id": "bb3818f9-66e2-4eb2-8f0c-3f35b543e025",
      "type": "filebeat",
      "ephemeral_id": "efddca40-1d19-4036-9724-410b1b6d4c8b",
      "version": "7.4.2"
    },
    "@version": "1",
    "@timestamp": "2020-01-20T15:07:16.439Z"
  },
  "fields": {
    "json.@timestamp": [
      "2020-01-20T15:07:11.000Z"
    ],
    "@timestamp": [
      "2020-01-20T15:07:16.439Z"
    ]
  },
  "sort": [
    1579532836439
  ]
}

5 Add the error log to Filebeat

- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["error"]
output.redis:
  hosts: ["127.0.0.1"]
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"
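
The `when.contains` rules route each event to a Redis key based on its tags. The selection logic amounts to this (a hypothetical `select_key` helper, not Filebeat code; the fallback when no rule matches is an assumption):

```python
def select_key(event):
    """Mimic the output.redis keys/when.contains rules above (sketch only)."""
    tags = event.get("tags", [])
    if "access" in tags:
        return "nginx_access"
    if "error" in tags:
        return "nginx_error"
    return "filebeat"  # assumed default key when no rule matches

print(select_key({"tags": ["access"]}))  # nginx_access
print(select_key({"tags": ["error"]}))   # nginx_error
```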

Generate requests that hit the error log:

[[email protected] ~]# ab -n 20000 -c 200 http://192.168.132.134/hehe

127.0.0.1:6379> keys *
1) "nginx_error"
2) "nginx_access"

6 Configure Logstash for the two keys

[root@node4 ~]# cat /etc/logstash/conf.d/logsatsh.conf

input {
  redis {
    host => "127.0.0.1"
    port => "6379"
    db => "0"
    key => "nginx_access"
    data_type => "list"
  }
  redis {
    host => "127.0.0.1"
    port => "6379"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}
filter{
  mutate {
    convert => ["upstream_time","float"]
    convert => ["request_time","float"]
  }
}

output{
  stdout {}
  if "access" in [tags]{
    elasticsearch {
      hosts => "192.168.132.131:9200"
      manage_template => false
      index => "nginx_access-%{+yyyy.MM.dd}"
    }
  }
  if "error" in [tags]{
    elasticsearch {
      hosts => "192.168.132.131:9200"
      manage_template => false
      index => "nginx_error-%{+yyyy.MM.dd}"
    }
  }
}
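
Each event carries its tags through the pipeline, and the output section combines the tag with the event's @timestamp to build a daily index name via `%{+yyyy.MM.dd}`. A sketch of that routing (`index_for` is illustrative, not Logstash internals; events with neither tag are dropped by the real config, which the sketch ignores):

```python
from datetime import datetime

def index_for(event):
    """Pick the daily index the way the output block above does (sketch)."""
    ts = datetime.strptime(event["@timestamp"], "%Y-%m-%dT%H:%M:%S.%fZ")
    family = "nginx_access" if "access" in event["tags"] else "nginx_error"
    return f"{family}-{ts.strftime('%Y.%m.%d')}"

print(index_for({"@timestamp": "2020-01-20T15:07:16.439Z",
                 "tags": ["access"]}))  # nginx_access-2020.01.20
```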

[root@node4 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh.conf

7 After Logstash starts, the Redis lists are consumed

127.0.0.1:6379> keys *
1) "nginx_error"
2) "nginx_access"
127.0.0.1:6379> LLEN nginx_error
(integer) 20000
127.0.0.1:6379> LLEN nginx_error
(integer) 20000
127.0.0.1:6379> LLEN nginx_error
(integer) 20000
127.0.0.1:6379> LLEN nginx_error
(integer) 20000
127.0.0.1:6379> LLEN nginx_error
(integer) 16125
127.0.0.1:6379> LLEN nginx_error
(integer) 15750
127.0.0.1:6379> LLEN nginx_access
(integer) 14625
127.0.0.1:6379> LLEN nginx_access
(integer) 14375
127.0.0.1:6379> LLEN nginx_error
(integer) 14000

Check the indices.

8 Optimize the configuration

As the run above shows, Logstash distinguishes the two log types purely by their tags, so the Logstash input does not need two separate keys; a single key is enough.

[root@node4 ~]# vim /etc/logstash/conf.d/logsatsh.conf

input {
  redis {
    host => "127.0.0.1"
    port => "6379"
    db => "0"
    key => "nginx"
    data_type => "list"
  }
}
filter{
  mutate {
    convert => ["upstream_time","float"]
    convert => ["request_time","float"]
  }
}

output{
  stdout {}
  if "access" in [tags]{
    elasticsearch {
      hosts => "192.168.132.131:9200"
      manage_template => false
      index => "nginx_access-%{+yyyy.MM.dd}"
    }
  }
  if "error" in [tags]{
    elasticsearch {
      hosts => "192.168.132.131:9200"
      manage_template => false
      index => "nginx_error-%{+yyyy.MM.dd}"
    }
  }
}

Filebeat configuration:

#####################################################
## output redis
#####################################################
output.redis:
  hosts: ["127.0.0.1"]
  key: "nginx"

[root@node4 ~]# systemctl restart filebeat

[root@node4 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh.conf

Access the services to generate logs:

[[email protected] ~]# ab -n 20000 -c 200 http://192.168.132.134/hehe

Check Redis:

127.0.0.1:6379> keys *
1) "nginx"
127.0.0.1:6379> LLEN nginx
(integer) 20125
127.0.0.1:6379> LLEN nginx
(integer) 19625
127.0.0.1:6379> LLEN nginx
(integer) 18250
127.0.0.1:6379> LLEN nginx
(integer) 17500
127.0.0.1:6379> LLEN nginx
(integer) 16750
127.0.0.1:6379> LLEN nginx
(integer) 0

Check the indices.

The indices are created; the experiment is complete.

Original article: https://www.cnblogs.com/zyxnhr/p/12219867.html
