ELK Stack Production Cases (Part 2)

Link: https://pan.baidu.com/s/1V2aYpB86ZzxL21Hf-AF1rA
Extraction code: 7izv

4. Introducing Redis

4.1 Lab environment

Hostname          Host IP          Purpose
ES1               192.168.200.16   elasticsearch-node1
ES2               192.168.200.17   elasticsearch-node2
ES3               192.168.200.18   elasticsearch-node3
Logstash-Kibana   192.168.200.19   log visualization server
Web-Server        192.168.200.20   client simulating the various logs to be collected

4.2 Install and deploy Redis on Logstash-Kibana

4.2.1 Install the EPEL repository

[root@Logstash-Kibana ~]# yum -y install epel-release

4.2.2 Install Redis via yum

[root@Logstash-Kibana ~]# yum -y install redis
[root@Logstash-Kibana ~]# redis-server --version
Redis server v=3.2.12 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=7897e7d0e13773f

4.2.3 Modify the Redis configuration file

[root@Logstash-Kibana ~]# cp /etc/redis.conf{,.bak}

# Configuration before the change
[root@Logstash-Kibana ~]# cat -n /etc/redis.conf.bak | sed -n '61p;480p'
    61  bind 127.0.0.1
   480  # requirepass foobared

# Configuration after the change
[root@Logstash-Kibana ~]# cat -n /etc/redis.conf | sed -n '61p;480p'
    61  bind 0.0.0.0
   480  requirepass yunwei
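
The same two edits can be made non-interactively; a minimal sed sketch, matching on the option names shown above rather than on line numbers:

# Sketch: apply the bind and requirepass changes with sed
sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/' /etc/redis.conf
sed -i 's/^# requirepass foobared/requirepass yunwei/' /etc/redis.conf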

4.2.4 Start redis-server

[root@Logstash-Kibana ~]# systemctl start redis
[root@Logstash-Kibana ~]# netstat -antup | grep redis
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      8391/redis-server 0 
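
As a quick sanity check that both the bind address and the password took effect, redis-cli can ping the server over the network interface (standard redis-cli options):

redis-cli -h 192.168.200.19 -a yunwei ping    # should reply PONG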

4.3 Install logstash on the web server

4.3.1 Install JDK 1.8 via yum

[root@Web-Server ~]# yum -y install java-1.8.0-openjdk

4.3.2 Add the ELK yum repository file

[root@Web-Server ~]# vim /etc/yum.repos.d/elastic.repo
[root@Web-Server ~]# cat /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

4.3.3 Install logstash and filebeat via yum

[root@Web-Server ~]# yum -y install logstash filebeat

4.3.4 Create the logstash configuration that writes collected data into Redis

[root@Web-Server ~]# vim /etc/logstash/conf.d/logstash-to-redis.conf
[root@Web-Server ~]# cat /etc/logstash/conf.d/logstash-to-redis.conf
input {
  file {
    path => ["/var/log/messages"]
    type => "system"
    tags => ["syslog","test"]
    start_position => "beginning"
  }
  file {
    path => ["/var/log/audit/audit.log"]
    type => "system"
    tags => ["auth","test"]
    start_position => "beginning"
  }
}
filter {
}
output {
  redis {
    host => ["192.168.200.19:6379"]
    password => "yunwei"
    db => "0"
    data_type => "list"
    key => "logstash"
  }
}
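
Before starting the pipeline, the file can optionally be syntax-checked; --config.test_and_exit is a standard logstash flag that parses the config and exits:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-redis.conf --config.test_and_exit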

4.3.5 Start logstash on the WebServer

[root@Web-Server ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-redis.conf
# ... startup output omitted ...

4.3.6 Verify that logstash wrote data into Redis

[root@Logstash-Kibana ~]# redis-cli -a yunwei info Keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0

[root@Logstash-Kibana ~]# redis-cli -a yunwei scan 0
1) "0"
2) 1) "logstash"

[root@Logstash-Kibana ~]# redis-cli -a yunwei lrange logstash 0 1
1) "{\"type\":\"system\",\"path\":\"/var/log/messages\",\"@version\":\"1\",\"message\":\"May  3 19:33:21 ywb journal: Runtime journal is using 6.0M (max allowed 48.7M, trying to leave 73.0M free of 481.1M available \xe2\x86\x92 current limit 48.7M).\",\"@timestamp\":\"2019-09-11T16:34:10.575Z\",\"tags\":[\"syslog\",\"test\"],\"host\":\"web-server\"}"
2) "{\"type\":\"system\",\"path\":\"/var/log/audit/audit.log\",\"@version\":\"1\",\"message\":\"type=DAEMON_START msg=audit(1556883204.910:7254): op=start ver=2.8.1 format=raw kernel=3.10.0-862.el7.x86_64 auid=4294967295 pid=632 uid=0 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=success\",\"@timestamp\":\"2019-09-11T16:34:10.577Z\",\"tags\":[\"auth\",\"test\"],\"host\":\"web-server\"}"

[root@Logstash-Kibana ~]# redis-cli -a yunwei llen logstash
(integer) 26078

4.4 Create the logstash configuration that reads data from Redis on the logstash-kibana server

# Perform the following on Logstash-Kibana
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/logstash-from-redis.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/logstash-from-redis.conf
input {
  redis {
    host => "192.168.200.19"
    port => 6379
    password => "yunwei"
    db => "0"
    data_type => "list"
    key => "logstash"
  }
}
filter {
}
output {
  if [type] == "system" {
    if [tags][0] == "syslog" {
      elasticsearch {
        hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
        index => "logstash-mr_yang-syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
    else if [tags][0] == "auth" {
      elasticsearch {
        hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
        index => "logstash-mr_yang-auth-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
  }
}

4.5 Start the graphical ES plugin on ES1 and clear all indices in ES

[root@ES1 ~]# cd elasticsearch-head/
[root@ES1 elasticsearch-head]# npm run start

> elasticsearch-head@0.0.0 start /root/elasticsearch-head
> grunt server

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
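
If you prefer the command line to the head plugin, the indices can also be cleared through the ES REST API; a sketch assuming the logstash-* index naming used throughout this article:

curl -XDELETE 'http://192.168.200.16:9200/logstash-*'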

4.6 Start logstash on the logstash-kibana server and check Kibana

# Start logstash
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-from-redis.conf

# Check the keys in redis
[root@Logstash-Kibana ~]# redis-cli -a yunwei info Keyspace
# Keyspace
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen logstash
(integer) 0

# Note:
# All the keys in redis are now gone.
# That is because redis acts here as a lightweight message queue:
# the logstash instance writing into redis is the producer,
# and the logstash instance reading from redis is the consumer.
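
The list semantics behind this producer/consumer model are easy to demonstrate with redis-cli alone; a minimal sketch on a throwaway key (demo-queue is a made-up name):

redis-cli -a yunwei rpush demo-queue event1 event2   # producer appends to the tail
redis-cli -a yunwei llen demo-queue                  # queue length is now 2
redis-cli -a yunwei lpop demo-queue                  # consumer pops "event1" from the head
redis-cli -a yunwei del demo-queue                   # clean up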

4.6.1 After recreating the index patterns, view the data in Kibana

http://192.168.200.19:5601

4.6.2 Check the size of the indexed data in Elasticsearch

5. Introducing Filebeat

  • filebeat: lightweight, but does not support regular-expression extraction
  • logstash: supports regular-expression (grok) extraction, but is heavyweight and depends on Java

5.1 Install filebeat on the WebServer via yum

# Install filebeat
[root@Web-Server ~]# yum -y install filebeat

# Modify the filebeat configuration file
[root@Web-Server ~]# cp /etc/filebeat/filebeat.yml{,.bak}
[root@Web-Server ~]# vim /etc/filebeat/filebeat.yml
[root@Web-Server filebeat]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
  tags: ["syslog","test"]
  fields:
    type: system
  fields_under_root: true
- type: log
  paths:
    - /var/log/audit/audit.log
  tags: ["auth","test"]
  fields:
    type: system
  fields_under_root: true
output.redis:
  hosts: ["192.168.200.19"]
  password: "yunwei"
  key: "filebeat"
  db: 0
  datatype: list
# Start filebeat to test data collection
[root@Web-Server ~]# systemctl start filebeat

# Check whether redis on the logstash-kibana server has received data
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 224
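
Filebeat 6.x ships with a built-in self-test; the test subcommand can confirm both the config syntax and the connection to the redis output:

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml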

5.2 Clear the indices in ES with the graphical tool, then start logstash to read Redis data into ES

# Modify the logstash configuration file
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/logstash-from-redis.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/logstash-from-redis.conf
input {
  redis {
    host => "192.168.200.19"
    port => 6379
    password => "yunwei"
    db => "0"
    data_type => "list"
    key => "filebeat"           # Only this line changes: read from the "filebeat" key
  }
}
filter {
}
output {
  if [type] == "system" {
    if [tags][0] == "syslog" {
      elasticsearch {
        hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
        index => "logstash-mr_yang-syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
    else if [tags][0] == "auth" {
      elasticsearch {
        hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
        index => "logstash-mr_yang-auth-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
  }
}
# After clearing the ES data, start logstash to read from redis
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-from-redis.conf

# Watch the redis key being consumed
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 7
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 0
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 0

6. Production application case (Filebeat + Redis + ELK)

Hostname          Host IP          Purpose
ES1               192.168.200.16   elasticsearch-node1
ES2               192.168.200.17   elasticsearch-node2
ES3               192.168.200.18   elasticsearch-node3
Logstash-Kibana   192.168.200.19   log visualization server
Web-Server        192.168.200.20   client simulating the various logs to be collected

6.1 Collecting nginx logs

6.1.1 Deploy the nginx web server

# Install dependencies
[root@Web-Server ~]# yum -y install pcre-devel openssl-devel
# Build and install nginx
[root@Web-Server ~]# tar xf nginx-1.10.2.tar.gz -C /usr/src/
[root@Web-Server ~]# cd /usr/src/nginx-1.10.2/
[root@Web-Server nginx-1.10.2]# useradd -s /sbin/nologin -M nginx
[root@Web-Server nginx-1.10.2]# ./configure --user=nginx --group=nginx --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module
# ... output omitted ...

[root@Web-Server nginx-1.10.2]# make && make install
# ... output omitted ...
# Create symlinks for the nginx binaries
[root@Web-Server nginx-1.10.2]# ln -s /usr/local/nginx/sbin/* /usr/local/sbin/
[root@Web-Server nginx-1.10.2]# which nginx
/usr/local/sbin/nginx
[root@Web-Server nginx-1.10.2]# nginx -v
nginx version: nginx/1.10.2
# Edit the nginx configuration file
[root@Web-Server nginx-1.10.2]# cd /usr/local/nginx/
[root@Web-Server nginx]# vim conf/nginx.conf
[root@Web-Server nginx]# cat conf/nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{ "@timestamp":"$time_iso8601", '
                    '"remote_addr":"$remote_addr",'
                    '"remote_user":"$remote_user",'
                    '"body_bytes_sent":"$body_bytes_sent",'
                    '"request_time":"$request_time",'
                    '"status":"$status",'
                    '"request_uri":"$request_uri",'
                    '"request_method":"$request_method",'
                    '"http_referer":"$http_referer",'
                    '"http_x_forwarded_for":"$http_x_forwarded_for",'
                    '"http_user_agent":"$http_user_agent"}';
    access_log logs/access_main.log main;        # enable main-format access logging
    access_log logs/access_json.log json;        # enable json-format access logging
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  www.ywb.com;
        location / {
            root   html/www;
            index  index.html index.htm;
        }
    }
}
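
Before starting nginx for the first time, the edited configuration can be validated; nginx -t checks syntax without serving traffic:

nginx -t    # expects "syntax is ok" and "test is successful"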

[root@Web-Server nginx]# mkdir -p html/www
[root@Web-Server nginx]# echo "welcome to hyx" > html/www/index.html
[root@Web-Server nginx]# cat html/www/index.html
welcome to hyx
# Start nginx
[root@Web-Server nginx]# nginx
[root@Web-Server nginx]# netstat -antup | grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      11789/nginx: master 

[root@Web-Server nginx]# curl 192.168.200.20
welcome to hyx
[root@Web-Server nginx]# curl 192.168.200.20
welcome to hyx
[root@Web-Server nginx]# cat logs/access_main.log      # view the main-format access log
192.168.200.20 - - [22/Sep/2019:16:21:22 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.29.0" "-"
192.168.200.20 - - [22/Sep/2019:16:21:23 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.29.0" "-"

[root@Web-Server nginx]# cat logs/access_json.log      # view the json-format access log
{ "@timestamp":"2019-09-22T16:21:22+08:00", "remote_addr":"192.168.200.20","remote_user":"-","body_bytes_sent":"15","request_time":"0.000","status":"200","request_uri":"/","request_method":"GET","http_referer":"-","http_x_forwarded_for":"-","http_user_agent":"curl/7.29.0"}
{ "@timestamp":"2019-09-22T16:21:23+08:00", "remote_addr":"192.168.200.20","remote_user":"-","body_bytes_sent":"15","request_time":"0.000","status":"200","request_uri":"/","request_method":"GET","http_referer":"-","http_x_forwarded_for":"-","http_user_agent":"curl/7.29.0"}
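
To confirm that the json log_format really emits parseable JSON (the logstash json filter used later depends on it), each line can be fed through a JSON parser; a sketch assuming the system python available on CentOS 7:

tail -1 /usr/local/nginx/logs/access_json.log | python -m json.tool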

6.1.2 Modify the filebeat configuration on the WebServer

# Change the filebeat configuration to the following
[root@Web-Server nginx]# vim /etc/filebeat/filebeat.yml
[root@Web-Server nginx]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /usr/local/nginx/logs/access_json.log     # collect the json-format access log
  tags: ["access"]
  fields:
    app: www
    type: nginx-access-json
  fields_under_root: true

- type: log
  paths:
    - /usr/local/nginx/logs/access_main.log     # collect the main-format access log
  tags: ["access"]
  fields:
    app: www
    type: nginx-access
  fields_under_root: true

- type: log
  paths:
    - /usr/local/nginx/logs/error.log           # collect the error log
  tags: ["error"]
  fields:
    app: www
    type: nginx-error
  fields_under_root: true

output.redis:                                    # output to redis
  hosts: ["192.168.200.19"]
  password: "yunwei"
  key: "filebeat"
  db: 0
  datatype: list
# Restart filebeat
[root@Web-Server nginx]# systemctl restart filebeat
# Check the redis key on the logstash-kibana server
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 63

6.1.3 Modify the logstash configuration on the logstash-kibana server

# Change the logstash configuration to the following
[root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/logstash-from-redis.conf
[root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/logstash-from-redis.conf
input {
  redis {
    host => "192.168.200.19"
    port => 6379
    password => "yunwei"
    db => "0"
    data_type => "list"
    key => "filebeat"
  }
}
filter {
  if [app] == "www" {                           # if the project name is www
    if [type] == "nginx-access-json" {          # json-format data
      json {
        source => "message"                     # parse the json data in the message field
        remove_field => ["message"]             # drop the message field afterwards
      }
      geoip {
        source => "remote_addr"                 # resolve the origin of remote_addr
        target => "geoip"                       # write the result into the geoip field
        database => "/opt/GeoLite2-City.mmdb"   # location of the geoip database file
        add_field => ["[geoip][coordinates]","%{[geoip][longitude]}"]  # append longitude to the coordinates array
        add_field => ["[geoip][coordinates]","%{[geoip][latitude]}"]   # append latitude to the coordinates array
      }
      mutate {
        convert => ["[geoip][coordinates]","float"]  # convert the coordinate values to float
      }
    }
    if [type] == "nginx-access" {               # main-format data
      grok {
        match => {
          "message" => '(?<client>[0-9.]+).*'   # extract the client field from message
        }
      }
      geoip {
        source => "client"                      # resolve the origin of the client field
        target => "geoip"
        database => "/opt/GeoLite2-City.mmdb"
        add_field => ["[geoip][coordinates]","%{[geoip][longitude]}"]
        add_field => ["[geoip][coordinates]","%{[geoip][latitude]}"]
      }
      mutate {
        convert => ["[geoip][coordinates]","float"]
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.200.16:9200","http://192.168.200.17:9200","http://192.168.200.18:9200"]
    index => "logstash-mr_yang-%{type}-%{+YYYY.MM.dd}"    # the type value selects the target index
  }
  stdout { codec => rubydebug }
}
# Start the logstash process
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-from-redis.conf

# Watch the redis key being consumed
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 0
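
The geoip filter above assumes the GeoLite2 City database already exists at /opt/GeoLite2-City.mmdb. A sketch of fetching it (the URL reflects MaxMind's free download location at the time of writing and is an assumption; MaxMind has since started requiring a license key):

# Hypothetical download path; check maxmind.com for the current URL and licensing
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz -O /tmp/GeoLite2-City.tar.gz
tar xf /tmp/GeoLite2-City.tar.gz -C /tmp
cp /tmp/GeoLite2-City_*/GeoLite2-City.mmdb /opt/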

6.1.4 Create the Kibana index patterns

Associate the indices in Kibana to display the collected data: http://192.168.200.19:5601

6.2 Collecting Java stack-trace logs

6.2.1 Deploy Tomcat

[root@Web-Server ~]# wget http://mirror.bit.edu.cn/apache/tomcat/tomcat-8/v8.5.33/bin/apache-tomcat-8.5.33.tar.gz
[root@Web-Server ~]# tar xf apache-tomcat-8.5.33.tar.gz -C /usr/local/
[root@Web-Server ~]# mv /usr/local/apache-tomcat-8.5.33 /usr/local/tomcat

[root@Web-Server ~]# /usr/local/tomcat/bin/startup.sh
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Tomcat started.

[root@Web-Server ~]# tail -f /usr/local/tomcat/logs/catalina.out   # view the log
26-Sep-2019 04:53:19.113 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [24] ms
26-Sep-2019 04:53:19.113 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
26-Sep-2019 04:53:19.448 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [335] ms
26-Sep-2019 04:53:19.448 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
26-Sep-2019 04:53:19.474 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [26] ms
26-Sep-2019 04:53:19.475 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
26-Sep-2019 04:53:19.499 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [24] ms
26-Sep-2019 04:53:19.514 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
26-Sep-2019 04:53:19.523 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
26-Sep-2019 04:53:19.526 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 962 ms

6.2.2 Access Tomcat from a browser

http://192.168.200.20:8080/
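
The same check can be done from the shell; a plain curl against the Tomcat port:

curl -I http://192.168.200.20:8080/    # an HTTP/1.1 200 response means Tomcat is serving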

6.2.3 Configure filebeat to collect the log

catalina.out is the Tomcat log that contains the Java stack traces.

# Example stack-trace error from catalina.out
2019-09-26 04:20:08
[ERROR]-[Thread: Druid-ConnectionPool-Create-1090484466]-[com.alibaba.druid.pool.DruidDataSource$CreateConnectionThread.run()]: create connection error, url: jdbc:mysql://localhost:3306/jpress?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull, errorCode 0, state 08S01
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.GeneratedConstructorAccessor25.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
        at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:350)
        at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2393)
        at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2430)
        at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2215)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:813)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.GeneratedConstructorAccessor22.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:399)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:334)
        at com.alibaba.druid.filter.FilterChainImpl.connection_connect(FilterChainImpl.java:148)
        at com.alibaba.druid.filter.stat.StatFilter.connection_connect(StatFilter.java:211)
        at com.alibaba.druid.filter.FilterChainImpl.connection_connect(FilterChainImpl.java:142)
        at com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1423)
        at com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1477)
        at com.alibaba.druid.pool.DruidDataSource$CreateConnectionThread.run(DruidDataSource.java:2001)
    Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at java.net.Socket.connect(Socket.java:538)
        at java.net.Socket.<init>(Socket.java:434)
        at java.net.Socket.<init>(Socket.java:244)
        at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:257)
        at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:300)
        ... 17 more
# Modify the filebeat configuration to add collection of the tomcat stack-trace log
[root@Web-Server ~]# vim /etc/filebeat/filebeat.yml
[root@Web-Server ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /usr/local/nginx/logs/access_json.log
  tags: ["access"]
  fields:
    app: www
    type: nginx-access-json
  fields_under_root: true

- type: log
  paths:
    - /usr/local/nginx/logs/access_main.log
  tags: ["access"]
  fields:
    app: www
    type: nginx-access
  fields_under_root: true

- type: log
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["error"]
  fields:
    app: www
    type: nginx-error
  fields_under_root: true

- type: log
  paths:
    - /usr/local/tomcat/logs/catalina.out
  tags: ["tomcat"]
  fields:
    app: www
    type: tomcat-catalina
  fields_under_root: true
  multiline:
    pattern: '^\['
    negate: true
    match: after

output.redis:
  hosts: ["192.168.200.19"]
  password: "yunwei"
  key: "filebeat"
  db: 0
  datatype: list
# Restart filebeat
[root@Web-Server ~]# systemctl restart filebeat

# Check the redis queue length
[root@Logstash-Kibana ~]# redis-cli -a yunwei llen filebeat
(integer) 7

# Start the logstash process on the logstash-kibana server
[root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-from-redis.conf
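
The multiline block in the filebeat config above is what keeps each stack trace together: with pattern '^\[', negate: true, and match: after, any line that does not start with '[' is appended to the preceding event, so a whole Java stack trace reaches ES as a single document. A quick way to preview how the pattern splits the file (plain grep, no filebeat needed):

grep -c '^\['  /usr/local/tomcat/logs/catalina.out   # lines that would start a new event
grep -vc '^\[' /usr/local/tomcat/logs/catalina.out   # continuation lines folded into the previous event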

6.2.4 Create the Kibana index display

6.3 Kibana visualizations and dashboards

6.3.1 Simulate some different client IPs in the main-format nginx access log

113.108.182.52
123.150.187.130
203.186.145.250
114.80.166.240
119.147.146.189
58.89.67.152
[root@Web-Server ~]# a='58.89.67.152 - - [26/Aug/2018:14:17:33 +0800] "GET / HTTP/1.1" 200 21 "-" "curl/7.29.0" "-"'
[root@Web-Server ~]# for i in `seq 50`;do echo "$a" >> /usr/local/nginx/logs/access_main.log ;done

6.3.2 PV/IP

Counting PV (page views) simply means counting the number of requests within a unit of time.

6.3.3 Counting IPs means counting the number of distinct client IPs
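
Both metrics can be cross-checked against the raw access log with standard shell tools; a sketch on the main-format log:

wc -l < /usr/local/nginx/logs/access_main.log                              # PV: total requests
awk '{print $1}' /usr/local/nginx/logs/access_main.log | sort -u | wc -l   # IP: distinct client addresses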

6.3.4 Geographic distribution of users

Original article: https://www.cnblogs.com/ywb123/p/11594257.html


这里讲讲,angular2在生产模式下用webpack2进行打包的方法: //使用rollup打包还是比较坑的,功能及插件上都不如webpack, 关键不支持代码分离,导致angular里的lazy loader将无法使用. 具体步骤: angular=>aot=>webpack(Tree shaking&& Uglify) angular=>aot: 首先你需要的依赖: "@angular/compiler"     "@angular/c