Logging Platform in Practice: The ELK Stack

In day-to-day operations we frequently run into problems like these:

① Developers cannot log in to production servers to inspect detailed logs.

② Every system writes its own logs, so the data is scattered and hard to search.

③ Log volumes are large, so queries are slow or the data is not fresh enough.

④ A single call spans multiple systems, making it hard to trace it quickly across their logs.

The currently popular ELK Stack addresses all of these needs.


Components of the ELK Stack (Elasticsearch, Logstash, Kibana):


The overall data-flow diagram:

Hands-on walkthrough:

① Download the packages:

[[email protected] tools]# ll

total 289196

-rw-r--r-- 1 root root  28487351 Mar 24 11:29 elasticsearch-1.7.5.tar.gz

-rw-r--r-- 1 root root 173271626 Mar 24 11:19 jdk-8u45-linux-x64.tar.gz

-rw-r--r-- 1 root root  18560531 Mar 24 11:00 kibana-4.1.6-linux-x64.tar.gz

-rw-r--r-- 1 root root  74433726 Mar 24 11:06 logstash-2.1.3.tar.gz

-rw-r--r-- 1 root root   1375200 Mar 24 11:03 redis-3.0.7.tar.gz

[[email protected] tools]#

② Install and configure Elasticsearch

[[email protected] tools]# tar xf elasticsearch-1.7.5.tar.gz

[[email protected] tools]# mv elasticsearch-1.7.5 /usr/local/elasticsearch

[[email protected] tools]#

[[email protected] tools]# cd /usr/local/elasticsearch/

[[email protected] elasticsearch]# ll

total 40

drwxr-xr-x 2 root root  4096 Mar 24 11:34 bin

drwxr-xr-x 2 root root  4096 Mar 24 11:34 config

drwxr-xr-x 3 root root  4096 Mar 24 11:34 lib

-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt

-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt

-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile

[[email protected] elasticsearch]#

[[email protected] elasticsearch]# cd config/

[[email protected] config]# ll

total 20

-rw-rw-r-- 1 root root 13476 Feb  2 17:24 elasticsearch.yml

-rw-rw-r-- 1 root root  2054 Feb  2 17:24 logging.yml

[[email protected] config]#

Edit the configuration file as follows:

[[email protected] config]#

[[email protected] config]# grep '^[a-z]' elasticsearch.yml

cluster.name: elasticsearch

node.name: "node01"

node.master: true

node.data: true

index.number_of_shards: 5

index.number_of_replicas: 1

path.conf: /usr/local/elasticsearch/conf

path.data: /usr/local/elasticsearch/data

path.work: /usr/local/elasticsearch/work

path.logs: /usr/local/elasticsearch/logs

path.plugins: /usr/local/elasticsearch/plugins

bootstrap.mlockall: true

[[email protected] config]#
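With `index.number_of_shards: 5` and `index.number_of_replicas: 1`, every index will consist of 10 shard copies in total (5 primaries plus one replica of each). On a single-node cluster the replicas cannot be allocated anywhere, which is why a lone node reports yellow cluster health. A quick sanity check of the arithmetic:

```python
primaries = 5   # index.number_of_shards
replicas = 1    # index.number_of_replicas

# total shard copies per index = primaries * (1 + replicas)
total_copies = primaries * (1 + replicas)
print(total_copies)  # 10
```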

Create the corresponding directories:

[[email protected] elasticsearch]# mkdir -p /usr/local/elasticsearch/{data,work,logs,plugins}

[[email protected] elasticsearch]# ll

total 56

drwxr-xr-x 2 root root  4096 Mar 24 11:34 bin

drwxr-xr-x 2 root root  4096 Mar 24 12:52 config

drwxr-xr-x 3 root root  4096 Mar 24 12:51 data

drwxr-xr-x 3 root root  4096 Mar 24 11:34 lib

-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt

drwxr-xr-x 2 root root  4096 Mar 24 13:00 logs

-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt

drwxr-xr-x 2 root root  4096 Mar 24 13:01 plugins

-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile

drwxr-xr-x 2 root root  4096 Mar 24 13:00 work

[[email protected] elasticsearch]#

③ Start the Elasticsearch service

[[email protected] elasticsearch]# pwd

/usr/local/elasticsearch

[[email protected] elasticsearch]# ll

total 44

drwxr-xr-x 2 root root  4096 Mar 24 11:34 bin

drwxr-xr-x 2 root root  4096 Mar 24 12:52 config

drwxr-xr-x 3 root root  4096 Mar 24 12:51 data

drwxr-xr-x 3 root root  4096 Mar 24 11:34 lib

-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt

-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt

-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile

[[email protected] elasticsearch]# /usr/local/elasticsearch/bin/elasticsearch

[[email protected] elasticsearch]#

To run Elasticsearch in the background, just add -d:

[[email protected] elasticsearch]# /usr/local/elasticsearch/bin/elasticsearch -d

After starting, check the listening ports:

[[email protected] elasticsearch]# netstat -lpnut|grep java

tcp        0      0 :::9300                     :::*                        LISTEN      26868/java

tcp        0      0 :::9200                     :::*                        LISTEN      26868/java

udp        0      0 :::54328                    :::*                                    26868/java

[[email protected] elasticsearch]#

Query the HTTP endpoint for status information:

[[email protected] elasticsearch]# curl http://10.0.0.10:9200

{

"status" : 200,

"name" : "node01",

"cluster_name" : "elasticsearch",

"version" : {

"number" : "1.7.5",

"build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",

"build_timestamp" : "2016-02-02T09:55:30Z",

"build_snapshot" : false,

"lucene_version" : "4.10.4"

},

"tagline" : "You Know, for Search"

}

[[email protected] elasticsearch]#
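The root endpoint returns plain JSON, so a monitoring script can consume it directly. A minimal sketch using only the Python standard library, parsing the response body shown above (pasted as a literal rather than fetched over the network):

```python
import json

# The JSON body returned by GET / on port 9200, as shown above
resp = json.loads('''
{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.5",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
''')

# A health-check script would typically assert on these fields
assert resp["status"] == 200
print(resp["name"], resp["version"]["number"])  # node01 1.7.5
```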

④ A script to manage the Elasticsearch service

[[email protected] elasticsearch]# git clone https://github.com/elastic/elasticsearch-servicewrapper

Initialized empty Git repository in /usr/local/elasticsearch/elasticsearch-servicewrapper/.git/

remote: Counting objects: 184, done.

remote: Total 184 (delta 0), reused 0 (delta 0), pack-reused 184

Receiving objects: 100% (184/184), 4.55 MiB | 46 KiB/s, done.

Resolving deltas: 100% (53/53), done.

[[email protected] elasticsearch]#

[[email protected] elasticsearch]# mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/

[[email protected] elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch

Usage: /usr/local/elasticsearch/bin/service/elasticsearch [ console | start | stop | restart | condrestart | status | install | remove | dump ]

Commands:

console      Launch in the current console.

start        Start in the background as a daemon process.

stop         Stop if running as a daemon or in another console.

restart      Stop if running and then start.

condrestart  Restart only if already running.

status       Query the current status.

install      Install to start automatically when system boots.

remove       Uninstall.

dump         Request a Java thread dump if running.

[[email protected] elasticsearch]#

Following the usage output, install it into the system startup entries:

[[email protected] elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch install

Detected RHEL or Fedora:

Installing the Elasticsearch daemon..

[[email protected] elasticsearch]# chkconfig --list|grep elas

elasticsearch   0:off   1:off   2:on    3:on    4:on    5:on    6:off

[[email protected] elasticsearch]#

Start the Elasticsearch service via the service command:

[[email protected] logs]# service elasticsearch start

Starting Elasticsearch...

Waiting for Elasticsearch......

running: PID:28084

[[email protected] logs]#

If startup fails, the output looks like this:

[[email protected] service]# service elasticsearch start

Starting Elasticsearch...

Waiting for Elasticsearch......................

WARNING: Elasticsearch may have failed to start.

Troubleshoot using the startup log:

[[email protected] logs]# pwd

/usr/local/elasticsearch/logs

[[email protected] logs]# ll

total 4

-rw-r--r-- 1 root root 3909 Mar 24 13:32 service.log

[[email protected] logs]# cat service.log

The configured wrapper.java.command could not be found, attempting to launch anyway: java

Launching a JVM...

VM...

jvm 3    | VM...

wrapper  |

wrapper  | ------------------------------------------------------------------------

wrapper  | Advice:

wrapper  | Usually when the Wrapper fails to start the JVM process, it is because

wrapper  | of a problem with the value of the configured Java command.  Currently:

wrapper  | wrapper.java.command=java

wrapper  | Please make sure that the PATH or any other referenced environment

wrapper  | variables are correctly defined for the current environment.

wrapper  | ------------------------------------------------------------------------

wrapper  |

wrapper  | The configured wrapper.java.command could not be found, attempting to launch anyway: java

wrapper  | Launching a JVM...

Following the advice above, edit wrapper.java.command in elasticsearch.conf to point at the Java binary actually installed in this environment, then restart:

[[email protected] service]# pwd

/usr/local/elasticsearch/bin/service

[[email protected] service]#

[[email protected] service]# ll elasticsearch.conf

-rw-r--r-- 1 root root 4768 Mar 24 13:32 elasticsearch.conf

[[email protected] service]#

[[email protected] logs]# service elasticsearch start

Starting Elasticsearch...

Waiting for Elasticsearch......

running: PID:28084

[[email protected] logs]#

Check that it is up:

[[email protected] service]# netstat -lnput

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      968/sshd

tcp        0      0 127.0.0.1:32000             0.0.0.0:*                   LISTEN      28086/java

tcp        0      0 :::9300                     :::*                        LISTEN      28086/java

tcp        0      0 :::22                       :::*                        LISTEN      968/sshd

tcp        0      0 :::9200                     :::*                        LISTEN      28086/java

udp        0      0 :::54328                    :::*                                    28086/java

[[email protected] service]# curl http://10.0.0.10:9200

{

"status" : 200,

"name" : "node01",

"cluster_name" : "elasticsearch",

"version" : {

"number" : "1.7.5",

"build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",

"build_timestamp" : "2016-02-02T09:55:30Z",

"build_snapshot" : false,

"lucene_version" : "4.10.4"

},

"tagline" : "You Know, for Search"

}

[[email protected] service]#

⑤ Java API

Node client

Transport client

⑥ RESTful API

⑦ Clients for other languages: JavaScript, .NET, PHP, Perl, Python, Ruby

Example query:

[[email protected] service]# curl -XGET 'http://10.0.0.10:9200/_count?pretty' -d '{"query":{"match_all":{}}}'

{

"count" : 0,

"_shards" : {

"total" : 0,

"successful" : 0,

"failed" : 0

}

}

[[email protected] service]#
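The -d body above is a Query DSL match_all query, which counts every document. Building the request body programmatically (a stdlib-only sketch) makes the structure explicit and avoids shell-quoting pitfalls like the one fixed above:

```python
import json

# match_all: the simplest Query DSL query, matching every document
query = {"query": {"match_all": {}}}
body = json.dumps(query)
print(body)  # what gets sent as the -d payload

# The _count response shape, as shown in the transcript above
response = json.loads('{"count": 0, "_shards": {"total": 0, "successful": 0, "failed": 0}}')
print(response["count"])
```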

Plugins extend Elasticsearch with additional functionality. There are three types:

1. Java plugins

These contain only JAR files. They must be installed on every node in the cluster, and each node must be restarted for the plugin to take effect.

2. Site plugins

These contain static web content (JS, CSS, HTML, and so on) served directly by Elasticsearch, such as the head plugin. They only need to be installed on one node and require no restart. They can be reached at a URL of the form http://node-ip:9200/_plugin/plugin_name

3. Mixed plugins

As the name implies, a combination of the two types above.

Install the Marvel plugin:

[[email protected] service]# /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest

-> Installing elasticsearch/marvel/latest...

Trying http://download.elasticsearch.org/elasticsearch/marvel/marvel-latest.zip...

Downloading ....................................................................................................................................................................................................................................................................................................................................................DONE

Installed elasticsearch/marvel/latest into /usr/local/elasticsearch/plugins/marvel

[[email protected] service]#

Once installed, open it in a browser:

http://10.0.0.10:9200/_plugin/marvel/kibana/index.html#/dashboard/file/marvel.overview.json

Select the Dashboards menu:

Click Sense to open the console:

Enter the request:

Note the id generated on the right-hand side, then query by that id on the left.

For further queries, you can do the following:


⑧ Installing the cluster-management plugin

[[email protected] service]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head

-> Installing mobz/elasticsearch-head...

Trying https://github.com/mobz/elasticsearch-head/archive/master.zip...

Downloading ...................................................................................................................................................................................................................................................................................DONE

Installed mobz/elasticsearch-head into /usr/local/elasticsearch/plugins/head

[[email protected] service]#

After installation, open it in a browser:

http://10.0.0.10:9200/_plugin/head/

To deploy a second node, node02, just change node.name: "node02" in the Elasticsearch config file; everything else stays the same as node01.

[[email protected] tools]# cd

[[email protected] ~]# grep '^[a-z]' /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: elasticsearch

node.name: "node02"

node.master: true

node.data: true

index.number_of_shards: 5

index.number_of_replicas: 1

path.conf: /usr/local/elasticsearch/conf

path.data: /usr/local/elasticsearch/data

path.work: /usr/local/elasticsearch/work

path.logs: /usr/local/elasticsearch/logs

path.plugins: /usr/local/elasticsearch/plugins

bootstrap.mlockall: true

[[email protected] ~]#

[[email protected] ~]# mkdir -p /usr/local/elasticsearch/{data,work,logs,plugins}

[[email protected] ~]# ll /usr/local/elasticsearch/

total 56

drwxr-xr-x 2 root root  4096 Mar 24 14:23 bin

drwxr-xr-x 2 root root  4096 Mar 24 14:23 config

drwxr-xr-x 2 root root  4096 Mar 24 14:31 data

drwxr-xr-x 4 root root  4096 Mar 24 14:27 elasticsearch-servicewrapper

drwxr-xr-x 3 root root  4096 Mar 24 14:23 lib

-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt

drwxr-xr-x 2 root root  4096 Mar 24 14:31 logs

-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt

drwxr-xr-x 2 root root  4096 Mar 24 14:31 plugins

-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile

drwxr-xr-x 2 root root  4096 Mar 24 14:31 work

[[email protected] ~]#

Install the service-management tool elasticsearch-servicewrapper:

[[email protected] elasticsearch]# git clone https://github.com/elastic/elasticsearch-servicewrapper

Initialized empty Git repository in /usr/local/elasticsearch/elasticsearch-servicewrapper/.git/

remote: Counting objects: 184, done.

remote: Total 184 (delta 0), reused 0 (delta 0), pack-reused 184

Receiving objects: 100% (184/184), 4.55 MiB | 10 KiB/s, done.

Resolving deltas: 100% (53/53), done.

[[email protected] elasticsearch]# ll

total 80

drwxr-xr-x 3 root root  4096 Mar 24 14:33 bin

drwxr-xr-x 2 root root  4096 Mar 24 14:23 config

drwxr-xr-x 2 root root  4096 Mar 24 14:31 data

drwxr-xr-x 3 root root  4096 Mar 24 14:33 elasticsearch-servicewrapper

drwxr-xr-x 3 root root  4096 Mar 24 14:23 lib

-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt

drwxr-xr-x 2 root root  4096 Mar 24 14:31 logs

-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt

drwxr-xr-x 2 root root  4096 Mar 24 14:36 plugins

-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile

drwxr-xr-x 2 root root  4096 Mar 24 14:31 work

-rw-r--r-- 1 root root 18208 Mar 24 14:35 wrapper.log

[[email protected] elasticsearch]#

[[email protected] elasticsearch]# mv elasticsearch-servicewrapper/service /usr/local/elasticsearch/bin/

[[email protected] elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch

Usage: /usr/local/elasticsearch/bin/service/elasticsearch [ console | start | stop | restart | condrestart | status | install | remove | dump ]

Commands:

console      Launch in the current console.

start        Start in the background as a daemon process.

stop         Stop if running as a daemon or in another console.

restart      Stop if running and then start.

condrestart  Restart only if already running.

status       Query the current status.

install      Install to start automatically when system boots.

remove       Uninstall.

dump         Request a Java thread dump if running.

[[email protected] elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch install

Detected RHEL or Fedora:

Installing the Elasticsearch daemon..

[[email protected] elasticsearch]# /etc/init.d/elasticsearch start

Starting Elasticsearch...

Waiting for Elasticsearch..........................

running: PID:26753

[[email protected] elasticsearch]#

[[email protected] service]# pwd

/usr/local/elasticsearch/bin/service

[[email protected] service]# vim elasticsearch.conf

Note: set.default.ES_HEAP_SIZE must be smaller than the server's physical memory. If it is set to the full physical memory, the service will fail to start.
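A common rule of thumb (an assumption here, not taken from the wrapper's documentation) is to give the JVM at most about half of physical RAM, and in any case strictly less than the total, which is exactly the constraint the note above describes. A hypothetical helper sketching that rule:

```python
def suggested_heap_mb(physical_mb):
    """Pick a heap size strictly below physical memory (hypothetical helper).

    Rule-of-thumb sketch: half of RAM, capped at ~31 GB so the JVM
    can keep using compressed object pointers.
    """
    half = physical_mb // 2
    return min(half, 31 * 1024)

print(suggested_heap_mb(4096))    # 2048  (4 GB box -> 2 GB heap)
print(suggested_heap_mb(131072))  # 31744 (128 GB box -> capped at ~31 GB)
```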

Continuing from above, refresh http://10.0.0.10:9200/_plugin/head/

View node01's information:

Overview:

Indices:

Browser:

Basic info:

Compound query:


Getting started with Logstash

The official docs recommend installing via yum:

Download and install the public signing key:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example logstash.repo

[logstash-2.2]

name=Logstash repository for 2.2.x packages

baseurl=http://packages.elastic.co/logstash/2.2/centos

gpgcheck=1

gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch

enabled=1

And your repository is ready for use. You can install it with:

yum install logstash

Here, however, we install from the tarball:

[[email protected] tools]# tar xf logstash-2.1.3.tar.gz

[[email protected] tools]# mv logstash-2.1.3 /usr/local/logstash

[[email protected] tools]# cd /usr/local/logstash/

[[email protected] logstash]# ll

total 152

drwxr-xr-x 2 root root   4096 Mar 24 15:50 bin

-rw-rw-r-- 1 root root 100805 Feb 17 05:00 CHANGELOG.md

-rw-rw-r-- 1 root root   2249 Feb 17 05:00 CONTRIBUTORS

-rw-rw-r-- 1 root root   3771 Feb 17 05:05 Gemfile

-rw-rw-r-- 1 root root  21970 Feb 17 05:00 Gemfile.jruby-1.9.lock

drwxr-xr-x 4 root root   4096 Mar 24 15:50 lib

-rw-rw-r-- 1 root root    589 Feb 17 05:00 LICENSE

-rw-rw-r-- 1 root root    149 Feb 17 05:00 NOTICE.TXT

drwxr-xr-x 4 root root   4096 Mar 24 15:50 vendor

[[email protected] logstash]#

Logstash configuration file format:

input { stdin { } }

output {

elasticsearch { hosts => ["localhost:9200"] }

stdout { codec => rubydebug }

}

Configuration file structure:

# This is a comment. You should use comments to describe

# parts of your configuration.

input {

...

}

filter {

...

}

output {

...

}

How plugins are declared inside a section:

input {

file {

path => "/var/log/messages"

type => "syslog"

}

file {

path => "/var/log/apache/access.log"

type => "apache"

}

}

Plugin value types:

Array

Example:

path => [ "/var/log/messages", "/var/log/*.log" ]

path => "/data/mysql/mysql.log"

Boolean

Example:

ssl_enable => true

Bytes

Examples:

my_bytes => "1113"   # 1113 bytes

my_bytes => "10MiB"  # 10485760 bytes

my_bytes => "100kib" # 102400 bytes

my_bytes => "180 mb" # 180000000 bytes
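The bytes type accepts both SI units (kb, mb: powers of 1000) and binary units (kib, mib: powers of 1024), case-insensitively and with optional whitespace. A small helper mimicking that behavior, which reproduces the four examples above (an approximation for illustration, not Logstash's actual parser):

```python
import re

def parse_bytes(s):
    """Convert a Logstash-style bytes string ("10MiB", "180 mb") to a byte count."""
    m = re.match(r'^\s*(\d+)\s*([a-z]*)\s*$', s.lower())
    num, unit = int(m.group(1)), m.group(2)
    if unit in ('', 'b'):
        return num                      # bare number: already bytes
    prefixes = {'k': 1, 'm': 2, 'g': 3, 't': 4}
    power = prefixes[unit[0]]
    base = 1024 if unit.endswith('ib') else 1000  # binary vs SI units
    return num * base ** power

print(parse_bytes("1113"))    # 1113
print(parse_bytes("10MiB"))   # 10485760
print(parse_bytes("100kib"))  # 102400
print(parse_bytes("180 mb"))  # 180000000
```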

Codec

Example:

codec => "json"



Hash

Example:

match => {

"field1" => "value1"

"field2" => "value2"

...

}

Number

Numbers must be valid numeric values (floating point or integer).

Example:

port => 33

Password

A password is a string with a single value that is not logged or printed.

Example:

my_password => "password"

Path

A path is a string that represents a valid operating system path.

Example:

my_path => "/tmp/logstash"

String

A string must be a single character sequence. Note that string values are enclosed in quotes, either double or single. Literal quotes in the string need to be escaped with a backslash if they are of the same kind as the string delimiter, i.e. single quotes within a single-quoted string need to be escaped as well as double quotes within a double-quoted string.

Example:

name => "Hello world"

name => 'It\'s a beautiful day'

Comments

Comments are the same as in perl, ruby, and python. A comment starts with a # character, and does not need to be at the beginning of a line. For example:

# this is a comment

input { # comments can appear at the end of a line, too

# ...

}

Plugins vary between Logstash versions; this walkthrough uses logstash-2.1.3.tar.gz:

https://www.elastic.co/guide/en/logstash/2.1/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-hosts

Logstash is started like so:

/usr/local/logstash/bin/logstash  -f /etc/logstash.conf  &

Usage is as follows:

[[email protected] ~]# /usr/local/logstash/bin/logstash -t

Error: Usage:

/bin/logstash agent [OPTIONS]

Options:

-f, --config CONFIG_PATH      Load the logstash config from a specific file

or directory.  If a directory is given, all

files in that directory will be concatenated

in lexicographical order and then parsed as a

single config file. You can also specify

wildcards (globs) and any matched files will

be loaded in the order described above.

-e CONFIG_STRING              Use the given string as the configuration

data. Same syntax as the config file. If no

input is specified, then the following is

used as the default input:

"input { stdin { type => stdin } }"

and if no output is specified, then the

following is used as the default output:

"output { stdout { codec => rubydebug } }"

If you wish to use both defaults, please use

the empty string for the '-e' flag.

(default: "")

-w, --filterworkers COUNT     Sets the number of filter workers to run.

(default: 0)

-l, --log FILE                Write logstash internal logs to the given

file. Without this flag, logstash will emit

logs to standard output.

-v                            Increase verbosity of logstash internal logs.

Specifying once will show 'informational'

logs. Specifying twice will show 'debug'

logs. This flag is deprecated. You should use

--verbose or --debug instead.

--quiet                       Quieter logstash logging. This causes only

errors to be emitted.

--verbose                     More verbose logging. This causes 'info'

level logs to be emitted.

--debug                       Most verbose logging. This causes 'debug'

level logs to be emitted.

-V, --version                 Emit the version of logstash and its friends,

then exit.

-p, --pluginpath PATH         A path of where to find plugins. This flag

can be given multiple times to include

multiple paths. Plugins are expected to be

in a specific directory hierarchy:

'PATH/logstash/TYPE/NAME.rb' where TYPE is

'inputs', 'filters', 'outputs' or 'codecs'

and NAME is the name of the plugin.

-t, --configtest              Check configuration for valid syntax and then exit.

--[no-]allow-unsafe-shutdown  Force logstash to exit during shutdown even

if there are still inflight events in memory.

By default, logstash will refuse to quit until all

received events have been pushed to the outputs.

(default: false)

-h, --help                    print help

07-logstash-file-redis-es


Install and configure Redis:

tar xf redis-3.0.7.tar.gz

cd redis-3.0.7

make MALLOC=jemalloc

make PREFIX=/application/redis-3.0.7 install

ln -s /application/redis-3.0.7 /application/redis

echo "export PATH=/application/redis/bin:$PATH" >>/etc/profile

source /etc/profile

mkdir -p /application/redis/conf

cp redis.conf  /application/redis/conf/

vim /application/redis/conf/redis.conf

Edit the bind address to the machine's own IP address.

redis-server /application/redis/conf/redis.conf &

Switch to node01 and connect to the Redis service on node02:

[[email protected] ~]# redis-cli -h 10.0.0.11 -p 6379

10.0.0.11:6379> info

# Server

redis_version:3.0.7

redis_git_sha1:00000000

Modify the Logstash configuration as follows:

[[email protected] httpd]# cat /etc/logstash.conf

input {

file {

path => "/etc/httpd/logs/access_log"

}

}

output {

redis {

data_type => "list"

key => "system-messages"

host => "10.0.0.11"

port => "6379"

db => "1"

}

}

[[email protected] httpd]#

Here the Apache access log serves as the test source: the input is the file "/etc/httpd/logs/access_log", and the output is stored in Redis under the key "system-messages", host 10.0.0.11, port 6379, database db=1.
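With data_type => "list", Logstash's redis output pushes each event onto the tail of a Redis list, and a downstream consumer later pops from the head, so the list acts as a simple FIFO buffer between shipper and indexer. The mechanics can be sketched without a Redis server by standing in a deque for the list (conceptual only; the actual plugins speak the Redis protocol):

```python
import json
from collections import deque

broker = deque()  # stands in for the Redis list "system-messages"

# producer side (what the redis output plugin does, conceptually: RPUSH)
for i in range(3):
    event = {"message": f"GET /index.html {i}", "host": "node01",
             "path": "/etc/httpd/logs/access_log"}
    broker.append(json.dumps(event))

print(len(broker))  # like LLEN system-messages -> 3

# consumer side (what the redis input plugin does, conceptually: LPOP/BLPOP)
consumed = 0
while broker:
    event = json.loads(broker.popleft())  # oldest event first
    consumed += 1                          # would be forwarded to elasticsearch

print(len(broker))  # drained -> 0
```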

Make sure Apache is running:

[[email protected] httpd]# /etc/init.d/httpd status

httpd (pid  15418) is running...

[[email protected] httpd]#

[[email protected] httpd]# tail -f /etc/httpd/logs/access_log

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/poweredby.png HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET / HTTP/1.1" 403 4961 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/apache_pb.gif HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/poweredby.png HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET / HTTP/1.1" 403 4961 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/apache_pb.gif HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/poweredby.png HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET / HTTP/1.1" 403 4961 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/apache_pb.gif HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.50 - - [28/Mar/2016:14:53:24 +0800] "GET /icons/poweredby.png HTTP/1.1" 304 - "http://10.0.0.10/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"

10.0.0.11:6379[1]> keys *

1) "system-messages"

10.0.0.11:6379[1]>

10.0.0.11:6379[1]> LLEN system-messages

(integer) 18

10.0.0.11:6379[1]>

10.0.0.11:6379[1]> LINDEX system-messages -1

"{\"message\":\"10.0.0.50 - - [28/Mar/2016:15:09:08 +0800] \\\"GET /icons/poweredby.png HTTP/1.1\\\" 304 - \\\"http://10.0.0.10/\\\" \\\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36\\\"\",\"@version\":\"1\",\"@timestamp\":\"2016-03-28T07:09:09.469Z\",\"path\":\"/etc/httpd/logs/access_log\",\"host\":\"node01\"}"

10.0.0.11:6379[1]> LINDEX system-messages -2

"{\"message\":\"10.0.0.50 - - [28/Mar/2016:15:09:08 +0800] \\\"GET /icons/apache_pb.gif HTTP/1.1\\\" 304 - \\\"http://10.0.0.10/\\\" \\\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36\\\"\",\"@version\":\"1\",\"@timestamp\":\"2016-03-28T07:09:09.468Z\",\"path\":\"/etc/httpd/logs/access_log\",\"host\":\"node01\"}"

10.0.0.11:6379[1]> LINDEX system-messages 1

"{\"message\":\"10.0.0.50 - - [28/Mar/2016:15:09:07 +0800] \\\"GET /icons/apache_pb.gif HTTP/1.1\\\" 304 - \\\"http://10.0.0.10/\\\" \\\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36\\\"\",\"@version\":\"1\",\"@timestamp\":\"2016-03-28T07:09:07.435Z\",\"path\":\"/etc/httpd/logs/access_log\",\"host\":\"node01\"}"

10.0.0.11:6379[1]>

The access-log entries generated by the browser are now stored in Redis; that completes the first hop (data source → Redis). The next step is to move the logs from Redis into Elasticsearch with Logstash.

Start a Logstash instance on node02 for this, with the following /etc/logstash.conf:

[[email protected] conf]# ps -ef|grep logstash

root      43072  42030  0 15:46 pts/1    00:00:00 grep logstash

[[email protected] conf]# cat /etc/logstash.conf

input {

redis {

data_type => "list"

key => "system-messages"

host => "10.0.0.11"

port => "6379"

db => "1"

}

}

output {

elasticsearch {

hosts => "10.0.0.10"

#protocol => "http"

index => "redis-messages-%{+YYYY.MM.dd}"

}

}

[[email protected] conf]#

Explanation: the input side reads from Redis, with the key, host, and port configured as follows:

redis {

data_type => "list"

key => "system-messages"

host => "10.0.0.11"

port => "6379"

db => "1"

}

The output writes to Elasticsearch, configured as:

elasticsearch {

hosts => "10.0.0.10"

index => "system-redis-messages-%{+YYYY.MM.dd}"

}
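The %{+YYYY.MM.dd} placeholder in the index name is expanded from each event's @timestamp, so events are grouped into one index per day, which makes retention (dropping old indices) cheap. A sketch of the expansion using a hypothetical helper, not Logstash's own sprintf code:

```python
from datetime import datetime, timezone

def expand_index(pattern, ts):
    """Expand the daily-index placeholder (only the %{+YYYY.MM.dd} form used above)."""
    return pattern.replace("%{+YYYY.MM.dd}", ts.strftime("%Y.%m.%d"))

# @timestamp from the sample event stored in Redis earlier
ts = datetime(2016, 3, 28, 7, 9, 9, tzinfo=timezone.utc)
print(expand_index("redis-messages-%{+YYYY.MM.dd}", ts))  # redis-messages-2016.03.28
```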

Start Logstash to begin collecting:

[[email protected] conf]# /usr/local/logstash/bin/logstash  -f /etc/logstash.conf  &

[1] 43097

[[email protected] conf]#


[[email protected] conf]# ps -ef|grep logstash

root      43097  42030 99 15:53 pts/1    00:00:16 /usr/local/jdk/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xss2048k -Djffi.boot.library.path=/usr/local/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/logstash/heapdump.hprof -Xbootclasspath/a:/usr/local/logstash/vendor/jruby/lib/jruby.jar -classpath ::/usr/local/jdk/lib:/usr/local/jdk/jre/lib:/usr/local/jdk/lib/tools.jar -Djruby.home=/usr/local/logstash/vendor/jruby -Djruby.lib=/usr/local/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/local/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash.conf

root      43131  42030  0 15:53 pts/1    00:00:00 grep logstash

[[email protected] conf]#

Now check the Redis DB again to confirm the data has been drained (i.e., shipped to Elasticsearch):

10.0.0.11:6379[1]> LINDEX system-messages 1

"{\"message\":\"10.0.0.50 - - [28/Mar/2016:15:09:07 +0800] \\\"GET /icons/apache_pb.gif HTTP/1.1\\\" 304 - \\\"http://10.0.0.10/\\\" \\\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36\\\"\",\"@version\":\"1\",\"@timestamp\":\"2016-03-28T07:09:07.435Z\",\"path\":\"/etc/httpd/logs/access_log\",\"host\":\"node01\"}"

10.0.0.11:6379[1]> LINDEX system-messages 1

(nil)

10.0.0.11:6379[1]> keys *

(empty list or set)

10.0.0.11:6379[1]> LLEN system-messages

(integer) 0

10.0.0.11:6379[1]>

The output above shows the data has indeed moved into Elasticsearch, which we can now inspect:


Looking further into the log entries:


Collecting JSON-formatted nginx logs with Logstash and shipping them to Elasticsearch

The log_format directive that makes nginx emit JSON:

log_format logstash_json '{ "@timestamp": "$time_local", '

'"@fields": { '

'"remote_addr": "$remote_addr", '

'"remote_user": "$remote_user", '

'"body_bytes_sent": "$body_bytes_sent", '

'"request_time": "$request_time", '

'"status": "$status", '

'"request": "$request", '

'"request_method": "$request_method", '

'"http_referrer": "$http_referer", '

'"body_bytes_sent":"$body_bytes_sent", '

'"http_x_forwarded_for": "$http_x_forwarded_for", '

'"http_user_agent": "$http_user_agent" } }';

The modified nginx configuration:

[[email protected] conf]# pwd

/application/nginx/conf

[[email protected] conf]# cat nginx.conf

worker_processes  4;

events {

worker_connections  1024;

}

http {

include       mime.types;

default_type  application/octet-stream;

log_format logstash_json '{ "@timestamp": "$time_local", '

'"@fields": { '

'"remote_addr": "$remote_addr", '

'"remote_user": "$remote_user", '

'"body_bytes_sent": "$body_bytes_sent", '

'"request_time": "$request_time", '

'"status": "$status", '

'"request": "$request", '

'"request_method": "$request_method", '

'"http_referrer": "$http_referer", '

'"body_bytes_sent":"$body_bytes_sent", '

'"http_x_forwarded_for": "$http_x_forwarded_for", '

'"http_user_agent": "$http_user_agent" } }';

sendfile        on;

keepalive_timeout  65;

server {

listen       80;

server_name  localhost;

access_log  logs/access_json.log logstash_json;

location / {

root   html;

index  index.html index.htm;

}

error_page   500 502 503 504  /50x.html;

location = /50x.html {

root   html;

}

}

}

[[email protected] conf]#

Start nginx:

[[email protected] ~]# /application/nginx/sbin/nginx  -t

nginx: the configuration file /application/nginx-1.6.2/conf/nginx.conf syntax is ok

nginx: configuration file /application/nginx-1.6.2/conf/nginx.conf test is successful

[[email protected] ~]# /application/nginx/sbin/nginx

[[email protected] ~]#

Drive some traffic at the local nginx with ab:

[[email protected] ~]# ab -n100 -c 10 http://10.0.0.10/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/

Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.0.0.10 (be patient).....done

Server Software:        nginx/1.6.2

Server Hostname:        10.0.0.10

Server Port:            80

Document Path:          /

Document Length:        612 bytes

Concurrency Level:      10

Time taken for tests:   0.015 seconds

Complete requests:      100

Failed requests:        0

Write errors:           0

Total transferred:      85244 bytes

HTML transferred:       61812 bytes

Requests per second:    6578.95 [#/sec] (mean)

Time per request:       1.520 [ms] (mean)

Time per request:       0.152 [ms] (mean, across all concurrent requests)

Transfer rate:          5476.72 [Kbytes/sec] received

Connection Times (ms)

min  mean[+/-sd] median   max

Connect:        0    0   0.2      0       1

Processing:     0    1   0.6      1       3

Waiting:        0    1   0.6      1       3

Total:          1    1   0.7      1       3

Percentage of the requests served within a certain time (ms)

50%      1

66%      1

75%      2

80%      2

90%      3

95%      3

98%      3

99%      3

100%      3 (longest request)

[[email protected] ~]#

The nginx access log now contains:

[[email protected] logs]# head -10 access_json.log

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.001", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.001", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.002", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.002", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.001", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.002", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.002", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.001", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.001", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.001", "status": "200", "request": "GET / HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }

[[email protected] logs]#
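Because each line is a complete JSON object, no grok parsing is needed downstream; any JSON parser can read it directly. A quick Python check against one of the lines above (note that the format emits `body_bytes_sent` twice; JSON parsers simply keep the last value):

```python
import json

# One line copied from access_json.log above.
line = ('{ "@timestamp": "28/Mar/2016:22:26:24 +0800", "@fields": { '
        '"remote_addr": "10.0.0.10", "remote_user": "-", '
        '"body_bytes_sent": "612", "request_time": "0.001", '
        '"status": "200", "request": "GET / HTTP/1.0", '
        '"request_method": "GET", "http_referrer": "-", '
        '"body_bytes_sent":"612", '
        '"http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }')

event = json.loads(line)          # duplicate keys are tolerated: last value wins
fields = event["@fields"]

print(event["@timestamp"])        # 28/Mar/2016:22:26:24 +0800
print(fields["status"])           # 200
print(fields["request_method"])   # GET
```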

The logstash configuration file:

[[email protected] logs]# cat /etc/logstash.conf

input {
    file {
        path => "/application/nginx/logs/access_json.log"
        codec => "json"
    }
}

output {
    redis {
        data_type => "list"
        key => "nginx-access-log"
        host => "10.0.0.11"
        port => "6379"
        db => "2"
    }
}

[[email protected] logs]#
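What the shipper side does can be sketched in a few lines of Python. This is an illustrative stand-in, not Logstash itself: `ship_lines` and the in-memory `FakeRedis` class are hypothetical names for this sketch; in a real deployment a Redis client would point at 10.0.0.11:6379, db 2, and RPUSH onto the "nginx-access-log" list:

```python
import json

def ship_lines(lines, client, key="nginx-access-log"):
    """Validate each log line as JSON (mirroring codec => "json") and
    push it onto a Redis list, as the redis output with
    data_type => "list" does.  Returns the number of lines shipped."""
    shipped = 0
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue
        try:
            json.loads(raw)            # skip lines that are not valid JSON
        except ValueError:
            continue
        client.rpush(key, raw)         # list push, like the logstash redis output
        shipped += 1
    return shipped

class FakeRedis:
    """Hypothetical in-memory stand-in for a Redis client, for illustration."""
    def __init__(self):
        self.lists = {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)

client = FakeRedis()
n = ship_lines(['{"status": "200"}', 'not json', ''], client)
print(n)                                      # 1
print(len(client.lists["nginx-access-log"]))  # 1
```

The list acts as a buffer: if elasticsearch is slow or down, events pile up in redis instead of being lost.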

Restart logstash in the background:

[[email protected] logs]# /usr/local/logstash/bin/logstash  -f /etc/logstash.conf  &

[1] 20328

[[email protected] logs]#

[[email protected] logs]#

[[email protected] logs]#

[[email protected] logs]#

[[email protected] logs]# ps -ef|grep logstash

root      20328  16263 99 22:47 pts/1    00:00:43 /usr/local/jdk/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xss2048k -Djffi.boot.library.path=/usr/local/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/logstash/heapdump.hprof -Xbootclasspath/a:/usr/local/logstash/vendor/jruby/lib/jruby.jar -classpath :.:/usr/local/jdk/lib:/usr/local/jdk/jre/lib:/usr/local/jdk/lib/tools.jar -Djruby.home=/usr/local/logstash/vendor/jruby -Djruby.lib=/usr/local/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/local/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash.conf

root      20371  16263  0 22:47 pts/1    00:00:00 grep logstash

[[email protected] logs]#

Connect to redis to check whether the logs are being written:

[[email protected] logs]# redis-cli -h 10.0.0.11 -p 6379

10.0.0.11:6379> select 2

OK

10.0.0.11:6379[2]> keys *

(empty list or set)

10.0.0.11:6379[2]>

Redis is still empty because the logstash file input tails from the end of the file by default, so only new lines are shipped. Generate fresh traffic against nginx with ab:

[[email protected] ~]# ab -n1000 -c 100 http://10.0.0.10/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/

Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.0.0.10 (be patient)

Completed 100 requests

Completed 200 requests

Completed 300 requests

Completed 400 requests

Completed 500 requests

Completed 600 requests

Completed 700 requests

Completed 800 requests

Completed 900 requests

Completed 1000 requests

Finished 1000 requests

Server Software:        nginx/1.6.2

Server Hostname:        10.0.0.10

Server Port:            80

Document Path:          /

Document Length:        612 bytes

Concurrency Level:      100

Time taken for tests:   0.274 seconds

Complete requests:      1000

Failed requests:        0

Write errors:           0

Total transferred:      881136 bytes

HTML transferred:       638928 bytes

Requests per second:    3649.85 [#/sec] (mean)

Time per request:       27.398 [ms] (mean)

Time per request:       0.274 [ms] (mean, across all concurrent requests)

Transfer rate:          3140.64 [Kbytes/sec] received

Connection Times (ms)

min  mean[+/-sd] median   max

Connect:        0    8   4.3      9      15

Processing:     4   18   8.6     16      51

Waiting:        1   15   9.3     12      49

Total:         12   26   5.7     25      52

Percentage of the requests served within a certain time (ms)

50%     25

66%     27

75%     28

80%     29

90%     34

95%     38

98%     42

99%     42

100%     52 (longest request)

[[email protected] ~]#

Check again whether data has been written to redis:

[[email protected] logs]# redis-cli -h 10.0.0.11 -p 6379

10.0.0.11:6379> select 2

OK

10.0.0.11:6379[2]> keys *

(empty list or set)

10.0.0.11:6379[2]>

10.0.0.11:6379[2]> keys *

1) "nginx-access-log"

10.0.0.11:6379[2]>

10.0.0.11:6379[2]>

The data is now in redis; next, move it from redis into elasticsearch.

Write the logstash indexer configuration on node02:

[[email protected] etc]# cat logstash.conf

input {
    redis {
        data_type => "list"
        key => "nginx-access-log"
        host => "10.0.0.11"
        port => "6379"
        db => "2"
    }
}

output {
    elasticsearch {
        hosts => "10.0.0.10"
        index => "nginx-access-log-%{+YYYY.MM.dd}"
    }
}

[[email protected] etc]#
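The `%{+YYYY.MM.dd}` in the index name is Logstash's sprintf date syntax: one elasticsearch index is created per day (based on the event's @timestamp), which keeps individual indices small and makes old data easy to drop. A rough Python equivalent of the expansion, for illustration only:

```python
from datetime import datetime

def index_name(ts, prefix="nginx-access-log"):
    """Expand a Logstash-style %{+YYYY.MM.dd} index suffix for a timestamp."""
    return "%s-%s" % (prefix, ts.strftime("%Y.%m.%d"))

print(index_name(datetime(2016, 3, 28)))   # nginx-access-log-2016.03.28
```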

Check that the configuration is valid, then restart logstash:

[[email protected] etc]# /usr/local/logstash/bin/logstash -t  -f /etc/logstash.conf

Configuration OK

[[email protected] etc]#

[[email protected] etc]# /usr/local/logstash/bin/logstash  -f /etc/logstash.conf  &

[1] 44494

[[email protected] etc]#

Run another ab test:

[[email protected] ~]# ab -n 10000 -c100 http://10.0.0.10/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/

Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.0.0.10 (be patient)

Completed 1000 requests

Completed 2000 requests

Completed 3000 requests

Completed 4000 requests

Completed 5000 requests

Completed 6000 requests

Completed 7000 requests

Completed 8000 requests

Completed 9000 requests

Completed 10000 requests

Finished 10000 requests

Server Software:        nginx/1.6.2

Server Hostname:        10.0.0.10

Server Port:            80

Document Path:          /

Document Length:        612 bytes

Concurrency Level:      100

Time taken for tests:   2.117 seconds

Complete requests:      10000

Failed requests:        0

Write errors:           0

Total transferred:      8510052 bytes

HTML transferred:       6170796 bytes

Requests per second:    4722.57 [#/sec] (mean)

Time per request:       21.175 [ms] (mean)

Time per request:       0.212 [ms] (mean, across all concurrent requests)

Transfer rate:          3924.74 [Kbytes/sec] received

Connection Times (ms)

min  mean[+/-sd] median   max

Connect:        0    4   3.7      3      25

Processing:     2   17  10.8     15      75

Waiting:        1   15  10.3     12      75

Total:          5   21  10.9     18      75

Percentage of the requests served within a certain time (ms)

50%     18

66%     24

75%     27

80%     29

90%     36

95%     44

98%     49

99%     57

100%     75 (longest request)

[[email protected] ~]#
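The headline ab figures are related by simple arithmetic, which is worth knowing when reading reports: requests per second is completed requests divided by total time, and mean time per request multiplies the inverse of that by the concurrency level. Checking the run above in Python:

```python
requests = 10000
concurrency = 100
time_taken = 2.117            # seconds, from "Time taken for tests"

# Report: 4722.57 req/s; ab uses a higher-precision internal time,
# so recomputing from the rounded value lands slightly off.
rps = requests / time_taken

# Report: 21.175 ms mean time per request
per_request_ms = concurrency * time_taken / requests * 1000.0

print(round(rps, 1))
print(round(per_request_ms, 2))   # 21.17
```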

Watch redis while the test runs. The key appears as the shipper pushes lines and disappears again as the indexer on node02 drains the list:

[[email protected] ~]# redis-cli -h 10.0.0.11 -p 6379

10.0.0.11:6379> select 2

OK

10.0.0.11:6379[2]> keys *

(empty list or set)

10.0.0.11:6379[2]> keys *

1) "nginx-access-log"

10.0.0.11:6379[2]> keys *

(empty list or set)

10.0.0.11:6379[2]> keys *

1) "nginx-access-log"

10.0.0.11:6379[2]> keys *

(empty list or set)

10.0.0.11:6379[2]>

View the data in elasticsearch; drilling into a single document shows the data as key/value pairs, i.e. JSON.

Use kibana to visualize the data collected by the elasticsearch + logstash + redis pipeline.

Unpack:

[[email protected] tools]# tar xf kibana-4.1.6-linux-x64.tar.gz

[[email protected] tools]# mv kibana-4.1.6-linux-x64 /usr/local/kibana

[[email protected] tools]# cd /usr/local/kibana/

[[email protected] kibana]#

Configure:

[[email protected] config]# pwd

/usr/local/kibana/config

[[email protected] config]# ll

total 4

-rw-r--r-- 1 logstash games 2933 Mar 10 03:29 kibana.yml

[[email protected] config]#

[[email protected] config]# vim kibana.yml

Change the settings to:
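(The screenshot of the edited kibana.yml did not survive.) For Kibana 4.1 the relevant settings are typically along these lines; `elasticsearch_url` must point at the elasticsearch node, here assumed to be reachable locally on port 9200:

```yaml
# kibana.yml (Kibana 4.1) -- illustrative values, not the lost screenshot
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://localhost:9200"
```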


[[email protected] config]# cd ..

[[email protected] kibana]# pwd

/usr/local/kibana

[[email protected] kibana]# ll

total 28

drwxr-xr-x 2 logstash games 4096 Mar 28 23:24 bin

drwxr-xr-x 2 logstash games 4096 Mar 28 23:31 config

-rw-r--r-- 1 logstash games  563 Mar 10 03:29 LICENSE.txt

drwxr-xr-x 6 logstash games 4096 Mar 28 23:24 node

drwxr-xr-x 2 logstash games 4096 Mar 28 23:24 plugins

-rw-r--r-- 1 logstash games 2510 Mar 10 03:29 README.txt

drwxr-xr-x 9 logstash games 4096 Mar 10 03:29 src

[[email protected] kibana]# ./bin/kibana -h

Usage: kibana [options]

Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.

Options:

-h, --help                 output usage information

-V, --version              output the version number

-e, --elasticsearch <uri>  Elasticsearch instance

-c, --config <path>        Path to the config file

-p, --port <port>          The port to bind to

-q, --quiet                Turns off logging

-H, --host <host>          The host to bind to

-l, --log-file <path>      The file to log to

--plugins <path>           Path to scan for plugins

[[email protected] kibana]#

Run kibana in the background:

[[email protected] kibana]# nohup ./bin/kibana &

[1] 20765

[[email protected] kibana]# nohup: ignoring input and appending output to `nohup.out'

[[email protected] kibana]#

[[email protected] kibana]#

[[email protected] kibana]#

Check that it is listening:

[[email protected] kibana]# netstat -pnutl|grep 5601

tcp        0      0 0.0.0.0:5601                0.0.0.0:*                   LISTEN      20765/./bin/../node

[[email protected] kibana]#

[[email protected] kibana]#

[[email protected] kibana]# ps -ef|grep kibana

root      20765  20572  4 23:34 pts/6    00:00:02 ./bin/../node/bin/node ./bin/../src/bin/kibana.js

root      20780  20572  0 23:35 pts/6    00:00:00 grep kibana

[[email protected] kibana]#

Open http://192.168.0.100:5601/ in a browser.


Choose Create.

Switch to Discover.

Click the time picker (e.g. "Last 15 minutes"), then click "Today".

By default all fields are displayed; use "add" next to a field to show only the columns you need.

The search bar accepts queries, for example status:404 to find failed requests, or status:200 for successful ones.

Summary of the ELK Stack workflow

Raw data (log files from tomcat, Apache, PHP and other services) --> logstash ships the raw data into redis --> a second logstash instance reads redis and writes the data into elasticsearch --> kibana analyzes the elasticsearch data and presents it.

Date: 2024-10-24 15:49:56
