CentOS 6.4: Setting Up ELK (1)

A while ago I used OSSEC to collect some system logs (syslog, secure, maillog, and so on). After looking into the ELK stack, I found it fits OSSEC well and is fun to work with.

Introduction:

ELK official downloads: https://www.elastic.co/downloads

ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.

OSSEC + Redis + ELK architecture:

OSSEC: the source of events and alerts

Redis: used as a queue, to prevent data loss

Logstash: collects and parses the logs

Elasticsearch: an open-source distributed search engine that provides search and stores the final data

Kibana: the web UI for queries, statistics, and visualization

The basic workflow: the Logstash shipper monitors and filters the logs and sends the filtered content to Redis (Redis here only acts as a queue, not as storage); the Logstash indexer then pulls the logs together and hands them to the full-text search service Elasticsearch, which you can use for custom searches and for storing the logs, while Kibana provides the web interface for log analysis.

The shipper side does the log collection; here OSSEC gathers log data from various sources. The Broker sits as a buffer between the remote agent (the OSSEC server) and the central agent (Logstash), implemented here with Redis to improve performance and reliability: if the central agent fails to pull data, the events stay in Redis rather than being lost. The central agent is the Logstash indexer, which reads from the Broker and performs the analysis and processing (filters).
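To see the broker mechanism in isolation, you can drive the Redis list by hand with redis-cli (a minimal sketch: the list name logstash:redis matches the key used later in this post, the payload is a dummy value, and the replies shown assume the list starts empty):

# redis-cli rpush logstash:redis '{"message":"test event"}'
(integer) 1
# redis-cli blpop logstash:redis 1
1) "logstash:redis"
2) "{\"message\":\"test event\"}"

The shipper plays the rpush role and the indexer plays the blpop role, so events simply queue up in the list whenever the indexer is down.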

Installation:

1. ELK packages

ELK moves fast and there are many releases; if the versions you pick do not match each other, the stack may not work. There are three ways to install ELK; here I use the tar.gz packages.

logstash-1.5.2.tar.gz

elasticsearch-1.6.0.tar.gz

kibana-4.1.1-linux-x64.tar.gz

redis-3.0.6.tar.gz

2. Server IPs

OSSEC client: 192.168.153.187

OSSEC server: 192.168.153.172 (runs the OSSEC server plus Logstash; think of this host as the Logstash client, i.e. the logstash shipper. The logs from 192.168.153.187 are collected into /var/ossec/logs/alerts/alerts.log on this server.)

ELK + Redis: 192.168.153.200 (the Logstash instance here is the server side, i.e. the indexer)

3. Installation steps

(1) 192.168.153.187

Install the OSSEC client; see my earlier post for the installation.

(2) 192.168.153.172

Install the OSSEC server; see my earlier post for the installation.

Install Logstash:

# wget https://download.elastic.co/logstash/logstash/logstash-1.5.2.tar.gz

# tar -xf logstash-1.5.2.tar.gz -C /usr/local/

After installation, run the following command to test (it reads stdin and writes to stdout):

# /usr/local/logstash-1.5.2/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Logstash startup completed

Hello World!

2016-06-15T03:28:56.938Z noc.vfast.com Hello World!

With that working, the next step is to create a simple configuration file and tell Logstash to use it. For example, create a basic test file, logstash-simple.conf, in the Logstash installation directory with the following content:

# cat logstash-simple.conf

input { stdin { } }

output {

stdout { codec=> rubydebug }

}

Logstash uses the input and output sections to define where logs come from and where they go. In this example the input is stdin and the output is stdout: whatever we type, Logstash returns in a structured format. The output also uses the codec parameter (rubydebug) to specify how events are printed.

Use Logstash's -f option to read the configuration file, then run the following to test:

# echo "`date`  hello World"

Thu Jul 16 04:06:48 CST 2016 hello World

# /usr/local/logstash-1.5.2/bin/logstash -f logstash-simple.conf

Logstash startup completed

Tue Jul 14 18:07:07 EDT 2016 hello World   # this line is the output of the echo "`date` hello World" above, pasted in here as input

{

"message" => "Tue Jul 14 18:07:07 EDT 2016 helloWorld",

"@version" => "1",

"@timestamp" => "2016-06-14T22:07:28.284Z",

"host" => "noc.vfast.com"

(3) 192.168.153.200

Install Logstash:

# wget https://download.elastic.co/logstash/logstash/logstash-1.5.2.tar.gz

# tar -xf logstash-1.5.2.tar.gz -C /usr/local/
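The command below reads /usr/local/logstash-1.5.2/logstash-200.conf. A minimal sketch of what an indexer config on this host could look like (the key logstash:redis matches the Redis MONITOR output shown later; the elasticsearch settings use the Logstash 1.5 option names host and protocol; all values here are assumptions):

input {
  redis {
    host => "127.0.0.1"           # Redis runs on this same host (192.168.153.200)
    port => 6379
    data_type => "list"           # consume the events the shipper pushes with RPUSH
    key => "logstash:redis"
  }
}
output {
  elasticsearch {
    host => "192.168.153.200"     # Logstash 1.5.x uses "host" plus "protocol"
    protocol => "http"
  }
  stdout { codec => rubydebug }   # also print events, which is what the output below shows
}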

# /usr/local/logstash-1.5.2/bin/logstash -f /usr/local/logstash-1.5.2/logstash-200.conf

Logstash startup completed

{

"@timestamp" => "2016-05-19T02:03:22.746Z",

"@version" => "1",

"ossec_group" => "pam,syslog,",

"reporting_ip" => "192.168.153.187",

"reporting_source" => "/var/log/secure",

"rule_number" => "5502",

"severity" => 3,

"signature" => "Login session closed.",

"@message" => "May 19 10:03:57 localhost sshd[4623]: pam_unix(sshd:session): session closed for user root",

"@fields.hostname" => "agent15",

"@fields.product" => "ossec",

"raw_message" => "** Alert 1463623401.3764: - pam,syslog,\n2016 May 19 10:03:21 (agent15) 192.168.153.187-

>/var/log/secure\nRule: 5502 (level 3) -> ‘Login session closed.‘\nMay 19 10:03:57 localhost sshd[4623]: pam_unix

(sshd:session): session closed for user root",

"ossec_server" => "ossec-server"

}

{

"@timestamp" => "2016-05-19T02:03:58.846Z",

"@version" => "1",

"ossec_group" => "syslog,sshd,authentication_success,",

"reporting_source" => "192.168.153.172",

"rule_number" => "5715",

"severity" => 3,

"signature" => "SSHD authentication success.",

"src_ip" => "192.168.153.1",

"acct" => "root",

"@message" => "May 19 10:03:57 ossec-server sshd[22805]: Accepted password for root from 192.168.153.1 port 31490

ssh2",

"@fields.hostname" => "ossec-server",

"@fields.product" => "ossec",

"raw_message" => "** Alert 1463623437.4008: - syslog,sshd,authentication_success,\n2016 May 19 10:03:57 ossec-server-

>192.168.153.172\nRule: 5715 (level 3) -> ‘SSHD authentication success.‘\nSrc IP: 192.168.153.1\nUser: root\nMay 19 10:03:57

ossec-server sshd[22805]: Accepted password for root from 192.168.153.1 port 31490 ssh2",

"ossec_server" => "ossec-server"

(4) Install Elasticsearch

Elasticsearch depends on Java, so install Java first:

# yum install java-1.8.0-openjdk

# java -version

openjdk version "1.8.0_91"

# tar -xf elasticsearch-1.6.0.tar.gz -C /usr/local/

Start Elasticsearch in the background:

# /usr/local/elasticsearch-1.6.0/bin/elasticsearch -d

Hit port 9200 on 192.168.153.200; a status of 200 means Elasticsearch started successfully:

# curl http://192.168.153.200:9200

{

"status" : 200,

"name" : "elasticsearch-node01",

"cluster_name" : "elasticsearch",

"version" : {

"number" : "1.6.0",

"build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",

"build_timestamp" : "2015-06-09T13:36:34Z",

"build_snapshot" : false,

"lucene_version" : "4.10.4"

},

"tagline" : "You Know, for Search"

}
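Another quick sanity check is the cluster health endpoint, part of Elasticsearch's standard REST API (on a single node a yellow status is normal, since replica shards cannot be allocated):

# curl 'http://192.168.153.200:9200/_cluster/health?pretty'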

(5) Install Redis 3.0.6

# tar zxvf redis-3.0.6.tar.gz

# cd redis-3.0.6

# make PREFIX=/usr/local/redis install

// Note: if you do not pass PREFIX, make will leave the compiled binaries inside the extracted source directory by default

# ln -sv /usr/local/redis/bin/redis-server /usr/bin/redis-server

# ln -sv /usr/local/redis/bin/redis-cli /usr/bin/redis-cli

# cp tmp/redis-3.0.6/utils/redis_init_script /etc/rc.d/init.d/redis

# vi /etc/rc.d/init.d/redis

// Insert the chkconfig line as the second line and adjust EXEC and CLIEXEC; the first few lines of my file look like this:

#!/bin/sh

# chkconfig: 2345 90 10

# Simple Redis init.d script conceived to work on Linux systems

# as it does use of the /proc filesystem.

REDISPORT=6379

EXEC=/usr/local/redis/bin/redis-server

CLIEXEC=/usr/local/redis/bin/redis-cli

PIDFILE=/var/run/redis_${REDISPORT}.pid

CONF="/etc/redis/${REDISPORT}.conf"

// Save and exit after making the changes above.

# mkdir /etc/redis/

// this directory holds our configuration file

# mkdir /var/rdb/

// this directory holds the Redis database (RDB) files

The Redis source tarball ships with a redis.conf, but it is only a template; you still need to settle the details for your own setup. The configuration I use goes into /etc/redis/:

# vi /etc/redis/redis.conf

// Note: the init script above reads CONF="/etc/redis/${REDISPORT}.conf", i.e. /etc/redis/6379.conf, so either name the file 6379.conf or change CONF to match.
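A minimal sketch of such a configuration, kept consistent with the init script and directories above (every value here is an assumption, not the author's actual file):

daemonize yes                       # run in the background so the init script can manage it
port 6379
pidfile /var/run/redis_6379.pid     # must match PIDFILE in /etc/init.d/redis
logfile /var/log/redis_6379.log
dir /var/rdb                        # RDB snapshots go into the directory created above
save 900 1                          # snapshot rules; these match the "changes in ... seconds"
save 300 10                         # lines visible in the server log below
save 60 10000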

# /etc/init.d/redis start

Starting Redis server...

1447:M 18 May 17:03:50.342 * Increased maximum number of open files to 10032 (it was originally set to 1024).

[Redis ASCII-art startup banner: Redis 3.0.6 (00000000/0) 64 bit, running in standalone mode, port 6379, PID: 1447]

1447:M 18 May 17:03:50.345 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

1447:M 18 May 17:03:50.346 # Server started, Redis version 3.0.6

1447:M 18 May 17:03:50.346 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

1447:M 18 May 17:03:50.346 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

1447:M 18 May 17:03:50.357 * DB loaded from disk: 0.011 seconds

1447:M 18 May 17:03:50.357 * The server is now ready to accept connections on port 6379

1447:M 18 May 17:21:03.197 * 1 changes in 900 seconds. Saving...

1447:M 18 May 17:21:03.198 * Background saving started by pid 1466

1466:C 18 May 17:21:03.202 * DB saved on disk

1466:C 18 May 17:21:03.202 * RDB: 0 MB of memory used by copy-on-write

1447:M 18 May 17:21:03.299 * Background saving terminated with success

1447:M 18 May 17:26:04.090 * 10 changes in 300 seconds. Saving...

1447:M 18 May 17:26:04.090 * Background saving started by pid 1468

1468:C 18 May 17:26:04.104 * DB saved on disk

# redis-cli

127.0.0.1:6379> MONITOR

OK

1463623574.234636 [0 127.0.0.1:48009] "blpop" "logstash:redis" "0" "1"

1463623575.258853 [0 127.0.0.1:48009] "blpop" "logstash:redis" "0" "1"

1463623575.453969 [0 192.168.153.172:36662] "rpush" "logstash:redis" "{\"@timestamp\":\"2016-05-19T02:03:58.848Z\",\"@version\":\"1\",\"ossec_group\":\"pam,syslog,authentication_success,\",\"reporting_source\":\"192.168.153.172\",\"rule_number\":\"5501\",\"severity\":3,\"signature\":\"Login session opened.\",\"@message\":\"May 19 10:03:57 ossec-server sshd[22805]: pam_unix(sshd:session): session opened for user root by (uid=0)\",\"@fields.hostname\":\"ossec-server\",\"@fields.product\":\"ossec\",\"raw_message\":\"** Alert 1463623437.4316: - pam,syslog,authentication_success,\\n2016 May 19 10:03:57 ossec-server->192.168.153.172\\nRule: 5501 (level 3) -> 'Login session opened.'\\nMay 19 10:03:57 ossec-server sshd[22805]: pam_unix(sshd:session): session opened for user root by (uid=0)\",\"ossec_server\":\"ossec-server\"}"

1463623575.456066 [0 127.0.0.1:48009] "blpop" "logstash:redis" "0" "1"

1463623576.477031 [0 127.0.0.1:48009] "blpop" "logstash:redis" "0" "1"

1463623601.018922 [0 127.0.0.1:48009] "blpop" "logstash:redis" "0" "1"

1463623601.534860 [0 192.168.153.172:36662] "rpush" "logstash:redis" "{\"@timestamp\":\"2016-05-19T02:05:17.007Z\",\"@version\":\"1\",\"ossec_group\":\"pam,syslog,\",\"reporting_source\":\"192.168.153.172\",\"rule_number\":\"5502\",\"severity\":3,\"signature\":\"Login session closed.\",\"@message\":\"May 19 10:05:16 ossec-server sshd[22805]: pam_unix(sshd:session): session closed for user root\",\"@fields.hostname\":\"ossec-server\",\"@fields.product\":\"ossec\",\"raw_message\":\"** Alert 1463623516.4585: - pam,syslog,\\n2016 May 19 10:05:16 ossec-server->192.168.153.172\\nRule: 5502 (level 3) -> 'Login session closed.'\\nMay 19 10:05:16 ossec-server sshd[22805]: pam_unix(sshd:session): session closed for user root\",\"ossec_server\":\"ossec-server\"}"

1463623601.542622 [0 127.0.0.1:48009] "blpop" "logstash:redis" "0" "1"

1463623601.562655 [0 192.168.153.172:36662] "rpush" "logstash:redis" "{\"@timestamp\":\"2016-05-19T02:05:43.092Z\",\"@version\":\"1\",\"ossec_group\":\"syslog,sshd,authentication_success,\",\"reporting_ip\":\"192.168.153.187\",\"reporting_source\":\"/var/log/secure\",\"rule_number\":\"5715\",\"severity\":3,\"signature\":\"SSHD authentication success.\",\"src_ip\":\"192.168.153.1\",\"acct\":\"root\",\"@message\":\"May 19 10:06:18 localhost sshd[4834]: Accepted password for root from 192.168.153.1 port 31537 ssh2\",\"@fields.hostname\":\"agent15\",\"@fields.product\":\"ossec\",\"raw_message\":\"** Alert 1463623542.4820: - syslog,sshd,authentication_success,\\n2016 May 19 10:05:42 (agent15) 192.168.153.187->/var/log/secure\\nRule: 5715 (level 3) -> 'SSHD authentication success.'\\nSrc IP: 192.168.153.1\\nUser: root\\nMay 19 10:06:18 localhost sshd[4834]: Accepted password for root from 192.168.153.1 port 31537 ssh2\",\"ossec_server\":\"ossec-server\"}"

Set a password for Redis access:

vi /etc/redis/redis.conf   # the stock template sits in the root of the source tree; edit the config the server actually loads

Uncomment the "# requirepass foobared" line and change foobared to your own password; here I set it to

requirepass xxxxxxxx

Restart the service:  # /etc/init.d/redis restart

Test the connection: ./redis-cli -h 192.168.153.200 -p 6379

Any command now returns (error) NOAUTH Authentication required. — that is expected.

Enter auth xxxxxxxx   # the password you just set
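Once requirepass is set, the Logstash redis input and output on both hosts must supply the same password; both plugins accept a password option. A sketch, reusing the xxxxxxxx placeholder:

redis {
  host => "192.168.153.200"
  port => 6379
  data_type => "list"
  key => "logstash:redis"
  password => "xxxxxxxx"     # same value as requirepass in redis.conf
}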

# /usr/local/logstash-1.5.2/bin/logstash -f /usr/local/logstash-1.5.2/logstash-ossec.conf

{

"@timestamp" => "2016-05-19T02:05:43.103Z",

"@version" => "1",

"ossec_group" => "pam,syslog,authentication_success,",

"reporting_ip" => "192.168.153.187",

"reporting_source" => "/var/log/secure",

"rule_number" => "5501",

"severity" => 3,

"signature" => "Login session opened.",

"@message" => "May 19 10:06:18 localhost sshd[4834]: pam_unix(sshd:session): session opened for user root by

(uid=0)",

"@fields.hostname" => "agent15",

"@fields.product" => "ossec",

"raw_message" => "** Alert 1463623542.5137: - pam,syslog,authentication_success,\n2016 May 19 10:05:42 (agent15)

192.168.153.187->/var/log/secure\nRule: 5501 (level 3) -> ‘Login session opened.‘\nMay 19 10:06:18 localhost sshd[4834]:

pam_unix(sshd:session): session opened for user root by (uid=0)",

"ossec_server" => "ossec-server",

"type" => "ossec"

(6) Install Kibana

# tar -xf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/

# nohup /usr/local/kibana-4.1.1-linux-x64/bin/kibana &
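Kibana 4.1 reads its settings from config/kibana.yml; since Elasticsearch runs on the same host the defaults usually work, but the relevant lines look roughly like this (path and values are assumptions for this setup):

# /usr/local/kibana-4.1.1-linux-x64/config/kibana.yml
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://192.168.153.200:9200"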

(7) Access Kibana

http://192.168.153.200:5601

ELK installation reference:

http://baidu.blog.51cto.com/71938/1676798
