ELK Configuration Summary

After spending nearly half a month building and configuring an ELK environment, I wrote up this personal work summary to share with everyone.

1. Command Summary

1.1 Check the ES service ports

# netstat -nlpt | grep -E "9200|9300"

1.2 Install and remove ES plugins

# ./bin/plugin install file:///home/apps/license-2.3.3.zip

# ./bin/plugin install file:///home/apps/marvel-agent-2.3.3.zip

Remove ES plugins:

/usr/local/elasticsearch # ./bin/plugin remove marvel-agent

/usr/local/elasticsearch # ./bin/plugin remove license

1.3 Install and remove the Kibana plugin

/usr/local/kibana # ./bin/kibana plugin --install marvel --url file:///home/apps/marvel-2.3.3.tar.gz

Remove the plugin:

./bin/kibana plugin --remove marvel

1.4 Check a Logstash configuration file for errors

/usr/local/logstash/etc # ../bin/logstash -f logstash.conf --configtest --verbose

(If it prints "Configuration OK", the file has no problems.)

1.5 Start Logstash

/usr/local/logstash/etc # ../bin/logstash -f logstash.conf

1.6 Verify the ES service

# curl -XGET ES-NODE-IP:9200

1.7 Check ES cluster health

# curl -XGET ES-NODE-IP:9200/_cluster/health?pretty=true
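The health endpoint returns a JSON document whose "status" field (green / yellow / red) is usually all a monitoring script needs. A minimal sketch of extracting it with sed (no jq dependency); the JSON below is a hard-coded sample response, not captured from a live cluster — on a real node you would instead set health="$(curl -s ES-NODE-IP:9200/_cluster/health)":

```shell
# Sample _cluster/health response (assumed shape for illustration).
health='{"cluster_name":"es_cluster","status":"green","number_of_nodes":3}'

# Pull out the value of the "status" field.
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"

# A red cluster means some primary shards are unassigned.
if [ "$status" = "red" ]; then
  echo "cluster is red, check shard allocation" >&2
fi
```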

1.8 Create and delete an ES index (nginx-logs is the index name)

curl -XPUT http://ES-NODE-IP:9200/nginx-logs/

Delete an ES index:

curl -XDELETE http://ES-NODE-IP:9200/nginx-logs/

1.9 Verify the head plugin

http://ES-NODE-IP:9200/_plugin/head/

1.10 Create an elasticsearch user account

# groupadd esuser

# useradd -d /home/esuser -m esuser

# passwd esuser

1.11 Check that the Kafka service is running

# jps

9536 Main

15200 Jps

14647 Kafka

8760 Elasticsearch

21177 -- process information unavailable

14316 QuorumPeerMain

5791 QuorumPeerMain

1.12 Check Kafka port status

# netstat -nlpt | grep -E "2181|3888"

tcp       0      0 :::2181                 :::*              LISTEN      14316/java

tcp       0      0 192.168.1.105:3888     :::*              LISTEN      5791/java

1.13 Create, delete, and list topics

Create a topic:

/usr/local/kafka # bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Created topic "test".

Delete a topic:

/usr/local/kafka # ./bin/kafka-topics.sh --delete --zookeeper 182.180.50.211:2181 --topic nginx-messages

List all topics:

/usr/local/kafka # bin/kafka-topics.sh --list --zookeeper localhost:2181

test

Show detailed topic information:

/usr/local/kafka # bin/kafka-topics.sh --describe --zookeeper localhost:2181

1.14 Create a Kafka producer and consumer

Create a producer:

/usr/local/kafka # bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

This is a message

Create a consumer:

/usr/local/kafka # bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

This is a message

If the consumer above receives the message from the producer, the Kafka-on-ZooKeeper environment is configured correctly.

1.15 Test data transfer from a log file

# cp /var/log/messages /home

# >/var/log/messages

# echo "hello kibana aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">> /var/log/messages

1.16 Reload the Nginx configuration file

# ./nginx -s reload

2. Configuration Summary

2.1 Java environment configuration

# vi /etc/profile

export PATH=$PATH:/soft_ins/mysql/bin

JAVA_HOME=/usr/local/jdk1.8.0_101

PATH=$JAVA_HOME/bin:$PATH

CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar

export PATH JAVA_HOME CLASSPATH

# source /etc/profile

# java -version

java version "1.8.0_101"

Java(TM) SE Runtime Environment (build 1.8.0_101-b13)

Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

2.2 ES configuration

# vi config/elasticsearch.yml

cluster.name: es_cluster

node.name: node3

path.data: /usr/local/elasticsearch/data

path.logs: /usr/local/elasticsearch/logs

network.host: 192.168.1.103

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.1.103","192.168.1.104","192.168.1.105"]

Note:

discovery.zen.ping.unicast.hosts lists the members of the ES cluster; if you run only a single ES node, this setting is unnecessary.

2.3 Kibana configuration

# vi /usr/local/kibana/config/kibana.yml

server.port: 5601

server.host: "192.168.1.103"

elasticsearch.url: "http://192.168.1.103:9200"

Note:

The Elasticsearch URL setting tells Kibana where to reach the ES service; with a single ES node, simply point it at that node's address.

2.4 Kafka and ZooKeeper combined configuration (same node)

A single Kafka & ZooKeeper node:

/usr/local/kafka # vi config/zookeeper.properties

dataDir=/usr/local/kafka/tmp/zookeeper

/usr/local/kafka # vi config/server.properties

log.dirs=/usr/local/kafka/tmp/kafka-logs

ZooKeeper cluster configuration:

# vi config/zookeeper.properties

dataDir=/usr/local/kafka/tmp/zookeeper

initLimit=5

syncLimit=2

server.2=192.168.1.101:2888:3888

server.3=192.168.1.102:2888:3888

server.4=192.168.1.103:2888:3888

Kafka cluster:

/usr/local/kafka # vi config/server.properties

broker.id=2

port=9092

host.name=192.168.1.101

log.dirs=/usr/local/kafka/tmp/kafka-logs

num.partitions=16

zookeeper.connect=192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181

Note that on the other two nodes broker.id is 3 and 4 respectively, and host.name should likewise match the actual host.

Start the ZooKeeper service:

/usr/local/kafka # ./bin/zookeeper-server-start.sh config/zookeeper.properties

Start the Kafka service:

/usr/local/kafka # ./bin/kafka-server-start.sh config/server.properties

2.5 Kafka and ZooKeeper separate configuration

ZooKeeper configuration:

Generate the ZooKeeper configuration file:

# cd zookeeper/conf

# cp zoo_sample.cfg zoo.cfg

Edit the configuration file:

# vi zoo.cfg

dataDir=/usr/local/zookeeper/tmp/zookeeper

server.1=192.168.1.101:2888:3888

server.2=192.168.1.102:2888:3888

server.3=192.168.1.103:2888:3888

# cd ..

# mkdir -p tmp/zookeeper

# echo "1" >tmp/zookeeper/myid

Configure ZooKeeper on node2 and node3

Configure node2 and node3 the same way as node1; note that the parameter below differs across the three nodes.

Node2:

# echo "2" > tmp/zookeeper/myid

Node3:

# echo "3" > tmp/zookeeper/myid

Everything else is identical.

Start the service on each of the three nodes in turn:

# ./bin/zkServer.sh start conf/zoo.cfg

Kafka configuration:

Configure Kafka on node1:

# cd ../kafka

# vi config/server.properties

broker.id=0

port=9092

host.name=x-shcs-creditcard-v01

log.dirs=/usr/local/kafka/tmp/kafka-logs

num.partitions=2

zookeeper.connect=192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181

Configure Kafka on node2 and node3

Configure them the same way as node1; note that the parameters below differ across the three nodes.

Node2:

broker.id=1

host.name=node2

Node3:

broker.id=2

host.name=node3

Note:

host.name is the node's hostname.

Start Kafka on each of the three nodes in turn:

# ./bin/kafka-server-start.sh config/server.properties

3. Problems Encountered

3.1 Error installing the marvel plugin

node3:/usr/local/elasticsearch# ./bin/plugin install file:///home/apps/license-2.3.3.zip

-> Installing from file:/home/apps/license-2.3.3.zip...

Trying file:/home/apps/license-2.3.3.zip...

Downloading ...DONE

Verifying file:/home/apps/license-2.3.3.zip checksums if available ...

NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)

ERROR: Plugin [license] is incompatible with Elasticsearch [2.1.1]. Was designed for version [2.3.3]

The plugin version and the ES version were incompatible; once both were on matching 2.3.3 versions, the installation went through without problems.

3.2 Error starting the ES service

/usr/local/elasticsearch>./bin/elasticsearch

log4j:ERROR setFile(null,true) call failed.

java.io.FileNotFoundException: /usr/local/elasticsearch/logs/es_cluster.log (Permission denied)

This is a file-permission problem: the ES service must be started as a non-root user, and that user had no permission to write /usr/local/elasticsearch/logs/es_cluster.log.

Changing the ownership of the data and log directories fixed it:

# chown -R esuser:esuser /usr/local/elasticsearch/data/

# chown -R esuser:esuser /usr/local/elasticsearch/logs

3.3 Kibana version incompatibility

# ./bin/kibana

{"name":"Kibana","hostname":"node3","pid":20969,"level":50,"err":{"message":"unknownerror","name":"Error","stack":"Error:unknown error\n    at respond(/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:237:15)\n    at checkRespForFailure(/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n    at HttpConnector.<anonymous>(/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n    at IncomingMessage.bound(/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n    at IncomingMessage.emit(events.js:117:20)\n    at_stream_readable.js:944:16\n    atprocess._tickCallback (node.js:442:13)"},"msg":"","time":"2016-08-30T07:07:57.923Z","v":0}

Upgrading to a newer Kibana version resolved the problem.

3.4 Error reloading the Nginx configuration file

/usr/local/nginx/sbin # ./nginx -s reload

nginx: [error] open() "/usr/local/nginx/logs/nginx.pid" failed (2: No such file or directory)

/usr/local/nginx/sbin # ls ../logs/

access.log error.log

/usr/local/nginx/sbin # ./nginx -c /usr/local/nginx/conf/nginx.conf

/usr/local/nginx/sbin # ls ../logs/

access.log error.log   nginx.pid

Starting nginx explicitly with -c recreates nginx.pid; once the pid file exists, ./nginx -s reload works again.

3.5 Error starting Kafka: one broker failed to start, reporting "Failed to acquire lock on file .lock in /usr/local/kafka/tmp/kafka-logs."

Stop the Kafka service, find which node still has a leftover Kafka process holding the lock, kill that process, and start Kafka again; the broker then comes up without problems.

3.6 Error deleting a Kafka topic; the topic cannot be deleted (still unresolved)

/usr/local/kafka # ./bin/kafka-topics.sh --delete --zookeeper 192.168.1.101:2181 --topic nginx-messages

Topic nginx-messages is marked for deletion.

Note: This will have no impact if delete.topic.enable is not set to true

Following suggestions found online, I edited the broker configuration.

The configuration file lives in the kafka/config directory:

# vi server.properties

delete.topic.enable=true

But even after this change the topic still could not be deleted.
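A workaround often suggested for topics stuck "marked for deletion" (an assumption on my part, not something verified on this cluster) is to remove the topic's metadata znodes by hand with zkCli.sh; since this bypasses Kafka entirely, stop the brokers first, also delete the topic's directories under log.dirs, and try it on a test environment before production. The sketch below only prints the rmr commands to paste into a zkCli.sh session:

```shell
# Print zkCli.sh commands that would drop the stuck topic's metadata.
# WARNING (assumption): run these only with the brokers stopped, e.g. via
#   ./bin/zkCli.sh -server 192.168.1.101:2181
TOPIC=nginx-messages
for znode in "/admin/delete_topics/$TOPIC" "/brokers/topics/$TOPIC" "/config/topics/$TOPIC"; do
  echo "rmr $znode"
done
```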

4. Logstash Configuration Files

4.1 Configuration 1

input {

  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog-ng"
    # Wildcards work, here :)
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
}

output {
  stdout { }
  elasticsearch {
    hosts => ["192.168.1.101:9200","192.168.1.102:9200","192.168.1.103:9200"]
  }
}

Architecture of configuration 1: the machines whose logs are collected run Logstash locally and ship the logs directly to the ES cluster.

4.2 Configuration 2

input {
  file {
    type => "system-message"
    path => "/var/log/messages"
    start_position => "beginning"
  }
}

output {
  #stdout { codec => rubydebug }
  kafka {
    bootstrap_servers => "192.168.1.103:9092"
    topic_id => "system-messages"
    compression_type => "snappy"
  }
}

Architecture of configuration 2: the machine whose logs are collected ships them via its local Logstash to a single Kafka node.

4.3 Configuration 3

input {
  kafka {
    zk_connect => "192.168.1.103:2181"
    topic_id => "System-Messages"
    codec => plain
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}

output {
  elasticsearch {
    hosts => "192.168.1.103:9200"
    index => "test-System-Messages-%{+YYYY-MM}"
  }
}

Architecture of configuration 3: logs already collected into Kafka are pulled by Logstash and forwarded to an ES node.

4.4 Configuration 4

# vi /usr/local/logstash/etc/logstash_shipper.conf

input {
  file {
    type => "system-message"
    path => "/var/log/messages"
    start_position => "beginning"
  }
}

output {
  stdout { codec => rubydebug }
  kafka {
    bootstrap_servers => "192.168.1.101:9092,192.168.1.102:9092,192.168.1.103:9092"
    topic_id => "messages"
    compression_type => "snappy"
  }
}

The Logstash consumer (indexer) side:

/usr/local/logstash # vi etc/logstash_indexer.conf

input {
  kafka {
    zk_connect => "192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181"
    topic_id => "messages"
    codec => plain
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}

output {
  elasticsearch {
    hosts => "192.168.1.105:9200"
    index => "test-system-messages-%{+YYYY-MM}"
  }
}

Architecture of configuration 4: the machines whose logs are collected ship them via their local Logstash to the Kafka cluster, and the Kafka cluster then passes the logs on to an ES node via the indexer Logstash.
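Because the indexer writes to a monthly index (the %{+YYYY-MM} suffix), checking end-to-end delivery means querying the index for the current month. A small sketch that computes the concrete index name and prints the curl query to run (the 192.168.1.105:9200 address comes from the output block above):

```shell
# Resolve test-system-messages-%{+YYYY-MM} to this month's index name.
INDEX="test-system-messages-$(date +%Y-%m)"

# Print the verification query; run it against the ES node by hand.
echo "curl -XGET 'http://192.168.1.105:9200/$INDEX/_search?q=hello&pretty'"
```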

Date: 2024-08-05 19:35:05
