Introduction to the ELK Log Analysis System
Log servers
- Improve security
- Centralize log storage
- Drawback
- Analyzing the logs is difficult
The ELK log analysis stack
- Elasticsearch
- Logstash
- Kibana
Log processing steps
- Centralize log management
- Normalize the logs (Logstash) and ship them to Elasticsearch
- Index and store the normalized data (Elasticsearch)
- Present the data in a front end (Kibana)
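The steps above map directly onto a Logstash pipeline definition. A minimal sketch (the file path and the Elasticsearch address are placeholders, not values from this lab):

```conf
# hypothetical /etc/logstash/conf.d/pipeline.conf
input {
  file { path => "/var/log/messages" }            # step 1: collect logs centrally
}
filter {
  # step 2: normalize/format events here (grok, date, mutate, ...)
}
output {
  elasticsearch { hosts => ["192.168.80.128:9200"] }  # step 3: index and store
  # step 4: Kibana then visualizes the indexed data
}
```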
Introduction to Elasticsearch
Elasticsearch overview
- Provides a distributed, multi-user full-text search engine
Elasticsearch concepts
- Near real-time (NRT)
- Cluster
- Node
- Index
- Index (database) -> type (table) -> document (record)
- Shards and replicas
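As a sketch of how shards and replicas are declared, an index can be created with explicit settings through the REST API (the index name is illustrative; the values shown are the Elasticsearch 5.x defaults):

```
PUT /index-demo
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}
```

`number_of_shards` is fixed when the index is created, while `number_of_replicas` can be changed afterwards.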
Introduction to Logstash
Logstash overview
- A powerful data processing tool that handles data transport, format processing, and formatted output
- Covers data input, data processing (filtering, rewriting, etc.), and data output
Main Logstash components
- Shipper
- Indexer
- Broker
- Search and Storage
- Web Interface
Introduction to Kibana
Kibana overview
- An open-source analytics and visualization platform for Elasticsearch
- Search and browse data stored in Elasticsearch indices
- Perform advanced data analysis and presentation through a variety of charts
Main Kibana features
- Seamless integration with Elasticsearch
- Data aggregation and complex data analysis
- Benefits more team members
- Flexible interface, easy sharing
- Simple configuration, visualization of multiple data sources
- Simple data export
Deploying the ELK log analysis system
Lab environment
- node1 server IP: 192.168.80.128
- node2 server IP: 192.168.80.129
- apache server IP: 192.168.80.130
Install Elasticsearch on node1 and node2
[root@node1 ~]# vim /etc/hosts    //configure name resolution
192.168.80.128 node1
192.168.80.129 node2
[root@node1 ~]# java -version    //check whether Java is installed
[root@node1 ~]# mount.cifs //192.168.80.2/LNMP-C7 /mnt/
Password for root@//192.168.80.2/LNMP-C7:
[root@node1 mnt]# cd /mnt/elk/
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm    //install
[root@node1 elk]# systemctl daemon-reload    //reload systemd unit files
[root@node1 elk]# systemctl enable elasticsearch.service    //start on boot
[root@node1 elk]# cd /etc/elasticsearch/
[root@node1 elasticsearch]# cp elasticsearch.yml elasticsearch.yml.bak    //back up
[root@node1 elasticsearch]# vim elasticsearch.yml    //edit the configuration file
cluster.name: my-elk-cluster    //cluster name
node.name: node1    //node name; node2 on the second node
path.data: /data/elk_data    //data directory
path.logs: /var/log/elasticsearch/    //log directory
bootstrap.memory_lock: false    //do not lock memory at startup
network.host: 0.0.0.0    //bind address for the service; all addresses
http.port: 9200    //listen on port 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]    //cluster discovery via unicast
[root@node1 elasticsearch]# mkdir -p /data/elk_data    //create the data directory
[root@node1 elasticsearch]# chown elasticsearch.elasticsearch /data/elk_data/    //grant ownership
[root@node1 elasticsearch]# systemctl start elasticsearch.service    //start the service
[root@node1 elasticsearch]# netstat -ntap | grep 9200    //verify it is listening
tcp6 0 0 :::9200 :::* LISTEN 2166/java
Install the Node.js build dependencies on node1 and node2
[root@node1 elasticsearch]# yum install gcc gcc-c++ make -y    //install build tools
[root@node1 elasticsearch]# cd /mnt/elk/
[root@node1 elk]# tar zxvf node-v8.2.1.tar.gz -C /opt/    //unpack
[root@node1 elk]# cd /opt/node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure    //configure
[root@node1 node-v8.2.1]# make && make install    //compile and install
Install the PhantomJS front-end framework on node1 and node2
[root@node1 elk]# tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/    //unpack into /usr/local/src
[root@node1 elk]# cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/    //put it on the PATH
Install the elasticsearch-head visualization tool on node1 and node2
[root@node1 bin]# cd /mnt/elk/
[root@node1 elk]# tar zxvf elasticsearch-head.tar.gz -C /usr/local/src/    //unpack
[root@node1 elk]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm install    //install
Modify the configuration file
[root@node1 elasticsearch-head]# vim /etc/elasticsearch/elasticsearch.yml    //append at the end
http.cors.enabled: true    //enable cross-origin access; default is false
http.cors.allow-origin: "*"    //domains allowed for cross-origin access
[root@node1 elasticsearch-head]# systemctl restart elasticsearch.service    //restart
[root@node1 elasticsearch-head]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm run start &    //run the visualization service in the background
[1] 82515
[root@node1 elasticsearch-head]# netstat -ntap | grep 9100
tcp 0 0 0.0.0.0:9100 0.0.0.0:* LISTEN 82525/grunt
[root@node1 elasticsearch-head]# netstat -ntap | grep 9200
tcp6 0 0 :::9200 :::* LISTEN 82981/java
Check the cluster health status in a browser
Create an index on node1
Create the index data
[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
View it in the browser
Install Logstash on the Apache server and connect it to Elasticsearch
[root@apache ~]# yum install httpd -y    //install the service
[root@apache ~]# systemctl start httpd.service    //start the service
[root@apache ~]# java -version
[root@apache ~]# mount.cifs //192.168.100.8/LNMP-C7 /mnt/    //mount the share
Password for root@//192.168.100.8/LNMP-C7:
[root@apache ~]# cd /mnt/elk/
[root@apache elk]# rpm -ivh logstash-5.5.1.rpm    //install logstash
[root@apache elk]# systemctl start logstash.service
[root@apache elk]# systemctl enable logstash.service    //start on boot
[root@apache elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    //put it on the PATH
[root@apache elk]# logstash -e 'input { stdin{} } output { stdout{} }'    //standard input and output
The stdin plugin is now waiting for input:
16:58:11.145 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    //typed input
2019-12-19T08:58:35.707Z apache www.baidu.com
www.sina.com.cn    //typed input
2019-12-19T08:58:42.092Z apache www.sina.com.cn
[root@apache elk]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'    //use rubydebug for detailed output; a codec is an encoder/decoder
The stdin plugin is now waiting for input:
17:03:08.226 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    //formatted output follows
{
"@timestamp" => 2019-12-19T09:03:80.267Z,
"@version" => "1",
"host" => "apache",
"message" => "www.baidu.com"
}
[root@apache elk]# logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.80.129:9200"] } }'
//use logstash to write the input into elasticsearch
The stdin plugin is now waiting for input:
17:06:46.846 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    //typed input
www.sina.com.cn
View the data in a browser
Ship the system log file to elasticsearch
[root@apache elk]# chmod o+r /var/log/messages    //grant read access to other users
[root@apache elk]# vim /etc/logstash/conf.d/system.conf    //create the file
input {
file{
path => "/var/log/messages" //输出目录
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
#output to the elasticsearch node (node2)
hosts => ["192.168.80.129:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
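The `%{+YYYY.MM.dd}` part of the `index` option is expanded from each event's timestamp, so Logstash writes one index per day. The name it would build for today can be previewed locally (a sketch with plain `date`, not a Logstash command):

```shell
# Preview today's index name as Logstash would build it (UTC event time)
date -u +"system-%Y.%m.%d"
```

Daily indices make retention simple: old days can be dropped as whole indices instead of deleting individual documents.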
[root@apache elk]# systemctl restart logstash.service    //restart the service
Use the data browser to inspect the details
Install the Kibana visualization tool on node1
[root@node1 ~]# cd /mnt/elk/
[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm    //install
[root@node1 elk]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak    //back up
[root@node1 kibana]# vim kibana.yml    //edit the configuration file
server.port: 5601    //port number
server.host: "0.0.0.0"    //listen on all interfaces
elasticsearch.url: "http://192.168.80.129:9200"    //elasticsearch node address
kibana.index: ".kibana"    //index name
[root@node1 kibana]# systemctl start kibana.service    //start the service
[root@node1 kibana]# systemctl enable kibana.service
Access Kibana from a browser
Connect the Apache log files on the apache server and generate statistics
[root@apache elk]# vim /etc/logstash/conf.d/apache_log.conf    //create the configuration file
input {
file{
path => "/etc/httpd/logs/access_log" //输入信息
type => "access"
start_position => "beginning"
}
file{
path => "/etc/httpd/logs/error_log"
type => "error"
start_position => "beginning"
}
}
output {
if [type] == "access" { //根据条件判断输出信息
elasticsearch {
hosts => ["192.168.80.129:9200"]
index => "apache_access-%{+YYYY.MM.dd}"
}
}
if [type] == "error" {
elasticsearch {
hosts => ["192.168.80.129:9200"]
index => "apache_error-%{+YYYY.MM.dd}"
}
}
}
[root@apache elk]# logstash -f /etc/logstash/conf.d/apache_log.conf    //run logstash with the configuration file
Visit some pages on the web server, then check the statistics in Kibana
Choose Management > Index Patterns > Create index pattern and create patterns for the two Apache log indices
Original article: https://blog.51cto.com/14473285/2461853