Using Kibana and Logstash


NSD ARCHITECTURE DAY04

  1. Case 1: Importing data
  2. Case 2: Comprehensive exercise

1 Case 1: Importing Data

1.1 Problem

This case requires bulk-importing data:

  • Bulk-import the data and verify it

1.2 Steps

Follow the steps below to implement this case.

Step 1: Import the data

Import the data in bulk with POST requests. The data is in JSON format, so use --data-binary to upload the JSON files (including any index configuration they carry) to the _bulk URL.

    [root@host ~]# scp /var/ftp/elk/*.gz 192.168.1.66:/root/    //run on the machine that holds the .gz files
    [root@kibana ~]# gzip -d logs.jsonl.gz
    [root@kibana ~]# gzip -d accounts.json.gz
    [root@kibana ~]# gzip -d shakespeare.json.gz
    [root@kibana ~]# curl -X POST "http://192.168.1.61:9200/_bulk" \
    --data-binary @shakespeare.json
    [root@kibana ~]# curl -X POST "http://192.168.1.61:9200/xixi/haha/_bulk" \
    --data-binary @accounts.json
    //the index is xixi and the type is haha; both must be supplied — if the data itself carries no index, add them in the URL
    [root@kibana ~]# curl -X POST "http://192.168.1.61:9200/_bulk" \
    --data-binary @logs.jsonl
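Each _bulk payload is newline-delimited JSON: an action line, then the document itself, with a trailing newline at the end. A minimal Python sketch of building such a payload (the xixi/haha names are taken from the example above):

```python
import json

def bulk_body(docs, index=None, doc_type=None):
    """Build an Elasticsearch _bulk payload: one action line per
    document, followed by the document, newline-delimited, ending
    with a trailing newline as the bulk API requires."""
    lines = []
    for doc in docs:
        meta = {}
        if index:
            meta["_index"] = index
        if doc_type:
            meta["_type"] = doc_type
        lines.append(json.dumps({"index": meta}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = bulk_body([{"account_number": 25}], index="xixi", doc_type="haha")
print(payload)
```

This is exactly the format --data-binary sends: curl with --data-binary preserves the newlines that the bulk API depends on, which plain -d would mangle.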

2) Query the results with GET

    [root@kibana ~]# curl -XGET 'http://192.168.1.61:9200/_mget?pretty' -d '{
    "docs":[
    {
    "_index":"shakespeare",
    "_type":"act",
    "_id":0
    },
    {
    "_index":"shakespeare",
    "_type":"line",
    "_id":0
    },
    {
    "_index":"xixi",
    "_type":"haha",
    "_id":25
    }
    ]
    }'
    {        //the query result
    "docs" : [ {
    "_index" : "shakespeare",
    "_type" : "act",
    "_id" : "0",
    "_version" : 1,
    "found" : true,
    "_source" : {
    "line_id" : 1,
    "play_name" : "Henry IV",
    "speech_number" : "",
    "line_number" : "",
    "speaker" : "",
    "text_entry" : "ACT I"
    }
    }, {
    "_index" : "shakespeare",
    "_type" : "act",
    "_id" : "0",
    "_version" : 1,
    "found" : true,
    "_source" : {
    "line_id" : 1,
    "play_name" : "Henry IV",
    "speech_number" : "",
    "line_number" : "",
    "speaker" : "",
    "text_entry" : "ACT I"
    }
    }, {
    "_index" : "xixi",
    "_type" : "haha",
    "_id" : "25",
    "_version" : 1,
    "found" : true,
    "_source" : {
    "account_number" : 25,
    "balance" : 40540,
    "firstname" : "Virginia",
    "lastname" : "Ayala",
    "age" : 39,
    "gender" : "F",
    "address" : "171 Putnam Avenue",
    "employer" : "Filodyne",
    "email" : "virginiaayala@filodyne.com",
    "city" : "Nicholson",
    "state" : "PA"
    }
    } ]
    }
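The _mget body is plain JSON; a small helper to build it programmatically (index/type/id values taken from the curl example above):

```python
import json

def mget_body(specs):
    """Build an Elasticsearch _mget request body from a list of
    (index, type, id) tuples, as in the curl example above."""
    return json.dumps(
        {"docs": [{"_index": i, "_type": t, "_id": d} for i, t, d in specs]}
    )

body = mget_body([("shakespeare", "act", 0),
                  ("shakespeare", "line", 0),
                  ("xixi", "haha", 25)])
print(body)
```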

Step 2: Use Kibana to check whether the data was imported successfully

1) After importing, check whether the logs were imported successfully, as shown in Figure 1:

    [root@host ~]# firefox http://192.168.1.65:9200/_plugin/head/

Figure 1

2) Load the data into Kibana, as shown in Figure 2:

    [root@host ~]# firefox http://192.168.1.66:5601

Figure 2

3) Once the index pattern is created successfully you will see logstash-*, as shown in Figure 3:

Figure 3

4) After the import succeeds, choose Discover, as shown in Figure 4:

Figure 4

Note: the reason no data appears here is that the imported logs fall outside the displayed time range; the default is the last 15 minutes, so adjust the time range to show the data.

5) Change the time range in Kibana: click "Last 15 minutes", as shown in Figure 5:

Figure 5

6) Choose Absolute, as shown in Figure 6:

Figure 6

7) Set the range from 2015-05-15 to 2015-05-22, as shown in Figure 7:

Figure 7

8) View the result, as shown in Figure 8:

Figure 8

9) Besides bar charts, Kibana supports many other visualization types, as shown in Figure 9:

Figure 9

10) Build a pie chart: choose Pie chart, as shown in Figure 10:

Figure 10

11) Choose From a new search, as shown in Figure 11:

Figure 11

12) Choose Split Slices, as shown in Figure 12:

Figure 12

13) Choose the Terms aggregation and a field such as memory (other fields work too; this is not fixed), as shown in Figure 13:

Figure 13

14) The result, as shown in Figure 14:

Figure 14

15) After saving, the chart can be viewed on the Dashboard, as shown in Figure 15:

Figure 15

2 Case 2: Comprehensive Exercise

2.1 Problem

This case requires:

  • Practicing with plugins
  • Installing and configuring an Apache server
  • Collecting the Apache server's logs with filebeat
  • Processing the logs sent by filebeat with grok
  • Storing them in elasticsearch

2.2 Steps

Follow the steps below to implement this case.

Step 1: Install logstash

1) Configure the hostname, IP and yum repository, then edit /etc/hosts (give se1–se5 and the kibana host the same /etc/hosts as logstash)

    [root@logstash ~]# vim /etc/hosts
    192.168.1.61 se1
    192.168.1.62 se2
    192.168.1.63 se3
    192.168.1.64 se4
    192.168.1.65 se5
    192.168.1.66 kibana
    192.168.1.67 logstash

2) Install java-1.8.0-openjdk and logstash

    [root@logstash ~]# yum -y install java-1.8.0-openjdk
    [root@logstash ~]# yum -y install logstash
    [root@logstash ~]# java -version
    openjdk version "1.8.0_131"
    OpenJDK Runtime Environment (build 1.8.0_131-b12)
    OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
    [root@logstash ~]# touch /etc/logstash/logstash.conf
    [root@logstash ~]# /opt/logstash/bin/logstash --version
    logstash 2.3.4
    [root@logstash ~]# /opt/logstash/bin/logstash-plugin list    //list the plugins
    ...
    logstash-input-stdin    //standard-input plugin
    logstash-output-stdout    //standard-output plugin
    ...
    [root@logstash ~]# vim /etc/logstash/logstash.conf
    input{
      stdin{
      }
    }
    filter{
    }
    output{
      stdout{
      }
    }
    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    //start it and test
    Settings: Default pipeline workers: 2
    Pipeline main started
    aa        //logstash reads from standard input and writes to standard output (the screen)
    2018-09-15T06:19:28.724Z logstash aa

Note: if you are unsure how to write the configuration file, consult the plugin documentation at:

https://github.com/logstash-plugins

3) codec plugins

    [root@logstash ~]# vim /etc/logstash/logstash.conf
    input{
      stdin{
        codec => "json"        //set the input codec to json
      }
    }
    filter{
    }
    output{
      stdout{
        codec => "rubydebug"        //set the output codec to rubydebug
      }
    }
    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    Settings: Default pipeline workers: 2
    Pipeline main started
    {"a":1}
    {
        "a" => 1,
        "@version" => "1",
        "@timestamp" => "2018-09-15T06:34:14.538Z",
        "host" => "logstash"
    }
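What the json codec does on the input side can be sketched in a few lines of Python — a rough model only, not logstash's actual implementation; the metadata fields mirror the rubydebug output above:

```python
import json
from datetime import datetime, timezone

def decode_json_event(line, host="logstash"):
    """Rough model of the json input codec: parse the line as JSON,
    then merge in the standard event metadata fields (@version,
    @timestamp, host) seen in the rubydebug output."""
    event = json.loads(line)
    event.setdefault("@version", "1")
    ts = datetime.now(timezone.utc)
    event.setdefault("@timestamp",
                     ts.strftime("%Y-%m-%dT%H:%M:%S.") + "%03dZ" % (ts.microsecond // 1000))
    event.setdefault("host", host)
    return event

print(decode_json_event('{"a":1}'))
```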

4) The file input plugin

    [root@logstash ~]# vim /etc/logstash/logstash.conf
    input{
      file {
        path => [ "/tmp/a.log", "/var/tmp/b.log" ]
        sincedb_path => "/var/lib/logstash/sincedb"    //records how far each file has been read
        start_position => "beginning"                //where to start reading a file the first time
        type => "testlog"                    //type name
      }
    }
    filter{
    }
    output{
      stdout{
        codec => "rubydebug"
      }
    }
    [root@logstash ~]# touch /tmp/a.log
    [root@logstash ~]# touch /var/tmp/b.log
    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf

Open another terminal and write some data:

    [root@logstash ~]# echo a1 > /tmp/a.log
    [root@logstash ~]# echo b1 > /var/tmp/b.log

Check the original terminal:

    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    Settings: Default pipeline workers: 2
    Pipeline main started
    {
        "message" => "a1",
        "@version" => "1",
        "@timestamp" => "2018-09-15T06:44:30.671Z",
        "path" => "/tmp/a.log",
        "host" => "logstash",
        "type" => "testlog"
    }
    {
        "message" => "b1",
        "@version" => "1",
        "@timestamp" => "2018-09-15T06:45:04.725Z",
        "path" => "/var/tmp/b.log",
        "host" => "logstash",
        "type" => "testlog"
    }
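The sincedb_path setting is what lets logstash pick up only the lines added since its last read, even across restarts. A rough Python model of that bookkeeping (a sketch, not logstash's format — the real sincedb also tracks inode and device numbers):

```python
def read_new_lines(path, sincedb):
    """Sketch of sincedb behavior: remember the byte offset already
    consumed for each file in the 'sincedb' dict {path: offset},
    and return only the lines appended since the last call."""
    offset = sincedb.get(path, 0)
    with open(path) as f:
        f.seek(offset)
        lines = f.readlines()
        sincedb[path] = f.tell()
    return [l.rstrip("\n") for l in lines]
```

Calling it twice on the same file returns only the newly appended lines the second time, which is exactly why the echo commands above each produce a single event.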

5) The tcp and udp input plugins

    [root@logstash ~]# vim /etc/logstash/logstash.conf
    input{
      file {
        path => [ "/tmp/a.log", "/var/tmp/b.log" ]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
      }
      tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
      }
      udp {
        host => "0.0.0.0"
        port => "9999"
        type => "udplog"
      }
    }
    filter{
    }
    output{
      stdout{
        codec => "rubydebug"
      }
    }
    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    //start it

Open another terminal; you can see the ports:

    [root@logstash tmp]# netstat -antup | grep 8888
    tcp6 0 0 :::8888 :::* LISTEN 22191/java
    [root@logstash tmp]# netstat -antup | grep 9999
    udp6 0 0 :::9999 :::* 22191/java

On another host, write a script that sends data, so the running logstash instance can receive it:

    [root@se5 ~]# vim tcp.sh
    function sendmsg(){
      if [[ "$1" == "tcp" ]];then
        exec 9<>/dev/tcp/192.168.1.67/8888
      else
        exec 9<>/dev/udp/192.168.1.67/9999
      fi
      echo "$2" >&9
      exec 9<&-
    }
    [root@se5 ~]# . tcp.sh        //source it
    [root@se5 ~]# sendmsg udp "is tcp test"
    [root@se5 ~]# sendmsg udp "is tcp ss"
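The /dev/tcp and /dev/udp redirections above are a bash-only feature; the same helper in portable Python looks like this (host and default ports taken from the lab's logstash.conf, overridable for testing):

```python
import socket

def sendmsg(proto, msg, host="192.168.1.67", port=None):
    """Python version of the tcp.sh helper above: open a TCP or
    UDP socket to the logstash host and send one line. The default
    ports (8888 tcp / 9999 udp) match the lab's logstash.conf."""
    if proto == "tcp":
        sock = socket.create_connection((host, port or 8888))
    else:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.connect((host, port or 9999))
    sock.sendall((msg + "\n").encode())
    sock.close()
```

Usage mirrors the bash version: sendmsg("udp", "is tcp test") sends one datagram that shows up in the rubydebug output with type udplog.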

Check the result on the logstash host:

    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    Settings: Default pipeline workers: 2
    Pipeline main started
    {
        "message" => "is tcp test\n",
        "@version" => "1",
        "@timestamp" => "2018-09-15T07:45:00.638Z",
        "type" => "udplog",
        "host" => "192.168.1.65"
    }
    {
        "message" => "is tcp ss\n",
        "@version" => "1",
        "@timestamp" => "2018-09-15T07:45:08.897Z",
        "type" => "udplog",
        "host" => "192.168.1.65"
    }

6) syslog plugin exercise

    [root@logstash ~]# systemctl list-unit-files | grep syslog
    rsyslog.service enabled
    syslog.socket static
    [root@logstash ~]# vim /etc/logstash/logstash.conf
    ...
      start_position => "beginning"
      type => "testlog"
      }
      tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
      }
      udp {
        host => "0.0.0.0"
        port => "9999"
        type => "udplog"
      }
      syslog {
        port => "514"
        type => "syslog"
      }
    }
    filter{
    }
    output{
      stdout{
        codec => "rubydebug"
      }
    }

Check from another terminal whether port 514 is listening:

    [root@logstash ~]# netstat -antup | grep 514
    tcp6 0 0 :::514 :::* LISTEN 22728/java
    udp6 0 0 :::514 :::* 22728/java

On another host: logs written locally can be viewed locally.

    [root@se5 ~]# vim /etc/rsyslog.conf
    local0.info /var/log/mylog        //add this line yourself
    [root@se5 ~]# systemctl restart rsyslog    //restart rsyslog
    [root@se5 ~]# ll /var/log/mylog        //reports "No such file or directory"
    ls: cannot access /var/log/mylog: No such file or directory
    [root@se5 ~]# logger -p local0.info -t nsd "elk"        //write a log entry
    [root@se5 ~]# ll /var/log/mylog        //check again; the file now exists
    -rw------- 1 root root 29 Sep 15 16:23 /var/log/mylog
    [root@se5 ~]# tail /var/log/mylog    //the entry we wrote is there
    Sep 15 16:23:25 se5 nsd: elk
    [root@se5 ~]# tail /var/log/messages
    //the entry also appears here, because the config file routes .info-level messages to this file as well
    ...
    Sep 15 16:23:25 se5 nsd: elk

Send the local logs to the remote host 1.67:

    [root@se5 ~]# vim /etc/rsyslog.conf
    local0.info @192.168.1.67:514
    //either one @ or two @@ works: a single @ means udp, a double @@ means tcp
    [root@se5 ~]# systemctl restart rsyslog
    [root@se5 ~]# logger -p local0.info -t nds "001 elk"
    [root@logstash bin]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    //the log entry is picked up
    {
        "message" => "001 elk",
        "@version" => "1",
        "@timestamp" => "2018-09-05T09:15:47.000Z",
        "type" => "syslog",
        "host" => "192.168.1.65",
        "priority" => 134,
        "timestamp" => "Jun 5 17:15:47",
        "logsource" => "kibana",
        "program" => "nds1801",
        "severity" => 6,
        "facility" => 16,
        "facility_label" => "local0",
        "severity_label" => "Informational"
    }
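In the output above, priority 134 decodes to facility 16 (local0) and severity 6 (Informational), because syslog packs both into one PRI value: PRI = facility × 8 + severity. A quick check:

```python
def decode_pri(priority):
    """Split a syslog PRI value into (facility, severity):
    PRI = facility * 8 + severity."""
    return priority // 8, priority % 8

print(decode_pri(134))   # → (16, 6): facility local0, severity Informational
```

The same arithmetic explains the sshd events later in this exercise: priority 86 is facility 10 (security/authorization) with severity 6.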

Configure rsyslog.conf to forward data to a remote host: when someone logs in to 1.65, forward the login log (/var/log/secure) to the logstash machine, 1.67:

    [root@se5 ~]# vim /etc/rsyslog.conf
    57 authpriv.* @@192.168.1.67:514
    //on line 57, change /var/log/secure to @@192.168.1.67:514
    [root@se5 ~]# systemctl restart rsyslog
    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    //log in to 1.65 from any host; the logstash host will show the data
    Settings: Default pipeline workers: 2
    Pipeline main started
    {
        "message" => "Accepted password for root from 192.168.1.254 port 33780 ssh2\n",
        "@version" => "1",
        "@timestamp" => "2018-09-15T08:40:57.000Z",
        "type" => "syslog",
        "host" => "192.168.1.65",
        "priority" => 86,
        "timestamp" => "Sep 15 16:40:57",
        "logsource" => "se5",
        "program" => "sshd",
        "pid" => "26133",
        "severity" => 6,
        "facility" => 10,
        "facility_label" => "security/authorization",
        "severity_label" => "Informational"
    }
    {
        "message" => "pam_unix(sshd:session): session opened for user root by (uid=0)\n",
        "@version" => "1",
        "@timestamp" => "2018-09-15T08:40:57.000Z",
        "type" => "syslog",
        "host" => "192.168.1.65",
        "priority" => 86,
        "timestamp" => "Sep 15 16:40:57",
        "logsource" => "se5",
        "program" => "sshd",
        "pid" => "26133",
        "severity" => 6,
        "facility" => 10,
        "facility_label" => "security/authorization",
        "severity_label" => "Informational"
    }

7) The filter grok plugin

The grok plugin:

parses all kinds of unstructured log data.

grok uses regular expressions to turn unstructured data into structured data.

It matches with named capture groups; the regular expressions must be written for the specific structure of the data.

Although they are hard to write, they are applicable almost everywhere.

    [root@logstash ~]# vim /etc/logstash/logstash.conf
    input{
      stdin{ codec => "json" }
      file {
        path => [ "/tmp/a.log", "/var/tmp/b.log" ]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
      }
      tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
      }
      udp {
        host => "0.0.0.0"
        port => "9999"
        type => "udplog"
      }
      syslog {
        port => "514"
        type => "syslog"
      }
    }
    filter{
      grok{
        match => ["message", "(?<key>reg)"]
      }
    }
    output{
      stdout{
        codec => "rubydebug"
      }
    }
    [root@se5 ~]# yum -y install httpd
    [root@se5 ~]# systemctl restart httpd
    [root@se5 ~]# vim /var/log/httpd/access_log
    192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] "GET / HTTP/1.1" 403 4897 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0"

Copy the log entry from /var/log/httpd/access_log into /tmp/a.log on the logstash host:

    [root@logstash ~]# vim /tmp/a.log
    192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] "GET / HTTP/1.1" 403 4897 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0"
    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    //the message shows up, but nothing in it has been parsed (note the _grokparsefailure tag)
    Settings: Default pipeline workers: 2
    Pipeline main started
    {
        "message" => "192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] \"GET / HTTP/1.1\" 403 4897 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0\"",
        "@version" => "1",
        "@timestamp" => "2018-09-15T10:26:51.335Z",
        "path" => "/tmp/a.log",
        "host" => "logstash",
        "type" => "testlog",
        "tags" => [
            [0] "_grokparsefailure"
        ]
    }

To get the line parsed, copy the log entry to /tmp/a.log again in the same way, and change the grok section of logstash.conf.

Locate the directory of predefined regex patterns:

    [root@logstash ~]# cd /opt/logstash/vendor/bundle/ \
    jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
    [root@logstash ~]# vim grok-patterns    //search for COMBINEDAPACHELOG
    COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
    [root@logstash ~]# vim /etc/logstash/logstash.conf
    ...
    filter{
      grok{
        match => ["message", "%{COMBINEDAPACHELOG}"]
      }
    }
    ...

The parsed result:

    [root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
    Settings: Default pipeline workers: 2
    Pipeline main started
    {
        "message" => "192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] \"GET /noindex/css/open-sans.css HTTP/1.1\" 200 5081 \"http://192.168.1.65/\" \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0\"",
        "@version" => "1",
        "@timestamp" => "2018-09-15T10:55:57.743Z",
        "path" => "/tmp/a.log",
        "host" => "logstash",
        "type" => "testlog",
        "clientip" => "192.168.1.254",
        "ident" => "-",
        "auth" => "-",
        "timestamp" => "15/Sep/2018:18:25:46 +0800",
        "verb" => "GET",
        "request" => "/noindex/css/open-sans.css",
        "httpversion" => "1.1",
        "response" => "200",
        "bytes" => "5081",
        "referrer" => "\"http://192.168.1.65/\"",
        "agent" => "\"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0\""
    }
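%{COMBINEDAPACHELOG} expands to one large regular expression with named capture groups. A simplified Python equivalent — the field names follow the grok pattern, but this regex is a stripped-down stand-in, not the real one:

```python
import re

# Simplified stand-in for grok's COMBINEDAPACHELOG pattern: the
# named groups mirror the fields grok extracts above, but the real
# pattern is considerably more permissive.
COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d+) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] '
        '"GET /noindex/css/open-sans.css HTTP/1.1" 200 5081 '
        '"http://192.168.1.65/" "Mozilla/5.0"')
fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])  # → 192.168.1.254 GET 200
```

This is why grok patterns are reusable: the named groups become event fields directly, exactly as in the rubydebug output above.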

Step 2: Install the Apache service, collect the Apache server's logs with filebeat, and store them in elasticsearch

1) Install filebeat on the host where Apache was installed earlier

    [root@se5 ~]# yum -y install filebeat
    [root@se5 ~]# vim /etc/filebeat/filebeat.yml
    paths:
        - /var/log/httpd/access_log    //path of the log; the dash plus a space is yml list syntax
    document_type: apachelog    //document type
    #elasticsearch:        //comment this section out
    #  hosts: ["localhost:9200"]
    logstash:                    //uncomment this section
      hosts: ["192.168.1.67:5044"]     //uncomment; the IP of the logstash host
    [root@se5 ~]# systemctl start filebeat
    [root@logstash ~]# vim /etc/logstash/logstash.conf
    input{
      stdin{ codec => "json" }
      beats{
        port => 5044
      }
      file {
        path => [ "/tmp/a.log", "/var/tmp/b.log" ]
        sincedb_path => "/dev/null"
        start_position => "beginning"
        type => "testlog"
      }
      tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
      }
      udp {
        host => "0.0.0.0"
        port => "9999"
        type => "udplog"
      }
      syslog {
        port => "514"
        type => "syslog"
      }
    }
    filter{
      if [type] == "apachelog"{
        grok{
          match => ["message", "%{COMBINEDAPACHELOG}"]
      }}
    }
    output{
      stdout{ codec => "rubydebug" }
      if [type] == "filelog"{
        elasticsearch {
          hosts => ["192.168.1.61:9200", "192.168.1.62:9200"]
          index => "filelog"
          flush_size => 2000
          idle_flush_time => 10
      }}
    }
    [root@logstash logstash]# /opt/logstash/bin/logstash \
    -f /etc/logstash/logstash.conf
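The if [type] == ... conditionals route each event by its type field: grok runs only for apachelog events, and only types with an elasticsearch block get indexed, while everything still goes to stdout. A toy Python model of that routing (event dicts are an assumption, not logstash's internal representation):

```python
def route(event):
    """Toy model of the conditionals in logstash.conf above: apply
    grok only to apachelog events, send only filelog events to
    elasticsearch, and send everything to stdout."""
    outputs = ["stdout"]
    if event.get("type") == "apachelog":
        event["grok_applied"] = True      # stands in for the grok filter
    if event.get("type") == "filelog":
        outputs.append("elasticsearch:filelog")
    return outputs
```

With this config an apachelog event is parsed but not indexed, which is exactly what step 2) below fixes by adding an apachelog elasticsearch output.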

Open another terminal and check that port 5044 started successfully:

    [root@logstash ~]# netstat -antup | grep 5044
    tcp6 0 0 :::5044 :::* LISTEN 23776/java
    [root@host ~]# firefox 192.168.1.65    //the IP of the machine running filebeat

Back in the original terminal, data is arriving.

2) Modify the logstash.conf file

    [root@logstash logstash]# vim logstash.conf
    ...
    output{
      stdout{ codec => "rubydebug" }
      if [type] == "apachelog"{
        elasticsearch {
          hosts => ["192.168.1.61:9200", "192.168.1.62:9200"]
          index => "apachelog"
          flush_size => 2000
          idle_flush_time => 10
      }}
    }

Browse to Elasticsearch; the apachelog index is there, as shown in Figure 16:

Figure 16

Original article: https://www.cnblogs.com/tiki/p/10785554.html
