Collecting Kubernetes Application Logs with EFK

This section covers:

  • Introduction to EFK
  • Installing and configuring EFK
    • Configure the efk-rbac.yaml file
    • Configure es-controller.yaml
    • Configure es-service.yaml
    • Configure fluentd-es-ds.yaml
    • Configure kibana-controller.yaml
    • Configure kibana-service.yaml
    • Label the Nodes
    • Apply the definition files
    • Check the results
  • Access Kibana

I. Introduction to EFK

  • Logstash (or Fluentd) collects the logs
  • Elasticsearch stores the logs and provides search
  • Kibana handles log querying and visualization

Official address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

Logs are collected by running fluentd on every node as a DaemonSet. Fluentd mounts the Docker log directory /var/lib/docker/containers and the /var/log directory into its Pod. On each node a separate directory is created for every Pod under /var/log/pods, which makes it possible to tell the log output of different containers apart; the log file in each of those directories is a symlink to the container's log output under /var/lib/docker/containers.
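
You can see this chain for yourself on any node by following the symlinks. This is only a quick check: the exact directory layout varies slightly between Kubernetes versions, and it assumes the json-file log driver discussed at the end of this section.

ls -lR /var/log/pods/ | head
readlink -f /var/log/pods/*/*.log | head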

II. Installing and Configuring EFK

1. Configure the efk-rbac.yaml file

The EFK components also need an efk-rbac.yaml file, which defines a ServiceAccount named efk.

[[email protected] opt]# mkdir efk
[[email protected] opt]# cd efk

[[email protected] efk]# cat efk-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
  - kind: ServiceAccount
    name: efk
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

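Once the manifests have been applied (step 8 below), the account and its binding can be verified with standard kubectl commands; note that the binding grants the efk ServiceAccount the cluster-admin role, which is broader than fluentd and Kibana strictly need:

kubectl get serviceaccount efk -n kube-system
kubectl get clusterrolebinding efk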

2. Configure es-controller.yaml

[[email protected] efk]# vim es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: index.tenxcloud.com/jimmy/elasticsearch:v2.4.1-2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}

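After everything is applied, a quick way to confirm that Elasticsearch itself is healthy is to query its HTTP port from inside one of its pods. This is only a sketch: the pod name below is taken from the output in step 9, and it assumes curl is available inside the image:

kubectl exec elasticsearch-logging-v1-nw3p3 -n kube-system -- curl -s 'http://localhost:9200/_cluster/health?pretty'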

3. Configure es-service.yaml

[[email protected] efk]# vim es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging

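The Service selects the Elasticsearch pods by their k8s-app=elasticsearch-logging label. Once the pods are running, you can confirm that both replicas are registered behind the Service by listing its endpoints (a quick check, not part of the original walkthrough):

kubectl get endpoints elasticsearch-logging -n kube-system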

4. Configure fluentd-es-ds.yaml

[[email protected] efk]# cat fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: index.tenxcloud.com/jimmy/fluentd-elasticsearch:1.22
        command:
          - '/bin/sh'
          - '-c'
          - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

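Note that the container command appends td-agent's stdout to /var/log/fluentd.log, and /var/log is a hostPath mount, so much of the useful fluentd output ends up in that file on the node rather than in kubectl logs. When a fluentd pod misbehaves (as one does in step 9 below), it is worth checking both places; the pod name here is the failing one from that output, and the tail command must be run on the node hosting it:

kubectl get ds fluentd-es-v1.22 -n kube-system
kubectl logs fluentd-es-v1.22-f5ljr -n kube-system
tail -n 50 /var/log/fluentd.log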

5. Configure kibana-controller.yaml

[[email protected] efk]# cat kibana-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: index.tenxcloud.com/jimmy/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          - name: "KIBANA_BASE_URL"
            value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

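The KIBANA_BASE_URL value must match the path Kibana is reached through (the kube-apiserver proxy path used in section III); if you expose Kibana some other way, adjust it accordingly. Once the manifests are applied, the rollout can be followed with:

kubectl rollout status deployment/kibana-logging -n kube-system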

6. Configure kibana-service.yaml

[[email protected] efk]# cat kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging


[[email protected] efk]# ls
efk-rbac.yaml  es-controller.yaml  es-service.yaml  fluentd-es-ds.yaml  kibana-controller.yaml  kibana-service.yaml

7. Label the Nodes

The DaemonSet fluentd-es-v1.22 is defined with the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so this label has to be set on every Node where fluentd is expected to run:

[[email protected] efk]# kubectl label nodes 172.16.7.151 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.7.151" labeled
[[email protected] efk]# kubectl label nodes 172.16.7.152 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.7.152" labeled
[[email protected] efk]# kubectl label nodes 172.16.7.153 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.7.153" labeled
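
To confirm the label landed on all three nodes, list them with a label selector; only labelled nodes are returned:

kubectl get nodes -l beta.kubernetes.io/fluentd-ds-ready=true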

8. Apply the definition files

[[email protected] efk]# kubectl create -f .

9. Check the results

[[email protected] efk]# kubectl get deployment -n kube-system|grep kibana
kibana-logging         1         1         1            1           1h

[[email protected] efk]# kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana'
elasticsearch-logging-v1-nw3p3          1/1       Running   0          43m
elasticsearch-logging-v1-pp89h          1/1       Running   0          43m
fluentd-es-v1.22-cqd1s                  1/1       Running   0          15m
fluentd-es-v1.22-f5ljr                  0/1       Error     6          15m
fluentd-es-v1.22-x24jx                  1/1       Running   0          15m
kibana-logging-4293390753-kg8kx         1/1       Running   0          1h

[[email protected] efk]# kubectl get service  -n kube-system|grep -E 'elasticsearch|kibana'
elasticsearch-logging   10.254.50.63     <none>        9200/TCP                        1h
kibana-logging          10.254.169.159   <none>        5601/TCP                        1h
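
The elasticsearch-logging Service is a ClusterIP, so it is only reachable from inside the cluster network; from a node that can route to the service network you can hit the ClusterIP shown above directly as a quick sanity check:

curl http://10.254.50.63:9200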

The first time the Kibana Pod starts it spends a fairly long time (10-20 minutes) optimizing and caching the status page bundles; you can tail the Pod's log to watch the progress.

[[email protected] efk]# kubectl logs kibana-logging-4293390753-86h5d -n kube-system -f
ELASTICSEARCH_URL=http://elasticsearch-logging:9200
server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
{"type":"log","@timestamp":"2017-10-13T00:51:31Z","tags":["info","optimize"],"pid":5,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}
{"type":"log","@timestamp":"2017-10-13T01:13:36Z","tags":["info","optimize"],"pid":5,"message":"Optimization of bundles for kibana and statusPage complete in 1324.64 seconds"}
{"type":"log","@timestamp":"2017-10-13T01:13:37Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:38Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:40Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:40Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:40Z","tags":["listening","info"],"pid":5,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-10-13T01:13:45Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-10-13T01:13:49Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

III. Accessing Kibana

1. Access through kube-apiserver: get the Kibana service URL

[[email protected] efk]# kubectl cluster-info
Kubernetes master is running at https://172.16.7.151:6443
Elasticsearch is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Open the following URL in a browser: https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana

2. Access through kubectl proxy: create a proxy

[[email protected] efk]# kubectl proxy --address='172.16.7.151' --port=8086 --accept-hosts='^*$' &

Open the following URL in a browser: http://172.16.7.151:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging

If at this point the Create button is greyed out and cannot be clicked, and the Time-field name dropdown offers no options, check your Docker configuration. Fluentd reads the logs under /var/log/containers/, and those files are symlinks to /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log, so --log-driver must be set to json-file; the default may be journald.

Check the current --log-driver:

[[email protected] ~]# docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      88a4867/1.12.6
 Built:           Mon Jul  3 16:02:02 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      88a4867/1.12.6
 Built:           Mon Jul  3 16:02:02 2017
 OS/Arch:         linux/amd64
[[email protected] efk]# docker info |grep 'Logging Driver'
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
WARNING: bridge-nf-call-ip6tables is disabled
Logging Driver: journald

Change --log-driver for this version of Docker:

[[email protected] ~]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=json-file --signature-verification=false'
[[email protected] efk]# systemctl restart docker

[Note]: Normally this parameter would be changed by adding the following to /etc/docker/daemon.json:

{
  "log-driver": "json-file"
}

In this Docker version, however, --log-driver is defined in /etc/sysconfig/docker. In docker-ce, the default --log-driver is already json-file.
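
For reference, on a docker-ce installation where /etc/docker/daemon.json is the right place, a minimal configuration might look like the sketch below; the log-opts rotation limits are example values only, not something this setup uses:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

Restart Docker after editing the file (systemctl restart docker).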

Problems encountered:

Because --log-driver had earlier been configured in /etc/docker/daemon.json, Docker failed to start after the restart. After moving the setting into /etc/sysconfig/docker and starting Docker again, the node turned NotReady and all of its Pods went to Unknown. Checking the kubelet status showed that the kubelet process had died.

[[email protected] ~]# kubectl get nodes
NAME           STATUS     AGE       VERSION
172.16.7.151   NotReady   28d       v1.6.0
172.16.7.152   Ready      28d       v1.6.0
172.16.7.153   Ready      28d       v1.6.0

Start kubelet:

[[email protected] ~]# systemctl start kubelet
[[email protected] ~]# kubectl get nodes
NAME           STATUS    AGE       VERSION
172.16.7.151   Ready     28d       v1.6.0
172.16.7.152   Ready     28d       v1.6.0
172.16.7.153   Ready     28d       v1.6.0

Open the Kibana URL in the browser again: http://172.16.7.151:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging. This time the Create button is available.

On the Settings -> Indices page, create an index (roughly the equivalent of a database in MySQL): untick the pre-selected Index contains time-based events option, keep the default logstash-* pattern, and click Create.

After the index is created, the logs aggregated in Elasticsearch can be browsed under Discover.
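
If Discover still shows no documents, confirm that fluentd has actually created logstash-* indices in Elasticsearch. With the kubectl proxy from earlier still running, the Elasticsearch service can be queried through the same proxy path pattern (a troubleshooting sketch, not part of the original text):

curl 'http://172.16.7.151:8086/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'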

Original article: http://www.cnblogs.com/zhaojiankai/p/7898286.html
