Installation notes: collecting Kubernetes container logs with EFK

Deployment environment
$ kubectl get node
NAME       STATUS     ROLES    AGE   VERSION
master01   Ready      master   13d   v1.14.0
master02   Ready      master   13d   v1.14.0
master03   Ready      master   13d   v1.14.0
node01     Ready      <none>   13d   v1.14.0
node02     Ready      <none>   13d   v1.14.0
node03     Ready      <none>   13d   v1.14.0
Directory structure
# cd efk/
# tree
.
├── es
│   ├── es-statefulset.yaml
│   ├── pvc.yaml
│   ├── pv.yaml
│   ├── rbac.yaml
│   └── service.yaml
├── filebeate
│   ├── config.yaml
│   ├── daemonset.yaml
│   ├── filebeat.tgz
│   └── rbac.yaml
└── kibana
    ├── deployment.yaml
    └── service.yaml
Create Elasticsearch
# cd /root/efk/es
Create the ES PersistentVolume
$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "es-pv"
  labels:
    name: "es-pv"
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /es  # must be mode 777 on the node, otherwise pod creation fails

# Apply the manifest
kubectl create -f pv.yaml
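Before the ES pod can start, the hostPath directory has to exist on the node with the permissions noted in pv.yaml. A minimal prep and sanity check might look like this (run the mkdir on every node that could schedule the pod):
$ mkdir -p /es && chmod 777 /es    # on each candidate node
$ kubectl get pv es-pv             # STATUS should show Available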
Create the ES PersistentVolumeClaim
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "es-pvc"
  namespace: kube-system
  labels:
    name: "es-pvc"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      name: es-pv

# Apply the manifest
kubectl create -f pvc.yaml
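The claim binds to the PV through the selector matching the name=es-pv label. A quick check that binding succeeded:
$ kubectl get pvc es-pvc -n kube-system   # STATUS should show Bound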
Create the ES RBAC objects
$ cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: rbac.authorization.k8s.io

# Apply the manifest
kubectl create -f rbac.yaml
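To confirm the RBAC objects were created:
$ kubectl -n kube-system get serviceaccount elasticsearch-logging
$ kubectl get clusterrole,clusterrolebinding elasticsearch-logging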
Create the ES StatefulSet
$ cat es-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v6.6.1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.6.1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.6.1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.io/elasticsearch:6.6.1
        name: elasticsearch-logging
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        persistentVolumeClaim:
          claimName: es-pvc
      initContainers:
      - image: alpine:3.7
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true

# Apply the manifest
kubectl create -f es-statefulset.yaml
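The init container raises vm.max_map_count to 262144 on the node, which Elasticsearch requires to boot. The first start can take a minute or two; confirm the pod reaches Running and skim its logs:
$ kubectl -n kube-system get pods -l k8s-app=elasticsearch-logging
$ kubectl -n kube-system logs elasticsearch-logging-0 --tail=20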
Create the ES Service
$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
# Apply the manifest
kubectl create -f service.yaml
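With the Service in place, ES should answer on port 9200 inside the cluster. One way to probe it, assuming curl is present in the ES image (it is in the stock 6.x images):
$ kubectl -n kube-system exec elasticsearch-logging-0 -- curl -s http://elasticsearch-logging:9200/_cluster/health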
Create Filebeat
# cd /root/efk/filebeate
Create the Filebeat RBAC objects
$ cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

# Apply the manifest
kubectl create -f rbac.yaml
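The add_kubernetes_metadata processor used below needs to get, list, and watch pods. Impersonating the service account verifies the binding actually grants that:
$ kubectl auth can-i list pods --as=system:serviceaccount:kube-system:filebeat   # expect: yes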
Create the Filebeat ConfigMaps (the main filebeat.yml plus the input config used to scrape container logs)
$ cat config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        path: /usr/share/filebeat/prospectors.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    processors:
      - add_cloud_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: log
      enabled: true
      symlinks: true
      paths:
        - /var/log/containers/*.log
      exclude_files: ["calico","firewall","filebeat","kube-proxy"]
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

# Apply the manifest
kubectl create -f config.yaml
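Filebeat expands ${VAR:default} placeholders at startup, so output.elasticsearch.hosts falls back to elasticsearch:9200 unless ELASTICSEARCH_HOST/ELASTICSEARCH_PORT are set (the DaemonSet below sets ELASTICSEARCH_HOST=elasticsearch-logging). To double-check both ConfigMaps landed:
$ kubectl -n kube-system get configmap filebeat-config filebeat-prospectors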
Create the Filebeat DaemonSet
$ cat daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: prima/filebeat:6.4.2
        args: [
          "-c","/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_USERNAME
          value:
        - name: ELASTICSEARCH_PASSWORD
          value:
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config    # main filebeat config file
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors # input configs for scraping container logs
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: data # filebeat registry/data directory
          mountPath: /usr/share/filebeat/data
        - name: varlog   # the host's /var/log
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers # the host's container JSON log files
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log/
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers/
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate   # created on the host if it does not already exist

# Apply the manifest
kubectl create -f daemonset.yaml
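One filebeat pod should now be running on every schedulable node (the masters are skipped unless their taints are tolerated), and once logs flow Filebeat creates a filebeat-* index in ES:
$ kubectl -n kube-system get pods -l k8s-app=filebeat -o wide
$ kubectl -n kube-system exec elasticsearch-logging-0 -- curl -s http://localhost:9200/_cat/indices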
Create Kibana
# cd /root/efk/kibana
Create the Kibana Deployment
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: kibana:6.6.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch-logging:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
# Apply the manifest
kubectl create -f deployment.yaml
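Confirm the Kibana pod comes up (the image pull can take a while on first run):
$ kubectl -n kube-system get pods -l k8s-app=kibana-logging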
Create the Kibana Service
$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30003
  selector:
    k8s-app: kibana-logging

# Apply the manifest
kubectl create -f service.yaml
Check the Services
$ kubectl get svc -n kube-system
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
elasticsearch-logging   ClusterIP   10.101.1.2       <none>        9200/TCP                 22h
kibana-logging          NodePort    10.101.121.228   <none>        5601:30003/TCP           21h    # 30003 is the Kibana NodePort
kube-dns                ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   20d
kubernetes-dashboard    NodePort    10.110.209.252   <none>        443:31021/TCP            20d
traefik-web-ui          ClusterIP   10.102.131.255   <none>        80/TCP                   19d
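Kibana should now be reachable on any node at the NodePort, e.g. http://<node-ip>:30003 (substitute a real node IP). On first visit, create an index pattern matching filebeat-*, the default index name Filebeat writes, and the container logs become searchable in the Discover view.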

Source: https://www.cnblogs.com/lixinliang/p/12217272.html
