K8s Cluster Log Collection

  1. Which logs to collect
    Logs of the K8s system components
    Logs of the applications deployed in the K8s cluster

  2. Logging solution: Filebeat + ELK

      

Filebeat (log shipper) + Logstash (data processing engine) + Elasticsearch (data storage, full-text indexing, distributed search engine) + Kibana (data visualization, charting, search)

      

3. How to collect logs from containers
      

Collection approach: attach a dedicated log-collection container (sidecar) to the Pod, as sketched below
Advantage: low coupling
Disadvantage: every Pod starts its own log-collection agent, which increases resource consumption and operations/maintenance cost
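A minimal sketch of this sidecar pattern (the Pod name, image and paths here are placeholders for illustration only; the concrete nginx and Tomcat examples follow in sections 6 and 7):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar       # hypothetical example name
spec:
  containers:
  - name: app                      # the application writes its logs into the shared volume
    image: nginx
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-agent                # dedicated log-collection sidecar
    image: docker.elastic.co/beats/filebeat:6.4.2
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app      # reads the same files the app writes
  volumes:
  - name: app-logs
    emptyDir: {}                   # shared between both containers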
            
4. Deploy a DaemonSet to collect the K8s component log /var/log/messages
      
4.1 Install ELK on the Harbor node

Install and configure ELK
      

1) Install OpenJDK
yum -y install java-1.8.0-openjdk
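To confirm the JDK is available before continuing (an optional check, not in the original):

java -version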
            
2) Install Logstash
https://www.elastic.co/guide/en/logstash/7.6/installing-logstash.html
            
Configure the Elastic yum repository
/etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
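Since gpgcheck=1 is set, it may help to import the Elastic GPG key up front and confirm the repository is visible (optional steps, assuming the host can reach artifacts.elastic.co):

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum repolist enabled | grep logstash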
                        

3) Install the ELK components
yum -y install logstash elasticsearch kibana
            
4) Configure the components

Edit the Kibana configuration
vi /etc/kibana/kibana.yml
            
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
                        

Edit the Logstash configuration
            
vi /etc/logstash/conf.d/logstash-to-es.conf

input {
  beats {
    port => 5044
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "k8s-log-%{+YYYY.MM.dd}"    # index name pattern
  }
  stdout { codec => rubydebug }
}

            
Start the services
systemctl start kibana           # start Kibana
systemctl start elasticsearch    # start Elasticsearch
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-es.conf &

192.168.1.143:5601               # browse to the Kibana web UI
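Before creating the collectors, a quick sanity check that the stack is listening on the expected default ports (commands assumed, not part of the original steps):

curl -s http://127.0.0.1:9200                 # Elasticsearch should answer with its JSON banner
ss -lntp | grep -E '9200|5601|5044'           # Elasticsearch, Kibana and the Logstash beats input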
            
5) Create the Pods that collect the component logs (run on the master node)

            

kubectl apply -f k8s-logs.yaml
                        
cat k8s-logs.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |-
    filebeat.prospectors:
    - type: log
      paths:
        - /messages
      fields:
        app: k8s
        type: module
      fields_under_root: true

    output.logstash:
      hosts: ['192.168.1.143:5044']

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /messages
      volumes:
      - name: k8s-logs
        hostPath:
          path: /var/log/messages
          type: File
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config
6) Add the index pattern in Kibana

Kibana UI -> Index Patterns -> k8s-log-* -> @timestamp -> Create index pattern -> Discover
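If the k8s-log-* pattern does not show up, check on the ELK host that the index has actually been created (assumes Elasticsearch on localhost:9200):

curl -s 'http://127.0.0.1:9200/_cat/indices?v' | grep k8s-log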

            
7) View the logs in Kibana's Discover page

                        
6. Collect nginx logs
            
1) Create the namespace
cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
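Apply it and confirm the namespace exists (the apply step is implied by the original):

kubectl apply -f namespace.yaml
kubectl get ns test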
            
2) Create the Pod's ConfigMap (holds the Filebeat configuration file)
            

kubectl apply -f filebeat-nginx-configmap.yaml
            
cat filebeat-nginx-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-nginx-config
  namespace: test
data:
  filebeat.yml: |-
    filebeat.prospectors:
    - type: log
      paths:
        - /usr/local/nginx/logs/access.log
      fields:                    # extra fields identifying the source and type, used later for routing
        app: www
        type: nginx-access
      fields_under_root: true    # place the custom fields at the top level of each event
    - type: log
      paths:
        - /usr/local/nginx/logs/error.log
      fields:
        app: www
        type: nginx-error
      fields_under_root: true

    output.logstash:             # ship the collected logs to Logstash
      hosts: ['192.168.1.143:5044']

            
3) Create a Secret that stores the Harbor user's credentials

kubectl create secret docker-registry harborsecret123 --docker-server=192.168.1.143 --docker-username='zhanghai' --docker-password='[email protected]' -n test
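An optional check that the Secret exists and has the expected type:

kubectl -n test get secret harborsecret123
kubectl -n test get secret harborsecret123 -o jsonpath='{.type}'   # should print kubernetes.io/dockerconfigjson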
            
4) Update the Logstash configuration, then restart Logstash so it loads the new file

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-es.conf &
            
cat logstash-to-es.conf
input {
  beats {
    port => 5044
  }
}

filter {
}

output {
  if [app] == "www" {
    if [type] == "nginx-access" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "nginx-access-%{+YYYY.MM.dd}"
      }
    }
    else if [type] == "nginx-error" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "nginx-error-%{+YYYY.MM.dd}"
      }
    }
    else if [type] == "tomcat-catalina" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "tomcat-catalina-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [app] == "k8s" {
    if [type] == "module" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "k8s-log-%{+YYYY.MM.dd}"
      }
    }
  }
}
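The routing above can be syntax-checked before (re)starting Logstash, using Logstash's built-in test flag (optional, not part of the original steps):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-es.conf --config.test_and_exit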

            
5) Create the nginx Pod
            
kubectl apply -f nginx-deployment.yaml
            
cat nginx-deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: php-demo
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      project: www
      app: php-demo
  template:
    metadata:
      labels:
        project: www
        app: php-demo
    spec:
      imagePullSecrets:
      - name: harborsecret123
      containers:
      - name: nginx
        image: 192.168.1.143/project/php-demo:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 256Mi
          limits:
            cpu: 1
            memory: 1Gi
        livenessProbe:
          httpGet:
            path: /status.php
            port: 80
          initialDelaySeconds: 6
          timeoutSeconds: 20
        volumeMounts:
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs
      volumes:
      - name: nginx-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-nginx-config
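After applying the manifest, each Pod should report 2/2 containers (nginx plus the Filebeat sidecar); some optional checks:

kubectl -n test get pods -l app=php-demo
kubectl -n test logs deploy/php-demo -c filebeat --tail=20   # sidecar output, should show harvesters starting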

            

6) Configure the Kibana front end

Kibana UI -> Index Patterns -> nginx-access-* -> @timestamp -> Create index pattern -> Discover

            

7. Collect Tomcat logs
                
1) Create the Pod's ConfigMap

cat filebeat-tomcat-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: test
data:
  filebeat.yml: |-
    filebeat.prospectors:
    - type: log
      paths:
        - /usr/local/tomcat/logs/catalina.*
      fields:
        app: www
        type: tomcat-catalina
      fields_under_root: true
      multiline:                 # multiline matching: a line starting with "[" begins a new log record
        pattern: '^\['
        negate: true
        match: after

    output.logstash:
      hosts: ['192.168.1.143:5044']

kubectl apply -f filebeat-tomcat-configmap.yaml
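For illustration, with the multiline settings above a Java exception such as the following (hypothetical log lines) is shipped as a single event, because the continuation lines do not start with "[" and are therefore appended to the preceding line:

[2020-03-30 10:15:02] ERROR DemoController - request failed
java.lang.NullPointerException: null
    at com.example.DemoController.handle(DemoController.java:42)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:193)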

            
2) Create the Tomcat Pod
            
kubectl apply -f tomcat-deployment.yaml
cat tomcat-deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      project: www
      app: java-demo
  template:
    metadata:
      labels:
        project: www
        app: java-demo
    spec:
      imagePullSecrets:
      - name: harborsecret123
      containers:
      - name: tomcat
        image: 192.168.1.143/project/tomcat-java-demo:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        volumeMounts:
        - name: tomcat-logs
          mountPath: /usr/local/tomcat/logs
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: tomcat-logs
          mountPath: /usr/local/tomcat/logs
      volumes:
      - name: tomcat-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
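As with the nginx Deployment, each Pod should come up with 2/2 containers; optional checks:

kubectl -n test get pods -l app=java-demo
kubectl -n test logs deploy/tomcat-java-demo -c filebeat --tail=20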

            

3) Configure the Kibana front end

Kibana UI -> Index Patterns -> tomcat-catalina-* -> @timestamp -> Create index pattern -> Discover

Source: https://blog.51cto.com/13836096/2484937
