Setting Up a Heapster + InfluxDB + Grafana Cluster Performance Monitoring Platform

This setup depends on the Kubernetes DNS service (kube-dns).
Reference: https://note.youdao.com/web/#/file/WEB42cf75c02ae113136ff664f3f137cb67/note/WEB0eec19f3667471969b3354b7128fda9c/

Graphical display of cluster metrics requires another Kubernetes add-on: Heapster.
Heapster natively supports Kubernetes (v1.0.6 and later) and CoreOS, and supports multiple storage backends such as InfluxDB, Elasticsearch, and Kafka.

Image addresses:

index.tenxcloud.com/jimmy/heapster-amd64:v1.3.0-beta.1
index.tenxcloud.com/jimmy/heapster-influxdb-amd64:v1.1.1
index.tenxcloud.com/jimmy/heapster-grafana-amd64:v4.0.2
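
If the nodes cannot pull these images automatically, they can be pre-pulled on each node first (an optional step):

# pre-pull the three monitoring images on every node
docker pull index.tenxcloud.com/jimmy/heapster-amd64:v1.3.0-beta.1
docker pull index.tenxcloud.com/jimmy/heapster-influxdb-amd64:v1.1.1
docker pull index.tenxcloud.com/jimmy/heapster-grafana-amd64:v4.0.2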

Installing Heapster

heapster-deployment.yaml

# cat heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: index.tenxcloud.com/jimmy/heapster-amd64:v1.3.0-beta.1
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://192.168.132.148:8080?inClusterConfig=false
        - --sink=influxdb:http://monitoring-influxdb:8086

Note: change --source to your own master apiserver address, and change the image address if needed (the images used here are listed above).
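
To confirm the apiserver address to put in --source, you can check the cluster info on the master; the output includes the master URL (http://192.168.132.148:8080 in this environment):

# kubectl cluster-info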

heapster-service.yaml

# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

Create the Deployment and Service

# kubectl create -f heapster-deployment.yaml
# kubectl create -f heapster-service.yaml
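
After creating them, check that the Heapster Pod starts and that the Service exists (a quick verification; the Pod name suffix will differ in your cluster):

# kubectl get pods -n kube-system -l k8s-app=heapster
# kubectl get svc heapster -n kube-system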

Configuring InfluxDB

InfluxDB officially recommends querying the database through the command line or the HTTP API. Starting with v1.1.0 the admin UI is disabled by default, and the plugin will be removed in a later release.
To enable the admin UI in this image, proceed as follows: first export the InfluxDB configuration file from the image, enable the plugin in it, then write the modified file into a ConfigMap, and finally mount the ConfigMap into the container to override the original configuration.

$ # On the host where the image is present, export the influxdb config file from the image
$ docker run --rm --entrypoint 'cat' -ti index.tenxcloud.com/jimmy/heapster-influxdb-amd64:v1.1.1 /etc/config.toml >config.toml.orig
$ cp config.toml.orig config.toml
$ # Edit the file: enable the admin interface
$ vim config.toml
Change line 35 of config.toml:
< enabled = false
---
> enabled = true
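
After the edit, the [admin] section of config.toml should look roughly like this (a sketch based on the stock InfluxDB 1.1 configuration; your file may differ slightly):

[admin]
  enabled = true
  bind-address = ":8083"
  https-enabled = false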

$ # Copy the modified config.toml to the Master, then write it into a ConfigMap object

$ kubectl create configmap influxdb-config --from-file=config.toml -n kube-system
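
You can confirm the ConfigMap content before mounting it (an optional check):

$ kubectl get configmap influxdb-config -n kube-system -o yaml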

$ # Mount the config file from the ConfigMap into the Pod to override the original configuration

influxdb-deployment.yaml

# cat influxdb-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: index.tenxcloud.com/jimmy/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
        - mountPath: /etc/
          name: influxdb-config
      volumes:
      - name: influxdb-config
        configMap:
          name: influxdb-config
      - name: influxdb-storage
        emptyDir: {}
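
Note that mounting the ConfigMap at /etc/ replaces the container's entire /etc directory with just config.toml; this is sufficient for this particular image, but if you would rather overwrite only the single file, a subPath mount is an alternative (a hedged sketch, not what this article uses):

        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
        - mountPath: /etc/config.toml
          name: influxdb-config
          subPath: config.toml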

influxdb-service.yaml

# cat influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8086
    targetPort: 8086
    name: http
  - port: 8083
    targetPort: 8083
    name: api
  selector:
    k8s-app: influxdb

Create the Deployment and Service

# kubectl create -f influxdb-deployment.yaml
# kubectl create -f influxdb-service.yaml
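
Because the InfluxDB Service is of type NodePort, check which node ports were assigned (the port numbers are allocated randomly; in this article's environment 8086 maps to 31878):

# kubectl get svc monitoring-influxdb -n kube-system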

Installing Grafana

grafana-deployment.yaml

# cat grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: index.tenxcloud.com/jimmy/heapster-grafana-amd64:v4.0.2
        ports:
          - containerPort: 3000
            protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          #value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}

grafana-service.yaml

# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

Create the Deployment and Service

# kubectl create -f grafana-deployment.yaml
# kubectl create -f grafana-service.yaml
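
Once the Grafana Pod is running, it can be reached through the apiserver proxy path configured in GF_SERVER_ROOT_URL (using the same master address as the rest of this article):

http://192.168.132.148:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/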

List all Pods

# kubectl get pod -n kube-system
NAME                                           READY     STATUS    RESTARTS   AGE
heapster-3275159538-fdvhf                      1/1       Running   0          5s
kubernetes-dashboard-latest-1381663337-0wwml   1/1       Running   1          19h
monitoring-grafana-2812960871-gbsdf            1/1       Running   1          16h
monitoring-influxdb-1975863524-nmbpk           1/1       Running   1          16h

View the Heapster logs

# kubectl logs -f pods/heapster-3275159538-fdvhf -n kube-system

If the cluster DNS is not configured, Heapster cannot resolve the monitoring-influxdb service name and will report errors when writing to InfluxDB.
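
A quick way to confirm that the DNS add-on is running (a hedged check; labels and names may differ depending on how DNS was deployed):

# kubectl get svc,pods -n kube-system -l k8s-app=kube-dns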

Access and verification:

http://192.168.132.148:8080/ui

Verify InfluxDB

Port 8086 is mapped to NodePort 31878 in this example.
Open the InfluxDB admin UI through the apiserver proxy:
http://192.168.132.148:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:8083/

In the admin UI connection settings, the IP is the host on which InfluxDB is deployed, with the NodePort above as the port; then just press Enter to connect.
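
You can also verify the HTTP API from the command line: InfluxDB 1.x exposes a /ping endpoint that returns HTTP 204 when the service is healthy (replace <node-ip> with one of your node addresses; 31878 is the NodePort from above):

$ curl -i http://<node-ip>:31878/ping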

Original article: https://www.cnblogs.com/FRESHMANS/p/8445244.html
