Kubernetes in Practice: Deploying Prometheus

What is Prometheus?

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.

Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community.

Components

  • MetricServer: an aggregator of resource-usage data for the Kubernetes cluster; it collects metrics for in-cluster consumers such as kubectl, the HPA, and the scheduler.
  • PrometheusOperator: manages Prometheus deployments on Kubernetes, turning custom resources (Prometheus, ServiceMonitor, Alertmanager, PrometheusRule) into running monitoring configuration.
  • NodeExporter: exposes key hardware and OS metrics for each node.
  • KubeStateMetrics: exposes state data about Kubernetes resource objects (Deployments, Pods, nodes, and so on), on which alerting rules can be built.
  • Prometheus: pulls metrics over HTTP from the apiserver, scheduler, controller-manager, kubelet, and other components (see the ServiceMonitor sketch after this list).
  • Grafana: a platform for visualizing and dashboarding the collected metrics.
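
To make the pull model concrete, below is a minimal sketch of a ServiceMonitor, the custom resource the Prometheus Operator reads to generate scrape configuration. The application name, labels, namespace, and port are hypothetical placeholders, not part of the stack installed later in this article:

cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                   # hypothetical example, not shipped with kube-prometheus
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-app                # scrape Services carrying this label
  namespaceSelector:
    matchNames:
      - default                  # namespace where the target Service lives
  endpoints:
    - port: metrics              # named Service port exposing /metrics
      interval: 30s              # Prometheus pulls over HTTP every 30s
EOF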

System Architecture


When is it a good fit?

Prometheus works well for recording any purely numeric time series.

It fits both machine-centric monitoring and the monitoring of highly dynamic, service-oriented architectures. In a microservices world, its support for multi-dimensional data collection and querying is a particular strength.

Prometheus is designed for reliability: it is the system you turn to during an outage so that you can quickly diagnose problems.

Each Prometheus server is standalone, with no dependency on network storage or other remote services. You can rely on it when other parts of your infrastructure are broken, and you do not need to set up extensive infrastructure to use it.

When is it not a good fit?

Prometheus values reliability. Even under failure conditions, you can always view the statistics that are available about your system.

If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice, because the collected data may not be detailed and complete enough.

In that case you are better off using some other system to collect and analyze the billing data, and using Prometheus for the rest of your monitoring.


Deployment and Installation

  GitHub: https://github.com/coreos/kube-prometheus

  1. Download the official source archive (the default image sources point at quay.io)

  wget -O kube-prometheus.tgz https://github.com/coreos/kube-prometheus/archive/v0.3.0.tar.gz

  •  If your access to overseas networks is poor, you can also download my Baidu Cloud copy (image sources already switched to aliyuncs): kube-prometheus.tgz (extraction code: eb3m)
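
  •  Alternatively, if git is available on the host, cloning the tagged release works just as well (a sketch; note it clones into ./kube-prometheus rather than ./kube-prometheus-0.3.0, so adjust the cd path in step 2 accordingly):

  git clone --branch v0.3.0 --depth 1 https://github.com/coreos/kube-prometheus.git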

  2. Extract and install

  tar -zxvf kube-prometheus.tgz && cd kube-prometheus-0.3.0/manifests

  kubectl create -f setup

  kubectl create -f .
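
  Optionally, between the two commands above, you can wait for the operator's custom resource definitions to be registered before applying the remaining manifests; this avoids transient "no matches for kind" errors on slower clusters. A minimal sketch (the CRD names match the setup/ output shown below):

  # run after "kubectl create -f setup" and before "kubectl create -f ."
  kubectl wait --for=condition=Established \
    crd/prometheuses.monitoring.coreos.com \
    crd/servicemonitors.monitoring.coreos.com \
    crd/podmonitors.monitoring.coreos.com \
    crd/alertmanagers.monitoring.coreos.com \
    crd/prometheusrules.monitoring.coreos.com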

[root@k8s-32 manifests]# ls -R
.:
alertmanager-alertmanager.yaml              kube-state-metrics-service.yaml                             prometheus-clusterRole.yaml
alertmanager-secret.yaml                    node-exporter-clusterRoleBinding.yaml                       prometheus-operator-serviceMonitor.yaml
alertmanager-serviceAccount.yaml            node-exporter-clusterRole.yaml                              prometheus-prometheus.yaml
alertmanager-serviceMonitor.yaml            node-exporter-daemonset.yaml                                prometheus-roleBindingConfig.yaml
alertmanager-service.yaml                   node-exporter-serviceAccount.yaml                           prometheus-roleBindingSpecificNamespaces.yaml
grafana-dashboardDatasources.yaml           node-exporter-serviceMonitor.yaml                           prometheus-roleConfig.yaml
grafana-dashboardDefinitions.yaml           node-exporter-service.yaml                                  prometheus-roleSpecificNamespaces.yaml
grafana-dashboardSources.yaml               prometheus-adapter-apiService.yaml                          prometheus-rules.yaml
grafana-deployment.yaml                     prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-serviceAccount.yaml
grafana-serviceAccount.yaml                 prometheus-adapter-clusterRoleBindingDelegator.yaml         prometheus-serviceMonitorApiserver.yaml
grafana-serviceMonitor.yaml                 prometheus-adapter-clusterRoleBinding.yaml                  prometheus-serviceMonitorCoreDNS.yaml
grafana-service.yaml                        prometheus-adapter-clusterRoleServerResources.yaml          prometheus-serviceMonitorKubeControllerManager.yaml
kube-state-metrics-clusterRoleBinding.yaml  prometheus-adapter-clusterRole.yaml                         prometheus-serviceMonitorKubelet.yaml
kube-state-metrics-clusterRole.yaml         prometheus-adapter-configMap.yaml                           prometheus-serviceMonitorKubeScheduler.yaml
kube-state-metrics-deployment.yaml          prometheus-adapter-deployment.yaml                          prometheus-serviceMonitor.yaml
kube-state-metrics-roleBinding.yaml         prometheus-adapter-roleBindingAuthReader.yaml               prometheus-service.yaml
kube-state-metrics-role.yaml                prometheus-adapter-serviceAccount.yaml                      setup
kube-state-metrics-serviceAccount.yaml      prometheus-adapter-service.yaml
kube-state-metrics-serviceMonitor.yaml      prometheus-clusterRoleBinding.yaml

./setup:
0namespace-namespace.yaml                                       prometheus-operator-0prometheusruleCustomResourceDefinition.yaml  prometheus-operator-deployment.yaml
prometheus-operator-0alertmanagerCustomResourceDefinition.yaml  prometheus-operator-0servicemonitorCustomResourceDefinition.yaml  prometheus-operator-serviceAccount.yaml
prometheus-operator-0podmonitorCustomResourceDefinition.yaml    prometheus-operator-clusterRoleBinding.yaml                       prometheus-operator-service.yaml
prometheus-operator-0prometheusCustomResourceDefinition.yaml    prometheus-operator-clusterRole.yaml


[root@k8s-32 manifests]# kubectl create -f setup/.
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
[root@k8s-32 manifests]# kubectl create -f .
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-pods created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created


  3. Wait for the deployment to complete

 kubectl get pod -n monitoring

NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          65m
alertmanager-main-1                    2/2     Running   0          65m
alertmanager-main-2                    2/2     Running   0          65m
grafana-7c54b4677d-btwfb               1/1     Running   0          65m
kube-state-metrics-58b656b699-p8m29    3/3     Running   0          65m
node-exporter-rc5mx                    2/2     Running   0          65m
node-exporter-vdzkb                    2/2     Running   0          65m
node-exporter-xzdw2                    2/2     Running   0          65m
prometheus-adapter-7d6f96974c-76m4z    1/1     Running   0          65m
prometheus-k8s-0                       3/3     Running   1          65m
prometheus-k8s-1                       3/3     Running   1          65m
prometheus-operator-5bd99d6457-89n7h   1/1     Running   0          66m
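
  Once everything is Running, you can optionally confirm that prometheus-adapter is serving the resource metrics API it registered above (v1beta1.metrics.k8s.io). Assuming the stack is healthy, both commands should return CPU and memory figures after a minute or two:

  # the adapter backs the Kubernetes metrics API, so kubectl top should return data
  kubectl top nodes
  kubectl top pods -n monitoring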

  4. Change the Service type (ClusterIP => NodePort); in each editor session set spec.type to NodePort (a non-interactive patch sketch follows this list).

    1) Prometheus

    kubectl edit svc/prometheus-k8s -n monitoring

    2) Alertmanager

    kubectl edit svc/alertmanager-main -n monitoring

    3) Grafana

    kubectl edit svc/grafana -n monitoring
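
    A non-interactive alternative to kubectl edit, sketched with the service names created above:

    kubectl -n monitoring patch svc prometheus-k8s    --patch '{"spec":{"type":"NodePort"}}'
    kubectl -n monitoring patch svc alertmanager-main --patch '{"spec":{"type":"NodePort"}}'
    kubectl -n monitoring patch svc grafana           --patch '{"spec":{"type":"NodePort"}}'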

  5. Access MasterIP:Port in a browser; Grafana's default username and password are both admin (see the sketch below for finding the assigned ports).
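
    To see which NodePorts were assigned, list the services. If you prefer not to change the Service type at all, kubectl port-forward is an alternative; the target ports below are the defaults used by kube-prometheus:

    kubectl get svc -n monitoring
    # or, without NodePort, forward the web ports to localhost:
    kubectl -n monitoring port-forward svc/prometheus-k8s 9090
    kubectl -n monitoring port-forward svc/grafana 3000
    kubectl -n monitoring port-forward svc/alertmanager-main 9093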

    

    >>> Prometheus is a powerful tool that I have not yet fully mastered; for concrete applications you will need to dig deeper on your own.

Author: Leozhanggg

Source: https://www.cnblogs.com/leozhanggg/p/12661566.html

The copyright of this article belongs jointly to the author and cnblogs (博客园). Reposting is welcome, but if done without the author's consent this statement must be retained and a link to the original must be given in a prominent position on the article page; otherwise the author reserves the right to pursue legal liability.
