Deploying heapster and InfluxDB on Kubernetes v1.14.0

1. Deployment preparation

Note: all pods in this guide run in the kube-system namespace.
GitHub project: https://github.com/kubernetes-retired/heapster.git
mkdir heapster
git clone https://github.com/kubernetes-retired/heapster.git
cd heapster/deploy/kube-config/influxdb

2. InfluxDB deployment

2.1 Create the InfluxDB PVC (the upstream project does not include this yaml)

vi influxdb-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # PVC name: influxdb-pvc
  name: influxdb-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
    # storage class name; this uses the nfs-storage dynamic provisioner (NFS)
  storageClassName: nfs-storage
  resources:
    requests:
      # requested disk size
      storage: 50Gi
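The `storage: 50Gi` request uses Kubernetes' binary quantity suffixes (Gi = 1024^3 bytes). As a minimal sketch of the conversion, not the real API machinery (which also accepts decimal suffixes such as G and M and plain integers):

```python
# Simplified sketch: convert a Kubernetes binary quantity string to bytes.
# Only the binary suffixes used in manifests like the PVC above are handled.
_BINARY = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def quantity_to_bytes(q: str) -> int:
    for suffix, factor in _BINARY.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # no suffix: already bytes
```

So `quantity_to_bytes("50Gi")` yields 53687091200 bytes, the capacity the NFS provisioner must be able to satisfy.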

2.2 Create the InfluxDB Deployment

Change the image to juestnow/heapster-influxdb-amd64:v1.5.2 and add a nodeSelector so the pod runs on a labeled node:
      nodeSelector:
        dashboard: kubernetes-dashboard # run on the labeled node
vi influxdb.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      nodeSelector:
        dashboard: kubernetes-dashboard
      containers:
      - name: influxdb
        image: juestnow/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        persistentVolumeClaim:
          claimName: influxdb-pvc
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
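Heapster will write metrics into this InfluxDB instance over HTTP on port 8086 using InfluxDB's line protocol. A simplified sketch of what one data point looks like on the wire (the real protocol also escapes spaces and commas inside names and values, which this sketch omits):

```python
# Simplified sketch of an InfluxDB line-protocol point:
#   measurement,tag1=v1 field1=v1 timestamp_ns
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"
```

For example, a CPU sample for a pod would serialize as one such line per measurement, batched into an HTTP POST to `/write` on port 8086.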

2.3 Apply the yaml files

kubectl apply -f influxdb-pvc.yaml
kubectl apply -f influxdb.yaml
# kubectl get pod -n kube-system -o wide | grep influxdb
monitoring-influxdb-75678b664f-z9zck        1/1     Running   0          21d   10.65.3.155   nginx-2   <none>           <none>
http://10.65.3.155:8086/
A 404 response here shows the pod is up and serving HTTP.
# kubectl get service -n kube-system | grep influxdb
monitoring-influxdb    ClusterIP      10.64.39.166    <none>        8086/TCP                 47d
http://10.64.39.166:8086/
The same 404 response shows the service is reachable.
This step only verifies that the services are running.
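The reasoning behind the 404 check: InfluxDB v1.x has no handler for `/`, so any HTTP status at all proves a live listener, whereas a refused connection means the service is down. A self-contained demonstration of that distinction, using a throwaway local server in place of InfluxDB:

```python
import http.server
import threading
import urllib.request
import urllib.error

class NotFoundHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for InfluxDB: no handler for "/", so it answers 404."""
    def do_GET(self):
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), NotFoundHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def probe(url):
    """Return the HTTP status code, or None if nothing is listening."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code   # an error status still proves a live listener
    except OSError:
        return None     # connection refused: nothing is listening

status = probe(f"http://127.0.0.1:{port}/")  # 404, i.e. "service is up"
server.shutdown()
server.server_close()
```

The same `probe` logic applied to the pod IP or service IP above is exactly what the manual browser check is doing.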

3. heapster deployment

3.1 Create the ClusterRole

vi heapster-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:heapster
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - nodes
  - pods
  - nodes/stats
  verbs:
  - create
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
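How the API server evaluates these rules can be sketched as a simple membership check: a request is allowed if any one rule matches its API group, resource, and verb. A simplified illustration (real RBAC also supports wildcards, resourceNames, and nonResourceURLs, which this leaves out):

```python
# Simplified sketch of RBAC evaluation for the system:heapster ClusterRole above.
RULES = [
    {"apiGroups": [""],
     "resources": ["events", "namespaces", "nodes", "pods", "nodes/stats"],
     "verbs": ["create", "get", "list", "watch"]},
    {"apiGroups": ["extensions"],
     "resources": ["deployments"],
     "verbs": ["get", "list", "watch"]},
]

def allowed(api_group: str, resource: str, verb: str) -> bool:
    # A request is granted if any single rule matches all three fields.
    return any(
        api_group in rule["apiGroups"]
        and resource in rule["resources"]
        and verb in rule["verbs"]
        for rule in RULES
    )
```

Under this role, heapster may `get` `nodes/stats` (how it scrapes kubelets) but may not, say, `delete` pods.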

3.2 Create the ClusterRoleBinding

vi heapster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

3.3 heapster combined yaml (ServiceAccount, Deployment, Service)

Change - --source=kubernetes:https://kubernetes.default to
--source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
Change the image to juestnow/heapster-amd64:v1.5.4 and add the nodeSelector:
      nodeSelector:
        dashboard: kubernetes-dashboard # run on the labeled node
vi heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      nodeSelector:
        dashboard: kubernetes-dashboard
      containers:
      - name: heapster
        image: juestnow/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
        - --sink=influxdb:http://monitoring-influxdb:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
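The `--source` flag packs its kubelet options into a URL query string after the `kubernetes:` prefix. A short sketch of how those options can be pulled apart, useful when checking why heapster talks to the wrong kubelet port:

```python
from urllib.parse import urlparse, parse_qs

# The --source value from the Deployment above.
source = "kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true"

# Split off the "kubernetes:" source-type prefix, then parse the rest as a URL.
source_type, _, rest = source.partition(":")
parsed = urlparse(rest)
opts = {k: v[0] for k, v in parse_qs(parsed.query).items()}
```

With these options, heapster scrapes each kubelet's secure port 10250 over HTTPS but skips certificate verification (`insecure=true`), which is what makes the modified flag work on clusters with self-signed kubelet certs.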

3.4 Apply the yaml files

kubectl apply -f .

3.5 Verify heapster status

# kubectl get pod -n kube-system | grep heapster
heapster-6f76dc9d7-vzfz8                    1/1     Running   0          21d
# kubectl get service -n kube-system | grep heapster
heapster               NodePort       10.64.248.238   <none>        80:45389/TCP             47d
http://10.64.248.238/
A 404 response here is also normal.
After waiting a while, open kubernetes-dashboard and check the CPU and memory usage charts for the containers.

If the charts appear, heapster was installed successfully.

Next article: Kubernetes production installation and deployment on Kubernetes v1.14.0: metrics-server

Original article: https://blog.51cto.com/juestnow/2408819

