Deploying VPA on Kubernetes v1.14.0

1. Deployment preparation

Notes: all pods run in the kube-system namespace.
This article is based on https://github.com/kubernetes/autoscaler.
The upstream manifests ran into problems in this environment, so a few modifications were made below; they do not change the overall result.
VPA is used here for learning purposes only. In production it may hit unknown issues; in particular it recreates pods, so workloads can see brief interruptions.

2. Prepare the YAML files

git clone https://github.com/kubernetes/autoscaler
cd autoscaler/vertical-pod-autoscaler/deploy/
## Remove the unused CRD manifests
rm -rf vpa-beta2-crd.yaml vpa-crd.yaml vpa-beta-crd.yaml

3. Create the certificate used by the admission controller

cd autoscaler/vertical-pod-autoscaler/deploy/
cat << EOF | tee /apps/work/k8s/cfssl/k8s/vpa_webhook.json
{
  "CN": "vpa-webhook.kube-system.svc",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "niuke",
      "OU": "niuke"
    }
  ]
}
EOF

cfssl gencert -ca=/apps/work/k8s/cfssl/pki/k8s/k8s-ca.pem -ca-key=/apps/work/k8s/cfssl/pki/k8s/k8s-ca-key.pem     -config=/apps/work/k8s/cfssl/ca-config.json     -profile=kubernetes /apps/work/k8s/cfssl/k8s/vpa_webhook.json | cfssljson -bare ./vpa_webhook
### Rename the certificate files
cp /apps/work/k8s/cfssl/pki/k8s/k8s-ca.pem ./caCert.pem
mv vpa_webhook.pem serverCert.pem
mv vpa_webhook-key.pem serverKey.pem
### Create the secret
kubectl create secret --namespace=kube-system generic vpa-tls-certs  --from-file=caCert.pem --from-file=serverKey.pem --from-file=serverCert.pem
kubectl get secret -n kube-system | grep vpa-tls-certs
kubectl get secret vpa-tls-certs -n kube-system  -o yaml    
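Before relying on the secret, it is worth confirming that the certificate's subject CN matches the in-cluster DNS name of the webhook Service (`vpa-webhook.kube-system.svc`); if it does not, the API server's TLS handshake with the admission webhook will fail. A quick sketch with openssl, run in the directory containing the renamed files:

```shell
# Subject CN must be vpa-webhook.kube-system.svc
openssl x509 -in serverCert.pem -noout -subject
# The server certificate must verify against the CA stored in the secret
openssl verify -CAfile caCert.pem serverCert.pem
```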

4. Modify the YAML files

4.1. vpa-rbac

vi vpa-rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-reader
rules:
  - apiGroups:
      - "metrics.k8s.io"
    resources:
      - pods
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:vpa-actor
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - limitranges
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
      - create
  - apiGroups:
      - "poc.autoscaling.k8s.io"
    resources:
      - verticalpodautoscalers
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups:
      - "autoscaling.k8s.io"
    resources:
      - verticalpodautoscalers
    verbs:
      - get
      - list
      - watch
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:vpa-checkpoint-actor
rules:
  - apiGroups:
      - "poc.autoscaling.k8s.io"
    resources:
      - verticalpodautoscalercheckpoints
    verbs:
      - get
      - list
      - watch
      - create
      - patch
      - delete
  - apiGroups:
      - "autoscaling.k8s.io"
    resources:
      - verticalpodautoscalercheckpoints
    verbs:
      - get
      - list
      - watch
      - create
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:evictioner
rules:
  - apiGroups:
      - "extensions"
    resources:
      - replicasets
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - pods/eviction
    verbs:
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-reader
subjects:
  - kind: ServiceAccount
    name: vpa-recommender
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:vpa-actor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:vpa-actor
subjects:
  - kind: ServiceAccount
    name: vpa-recommender
    namespace: kube-system
  - kind: ServiceAccount
    name: vpa-updater
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:vpa-checkpoint-actor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:vpa-checkpoint-actor
subjects:
  - kind: ServiceAccount
    name: vpa-recommender
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:vpa-target-reader
rules:
  - apiGroups:
      - ""
    resources:
      - replicationcontrollers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:vpa-target-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:vpa-target-reader
subjects:
  - kind: ServiceAccount
    name: vpa-recommender
    namespace: kube-system
  - kind: ServiceAccount
    name: vpa-admission-controller
    namespace: kube-system
  - kind: ServiceAccount
    name: vpa-updater
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:vpa-evictionter-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:evictioner
subjects:
  - kind: ServiceAccount
    name: vpa-updater
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vpa-admission-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:admission-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - configmaps
      - nodes
      - limitranges
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "admissionregistration.k8s.io"
    resources:
      - mutatingwebhookconfigurations
    verbs:
      - create
      - delete
      - get
      - list
  - apiGroups:
      - "poc.autoscaling.k8s.io"
    resources:
      - verticalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "autoscaling.k8s.io"
    resources:
      - verticalpodautoscalers
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:admission-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:admission-controller
subjects:
  - kind: ServiceAccount
    name: vpa-admission-controller
    namespace: kube-system
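After the RBAC manifests are applied (and the service accounts exist), `kubectl auth can-i` gives a quick spot-check that the bindings took effect. A sketch:

```shell
# The updater must be able to evict pods (system:evictioner).
kubectl auth can-i create pods/eviction \
  --as=system:serviceaccount:kube-system:vpa-updater
# The recommender must be able to read resource metrics (system:metrics-reader).
kubectl auth can-i list pods.metrics.k8s.io \
  --as=system:serviceaccount:kube-system:vpa-recommender
```

Both commands should print "yes".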

4.2. vpa-v1-crd

vi vpa-v1-crd.yaml
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: verticalpodautoscalers.autoscaling.k8s.io
spec:
  group: autoscaling.k8s.io
  scope: Namespaced
  names:
    plural: verticalpodautoscalers
    singular: verticalpodautoscaler
    kind: VerticalPodAutoscaler
    shortNames:
      - vpa
  version: v1beta1
  versions:
    - name: v1beta1
      served: true
      storage: false
    - name: v1beta2
      served: true
      storage: true
    - name: v1
      served: true
      storage: false
  validation:
    # openAPIV3Schema is the schema for validating custom objects.
    openAPIV3Schema:
      properties:
        spec:
          required: []
          properties:
            targetRef:
              type: object
            updatePolicy:
              properties:
                updateMode:
                  type: string
            resourcePolicy:
              properties:
                containerPolicies:
                  type: array
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: verticalpodautoscalercheckpoints.autoscaling.k8s.io
spec:
  group: autoscaling.k8s.io
  scope: Namespaced
  names:
    plural: verticalpodautoscalercheckpoints
    singular: verticalpodautoscalercheckpoint
    kind: VerticalPodAutoscalerCheckpoint
    shortNames:
      - vpacheckpoint
  version: v1beta1
  versions:
    - name: v1beta1
      served: true
      storage: false
    - name: v1beta2
      served: true
      storage: true
    - name: v1
      served: true
      storage: false
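Before creating any VPA objects, it helps to confirm both CRDs were accepted by the API server:

```shell
kubectl get crd verticalpodautoscalers.autoscaling.k8s.io \
               verticalpodautoscalercheckpoints.autoscaling.k8s.io
```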

4.3. admission-controller-deployment

vi admission-controller-deployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vpa-admission-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vpa-admission-controller
    spec:
      serviceAccountName: vpa-admission-controller
      containers:
        - name: admission-controller
          image: juestnow/vpa-admission-controller:0.5.0
          imagePullPolicy: Always
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: tls-certs
              mountPath: "/etc/tls-certs"
              readOnly: true
          resources:
            limits:
              cpu: 200m
              memory: 500Mi
            requests:
              cpu: 50m
              memory: 200Mi
          ports:
          - name: vpa-webhook
            containerPort: 8000
          - name: http-metrics
            containerPort: 8944
      volumes:
        - name: tls-certs
          secret:
            secretName: vpa-tls-certs
---
apiVersion: v1
kind: Service
metadata:
  name: vpa-webhook
  namespace: kube-system
  labels:
    k8s-app: vpa-admission-controller
spec:
  ports:
  - name: vpa-webhook
    port: 443
    targetPort: 8000
  - name: http-metrics
    port: 8944
    protocol: TCP
  selector:
    app: vpa-admission-controller
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vpa-admission-controller
  namespace: monitoring
spec:
  endpoints:
  - interval: 15s
    port: http-metrics
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: vpa-admission-controller
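When the admission controller starts, it self-registers a MutatingWebhookConfiguration that points the API server at the `vpa-webhook` Service, using the CA from the `vpa-tls-certs` secret. A sketch to confirm registration (the object name `vpa-webhook-config` is what this version of the admission controller registers; adjust if it differs in your build):

```shell
# The webhook object only appears after the admission controller is running.
kubectl get mutatingwebhookconfiguration vpa-webhook-config -o yaml
# The Service must have an endpoint (the admission-controller pod) behind it.
kubectl get endpoints vpa-webhook -n kube-system
```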

4.4. updater-deployment

vi updater-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vpa-updater
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vpa-updater
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vpa-updater
    spec:
      serviceAccountName: vpa-updater
      containers:
        - name: updater
          image: juestnow/vpa-updater:0.5.0
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 200m
              memory: 1000Mi
            requests:
              cpu: 50m
              memory: 500Mi
          ports:
          - name: http-metrics
            containerPort: 8943
---
apiVersion: v1
kind: Service
metadata:
  name: vpa-updater
  namespace: kube-system
  labels:
    k8s-app: vpa-updater
spec:
  clusterIP: None
  ports:
  - name: http-metrics
    port: 8943
    protocol: TCP
  selector:
    app: vpa-updater
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vpa-updater
  namespace: monitoring
spec:
  endpoints:
  - interval: 15s
    port: http-metrics
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: vpa-updater

4.5. recommender-deployment

vi recommender-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vpa-recommender
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vpa-recommender
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vpa-recommender
    spec:
      serviceAccountName: vpa-recommender
      containers:
      - name: recommender
        image: juestnow/vpa-recommender:0.5.0
        imagePullPolicy: Always
        args:
          - "--v=4"
          - "--stderrthreshold=info"
          - "--storage=prometheus"
          - "--prometheus-address=http://prometheus-k8s.monitoring.svc:9090"
          - "--prometheus-cadvisor-job-name=kubelet"
        resources:
          limits:
            cpu: 200m
            memory: 1000Mi
          requests:
            cpu: 50m
            memory: 500Mi
        ports:
        - name: http-metrics
          containerPort: 8942
---
apiVersion: v1
kind: Service
metadata:
  name: vpa-recommender
  namespace: kube-system
  labels:
    k8s-app: vpa-recommender
spec:
  clusterIP: None
  ports:
  - name: http-metrics
    port: 8942
    protocol: TCP
  selector:
    app: vpa-recommender
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vpa-recommender
  namespace: monitoring
spec:
  endpoints:
  - interval: 15s
    port: http-metrics
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: vpa-recommender

#### Note: the vpa-recommender image used here is a repackaged build; the official image had some issues in this environment.

5. Apply the YAML files to create the VPA services

kubectl apply -f .
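To wait for all three components to come up before verifying, a rollout check such as the following can be used (sketch):

```shell
for d in vpa-admission-controller vpa-updater vpa-recommender; do
  kubectl rollout status deployment/$d -n kube-system --timeout=120s
done
```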

6. Verify the VPA services were created correctly

# kubectl api-versions | grep autoscaling.k8s
autoscaling.k8s.io/v1
autoscaling.k8s.io/v1beta2
# kubectl get pods -n kube-system -o wide | grep vpa
vpa-admission-controller-79d7cdfc9c-9t7m6   1/1     Running   1          14d    10.65.2.106   nginx-1   <none>           <none>
vpa-recommender-5fd87bcbb6-wbgvj            1/1     Running   1          14d    10.65.2.107   nginx-1   <none>           <none>
vpa-updater-794499ddc8-hcnrv                1/1     Running   1          14d    10.65.2.104   nginx-1   <none>           <none>
# kubectl get service -n kube-system | grep vpa
vpa-webhook            ClusterIP   10.64.220.134   <none>        443/TCP                  14d
The metrics endpoints are reachable at:
http://10.65.2.106:8944/metrics  (vpa-admission-controller metrics)
http://10.65.2.104:8943/metrics  (vpa-updater metrics)
http://10.65.2.107:8942/metrics  (vpa-recommender metrics)
# kubectl get vpa
No resources found.

7. Create a test workload to exercise VPA

vim nginx.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-rec-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: nginx-controller
  updatePolicy:
    updateMode: "Auto"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-controller
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
kubectl apply -f nginx.yaml
Verify the deployment succeeded:
# kubectl get pod | grep nginx
nginx-controller-7f548944c-lh89n          0/1     Terminating   0          75m
nginx-controller-7f548944c-znwpt          1/1     Running       0          75m
# kubectl get service | grep nginx
nginx-controller            ClusterIP   10.64.32.252    <none>        80/TCP      76m
http://10.64.32.252/
# kubectl get vpa
NAME         AGE
my-rec-vpa   2m56s
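The recommender needs some real usage history before its Target values settle, so generating traffic against the nginx Service speeds things up. A hypothetical load generator (the pod name `load-gen` and the loop are illustrative):

```shell
# Fire requests at the Service in a loop from a throwaway busybox pod,
# so the recommender sees non-idle usage for the nginx pods.
kubectl run load-gen --image=busybox --restart=Never -- \
  /bin/sh -c 'while true; do wget -q -O /dev/null http://nginx-controller.default.svc; done'
```

Delete the pod with `kubectl delete pod load-gen` once recommendations appear.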
# kubectl describe vpa my-rec-vpa
Name:         my-rec-vpa
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"autoscaling.k8s.io/v1beta1","kind":"VerticalPodAutoscaler","metadata":{"annotations":{},"name":"my-rec-vpa","namespace":"de...
API Version:  autoscaling.k8s.io/v1
Kind:         VerticalPodAutoscaler
Metadata:
  Creation Timestamp:  2019-06-27T02:47:31Z
  Generation:          10
  Resource Version:    14368055
  Self Link:           /apis/autoscaling.k8s.io/v1/namespaces/default/verticalpodautoscalers/my-rec-vpa
  UID:                 e88c46af-9885-11e9-85e9-525400b41cf0
Spec:
  Target Ref:
    API Version:  apps/v1
    Kind:         Deployment
    Name:         nginx-controller
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2019-06-27T02:47:37Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  nginx
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     25m
        Memory:  262144k
      Uncapped Target:
        Cpu:     25m
        Memory:  262144k
      Upper Bound:
        Cpu:     1595m
        Memory:  1667500k
Events:          <none>
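For scripting, the same recommendation can be extracted in one line with jsonpath (sketch):

```shell
kubectl get vpa my-rec-vpa \
  -o jsonpath='{.status.recommendation.containerRecommendations[0].target}'
```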

# kubectl get pod -o wide | grep nginx-controller
nginx-controller-7f548944c-xm79w          1/1     Running   0          33s     10.65.2.133   nginx-1   <none>           <none>
nginx-controller-7f548944c-znwpt          1/1     Running   0          86m     10.65.5.21    node04    <none>           <none>
# kubectl describe pod nginx-controller-7f548944c-znwpt
Name:           nginx-controller-7f548944c-znwpt
Namespace:      default
Node:           node04/192.168.2.167
Start Time:     Thu, 27 Jun 2019 09:33:16 +0800
Labels:         app=nginx
                pod-template-hash=7f548944c
Annotations:    podpreset.admission.kubernetes.io/podpreset-allow-lxcfs-tz-env: 13290360
Status:         Running
IP:             10.65.5.21
Controlled By:  ReplicaSet/nginx-controller-7f548944c
Containers:
  nginx:
    Container ID:   docker://547c23db018073756b7e2266d01ad431a3e78bb05fb5edadc51202401548a79f
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:bdbf36b7f1f77ffe7bd2a32e59235dff6ecf131e3b6b5b96061c652f30685f3a
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 27 Jun 2019 09:36:43 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m   ### value from the original yaml
      memory:     50Mi   ### value from the original yaml
    Environment:  <none>
    Mounts:
      /etc/localtime from allow-tz-env (rw)
      /proc/cpuinfo from proc-cpuinfo (rw)
      /proc/diskstats from proc-diskstats (rw)
      /proc/meminfo from proc-meminfo (rw)
      /proc/stat from proc-stat (rw)
      /proc/swaps from proc-swaps (rw)
      /proc/uptime from proc-uptime (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7b8ng (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  proc-cpuinfo:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/cpuinfo
    HostPathType:
  proc-diskstats:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/diskstats
    HostPathType:
  proc-meminfo:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/meminfo
    HostPathType:
  proc-stat:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/stat
    HostPathType:
  proc-swaps:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/swaps
    HostPathType:
  proc-uptime:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/uptime
    HostPathType:
  allow-tz-env:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/zoneinfo/Asia/Shanghai
    HostPathType:
  default-token-7b8ng:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7b8ng
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
Wait a while, then check again:
# kubectl get pod -o wide | grep nginx-controller
nginx-controller-7f548944c-8mknl          1/1     Running   0          2m27s   10.65.2.146   nginx-1   <none>           <none>
nginx-controller-7f548944c-mcg49          1/1     Running   0          3m30s   10.65.5.35    node04    <none>           <none>
The pod names have changed. Describe one of the new pods:
# kubectl describe pod nginx-controller-7f548944c-8mknl
Name:           nginx-controller-7f548944c-8mknl
Namespace:      default
Node:           nginx-1/192.168.2.186
Start Time:     Thu, 27 Jun 2019 11:37:53 +0800
Labels:         app=nginx
                pod-template-hash=7f548944c
Annotations:    podpreset.admission.kubernetes.io/podpreset-allow-lxcfs-tz-env: 13290360
                vpaUpdates: Pod resources updated by my-rec-vpa: container 0: cpu request, memory request
Status:         Running
IP:             10.65.2.146
Controlled By:  ReplicaSet/nginx-controller-7f548944c
Containers:
  nginx:
    Container ID:   docker://46efdddab5036df39e2c0f8044804e022159f09b0e4ac42c8c9591922f8d5263
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:bdbf36b7f1f77ffe7bd2a32e59235dff6ecf131e3b6b5b96061c652f30685f3a
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 27 Jun 2019 11:37:58 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        25m
      memory:     262144k
    Environment:  <none>
    Mounts:
      /etc/localtime from allow-tz-env (rw)
      /proc/cpuinfo from proc-cpuinfo (rw)
      /proc/diskstats from proc-diskstats (rw)
      /proc/meminfo from proc-meminfo (rw)
      /proc/stat from proc-stat (rw)
      /proc/swaps from proc-swaps (rw)
      /proc/uptime from proc-uptime (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7b8ng (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  proc-cpuinfo:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/cpuinfo
    HostPathType:
  proc-diskstats:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/diskstats
    HostPathType:
  proc-meminfo:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/meminfo
    HostPathType:
  proc-stat:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/stat
    HostPathType:
  proc-swaps:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/swaps
    HostPathType:
  proc-uptime:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs/proc/uptime
    HostPathType:
  allow-tz-env:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/zoneinfo/Asia/Shanghai
    HostPathType:
  default-token-7b8ng:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7b8ng
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m20s  default-scheduler  Successfully assigned default/nginx-controller-7f548944c-8mknl to nginx-1
  Normal  Pulling    3m4s   kubelet, nginx-1   Pulling image "nginx:latest"
  Normal  Pulled     3m     kubelet, nginx-1   Successfully pulled image "nginx:latest"
  Normal  Created    3m     kubelet, nginx-1   Created container nginx
  Normal  Started    2m59s  kubelet, nginx-1   Started container nginx
The CPU and memory requests have been updated to the values recommended by VPA.

Original article: https://blog.51cto.com/juestnow/2414142

