Kubernetes in Practice (16): Smoothly Upgrading a k8s High-Availability Cluster from v1.11.x to v1.12.x

1. Basic Concepts

  After the upgrade, all control-plane containers will be restarted, because their static Pod hashes change.

  You cannot skip minor versions; upgrade one minor version at a time (here v1.11.x to v1.12.x).
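  The restart happens because kubeadm regenerates the static Pod manifests and the kubelet recreates any mirror Pod whose config hash has changed. A minimal sketch for observing this (assumes a control-plane Pod named kube-apiserver-k8s-master01, as in the example cluster below; adjust the name to your hostname):

# print the static-pod config hash recorded on the mirror Pod; compare the value before and after the upgrade
kubectl -n kube-system get pod kube-apiserver-k8s-master01 \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.hash}{"\n"}'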

2. Upgrading the Master Nodes

  Current versions:

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master02 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master03 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

  List all available kubeadm versions:

[root@k8s-master01 ~]# yum list kubeadm --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
kubeadm.x86_64                       1.9.9-0                          kubernetes
kubeadm.x86_64                       1.9.8-0                          kubernetes
kubeadm.x86_64                       1.9.7-0                          kubernetes
kubeadm.x86_64                       1.9.6-0                          kubernetes
kubeadm.x86_64                       1.9.5-0                          kubernetes
kubeadm.x86_64                       1.9.4-0                          kubernetes
kubeadm.x86_64                       1.9.3-0                          kubernetes
kubeadm.x86_64                       1.9.2-0                          kubernetes
kubeadm.x86_64                       1.9.11-0                         kubernetes
kubeadm.x86_64                       1.9.1-0                          kubernetes
kubeadm.x86_64                       1.9.10-0                         kubernetes
kubeadm.x86_64                       1.9.0-0                          kubernetes
kubeadm.x86_64                       1.8.9-0                          kubernetes
kubeadm.x86_64                       1.8.8-0                          kubernetes
kubeadm.x86_64                       1.8.7-0                          kubernetes
kubeadm.x86_64                       1.8.6-0                          kubernetes
kubeadm.x86_64                       1.8.5-0                          kubernetes
kubeadm.x86_64                       1.8.4-0                          kubernetes
kubeadm.x86_64                       1.8.3-0                          kubernetes
kubeadm.x86_64                       1.8.2-0                          kubernetes
kubeadm.x86_64                       1.8.15-0                         kubernetes
kubeadm.x86_64                       1.8.14-0                         kubernetes
kubeadm.x86_64                       1.8.13-0                         kubernetes
kubeadm.x86_64                       1.8.12-0                         kubernetes
kubeadm.x86_64                       1.8.11-0                         kubernetes
kubeadm.x86_64                       1.8.1-0                          kubernetes
kubeadm.x86_64                       1.8.10-0                         kubernetes
kubeadm.x86_64                       1.8.0-1                          kubernetes
kubeadm.x86_64                       1.8.0-0                          kubernetes
kubeadm.x86_64                       1.7.9-0                          kubernetes
kubeadm.x86_64                       1.7.8-1                          kubernetes
kubeadm.x86_64                       1.7.7-1                          kubernetes
kubeadm.x86_64                       1.7.6-1                          kubernetes
kubeadm.x86_64                       1.7.5-0                          kubernetes
kubeadm.x86_64                       1.7.4-0                          kubernetes
kubeadm.x86_64                       1.7.3-1                          kubernetes
kubeadm.x86_64                       1.7.2-0                          kubernetes
kubeadm.x86_64                       1.7.16-0                         kubernetes
kubeadm.x86_64                       1.7.15-0                         kubernetes
kubeadm.x86_64                       1.7.14-0                         kubernetes
kubeadm.x86_64                       1.7.11-0                         kubernetes
kubeadm.x86_64                       1.7.1-0                          kubernetes
kubeadm.x86_64                       1.7.10-0                         kubernetes
kubeadm.x86_64                       1.7.0-0                          kubernetes
kubeadm.x86_64                       1.6.9-0                          kubernetes
kubeadm.x86_64                       1.6.8-0                          kubernetes
kubeadm.x86_64                       1.6.7-0                          kubernetes
kubeadm.x86_64                       1.6.6-0                          kubernetes
kubeadm.x86_64                       1.6.5-0                          kubernetes
kubeadm.x86_64                       1.6.4-0                          kubernetes
kubeadm.x86_64                       1.6.3-0                          kubernetes
kubeadm.x86_64                       1.6.2-0                          kubernetes
kubeadm.x86_64                       1.6.13-0                         kubernetes
kubeadm.x86_64                       1.6.12-0                         kubernetes
kubeadm.x86_64                       1.6.11-0                         kubernetes
kubeadm.x86_64                       1.6.1-0                          kubernetes
kubeadm.x86_64                       1.6.10-0                         kubernetes
kubeadm.x86_64                       1.6.0-0                          kubernetes
kubeadm.x86_64                       1.13.0-0                         kubernetes
kubeadm.x86_64                       1.12.3-0                         kubernetes
kubeadm.x86_64                       1.12.2-0                         kubernetes
kubeadm.x86_64                       1.12.1-0                         kubernetes
kubeadm.x86_64                       1.12.0-0                         kubernetes
kubeadm.x86_64                       1.11.5-0                         kubernetes
kubeadm.x86_64                       1.11.4-0                         kubernetes
kubeadm.x86_64                       1.11.3-0                         kubernetes
kubeadm.x86_64                       1.11.2-0                         kubernetes
kubeadm.x86_64                       1.11.1-0                         kubernetes
kubeadm.x86_64                       1.11.0-0                         kubernetes
kubeadm.x86_64                       1.10.9-0                         kubernetes
kubeadm.x86_64                       1.10.8-0                         kubernetes
kubeadm.x86_64                       1.10.7-0                         kubernetes
kubeadm.x86_64                       1.10.6-0                         kubernetes
kubeadm.x86_64                       1.10.5-0                         kubernetes
kubeadm.x86_64                       1.10.4-0                         kubernetes
kubeadm.x86_64                       1.10.3-0                         kubernetes
kubeadm.x86_64                       1.10.2-0                         kubernetes
kubeadm.x86_64                       1.10.11-0                        kubernetes
kubeadm.x86_64                       1.10.1-0                         kubernetes
kubeadm.x86_64                       1.10.10-0                        kubernetes
kubeadm.x86_64                       1.10.0-0                         kubernetes
 * extras: mirrors.aliyun.com
 * base: mirrors.aliyun.com
Available Packages

  Upgrade kubeadm on all Master nodes:

yum install  kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
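  A minimal sketch for running the same install on all three masters in one pass, assuming password-less SSH as root to k8s-master01..k8s-master03:

# install the pinned kubeadm build on every master and print the resulting version
for h in k8s-master01 k8s-master02 k8s-master03; do
  ssh root@${h} "yum install -y kubeadm-1.12.3-0.x86_64 --disableexcludes=kubernetes && kubeadm version -o short"
done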

  Check the version:

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master02 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master03 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

  On all Master nodes, update kubernetesVersion in kubeadm-config.yaml (if your cluster was not deployed by following my earlier posts, pull the corresponding images yourself):

[root@k8s-master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
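  A minimal sketch for bumping the version field in place (assumes the file is /root/kubeadm-config.yaml, the same path used by the pull command below):

# rewrite only the kubernetesVersion line, then verify the change
sed -i 's/^kubernetesVersion:.*/kubernetesVersion: v1.12.3/' /root/kubeadm-config.yaml
grep kubernetesVersion /root/kubeadm-config.yaml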

  Pull the images in advance:

[root@k8s-master01 ~]# kubeadm config images pull --config /root/kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2

  Back up everything under /etc/kubernetes (do this on your own).
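  A minimal backup sketch, assuming etcdctl is available on the host and the certificates sit at the kubeadm default paths:

# copy the kubeadm PKI, kubeconfigs and static Pod manifests
cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)
# additionally snapshot etcd as a safety net
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /root/etcd-snapshot-$(date +%F).db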

  Perform the upgrade on Master01 first; the following operations are executed on Master01.

  Edit configmap/kubeadm-config on Master01:

[root@k8s-master01 ~]# kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

  The main changes are:

# Change the following parameters to this node's IP
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
# Change the following parameter to list all etcd cluster members
etcd.local.extraArgs.initial-cluster
# Add this parameter to extraArgs
initial-cluster-state: existing
# Change the following parameters to this node's IP and hostname
peerCertSANs:
- k8s-master01
- 192.168.20.20
serverCertSANs:
- k8s-master01
- 192.168.20.20

  The result should look roughly like this:

apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: 192.168.20.20
      bindPort: 6443
      controlPlaneEndpoint: 192.168.20.10:16443
    apiServerCertSANs:
    - k8s-master01
    - k8s-master02
    - k8s-master03
    - k8s-master-lb
    - 192.168.20.20
    - 192.168.20.21
    - 192.168.20.22
    - 192.168.20.10
    apiServerExtraArgs:
      authorization-mode: Node,RBAC
    apiVersion: kubeadm.k8s.io/v1alpha2
    auditPolicy:
      logDir: /var/log/kubernetes/audit
      logMaxAge: 2
      path: ""
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManagerExtraArgs:
      node-monitor-grace-period: 10s
      pod-eviction-timeout: 10s
    etcd:
      local:
        dataDir: /var/lib/etcd
        extraArgs:
          advertise-client-urls: https://192.168.20.20:2379
          initial-advertise-peer-urls: https://192.168.20.20:2380
          initial-cluster: k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380,k8s-master03=https://192.168.20.22:2380
          listen-client-urls: https://127.0.0.1:2379,https://192.168.20.20:2379
          listen-peer-urls: https://192.168.20.20:2380
          initial-cluster-state: existing
        image: ""
        peerCertSANs:
        - k8s-master01
        - 192.168.20.20
        serverCertSANs:
        - k8s-master01
        - 192.168.20.20
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: MasterConfiguration
    kubeProxy:
      config:
        bindAddress: 0.0.0.0
        clientConnection:
          acceptContentTypes: ""
          burst: 10
          contentType: application/vnd.kubernetes.protobuf
          kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
          qps: 5
        clusterCIDR: 172.168.0.0/16
        configSyncPeriod: 15m0s
        conntrack:
          max: null
          maxPerCore: 32768
          min: 131072
          tcpCloseWaitTimeout: 1h0m0s
          tcpEstablishedTimeout: 24h0m0s
        enableProfiling: false
        healthzBindAddress: 0.0.0.0:10256
        hostnameOverride: ""
        iptables:
          masqueradeAll: false
          masqueradeBit: 14
          minSyncPeriod: 0s
          syncPeriod: 30s
        ipvs:
          excludeCIDRs: null
          minSyncPeriod: 0s
          scheduler: ""
          syncPeriod: 30s
        metricsBindAddress: 127.0.0.1:10249
        mode: ""
        nodePortAddresses: null
        oomScoreAdj: -999
        portRange: ""
        resourceContainer: /kube-proxy
        udpIdleTimeout: 250ms
    kubeletConfiguration:
      baseConfig:
        address: 0.0.0.0
        authentication:
          anonymous:
            enabled: false
          webhook:
            cacheTTL: 2m0s
            enabled: true
          x509:
            clientCAFile: /etc/kubernetes/pki/ca.crt
        authorization:
          mode: Webhook
          webhook:
            cacheAuthorizedTTL: 5m0s
            cacheUnauthorizedTTL: 30s
        cgroupDriver: cgroupfs
        cgroupsPerQOS: true
        clusterDNS:
        - 10.96.0.10
        clusterDomain: cluster.local
        containerLogMaxFiles: 5
        containerLogMaxSize: 10Mi
        contentType: application/vnd.kubernetes.protobuf
        cpuCFSQuota: true
        cpuManagerPolicy: none
        cpuManagerReconcilePeriod: 10s
        enableControllerAttachDetach: true
        enableDebuggingHandlers: true
        enforceNodeAllocatable:
        - pods
        eventBurst: 10
        eventRecordQPS: 5
        evictionHard:
          imagefs.available: 15%
          memory.available: 100Mi
          nodefs.available: 10%
          nodefs.inodesFree: 5%
        evictionPressureTransitionPeriod: 5m0s
        failSwapOn: true
        fileCheckFrequency: 20s
        hairpinMode: promiscuous-bridge
        healthzBindAddress: 127.0.0.1
        healthzPort: 10248
        httpCheckFrequency: 20s
        imageGCHighThresholdPercent: 85
        imageGCLowThresholdPercent: 80
        imageMinimumGCAge: 2m0s
        iptablesDropBit: 15
        iptablesMasqueradeBit: 14
        kubeAPIBurst: 10
        kubeAPIQPS: 5
        makeIPTablesUtilChains: true
        maxOpenFiles: 1000000
        maxPods: 110
        nodeStatusUpdateFrequency: 10s
        oomScoreAdj: -999
        podPidsLimit: -1
        port: 10250
        registryBurst: 10
        registryPullQPS: 5
        resolvConf: /etc/resolv.conf
        rotateCertificates: true
        runtimeRequestTimeout: 2m0s
        serializeImagePulls: true
        staticPodPath: /etc/kubernetes/manifests
        streamingConnectionIdleTimeout: 4h0m0s
        syncFrequency: 1m0s
        volumeStatsAggPeriod: 1m0s
    kubernetesVersion: v1.11.1
    networking:
      dnsDomain: cluster.local
      podSubnet: 172.168.0.0/16
      serviceSubnet: 10.96.0.0/12
    nodeRegistration: {}
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: 2018-11-30T07:45:49Z
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "172"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: f4c8386f-f473-11e8-a7c1-000c293bfe27


  Apply the configuration:

[root@k8s-master01 ~]# kubectl apply -f kubeadm-config-cm.yaml --force
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kubeadm-config configured

  

  On Master01, check whether the cluster can be upgraded and determine the target version:

[root@k8s-master01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
I1205 14:16:59.024022   22267 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 14:16:59.024143   22267 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest stable version: v1.12.3
I1205 14:17:09.125120   22267 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 14:17:09.125157   22267 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest version in the v1.11 series: v1.12.3

Components that must be upgraded manually after you have upgraded the control plane with ‘kubeadm upgrade apply‘:
COMPONENT   CURRENT       AVAILABLE
Kubelet     5 x v1.11.1   v1.12.3

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.3
Controller Manager   v1.11.1   v1.12.3
Scheduler            v1.11.1   v1.12.3
Kube Proxy           v1.11.1   v1.12.3
CoreDNS              1.1.3     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.12.3

_____________________________________________________________________

  Upgrade Master01:

[root@k8s-master01 ~]# kubeadm upgrade apply v1.12.3
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...
Static pod: kube-apiserver-k8s-master01 hash: 8e73c6033a7f7c0ed9de3c9fe358ff20
Static pod: kube-controller-manager-k8s-master01 hash: 18c2a56f846a5cbbff74093ebc5b6136
Static pod: kube-scheduler-k8s-master01 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/etcd.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: 88da4629a02c29c8e1a6a72ede24f370
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 0s for initial delay
[util/etcd] Attempting to see if all cluster endpoints are available 1/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-apiserver-k8s-master01 hash: 8e73c6033a7f7c0ed9de3c9fe358ff20
Static pod: kube-apiserver-k8s-master01 hash: 2434a94351059f81688e5e1c3275bed6
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-controller-manager-k8s-master01 hash: 18c2a56f846a5cbbff74093ebc5b6136
Static pod: kube-controller-manager-k8s-master01 hash: 126c3dd53a5200d93342d93c456ef3ea
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-scheduler-k8s-master01 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: kube-scheduler-k8s-master01 hash: e36d0e66f8da9610f746f242ac8dca22
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven‘t already done so.

  Upgrade the other Master nodes

  The following is executed on Master02:

[root@k8s-master02 ~]# kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

  Modify the corresponding settings:

# Change the following parameters to this node's IP
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
# Change the following parameter to list all etcd cluster members
etcd.local.extraArgs.initial-cluster
# Add this parameter to extraArgs
initial-cluster-state: existing
# Change the following parameters to this node's IP and hostname
peerCertSANs:
- k8s-master02
- 192.168.20.21
serverCertSANs:
- k8s-master02
- 192.168.20.21
# Update the apiEndpoints entry under ClusterStatus
ClusterStatus: |
    apiEndpoints:
      k8s-master02:
        advertiseAddress: 192.168.20.21

  Add the cri-socket annotation to Master02:

[root@k8s-master02 manifests]# kubectl annotate node k8s-master02 kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node/k8s-master02 annotated

  Apply the configuration on Master02:

[root@k8s-master02 ~]# kubectl apply -f kubeadm-config-cm.yaml --force
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kubeadm-config configured

  Upgrade Master02:

[root@k8s-master02 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
I1205 16:29:19.322334   23143 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I1205 16:29:19.322407   23143 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest stable version: v1.12.3
I1205 16:29:29.364522   23143 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 16:29:29.364560   23143 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest version in the v1.11 series: v1.12.3

Components that must be upgraded manually after you have upgraded the control plane with ‘kubeadm upgrade apply‘:
COMPONENT   CURRENT       AVAILABLE
Kubelet     5 x v1.11.1   v1.12.3

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.3
Controller Manager   v1.11.1   v1.12.3
Scheduler            v1.11.1   v1.12.3
Kube Proxy           v1.11.1   v1.12.3
CoreDNS              1.2.2     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.12.3

_____________________________________________________________________

[root@k8s-master02 ~]# kubeadm upgrade apply v1.12.3 -f
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml‘
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...
Static pod: kube-apiserver-k8s-master02 hash: 78dd4fc562855556d31d9bc488493105
Static pod: kube-controller-manager-k8s-master02 hash: 1f165d7dcb7bc7512482d1ee10f5cd46
Static pod: kube-scheduler-k8s-master02 hash: 301c69426b9199b2b4f2ea0f0f7915f4
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-apiserver-k8s-master02 hash: 6c2acfaa7a090019e60c740068e75eac
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-controller-manager-k8s-master02 hash: 1f165d7dcb7bc7512482d1ee10f5cd46
Static pod: kube-controller-manager-k8s-master02 hash: 26a5d6ca3e6f688f9e684d1e8b894741
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-scheduler-k8s-master02 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: kube-scheduler-k8s-master02 hash: e36d0e66f8da9610f746f242ac8dca22
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master02" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven‘t already done so.

  Upgrade the remaining Master nodes (MasterN) in the same way.

3. Verifying the Master Nodes

  Check the static Pod images:

[root@k8s-master01 manifests]# grep "image:" *.yaml
etcd.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
kube-apiserver.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3

[root@k8s-master02 manifests]# grep "image:" *.yaml
etcd.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
kube-apiserver.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3

[root@k8s-master03 manifests]# grep "image:" *.yaml
etcd.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
kube-apiserver.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
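  The same check can also be done through the API; a minimal sketch (kubeadm labels its control-plane static Pods with tier=control-plane):

# list every control-plane Pod together with the image it is running
kubectl -n kube-system get pods -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
# the server version should now report v1.12.3 (the kubectl client itself is upgraded in the next section)
kubectl version --short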

4. Upgrading kubectl and kubelet

  Put Master01 into maintenance mode so no new Pods are scheduled on it:

[root@k8s-master01 ~]# kubectl drain k8s-master01 --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS                     ROLES     AGE       VERSION
k8s-master01   Ready,SchedulingDisabled   master    5d1h      v1.11.1
k8s-master02   Ready                      master    5d1h      v1.11.1
k8s-master03   Ready                      master    5d1h      v1.11.1
k8s-node01     Ready                      <none>    5d1h      v1.11.1
k8s-node02     Ready                      <none>    5d1h      v1.11.1......

  Upgrade kubectl and kubelet on Master01:

[root@k8s-master01 ~]# yum install kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 -y --disableexcludes=kubernetes

  Restart kubelet:

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2018-12-05 17:20:42 CST; 11s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 10860 (kubelet)
    Tasks: 21
   Memory: 43.3M

  Re-enable scheduling:

[root@k8s-master01 ~]# kubectl uncordon k8s-master01

  Check the status:

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d1h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.11.1
k8s-master03   Ready    master   5d1h   v1.11.1
k8s-node01     Ready    <none>   5d1h   v1.11.1
k8s-node02     Ready    <none>   5d1h   v1.11.1
......

  Upgrade master02 and master03 in the same way:

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d1h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.12.3
k8s-master03   Ready    master   5d1h   v1.12.3
k8s-node01     Ready    <none>   5d1h   v1.11.1
k8s-node02     Ready    <none>   5d1h   v1.11.1 ...... 

5. Upgrading the Worker Nodes

  On each node except the master nodes, upgrade the kubelet configuration:

[root@k8s-node01 ~]# kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

  Cordon and drain the node first, as described above for the masters (a minimal drain sketch is shown below), then install the new packages:
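  A minimal drain sketch for a worker (k8s-node01 is the example host here; --delete-local-data also evicts Pods that use emptyDir volumes, so use it with care):

kubectl drain k8s-node01 --ignore-daemonsets --delete-local-data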

[root@k8s-nodeN ~]# yum install kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
[root@k8s-nodeN ~]# yum install kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 -y --disableexcludes=kubernetes

  Restart kubelet:

[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart kubelet
[root@k8s-node01 ~]# systemctl status !$
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2018-12-06 00:54:23 CST; 25s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 16346 (kubelet)
    Tasks: 20
   Memory: 38.0M
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d1h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.12.3
k8s-master03   Ready    master   5d1h   v1.12.3
k8s-node01     Ready    <none>   5d1h   v1.12.3......
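  Once the node reports the new kubelet version, re-enable scheduling on it (a sketch, run from a master):

kubectl uncordon k8s-node01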

  Upgrade the other worker nodes in the same way.

  Check the final state:

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d2h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.12.3
k8s-master03   Ready    master   5d1h   v1.12.3
k8s-node01     Ready    <none>   5d1h   v1.12.3
k8s-node02     Ready    <none>   5d1h   v1.12.3......

6. Additional Notes

  This upgrade did not touch the network components; see the separate article for upgrading Calico.

  If an error occurs during the upgrade, you can simply re-run kubeadm upgrade apply, or force the upgrade with kubeadm upgrade apply VERSION -f.

Original post: https://www.cnblogs.com/dukuan/p/10071204.html
