1. Basic Concepts
After the upgrade, all containers will be restarted, because their static-pod hash values change; a quick way to inspect those hashes is sketched below.
You cannot skip a minor version when upgrading; go one minor release at a time (for example 1.11 → 1.12, then 1.12 → 1.13).
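For instance, the hash that kubeadm tracks for each control-plane component is exposed as an annotation on the static pod's mirror pod. A minimal sketch (the pod name assumes the example cluster used throughout this document):

[root@k8s-master01 ~]# kubectl -n kube-system get pod kube-apiserver-k8s-master01 \
    -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.hash}'

When the manifest changes during the upgrade, this hash changes and the kubelet restarts the container.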
2. Upgrading the Master Nodes
Current version:
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master02 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master03 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
List all available kubeadm versions:
[root@k8s-master01 ~]# yum list kubeadm --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
kubeadm.x86_64  1.9.9-0    kubernetes
kubeadm.x86_64  1.9.8-0    kubernetes
kubeadm.x86_64  1.9.7-0    kubernetes
kubeadm.x86_64  1.9.6-0    kubernetes
kubeadm.x86_64  1.9.5-0    kubernetes
kubeadm.x86_64  1.9.4-0    kubernetes
kubeadm.x86_64  1.9.3-0    kubernetes
kubeadm.x86_64  1.9.2-0    kubernetes
kubeadm.x86_64  1.9.11-0   kubernetes
kubeadm.x86_64  1.9.1-0    kubernetes
kubeadm.x86_64  1.9.10-0   kubernetes
kubeadm.x86_64  1.9.0-0    kubernetes
kubeadm.x86_64  1.8.9-0    kubernetes
kubeadm.x86_64  1.8.8-0    kubernetes
kubeadm.x86_64  1.8.7-0    kubernetes
kubeadm.x86_64  1.8.6-0    kubernetes
kubeadm.x86_64  1.8.5-0    kubernetes
kubeadm.x86_64  1.8.4-0    kubernetes
kubeadm.x86_64  1.8.3-0    kubernetes
kubeadm.x86_64  1.8.2-0    kubernetes
kubeadm.x86_64  1.8.15-0   kubernetes
kubeadm.x86_64  1.8.14-0   kubernetes
kubeadm.x86_64  1.8.13-0   kubernetes
kubeadm.x86_64  1.8.12-0   kubernetes
kubeadm.x86_64  1.8.11-0   kubernetes
kubeadm.x86_64  1.8.1-0    kubernetes
kubeadm.x86_64  1.8.10-0   kubernetes
kubeadm.x86_64  1.8.0-1    kubernetes
kubeadm.x86_64  1.8.0-0    kubernetes
kubeadm.x86_64  1.7.9-0    kubernetes
kubeadm.x86_64  1.7.8-1    kubernetes
kubeadm.x86_64  1.7.7-1    kubernetes
kubeadm.x86_64  1.7.6-1    kubernetes
kubeadm.x86_64  1.7.5-0    kubernetes
kubeadm.x86_64  1.7.4-0    kubernetes
kubeadm.x86_64  1.7.3-1    kubernetes
kubeadm.x86_64  1.7.2-0    kubernetes
kubeadm.x86_64  1.7.16-0   kubernetes
kubeadm.x86_64  1.7.15-0   kubernetes
kubeadm.x86_64  1.7.14-0   kubernetes
kubeadm.x86_64  1.7.11-0   kubernetes
kubeadm.x86_64  1.7.1-0    kubernetes
kubeadm.x86_64  1.7.10-0   kubernetes
kubeadm.x86_64  1.7.0-0    kubernetes
kubeadm.x86_64  1.6.9-0    kubernetes
kubeadm.x86_64  1.6.8-0    kubernetes
kubeadm.x86_64  1.6.7-0    kubernetes
kubeadm.x86_64  1.6.6-0    kubernetes
kubeadm.x86_64  1.6.5-0    kubernetes
kubeadm.x86_64  1.6.4-0    kubernetes
kubeadm.x86_64  1.6.3-0    kubernetes
kubeadm.x86_64  1.6.2-0    kubernetes
kubeadm.x86_64  1.6.13-0   kubernetes
kubeadm.x86_64  1.6.12-0   kubernetes
kubeadm.x86_64  1.6.11-0   kubernetes
kubeadm.x86_64  1.6.1-0    kubernetes
kubeadm.x86_64  1.6.10-0   kubernetes
kubeadm.x86_64  1.6.0-0    kubernetes
kubeadm.x86_64  1.13.0-0   kubernetes
kubeadm.x86_64  1.12.3-0   kubernetes
kubeadm.x86_64  1.12.2-0   kubernetes
kubeadm.x86_64  1.12.1-0   kubernetes
kubeadm.x86_64  1.12.0-0   kubernetes
kubeadm.x86_64  1.11.5-0   kubernetes
kubeadm.x86_64  1.11.4-0   kubernetes
kubeadm.x86_64  1.11.3-0   kubernetes
kubeadm.x86_64  1.11.2-0   kubernetes
kubeadm.x86_64  1.11.1-0   kubernetes
kubeadm.x86_64  1.11.0-0   kubernetes
kubeadm.x86_64  1.10.9-0   kubernetes
kubeadm.x86_64  1.10.8-0   kubernetes
kubeadm.x86_64  1.10.7-0   kubernetes
kubeadm.x86_64  1.10.6-0   kubernetes
kubeadm.x86_64  1.10.5-0   kubernetes
kubeadm.x86_64  1.10.4-0   kubernetes
kubeadm.x86_64  1.10.3-0   kubernetes
kubeadm.x86_64  1.10.2-0   kubernetes
kubeadm.x86_64  1.10.11-0  kubernetes
kubeadm.x86_64  1.10.1-0   kubernetes
kubeadm.x86_64  1.10.10-0  kubernetes
kubeadm.x86_64  1.10.0-0   kubernetes
 * extras: mirrors.aliyun.com
 * base: mirrors.aliyun.com
Available Packages
Upgrade kubeadm on all Master nodes:
yum install kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
Check the version:
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master02 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master03 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
On all Master nodes, update kubernetesVersion in kubeadm-config.yaml (if your cluster was not deployed following my earlier documents, download the corresponding images yourself):
[root@k8s-master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
Pull the images in advance:
[root@k8s-master01 ~]# kubeadm config images pull --config /root/kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2
Back up everything under /etc/kubernetes before proceeding (do this yourself); a minimal sketch follows.
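One possible backup sketch. The etcd certificate paths are assumptions based on kubeadm's default layout and may differ in your cluster; adjust them before running:

cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)
# Optionally snapshot etcd as well (v3 API; cert paths are assumed defaults):
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /root/etcd-backup-$(date +%F).db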
Perform the upgrade on Master01 first; the following steps are all executed on Master01.
Export and modify the kubeadm-config ConfigMap on Master01:
[root@k8s-master01 ~]# kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml
The main changes are:
# Change the following parameters to this node's IP:
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
# Set the following parameter to the full list of etcd cluster members:
etcd.local.extraArgs.initial-cluster
# Add this parameter under extraArgs:
initial-cluster-state: existing
# Set the following to this node's hostname and IP:
peerCertSANs:
- k8s-master01
- 192.168.20.20
serverCertSANs:
- k8s-master01
- 192.168.20.20
The result looks roughly like this:
apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: 192.168.20.20
      bindPort: 6443
      controlPlaneEndpoint: 192.168.20.10:16443
    apiServerCertSANs:
    - k8s-master01
    - k8s-master02
    - k8s-master03
    - k8s-master-lb
    - 192.168.20.20
    - 192.168.20.21
    - 192.168.20.22
    - 192.168.20.10
    apiServerExtraArgs:
      authorization-mode: Node,RBAC
    apiVersion: kubeadm.k8s.io/v1alpha2
    auditPolicy:
      logDir: /var/log/kubernetes/audit
      logMaxAge: 2
      path: ""
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManagerExtraArgs:
      node-monitor-grace-period: 10s
      pod-eviction-timeout: 10s
    etcd:
      local:
        dataDir: /var/lib/etcd
        extraArgs:
          advertise-client-urls: https://192.168.20.20:2379
          initial-advertise-peer-urls: https://192.168.20.20:2380
          initial-cluster: k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380,k8s-master03=https://192.168.20.22:2380
          listen-client-urls: https://127.0.0.1:2379,https://192.168.20.20:2379
          listen-peer-urls: https://192.168.20.20:2380
          initial-cluster-state: existing
        image: ""
        peerCertSANs:
        - k8s-master01
        - 192.168.20.20
        serverCertSANs:
        - k8s-master01
        - 192.168.20.20
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: MasterConfiguration
    kubeProxy:
      config:
        bindAddress: 0.0.0.0
        clientConnection:
          acceptContentTypes: ""
          burst: 10
          contentType: application/vnd.kubernetes.protobuf
          kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
          qps: 5
        clusterCIDR: 172.168.0.0/16
        configSyncPeriod: 15m0s
        conntrack:
          max: null
          maxPerCore: 32768
          min: 131072
          tcpCloseWaitTimeout: 1h0m0s
          tcpEstablishedTimeout: 24h0m0s
        enableProfiling: false
        healthzBindAddress: 0.0.0.0:10256
        hostnameOverride: ""
        iptables:
          masqueradeAll: false
          masqueradeBit: 14
          minSyncPeriod: 0s
          syncPeriod: 30s
        ipvs:
          excludeCIDRs: null
          minSyncPeriod: 0s
          scheduler: ""
          syncPeriod: 30s
        metricsBindAddress: 127.0.0.1:10249
        mode: ""
        nodePortAddresses: null
        oomScoreAdj: -999
        portRange: ""
        resourceContainer: /kube-proxy
        udpIdleTimeout: 250ms
    kubeletConfiguration:
      baseConfig:
        address: 0.0.0.0
        authentication:
          anonymous:
            enabled: false
          webhook:
            cacheTTL: 2m0s
            enabled: true
          x509:
            clientCAFile: /etc/kubernetes/pki/ca.crt
        authorization:
          mode: Webhook
          webhook:
            cacheAuthorizedTTL: 5m0s
            cacheUnauthorizedTTL: 30s
        cgroupDriver: cgroupfs
        cgroupsPerQOS: true
        clusterDNS:
        - 10.96.0.10
        clusterDomain: cluster.local
        containerLogMaxFiles: 5
        containerLogMaxSize: 10Mi
        contentType: application/vnd.kubernetes.protobuf
        cpuCFSQuota: true
        cpuManagerPolicy: none
        cpuManagerReconcilePeriod: 10s
        enableControllerAttachDetach: true
        enableDebuggingHandlers: true
        enforceNodeAllocatable:
        - pods
        eventBurst: 10
        eventRecordQPS: 5
        evictionHard:
          imagefs.available: 15%
          memory.available: 100Mi
          nodefs.available: 10%
          nodefs.inodesFree: 5%
        evictionPressureTransitionPeriod: 5m0s
        failSwapOn: true
        fileCheckFrequency: 20s
        hairpinMode: promiscuous-bridge
        healthzBindAddress: 127.0.0.1
        healthzPort: 10248
        httpCheckFrequency: 20s
        imageGCHighThresholdPercent: 85
        imageGCLowThresholdPercent: 80
        imageMinimumGCAge: 2m0s
        iptablesDropBit: 15
        iptablesMasqueradeBit: 14
        kubeAPIBurst: 10
        kubeAPIQPS: 5
        makeIPTablesUtilChains: true
        maxOpenFiles: 1000000
        maxPods: 110
        nodeStatusUpdateFrequency: 10s
        oomScoreAdj: -999
        podPidsLimit: -1
        port: 10250
        registryBurst: 10
        registryPullQPS: 5
        resolvConf: /etc/resolv.conf
        rotateCertificates: true
        runtimeRequestTimeout: 2m0s
        serializeImagePulls: true
        staticPodPath: /etc/kubernetes/manifests
        streamingConnectionIdleTimeout: 4h0m0s
        syncFrequency: 1m0s
        volumeStatsAggPeriod: 1m0s
    kubernetesVersion: v1.11.1
    networking:
      dnsDomain: cluster.local
      podSubnet: 172.168.0.0/16
      serviceSubnet: 10.96.0.0/12
    nodeRegistration: {}
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: 2018-11-30T07:45:49Z
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "172"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: f4c8386f-f473-11e8-a7c1-000c293bfe27
Apply the configuration:
[root@k8s-master01 ~]# kubectl apply -f kubeadm-config-cm.yaml --force
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kubeadm-config configured
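Optionally confirm that the edited values landed in the ConfigMap before continuing; a sketch (the grep pattern just picks out the keys changed above):

kubectl -n kube-system get cm kubeadm-config -o yaml | grep -E 'advertiseAddress|initial-cluster'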
On Master01, check whether the cluster can be upgraded and which version it will be upgraded to:
[root@k8s-master01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
I1205 14:16:59.024022   22267 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 14:16:59.024143   22267 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest stable version: v1.12.3
I1205 14:17:09.125120   22267 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 14:17:09.125157   22267 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest version in the v1.11 series: v1.12.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     5 x v1.11.1   v1.12.3

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.3
Controller Manager   v1.11.1   v1.12.3
Scheduler            v1.11.1   v1.12.3
Kube Proxy           v1.11.1   v1.12.3
CoreDNS              1.1.3     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.3

_____________________________________________________________________
Upgrade Master01:
[root@k8s-master01 ~]# kubeadm upgrade apply v1.12.3
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...
Static pod: kube-apiserver-k8s-master01 hash: 8e73c6033a7f7c0ed9de3c9fe358ff20
Static pod: kube-controller-manager-k8s-master01 hash: 18c2a56f846a5cbbff74093ebc5b6136
Static pod: kube-scheduler-k8s-master01 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/etcd.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: 88da4629a02c29c8e1a6a72ede24f370
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 0s for initial delay
[util/etcd] Attempting to see if all cluster endpoints are available 1/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master01 hash: 8e73c6033a7f7c0ed9de3c9fe358ff20
Static pod: kube-apiserver-k8s-master01 hash: 2434a94351059f81688e5e1c3275bed6
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master01 hash: 18c2a56f846a5cbbff74093ebc5b6136
Static pod: kube-controller-manager-k8s-master01 hash: 126c3dd53a5200d93342d93c456ef3ea
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master01 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: kube-scheduler-k8s-master01 hash: e36d0e66f8da9610f746f242ac8dca22
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
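A quick post-upgrade check on Master01 is worthwhile before moving on; a sketch:

grep "image:" /etc/kubernetes/manifests/*.yaml           # manifests should now reference v1.12.3 images
kubectl -n kube-system get pods -o wide | grep k8s-master01   # control-plane pods on this master should be Running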
Upgrade the other Master nodes.
Run the following on Master02:
[root@k8s-master02 ~]# kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml
Edit the corresponding settings (a sample fragment for Master02 follows the list below):
# Change the following parameters to this node's IP:
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
# Set the following parameter to the full list of etcd cluster members:
etcd.local.extraArgs.initial-cluster
# Add this parameter under extraArgs:
initial-cluster-state: existing
# Set the following to this node's hostname and IP:
peerCertSANs:
- k8s-master02
- 192.168.20.21
serverCertSANs:
- k8s-master02
- 192.168.20.21
# Update the apiEndpoints entry in ClusterStatus:
ClusterStatus: |
  apiEndpoints:
    k8s-master02:
      advertiseAddress: 192.168.20.21
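For illustration, the edited fragment for Master02 would look roughly like this (IPs and hostnames follow the example cluster used throughout this document; make sure initial-cluster lists every etcd member):

api:
  advertiseAddress: 192.168.20.21
etcd:
  local:
    extraArgs:
      advertise-client-urls: https://192.168.20.21:2379
      initial-advertise-peer-urls: https://192.168.20.21:2380
      initial-cluster: k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380,k8s-master03=https://192.168.20.22:2380
      initial-cluster-state: existing
      listen-client-urls: https://127.0.0.1:2379,https://192.168.20.21:2379
      listen-peer-urls: https://192.168.20.21:2380
    peerCertSANs:
    - k8s-master02
    - 192.168.20.21
    serverCertSANs:
    - k8s-master02
    - 192.168.20.21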
Add the CRI socket annotation to the Master02 node object:
[root@k8s-master02 manifests]# kubectl annotate node k8s-master02 kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node/k8s-master02 annotated
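To confirm the annotation landed, a sketch (dots in the annotation key are escaped for jsonpath):

kubectl get node k8s-master02 \
  -o jsonpath='{.metadata.annotations.kubeadm\.alpha\.kubernetes\.io/cri-socket}'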
Apply the configuration on Master02:
[root@k8s-master02 ~]# kubectl apply -f kubeadm-config-cm.yaml --force
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kubeadm-config configured
Upgrade Master02:
[root@k8s-master02 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
I1205 16:29:19.322334   23143 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I1205 16:29:19.322407   23143 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest stable version: v1.12.3
I1205 16:29:29.364522   23143 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 16:29:29.364560   23143 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest version in the v1.11 series: v1.12.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     5 x v1.11.1   v1.12.3

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.3
Controller Manager   v1.11.1   v1.12.3
Scheduler            v1.11.1   v1.12.3
Kube Proxy           v1.11.1   v1.12.3
CoreDNS              1.2.2     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.3

_____________________________________________________________________

[root@k8s-master02 ~]# kubeadm upgrade apply v1.12.3 -f
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...
Static pod: kube-apiserver-k8s-master02 hash: 78dd4fc562855556d31d9bc488493105
Static pod: kube-controller-manager-k8s-master02 hash: 1f165d7dcb7bc7512482d1ee10f5cd46
Static pod: kube-scheduler-k8s-master02 hash: 301c69426b9199b2b4f2ea0f0f7915f4
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master02 hash: 6c2acfaa7a090019e60c740068e75eac
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master02 hash: 1f165d7dcb7bc7512482d1ee10f5cd46
Static pod: kube-controller-manager-k8s-master02 hash: 26a5d6ca3e6f688f9e684d1e8b894741
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master02 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: kube-scheduler-k8s-master02 hash: e36d0e66f8da9610f746f242ac8dca22
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master02" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Upgrade the remaining Master nodes (MasterN) the same way.
3. Verifying the Masters
Images:
[root@k8s-master01 manifests]# grep "image:" *.yaml
etcd.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
kube-apiserver.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[root@k8s-master02 manifests]# grep "image:" *.yaml
etcd.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
kube-apiserver.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[root@k8s-master03 manifests]# grep "image:" *.yaml
etcd.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
kube-apiserver.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml:    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
4. Upgrading kubectl and kubelet
Put Master01 into maintenance mode so nothing can be scheduled onto it:
[root@k8s-master01 ~]# kubectl drain k8s-master01 --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS                     ROLES    AGE    VERSION
k8s-master01   Ready,SchedulingDisabled   master   5d1h   v1.11.1
k8s-master02   Ready                      master   5d1h   v1.11.1
k8s-master03   Ready                      master   5d1h   v1.11.1
k8s-node01     Ready                      <none>   5d1h   v1.11.1
k8s-node02     Ready                      <none>   5d1h   v1.11.1
......
Upgrade kubectl and kubelet on Master01:
[root@k8s-master01 ~]# yum install kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
Restart the kubelet:
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2018-12-05 17:20:42 CST; 11s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 10860 (kubelet)
    Tasks: 21
   Memory: 43.3M
Re-enable scheduling:
[root@k8s-master01 ~]# kubectl uncordon k8s-master01
Check the status:
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d1h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.11.1
k8s-master03   Ready    master   5d1h   v1.11.1
k8s-node01     Ready    <none>   5d1h   v1.11.1
k8s-node02     Ready    <none>   5d1h   v1.11.1
......
Upgrade master02 and master03 the same way; a scripted sketch follows.
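For reference, the remaining masters can be handled with a small loop like this sketch (it assumes you run it on Master01 with passwordless SSH to the other masters; adapt as needed):

for NODE in k8s-master02 k8s-master03; do
  kubectl drain ${NODE} --ignore-daemonsets                # cordon + evict workloads
  ssh ${NODE} "yum install -y kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 --disableexcludes=kubernetes \
               && systemctl daemon-reload && systemctl restart kubelet"
  kubectl uncordon ${NODE}                                 # re-enable scheduling
done

When done, all masters report v1.12.3: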
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d1h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.12.3
k8s-master03   Ready    master   5d1h   v1.12.3
k8s-node01     Ready    <none>   5d1h   v1.11.1
k8s-node02     Ready    <none>   5d1h   v1.11.1
......
5. Upgrading the Worker Nodes
On each node except the master nodes, upgrade the kubelet config:
[root@k8s-node01 ~]# kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Cordon/drain the node first, as described above, then install the upgraded packages:
[root@k8s-nodeN ~]# yum install kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
[root@k8s-nodeN ~]# yum install kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
Restart the kubelet:
[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart kubelet
[root@k8s-node01 ~]# systemctl status !$
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2018-12-06 00:54:23 CST; 25s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 16346 (kubelet)
    Tasks: 20
   Memory: 38.0M
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d1h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.12.3
k8s-master03   Ready    master   5d1h   v1.12.3
k8s-node01     Ready    <none>   5d1h   v1.12.3
......
Upgrade the remaining worker nodes the same way.
Check the final state:
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   5d2h   v1.12.3
k8s-master02   Ready    master   5d1h   v1.12.3
k8s-master03   Ready    master   5d1h   v1.12.3
k8s-node01     Ready    <none>   5d1h   v1.12.3
k8s-node02     Ready    <none>   5d1h   v1.12.3
......
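A couple of final sanity checks are worth running (sketch; both are standard kubectl commands):

kubectl get pods -n kube-system    # every control-plane and addon pod should be Running
kubectl version --short           # client and server should both report v1.12.3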
6. Additional Notes
This upgrade does not cover the network components; upgrading Calico is documented separately.
If an error occurs during the upgrade, you can simply re-run kubeadm upgrade apply, or force the upgrade with kubeadm upgrade apply VERSION -f.
Original (Chinese) source: https://www.cnblogs.com/dukuan/p/10071204.html