Installing Kubernetes on Debian 8.2

Installing Kubernetes on the master with kubeadm

Add the Aliyun apt mirror as a source, then install kubeadm:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

apt-get update && apt-get install kubeadm

Create a kubeadm.yaml file, then run the installation:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12.2"

kubeadm init --config kubeadm.yaml

Problems encountered during installation:

[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: missing cgroups: memory
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.2]

Solutions:

1. Relying on swapoff -a alone is not recommended.
From the session record, kubeadm init would run after swapoff -a but still failed every time with:
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

The kubelet logs show that the real problem was still that swap had not been turned off:
?  kubernetes  journalctl -xefu kubelet
11月 05 22:56:28 debian kubelet[7241]: F1105 22:56:28.609272    7241 server.go:262] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename                                Type                Size        Used        Priority /dev/sda9                               partition        3905532        0        -1]
?  kubernetes  cat /proc/swaps
Filename                Type        Size    Used    Priority
/dev/sda9                               partition    3905532    0    -1
?  kubernetes  

After commenting out the swap mount in /etc/fstab, the installation succeeded.
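The persistent fix above can be scripted; a minimal sketch (the helper name is made up for illustration, and the sed pattern assumes the fstab entry's type field is literally "swap"):

```shell
# disable_swap_in_fstab: comment out swap entries in the given fstab file
# so swap stays off after reboot (run `swapoff -a` first for the live system).
disable_swap_in_fstab() {
  local fstab="${1:-/etc/fstab}"
  # Match lines containing a whitespace-delimited "swap" field and prefix them with '#'.
  sed -i '/\sswap\s/ s/^/#/' "$fstab"
}
```

After editing, `cat /proc/swaps` should list no active swap devices once the system is rebooted (or after `swapoff -a`).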

2. echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory\" >> /etc/default/grub  && update-grub && reboot
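After the reboot you can verify that the memory cgroup controller is actually enabled; a small sketch (the helper name is made up for illustration) that parses the /proc/cgroups table:

```shell
# cgroup_memory_enabled: succeed if the "memory" controller is enabled in a
# /proc/cgroups-style file (columns: subsys_name hierarchy num_cgroups enabled).
cgroup_memory_enabled() {
  local cgroups_file="${1:-/proc/cgroups}"
  # Column 1 is the controller name, column 4 is the "enabled" flag (1 or 0).
  awk '$1 == "memory" { found = 1; enabled = $4 }
       END { if (found && enabled == 1) exit 0; exit 1 }' "$cgroups_file"
}
```

If the check fails after updating grub, the kernel command line change did not take effect and the SystemVerification preflight error will reappear.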
3. From a normal network inside China you cannot pull images from k8s.gcr.io, so pull them from docker.io instead and re-tag them with the names kubeadm expects:
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2

docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag docker.io/coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2

docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2
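The repetitive pull/tag/rmi sequence above can be driven from a single list; a sketch of that idea (the image list mirrors the commands above; the DRY_RUN flag is a made-up convenience that only prints the commands instead of running docker):

```shell
# Mirror images via docker.io, re-tag them as k8s.gcr.io names, then
# remove the mirror tags. Each entry is "source-image target-image".
IMAGES="
mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
"

mirror_images() {
  # With DRY_RUN=1 just print the docker commands instead of executing them.
  local run="eval"
  [ "${DRY_RUN:-0}" = "1" ] && run="echo"
  echo "$IMAGES" | while read -r src dst; do
    [ -z "$src" ] && continue
    $run "docker pull $src"
    $run "docker tag $src $dst"
    $run "docker rmi $src"
  done
}
```

Run `DRY_RUN=1 mirror_images` first to review the generated commands, then `mirror_images` to execute them.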

Alternatively, you can configure a registry mirror (in my test the 163 mirror was actually slower than direct access). Add it as follows, then restart the docker service:
?  kubernetes  cat /etc/docker/daemon.json
{
    "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
?  kubernetes  
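A malformed /etc/docker/daemon.json prevents the docker daemon from starting at all, so it is worth validating the JSON before restarting; a minimal sketch (the helper name is made up, and it assumes python3 is available):

```shell
# validate_daemon_json: fail if the docker daemon config is not valid JSON.
validate_daemon_json() {
  local conf="${1:-/etc/docker/daemon.json}"
  python3 -m json.tool < "$conf" > /dev/null
}

# Only restart docker once the check passes:
#   validate_daemon_json && systemctl restart docker
```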

Record of a successful installation:

?  kubernetes  kubeadm init --config kubeadm.yaml
I1205 23:08:15.852917    5188 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.12.2.txt": Get https://dl.k8s.io/release/stable-1.12.2.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 23:08:15.853144    5188 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [debian localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [debian localhost] and IPs [192.168.2.118 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [debian kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.118]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 48.078220 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node debian as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node debian as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian" as an annotation
[bootstraptoken] using token: x4p0vz.tdp1xxxx7uyerrrs
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9

?  kubernetes   

Deploying a network plugin

After the installation succeeds, check the node with kubectl get nodes (kubectl must run as kubernetes-admin, so you first need to copy the admin config file and point an environment variable at it, otherwise kubectl get nodes fails):

?  kubernetes  kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
?  kubernetes  echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
?  kubernetes  source ~/.bashrc
?  kubernetes  kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
debian   NotReady   master   21m   v1.12.2
?  kubernetes 

The node shows NotReady because no network plugin has been deployed yet:

?  kubernetes  kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     Pending   0          24m
coredns-576cbf47c7-xzjk7         0/1     Pending   0          24m
etcd-debian                      1/1     Running   0          23m
kube-apiserver-debian            1/1     Running   0          23m
kube-controller-manager-debian   1/1     Running   0          23m
kube-proxy-5wb6k                 1/1     Running   0          24m
kube-scheduler-debian            1/1     Running   0          23m
?  kubernetes  

?  kubernetes  kubectl describe node debian
Name:               debian
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=debian
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 05 Dec 2018 23:09:19 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.2.118
  Hostname:    debian
Capacity:
 cpu:                2
 ephemeral-storage:  4673664Ki
 hugepages-2Mi:      0
 memory:             5716924Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  4307248736
 hugepages-2Mi:      0
 memory:             5614524Ki
 pods:               110
System Info:
 Machine ID:                 4341bb45c5c84ad2827c173480039b5c
 System UUID:                05F887C4-A455-122E-8B14-8C736EA3DBDB
 Boot ID:                    ff68f27b-fba0-4048-a1cf-796dd013e025
 Kernel Version:             3.16.0-4-amd64
 OS Image:                   Debian GNU/Linux 8 (jessie)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.1
 Kubelet Version:            v1.12.2
 Kube-Proxy Version:         v1.12.2
Non-terminated Pods:         (5 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  kube-system                etcd-debian                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-debian             250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-debian    200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-5wb6k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-debian             100m (5%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       550m (27%)  0 (0%)
  memory    0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 22m                kubelet, debian     Starting kubelet.
  Normal  NodeAllocatableEnforced  22m                kubelet, debian     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientPID
  Normal  Starting                 21m                kube-proxy, debian  Starting kube-proxy.
?  kubernetes  

After deploying the plugin, all pods eventually show Running (the plugin takes a few minutes to come up; you will see intermediate states such as ContainerCreating and CrashLoopBackOff along the way):
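The plugin used here appears to be Weave Net, judging from the weave-net pod in the listings below; the original deploy command is not shown in the post, but at the time Weave was typically applied with a single command of this form (URL per the Weave Net documentation of that era):

```shell
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```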

?  kubernetes  kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     Pending             0          25m
coredns-576cbf47c7-xzjk7         0/1     Pending             0          25m
etcd-debian                      1/1     Running             0          25m
kube-apiserver-debian            1/1     Running             0          25m
kube-controller-manager-debian   1/1     Running             0          25m
kube-proxy-5wb6k                 1/1     Running             0          25m
kube-scheduler-debian            1/1     Running             0          25m
weave-net-nj7bk                  0/2     ContainerCreating   0          21s
?  kubernetes  kubectl get pods -n kube-system
NAME                             READY   STATUS             RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     CrashLoopBackOff   2          27m
coredns-576cbf47c7-xzjk7         0/1     CrashLoopBackOff   2          27m
etcd-debian                      1/1     Running            0          27m
kube-apiserver-debian            1/1     Running            0          27m
kube-controller-manager-debian   1/1     Running            0          27m
kube-proxy-5wb6k                 1/1     Running            0          27m
kube-scheduler-debian            1/1     Running            0          27m
weave-net-nj7bk                  2/2     Running            0          2m32s
?  kubernetes  kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-4vjhf         1/1     Running   3          27m
coredns-576cbf47c7-xzjk7         1/1     Running   3          27m
etcd-debian                      1/1     Running   0          27m
kube-apiserver-debian            1/1     Running   0          27m
kube-controller-manager-debian   1/1     Running   0          27m
kube-proxy-5wb6k                 1/1     Running   0          27m
kube-scheduler-debian            1/1     Running   0          27m
weave-net-nj7bk                  2/2     Running   0          2m42s
?  kubernetes  
?  kubernetes  kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
debian   Ready    master   38m   v1.12.2
?  kubernetes  

Allowing the master to run Pods

By default, Kubernetes uses the Taint/Toleration mechanism to mark certain nodes with a "taint":

?  kubernetes  kubectl describe node debian | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
?  kubernetes  

Pods will not run on a tainted node by default, unless:

1. The Pod explicitly declares that it tolerates the taint (add a tolerations field to the spec section of the Pod's YAML).
2. For a test cluster of just a few machines, the simplest option is to remove the taint:
   ?  kubernetes  kubectl taint nodes --all node-role.kubernetes.io/master-
   node/debian untainted
   ?  kubernetes  kubectl describe node debian | grep Taints
   Taints:             <none>
   ?  kubernetes  
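For option 1, a minimal sketch of a Pod spec that tolerates the master taint (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # placeholder name
spec:
  containers:
  - name: demo
    image: nginx          # placeholder image
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```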

Adding a worker node

kubeadm and kubelet on the master are v1.12.2, but a default apt-get install on the worker node pulled in v1.13, which made joining the cluster fail. The mismatched packages had to be removed and the matching version installed:
root@debian-vm:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
root@debian-vm:~# kubelet --version
Kubernetes v1.13.0
root@debian-vm:~# apt-get --purge remove kubeadm kubelet
root@debian-vm:~# apt-cache policy kubeadm
kubeadm:
  Installed: (none)
  Candidate: 1.13.0-00
  Version table:
     1.13.0-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.3-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.2-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
root@debian-vm:~# apt-get install kubeadm=1.12.2-00 kubelet=1.12.2-00
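A mismatch like this can be caught before attempting the join; a small sketch (the helper is made up for illustration) that compares the major.minor part of two version strings, such as the output of `kubeadm version -o short` on the worker against the master's version:

```shell
# same_minor_version: succeed if two "vMAJOR.MINOR.PATCH" strings share the
# same major.minor, e.g. the worker's kubeadm vs the master's kubelet.
same_minor_version() {
  local a b
  a=$(echo "$1" | cut -d. -f1-2)   # e.g. "v1.12.2" -> "v1.12"
  b=$(echo "$2" | cut -d. -f1-2)
  [ "$a" = "$b" ]
}

# Example check on the worker (master known to run v1.12.2):
# same_minor_version "$(kubeadm version -o short)" "v1.12.2" || echo "version mismatch"
```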

root@debian-vm:~# kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.168.2.118:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.118:6443"
[discovery] Requesting info from "https://192.168.2.118:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.118:6443"
[discovery] Successfully established connection with API Server "192.168.2.118:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian-vm" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@debian-vm:~# 

The node joined the cluster successfully.

https://github.com/kubernetes/kubernetes/issues/54914
https://github.com/kubernetes/kubeadm/issues/610
https://blog.csdn.net/acxlm/article/details/79069468

Original article: https://www.cnblogs.com/aios/p/10023299.html
