Installing Calico on a Kubernetes (k8s) cluster

Add hosts entries

cat /etc/hosts
10.39.7.51 k8s-master-51
10.39.7.57 k8s-master-57
10.39.7.52 k8s-master-52

Download Calico

wget http://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml

Download the required images

# Recommended: after downloading, push these images to your own registry
[root@k8s-master-51 ~]# cat calico.yaml | grep image
image: quay.io/calico/node:v3.2.4
image: quay.io/calico/cni:v3.2.4
image: quay.io/calico/kube-controllers:v3.2.4
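Mirroring the three images can be sketched as a small loop; this prints the required docker commands rather than running them, and `reg.example.com/calico` is a placeholder for your own registry:

```shell
# Mirror the three Calico images to a private registry.
# REGISTRY is a placeholder -- replace it with your own registry address.
REGISTRY="reg.example.com/calico"

for img in quay.io/calico/node:v3.2.4 \
           quay.io/calico/cni:v3.2.4 \
           quay.io/calico/kube-controllers:v3.2.4; do
  # keep only name:tag, e.g. node:v3.2.4
  target="${REGISTRY}/${img##*/}"
  echo "docker pull ${img}"
  echo "docker tag  ${img} ${target}"
  echo "docker push ${target}"
done
```

Pipe the output to `sh` to actually execute the pull/tag/push, then point the `image:` fields in calico.yaml at your registry.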

Configure the etcd endpoints

etcd_endpoints: "https://10.39.7.51:2379,https://10.39.7.52:2379,https://10.39.7.57:2379"

Fill in the values as described in the comments

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"

Because this cluster runs on QingCloud, the MTU value needs to be lowered

veth_mtu: "1340"
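The arithmetic behind this value: Calico's IPIP tunnel adds a 20-byte outer IP header, so the veth MTU should be the underlay interface MTU minus 20. The 1360 underlay MTU below is an assumption inferred from the 1340 used above:

```shell
# IPIP encapsulation adds a 20-byte outer IP header, so:
#   veth_mtu = interface MTU - 20
IFACE_MTU=1360                  # assumed QingCloud NIC MTU (inferred)
VETH_MTU=$((IFACE_MTU - 20))
echo "veth_mtu: \"${VETH_MTU}\""   # -> veth_mtu: "1340"
```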

Base64-encode the etcd client certificate, key, and CA certificate and add them to the corresponding fields in calico.yaml

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')
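The parenthesized commands above stand for shell command substitution: each Secret value is the base64 of the entire file with newlines stripped. A self-contained sketch using a dummy file in place of /etc/kubernetes/ssl/etcd-key.pem:

```shell
# Produce a Secret value: base64 of the whole file, with newlines removed.
# A dummy file stands in for /etc/kubernetes/ssl/etcd-key.pem here.
tmpkey=$(mktemp)
printf 'dummy etcd key material\n' > "$tmpkey"

etcd_key_b64=$(base64 < "$tmpkey" | tr -d '\n')
echo "  etcd-key: ${etcd_key_b64}"

# Round-trip check: decoding must reproduce the original file exactly.
echo "${etcd_key_b64}" | base64 -d | cmp -s - "$tmpkey" && echo "round-trip OK"
rm -f "$tmpkey"
```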

Modify the value under CALICO_IPV4POOL_CIDR in calico.yaml

- name: CALICO_IPV4POOL_CIDR
  value: "10.253.0.0/18"

Install Calico

kubectl apply -f calico.yaml

Update kubelet.service to configure the network plugin

cat > /etc/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --hostname-override=k8s-master-51 \
  --pod-infra-container-image=reg.enncloud.cn/enncloud/pause-amd64:3.1 \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.config \
  --config=/etc/kubernetes/kubelet.config.json \
  --cert-dir=/etc/kubernetes/pki \
  --allow-privileged=true \
  --kube-reserved=cpu=500m,memory=512Mi \
  --image-gc-high-threshold=85 \
  --image-gc-low-threshold=70 \
  --logtostderr=true \
  --network-plugin=cni \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --v=2

[Install]
WantedBy=multi-user.target
EOF

# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet

Error log

The Calico pods failed to start; the logs are below

[root@k8s-master-51 ~]# docker ps -a
CONTAINER ID        IMAGE                                                                                                                 COMMAND                  CREATED              STATUS                      PORTS               NAMES
0dabd1835d53        65733b9f36a6                                                                                                          "/usr/bin/kube-con..."   About a minute ago   Exited (1) 53 seconds ago                       k8s_calico-kube-controllers_calico-kube-controllers-5cf99599ff-hsdqw_kube-system_9175cc10-fdd2-11e8-9144-525463f5581b_13
d434d4ed5e47        e537e5882f91                                                                                                          "start_runit"            About a minute ago   Exited (1) 55 seconds ago

[root@k8s-master-51 ~]# docker logs 0dabd1835d53
# Note: the Kubeconfig field below is empty, and the client cannot connect to etcd

2018-12-12 06:39:47.068 [INFO][1] main.go 73: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"policy,profile,workloadendpoint,node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true}
2018-12-12 06:39:57.069 [FATAL][1] main.go 85: Failed to start error=failed to build Calico client: context deadline exceeded

[root@k8s-master-51 ~]# docker logs d434d4ed5e47
2018-12-12 06:39:44.948 [INFO][8] startup.go 252: Early log level set to info
2018-12-12 06:39:44.949 [INFO][8] startup.go 272: Using HOSTNAME environment (lowercase) for node name
2018-12-12 06:39:44.949 [INFO][8] startup.go 280: Determined node name: k8s-master-51
Calico node failed to start
ERROR: Error accessing the Calico datastore: context deadline exceeded

# Fix:
Before:
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
After:
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"
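The same before/after edit can be scripted with sed; this sketch runs against a stand-in file rather than the real calico.yaml:

```shell
# Apply the fix above with sed: point etcd_ca/cert/key at the mounted
# secret paths.  A stand-in file is used here instead of the real calico.yaml.
f=$(mktemp)
cat > "$f" <<'EOF'
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
EOF

sed -i \
  -e 's|etcd_ca: ""|etcd_ca: "/calico-secrets/etcd-ca"|' \
  -e 's|etcd_cert: ""|etcd_cert: "/calico-secrets/etcd-cert"|' \
  -e 's|etcd_key: ""|etcd_key: "/calico-secrets/etcd-key"|' \
  "$f"

grep etcd_ "$f"
rm -f "$f"
```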

Error log

[root@k8s-master-51 manifests]# kubectl get po -nkube-system
NAME                                       READY     STATUS    RESTARTS   AGE
calico-kube-controllers-5cf99599ff-zbmn2   0/1       Running   0          1m

[root@k8s-master-51 manifests]# kubectl logs calico-kube-controllers-5cf99599ff-zbmn2 -nkube-system
E1212 10:18:20.617128       1 reflector.go:205] github.com/projectcalico/kube-controllers/pkg/controllers/networkpolicy/policy_controller.go:147: Failed to list *v1.NetworkPolicy: Get https://10.254.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.254.0.1:443: i/o timeout
E1212 10:18:20.657457       1 reflector.go:205] github.com/projectcalico/kube-controllers/pkg/controllers/pod/pod_controller.go:206: Failed to list *v1.Pod: Get https://10.254.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.254.0.1:443: i/o timeout
2018-12-12 11:01:12.737 [ERROR][1] main.go 188: Failed to reach apiserver error=<nil>

[root@k8s-master-51 manifests]# kubectl describe po calico-kube-controllers-5cf99599ff-zbmn2 -nkube-system
Events:
  Type     Reason     Age                From                    Message
  ----     ------     ----               ----                    -------
  Normal   Scheduled  5m                 default-scheduler       Successfully assigned kube-system/calico-kube-controllers-5cf99599ff-zbmn2 to k8s-master-57
  Normal   Pulling    5m                 kubelet, k8s-master-57  pulling image "quay.io/calico/kube-controllers:v3.2.4"
  Normal   Pulled     5m                 kubelet, k8s-master-57  Successfully pulled image "quay.io/calico/kube-controllers:v3.2.4"
  Normal   Created    5m                 kubelet, k8s-master-57  Created container
  Normal   Started    5m                 kubelet, k8s-master-57  Started container
  Warning  Unhealthy  4m (x3 over 4m)    kubelet, k8s-master-57  Readiness probe failed: initialized to false
  Warning  Unhealthy  28s (x25 over 4m)  kubelet, k8s-master-57  Readiness probe failed: Error reaching apiserver: <nil> with http status code: 0

#### Damn. I had been running all my commands on 51, and only now noticed that calico-kube-controllers had been scheduled on 57 the whole time
### Google results pointed at a CPU problem; checking top on 57 confirmed it:
top - 18:28:42 up  1:29,  1 user,  load average: 2.17, 2.15, 1.33
Tasks: 108 total,   1 running, 107 sleeping,   0 stopped,   0 zombie
%Cpu(s): 96.3 us,  3.0 sy,  0.0 ni,  0.5 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem :  1882600 total,   206224 free,   662924 used,  1013452 buff/cache
KiB Swap:  1048572 total,  1048572 free,        0 used.  1025652 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
31230 root      20   0   37076  16788   9372 S 187.7  0.9  20:31.36 kube-controller 

#### Possibly because kube-proxy was not installed; the exact relationship is still unclear

After installing kube-proxy, calico-kube-controllers came up
[root@k8s-master-51 ~]# kubectl get po -nkube-system
NAME                                       READY     STATUS             RESTARTS   AGE
calico-kube-controllers-5cf99599ff-cbs5l   1/1       Running            0          21h

The complete calico.yaml:

# Calico Version v3.2.4
# https://docs.projectcalico.org/v3.2/releases#v3.2.4
# This manifest includes the following component versions:
#   calico/node:v3.2.4
#   calico/cni:v3.2.4
#   calico/kube-controllers:v3.2.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.39.7.51:2379,https://10.39.7.52:2379,https://10.39.7.57:2379"

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"
  # Configure the Calico backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1340"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdzhzK0lMUVV5N293d3l4YlBsVTd5NWw1VDhyMXIzUjhqVDhWcFErcFRSbUUxdkwwCmRLZDV0NWszcDRsZGxhcmxLVlh0N3Fob1FQZVdEUVU3dUdQU0lFazZ4REE1VGU3YjJEbFJoa1dUUE1VZWtOTTUKTld2THNrNi83V0lGT1RZNERLUW4rREhGVENHZk9oMFpkOU15RVJ6YmhpbGpXOXRQWWI1anZnQSs1OExob25IbQpzRHJvS0t1aVQ3c0o4cVduOExZL0l0YlNJdWYrUkhPVFZEZ2NyY3NUYnJGakZuWXhWUEU5Q3lyYmdSc1ZXYnd2Ck9oQ3I2YXNVaTVOeXhscFd5SGYveERVS2dTQnlOMVV1bVBTMnNacXdmbFc2NGRrc3BLTUIxSjczeStNSFpLejAKcm9vMG1lUDZ5R1l4MDlLVGt6bmFoNGlYc09GcEN2RThqS0lKTXdJREFRQUJBb0lCQUJQd1ZLbGI3V281MGRGdQphUFJXRmJyTUxGQjE2TU12WjZleXJ1K2FRckY1VWMvWitnOFBKeFFOWkYrSlc2QnNRTjRPeENZenZEb3hmSFJpCi9ndnZEbXovU0I3R2ttOWZUY0FkUmpJWVQ4QTJpc0JRNGxpUVc3UVMxUFRGc0taODRRUllpMEY1UUJCYXRDNWsKM0QwWm90V2ZUTFBDN3oxaGZob0VHNEF6NGpRVHRPNW5QT3d6dWczZERRYTF3eVFnUHB4OUpwb0lhc0piU1dWcQowZ0ZOWXk4TjR6N1dwS2tPZWQ2MWhPb0VGMDdaRlVoL3FTbDlRdFRUaWVhb29WUHA2OVpIOUtzM2FlWG8wRnhTCjJzZkpXb0w2TjdYcG1JcEtId2tWMEZodld6TlRPZURKbllkTUt2NkJZVTVmUFVpOWJ1Ukk2V1RiZWpydEQyM3cKakpHbEQ3RUNnWUVBNU95VVB4MWVrNjRScVd1UHpVeHQ2clgzSC9SeUl1VDR0WGVubFpONk9LMThNYWJkOW4xSgpDZ2R6eWtKc3BWdHc4UjhnSk9VUVRxN1pGTkRyQjJRakJEMytVWi9KUW5IamtqWWdvRHRxeHY0dVpJc3FSTVNECldaRE8rUnc1WjBadU1lMGhiZzVBM3NScGJ0TWNJaUlRY2RLd3F2WHNOZE1DL1Njczd4em5DN2NDZ1lFQTJ2T0oKNkRDUUpEMlI5YTJycWlCekF4YmFVR01pY3RaS1IxQzlTTE1JY0liNnlrY1o4ZkVNZzR2T2VYbnE0c3hHb3VjSApyM1RtSFhuS0pXT1F0d0VmZVRLazFhNjhjWXplNDFZbzJsMnY1aXpwbjZSQmljSVlVeEhORXlUaVVXOVlqbTJECncvYXVEVHd2d1plRm14Wmh0dVhuWjZnaHY4QklxSy9VeHQ2czVtVUNnWUFtbVNVZHh5dnRKb3BmMEh6OGxvaHAKN0tod1FOMEZ1U21lSDBDb1hhZGI2eFJub3NVR0RIWEdOUjkyMk9CTXVUQS8xNG1wN3QxakJ1UWZPR0tJYW4vawo5VGJ1T0V6TTRUc0hxZ3l5TjVKM1h1QWZuNzlPdlB1UW5IUHBXTEx0RU5qL25nWG04b3hKZzBCcnFUaXpJSjg1Ck1kY0wzRThwZkJ5aTVub1REd0o5M1FLQmdRQzJWUGtURXQzMlVBK3N0K09zMlRqdDZhb0VKNG9ZZDd1RGlBa1kKOFg2bHRzSkNrTk5hVjVKRU9iaklFRzg2VDBMRGhnRXdhL2oxc3VaVUhJWDI0RWlGZFZjdlcwUXlpMDFSby82QgpXbU9SR3ZXeXErYW9BYXFnQXNMMG5sS1ozayt3ekNKZW5wNXpCeHY5NjJDbnRndkpjOHN3MXlMRHZDekZ6U2MwCk5WdG13UUtCZ0duUjJJalNIWUZUT0QwS0RNQXF4
Wm4xelZjenZRL21oV3I3Y0FFeFZYZUFkU3hNWG9QZTh5V2wKcmQxRFExdzg1UmhGK0IyRlgrcFFxNlgzbUFobmRqTktEdUUvdlhwS3B5MDA2UEdqL3B1dVRKSVhoRzlncTUxbwpHdHZPblBtZmhpYlhZbXJ5ZU5mdUF4MmJuWnBqbmhzUjI5dm4zSzhuQTcwYm13ZVVTVlhCCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
  etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ5akNDQXQ2Z0F3SUJBZ0lVYUFkc2FYYStsZFZDem83dUM1SE9wNS9zOHU4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU0TVRFeE9UQTJOREl3TUZvWERUSTRNVEV4TmpBMk5ESXdNRm93WHpFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUTB3Q3dZRFZRUURFd1JsZEdOa01JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXc4cytJTFFVeTdvd3d5eGJQbFU3eTVsNVQ4cjEKcjNSOGpUOFZwUStwVFJtRTF2TDBkS2Q1dDVrM3A0bGRsYXJsS1ZYdDdxaG9RUGVXRFFVN3VHUFNJRWs2eERBNQpUZTdiMkRsUmhrV1RQTVVla05NNU5XdkxzazYvN1dJRk9UWTRES1FuK0RIRlRDR2ZPaDBaZDlNeUVSemJoaWxqClc5dFBZYjVqdmdBKzU4TGhvbkhtc0Ryb0tLdWlUN3NKOHFXbjhMWS9JdGJTSXVmK1JIT1RWRGdjcmNzVGJyRmoKRm5ZeFZQRTlDeXJiZ1JzVldid3ZPaENyNmFzVWk1Tnl4bHBXeUhmL3hEVUtnU0J5TjFVdW1QUzJzWnF3ZmxXNgo0ZGtzcEtNQjFKNzN5K01IWkt6MHJvbzBtZVA2eUdZeDA5S1Rrem5haDRpWHNPRnBDdkU4aktJSk13SURBUUFCCm80R2pNSUdnTUE0R0ExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUIKQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBZEJnTlZIUTRFRmdRVXdlY2NsNzMvcS95ZDl6cmRQRWFOSCt2eQpTcjR3SHdZRFZSMGpCQmd3Rm9BVTB3RHQ2N3p1S1grYU8wTldCd0g1N29qdVIzOHdJUVlEVlIwUkJCb3dHSWNFCmZ3QUFBWWNFQ2ljSE00Y0VDaWNITkljRUNpY0hPVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBRXh1SndZRUEKdm9ZYzU0eHEzL3ZkMmtRMWdRUUpPSFhzM0tDNjFZMTNjNFBwTndWR3VQR0xzVlBwY1hVM2lFdTJTZUp5YWVpQQo4UFFKOVNuS0ZNcGY3QjBqSktKRjNQVGJDeE5taHZ2bGpsMVA3WmxYSS8zTmZGTE40d01Cenl5MFBWVmRlQm9oCjdlMXhpUnIra081SGE0bEVEUCtDeW4zWmxpN3dSeFY5ZUNYTDZGeDBuU2x4N0x2dXZiUVo1M1NEUGc4UjQramMKSHZPT0NVbXpOU3RpNXRhcTNTSFNOMWtFVUR6eFFDTGR2MTUxbVNzdU1pMi9GQkFORGRvbHlwZW9XZEswTWJBagpYZFFBSWEzdk5FY2pKVzVxZW9CRFR2TkQxSFV1RzhJbzB0K0twbjBSTS9oelc4R3FJNXFLVGQyRXJCNUlLRW0zCnU0cThMeGxod3MyL2tRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVSUFQN1FDd0VTZGlzL2oxVi9YNmUzd0pxNFZBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU0TVRFd09EQTFORGN3TUZvWERUSXpNVEV3TnpBMU5EY3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbGFVcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeGFiQnl1VGlTaHB5ejR0dHNhdGIKemo1dkhPZ2RhU3hKVExOeGVBOHNBM2RtRnJzN0xScjhTdzBGQkhPQ0taKzZCcVFKb2dPWXVGYXBtWHkxWmNXMgptUW92bnNydDVhSEZGTmNmbndVZGhEa1Q0MG9ZeldTeXRrVnNWTEFPSEZwekVQV1h3TGJvNnI0YndSWkJtRWlNClJBb0JJeFY4dS9PNEV5L1hEM0pRN2wrZEpnSFJZMGlnZzZQeUF0TnJseGdQVDF1VUlhdHRDQUxHK3pDVWJ5ZXYKaW4xYWlqdmZDZVJ3NmtiemZyNisxY3VvbGRTY1lOYnZTK0NJbHBuY0NPdlpURlJxT3BxTU1lOUJ5N0QwcHA1RQpISit6cDV6Qys0YStrbTdnUWVNTHNvT1dnb3RpRU45clNNVFhacDdjZEhJeUVyaEg0WmFDZHk5UDB3Qy9ibEtLCk1RSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVMHdEdDY3enVLWCthTzBOV0J3SDU3b2p1UjM4d0h3WURWUjBqQkJnd0ZvQVUwd0R0Njd6dQpLWCthTzBOV0J3SDU3b2p1UjM4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHRU5lYkZnd2F1eFgrbnBNREwyClloUEFyN0lnd2RvQlh0UUlVUTkvRnRQQWR5Y0pKY3pyOFpOZTE3SUM0TUNBanJleFBjckx4WDA2d1NXQVNnWVYKWktwTU8zeEdDY0FiekhuaHNWaFZoekVjeEVGU1lGZk5ZNkR3OGgyV0d0a2pvRGgwNGRaZ1BSU3RwY1Q0UlFiUgo1WW1vOWl3eWRwVVkxNFd2di9OaDVadHcrU1NKdlRGazVDekljUVZmTEdyZENyRWp3ZVptQjdpdFo5eW82UW1rClRQdHVoNW9HMnUrcUg4L1BBeVgyR1lXQmlScEZHNmdWSmp4UERjZVlBRmU2QVNmb0MzREdMOGFqaVcwRFlhTlUKWTZZdU81QWNIc1lCd0JTU3VJNDZnMzR3K0JtdnBHbzRpTENESW51Qi9lKzlsNmdRbU1nNVMvOTlMOVc1VzlKMQpwbzg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.2.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.253.0.0/18"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.2.4
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: quay.io/calico/kube-controllers:v3.2.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,profile,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

Original article: https://www.cnblogs.com/sunsky303/p/11268289.html

