etcd management: certificates, scaling out, migration and recovery, adding nodes with certificates


etcd certificate configuration

Configuring certificates for etcd is essential in production. Without them, a Kubernetes cluster is easily hijacked, for example for cryptomining. The attack is simple: suppose you pull an untrusted image that scans for etcd IPs and ports. Once etcd is found, the attacker bypasses apiserver authentication entirely and writes data straight into etcd, creating Deployments, Pods and so on; the apiserver reads that state and obligingly runs the attacker's workloads. One of our clusters was exploited for mining exactly this way. Security is no small matter: a malicious attacker could just as easily delete all your data, so certificates and regular backups both matter, even with multiple etcd nodes. This article digs into the most important parts of etcd management.

Generating the certificates

Install cfssl:

mkdir ~/bin
curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x ~/bin/{cfssl,cfssljson}
export PATH=$PATH:~/bin
mkdir ~/cfssl
cd ~/cfssl

Write the following JSON files, replacing the IPs with your own:

# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
# cat ca-csr.json
{
    "CN": "My own CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "O": "My Company Name",
            "ST": "San Francisco",
            "OU": "Org Unit 1",
            "OU": "Org Unit 2"
        }
    ]
}
# cat server.json
{
    "CN": "etcd0",
    "hosts": [
        "127.0.0.1",
        "0.0.0.0",
        "10.1.86.201",
        "10.1.86.203",
        "10.1.86.202"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

# cat member1.json   # use this machine's own IP in hosts
{
    "CN": "etcd0",
    "hosts": [
        "10.1.86.201"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

# cat client.json
{
    "CN": "client",
    "hosts": [
       ""
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
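Before copying the files anywhere, it is worth sanity-checking what cfssl produced; a minimal sketch, assuming openssl is installed and you are still in ~/cfssl:

```shell
# Each leaf certificate must verify against the CA generated above.
for c in server member1 client; do
  openssl verify -CAfile ca.pem "${c}.pem"
done

# The server certificate must list every etcd IP as a SAN.
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
```

A missing SAN here means the server.json hosts list was wrong, and clients will later fail TLS verification against that node.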

Starting etcd

Copy the ~/cfssl directory to /etc/kubernetes/pki/cfssl:

# cat etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.201:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.201:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.201:2379
    - --listen-peer-urls=https://10.1.86.201:2380
    - --name=etcd0
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
   #livenessProbe:
   #  exec:
   #    command:
   #    - /bin/sh
   #    - -ec
   #    - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.pem
   #      --cert=/etc/kubernetes/pki/etcd/client.pem --key=/etc/kubernetes/pki/etcd/client-key.pem
   #      get foo
   #  failureThreshold: 8
   #  initialDelaySeconds: 15
   #  timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Exec into the etcd container and run:

alias etcdv3="ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/client.pem --key=/etc/kubernetes/pki/etcd/client-key.pem"
etcdv3 member add etcd1 --peer-urls="https://10.1.86.202:2380"
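After the add, confirm the membership from the same shell (a sketch using the etcdv3 alias defined above; the new member stays unstarted until etcd1 actually comes up with --initial-cluster-state=existing):

```shell
# etcd1 should appear with its peer URL; its client URL stays empty
# until the new node has joined.
etcdv3 member list
```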

Adding a node

Copy the certificates from etcd0 (10.1.86.201) to etcd1 (10.1.86.202).
Then edit member1.json:

{
    "CN": "etcd1",
    "hosts": [
        "10.1.86.202"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

Regenerate the member1 certificate on etcd1:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1

Start etcd1:

# cat etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.202:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.202:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380,etcd1=https://10.1.86.202:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.202:2379
    - --listen-peer-urls=https://10.1.86.202:2380
    - --name=etcd1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --initial-cluster-state=existing  # never wrap the value in double quotes; the literal quotes reach etcd and break startup
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
  # livenessProbe:
  #   exec:
  #     command:
  #     - /bin/sh
  #     - -ec
  #     - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.202]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
  #       --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
  #       get foo
  #   failureThreshold: 8
  #   initialDelaySeconds: 15
  #   timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Or test it first with a plain docker run:

docker run --net=host -v /etc/kubernetes/pki/cfssl:/etc/kubernetes/pki/etcd k8s.gcr.io/etcd-amd64:3.2.18 etcd --advertise-client-urls=https://10.1.86.202:2379 --cert-file=/etc/kubernetes/pki/etcd/server.pem --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://10.1.86.202:2380 --initial-cluster=etcd0=https://10.1.86.201:2380,etcd1=https://10.1.86.202:2380 --key-file=/etc/kubernetes/pki/etcd/server-key.pem  --listen-client-urls=https://10.1.86.202:2379 --listen-peer-urls=https://10.1.86.202:2380 --name=etcd1 --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem --peer-client-cert-auth=true --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem --initial-cluster-state="existing"

Check cluster health on etcd0:

# etcdctl --endpoints=https://[10.1.86.201]:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/client.pem --key-file=/etc/kubernetes/pki/etcd/client-key.pem cluster-health
member 5856099674401300 is healthy: got healthy result from https://10.1.86.201:2379
member df99f445ac908d15 is healthy: got healthy result from https://10.1.86.202:2379
cluster is healthy

Adding etcd2 works the same way, omitted here.

apiserver etcd certificate configuration:

- --etcd-cafile=/etc/kubernetes/pki/cfssl/ca.pem
- --etcd-certfile=/etc/kubernetes/pki/cfssl/client.pem
- --etcd-keyfile=/etc/kubernetes/pki/cfssl/client-key.pem

Snapshots and scaling out

Restoring etcd from a snapshot

Note: on a cluster with certificates enabled, every command below must carry the certificate flags, or the endpoint cannot be reached:

--cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key

endpoints defaults to 127.0.0.1:2379; to target a remote etcd, specify it explicitly:

--endpoints 172.16.154.81:2379
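To avoid repeating the flags on every invocation, recent etcdctl v3 releases also read each flag from a matching ETCDCTL_* environment variable; a sketch (the endpoint and file paths are the example values from this article):

```shell
# etcdctl v3 maps each command-line flag to an ETCDCTL_* variable,
# so the TLS options need not be repeated per command.
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://172.16.154.81:2379
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key
# From here on, a plain "etcdctl snapshot save snapshot.db" picks these up.
```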

1. Take a data snapshot

ETCDCTL_API=3 etcdctl snapshot save snapshot.db

2. Restore the data from the snapshot

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir=/var/lib/etcd/

3. Start the new etcd node with --data-dir=/var/lib/etcd/
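Put together, the migration might look like the following sketch (the endpoint, certificate paths and the 172.16.154.82 target are the example values from this article; the restore must run before etcd is started on the new node):

```shell
# 1. On the old node: take a snapshot over TLS.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.154.81:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  snapshot save snapshot.db

# 2. Ship the snapshot to the new node.
scp snapshot.db root@172.16.154.82:/root/

# 3. On the new node: restore into the directory etcd will use.
#    (The target --data-dir must not already exist.)
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir=/var/lib/etcd/
```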

Scaling out the etcd cluster

Node    IP             Notes
infra0  172.16.154.81  initial node; the k8s master, where kubeadm's single-node etcd runs
infra1  172.16.154.82  node to be added; a k8s worker node
infra2  172.16.154.83  node to be added; a k8s worker node

1. Take a snapshot on the initial etcd node

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --endpoints=https://127.0.0.1:2379 snapshot save snapshot.db

2. Copy snapshot.db to the infra1 node and restore the data

Restore command:

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir=/var/lib/etcd/

Note: running this requires etcdctl on the machine.

On success, the snapshot data is written to /var/lib/etcd.

3. Start etcd on infra1
Drop the following yaml into /etc/kubernetes/manifests (note that the restored member keeps the name infra0 even though it now runs on 172.16.154.82):

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: etcd
    tier: control-plane
  name: etcd-172.16.154.82
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --name=infra0
    - --initial-advertise-peer-urls=http://172.16.154.82:2380
    - --listen-peer-urls=http://172.16.154.82:2380
    - --listen-client-urls=http://172.16.154.82:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://172.16.154.82:2379
    - --data-dir=/var/lib/etcd
    - --initial-cluster-token=etcd-cluster-1
    - --initial-cluster=infra0=http://172.16.154.82:2380
    - --initial-cluster-state=new
    image: hub.xfyun.cn/k8s/etcd-amd64:3.1.12
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  hostNetwork: true
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data

4. Join infra2 to the cluster
Run the following inside the etcd container on infra1:

ETCDCTL_API=3 etcdctl member add infra2 --peer-urls="http://172.16.154.83:2380"

Drop the following yaml into /etc/kubernetes/manifests; kubelet will start the etcd container:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: etcd
    tier: control-plane
  name: etcd-172.16.154.83
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --name=infra2
    - --initial-advertise-peer-urls=http://172.16.154.83:2380
    - --listen-peer-urls=http://172.16.154.83:2380
    - --listen-client-urls=http://172.16.154.83:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://172.16.154.83:2379
    - --data-dir=/var/lib/etcd
    - --initial-cluster-token=etcd-cluster-1
    - --initial-cluster=infra0=http://172.16.154.82:2380,infra2=http://172.16.154.83:2380
    - --initial-cluster-state=existing
    image: hub.xfyun.cn/k8s/etcd-amd64:3.1.12
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  hostNetwork: true
  volumes:
  - hostPath:
      path: /home/etcd
      type: DirectoryOrCreate
    name: etcd-data

To rejoin infra0 to the new cluster, repeat the steps above; before joining, delete its old data under /var/lib/etcd/.

Hands-on: adding etcd nodes to a kubeadm single-etcd cluster

Environment

10.1.86.201  etcd0, the existing single-node etcd

10.1.86.202  etcd1, node to be added

10.1.86.203  etcd2, node to be added

Installing k8s

First bring up k8s on the etcd0 node, using the sealyun offline package and its three-step install; details omitted here.


Generate the certificates as described above and copy them into place:

cp -r cfssl/ /etc/kubernetes/pki/

Adjust the etcd config:

cd /etc/kubernetes/manifests/
mv etcd.yaml ..   # don't edit it in place, or kubelet may pick up the editor's swap file
vim ../etcd.yaml

Inside vim, globally replace 127.0.0.1 with the node IP:

:%s/127.0.0.1/10.1.86.201/g

Comment out the liveness probe; otherwise the health check will kill etcd0 while members are being added:

#   livenessProbe:
#     exec:
#       command:
#       - /bin/sh
#       - -ec
#       - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
#         --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
#         get foo
#     failureThreshold: 8
#     initialDelaySeconds: 15
#     timeoutSeconds: 15

Point the certificate volume at the cfssl directory:

  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs

With every change applied, the file looks like this:

# cat ../etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.201:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.201:2380
    - --initial-cluster=dev-86-201=https://10.1.86.201:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.201:2379
    - --listen-peer-urls=https://10.1.86.201:2380
    - --name=dev-86-201
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
#   livenessProbe:
#     exec:
#       command:
#       - /bin/sh
#       - -ec
#       - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
#         --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
#         get foo
#     failureThreshold: 8
#     initialDelaySeconds: 15
#     timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

Start etcd by moving the yaml file back:

mv ../etcd.yaml .

Update the apiserver flags:

mv kube-apiserver.yaml ..
vim ../kube-apiserver.yaml
    - --etcd-cafile=/etc/kubernetes/pki/cfssl/ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/cfssl/client.pem
    - --etcd-keyfile=/etc/kubernetes/pki/cfssl/client-key.pem
    - --etcd-servers=https://10.1.86.201:2379

Start the apiserver again:

mv ../kube-apiserver.yaml .

Verify:

kubectl get pod -n kube-system  # a normal pod listing means success

That completes the work on etcd0.

Add the new member; exec into the etcd container:

# docker exec -it a7001397e1e5 sh
/ # alias etcdv3="ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/client.pem --key=/etc/kubernetes/pki/etcd/client-key.pem"
/ # etcdv3 member update a874c87fd42044f  --peer-urls="https://10.1.86.201:2380" # updating the peer URL first is essential
/ # etcdv3 member add etcd1 --peer-urls="https://10.1.86.202:2380"
Member 20c2a99381581958 added to cluster c9be114fc2da2776

ETCD_NAME="etcd1"
ETCD_INITIAL_CLUSTER="dev-86-201=https://127.0.0.1:2380,etcd1=https://10.1.86.202:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

/ # alias etcdv2="ETCDCTL_API=2 etcdctl --endpoints=https://[10.1.86.201]:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/client.pem --key-file=/etc/kubernetes/pki/etcd/client-key.pem"
/ # etcdv2 cluster-health

Adding the etcd member on etcd1

Likewise, install k8s on etcd1 (10.1.86.202) first, just as on etcd0.

Copy etcd0's cfssl certificate directory over for reuse:

scp -r root@10.1.86.201:/etc/kubernetes/pki/cfssl /etc/kubernetes/pki

Edit member1.json:

# cat member1.json
{
    "CN": "etcd1",      # change the CN
    "hosts": [
        "10.1.86.202"   # most importantly, use this node's own IP
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

Regenerate the member1 certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1

Inspect the certificate:

openssl x509 -in member1.pem -text -noout
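Beyond eyeballing the full dump, you can check just the bits that matter; a sketch, assuming the file names above:

```shell
# The regenerated certificate must carry this node's IP as a SAN
# and must chain back to the shared CA.
openssl x509 -in member1.pem -noout -text | grep 'IP Address:10.1.86.202'
openssl verify -CAfile ca.pem member1.pem
```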

Adjust the etcd config on etcd1:

mv etcd.yaml ..
rm /var/lib/etcd/ -rf # this node must sync etcd0's data over the wire, so wipe its own stale data first
vim ../etcd.yaml

The edited yaml:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.202:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.202:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380,etcd1=https://10.1.86.202:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.202:2379
    - --listen-peer-urls=https://10.1.86.202:2380
    - --name=etcd1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --initial-cluster-state=existing  # never wrap the value in double quotes; the literal quotes reach etcd and break startup
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
  # livenessProbe:
  #   exec:
  #     command:
  #     - /bin/sh
  #     - -ec
  #     - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.202]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
  #       --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
  #       get foo
  #   failureThreshold: 8
  #   initialDelaySeconds: 15
  #   timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Inside the container, the cluster now reports healthy:

/ # alias etcdv2="ETCDCTL_API=2 etcdctl --endpoints=https://[10.1.86.201]:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/client.pem --key-file=/etc/kubernetes/pki/etcd/client-key.pem"
/ # etcdv2 cluster-health
member a874c87fd42044f is healthy: got healthy result from https://10.1.86.201:2379
member bbbbf223ec75e000 is healthy: got healthy result from https://10.1.86.202:2379
cluster is healthy

Now add etcd1 to the apiserver's etcd endpoints:

    - --etcd-servers=https://10.1.86.201:2379,https://10.1.86.202:2379

The third node is added the same way as the second, not repeated here.

The devil is in the details: a single wrong port or IP leads to all kinds of errors, as do small steps like wiping the data dir on a new node.
Done!

Original article: http://blog.51cto.com/fangnux/2159992

CA证书的配置 Ubuntu上CA证书的配置可以通过工具ca-certificates来方便的进行.该工具默认是随Ubuntu安装的,如果没有可以通过下面的命令来安装: sudo apt-get install ca-certificates 需要安装CA证书我们只需要将其放在"/usr/share/ca-certificates"目录或其子目录下,ca-certificates工具就能自动扫描到.为了不与其它根证书混淆,我们创建一个子目录名为"extra": su