Deploying an etcd Cluster for a Kubernetes Container Cluster

Install etcd

Binary package download: https://github.com/etcd-io/etcd/releases/tag/v3.2.12

[root@master ~]# GOOGLE_URL=https://storage.googleapis.com/etcd
[root@master ~]# GITHUB_URL=https://github.com/coreos/etcd/releases/download
[root@master ~]# DOWNLOAD_URL=${GOOGLE_URL}
[root@master ~]# ETCD_VER=v3.2.12
[root@master ~]# curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0M  100 10.0M    0     0  2161k      0  0:00:04  0:00:04 --:--:-- 2789k
[root@master ~]# ls /tmp
etcd-v3.2.12-linux-amd64.tar.gz
Extract the archive:
[root@master ~]# tar -zxf /tmp/etcd-v3.2.12-linux-amd64.tar.gz
[root@master ~]# ls
etcd-v3.2.12-linux-amd64
Create the cluster deployment directories:
[root@master ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@master ~]# tree /opt/kubernetes
/opt/kubernetes
├── bin
├── cfg
└── ssl
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin
[root@master ~]# ls /opt/kubernetes/bin
etcd  etcdctl
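As an optional sanity check that the binaries run on this host:
[root@master ~]# /opt/kubernetes/bin/etcd --version
[root@master ~]# /opt/kubernetes/bin/etcdctl --version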
Add the configuration file:
[root@master ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
# etcd member name
ETCD_NAME="etcd03"
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# peer (cluster traffic) listen URL
ETCD_LISTEN_PEER_URLS="https://192.168.238.130:2380"
# client (data traffic) listen URL
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.130:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.130:2379"
# cluster member list
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@master ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTENT_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER} --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
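After placing or editing a unit file, make systemd re-read it (standard systemd practice):
[root@master ~]# systemctl daemon-reload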

Copy the certificates into the target directory:
[root@master ~]# cp ssl/server*pem ssl/ca*pem /opt/kubernetes/ssl/
[root@master ~]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
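An optional check that the server certificate is in place and not expired (a quick sketch using openssl):
[root@master ~]# openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -subject -dates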
Start etcd:
[root@master ~]# systemctl start etcd
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.
Startup failed, so check the logs:
[root@master ~]# journalctl -u etcd
-- Logs begin at Tue 2019-07-02 17:22:07 EDT, end at Tue 2019-07-02 17:58:00 EDT. --
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
Jul 02 17:57:59 master etcd[8172]: invalid value ",http://127.0.0.1:2379" for flag -listen-
Jul 02 17:57:59 master etcd[8172]: usage: etcd [flags]
Jul 02 17:57:59 master etcd[8172]: start an etcd server
Jul 02 17:57:59 master etcd[8172]: etcd --version
Jul 02 17:57:59 master etcd[8172]: show the version of etcd
Jul 02 17:57:59 master etcd[8172]: etcd -h | --help
Jul 02 17:57:59 master etcd[8172]: show the help information about etcd
Jul 02 17:57:59 master etcd[8172]: etcd --config-file
Jul 02 17:57:59 master etcd[8172]: path to the server configuration file
Jul 02 17:57:59 master etcd[8172]: etcd gateway
Jul 02 17:57:59 master etcd[8172]: run the stateless pass-through etcd TCP connection forwa
Jul 02 17:57:59 master etcd[8172]: etcd grpc-proxy
Jul 02 17:57:59 master etcd[8172]: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 02 17:57:59 master systemd[1]: etcd.service: main process exited, code=exited, status=2
Jul 02 17:57:59 master systemd[1]: Failed to start Etcd Server.
Jul 02 17:57:59 master systemd[1]: Unit etcd.service entered failed state.
Jul 02 17:57:59 master systemd[1]: etcd.service failed.
Jul 02 17:57:59 master systemd[1]: etcd.service holdoff time over, scheduling restart.
Jul 02 17:57:59 master systemd[1]: Stopped Etcd Server.
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
Jul 02 17:57:59 master etcd[8176]: invalid value ",http://127.0.0.1:2379" for flag -listen-
Jul 02 17:57:59 master etcd[8176]: usage: etcd [flags]
Jul 02 17:57:59 master etcd[8176]: start an etcd server
Jul 02 17:57:59 master etcd[8176]: etcd --version
Jul 02 17:57:59 master etcd[8176]: show the version of etcd
Jul 02 17:57:59 master etcd[8176]: etcd -h | --help
Jul 02 17:57:59 master etcd[8176]: show the help information about etcd
Jul 02 17:57:59 master etcd[8176]: etcd --config-file
Jul 02 17:57:59 master etcd[8176]: path to the server configuration file
Jul 02 17:57:59 master etcd[8176]: etcd gateway
Jul 02 17:57:59 master etcd[8176]: run the stateless pass-through etcd TCP connection forwa
Jul 02 17:57:59 master etcd[8176]: etcd grpc-proxy
Jul 02 17:57:59 master etcd[8176]: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 02 17:57:59 master systemd[1]: etcd.service: main process exited, code=exited, status=2
Jul 02 17:57:59 master systemd[1]: Failed to start Etcd Server.
Jul 02 17:57:59 master systemd[1]: Unit etcd.service entered failed state.
Jul 02 17:57:59 master systemd[1]: etcd.service failed.
Jul 02 17:57:59 master systemd[1]: etcd.service holdoff time over, scheduling restart.
Jul 02 17:57:59 master systemd[1]: Stopped Etcd Server.
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...

[root@master ~]# tail -n 20 /var/log/messages
Jul  2 17:58:00 localhost etcd: etcd --version
Jul  2 17:58:00 localhost etcd: show the version of etcd
Jul  2 17:58:00 localhost etcd: etcd -h | --help
Jul  2 17:58:00 localhost etcd: show the help information about etcd
Jul  2 17:58:00 localhost etcd: etcd --config-file
Jul  2 17:58:00 localhost etcd: path to the server configuration file
Jul  2 17:58:00 localhost etcd: etcd gateway
Jul  2 17:58:00 localhost etcd: run the stateless pass-through etcd TCP connection forwarding proxy
Jul  2 17:58:00 localhost etcd: etcd grpc-proxy
Jul  2 17:58:00 localhost etcd: run the stateless etcd v3 gRPC L7 reverse proxy
Jul  2 17:58:00 localhost systemd: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Jul  2 17:58:00 localhost systemd: Failed to start Etcd Server.
Jul  2 17:58:00 localhost systemd: Unit etcd.service entered failed state.
Jul  2 17:58:00 localhost systemd: etcd.service failed.
Jul  2 17:58:00 localhost systemd: etcd.service holdoff time over, scheduling restart.
Jul  2 17:58:00 localhost systemd: Stopped Etcd Server.
Jul  2 17:58:00 localhost systemd: start request repeated too quickly for etcd.service
Jul  2 17:58:00 localhost systemd: Failed to start Etcd Server.
Jul  2 17:58:00 localhost systemd: Unit etcd.service entered failed state.
Jul  2 17:58:00 localhost systemd: etcd.service failed.
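Both logs point at the same root cause: in the unit file above, ${ETCD_LISTENT_CLIENT_URLS} is a typo (the variable defined in /opt/kubernetes/cfg/etcd is ETCD_LISTEN_CLIENT_URLS), so the flag expands to the invalid value ",http://127.0.0.1:2379". While editing, note that the same line also passes ${ETCD_INITIAL_CLUSTER} to --initial-cluster-token instead of ${ETCD_INITIAL_CLUSTER_TOKEN}; etcd still starts with that mistake because every member receives the identical token string (visible in the ps output further down), but the reference should be corrected as well. The fixed fragment of the ExecStart line:

--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN}

After editing the unit file, reload systemd and restart:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart etcd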

[root@master ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: activating (start) since Tue 2019-07-02 18:32:55 EDT; 16s ago
 Main PID: 8138 (etcd)
   Memory: 20.5M
   CGroup: /system.slice/etcd.service
           └─8138 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.130:2380 --listen-client-urls=https://192.168.238.13...

Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 received MsgVoteResp from a7e9807772a004c5 at term 72
Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 72
Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to c858c42725f38881 at term 72
Jul 02 18:33:10 master etcd[8138]: health check for peer 203750a5948d27da could not connect: dial tcp 192.168.238.128:2380: i/o timeout
Jul 02 18:33:10 master etcd[8138]: health check for peer c858c42725f38881 could not connect: dial tcp 192.168.238.129:2380: i/o timeout
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 is starting a new election at term 72
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 became candidate at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 received MsgVoteResp from a7e9807772a004c5 at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to c858c42725f38881 at term 73
[root@master ~]# ps -ef|grep etcd
root       8138      1  0 18:32 ?        00:00:00 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.130:2380 --listen-client-urls=https://192.168.238.130:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.238.130:2379 --initial-advertise-peer-urls=https://192.168.238.130:2380 --initial-cluster=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-token=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root       8147   8085  0 18:34 pts/0    00:00:00 grep --color=auto etcd
etcd on the master is now running, but it keeps starting new elections (hence the health-check timeouts above) because the other two members are not deployed yet; this settles once they join. The master node deployment is complete.
Generate an SSH key pair for passwordless login between nodes:
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1b:b9:49:23:fc:32:64:6f:72:bd:77:d5:98:28:d4:a0 root@master
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|          .      |
|         . o     |
|     .  E.. .    |
|      = S.   . o.|
|     o = B. . o o|
|      + O ..   . |
|       *   .. .  |
|          .. .   |
+-----------------+
[root@master ~]# ls /root/.ssh/
id_rsa  id_rsa.pub
Distribute the public key to each node:
[root@master ~]# ssh-copy-id root@192.168.238.129
The authenticity of host '192.168.238.129 (192.168.238.129)' can't be established.
ECDSA key fingerprint is d2:7e:40:ca:2b:fb:be:53:f3:2c:8c:e7:54:08:3d:d4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.238.129's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.238.129'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id root@192.168.238.128
The authenticity of host '192.168.238.128 (192.168.238.128)' can't be established.
ECDSA key fingerprint is d2:7e:40:ca:2b:fb:be:53:f3:2c:8c:e7:54:08:3d:d4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.238.128's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.238.128'"
and check to make sure that only the key(s) you wanted were added.
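With more nodes, the distribution can be looped (a small sketch; adjust the IP list to your environment):
[root@master ~]# for ip in 192.168.238.128 192.168.238.129; do ssh-copy-id root@$ip; done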
Test passwordless login:
[root@master ~]# ssh root@192.168.238.129
Last login: Tue Jul  2 17:23:09 2019 from 192.168.238.1
[root@node01 ~]# hostname
node01
Create the etcd installation directories on node01:
[root@node01 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
Send the binaries from the master to node01:
[root@master ~]# scp -r /opt/kubernetes/bin/ root@192.168.238.129:/opt/kubernetes/
etcd                                                                                                                                                       100%   17MB  17.0MB/s   00:00
etcdctl                                                                                                                                                    100%   15MB  14.5MB/s   00:01
Verify the files on node01:
[root@node01 ~]# ls /opt/kubernetes/bin/
etcd  etcdctl
Send the configuration file and systemd unit from the master to node01:
[root@master ~]# scp -r /opt/kubernetes/cfg/ root@192.168.238.129:/opt/kubernetes/
etcd
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service root@192.168.238.129:/usr/lib/systemd/system
etcd.service
Verify the files on node01:
[root@node01 ~]# ls /opt/kubernetes/cfg/
etcd
[root@node01 ~]# ll /usr/lib/systemd/system/etcd.service
-rw-r--r-- 1 root root 996 Jul  2 20:55 /usr/lib/systemd/system/etcd.service
Send the certificates from the master to node01:
[root@master ~]# scp -r /opt/kubernetes/ssl/ root@192.168.238.129:/opt/kubernetes/
server-key.pem                                                                                                                                             100% 1675     1.6KB/s   00:00
server.pem                                                                                                                                                 100% 1489     1.5KB/s   00:00
ca-key.pem                                                                                                                                                 100% 1679     1.6KB/s   00:00
ca.pem
Verify the files on node01:
[root@node01 ~]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
Modify the configuration file for node01 (only the member name and local IP change):
[root@node01 ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.238.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.129:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.129:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
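Since only the member name and local IP differ from the master's file, the edit can also be scripted (a sketch assuming GNU sed and that the master's file was copied over unchanged; the ETCD_INITIAL_CLUSTER line must keep all three IPs, so it is excluded from the address substitution):
[root@node01 ~]# sed -i '/ETCD_INITIAL_CLUSTER=/!s/192.168.238.130/192.168.238.129/g; s/"etcd03"/"etcd01"/' /opt/kubernetes/cfg/etcd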
Start etcd on node01:
[root@node01 ~]# systemctl start etcd
[root@node01 ~]# ps -ef|grep etcd
root       8702      1  0 21:01 ?        00:00:00 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.129:2380 --listen-client-urls=https://192.168.238.129:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.238.129:2379 --initial-advertise-peer-urls=https://192.168.238.129:2380 --initial-cluster=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-token=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root       8709   7875  0 21:02 pts/0    00:00:00 grep --color=auto etcd
[root@node01 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: activating (start) since Tue 2019-07-02 21:01:39 EDT; 54s ago
 Main PID: 8702 (etcd)
   Memory: 6.2M
   CGroup: /system.slice/etcd.service
           └─8702 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.129:2380 --listen-client-urls=https://192.168.238.12...

Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 is starting a new election at term 36
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 became candidate at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to a7e9807772a004c5 at term 37
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 is starting a new election at term 37
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 became candidate at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to a7e9807772a004c5 at term 38
The service shows "activating" and keeps electing until a quorum of members is reachable; this is expected with Type=notify and clears once node02 joins. Enable etcd to start at boot:
[root@node01 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Deploy node02 (etcd02, 192.168.238.128) in the same way; a reference configuration follows.
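For reference, node02's /opt/kubernetes/cfg/etcd again changes only the member name and local IP (mirroring the files above):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.238.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.128:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.128:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"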

Check the cluster status

Add the etcd binaries to the PATH:
[root@master ~]# tail -n 1 /etc/profile
PATH=/opt/kubernetes/bin:$PATH
[root@master ~]# source /etc/profile
[root@master ~]# which etcd
/opt/kubernetes/bin/etcd
[root@master ~]# which etcdctl
/opt/kubernetes/bin/etcdctl
[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" cluster-health
cluster may be unhealthy: failed to list members
Error:  client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://192.168.238.130:2379 exceeded header timeout
; error #1: client: endpoint https://192.168.238.128:2379 exceeded header timeout
; error #2: client: endpoint https://192.168.238.129:2379 exceeded header timeout

error #0: client: endpoint https://192.168.238.130:2379 exceeded header timeout
error #1: client: endpoint https://192.168.238.128:2379 exceeded header timeout
error #2: client: endpoint https://192.168.238.129:2379 exceeded header timeout
The timeouts here are most likely caused by firewalld or SELinux blocking traffic between the nodes.
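For a lab setup, the quickest workaround is to disable both on every node (in production, open TCP ports 2379 and 2380 in the firewall instead of disabling it):
systemctl stop firewalld && systemctl disable firewalld
setenforce 0    # temporary; set SELINUX=disabled in /etc/selinux/config to persist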

[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" cluster-health
member 203750a5948d27da is healthy: got healthy result from https://192.168.238.128:2379
member a7e9807772a004c5 is healthy: got healthy result from https://192.168.238.130:2379
member c858c42725f38881 is healthy: got healthy result from https://192.168.238.129:2379
cluster is healthy
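With the cluster healthy, the same TLS flags work for other etcdctl (v2 API) subcommands, for example listing the members:
[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379" member list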

Original article: https://www.cnblogs.com/yinshoucheng-golden/p/11124051.html
