Deploying an etcd + Calico Cluster

Single-node etcd

Set an environment variable:

export HostIP="192.168.12.50"

Run the following command to open etcd's client ports (4001 and 2379) and its peer port (2380). If this is the first time you run it, Docker will pull the official etcd image:

docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd \
-name etcd0 \
-advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-initial-advertise-peer-urls http://${HostIP}:2380 \
-listen-peer-urls http://0.0.0.0:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster etcd0=http://${HostIP}:2380 \
-initial-cluster-state new
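
To confirm the container actually came up before moving on, the standard Docker CLI is enough:

docker ps --filter name=etcd
docker logs etcd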

Use either of the two client ports above to check the member list:

curl -L http://127.0.0.1:2379/v2/members
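
As an optional sanity check, write a key and read it back through the v2 keys API (the key name message is arbitrary):

curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="hello"
curl -L http://127.0.0.1:2379/v2/keys/message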

Multi-node etcd cluster

Configuring a multi-node etcd cluster is similar to the single-node case; the main difference is the -initial-cluster parameter, which lists the peer URL of every member:

Run the following on node 01 (note that -listen-client-urls must include port 4001, since that port is both published and advertised):

docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --restart=always --name etcd quay.io/coreos/etcd \
-name etcd01 \
-advertise-client-urls http://192.168.73.140:2379,http://192.168.73.140:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-initial-advertise-peer-urls http://192.168.73.140:2380 \
-listen-peer-urls http://0.0.0.0:2380 \
-initial-cluster-token etcd-cluster \
-initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380" \
-initial-cluster-state new

Run the same on node 02, changing only the node name and its own URLs:

docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --restart=always --name etcd quay.io/coreos/etcd \
-name etcd02 \
-advertise-client-urls http://192.168.73.137:2379,http://192.168.73.137:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-initial-advertise-peer-urls http://192.168.73.137:2380 \
-listen-peer-urls http://0.0.0.0:2380 \
-initial-cluster-token etcd-cluster \
-initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380" \
-initial-cluster-state new

Check cluster connectivity by running the following on each node:

curl -L http://127.0.0.1:2379/v2/members

If everything is healthy, you will see both members, and the output should be identical on every node:

{"members":[{"id":"2bd5fcc327f74dd5","name":"etcd01","peerURLs":["http://192.168.73.140:2380"],"clientURLs":["http://192.168.73.140:2379","http://192.168.73.140:4001"]},{"id":"c8a9cac165026b12","name":"etcd02","peerURLs":["http://192.168.73.137:2380"],"clientURLs":["http://192.168.73.137:2379","http://192.168.73.137:4001"]}]}

Expanding the etcd cluster

On any existing etcd node, register the new member with the cluster (the peer URL must match the one the new node will advertise):

curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.168.73.150:2380"]}'
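
Conversely, a member can be removed through the same API by ID (the ID below is taken from the member listing shown earlier and is purely illustrative):

curl http://127.0.0.1:2379/v2/members/2bd5fcc327f74dd5 -XDELETE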

Start the etcd container on the new node. The differences from the earlier commands are the node name (etcd03), the new node's own URLs, the expanded -initial-cluster list, and -initial-cluster-state existing:

docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --restart=always --name etcd quay.io/coreos/etcd \
-name etcd03 \
-advertise-client-urls http://192.168.73.150:2379,http://192.168.73.150:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-initial-advertise-peer-urls http://192.168.73.150:2380 \
-listen-peer-urls http://0.0.0.0:2380 \
-initial-cluster-token etcd-cluster \
-initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380,etcd03=http://192.168.73.150:2380" \
-initial-cluster-state existing

Run a health check from any node:

[root@docker01 ~]# etcdctl cluster-health
member 2bd5fcc327f74dd5 is healthy: got healthy result from http://192.168.73.140:2379
member c8a9cac165026b12 is healthy: got healthy result from http://192.168.73.137:2379
cluster is healthy
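
etcdctl can also list the members directly, equivalent to the /v2/members call above:

etcdctl member list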

Deploying Calico

Now download calicoctl onto the physical hosts; the download page is:

https://github.com/projectcalico/calico-containers/releases

and copy the downloaded calicoctl binary to /usr/local/bin.
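
A minimal sketch of the download, assuming the v0.18.0 release to match the calico/node image used below (adjust the version to whatever you actually downloaded):

wget https://github.com/projectcalico/calico-containers/releases/download/v0.18.0/calicoctl
chmod +x calicoctl
mv calicoctl /usr/local/bin/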

On the first etcd node, run:

[root@docker01 ~]# calicoctl node  # the first run pulls the calico/node image from the registry and starts it
Running Docker container with the following command:

docker run -d --restart=always --net=host --privileged --name=calico-node -e HOSTNAME=docker01 -e IP= -e IP6= -e CALICO_NETWORKING=true -e AS= -e NO_DEFAULT_POOLS= -e ETCD_AUTHORITY=127.0.0.1:2379 -e ETCD_SCHEME=http -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico calico/node:v0.18.0

Calico node is running with id: 60b284221a94b418509f86d3c8d7073e11ab3c2a3ca17e4efd2568e97791ff33
Waiting for successful startup
No IP provided. Using detected IP: 192.168.73.140
Calico node started successfully

On the second etcd node, run:

[root@docker02 ~]# calicoctl node  # again, the first run pulls the calico/node image
Running Docker container with the following command:

docker run -d --restart=always --net=host --privileged --name=calico-node -e HOSTNAME=docker02 -e IP= -e IP6= -e CALICO_NETWORKING=true -e AS= -e NO_DEFAULT_POOLS= -e ETCD_AUTHORITY=127.0.0.1:2379 -e ETCD_SCHEME=http -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico calico/node:v0.18.0

Calico node is running with id: 72e7213852e529a3588249d85f904e38a92d671add3cdfe5493687aab129f5e2
Waiting for successful startup
No IP provided. Using detected IP: 192.168.73.137
Calico node started successfully
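
calicoctl finds etcd through the ETCD_AUTHORITY environment variable, visible in the docker run lines above; it defaults to 127.0.0.1:2379, which is why no extra configuration was needed here. If etcd lives elsewhere, export it first:

export ETCD_AUTHORITY=192.168.73.140:2379
calicoctl node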

On any Calico node, configure the IP address pool:

[root@docker01 ~]# calicoctl pool remove 192.168.0.0/16  # remove the default pool
[root@docker01 ~]# calicoctl pool add 10.0.238.0/24 --nat-outgoing --ipip  # add a new pool; --ipip enables container-to-container traffic between hosts on different subnets, --nat-outgoing lets containers reach external networks
[root@docker01 ~]# calicoctl pool show  # verify the result

On any Calico node, check Calico's status:

[root@docker01 ~]# calicoctl status
calico-node container is running. Status: Up 3 hours
Running felix version 1.4.0rc1

IPv4 BGP status
IP: 192.168.73.140    AS Number: 64511 (inherited)
+----------------+-------------------+-------+----------+-------------+
|  Peer address  |     Peer type     | State |  Since   |     Info    |
+----------------+-------------------+-------+----------+-------------+
| 192.168.73.137 | node-to-node mesh |   up  | 09:18:51 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 address configured.

Configuring Docker container networking

Start one container on each of the two nodes, without any network driver (--net none); Calico will configure the network afterwards:

[root@docker01 ~]# docker run --name test01 -itd --log-driver none --net none daocloud.io/library/centos:6.6 /bin/bash
[root@docker02 ~]# docker run --name test02 -itd --log-driver none --net none daocloud.io/library/centos:6.6 /bin/bash

On any Calico node, create a Calico profile:

[root@docker01 ~]# calicoctl profile add starboss

On each Calico node, add the containers that need to reach each other to the same profile:

[root@docker01 ~]# calicoctl container test01 profile set starboss
Profile(s) set to starboss.
[root@docker02 ~]# calicoctl container test02 profile set starboss
Profile(s) set to starboss.
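
To list the profiles Calico now knows about, this calicoctl generation also has a profile listing command:

calicoctl profile show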

Manually assign an IP to each container through Calico; the address must fall within the Calico pool configured above:

[root@docker01 ~]# calicoctl container add test01 10.0.238.10
IP 10.0.238.10 added to test01
[root@docker02 ~]# calicoctl container add test02 10.0.238.11
IP 10.0.238.11 added to test02
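
To confirm the address was actually plumbed into a container's network namespace, standard iproute2 tooling inside the image works:

docker exec test01 ip addr show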

On any node, inspect the configured endpoints:

[root@docker01 ~]# calicoctl endpoint show --detailed
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
| Hostname | Orchestrator ID |                           Workload ID                            |           Endpoint ID            |    Addresses    |        MAC        | Profiles | State  |
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
| docker01 |      docker     | 8f935b0441739f52334e9f16099a2b52e2c982e3aef3190e02dd7ce67e61a853 | 75b0e79a022211e6975c000c29308ed8 | 10.0.238.10/32  | 1e:14:2d:bf:51:f5 | starboss | active |
| docker02 |      docker     | 3d0a8f39753537592f3e38d7604b0b6312039f3bf57cf13d91e953e7e058263e | 8efb263e022211e6a180000c295008af | 10.0.238.11/32  | ee:2b:c2:5e:b6:c5 | starboss | active |
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+

To test, ping the container on the other host from container test01:

[root@docker01 ~]# docker exec test01 ping 10.0.238.11
PING 10.0.238.11 (10.0.238.11) 56(84) bytes of data.
64 bytes from 10.0.238.11: icmp_seq=1 ttl=62 time=0.557 ms
64 bytes from 10.0.238.11: icmp_seq=2 ttl=62 time=0.603 ms
64 bytes from 10.0.238.11: icmp_seq=3 ttl=62 time=0.656 ms
64 bytes from 10.0.238.11: icmp_seq=4 ttl=62 time=0.386 ms
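
And in the opposite direction, bounding the run with -c:

[root@docker02 ~]# docker exec test02 ping -c 4 10.0.238.10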

  
