Kubernetes Installation and Cluster Deployment Example

### System environment preparation (CentOS 7.2):

a) # systemctl disable firewalld

b) # sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

c) # yum -y update && reboot

d) # yum -y install ntpdate && ntpdate cn.pool.ntp.org
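Note that the SELinux change made by the sed command in step b) only takes effect after the reboot in step c); to stop enforcement in the running session as well, you can additionally run:

# setenforce 0

Running getenforce afterwards should report Permissive (or Disabled after the reboot).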

Master : 192.168.11.10

node1 : 192.168.11.20

node2 : 192.168.11.30

Download and install:

etcd: https://github.com/coreos/etcd/releases

flannel: https://github.com/coreos/flannel/releases

kubernetes: https://github.com/kubernetes/kubernetes/releases

docker: https://docs.docker.com/engine/installation/linux/centos/
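The commands below assume each archive has been unpacked under /opt/source and renamed so that /opt/source/etcd/etcd, /opt/source/flannel/flanneld and /opt/source/kubernetes/server/bin/ all exist. A minimal sketch for etcd (the version in the file name is only a placeholder; flannel and kubernetes are laid out the same way):

# mkdir -p /opt/source && cd /opt/source

# tar xzf etcd-vX.Y.Z-linux-amd64.tar.gz

# mv etcd-vX.Y.Z-linux-amd64 etcd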

Add the following host entries to /etc/hosts on every node:

192.168.11.10 master hub.jevic.io

192.168.11.20 node1

192.168.11.30 node2
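One way to write these entries on each machine is a small heredoc (a sketch; skip any lines your /etc/hosts already contains):

# cat >> /etc/hosts <<EOF
192.168.11.10 master hub.jevic.io
192.168.11.20 node1
192.168.11.30 node2
EOF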

----------------------------------------------------

Master configuration: 192.168.11.10

[root@master master]# ls /opt/source

flannel  etcd  kubernetes

[root@master master]# ln -s /opt/source/etcd/etcd /usr/local/bin

[root@master master]# ln -s /opt/source/etcd/etcdctl /usr/local/bin

[root@master master]# ln -s /opt/source/flannel/flanneld /usr/local/bin

[root@master master]# ln -s /opt/source/kubernetes/server/bin/kube-apiserver /usr/local/bin

[root@master master]# ln -s /opt/source/kubernetes/server/bin/kube-controller-manager /usr/local/bin

[root@master master]# ln -s /opt/source/kubernetes/server/bin/kubectl /usr/local/bin

[root@master master]# ln -s /opt/source/kubernetes/server/bin/kube-scheduler /usr/local/bin

[root@master master]# mkdir /var/log/{flanneld,kubernetes}

[root@master master]# nohup etcd --name etcd10 --data-dir /var/lib/etcd \

--listen-client-urls http://0.0.0.0:2378,http://0.0.0.0:4001 \

--advertise-client-urls http://0.0.0.0:2378,http://0.0.0.0:4001 >> /var/log/etcd.log 2>&1 &

[root@master master]# nohup flanneld --listen=0.0.0.0:8888 >> /var/log/flanneld/flanneld.log 2>&1 &

[root@master master]# etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
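As a quick sanity check that etcd is serving on port 4001 and that the flannel configuration was stored, reading the key back should return exactly the JSON just written:

[root@master master]# etcdctl get /coreos.com/network/config

{ "Network": "10.1.0.0/16" }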

--- Next, run steps a-g (see the node sections below) on node1 and node2 ---

Finally, start the Kubernetes services on the master:

[root@master master]# nohup kube-apiserver --logtostderr=true \

--v=0 --etcd_servers=http://0.0.0.0:2378 \

--insecure-bind-address=0.0.0.0 \

--insecure-port=8080 \

--service-cluster-ip-range=10.254.0.0/16 >> /var/log/kubernetes/kube-apiserver.log 2>&1 &

[root@master master]# nohup kube-controller-manager --logtostderr=true --v=0 --master=http://0.0.0.0:8080 >> /var/log/kubernetes/controller.log 2>&1 &

[root@master master]# nohup kube-scheduler --logtostderr=true --v=0 --master=http://0.0.0.0:8080 >> /var/log/kubernetes/scheduler.log 2>&1 &
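Before looking at the nodes, it is worth confirming that the scheduler and controller-manager registered with the API server; with the kubectl versions of this era the componentstatuses resource does this (etcd should also be listed as Healthy):

[root@master master]# kubectl -s http://127.0.0.1:8080 get componentstatuses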

Check that the nodes have joined the cluster:

[root@master master]# kubectl get nodes

NAME       STATUS    AGE

node1      Ready     1h

node2      Ready     39m

----------------------------------------------------

Node configuration:

[node1: 192.168.11.20]

[root@node1 node1]# ls /opt/source

flannel   kubernetes

[root@node1 node1]# ln -s /opt/source/etcd/etcd /usr/local/bin

[root@node1 node1]# ln -s /opt/source/etcd/etcdctl /usr/local/bin

[root@node1 node1]# ln -s /opt/source/flannel/flanneld /usr/local/bin

[root@node1 node1]# ln -s /opt/source/kubernetes/server/bin/kubelet /usr/local/bin

[root@node1 node1]# ln -s /opt/source/kubernetes/server/bin/kube-proxy /usr/local/bin

a. [root@node1 node1]# mkdir /var/log/{flanneld,kubernetes}

b. [root@node1 node1]# nohup flanneld -etcd-endpoints=http://192.168.11.10:4001 -remote=192.168.11.10:8888 >> /var/log/flanneld/flanneld.log 2>&1 &

c. [root@node1 node1]# source /run/flannel/subnet.env

d. [root@node1 node1]# cat /run/flannel/subnet.env

FLANNEL_NETWORK=10.1.0.0/16

FLANNEL_SUBNET=10.1.62.1/24

FLANNEL_MTU=1472

FLANNEL_IPMASQ=false

e. [root@node1 node1]# grep "bip" /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd --bip=10.1.62.1/24 --mtu=1472

f. [root@node1 node1]# systemctl daemon-reload && systemctl start docker

g. [root@node1 node1]# ip a | egrep "docker|flan"

3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500

inet 10.1.62.0/16 scope global flannel0

4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

inet 10.1.62.1/24 scope global docker0
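The --bip and --mtu values shown in step e come straight from the variables exported in /run/flannel/subnet.env (step c). As a sketch of patching the unit file automatically rather than editing it by hand, assuming the stock unit still contains a plain ExecStart=/usr/bin/dockerd line (check the file before and after):

[root@node1 node1]# source /run/flannel/subnet.env

[root@node1 node1]# sed -i "s|^ExecStart=/usr/bin/dockerd.*|ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|" /lib/systemd/system/docker.service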

Finally, start the Kubernetes node services:

[root@node1 node1]# nohup kubelet --address=0.0.0.0 \

--port=10250 --logtostderr=true --v=0 \

--api-servers=http://192.168.11.10:8080 >> /var/log/kubernetes/kubelet.log 2>&1 &

[root@node1 node1]# nohup kube-proxy --logtostderr=true --v=0 --master=http://192.168.11.10:8080 >> /var/log/kubernetes/proxy.log 2>&1 &

----------------------------------------------------

[node2: 192.168.11.30]

[root@node2 node2]# ls /opt/source

flannel   kubernetes

[root@node2 node2]# ln -s /opt/source/etcd/etcd /usr/local/bin

[root@node2 node2]# ln -s /opt/source/etcd/etcdctl /usr/local/bin

[root@node2 node2]# ln -s /opt/source/flannel/flanneld /usr/local/bin

[root@node2 node2]# ln -s /opt/source/kubernetes/server/bin/kubelet /usr/local/bin

[root@node2 node2]# ln -s /opt/source/kubernetes/server/bin/kube-proxy /usr/local/bin

a. [root@node2 node2]# mkdir /var/log/{flanneld,kubernetes}

b. [root@node2 node2]# nohup flanneld -etcd-endpoints=http://192.168.11.10:4001 -remote=192.168.11.10:8888 >> /var/log/flanneld/flanneld.log 2>&1 &

c. [root@node2 node2]# source /run/flannel/subnet.env

d. [root@node2 node2]# cat /run/flannel/subnet.env

FLANNEL_NETWORK=10.1.0.0/16

FLANNEL_SUBNET=10.1.77.1/24

FLANNEL_MTU=1472

FLANNEL_IPMASQ=false

e. [root@node2 node2]# grep "bip" /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd --bip=10.1.77.1/24 --mtu=1472

f. [root@node2 node2]# systemctl daemon-reload && systemctl start docker

g. [root@node2 node2]# ip a | egrep "docker|flan"

3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500

inet 10.1.77.0/16 scope global flannel0

4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

inet 10.1.77.1/24 scope global docker0

Finally, start the Kubernetes node services:

[root@node2 node2]# nohup kubelet --address=0.0.0.0 \

--port=10250 --logtostderr=true --v=0 \

--api-servers=http://192.168.11.10:8080 >> /var/log/kubernetes/kubelet.log 2>&1 &

[root@node2 node2]# nohup kube-proxy --logtostderr=true --v=0 --master=http://192.168.11.10:8080 >> /var/log/kubernetes/proxy.log 2>&1 &
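With both nodes Ready, a simple end-to-end check from the master is to start a couple of pods and confirm they are scheduled across the nodes (a sketch; the nginx image is only an example and must be pullable from the nodes):

[root@master master]# kubectl run nginx --image=nginx --replicas=2

[root@master master]# kubectl get pods -o wide

Pods landing on node1 and node2 (and, where the client shows pod IPs, addresses from the 10.1.0.0/16 flannel range) indicate that scheduling and the overlay network are working.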
