Kubernetes 1.10.1 Deployment

Kubernetes components

Master components:

kube-apiserver
The Kubernetes API server is the cluster's unified entry point and the coordinator of all its components. It exposes its services over an HTTP API; every create, update, delete, and watch operation on object resources is handled by the API server before being committed to storage.

kube-controller-manager
Handles the routine background tasks of the cluster. Each resource has a corresponding controller, and the ControllerManager is responsible for managing all of these controllers.

kube-scheduler
Selects a Node for each newly created Pod according to the scheduling algorithm.

Node components:

kubelet
The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on that machine: creating containers, mounting data volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

kube-proxy
Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.

docker or rocket/rkt
Runs the containers.

Third-party services:

etcd
A distributed key-value store that persists cluster state, such as Pod and Service object data.

K8S deployment

1. Environment planning
2. Install Docker
3. Self-sign TLS certificates
4. Deploy the etcd cluster
5. Deploy the Flannel network
6. Create the Node kubeconfig files
7. Fetch the K8S binary package
8. Run the Master components
9. Run the Node components
10. Check cluster status


1 Environment planning


2 Deploy Docker

On node1 and node2:

mkdir  /data/docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

sudo yum makecache fast
sudo yum -y install docker-ce

docker version
systemctl enable docker.service
systemctl start docker.service

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{"graph": "/data/docker"}
EOF
sudo systemctl daemon-reload

sudo systemctl restart docker
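A malformed daemon.json leaves Docker unable to start, so it is worth validating the file before restarting the daemon. A minimal sketch, assuming python3 is available; the temp file here stands in for /etc/docker/daemon.json:

```shell
# Sketch: validate daemon.json before (re)starting Docker.
# On a real host, point cfg at /etc/docker/daemon.json instead.
cfg=$(mktemp)
printf '{"graph": "/data/docker"}\n' > "$cfg"
if python3 -m json.tool "$cfg" >/dev/null 2>&1; then
  echo "daemon.json OK"
else
  echo "daemon.json INVALID"
fi
```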

3 Self-sign TLS certificates


On k8s-master, install the cfssl certificate tooling:

mkdir /data/ssl -p

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

cd /data/ssl/

Create certificate.sh:

vim  certificate.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
              "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.1.107",
      "192.168.1.111",
      "192.168.1.14",
      "10.10.10.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Edit the host IPs in the server-csr.json section of certificate.sh to match your environment, then execute the script (sh certificate.sh):

      "192.168.1.107",
      "192.168.1.111",
      "192.168.1.14",

The script generates the following files:

admin-key.pem   ca.csr       ca.pem               kube-proxy-key.pem  server-csr.json
admin.csr       admin.pem       ca-csr.json  kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy-csr.json  server.csr          server.pem

4 Deploy etcd

Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12

On all three nodes:

mkdir /data/etcd/
cd /data/etcd/

mkdir /opt/kubernetes/{bin,cfg,ssl}  -p
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
cd etcd-v3.2.12-linux-amd64/
cp etcd etcdctl  /opt/kubernetes/bin/

cd /data/ssl
cp ca*pem  server*pem  /opt/kubernetes/ssl/

scp -r /opt/kubernetes/*  192.168.1.111:/opt/kubernetes
scp -r /opt/kubernetes/*  192.168.1.14:/opt/kubernetes
cd /data/etcd
vim  etcd.sh

#!/bin/bash

ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/kubernetes/ssl/server.pem \\
--key-file=/opt/kubernetes/ssl/server-key.pem \\
--peer-cert-file=/opt/kubernetes/ssl/server.pem \\
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \\
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

chmod +x etcd.sh
master: ./etcd.sh etcd01 192.168.1.107 etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380
node1:  ./etcd.sh etcd02 192.168.1.111 etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380
node2:  ./etcd.sh etcd03 192.168.1.14  etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380
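All three invocations share the same --initial-cluster string, so it pays to build it once from a node list instead of retyping it. A sketch using the member names and IPs from this document:

```shell
# Sketch: derive the shared --initial-cluster argument from one node list.
NODES="etcd01=192.168.1.107 etcd02=192.168.1.111 etcd03=192.168.1.14"
CLUSTER=""
for n in $NODES; do
  # ${n%%=*} is the member name, ${n##*=} is its IP
  CLUSTER="${CLUSTER:+$CLUSTER,}${n%%=*}=https://${n##*=}:2380"
done
echo "$CLUSTER"
# → etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380
```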

tailf /var/log/messages
ps -ef | grep etcd
Check the cluster status:
cd /opt/kubernetes/ssl
/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379" cluster-health

member 21ab2aeb56731588 is healthy: got healthy result from https://192.168.1.111:2379
member 5997140dfeb3820d is healthy: got healthy result from https://192.168.1.14:2379
member 9a57e056c2e030b8 is healthy: got healthy result from https://192.168.1.107:2379
cluster is healthy
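For monitoring or automation, the cluster-health output can also be checked mechanically by counting healthy-member lines against the expected cluster size. A sketch that parses the sample output above (pipe real etcdctl output on the host instead):

```shell
# Sketch: count healthy members in `etcdctl cluster-health` output.
health_output='member 21ab2aeb56731588 is healthy: got healthy result from https://192.168.1.111:2379
member 5997140dfeb3820d is healthy: got healthy result from https://192.168.1.14:2379
member 9a57e056c2e030b8 is healthy: got healthy result from https://192.168.1.107:2379
cluster is healthy'
healthy=$(printf '%s\n' "$health_output" | grep -c '^member .* is healthy')
if [ "$healthy" -eq 3 ]; then
  echo "all 3 members healthy"
fi
```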

5 Deploy the Flannel network

Write the allocated subnet range into etcd for flanneld to use:

cd /opt/kubernetes/ssl

/opt/kubernetes/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379" \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Download the binary package:

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

Configure flanneld (on all three nodes):

mkdir  /data/flanneld
cd /data/flanneld
tar xf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh  /opt/kubernetes/bin/

vim flanneld.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd  \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

chmod +x flanneld.sh
./flanneld.sh  https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379
cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.1.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.1.1/24 --ip-masq=false --mtu=1450"
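subnet.env is plain shell assignments, which is why the rewritten docker.service can consume DOCKER_NETWORK_OPTIONS through EnvironmentFile — and why you can source it directly when debugging. A sketch using a temp copy of the file shown above:

```shell
# Sketch: subnet.env is sourceable shell; Docker's unit file reads it the same way.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.1.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.1.1/24 --ip-masq=false --mtu=1450"
EOF
. "$env_file"
echo "$DOCKER_OPT_MTU"    # → --mtu=1450
```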

Inspect the configuration:

cd /opt/kubernetes/ssl
/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379"  ls /coreos.com/network/subnets

/coreos.com/network/subnets/172.17.1.0-24
/coreos.com/network/subnets/172.17.66.0-24
/coreos.com/network/subnets/172.17.87.0-24

/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379"  get /coreos.com/network/subnets/172.17.1.0-24

{"PublicIP":"192.168.1.107","BackendType":"vxlan","BackendData":{"VtepMAC":"4a:e5:53:6d:4a:66"}}

netstat -antp | grep flanneld
tcp        0      0 192.168.1.107:1618      192.168.1.14:2379       ESTABLISHED 1760/flanneld
tcp        0      0 192.168.1.107:1620      192.168.1.14:2379       ESTABLISHED 1760/flanneld
tcp        0      0 192.168.1.107:1616      192.168.1.14:2379       ESTABLISHED 1760/flanneld

6 Create the Node kubeconfig files

On the master node:

  • Create the TLS bootstrapping token
  • Create the kubelet kubeconfig
  • Create the kube-proxy kubeconfig

cd /data/ssl/
vim kubeconfig.sh          ## edit the KUBE_APISERVER IP below to match your master

# Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.1.107:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes   --certificate-authority=./ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap   --token=${BOOTSTRAP_TOKEN}   --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default   --cluster=kubernetes   --user=kubelet-bootstrap   --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes   --certificate-authority=./ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy   --client-certificate=./kube-proxy.pem   --client-key=./kube-proxy-key.pem   --embed-certs=true   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default   --cluster=kubernetes   --user=kube-proxy   --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

##  kubectl ships in kubernetes-server-linux-amd64.tar.gz (downloadable from the official release page); the deployment steps below also need this package
mv kubectl  /usr/bin/
chmod +x /usr/bin/kubectl
sh kubeconfig.sh

kubeconfig.sh   kube-proxy-csr.json  kube-proxy.kubeconfig
kube-proxy.csr  kube-proxy-key.pem   kube-proxy.pem bootstrap.kubeconfig

scp *kubeconfig [email protected]:/opt/kubernetes/cfg
scp *kubeconfig [email protected]:/opt/kubernetes/cfg
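The token line at the top of kubeconfig.sh can be sanity-checked in isolation: 16 random bytes, rendered by od as hex with all whitespace stripped, must give exactly 32 lowercase hex characters. A sketch:

```shell
# Sketch: the bootstrap token is 16 random bytes rendered as 32 hex characters.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token length: ${#BOOTSTRAP_TOKEN}"    # → token length: 32
case "$BOOTSTRAP_TOKEN" in
  *[!0-9a-f]*) echo "unexpected characters" ;;
  *)           echo "hex only" ;;
esac
```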

7 Fetch the K8S binary package

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1101

kubernetes-server-linux-amd64.tar.gz

Required binaries:
master

  • kubectl
  • kube-scheduler
  • kube-apiserver
  • kube-controller-manager

node

  • kubelet
  • kube-proxy

vim apiserver.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.107"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

vim controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

vim scheduler.sh
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

8 Run the Master components

mv kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin
chmod +x /opt/kubernetes/bin/* && chmod +x *.sh

cp ssl/token.csv /opt/kubernetes/cfg/
ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem

./apiserver.sh 192.168.1.107 https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379
./scheduler.sh 127.0.0.1
./controller-manager.sh 127.0.0.1

echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
source /etc/profile

Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role:

kubectl create clusterrolebinding  kubelet-bootstrap --clusterrole=system:node-bootstrapper  --user=kubelet-bootstrap

Check:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}   
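For a script-friendly health check, the `kubectl get cs` listing can be scanned for any component whose STATUS is not Healthy. A sketch using the sample output above (on the master, pipe the real command's output instead):

```shell
# Sketch: flag any component whose STATUS column is not "Healthy".
cs_output='NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}'
unhealthy=$(printf '%s\n' "$cs_output" | awk 'NR > 1 && $2 != "Healthy"' | wc -l)
if [ "$unhealthy" -eq 0 ]; then
  echo "all components healthy"
fi
```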

9 Run the Node components

vim  kubelet.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.111"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

vim proxy.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.111"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=${NODE_ADDRESS} --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Repeat these steps on node1 and node2. On node1:

mv kubelet kube-proxy /opt/kubernetes/bin
chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
./kubelet.sh 192.168.1.111 10.10.10.2
./proxy.sh 192.168.1.111

On node2:
./kubelet.sh 192.168.1.14 10.10.10.2
./proxy.sh 192.168.1.14

On the master:

kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY   2m        kubelet-bootstrap   Pending

kubectl  certificate approve node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY
certificatesigningrequest "node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY" approved

kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY   3m        kubelet-bootstrap   Approved,Issued

kubectl get node
NAME            STATUS     ROLES     AGE       VERSION
192.168.1.111   Ready      <none>    11m       v1.10.1
192.168.1.14    NotReady   <none>    8s        v1.10.1
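With more nodes, approving each CSR by hand gets tedious. Pending requests can be extracted from the `kubectl get csr` listing and approved in a loop; this sketch parses the sample output above (on the master, pipe real `kubectl get csr` output instead):

```shell
# Sketch: extract Pending CSR names from `kubectl get csr` output.
csr_output='NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY   2m        kubelet-bootstrap   Pending'
pending=$(printf '%s\n' "$csr_output" | awk '$NF == "Pending" {print $1}')
echo "$pending"
# On the master, each extracted name would then be approved with:
#   kubectl certificate approve "$pending"
```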

10 Check cluster status

kubectl get componentstatus
kubectl get node



Original article: http://blog.51cto.com/hequan/2106618

Date: 2024-10-11 17:32:20
