Calico for Kubernetes

Reference URLs:

https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/ubuntu-calico.md

https://github.com/projectcalico/calico-docker/blob/master/docs/kubernetes/KubernetesIntegration.md

I have 3 hosts: 10.11.151.97, 10.11.151.100, and 10.11.151.101. Unfortunately, none of the three hosts has internet access. Following the guides, I built the Kubernetes cluster by running the components directly as bash commands, rather than in the 'service mode' described in the references.

10.11.151.97 is the Kubernetes master; the other two are its nodes.

1. Run the Etcd Cluster

etcd_token=kb3-etcd-cluster
local_name=kbetcd0
local_ip=10.11.151.97
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.11.151.100
node1_port=4010
node2_name=kbetcd2
node2_ip=10.11.151.101
node2_port=4010

./etcd -name $local_name \
    -initial-advertise-peer-urls http://$local_ip:$local_peer_port \
    -listen-peer-urls http://0.0.0.0:$local_peer_port \
    -listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
    -advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
    -initial-cluster-token $etcd_token \
    -initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
    -initial-cluster-state new &


Run etcd with this command on each host (adjusting the local_* and node*_ variables accordingly), since etcd must run in cluster mode. If it succeeds, you should see a 'published {Name: *} to cluster *' message in the output.
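To double-check that the three members actually formed a cluster, etcdctl (shipped alongside the etcd binary) can be pointed at the local client port. A minimal sketch, assuming the ports configured above:

# list the members and check overall cluster health through the local client port
./etcdctl --peers http://127.0.0.1:4011 member list
./etcdctl --peers http://127.0.0.1:4011 cluster-health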

2. Set Up the Master

2.1 Start Kubernetes

Run kube-apiserver:

./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 2>&1 > apiserver.out &

Run kube-controller-manager:

./kube-controller-manager --logtostderr=true --v=0 --master=http://tc-151-97:8080 --cloud-provider="" 2>&1 >controller.out &

Run kube-scheduler:

./kube-scheduler --logtostderr=true --v=0 --master=http://tc-151-97:8080 2>&1 > scheduler.out &
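A quick sanity check that the master components are up, assuming the insecure apiserver port 8080 configured above (componentstatuses should be available in this Kubernetes version):

# the apiserver should answer ok on its health endpoint
curl http://127.0.0.1:8080/healthz
# scheduler, controller-manager and etcd health as seen by the apiserver
./kubectl -s http://127.0.0.1:8080 get componentstatuses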

2.2 Install Calico on the Master

sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
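If this works, a calico/node container should now be running on the master; a quick check (same ETCD_AUTHORITY as above):

sudo docker ps | grep calico-node
sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status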


3. Set Up the Nodes

3.1 Install Calico

Since the nodes have no internet access, I downloaded the Calico plugin manually from:

https://github.com/projectcalico/calico-kubernetes/releases/tag/v0.6.0

Move the plugin to the Kubernetes plugin directory:

sudo mv calico_kubernetes /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico
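The exec-style network plugin must be executable by the kubelet, so if the downloaded binary lost its mode bit, restore it (same path as above):

sudo chmod +x /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico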

Start Calico on the node:

sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node

3.2 Start the kubelet with the Calico network plugin

Start kube-proxy, then start the kubelet with the --network-plugin parameter:

./kube-proxy --logtostderr=true --v=0 --master=http://tc-151-97:8080 --proxy-mode=iptables &
./kubelet --logtostderr=true --v=0 --api_servers=http://tc-151-97:8080 --address=0.0.0.0 --network-plugin=calico --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &

Here is the kubelet command output:

I1124 15:11:52.226324 28368 server.go:808] Watching apiserver
I1124 15:11:52.393448 28368 plugins.go:56] Registering credential provider: .dockercfg
I1124 15:11:52.398087 28368 server.go:770] Started kubelet
E1124 15:11:52.398190 28368 kubelet.go:756] Image garbage collection failed: unable to find data for container /
I1124 15:11:52.398165 28368 server.go:72] Starting to listen on 0.0.0.0:10250
W1124 15:11:52.401695 28368 kubelet.go:775] Failed to move Kubelet to container "/kubelet": write /sys/fs/cgroup/memory/kubelet/memory.swappiness: invalid argument
I1124 15:11:52.401748 28368 kubelet.go:777] Running in container "/kubelet"
I1124 15:11:52.497377 28368 factory.go:194] System is using systemd
I1124 15:11:52.610946 28368 kubelet.go:885] Node tc-151-100 was previously registered
I1124 15:11:52.734788 28368 factory.go:236] Registering Docker factory
I1124 15:11:52.735851 28368 factory.go:93] Registering Raw factory
I1124 15:11:52.969060 28368 manager.go:1006] Started watching for new ooms in manager
I1124 15:11:52.969114 28368 oomparser.go:199] OOM parser using kernel log file: "/var/log/messages"
I1124 15:11:52.970296 28368 manager.go:250] Starting recovery of all containers
I1124 15:11:53.148967 28368 manager.go:255] Recovery completed
I1124 15:11:53.240408 28368 manager.go:104] Starting to sync pod status with apiserver
I1124 15:11:53.240439 28368 kubelet.go:1953] Starting kubelet main sync loop.


I do not know whether the kubelet is running correctly. Can someone tell me how to verify it?
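The only checks I have so far are whether the node registers as Ready on the master, and the kubelet health endpoint on the node itself (the default healthz port 10248 is my assumption for this version):

# on the master: tc-151-100 should show up and report Ready
./kubectl get nodes
# on the node: the kubelet health endpoint (127.0.0.1:10248 is the assumed default)
curl http://127.0.0.1:10248/healthz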

I did the same on the other node.

4. Create Some Pods and Test

Here is the test.yaml with four ReplicationControllers:

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
./kubectl create -f test.yaml

This command creates 4 pods: 2 on 10.11.151.100 and 2 on 10.11.151.101.

[@tc_151_97 /home/domeos/openxxs/bin]# ./kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
test-1-1ztr2   1/1       Running   0          5m
test-2-8p2sr   1/1       Running   0          5m
test-3-1hkwa   1/1       Running   0          5m
test-4-jbdbq   1/1       Running   0          5m
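Since the pods run an iperf image, a basic cross-node connectivity probe is to take the IP of a pod on one node and try to reach it from a pod on the other node (pod names are from the listing above; ping being available inside the image is an assumption):

# find the pod IP of test-1, which runs on tc-151-100
./kubectl describe pod test-1-1ztr2 | grep IP
# from test-3, which runs on tc-151-101, try to reach that IP (substitute the address found above)
./kubectl exec test-3-1hkwa -- ping -c 3 <pod-ip-of-test-1>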


[@tc-151-100 /home/domeos/openxxs/bin]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6dfc83ec1d12 10.11.150.76:5000/openxxs/iperf:1.2 "/block" 6 minutes ago Up 6 minutes k8s_iperf.a4ede594_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_ca4496d0
78087a93da00 10.11.150.76:5000/openxxs/iperf:1.2 "/block" 6 minutes ago Up 6 minutes k8s_iperf.a4ede594_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_330d815c
f80a1474f4c4 10.11.150.76:5000/kubernetes/pause:latest "/pause" 6 minutes ago Up 6 minutes k8s_POD.34f4dfd2_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_af7199c0
eb14879757e6 10.11.150.76:5000/kubernetes/pause:latest "/pause" 6 minutes ago Up 6 minutes k8s_POD.34f4dfd2_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_af2cc1c3
8accff535ff9 calico/node:latest "/sbin/start_runit" 27 minutes ago Up 27 minutes calico-node

On node 10.11.151.100, the Calico status:

[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 24 minutes
Running felix version 1.2.0

IPv4 BGP status
+---------------+-------------------+-------+----------+------------------------------------------+
| Peer address | Peer type | State | Since | Info |
+---------------+-------------------+-------+----------+------------------------------------------+
| 10.11.151.101 | node-to-node mesh | start | 07:18:44 | Connect Socket: Connection refused |
| 10.11.151.97 | node-to-node mesh | start | 07:07:40 | Active Socket: Connection refused |
+---------------+-------------------+-------+----------+------------------------------------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+ 

However, on the other node, 10.11.151.101:

[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 2 minutes
Running felix version 1.2.0

IPv4 BGP status
Unable to connect to server control socket (/etc/service/bird/bird.ctl): Connection refused

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+

What has happened?

In addition, there is no Calico IP route on either node:

[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1

There is no log output in /var/log/calico/kubernetes/calico.log.
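A few checks that might narrow down where BGP is failing, assuming the calico/node container supervises bird with runit (the in-container service layout and the sv command are assumptions based on the /etc/service/bird error above):

# logs of the calico-node container itself
sudo docker logs calico-node
# is the bird service actually running inside the container?
sudo docker exec calico-node sv status bird
# is the BGP port reachable from the peer? "Connection refused" in the status suggests it is not
nc -zv 10.11.151.101 179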
