Kubernetes(7) Service & Network (advanced)

In the previous chapter we ran an experiment in which a Service exposed a single-node Redis database. In this chapter we dig into Services to understand how they actually work.

1 How Kubernetes Provides Networking to Clients

Kubernetes has three network types: the Node network, the Pod network, and the Cluster network (virtual IPs). Node and Pod network addresses are real IPs assigned to network devices, whereas Cluster network (Service network) addresses are virtual: they never appear on any network interface and exist only in Service definitions and routing/forwarding rules.
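You can verify this on any node: the ClusterIP of a Service is not bound to any interface, yet the kernel knows how to reach it. A quick check, assuming the iptables proxy mode and the Service at 10.99.99.99 from the hands-on section below (the matched rule shown is illustrative output):

ip addr | grep 10.99.99.99
# prints nothing - no interface owns the ClusterIP

iptables -t nat -L KUBE-SERVICES -n | grep 10.99.99.99
# KUBE-SVC-...  tcp  --  0.0.0.0/0  10.99.99.99  /* default/myapp-svc: cluster IP */ tcp dpt:80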

See the discussion on Stack Overflow (https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types):

A ClusterIP exposes the following:

  • spec.clusterIp:spec.ports[*].port

You can only access this service while inside the cluster. It is accessible from its spec.clusterIp port. If a spec.ports[*].targetPort is set it will route from the port to the targetPort. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service within the cluster internally.

A NodePort exposes the following:

  • <NodeIP>:spec.ports[*].nodePort
  • spec.clusterIp:spec.ports[*].port

If you access this service on a nodePort from the node's external IP, it will route the request to spec.clusterIp:spec.ports[*].port, which will in turn route it to your spec.ports[*].targetPort, if set. This service can also be accessed in the same way as ClusterIP.

Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from <ClusterIP>:spec.ports[*].nodePort.

A LoadBalancer exposes the following:

  • spec.loadBalancerIp:spec.ports[*].port
  • <NodeIP>:spec.ports[*].nodePort
  • spec.clusterIp:spec.ports[*].port

You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can access this service as you would a NodePort or a ClusterIP service as well.
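For completeness, a minimal LoadBalancer manifest could look like the sketch below (the name myapp-lb is made up; the selector and ports mirror the hands-on example later in this chapter). Note that an external IP is only provisioned on platforms with a cloud load-balancer integration; elsewhere it stays <pending>:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
    release: canary
  ports:
    - port: 80          # spec.ports[*].port
      targetPort: 80    # spec.ports[*].targetPort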

kube-proxy constantly watches the apiserver on the master node for changes. As soon as the state in the apiserver changes, kube-proxy immediately regenerates the rules that steer traffic to the Pods. What these rules look like depends on how the proxy mode is implemented. There are three proxy-mode implementations:

1.1 userspace

A client's request first hits the iptables rules, i.e. the Service rules, in the kernel of the node it arrives on (the request can also come from outside the cluster; here we consider one sent by an internal client Pod). These iptables rules forward the connection to a local port on which kube-proxy is listening; kube-proxy, running in user space, then forwards the request to a backend Pod and relays the response back to the client. This causes considerable overhead: every request from a client Pod first enters iptables in kernel space, is then handed back out to a kube-proxy in user space and possibly forwarded between nodes, even when the client Pod and one of the kube-proxy instances sit on the same node.


1.2 iptables

A client Pod addresses the Service IP, and the iptables rules in kernel space directly select and forward to a matching backend Pod. This avoids the repeated detours through kube-proxy and is therefore considerably more efficient.


1.3 ipvs

Client requests are dispatched to the matching Pods directly by IPVS rules in the kernel.
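Which mode a cluster runs is a property of kube-proxy, not of individual Services. Two quick ways to check it (a sketch, assuming a kubeadm-style setup where kube-proxy keeps its configuration in a ConfigMap in kube-system and serves its metrics port on 10249):

# Ask a kube-proxy instance directly on a node:
curl -s http://localhost:10249/proxyMode
# iptables            (or: ipvs)

# Or read the kubeadm-managed configuration:
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
#     mode: ""        (an empty string means the default, iptables)

# In ipvs mode the active rules can be listed with:
ipvsadm -Ln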

2 Resolving Service Names

A “normal” Service (i.e. any Service except a headless one) is assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This name resolves to the Service's Cluster IP.

A “headless” Service (one without a Cluster IP) is also assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. Unlike a normal Service, however, this name resolves to the set of IPs of the Pods selected by the Service. Clients are expected to consume this set directly, or else to pick from it using a standard round-robin strategy.
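A quick way to see the difference from inside the cluster is a throwaway Pod with nslookup; busybox:1.28 is just a commonly used image for this, and myapp-svc refers to the Service created in the hands-on section below:

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
  nslookup myapp-svc.default.svc.cluster.local
# A normal Service answers with its single ClusterIP;
# a headless Service answers with one A record per backing Pod.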

see: https://kubernetes.io/zh/docs/concepts/services-networking/dns-pod-service/

3 Endpoints and Kubernetes Service Discovery

Kubernetes collects the Pod IPs of the Pods a Service selects into an Endpoints object. If the Service definition contains no selector field, the endpoint controller does not create an Endpoints object automatically when the Service is created.

3.1 Endpoints and the Endpoint Controller

Endpoints is a resource object in the Kubernetes cluster, stored in etcd, that records the addresses of all Pods backing a Service. Only when a Service defines a selector does the endpoint controller create the corresponding Endpoints object automatically; otherwise no Endpoints object is generated.
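This behaviour can be used deliberately: define a Service without a selector and create the Endpoints object yourself, for example to put a stable Service name in front of a backend outside the cluster. A sketch (the name external-db and the address 192.168.10.20 are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.10.20    # hypothetical backend outside the cluster
    ports:
      - port: 3306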

The endpoint controller is one of the controllers in the Kubernetes controller manager. Its responsibilities are the following (a quick inspection example follows the list):

  • generates and maintains all Endpoints objects
  • watches Services and their associated Pods for changes
  • when a Service is deleted, deletes the Endpoints object of the same name
  • when a new Service is created, fetches the list of matching Pods from the new Service's definition and creates the corresponding Endpoints object
  • when a Service is updated, fetches the list of matching Pods from the updated definition and updates the corresponding Endpoints object
  • on Pod events, updates the Endpoints object of the affected Service, recording the Pod IPs in it
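You can watch the controller at work with kubectl. For the myapp-svc Service from the hands-on section below, the Endpoints object carries the same name as the Service and tracks its Pods:

kubectl get endpoints myapp-svc
# NAME        ENDPOINTS                                                  AGE
# myapp-svc   10.244.1.55:80,10.244.1.56:80,10.244.1.57:80 + 2 more...   3m

# Scale the Deployment and the controller updates the object accordingly:
kubectl scale deployment myapp-deploy --replicas=3
kubectl get endpoints myapp-svc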

3.2 Service load balancing

A Service can distribute load according to one of two policies (see the snippet after the list):

  • RoundRobin: requests are forwarded to the backend Pods in turn (the default mode)
  • SessionAffinity: session stickiness based on the client IP address; once a client has reached a particular Pod, its subsequent requests are forwarded to that same Pod
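Session affinity is enabled per Service; a minimal sketch (sessionAffinityConfig is optional, and timeoutSeconds defaults to 10800):

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # keep a client pinned to the same Pod for up to 3 hours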

3.3 Service Discovery

DNS: a cluster add-on such as KubeDNS can easily be deployed to provide service discovery for the Services inside the cluster; this is the approach the Kubernetes project strongly recommends. So that the containers in a Pod can resolve names through kube-dns, Kubernetes rewrites each container's /etc/resolv.conf.
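Inside a Pod in the default namespace the file typically looks like this (the nameserver is the ClusterIP of the kube-dns Service, 10.96.0.10 in the cluster used below; the search domains are what let a short name like myapp-svc resolve):

# cat /etc/resolv.conf   (inside a Pod)
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5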

4 Hands on

4.1 Deploy Service with ClusterIP

See the previous section.

4.2 Deploy Service with NodePort

Use the template below to deploy a Deployment and a Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
        - name: myapp-container
          image: ikubernetes/myapp:v3
          ports:
            - name: http
              containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

On the command line, run:

[root@master service]# kubectl apply -f myapp-svc-NodePort.yaml
deployment.apps/myapp-deploy created
service/myapp-svc created

[root@master service]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        24d   <none>
myapp-svc    NodePort    10.99.99.99   <none>        80:30080/TCP   3m    app=myapp,release=canary
[root@master service]# curl 10.99.99.99
Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>

[root@master service]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-65d6d8d888-2f5md   1/1     Running   0          3m    10.244.2.42   k8snode2
myapp-deploy-65d6d8d888-j6fgb   1/1     Running   0          3m    10.244.1.55   k8snode1
myapp-deploy-65d6d8d888-lffjt   1/1     Running   0          3m    10.244.2.41   k8snode2
myapp-deploy-65d6d8d888-zhhjb   1/1     Running   0          3m    10.244.1.56   k8snode1
myapp-deploy-65d6d8d888-zvwqj   1/1     Running   0          3m    10.244.1.57   k8snode1

As you can see, the Service myapp-svc maps its service port 80 to node port 30080.

You can also access this Service from an external host. I use my Ubuntu machine as the host and created three instances for the Kubernetes installation with the IPs 172.16.0.11 (master), 172.16.0.12 (worker1) and 172.16.0.13 (worker2). Now I start a new terminal on the Ubuntu host machine:

user@ubuntu:~$ while true; do curl http://172.16.0.12:30080/hostname.html; sleep 2; done
myapp-deploy-65d6d8d888-j6fgb
myapp-deploy-65d6d8d888-2f5md
myapp-deploy-65d6d8d888-lffjt
myapp-deploy-65d6d8d888-lffjt
myapp-deploy-65d6d8d888-zvwqj
myapp-deploy-65d6d8d888-j6fgb
myapp-deploy-65d6d8d888-zhhjb
myapp-deploy-65d6d8d888-lffjt
myapp-deploy-65d6d8d888-j6fgb

The command-line output shows that I used curl against one of the worker nodes (172.16.0.12:30080) and the Kubernetes Service (cluster IP 10.99.99.99) forwarded my requests to the different Pods in its Endpoints (10.244.2.41:80, 10.244.2.42:80, 10.244.1.55:80, 10.244.1.56:80, 10.244.1.57:80). See the report below:

[root@master service]# kubectl describe svc myapp-svc
Name:                     myapp-svc
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"myapp-svc","namespace":"default"},"spec":{"clusterIP":"10.99.99.9...
Selector:                 app=myapp,release=canary
Type:                     NodePort
IP:                       10.99.99.99
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.244.1.55:80,10.244.1.56:80,10.244.1.57:80 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

In this case we sent the requests to 172.16.0.12 (worker1); in exactly the same way we could use the IP of worker2. By the same principle we can set up an nginx proxy as a load balancer that redirects client requests to the NodePorts on worker1 and worker2, and from there on to the Service and its Pod addresses.
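A minimal sketch of that idea with nginx (the upstream addresses are the two worker NodePorts from this setup; the proxy host itself and the file path are assumptions, not part of the cluster):

# /etc/nginx/conf.d/myapp.conf -- round-robins client requests across the NodePorts
upstream myapp_nodeports {
    server 172.16.0.12:30080;   # worker1
    server 172.16.0.13:30080;   # worker2
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp_nodeports;
    }
}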

4.3 Deploy Headless Service

Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create “headless” services by specifying "None" for the cluster IP (.spec.clusterIP).

This option allows developers to reduce coupling to the Kubernetes system by allowing them freedom to do discovery their own way. Applications can still use a self-registration pattern and adapters for other discovery systems could easily be built upon this API.

For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
        - name: myapp-container
          image: ikubernetes/myapp:v3
          ports:
            - name: http
              containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc-headless
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: None
  ports:
    - port: 80
      targetPort: 80

For a headless Service you simply set clusterIP to None and drop the type: NodePort and nodePort settings; see the example above.

[root@master service]# kubectl apply -f myapp-svc-headless.yaml
deployment.apps/myapp-deploy created
service/myapp-svc-headless created
[root@master service]# kubectl get svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP   24d
myapp-svc-headless   ClusterIP   None         <none>        80/TCP    6s

We can query kube-dns to see how a headless Service resolves to Pod IPs.

Use dig -t A to inspect the A records of myapp-svc-headless.default.svc.cluster.local.

10.96.0.10 is the IP address of the kube-dns Service; you can find it with:

[root@master service]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   24d

The whole command:

[root@master service]# dig -t A myapp-svc-headless.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A myapp-svc-headless.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2981
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-svc-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-svc-headless.default.svc.cluster.local. 5    IN A 10.244.1.60
myapp-svc-headless.default.svc.cluster.local. 5    IN A 10.244.1.61
myapp-svc-headless.default.svc.cluster.local. 5    IN A 10.244.1.62
myapp-svc-headless.default.svc.cluster.local. 5    IN A 10.244.2.46
myapp-svc-headless.default.svc.cluster.local. 5    IN A 10.244.2.47

;; Query time: 3 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Jan 27 03:04:14 CET 2019
;; MSG SIZE  rcvd: 373

We can see that the A records for myapp-svc-headless.default.svc.cluster.local resolve to 10.244.2.46, 10.244.2.47, 10.244.1.60, 10.244.1.61 and 10.244.1.62, which match the Pod IPs shown below:

[root@master service]# kubectl get pods -o wide -l app=myapp
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-65d6d8d888-b77bv   1/1     Running   0          11m   10.244.1.61   k8snode1
myapp-deploy-65d6d8d888-f9q24   1/1     Running   0          11m   10.244.1.60   k8snode1
myapp-deploy-65d6d8d888-kmprb   1/1     Running   0          11m   10.244.1.62   k8snode1
myapp-deploy-65d6d8d888-lgssr   1/1     Running   0          11m   10.244.2.47   k8snode2
myapp-deploy-65d6d8d888-q7b5r   1/1     Running   0          11m   10.244.2.46   k8snode2

Original article: https://www.cnblogs.com/crazy-chinese/p/10328278.html
