Running multiple nginx ingress controllers in the same k8s cluster

If several projects (each with its own namespace) in the same k8s cluster share a single nginx ingress controller, then any change to any Service registered with an Ingress triggers a configuration reload of that controller. As the update frequency grows, the load on this one controller keeps increasing. The ideal solution is one nginx ingress controller per namespace, each handling only its own traffic.

The NGINX ingress controller provides the ingress.class parameter to support running multiple controllers.

Usage example
If you have already deployed multiple nginx ingress controllers, you can choose which one handles an Ingress by setting ingress.class in the annotations when creating it, for example a controller whose class is nginx:

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"

Note: setting the annotation to any value that does not match a valid ingress class forces the controller to ignore your Ingress. The same applies if you run only a single controller: any value other than that controller's ingress class or the empty string also causes the Ingress to be ignored.

Example of configuring multiple nginx ingress controllers:

spec:
  template:
     spec:
       containers:
         - name: nginx-ingress-internal-controller
           args:
             - /nginx-ingress-controller
              - '--election-id=ingress-controller-leader-internal'
              - '--ingress-class=nginx-internal'
              - '--configmap=ingress/nginx-ingress-internal-controller'

--ingress-class: make sure this value is unique, i.e. every controller is configured with a different class.
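
A quick way to confirm that every running controller uses a distinct class is to print the arguments each controller pod was started with. This is only a sketch and assumes the controller pods carry the app=ingress-nginx label used in the manifests below:

kubectl get pods --all-namespaces -l app=ingress-nginx \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.spec.containers[0].args}{"\n"}{end}'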

Notes:

Deploying multiple ingress controllers of different types (for example, nginx and gce) without specifying ingress.class in the annotation will cause both (or all) controllers to try to satisfy the Ingress and to race against each other to update its status field in a confusing way.

When running multiple nginx ingress controllers, if one of them uses the default --ingress-class value (see the IsValid method in internal/ingress/annotations/class/main.go), it will only handle Ingresses that do not set ingress.class at all.

Practical application

Create a new namespace
Here we create a namespace named ingress.

kubectl create ns ingress

Create the nginx-ingress-controller in this namespace

nginx-ingress.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: 192.168.100.100/k8s/nginx-ingress-controller-defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        # resources:
          # limits:
            # cpu: 10m
            # memory: 20Mi
          # requests:
            # cpu: 10m
            # memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<ingress>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-ingress"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress

---

kind: ConfigMap
apiVersion: v1
#data:
#  "59090": ingress/kubernetes-dashboard:9090
metadata:
  name: tcp-services
  namespace: ingress

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress

---

kind: ConfigMap
apiVersion: v1
data:
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$http_x_forwarded_for"'
  worker-shutdown-timeout: "600"
metadata:
  name: nginx-configuration
  namespace: ingress
  labels:
    app: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # initContainers:
      # - command:
        # - sh
        # - -c
        # - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        # image: concar-docker-rg01:5000/k8s/alpine:3.6
        # imagePullPolicy: IfNotPresent
        # name: sysctl
        # securityContext:
          # privileged: true
      containers:
        - name: nginx-ingress-controller
          image: 192.168.100.100/k8s/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --ingress-class=ingress
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: aliyun_logs_ingress
              value: "stdout"
            - name: aliyun_logs_ingress_tags
              value: "fields.project=kube,fields.env=system,fields.app=nginx-ingress,fields.version=v1,type=nginx,multiline=1"
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      hostNetwork: true
      nodeSelector:
        type: lb

Explanation of the resources:

Deployment: default-http-backend is the default Ingress backend; it handles every URL path and host that the nginx controller cannot match to a rule.

Service: the Service in front of default-http-backend.

ServiceAccount: the RBAC service account bound to nginx-ingress-controller.

Role: the RBAC role bound to nginx-ingress-controller.

"ingress-controller-leader-ingress": note that the trailing "ingress" is the --ingress-class value (here the same as the custom namespace name); the default is "ingress-controller-leader-nginx".

RoleBinding: binds the nginx-ingress-controller role to the service account.

ConfigMap: the tcp-services / udp-services / nginx configuration used by nginx-ingress-controller.

DaemonSet: the nginx-ingress-controller workload (pods are scheduled only on nodes matching the nodeSelector).

--ingress-class=ingress: declares that this controller only handles Ingresses whose ingress.class is "ingress" (here named after the namespace it serves).

Note: because this controller only serves a single namespace, it does not need its own ClusterRole and ClusterRoleBinding RBAC objects!

Create the ingress-controller

kubectl create -f nginx-ingress.yaml
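
After the controller starts, a quick way to verify that the default backend and the controller pods came up in the ingress namespace:

kubectl get deploy,ds,svc -n ingress
kubectl get pods -n ingress -o wide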

Add ClusterRoleBinding permissions

The ClusterRoleBinding of the default ingress-controller only binds the ingress ServiceAccount in the kube-system namespace, so the custom ingress ServiceAccount must be added to that ClusterRoleBinding as well. Otherwise the new ingress-controller fails at startup with a permission error:

The cluster seems to be running with a restrictive Authorization mode and the Ingress controller does not have the required permissions to operate normally

The subjects of the ClusterRoleBinding nginx-ingress-clusterrole-nisa-binding then look like this:

subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: kube-system
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress
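
One way to append the second subject without editing YAML by hand is a JSON patch against the existing binding (a sketch; kubectl edit clusterrolebinding nginx-ingress-clusterrole-nisa-binding works just as well):

kubectl patch clusterrolebinding nginx-ingress-clusterrole-nisa-binding --type=json \
  -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "nginx-ingress-serviceaccount", "namespace": "ingress"}}]'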

Create an Ingress resource

Prerequisite: there must be at least one node carrying the label type: lb.
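
If no node carries that label yet, it can be added manually; <node-name> is a placeholder for one of your worker nodes:

kubectl label node <node-name> type=lb
kubectl get nodes -l type=lb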

Example:

ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "ingress"
  name: storm-nimbus-ingress
  namespace: ingress
spec:
  rules:
  - host: storm-nimbus-ingress.test.com
    http:
      paths:
      - backend:
          serviceName: storm-nimbus-cluster
          servicePort: 8080
        path: /

kubernetes.io/ingress.class: declares that only the ingress-controller whose class is "ingress" handles this Ingress.

host: the custom server_name.

serviceName: the name of the Kubernetes Service to proxy to.

servicePort: the port exposed by that Service.

Create the Ingress

kubectl create -f ingress.yml
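
A quick check that the Ingress was created and picked up by the intended controller:

kubectl get ingress -n ingress
kubectl describe ingress storm-nimbus-ingress -n ingress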

Access

Visit http://storm-nimbus-ingress.test.com/ (make sure storm-nimbus-ingress.test.com resolves to a node running the ingress-controller).
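
If DNS is not set up yet, the routing can be tested directly against one of the controller nodes; <lb-node-ip> is a placeholder for the IP of a node labeled type: lb:

curl -H "Host: storm-nimbus-ingress.test.com" http://<lb-node-ip>/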

After deploying multiple ingress controllers you also need network isolation; by default there is no network isolation between namespaces.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aep-production-network-policy
  namespace: aep-production # access control is scoped to this namespace; by default all namespaces can reach each other, but once a NetworkPolicy exists the default becomes deny-all
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock: # allow access from this IP
        cidr: 10.42.89.0/32 # the IP of one of the two ingress hosts
    - ipBlock: # allow access from this IP
        cidr: 10.42.143.0/32
    - namespaceSelector: {} # an empty selector allows access from any namespace
    - podSelector:  # allow access from pods carrying the labels below
        matchLabels:
          project: aep
          env: production
          vdc: oscarindustry
  policyTypes:
  - Ingress
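
Assuming the policy above is saved as network-policy.yaml (the filename is arbitrary), it can be applied and inspected with:

kubectl apply -f network-policy.yaml
kubectl describe networkpolicy aep-production-network-policy -n aep-production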

Below is a production configuration example (the Ingress objects are generated by a devops platform):

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: carchat-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: ecloud02-plat-ops-repo01.cmiov:5000/k8s/nginx-ingress-controller-defaultbackend:1.4
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 10m
            memory: 20Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: carchat-prod
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: carchat-prod

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: carchat-prod
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<carchat-prod-oscarindustry>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-carchat-prod-oscarindustry"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: carchat-prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: carchat-prod

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: carchat-prod

---

kind: ConfigMap
apiVersion: v1
data:
  "59090": kube-system/kubernetes-dashboard:9090
  "49090": monitoring/prometheus-operated:9090
metadata:
  name: tcp-services
  namespace: carchat-prod

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: carchat-prod

---

kind: ConfigMap
apiVersion: v1
data:
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$http_x_forwarded_for"'
  worker-shutdown-timeout: "600"
metadata:
  name: nginx-configuration
  namespace: carchat-prod
  labels:
    app: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: carchat-prod
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # initContainers:
      # - command:
        # - sh
        # - -c
        # - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        # image: concar-docker-rg01:5000/k8s/alpine:3.6
        # imagePullPolicy: IfNotPresent
        # name: sysctl
        # securityContext:
          # privileged: true
      containers:
        - name: nginx-ingress-controller
          image: ecloud02-plat-ops-repo01.cmiov:5000/k8s/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --ingress-class=carchat-prod-oscarindustry
            - --annotations-prefix=nginx.ingress.kubernetes.io
          resources:
            limits:
              cpu: "3"
              memory: 6000Mi
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: aliyun_logs_ingress
              value: "stdout"
            - name: aliyun_logs_ingress_tags
              value: "fields.project=kube,fields.env=system,fields.app=nginx-ingress,fields.version=v1,type=nginx,multiline=1"
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      hostNetwork: true
      nodeSelector:
        ingress: carchat-prod
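
Once this controller is running, the leader-election ConfigMap referenced in the Role above should appear in the namespace; a quick sanity check (assuming kubectl access to the production cluster):

kubectl -n carchat-prod get ds nginx-ingress-controller
kubectl -n carchat-prod get cm ingress-controller-leader-carchat-prod-oscarindustry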

If the Ingress is not created through the devops platform, create it manually; the kubernetes.io/ingress.class annotation must match the controller's --ingress-class (here carchat-prod-oscarindustry):

ingress.yml


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "carchat-prod-oscarindustry"
  name: storm-nimbus-ingress
  namespace: carchat-prod
spec:
  rules:
  - host: storm-nimbus-ingress.test.com
    http:
      paths:
      - backend:
          serviceName: storm-nimbus-cluster
          servicePort: 8080
        path: /

Original article: https://blog.51cto.com/4169523/2465460

准备NFS服务192.168.1.244$ yum -y install nfs-utils rpcbind$ systemctl start nfs-server rpcbind$ systemctl enable nfs-server rpcbind$ mkdir -p /data/k8s$ cd /data/k8s$ echo 11111111 > index.html$ vim /etc/exports/data/k8s *(rw,async,no_root_squash)$ syste