Multiple nginx ingress controllers in one k8s cluster
When multiple projects (each in its own namespace) in the same k8s cluster share a single nginx ingress controller, any change to any service registered with an ingress triggers a configuration reload of that controller. As update frequency grows, the load on this shared controller keeps increasing. The ideal solution is one nginx ingress controller per namespace, with each controller serving only its own traffic.
The NGINX ingress controller provides the ingress.class parameter to support multiple controllers.
Usage example
If you have already set up multiple nginx ingress controllers, you choose which one serves an ingress by setting ingress.class in its annotations at creation time; for example, to target the controller whose class is nginx:
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
Note: setting the annotation to any value that does not match a valid ingress class forces the controller to ignore your ingress. Likewise, if you run only a single controller, any value other than that controller's ingress class or the empty string also causes the ingress to be ignored.
Example of configuring multiple nginx ingress controllers:
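For illustration (the resource names are hypothetical, and spec is omitted for brevity), two ingresses in the same cluster can be pinned to different controllers purely via this annotation:

```yaml
# Handled by the controller running with --ingress-class=nginx
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
---
# Handled only by a controller running with --ingress-class=nginx-internal;
# the "nginx" controller ignores this resource entirely
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bar
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
```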
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-internal-controller
        args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader-internal
        - --ingress-class=nginx-internal
        - --configmap=ingress/nginx-ingress-internal-controller
--ingress-class: make sure this value is unique, i.e. each controller gets a different class
Note:
Deploying multiple ingress controllers of different types (e.g. nginx and gce) without specifying ingress.class in the annotation causes two or all controllers to fight to satisfy the ingress and race each other, chaotically, to update its status field.
When running multiple nginx ingress controllers, a controller that uses the default --ingress-class value (see the IsValid method in internal/ingress/annotations/class/main.go) only handles ingresses that do not set ingress.class at all.
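To make the contrast concrete, here is a sketch (container names and the internal class are illustrative) of the argument lists of two controllers coexisting in one cluster:

```yaml
# Controller A: no --ingress-class flag, so it uses the default class "nginx"
# and also picks up ingresses that carry no ingress.class annotation at all
containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
---
# Controller B: a dedicated class and its own election id, so it only serves
# ingresses annotated kubernetes.io/ingress.class: "nginx-internal"
containers:
- name: nginx-ingress-internal-controller
  args:
  - /nginx-ingress-controller
  - --election-id=ingress-controller-leader-internal
  - --ingress-class=nginx-internal
```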
Practical application
Create a new namespace
Here we create a namespace named ingress:
kubectl create ns ingress
Create the nginx-ingress-controller in this namespace
nginx-ingress.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: 192.168.100.100/k8s/nginx-ingress-controller-defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        # resources:
        #   limits:
        #     cpu: 10m
        #     memory: 20Mi
        #   requests:
        #     cpu: 10m
        #     memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<ingress>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-ingress"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress
---
kind: ConfigMap
apiVersion: v1
#data:
#  "59090": ingress/kubernetes-dashboard:9090
metadata:
  name: tcp-services
  namespace: ingress
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress
---
kind: ConfigMap
apiVersion: v1
data:
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$http_x_forwarded_for"'
  worker-shutdown-timeout: "600"
metadata:
  name: nginx-configuration
  namespace: ingress
  labels:
    app: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # initContainers:
      # - command:
      #   - sh
      #   - -c
      #   - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
      #   image: concar-docker-rg01:5000/k8s/alpine:3.6
      #   imagePullPolicy: IfNotPresent
      #   name: sysctl
      #   securityContext:
      #     privileged: true
      containers:
      - name: nginx-ingress-controller
        image: 192.168.100.100/k8s/nginx-ingress-controller:0.24.1
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --ingress-class=ingress
        - --annotations-prefix=nginx.ingress.kubernetes.io
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: aliyun_logs_ingress
          value: "stdout"
        - name: aliyun_logs_ingress_tags
          value: "fields.project=kube,fields.env=system,fields.app=nginx-ingress,fields.version=v1,type=nginx,multiline=1"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      hostNetwork: true
      nodeSelector:
        type: lb
Resource breakdown:
Deployment: default-http-backend is the default ingress backend; it serves every URL path and host that the nginx controller cannot match to any rule
Service: the service in front of default-http-backend
ServiceAccount: the RBAC service account the nginx-ingress-controller runs as
Role: the RBAC role bound to the nginx-ingress-controller
"ingress-controller-leader-ingress": note that the trailing "ingress" is the custom --ingress-class value, which here happens to match the namespace name (the default is ingress-controller-leader-nginx)
RoleBinding: binds the nginx-ingress-controller role to its service account
ConfigMap: the tcp-services/udp-services/nginx configuration for the nginx-ingress-controller
DaemonSet: the nginx-ingress-controller itself (pods only run on nodes matching the nodeSelector)
--ingress-class=ingress: declares that this controller only handles Ingress resources annotated with the class "ingress"
Note: since this controller serves only a single namespace, it does not need its own ClusterRole and ClusterRoleBinding RBAC objects!
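The leader-election ConfigMap name that must appear in the Role's resourceNames is derived from two controller flags; the sketch below restates how the manifest above ties them together (the default election id is assumed, since no --election-id flag is set):

```yaml
# Controller args:
#   (no --election-id flag)   ->  election id = "ingress-controller-leader" (default)
#   --ingress-class=ingress   ->  class       = "ingress"
# ConfigMap name = "<election-id>-<ingress-class>", hence:
resourceNames:
- "ingress-controller-leader-ingress"
```

If you later change either flag, this resourceNames entry must be updated to match, or the controller loses get/update access to its own election ConfigMap.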
Create the ingress-controller:
kubectl create -f nginx-ingress.yaml
Add a ClusterRoleBinding subject
The default ingress-controller's ClusterRoleBinding only binds the ingress ServiceAccount in the kube-system namespace, so the custom ingress ServiceAccount must also be added to that ClusterRoleBinding; otherwise the new ingress-controller fails at startup with a permission error:
The cluster seems to be running with a restrictive Authorization mode and the Ingress controller does not have the required permissions to operate normally
After the change, the ClusterRoleBinding nginx-ingress-clusterrole-nisa-binding looks like this:
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: kube-system
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress
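One way to make this change non-interactively (a sketch; you can equally well run kubectl edit and add the subject by hand) is a JSON patch that appends the new subject to the existing binding:

```shell
kubectl patch clusterrolebinding nginx-ingress-clusterrole-nisa-binding \
  --type=json \
  -p='[{"op":"add","path":"/subjects/-","value":{"kind":"ServiceAccount","name":"nginx-ingress-serviceaccount","namespace":"ingress"}}]'
```

The "add" op with path /subjects/- appends to the end of the subjects array without disturbing the existing kube-system entry.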
Create an ingress resource
Prerequisite: at least one node carries the label type: lb
Example:
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "ingress"
  name: storm-nimbus-ingress
  namespace: ingress
spec:
  rules:
  - host: storm-nimbus-ingress.test.com
    http:
      paths:
      - backend:
          serviceName: storm-nimbus-cluster
          servicePort: 8080
        path: /
kubernetes.io/ingress.class: declares that only the ingress-controller whose class is "ingress" should serve this resource
host: the custom server_name
serviceName: the name of the k8s service to proxy to
servicePort: the port the service exposes
Create the ingress:
kubectl create -f ingress.yml
Access
Visit http://storm-nimbus-ingress.test.com/ (make sure storm-nimbus-ingress.test.com resolves to an ingress-controller node)
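If DNS for the test hostname is not set up yet, you can still verify the controller from any machine that can reach a type: lb node; the node IP below is a placeholder for one of your ingress nodes:

```shell
# Send the request to the node directly while presenting the expected Host header
curl -H 'Host: storm-nimbus-ingress.test.com' http://192.168.100.101/

# Or pin name resolution for this one request with curl's --resolve option
curl --resolve storm-nimbus-ingress.test.com:80:192.168.100.101 \
  http://storm-nimbus-ingress.test.com/
```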
With multiple ingresses in place, network isolation between namespaces becomes necessary; by default, namespaces are not isolated from each other.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aep-production-network-policy
  namespace: aep-production  # scope access control to this namespace; all namespaces can reach each other by default, but as soon as a NetworkPolicy selects a pod, the default for that pod becomes deny-all
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:  # allow this IP
        cidr: 10.42.89.0/32  # the IP of one of the two ingress hosts
    - ipBlock:  # allow this IP
        cidr: 10.42.143.0/32
    - namespaceSelector: {}  # note: an empty namespaceSelector matches all namespaces, so this allows traffic from every namespace
    - podSelector:  # allow pods (in this namespace) carrying the following labels
        matchLabels:
          project: aep
          env: production
          vdc: oscarindustry
  policyTypes:
  - Ingress
The following is a production configuration example (the ingress resources themselves are generated by the devops platform):
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: carchat-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: ecloud02-plat-ops-repo01.cmiov:5000/k8s/nginx-ingress-controller-defaultbackend:1.4
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 10m
            memory: 20Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: carchat-prod
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: carchat-prod
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: carchat-prod
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<carchat-prod-oscarindustry>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-carchat-prod-oscarindustry"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: carchat-prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: carchat-prod
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: carchat-prod
---
kind: ConfigMap
apiVersion: v1
data:
  "59090": kube-system/kubernetes-dashboard:9090
  "49090": monitoring/prometheus-operated:9090
metadata:
  name: tcp-services
  namespace: carchat-prod
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: carchat-prod
---
kind: ConfigMap
apiVersion: v1
data:
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$http_x_forwarded_for"'
  worker-shutdown-timeout: "600"
metadata:
  name: nginx-configuration
  namespace: carchat-prod
  labels:
    app: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: carchat-prod
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # initContainers:
      # - command:
      #   - sh
      #   - -c
      #   - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
      #   image: concar-docker-rg01:5000/k8s/alpine:3.6
      #   imagePullPolicy: IfNotPresent
      #   name: sysctl
      #   securityContext:
      #     privileged: true
      containers:
      - name: nginx-ingress-controller
        image: ecloud02-plat-ops-repo01.cmiov:5000/k8s/nginx-ingress-controller:0.24.1
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --ingress-class=carchat-prod-oscarindustry
        - --annotations-prefix=nginx.ingress.kubernetes.io
        resources:
          limits:
            cpu: "3"
            memory: 6000Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: aliyun_logs_ingress
          value: "stdout"
        - name: aliyun_logs_ingress_tags
          value: "fields.project=kube,fields.env=system,fields.app=nginx-ingress,fields.version=v1,type=nginx,multiline=1"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      hostNetwork: true
      nodeSelector:
        ingress: carchat-prod
If you are not creating the ingress through the devops platform:
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "ingress"
  name: storm-nimbus-ingress
  namespace: ingress
spec:
  rules:
  - host: storm-nimbus-ingress.test.com
    http:
      paths:
      - backend:
          serviceName: storm-nimbus-cluster
          servicePort: 8080
        path: /
Original article: https://blog.51cto.com/4169523/2465460