Ingress High Availability: Deploying ingress-nginx as a DaemonSet

Preface

To make Ingress highly available in Kubernetes while exposing only a single access entry point outside the cluster, we use keepalived to eliminate the single point of failure and deploy the ingress controller on edge nodes as a DaemonSet.

Edge Nodes

First, what is an edge node (Edge Node)? An edge node is a node inside the cluster whose job is to expose in-cluster services to the outside world; external clients call in-cluster services through it, making it the endpoint for traffic between the cluster and the outside.

An edge node has to address two concerns:

  • High availability: no single point of failure, otherwise external access to the whole Kubernetes cluster is lost
  • A single consistent external entry point: one public IP and port

Architecture

To meet these requirements, we use keepalived.

After creating the Ingress in Kubernetes, add an A record in DNS whose name is the host in your Ingress rule and whose IP is the keepalived VIP. External clients can then reach your services by domain name, and the single point of failure is removed.
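
For illustration, assuming a hypothetical Ingress host app.example.com and the VIP 10.40.0.109 used throughout this article, the DNS entry and a quick check would look like this:

app.example.com.  300  IN  A  10.40.0.109

dig +short app.example.com
curl http://app.example.com/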

Pick three Kubernetes nodes as edge nodes and install keepalived on each of them.

Install the keepalived service

yum install -y keepalived

keepalived configuration on node1

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id edgenode
}

! node1 is the initial MASTER: it has the highest priority (100) and holds
! the VIP 10.40.0.109; virtual_router_id must match on all three nodes
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}

  

keepalived configuration on node2

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id edgenode
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}

  

keepalived configuration on node3

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id edgenode
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}
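
With these priorities, the VIP normally sits on node1 (priority 100). If node1 fails, keepalived fails the VIP over to node2 (priority 90), then to node3 (priority 80); because keepalived preempts by default, the VIP moves back to node1 once it recovers.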

 

Enable keepalived at boot

systemctl enable keepalived.service

  

Start the keepalived service

systemctl start keepalived.service
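
After keepalived is running, verify that the VIP is bound on the MASTER (node1 here) and is reachable from the network:

# on node1, 10.40.0.109 should appear on eth0 with label eth0:1
ip addr show eth0

# from any other machine on the same network
ping -c 3 10.40.0.109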

  

Install ingress-nginx

Label the edge nodes

kubectl label nodes 10.40.0.105 edgenode=true
kubectl label nodes 10.40.0.106 edgenode=true
kubectl label nodes 10.40.0.107 edgenode=true

  

Check the node labels

# kubectl get node --show-labels
NAME          STATUS   ROLES    AGE   VERSION   LABELS
10.40.0.105   Ready    <none>   25d   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=10.40.0.105
10.40.0.106   Ready    <none>   25d   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,env_role=dev,kubernetes.io/hostname=10.40.0.106
10.40.0.107   Ready    <none>   17d   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=10.40.0.107

Change the original ingress manifest from a Deployment to a DaemonSet

cat ingress-daemonset.yaml

apiVersion: extensions/v1beta1
#kind: Deployment
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  #replicas: 3
  #selector:
    #matchLabels:
      #app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        edgenode: 'true'
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
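
Note: extensions/v1beta1 still works on the v1.12.1 cluster used here, but that API group was removed for DaemonSet in Kubernetes v1.16. A hedged sketch of the equivalent header for newer clusters, where spec.selector becomes mandatory:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    # ... rest of the pod template is unchanged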

  

Namespace manifest

cat namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

ConfigMap manifest

cat configmap.yaml 

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
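
The nginx-configuration ConfigMap is created empty; the controller watches it, so NGINX can be tuned later by adding data keys. A minimal sketch, assuming you want to raise the request body size limit (proxy-body-size is a documented ingress-nginx option):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-body-size: "20m"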

  

RBAC manifest

cat rbac.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

  

tcp-services ConfigMap manifest

cat tcp-services-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
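
The tcp-services ConfigMap starts empty. To expose a plain TCP service through the edge nodes, add data entries of the form "<external port>": "<namespace>/<service>:<service port>"; the udp-services ConfigMap below uses the same format. A hedged example, assuming a hypothetical mysql Service in the default namespace:

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": "default/mysql:3306"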

  

udp-services ConfigMap manifest

cat udp-services-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

  

default-backend

cat default-backend.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
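
Any request that matches no Ingress rule is answered by this backend. After deployment, a request to the VIP without a matching host should return its 404 page:

curl http://10.40.0.109/
# default backend - 404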

  

Deploy

kubectl create -f namespace.yaml
kubectl create -f configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f default-backend.yaml
kubectl create -f tcp-services-configmap.yaml
kubectl create -f udp-services-configmap.yaml
kubectl create -f ingress-daemonset.yaml
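
To watch the controller pods come up on the three edge nodes:

kubectl get pods -n ingress-nginx -o wide -w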

  

Check the ingress controller

# kubectl get ds -n ingress-nginx
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-ingress-controller   3         3         3       3            3           edgenode=true   57m
# kubectl get pods -n ingress-nginx -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE
default-http-backend-86569b9d95-x4bsn   1/1     Running   12         24d   172.17.65.6   10.40.0.105   <none>
nginx-ingress-controller-5b7xg          1/1     Running   0          58m   10.40.0.105   10.40.0.105   <none>
nginx-ingress-controller-b5mxc          1/1     Running   0          58m   10.40.0.106   10.40.0.106   <none>
nginx-ingress-controller-t5n5k          1/1     Running   0          58m   10.40.0.107   10.40.0.107   <none>
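
For an end-to-end test, create an Ingress and curl the VIP with a matching Host header. A minimal sketch, assuming a hypothetical Service my-svc listening on port 80 in the default namespace and the hypothetical host app.example.com from earlier:

cat test-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc
          servicePort: 80

kubectl create -f test-ingress.yaml
curl -H 'Host: app.example.com' http://10.40.0.109/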

Original article: https://www.cnblogs.com/terrycy/p/10048165.html
