Kubernetes Ingress-Nginx High Availability

Assume that ingress-nginx has been deployed on two designated worker nodes of a Kubernetes cluster to proxy traffic to the backend pods. To avoid a single point of entry, we run keepalived on those two nodes so that they share one external VIP with automatic failover.

First, make sure ingress-nginx is actually running on two worker nodes.
The environment used in this walkthrough is as follows:

IP address    Hostname       Role
10.0.0.31     k8s-master01   control plane
10.0.0.34     k8s-node02     ingress-nginx, keepalived
10.0.0.35     k8s-node03     ingress-nginx, keepalived

1. Check the ingress-nginx status

[root@k8s-master01 Ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP          NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-85bd8789cd-8c4xh   1/1     Running   0          62s     10.0.0.34   k8s-node02   <none>           <none>
nginx-ingress-controller-85bd8789cd-mhd8n   0/1     Pending   0          3s      <none>      <none>       <none>           <none>
nginx-ingress-controller-85ff8dfd88-vqkhx   1/1     Running   0          3m56s   10.0.0.35   k8s-node03   <none>           <none>
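
If the controller pods are not yet running on exactly the two chosen workers, one way to get there is to label those nodes and pin the replicas to them. This is a sketch only: it assumes, as the output above suggests, that the controller is managed by a Deployment named nginx-ingress-controller, and the label key ingress=true is an arbitrary choice.

kubectl label node k8s-node02 k8s-node03 ingress=true
kubectl -n ingress-nginx patch deployment nginx-ingress-controller \
  --patch '{"spec":{"replicas":2,"template":{"spec":{"nodeSelector":{"ingress":"true"}}}}}'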

Create a namespace for the test environment:

kubectl create namespace test

2. Deploy a Deployment for testing

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
  # 部署在测试环境
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      name: myweb
      type: test
  template:
    metadata:
      labels:
        name: myweb
        type: test
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
---
# service
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
  # must be in the same namespace as the pods it selects
  namespace: test
spec:
  selector:
    name: myweb
    type: test
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
# ingress
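# NOTE: the Ingress was missing from the original manifest; what follows is a
# minimal sketch that routes the host used later in this walkthrough
# (myweb.app.com) to myweb-svc. It uses the networking.k8s.io/v1 API
# (Kubernetes >= 1.19); older clusters may need extensions/v1beta1 instead.
# The resource name and the ingress class "nginx" are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ingress
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - host: myweb.app.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myweb-svc
            port:
              number: 80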

Run kubectl create to create the resources:

kubectl create -f myweb-demo.yaml

Check whether the Deployment was created successfully:

[root@k8s-master01 Project]# kubectl get pods -n test -o wide | grep "myweb"
myweb-deploy-6d586d7db4-2g5ll   1/1     Running   0          23s     10.244.3.240   k8s-node02   <none>           <none>
myweb-deploy-6d586d7db4-cf7w7   1/1     Running   0          4m2s    10.244.1.132   k8s-node01   <none>           <none>
myweb-deploy-6d586d7db4-rp5zc   1/1     Running   0          3m59s   10.244.2.5     k8s-node03   <none>           <none>
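
To confirm that the Service and Ingress exist and that the Service actually has the three pod IPs as endpoints, a quick check looks like this (resource names match the manifest above):

kubectl -n test get svc,ingress -o wide
kubectl -n test describe svc myweb-svc | grep -i endpoints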

3. Deploy keepalived on the two worker nodes
VIP: 10.0.0.130, interface: eth0

1. Install keepalived on both nodes:

yum -y install keepalived
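
If firewalld is running on the nodes, VRRP advertisements (IP protocol 112) between the two keepalived instances must be allowed, otherwise each node may believe it is the only MASTER and both will hold the VIP. A sketch assuming firewalld is in use:

firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --reload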

2. Configure keepalived on k8s-node03 (MASTER):

[root@k8s-node03 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s-node03
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.130/24 dev eth0 label eth0:1
    }
}
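
With this configuration the VIP only moves when keepalived itself stops or the node goes down; it does not react if the ingress controller on the node dies. A common refinement is a vrrp_script health check that lowers the priority when the controller stops answering, plus a matching track_script { chk_ingress } entry inside vrrp_instance VI_1 on both nodes. The sketch below assumes the controller runs with hostNetwork and uses its default healthz port 10254; adjust the check command to match your deployment:

vrrp_script chk_ingress {
    # fail when the local controller health endpoint stops answering
    script "/usr/bin/curl -sf -o /dev/null http://127.0.0.1:10254/healthz"
    interval 2
    fall 3
    weight -20    # on failure, drop MASTER priority 110 below the BACKUP's 100
}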

3. Configure keepalived on k8s-node02 (BACKUP):

[root@k8s-node02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s-node02
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      10.0.0.130/24 dev eth0 label eth0:1
    }
}

4. Start keepalived on both nodes and enable it at boot:

systemctl start keepalived.service
systemctl enable keepalived.service 

Once started, check that the VIP is present on k8s-node03:

[root@k8s-node03 ~]# ip add | grep "130"
    inet 10.0.0.130/24 scope global secondary eth0:1

5. On the client machine, add a hosts entry so the domain resolves to the VIP:

10.0.0.130 myweb.app.com

6. Test access from a browser at http://myweb.app.com.
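
The same check can be done from the command line. The first form assumes the hosts entry above; the second uses curl's --resolve option to pin the name to the VIP without touching /etc/hosts. A 200 response means the request reached a backend pod through the ingress:

curl -s -o /dev/null -w "%{http_code}\n" http://myweb.app.com/
curl -s -o /dev/null -w "%{http_code}\n" --resolve myweb.app.com:80:10.0.0.130 http://myweb.app.com/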

4. Test VIP failover
Now stop the keepalived process on k8s-node03; the VIP should fail over to k8s-node02.

[root@k8s-node03 ~]# systemctl stop keepalived.service

# Check the VIP on k8s-node02
[root@k8s-node02 ~]# ip add | grep "130"
    inet 10.0.0.130/24 scope global secondary eth0:1

Access the site again to confirm it is still reachable through the VIP.
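
The curl check from step 6 should still return 200 even though the VIP now lives on k8s-node02. Because k8s-node03 has the higher priority (110 vs 100) and preemption is enabled by default, starting keepalived on it again moves the VIP back:

# on k8s-node03
systemctl start keepalived.service
ip add | grep "130"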

Original post: https://blog.51cto.com/12643266/2455788
