k8s Deployment --- Multi-Node Deployment and Load Balancing Setup (Part 5)

Introduction to Multi-Node Deployment

  • In a production environment, a Kubernetes platform must also be highly available. Kubernetes uses a master-centric management model: the master server schedules and manages the individual node servers. The previous articles built a single-node deployment (one master server), so if that master goes down the whole platform becomes unusable. To keep the platform's services highly available, we therefore move to a multi-node (multi-master) deployment.

Introduction to Load Balancing

  • In a multi-master deployment, several masters run at the same time. If every request is still handled by the same master, that master slows down under load while the remaining masters sit idle, wasting resources. A load-balancing service solves both problems.

  • In this setup we use nginx as a layer-4 (TCP) load balancer and keepalived to provide the floating (virtual) IP address.

Lab Deployment

Lab Environment

  • lb01: 192.168.80.19 (load balancer)
  • lb02: 192.168.80.20 (load balancer)
  • Master01: 192.168.80.12
  • Master02: 192.168.80.11
  • Node01: 192.168.80.13
  • Node02: 192.168.80.14
  • VIP: 192.168.80.100 (floating address provided by keepalived)

Multi-Master Deployment

  • Operations on the master01 server

    [root@master01 kubeconfig]# scp -r /opt/kubernetes/ root@192.168.80.11:/opt     //copy the kubernetes directory straight to master02
    The authenticity of host '192.168.80.11 (192.168.80.11)' can't be established.
    ECDSA key fingerprint is SHA256:Ih0NpZxfLb+MOEFW8B+ZsQ5R8Il2Sx8dlNov632cFlo.
    ECDSA key fingerprint is MD5:a9:ee:e5:cc:40:c7:9e:24:5b:c1:cd:c1:7b:31:42:0f.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.80.11' (ECDSA) to the list of known hosts.
    root@192.168.80.11's password:
    token.csv                                                                  100%   84    61.4KB/s   00:00
    kube-apiserver                                                             100%  929     1.6MB/s   00:00
    kube-scheduler                                                             100%   94   183.2KB/s   00:00
    kube-controller-manager                                                    100%  483   969.2KB/s   00:00
    kube-apiserver                                                             100%  184MB 106.1MB/s   00:01
    kubectl                                                                    100%   55MB  85.9MB/s   00:00
    kube-controller-manager                                                    100%  155MB 111.9MB/s   00:01
    kube-scheduler                                                             100%   55MB 115.8MB/s   00:00
    ca-key.pem                                                                 100% 1675     2.7MB/s   00:00
    ca.pem                                                                     100% 1359     2.6MB/s   00:00
    server-key.pem                                                             100% 1679     2.5MB/s   00:00
    server.pem                                                                 100% 1643     2.7MB/s   00:00
    [root@master01 kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.11:/usr/lib/systemd/system    //copy the startup unit files of the three master components (note: no spaces inside the brace expansion)
    root@192.168.80.11's password:
    kube-apiserver.service                                                     100%  282   274.4KB/s   00:00
    kube-controller-manager.service                                            100%  317   403.5KB/s   00:00
    kube-scheduler.service                                                     100%  281   379.4KB/s   00:00
    [root@master01 kubeconfig]# scp -r /opt/etcd/ root@192.168.80.11:/opt/    //important: master02 must have the etcd certificates or the apiserver service will not start, so copy master01's existing etcd certificates to master02
    root@192.168.80.11's password:
    etcd                                                                       100%  509   275.7KB/s   00:00
    etcd                                                                       100%   18MB  95.3MB/s   00:00
    etcdctl                                                                    100%   15MB  75.1MB/s   00:00
    ca-key.pem                                                                 100% 1679   941.1KB/s    00:00
    ca.pem                                                                     100% 1265     1.6MB/s   00:00
    server-key.pem                                                             100% 1675     2.0MB/s   00:00
    server.pem                                                                 100% 1338     1.5MB/s   00:00
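    (Optional check) A minimal sanity sketch on master02, assuming the directory layout copied above; it only confirms that the binaries and certificates arrived:
    [root@master02 ~]# ls /opt/kubernetes/bin/     //expect kube-apiserver, kube-controller-manager, kube-scheduler, kubectl
    [root@master02 ~]# ls /opt/etcd/ssl/           //expect ca.pem, ca-key.pem, server.pem, server-key.pem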
  • Operations on the master02 server
    [root@master02 ~]# systemctl stop firewalld.service     //stop the firewall
    [root@master02 ~]# setenforce 0                        //put SELinux in permissive mode
    [root@master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver     //edit the config file
    ...
    --etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379 \
    --bind-address=192.168.80.11 \        //change this IP to master02's address
    --secure-port=6443 \
    --advertise-address=192.168.80.11 \   //change this IP to master02's address
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 ...
    :wq
    [root@master02 ~]# systemctl start kube-apiserver.service   //start the apiserver service
    [root@master02 ~]# systemctl enable kube-apiserver.service  //enable it at boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    [root@master02 ~]# systemctl start kube-controller-manager.service   //start the controller-manager
    [root@master02 ~]# systemctl enable kube-controller-manager.service  //enable it at boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@master02 ~]# systemctl start kube-scheduler.service            //start the scheduler
    [root@master02 ~]# systemctl enable kube-scheduler.service           //enable it at boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
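    (Optional check) Before moving on, a quick hedged check that the apiserver really bound to master02's address, assuming the iproute2 ss tool is available:
    [root@master02 ~]# ss -lntp | grep 6443        //expect kube-apiserver listening on 192.168.80.11:6443
    [root@master02 ~]# systemctl is-active kube-apiserver kube-controller-manager kube-scheduler    //all three should print "active"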
    [root@master02 ~]# vim /etc/profile       //add the environment variable
    ...
    export PATH=$PATH:/opt/kubernetes/bin/
    :wq
    [root@master02 ~]# source /etc/profile     //re-read the profile
    [root@master02 ~]# kubectl get node        //view node information
    NAME            STATUS   ROLES    AGE    VERSION
    192.168.80.13   Ready    <none>   146m   v1.12.3
    192.168.80.14   Ready    <none>   144m   v1.12.3    //the multi-master configuration is working
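    (Optional check) The control-plane components can also be verified from master02 itself; a short sketch using the kubectl just added to the PATH:
    [root@master02 ~]# kubectl get cs              //scheduler, controller-manager and the etcd members should all report Healthy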

Load Balancer Deployment

  • Perform on both lb01 and lb02

    [root@lb01 ~]# systemctl stop firewalld.service
    [root@lb01 ~]# setenforce 0
    [root@lb01 ~]# vim /etc/yum.repos.d/nginx.repo   //configure the yum repository for nginx
    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/centos/7/$basearch/
    gpgcheck=0
    :wq
    [root@lb01 yum.repos.d]# yum list     //reload the yum metadata
    Loaded plugins: fastestmirror
    base                                                                                  | 3.6 kB  00:00:00
    extras                                                                                | 2.9 kB   00:00:00
    ...
    [root@lb01 yum.repos.d]# yum install nginx -y     //install the nginx service
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: mirrors.aliyun.com
    * extras: mirrors.163.com
    ...
    [root@lb01 yum.repos.d]# vim /etc/nginx/nginx.conf    //edit the nginx configuration file
    ...
    events {
        worker_connections  1024;
    }

    stream {                     #add the layer-4 (stream) forwarding block
        log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log /var/log/nginx/k8s-access.log main;

        upstream k8s-apiserver {
            server 192.168.80.12:6443;          #note the IP addresses: the two master servers
            server 192.168.80.11:6443;
        }
        server {
            listen 6443;
            proxy_pass k8s-apiserver;
        }
    }

    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    ...
    :wq
    [root@lb01 yum.repos.d]# systemctl start nginx       //start nginx; you can test the service in a browser
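    (Optional check) Before relying on the proxy, it is worth validating the configuration and the layer-4 listener; a minimal sketch:
    [root@lb01 yum.repos.d]# nginx -t              //expect "syntax is ok" and "test is successful"
    [root@lb01 yum.repos.d]# ss -lntp | grep 6443  //nginx should now be listening on port 6443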
    [root@lb01 yum.repos.d]# yum install keepalived -y    //install the keepalived service
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: mirrors.aliyun.com
    * extras: mirrors.163.com
    ...
    [root@lb01 yum.repos.d]# mount.cifs //192.168.80.2/shares/K8S/k8s02 /mnt/     //mount the host machine's shared directory
    Password for root@//192.168.80.2/shares/K8S/k8s02:
    [root@lb01 yum.repos.d]# cp /mnt/keepalived.conf /etc/keepalived/keepalived.conf  //overwrite the default config with the prepared keepalived configuration file
    cp: overwrite '/etc/keepalived/keepalived.conf'? yes
    [root@lb01 yum.repos.d]# vim /etc/keepalived/keepalived.conf       //edit the configuration file
    ...
    vrrp_script check_nginx {
        script "/etc/nginx/check_nginx.sh"    #note: adjust the health-check script path if needed
    }

    vrrp_instance VI_1 {
        state MASTER
        interface ens33            #note: match your NIC name
        virtual_router_id 51       #VRRP router ID; must be unique per instance
        priority 100               #priority; set 90 on the backup server
        advert_int 1               #VRRP advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.80.100/24      #floating (virtual) IP address
        }
        track_script {
            check_nginx
        }
    }
    #delete everything below this block
    :wq
  • Modify the keepalived configuration file on the lb02 server
    [root@lb02 ~]# vim /etc/keepalived/keepalived.conf
    ...
    vrrp_script check_nginx {
        script "/etc/nginx/check_nginx.sh"    #note: adjust the health-check script path if needed
    }

    vrrp_instance VI_1 {
        state BACKUP           #change the role to BACKUP
        interface ens33        #NIC name
        virtual_router_id 51   #VRRP router ID; must be unique per instance
        priority 90            #priority; the backup server uses 90
        advert_int 1           #VRRP advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.80.100/24       #virtual IP address
        }
        track_script {
            check_nginx
        }
    }
    #delete everything below this block
    :wq
  • Perform on both lb01 and lb02
    [root@lb01 yum.repos.d]# vim /etc/nginx/check_nginx.sh   //create the nginx health-check script
    #!/bin/bash
    # Count running nginx processes, excluding the grep itself and this shell
    count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

    # If no nginx process is left, stop keepalived so the VIP fails over to the backup
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    :wq
    [root@lb01 yum.repos.d]# chmod +x /etc/nginx/check_nginx.sh     //make the script executable
    [root@lb01 yum.repos.d]# systemctl start keepalived     //start the service
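    (Optional check) With keepalived running you can exercise the health-check script and watch VRRP on the wire; a hedged sketch, assuming tcpdump is installed:
    [root@lb01 yum.repos.d]# bash /etc/nginx/check_nginx.sh; echo $?    //prints 0 and leaves keepalived running while nginx is up
    [root@lb01 yum.repos.d]# tcpdump -i ens33 vrrp -c 3                 //the MASTER (priority 100) advertises VRID 51 about once per second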
  • Operations on the lb01 server
    [root@lb01 ~]# ip a      //view address information
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    //the virtual address is configured successfully
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
  • Operations on the lb02 server
    [root@lb02 ~]# ip a          //view address information
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever       //no virtual IP here: lb02 is acting as the standby
  • Stop the nginx service on lb01, then check the addresses on lb02 to see whether the virtual IP has failed over successfully
    [root@lb01 ~]# systemctl stop nginx.service
    [root@lb01 nginx]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
    [root@lb02 ~]# ip a           //check on the lb02 server
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33      //the floating address has moved to lb02
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
  • Restart the nginx and keepalived services on the lb01 server
    [root@lb01 nginx]# systemctl start nginx
    [root@lb01 nginx]# systemctl start keepalived.service
    [root@lb01 nginx]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33     //the floating address is preempted back because lb01 has the higher priority
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
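  • Note: taking the VIP back like this is keepalived's default preemption behavior. If the extra failover flip is undesirable, keepalived also supports nopreempt; a hedged sketch (nopreempt only takes effect when the instance is configured with state BACKUP on both balancers):
    vrrp_instance VI_1 {
        state BACKUP        #both balancers start as BACKUP when using nopreempt
        nopreempt           #keep the VIP where it is even after a higher-priority node recovers
        ...
    }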
  • Modify the configuration files on all node servers (shown on one node; repeat on each, or see the scripted one-liner below)
    [root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
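    (Tip) The same edit on the remaining nodes can be scripted; a hedged one-liner, assuming the kubeconfig files previously pointed at master01 (192.168.80.12) as in the earlier articles:
    [root@node02 ~]# sed -i 's#server: https://192.168.80.12:6443#server: https://192.168.80.100:6443#' /opt/kubernetes/cfg/{bootstrap.kubeconfig,kubelet.kubeconfig,kube-proxy.kubeconfig}
    [root@node02 ~]# grep "server:" /opt/kubernetes/cfg/*.kubeconfig    //all three files should now point at the VIP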
    [root@node01 ~]# systemctl restart kubelet.service    //restart the services
    [root@node01 ~]# systemctl restart kube-proxy.service
  • Check the access log on the lb01 server
    [root@lb01 nginx]# tail /var/log/nginx/k8s-access.log
    192.168.80.13 192.168.80.12:6443 - [11/Feb/2020:15:23:52 +0800] 200 1118
    192.168.80.13 192.168.80.11:6443 - [11/Feb/2020:15:23:52 +0800] 200 1119
    192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1119
    192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1120
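    (Optional check) The layer-4 path can also be exercised directly through the VIP; a minimal sketch: if anonymous access is disabled the call returns an HTTP 401 rather than the version JSON, which still proves the balancer is forwarding:
    [root@lb01 nginx]# curl -k https://192.168.80.100:6443/version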

    The multi-node setup and load-balancer configuration are complete.

Original article: https://blog.51cto.com/14473285/2470307
