Keepalived + Nginx + Tomcat: Building a High-Availability Web Cluster (repost)

[Figure: cluster planning diagram]

I. Installing Nginx

1. Download the Nginx source package and install the build dependencies

(1) Install the C compiler toolchain

yum -y install gcc    # C compiler

(2) Install the PCRE development headers

yum -y install pcre-devel

(3) Install the zlib development headers

yum -y install zlib-devel

(4) Build and install Nginx

Change into the directory where the Nginx source was unpacked, then configure, compile, and install:

[root@localhost nginx-1.12.2]# pwd
/usr/local/nginx/nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure && make && make install
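
A bare ./configure uses Nginx's default install prefix (/usr/local/nginx) and runs workers as the nobody user. If you prefer to pin these explicitly (the init script shown later extracts the --user= value from nginx -V), a hedged sketch of an alternative configure step follows; the nginx user name here is an assumption, not part of the original article:

# Hypothetical variant: create a dedicated worker user and point configure at it
useradd -M -s /sbin/nologin nginx
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx
make && make install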

(5) Start Nginx

After the installation finishes, locate the installation directory:

[root@localhost nginx-1.12.2]# whereis nginx
nginx: /usr/local/nginx
[root@localhost nginx-1.12.2]#

Enter the sbin subdirectory and start Nginx:

[root@localhost sbin]# ls
nginx
[root@localhost sbin]# ./nginx &
[1] 5768
[root@localhost sbin]#

Check whether Nginx is up:

[Figure: Nginx welcome page confirming a successful start]
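
The same check works from the command line; a minimal sketch that simply asks the local Nginx for its response headers:

# Expect an "HTTP/1.1 200 OK" status line and a "Server: nginx/1.12.2" header
curl -I http://127.0.0.1/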

Or confirm it via the process list:

[root@localhost sbin]# ps -aux|grep nginx
root       5769  0.0  0.0  20484   608 ?        Ss   14:03   0:00 nginx: master process ./nginx
nobody     5770  0.0  0.0  23012  1620 ?        S    14:03   0:00 nginx: worker process
root       5796  0.0  0.0 112668   972 pts/0    R+   14:07   0:00 grep --color=auto nginx
[1]+  Done                  ./nginx
[root@localhost sbin]#

At this point Nginx is installed and running.

(6) Configure an Nginx service script and start-on-boot

Create an init script for Nginx (note: adjust the Nginx paths below to match your own installation):

[root@localhost init.d]# vim /etc/rc.d/init.d/nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /usr/local/nginx/conf/nginx.conf
# pidfile: /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    #configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    #configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Grant execute permission on the script and register it to start on boot:

[root@localhost init.d]# chmod -R 777 /etc/rc.d/init.d/nginx
[root@localhost init.d]# chkconfig --add nginx
[root@localhost init.d]# chkconfig nginx on

Start Nginx through the script:

[root@localhost init.d]# ./nginx start

Add Nginx to the system PATH:

[root@localhost init.d]# echo 'export PATH=$PATH:/usr/local/nginx/sbin' >> /etc/profile && source /etc/profile

Nginx can now be managed as a service [ service nginx (start|stop|restart) ]:

[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]

Tip: shortcut command

service nginx (start|stop|restart)
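
Since CentOS 7 forwards the service and chkconfig calls above to systemd (note the "via systemctl" output), the same script can presumably also be driven through systemctl; a small sketch:

# systemd wraps the SysV script via systemd-sysv-generator on CentOS 7
systemctl status nginx       # show the current state
systemctl enable nginx       # same intent as chkconfig nginx on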

II. Installing and Configuring Keepalived

1. Install the Keepalived build dependencies

yum install -y popt-devel
yum install  -y ipvsadm
yum install -y libnl*
yum install -y libnf*
yum install -y openssl-devel

2. Compile and install Keepalived

[root@localhost keepalived-1.3.9]# ./configure
[root@localhost keepalived-1.3.9]# make && make install

3. Set up Keepalived as a system service

Copy the default configuration files into the standard locations by hand:

[root@localhost etc]# mkdir /etc/keepalived
[root@localhost etc]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@localhost etc]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

Create a symlink for the keepalived binary:

[root@localhost sysconfig]# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/

Enable Keepalived at boot:

[root@localhost sysconfig]# chkconfig keepalived on
Note: Forwarding request to 'systemctl enable keepalived.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service

Start the Keepalived service:

[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf

Stop the Keepalived service:

[root@localhost keepalived]# killall keepalived
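
Because make install also registered a keepalived.service unit (as the chkconfig output above shows), the daemon can presumably be managed through systemd instead of running the binary by hand; a sketch:

# Alternative management through systemd, assuming the keepalived.service unit installed above
systemctl start keepalived
systemctl status keepalived
systemctl stop keepalived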

III. Cluster Planning and Setup

[Figure: cluster planning diagram]

Environment:

CentOS 7.2

Keepalived   Version 1.4.0 - December 29, 2017

Nginx        Version: nginx/1.12.2

Tomcat       Version: 8



Cluster planning list:

VM                           IP              Description
Keepalived+Nginx01 [Master]  192.168.43.101  Nginx Server 01
Keepalived+Nginx02 [Backup]  192.168.43.102  Nginx Server 02
Tomcat01                     192.168.43.103  Tomcat Web Server 01
Tomcat02                     192.168.43.104  Tomcat Web Server 02
VIP                          192.168.43.150  Virtual (floating) IP

1. Modify the default Tomcat welcome page so the two web nodes can be told apart

On Tomcat Server 01, edit ROOT/index.jsp on node 192.168.43.103 to include the node's IP address and the X-NGINX header value:

<div id="asf-box">
    <h1>${pageContext.servletContext.serverInfo}(192.168.43.103)<%=request.getHeader("X-NGINX")%></h1>
</div>

On Tomcat Server 02, edit ROOT/index.jsp on node 192.168.43.104 in the same way:

<div id="asf-box">
    <h1>${pageContext.servletContext.serverInfo}(192.168.43.104)<%=request.getHeader("X-NGINX")%></h1>
</div>

2. Start the Tomcat services and check each node's page. Nginx is not running yet, so the request carries no X-NGINX header.

[Figure: Tomcat pages after start-up, showing each node's IP with no X-NGINX value]
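
The same can be checked from a shell; a minimal sketch that fetches each node's page directly (the X-NGINX marker prints as null because no proxy has set it yet):

# Hit both Tomcat nodes directly on port 8080
curl -s http://192.168.43.103:8080/ | grep -o '<h1>.*</h1>'
curl -s http://192.168.43.104:8080/ | grep -o '<h1>.*</h1>'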

3. Configure the Nginx proxy

1. Proxy configuration on the Master node [192.168.43.101]:

upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-1";
    }
    # ...... rest omitted
}

2. Proxy configuration on the Backup node [192.168.43.102]:

upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-2";
    }
    # ...... rest omitted
}
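
After editing nginx.conf on either node, it is worth validating the syntax and reloading the running instance; a short sketch:

# Check the configuration syntax, then signal the running master process to reload it
nginx -t
nginx -s reload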

3. Start the Nginx service on the Master node:

[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]

Browsing to 192.168.43.101 now shows the 103 and 104 Tomcat pages alternating, which means Nginx is load-balancing requests across the two Tomcat instances.

[Figure: load-balancing result on the Master node]
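
The same alternation can be observed from the command line; a small sketch:

# Four consecutive requests should alternate between the 103 and 104 pages,
# each stamped with the NGINX-1 marker added by the Master proxy
for i in 1 2 3 4; do
    curl -s http://192.168.43.101/ | grep -o '<h1>.*</h1>'
done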

4. Configure the Backup node [192.168.43.102] the same way; after starting its Nginx, browsing to 192.168.43.102 shows that the Backup node load-balances as well.

[Figure: load-balancing result on the Backup node]

4. Configure the Keepalived scripts

1. On both the Master and the Backup node, add a check_nginx.sh file under /etc/keepalived to detect whether Nginx is alive, and add a keepalived.conf file.

check_nginx.sh:

#!/bin/bash
# Timestamp used for logging
d=`date --date today +%Y%m%d_%H:%M:%S`
# Count running nginx processes
n=`ps -C nginx --no-heading|wc -l`
# If the count is 0, try to start nginx and count again;
# if it is still 0, nginx cannot be started, so stop keepalived
if [ $n -eq "0" ]; then
        /etc/rc.d/init.d/nginx start
        n2=`ps -C nginx --no-heading|wc -l`
        if [ $n2 -eq "0" ]; then
                echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
                systemctl stop keepalived
        fi
fi

After adding it, grant execute permission so the script can run:

[root@localhost keepalived]# chmod -R 777 /etc/keepalived/check_nginx.sh
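
Before relying on it from Keepalived, the script can be exercised by hand; a hedged sketch of a manual test on one of the Nginx nodes:

# Stop nginx, run the health-check script, then confirm it brought nginx back up
nginx -s stop
/etc/keepalived/check_nginx.sh
ps -C nginx --no-heading | wc -l     # should be non-zero again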

2. On the Master node, add the following keepalived.conf under /etc/keepalived:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}

global_defs {
    notification_email {
        # e-mail alert recipients can be added here
    }
}

vrrp_instance VI_1 {
    state MASTER                  # MASTER here; the backup machine uses BACKUP
    interface ens33               # NIC the instance binds to (check with ip addr; adjust to your own NIC)
    virtual_router_id 51          # must be identical on every node of the same instance
    mcast_src_ip 192.168.43.101
    priority 250                  # the MASTER priority must be higher than the BACKUP's, e.g. BACKUP uses 240
    advert_int 1                  # interval in seconds between MASTER/BACKUP sync checks
    nopreempt                     # non-preemptive mode
    authentication {              # authentication settings
        auth_type PASS            # authentication method between master and backup
        auth_pass 123456
    }
    track_script {
        chk_nginx                 # must match the vrrp_script name above
    }
    virtual_ipaddress {           # virtual IP(s)
        192.168.43.150            # multiple VIPs may be listed, one per line
    }
}
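
With this file in place and keepalived started, two quick checks help confirm the VRRP instance is actually advertising; a sketch, assuming the default syslog setup on CentOS 7:

# Keepalived state transitions (BACKUP/MASTER) are logged via syslog
grep -i vrrp /var/log/messages | tail -n 20

# VRRP advertisements should leave the master on the configured NIC roughly once per second
tcpdump -i ens33 -nn -c 5 vrrp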

3. On the Backup node, add the following keepalived.conf under /etc/keepalived:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}

global_defs {
    notification_email {
        # e-mail alert recipients can be added here
    }
}

vrrp_instance VI_1 {
    state BACKUP                  # BACKUP here; the master machine uses MASTER
    interface ens33               # NIC the instance binds to (check with ip addr)
    virtual_router_id 51          # must be identical on every node of the same instance
    mcast_src_ip 192.168.43.102
    priority 240                  # lower than the MASTER's 250
    advert_int 1                  # interval in seconds between MASTER/BACKUP sync checks
    nopreempt                     # non-preemptive mode
    authentication {              # authentication settings
        auth_type PASS            # authentication method between master and backup
        auth_pass 123456
    }
    track_script {
        chk_nginx                 # must match the vrrp_script name above
    }
    virtual_ipaddress {           # virtual IP(s)
        192.168.43.150            # multiple VIPs may be listed, one per line
    }
}

Tips: a few notes on the configuration

  • state - the master must be set to MASTER, the backup to BACKUP
  • interface - the NIC name; on this VMware 12 virtual machine the NIC is ens33
  • mcast_src_ip - each node's own real IP address
  • priority - the master's priority must be higher than the backup's; here the master uses 250 and the backup 240
  • virtual_ipaddress - the virtual IP (192.168.43.150)
  • authentication - auth_pass must be identical on master and backup; keepalived uses it for VRRP communication
  • virtual_router_id - must be identical on master and backup

5. Verifying cluster high availability (HA)

  • Step 1: Start the Keepalived and Nginx services on the Master machine
[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf
[root@localhost keepalived]# service nginx start

Check the Nginx processes:

[root@localhost keepalived]# ps -aux|grep nginx
root       6390  0.0  0.0  20484   612 ?        Ss   19:13   0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody     6392  0.0  0.0  23008  1628 ?        S    19:13   0:00 nginx: worker process
root       6978  0.0  0.0 112672   968 pts/0    S+   20:08   0:00 grep --color=auto nginx

Check the Keepalived processes:

[root@localhost keepalived]# ps -aux|grep keepalived
root       6402  0.0  0.0  45920  1016 ?        Ss   19:13   0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root       6403  0.0  0.0  48044  1468 ?        S    19:13   0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root       6404  0.0  0.0  50128  1780 ?        S    19:13   0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root       7004  0.0  0.0 112672   976 pts/0    S+   20:10   0:00 grep --color=auto keepalived

Check the virtual IP binding with ip addr (abbreviated as ip add below); if 192.168.43.150 appears here, the VIP is bound to the Master node:

[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:91:bf:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.101/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::9abb:4544:f6db:8255/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::b0b3:d0ca:7382:2779/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
  • Step 2: Start the Nginx and Keepalived services on the Backup node and check them. If the virtual IP also shows up on the Backup node, the Keepalived configuration is wrong; that condition is known as split-brain.
[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:14:df:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.102/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff

  • Step 3: Verify the service

    Browse to http://192.168.43.150 and force-refresh several times: the 103 and 104 pages alternate and show NGINX-1, which means the Master node is forwarding the web requests (a command-line check is sketched below).
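
A hedged command-line equivalent of the refresh test; left running, it also makes the failover in Step 4 easy to watch:

# Poll the VIP once per second; the marker reads NGINX-1 while the Master holds the VIP
# and flips to NGINX-2 after the failover in Step 4
while true; do
    curl -s --max-time 2 http://192.168.43.150/ | grep -o 'NGINX-[12]'
    sleep 1
done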

  • Step 4: Stop the Keepalived and Nginx services on the Master and watch the web service fail over
[root@localhost keepalived]# killall keepalived
[root@localhost keepalived]# service nginx stop

Force-refreshing 192.168.43.150 now, the pages still alternate between 103 and 104 but show NGINX-2: the VIP has moved to 192.168.43.102, proving the service automatically switched to the backup node.

  • Step 5: Start the Keepalived and Nginx services on the Master again

    Checking again, the Master has reclaimed the VIP and the pages once more alternate between 103 and 104, now showing NGINX-1.

IV. Keepalived Preemptive and Non-Preemptive Modes

Keepalived HA can run in preemptive or non-preemptive mode. In preemptive mode, a MASTER that recovers from a failure takes the VIP back from the BACKUP node. In non-preemptive mode, the recovered MASTER does not take the VIP back from the BACKUP that was promoted to MASTER.

Non-preemptive configuration:

  • 1> Add the nopreempt directive to the vrrp_instance block on both nodes, meaning neither node fights over the VIP
  • 2> Set state to BACKUP on both nodes
    After both keepalived nodes start they are initially in the BACKUP state; once they exchange multicast advertisements, a MASTER is elected by priority. Because both carry nopreempt, a MASTER that recovers from a failure will not take the VIP back, which avoids the service hiccup a VIP switch can cause. Note that nopreempt is only honored when state is BACKUP, which is why the MASTER/BACKUP configuration shown earlier (state MASTER plus nopreempt) still preempts, as observed in Step 5; a sketch of a truly non-preemptive setup follows.
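
A minimal sketch of a non-preemptive vrrp_instance, reusing this article's addresses, priorities, and the chk_nginx script (only the state and nopreempt lines differ in intent from the earlier configs):

# Node 192.168.43.101 (higher priority, so it wins the initial election)
vrrp_instance VI_1 {
    state BACKUP                  # both nodes start as BACKUP
    nopreempt                     # honored because state is BACKUP
    interface ens33
    virtual_router_id 51
    mcast_src_ip 192.168.43.101
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.43.150
    }
}

# Node 192.168.43.102: identical block except mcast_src_ip 192.168.43.102 and priority 240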

Original article: https://www.cnblogs.com/devin-ou/p/9524341.html

