Redis High Availability Solution (Redis master/slave + keepalived + sentinel)

Architecture: Redis master/slave + keepalived + sentinel

Three machines: two run the Redis master/slave pair, and a third only takes part in electing the sentinel leader.

Master:  192.168.100.135    controller     runs redis + keepalived + sentinel

Slave:   192.168.100.136    web-nb-136     runs redis + keepalived + sentinel

Sentinel only:  192.168.100.128    WEB-NB-128    runs sentinel

VIP:  192.168.100.140

Redis version: redis-2.8.6.tar.gz (sentinel is bundled with this release)

keepalived version: keepalived-1.2.13.tar.gz

Four instances per node: 6379, 6380, 6381, 6382
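
For reference, a minimal sketch of how the stack is reached once it is up (assuming the VIP is bound and the instances are running): clients go through the VIP, and the role of each local instance can be inspected with redis-cli.

# Connect through the VIP; whichever node currently holds 192.168.100.140 serves the request.
redis-cli -h 192.168.100.140 -p 6379 ping

# Check the replication role of a local instance (repeat for 6380/6381/6382).
redis-cli -p 6379 info replication | grep role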



Test conclusions:

Scenario 1:
advert_int 4
down-after-milliseconds 3000
failover-timeout 9000
Failover time: 10-15 s

Scenario 2:
advert_int 3
down-after-milliseconds 2000    (any lower and the master/slave roles become unstable and may flip back and forth, making the service unavailable; 2000 ms is the practical minimum)
failover-timeout 6000
Failover time: 5-10 s, measured at 6-7 s
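
For reference, the three parameters compared above live in two different files, both shown in full later in this document: advert_int in the vrrp_instance block of keepalived.conf, the other two in sentinel.conf (one line per monitored master).

# keepalived.conf (vrrp_instance block)
advert_int 3

# sentinel.conf (repeated for each monitored master)
sentinel down-after-milliseconds MyMaster6379 2000
sentinel failover-timeout MyMaster6379 6000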

Steps on 135:

1. Install and configure redis

(1) Install redis

[root@controller ~]# cat install_redis_2-8-6.sh

#!/bin/bash
# Build redis 2.8.6 from source and lay out the directories used below.
yum install tcl -y
wget http://download.redis.io/releases/redis-2.8.6.tar.gz
tar zxvf redis-2.8.6.tar.gz
cd redis-2.8.6;make;cd src
cp redis-server /usr/local/bin/
cp redis-cli /usr/local/bin/
cp redis-sentinel /usr/local/bin/
cp redis-check-aof redis-check-dump redis-benchmark /usr/local/bin/
# Config, log, pid and per-instance data directories.
mkdir /etc/redis /var/log/redis /var/run/redis/
mkdir -p /var/redis/redis_{6379,6380,6381,6382}

[root@controller ~]# sh install_redis_2-8-6.sh

(2) Configure redis

① Instance 6379:

[root@controller ~]# cd /etc/redis/

[root@controller redis]# cat redis_6379.conf
daemonize yes
pidfile "/var/run/redis/redis_6379.pid"
port 6379
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6379.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6379.rdb"
dir "/var/redis/redis_6379"
maxmemory 4gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
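
After writing the config the instance can be started and checked by hand; a sketch (the same applies to 6380/6381/6382):

redis-server /etc/redis/redis_6379.conf
redis-cli -p 6379 ping                      # expect PONG
tail -n 20 /var/log/redis/redis_6379.log    # confirm it came up cleanly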

② Instance 6380:

[root@controller redis]# cat redis_6380.conf
daemonize yes
pidfile "/var/run/redis/redis_6380.pid"
port 6380
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6380.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6380.rdb"
dir "/var/redis/redis_6380"
maxmemory 2gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

③ Instance 6381:

[root@controller redis]# cat redis_6381.conf
daemonize yes
pidfile "/var/run/redis/redis_6381.pid"
port 6381
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6381.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6381.rdb"
dir "/var/redis/redis_6381"
maxmemory 2gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

④ Instance 6382:

[root@controller redis]# cat redis_6382.conf
daemonize yes
pidfile "/var/run/redis/redis_6382.pid"
port 6382
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6382.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6382.rdb"
dir "/var/redis/redis_6382"
maxmemory 2gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

2. Install and configure keepalived

(1) Install keepalived

[root@controller ~]# cat install_keepalived-1.2.13.sh
#!/bin/bash
# Build keepalived 1.2.13 from source and install the init script and binaries.
wget -qO keepalived-1.2.13.tar.gz http://www.keepalived.org/software/keepalived-1.2.13.tar.gz
yum install openssl openssl-devel -y
tar zxvf keepalived-1.2.13.tar.gz;cd keepalived-1.2.13
./configure --prefix=/usr/local/keepalived
make && make install
mkdir -p /etc/keepalived/{scripts,log}
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/bin/genhash /usr/sbin/
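
A quick sanity check after the build (a sketch; it only assumes the copy steps above succeeded):

keepalived -v                                    # should report version 1.2.13
ls /etc/keepalived/scripts /etc/keepalived/log   # directories used by the scripts below
# keepalived itself is started later through /etc/keepalived/scripts/keepalived_start.sh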

(2) Configure keepalived

[root@controller ~]# cd /etc/keepalived/

[root@controller keepalived]# cat keepalived.conf
global_defs {
    lvs_id LVS_redis
}

vrrp_script chk_redis {
    script "/etc/keepalived/scripts/redis_check.sh"
    weight -20
    interval 2
}

vrrp_instance VI_1 {
    state backup
    interface eth0
    virtual_router_id 52
    nopreempt
    priority 200
    advert_int 4

    virtual_ipaddress {
        192.168.100.140
    }

    track_script {
        chk_redis
    }

    notify_master /etc/keepalived/scripts/redis_master.sh
    notify_stop   /etc/keepalived/scripts/keepalived_stop.sh
}
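
Once keepalived is running on this node, the VIP binding can be verified; a sketch (the syslog path may differ per distribution):

/etc/init.d/keepalived start
/sbin/ip addr show eth0 | grep 192.168.100.140   # the VIP should appear on the active node
grep -i keepalived /var/log/messages | tail      # VRRP state transitions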

(3) Write the keepalived check and notify scripts

① Check script redis_check.sh

This script checks the state of the four local redis instances: on the node holding the VIP every instance must report the master role, on the other node every instance only has to answer PING; on any failure it kills keepalived so that the VIP can move to the peer.

[root@controller keepalived]# cd scripts/

[root@controller scripts]# cat redis_check.sh
#!/bin/bash
# Health check driven by keepalived (vrrp_script chk_redis).
Vip=192.168.100.140
PortGroup=(6379 6380 6381 6382)
VipValue=`/sbin/ip add|grep $Vip`
RedisCli_Cmd="/usr/local/bin/redis-cli"
Count=0

case_value(){
  case $1 in
     RedisRole)
        # role:master / role:slave from INFO
        Value=`$RedisCli_Cmd -p $Port info|awk -F'[:|\r]' '/role/{print $2}'`
        ;;
     Alive)
        Value=`$RedisCli_Cmd -p $Port PING`
        ;;
  esac
}

sub_value(){
  for Port in ${PortGroup[@]};do
     case_value $1
     if [ "$Value" = "$2" ];then
          let "Count = $Count + 1"
          if [ $Count -eq 4 ];then exit 0;fi
     else
          # Any failed check: stop keepalived so the VIP moves to the peer.
          `which pkill` keepalived
          exit 1
     fi
  done
}

if [ -n "$VipValue" ];then
  # This node holds the VIP: every instance must be a master.
  sub_value RedisRole master
else
  # Backup node: every instance only has to answer PING.
  sub_value Alive PONG
fi

[root@controller scripts]# chmod +x redis_check.sh
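
The check can also be run by hand before wiring it into keepalived; note that on failure it will kill any running keepalived process:

/etc/keepalived/scripts/redis_check.sh; echo "exit code: $?"
# 0 = all four instances passed for the current role, 1 = a failure was detected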

② Notify script keepalived_stop.sh

This script is triggered when keepalived stops abnormally or its state changes (notify_stop in keepalived.conf); it simply kills the local redis-server processes.

[root@controller scripts]# cat keepalived_stop.sh
#!/bin/bash
`which pkill` redis-server

[root@controller scripts]# chmod +x keepalived_stop.sh

③ Boot-time script keepalived_start.sh

Automatically handles the keepalived start-up ordering problem: keepalived may only start immediately on the node whose redis is currently the master.

[root@controller ~]# cat /etc/keepalived/scripts/keepalived_start.sh
#!/bin/bash
# Start keepalived immediately only if the local 6379 instance is the master;
# otherwise wait until the VIP answers, i.e. the master side is already up.
RedisRole=`/usr/local/bin/redis-cli -p 6379 info|awk -F'[:|\r]' '/role/{print $2}'`
KeepalivedStartCmd="/etc/init.d/keepalived start"

if [ "$RedisRole" = "master" ];then
        $KeepalivedStartCmd
else
        while true;
        do
                sleep 1
                ping 192.168.100.140 -c 1 >/dev/null 2>&1
                if [ $? -eq 0 ];then $KeepalivedStartCmd;break;fi
        done
fi

[root@controller ~]# chmod +x /etc/keepalived/scripts/keepalived_start.sh

3. Configure sentinel

[root@controller keepalived]# cd /etc/redis/

[root@controller redis]# cat sentinel.conf

port 26379

daemonize yes

logfile "/var/log/redis/sentinel.log"

sentinel monitor MyMaster6379 192.168.100.135 6379 2
sentinel down-after-milliseconds MyMaster6379 2000
sentinel failover-timeout MyMaster6379 6000
sentinel config-epoch MyMaster6379 1

sentinel monitor MyMaster6380 192.168.100.135 6380 2
sentinel down-after-milliseconds MyMaster6380 2000
sentinel failover-timeout MyMaster6380 6000
sentinel config-epoch MyMaster6380 1

sentinel monitor MyMaster6381 192.168.100.135 6381 2
sentinel down-after-milliseconds MyMaster6381 2000
sentinel failover-timeout MyMaster6381 6000
sentinel config-epoch MyMaster6381 1

sentinel monitor MyMaster6382 192.168.100.135 6382 2
sentinel down-after-milliseconds MyMaster6382 2000
sentinel failover-timeout MyMaster6382 6000
sentinel config-epoch MyMaster6382 1
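
Once the sentinels on all three machines are up, their view of the masters can be checked; a sketch:

redis-cli -p 26379 sentinel get-master-addr-by-name MyMaster6379   # 192.168.100.135 6379 before any failover
redis-cli -p 26379 info sentinel                                    # each master line should report sentinels=3 once the peers have found each other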

Note: the sentinel.conf on the three machines is identical except for the port.

A few points to be aware of:

1. The trailing 2 on each sentinel monitor line is the quorum: at least 2 sentinels must agree that an instance is down before anything happens. (A quorum of 1 was used during part of the testing.)

2. After a master/slave switch, both redis.conf and sentinel.conf are rewritten. If you want the original master/slave layout back, you have to edit the configuration files again and restart.

3. There is a window in which the master has died, sentinel has already picked a new master and has already rewritten the old master's configuration to slave, but has not yet finished promoting the new one. Restarting the old master at that moment leaves the setup with no master at all. So either wait until sentinel has stabilised before starting the old master, or fix the configuration files by hand and restart the cluster.

4. Sentinel only switches the roles on the server side. Normally the application has to be sentinel-aware (for example the Jedis sentinel support) to follow the new master and get a complete high-availability switchover. In this test the keepalived VIP failover makes the switch transparent to clients instead.

4. Add to the boot sequence

[root@controller redis]# cat /etc/rc.local

# Redis 2.8.6
redis-server /etc/redis/redis_6379.conf
redis-server /etc/redis/redis_6380.conf
redis-server /etc/redis/redis_6381.conf
redis-server /etc/redis/redis_6382.conf

# Redis-sentinel

redis-sentinel /etc/redis/sentinel.conf

# Keepalived

#/etc/init.d/keepalived start

/bin/bash /etc/keepalived/scripts/keepalived_start.sh

Steps on 136:

1. Install and configure redis

(1) Install redis

Same installation procedure as on 135.

(2) Configure redis

① Instance 6379:

[root@web-nb-136 ~]# cd /etc/redis/

[root@web-nb-136 redis]# cat redis_6379.conf
daemonize yes
pidfile "/var/run/redis/redis_6379.pid"
port 6379
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6379.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6379.rdb"
dir "/var/redis/redis_6379"
maxmemory 4gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

slaveof 192.168.100.135 6379
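
With the slaveof line in place and the instance started, replication can be confirmed from the slave side; a sketch:

redis-cli -p 6379 info replication | grep -E 'role|master_link_status'
# expect role:slave and master_link_status:up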

② Instance 6380:

[root@web-nb-136 redis]# cat redis_6380.conf
daemonize yes
pidfile "/var/run/redis/redis_6380.pid"
port 6380
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6380.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6380.rdb"
dir "/var/redis/redis_6380"
maxmemory 2gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

slaveof 192.168.100.135 6380

③ Instance 6381:

[root@web-nb-136 redis]# cat redis_6381.conf
daemonize yes
pidfile "/var/run/redis/redis_6381.pid"
port 6381
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6381.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6381.rdb"
dir "/var/redis/redis_6381"
maxmemory 2gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

slaveof 192.168.100.135 6381

④ Instance 6382:

[root@web-nb-136 redis]# cat redis_6382.conf
daemonize yes
pidfile "/var/run/redis/redis_6382.pid"
port 6382
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/var/log/redis/redis_6382.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump_6382.rdb"
dir "/var/redis/redis_6382"
maxmemory 2gb
slave-read-only yes
slave-serve-stale-data yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

slaveof 192.168.100.135 6382

2. Install and configure keepalived

(1) Install keepalived

Same installation procedure as on 135.

(2) Configure keepalived

[root@web-nb-136 keepalived]# cat keepalived.conf
global_defs {
    lvs_id LVS_redis
}

vrrp_script chk_redis {
    script "/etc/keepalived/scripts/redis_check.sh"
    weight -20
    interval 2
}

vrrp_instance VI_1 {
    state backup
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 4

    virtual_ipaddress {
        192.168.100.140
    }

    track_script {
        chk_redis
    }

    notify_master /etc/keepalived/scripts/redis_master.sh
    notify_stop   /etc/keepalived/scripts/keepalived_stop.sh
}

(3) Write the keepalived check and notify scripts

Same scripts ①, ② and ③ as on 135.

3. Configure sentinel

[root@web-nb-136 redis]# cat sentinel.conf

port 26479

daemonize yes

logfile "/var/log/redis/sentinel.log"

sentinel monitor MyMaster6379 192.168.100.135 6379 2
sentinel down-after-milliseconds MyMaster6379 2000
sentinel failover-timeout MyMaster6379 6000
sentinel config-epoch MyMaster6379 1

sentinel monitor MyMaster6380 192.168.100.135 6380 2
sentinel down-after-milliseconds MyMaster6380 2000
sentinel failover-timeout MyMaster6380 6000
sentinel config-epoch MyMaster6380 1

sentinel monitor MyMaster6381 192.168.100.135 6381 2
sentinel down-after-milliseconds MyMaster6381 2000
sentinel failover-timeout MyMaster6381 6000
sentinel config-epoch MyMaster6381 1

sentinel monitor MyMaster6382 192.168.100.135 6382 2
sentinel down-after-milliseconds MyMaster6382 2000
sentinel failover-timeout MyMaster6382 6000
sentinel config-epoch MyMaster6382 1

4. Add to the boot sequence

[root@web-nb-136 redis]# cat /etc/rc.local

# Redis 2.8.6
redis-server /etc/redis/redis_6379.conf
redis-server /etc/redis/redis_6380.conf
redis-server /etc/redis/redis_6381.conf
redis-server /etc/redis/redis_6382.conf

# Redis-sentinel

redis-sentinel /etc/redis/sentinel.conf

# Keepalived
#/etc/init.d/keepalived start

/bin/bash /etc/keepalived/scripts/keepalived_start.sh

Steps on 128:

128 only takes part in electing the leader sentinel when a redis master failure has to be handled, i.e. it is there to make up the numbers; it just needs sentinel configured and started (two sentinels are already configured on 135 and 136).

A sentinel needs the support of a majority of the sentinels in the deployment before it can initiate an automatic failover, and it reserves a configuration epoch for it (a configuration epoch is the version number of a new master configuration).

In other words, when only a minority of the sentinel processes are running, sentinel cannot perform an automatic failover.
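
A quick way to confirm that this third sentinel has joined the other two (a sketch, run on 128 once all three sentinels are started):

redis-cli -p 26579 info sentinel
# each masterN line should show sentinels=3; with only a minority reachable no failover will be initiated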

Since redis-2.8.6 already bundles sentinel, redis has to be installed first and sentinel configured afterwards.

1. Install redis

Same installation procedure as on 135.

2. Configure sentinel

[root@WEB-NB-128 ~]# cat /etc/redis/sentinel.conf

port 26579

daemonize yes

logfile "/var/log/redis/sentinel.log"

sentinel monitor MyMaster6379 192.168.100.135 6379 2
sentinel down-after-milliseconds MyMaster6379 2000
sentinel failover-timeout MyMaster6379 6000
sentinel config-epoch MyMaster6379 1

sentinel monitor MyMaster6380 192.168.100.135 6380 2
sentinel down-after-milliseconds MyMaster6380 2000
sentinel failover-timeout MyMaster6380 6000
sentinel config-epoch MyMaster6380 1

sentinel monitor MyMaster6381 192.168.100.135 6381 2
sentinel down-after-milliseconds MyMaster6381 2000
sentinel failover-timeout MyMaster6381 6000
sentinel config-epoch MyMaster6381 1

sentinel monitor MyMaster6382 192.168.100.135 6382 2
sentinel down-after-milliseconds MyMaster6382 2000
sentinel failover-timeout MyMaster6382 6000
sentinel config-epoch MyMaster6382 1

3. Add to the boot sequence

[root@WEB-NB-128 ~]# cat /etc/rc.local

# Redis-sentinel

redis-sentinel /etc/redis/sentinel.conf

A few corner cases worth noting:

1. When the system is first deployed, the keepalived services on the Redis master and slave have to be started in order: the machine that starts keepalived first must be the current Redis master.

2. For later maintenance, consider an unexpected power failure in the machine room: when power is restored, the keepalived start-up order on master and slave still has to be guaranteed.

Approach: a custom keepalived boot script (keepalived_start.sh above) performs the necessary checks; with it in place this problem is solved in this test setup.

3. After the master fails and the automatic failover has completed, do not restart the failed old master right away. Wait until sentinel has stabilised, that is, until the new master has fully taken over reads and writes, and only then bring the old master back; it will then automatically become a slave of the new master and request a resync from it.
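
After bringing the old master back, the roles can be confirmed; a sketch (assuming 135 was the node that failed and has just been restarted):

redis-cli -h 192.168.100.140 -p 6379 info replication | grep role                  # the VIP holder should report role:master
redis-cli -h 192.168.100.135 -p 6379 info replication | grep -E 'role|master_host' # the restarted node should report role:slave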

4. A production deployment also involves firewall configuration; covering the following points is enough:

① which source IPs (subnets) may reach the local redis instance ports;

② which source IPs (subnets) may reach the local sentinel port;

③ the VRRP protocol must be allowed through.

Example:

-A INPUT -p vrrp -j ACCEPT
-A INPUT -s 192.168.100.0/24 -p tcp -m tcp --dport 6379 -j ACCEPT
-A INPUT -s 192.168.100.0/24 -p tcp -m tcp --dport 6380 -j ACCEPT
-A INPUT -s 192.168.100.0/24 -p tcp -m tcp --dport 6381 -j ACCEPT
-A INPUT -s 192.168.100.0/24 -p tcp -m tcp --dport 6382 -j ACCEPT
-A INPUT -s 192.168.100.0/24 -p tcp -m tcp --dport 26379 -j ACCEPT
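
Note that in this document the sentinel port differs per host (26379 on 135, 26479 on 136, 26579 on 128), so the last rule has to use the local sentinel port on each machine; on 136, for example:

-A INPUT -s 192.168.100.0/24 -p tcp -m tcp --dport 26479 -j ACCEPT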
