Sentinel is the tool Redis ships for monitoring the state of the Master in a Redis master/slave setup; it has been bundled with Redis since version 2.6, with the rewritten Sentinel 2 shipping from Redis 2.8 onward.
I. What Sentinel does:
1) Master status detection.
2) If the Master fails, perform a Master-Slave switchover: one of the Slaves is promoted to Master and the old Master becomes a Slave.
3) After the switchover, the contents of master_redis.conf, slave_redis.conf and sentinel.conf all change: the old master's redis.conf gains an extra slaveof line, the new master's slaveof line is removed, and the monitoring target in sentinel.conf switches over accordingly.
II. How Sentinel works:
1) Each Sentinel sends a PING command once per second to every Master, Slave and other Sentinel instance it knows about.
2) If an instance takes longer than the value of the down-after-milliseconds option to give a valid reply to PING, that instance is marked by the Sentinel as subjectively down (SDOWN).
3) If a Master is marked as subjectively down, every Sentinel monitoring that Master confirms, once per second, that the Master really has entered the SDOWN state.
4) When enough Sentinels (at least the quorum configured in the config file; see the sketch after this list) confirm within the specified time window that the Master is indeed down, the Master is marked as objectively down (ODOWN).
5) Under normal conditions, each Sentinel sends an INFO command to every Master and Slave it knows about once every 10 seconds.
6) When a Master is marked as objectively down, the rate at which Sentinel sends INFO to all Slaves of the downed Master increases from once every 10 seconds to once per second.
7) If not enough Sentinels agree that the Master is down, its objectively-down state is removed.
If the Master starts returning valid replies to the Sentinel's PING commands again, its subjectively-down state is removed.
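The tunables referenced above (the PING timeout and the quorum) all live in sentinel.conf. As a rough sketch only, not the exact file used later in this article, a minimal configuration looks like this:
port 26379
# master to watch: name, IP, port, and the quorum (how many Sentinels must agree on SDOWN before ODOWN is declared)
sentinel monitor mymaster 192.168.1.11 6379 2
# mark an instance as SDOWN after 5 seconds without a valid PING reply
sentinel down-after-milliseconds mymaster 5000
# give up on a failover attempt after 60 seconds
sentinel failover-timeout mymaster 60000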
III. Subjectively down vs. objectively down
Subjectively Down (SDOWN): the down judgement that a single Sentinel instance makes about a given Redis server.
Objectively Down (ODOWN): after several Sentinel instances have each made an SDOWN judgement about a Master server and have exchanged views via the SENTINEL is-master-down-by-addr command, they conclude that the Master server is down and a failover is started.
SDOWN applies to both Masters and Slaves. As soon as one Sentinel sees that a Master has entered ODOWN, that Sentinel may be elected by the other Sentinels to carry out the automatic failover of the downed master.
ODOWN applies only to Masters. For Slave instances, Sentinel does not need to negotiate with other Sentinels before judging them as down, so a Slave never reaches the ODOWN state.
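These states can be watched from any Sentinel with redis-cli. The commands below are standard Sentinel subcommands; the exact field values obviously depend on your deployment:
redis-cli -p 26379 sentinel master mymaster      # the "flags" field shows s_down / o_down while the master is considered down
redis-cli -p 26379 sentinel sentinels mymaster   # the other Sentinels this instance has discovered for that master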
Architecture:
station11:192.168.1.11 redis master 6379 sentinel 26379
station12:192.168.1.12 redis slave1 6379 sentinel 26479
station13:192.168.1.13 redis slave2 6379 sentinel 26579
Environment: CentOS 6.8, epel-6-ali.repo, ansible
1. Install Redis on all three machines with ansible
[[email protected] ~]# ansible all -m command -a "yum -y install gcc gcc-c++ tcl"
[[email protected] ~]# wget http://download.redis.io/releases/redis-3.2.6.tar.gz
[[email protected] ~]# ansible all -m copy -a 'src=/root/redis-3.2.6.tar.gz dest=/root/'
[[email protected] ~]# ansible all -m command -a 'tar -zxvf redis-3.2.6.tar.gz -C /usr/local'
[[email protected] ~]# ansible all -m file -a "src=/usr/local/redis-3.2.6 dest=/usr/local/redis state=link"
[[email protected] ~]# vim make.sh
#!/bin/bash
cd /usr/local/redis
make && make install
[[email protected] ~]# chmod +x make.sh
[[email protected] ~]# ansible all -m copy -a "src=/root/make.sh dest=/root/ mode=0755"
[[email protected] ~]# ansible all -m shell -a "/root/make.sh"
[[email protected] ~]# ansible all -m command -a "mkdir /etc/redis"
[[email protected] ~]# ansible all -m command -a "mkdir -pv /data/redis/6379"
[[email protected] ~]# ansible all -m copy -a "src=/usr/local/redis/redis.conf dest=/etc/redis/"
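Before moving on, it is worth a quick sanity check that the build succeeded and the directories exist on all three nodes. This is a suggested verification, not part of the original procedure:
ansible all -m command -a "redis-server --version"
ansible all -m command -a "ls /etc/redis /data/redis/6379"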
2.1 Edit the master's configuration file
[[email protected] ~]# vim /etc/redis/redis.conf
daemonize yes                       # run as a daemon
bind 192.168.1.11                   # the master listens on its own address
logfile "/var/log/redis_6379.log"   # log file
dir /data/redis/6379                # directory for the RDB dump
masterauth redhat                   # password used to authenticate against a master (needed once roles swap)
requirepass redhat                  # clients must authenticate with this password
[[email protected] ~]# redis-server /etc/redis/redis.conf&
[[email protected] ~]# ss -nutlp | grep redis
tcp LISTEN 0 128 192.168.1.11:6379 *:* users:(("redis-server",6447,4))
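Since requirepass is set on the master, a quick way to confirm it is up and accepting authenticated clients (a suggested check, not part of the original run):
redis-cli -h 192.168.1.11 -p 6379 -a redhat ping    # should answer PONG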
2.2 Provide the Redis configuration file for slave1
[[email protected] ~]# vim /etc/redis/redis.conf
bind 192.168.1.12             # bind slave1's address
logfile "/var/log/redis_6379.log"
dir /data/redis/6379
slaveof 192.168.1.11 6379     # the key directive for replication: points at the master's IP and Redis port
slave-priority 95             # slave1's priority (a lower value makes it the preferred candidate during failover)
[[email protected] ~]# nohup redis-server /etc/redis/redis.conf&    # run in the background
[1] 2572
[[email protected] ~]# tail -f /var/log/redis_6379.log    # master's log
6447:M 22 Jan 23:49:34.809 * Slave 192.168.1.12:6379 asks for synchronization
6447:M 22 Jan 23:49:34.809 * Full resync requested by slave 192.168.1.12:6379
[[email protected] ~]# tail -f /var/log/redis_6379.log    # slave's log
7655:S 22 Jan 23:49:34.555 * Connecting to MASTER 192.168.1.11:6379
7655:S 22 Jan 23:49:34.555 * MASTER <-> SLAVE sync started
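Replication can also be confirmed from the slave's side. A suggested check (output abridged to the fields that matter):
redis-cli -h 192.168.1.12 -p 6379 info replication
# role:slave
# master_host:192.168.1.11
# master_link_status:up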
2.3 Provide the Redis configuration file for slave2
[[email protected] ~]# scp /etc/redis/redis.conf 192.168.1.13:/etc/redis/
[[email protected] ~]# vim /etc/redis/redis.conf
bind 192.168.1.13
slaveof 192.168.1.11 6379
slave-priority 97
[[email protected] ~]# nohup redis-server /etc/redis/redis.conf&
[1] 5659
[[email protected] ~]# tail -f /var/log/redis_6379.log
5659:S 23 Jan 00:00:12.656 * Connecting to MASTER 192.168.1.11:6379
5659:S 23 Jan 00:00:12.656 * MASTER <-> SLAVE sync started
2.4 Verify that master-slave replication is working
[[email protected] ~]# redis-cli -h 192.168.1.11 -p 6379
192.168.1.11:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.1.12,port=6379,state=online,offset=1023,lag=1
slave1:ip=192.168.1.13,port=6379,state=online,offset=1023,lag=1
master_repl_offset:1023
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1022
192.168.1.11:6379> set 1 1000
OK
192.168.1.11:6379> keys *
1) "1"
192.168.1.11:6379> get 1
"1000"
[[email protected] ~]# redis-cli -h 192.168.1.12 -p 6379
192.168.1.12:6379> keys *
1) "1"
192.168.1.12:6379> get 1
"1000"
[[email protected] ~]# redis-cli -h 192.168.1.13 -p 6379
192.168.1.13:6379> get 1
"1000"
192.168.1.13:6379> keys *
1) "1"
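As an extra check on the roles (again a suggestion, not part of the original run): slaves are read-only by default (slave-read-only yes), so a write against either slave should be rejected with an error along these lines:
192.168.1.12:6379> set 2 2000
(error) READONLY You can't write against a read only slave.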
3.1 Sentinel configuration file on the master
[[email protected] ~]# cp /usr/local/redis/sentinel.conf /etc/redis/
[[email protected] ~]# vim /etc/redis/sentinel.conf
port 26379
sentinel monitor mymaster 192.168.1.11 6379 1
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
logfile "/var/log/sentinel_master.log"
protected-mode no    # disable protected mode; otherwise this Sentinel only accepts connections from the loopback interface, the other Sentinels cannot reach it, and when station11 dies there will never be more than one vote available to elect a new master
Note: the Sentinel configuration on 192.168.1.12 and 192.168.1.13 is the same as on 192.168.1.11; only the port and the log file differ, as shown below.
[[email protected] ~]# ansible all -m copy -a "src=/etc/redis/sentinel.conf dest=/etc/redis/"
3.2 Sentinel configuration file on slave1
[[email protected] ~]# vim /etc/redis/sentinel.conf
port 26479
logfile "/var/log/sentinel_slave1.log"
3.3 Sentinel configuration file on slave2
[[email protected] ~]# vim /etc/redis/sentinel.conf
port 26579
logfile "/var/log/sentinel_slave2.log"
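A side note on the monitor line: this article uses a quorum of 1, which lets a single Sentinel declare ODOWN, but, as the failover test in section 4.2 shows, the leader election that actually performs the failover still needs a majority of the Sentinels. With three Sentinels, a quorum of 2 is the more common choice, e.g.:
sentinel monitor mymaster 192.168.1.11 6379 2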
3.4 Sentinel logs when the master and slave Sentinels connect
[[email protected] ~]# nohup redis-sentinel /etc/redis/sentinel.conf&
[1] 6640
[[email protected] ~]# tail -f /var/log/sentinel_master.log
6640:X 23 Jan 00:22:34.910 # Sentinel ID is 17c9ee07632d60c4c0aa75a853bbda93966caa22
6640:X 23 Jan 00:22:34.910 # +monitor master mymaster 192.168.1.11 6379 quorum 1
6640:X 23 Jan 00:22:34.910 * +slave slave 192.168.1.12:6379 192.168.1.12 6379 @ mymaster 192.168.1.11 6379
6640:X 23 Jan 00:22:34.912 * +slave slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.11 6379
[[email protected] ~]# nohup redis-sentinel /etc/redis/sentinel.conf&
[2] 7782
[[email protected] ~]# tail -f /var/log/sentinel_slave1.log
7782:X 23 Jan 00:24:36.059 # Sentinel ID is 7c000aa564ed603eeefd031333ebc2c916597ec6
7782:X 23 Jan 00:24:36.059 # +monitor master mymaster 192.168.1.11 6379 quorum 1
7782:X 23 Jan 00:24:36.060 * +slave slave 192.168.1.12:6379 192.168.1.12 6379 @ mymaster 192.168.1.11 6379
7782:X 23 Jan 00:24:36.061 * +slave slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.11 6379
7782:X 23 Jan 00:24:36.516 * +sentinel sentinel 17c9ee07632d60c4c0aa75a853bbda93966caa22 192.168.1.11 26379 @ mymaster 192.168.1.11 6379
7782:X 23 Jan 00:24:41.597 # +sdown sentinel 17c9ee07632d60c4c0aa75a853bbda93966caa22 192.168.1.11 26379 @ mymaster 192.168.1.11 6379
[[email protected] ~]# nohup redis-sentinel /etc/redis/sentinel.conf&
[2] 5763
[[email protected] ~]# tail -f /var/log/sentinel_slave2.log
5763:X 23 Jan 00:26:14.286 # Sentinel ID is a78518e4955d3602c61e212be0cbdc378daa3cdc
5763:X 23 Jan 00:26:14.286 # +monitor master mymaster 192.168.1.11 6379 quorum 1
5763:X 23 Jan 00:26:14.287 * +slave slave 192.168.1.12:6379 192.168.1.12 6379 @ mymaster 192.168.1.11 6379
5763:X 23 Jan 00:26:14.288 * +slave slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.11 6379
5763:X 23 Jan 00:26:14.708 * +sentinel sentinel 7c000aa564ed603eeefd031333ebc2c916597ec6 192.168.1.12 26479 @ mymaster 192.168.1.11 6379
5763:X 23 Jan 00:26:15.779 * +sentinel sentinel 17c9ee07632d60c4c0aa75a853bbda93966caa22 192.168.1.11 26379 @ mymaster 192.168.1.11 6379
5763:X 23 Jan 00:26:19.716 # +sdown sentinel 7c000aa564ed603eeefd031333ebc2c916597ec6 192.168.1.12 26479 @ mymaster 192.168.1.11 6379
5763:X 23 Jan 00:26:20.797 # +sdown sentinel 17c9ee07632d60c4c0aa75a853bbda93966caa22 192.168.1.11 26379 @ mymaster 192.168.1.11 6379
3.5 Inspecting Sentinel from a client
[[email protected] ~]# redis-cli -p 26379
127.0.0.1:26379> info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.1.11:6379,slaves=2,sentinels=3
127.0.0.1:26379> sentinel masters
1)  1) "name"
    2) "mymaster"
    3) "ip"
    4) "192.168.1.11"
    5) "port"
    6) "6379"
    7) "runid"
    8) "e0ae553828b47db69e0d75ef8c20b30f1ed96c3c"
    9) "flags"
   10) "master"
   .....................................................
127.0.0.1:26379> sentinel slaves mymaster
1)  1) "name"
    2) "192.168.1.12:6379"
    3) "ip"
    4) "192.168.1.12"
    5) "port"
    6) "6379"
    7) "runid"
    8) "486ebcb9ad89bf9c6889fd98b0d669c0addb9d10"
    9) "flags"
   10) "slave"
   ....................................................
   31) "master-link-status"
   32) "ok"
   33) "master-host"
   34) "192.168.1.11"
   35) "master-port"
   36) "6379"
   37) "slave-priority"
   38) "95"
   39) "slave-repl-offset"
   40) "75763"
2)  1) "name"
    2) "192.168.1.13:6379"
    3) "ip"
    4) "192.168.1.13"
    5) "port"
    6) "6379"
    7) "runid"
    8) "30fdcff948a6e249a87da41ef42f41897eaf4104"
    9) "flags"
   10) "slave"
   .................................................
   31) "master-link-status"
   32) "ok"
   33) "master-host"
   34) "192.168.1.11"
   35) "master-port"
   36) "6379"
   37) "slave-priority"
   38) "97"
   39) "slave-repl-offset"
   40) "75763"
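Clients that use Sentinel for service discovery usually just ask it for the current master address, and the same command is a handy one-liner to see where the master lives after a failover. Any of the three Sentinel ports works; at this point in the walkthrough it should still return the original master:
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "192.168.1.11"
2) "6379"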
4. Failover testing:
4.1 Simulate a slave failure in the Redis HA cluster
Stop one slave and watch the cluster state; here we stop slave2's redis-server:
[[email protected] ~]# killall redis-server
First check the logs of the three Sentinels; each of them gets a new entry like this:
[[email protected] ~]# tail -f /var/log/sentinel_master.log
6640:X 23 Jan 00:33:05.032 # +sdown slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.11 6379
You can see that 192.168.1.13 has gone down.
Check the replication info on the master:
192.168.1.11:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.1.12,port=6379,state=online,offset=131310,lag=0
master_repl_offset:131310
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:131309
As you can see, 192.168.1.13 has been dropped by the master.
Now start slave2's redis-server again and keep watching:
[[email protected] ~]# nohup redis-server /etc/redis/redis.conf&
The three Sentinels' logs show the following:
[[email protected] ~]# tail -f /var/log/sentinel_master.log
6640:X 23 Jan 00:36:10.845 * +reboot slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.11 6379
6640:X 23 Jan 00:36:10.936 # -sdown slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.11 6379
You can see that 192.168.1.13 has been restarted.
Checking the replication info on the master again shows that host .13 has rejoined the cluster:
192.168.1.11:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.1.12,port=6379,state=online,offset=162401,lag=1
slave1:ip=192.168.1.13,port=6379,state=online,offset=162401,lag=1
master_repl_offset:162401
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:162400
4.2 Simulate a master failure in the Redis HA cluster
Note: stop the master on port 6379, pretending the master went down because of some external problem (simply kill the redis-server process):
[[email protected] ~]# killall redis-server
Watch the Sentinel logs on all three Redis hosts:
[[email protected] ~]# tail -f /var/log/sentinel_master.log
6640:X 23 Jan 00:39:07.305 # +sdown master mymaster 192.168.1.11 6379
6640:X 23 Jan 00:39:07.305 # +odown master mymaster 192.168.1.11 6379 #quorum 1/1
6640:X 23 Jan 00:39:07.305 # +new-epoch 1
6640:X 23 Jan 00:39:07.305 # +try-failover master mymaster 192.168.1.11 6379
6640:X 23 Jan 00:39:07.308 # +vote-for-leader 17c9ee07632d60c4c0aa75a853bbda93966caa22 1
6640:X 23 Jan 00:39:17.380 # -failover-abort-not-elected master mymaster 192.168.1.11 6379
6640:X 23 Jan 00:39:17.438 # Next failover delay: I will not start a failover before Mon Jan 23 00:41:07 2017
6640:X 23 Jan 00:41:07.432 # +new-epoch 2
6640:X 23 Jan 00:41:07.432 # +try-failover master mymaster 192.168.1.11 6379
6640:X 23 Jan 00:41:07.434 # +vote-for-leader 17c9ee07632d60c4c0aa75a853bbda93966caa22 2
6640:X 23 Jan 00:41:17.505 # -failover-abort-not-elected master mymaster 192.168.1.11 6379
6640:X 23 Jan 00:41:17.592 # Next failover delay: I will not start a failover before Mon Jan 23 00:43:07 2017
This Sentinel keeps trying to fail over the master and keeps failing: it only ever gets its own single vote, which is not enough to be elected leader and promote a new master.
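The likely root cause is the one hinted at by the "+sdown sentinel" lines in section 3.4: this Sentinel cannot reach the other two, so it never collects a majority of votes. A check worth running on each Sentinel before relying on automatic failover (a suggested step, not part of the original run; the ckquorum subcommand should be available in the Sentinel shipped with Redis 3.x):
redis-cli -p 26379 sentinel ckquorum mymaster
# reports whether this Sentinel can currently reach enough peers for quorum and for failover authorization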
[[email protected] ~]# tail -f sentinel_master.log
6732:X 23 Jan 01:24:18.921 # +new-epoch 18
6732:X 23 Jan 01:24:18.933 # +vote-for-leader a78518e4955d3602c61e212be0cbdc378daa3cdc 18
6732:X 23 Jan 01:24:18.946 # +sdown master mymaster 192.168.1.11 6379
6732:X 23 Jan 01:24:18.946 # +odown master mymaster 192.168.1.11 6379 #quorum 1/1
6732:X 23 Jan 01:24:18.946 # Next failover delay: I will not start a failover before Mon Jan 23 01:26:19 2017
6732:X 23 Jan 01:24:19.240 # +config-update-from sentinel a78518e4955d3602c61e212be0cbdc378daa3cdc 192.168.1.13 26579 @ mymaster 192.168.1.11 6379
6732:X 23 Jan 01:24:19.240 # +switch-master mymaster 192.168.1.11 6379 192.168.1.12 6379    # the master switches from .11 to .12
6732:X 23 Jan 01:24:19.241 * +slave slave 192.168.1.13:6379 192.168.1.13 6379 @ mymaster 192.168.1.12 6379
6732:X 23 Jan 01:24:19.241 * +slave slave 192.168.1.11:6379 192.168.1.11 6379 @ mymaster 192.168.1.12 6379
6732:X 23 Jan 01:24:24.258 # +sdown slave 192.168.1.11:6379 192.168.1.11 6379 @ mymaster 192.168.1.12 6379
[[email protected] ~]# tail -f /var/log/redis_slave1.log    # slave1 switches into master mode
7846:M 23 Jan 01:24:18.857 * MASTER MODE enabled (user request from 'id=7 addr=192.168.1.13:44615 fd=10 name=sentinel-a78518e4-cmd age=183 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=0 qbuf-free=32768 obl=36 oll=0 omem=0 events=r cmd=exec')
7846:M 23 Jan 01:24:18.858 # CONFIG REWRITE executed with success.
7846:M 23 Jan 01:24:19.060 * Slave 192.168.1.13:6379 asks for synchronization
7846:M 23 Jan 01:24:19.061 * Full resync requested by slave 192.168.1.13:6379
7846:M 23 Jan 01:24:19.061 * Starting BGSAVE for SYNC with target: disk
7846:M 23 Jan 01:24:19.061 * Background saving started by pid 7852
7852:C 23 Jan 01:24:19.091 * DB saved on disk
7852:C 23 Jan 01:24:19.091 * RDB: 8 MB of memory used by copy-on-write
7846:M 23 Jan 01:24:19.178 * Background saving terminated with success
7846:M 23 Jan 01:24:19.178 * Synchronization with slave 192.168.1.13:6379 succeeded
[[email protected] ~]# redis-cli -h 192.168.1.12 -p 6379
192.168.1.12:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.1.13,port=6379,state=online,offset=65036,lag=0
master_repl_offset:65036
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:65035
[[email protected] ~]# redis-cli -h 192.168.1.12 -p 26479
192.168.1.12:26479> info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.1.12:6379,slaves=2,sentinels=3
192.168.1.12:26479> sentinel masters
1)  1) "name"
    2) "mymaster"
    3) "ip"
    4) "192.168.1.12"
    5) "port"
    6) "6379"
    7) "runid"
    8) "0a4b1b8c6b3595dde9446697a025fca0f0530ac7"
    9) "flags"
   10) "master"
   ........................................................
   31) "num-slaves"
   32) "2"
   33) "num-other-sentinels"
   34) "2"
   35) "quorum"
   36) "1"
   37) "failover-timeout"
   38) "60000"
   39) "parallel-syncs"
   40) "1"
192.168.1.12:26479> sentinel slaves mymaster
1)  1) "name"
    2) "192.168.1.13:6379"
    3) "ip"
    4) "192.168.1.13"
    5) "port"
    6) "6379"
    7) "runid"
    8) "eb29d1741121071accc633db186f63b81fc33ffc"
    9) "flags"
   10) "slave"
   ........................................................
   29) "master-link-down-time"
   30) "0"
   31) "master-link-status"
   32) "ok"
   33) "master-host"
   34) "192.168.1.12"
   35) "master-port"
   36) "6379"
   37) "slave-priority"
   38) "97"
   39) "slave-repl-offset"
   40) "112029"
2)  1) "name"
    2) "192.168.1.11:6379"
    3) "ip"
    4) "192.168.1.11"
    5) "port"
    6) "6379"
    7) "runid"
    8) ""
    9) "flags"
   10) "s_down,slave,disconnected"
   ..............................................
   27) "role-reported"
   28) "slave"
   29) "role-reported-time"
   30) "546434"
   31) "master-link-down-time"
   32) "0"
   33) "master-link-status"
   34) "err"
   35) "master-host"
   36) "?"
   37) "master-port"
   38) "0"
   39) "slave-priority"
   40) "100"
   41) "slave-repl-offset"
   42) "0"
If 192.168.1.11's redis-server comes back at this point, it simply joins the pool of slaves:
[[email protected] ~]# nohup redis-server /etc/redis/redis.conf&
[[email protected] ~]# tail -f sentinel_master.log
6732:X 23 Jan 01:35:31.759 # -sdown slave 192.168.1.11:6379 192.168.1.11 6379 @ mymaster 192.168.1.12 6379
6732:X 23 Jan 01:35:41.740 * +convert-to-slave slave 192.168.1.11:6379 192.168.1.11 6379 @ mymaster 192.168.1.12 6379
[[email protected] ~]# tail -f redis_6379.log
7846:M 23 Jan 01:35:42.083 * Slave 192.168.1.11:6379 asks for synchronization
7846:M 23 Jan 01:35:42.083 * Full resync requested by slave 192.168.1.11:6379
7846:M 23 Jan 01:35:42.083 * Starting BGSAVE for SYNC with target: disk
7846:M 23 Jan 01:35:42.084 * Background saving started by pid 7859
7859:C 23 Jan 01:35:42.095 * DB saved on disk
7859:C 23 Jan 01:35:42.095 * RDB: 6 MB of memory used by copy-on-write
7846:M 23 Jan 01:35:42.177 * Background saving terminated with success
7846:M 23 Jan 01:35:42.178 * Synchronization with slave 192.168.1.11:6379 succeeded
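At this point the old master should report itself as a slave of 192.168.1.12. A quick way to confirm this (a suggested check; remember that 192.168.1.11 still requires the redhat password):
redis-cli -h 192.168.1.11 -p 6379 -a redhat info replication
# role:slave
# master_host:192.168.1.12
# master_link_status:up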
5. Changes to the redis and sentinel configuration files after the failover
5.1 First, look at redis.conf on the three hosts
Because 1.11's redis-server was taken down and later restarted, the Sentinel machinery rewrote its redis.conf, as follows:
[[email protected] ~]# vim /etc/redis/redis.conf
# Generated by CONFIG REWRITE
slaveof 192.168.1.12 6379
The rewrite mechanism appended the two lines above to the end of 1.11's redis.conf, pointing it at the new master, 1.12.
Looking at 1.12's redis.conf, you will find that the original slaveof 192.168.1.11 6379 line is gone.
1.13's redis.conf now reads:
slaveof 192.168.1.12 6379
It has changed from pointing at 192.168.1.11 to pointing at the new master, 192.168.1.12.
6. The sentinel.conf files on the three hosts
sentinel.conf on 1.11:
#cat sentinel.conf
port 26379
sentinel monitor mymaster 192.168.1.12 6379 1    # changed from the original 192.168.1.11 to 192.168.1.12
# Generated by CONFIG REWRITE
sentinel known-slave mymaster 192.168.1.11 6379
sentinel known-slave mymaster 192.168.1.13 6379
sentinel known-sentinel mymaster 192.168.1.12 26479 7c000aa564ed603eeefd031333ebc2c916597ec6
sentinel known-sentinel mymaster 192.168.1.13 26579 a78518e4955d3602c61e212be0cbdc378daa3cdc
(the trailing string on each known-sentinel line is the unique ID that each Sentinel generates for itself at startup)
sentinel.conf on 1.12:
#cat sentinel.conf
port 26479
sentinel monitor mymaster 192.168.1.12 6379 1
# Generated by CONFIG REWRITE
sentinel known-slave mymaster 192.168.1.13 6379
sentinel known-slave mymaster 192.168.1.11 6379
sentinel known-sentinel mymaster 192.168.1.13 26579 a78518e4955d3602c61e212be0cbdc378daa3cdc
sentinel known-sentinel mymaster 192.168.1.11 26379 17c9ee07632d60c4c0aa75a853bbda93966caa22
sentinel.conf on 1.13:
#cat sentinel.conf
port 26579
sentinel monitor mymaster 192.168.1.12 6379 1
# Generated by CONFIG REWRITE
sentinel known-slave mymaster 192.168.1.11 6379
sentinel known-slave mymaster 192.168.1.13 6379
sentinel known-sentinel mymaster 192.168.1.12 26479 7c000aa564ed603eeefd031333ebc2c916597ec6
sentinel known-sentinel mymaster 192.168.1.11 26379 17c9ee07632d60c4c0aa75a853bbda93966caa22