Redis Cluster cross-region data migration, scale-out, and scale-in

Because the project's servers are spread across Chongqing, Shanghai, Taipei, and Houston, we need cross-region disaster recovery. At the moment MySQL, Redis Cluster, and Elasticsearch all live in Chongqing, so if Chongqing loses power the whole application goes down.

As a first step we are looking at disaster recovery between Chongqing and Shanghai. A rough test shows about 13 MB/s between the Chongqing servers, i.e. roughly a 100 Mbps link, while Chongqing to Shanghai only reaches about 1.2 MB/s, roughly a 10 Mbps link.

The first scheme is the simple one: MySQL master-master replication between Chongqing and Shanghai; all Redis Cluster master nodes on the Chongqing servers with their slaves on the Shanghai servers; and the Elasticsearch primary shards in Chongqing with all replica shards in Shanghai.
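For the Redis part, one way to pin the roles (not the approach taken in the walkthrough below, just a hedged sketch using node IDs that appear in the cluster nodes output further down) is a manual failover from a Chongqing replica, or re-pointing a replica with CLUSTER REPLICATE:

# Promote the Chongqing replica (15.99.72.164:7002) of the Shanghai master 15.15.181.147:7005
bin/redis-cli -c -h 15.99.72.164 -p 7002 cluster failover

# Pin a Shanghai node as the replica of a given Chongqing master (here 15.99.72.165:7003);
# 15.15.181.147:7006 already replicates this master, the line only illustrates the syntax
bin/redis-cli -c -h 15.15.181.147 -p 7006 cluster replicate f452a66121e1e9c02b0ed28cafe03aaddb327c36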

The following is the procedure for Redis Cluster scale-out and data migration.

The trial-run environment has three servers in total: 15.99.72.164 and 15.99.72.165 in Chongqing, and 15.15.181.147 in Shanghai.

[root@localhost 7005]# bin/redis-cli -c -h 15.15.181.147 -p 7006
15.15.181.147:7006> cluster nodes
c08e8c7faeede2220e621b2409061210e0b107ad 15.99.72.164:7001@17001 slave 421123bf7fb3a4061e34cab830530d87b21148ee 0 1577089232000 7 connected
733609c2fbecdd41f454363698514e2f72ee0208 15.15.181.147:7006@17006 myself,slave f452a66121e1e9c02b0ed28cafe03aaddb327c36 0 1577089230000 6 connected
31670db07d1bc7620a8f8254b26f2af00b04d1fd 15.99.72.164:7002@17002 slave 763a88d5328ab0ce07a312e726d78bb2141b5813 0 1577089234988 5 connected
f452a66121e1e9c02b0ed28cafe03aaddb327c36 15.99.72.165:7003@17003 master - 0 1577089235796 3 connected 5461-10922
421123bf7fb3a4061e34cab830530d87b21148ee 15.99.72.165:7004@17004 master - 0 1577089234000 7 connected 0-5460
763a88d5328ab0ce07a312e726d78bb2141b5813 15.15.181.147:7005@17005 master - 0 1577089232733 5 connected 10923-16383

[root@localhost src]# /root/tools/redis-4.0.11/src/redis-trib.rb info 15.99.72.165:7003
15.99.72.165:7003 (f452a661...) -> 53254 keys | 5462 slots | 1 slaves.
15.15.181.147:7005 (763a88d5...) -> 53174 keys | 5461 slots | 1 slaves.
15.99.72.165:7004 (421123bf...) -> 53050 keys | 5461 slots | 1 slaves.
[OK] 159478 keys in 3 masters.
9.73 keys per slot on average.

The original deployment was three masters and three slaves. Now I want to install a new master node on port 7007 on 165, add it to the existing cluster, and then migrate all the slots of the master 15.15.181.147:7005 over to the 7007 node on 165.

1. On 165, create the directory:  mkdir -p /usr/local/redis-cluster/7007

Since other nodes are already installed on 165, the binaries can simply be copied:  cd /usr/local/redis-ii/

cp -r bin /usr/local/redis-cluster/7007

Then go into the previously installed 7004 node:  cd /usr/local/redis-cluster/7004

cp redis.conf ../7007/

Then adjust the relevant settings in 7007's redis.conf:

bind 15.99.72.165              # listen on the node's own IP
protected-mode no              # allow remote connections (no password is configured)
port 7007                      # client port of the new node
daemonize yes                  # run in the background
cluster-enabled yes            # start in cluster mode
cluster-node-timeout 15000     # node timeout in milliseconds
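Since the file was copied from the 7004 node, the per-node settings should also be checked so the two instances do not collide; a sketch of what I would verify, with paths assumed from the directory layout above:

cluster-config-file nodes-7007.conf
pidfile /var/run/redis_7007.pid
logfile /usr/local/redis-cluster/7007/redis.log
dir /usr/local/redis-cluster/7007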

After saving the config, start the 7007 node:  bin/redis-server ./redis.conf
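Before adding it to the cluster, a quick sanity check that the new instance is actually up (optional):

bin/redis-cli -h 15.99.72.165 -p 7007 ping
ps -ef | grep 7007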

Then add the 165:7007 node to the existing cluster:

[root@localhost tools]# /root/tools/redis-4.0.11/src/redis-trib.rb add-node 15.99.72.165:7007 15.99.72.165:7003
>>> Adding node 15.99.72.165:7007 to cluster 15.99.72.165:7003
>>> Performing Cluster Check (using node 15.99.72.165:7003)
M: f452a66121e1e9c02b0ed28cafe03aaddb327c36 15.99.72.165:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 763a88d5328ab0ce07a312e726d78bb2141b5813 15.15.181.147:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: 421123bf7fb3a4061e34cab830530d87b21148ee 15.99.72.165:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 733609c2fbecdd41f454363698514e2f72ee0208 15.15.181.147:7006
slots: (0 slots) slave
replicates f452a66121e1e9c02b0ed28cafe03aaddb327c36
S: 31670db07d1bc7620a8f8254b26f2af00b04d1fd 15.99.72.164:7002
slots: (0 slots) slave
replicates 763a88d5328ab0ce07a312e726d78bb2141b5813
S: c08e8c7faeede2220e621b2409061210e0b107ad 15.99.72.164:7001
slots: (0 slots) slave
replicates 421123bf7fb3a4061e34cab830530d87b21148ee
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 15.99.72.165:7007 to make it join the cluster.
[OK] New node added correctly.
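As an aside, redis-trib.rb only ships with Redis 4.x and earlier; from Redis 5.0 its subcommands were folded into redis-cli, so on a newer install the equivalent commands would be roughly:

redis-cli --cluster add-node 15.99.72.165:7007 15.99.72.165:7003
redis-cli --cluster info 15.99.72.165:7003
redis-cli --cluster reshard 15.99.72.165:7007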

Running cluster nodes again shows that 7007 has joined the Redis Cluster, but its slot count is still 0.

15.15.181.147:7006> cluster nodes
8e134e67e4e83a613b90f67cc6e6b8d71c208886 15.99.72.165:7007@17007 master - 0 1577095695760 0 connected
c08e8c7faeede2220e621b2409061210e0b107ad 15.99.72.164:7001@17001 slave 421123bf7fb3a4061e34cab830530d87b21148ee 0 1577095693561 7 connected
733609c2fbecdd41f454363698514e2f72ee0208 15.15.181.147:7006@17006 myself,slave f452a66121e1e9c02b0ed28cafe03aaddb327c36 0 1577095691000 6 connected
31670db07d1bc7620a8f8254b26f2af00b04d1fd 15.99.72.164:7002@17002 slave 763a88d5328ab0ce07a312e726d78bb2141b5813 0 1577095695000 5 connected
f452a66121e1e9c02b0ed28cafe03aaddb327c36 15.99.72.165:7003@17003 master - 0 1577095694000 3 connected 5461-10922
421123bf7fb3a4061e34cab830530d87b21148ee 15.99.72.165:7004@17004 master - 0 1577095694763 7 connected 0-5460
763a88d5328ab0ce07a312e726d78bb2141b5813 15.15.181.147:7005@17005 master - 0 1577095691699 5 connected 10923-16383

Next, all the slots of the master 15.15.181.147:7005 need to be migrated to the master 15.99.72.165:7007.

The migration follows the example below. My own run printed too much output to paste here, but apart from the IPs, ports and node IDs it is the same procedure.
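For my cluster, the reshard command and the answers to its prompts would look roughly like this (a sketch only, with node IDs taken from the cluster nodes output above; the full session output is omitted):

/root/tools/redis-4.0.11/src/redis-trib.rb reshard 15.99.72.165:7007
How many slots do you want to move (from 1 to 16384)? 5461
What is the receiving node ID? 8e134e67e4e83a613b90f67cc6e6b8d71c208886   # 15.99.72.165:7007
Source node #1:763a88d5328ab0ce07a312e726d78bb2141b5813                   # 15.15.181.147:7005
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes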

Reassigning slots among the master nodes

Move all 5461 slots of 192.168.1.116:7000 to 192.168.1.117:7000

[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb reshard 192.168.1.117:7000
How many slots do you want to move (from 1 to 16384)? 5461      # how many slots to move
What is the receiving node ID? a6d7dacd679a96fd79b7de552428a63610d620e6   # the node that will receive these slots; here the ID of 192.168.1.117:7000
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:0607089e5bb3192563bd8082ff230b0eb27fbfeb # the node the slots are taken from; here the ID of 192.168.1.116:7000. Entering all would pull the slots from every existing master.
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 192.168.1.116:7000 to 192.168.1.117:7000:
[ERR] Calling MIGRATE: ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)

Fixing the error

[root@localhost redis-cluster]# cp redis-4.0.6/src/redis-trib.rb redis-4.0.6/src/redis-trib.rb.bak

In the redis-trib.rb file, change the original lines
                source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:keys,*keys])
                    source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])
to
                source.r.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,"replace",:keys,*keys])
                    source.r.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])

[root@localhost redis-cluster]# cat redis-4.0.6/src/redis-trib.rb |grep  source.r.call
                source.r.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,"replace",:keys,*keys])
                    source.r.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])

# the reshard still fails after the change
[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb reshard 192.168.1.117:7000
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
[WARNING] Node 192.168.1.117:7000 has slots in importing state (0).
[WARNING] Node 192.168.1.116:7000 has slots in migrating state (0).
[WARNING] The following slots are open: 0
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** Please fix your cluster problems before resharding

# fix
[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-cli -h 192.168.1.117 -c -p 7000
192.168.1.117:7000> cluster setslot 0 stable  # use whatever slot number "The following slots are open:" reported (0 here)
OK
192.168.1.117:7000> exit
[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-cli -h 192.168.1.116 -c -p 7000
192.168.1.116:7000> cluster setslot 0 stable  # use whatever slot number "The following slots are open:" reported (0 here)
OK
192.168.1.116:7000> exit

[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb fix 192.168.1.117:7000
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Reshard again

[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb reshard 192.168.1.117:7000
How many slots do you want to move (from 1 to 16384)? 5461      # how many slots to move
What is the receiving node ID? a6d7dacd679a96fd79b7de552428a63610d620e6   # the node that will receive these slots; here the ID of 192.168.1.117:7000
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:0607089e5bb3192563bd8082ff230b0eb27fbfeb # the node the slots are taken from; here the ID of 192.168.1.116:7000. Entering all would pull the slots from every existing master.
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 5457 from 192.168.1.116:7000 to 192.168.1.117:7000: ..
Moving slot 5458 from 192.168.1.116:7000 to 192.168.1.117:7000: .
Moving slot 5459 from 192.168.1.116:7000 to 192.168.1.117:7000:
Moving slot 5460 from 192.168.1.116:7000 to 192.168.1.117:7000: ..

Check the resharding result

# the slots of 192.168.1.116:7000 have now been moved to 192.168.1.117:7000
[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb info 192.168.1.117:7000
192.168.1.117:7000 (a6d7dacd...) -> 6652 keys | 5461 slots | 2 slaves.
192.168.1.117:7004 (63893e74...) -> 0 keys | 0 slots | 1 slaves.
192.168.1.117:7002 (8540a78c...) -> 0 keys | 0 slots | 1 slaves.
192.168.1.116:7001 (17831f8b...) -> 6665 keys | 5461 slots | 1 slaves.
192.168.1.116:7003 (c433ff1b...) -> 6683 keys | 5462 slots | 1 slaves.
192.168.1.116:7000 (0607089e...) -> 0 keys | 0 slots | 0 slaves.

Move all 5461 slots of 192.168.1.116:7001 to 192.168.1.117:7002

[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb reshard 192.168.1.117:7002
How many slots do you want to move (from 1 to 16384)? 5461
What is the receiving node ID? 8540a78c666cb1e81fb2821d112f3040542af056
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:17831f8bbcd43ac05efc5486ebfcdbb210ce48f0
Source node #2:done
......
    Moving slot 16381 from 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0
    Moving slot 16382 from 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0
    Moving slot 16383 from 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0
Do you want to proceed with the proposed reshard plan (yes/no)? yes
......
Moving slot 16381 from 192.168.1.116:7001 to 192.168.1.117:7002:
Moving slot 16382 from 192.168.1.116:7001 to 192.168.1.117:7002: .
Moving slot 16383 from 192.168.1.116:7001 to 192.168.1.117:7002: ..

Move all 5462 slots of 192.168.1.116:7003 to 192.168.1.117:7004

[root@localhost redis-cluster]# ./redis-4.0.6/src/redis-trib.rb reshard 192.168.1.117:7004
How many slots do you want to move (from 1 to 16384)? 5462
What is the receiving node ID? 63893e74e6f8e2414eba97b094a80ae8b3caeb09
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:c433ff1b448fbcd3234632712643bc68d5213e3b
Source node #2:done
......
    Moving slot 10920 from c433ff1b448fbcd3234632712643bc68d5213e3b
    Moving slot 10921 from c433ff1b448fbcd3234632712643bc68d5213e3b
    Moving slot 10922 from c433ff1b448fbcd3234632712643bc68d5213e3b
Do you want to proceed with the proposed reshard plan (yes/no)? yes
......
Moving slot 10920 from 192.168.1.116:7003 to 192.168.1.117:7004: ..
Moving slot 10921 from 192.168.1.116:7003 to 192.168.1.117:7004: ..
Moving slot 10922 from 192.168.1.116:7003 to 192.168.1.117:7004:

Check the latest slot allocation

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb info 192.168.1.117:7000
192.168.1.117:7004 (63893e74...) -> 6683 keys | 5462 slots | 2 slaves.
192.168.1.116:7003 (c433ff1b...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.117:7002 (8540a78c...) -> 6665 keys | 5461 slots | 2 slaves.
192.168.1.116:7000 (0607089e...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.117:7000 (a6d7dacd...) -> 6652 keys | 5461 slots | 2 slaves.
192.168.1.116:7001 (17831f8b...) -> 0 keys | 0 slots | 0 slaves.

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb check 192.168.1.117:7000
>>> Performing Cluster Check (using node 192.168.1.117:7000)
M: a6d7dacd679a96fd79b7de552428a63610d620e6 192.168.1.117:7000
   slots:0-5460 (5461 slots) master
   2 additional replica(s)
M: 63893e74e6f8e2414eba97b094a80ae8b3caeb09 192.168.1.117:7004
   slots:5461-10922 (5462 slots) master
   2 additional replica(s)
M: 8540a78c666cb1e81fb2821d112f3040542af056 192.168.1.117:7002
   slots:10923-16383 (5461 slots) master
   2 additional replica(s)
S: 1ebeedb98619bc88bf36acbbe4a766f2f74e629f 192.168.1.117:7003
   slots: (0 slots) slave
   replicates 8540a78c666cb1e81fb2821d112f3040542af056
M: 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0 192.168.1.116:7001
   slots: (0 slots) master
   0 additional replica(s)
S: e010d410223a2376d3308a68a724bac27ef8d74f 192.168.1.117:7001
   slots: (0 slots) slave
   replicates a6d7dacd679a96fd79b7de552428a63610d620e6
S: 17ee6bd4c68235d09acf2f4b18ae3fcc649d629c 192.168.1.116:7002
   slots: (0 slots) slave
   replicates 63893e74e6f8e2414eba97b094a80ae8b3caeb09
M: c433ff1b448fbcd3234632712643bc68d5213e3b 192.168.1.116:7003
   slots: (0 slots) master
   0 additional replica(s)
S: bef4dddc01651d64b5bb3e0ac384c0eb120aa537 192.168.1.116:7004
   slots: (0 slots) slave
   replicates a6d7dacd679a96fd79b7de552428a63610d620e6
S: 2579ab004e277ba68197d851d47d0436e0cf203d 192.168.1.117:7005
   slots: (0 slots) slave
   replicates 63893e74e6f8e2414eba97b094a80ae8b3caeb09
S: fb8dc97c90f3edc7f10a385f4b4b2a2b2612ffab 192.168.1.116:7005
   slots: (0 slots) slave
   replicates 8540a78c666cb1e81fb2821d112f3040542af056
M: 0607089e5bb3192563bd8082ff230b0eb27fbfeb 192.168.1.116:7000
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify the data after the migration

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-cli -h 192.168.1.117 -p 7000 -c dbsize
(integer) 6652
[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-cli -h 192.168.1.117 -p 7002 -c dbsize
(integer) 6665
[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-cli -h 192.168.1.117 -p 7004 -c dbsize
(integer) 6683

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-cli -h 192.168.1.117 -p 7000 -c
192.168.1.117:7000> keys *
......
6650) "name7710"
6651) "name16668"
6652) "name12290"
192.168.1.117:7000>
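Note that keys * blocks the node while it runs, so on anything but a test cluster a cursor-based scan is the safer way to sample the migrated keys; for example:

./redis-4.0.6/src/redis-cli -h 192.168.1.117 -p 7000 --scan --pattern 'name*' | head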

After the migration, remove the original nodes from the cluster

Check the addresses and node IDs of the old nodes

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb check 192.168.1.117:7000 | grep 192.168.1.116
M: 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0 192.168.1.116:7001
S: 17ee6bd4c68235d09acf2f4b18ae3fcc649d629c 192.168.1.116:7002
M: c433ff1b448fbcd3234632712643bc68d5213e3b 192.168.1.116:7003
S: bef4dddc01651d64b5bb3e0ac384c0eb120aa537 192.168.1.116:7004
S: fb8dc97c90f3edc7f10a385f4b4b2a2b2612ffab 192.168.1.116:7005
M: 0607089e5bb3192563bd8082ff230b0eb27fbfeb 192.168.1.116:7000

Remove the old slave nodes from the cluster

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb del-node 192.168.1.116:7002 17ee6bd4c68235d09acf2f4b18ae3fcc649d629c
>>> Removing node 17ee6bd4c68235d09acf2f4b18ae3fcc649d629c from cluster 192.168.1.116:7002
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb del-node 192.168.1.116:7004 bef4dddc01651d64b5bb3e0ac384c0eb120aa537
>>> Removing node bef4dddc01651d64b5bb3e0ac384c0eb120aa537 from cluster 192.168.1.116:7004
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb del-node 192.168.1.116:7005 fb8dc97c90f3edc7f10a385f4b4b2a2b2612ffab
>>> Removing node fb8dc97c90f3edc7f10a385f4b4b2a2b2612ffab from cluster 192.168.1.116:7005
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Check the node information after the removal

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb check 192.168.1.117:7000 | grep 192.168.1.116
M: 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0 192.168.1.116:7001
M: c433ff1b448fbcd3234632712643bc68d5213e3b 192.168.1.116:7003
M: 0607089e5bb3192563bd8082ff230b0eb27fbfeb 192.168.1.116:7000

Remove the old master nodes from the cluster

Things to watch when removing a master (a quick check is sketched after this list):
  If it still has slave nodes, move the slaves to another master or remove them first.
  If the master still owns slots, move the slots away first, then delete the master.
  A master must own 0 slots when it is deleted, otherwise the whole Redis Cluster can stop working!
  If the master to be removed is not empty, use the reshard command to move its data to other nodes first.
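A minimal way to confirm a master is really empty before running del-node (using the ID of 192.168.1.116:7000 from the check output above):

# the line for an empty master lists no slot ranges after "connected"
./redis-4.0.6/src/redis-cli -h 192.168.1.117 -p 7000 -c cluster nodes | grep 0607089e
# and its keyspace should be empty as well
./redis-4.0.6/src/redis-cli -h 192.168.1.116 -p 7000 dbsize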

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb del-node 192.168.1.116:7000 0607089e5bb3192563bd8082ff230b0eb27fbfeb
>>> Removing node 0607089e5bb3192563bd8082ff230b0eb27fbfeb from cluster 192.168.1.116:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb del-node 192.168.1.116:7001 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0
>>> Removing node 17831f8bbcd43ac05efc5486ebfcdbb210ce48f0 from cluster 192.168.1.116:7001
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb del-node 192.168.1.116:7003 c433ff1b448fbcd3234632712643bc68d5213e3b
>>> Removing node c433ff1b448fbcd3234632712643bc68d5213e3b from cluster 192.168.1.116:7003
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Check the remaining node information

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb check 192.168.1.117:7000
>>> Performing Cluster Check (using node 192.168.1.117:7000)
M: a6d7dacd679a96fd79b7de552428a63610d620e6 192.168.1.117:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 63893e74e6f8e2414eba97b094a80ae8b3caeb09 192.168.1.117:7004
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 8540a78c666cb1e81fb2821d112f3040542af056 192.168.1.117:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 1ebeedb98619bc88bf36acbbe4a766f2f74e629f 192.168.1.117:7003
   slots: (0 slots) slave
   replicates 8540a78c666cb1e81fb2821d112f3040542af056
S: e010d410223a2376d3308a68a724bac27ef8d74f 192.168.1.117:7001
   slots: (0 slots) slave
   replicates a6d7dacd679a96fd79b7de552428a63610d620e6
S: 2579ab004e277ba68197d851d47d0436e0cf203d 192.168.1.117:7005
   slots: (0 slots) slave
   replicates 63893e74e6f8e2414eba97b094a80ae8b3caeb09
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

[root@localhost redis-cluster]#  ./redis-4.0.6/src/redis-trib.rb info 192.168.1.117:7000
192.168.1.117:7000 (a6d7dacd...) -> 6652 keys | 5461 slots | 1 slaves.
192.168.1.117:7004 (63893e74...) -> 6683 keys | 5462 slots | 1 slaves.
192.168.1.117:7002 (8540a78c...) -> 6665 keys | 5461 slots | 1 slaves.
[OK] 20000 keys in 3 masters.

Back on my own cluster, the reshard runs into the same MIGRATE error:

Moving slot 16378 from 763a88d5328ab0ce07a312e726d78bb2141b5813
Moving slot 16379 from 763a88d5328ab0ce07a312e726d78bb2141b5813
Moving slot 16380 from 763a88d5328ab0ce07a312e726d78bb2141b5813
Moving slot 16381 from 763a88d5328ab0ce07a312e726d78bb2141b5813
Moving slot 16382 from 763a88d5328ab0ce07a312e726d78bb2141b5813
Moving slot 16383 from 763a88d5328ab0ce07a312e726d78bb2141b5813
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 10923 from 15.15.181.147:7005 to 15.99.72.165:7007:
[ERR] Calling MIGRATE: ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)

Fix

begin
    source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:keys,*keys])
rescue => e
    if o[:fix] && e.to_s =~ /BUSYKEY/
        xputs "*** Target key exists. Replacing it for FIX."
        source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])

Remove .client from the two source.r.client.call lines above, so they become source.r.call(...), and save the file.

[root@localhost src]# /root/tools/redis-4.0.11/src/redis-trib.rb reshard 15.99.72.165:7007
>>> Performing Cluster Check (using node 15.99.72.165:7007)
M: 8e134e67e4e83a613b90f67cc6e6b8d71c208886 15.99.72.165:7007
slots: (0 slots) master
0 additional replica(s)
S: c08e8c7faeede2220e621b2409061210e0b107ad 15.99.72.164:7001
slots: (0 slots) slave
replicates 421123bf7fb3a4061e34cab830530d87b21148ee
M: 763a88d5328ab0ce07a312e726d78bb2141b5813 15.15.181.147:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 31670db07d1bc7620a8f8254b26f2af00b04d1fd 15.99.72.164:7002
slots: (0 slots) slave
replicates 763a88d5328ab0ce07a312e726d78bb2141b5813
M: 421123bf7fb3a4061e34cab830530d87b21148ee 15.99.72.165:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 733609c2fbecdd41f454363698514e2f72ee0208 15.15.181.147:7006
slots: (0 slots) slave
replicates f452a66121e1e9c02b0ed28cafe03aaddb327c36
M: f452a66121e1e9c02b0ed28cafe03aaddb327c36 15.99.72.165:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
[WARNING] Node 15.99.72.165:7007 has slots in importing state (10923).
[WARNING] Node 15.15.181.147:7005 has slots in migrating state (10923).
[WARNING] The following slots are open: 10923
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** Please fix your cluster problems before resharding
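This is the same open-slot situation as in the 192.168.1.x example above, and it would be cleared the same way before re-running the reshard; a sketch for my nodes and slot 10923 (using whichever redis-cli binary path applies):

redis-cli -c -h 15.99.72.165 -p 7007 cluster setslot 10923 stable
redis-cli -c -h 15.15.181.147 -p 7005 cluster setslot 10923 stable
/root/tools/redis-4.0.11/src/redis-trib.rb fix 15.99.72.165:7007
/root/tools/redis-4.0.11/src/redis-trib.rb reshard 15.99.72.165:7007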

Original article: https://www.cnblogs.com/xiaohanlin/p/12085152.html
