A Robust MongoDB Cluster: Sharding with Replica Sets

1. MongoDB Sharding + Replica Sets

A robust cluster design

Multiple config servers; multiple mongos servers; every shard a replica set; write concern (w) set correctly

Architecture diagram (figure omitted)


Notes:

1. This lab runs everything on one machine, starting separate mongod instances on different ports with different dbpaths.

2. There are nine mongod instances in total, grouped into three replica sets (shard1, shard2, shard3), each with one primary and two secondaries.

3. The number of mongos processes is not limited. The recommendation is to run a mongos on each application server, so that every application server talks to its own local mongos; if one server stops working, the other application servers can still talk to their own mongos instances.

4. This lab simulates two application servers (two mongos services).

5. In production every shard should be a replica set, so that a single failed server does not take the whole shard down.
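
The last item in the design list, setting w correctly, means requiring each write to be acknowledged by more than one member of the owning shard's replica set. A minimal sketch (the collection and document here are placeholders; writeConcern is a standard insert option in this shell version):

mongos> db.test_coll.insert(
    { "x" : 1 },
    { writeConcern : { w : "majority", wtimeout : 5000 } }    // wait until a majority of the replica set has the write, up to 5 seconds
)

With w set to "majority", an acknowledged write cannot be lost when a single server in the shard fails, which is the point of combining sharding with replica sets.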

Deployment environment

Create the directories

[root@Master cluster2]# mkdir -p shard{1,2,3}/node{1,2,3}
[root@Master cluster2]# mkdir -p shard{1,2,3}/logs
[root@Master cluster2]# ls shard*
shard1:
logs  node1  node2  node3

shard2:
logs  node1  node2  node3

shard3:
logs  node1  node2  node3
[root@Master cluster2]# mkdir -p config/logs
[root@Master cluster2]# mkdir -p config/node{1,2,3}
[root@Master cluster2]# ls config/
logs  node1  node2  node3

[root@Master cluster2]# mkdir -p mongos/logs

Start the config servers


Config server        dbpath                          logpath                                  port

node1                /data/mongodb/config/node1      /data/mongodb/config/logs/node1.log      10000
node2                /data/mongodb/config/node2      /data/mongodb/config/logs/node2.log      20000
node3                /data/mongodb/config/node3      /data/mongodb/config/logs/node3.log      30000

# Start the three config servers as planned: the same as starting a single config server, just repeated three times

[root@Master cluster2]# mongod --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000
[root@Master cluster2]# mongod --dbpath config/node2 --logpath config/logs/node2.log --logappend --fork --port 20000
[root@Master cluster2]# mongod --dbpath config/node3 --logpath config/logs/node3.log --logappend --fork --port 30000
[root@Master cluster2]# ps -ef|grep mongod|grep -v grep
mongod    2329     1  0 20:05 ?        00:00:02 /usr/bin/mongod -f /etc/mongod.conf
root      2703     1  1 20:13 ?        00:00:00 mongod --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000
root      2716     1  1 20:13 ?        00:00:00 mongod --dbpath config/node2 --logpath config/logs/node2.log --logappend --fork --port 20000
root      2729     1  1 20:13 ?        00:00:00 mongod --dbpath config/node3 --logpath config/logs/node3.log --logappend --fork --port 30000

Start the mongos routers


Mongos server        dbpath       logpath                                  port

node1                --           /data/mongodb/mongos/logs/node1.log      40000
node2                --           /data/mongodb/mongos/logs/node2.log      50000

# The number of mongos processes is not limited; usually each application server runs one mongos

[root@Master cluster2]# mongos --port 40000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos1.log --logappend --fork
[root@Master cluster2]# mongos --port 50000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos2.log --logappend --fork
[root@Master cluster2]# ps -ef|grep mongos|grep -v grep
root      2809     1  0 20:18 ?        00:00:00 mongos --port 40000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos1.log --logappend --fork
root      2862     1  0 20:19 ?        00:00:00 mongos --port 50000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos2.log --logappend --fork
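
Before adding shards it is worth confirming that each router really is a mongos. A quick sanity check (the isMaster reply from a mongos contains "msg" : "isdbgrid"):

[root@Master cluster2]# mongo --port 40000
mongos> db.adminCommand({ "isMaster" : 1 }).msg    // prints "isdbgrid" on a mongos, confirming this is a router and not a plain mongod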

Configure the replica sets

As planned, configure and start the three replica sets: shard1, shard2, and shard3

# shard1 is used here to illustrate the configuration

# Start the three mongod processes

[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node1 --logpath shard1/logs/node1.log --logappend --fork --port 10001
[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node2 --logpath shard1/logs/node2.log --logappend --fork --port 10002
[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node3 --logpath shard1/logs/node3.log --logappend --fork --port 10003

# Initialize the replica set shard1

[root@Master cluster2]# mongo --port 10001
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:10001/test
> use admin
switched to db admin
> rsconf={"_id" : "shard1","members" : [{"_id" : 0, "host" : "localhost:10001"}]}
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:10001"
                }
        ]
}
> rs.initiate(rsconf)
{ "ok" : 1 }
shard1:OTHER> rs.add("localhost:10002")
{ "ok" : 1 }
shard1:PRIMARY> rs.add("localhost:10003")
{ "ok" : 1 }
shard1:PRIMARY> rs.conf()
{
        "_id" : "shard1",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:10001",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:10002",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "localhost:10003",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
} 
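
Instead of initiating with a single member and calling rs.add() twice, the full member list can be passed to rs.initiate() in one step; a sketch for shard1 with the same hosts as above:

> rs.initiate({
    "_id" : "shard1",
    "members" : [
        { "_id" : 0, "host" : "localhost:10001" },    // becomes primary after the election
        { "_id" : 1, "host" : "localhost:10002" },
        { "_id" : 2, "host" : "localhost:10003" }
    ]
})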

Configure shard2 and shard3 as replica sets in the same way as shard1

[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node1 --logpath shard2/logs/node1.log --logappend --fork --port 20001
[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node2 --logpath shard2/logs/node2.log --logappend --fork --port 20002
[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node3 --logpath shard2/logs/node3.log --logappend --fork --port 20003
[root@Master cluster2]# mongo --port 20001
> use admin
> rsconf={"_id" : "shard2","members" : [{"_id" : 0, "host" : "localhost:20001"}]}
> rs.initiate(rsconf)
shard2:OTHER> rs.add("localhost:20002")
shard2:PRIMARY> rs.add("localhost:20003")
shard2:PRIMARY> rs.conf()
{
        "_id" : "shard2",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:20001",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:20002",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "localhost:20003",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}

  

[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node1 --logpath shard3/logs/node1.log --logappend --fork --port 30001
[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node2 --logpath shard3/logs/node2.log --logappend --fork --port 30002
[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node3 --logpath shard3/logs/node3.log --logappend --fork --port 30003
[root@Master cluster2]# mongo --port 30001
connecting to: 127.0.0.1:30001/test
> use admin
> rsconf={"_id" : "shard3","members" : [{"_id" : 0, "host" : "localhost:30001"}]}
> rs.initiate(rsconf)
shard3:OTHER> rs.add("localhost:30002")
shard3:PRIMARY> rs.add("localhost:30003")
shard3:PRIMARY> rs.conf()
{
        "_id" : "shard3",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:30001",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:30002",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "localhost:30003",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}

Add the shards (each a replica set)

# Connect to a mongos and switch to admin; the connection here must go through a router node

[root@Master cluster2]# mongo --port 40000
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:40000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({"addShard":"shard1/localhost:10001"})
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({"addShard":"shard2/localhost:20001"})
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({"addShard":"shard3/localhost:30001"})
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> db.runCommand({listshards:1})
{
        "shards" : [
                {
                        "_id" : "shard1",
                        "host" : "shard1/localhost:10001,localhost:10002,localhost:10003"
                },
                {
                        "_id" : "shard2",
                        "host" : "shard2/localhost:20001,localhost:20002,localhost:20003"
                },
                {
                        "_id" : "shard3",
                        "host" : "shard3/localhost:30001,localhost:30002,localhost:30003"
                }
        ],
        "ok" : 1
}
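
The sh helper wraps the same addShard command, so the three shards could equally have been added like this (a sketch; listing one seed member per replica set is enough, and mongos discovers the rest, as the listshards output above shows):

mongos> sh.addShard("shard1/localhost:10001")
mongos> sh.addShard("shard2/localhost:20001")
mongos> sh.addShard("shard3/localhost:30001")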

Enable sharding for the database and collections

To enable sharding on a database, run:

> db.runCommand( { enablesharding : "<database name>" } );

Running this command lets the database span shards; if you skip this step, the database stays on a single shard.

Once sharding is enabled on a database, its different collections can be stored on different shards, but any single collection still lives on one shard as a whole; to shard an individual collection, some extra steps on the collection itself are needed.

# Example: enable sharding for the test database; connect to a mongos process

[root@Master cluster2]# mongo --port 50000
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:50000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({"enablesharding":"test"})
{ "ok" : 1 }

To shard an individual collection as well, give the collection a shard key with the following command:

> db.runCommand( { shardcollection : "<namespace>", key : { "<field name>" : 1 } });

Notes: a. For a sharded collection the system automatically creates an index on the shard key (the user can also create it in advance)

b. A sharded collection can have only one unique index, and it must be on the shard key; other unique indexes are not allowed
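
As an illustration of point (a), the index can be created before sharding, and the key does not have to be _id. A hedged sketch using a hashed key on a hypothetical collection and field (hashed keys spread monotonically increasing values across chunks):

mongos> use test
mongos> db.yujx2.createIndex({ "uid" : "hashed" })              // pre-create the shard-key index (collection yujx2 and field uid are hypothetical)
mongos> sh.shardCollection("test.yujx2", { "uid" : "hashed" })  // helper equivalent of the shardcollection command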

# Shard the collection test.yujx

mongos> db.runCommand({"shardcollection":"test.yujx","key":{"_id":1}})
{ "collectionsharded" : "test.yujx", "ok" : 1 }

Generate test data

mongos> use test
switched to db test
mongos> for(var i=1;i<=88888;i++) db.yujx.save({"id":i,"a":123456789,"b":888888888,"c":100000000})
WriteResult({ "nInserted" : 1 })
mongos> db.yujx.count()
88888
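
Inserting 88,888 documents one save() at a time sends one round trip per document through the router. A sketch of the same load using the shell's Bulk API, which batches the inserts:

mongos> var bulk = db.yujx.initializeUnorderedBulkOp();
mongos> for (var i = 1; i <= 88888; i++) { bulk.insert({ "id" : i, "a" : 123456789, "b" : 888888888, "c" : 100000000 }); }
mongos> bulk.execute();    // the shell sends the inserts in large batches instead of one by one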

Check how the collection is sharded

mongos> db.yujx.stats()
{
        "sharded" : true,
        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
        "userFlags" : 1,
        "capped" : false,
        "ns" : "test.yujx",
        "count" : 88888,
        "numExtents" : 9,
        "size" : 9955456,
        "storageSize" : 22523904,
        "totalIndexSize" : 2918832,
        "indexSizes" : {
                "_id_" : 2918832
        },
        "avgObjSize" : 112,
        "nindexes" : 1,
        "nchunks" : 3,
        "shards" : {
                "shard1" : {
                        "ns" : "test.yujx",
                        "count" : 8,
                        "size" : 896,
                        "avgObjSize" : 112,
                        "numExtents" : 1,
                        "storageSize" : 8192,
                        "lastExtentSize" : 8192,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 1,
                        "totalIndexSize" : 8176,
                        "indexSizes" : {
                                "_id_" : 8176
                        },
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("55d15366716d7504d5d74c4c")
                        }
                },
                "shard2" : {
                        "ns" : "test.yujx",
                        "count" : 88879,
                        "size" : 9954448,
                        "avgObjSize" : 112,
                        "numExtents" : 7,
                        "storageSize" : 22507520,
                        "lastExtentSize" : 11325440,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 1,
                        "totalIndexSize" : 2902480,
                        "indexSizes" : {
                                "_id_" : 2902480
                        },
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("55d1543eabed7d6d4a71d25e")
                        }
                },
                "shard3" : {
                        "ns" : "test.yujx",
                        "count" : 1,
                        "size" : 112,
                        "avgObjSize" : 112,
                        "numExtents" : 1,
                        "storageSize" : 8192,
                        "lastExtentSize" : 8192,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 1,
                        "totalIndexSize" : 8176,
                        "indexSizes" : {
                                "_id_" : 8176
                        },
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("55d155346f36550e3c5f062c")
                        }
                }
        },
        "ok" : 1
}
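
Note how skewed the result is: 88,879 of the 88,888 documents sit on shard2. An ascending _id shard key sends every new insert to the chunk with the highest range, and the balancer only moves whole chunks afterwards; this is exactly the case where a hashed shard key helps. The per-shard summary can also be printed directly with a shell helper:

mongos> db.yujx.getShardDistribution()    // prints document counts, data size, and chunk counts per shard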

Check how the database is sharded

mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("55d152a35348652fbc726a10")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/localhost:10001,localhost:10002,localhost:10003" }
        {  "_id" : "shard2",  "host" : "shard2/localhost:20001,localhost:20002,localhost:20003" }
        {  "_id" : "shard3",  "host" : "shard3/localhost:30001,localhost:30002,localhost:30003" }
  balancer:
        Currently enabled:  yes
        Currently running:  yes
                Balancer lock taken at Sun Aug 16 2015 20:43:49 GMT-0700 (PDT) by Master.Hadoop:50000:1439781543:1804289383:Balancer:846930886
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                2 : Success
                1 : Failed with error 'could not acquire collection lock for test.yujx to migrate chunk [{ : MinKey },{ : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in test.yujx is taken.', from shard1 to shard2
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : true,  "primary" : "shard1" }
                test.yujx
                        shard key: { "_id" : 1 }
                        chunks:
                                shard1  1
                                shard2  1
                                shard3  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("55d157cca0c90140e33a9342") } on : shard3 Timestamp(3, 0)
                        { "_id" : ObjectId("55d157cca0c90140e33a9342") } -->> { "_id" : ObjectId("55d157cca0c90140e33a934a") } on : shard1 Timestamp(3, 1)
                        { "_id" : ObjectId("55d157cca0c90140e33a934a") } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(2, 0) 

# Alternatively, query the config database through mongos

mongos> use config
switched to db config
mongos> db.shards.find()
{ "_id" : "shard1", "host" : "shard1/localhost:10001,localhost:10002,localhost:10003" }
{ "_id" : "shard2", "host" : "shard2/localhost:20001,localhost:20002,localhost:20003" }
{ "_id" : "shard3", "host" : "shard3/localhost:30001,localhost:30002,localhost:30003" }
mongos> db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard1" }
mongos> db.chunks.find()
{ "_id" : "test.yujx-_id_MinKey", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("55d157cca0c90140e33a9342") }, "shard" : "shard3" }
{ "_id" : "test.yujx-_id_ObjectId(‘55d157cca0c90140e33a9342‘)", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("55d157cca0c90140e33a9342") }, "max" : { "_id" : ObjectId("55d157cca0c90140e33a934a") }, "shard" : "shard1" }
{ "_id" : "test.yujx-_id_ObjectId(‘55d157cca0c90140e33a934a‘)", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("55d157cca0c90140e33a934a") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard2" }

Single-point-of-failure analysis

Since this lab was done to get started with MongoDB, and simulating every failure takes a lot of time, the failure scenarios are not listed here one by one. For an analysis of the failure scenarios, see:
http://blog.itpub.net/27000195/viewspace-1404402/
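
If you do want a quick failure test yourself, stepping down a shard's primary and watching the election is enough for a first look; a sketch against shard1 (same ports as above; the shell connection drops briefly while a secondary is promoted):

[root@Master cluster2]# mongo --port 10001
shard1:PRIMARY> rs.stepDown(60)    // the primary resigns for 60 seconds and an election picks a new primary
shard1:SECONDARY> rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr); })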

