Restoring a MongoDB Sharded Cluster

Starting with MongoDB 3.0, mongorestore must restore into a running mongod instance; earlier versions did not have this requirement.
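For example, step 6 below restores by pointing mongorestore at the port of a running mongod rather than at the data files directly:

mongorestore --drop /mdb/bin/s1 --port 27017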

1. Deploy a replica set for each shard

(1) Start a mongod for each replica set member

mongod --dbpath /mdb/data/s11 --logpath /mdb/mlog/s11.log --fork --port 27017 --replSet s1 --smallfiles &
mongod --dbpath /mdb/data/s12 --logpath /mdb/mlog/s12.log --fork --port 27018 --replSet s1 --smallfiles &

mongod --dbpath /mdb/data/s21 --logpath /mdb/mlog/s21.log --fork --port 27019 --replSet s2 --smallfiles &
mongod --dbpath /mdb/data/s22 --logpath /mdb/mlog/s22.log --fork --port 27020 --replSet s2 --smallfiles &

(2) Connect to the instances with the mongo shell and initiate the replica sets:

mongo --port=27017
>rs.initiate()
>rs.add("11.11.11.195:27018")
mongo --port=27019
>rs.initiate()
>rs.add("11.11.11.195:27020")

2. Deploy the config server

mongod --dbpath /mdb/data/sc --logpath /mdb/mlog/sc.log --fork --port 27021 --configsvr  &

3. Start a mongos instance

mongos --logpath /mdb/mlog/ss.log --fork --port 30000 --configdb 11.11.11.195:27021

4. Add the shards to the cluster

Log in to the router (mongos):
./mongo --port 30000

Add the shard nodes:
sh.addShard("s1/11.11.11.195:27018")
sh.addShard("s2/11.11.11.195:27020")
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5722c003710922b361783847")
}
  shards:
        {  "_id" : "s1",  "host" : "s1/11.11.11.195:27018,11.11.11.195:27017" }
        {  "_id" : "s2",  "host" : "s2/11.11.11.195:27020,11.11.11.195:27019" }
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }

mongos>

5. Shut down the mongos instances
Once the sharded cluster has been started, shut down the mongos instance(s).
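One way to do this (a sketch, assuming the mongos from step 3 is still listening on port 30000) is to run db.shutdownServer() against its admin database:

mongo --port 30000 admin --eval "db.shutdownServer()"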

6. Restore the shard data

mongorestore --drop /mdb/bin/s1 --port 27017
2016-04-29T10:20:20.643+0800    building a list of dbs and collections to restore from /mdb/bin/s1 dir
2016-04-29T10:20:20.657+0800    reading metadata file from /mdb/bin/s1/snps/elegans.metadata.json
2016-04-29T10:20:20.658+0800    reading metadata file from /mdb/bin/s1/test/system.users.metadata.json
2016-04-29T10:20:20.658+0800    reading metadata file from /mdb/bin/s1/snps/system.users.metadata.json
2016-04-29T10:20:20.658+0800    restoring snps.elegans from file /mdb/bin/s1/snps/elegans.bson
2016-04-29T10:20:20.658+0800    restoring test.system.users from file /mdb/bin/s1/test/system.users.bson
2016-04-29T10:20:20.666+0800    restoring snps.system.users from file /mdb/bin/s1/snps/system.users.bson
2016-04-29T10:20:22.974+0800    restoring indexes for collection snps.system.users from metadata
2016-04-29T10:20:22.975+0800    finished restoring snps.system.users
2016-04-29T10:20:23.073+0800    restoring indexes for collection test.system.users from metadata
2016-04-29T10:20:23.074+0800    finished restoring test.system.users
2016-04-29T10:20:23.644+0800    [##......................]  snps.elegans  1.6 MB/13.3 MB  (11.8%)
2016-04-29T10:20:26.644+0800    [##############..........]  snps.elegans  7.9 MB/13.3 MB  (59.3%)
2016-04-29T10:20:29.239+0800    restoring indexes for collection snps.elegans from metadata
2016-04-29T10:20:29.660+0800    finished restoring snps.elegans
2016-04-29T10:20:29.660+0800    done
mongorestore --drop /mdb/bin/s2 --port 27019
2016-04-29T10:20:44.153+0800    building a list of dbs and collections to restore from /mdb/bin/s2 dir
2016-04-29T10:20:44.165+0800    reading metadata file from /mdb/bin/s2/snps/elegans.metadata.json
2016-04-29T10:20:44.165+0800    restoring snps.elegans from file /mdb/bin/s2/snps/elegans.bson
2016-04-29T10:20:44.184+0800    restoring indexes for collection snps.elegans from metadata
2016-04-29T10:20:44.186+0800    finished restoring snps.elegans
2016-04-29T10:20:44.186+0800    done

Then shut down all of the shard mongod instances.
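One possible way (a sketch, assuming a Linux host and the data paths from step 1) is to use mongod's --shutdown option, which cleanly stops the instance that owns each dbpath:

mongod --shutdown --dbpath /mdb/data/s11
mongod --shutdown --dbpath /mdb/data/s12
mongod --shutdown --dbpath /mdb/data/s21
mongod --shutdown --dbpath /mdb/data/s22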

7. Restore the config server data

mongorestore --drop /mdb/bin/config_server --port 27021
2016-04-29T10:26:24.294+0800    building a list of dbs and collections to restore from /mdb/bin/config_server dir
2016-04-29T10:26:24.296+0800    reading metadata file from /mdb/bin/config_server/config/changelog.metadata.json
2016-04-29T10:26:24.296+0800    reading metadata file from /mdb/bin/config_server/config/locks.metadata.json
2016-04-29T10:26:24.297+0800    restoring config.locks from file /mdb/bin/config_server/config/locks.bson
2016-04-29T10:26:24.297+0800    reading metadata file from /mdb/bin/config_server/config/actionlog.metadata.json
2016-04-29T10:26:24.302+0800    reading metadata file from /mdb/bin/config_server/config/chunks.metadata.json
2016-04-29T10:26:24.302+0800    restoring config.chunks from file /mdb/bin/config_server/config/chunks.bson
2016-04-29T10:26:24.303+0800    restoring indexes for collection config.locks from metadata
2016-04-29T10:26:24.303+0800    restoring config.actionlog from file /mdb/bin/config_server/config/actionlog.bson
2016-04-29T10:26:24.303+0800    restoring indexes for collection config.chunks from metadata
2016-04-29T10:26:24.304+0800    restoring config.changelog from file /mdb/bin/config_server/config/changelog.bson
2016-04-29T10:26:24.304+0800    finished restoring config.locks
2016-04-29T10:26:24.306+0800    restoring indexes for collection config.actionlog from metadata
2016-04-29T10:26:24.306+0800    reading metadata file from /mdb/bin/config_server/config/shards.metadata.json
2016-04-29T10:26:24.306+0800    restoring config.shards from file /mdb/bin/config_server/config/shards.bson
2016-04-29T10:26:24.307+0800    finished restoring config.chunks
2016-04-29T10:26:24.307+0800    finished restoring config.actionlog
2016-04-29T10:26:24.307+0800    restoring indexes for collection config.shards from metadata
2016-04-29T10:26:24.308+0800    restoring indexes for collection config.changelog from metadata
2016-04-29T10:26:24.308+0800    reading metadata file from /mdb/bin/config_server/config/databases.metadata.json
2016-04-29T10:26:24.308+0800    restoring config.databases from file /mdb/bin/config_server/config/databases.bson
2016-04-29T10:26:24.308+0800    reading metadata file from /mdb/bin/config_server/config/lockpings.metadata.json
2016-04-29T10:26:24.308+0800    finished restoring config.shards
2016-04-29T10:26:24.308+0800    reading metadata file from /mdb/bin/config_server/config/collections.metadata.json
2016-04-29T10:26:24.308+0800    finished restoring config.changelog
2016-04-29T10:26:24.309+0800    restoring config.lockpings from file /mdb/bin/config_server/config/lockpings.bson
2016-04-29T10:26:24.309+0800    restoring config.collections from file /mdb/bin/config_server/config/collections.bson
2016-04-29T10:26:24.325+0800    reading metadata file from /mdb/bin/config_server/config/mongos.metadata.json
2016-04-29T10:26:24.325+0800    restoring indexes for collection config.databases from metadata
2016-04-29T10:26:24.326+0800    restoring config.mongos from file /mdb/bin/config_server/config/mongos.bson
2016-04-29T10:26:24.326+0800    restoring indexes for collection config.lockpings from metadata
2016-04-29T10:26:24.327+0800    restoring indexes for collection config.collections from metadata
2016-04-29T10:26:24.327+0800    finished restoring config.databases
2016-04-29T10:26:24.327+0800    finished restoring config.lockpings
2016-04-29T10:26:24.328+0800    reading metadata file from /mdb/bin/config_server/config/version.metadata.json
2016-04-29T10:26:24.328+0800    restoring config.version from file /mdb/bin/config_server/config/version.bson
2016-04-29T10:26:24.328+0800    reading metadata file from /mdb/bin/config_server/config/settings.metadata.json
2016-04-29T10:26:24.328+0800    restoring config.settings from file /mdb/bin/config_server/config/settings.bson
2016-04-29T10:26:24.328+0800    finished restoring config.collections
2016-04-29T10:26:24.328+0800    reading metadata file from /mdb/bin/config_server/config/tags.metadata.json
2016-04-29T10:26:24.328+0800    restoring config.tags from file /mdb/bin/config_server/config/tags.bson
2016-04-29T10:26:24.366+0800    restoring indexes for collection config.tags from metadata
2016-04-29T10:26:24.366+0800    restoring indexes for collection config.settings from metadata
2016-04-29T10:26:24.366+0800    restoring indexes for collection config.version from metadata
2016-04-29T10:26:24.375+0800    restoring indexes for collection config.mongos from metadata
2016-04-29T10:26:24.376+0800    finished restoring config.settings
2016-04-29T10:26:24.376+0800    finished restoring config.tags
2016-04-29T10:26:24.376+0800    finished restoring config.mongos
2016-04-29T10:26:24.376+0800    finished restoring config.version
2016-04-29T10:26:24.376+0800    done

8. Start the mongos instance

mongos --logpath /mdb/mlog/ss.log --fork --port 30000 --configdb 11.11.11.195:27021
2016-04-29T10:27:56.855+0800 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
about to fork child process, waiting until server is ready for connections.
forked process: 25444
child process started successfully, parent exiting

9. If the shard host names have changed, update the config database

mongos> use config
mongos> db.shards.find()
{ "_id" : "s1", "host" : "s1/genome_svr1:27501,genome_svr2:27502,genome_svr2:27503" }
{ "_id" : "s2", "host" : "s2/genome_svr4:27601,genome_svr5:27602,genome_svr5:27603" }
mongos> db.shards.update( { "_id": "s1" }, { $set: { "host": "s1/11.11.11.195:27017,11.11.11.195:27018" } }, { multi: true })
mongos> db.shards.update( { "_id": "s2" }, { $set: { "host": "s2/11.11.11.195:27019,11.11.11.195:27020" } }, { multi: true })
mongos> db.shards.find()
{ "_id" : "s1", "host" : "s1/11.11.11.195:27018,11.11.11.195:27017" }
{ "_id" : "s2", "host" : "s2/11.11.11.195:27020,11.11.11.195:27019" }
mongos>

10. Restart all of the shard mongod instances
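For example (reusing the options from step 1), each shard member is restarted with the same dbpath, port, and replica set name:

mongod --dbpath /mdb/data/s11 --logpath /mdb/mlog/s11.log --fork --port 27017 --replSet s1 --smallfiles &
mongod --dbpath /mdb/data/s12 --logpath /mdb/mlog/s12.log --fork --port 27018 --replSet s1 --smallfiles &
mongod --dbpath /mdb/data/s21 --logpath /mdb/mlog/s21.log --fork --port 27019 --replSet s2 --smallfiles &
mongod --dbpath /mdb/data/s22 --logpath /mdb/mlog/s22.log --fork --port 27020 --replSet s2 --smallfiles &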

11. Restart the other mongos instances
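Any additional routers (for example, a mongos on another host; the log file name here is hypothetical) can be started the same way as in step 8, pointing at the same config server:

mongos --logpath /mdb/mlog/ss2.log --fork --port 30000 --configdb 11.11.11.195:27021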

12. Verify the cluster

mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("553f0cc819d7841961ac8f4b")
}
  shards:
        {  "_id" : "s1",  "host" : "s1/11.11.11.195:27018,11.11.11.195:27017" }
        {  "_id" : "s2",  "host" : "s2/11.11.11.195:27020,11.11.11.195:27019" }
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "snps",  "partitioned" : true,  "primary" : "s1" }
                snps.elegans
                        shard key: { "snp" : 1 }
                        chunks:
                                s1      1
                                s2      1
                        { "snp" : { "$minKey" : 1 } } -->> { "snp" : "haw100000" } on : s2 Timestamp(2, 0)
                        { "snp" : "haw100000" } -->> { "snp" : { "$maxKey" : 1 } } on : s1 Timestamp(2, 1)
        {  "_id" : "test",  "partitioned" : false,  "primary" : "s1" }

mongos>