Deploying MongoDB 2.6: Replica Sets + Sharding

Deployment plan

Operating system: RedHat 6.4, 64-bit


Role           Config             Route             Shard 1                     Shard 2                     Shard 3
Port           28000              27017             27018                       27019                       27020
192.168.1.30   /etc/config.conf   /etc/route.conf   /etc/sd1.conf (primary)     /etc/sd2.conf (arbiter)     /etc/sd3.conf (secondary)
192.168.1.52   /etc/config.conf   /etc/route.conf   /etc/sd1.conf (secondary)   /etc/sd2.conf (primary)     /etc/sd3.conf (arbiter)
192.168.1.108  /etc/config.conf   /etc/route.conf   /etc/sd1.conf (arbiter)     /etc/sd2.conf (secondary)   /etc/sd3.conf (primary)

1. Create the following directories on all three nodes. For a test deployment, make sure the / filesystem has about 15 GB of free space.

[root@node1 ~]# mkdir -p /var/config
[root@node1 ~]# mkdir -p /var/sd1
[root@node1 ~]# mkdir -p /var/sd2
[root@node1 ~]# mkdir -p /var/sd3

2. Review the configuration files

[root@node1 ~]# cat /etc/config.conf
port=28000
dbpath=/var/config
logpath=/var/config/config.log
logappend=true
fork=true
configsvr=true

[root@node1 ~]# cat /etc/route.conf
port=27017
configdb=192.168.1.30:28000,192.168.1.52:28000,192.168.1.108:28000
logpath=/var/log/mongos.log
logappend=true
fork=true

[root@node1 ~]# cat /etc/sd1.conf
port=27018
dbpath=/var/sd1
logpath=/var/sd1/shard1.log
logappend=true
shardsvr=true
replSet=set1
fork=true

[root@node1 ~]# cat /etc/sd2.conf
port=27019
dbpath=/var/sd2
logpath=/var/sd2/shard2.log
logappend=true
shardsvr=true
replSet=set2
fork=true

[root@node1 ~]# cat /etc/sd3.conf
port=27020
dbpath=/var/sd3
logpath=/var/sd3/shard3.log
logappend=true
shardsvr=true
replSet=set3
fork=true

3. Synchronize the clocks on the three nodes, as sketched below.
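Replica set elections and the config servers' metadata are sensitive to clock skew, so the three clocks genuinely need to agree. A minimal sketch using the stock RHEL 6 tools (this assumes the ntpdate/ntp packages are installed and the hosts can reach a public NTP pool; substitute your own time source on an isolated network):

[root@node1 ~]# ntpdate pool.ntp.org   # one-off sync
[root@node1 ~]# chkconfig ntpd on      # keep time synced across reboots
[root@node1 ~]# service ntpd start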

4. Start the config server on all three nodes

Node 1
[root@node1 ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3472
child process started successfully, parent exiting
[root@node1 ~]# ps -ef |grep mongo
root      3472     1  1 19:15 ?        00:00:01 mongod -f /etc/config.conf
root      3499  2858  0 19:17 pts/0    00:00:00 grep mongo
[root@node1 ~]# netstat -anltp|grep 28000
tcp        0      0 0.0.0.0:28000               0.0.0.0:*                   LISTEN      3472/mongod

Node 2
[root@node2 ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2998
child process started successfully, parent exiting
[root@node2 ~]# ps -ef |grep mongo
root      2998     1  8 19:15 ?        00:00:08 mongod -f /etc/config.conf
root      3014  2546  0 19:17 pts/0    00:00:00 grep mongo
[root@node2 ~]# netstat -anltp|grep 28000
tcp        0      0 0.0.0.0:28000               0.0.0.0:*                   LISTEN      2998/mongod

Node 3
[root@node3 ~]# mongod -f /etc/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4086
child process started successfully, parent exiting
[root@node3 ~]# ps -ef |grep mongo
root      4086     1  2 19:25 ?        00:00:00 mongod -f /etc/config.conf
root      4100  3786  0 19:25 pts/0    00:00:00 grep mongo
[root@node3 ~]# netstat -anltp|grep 28000
tcp        0      0 0.0.0.0:28000               0.0.0.0:*                   LISTEN      4086/mongod

5. Start the route server (mongos) on all three nodes

Node 1
[root@node1 ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3575
child process started successfully, parent exiting
[root@node1 ~]# netstat -anltp|grep 2701
tcp        0      0 0.0.0.0:27017               0.0.0.0:*                   LISTEN      3575/mongos

Node 2
[root@node2 ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3057
child process started successfully, parent exiting
[root@node2 ~]# netstat -anltp|grep 2701
tcp        0      0 0.0.0.0:27017

Node 3
[root@node3 ~]# mongos -f /etc/route.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4108
child process started successfully, parent exiting
[root@node3 ~]# netstat -anltp|grep 27017
tcp        0      0 0.0.0.0:27017               0.0.0.0:*                   LISTEN      4108/mongos

6. Start the shard servers on all three nodes. Run the following on each node:

mongod -f /etc/sd1.conf

mongod -f /etc/sd2.conf

mongod -f /etc/sd3.conf

Node 1
[root@node1 ~]# ps -ef |grep mongo
root      3472     1  2 19:15 ?        00:02:18 mongod -f /etc/config.conf
root      3575     1  0 19:28 ?        00:00:48 mongos -f /etc/route.conf
root      4135     1  0 20:52 ?        00:00:07 mongod -f /etc/sd1.conf
root      4205     1  0 20:55 ?        00:00:05 mongod -f /etc/sd2.conf
root      4265     1  0 20:58 ?        00:00:04 mongod -f /etc/sd3.conf

Node 2
[root@node2 ~]# ps -ef |grep mongo
root      2998     1  1 19:15 ?        00:02:02 mongod -f /etc/config.conf
root      3057     1  1 19:28 ?        00:01:02 mongos -f /etc/route.conf
root      3277     1  1 20:52 ?        00:00:20 mongod -f /etc/sd1.conf
root      3334     1  6 20:56 ?        00:00:52 mongod -f /etc/sd2.conf
root      3470     1  1 21:01 ?        00:00:07 mongod -f /etc/sd3.conf

Node 3
[root@node3 data]# ps -ef |grep mongo
root      4086     1  1 19:25 ?        00:01:58 mongod -f /etc/config.conf
root      4108     1  0 19:27 ?        00:00:55 mongos -f /etc/route.conf
root      4592     1  0 20:54 ?        00:00:07 mongod -f /etc/sd1.conf
root      4646     1  3 20:56 ?        00:00:30 mongod -f /etc/sd2.conf
root      4763     1  4 21:04 ?        00:00:12 mongod -f /etc/sd3.conf
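As with the config and route servers, it is worth confirming the three shard ports are actually listening on every node:

[root@node1 ~]# netstat -anltp | grep -E '27018|27019|27020'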

7. Configure the replica sets

On 192.168.1.30:

[root@node1 ~]# mongo --port 27018
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27018/test
> use admin
switched to db admin
> rs1={_id:"set1",members:[{_id:0,host:"192.168.1.30:27018",priority:2},{_id:1,host:"192.168.1.52:27018"},{_id:2,host:"192.168.1.108:27018",arbiterOnly:true}]}
{
    "_id" : "set1",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.1.30:27018",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.1.52:27018"
        },
        {
            "_id" : 2,
            "host" : "192.168.1.108:27018",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(rs1)
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}

On 192.168.1.52:

[root@node2 ~]# mongo --port 27019
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27019/test
> use admin
switched to db admin
> rs2={_id:"set2",members:[{_id:0,host:"192.168.1.52:27019",priority:2},{_id:1,host:"192.168.1.108:27019"},{_id:2,host:"192.168.1.30:27019",arbiterOnly:true}]}
{
    "_id" : "set2",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.1.52:27019",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.1.108:27019"
        },
        {
            "_id" : 2,
            "host" : "192.168.1.30:27019",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(rs2);
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}

On 192.168.1.108:

[root@node3 sd3]# mongo --port 27020
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27020/test
> use admin
switched to db admin
> rs3={_id:"set3",members:[{_id:0,host:"192.168.1.108:27020",priority:2},{_id:1,host:"192.168.1.30:27020"},{_id:2,host:"192.168.1.52:27020",arbiterOnly:true}]}
{
    "_id" : "set3",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.1.108:27020",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.1.30:27020"
        },
        {
            "_id" : 2,
            "host" : "192.168.1.52:27020",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(rs3);
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}
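Give each set a minute to hold its election, then verify the member states from any data-bearing member with the standard rs.status() helper; for example, for set1:

[root@node1 ~]# mongo --port 27018
> rs.status()

Each member should report a stateStr of PRIMARY, SECONDARY, or ARBITER matching the plan above; repeat on ports 27019 and 27020 for set2 and set3.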

8. Add the shards

This can be done from any one of the three nodes.

On 192.168.1.30:

[root@node1 sd3]# mongo --port 27017
MongoDB shell version: 2.6.4
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"set1/192.168.1.30:27018,192.168.1.52:27018,192.168.1.108:27018"})
{ "shardAdded" : "set1", "ok" : 1 }
mongos> db.runCommand({addshard:"set2/192.168.1.30:27019,192.168.1.52:27019,192.168.1.108:27019"})
{ "shardAdded" : "set2", "ok" : 1 }
mongos> db.runCommand({addshard:"set3/192.168.1.30:27020,192.168.1.52:27020,192.168.1.108:27020"})
{ "shardAdded" : "set3", "ok" : 1 }

9. View the shard information. Note that each shard's host string lists only the data-bearing members; arbiters never appear here, which is why each set shows just two hosts.

mongos> db.runCommand({listshards : 1})
{
    "shards" : [
        {
            "_id" : "set1",
            "host" : "set1/192.168.1.30:27018,192.168.1.52:27018"
        },
        {
            "_id" : "set2",
            "host" : "set2/192.168.1.108:27019,192.168.1.52:27019"
        },
        {
            "_id" : "set3",
            "host" : "set3/192.168.1.108:27020,192.168.1.30:27020"
        }
    ],
    "ok" : 1
}

10. Remove a shard

mongos> db.runCommand({removeshard:"set3"})
{
    "msg" : "draining started successfully",
    "state" : "started",
    "shard" : "set3",
    "ok" : 1
}
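Draining migrates set3's chunks to the remaining shards in the background and can take a while. Re-issuing the same command polls the progress: it reports an ongoing draining state with a count of remaining chunks, and finally a "completed" state, after which set3's mongod processes can be shut down. Until then the shard still appears in the metadata, as in the next section.

mongos> db.runCommand({removeshard:"set3"})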

11. Manage the shards

mongos> use config

switched to db config

mongos> db.shards.find();

{ "_id" : "set1", "host" : "set1/192.168.1.30:27018,192.168.1.52:27018" }

{ "_id" : "set2", "host" : "set2/192.168.1.108:27019,192.168.1.52:27019" }

{ "_id" : "set3", "host" : "set3/192.168.1.108:27020,192.168.1.30:27020" }

12. Declare the database and collection to be sharded

Switch to the admin database

mongos> use admin

Enable sharding on the test database

mongos> db.runCommand({enablesharding:"test"})

{ "ok" : 1 }

Declare that the lineqi collection is to be sharded, with a hashed key on id

mongos> db.runCommand({shardcollection:"test.lineqi",key:{id:"hashed"}})

{ "collectionsharded" : "test.lineqi", "ok" : 1 }

13. Test script

Switch to the test database:

mongos> use test

mongos> for (var i = 1; i <= 100000; i++) db.lineqi.save({id:i,name:"12345678",sex:"male",age:27,value:"test"});

WriteResult({ "nInserted" : 1 })
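Before digging into the config metadata, two quick checks from mongos show whether the writes really spread out (both are standard shell helpers):

mongos> db.lineqi.getShardDistribution()    // per-shard document and data counts
mongos> db.lineqi.find({id:10}).explain()   // which shard answers a query for a single id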

14. Test results

View the chunk metadata. (The test.users chunks below are left over from an earlier range-key test; the test.lineqi chunks are the hashed-key chunks created above — the NumberLong boundaries are hashed id values.)

mongos> use config

switched to db config

mongos> db.chunks.find();

{ "_id" : "test.users-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 1 }, "shard" : "set1" }

{ "_id" : "test.users-id_1.0", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : 1 }, "max" : { "id" : 4752 }, "shard" : "set2" }

{ "_id" : "test.users-id_4752.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : 4752 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "set3" }

{ "_id" : "test.lineqi-id_MinKey", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : NumberLong("-6148914691236517204") }, "shard" : "set2" }

{ "_id" : "test.lineqi-id_-3074457345618258602", "lastmod" : Timestamp(3, 4), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("-3074457345618258602") }, "max" : { "id" : NumberLong(0) }, "shard" : "set3" }

{ "_id" : "test.lineqi-id_3074457345618258602", "lastmod" : Timestamp(3, 6), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("3074457345618258602") }, "max" : { "id" : NumberLong("6148914691236517204") }, "shard" : "set1" }

{ "_id" : "test.lineqi-id_-6148914691236517204", "lastmod" : Timestamp(3, 3), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("-6148914691236517204") }, "max" : { "id" : NumberLong("-3074457345618258602") }, "shard" : "set2" }

{ "_id" : "test.lineqi-id_0", "lastmod" : Timestamp(3, 5), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong(0) }, "max" : { "id" : NumberLong("3074457345618258602") }, "shard" : "set3" }

{ "_id" : "test.lineqi-id_6148914691236517204", "lastmod" : Timestamp(3, 7), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("6148914691236517204") }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "set1" }

View the storage statistics for the lineqi collection. The per-shard counts (33102, 33755, and 33143) confirm that the hashed key spread the 100,000 documents almost evenly.

mongos> use test
mongos> db.lineqi.stats();
{
    "sharded" : true,
    "systemFlags" : 1,
    "userFlags" : 1,
    "ns" : "test.lineqi",
    "count" : 100000,
    "numExtents" : 18,
    "size" : 11200000,
    "storageSize" : 33546240,
    "totalIndexSize" : 8086064,
    "indexSizes" : {
        "_id_" : 3262224,
        "id_hashed" : 4823840
    },
    "avgObjSize" : 112,
    "nindexes" : 2,
    "nchunks" : 6,
    "shards" : {
        "set1" : {
            "ns" : "test.lineqi",
            "count" : 33102,
            "size" : 3707424,
            "avgObjSize" : 112,
            "storageSize" : 11182080,
            "numExtents" : 6,
            "nindexes" : 2,
            "lastExtentSize" : 8388608,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 1,
            "totalIndexSize" : 2649024,
            "indexSizes" : {
                "_id_" : 1079232,
                "id_hashed" : 1569792
            },
            "ok" : 1
        },
        "set2" : {
            "ns" : "test.lineqi",
            "count" : 33755,
            "size" : 3780560,
            "avgObjSize" : 112,
            "storageSize" : 11182080,
            "numExtents" : 6,
            "nindexes" : 2,
            "lastExtentSize" : 8388608,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 1,
            "totalIndexSize" : 2755312,
            "indexSizes" : {
                "_id_" : 1103760,
                "id_hashed" : 1651552
            },
            "ok" : 1
        },
        "set3" : {
            "ns" : "test.lineqi",
            "count" : 33143,
            "size" : 3712016,
            "avgObjSize" : 112,
            "storageSize" : 11182080,
            "numExtents" : 6,
            "nindexes" : 2,
            "lastExtentSize" : 8388608,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 1,
            "totalIndexSize" : 2681728,
            "indexSizes" : {
                "_id_" : 1079232,
                "id_hashed" : 1602496
            },
            "ok" : 1
        }
    },
    "ok" : 1
}

15. References

http://blog.sina.com.cn/s/blog_8d3bcbdb01015vne.html

http://jingyan.baidu.com/article/495ba841f1ee2b38b30ede01.html

http://server.ctocio.com.cn/150/13087650.shtml
