MongoDB Backup Mechanisms

一. Back up and restore by copying the MongoDB data files. (It is recommended to lock MongoDB against writes while copying, e.g. with db.fsyncLock(); otherwise the copied files may be inconsistent with the live database or end up corrupted.)
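A minimal, hedged sketch of this copy-based approach in Python. `shutil.copytree` stands in for the FTP transfer used below; the paths are examples, and the fsyncLock/fsyncUnlock calls mentioned in the docstring must be run separately in the mongo shell — this sketch only performs the copy:

```python
import shutil
from pathlib import Path

def cold_copy_backup(dbpath: str, backup_dir: str) -> int:
    """Copy everything under dbpath to backup_dir and return the file count.

    In a real deployment, run db.fsyncLock() (or stop mongod) before calling
    this, and db.fsyncUnlock() afterwards, so the copied files are consistent.
    """
    src, dst = Path(dbpath), Path(backup_dir)
    shutil.copytree(src, dst, dirs_exist_ok=True)  # preserves the directory layout
    # Count the copied files as a quick sanity check on the backup.
    return sum(1 for p in dst.rglob("*") if p.is_file())
```

The returned count gives a cheap way to confirm the copy picked up every data and journal file before the originals are touched.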

1. Copy the production MongoDB data files down via FTP.

2. Restore on Windows.

C:\Program Files\MongoDB 2.6 Standard\bin>mongod.exe -dbpath E:\mogo_data
2015-05-27T12:44:16.313+0800 Hotfix KB2731284 or later update is not installed, will zero-out data files
2015-05-27T12:44:16.316+0800 [initandlisten] MongoDB starting : pid=6256 port=27017 dbpath=E:\mogo_data 64-bit host=PC201505061049
2015-05-27T12:44:16.317+0800 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-05-27T12:44:16.317+0800 [initandlisten] db version v2.6.10
2015-05-27T12:44:16.317+0800 [initandlisten] git version: 5901dbfb49d16eaef6f2c2c50fba534d23ac7f6c
2015-05-27T12:44:16.317+0800 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2015-05-27T12:44:16.317+0800 [initandlisten] allocator: system
2015-05-27T12:44:16.317+0800 [initandlisten] options: { storage: { dbPath: "E:\mogo_data" } }
2015-05-27T12:44:16.394+0800 [initandlisten] journal dir=E:\mogo_data\journal
2015-05-27T12:44:16.395+0800 [initandlisten] recover begin
2015-05-27T12:44:16.396+0800 [initandlisten] recover lsn: 2659254
2015-05-27T12:44:16.396+0800 [initandlisten] recover E:\mogo_data\journal\j._0
2015-05-27T12:44:16.396+0800 [initandlisten] recover skipping application of section seq:0 < lsn:2659254
2015-05-27T12:44:16.397+0800 [initandlisten] recover skipping application of section seq:59134 < lsn:2659254
2015-05-27T12:44:16.397+0800 [initandlisten] recover skipping application of section seq:118224 < lsn:2659254
2015-05-27T12:44:16.398+0800 [initandlisten] recover skipping application of section seq:177314 < lsn:2659254
2015-05-27T12:44:16.398+0800 [initandlisten] recover skipping application of section seq:236414 < lsn:2659254
2015-05-27T12:44:16.399+0800 [initandlisten] recover skipping application of section seq:295514 < lsn:2659254
2015-05-27T12:44:16.399+0800 [initandlisten] recover skipping application of section seq:354604 < lsn:2659254
2015-05-27T12:44:16.400+0800 [initandlisten] recover skipping application of section seq:413704 < lsn:2659254
2015-05-27T12:44:16.400+0800 [initandlisten] recover skipping application of section seq:472784 < lsn:2659254
2015-05-27T12:44:16.400+0800 [initandlisten] recover skipping application of section more...
2015-05-27T12:44:16.478+0800 [initandlisten] recover cleaning up
2015-05-27T12:44:16.478+0800 [initandlisten] removeJournalFiles
2015-05-27T12:44:16.479+0800 [initandlisten] recover done
2015-05-27T12:44:16.512+0800 [initandlisten] waiting for connections on port 27017
2015-05-27T12:44:53.407+0800 [initandlisten] connection accepted from 127.0.0.1:7344 #1 (1 connection now open)
2015-05-27T12:45:16.509+0800 [clientcursormon] mem (MB) res:70 virt:883
2015-05-27T12:45:16.509+0800 [clientcursormon]  mapped (incl journal view):736
2015-05-27T12:45:16.509+0800 [clientcursormon]  connections:1
(... the same clientcursormon report, with virt:880, repeats every 5 minutes through 13:50 ...)
2015-05-27T13:51:38.859+0800 [initandlisten] connection accepted from 127.0.0.1:8203 #2 (2 connections now open)
2015-05-27T13:51:38.869+0800 [conn2] end connection 127.0.0.1:8203 (1 connection now open)
2015-05-27T13:52:06.348+0800 [initandlisten] connection accepted from 127.0.0.1:8216 #3 (2 connections now open)
2015-05-27T13:52:06.368+0800 [conn3] end connection 127.0.0.1:8216 (1 connection now open)
2015-05-27T13:55:16.849+0800 [clientcursormon] mem (MB) res:70 virt:880
2015-05-27T13:55:16.849+0800 [clientcursormon]  mapped (incl journal view):736
2015-05-27T13:55:16.849+0800 [clientcursormon]  connections:1
2015-05-27T13:57:03.290+0800 [initandlisten] connection accepted from 127.0.0.1:8302 #4 (2 connections now open)
2015-05-27T13:57:03.299+0800 [conn4] end connection 127.0.0.1:8302 (1 connection now open)
2015-05-27T13:57:21.789+0800 [initandlisten] connection accepted from 127.0.0.1:8304 #5 (2 connections now open)
2015-05-27T13:57:21.804+0800 [conn5] end connection 127.0.0.1:8304 (1 connection now open)
2015-05-27T13:57:40.792+0800 [initandlisten] connection accepted from 127.0.0.1:8308 #6 (2 connections now open)
2015-05-27T13:57:40.809+0800 [conn6] end connection 127.0.0.1:8308 (1 connection now open)
2015-05-27T13:57:54.838+0800 [initandlisten] connection accepted from 127.0.0.1:8309 #7 (2 connections now open)
2015-05-27T13:57:54.979+0800 [conn7] end connection 127.0.0.1:8309 (1 connection now open)
2015-05-27T13:58:17.267+0800 [initandlisten] connection accepted from 127.0.0.1:8311 #8 (2 connections now open)
2015-05-27T13:58:17.268+0800 [conn8] CMD: drop test.blog
2015-05-27T13:58:17.271+0800 [conn8] build index on: test.blog properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.blog" }
2015-05-27T13:58:17.271+0800 [conn8]     added index to empty collection
2015-05-27T13:58:17.277+0800 [conn8] CMD: drop test.fs.chunks
2015-05-27T13:58:17.279+0800 [conn8] build index on: test.fs.chunks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.fs.chunks" }
2015-05-27T13:58:17.280+0800 [conn8]     added index to empty collection
2015-05-27T13:58:17.441+0800 [conn8] insert test.fs.chunks ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:222 120ms
2015-05-27T13:58:17.473+0800 [conn8] build index on: test.fs.chunks properties: { v: 1, unique: true, key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "test.fs.chunks" }
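The startup log above shows mongod replaying the journal from the copied dbpath ("recover begin ... recover done"). A hedged helper for inspecting a copied dbpath before starting mongod might look like this (a heuristic sketch for MMAPv1-era layouts like the one above, not an official check):

```python
from pathlib import Path

def inspect_dbpath(dbpath: str) -> dict:
    """Report what a copied MMAPv1 dbpath contains before starting mongod.

    A 'journal' directory means mongod will replay it on startup, as in the
    log above; a leftover mongod.lock from an unclean copy may need attention.
    """
    p = Path(dbpath)
    return {
        "has_journal": (p / "journal").is_dir(),
        "has_lock_file": (p / "mongod.lock").exists(),
        # Numbered extents like test.0, test.1 (excludes test.ns, mongod.lock)
        "data_files": sorted(f.name for f in p.glob("*.*")
                             if f.suffix.lstrip(".").isdigit()),
    }
```

Running this over the copied directory gives a quick preview of whether journal recovery will kick in on the next start.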

二. Back up with the mongodump/mongorestore commands (a hot backup; the service remains available during the dump).

1. Dump the data.

[[email protected] ~]# mongodump -h 192.168.60.237 -o /root/test
2015-05-27T13:34:45.737+0800	writing test.fs.chunks to /root/test/test/fs.chunks.bson
2015-05-27T13:34:45.737+0800	writing test.fs.files to /root/test/test/fs.files.bson
2015-05-27T13:34:45.737+0800	writing admin.system.indexes to /root/test/admin/system.indexes.bson
2015-05-27T13:34:45.737+0800	writing test.system.indexes to /root/test/test/system.indexes.bson
2015-05-27T13:34:45.738+0800	writing admin.system.users to /root/test/admin/system.users.bson
2015-05-27T13:34:45.738+0800	writing admin.system.version to /root/test/admin/system.version.bson
2015-05-27T13:34:45.739+0800	writing admin.system.users metadata to /root/test/admin/system.users.metadata.json
2015-05-27T13:34:45.739+0800	writing test.fs.files metadata to /root/test/test/fs.files.metadata.json
2015-05-27T13:34:45.741+0800	writing admin.system.version metadata to /root/test/admin/system.version.metadata.json
2015-05-27T13:34:45.743+0800	done dumping test.fs.files
2015-05-27T13:34:45.743+0800	done dumping admin.system.users
2015-05-27T13:34:45.743+0800	writing test.blog to /root/test/test/blog.bson
2015-05-27T13:34:45.744+0800	done dumping admin.system.version
2015-05-27T13:34:45.744+0800	writing test.blog metadata to /root/test/test/blog.metadata.json
2015-05-27T13:34:45.746+0800	done dumping test.blog
2015-05-27T13:34:45.761+0800	writing test.fs.chunks metadata to /root/test/test/fs.chunks.metadata.json
2015-05-27T13:34:45.762+0800	done dumping test.fs.chunks
[[email protected] ~]#
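Each .bson file written by mongodump is a plain concatenation of BSON documents, each starting with a little-endian int32 giving the document's total byte length. A hedged sketch for counting the documents in a dump file offline, with no server needed (the function name is mine, not part of any MongoDB tool):

```python
import struct

def count_bson_docs(path: str) -> int:
    """Count BSON documents in a mongodump .bson file by walking the
    int32 length prefixes; raises ValueError on a truncated file."""
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if not header:
                return count              # clean end of file
            if len(header) < 4:
                raise ValueError("truncated length prefix")
            (doc_len,) = struct.unpack("<i", header)
            if doc_len < 5:               # smallest BSON doc ({}) is 5 bytes
                raise ValueError("invalid document length")
            body = f.read(doc_len - 4)    # length includes the prefix itself
            if len(body) != doc_len - 4:
                raise ValueError("truncated document")
            count += 1
```

The counts it reports can be compared against the "objects found" numbers that mongorestore prints during the import below, as a cheap integrity check on a dump before restoring it.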

2. Restore the data.

C:\Program Files\MongoDB 2.6 Standard\bin>mongorestore.exe -d test --directoryperdb E:\mogo_data2\test --drop
2015-05-27T13:58:17.265+0800 Hotfix KB2731284 or later update is not installed, will zero-out data files
connected to: 127.0.0.1
2015-05-27T13:58:17.268+0800 E:\mogo_data2\test\blog.bson
2015-05-27T13:58:17.268+0800    going into namespace [test.blog]
2015-05-27T13:58:17.268+0800     dropping
2015-05-27T13:58:17.271+0800    Created collection test.blog with options: { "create" : "blog" }
1 objects found
2015-05-27T13:58:17.272+0800    Creating index: { key: { _id: 1 }, name: "_id_", ns: "test.blog" }
2015-05-27T13:58:17.275+0800 E:\mogo_data2\test\fs.chunks.bson
2015-05-27T13:58:17.275+0800    going into namespace [test.fs.chunks]
2015-05-27T13:58:17.276+0800     dropping
2015-05-27T13:58:17.280+0800    Created collection test.fs.chunks with options: { "create" : "fs.chunks" }
271 objects found
2015-05-27T13:58:17.471+0800    Creating index: { key: { _id: 1 }, name: "_id_", ns: "test.fs.chunks" }
2015-05-27T13:58:17.472+0800    Creating index: { unique: true, key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "test.fs.chunks" }
2015-05-27T13:58:17.474+0800 E:\mogo_data2\test\fs.files.bson
2015-05-27T13:58:17.475+0800    going into namespace [test.fs.files]
2015-05-27T13:58:17.475+0800     dropping
2015-05-27T13:58:17.480+0800    Created collection test.fs.files with options: { "create" : "fs.files" }
266 objects found
2015-05-27T13:58:17.482+0800    Creating index: { key: { _id: 1 }, name: "_id_", ns: "test.fs.files" }
2015-05-27T13:58:17.484+0800    Creating index: { key: { filename: 1, uploadDate: 1 }, name: "filename_1_uploadDate_1", ns: "test.fs.files" }
2015-05-27T13:58:17.487+0800    Creating index: { key: { filename: 1 }, name: "_filename", ns: "test.fs.files" }

C:\Program Files\MongoDB 2.6 Standard\bin>
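When dump and restore are run from a scheduled script, it helps to build the argv lists in one place. A minimal sketch mirroring the two invocations above (flag names as used by the 2.6-era tools shown here; host and paths are example values):

```python
def mongodump_cmd(host: str, out_dir: str) -> list:
    # mongodump -h <host> -o <out_dir>, as in the dump above
    return ["mongodump", "-h", host, "-o", out_dir]

def mongorestore_cmd(db: str, dump_dir: str, drop: bool = True) -> list:
    # mongorestore -d <db> <dump_dir> --drop, as in the restore above
    cmd = ["mongorestore", "-d", db, dump_dir]
    if drop:
        cmd.append("--drop")  # drop each collection before restoring it
    return cmd
```

Either list can be handed to subprocess.run() as-is; keeping the flags in one helper avoids drift between the backup and restore sides of a cron job.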

3. Back up via master/slave replication; see this blog's earlier post on configuring MongoDB master/slave replication under Red Hat.

Date: 2024-11-11 20:39:08
