(I) Backing up the namenode metadata
The metadata held by the namenode is critical: if it is lost or corrupted, the entire filesystem becomes unusable. Back the metadata up regularly, preferably to a remote site.
1. Copy the metadata to a remote site
(1) The following script copies the secondary namenode's metadata into a directory named after the current hour, then sends it to another machine with scp:
#!/bin/bash
# Copy the latest secondary-namenode checkpoint into an hourly timestamped
# directory, ship it to slave1, then remove the local staging copy.
export dirname=/mnt/tmphadoop/dfs/namesecondary/current/`date +%y%m%d%H`
if [ ! -d ${dirname} ]
then
    mkdir ${dirname}
    # plain cp copies only the checkpoint files (fsimage, edits, VERSION)
    # and skips subdirectories, including ${dirname} itself
    cp /mnt/tmphadoop/dfs/namesecondary/current/* ${dirname}
fi
scp -r ${dirname} slave1:/mnt/namenode_backup/
rm -r ${dirname}
(2) Configure crontab to run this job on a schedule (the entry below runs it daily at 00:00, 08:00, 14:00 and 20:00):
0 0,8,14,20 * * * bash /mnt/scripts/namenode_backup_script.sh
2. On the remote site, start a local namenode daemon and try to load the backed-up files, to confirm that the backup is actually valid.
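One way to do this, sketched below under explicit assumptions (the backup host runs the same Hadoop version; a throwaway config whose fs.checkpoint.dir points at /tmp/nn_verify/check and whose dfs.name.dir points at an empty /tmp/nn_verify/name; and 2015030120 standing in for one of the timestamped backup directories), is Hadoop 1.x's -importCheckpoint start-up option, which makes a namenode load a secondary-namenode checkpoint; if the daemon starts cleanly, the backup is loadable:

#!/bin/bash
# Hypothetical verification run on the backup host; all paths and the
# 2015030120 directory name are assumptions, not values from the cluster above.
mkdir -p /tmp/nn_verify/check/current /tmp/nn_verify/name
cp /mnt/namenode_backup/2015030120/* /tmp/nn_verify/check/current/
# With dfs.name.dir empty, -importCheckpoint reads the image from
# fs.checkpoint.dir and refuses to start if it cannot be loaded.
hadoop namenode -importCheckpoint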
(II) Backing up data
Important data must not depend on HDFS alone; keep separate backups. Note the following points:
(1) Back up to a remote site whenever possible.
(2) If you use distcp to back up to another HDFS cluster, do not run the same Hadoop version on both clusters, so that a bug in Hadoop itself cannot corrupt both copies.
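As a sketch (the host names are made up; 50070 and 8020 are the default HTTP and RPC ports): copies between clusters on different versions are usually driven from the destination cluster against the source cluster's read-only hftp:// interface, which is version-independent:

hadoop distcp hftp://source-nn:50070/data/important hdfs://backup-nn:8020/backup/important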
(III) Filesystem check
Run HDFS's fsck tool over the entire filesystem regularly to look proactively for missing or corrupt blocks.
Running it once a day is recommended.
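A minimal cron sketch for this (the hadoop binary path and log directory are assumptions; note that % must be escaped in crontab entries):

0 1 * * * /usr/bin/hadoop fsck / > /mnt/logs/fsck_`date +\%Y\%m\%d`.log 2>&1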
[jediael@master ~]$ hadoop fsck /
……output omitted (errors, if any, are reported here; otherwise only dots are printed, one dot per file)……
.........Status: HEALTHY
 Total size:    14466494870 B
 Total dirs:    502
 Total files:   1592 (Files currently being written: 2)
 Total blocks (validated):      1725 (avg. block size 8386373 B)
 Minimally replicated blocks:   1725 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       648 (37.565216 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              760 (22.028986 %)
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Sun Mar 01 20:17:57 CST 2015 in 608 milliseconds

The filesystem under path '/' is HEALTHY
(1) If dfs.replication in hdfs-site.xml is set to 3 while only 2 datanodes actually exist, errors like the following appear when fsck runs:
/hbase/Mar0109_webpage/59ad1be6884739c29d0624d1d31a56d9/il/43e6cd4dc61b49e2a57adf0c63921c09: Under replicated blk_-4711857142889323098_6221. Target Replicas is 3 but found 2 replica(s).
Note: dfs.replication was originally 3; one datanode was later decommissioned and dfs.replication was lowered to 2, but files created before the change still record a replication factor of 3, which produces the error above and accounts for Under-replicated blocks: 648 (37.565216 %).
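If a replication factor of 2 is really what you want for those pre-existing files, one way to clear the warnings is to rewrite the recorded factor with setrep (-R recurses, -w waits until each file actually reaches the target):

hadoop fs -setrep -R -w 2 /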
(2) The fsck tool can also be used to see which blocks make up a file and where each of those blocks lives:
[jediael@master conf]$ hadoop fsck /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 -files -blocks -racks
FSCK started by jediael from /10.171.29.191 for path /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 at Sun Mar 01 20:39:35 CST 2015
/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 21507169 bytes, 1 block(s):  Under replicated blk_7117944555454804881_3655. Target Replicas is 3 but found 2 replica(s).
0. blk_7117944555454804881_3655 len=21507169 repl=2 [/default-rack/10.171.94.155:50010, /default-rack/10.251.0.197:50010]

Status: HEALTHY
 Total size:    21507169 B
 Total dirs:    0
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21507169 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:      1 (100.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              1 (50.0 %)
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Sun Mar 01 20:39:35 CST 2015 in 0 milliseconds

The filesystem under path '/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7' is HEALTHY
The usage of this command is as follows:
[jediael@master ~]$ hadoop fsck -files
Usage: DFSck <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]
        <path>          start checking from this path
        -move           move corrupted files to /lost+found
        -delete         delete corrupted files
        -files          print out files being checked
        -openforwrite   print out files opened for write
        -blocks         print out block report
        -locations      print out locations for every block
        -racks          print out network topology for data-node locations

By default fsck ignores files opened for write, use -openforwrite to report such files. They are usually tagged CORRUPT or HEALTHY depending on their block allocation status

Generic options supported are
-conf <configuration file>                     specify an application configuration file
-D <property=value>                            use value for given property
-fs <local|namenode:port>                      specify a namenode
-jt <local|jobtracker:port>                    specify a job tracker
-files <comma separated list of files>         specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>        specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>   specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
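If a check ever does report corrupt files, the -move option listed above relocates them to /lost+found so they can be examined later, while -delete removes them outright; for example:

hadoop fsck / -move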
For a detailed explanation, see Hadoop: The Definitive Guide, p. 376.
(IV) The balancer
Over time, the distribution of blocks across the datanodes grows more and more uneven, which hurts MapReduce data locality and leaves some datanodes comparatively overloaded.
The balancer is a Hadoop daemon that moves blocks from busy datanodes to relatively idle ones, while still honoring the replica placement policy of spreading replicas across machines and racks.
Run the balancer regularly, e.g. daily or weekly.
(1) Start the balancer with the following command:
[jediael@master log]$ start-balancer.sh
starting balancer, logging to /var/log/hadoop/hadoop-jediael-balancer-master.out
The log looks like this:
[jediael@master hadoop]$ pwd
/var/log/hadoop
[jediael@master hadoop]$ ls
hadoop-jediael-balancer-master.log  hadoop-jediael-balancer-master.out
[jediael@master hadoop]$ cat hadoop-jediael-balancer-master.log
2015-03-01 21:08:08,027 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.251.0.197:50010
2015-03-01 21:08:08,028 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.171.94.155:50010
2015-03-01 21:08:08,028 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 over utilized nodes:
2015-03-01 21:08:08,028 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 under utilized nodes:
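The balancer keeps running until the cluster is balanced or no more blocks can be moved; it can also be stopped by hand:

stop-balancer.sh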
(2) The balancer brings every datanode's utilization close to the cluster-wide average; "close" is defined by the -threshold parameter, which defaults to 10%.
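For example, to require every datanode to finish within 5% of the cluster-wide average utilization (tighter, and therefore longer-running, than the default 10%):

start-balancer.sh -threshold 5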
(3) The bandwidth available for copying data between nodes is limited, 1 MB/s by default; it can be set through the dfs.balance.bandwidthPerSec property in hdfs-site.xml (the value is in bytes per second).
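A sketch of the corresponding hdfs-site.xml entry, raising the cap to 10 MB/s (10485760 bytes per second; the value shown is an example, not a recommendation):

<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>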