NameNode Recovery Tools for the Hadoop Distributed File System

Reposted from: http://blog.cloudera.com/blog/2012/05/namenode-recovery-tools-for-the-hadoop-distributed-file-system/

Warning: The procedure described below can cause data loss. Contact Cloudera Support before attempting it.

Most system administrators have had to deal with a bad hard disk at some point. One moment, the hard disk is a mechanical marvel; the next, it is an expensive paperweight.

The HDFS (Hadoop Distributed File System) community has been steadily working to diminish the impact of disk failures on overall system availability. In this article, I’m going to focus on how to minimize the impact of hard disk failures on the NameNode.

The NameNode’s function is to store metadata. In filesystem jargon, metadata is “data about data”– things like the owners of files, permission bits, and so forth. HDFS stores its metadata on the NameNode in two main places: the FSImage, and the edit log.
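To make this concrete, here is what a NameNode metadata directory typically contains on disk. This listing is an illustrative sketch, not output from the cluster in this article: the path is borrowed from the log messages shown later, and the exact file names vary between HDFS versions.

# List a NameNode metadata directory (path borrowed from the logs below;
# substitute your own dfs.name.dir / dfs.namenode.name.dir value).
ls /opt/hadoop/run4/name1/current
# Typical contents, illustrative only:
#   edits_0000000000000000001-0000000000000000005   <- edit log segment
#   fsimage_0000000000000000000                     <- namespace checkpoint
#   fsimage_0000000000000000000.md5
#   seen_txid
#   VERSION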

Edit Log Failover

It is a good practice to configure your NameNode to store multiple copies of its metadata. By storing two copies of the edit log and FSImage, on two separate hard disks, a good system administrator can avoid bringing down the NameNode if one of those disks fails.
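As a quick illustration (the property name, configuration path, and directories below are assumptions based on standard Hadoop configuration, not something taken from this article), you can check how many metadata directories your NameNode is configured to write to:

# Inspect the NameNode's metadata directory setting. The property is
# dfs.name.dir on older releases and dfs.namenode.name.dir on newer ones;
# the config path and directory names here are examples only.
grep -A1 'dfs.namenode.name.dir' /etc/hadoop/conf/hdfs-site.xml
# A comma-separated value like the following keeps a second copy of the
# FSImage and edit log on a separate disk:
#   <value>/data/1/dfs/nn,/data/2/dfs/nn</value>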

During the NameNode’s startup process, it reads both the FSImage and the edit log. But what if the first place it looks is unreadable, because of a hardware problem or disk corruption? Previously, the NameNode would abort the startup process if it encountered an error while reading an edit log. The administrator would have to remove the corrupt edit log and restart the NameNode. With edit log failover, the NameNode will mark that location as failed automatically, and continue trying the other locations.

More Robust End-of-File Validation

When it’s stored on-disk, the edit log file contains padding at the end. Because we have padding at the end of the file, we can’t simply keep reading the edit log until we get an end-of-file (EOF) condition. Instead, we have to rely on other clues to know where the file ends.

Formerly, the clue we relied on was finding an OP_INVALID opcode. As soon as we read an OP_INVALID opcode, we would immediately assume that there was nothing more to read. However, this is not the most robust way to determine where a file ends. Because an OP_INVALID opcode is a single byte, the likelihood that random corruption could produce an early EOF was unacceptably high.

How can we do better? Well, in most cases, we know what transaction ID an edit log ends on. So we can simply verify that the last edit log operation we read from the file matched this. In cases where we don’t know the end transaction ID, we can verify that the padding at the end of the file contains only padding bytes. This makes the edit log code even more robust.
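If you are curious what that padding looks like, you can inspect the raw bytes at the end of a finalized edits file. This is purely illustrative: the padding value and layout depend on the HDFS version, and the path is an example.

# Dump the last few lines of raw bytes from an edit log segment. The trailing
# bytes are padding rather than real transactions, which is why the reader
# cannot rely on a plain end-of-file check.
hexdump -C /opt/hadoop/run4/name1/current/edits_0000000000000000001-0000000000000000005 | tail -n 5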

HDFS FSCK

When your local ext3 or ext4 filesystem has become corrupted, the fsck command can usually repair it. Fsck is an offline process which examines on-disk structures and usually offers to fix them if they are damaged.

HDFS has its own fsck command, which you can access by running “hdfs fsck.” Similar to the ext3 fsck, HDFS fsck determines which files contain corrupt blocks, and gives you options about how to fix them.
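For example, the following invocations are commonly used (the /user/example path is a placeholder):

# Report overall filesystem health, starting from the root.
hdfs fsck /
# List only the files that currently have corrupt blocks.
hdfs fsck / -list-corruptfileblocks
# Show block-level detail, including which DataNodes hold each block.
hdfs fsck /user/example -files -blocks -locations
# To act on the damage, fsck can move corrupt files to /lost+found (-move)
# or delete them outright (-delete). Both discard data, so use with care.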

However, HDFS fsck only operates on data, not metadata. On a local filesystem, this distinction is irrelevant, because data and metadata are stored in the same place. However, for HDFS, metadata is stored on the NameNode, whereas data is stored on the DataNodes.

Manual NameNode Metadata Recovery

When properly configured, HDFS is much more robust against metadata corruption than a local filesystem, because it stores multiple copies of everything. However, to make HDFS a truly robust system, we added the capability for an administrator to recover a partial or corrupted edit log. This new functionality is called manual NameNode recovery.

Similar to fsck, NameNode recovery is an offline process. An administrator can run NameNode recovery to recover a corrupted edit log. This can be very helpful for getting corrupted filesystems back on their feet again.
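Before trying this on a real cluster, back up every configured metadata directory, as the recovery prompt below also reminds you. A minimal sketch, assuming the NameNode is stopped and using the example paths from this article:

# Copy each metadata directory somewhere safe before attempting recovery.
# Directory and destination paths are examples only.
cp -a /opt/hadoop/run4/name1 /root/namenode-backup-name1
cp -a /opt/hadoop/run4/name2 /root/namenode-backup-name2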

NameNode Recovery in Action

Let’s test out recovery mode. To activate recovery mode, you start the NameNode with the -recover flag, like so:

./bin/hadoop namenode -recover

At this point, the NameNode will ask you whether you want to continue.

You have selected Metadata Recovery mode. This mode is intended to recover
lost metadata on a corrupt filesystem. Metadata recovery mode often
permanently deletes data from your HDFS filesystem. Please back up your edit
log and fsimage before trying this!

Are you ready to proceed? (Y/N)
(Y or N)

Once you answer yes, the recovery process will read as much of the edit log as possible. When there is an error or an ambiguity, it will ask you how to proceed.

In this example, we encounter an error when trying to read transaction ID 3:

11:10:41,443 ERROR FSImage:147 - Error replaying edit log at offset 71. Expected transaction ID was 3
Recent opcode offsets: 17 71
org.apache.hadoop.fs.ChecksumException: Transaction is corrupt. Calculated checksum is -1642375052 but read checksum -6897
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.validateChecksum(FSEditLogOp.java:2356)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:2341)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:2247)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:110)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:74)
at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:140)
at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:74)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:138)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:683)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:639)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:247)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:498)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:390)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1033)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1103)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1164)
11:10:41,444 ERROR MetaRecoveryContext:96 - We failed to read txId 3
11:10:41,444 INFO MetaRecoveryContext:64 -
Enter 'c' to continue, skipping the bad section in the log
Enter 's' to stop reading the edit log here, abandoning any later edits
Enter 'q' to quit without saving
Enter 'a' to always select the first choice in the future without prompting. (c/s/q/a)

There are four options here: continue, stop, quit, and always.

Continue will try to skip over the bad section in the log. If the problem is just a stray byte or two, or a few bad sectors, this option will let you bypass it.

Stop stops reading the edit log and saves the current contents of the FSImage. In this case, all the edits that still haven’t been read will be permanently lost.

Quit exits the NameNode process without saving a new FSImage.

Always selects continue and suppresses this prompt from then on. Once you select always, recovery mode will never prompt you again; it will always select continue.
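Depending on your HDFS version, there may also be a -force flag that makes recovery mode behave as if you had chosen always from the start, picking the first choice at every prompt. Treat this as an assumption to verify against your release rather than a guarantee:

# On releases that support it, -force suppresses the prompts and always takes
# the first (default) choice. Combined with a badly corrupted log this can
# silently discard edits, so prefer the interactive mode when in doubt.
./bin/hadoop namenode -recover -force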

In this case, I’m going to select continue, because I think there may be more edits following the corrupt region that I want to salvage. The next prompt informs me that an edit is missing, which is to be expected, considering the previous one was corrupt.

12:22:38,829 INFO MetaRecoveryContext:105 - Continuing.
12:22:38,860 ERROR MetaRecoveryContext:96 - There appears to be a gap in the edit log. We expected txid 3, but got txid 4.
12:22:38,860 INFO MetaRecoveryContext:64 -
Enter 'c' to continue, ignoring missing transaction IDs
Enter 's' to stop reading the edit log here, abandoning any later edits
Enter 'q' to quit without saving
Enter 'a' to always select the first choice in the future without prompting. (c/s/q/a)

Again I enter ‘c’ to continue.

Finally, recovery completes.

12:22:42,205 INFO MetaRecoveryContext:105 - Continuing.
12:22:42,207 INFO FSEditLogLoader:199 - replaying edit log: 4/5 transactions completed. (80%)
12:22:42,208 INFO FSImage:95 - Edits file /opt/hadoop/run4/name1/current/edits_0000000000000000001-0000000000000000005 of size 1048580 edits # 4 loaded in 4 seconds.
12:22:42,212 INFO FSImage:504 - Saving image file /opt/hadoop/run4/name2/current/fsimage.ckpt_0000000000000000005 using no compression
12:22:42,213 INFO FSImage:504 - Saving image file /opt/hadoop/run4/name1/current/fsimage.ckpt_0000000000000000005 using no compression

Then, the NameNode exits. Now, I can restart the NameNode and resume normal operation. The corruption has been fixed, although we have lost a small amount of metadata.
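A hedged sketch of the restart and verification steps; the daemon script and its location vary between the plain Apache tarball and packaged distributions, so adjust to match your installation:

# Start the NameNode again (hadoop-daemon.sh ships with the Apache tarball,
# under bin/ or sbin/ depending on the version).
./bin/hadoop-daemon.sh start namenode
# Then check the namespace for damage that recovery could not repair, such as
# files whose blocks no longer exist on any DataNode.
hdfs fsck / -list-corruptfileblocks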

When Manual Recovery is the Best Choice

If there is another valid copy of the edit log somewhere else, it is preferable to use that copy rather than trying to recover the corrupted copy. This is a case where high availability can help a lot. If there is a standby NameNode ready to take over, there should be no need to recover the edit log on the primary. Manual recovery is a good choice only when there is no other copy of the edit log available.

Conclusion

The best recovery process is the one that you never need to do. High availability, combined with edit log failover, should mean that manual recovery is almost never necessary. However, it’s good to know that HDFS has tools to deal with whatever comes up.

Recovery mode will be available in CDH4. A more limited version of recovery mode without edit log failover will be available in CDH3.
