Hadoop Bits and Pieces (1)

1.  HBase log-roller bug (fix versions 0.98.0 / 0.95.2 / 0.94.9)

A region server aborted with the following fatal error while rolling its write-ahead log:

2013-06-11 21:51:22,199 ERROR [IPC Server handler 18 on 60000] master.HMaster: Region server ip-10-138-2-28.ec2.internal,60020,1370887927492 reported a fatal error:
ABORTING region server ip-10-138-2-28.ec2.internal,60020,1370887927492: IOE in log roller
Cause:
java.io.FileNotFoundException: File/Directory /hbase/.oldlogs/ip-10-138-2-28.ec2.internal%2C60020%2C1370887927492.1370986265468 does not exist.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:1488)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1453)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:798)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:704)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43194)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2097)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:813)
	at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1596)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.archiveLogFile(FSHLog.java:705)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.cleanOldLogs(FSHLog.java:595)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:536)
	at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:96)
	at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /hbase/.oldlogs/ip-10-138-2-28.ec2.internal%2C60020%2C1370887927492.1370986265468 does not exist.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:1488)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1453)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:798)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:704)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43194)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)

	at org.apache.hadoop.ipc.Client.call(Client.java:1164)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at com.sun.proxy.$Proxy11.setTimes(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at com.sun.proxy.$Proxy11.setTimes(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:685)
	at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
	at com.sun.proxy.$Proxy12.setTimes(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2095)
	... 7 more

2013-06-11 21:51:32,976 INFO  [main-EventThread] zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [ip-10-138-2-28.ec2.internal,60020,1370887927492]

Reference: https://issues.apache.org/jira/browse/HBASE-8749

The root cause is a race between FSUtils.renameAndSetModifyTime() and the master's TimeToLive log cleaner. The log roller first renames the WAL into /hbase/.oldlogs and only afterwards calls setTimes() on the destination to refresh its modification time. Until setTimes() runs, the archived file still carries its old modification time, so the cleaner can judge it expired and delete it first; the subsequent setTimes() then fails with the FileNotFoundException above, and the region server treats the IOException in the log roller as fatal and aborts.
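For context, the cleaner decides deletability purely from a file's modification time. A minimal sketch of that check (a hypothetical simplification for illustration, not the actual TimeToLiveLogCleaner source):

import org.apache.hadoop.fs.FileStatus;

public final class TtlCheckSketch {
  // Hypothetical simplification: an archived WAL under /hbase/.oldlogs
  // becomes deletable once its modification time is older than the
  // configured TTL.
  static boolean isLogDeletable(FileStatus fStat, long ttlMillis) {
    long age = System.currentTimeMillis() - fStat.getModificationTime();
    return age > ttlMillis;
  }
}

A WAL renamed into .oldlogs before its timestamp is refreshed looks arbitrarily old to this check. The patches below (from the JIRA, for the 0.94 branch and for trunk) reorder the two operations so the time is set on the source before the rename: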

Index: src/main/java/org/apache/hadoop/hbase/HBaseFileSystem.java
===================================================================
--- src/main/java/org/apache/hadoop/hbase/HBaseFileSystem.java	(revision 1493637)
+++ src/main/java/org/apache/hadoop/hbase/HBaseFileSystem.java	(working copy)
@@ -259,9 +259,9 @@
    */
   public static boolean renameAndSetModifyTime(final FileSystem fs, Path src, Path dest)
       throws IOException {
+    // set the modify time for TimeToLive Cleaner
+    fs.setTimes(src, EnvironmentEdgeManager.currentTimeMillis(), -1);
     if (!renameDirForFileSystem(fs, src, dest)) return false;
-    // set the modify time for TimeToLive Cleaner
-    fs.setTimes(dest, EnvironmentEdgeManager.currentTimeMillis(), -1);
     return true;
   }
 }
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
index 01d6cfe..d0e5443 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
@@ -1645,11 +1645,11 @@ public abstract class FSUtils {
     }
   }

-  public static boolean renameAndSetModifyTime(final FileSystem fs, Path src, Path dest)
+  public static boolean renameAndSetModifyTime(final FileSystem fs, final Path src, final Path dest)
       throws IOException {
-    if (!fs.rename(src, dest)) return false;
     // set the modify time for TimeToLive Cleaner
-    fs.setTimes(dest, EnvironmentEdgeManager.currentTimeMillis(), -1);
+    fs.setTimes(src, EnvironmentEdgeManager.currentTimeMillis(), -1);
+    if (!fs.rename(src, dest)) return false;
     return true;
   }
 }
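Assembled from the hunks above, the fixed helper reads roughly as follows (a sketch, not a verbatim copy of the committed code). Setting the time on src works because an HDFS rename preserves a file's own modification time, so the log enters .oldlogs already carrying a fresh timestamp:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

public abstract class FSUtilsSketch {
  public static boolean renameAndSetModifyTime(final FileSystem fs,
      final Path src, final Path dest) throws IOException {
    // Refresh the modify time for the TimeToLive cleaner *before* the
    // rename: src is not yet visible under /hbase/.oldlogs, so the
    // cleaner cannot race with this call, and the rename carries the
    // fresh timestamp over to dest.
    fs.setTimes(src, EnvironmentEdgeManager.currentTimeMillis(), -1);
    return fs.rename(src, dest);
  }
}

With this ordering, the cleaner only ever sees archived logs whose TTL clock has just been reset, so it can no longer delete them out from under the roller.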