Troubleshooting approach
- For ordinary errors, read the error output and google the key phrases.
- For unexplained failures (e.g., the namenode or datanode dying for no apparent reason), check the Hadoop logs ($HADOOP_HOME/logs) or the Hive logs.
Hadoop errors
1. DataNode fails to start
After adding a datanode, it would not stay up: the process kept dying shortly after starting. The namenode log showed:
Text:
2013-06-21 18:53:39,182 FATAL org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node x.x.x.x:50010 is attempting to report storage ID DS-1357535176-x.x.x.x-50010-1371808472808. Node y.y.y.y:50010 is expected to serve this storage.
Cause:
The Hadoop installation directory was copied to the new node with its data and tmp folders included (see my earlier Hadoop installation post), so the new datanode came up carrying another node's storage ID instead of initializing its own.
Solution:
Shell:
rm -rf /data/hadoop/hadoop-1.1.2/data
rm -rf /data/hadoop/hadoop-1.1.2/tmp
# restart the datanode; it initializes fresh storage on first start
hadoop-daemon.sh start datanode
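To confirm the fix took, a quick check (the log file name pattern follows the Hadoop 1.x default):
Shell:
# the DataNode process should now stay up
jps | grep DataNode
# and its log should be free of the storage ID error
tail -f $HADOOP_HOME/logs/hadoop-*-datanode-*.log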
2. Safe mode
Text:
2013-06-20 10:35:43,758 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot renew lease for DFSClient_hb_rs_wdev1.corp.qihoo.net,60020,1371631589073. Name node is in safe mode.
Solution:
Shell:
hadoop dfsadmin -safemode leave
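The namenode leaves safe mode by itself once enough block replicas have been reported; forcing it out is only appropriate when it stays stuck. It is worth checking the state first:
Shell:
# query the current safe mode status
hadoop dfsadmin -safemode get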
3. Connection failure
Text:
2013-06-21 19:55:05,801 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to homename/x.x.x.x:9000 failed on local exception: java.io.EOFException
Possible causes:
- The namenode is listening on 127.0.0.1:9000 rather than 0.0.0.0:9000 or its LAN IP on port 9000.
- iptables is blocking the port.
Solutions (see the sketch after this list):
- Check /etc/hosts and make sure the hostname resolves to an address other than 127.0.0.1.
- Open the port in iptables.
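A sketch of both checks; the IP address, hostname, and iptables rule below are placeholders, not values from this cluster:
Shell:
# see which address is actually listening on port 9000 (run on the namenode)
netstat -tlnp | grep 9000
# /etc/hosts should map the hostname to the LAN IP, not 127.0.0.1, e.g.:
#   192.168.1.10  homename
# open the port if iptables is filtering it
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT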
4. Incompatible namespaceIDs
Text:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode namespaceID = 240012870; datanode namespaceID = 1462711424
Problem: the namespaceID on the namenode does not match the one on the datanode.
Cause: every namenode format generates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format. Formatting wipes the namenode's metadata but leaves the datanodes' data directories untouched, so the two namespaceIDs diverge and the datanode fails to start.
Solution: the page at http://blog.csdn.net/wh62592855/archive/2010/07/21/5752199.aspx gives two fixes; we used the first one:
(1) Stop the cluster.
(2) On the affected datanode, delete the data directory, i.e. the dfs.data.dir configured in hdfs-site.xml; on this machine it was /var/lib/hadoop-0.20/cache/hdfs/dfs/data/.
(Note: we ran this step on every datanode and on the namenode. In case the fix fails, keep a copy of the data directory before deleting it.)
(3) Format the namenode.
(4) Restart the cluster.
That solved the problem.
The side effect of this method is that all data on HDFS is lost. If HDFS holds important data, this method is not recommended; try the second method from the page above, sketched below.
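A sketch of that second, non-destructive method, assuming the standard Hadoop 0.20 storage layout: instead of deleting data, make the datanode's namespaceID match the namenode's, then restart the datanode.
Shell:
# on each affected datanode; the path is this cluster's dfs.data.dir
vi /var/lib/hadoop-0.20/cache/hdfs/dfs/data/current/VERSION
#   change the line: namespaceID=240012870   (the namenode's ID from the error)
hadoop-daemon.sh start datanode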
5. Directory permissions
start-dfs.sh runs without errors and reports the datanodes starting, but afterwards no DataNode process exists. The log on the datanode machine shows the startup failed because the permissions on dfs.data.dir are wrong:
Text:
expected: drwxr-xr-x,current:drwxrwxr-x
Solution:
Check the directory configured as dfs.data.dir and fix its permissions, as sketched below.
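The log expects 755 (drwxr-xr-x); a minimal sketch, assuming dfs.data.dir points at the data directory used earlier in this post:
Shell:
# drop group write permission so the directory becomes drwxr-xr-x
chmod 755 /data/hadoop/hadoop-1.1.2/data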
Hive errors
1. NoClassDefFoundError
Could not initialize class java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.io.HbaseObjectWritable
Add protobuf-***.jar to the auxiliary jars path:
XML:
<!-- $HIVE_HOME/conf/hive-site.xml -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///data/hadoop/hive-0.10.0/lib/hive-hbase-handler-0.10.0.jar,file:///data/hadoop/hive-0.10.0/lib/hbase-0.94.8.jar,file:///data/hadoop/hive-0.10.0/lib/zookeeper-3.4.5.jar,file:///data/hadoop/hive-0.10.0/lib/guava-r09.jar,file:///data/hadoop/hive-0.10.0/lib/hive-contrib-0.10.0.jar,file:///data/hadoop/hive-0.10.0/lib/protobuf-java-2.4.0a.jar</value>
</property>
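The same jars can also be supplied per session with the CLI's --auxpath option instead of editing hive-site.xml; a sketch with a shortened jar list:
Shell:
# pass the auxiliary jars at launch time
hive --auxpath /data/hadoop/hive-0.10.0/lib/hive-hbase-handler-0.10.0.jar,/data/hadoop/hive-0.10.0/lib/protobuf-java-2.4.0a.jar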
2. Hive dynamic partition error
[Fatal Error] Operator FS_2 (id=2): Number of dynamic partitions exceeded hive.exec.max.dynamic.partitions.pernode
Shell:
hive> set hive.exec.max.dynamic.partitions.pernode=10000;
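If the statement as a whole, not just one node, creates too many partitions, the global cap may also need raising; hive.exec.max.dynamic.partitions is the corresponding setting (the value here is illustrative):
Shell:
hive> set hive.exec.max.dynamic.partitions=100000;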
3. MapReduce task exceeds its memory limit (hadoop Java heap space)
Add the following to mapred-site.xml to raise the per-task JVM heap:
XML:
<!-- mapred-site.xml -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
HADOOP_HEAPSIZE (in MB) sets the heap of the Hadoop daemons themselves:
Shell:
# $HADOOP_HOME/conf/hadoop-env.sh
export HADOOP_HEAPSIZE=5000
4. Hive created-files limit
[Fatal Error] total number of created files now is 100086, which exceeds 100000
Shell:
hive> set hive.exec.max.created.files=655350;
5. Metastore connection timeout
Text:
FAILED: SemanticException org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
Solution (in this Hive version the timeout value is in seconds):
Shell:
hive> set hive.metastore.client.socket.timeout=500;
6. java.io.IOException: error=7, Argument list too long
Text:
Task with the most failures(5):
-----
Task ID:
task_201306241630_0189_r_000009
URL:
http://namenode.godlovesdog.com:50030/taskdetails.jsp?jobid=job_201306241630_0189&tipid=task_201306241630_0189_r_000009
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"164058872","reducesinkkey1":"djh,S1","reducesinkkey2":"20130117170703","reducesinkkey3":"xxx"},"value":{"_col0":"1","_col1":"xxx","_col2":"20130117170703","_col3":"164058872","_col4":"xxx,S1"},"alias":0}
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:520)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"164058872","reducesinkkey1":"xxx,S1","reducesinkkey2":"20130117170703","reducesinkkey3":"xxx"},"value":{"_col0":"1","_col1":"xxx","_col2":"20130117170703","_col3":"164058872","_col4":"djh,S1"},"alias":0}
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
    ... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20000]: Unable to initialize custom script.
    at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:354)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
    at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
    ... 7 more
Caused by: java.io.IOException: Cannot run program "/usr/bin/python2.7": error=7, Argument list too long
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
    at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:313)
    ... 15 more
Caused by: java.io.IOException: error=7, Argument list too long
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
    at java.lang.ProcessImpl.start(ProcessImpl.java:130)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1023)
    ... 16 more
FAILED: Execution Error, return code 20000 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script.
Solution:
Upgrade the kernel or reduce the number of dynamic partitions; see https://issues.apache.org/jira/browse/HIVE-2372 (background sketched below).
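Background, per the JIRA above: the custom script is launched with environment variables derived from the job configuration, and with many dynamic partitions that environment can exceed the kernel's combined argv-plus-environment limit, hence error=7. To inspect the limit on a node:
Shell:
# per-exec limit on combined argument and environment size, in bytes
getconf ARG_MAX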
7. Runtime error
Shell:
hive> show tables;
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Diagnosis:
Shell:
hive -hiveconf hive.root.logger=DEBUG,console
Text:
13/07/15 16:29:24 INFO hive.metastore: Trying to connect to metastore with URI thrift://xxx.xxx.xxx.xxx:9083
13/07/15 16:29:24 WARN hive.metastore: Failed to connect to the MetaStore Server...
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
...
MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
Port 9083 was the first suspect: netstat confirmed nothing was listening on it, so the initial guess was that the hive server had not started. But the hive server process was in fact running, just listening on port 10000. Checking hive-site.xml revealed the root cause: the Hive client was configured to reach the metastore on port 9083, while the server was listening on its default port 10000.
Solution:
Shell:
hive --service hiveserver -p 9083
# or edit the hive.metastore.uris entry in $HIVE_HOME/conf/hive-site.xml
# and change the port to 10000
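For reference, hive.metastore.uris is normally served by a standalone metastore service rather than by HiveServer, and that service defaults to port 9083, which matches what the client expects here. (In these old Hive releases the HiveServer Thrift API extends the metastore API, which is why pointing the metastore client at a HiveServer port can happen to work as well.) A sketch of the conventional setup:
Shell:
# start a dedicated metastore service (default port 9083)
hive --service metastore
# verify something is now listening
netstat -tlnp | grep 9083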