Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

Solution:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-common</artifactId>
        <version>2.4.1</version>
    </dependency>
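
For context: the exception is thrown by org.apache.hadoop.mapreduce.Cluster when no ClientProtocolProvider on the classpath can serve the configured framework. hadoop-mapreduce-client-common supplies the "local" provider and hadoop-mapreduce-client-jobclient supplies the "yarn" one, so besides adding the dependency above it is worth verifying the mapreduce.framework.name setting itself. A minimal driver sketch, with a placeholder ResourceManager address (the error would surface when the client connects to the cluster during submission):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ClusterInitCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // "yarn" needs hadoop-mapreduce-client-jobclient on the classpath;
            // "local" is served by hadoop-mapreduce-client-common (the dependency above).
            conf.set("mapreduce.framework.name", "yarn");
            conf.set("yarn.resourcemanager.address", "rm-host:8032"); // placeholder address
            // Job.getInstance() only builds the job; the Cluster is initialized when
            // the client connects at submit time, which is where the IOException appears.
            Job job = Job.getInstance(conf, "cluster-init-check");
            job.submit();
        }
    }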
Date: 2024-10-16 13:38:10

Related articles on "Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses."

Hadoop reports "Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name"

PriviledgedActionException as:man (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses. 2014-09-24 12:57:41,567 ERROR [RunService.java:206
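
When this shows up while debugging from an IDE with no reachable cluster, one hedged workaround (assuming hadoop-mapreduce-client-common is on the classpath) is to force local mode, which needs no server addresses at all:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class LocalModeDebug {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("mapreduce.framework.name", "local"); // served by LocalClientProtocolProvider
            conf.set("fs.defaultFS", "file:///");          // keep I/O on the local filesystem
            Job job = Job.getInstance(conf, "local-debug");
            System.out.println("framework = " + job.getConfiguration().get("mapreduce.framework.name"));
        }
    }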

sqoop job local mode and the "Cannot initialize Cluster" problem

Hadoop version: Hadoop 2.3.0-cdh5.0.0. Sqoop version: Sqoop 1.4.4-cdh5.0.0. Configure sqoop-env.sh: #Set path to where bin/hadoop is available export HADOOP_COMMON_HOME=/my/hadoop #Set path to where hadoop-*-core.jar is available export HADOOP_MAPRED_HOME=/my/hadoop/shar

Unity 4: Please check your configuration file and verify this type name.

The problem is in your config file. You are mixing two concepts with some incorrect syntax. The <assembly... /> and <namespace ... /> nodes provide an assembly and namespace search order when your <register ... /> node contains a type tha

MHA installation error: [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln361] None of slaves can be master. Check failover configuration file or log-bin settings in my.cnf

After some research (see http://blog.51cto.com/16769017/1878451), the fix is simply to enable binary logging on both slaves (it took a whole day without finding a solution; in the end I worked it out through my own understanding and testing, proud of it!). I won't paste the full configuration here. Actual configuration, on the MySQL master: cat /etc/my.cnf log-bin=weifeng1 server_id = 81 socket = /tmp/mysql.sock binlog-do-db = db1 On slave 01: ca

Code checked out from SVN has no Run on Server option in Eclipse

Blog content reposted from http://blog.csdn.net/hongchangfirst/article/details/7722703. First Close Project, then edit the .project file under the Eclipse project: 1. Inside <natures> </natures> add <nature>org.eclipse.wst.common.project.facet.core.nature</nature> <nature>org.

Connecting Eclipse to Hadoop 2.2.0 under Win7

Preparation: make sure the Hadoop 2.2.0 cluster is running normally. 1. Create a Java project in Eclipse and import the Hadoop 2.2.0 jars. 2. Copy log4j.properties into the src root to see detailed logs via log4j: log4j.rootLogger=debug, stdout, R log4j.appender.stdout=org.apache.log4j.ConsoleAppender log4j.appender.stdout.layout=org.apache.log4j.PatternLa

Hive on Tez pitfalls, part 1: Hive 0.13 on Tez

The cluster was recently slated to upgrade to CDH 5.2.0 and adopt Tez. CDH 5.2.0 had already been running stably on the test cluster for a long time, so I started tinkering with Hive on Tez and ran into quite a few problems along the way, recorded here. Deploying Hive on Tez is fairly simple; see the wiki. A few main points: 1. When building: mvn clean package -Dtar -DskipTests=true -Dmaven.javadoc.skip=true 2. Upload the Tez jars to HDFS and configure tez-site.xml <property>

Summary on mapreduce.framework.name init error

An exception occured while performing the indexing job : java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses. at org.apache.hadoop.mapreduce.Cluster.initiali
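
Since Cluster.initialize() discovers providers through java.util.ServiceLoader, a quick diagnostic sketch (not from any of the articles above) is to list the ClientProtocolProvider implementations the classpath actually offers; an empty listing points to the missing-jar cause:

    import java.util.ServiceLoader;
    import org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider;

    public class ProviderCheck {
        public static void main(String[] args) {
            // Cluster.initialize() iterates these same providers; if nothing prints,
            // the MapReduce client jars are missing from the classpath.
            for (ClientProtocolProvider p : ServiceLoader.load(ClientProtocolProvider.class)) {
                System.out.println(p.getClass().getName());
            }
        }
    }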