retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

[email protected]:/export/scratch/yao/hadoop-1.2.1/bin$ ./hadoop fs -ls

16/03/10 14:05:35 INFO ipc.Client: Retrying connect to server: cs-spatial-210/ip address:5218. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

16/03/10 14:05:36 INFO ipc.Client: Retrying connect to server: cs-spatial-210/ip address:5218. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

16/03/10 14:05:37 INFO ipc.Client: Retrying connect to server: cs-spatial-210/ip address:5218. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

ls: Call to cs-spatial-210/ip address:5218 failed on connection exception: java.net.ConnectException: Connection refused
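For context, "Connection refused" here simply means nothing was listening on the NameNode address the client was configured with. A quick way to narrow that down from the shell (a rough sketch, not from the original post; it assumes login access to the master node and reuses the host and port shown in the log above):

# on the master (cs-spatial-210): is a NameNode process running at all?
jps

# is anything listening on the port the client is dialing (5218 in the log),
# and which process and user own it?
netstat -tlnp | grep 5218

# from the client machine: is the port reachable over the network?
telnet cs-spatial-210 5218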

===============================================================================

I hadn't even started working and the cluster was already down. It was fine yesterday. Who touched my cluster?

So I switched into Google and Baidu mode, tried every answer I could find, and read the logs over and over. Still nothing. Damn it, let me go take a look at the web UI...

One look and I got a shock: my cluster had become someone else's cluster. Then I remembered what a Saudi colleague once told me: don't use the default ports, because a lot of people share this cluster and conflicts are easy to hit. OK, my mistake.

===============================================================================

I reconfigured the HDFS and MapReduce ports, and everything is fine again. Nice!
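For reference, on Hadoop 1.x this kind of port change comes down to two properties. The snippet below is only a sketch: fs.default.name in conf/core-site.xml (NameNode address) and mapred.job.tracker in conf/mapred-site.xml (JobTracker address) are the standard Hadoop 1.x settings, but the port values are made up for illustration. Pick ports nobody else on the shared machines is using, keep the files identical on every node, then restart the daemons (stop-all.sh / start-all.sh).

<!-- conf/core-site.xml, inside <configuration>: NameNode RPC address (example port only) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://cs-spatial-210:52310</value>
</property>

<!-- conf/mapred-site.xml, inside <configuration>: JobTracker address (example port only) -->
<property>
  <name>mapred.job.tracker</name>
  <value>cs-spatial-210:52311</value>
</property>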

Everyone uses this cluster and sets up their own deployment on it, so sticking with the default ports makes conflicts almost inevitable. The funnier part is that a buddy and I had even picked the same machine as our master. Is that machine really that good?
