Hadoop error: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

Error:

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

Cause:

The write cannot proceed. My environment has 3 datanodes and the replication factor is set to 3, so every write goes through a pipeline of all 3 machines. The default replace-datanode-on-failure.policy is DEFAULT: when the cluster has 3 or more datanodes, the client tries to find another datanode to take over for a failed one. With only 3 machines in total there is no spare node to substitute, so as soon as one datanode has a problem the write can never succeed.
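To confirm the premise (only 3 datanodes, so there is no spare to swap in), you can run hdfs dfsadmin -report, or count the datanodes from a client program. Below is a minimal Java sketch, assuming the HDFS client libraries are on the classpath; the class name is illustrative, not from the original post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Datanode statistics are only available on a DistributedFileSystem.
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Lists the datanodes the NameNode knows about.
            DatanodeInfo[] nodes = dfs.getDataNodeStats();
            System.out.println("datanodes reported by the NameNode: " + nodes.length);
        }
    }
}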

Solution:

Modify hdfs-site.xml, adding or updating the following two properties:

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

dfs.client.block.write.replace-datanode-on-failure.enable controls whether the client applies a replacement policy at all when a write fails; the default of true is fine and can be left alone.

As for dfs.client.block.write.replace-datanode-on-failure.policy: under DEFAULT, with a replication factor of 3 or more the client tries to swap in a replacement datanode and retry the write, while with a replication factor of 2 it skips replacement and keeps writing to the remaining nodes. On a cluster with only 3 datanodes, a single unresponsive node breaks every write, so replacement can be turned off by setting the policy to NEVER. With NEVER, a failed datanode simply drops out of the pipeline and the write continues with fewer replicas; the NameNode re-replicates the under-replicated blocks afterward.
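Both properties are read by the HDFS client rather than by the datanodes, so instead of editing hdfs-site.xml they can also be set programmatically on the Configuration of the process that performs the write. A minimal sketch under that assumption; the output path and class name are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplacePolicyDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side settings: they affect the process that opens the
        // write pipeline, so no datanode restart is required.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        try (FileSystem fs = FileSystem.get(conf);
             // Hypothetical path, used only to exercise a pipeline write.
             FSDataOutputStream out = fs.create(new Path("/tmp/replace-policy-demo.txt"))) {
            out.writeBytes("write continues even if one of the three datanodes fails\n");
        }
    }
}

The same applies when editing hdfs-site.xml: the copy that matters is the one visible to the machine running the writing client, and the datanodes do not need to be restarted.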



