Oozie java.io.IOException: output.properties data exceeds its limit [2048]

When calling Sqoop from Oozie, the job failed with the following error:

Launcher AM execution failed
java.io.IOException: output.properties data exceeds its limit [2048]
    at org.apache.oozie.action.hadoop.LocalFsOperations.getLocalFileContentAsString(LocalFsOperations.java:86)
    at org.apache.oozie.action.hadoop.LauncherAM.processActionData(LauncherAM.java:521)
    at org.apache.oozie.action.hadoop.LauncherAM.handleActionData(LauncherAM.java:501)
    at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:229)
    at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
    at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Exception in thread "main" java.io.IOException: output.properties data exceeds its limit [2048]
    at org.apache.oozie.action.hadoop.LocalFsOperations.getLocalFileContentAsString(LocalFsOperations.java:86)
    at org.apache.oozie.action.hadoop.LauncherAM.processActionData(LauncherAM.java:521)
    at org.apache.oozie.action.hadoop.LauncherAM.handleActionData(LauncherAM.java:501)
    at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:229)
    at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
    at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
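For background, this limit is enforced when the launcher AM reads back the output.properties file an action produced: LocalFsOperations.getLocalFileContentAsString refuses to load the file once it exceeds oozie.action.max.output.data (2048 bytes by default), and any action that captures output back into the workflow goes through this path. As an illustration of the mechanism, below is a minimal workflow sketch using a shell action with <capture-output/>; the workflow name, script, and paths are hypothetical, and the same byte limit applies to the data a Sqoop action hands back. With <capture-output/>, whatever the script prints to stdout must be in Java properties format and is exactly the data this error measures.

<workflow-app xmlns="uri:oozie:workflow:0.5" name="capture-demo-wf">
    <start to="shell-node"/>
    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>emit-props.sh</exec>
            <file>${wfAppPath}/emit-props.sh</file>
            <!-- capture-output tells the launcher to read the key=value
                 lines the script prints into output.properties; that file
                 is what the 2048-byte default limit applies to -->
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>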

Solution

The maximum size of an action's output data defaults to 2048 bytes. Raise the limit in oozie-site.xml and restart the Oozie server:
<property>
    <name>oozie.action.max.output.data</name>
    <value>204800</value>
</property>
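The value is in bytes, so 204800 raises the cap from 2 KB to 200 KB. After the restart, one way to verify that the new limit is active is to dump the server's effective configuration with the Oozie CLI (assuming a server at localhost:11000 and an Oozie version whose admin command supports -configuration):

oozie admin -oozie http://localhost:11000/oozie -configuration | grep oozie.action.max.output.data

Raising the limit is usually enough, but it is also worth keeping the captured output small, since that data is stored alongside the workflow action in the Oozie database.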

Original article: https://www.cnblogs.com/EnzoDin/p/10548405.html
