Hadoop exception: java.io.IOException: Job status not available

[[email protected] conf]$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /user/lizeyi/people.txt  /user/lizeyi/wordcount7
15/06/08 18:36:16 INFO client.RMProxy: Connecting to ResourceManager at master.hadoop/10.3.4.35:8032
15/06/08 18:36:17 INFO input.FileInputFormat: Total input paths to process : 1
15/06/08 18:36:17 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
15/06/08 18:36:17 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 39cf0c71a251a79c50555810ca660450d9682140]
15/06/08 18:36:17 INFO mapreduce.JobSubmitter: number of splits:1
15/06/08 18:36:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1433756996622_0004
15/06/08 18:36:18 INFO impl.YarnClientImpl: Submitted application application_1433756996622_0004
15/06/08 18:36:18 INFO mapreduce.Job: The url to track the job: http://master.hadoop:8088/proxy/application_1433756996622_0004/
15/06/08 18:36:18 INFO mapreduce.Job: Running job: job_1433756996622_0004
15/06/08 18:36:40 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
java.io.IOException: Job status not available 
        at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
        at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:609)
        at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1354)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1316)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
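
The log line immediately before the exception is the clue: the application has already finished on YARN (FinalApplicationStatus=SUCCEEDED) and the client is redirected to the MapReduce JobHistory Server to fetch the final job status. If the JobHistory Server is not running, or mapreduce.jobhistory.address points to the wrong host or port, Job.updateStatus() cannot obtain that status and the client throws "Job status not available", even though the job itself may have completed. Below is a minimal check-and-fix sketch: master.hadoop is taken from the log above, 10020 and 19888 are the Hadoop 2.x default history-server ports, and the mr-jobhistory-daemon.sh path assumes a stock Apache layout (packaged installs such as CDH usually start the history server as a system service instead).

# Is the JobHistory Server process running on the history host?
jps | grep JobHistoryServer

# mapred-site.xml should point clients at the history server, e.g.
#   mapreduce.jobhistory.address         master.hadoop:10020
#   mapreduce.jobhistory.webapp.address  master.hadoop:19888

# Start it if it is missing (stock Apache Hadoop 2.x script)
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

# Confirm the RPC port is listening
netstat -tlnp | grep 10020

With the history server reachable, the same submission completes and prints its counters, as the second run below shows.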
[[email protected] conf]$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /user/lizeyi/people.txt  /user/lizeyi/wordcount9
15/06/08 18:54:04 INFO client.RMProxy: Connecting to ResourceManager at master.hadoop/10.3.4.35:8032
15/06/08 18:54:06 INFO input.FileInputFormat: Total input paths to process : 1
15/06/08 18:54:06 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
15/06/08 18:54:06 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 39cf0c71a251a79c50555810ca660450d9682140]
15/06/08 18:54:06 INFO mapreduce.JobSubmitter: number of splits:1
15/06/08 18:54:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1433760669916_0001
15/06/08 18:54:07 INFO impl.YarnClientImpl: Submitted application application_1433760669916_0001
15/06/08 18:54:07 INFO mapreduce.Job: The url to track the job: http://master.hadoop:8088/proxy/application_1433760669916_0001/
15/06/08 18:54:07 INFO mapreduce.Job: Running job: job_1433760669916_0001
15/06/08 18:54:34 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
15/06/08 18:54:35 INFO mapreduce.Job: Job job_1433760669916_0001 running in uber mode : false
15/06/08 18:54:35 INFO mapreduce.Job:  map 100% reduce 100%
15/06/08 18:54:35 INFO mapreduce.Job: Job job_1433760669916_0001 completed successfully
15/06/08 18:54:35 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=68
                FILE: Number of bytes written=205241
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=151
                HDFS: Number of bytes written=46
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=6036
                Total time spent by all reduces in occupied slots (ms)=6132
                Total time spent by all map tasks (ms)=6036
                Total time spent by all reduce tasks (ms)=6132
                Total vcore-seconds taken by all map tasks=6036
                Total vcore-seconds taken by all reduce tasks=6132
                Total megabyte-seconds taken by all map tasks=6180864
                Total megabyte-seconds taken by all reduce tasks=6279168
        Map-Reduce Framework
                Map input records=4
                Map output records=4
                Map output bytes=54
                Map output materialized bytes=68
                Input split bytes=113
                Combine input records=4
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=68
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=82
                CPU time spent (ms)=2150
                Physical memory (bytes) snapshot=447897600
                Virtual memory (bytes) snapshot=1986359296
                Total committed heap usage (bytes)=355467264
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=38
        File Output Format Counters 
                Bytes Written=46
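
Re-running the same wordcount at 18:54 succeeds: after the redirect to the job history server, the client prints the final map/reduce progress and the full counter set instead of throwing. The new cluster timestamp in the job id (job_1433760669916_0001 versus job_1433756996622_0004) suggests the cluster, including the JobHistory Server, was restarted between the two attempts. Once the history server is reachable, the status of an already-finished job can also be queried directly; a small sketch using the job id and host from the log above:

# Ask the history server for the final status of the completed job
mapred job -status job_1433760669916_0001

# Or browse it in the history web UI (19888 is the default webapp port)
# http://master.hadoop:19888/jobhistory/job/job_1433760669916_0001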