Solution: No job file jar and ClassNotFoundException (Hadoop, MapReduce)

I had hadoop-1.2.1 set up in pseudo-distributed mode and had only ever run the wordcount example from hadoop-examples.jar on the command line; it all looked easy.

What I did not expect was that running my own MapReduce program would fail with "No job file jar" and a ClassNotFoundException.

After some detours, my own MapReduce job finally ran successfully.

Notably, I did not upload the third-party jars (hadoop-core, commons-cli, and the other commons-* jars, six in all) or my own code's jar to the remote cluster; I did not bundle the third-party jars into a third-party.jar locally; I did not use the "-libjars" option; and I did not even call GenericOptionsParser (which many solutions online say you need in order to parse Hadoop's command-line options).

The key code:

    Job job = new Job(getConf());
    job.setJarByClass(WordCountJob.class);
    int res = ToolRunner.run(new WordCountJob(), args);

This works because ToolRunner itself runs GenericOptionsParser on the arguments, so generic options such as -libjars are handled for you, and setJarByClass tells the framework which jar to ship to the cluster, which is exactly what the "No job file jar" warning complains about.
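For contrast, many solutions online recommend parsing the generic options yourself with GenericOptionsParser instead of using ToolRunner. Below is a sketch of that driver style, assembled from the commented-out lines in the source that follows (it assumes the same imports and the same mapper/reducer classes); it is not what I ran, but it also works, provided setJarByClass is still called:

    // Sketch: the plain-Configuration driver (no ToolRunner).
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Parse -libjars, -D, etc. manually and keep the remaining args.
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        Job job = new Job(conf, "wordcountmr");
        // Still essential: without this no job jar is set and the tasks
        // fail with ClassNotFoundException.
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }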

Source code:

package wordcount2;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountJob extends Configured implements Tool {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in the line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // No explicit GenericOptionsParser needed here: ToolRunner has
        // already parsed the generic options into the Configuration we
        // get from getConf(). The manual alternative would be:
        //     Configuration conf = new Configuration();
        //     String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (args.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        //     Job job = new Job(conf, "wordcountmr");
        Job job = new Job(getConf());
        // The crucial line: it locates the jar containing this class and
        // sets it as the job jar, which avoids "No job file jar" and the
        // resulting ClassNotFoundException.
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Return the status instead of calling System.exit() inside run(),
        // so that main() can exit with the code ToolRunner hands back.
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new WordCountJob(), args);
        System.exit(res);
    }
}

To build the jar you can use the command line (for example, javac -classpath /home/lzc/hadoop-1.2.1/hadoop-core-1.2.1.jar:/home/lzc/hadoop-1.2.1/lib/commons-cli-1.2.jar -d ./classes/ ./src/WordCountJob.java to compile, then jar -cvf wordcountjob.jar -C ./classes/ . to package), but the simplest way is to use Eclipse's "Export JAR file" feature and package this class into a jar on its own.
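Spelled out step by step, the whole build might look like the following sketch (the /home/lzc paths come from this post; the ./src layout is an assumption, so adjust both to your environment):

    # Compile against the Hadoop core jar and commons-cli.
    mkdir -p ./classes
    javac -classpath /home/lzc/hadoop-1.2.1/hadoop-core-1.2.1.jar:/home/lzc/hadoop-1.2.1/lib/commons-cli-1.2.jar \
        -d ./classes/ ./src/WordCountJob.java

    # Package everything under ./classes (including the wordcount2
    # package directory) into the jar used in the run below.
    jar -cvf wc2.jar -C ./classes/ .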

Copy the generated jar into HADOOP_HOME and run the command below; note that the class name must be fully qualified with its package (wordcount2.WordCountJob).

~/Dolphin/hadoop-1.2.1$ bin/hadoop jar wc2.jar wordcount2.WordCountJob input/file*.txt output

14/12/10 15:48:59 INFO input.FileInputFormat: Total input paths to process : 2

14/12/10 15:48:59 INFO util.NativeCodeLoader: Loaded the native-hadoop library

14/12/10 15:48:59 WARN snappy.LoadSnappy: Snappy native library not loaded

14/12/10 15:49:00 INFO mapred.JobClient: Running job: job_201412080836_0026

14/12/10 15:49:01 INFO mapred.JobClient:  map 0% reduce 0%

14/12/10 15:49:06 INFO mapred.JobClient:  map 100% reduce 0%

14/12/10 15:49:13 INFO mapred.JobClient:  map 100% reduce 33%

14/12/10 15:49:15 INFO mapred.JobClient:  map 100% reduce 100%

14/12/10 15:49:15 INFO mapred.JobClient: Job complete: job_201412080836_0026

14/12/10 15:49:15 INFO mapred.JobClient: Counters: 29

14/12/10 15:49:15 INFO mapred.JobClient:   Job Counters

14/12/10 15:49:15 INFO mapred.JobClient:     Launched reduce tasks=1

14/12/10 15:49:15 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=7921

14/12/10 15:49:15 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

14/12/10 15:49:15 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

14/12/10 15:49:15 INFO mapred.JobClient:     Launched map tasks=2

14/12/10 15:49:15 INFO mapred.JobClient:     Data-local map tasks=2

14/12/10 15:49:15 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=9018

14/12/10 15:49:15 INFO mapred.JobClient:   File Output Format Counters

14/12/10 15:49:15 INFO mapred.JobClient:     Bytes Written=48

14/12/10 15:49:15 INFO mapred.JobClient:   FileSystemCounters

14/12/10 15:49:15 INFO mapred.JobClient:     FILE_BYTES_READ=102

14/12/10 15:49:15 INFO mapred.JobClient:     HDFS_BYTES_READ=284

14/12/10 15:49:15 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=190665

14/12/10 15:49:15 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=48

14/12/10 15:49:15 INFO mapred.JobClient:   File Input Format Counters

14/12/10 15:49:15 INFO mapred.JobClient:     Bytes Read=48

14/12/10 15:49:15 INFO mapred.JobClient:   Map-Reduce Framework

14/12/10 15:49:15 INFO mapred.JobClient:     Map output materialized bytes=108

14/12/10 15:49:15 INFO mapred.JobClient:     Map input records=2

14/12/10 15:49:15 INFO mapred.JobClient:     Reduce shuffle bytes=108

14/12/10 15:49:15 INFO mapred.JobClient:     Spilled Records=16

14/12/10 15:49:15 INFO mapred.JobClient:     Map output bytes=80

14/12/10 15:49:15 INFO mapred.JobClient:     CPU time spent (ms)=2420

14/12/10 15:49:15 INFO mapred.JobClient:     Total committed heap usage (bytes)=390004736

14/12/10 15:49:15 INFO mapred.JobClient:     Combine input records=8

14/12/10 15:49:15 INFO mapred.JobClient:     SPLIT_RAW_BYTES=236

14/12/10 15:49:15 INFO mapred.JobClient:     Reduce input records=8

14/12/10 15:49:15 INFO mapred.JobClient:     Reduce input groups=6

14/12/10 15:49:15 INFO mapred.JobClient:     Combine output records=8

14/12/10 15:49:15 INFO mapred.JobClient:     Physical memory (bytes) snapshot=436707328

14/12/10 15:49:15 INFO mapred.JobClient:     Reduce output records=6

14/12/10 15:49:15 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1908416512

14/12/10 15:49:15 INFO mapred.JobClient:     Map output records=8

~/Dolphin/hadoop-1.2.1$ bin/hadoop fs -ls output

Found 3 items

-rw-r--r--   2 hadoop121 supergroup          0 2014-12-10 15:49 /user/hadoop121/output/_SUCCESS

drwxr-xr-x   - hadoop121 supergroup          0 2014-12-10 15:49 /user/hadoop121/output/_logs

-rw-r--r--   2 hadoop121 supergroup         48 2014-12-10 15:49 /user/hadoop121/output/part-r-00000

~/Dolphin/hadoop-1.2.1$ bin/hadoop fs -cat output/part-r-00000

Hadoop    1

Hello    2

Word    1

hadoop    1

hello    2

word    1

Some people say the cause is that HDFS cannot access local files, or that it is a permissions problem; I tried it specifically, and running the jar from a local path outside HADOOP_HOME succeeds just the same.

~/Dolphin/hadoop-1.2.1$ bin/hadoop jar /home/lzc/workspace/wordcount1/wc2.jar wordcount2.WordCountJob input/file*.txt output

14/12/10 16:08:26 INFO input.FileInputFormat: Total input paths to process : 2

14/12/10 16:08:26 INFO util.NativeCodeLoader: Loaded the native-hadoop library

14/12/10 16:08:26 WARN snappy.LoadSnappy: Snappy native library not loaded

14/12/10 16:08:27 INFO mapred.JobClient: Running job: job_201412080836_0027

14/12/10 16:08:28 INFO mapred.JobClient:  map 0% reduce 0%

14/12/10 16:08:33 INFO mapred.JobClient:  map 100% reduce 0%

14/12/10 16:08:40 INFO mapred.JobClient:  map 100% reduce 33%

14/12/10 16:08:41 INFO mapred.JobClient:  map 100% reduce 100%

14/12/10 16:08:42 INFO mapred.JobClient: Job complete: job_201412080836_0027

14/12/10 16:08:42 INFO mapred.JobClient: Counters: 29

14/12/10 16:08:42 INFO mapred.JobClient:   Job Counters

14/12/10 16:08:42 INFO mapred.JobClient:     Launched reduce tasks=1

14/12/10 16:08:42 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=7221

14/12/10 16:08:42 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

14/12/10 16:08:42 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

14/12/10 16:08:42 INFO mapred.JobClient:     Launched map tasks=2

14/12/10 16:08:42 INFO mapred.JobClient:     Data-local map tasks=2

14/12/10 16:08:42 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=8677

14/12/10 16:08:42 INFO mapred.JobClient:   File Output Format Counters

14/12/10 16:08:42 INFO mapred.JobClient:     Bytes Written=48

14/12/10 16:08:42 INFO mapred.JobClient:   FileSystemCounters

14/12/10 16:08:42 INFO mapred.JobClient:     FILE_BYTES_READ=102

14/12/10 16:08:42 INFO mapred.JobClient:     HDFS_BYTES_READ=284

14/12/10 16:08:42 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=190665

14/12/10 16:08:42 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=48

14/12/10 16:08:42 INFO mapred.JobClient:   File Input Format Counters

14/12/10 16:08:42 INFO mapred.JobClient:     Bytes Read=48

14/12/10 16:08:42 INFO mapred.JobClient:   Map-Reduce Framework

14/12/10 16:08:42 INFO mapred.JobClient:     Map output materialized bytes=108

14/12/10 16:08:42 INFO mapred.JobClient:     Map input records=2

14/12/10 16:08:42 INFO mapred.JobClient:     Reduce shuffle bytes=108

14/12/10 16:08:42 INFO mapred.JobClient:     Spilled Records=16

14/12/10 16:08:42 INFO mapred.JobClient:     Map output bytes=80

14/12/10 16:08:42 INFO mapred.JobClient:     CPU time spent (ms)=2280

14/12/10 16:08:42 INFO mapred.JobClient:     Total committed heap usage (bytes)=373489664

14/12/10 16:08:42 INFO mapred.JobClient:     Combine input records=8

14/12/10 16:08:42 INFO mapred.JobClient:     SPLIT_RAW_BYTES=236

14/12/10 16:08:42 INFO mapred.JobClient:     Reduce input records=8

14/12/10 16:08:42 INFO mapred.JobClient:     Reduce input groups=6

14/12/10 16:08:42 INFO mapred.JobClient:     Combine output records=8

14/12/10 16:08:42 INFO mapred.JobClient:     Physical memory (bytes) snapshot=433147904

14/12/10 16:08:42 INFO mapred.JobClient:     Reduce output records=6

14/12/10 16:08:42 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1911033856

14/12/10 16:08:42 INFO mapred.JobClient:     Map output records=8

~/Dolphin/hadoop-1.2.1$

References:

1. http://dongxicheng.org/mapreduce/run-hadoop-job-problems/
2. http://lucene.472066.n3.nabble.com/Trouble-with-Word-Count-example-td4023269.html
3. http://stackoverflow.com/questions/22850532/warn-mapred-jobclient-no-job-jar-file-set-user-classes-may-not-be-found

不多说,直接上代码. 2016-12-12 21:54:04,509 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Initializing JVM Metrics with processName=JobTracker, sessionId=2016-12-12 21:54:05,166 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - Hadoop command-line option