[Linux][Hadoop] Running the WordCount Example

Following on from the previous post: with Hadoop installed and running, it is time to try an example, and the simplest, most direct one is the HelloWorld-style WordCount.

 

This walkthrough follows this blog post: http://xiejianglei163.blog.163.com/blog/static/1247276201443152533684/

 

First, create a folder (anywhere you like) containing two files, with the following structure:

data_input

--file1.txt

--file2.txt

The file contents can be anything; I copied a paragraph of English text from a news article.
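The steps above can be scripted as follows. The directory name matches the upload command used later; the file contents here are placeholder text, not the news excerpt from the original run:

```shell
# Create the local input directory and two sample files.
# Any English text will do -- WordCount just counts whitespace-separated tokens.
mkdir -p data_input
cat > data_input/file1.txt <<'EOF'
The quick brown fox jumps over the lazy dog
EOF
cat > data_input/file2.txt <<'EOF'
The dog barks and the fox runs away
EOF
# Quick look at what we are about to upload.
wc -w data_input/file1.txt data_input/file2.txt
```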

Then run the following commands:

[email protected]:/usr/local/gz/hadoop-2.4.1$ ./bin/hadoop fs -mkdir /data    # create /data to hold the input; this is a directory in HDFS, not under the Linux root filesystem
[email protected]:/usr/local/gz/hadoop-2.4.1$ ./bin/hadoop fs -put -f ./data_input/* /data # copy the two files created above into /data

 

Run the WordCount job and watch the output:

[email protected]:/usr/local/gz/hadoop-2.4.1$ ./bin/hadoop jar ./share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.4.1-sources.jar org.apache.hadoop.examples.WordCount /data /output
14/07/22 22:34:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/07/22 22:34:27 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/07/22 22:34:29 INFO input.FileInputFormat: Total input paths to process : 2
14/07/22 22:34:29 INFO mapreduce.JobSubmitter: number of splits:2
14/07/22 22:34:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406038146260_0001
14/07/22 22:34:32 INFO impl.YarnClientImpl: Submitted application application_1406038146260_0001
14/07/22 22:34:32 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1406038146260_0001/
14/07/22 22:34:32 INFO mapreduce.Job: Running job: job_1406038146260_0001
14/07/22 22:34:58 INFO mapreduce.Job: Job job_1406038146260_0001 running in uber mode : false
14/07/22 22:34:58 INFO mapreduce.Job:  map 0% reduce 0%
14/07/22 22:35:34 INFO mapreduce.Job:  map 100% reduce 0%
14/07/22 22:35:52 INFO mapreduce.Job:  map 100% reduce 100%
14/07/22 22:35:52 INFO mapreduce.Job: Job job_1406038146260_0001 completed successfully
14/07/22 22:35:53 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=2521
                FILE: Number of bytes written=283699
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2280
                HDFS: Number of bytes written=1710
                HDFS: Number of read operations=9
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=71182
                Total time spent by all reduces in occupied slots (ms)=13937
                Total time spent by all map tasks (ms)=71182
                Total time spent by all reduce tasks (ms)=13937
                Total vcore-seconds taken by all map tasks=71182
                Total vcore-seconds taken by all reduce tasks=13937
                Total megabyte-seconds taken by all map tasks=72890368
                Total megabyte-seconds taken by all reduce tasks=14271488
        Map-Reduce Framework
                Map input records=29
                Map output records=274
                Map output bytes=2814
                Map output materialized bytes=2527
                Input split bytes=202
                Combine input records=274
                Combine output records=195
                Reduce input groups=190
                Reduce shuffle bytes=2527
                Reduce input records=195
                Reduce output records=190
                Spilled Records=390
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=847
                CPU time spent (ms)=6410
                Physical memory (bytes) snapshot=426119168
                Virtual memory (bytes) snapshot=1953292288
                Total committed heap usage (bytes)=256843776
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=2078
        File Output Format Counters
                Bytes Written=1710
[email protected]:/usr/local/gz/hadoop-2.4.1$
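The Map-Reduce Framework counters above are internally consistent: the 274 map output records all pass through the combiner, which shrinks them to 195 records, and exactly those 195 records reach the reducer as 190 distinct keys. A tiny sanity check of that arithmetic, with the values copied from the log:

```shell
# Counter values taken verbatim from the job log above.
map_output=274
combine_input=274
combine_output=195
reduce_input=195
reduce_groups=190   # distinct words reaching the reducer

# Every map output record is fed to the combiner...
test "$map_output" -eq "$combine_input" && echo "map output == combine input"
# ...and the combiner's output is exactly what the reducer receives.
test "$combine_output" -eq "$reduce_input" && echo "combine output == reduce input"
# The reducer sees fewer groups than records because 5 words appear in both splits.
echo "duplicate keys across splits: $((reduce_input - reduce_groups))"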

The log above shows the WordCount run in detail. Now view the word counts with:

[email protected]:/usr/local/gz/hadoop-2.4.1$ ./bin/hadoop fs -cat /output/part-r-00000
14/07/22 22:38:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
"as     1
"atrocious,"    1
-       1
10-day  1
13      1
18      1
20,     1
2006.   1
3,000   1
432     1
65      1
7.4.52  1
:help   2
:help<Enter>    1
:q<Enter>       1
<F1>    1
Already,        1
Ban     1
Benjamin        1

Many more counts follow (omitted here); the WordCount run is complete.
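As a quick local sanity check, the same per-word counts can be reproduced with standard Unix tools. This sketch uses a tiny made-up input rather than the news text above, but the pipeline mirrors WordCount's stages:

```shell
# A local stand-in for WordCount: split on whitespace (map), sort (shuffle),
# and count duplicates (reduce). The input lines here are hypothetical.
printf 'hello hadoop\nhello world\n' |
  tr -s '[:space:]' '\n' |   # one word per line
  sort |                     # bring identical words together
  uniq -c |                  # count each group
  sort -k2                   # order by word, like part-r-00000
```

For this input it prints counts of 1 for hadoop, 2 for hello, and 1 for world, which is what WordCount would emit for the same two lines.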

(Source: bubuko.com, 2024-11-08 19:08:19)
