Testing the WordCount Example Program on Hadoop 2.4.x

Before running the example, make sure Hadoop has started correctly.

  Master.Hadoop:

[[email protected] input]$ jps
6736 Jps
6036 NameNode
4697 SecondaryNameNode
4849 ResourceManager
[[email protected] input]$

  Slave1.Hadoop:

[[email protected] sources]$ jps
8086 SecondaryNameNode
8961 Jps
8320 NodeManager
7935 DataNode
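
If any of these daemons are missing, a typical way to (re)start them on Hadoop 2.x is sketched below; it assumes $HADOOP_HOME/sbin is on the PATH and that the cluster has already been configured.

start-dfs.sh     # starts NameNode, SecondaryNameNode and the DataNodes
start-yarn.sh    # starts ResourceManager and the NodeManagers
jps              # verify the daemons listed above are now running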

Errors encountered during testing and their solutions:

Problem 1:

[[email protected] input]$ hadoop fs -ls /
ls: Call From Master.Hadoop/192.168.160.131 to Master.Hadoop:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Solution:

1. Format the HDFS filesystem: hadoop namenode -format
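
A connection-refused error on Master.Hadoop:9000 usually means the NameNode is not running or is not listening on the port named in fs.defaultFS. A minimal recovery sketch follows (the hostname and port are taken from the error message above; note that formatting erases everything already stored in HDFS):

hadoop namenode -format      # on Hadoop 2.x, "hdfs namenode -format" is the preferred form
start-dfs.sh                 # restart the HDFS daemons
jps                          # NameNode must now appear
netstat -tlnp | grep 9000    # if netstat is available, confirm something listens on the fs.defaultFS port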

Problem 2:

Working directory: /home/hadoop/WordCount/input
[[email protected] input]$ hadoop fs -put ./ /input
15/06/30 17:10:45 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/input/test1.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
Solution:

1. First run stop-all.sh to stop all services.

2. Delete the tmp and logs directories on every Slave node, then recreate empty tmp and logs directories.

3. Format the HDFS filesystem: hadoop namenode -format (the three steps are sketched as shell commands below).
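
A shell sketch of these three steps. The tmp location is taken from the staging paths in the job log further down (/hadoop/hdfs/tmp) and the logs location assumes the default $HADOOP_HOME/logs; both must match hadoop.tmp.dir and your actual install layout. Reformatting destroys any data already in HDFS.

stop-all.sh                                   # 1. stop all daemons (run on the master)
# 2. on every slave node, clear and recreate the tmp and logs directories:
rm -rf /hadoop/hdfs/tmp $HADOOP_HOME/logs     #    paths are assumptions -- adjust to your setup
mkdir -p /hadoop/hdfs/tmp $HADOOP_HOME/logs
# 3. back on the master, reformat HDFS and restart:
hadoop namenode -format
start-all.sh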



Reposted from:

http://linux.it.net.cn/e/cluster/hadoop/2014/1215/10427.html

Reposted content:

To test the freshly installed Hadoop with the WordCount example program, first create two arbitrary files on the local filesystem, upload them to HDFS, run the program to count the words in those files, and finally inspect the result.

Create two arbitrary files on the local filesystem:

For example:
[[email protected] input]$ ls
test1.txt  test2.txt
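
One way to create the two sample files (their contents match what is read back from HDFS later in this post); the echo commands are only an illustration:

echo "hello world"  > test1.txt
echo "hello hadoop" > test2.txt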

List the HDFS root directory:

[[email protected] input]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2013-10-30 00:00 /input
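
If /input does not exist yet, it can be created first; a sketch:

hadoop fs -mkdir /input        # on Hadoop 2.x, "hdfs dfs -mkdir -p /input" also works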

Upload the files to /input on HDFS:

[[email protected] input]$ hadoop fs -put ./ /input
[[email protected] input]$ hadoop fs -ls /input
Found 2 items
-rw-r--r--   3 hadoop supergroup         12 2013-10-30 00:00 /input/test1.txt
-rw-r--r--   3 hadoop supergroup         13 2013-10-30 00:00 /input/test2.txt

View the contents of the two files with the HDFS shell:

[[email protected] test]$ hadoop fs -cat /input/test1.txt 
hello world
[[email protected] test]$ hadoop fs -cat /input/test2.txt
hello hadoop

Run the example program (WordCount):

[[email protected] test]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar org.apache.hadoop.examples.WordCount /input /output
13/11/06 21:33:40 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
13/11/06 21:33:40 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/11/06 21:33:40 INFO input.FileInputFormat: Total input paths to process : 2
13/11/06 21:33:41 INFO mapreduce.JobSubmitter: number of splits:2
13/11/06 21:33:41 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/11/06 21:33:41 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class 
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/11/06 21:33:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local382050821_0001
13/11/06 21:33:41 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/staging/hadoop382050821/.staging/job_local382050821_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
13/11/06 21:33:41 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/staging/hadoop382050821/.staging/job_local382050821_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
13/11/06 21:33:42 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local382050821_0001/job_local382050821_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
13/11/06 21:33:42 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local382050821_0001/job_local382050821_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
13/11/06 21:33:42 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
13/11/06 21:33:42 INFO mapreduce.Job: Running job: job_local382050821_0001
13/11/06 21:33:42 INFO mapred.LocalJobRunner: OutputCommitter set in config null
13/11/06 21:33:42 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
13/11/06 21:33:42 INFO mapred.LocalJobRunner: Waiting for map tasks
13/11/06 21:33:42 INFO mapred.LocalJobRunner: Starting task: attempt_local382050821_0001_m_000000_0
13/11/06 21:33:42 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/11/06 21:33:42 INFO mapred.MapTask: Processing split: hdfs://hadoop01:9000/input/test2.txt:0+13
13/11/06 21:33:42 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/11/06 21:33:42 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
13/11/06 21:33:42 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
13/11/06 21:33:42 INFO mapred.MapTask: soft limit at 83886080
13/11/06 21:33:42 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
13/11/06 21:33:42 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
13/11/06 21:33:43 INFO mapred.LocalJobRunner: 
13/11/06 21:33:43 INFO mapred.MapTask: Starting flush of map output
13/11/06 21:33:43 INFO mapred.MapTask: Spilling map output
13/11/06 21:33:43 INFO mapred.MapTask: bufstart = 0; bufend = 21; bufvoid = 104857600
13/11/06 21:33:43 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
13/11/06 21:33:43 INFO mapred.MapTask: Finished spill 0
13/11/06 21:33:43 INFO mapred.Task: Task:attempt_local382050821_0001_m_000000_0 is done. And is in the process of committing 
13/11/06 21:33:43 INFO mapreduce.Job: Job job_local382050821_0001 running in uber mode : false
13/11/06 21:33:43 INFO mapreduce.Job:  map 0% reduce 0%
13/11/06 21:33:43 INFO mapred.LocalJobRunner: map
13/11/06 21:33:43 INFO mapred.Task: Task 'attempt_local382050821_0001_m_000000_0' done.
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Finishing task: attempt_local382050821_0001_m_000000_0
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Starting task: attempt_local382050821_0001_m_000001_0
13/11/06 21:33:43 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/11/06 21:33:43 INFO mapred.MapTask: Processing split: hdfs://hadoop01:9000/input/test1.txt:0+12
13/11/06 21:33:43 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/11/06 21:33:43 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
13/11/06 21:33:43 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 
13/11/06 21:33:43 INFO mapred.MapTask: soft limit at 83886080
13/11/06 21:33:43 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
13/11/06 21:33:43 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
13/11/06 21:33:43 INFO mapred.LocalJobRunner: 
13/11/06 21:33:43 INFO mapred.MapTask: Starting flush of map output
13/11/06 21:33:43 INFO mapred.MapTask: Spilling map output
13/11/06 21:33:43 INFO mapred.MapTask: bufstart = 0; bufend = 20; bufvoid = 104857600
13/11/06 21:33:43 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
13/11/06 21:33:43 INFO mapred.MapTask: Finished spill 0
13/11/06 21:33:43 INFO mapred.Task: Task:attempt_local382050821_0001_m_000001_0 is done. And is in the process of committing
13/11/06 21:33:43 INFO mapred.LocalJobRunner: map
13/11/06 21:33:43 INFO mapred.Task: Task 'attempt_local382050821_0001_m_000001_0' done.
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Finishing task: attempt_local382050821_0001_m_000001_0 
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Map task executor complete.
13/11/06 21:33:43 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/11/06 21:33:43 INFO mapred.Merger: Merging 2 sorted segments
13/11/06 21:33:43 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 36 bytes
13/11/06 21:33:43 INFO mapred.LocalJobRunner: 
13/11/06 21:33:43 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
13/11/06 21:33:44 INFO mapreduce.Job:  map 100% reduce 0%
13/11/06 21:33:44 INFO mapred.Task: Task:attempt_local382050821_0001_r_000000_0 is done. And is in the process of committing
13/11/06 21:33:44 INFO mapred.LocalJobRunner: 
13/11/06 21:33:44 INFO mapred.Task: Task attempt_local382050821_0001_r_000000_0 is allowed to commit now
13/11/06 21:33:44 INFO output.FileOutputCommitter: Saved output of task 'attempt_local382050821_0001_r_000000_0' to hdfs://hadoop01:9000/output/_temporary/0/task_local382050821_0001_r_000000
13/11/06 21:33:44 INFO mapred.LocalJobRunner: reduce > reduce
13/11/06 21:33:44 INFO mapred.Task: Task 'attempt_local382050821_0001_r_000000_0' done.
13/11/06 21:33:45 INFO mapreduce.Job:  map 100% reduce 100%
13/11/06 21:33:45 INFO mapreduce.Job: Job job_local382050821_0001 completed successfully
13/11/06 21:33:45 INFO mapreduce.Job: Counters: 32
        File System Counters
                FILE: Number of bytes read=812174
                FILE: Number of bytes written=1395157
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=63 
                HDFS: Number of bytes written=25
                HDFS: Number of read operations=25
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=5
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=41
                Map output materialized bytes=61
                Input split bytes=202
                Combine input records=4
                Combine output records=4 
                Reduce input groups=3
                Reduce shuffle bytes=0
                Reduce input records=4
                Reduce output records=3
                Spilled Records=8
                Shuffled Maps =0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=146
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=456732672
        File Input Format Counters 
                Bytes Read=25
        File Output Format Counters 
                Bytes Written=25

View the program output:

[[email protected] test]$ hadoop fs -cat /output/part-r-00000
hadoop  1
hello   2
world   1
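
Note that MapReduce refuses to start if the output directory already exists, so to re-run the example the old /output must be removed first. A sketch, assuming the standard examples jar shipped under $HADOOP_HOME/share/hadoop/mapreduce:

hadoop fs -rm -r /output
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
hadoop fs -cat /output/part-r-00000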

