Hadoop MapReduce development in practice: distributing HDFS files with Streaming

1. Distributing an HDFS file (-cacheFile)

Requirement: wordcount that counts only the words listed in a whitelist file. Because that file is very large, it is first uploaded to HDFS and then distributed to the compute nodes with -cacheFile.

-cacheFile hdfs://host:port/path/to/file#linkname    # this option caches the HDFS file on the compute nodes; the streaming program accesses it as ./linkname.

Approach: neither the mapper nor the reducer needs any changes; the only difference is that the streaming job is launched with -cacheFile pointing at the file on HDFS.

1.1 Streaming command format

$HADOOP_HOME/bin/hadoop jar hadoop-streaming.jar \
    -jobconf mapred.job.name="streaming_wordcount" \
    -jobconf mapred.job.priority=3 \
    -input /input/ \
    -output /output/ \
    -mapper "python mapper.py whc" \
    -reducer "python reducer.py" \
    -cacheFile "hdfs://master:9000/cache_file/wordwhite#whc" \
    -file ./mapper.py \
    -file ./reducer.py

Note: in -cacheFile "hdfs://master:9000/cache_file/wordwhite#whc", whc is the link name (alias) under which the HDFS file appears on the compute node, so in -mapper "python mapper.py whc" it can be used exactly like a local file.
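For illustration, here is a minimal sketch (not from the original article) of how a streaming program would read the cached file through that link name, assuming the link name is passed as the first command-line argument, as in "python mapper.py whc":

import sys

# The link name created by -cacheFile appears as a file in the task's
# working directory, so it can be opened like any ordinary local file.
whitelist_path = sys.argv[1]   # e.g. "whc"
wordwhite = set(open(whitelist_path).read().split())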

1.2 Uploading wordwhite

$ hadoop fs -mkdir /input/cachefile
$ hadoop fs -put wordwhite  /input/cachefile
$ hadoop fs -ls /input/cachefile
Found 1 items
-rw-r--r--   1 hadoop supergroup         12 2018-01-26 15:02 /input/cachefile/wordwhite
$ hadoop fs -text hdfs://localhost:9000/input/cachefile/wordwhite
the
and
had

1.3 The run_streaming script

The mapper and reducer are the same as in the local file distribution example; a sketch of both is given below.
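Since the original mapper.py and reducer.py are only referenced here, the following is a minimal sketch of what they might look like (an assumption, not code from the article): the mapper receives the whitelist link name as its first argument and emits "word<TAB>1" for whitelisted words, and the reducer sums the counts per word.

#!/usr/bin/env python
# mapper.py (sketch): count only words that appear in the whitelist file
# whose link name is passed as the first argument (e.g. WHF).
import sys

def load_wordwhite(path):
    with open(path) as fd:
        return set(line.strip() for line in fd if line.strip())

def main():
    wordwhite = load_wordwhite(sys.argv[1])
    for line in sys.stdin:
        for word in line.strip().split():
            if word in wordwhite:
                print("{0}\t1".format(word))

if __name__ == "__main__":
    main()

#!/usr/bin/env python
# reducer.py (sketch): streaming sorts the mapper output by key, so all
# counts for one word arrive contiguously and can be summed in one pass.
import sys

def main():
    current_word, current_count = None, 0
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        word, count = line.split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("{0}\t{1}".format(current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("{0}\t{1}".format(current_word, current_count))

if __name__ == "__main__":
    main()

Such a pair can be smoke-tested locally with an ordinary pipe (e.g. cat The_Man_of_Property | python mapper.py wordwhite | sort | python reducer.py) before submitting the streaming job.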

$ vim runstreaming_cachefile.sh 

#!/bin/bash

HADOOP_CMD="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/bin/hadoop"
STREAM_JAR_PATH="/home/hadoop/app/hadoop/hadoop-2.6.0-cdh5.13.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.13.0.jar"

INPUT_FILE_PATH="/input/The_Man_of_Property"
OUTPUT_FILE_PATH="/output/wordcount/wordwhitecachefiletest"

$HADOOP_CMD jar $STREAM_JAR_PATH \
    -input $INPUT_FILE_PATH \
    -output $OUTPUT_FILE_PATH \
    -jobconf "mapred.job.name=wordcount_wordwhite_cachefile_demo" \
    -mapper "python mapper.py WHF" \
    -reducer "python reducer.py" \
    -cacheFile "hdfs://localhost:9000/input/cachefile/wordwhite#WHF" \
    -file ./mapper.py \
    -file ./reducer.py

1.4 Running the job

$ ./runstreaming_cachefile.sh
18/01/26 15:38:27 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
18/01/26 15:38:28 WARN streaming.StreamJob: -cacheFile option is deprecated, please use -files instead.
18/01/26 15:38:28 WARN streaming.StreamJob: -jobconf option is deprecated, please use -D instead.
18/01/26 15:38:28 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
packageJobJar: [./mapper.py, ./reducer.py, /tmp/hadoop-unjar1709565523181962236/] [] /tmp/streamjob6164905989972408041.jar tmpDir=null
18/01/26 15:38:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/01/26 15:38:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/01/26 15:38:31 INFO mapred.FileInputFormat: Total input paths to process : 1
18/01/26 15:38:31 INFO mapreduce.JobSubmitter: number of splits:2
18/01/26 15:38:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1516345010544_0012
18/01/26 15:38:32 INFO impl.YarnClientImpl: Submitted application application_1516345010544_0012
18/01/26 15:38:32 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1516345010544_0012/
18/01/26 15:38:32 INFO mapreduce.Job: Running job: job_1516345010544_0012
18/01/26 15:38:40 INFO mapreduce.Job: Job job_1516345010544_0012 running in uber mode : false
18/01/26 15:38:40 INFO mapreduce.Job:  map 0% reduce 0%
18/01/26 15:38:49 INFO mapreduce.Job:  map 50% reduce 0%
18/01/26 15:38:50 INFO mapreduce.Job:  map 100% reduce 0%
18/01/26 15:38:57 INFO mapreduce.Job:  map 100% reduce 100%
18/01/26 15:38:57 INFO mapreduce.Job: Job job_1516345010544_0012 completed successfully
18/01/26 15:38:57 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=73950
        FILE: Number of bytes written=582590
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=636501
        HDFS: Number of bytes written=27
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=12921
        Total time spent by all reduces in occupied slots (ms)=5641
        Total time spent by all map tasks (ms)=12921
        Total time spent by all reduce tasks (ms)=5641
        Total vcore-milliseconds taken by all map tasks=12921
        Total vcore-milliseconds taken by all reduce tasks=5641
        Total megabyte-milliseconds taken by all map tasks=13231104
        Total megabyte-milliseconds taken by all reduce tasks=5776384
    Map-Reduce Framework
        Map input records=2866
        Map output records=9243
        Map output bytes=55458
        Map output materialized bytes=73956
        Input split bytes=198
        Combine input records=0
        Combine output records=0
        Reduce input groups=3
        Reduce shuffle bytes=73956
        Reduce input records=9243
        Reduce output records=3
        Spilled Records=18486
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=360
        CPU time spent (ms)=3910
        Physical memory (bytes) snapshot=719896576
        Virtual memory (bytes) snapshot=8331550720
        Total committed heap usage (bytes)=602931200
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=636303
    File Output Format Counters
        Bytes Written=27
18/01/26 15:38:57 INFO streaming.StreamJob: Output directory: /output/wordcount/wordwhitecachefiletest

1.5 Checking the results

$ hadoop fs -ls /output/wordcount/wordwhitecachefiletest
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-01-26 15:38 /output/wordcount/wordwhitecachefiletest/_SUCCESS
-rw-r--r--   1 hadoop supergroup         27 2018-01-26 15:38 /output/wordcount/wordwhitecachefiletest/part-00000

$ hadoop fs -text /output/wordcount/wordwhitecachefiletest/part-00000
and 2573
had 1526
the 5144

This completes the wordcount for the specified words, with the whitelist file distributed from HDFS.

2. Hadoop streaming syntax reference

Original article: http://blog.51cto.com/balich/2065812
