Installing Hadoop 2.4.0 and Verifying It by Running WordCount
The following describes how to install the (32-bit) hadoop-2.4.0 release on a 64-bit CentOS 6.5 machine and verify that the services work by running the bundled WordCount example.
Create the directory /home/QiumingLu/hadoop-2.4.0; from here on this is the Hadoop installation directory.
To install hadoop-2.4.0, simply extract hadoop-2.4.0.tar.gz into /home/QiumingLu/hadoop-2.4.0.
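A minimal sketch of the extraction step, assuming the tarball sits in the current working directory (extracting into /home/QiumingLu/ yields /home/QiumingLu/hadoop-2.4.0):

# Assumption: hadoop-2.4.0.tar.gz is in the current directory.
tar -zxf hadoop-2.4.0.tar.gz -C /home/QiumingLu/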
[root@localhost hadoop-2.4.0]# ls
bin etc lib LICENSE.txt NOTICE.txt sbin synthetic_control.data
dfs include libexec logs README.txt share
Configure etc/hadoop/hadoop-env.sh
[root@localhost hadoop-2.4.0]# cat etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/home/QiumingLu/mycloud/jdk/jdk1.7.0_51
Because this Hadoop build is 32-bit by default, you also need to add the following:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
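To see whether the bundled native library actually matches your platform, you can inspect it with file (a quick check; the path assumes the installation directory above, and the exact output varies by build):

# A 32-bit build prints something like "ELF 32-bit LSB shared object".
file /home/QiumingLu/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0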
Otherwise, errors like the following may appear:
Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to `s'
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
Java: ssh: Could not resolve hostname Java: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
Configure etc/hadoop/hdfs-site.xml
[root@localhost hadoop-2.4.0]# cat etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/QiumingLu/hadoop-2.4.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/QiumingLu/hadoop-2.4.0/dfs/data</value>
    </property>
</configuration>
Configure etc/hadoop/core-site.xml (fs.default.name is the older spelling of fs.defaultFS; both are accepted in 2.4.0)
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
Configure etc/hadoop/yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Configure etc/hadoop/mapred-site.xml.template
[root@localhost hadoop-2.4.0]# cat etc/hadoop/mapred-site.xml.template
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
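Note that stock Hadoop 2.x only reads mapred-site.xml, not the .template file, so the template is normally copied first for the setting to take effect:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml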
Format the filesystem
[root@localhost hadoop-2.4.0]# ./bin/hadoop namenode -format
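hadoop namenode is a deprecated entry point in 2.x; the equivalent modern command is:

./bin/hdfs namenode -format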
Start the services. Root is used here; when prompted for a password, enter the root user's password. If you run as a non-root user, or run a truly distributed setup, you first need to set up passwordless SSH login, which is not described in detail here (a minimal sketch follows).
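A minimal sketch of passwordless SSH to localhost for the single-node case (adjust the user and hosts for a real cluster):

# Generate a key pair with an empty passphrase and authorize it locally.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys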
[root@localhost hadoop-2.4.0]# sbin/start-all.sh
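start-all.sh is itself deprecated in 2.x; the equivalent is to start HDFS and YARN separately:

sbin/start-dfs.sh
sbin/start-yarn.sh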
Check the startup status:
[root@localhost hadoop-2.4.0]# ./bin/hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/04/18 05:15:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 135938813952 (126.60 GB)
Present Capacity: 126122217472 (117.46 GB)
DFS Remaining: 126121320448 (117.46 GB)
DFS Used: 897024 (876 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 135938813952 (126.60 GB)
DFS Used: 897024 (876 KB)
Non DFS Used: 9816596480 (9.14 GB)
DFS Remaining: 126121320448 (117.46 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.78%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Apr 18 05:15:29 CST 2014
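As the DEPRECATED warning notes, the same report is available through the hdfs command:

./bin/hdfs dfsadmin -report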
[root@localhost hadoop-2.4.0]# jps
3614 DataNode
3922 ResourceManager
3514 NameNode
9418 Jps
4026 NodeManager
Create the data files (file1.txt, file2.txt)
[root@localhost hadoop-2.4.0]# cat example/file1.txt
hello world hello markhuang hello hadoop
[root@localhost hadoop-2.4.0]# cat example/file2.txt
hadoop ok hadoop fail hadoop 2.4
[root@localhost hadoop-2.4.0]# ./bin/hadoop fs -mkdir /data
Put the data files into the Hadoop filesystem.
[root@localhost hadoop-2.4.0]# ./bin/hadoop fs -put -f example/file1.txt example/file2.txt /data
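To confirm the upload, the directory can be listed:

./bin/hadoop fs -ls /data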
Run the (Java) version of WordCount.
[root@localhost hadoop-2.4.0]# ./bin/hadoop jar ./share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.4.0-sources.jar org.apache.hadoop.examples.WordCount /data /output
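The compiled examples jar works as well and is the more common invocation (path assumed from a stock 2.4.0 layout):

./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount /data /output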
View the results.
[root@localhost hadoop-2.4.0]# ./bin/hadoop fs -cat /output/part-r-00000
2.4 1
fail 1
hadoop 4
hello 3
markhuang 1
ok 1
world 1