Hadoop 0.21.0 Deployment, Installation, and MapReduce Testing

Hadoop requires passwordless SSH, but the technique is not limited to Hadoop: whenever typing passwords gets tedious, you can use the same approach to SSH from one node into another.

Suppose we want the master to log into the slave without a password. On the master (IP 192.168.169.9), do the following:

Run ssh-keygen -t rsa and press Enter at every prompt (the command can be run from any directory).

cd into /root/.ssh/ and you will see the following (id_rsa and id_rsa.pub are the newly generated files):

id_rsa  id_rsa.pub  known_hosts

Then use scp to copy the public key id_rsa.pub to the slave node (IP 192.168.169.10): scp id_rsa.pub 192.168.169.10:/root/.ssh/192.168.169.9

You should now see a file named 192.168.169.9 under /root/.ssh/ on the slave.

Then, on the slave, run the following command:

cat 192.168.169.9 >> authorized_keys

I had already done this step earlier, which is why authorized_keys already existed on my machine.

At this point the master can SSH into the slave without a password.

If you also want the slave to get into the master the same way, the procedure is identical.
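
Putting it all together, here is a minimal sketch of the whole exchange (the IPs and root paths match the example above; adjust them for your own nodes):

# on master (192.168.169.9): generate the key pair, accepting all defaults
ssh-keygen -t rsa
# copy the public key to the slave, stored under a file named after the master's IP
scp /root/.ssh/id_rsa.pub 192.168.169.10:/root/.ssh/192.168.169.9
# on slave (192.168.169.10): append the master's public key to authorized_keys
cat /root/.ssh/192.168.169.9 >> /root/.ssh/authorized_keys
# back on master: this should now log in without asking for a password
ssh 192.168.169.10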

1. Installation environment: one physical machine (the namenode) plus two virtual machines (the datanodes).


Hostname      IP                Role
namenode      192.168.169.9     NameNode, JobTracker
datanode1     192.168.169.10    DataNode, TaskTracker
datanode2     192.168.169.20    DataNode, TaskTracker

At the same time, add the following entries to /etc/hosts on all three machines so the hostnames resolve:

192.168.169.9  namenode
192.168.169.10 datanode1
192.168.169.20 datanode2

After the change, verify that the names resolve (run the same test on the datanodes):

-bash-3.1# ping -c 4 namenode
PING namenode (192.168.169.9) 56(84) bytes of data.
64 bytes from namenode (192.168.169.9): icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from namenode (192.168.169.9): icmp_seq=2 ttl=64 time=0.009 ms
64 bytes from namenode (192.168.169.9): icmp_seq=3 ttl=64 time=0.009 ms
64 bytes from namenode (192.168.169.9): icmp_seq=4 ttl=64 time=0.010 ms

--- namenode ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 0.009/0.012/0.020/0.004 ms

Next, set the hostname in /etc/sysconfig/network (the hostname there must match the machine name used above). Only the namenode's file is shown here; the datanodes' files are analogous, differing only in HOSTNAME:

NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=namenode
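
The HOSTNAME setting in /etc/sysconfig/network only takes effect at the next boot; to rename the running machine immediately you can also set it by hand (shown for the namenode; run the corresponding command on each datanode):

hostname namenode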

 
2. Passwordless SSH authentication

The procedure was covered at the beginning of this post, so I won't repeat it here. Note, however, that all of the following must work:

  1. passwordless login from the master to each slave;
  2. passwordless login from each slave to the master;
  3. passwordless login from the master to itself (a sketch for this one follows the list).
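
Items 1 and 2 use exactly the key exchange described at the top of this post. For item 3, the master's own public key must also be in its own authorized_keys; a minimal sketch, assuming the default key locations (run on the namenode):

cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys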

For example, here is a test of item 1:

-bash-3.1# ssh datanode1
Last login: Fri Feb 17 08:32:34 2012 from namenode
-bash-3.1# exit
logout
Connection to datanode1 closed.
-bash-3.1# ssh datanode2
Last login: Fri Feb 17 08:32:42 2012 from namenode
-bash-3.1# exit
logout
Connection to datanode2 closed.

3. Install the JDK and set the environment variables. The JDK was already installed on my machines, so I only needed the environment variables; add the following to /etc/profile (and remember to run source /etc/profile afterwards):
export JAVA_HOME=/usr/java/jdk1.6.0_29
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:/usr/java/jdk1.6.0_29/bin
export CLASSPATH=:/usr/java/jdk1.6.0_29/lib:/usr/java/jdk1.6.0_29/jre/lib
Check the version:
-bash-3.1# java -version
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
Let's write a small concrete example and run it (test.java):

class test
{
    public static void main(String[] args)
    {
        System.out.println("Hello,World!");
    }
}

Test it:
-bash-3.1# javac test.java 

-bash-3.1# java test
Hello,World!

That confirms the Java setup works.

4. Hadoop installation and configuration

Download hadoop-0.21.0.tar.gz, extract it, and put it in a fixed location; I placed it under /usr/local/hadoop.
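
A minimal sketch of that step, assuming the tarball has already been downloaded into the current directory:

mkdir -p /usr/local/hadoop
tar -zxf hadoop-0.21.0.tar.gz -C /usr/local/hadoop/

Going into the directory for a look: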

-bash-3.1# cd /usr/local/hadoop/hadoop-0.21.0/
-bash-3.1# ll

drwxrwxr-x 2 huyajun huyajun    4096 02-17 10:30 bin
drwxrwxr-x 5 huyajun huyajun    4096 2010-08-17 c++
drwxr-xr-x 8 huyajun huyajun    4096 2010-08-17 common
drwxrwxr-x 2 huyajun huyajun    4096 02-16 15:54 conf
-rw-rw-r-- 1 huyajun huyajun 1289953 2010-08-17 hadoop-common-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun  622276 2010-08-17 hadoop-common-test-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun  934881 2010-08-17 hadoop-hdfs-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun  613332 2010-08-17 hadoop-hdfs-0.21.0-sources.jar
-rw-rw-r-- 1 huyajun huyajun    6956 2010-08-17 hadoop-hdfs-ant-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun  688026 2010-08-17 hadoop-hdfs-test-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun  419671 2010-08-17 hadoop-hdfs-test-0.21.0-sources.jar
-rw-rw-r-- 1 huyajun huyajun 1747897 2010-08-17 hadoop-mapred-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun 1182309 2010-08-17 hadoop-mapred-0.21.0-sources.jar
-rw-rw-r-- 1 huyajun huyajun  252064 2010-08-17 hadoop-mapred-examples-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun 1492025 2010-08-17 hadoop-mapred-test-0.21.0.jar
-rw-rw-r-- 1 huyajun huyajun  298837 2010-08-17 hadoop-mapred-tools-0.21.0.jar
drwxr-xr-x 8 huyajun huyajun    4096 2010-08-17 hdfs
drwxrwxr-x 4 huyajun huyajun    4096 2010-08-17 lib
-rw-rw-r-- 1 huyajun huyajun   13366 2010-08-17 LICENSE.txt
drwxr-xr-x 3 root    root       4096 02-17 08:54 logs
drwxr-xr-x 9 huyajun huyajun    4096 2010-08-17 mapred
-rw-rw-r-- 1 huyajun huyajun     101 2010-08-17 NOTICE.txt
-rw-rw-r-- 1 huyajun huyajun    1366 2010-08-17 README.txt
drwxrwxr-x 8 huyajun huyajun    4096 2010-08-17 webapps

Now make the Hadoop-specific configuration changes, as follows:
a. Add the following line to hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.6.0_29

b. core-site.xml after editing (the two <property> blocks below are the additions; hadoop.tmp.dir is the directory where the DFS data lives):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://namenode:9000</value>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/home/wuyanzan/hadoop-1.0.1</value>
</property>
</configuration>

c. hdfs-site.xml; the <property> block below is the addition:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>

d. mapred-site.xml; the <property> block below is the addition:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>namenode:9001</value>
</property>
</configuration>

e. masters; after editing, its entire content is:

namenode

f. slaves; after editing, its entire content is:

datanode1
datanode2

g. Add the Hadoop environment variables by appending the following to /etc/profile:

HADOOP_HOME=/usr/local/hadoop/hadoop-0.21.0
PATH=$PATH:$HADOOP_HOME/bin
export PATH HADOOP_HOME

Then run source /etc/profile.
After that, the most important step is to scp the entire Hadoop directory to the corresponding location on both datanodes, i.e. under /usr/local/.
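
A minimal sketch of that copy, run from the namenode (hostnames and paths follow the table above; adjust them if your layout differs):

scp -r /usr/local/hadoop root@datanode1:/usr/local/
scp -r /usr/local/hadoop root@datanode2:/usr/local/
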
5. Starting Hadoop, troubleshooting, and monitoring
First, format HDFS on the namenode:
-bash-3.1# cd /usr/local/hadoop/hadoop-0.21.0/bin/
-bash-3.1# ls
hadoop            hadoop-daemon.sh   hdfs            mapred            rcc        start-all.sh       start-dfs.sh     stop-all.sh       stop-dfs.sh     test.class
hadoop-config.sh  hadoop-daemons.sh  hdfs-config.sh  mapred-config.sh  slaves.sh  start-balancer.sh  start-mapred.sh  stop-balancer.sh  stop-mapred.sh  test.java
-bash-3.1# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
12/02/17 13:58:17 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = namenode/192.168.169.9
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.21.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r 985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
************************************************************/
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) Y
12/02/17 13:58:22 INFO namenode.FSNamesystem: defaultReplication = 2
12/02/17 13:58:22 INFO namenode.FSNamesystem: maxReplication = 512
12/02/17 13:58:22 INFO namenode.FSNamesystem: minReplication = 1
12/02/17 13:58:22 INFO namenode.FSNamesystem: maxReplicationStreams = 2
12/02/17 13:58:22 INFO namenode.FSNamesystem: shouldCheckForEnoughRacks = false
12/02/17 13:58:22 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/02/17 13:58:22 INFO namenode.FSNamesystem: fsOwner=root
12/02/17 13:58:22 INFO namenode.FSNamesystem: supergroup=supergroup
12/02/17 13:58:22 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/02/17 13:58:22 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/02/17 13:58:23 INFO common.Storage: Image file of size 110 saved in 0 seconds.
12/02/17 13:58:23 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
12/02/17 13:58:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at namenode/192.168.169.9
************************************************************/

After a successful format, run sh start-all.sh:

-bash-3.1# sh start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-namenode-namenode.out
datanode2: starting datanode, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-datanode-datanode2.out
datanode1: starting datanode, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-datanode-datanode1.out
namenode: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-secondarynamenode-namenode.out
starting jobtracker, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-jobtracker-namenode.out
datanode1: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-tasktracker-datanode1.out
datanode2: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.21.0/bin/../logs/hadoop-root-tasktracker-datanode2.out

Then go into the conf directory and check with jps:

-bash-3.1# cd ../conf/
-bash-3.1# ls
capacity-scheduler.xml  core-site.xml       hadoop-env.sh              hadoop-policy.xml  log4j.properties   mapred-site.xml  slaves                  ssl-server.xml.example
configuration.xsl       fair-scheduler.xml  hadoop-metrics.properties  hdfs-site.xml      mapred-queues.xml  masters          ssl-client.xml.example  taskcontroller.cfg
-bash-3.1# jps
1081 NameNode
1532 JobTracker
1376 SecondaryNameNode
1690 Jps

While we're at it, check on a datanode too:

-bash-3.1# cd /usr/local/hadoop/hadoop-0.21.0/conf/
-bash-3.1# jps
6146 DataNode
6227 Jps
6040 TaskTracker

I ran into a problem here: sometimes I would suddenly find that the DataNode had not started. Looking at the datanode's log, the error was:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-root/dfs/data: namenode namespaceID = 991936739; datanode namespaceID = 1787084289

Someone more experienced pointed out that it was caused by formatting multiple times. I deleted the hadoop-root folder under /tmp on both the namenode and the datanodes, reformatted, ran start-all, and everything worked again. The downside is that the namenode's data is lost when /tmp is wiped; the advice was to back it up before deleting. I was in a hurry to get the whole framework running, so I didn't dig into how best to do that and will look at it later.
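
A minimal sketch of that cleanup (destructive: anything already stored in HDFS is lost, so back up /tmp/hadoop-root first if you need it):

# on the namenode and on every datanode
rm -rf /tmp/hadoop-root
# then, on the namenode only
hadoop namenode -format
sh start-all.sh
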
At least the cluster is finally up.
 
Next, a simple MapReduce wordcount test. The test input is a word.txt file generated automatically by a Python script.
random_char.py is as follows:

import random,sys,string

if len(sys.argv)<2:
        print sys.argv[0],"count\n"
        sys.exit()
f=open('word.txt','a')
i=0
while i<int(sys.argv[1]):
        i=i+1
        f.write(chr(97+random.randint(0,25)))
        f.write("\n")
f.close()
print "word.txt has been created"

Running it produces a txt file of 70-odd MB.
Next we need to move the file from the local Linux filesystem into HDFS.
The commands are:

hadoop dfs -mkdir wuyanzan

hadoop dfs -put word.txt /user/root/wuyanzan

The commands above create a directory in HDFS dedicated to word.txt and upload the file into it; we can check with ls:

-bash-3.1# hadoop dfs -ls /user/root/
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

12/02/17 14:43:43 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/02/17 14:43:43 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
drwxr-xr-x   - root supergroup          0 2012-02-17 14:26 /user/root/wuyanzan

Then look inside the wuyanzan directory:

-bash-3.1# hadoop dfs -ls /user/root/wuyanzan/
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

12/02/17 14:59:10 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/02/17 14:59:11 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 1 items
-rw-r--r--   2 root supergroup    7625490 2012-02-17 14:45 /user/root/wuyanzan/word.txt

Now we can run the MapReduce job; the results go into output:

-bash-3.1# hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount wuyanzan output
12/02/17 14:46:58 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/02/17 14:46:58 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
12/02/17 14:46:58 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/02/17 14:46:58 INFO input.FileInputFormat: Total input paths to process : 1
12/02/17 14:46:59 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
12/02/17 14:46:59 INFO mapreduce.JobSubmitter: number of splits:1
12/02/17 14:46:59 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
12/02/17 14:46:59 INFO mapreduce.Job: Running job: job_201202171400_0001
12/02/17 14:47:00 INFO mapreduce.Job:  map 0% reduce 0%
12/02/17 14:47:15 INFO mapreduce.Job:  map 66% reduce 0%
12/02/17 14:47:18 INFO mapreduce.Job:  map 100% reduce 0%
12/02/17 14:47:24 INFO mapreduce.Job:  map 100% reduce 100%
12/02/17 14:47:26 INFO mapreduce.Job: Job complete: job_201202171400_0001
12/02/17 14:47:26 INFO mapreduce.Job: Counters: 33

When it finishes, we can look at what is in output:

-bash-3.1# hadoop dfs -ls /user/root/output
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

12/02/17 14:49:21 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/02/17 14:49:21 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
-rw-r--r--   2 root supergroup          0 2012-02-17 14:47 /user/root/output/_SUCCESS
-rw-r--r--   2 root supergroup        234 2012-02-17 14:47 /user/root/output/part-r-00000

The final counts are in part-r-00000; let's take a look:

-bash-3.1# hadoop dfs -cat /user/root/output/part-r-00000
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

12/02/17 14:49:47 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/02/17 14:49:47 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
a 146072
b 146313
c 147031
d 147121
e 146570
f 146807
g 146539
h 147319
i 146891
j 146895
k 146233
l 146308
m 146839
n 146945
o 147098
p 146992
q 146203
r 146640
s 146276
t 146411
u 146667
v 146083
w 146287
x 146480
y 146392
z 147333

One shortcoming I have to point out: the hadoop dfs commands have no tab completion, which is quite annoying; you have to remember complete paths or you keep getting errors.
Errors I have seen:

1. After SSHing into a datanode and running jps, the TaskTracker was up but the DataNode was not; the log said:
Could not synchronize with target
It turned out the datanode's /etc/hosts did not resolve its own localhost.
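
The missing entry was the usual loopback line; a typical default looks like this (the exact aliases vary by distribution):

127.0.0.1   localhost localhost.localdomain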

2. could only be replicated to 0 nodes, instead of 1

I stumbled on this one by accident: everything had been working fine, but after I reformatted for some reason, put started throwing this error, which gave me a scare. It turns out that if you try to upload files to HDFS immediately after bringing a cluster up (the FAQ describes this for HOD clusters), the DFSClient will complain; just wait a little while after startup and put again, and the error goes away (an underwhelming cause, frankly). Of course, as many others have pointed out, the other common causes are that HDFS is out of space or that the datanodes never started.
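
Before putting files, one quick way to confirm that the datanodes have actually registered is the dfsadmin report, which lists the live datanodes and their capacity (it prints the same deprecation notice as the other hadoop dfs commands above):

hadoop dfsadmin -report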
