HDFS Setup Notes

1. Three servers:

172.17.0.62(namenode)

172.17.0.68(datanode)

172.17.0.76(datanode)

Contents of /etc/hosts:

Hostname mappings present on all three servers (note: do not use underscores (_) in the hostnames, otherwise the DataNode will not start):

172.17.0.76  tomcattomcatmobileplatformtest.bjdv   # no underscores (_)

172.17.0.62  tomcatmobileplatform.bjdv

172.17.0.68  mobileplatformweblogic.bjdv

# each server also maps its own hostname, for example:

127.0.0.1 cd6d8c70882c
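
A quick sanity check of the mappings on each host before continuing (not part of the original notes):

# getent hosts tomcatmobileplatform.bjdv
# getent hosts mobileplatformweblogic.bjdv
# getent hosts tomcattomcatmobileplatformtest.bjdv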

2. Configure passwordless SSH between the NameNode and the DataNodes, and passwordless SSH to localhost on every host.

Run the following on each server:   # passwordless SSH to localhost

1) ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa

2) cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys

3)chmod 0600 ~/.ssh/authorized_keys

Copy id_dsa.pub from the NameNode to each DataNode (and vice versa), then run steps 2) and 3) on the receiving host, for example as sketched below.   # passwordless SSH between NameNode and DataNodes
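
A concrete sketch of the key exchange, assuming the root account is used throughout as above (the temporary file name namenode_id_dsa.pub is just an illustration):

On the NameNode (172.17.0.62):

# scp /root/.ssh/id_dsa.pub root@172.17.0.68:/root/namenode_id_dsa.pub
# scp /root/.ssh/id_dsa.pub root@172.17.0.76:/root/namenode_id_dsa.pub

Then on each DataNode:

# cat /root/namenode_id_dsa.pub >> /root/.ssh/authorized_keys
# chmod 0600 /root/.ssh/authorized_keys

Repeat in the opposite direction so the DataNodes can also reach the NameNode without a password.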

3. Download Hadoop (these notes use hadoop-2.7.2) and copy it to all three servers.
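
For example (a sketch; the mirror URL is an assumption, any Apache mirror works, and /george matches the install path used later in these notes):

# mkdir -p /george
# wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
# tar -xzf hadoop-2.7.2.tar.gz -C /george
# scp -r /george/hadoop-2.7.2 root@172.17.0.68:/george/
# scp -r /george/hadoop-2.7.2 root@172.17.0.76:/george/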

4. Configuration files:

etc/hadoop/core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.17.0.62:9000</value>
</property>
</configuration>

etc/hadoop/hadoop-env.sh:

Add JAVA_HOME and HADOOP_PID_DIR:

export JAVA_HOME=/root/jdk1.7.0_60
export HADOOP_PID_DIR=/george/h/pids

etc/hadoop/hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/george/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/george/datanode</value>
    </property>
</configuration>
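
The two dir properties above point at the local filesystem of each node. HDFS normally creates them on its own, but creating them in advance avoids permission surprises (a precaution, not a step from the original notes):

# mkdir -p /george/namenode     # on the NameNode
# mkdir -p /george/datanode     # on each DataNode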

etc/hadoop/slaves:

172.17.0.68
172.17.0.76

On the NameNode server:
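
The original notes skip it, but on a brand-new cluster the NameNode metadata directory normally has to be formatted once before the first start:

# bin/hdfs namenode -format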

# sbin/start-dfs.sh

Check the processes:

# jps
27831 Jps
27353 NameNode
27584 SecondaryNameNode
# bin/hdfs dfsadmin -report   # run from the Hadoop install directory, /george/hadoop-2.7.2
Configured Capacity: 20869324800 (19.44 GB)
Present Capacity: 14557261824 (13.56 GB)
DFS Remaining: 14557155328 (13.56 GB)
DFS Used: 106496 (104 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 172.17.0.68:50010 (mobileplatformweblogic.bjdv)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 10434662400 (9.72 GB)
DFS Used: 49152 (48 KB)
Non DFS Used: 4756729856 (4.43 GB)
DFS Remaining: 5677883392 (5.29 GB)
DFS Used%: 0.00%
DFS Remaining%: 54.41%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Aug 29 14:55:37 CST 2016

Name: 172.17.0.76:50010 (tomcattomcatmobileplatformtest.bjdv)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 10434662400 (9.72 GB)
DFS Used: 57344 (56 KB)
Non DFS Used: 1555333120 (1.45 GB)
DFS Remaining: 8879271936 (8.27 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.09%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Aug 29 14:55:38 CST 2016

(The NameNode web UI at http://172.17.0.62:50070/dfshealth.html does not look right: it only shows one DataNode, possibly because both DataNodes report their hostname as "localhost" and the UI collapses them into a single entry. The bin/hdfs dfsadmin -report output above is correct.)

Test:

# bin/hadoop fs -put LICENSE.txt /
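
To confirm the upload landed, a quick check (not in the original notes):

# bin/hadoop fs -ls /
# bin/hadoop fs -cat /LICENSE.txt | head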

On the DataNode servers:

Check the processes:

# jps
21622 DataNode
21774 Jps

View the logs:

# tail -f logs/hadoop-root-datanode-cd6d8c70882c.log

On a DataNode:

Without compression, after uploading a file to HDFS its block can be seen on one of the DataNodes (e.g. blk_1073741828). Since dfs.replication is set to 1, only a single copy of the file exists across all DataNodes.

# ls -l datanode/current/BP-333745209-172.17.0.62-1472441549732/current/finalized/subdir0/subdir0/
total 8
-rw-r--r-- 1 root root 1366 Aug 29 14:55 blk_1073741828
-rw-r--r-- 1 root root   19 Aug 29 14:55 blk_1073741828_1004.meta

The above are my own tests and notes from setting up HDFS; corrections are welcome if anything is wrong.
