Hadoop port summary

localhost:50030/jobtracker.jsp

localhost:50060/tasktracker.jsp

localhost:50070/dfshealth.jsp
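The three JSP pages above are the default Hadoop 1.x web UIs. A minimal sketch that maps each daemon to its UI URL (the ports are the assumed defaults, overridable in the `*-site.xml` files; the function and dictionary names are mine):

```python
# Default Hadoop 1.x web UI ports and landing pages
# (assumed defaults; a real cluster may override them in *-site.xml).
WEB_UIS = {
    "jobtracker":  (50030, "jobtracker.jsp"),
    "tasktracker": (50060, "tasktracker.jsp"),
    "namenode":    (50070, "dfshealth.jsp"),
}

def web_ui_url(host, service):
    """Build the web UI URL for a daemon running on the given host."""
    port, page = WEB_UIS[service]
    return f"http://{host}:{port}/{page}"

print(web_ui_url("localhost", "namenode"))
# http://localhost:50070/dfshealth.jsp
```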

1. NameNode process

NameNode RPC service -- runs on port 9000

INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: asn-ThinkPad-SL410/127.0.1.1:9000

The corresponding Jetty server -- runs on port 50070, the NameNode web administration port

INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070

INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070

INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070

INFO org.mortbay.log: jetty-6.1.26

INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
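The RPC endpoint can be recovered from the "up at:" log line itself. A small parsing sketch (the regex targets only the `up at: host/ip:port` shape shown above; it is my assumption about this one log format, not a Hadoop API):

```python
import re

LOG = ("INFO org.apache.hadoop.hdfs.server.namenode.NameNode: "
       "Namenode up at: asn-ThinkPad-SL410/127.0.1.1:9000")

def parse_up_at(line):
    """Extract (ip, port) from an 'up at: host/ip:port' log line."""
    m = re.search(r"up at:\s*\S*?/?(\d+\.\d+\.\d+\.\d+):(\d+)", line)
    return (m.group(1), int(m.group(2))) if m else None

print(parse_up_at(LOG))  # ('127.0.1.1', 9000)
```

The same pattern also matches the TaskTracker's "up at: localhost/127.0.0.1:58567" line later in these notes.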

2. DataNode process

DataNode data-transfer service -- runs on port 50010

INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-1647545997-127.0.1.1-50010-1399439341888

INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010  -- network topology: one new data node added to the default rack

INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 2 msecs

DatanodeRegistration(asn-ThinkPad-SL410:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020)

................. DatanodeRegistration(127.0.0.1:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020) In DataNode.run, data = FSDataset{dirpath='/opt/hadoop/data/current'}

The DataNode's Jetty server -- runs on port 50075, the DataNode web administration port

INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075

INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075

INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075

INFO org.mortbay.log: jetty-6.1.26

INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075

DataNode RPC -- runs on port 50020

INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting

INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting

INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec

INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
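Conveniently, the `DatanodeRegistration(...)` fragment above carries all three DataNode ports in one line. A parsing sketch (again an assumption about this specific log shape; the helper name and returned keys are mine):

```python
import re

REG = ("DatanodeRegistration(asn-ThinkPad-SL410:50010, "
       "storageID=DS-1647545997-127.0.1.1-50010-1399439341888, "
       "infoPort=50075, ipcPort=50020)")

def datanode_ports(line):
    """Pull the data-transfer, HTTP (info), and IPC ports out of a
    DatanodeRegistration(...) log fragment."""
    data = int(re.search(r"DatanodeRegistration\([^:]+:(\d+)", line).group(1))
    info = int(re.search(r"infoPort=(\d+)", line).group(1))
    ipc  = int(re.search(r"ipcPort=(\d+)", line).group(1))
    return {"data": data, "info": info, "ipc": ipc}

print(datanode_ports(REG))  # {'data': 50010, 'info': 50075, 'ipc': 50020}
```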

3. TaskTracker process

TaskTracker service -- runs on port 58567 (an ephemeral port picked at startup, so it differs between runs)

2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:58567

2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567

2014-05-09 08:51:54,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 58567: starting

2014-05-09 08:52:24,443 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567

The TaskTracker's Jetty server -- runs on port 50060

2014-05-09 08:52:24,513 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060

2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060

2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060

2014-05-09 08:52:24,514 INFO org.mortbay.log: jetty-6.1.26

2014-05-09 08:52:25,088 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
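Because the TaskTracker's RPC port is ephemeral, it is often easier to probe a port than to look it up. A quick TCP-connect check (a generic sketch, not Hadoop-specific; the function name is mine):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds -- a quick
    way to verify which daemon ports are actually listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 50060) -> True if the TaskTracker UI is up
```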

4. JobTracker process

A job consists of multiple tasks.

JobTracker up at: 9001

JobTracker webserver: 50030

2014-05-09 12:20:05,598 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as asn

2014-05-09 12:20:05,664 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.

2014-05-09 12:20:05,665 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.

2014-05-09 12:20:06,166 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030

2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030

2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030

2014-05-09 12:20:06,169 INFO org.mortbay.log: jetty-6.1.26

2014-05-09 12:20:07,481 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030

2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001

2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030

2014-05-09 12:20:08,165 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory

2014-05-09 12:20:08,479 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode

2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030

2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030

2014-05-09 12:20:08,513 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive

2014-05-09 12:20:08,931 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
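The 9001/50030 pair, like the NameNode's 9000/50070 pair, comes from configuration. A sketch mapping the Hadoop 1.x property names to the values implied by the logs above (the property names are the 1.x defaults as I recall them; check your own `core-site.xml`/`mapred-site.xml`):

```python
# Where the ports in the logs above are configured (Hadoop 1.x names;
# values mirror this cluster's logs -- adjust to your own *-site.xml).
PORT_CONFIG = {
    "fs.default.name":                 "hdfs://localhost:9000",  # NameNode RPC
    "mapred.job.tracker":              "localhost:9001",         # JobTracker RPC
    "dfs.http.address":                "0.0.0.0:50070",          # NameNode web UI
    "mapred.job.tracker.http.address": "0.0.0.0:50030",          # JobTracker web UI
}

def rpc_port(value):
    """Strip any scheme/host prefix and return the trailing port number."""
    return int(value.rsplit(":", 1)[1])

print(rpc_port(PORT_CONFIG["mapred.job.tracker"]))  # 9001
```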

Note: to turn off NameNode safe mode, run:

$ bin/hadoop dfsadmin -safemode leave
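Whether safe mode is actually off can be confirmed with `bin/hadoop dfsadmin -safemode get`. A small sketch that parses that command's output (the "Safe mode is ON/OFF" format is my assumption about Hadoop 1.x output):

```python
def safemode_on(output):
    """Parse the output of `hadoop dfsadmin -safemode get`.
    Expected lines look like 'Safe mode is ON' / 'Safe mode is OFF'
    (format assumed from Hadoop 1.x)."""
    return output.strip().endswith("ON")

print(safemode_on("Safe mode is ON"))   # True
print(safemode_on("Safe mode is OFF"))  # False
```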

Hadoop port summary, bubuko.com

Date: 2024-10-01 04:49:59
