Hadoop ecosystem

  1. How did it all start? Huge data on the web!
  2. Nutch was built to crawl this web data
  3. Huge data had to be saved – HDFS was born!
  4. How to use this data?
  5. The MapReduce framework was built for coding and running analytics – in Java, or in any language via Streaming/Pipes
  6. How to get unstructured data in – web logs, clickstreams, Apache logs, server logs – FUSE, WebDAV, Chukwa, Flume, Scribe
  7. HIHO and Sqoop for loading data into HDFS – RDBMSs can join the Hadoop bandwagon!
  8. High-level interfaces were needed over low-level MapReduce programming – Pig, Hive, Jaql
  9. BI tools with advanced UI reporting (drill-down, etc.) – Intellicus
  10. Workflow tools over MapReduce processes and the high-level languages
  11. Monitor and manage Hadoop, run jobs/Hive queries, view HDFS – a high-level view – Hue, Karmasphere, Eclipse plugin, Cacti, Ganglia
  12. Support frameworks – Avro (serialization), ZooKeeper (coordination)
  13. More high-level interfaces/uses – Mahout, Elastic MapReduce
  14. OLTP is also possible – HBase
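Item 5's point about "any language" is Hadoop Streaming: any program that reads key/value lines on stdin and writes them to stdout can serve as a mapper or reducer. The sketch below simulates that contract locally in Python for a word count; the sample input is invented for illustration, and the `sorted()` call stands in for the sort/shuffle Hadoop performs between the two phases.

```python
# Sketch of the Hadoop Streaming contract (item 5): a mapper emits
# (key, value) pairs, the framework sorts them by key, and a reducer
# aggregates each key's group. Sample input is illustrative only.
import itertools
import operator

def mapper(lines):
    """Map side of word count: emit a (word, 1) pair per token."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce side: sum counts per word. Pairs arrive sorted by key,
    mimicking Hadoop's shuffle phase."""
    for word, group in itertools.groupby(pairs, key=operator.itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    docs = ["Nutch crawls the web", "HDFS stores the web data"]
    shuffled = sorted(mapper(docs))  # stand-in for Hadoop's sort/shuffle
    for word, count in reducer(shuffled):
        print(f"{word}\t{count}")
```

On a real cluster the same two functions would run as separate `mapper.py`/`reducer.py` scripts passed to the `hadoop-streaming` jar, with HDFS files replacing the in-memory list.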
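Item 8's observation is that Pig, Hive, and Jaql replace hand-written MapReduce jobs with short declarative queries. As a rough illustration, here is the kind of aggregation Hive expresses in one SQL statement, run against Python's built-in sqlite3 as a stand-in (the `weblogs` table and its rows are hypothetical; in Hive this would be an external table over HDFS, compiled down to MapReduce jobs).

```python
# One declarative GROUP BY replaces a mapper, a shuffle, and a reducer.
# sqlite3 stands in for Hive here; table name and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weblogs (url TEXT, status INTEGER)")
conn.executemany(
    "INSERT INTO weblogs VALUES (?, ?)",
    [("/index", 200), ("/index", 200), ("/missing", 404)],
)

rows = conn.execute(
    "SELECT url, COUNT(*) AS hits FROM weblogs GROUP BY url ORDER BY hits DESC"
).fetchall()
for url, hits in rows:
    print(url, hits)
```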

Hadoop ecosystem – bubuko.com

Date: 2024-07-28 15:16:43
