Big Data Web Tool: Hue

1. Installing Hue
(1) Extract the Hue installation package.
cdh]$ tar -zxf hue-3.7.0-cdh5.3.6-build.tar.gz -C /opt/app/
(2) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

[desktop]

# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

# Webserver listens on this address and port
http_host=hadoop-senior.ibeifeng.com
http_port=8888

# Time zone name
(3) Enter the /opt/app/hue-3.7.0-cdh5.3.6 directory and start Hue.
hue-3.7.0-cdh5.3.6]$ build/env/bin/supervisor
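If the supervisor starts cleanly, the web UI should answer on the host and port configured above; a quick check (assuming curl is available on the machine):
$ curl -I http://hadoop-senior.ibeifeng.com:8888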
(4) Official Hue documentation:
http://gethue.com/
http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6/manual.html#_install_hue
https://github.com/cloudera/hue

2. Integrating Hue with MapReduce
(1) Edit the configuration file /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop/hdfs-site.xml and add the following property.

<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
(2) Edit the configuration file /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop/core-site.xml and add the following properties.

<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
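With dfs.webhdfs.enabled set and the proxy-user properties in place (HDFS must be restarted for them to take effect), a REST call can confirm that WebHDFS is answering; a sketch using the standard LISTSTATUS operation, with the beifeng user assumed:
$ curl -s "http://hadoop-senior.ibeifeng.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=beifeng"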
(3) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.
The HDFS section is configured as follows:

[hadoop]

# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs

[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://hadoop-senior.ibeifeng.com:8020

# NameNode logical name.
## logical_name=

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://hadoop-senior.ibeifeng.com:50070/webhdfs/v1

# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false

# Default umask for file and directory creation, specified in an octal value.
## umask=022

# Directory of the Hadoop configuration
hadoop_conf_dir=/opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop
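As the comments above indicate, an HA deployment would go through HttpFs instead of hitting the NameNode directly; in that case webhdfs_url would point at the HttpFs host on its default port 14000. A hedged alternative, not used in this single-node setup:
# webhdfs_url=http://hadoop-senior.ibeifeng.com:14000/webhdfs/v1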
The YARN section is configured as follows:

# Configuration for YARN (MR2)
# ------------------------------------------------------------------------
[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=hadoop-senior.ibeifeng.com

# The port where the ResourceManager IPC listens on
resourcemanager_port=8032

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
resourcemanager_api_url=http://hadoop-senior.ibeifeng.com:8088

# URL of the ProxyServer API
proxy_api_url=http://hadoop-senior.ibeifeng.com:8088

# URL of the HistoryServer API
history_server_api_url=http://hadoop-senior.ibeifeng.com:19888

# In secure mode (HTTPS), if SSL certificates from Resource Manager's
# Rest Server have to be verified against certificate authority
The temporary upload directory is configured as follows:

###########################################################################
# Settings to configure the Filebrowser app
###########################################################################

[filebrowser]
# Location on local filesystem where the uploaded archives are temporary stored.
archive_upload_tempdir=/tmp
Start the NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer (one way to do this is shown below).
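A sketch for bringing these daemons up one by one, assuming a single-node cluster and the Hadoop home used throughout this article (the daemon scripts themselves are standard in Hadoop 2.x):
hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start namenode
hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start datanode
hadoop-2.5.0-cdh5.3.6]$ sbin/yarn-daemon.sh start resourcemanager
hadoop-2.5.0-cdh5.3.6]$ sbin/yarn-daemon.sh start nodemanager
hadoop-2.5.0-cdh5.3.6]$ sbin/mr-jobhistory-daemon.sh start historyserver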

3. Integrating Hue with Hive
(1) Edit the configuration file /opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf/hive-site.xml.

<!-- HiveServer2 -->
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hadoop-senior.ibeifeng.com</value>
</property>
Start HiveServer2: hive-0.13.1-cdh5.3.6]$ bin/hiveserver2
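To confirm HiveServer2 is accepting Thrift connections on the configured host and port, Beeline (bundled with Hive 0.13) can connect to it; a sketch, with the beifeng user assumed:
hive-0.13.1-cdh5.3.6]$ bin/beeline -u jdbc:hive2://hadoop-senior.ibeifeng.com:10000 -n beifeng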
(2) Edit the configuration file /opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf/hive-site.xml.

<!-- Remote MetaStore -->
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop-senior.ibeifeng.com:9083</value>
</property>

Start the metastore: hive-0.13.1-cdh5.3.6]$ bin/hive --service metastore
(3) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hadoop-senior.ibeifeng.com

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf

# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120
4. Integrating Hue with an RDBMS
(1) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

[librdbms]
# The RDBMS app can have any number of databases configured in the databases
# section. A database is known by its section name
# (IE sqlite, mysql, psql, and oracle in the list below).

[[databases]]
# sqlite configuration.
[[[sqlite]]]
# Name to show in the UI.
nice_name=SQLite

# For SQLite, name defines the path to the database.
name=/opt/app/hue-3.7.0-cdh5.3.6/desktop/desktop.db

# Database backend to use.
engine=sqlite

# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}

# mysql, oracle, or postgresql configuration.
[[[mysql]]]
# Name to show in the UI.
nice_name="My SQL DB"

# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.
name=test

# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle
engine=mysql

# IP or hostname of the database to connect to.
host=hadoop-senior.ibeifeng.com

# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521
port=3306

# Username to authenticate with when connecting to the database.
user=root

# Password matching the username to authenticate with when
# connecting to the database.
password=123456
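Before pointing Hue at this database, it is worth verifying that the host, port, and credentials above actually work; a sketch using the stock mysql client:
$ mysql -h hadoop-senior.ibeifeng.com -P 3306 -u root -p123456 -e "SHOW DATABASES;"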

5. Integrating Hue with Oozie
(1) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=http://hadoop-senior.ibeifeng.com:11000/oozie

# Requires FQDN in oozie_url if enabled
## security_enabled=false

# Location on HDFS where the workflows/coordinator are deployed when submitted.
remote_deployement_dir=/user/beifeng/examples/apps

###########################################################################
# Settings to configure the Oozie app
###########################################################################

[oozie]
# Location on local FS where the examples are stored.
local_data_dir=/opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/examples

# Location on local FS where the data for the examples is stored.
sample_data_dir=/opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/examples/input-data

# Location on HDFS where the oozie examples and workflows are stored.
remote_data_dir=/user/beifeng/examples/apps

# Maximum of Oozie workflows or coodinators to retrieve in one API call.
oozie_jobs_count=100

# Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
enable_cron_scheduling=true
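The workflows referenced by remote_data_dir must actually exist in HDFS. One way to get them there, assuming the examples tarball shipped with Oozie and the beifeng user's HDFS home directory:
oozie-4.0.0-cdh5.3.6]$ tar -zxf oozie-examples.tar.gz
oozie-4.0.0-cdh5.3.6]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put examples examples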
(2) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/conf/oozie-site.xml so that the Oozie share library belongs to the oozie user rather than the beifeng user.

<property>
<name>oozie.service.WorkflowAppService.system.libpath</name>
<value>/user/oozie/share/lib</value>
<description>
System library path to use for workflow applications.
This path is added to workflow application if their job properties sets
the property 'oozie.use.system.libpath' to true.
</description>
</property>

(3) Upload the Oozie share library.
oozie-4.0.0-cdh5.3.6]$ bin/oozie-setup.sh sharelib create -fs hdfs://hadoop-senior.ibeifeng.com:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
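If the upload succeeded, the share library should now be visible under the oozie user in HDFS; a quick check, using the Hadoop client path from earlier:
oozie-4.0.0-cdh5.3.6]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -ls /user/oozie/share/lib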
(4) Start the Oozie service.
oozie-4.0.0-cdh5.3.6]$ bin/oozied.sh start
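Once started, the Oozie CLI can confirm the server is healthy; a sketch using the admin subcommand against the oozie_url configured in hue.ini (a healthy server reports System mode: NORMAL):
oozie-4.0.0-cdh5.3.6]$ bin/oozie admin -oozie http://hadoop-senior.ibeifeng.com:11000/oozie -status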

6. Starting Hue
Enter the directory /opt/app/hue-3.7.0-cdh5.3.6/ and run the following command:
hue-3.7.0-cdh5.3.6]$ build/env/bin/supervisor
Alternatively, you can restart Hue after completing each configuration step.
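To keep Hue running in the background rather than tied to a terminal, the supervisor can be daemonized; a sketch (the -d flag is the usual way to do this with Hue's supervisor, but treat it as an assumption for this particular build):
hue-3.7.0-cdh5.3.6]$ build/env/bin/supervisor -d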

---------------------

This article is from 魏晓蕾's CSDN blog; full text: https://blog.csdn.net/gongxifacai_believe/article/details/81125718

Original post: https://www.cnblogs.com/qwangxiao/p/9692107.html

