A Guide to Installing, Configuring, and Deploying Hue (CDH) and Integrating It with Hadoop, HBase, Hive, MySQL, and More

Hue download: https://github.com/cloudera/hue

Hue documentation: http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6/manual.html

I am currently using hue-3.7.0-cdh5.3.6.

Hue (HUE = Hadoop User Experience)

Hue is an open-source UI for Apache Hadoop. It evolved from Cloudera Desktop, which Cloudera later contributed to the Hadoop community under the Apache Foundation, and it is built on the Python web framework Django.

With Hue we can interact with a Hadoop cluster from a web console in the browser to analyze and process data: for example, operating on data in HDFS, running MapReduce jobs, executing Hive SQL statements, browsing HBase tables, and so on.

Hue highlights:

  Supports the various Hadoop versions
  Default database: SQLite
  File browser: create, read, update, and delete data
  The Hue source tarball goes through a first and a second compilation pass; here we use a package that has already been through the first pass

Hue deployment:

1. Install the dependency packages from the yum repositories

Two variants of the command circulate; the second pulls in a few extra packages (cyrus-sasl-plain, libffi-devel, make, gmp-devel) that some systems need:

sudo yum -y install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel mvn mysql mysql-devel openldap-devel python-devel sqlite-devel openssl-devel

sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel

2. Extract the Hue tarball

  tar -zxvf hue-3.7.0-cdh5.3.6.tar.gz -C /path/to/target/directory

3. Second compilation

  Enter the hue directory and run make apps; this produces a build directory.

Error (seen on CentOS 7):
error: static declaration of ‘X509_REVOKED_dup’ follows non-static declaration
static X509_REVOKED * X509_REVOKED_dup(X509_REVOKED *orig) {
^
In file included from /usr/include/openssl/ssl.h:156:0,
from OpenSSL/crypto/x509.h:17,
from OpenSSL/crypto/crypto.h:30,
from OpenSSL/crypto/crl.c:3:
/usr/include/openssl/x509.h:751:15: note: previous declaration of ‘X509_REVOKED_dup’ was here
X509_REVOKED *X509_REVOKED_dup(X509_REVOKED *rev);
^
error: command 'gcc' failed with exit status 1

Delete the following two lines from /usr/include/openssl/x509.h (lines 751 and 752 in this build):
X509_REVOKED *X509_REVOKED_dup(X509_REVOKED *rev);
X509_REQ *X509_REQ_dup(X509_REQ *req);
## They must be deleted; commenting them out is not enough.
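Rather than editing the header by hand, the two duplicate declarations can be removed with sed, matching by pattern instead of by hard-coded line numbers (the exact lines can shift between OpenSSL releases). A sketch, demonstrated here against a scratch file; on a real CentOS 7 box, point the sed command at /usr/include/openssl/x509.h, run it with sudo, and keep a backup first:

```shell
# Demonstrate on a scratch file; substitute /usr/include/openssl/x509.h
# (plus sudo and a backup copy) on the real system.
HDR=$(mktemp)
printf '%s\n' \
  'X509_REVOKED *X509_REVOKED_dup(X509_REVOKED *rev);' \
  'X509_REQ *X509_REQ_dup(X509_REQ *req);' \
  'int keep_me;' > "$HDR"

# Delete the two duplicate declarations by pattern
sed -i \
  -e '/^X509_REVOKED \*X509_REVOKED_dup(/d' \
  -e '/^X509_REQ \*X509_REQ_dup(/d' \
  "$HDR"

cat "$HDR"   # only "int keep_me;" remains
```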

4. Go to hue-3.7.0-cdh5.3.6/desktop/conf

Edit the hue.ini file:

secret_key=jFE93j;2[290-eiw.KEiwN2s3[‘d;/.q[eIW^y#e=+Iei*@Mn<qW5o
http_host=hadoop01.xningge.com
http_port=8888
time_zone=Asia/Shanghai
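These four settings live in the [desktop] section of hue.ini, and secret_key can be any long random string. One way to generate one, assuming a Python 3 interpreter is available on the machine:

```shell
# Print a 50-character random key suitable for secret_key
python3 -c 'import secrets, string; print("".join(secrets.choice(string.ascii_letters + string.digits) for _ in range(50)))'
```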

5. Start Hue

  Two ways:

1 --> cd build/env/bin, then ./supervisor
2 --> build/env/bin/supervisor

6. Open Hue in the browser

  http://<hostname>:8888
  Create a username and password

Configuring Hue with the Hadoop components

1. HDFS configuration

  In hdfs-site.xml, add:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

  In core-site.xml, add (the "hue" in these property names is the user that the Hue process connects as):

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

2. Restart the HDFS processes

  sbin/stop-dfs.sh
  sbin/start-dfs.sh

3. Hue configuration (edit hue.ini)

[[hdfs_clusters]]
# HA support by using HttpFs

[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://hadoop01.xningge.com:8020

# NameNode logical name.
## logical_name=

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://hadoop01.xningge.com:50070/webhdfs/v1

# This is the home of your Hadoop HDFS installation
hadoop_hdfs_home=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6

# Use this as the HDFS Hadoop launcher script
hadoop_bin=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/bin

# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false

# Default umask for file and directory creation, specified in an octal value.
## umask=022

# Directory of the Hadoop configuration
hadoop_conf_dir=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop

[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=hadoop01.xningge.com

# The port where the ResourceManager IPC listens on
resourcemanager_port=8032

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
resourcemanager_api_url=http://hadoop01.xningge.com:8088

# URL of the ProxyServer API
proxy_api_url=http://hadoop01.xningge.com:8088

# URL of the HistoryServer API
history_server_api_url=http://hadoop01.xningge.com:19888

Note: all of the configuration above assumes pseudo-distributed mode.

4. Start the Hue service

  build/env/bin/supervisor

Configuring Hue with Hive

1. Hive configuration

  In hive-site.xml, add:

<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>hadoop01.xningge.com</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop01.xningge.com:9083</value>
</property>

2. Start the Hive services

  bin/hiveserver2 &

  bin/hive --service metastore &

3. Hue configuration

  Edit the hue.ini file:

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hadoop01.xningge.com

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/opt/modules/cdh/hive-0.13.1-cdh5.3.6/conf

# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120

# Choose whether Hue uses the GetLog() thrift call to retrieve Hive logs.
# If false, Hue will use the FetchResults() thrift call instead.
## use_get_log_api=true

# Set a LIMIT clause when browsing a partitioned table.
# A positive value will be set as the LIMIT. If 0 or negative, do not set any limit.
## browse_partitioned_table_limit=250

# A limit to the number of rows that can be downloaded from a query.
# A value of -1 means there will be no limit.
# A maximum of 65,000 is applied to XLS downloads.
## download_row_limit=1000000

# Hue will try to close the Hive query when the user leaves the editor page.
# This will free all the query resources in HiveServer2, but also make its results inaccessible.
## close_queries=false

# Thrift version to use when communicating with HiveServer2
## thrift_version=5

Configuring Hue with a relational database

[librdbms]
# The RDBMS app can have any number of databases configured in the databases
# section. A database is known by its section name
# (IE sqlite, mysql, psql, and oracle in the list below).

[[databases]]
# sqlite configuration.
[[[sqlite]]] # note: this section must be uncommented
# Name to show in the UI.
nice_name=SQLite

# For SQLite, name defines the path to the database.
name=/opt/modules/hue-3.7.0-cdh5.3.6/desktop/desktop.db

# Database backend to use.
engine=sqlite

# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}

# mysql, oracle, or postgresql configuration.

  ## Note: the SQLite entry above is Hue's own default database; leave it unchanged

[[[mysql]]] # note: this section must be uncommented
# Name to show in the UI.
nice_name="My SQL DB"

# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.
name=sqoop  # for MySQL, this is the database name (not a table name)

# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle
engine=mysql

# IP or hostname of the database to connect to.
host=hadoop01.xningge.com

# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521
port=3306

# Username to authenticate with when connecting to the database.
user=xningge

# Password matching the username to authenticate with when
# connecting to the database.
password=???

# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}

Configuring Hue with ZooKeeper

  Only hue.ini needs to be changed:
  host_ports=hadoop01.xningge.com:2181
  Then start ZooKeeper: bin/zkServer.sh start
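In hue.ini, the host_ports line sits under the [zookeeper] section. A minimal sketch of the fragment, with the section nesting as in the stock hue.ini and the host name taken from this guide's setup:

```ini
[zookeeper]
  [[clusters]]
    [[[default]]]
      # Comma-separated list of ZooKeeper host:port pairs
      host_ports=hadoop01.xningge.com:2181
```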

Configuring Hue with Oozie

Edit the hue.ini file:
[liboozie]
oozie_url=http://hadoop01.xningge.com:11000/oozie

  If the Oozie workflows still do not show up:
  Edit oozie-site.xml:
  <property>
    <name>oozie.service.WorkflowAppService.system.libpath</name>
    <value>/user/oozie/share/lib</value>
  </property>

  Recreate the sharelib from the oozie directory:
  bin/oozie-setup.sh sharelib create -fs hdfs://hadoop01.xningge.com:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
  Start Oozie: bin/oozied.sh start

Configuring Hue with HBase

Edit the hue.ini file:
hbase_clusters=(Cluster|hadoop01.xningge.com:9090)
hbase_conf_dir=/opt/cdh_5.3.6/hbase-0.98.6-cdh5.3.6/conf
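These two lines belong under the [hbase] section of hue.ini. A sketch of the fragment, with the host and paths from this guide and the HBase Thrift server on its default port 9090:

```ini
[hbase]
  # (Cluster-name|Thrift-server:port); Hue talks to HBase via Thrift
  hbase_clusters=(Cluster|hadoop01.xningge.com:9090)
  hbase_conf_dir=/opt/cdh_5.3.6/hbase-0.98.6-cdh5.3.6/conf
```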
Edit hbase-site.xml and add the following configuration:
<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>
<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>
Start HBase:
bin/start-hbase.sh
bin/hbase-daemon.sh start thrift

## Fully distributed HBase:
hbase_clusters=(Cluster1|hostname:9090,Cluster2|hostname:9090,Cluster3|hostname:9090)

Original article: https://www.cnblogs.com/xningge/p/8439577.html

