[Original] Big Data Basics: Impala (1) Introduction, Installation, Usage

Impala 2.12

Official site: http://impala.apache.org/

I. Introduction

Apache Impala is the open source, native analytic database for Apache Hadoop. Impala is shipped by Cloudera, MapR, Oracle, and Amazon.

Impala is an open-source analytic database on Hadoop.

  • Do BI-style Queries on Hadoop

    • Impala provides low latency and high concurrency for BI/analytic queries on Hadoop (not delivered by batch frameworks such as Apache Hive). Impala also scales linearly, even in multitenant environments.

Impala supports low-latency, high-concurrency queries on Hadoop.

  • Unify Your Infrastructure

    • Utilize the same file and data formats and metadata, security, and resource management frameworks as your Hadoop deployment—no redundant infrastructure or data conversion/duplication.

It uses the same files, formats, and metadata as the rest of your Hadoop deployment.

  • Implement Quickly

    • For Apache Hive users, Impala utilizes the same metadata and ODBC driver. Like Hive, Impala supports SQL, so you don't have to worry about re-inventing the implementation wheel.

For Hive users, Impala uses the same metadata and ODBC driver and supports SQL.

Impala provides fast, interactive SQL queries directly on your Apache Hadoop data stored in HDFS, HBase, or the Amazon Simple Storage Service (S3). In addition to using the same unified storage platform, Impala also uses the same metadata, SQL syntax (Hive SQL), ODBC driver, and user interface (Impala query UI in Hue) as Apache Hive. This provides a familiar and unified platform for real-time or batch-oriented queries.

Impala runs fast, interactive SQL queries directly on Hadoop data (HDFS, HBase, etc.); it uses the same storage platform, metadata, SQL syntax, ODBC driver, and UI as Hive, unifying real-time and batch-oriented queries.

Impala is an addition to tools available for querying big data. Impala does not replace the batch processing frameworks built on MapReduce such as Hive. Hive and other frameworks built on MapReduce are best suited for long running batch jobs, such as those involving batch processing of Extract, Transform, and Load (ETL) type jobs.

Impala is a strong addition to the set of tools for querying big data; it does not replace existing batch-processing frameworks such as Hive (Hive is typically used for long-running ETL-style jobs).

Impala provides:

  • Familiar SQL interface that data scientists and analysts already know.
  • Ability to query high volumes of data ("big data") in Apache Hadoop.
  • Distributed queries in a cluster environment, for convenient scaling and to make use of cost-effective commodity hardware.
  • Ability to share data files between different components with no copy or export/import step; for example, to write with Pig, transform with Hive and query with Impala. Impala can read from and write to Hive tables, enabling simple data interchange using Impala for analytics on Hive-produced data.
  • Single system for big data processing and analytics, so customers can avoid costly modeling and ETL just for analytics.
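As an example of the data-interchange point above, a minimal sketch (the table name t_hive and the $impala_server placeholder are illustrative): create and load a table through Hive, then query it from Impala with no copy or export step.

$ hive -e "create table t_hive (id int, name string); insert into t_hive values (1, 'a');"
$ impala-shell -i $impala_server:21000 -q "invalidate metadata t_hive"
$ impala-shell -i $impala_server:21000 -q "select * from t_hive"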

Impala architecture

The Impala server is a distributed, massively parallel processing (MPP) database engine.

1 Impala Daemon

The core Impala component is a daemon process that runs on each DataNode of the cluster, physically represented by the impalad process. It reads and writes to data files; accepts queries transmitted from the impala-shell command, Hue, JDBC, or ODBC; parallelizes the queries and distributes work across the cluster; and transmits intermediate query results back to the central coordinator node.

The Impala daemon (impalad) is deployed alongside the DataNodes; it reads and writes data files, accepts requests from impala-shell/Hue/JDBC/ODBC, parallelizes and distributes queries, and returns results; you deploy one per DataNode.
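A quick sketch for verifying that the daemon is running on a node (25000 is the usual impalad debug web UI port, 21000 serves impala-shell and 21050 serves JDBC/ODBC; these defaults may differ in your deployment):

# ps -ef | grep impalad
# curl http://localhost:25000/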

2 Impala Statestore

The Impala component known as the statestore checks on the health of Impala daemons on all the DataNodes in a cluster, and continuously relays its findings to each of those daemons. It is physically represented by a daemon process named statestored; you only need such a process on one host in the cluster. If an Impala daemon goes offline due to hardware failure, network error, software issue, or other reason, the statestore informs all the other Impala daemons so that future queries can avoid making requests to the unreachable node.

The Impala statestore (statestored) checks and records the health of the Impala daemons, so unhealthy nodes can be excluded from future queries; only one instance is needed.
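A similar sketch for checking the statestore on the host where it runs (25010 is the usual statestored debug web UI port):

# service impala-state-store status
# curl http://localhost:25010/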

3 Impala Catalog Service

The Impala component known as the catalog service relays the metadata changes from Impala SQL statements to all the Impala daemons in a cluster. It is physically represented by a daemon process named catalogd; you only need such a process on one host in the cluster. Because the requests are passed through the statestore daemon, it makes sense to run the statestored and catalogd services on the same host.

The Impala catalog service (catalogd) is responsible for propagating metadata changes to all Impala daemons; only one instance is needed.
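Because metadata changes made outside Impala (for example, a table created directly in Hive) reach the daemons through catalogd, you typically refresh the catalog from any impala-shell session after such changes. A sketch (database and table names are illustrative):

$ impala-shell -i $impala_server:21000 -q "invalidate metadata"
$ impala-shell -i $impala_server:21000 -q "refresh some_db.some_table"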

Clients

  • The impala-shell interactive command interpreter.
  • The Hue web-based user interface.
  • JDBC.
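For JDBC, impalad listens on the HiveServer2-compatible port (21050 by default). A sketch of typical connection URLs for an unsecured cluster (adjust for Kerberos/SSL):

Cloudera Impala JDBC driver: jdbc:impala://$impala_server:21050
Hive JDBC driver: jdbc:hive2://$impala_server:21050/;auth=noSasl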

II. Installation

Installation can be done in three ways:

1 Installation with Cloudera Manager

Done through the Cloudera Manager web UI.

2 Installation with Ambari

See https://www.cnblogs.com/barneywill/p/10290849.html

3 Manual installation

1 Add the yum repo

# cat /etc/yum.repos.d/cdh.repo

[cloudera-cdh5]
# Packages for Cloudera's Distribution for Hadoop, Version 5, on RedHat or CentOS 7 x86_64
name=Cloudera's Distribution for Hadoop, Version 5
baseurl=https://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/5/
gpgkey=https://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck=1
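
After adding the repo, refresh the yum cache and confirm the Impala packages are visible (a quick sanity check):

# yum clean all
# yum makecache
# yum list available | grep impala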

2 Install the packages

# yum install impala impala-catalog impala-server impala-state-store impala-shell

You can also install the components separately:

Catalog installation
# yum install impala impala-catalog

Server installation
# yum install impala impala-server

Statestore installation
# yum install impala impala-state-store

Client installation
# yum install impala-shell

Configuration files

/etc/default/impala
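
In this file you mainly point the daemons at the hosts running statestored and catalogd. A minimal sketch of the lines typically edited (variable names as shipped in the CDH packaging; the hosts and port values are examples and may differ in your deployment):

IMPALA_CATALOG_SERVICE_HOST=$impala_server
IMPALA_STATE_STORE_HOST=$impala_server
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala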

Place the Hadoop, Hive, and HBase configuration files (core-site.xml, hdfs-site.xml, hive-site.xml, hbase-site.xml) into

/etc/impala/conf/
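
For example, assuming the client configurations live in the usual /etc/*/conf directories (a sketch; paths may differ in your environment):

# cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/impala/conf/
# cp /etc/hive/conf/hive-site.xml /etc/impala/conf/
# cp /etc/hbase/conf/hbase-site.xml /etc/impala/conf/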

Startup commands

service impala-state-store start
service impala-catalog start
service impala-server start

Note: Impala relies on the Hive metastore; version 2.12 supports Hive 2 and earlier, not Hive 3.

Impala server web UI
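
The daemons expose debug web UIs on these default ports (changeable via startup flags):

http://$impala_server:25000/ (impalad)
http://$impala_server:25010/ (statestored)
http://$impala_server:25020/ (catalogd)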

III. Usage

Using impala-shell

$ impala-shell -i $impala_server:21000
Starting Impala Shell without Kerberos authentication
Connected to $impala_server:21000
Server version: impalad version 2.12.0-cdh5.16.1 RELEASE (build 4a3775ef6781301af81b23bca45a9faeca5e761d)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.12.0-cdh5.16.1 (4a3775e) built on Wed Nov 21 21:02:28 PST 2018)

When you set a query option it lasts for the duration of the Impala shell session.
***********************************************************************************
[$impala_server:21000] >

Once connected, you can use it much like Hive.
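
A few example statements (a sketch; database and table names are illustrative):

[$impala_server:21000] > show databases;
[$impala_server:21000] > create table t1 (id int, name string) stored as parquet;
[$impala_server:21000] > insert into t1 values (1, 'a'), (2, 'b');
[$impala_server:21000] > select count(*) from t1;
[$impala_server:21000] > invalidate metadata;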

Original post: https://www.cnblogs.com/barneywill/p/10296502.html
