Single-node installation of hadoop-2.9.2 + apache-hive-2.3.4-bin

System: CentOS 7.5
Hostname: hadoop1
Software:
hadoop-2.9.2
apache-hive-2.3.4-bin
jdk-8u201-linux-x64
mysql 5.7 (installation not covered here)

Set a static IP address:
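The configuration itself is not shown in the original; a minimal sketch, assuming the interface is named ens33 (adjust interface name, netmask, gateway and DNS to your network):
[root@hadoop1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static        # static address instead of DHCP
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.3.76     # the address used throughout this guide
NETMASK=255.255.255.0
GATEWAY=192.168.3.1     # assumption
DNS1=192.168.3.1        # assumption
[root@hadoop1 ~]# systemctl restart network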

Add the hostname-to-IP mapping:
[root@hadoop1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.76 hadoop1
Disable the firewall:
[root@hadoop1 ~]# systemctl stop firewalld
[root@hadoop1 ~]# systemctl disable firewalld
Disable SELinux:
[root@hadoop1 ~]# egrep -v "^#|^$" /etc/selinux/config
SELINUX=disabled
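The commands that produce this state are not shown; a sketch that switches SELinux off immediately and keeps it disabled after the next reboot:
[root@hadoop1 ~]# setenforce 0
[root@hadoop1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
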
Other system parameters:
[root@hadoop1 ~]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144

[root@hadoop1 ~]# egrep -v "^#|^$" /etc/security/limits.conf
*       soft    nofile      65536
*       hard    nofile      131072
*       soft    nproc       65536
*       hard    nproc       65536
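sysctl -w does not survive a reboot, and the new limits only apply to new login sessions; a hedged way to persist and verify them:
[root@hadoop1 ~]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@hadoop1 ~]# ulimit -n    # open files, should report 65536 after re-login
[root@hadoop1 ~]# ulimit -u    # max user processes, should report 65536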

Install Java:
[root@hadoop1 opt]# ls
jdk-8u201-linux-x64.rpm
[root@hadoop1 opt]# rpm -ih jdk-8u201-linux-x64.rpm
warning: jdk-8u201-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
################################# [100%]
Updating / installing...
################################# [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...

[root@hadoop1 opt]# java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)    
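The RPM installs the JDK under /usr/java/jdk1.8.0_201-amd64 (the path referenced later in hadoop-env.sh). If you also want JAVA_HOME set system-wide, one possible sketch:
[root@hadoop1 opt]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
export PATH=$JAVA_HOME/bin:$PATH
[root@hadoop1 opt]# source /etc/profile.d/java.sh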

Create the hadoop account (password: 987654321):
[root@hadoop1 opt]# useradd hadoop
[root@hadoop1 opt]# passwd hadoop

Grant the hadoop user sudo privileges (one possible approach is sketched below)
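The original does not show how sudo is configured; a minimal sketch using a drop-in file (NOPASSWD is a convenience choice, remove it if you prefer to be prompted for a password):
[root@hadoop1 opt]# echo "hadoop ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/hadoop
[root@hadoop1 opt]# chmod 440 /etc/sudoers.d/hadoop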

Reboot the system:
[root@hadoop1 ~]# reboot

The following steps are performed as the hadoop user:

Set up passwordless SSH login:
[hadoop@hadoop1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
6e:d0:62:99:3c:02:10:cf:00:5a:71:3c:6f:82:67:94 hadoop@hadoop1
The key's randomart image is:
+--[ RSA 2048]----+
(randomart image omitted)
+-----------------+
[hadoop@hadoop1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@hadoop1
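Before starting Hadoop it is worth confirming that key-based login works; start-all.sh also connects via localhost, so both should succeed without a password prompt (accept the host-key question the first time):
[hadoop@hadoop1 ~]$ ssh hadoop1 hostname     # should print hadoop1, no password asked
[hadoop@hadoop1 ~]$ ssh localhost hostname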

Set up Hadoop:
[root@hadoop1 hadoop]# su - hadoop
Last login: Fri Apr 19 20:56:04 CST 2019 on pts/0
[hadoop@hadoop1 ~]$ ls
hadoop-2.9.2.tar.gz
[hadoop@hadoop1 ~]$ tar -zxf hadoop-2.9.2.tar.gz
[hadoop@hadoop1 ~]$ ls
hadoop-2.9.2 hadoop-2.9.2.tar.gz

Configure hadoop-env.sh:
[hadoop@hadoop1 ~]$ vim hadoop-2.9.2/etc/hadoop/hadoop-env.sh
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
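The rest of the article invokes Hadoop by relative path (hadoop-2.9.2/bin/..., hadoop-2.9.2/sbin/...). If you would rather have the commands on PATH, a hedged addition to the hadoop user's .bashrc:
export HADOOP_HOME=/home/hadoop/hadoop-2.9.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin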

Configure core-site.xml:
[hadoop@hadoop1 ~]$ vim hadoop-2.9.2/etc/hadoop/core-site.xml
<configuration>
<!-- Default file system URI, i.e. the address of the NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.3.76:9000</value>
</property>
<!-- Directory for temporary files generated while Hadoop runs -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
</configuration>

[hadoop@hadoop1 ~]$ mkdir tmp

Configure hdfs-site.xml:
[hadoop@hadoop1 ~]$ vim hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/dfs/namenode</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/dfs/datanode</value>
<final>true</final>
</property>
<property>
<name>dfs.http.address</name>
<value>192.168.3.76:50070</value>
<description>The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
</property>
<!-- Number of HDFS block replicas -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>

Create the directories:
[hadoop@hadoop1 ~]$ mkdir dfs/datanode -p
[hadoop@hadoop1 ~]$ mkdir dfs/namenode -p

Configure mapred-site.xml:
[hadoop@hadoop1 ~]$ cp hadoop-2.9.2/etc/hadoop/mapred-site.xml.template hadoop-2.9.2/etc/hadoop/mapred-site.xml
[hadoop@hadoop1 ~]$ vim hadoop-2.9.2/etc/hadoop/mapred-site.xml
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://192.168.3.76:9001</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>file:/home/hadoop/mapred/system</value>
<final>true</final>
</property>
<property>
<name>mapred.local.dir</name>
<value>file:/home/hadoop/mapred/local</value>
<final>true</final>
</property>
Create the directories:
[hadoop@hadoop1 ~]$ mkdir mapred/local -p
[hadoop@hadoop1 ~]$ mkdir mapred/system -p

Configure yarn-site.xml:
[hadoop@hadoop1 ~]$ vim hadoop-2.9.2/etc/hadoop/yarn-site.xml
<!-- Address of the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>192.168.3.76</value>
</property>
<!-- How reducers fetch data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

Format HDFS:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs namenode -format
If the output contains "successfully formatted" (the second line below), the format succeeded:
19/04/19 21:24:20 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1608387477-192.168.3.76-1555680260442
19/04/19 21:24:20 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
19/04/19 21:24:20 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
19/04/19 21:24:20 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 325 bytes saved in 0 seconds .
19/04/19 21:24:20 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/04/19 21:24:20 INFO namenode.NameNode: SHUTDOWN_MSG:
/****
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.3.76
****/
Start and test HDFS:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /home/hadoop/hadoop-2.9.2/logs/hadoop-hadoop-namenode-hadoop1.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.9.2/logs/hadoop-hadoop-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.9.2/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.9.2/logs/yarn-hadoop-resourcemanager-hadoop1.out
localhost: starting nodemanager, logging to /home/hadoop/hadoop-2.9.2/logs/yarn-hadoop-nodemanager-hadoop1.out

Check the running daemons:
[hadoop@hadoop1 ~]$ jps
4705 SecondaryNameNode
4865 ResourceManager
4386 NameNode
5157 NodeManager
5318 Jps
4488 DataNode

Example test:
[hadoop@hadoop1 ~]$ ll
total 357872
drwxrwxr-x 4 hadoop hadoop 36 Apr 19 22:03 dfs
drwxr-xr-x 10 hadoop hadoop 150 Apr 19 21:27 hadoop-2.9.2
-rw-r--r-- 1 hadoop hadoop 366447449 Apr 19 20:56 hadoop-2.9.2.tar.gz
drwxrwxr-x 4 hadoop hadoop 31 Apr 19 21:52 mapred
-rw-r--r-- 1 hadoop hadoop 11323 Apr 19 22:11 qqqq.xlsx
drwxrwxr-x 4 hadoop hadoop 35 Apr 19 22:06 tmp
File paths here must start with '/'. As far as I can tell HDFS works on absolute paths, since there is no '-cd'-style command to change directories.
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -mkdir /input
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -put qqqq.xlsx /input
You can also list what is now inside the newly created /input directory:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2019-04-19 22:14 /input
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /input
Found 1 items
-rw-r--r-- 1 hadoop supergroup 11323 2019-04-19 22:14 /input/qqqq.xlsx
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hadoop jar hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep /input /output 'dfs[a-z.]+'

19/04/19 22:23:05 INFO mapreduce.Job: Job job_1555682785585_0001 completed successfully
19/04/19 22:23:23 INFO mapreduce.Job: Job job_1555682785585_0002 completed successfully

Result:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /
Found 4 items
drwxr-xr-x - hadoop supergroup 0 2019-04-19 22:14 /input
drwxr-xr-x - hadoop supergroup 0 2019-04-19 22:23 /output
drwx------ - hadoop supergroup 0 2019-04-19 22:22 /tmp
drwxr-xr-x - hadoop supergroup 0 2019-04-19 22:22 /user
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /output
Found 2 items
-rw-r--r-- 1 hadoop supergroup 0 2019-04-19 22:23 /output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 0 2019-04-19 22:23 /output/part-r-00000
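Both output files are 0 bytes here, presumably because the binary .xlsx contains no text matching 'dfs[a-z.]+'. To inspect a job's result you can cat the part file:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -cat /output/part-r-00000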

Test two:
[hadoop@hadoop1 ~]$ ll
total 357876
drwxrwxr-x 4 hadoop hadoop 36 Apr 19 22:03 dfs
drwxr-xr-x 10 hadoop hadoop 150 Apr 19 21:27 hadoop-2.9.2
-rw-r--r-- 1 hadoop hadoop 366447449 Apr 19 20:56 hadoop-2.9.2.tar.gz
drwxrwxr-x 4 hadoop hadoop 31 Apr 19 21:52 mapred
-rw-r--r-- 1 hadoop hadoop 11323 Apr 19 22:11 qqqq.xlsx
drwxrwxr-x 4 hadoop hadoop 35 Apr 19 22:06 tmp
-rw-rw-r-- 1 hadoop hadoop 213 Apr 19 22:30 www.text
[hadoop@hadoop1 ~]$ cat www.text
http://blog.csdn.net/u012342408/article/details/50520696
http://blog.csdn.net/hitwengqi/article/details/8008203
http://blog.csdn.net/zl007700/article/details/50533675
https://www.cnblogs.com/yanglf/p/4020555.html

[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -put www.text /input
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /input
Found 2 items
-rw-r--r-- 1 hadoop supergroup 11323 2019-04-19 22:14 /input/qqqq.xlsx
-rw-r--r-- 1 hadoop supergroup 213 2019-04-19 22:31 /input/www.text

[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hadoop jar hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep /input/www.text /www 'dfs[a-z.]+'

19/04/19 22:33:33 INFO mapreduce.Job: Job job_1555682785585_0004 completed successfully
19/04/19 22:33:51 INFO mapreduce.Job: Job job_1555682785585_0005 completed successfully

Result:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /www
Found 2 items
-rw-r--r-- 1 hadoop supergroup 0 2019-04-19 22:33 /www/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 0 2019-04-19 22:33 /www/part-r-00000

Check the Hive installation packages:
[root@hadoop1 hive]# su - hadoop
Last login: Sat Apr 20 15:46:10 CST 2019 on pts/1
[hadoop@hadoop1 ~]$ ll
total 589016
-rw-r--r-- 1 hadoop hadoop 232234292 Apr 20 15:53 apache-hive-2.3.4-bin.tar.gz
drwxrwxr-x 4 hadoop hadoop 36 Apr 19 22:03 dfs
drwxr-xr-x 10 hadoop hadoop 150 Apr 19 21:27 hadoop-2.9.2
-rw-r--r-- 1 hadoop hadoop 366447449 Apr 19 20:56 hadoop-2.9.2.tar.gz
drwxrwxr-x 4 hadoop hadoop 31 Apr 19 21:52 mapred
-rw-r--r-- 1 hadoop hadoop 4452049 Apr 20 15:53 mysql-connector-java-5.1.47.tar.gz
-rw-r--r-- 1 hadoop hadoop 11323 Apr 19 22:11 qqqq.xlsx
drwxrwxr-x 4 hadoop hadoop 35 Apr 19 22:06 tmp
-rw-rw-r-- 1 hadoop hadoop 213 Apr 19 22:30 www.text
[hadoop@hadoop1 ~]$ tar -zxf apache-hive-2.3.4-bin.tar.gz
[hadoop@hadoop1 ~]$ tar -zxf mysql-connector-java-5.1.47.tar.gz

Set Hive environment variables:
[hadoop@hadoop1 ~]$ egrep -v "^#|^$" .bashrc
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export HIVE_HOME=/home/hadoop/apache-hive-2.3.4-bin
export PATH=$PATH:$HIVE_HOME/bin

[hadoop@hadoop1 ~]$ source .bashrc

Create the configuration files:
[hadoop@hadoop1 conf]$ pwd
/home/hadoop/apache-hive-2.3.4-bin/conf
[hadoop@hadoop1 conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@hadoop1 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@hadoop1 conf]$ cp hive-log4j2.properties.template hive-log4j2.properties
[hadoop@hadoop1 conf]$ cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

Configure hive-site.xml, modifying the following entries:

<property>
<name>hive.exec.scratchdir</name>
<value>/home/hadoop/tmp/hive-${user.name}</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>

<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/hadoop/tmp/${user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>

<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/hadoop/tmp/hive/resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/home/hadoop/tmp/${user.name}/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

<property>
<name>hive.querylog.location</name>
<value>/home/hadoop/tmp/${user.name}</value>
<description>Location of Hive run time structured log file</description>
</property>

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.3.76:3306/hive_metadata?&amp;createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>Wd#GDrf142D</value>
<description>password to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
<description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>

<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
<description>
Enforce metastore schema version consistency.
True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
proper metastore schema migration. (Default)
False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
</description>
</property>

Configure hive-env.sh:

# Set HADOOP_HOME to point to a specific hadoop install directory

# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=/home/hadoop/hadoop-2.9.2

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/home/hadoop/apache-hive-2.3.4-bin/conf

Load the MySQL JDBC driver:
[hadoop@hadoop1 ~]$ ll
total 231144
drwxrwxr-x 10 hive hive 4096 Apr 20 14:21 apache-hive-2.3.4-bin
-rw-r--r-- 1 hive hive 232234292 Apr 20 14:16 apache-hive-2.3.4-bin.tar.gz
-rw-r--r-- 1 hive hive 4452049 Apr 20 15:23 mysql-connector-java-5.1.47.tar.gz
drwxrwxr-x 3 hive hive 17 Apr 20 15:07 tmp
[hadoop@hadoop1 ~]$ tar -zxf mysql-connector-java-5.1.47.tar.gz
[hadoop@hadoop1 ~]$ cp mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar apache-hive-2.3.4-bin/lib/

Create HDFS directories for Hive:
Before creating tables in Hive, create the /tmp and /user/hive/warehouse directories on HDFS (/user/hive/warehouse is the default value of the hive.metastore.warehouse.dir property in hive-site.xml) and grant them group write permission:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -mkdir tmp
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -mkdir -p /user/hive/warehouse
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -chmod g+w /user/hive/warehouse
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -chmod g+w tmp/
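Note that 'tmp' without a leading '/' is resolved relative to the user's HDFS home, so the two commands above actually operate on /user/hadoop/tmp rather than /tmp. A quick check of what was created:
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /user/hive
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /user/hadoop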

Create a MySQL user hive with password hive (for use if you do not want Hive to connect with the MySQL root account):
$ mysql -u root -p    # the root password was previously set to 123456
mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY "hive";
mysql> grant all privileges on *.* to 'hive'@'localhost' identified by 'hive';

Allow remote access for the root account:
mysql> use mysql;
mysql> update user set host = '%' where user = 'root';
mysql> flush privileges;
mysql> select host, user from user;
+-----------+---------------+
| host      | user          |
+-----------+---------------+
| %         | root          |
| localhost | mysql.session |
| localhost | mysql.sys     |
+-----------+---------------+
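The hive-site.xml above connects as root. If you prefer to use the hive account created earlier instead, it needs privileges on the metastore database and the ConnectionUserName/ConnectionPassword properties must be changed to hive/hive; a hedged sketch ('hive_metadata' matches the database name in the JDBC URL, and '%' is needed because the connection goes through 192.168.3.76 rather than localhost):
mysql> GRANT ALL PRIVILEGES ON hive_metadata.* TO 'hive'@'%' IDENTIFIED BY 'hive';
mysql> FLUSH PRIVILEGES;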

Run Hive
HDFS must already be running before the hive command is launched; it can be started with start-dfs.sh.

Starting with Hive 2.1, the schematool command must be run first to initialize the metastore schema.

[hadoop@hadoop1 ~]$ apache-hive-2.3.4-bin/bin/schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:mysql://192.168.3.76:3306/hive_metadata?&createDatabaseIfNotExist=true&characterEncoding=UTF-8&useSSL=false
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:   root
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed
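You can confirm that schematool created the metastore tables by looking inside MySQL (the exact table list varies by Hive version; this is only a sanity check):
$ mysql -u root -p
mysql> use hive_metadata;
mysql> show tables;    -- should list metastore tables such as DBS, TBLS and VERSION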

Start Hive and test it:
[hadoop@hadoop1 ~]$ apache-hive-2.3.4-bin/bin/hive
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin:/home/hadoop/apache-hive-2.3.4-bin/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/home/hadoop/apache-hive-2.3.4-bin/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show tables;
OK
Time taken: 3.236 seconds
hive> show databases;
OK
default
Time taken: 0.055 seconds, Fetched: 1 row(s)
hive> 

Simple HiveQL statement tests:
Create a table:
hive> CREATE TABLE IF NOT EXISTS test (id INT,name STRING)ROW FORMAT DELIMITED FIELDS TERMINATED BY " " LINES TERMINATED BY "\n";
OK
Time taken: 0.524 seconds
hive> insert into test values(1,'张三');
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20190420163725_0be10015-72ae-4642-b2c4-311aaeaacaa8
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1555738609578_0001, Tracking URL = http://hadoop1:8088/proxy/application_1555738609578_0001/
Kill Command = /home/hadoop/hadoop-2.9.2/bin/hadoop job -kill job_1555738609578_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-04-20 16:37:38,182 Stage-1 map = 0%, reduce = 0%
2019-04-20 16:37:43,443 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.28 sec
MapReduce Total cumulative CPU time: 2 seconds 280 msec
Ended Job = job_1555738609578_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://192.168.3.76:9000/user/hive/warehouse/test/.hive-staging_hive_2019-04-20_16-37-25_672_7073846121967206245-1/-ext-10000
Loading data to table default.test
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 2.28 sec HDFS Read: 4249 HDFS Write: 77 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 280 msec
OK
Time taken: 19.356 seconds
hive> select * from test;
OK
1 张三
Time taken: 0.352 seconds, Fetched: 1 row(s)
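The inserted row ends up as a file under the warehouse directory seen in the job log above; a quick way to look at it from HDFS (the part file name is whatever the job produced, typically 000000_0):
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -ls /user/hive/warehouse/test
[hadoop@hadoop1 ~]$ hadoop-2.9.2/bin/hdfs dfs -cat /user/hive/warehouse/test/000000_0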

Original article: https://blog.51cto.com/1054054/2382121
