Building a small ZooKeeper cluster from three identical Ubuntu VMs cloned with VMware

Configuring a ZooKeeper cluster for the first time

Running Storm requires a cluster, so I cloned the Ubuntu virtual machine on my own computer and used the IP addresses VMware assigned to the VMs. The three machines are configured identically except for the myid file. After reading so much about consistency and leader election, I also wanted to watch the process happen for myself.

First, download and install Ubuntu and set up the JDK.

Then download ZooKeeper and add it to ~/.bashrc.

source ~/.bashrc    # make the new configuration take effect
echo $PATH          # check that the PATH contains the directories with the java and zookeeper executables
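The exact ~/.bashrc additions are not shown in this post; below is a minimal sketch, assuming the JDK and ZooKeeper were unpacked under ~/StormProcessing as in the paths used later (both paths are assumptions, adjust them to your layout):

```shell
# Hypothetical ~/.bashrc lines -- adjust both paths to your actual layout.
export JAVA_HOME="$HOME/StormProcessing/jdk1.8"
export ZOOKEEPER_HOME="$HOME/StormProcessing/zookeeper-3.4.8"
export PATH="$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin"

# After "source ~/.bashrc", confirm the ZooKeeper bin directory is on PATH:
case ":$PATH:" in
  *":$ZOOKEEPER_HOME/bin:"*) echo "zookeeper bin on PATH" ;;
  *)                         echo "zookeeper bin missing from PATH" ;;
esac
```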
Edit ZooKeeper's configuration file:

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/conf$ ls
configuration.xsl  log4j.properties  zoo.cfg  zoo.cfg~
jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/conf$ cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/jason/StormProcessing/data
dataLogDir=/home/jason/StormProcessing/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

# 2888 is the port this server uses to exchange data with the cluster's leader;
# 3888 is the port used to elect a new leader if the current one dies.
# In a real cluster the servers can all use the same two ports, but in a
# pseudo-cluster (all instances on one host) the ports must differ.
server.1=192.168.60.129:2888:3888
server.2=192.168.60.132:2888:3888
server.3=192.168.60.133:2888:3888
#server.2=127.0.0.1:2889:3889
Create the two directories the config refers to:

dataDir=/home/jason/StormProcessing/data
dataLogDir=/home/jason/StormProcessing/log
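The commented-out last line of zoo.cfg hints at the pseudo-cluster case mentioned above. A hypothetical single-host layout would give every instance its own zoo.cfg, dataDir, clientPort, and distinct quorum/election ports, for example:

```
# instance 1 uses clientPort=2181, instance 2 uses 2182, instance 3 uses 2183
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
```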

In the data directory, create a text file named myid containing nothing but the digit 1.

Clone the disk into separate virtual machines, then change myid to 2 and 3 on the clones, making sure each myid matches the server.N entry for that machine's IP.
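The per-node setup above can be sketched as follows (BASE matches the directory layout used in this post; only the myid value differs between nodes):

```shell
# Run on each VM; write 2 and 3 instead of 1 on the clones.
BASE="${BASE:-$HOME/StormProcessing}"
mkdir -p "$BASE/data" "$BASE/log"
echo 1 > "$BASE/data/myid"
cat "$BASE/data/myid"   # prints: 1
```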

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Start ZooKeeper this way on all three virtual machines, then check the status on any one of them:

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
Running

netstat -an | grep 2181

returns nothing.

Google turns up several possible causes:
1. The data or log directory was never created -- not my problem.
2. A mistake in the configuration file -- not my problem either.
3. A duplicated port definition in the startup script zkServer.sh -- also not my problem (I never touched that file).
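The first two causes can be ruled out mechanically with a small check (BASE and the zoo.cfg path follow this post's layout and are assumptions, adjust as needed):

```shell
BASE="${BASE:-$HOME/StormProcessing}"
# Cause 1: do the data and log directories actually exist?
for d in "$BASE/data" "$BASE/log"; do
  if [ -d "$d" ]; then echo "ok: $d"; else echo "missing: $d"; fi
done
# Cause 2: are the server.N lines present in the config you started with?
grep -E '^server\.[0-9]+=' "$BASE/zookeeper-3.4.8/conf/zoo.cfg" 2>/dev/null
```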

Check zookeeper.out:

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ cat zookeeper.out
nohup: failed to run command '/home/jason/StormProcessing/jdk1.8/bin/java': Permission denied

That points to the cause, but

chmod 755 java

did not fix it.
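The nohup message is a plain execute-permission failure, which chmod normally does fix; that it didn't here suggests something else was in the way (a guess: a parent directory missing the traverse bit, or the file owned by root). The mechanism itself is easy to reproduce without touching the real JDK:

```shell
# A file without the execute bit cannot be run, even if it is readable.
tmp=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$tmp"
chmod 644 "$tmp"                          # read/write, no execute bit
"$tmp" 2>/dev/null || echo "permission denied, as expected"
chmod 755 "$tmp"                          # add the execute bit
"$tmp"                                    # now succeeds and prints "ok" (assuming /tmp is not mounted noexec)
rm -f "$tmp"
```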

Inspired by reference 2, I replaced the first two commands below with the last two, and everything behaved normally:

./zkServer.sh start
./zkServer.sh status
sudo ./zkServer.sh start
sudo ./zkServer.sh status
Here is server.1:

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ sudo ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower
jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ netstat -an | grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN

server.2:

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ sudo ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower

server.3:

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ sudo ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader
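Instead of logging into each VM, the roles can also be queried remotely with ZooKeeper's four-letter `srvr` command, e.g. `echo srvr | nc 192.168.60.129 2181`. Below, a representative follower response is parsed for its Mode line; the response text is illustrative (values made up), only the `Mode:` field matters:

```shell
# Representative srvr output from a follower (shape only; counters are made up).
resp='Zookeeper version: 3.4.8--1, built on 02/06/2016 03:18 GMT
Latency min/avg/max: 0/0/0
Received: 5
Sent: 4
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: follower
Node count: 4'
echo "$resp" | awk -F': ' '/^Mode/ {print $2}'   # prints: follower
```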

Below is the new, successful zookeeper.out (note java.home in it: under sudo, root's OpenJDK 7 was picked up rather than the jdk1.8 under ~/StormProcessing):

jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ sudo cat zookeeper.out
2016-05-25 13:21:29,768 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf/zoo.cfg
2016-05-25 13:21:29,817 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 192.168.60.133 to address: /192.168.60.133
2016-05-25 13:21:29,817 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 192.168.60.132 to address: /192.168.60.132
2016-05-25 13:21:29,818 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 192.168.60.129 to address: /192.168.60.129
2016-05-25 13:21:29,818 [myid:] - INFO  [main:QuorumPeerConfig@331] - Defaulting to majority quorums
2016-05-25 13:21:29,832 [myid:1] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2016-05-25 13:21:29,832 [myid:1] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2016-05-25 13:21:29,832 [myid:1] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2016-05-25 13:21:29,845 [myid:1] - INFO  [main:QuorumPeerMain@127] - Starting quorum peer
2016-05-25 13:21:29,970 [myid:1] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
2016-05-25 13:21:29,993 [myid:1] - INFO  [main:QuorumPeer@1019] - tickTime set to 2000
2016-05-25 13:21:29,993 [myid:1] - INFO  [main:QuorumPeer@1039] - minSessionTimeout set to -1
2016-05-25 13:21:29,993 [myid:1] - INFO  [main:QuorumPeer@1050] - maxSessionTimeout set to -1
2016-05-25 13:21:29,993 [myid:1] - INFO  [main:QuorumPeer@1065] - initLimit set to 10
2016-05-25 13:21:30,026 [myid:1] - INFO  [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-05-25 13:21:30,031 [myid:1] - INFO  [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-05-25 13:21:30,043 [myid:1] - INFO  [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /192.168.60.129:3888
2016-05-25 13:21:30,056 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@774] - LOOKING
2016-05-25 13:21:30,057 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@818] - New election. My id =  1, proposed zxid=0x0
2016-05-25 13:21:30,061 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,062 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (2, 1)
2016-05-25 13:21:30,064 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (3, 1)
2016-05-25 13:21:30,097 [myid:1] - INFO  [/192.168.60.129:3888:QuorumCnxManager$Listener@541] - Received connection request /192.168.60.132:59080
2016-05-25 13:21:30,101 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,102 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,103 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (3, 1)
2016-05-25 13:21:30,104 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,104 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,120 [myid:1] - INFO  [/192.168.60.129:3888:QuorumCnxManager$Listener@541] - Received connection request /192.168.60.133:55452
2016-05-25 13:21:30,122 [myid:1] - INFO  [/192.168.60.129:3888:QuorumCnxManager$Listener@541] - Received connection request /192.168.60.133:55453
2016-05-25 13:21:30,123 [myid:1] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@810] - Connection broken for id 3, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-05-25 13:21:30,152 [myid:1] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-05-25 13:21:30,150 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,152 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LEADING (n.state), 3 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2016-05-25 13:21:30,152 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@844] - FOLLOWING
2016-05-25 13:21:30,149 [myid:1] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-05-25 13:21:30,153 [myid:1] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-05-25 13:21:30,157 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:[email protected]86] - TCP NoDelay set to: true
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:host.name=ubuntu
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.version=1.7.0_95
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-i386/jre
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.class.path=/home/jason/StormProcessing/zookeeper-3.4.8/bin/../build/classes:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../build/lib/*.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../lib/slf4j-api-1.6.1.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../lib/netty-3.7.0.Final.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../lib/log4j-1.2.16.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../lib/jline-0.9.94.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../zookeeper-3.4.8.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../src/java/lib/*.jar:/home/jason/StormProcessing/zookeeper-3.4.8/bin/../conf:
2016-05-25 13:21:30,166 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/i386:/usr/lib/i386-linux-gnu/jni:/lib/i386-linux-gnu:/usr/lib/i386-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.compiler=<NA>
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.name=Linux
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.arch=i386
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.version=3.16.0-67-generic
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.name=root
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.home=/root
2016-05-25 13:21:30,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.dir=/home/jason/StormProcessing/zookeeper-3.4.8/bin
2016-05-25 13:21:30,169 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:[email protected]] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/jason/StormProcessing/log/version-2 snapdir /home/jason/StormProcessing/data/version-2
2016-05-25 13:21:30,169 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:[email protected]] - FOLLOWING - LEADER ELECTION TOOK - 112
2016-05-25 13:21:30,171 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: 192.168.60.133 to address: /192.168.60.133
2016-05-25 13:21:30,178 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:[email protected]] - Getting a snapshot from leader
2016-05-25 13:21:30,183 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:[email protected]] - Snapshotting: 0x100000000 to /home/jason/StormProcessing/data/version-2/snapshot.100000000
2016-05-25 13:22:13,622 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]] - Accepted socket connection from /127.0.0.1:43640
2016-05-25 13:22:13,628 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]] - Processing srvr command from /127.0.0.1:43640
2016-05-25 13:22:13,653 [myid:1] - INFO  [Thread-1:[email protected]] - Closed socket connection for client /127.0.0.1:43640 (no session established for client)
jason@ubuntu:~/StormProcessing/zookeeper-3.4.8/bin$ 

References:
1. http://blog.csdn.net/crazycoder2010/article/details/8607310
2. http://rayfuxk.iteye.com/blog/2279596

Date: 2024-10-04 19:53:31
