Kafka 2.9.2 (0.8.1.1) pseudo-distributed cluster installation and demo (Java API) test

1. What is Kafka?

  Kafka is a distributed message queue (MQ) system developed and open-sourced by LinkedIn; it is now a top-level Apache project. Its homepage describes Kafka as a high-throughput, distributed MQ (messages are spread across different nodes). Kafka is written in only about 7,000 lines of Scala, and reportedly it can produce roughly 250,000 messages per second (50 MB) and process roughly 550,000 messages per second (110 MB).

  Kafka currently has clients for several languages: Java, Python, C++, PHP, and more.

  A simple diagram of a Kafka cluster: producers write messages, consumers read them. (The original post's diagram is not reproduced here.)


1.1. Kafka design goals

  • High throughput is a core design goal.
  • Disk persistence: messages are not cached in memory but written straight to disk, taking advantage of sequential disk I/O performance.
  • Zero-copy: fewer I/O copy steps on the delivery path.
  • Batch sending and fetching of messages.
  • Message compression (see the producer-configuration sketch after this list for both batching and compression).
  • A topic is divided into multiple partitions to increase parallelism.
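  The batching and compression items above map directly onto settings of the 0.8.x "old" producer API. A minimal hedged sketch (the host names follow the cluster built later in this article; the property values are illustrative, not from the original post):

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  // Sketch: enable async batching and gzip compression in the 0.8.x producer.
  public class BatchingProducerSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("metadata.broker.list", "m1:9092,m2:9092");
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          props.put("producer.type", "async");        // buffer messages and send in batches
          props.put("batch.num.messages", "200");     // max messages per batch
          props.put("queue.buffering.max.ms", "500"); // max time to buffer before flushing
          props.put("compression.codec", "gzip");     // compress each batch (none/gzip/snappy)
          Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
          producer.send(new KeyedMessage<String, String>("idoall_testTopic", "hello batching"));
          producer.close();
      }
  }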

1.2. Kafka terminology and how it works

  • Producer: a message producer, i.e. a client that publishes messages to a Kafka broker.
  • Consumer: a message consumer, i.e. a client that fetches messages from Kafka brokers.
  • Topic: can be thought of as a queue.
  • Consumer Group (CG): Kafka's mechanism for supporting both broadcast (deliver to every consumer) and unicast (deliver to exactly one consumer) of a topic's messages. A topic can have multiple CGs. The topic's messages are "replicated" to every CG (conceptually, not physically copied), but within a CG each message is delivered to only one consumer. To broadcast, give every consumer its own CG; to unicast, put all consumers in the same CG. CGs also let you group consumers freely without publishing the same message to several topics.
  • Broker: one Kafka server is one broker. A cluster consists of multiple brokers, and one broker can host multiple topics.
  • Partition: for scalability, a very large topic can be spread across multiple brokers (i.e. servers). A topic is divided into partitions, each of which is an ordered queue. Every message in a partition is assigned a sequential id, its offset. Kafka guarantees delivery order only within a partition, not across the partitions of a topic.
  • Offset: Kafka names its storage files after offsets (e.g. <offset>.kafka), which makes lookup easy: to find the message at offset 2049, locate the file named after the greatest offset not exceeding it, e.g. 2048.kafka; the first file is 00000000000.kafka. A small sketch of this lookup follows the list.
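  The offset-based file naming above is what makes finding a message cheap: the broker only has to binary-search the sorted base offsets of its segment files. A minimal hedged sketch of the idea (illustrative names, not Kafka's actual code):

  import java.util.Arrays;

  // Sketch: find the segment file that contains a target offset, assuming the
  // target is at least as large as the first segment's base offset.
  public class SegmentLookupSketch {
      static long segmentFor(long[] sortedBaseOffsets, long target) {
          int idx = Arrays.binarySearch(sortedBaseOffsets, target);
          if (idx >= 0) return sortedBaseOffsets[idx]; // a segment starts exactly at target
          return sortedBaseOffsets[-idx - 2];          // greatest base offset below target
      }

      public static void main(String[] args) {
          long[] baseOffsets = {0L, 2048L, 4096L};     // e.g. 00000000000.kafka, 2048.kafka, 4096.kafka
          System.out.println(segmentFor(baseOffsets, 2049L)); // prints 2048
      }
  }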

1.3. Kafka scalability

  • Kafka uses ZooKeeper to support dynamic cluster growth without changing client (producer and consumer) configuration. Brokers register themselves in ZooKeeper and keep their metadata (topics, partition info, etc.) up to date there.
  • Clients register watchers in ZooKeeper, so whenever the registered state changes they are notified promptly and adjust accordingly. This is what keeps load balanced across brokers automatically as brokers are added or removed.

1.4. How Kafka uses ZooKeeper

  • Producers use ZooKeeper to discover the broker list, and open socket connections to the leader of each partition of a topic to send messages.
  • Brokers use ZooKeeper to register their own broker information and to monitor partition-leader liveness.
  • Consumers use ZooKeeper to register themselves (including the list of partitions each one consumes), to discover the broker list, and to connect to partition leaders to fetch messages. A sketch of watching broker registrations follows this list.
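  A minimal hedged sketch of the watcher mechanism described above, using the org.I0Itec zkclient library that ships in Kafka's libs/ directory (/brokers/ids is the real znode path brokers register under; the ensemble string matches this article's cluster):

  import java.util.List;
  import org.I0Itec.zkclient.IZkChildListener;
  import org.I0Itec.zkclient.ZkClient;

  // Sketch: watch broker membership under /brokers/ids, where every broker
  // registers an ephemeral znode named after its broker.id.
  public class BrokerWatcherSketch {
      public static void main(String[] args) throws InterruptedException {
          ZkClient zk = new ZkClient("m1:2181,m2:2181,s1:2181,s2:2181", 10000);
          zk.subscribeChildChanges("/brokers/ids", new IZkChildListener() {
              public void handleChildChange(String parentPath, List<String> brokerIds) {
                  System.out.println("live brokers changed: " + brokerIds); // fired on add/remove
              }
          });
          System.out.println("brokers now: " + zk.getChildren("/brokers/ids"));
          Thread.sleep(Long.MAX_VALUE); // keep the process (and the watch) alive
      }
  }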

2. Where is Kafka's official website?

  http://kafka.apache.org/


3. Where can it be downloaded? What does it depend on?

  kafka_2.9.2-0.8.1.1 (Kafka 0.8.1.1 built for Scala 2.9.2 — the "2.9.2" in this article's title is the Scala version) can be downloaded here:

  https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz

  Kafka needs ZooKeeper. For ZooKeeper's download and installation, see the article 《ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1分布式环境部署》.

4. How to install it?

4.1. Unpack kafka_2.9.2-0.8.1.1.tgz

  In this article it is unpacked under /home/hadoop:

  root@m1:/home/hadoop/kafka_2.9.2-0.8.1.1# pwd
  /home/hadoop/kafka_2.9.2-0.8.1.1

4.2. Edit the server.properties configuration file

  For the ZooKeeper part, use the ensemble configured in the article 《ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1分布式环境部署》; it corresponds to the zookeeper.connect setting near the end of the file below:

  root@m1:/home/hadoop/kafka_2.9.2-0.8.1.1# cat config/server.properties
  # Licensed to the Apache Software Foundation (ASF) under one or more
  # contributor license agreements. See the NOTICE file distributed with
  # this work for additional information regarding copyright ownership.
  # The ASF licenses this file to You under the Apache License, Version 2.0
  # (the "License"); you may not use this file except in compliance with
  # the License. You may obtain a copy of the License at
  #
  # http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  # see kafka.server.KafkaConfig for additional details and defaults
  ############################# Server Basics #############################
  # The id of the broker. This must be set to a unique integer for each broker.
  # An integer; deriving it from the IP is one option. Here I reuse this host's ZooKeeper id.
  broker.id=1
  ############################# Socket Server Settings #############################
  # The port the socket server listens on
  # Port on which the broker accepts producer connections
  port=9092
  #port=44444
  # Hostname the broker will bind to. If not set, the server will bind to all interfaces
  # Hostname of this broker
  host.name=m1
  # Hostname the broker will advertise to producers and consumers. If not set, it uses the
  # value for "host.name" if configured. Otherwise, it will use the value returned from
  # java.net.InetAddress.getCanonicalHostName().
  # The address producers/consumers use when they connect
  advertised.host.name=m1
  # The port to publish to ZooKeeper for clients to use. If this is not set,
  # it will publish the same port that the broker binds to.
  #advertised.port=<port accessible by clients>
  # The number of threads handling network requests
  num.network.threads=2
  # The number of threads doing disk I/O
  num.io.threads=8
  # The send buffer (SO_SNDBUF) used by the socket server
  socket.send.buffer.bytes=1048576
  # The receive buffer (SO_RCVBUF) used by the socket server
  socket.receive.buffer.bytes=1048576
  # The maximum size of a request that the socket server will accept (protection against OOM)
  socket.request.max.bytes=104857600
  ############################# Log Basics #############################
  # A comma seperated list of directories under which to store log files
  # Directory where Kafka stores its message (log) files
  log.dirs=/home/hadoop/kafka_2.9.2-0.8.1.1/kafka-logs
  # The default number of log partitions per topic. More partitions allow greater
  # parallelism for consumption, but this will also result in more files across
  # the brokers.
  # Default number of partitions per topic
  num.partitions=2
  ############################# Log Flush Policy #############################
  # Messages are immediately written to the filesystem but by default we only fsync() to sync
  # the OS cache lazily. The following configurations control the flush of data to disk.
  # There are a few important trade-offs here:
  # 1. Durability: Unflushed data may be lost if you are not using replication.
  # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
  # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
  # The settings below allow one to configure the flush policy to flush data after a period of time or
  # every N messages (or both). This can be done globally and overridden on a per-topic basis.
  # The number of messages to accept before forcing a flush of data to disk
  #log.flush.interval.messages=10000
  # The maximum amount of time a message can sit in a log before we force a flush
  #log.flush.interval.ms=1000
  ############################# Log Retention Policy #############################
  # The following configurations control the disposal of log segments. The policy can
  # be set to delete segments after a period of time, or after a given size has accumulated.
  # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
  # from the end of the log.
  # The minimum age of a log file to be eligible for deletion
  # How long message logs are retained (here we keep 7 days of data: log.retention.hours=168)
  log.retention.hours=168
  # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
  # segments don't drop below log.retention.bytes.
  #log.retention.bytes=1073741824
  # The maximum size of a log segment file. When this size is reached a new log segment will be created.
  log.segment.bytes=536870912
  # The interval at which log segments are checked to see if they can be deleted according
  # to the retention policies
  log.retention.check.interval.ms=60000
  # By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
  # If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
  log.cleaner.enable=false
  ############################# Zookeeper #############################
  # Zookeeper connection string (see zookeeper docs for details).
  # This is a comma separated host:port pairs, each corresponding to a zk
  # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
  # You can also append an optional chroot string to the urls to specify the
  # root directory for all kafka znodes.
  zookeeper.connect=m1:2181,m2:2181,s1:2181,s2:2181
  # Timeout in ms for connecting to zookeeper
  zookeeper.connection.timeout.ms=1000000

4.3. Start ZooKeeper and Kafka

    1) To start ZooKeeper, see the article 《ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1分布式环境部署》.

    After starting, check the status on each machine with:

  root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh status
  JMX enabled by default
  Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
  Mode: leader

    2) Start Kafka on m1, m2, s1, and s2. Before doing so, copy the Kafka directory from m1 to the other three machines, and on each machine edit server.properties so that broker.id is unique and host.name/advertised.host.name name that machine; a sketch of the per-host overrides follows.
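  For example, on m2 the overrides might look like this (a hedged sketch; the broker.id values beyond m1's broker.id=1 are assumptions, following this article's pattern of reusing each host's ZooKeeper id):

  # server.properties overrides on m2 (illustrative)
  broker.id=2
  host.name=m2
  advertised.host.name=m2

    With the overrides in place, start the broker on each machine. Below is the output of running the start command on m1: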

  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-server-start.sh /home/hadoop/kafka_2.9.2-0.8.1.1/config/server.properties &
  [1] 31823
  root@m1:/home/hadoop# [2014-08-05 10:03:11,210] INFO Verifying properties (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,261] INFO Property advertised.host.name is overridden to m1 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,261] INFO Property broker.id is overridden to 1 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,264] INFO Property host.name is overridden to m1 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,264] INFO Property log.cleaner.enable is overridden to false (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,264] INFO Property log.dirs is overridden to /home/hadoop/kafka_2.9.2-0.8.1.1/kafka-logs (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,265] INFO Property log.retention.check.interval.ms is overridden to 60000 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,265] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,265] INFO Property log.segment.bytes is overridden to 536870912 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,265] INFO Property num.io.threads is overridden to 8 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,266] INFO Property num.network.threads is overridden to 2 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,266] INFO Property num.partitions is overridden to 2 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,267] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,267] INFO Property socket.receive.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,268] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,268] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,268] INFO Property zookeeper.connect is overridden to m1:2181,m2:2181,s1:2181,s2:2181 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,269] INFO Property zookeeper.connection.timeout.ms is overridden to 1000000 (kafka.utils.VerifiableProperties)
  [2014-08-05 10:03:11,302] INFO [Kafka Server 1], starting (kafka.server.KafkaServer)
  [2014-08-05 10:03:11,303] INFO [Kafka Server 1], Connecting to zookeeper on m1:2181,m2:2181,s1:2181,s2:2181 (kafka.server.KafkaServer)
  [2014-08-05 10:03:11,335] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
  [2014-08-05 10:03:11,348] INFO Client environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,348] INFO Client environment:host.name=m1 (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,349] INFO Client environment:java.version=1.7.0_65 (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,349] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,349] INFO Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,349] INFO Client environment:java.class.path=.:/usr/lib/jvm/java-7-oracle/lib/tools.jar:/usr/lib/jvm/java-7-oracle/lib/dt.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../core/build/dependant-libs-2.8.0/*.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../perf/build/libs//kafka-perf_2.8.0*.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../clients/build/libs//kafka-clients*.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../examples/build/libs//kafka-examples*.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/jopt-simple-3.2.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/kafka_2.9.2-0.8.1.1.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/kafka_2.9.2-0.8.1.1-javadoc.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/kafka_2.9.2-0.8.1.1-scaladoc.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/kafka_2.9.2-0.8.1.1-sources.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/log4j-1.2.15.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/metrics-core-2.2.0.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/scala-library-2.9.2.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/slf4j-api-1.7.2.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/snappy-java-1.0.5.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/zkclient-0.3.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../libs/zookeeper-3.3.4.jar:/home/hadoop/kafka_2.9.2-0.8.1.1/bin/../core/build/libs/kafka_2.8.0*.jar (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,350] INFO Client environment:java.library.path=:/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,350] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,350] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,350] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,350] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,351] INFO Client environment:os.version=3.11.0-15-generic (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,351] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,351] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,351] INFO Client environment:user.dir=/home/hadoop (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,352] INFO Initiating client connection, connectString=m1:2181,m2:2181,s1:2181,s2:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@... (org.apache.zookeeper.ZooKeeper)
  [2014-08-05 10:03:11,380] INFO Opening socket connection to server m2/192.168.1.51:2181 (org.apache.zookeeper.ClientCnxn)
  [2014-08-05 10:03:11,386] INFO Socket connection established to m2/192.168.1.51:2181, initiating session (org.apache.zookeeper.ClientCnxn)
  [2014-08-05 10:03:11,398] INFO Session establishment complete on server m2/192.168.1.51:2181, sessionid = 0x247a3e09b460000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
  [2014-08-05 10:03:11,400] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
  [2014-08-05 10:03:11,652] INFO Loading log 'test-1' (kafka.log.LogManager)
  [2014-08-05 10:03:11,681] INFO Recovering unflushed segment 0 in log test-1. (kafka.log.Log)
  [2014-08-05 10:03:11,711] INFO Completed load of log test-1 with log end offset 137 (kafka.log.Log)
  SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
  SLF4J: Defaulting to no-operation (NOP) logger implementation
  SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
  [2014-08-05 10:03:11,747] INFO Loading log 'idoall.org-0' (kafka.log.LogManager)
  [2014-08-05 10:03:11,748] INFO Recovering unflushed segment 0 in log idoall.org-0. (kafka.log.Log)
  [2014-08-05 10:03:11,754] INFO Completed load of log idoall.org-0 with log end offset 5 (kafka.log.Log)
  [2014-08-05 10:03:11,760] INFO Loading log 'test-0' (kafka.log.LogManager)
  [2014-08-05 10:03:11,765] INFO Recovering unflushed segment 0 in log test-0. (kafka.log.Log)
  [2014-08-05 10:03:11,777] INFO Completed load of log test-0 with log end offset 151 (kafka.log.Log)
  [2014-08-05 10:03:11,779] INFO Starting log cleanup with a period of 60000 ms. (kafka.log.LogManager)
  [2014-08-05 10:03:11,782] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
  [2014-08-05 10:03:11,800] INFO Awaiting socket connections on m1:9092. (kafka.network.Acceptor)
  [2014-08-05 10:03:11,802] INFO [Socket Server on Broker 1], Started (kafka.network.SocketServer)
  [2014-08-05 10:03:11,890] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
  [2014-08-05 10:03:11,919] INFO 1 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
  [2014-08-05 10:03:12,359] INFO New leader is 1 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
  [2014-08-05 10:03:12,387] INFO Registered broker 1 at path /brokers/ids/1 with address m1:9092. (kafka.utils.ZkUtils$)
  [2014-08-05 10:03:12,392] INFO [Kafka Server 1], started (kafka.server.KafkaServer)
  [2014-08-05 10:03:12,671] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [idoall.org,0],[test,0],[test,1] (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:03:12,741] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [idoall.org,0],[test,0],[test,1] (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:03:25,327] INFO Partition [test,0] on broker 1: Expanding ISR for partition [test,0] from 1 to 1,2 (kafka.cluster.Partition)
  [2014-08-05 10:03:25,334] INFO Partition [test,1] on broker 1: Expanding ISR for partition [test,1] from 1 to 1,2 (kafka.cluster.Partition)
  [2014-08-05 10:03:26,905] INFO Partition [test,1] on broker 1: Expanding ISR for partition [test,1] from 1,2 to 1,2,3 (kafka.cluster.Partition)
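  Once all four brokers are up, each registers an ephemeral znode under /brokers/ids, as the "Registered broker 1 at path /brokers/ids/1" line above shows. A minimal hedged sketch to verify the whole cluster from any machine (uses the zkclient jar shipped in Kafka's libs/ directory):

  import org.I0Itec.zkclient.ZkClient;

  // Sketch: after starting all four brokers, /brokers/ids should list ids 1..4.
  public class ClusterCheckSketch {
      public static void main(String[] args) {
          ZkClient zk = new ZkClient("m1:2181,m2:2181,s1:2181,s2:2181", 10000);
          System.out.println("registered broker ids: " + zk.getChildren("/brokers/ids"));
          zk.close();
      }
  }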

4.4. Test the Kafka cluster

    1) On m1, create a topic named idoall_testTopic. Set --replication-factor to the number of running Kafka brokers (it cannot exceed it); with four brokers here, we use 4:

  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --create --topic idoall_testTopic --replication-factor 4 --partitions 2 --zookeeper m1:2181
  Created topic "idoall_testTopic".
  [2014-08-05 10:08:29,315] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [idoall_testTopic,0] (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:08:29,334] INFO Completed load of log idoall_testTopic-0 with log end offset 0 (kafka.log.Log)
  [2014-08-05 10:08:29,373] INFO Created log for partition [idoall_testTopic,0] in /home/hadoop/kafka_2.9.2-0.8.1.1/kafka-logs with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 536870912, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, cleanup.policy -> delete, segment.ms -> 604800000, max.message.bytes -> 1000012, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000}. (kafka.log.LogManager)
  [2014-08-05 10:08:29,384] WARN Partition [idoall_testTopic,0] on broker 1: No checkpointed highwatermark is found for partition [idoall_testTopic,0] (kafka.cluster.Partition)
  [2014-08-05 10:08:29,415] INFO Completed load of log idoall_testTopic-1 with log end offset 0 (kafka.log.Log)
  [2014-08-05 10:08:29,416] INFO Created log for partition [idoall_testTopic,1] in /home/hadoop/kafka_2.9.2-0.8.1.1/kafka-logs with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 536870912, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, cleanup.policy -> delete, segment.ms -> 604800000, max.message.bytes -> 1000012, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000}. (kafka.log.LogManager)
  [2014-08-05 10:08:29,422] WARN Partition [idoall_testTopic,1] on broker 1: No checkpointed highwatermark is found for partition [idoall_testTopic,1] (kafka.cluster.Partition)
  [2014-08-05 10:08:29,430] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [idoall_testTopic,1] (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:08:29,438] INFO Truncating log idoall_testTopic-1 to offset 0. (kafka.log.Log)
  [2014-08-05 10:08:29,473] INFO [ReplicaFetcherManager on broker 1] Added fetcher for partitions ArrayBuffer([[idoall_testTopic,1], initOffset 0 to broker id:2,host:m2,port:9092] ) (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:08:29,475] INFO [ReplicaFetcherThread-0-2], Starting (kafka.server.ReplicaFetcherThread)
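    The same topic can also be created programmatically. A hedged sketch against the 0.8.1.x AdminUtils API (the FAQ stack trace at the end of this article shows kafka-topics.sh itself goes through AdminUtils.createTopic; ZKStringSerializer$.MODULE$ is how Java reaches Kafka's Scala singleton serializer; run with the jars under Kafka's libs/ on the classpath):

  import java.util.Properties;
  import kafka.admin.AdminUtils;
  import kafka.utils.ZKStringSerializer$;
  import org.I0Itec.zkclient.ZkClient;

  // Sketch: programmatic equivalent of the kafka-topics.sh --create call above.
  public class CreateTopicSketch {
      public static void main(String[] args) {
          ZkClient zk = new ZkClient("m1:2181", 10000, 10000, ZKStringSerializer$.MODULE$);
          // topic, partitions, replication factor, per-topic config overrides
          AdminUtils.createTopic(zk, "idoall_testTopic", 2, 4, new Properties());
          zk.close();
      }
  }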

    2) On m1, list the topic just created:

  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --list --zookeeper m1:2181
  idoall_testTopic

    3) On m2, produce a message to Kafka (m2 plays the producer) and send "hello idoall.org":

  root@m2:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-console-producer.sh --broker-list m1:9092 --sync --topic idoall_testTopic
  SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
  SLF4J: Defaulting to no-operation (NOP) logger implementation
  SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
  hello idoall.org


    4) On s1, start a consumer (s1 plays the consumer); the message just sent appears:

  root@s1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper m1:2181 --topic idoall_testTopic --from-beginning
  SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
  SLF4J: Defaulting to no-operation (NOP) logger implementation
  SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
  hello idoall.org


    5) Deleting a topic. As a test, we create a topic named idoall and then delete it. (Note: in 0.8.1 the DeleteTopicCommand essentially removes the topic's metadata from ZooKeeper; fully reliable broker-side topic deletion only arrived in later Kafka releases.)

  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --create --topic idoall --replication-factor 4 --partitions 2 --zookeeper m1:2181
  Created topic "idoall".
  [2014-08-05 10:38:30,862] INFO Completed load of log idoall-1 with log end offset 0 (kafka.log.Log)
  [2014-08-05 10:38:30,864] INFO Created log for partition [idoall,1] in /home/hadoop/kafka_2.9.2-0.8.1.1/kafka-logs with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 536870912, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, cleanup.policy -> delete, segment.ms -> 604800000, max.message.bytes -> 1000012, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000}. (kafka.log.LogManager)
  [2014-08-05 10:38:30,870] WARN Partition [idoall,1] on broker 1: No checkpointed highwatermark is found for partition [idoall,1] (kafka.cluster.Partition)
  [2014-08-05 10:38:30,878] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [idoall,1] (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:38:30,880] INFO Truncating log idoall-1 to offset 0. (kafka.log.Log)
  [2014-08-05 10:38:30,885] INFO [ReplicaFetcherManager on broker 1] Added fetcher for partitions ArrayBuffer([[idoall,1], initOffset 0 to broker id:3,host:s1,port:9092] ) (kafka.server.ReplicaFetcherManager)
  [2014-08-05 10:38:30,887] INFO [ReplicaFetcherThread-0-3], Starting (kafka.server.ReplicaFetcherThread)
  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --list --zookeeper m1:2181
  idoall
  idoall_testTopic
  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-run-class.sh kafka.admin.DeleteTopicCommand --topic idoall --zookeeper m2:2181
  deletion succeeded!
  root@m1:/home/hadoop# /home/hadoop/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --list --zookeeper m1:2181
  idoall_testTopic
  root@m1:/home/hadoop#

    You can also go into ZooKeeper to confirm that the topic is gone:

  root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
  Connecting to localhost:2181
  2014-08-05 10:15:21,863 [myid:] - INFO [main:Environment@...] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
  2014-08-05 10:15:21,871 [myid:] - INFO [main:Environment@...] - Client environment:host.name=m1
  2014-08-05 10:15:21,871 [myid:] - INFO [main:Environment@...] - Client environment:java.version=1.7.0_65
  2014-08-05 10:15:21,872 [myid:] - INFO [main:Environment@...] - Client environment:java.vendor=Oracle Corporation
  2014-08-05 10:15:21,872 [myid:] - INFO [main:Environment@...] - Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
  2014-08-05 10:15:21,873 [myid:] - INFO [main:Environment@...] - Client environment:java.class.path=/home/hadoop/zookeeper-3.4.5/bin/../build/classes:/home/hadoop/zookeeper-3.4.5/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/netty-3.2.2.Final.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.4.5/bin/../zookeeper-3.4.5.jar:/home/hadoop/zookeeper-3.4.5/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../conf:.:/usr/lib/jvm/java-7-oracle/lib/tools.jar:/usr/lib/jvm/java-7-oracle/lib/dt.jar
  2014-08-05 10:15:21,874 [myid:] - INFO [main:Environment@...] - Client environment:java.library.path=:/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
  2014-08-05 10:15:21,874 [myid:] - INFO [main:Environment@...] - Client environment:java.io.tmpdir=/tmp
  2014-08-05 10:15:21,874 [myid:] - INFO [main:Environment@...] - Client environment:java.compiler=<NA>
  2014-08-05 10:15:21,875 [myid:] - INFO [main:Environment@...] - Client environment:os.name=Linux
  2014-08-05 10:15:21,875 [myid:] - INFO [main:Environment@...] - Client environment:os.arch=amd64
  2014-08-05 10:15:21,876 [myid:] - INFO [main:Environment@...] - Client environment:os.version=3.11.0-15-generic
  2014-08-05 10:15:21,876 [myid:] - INFO [main:Environment@...] - Client environment:user.name=root
  2014-08-05 10:15:21,877 [myid:] - INFO [main:Environment@...] - Client environment:user.home=/root
  2014-08-05 10:15:21,878 [myid:] - INFO [main:Environment@...] - Client environment:user.dir=/home/hadoop
  2014-08-05 10:15:21,879 [myid:] - INFO [main:ZooKeeper@...] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@...
  Welcome to ZooKeeper!
  2014-08-05 10:15:21,920 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@...] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
  2014-08-05 10:15:21,934 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@...] - Socket connection established to localhost/127.0.0.1:2181, initiating session
  JLine support is enabled
  2014-08-05 10:15:21,966 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@...] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x147a3e1246b0007, negotiated timeout = 30000
  WATCHER::
  WatchedEvent state:SyncConnected type:None path:null
  [zk: localhost:2181(CONNECTED) 0] ls /
  [hbase, hadoop-ha, admin, zookeeper, consumers, config, controller, storm, brokers, controller_epoch]
  [zk: localhost:2181(CONNECTED) 1] ls /brokers
  [topics, ids]
  [zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
  [idoall_testTopic]

4.5. Calling Kafka's Java API from Eclipse to test the cluster

    1) Message producer: Producertest.java

  package idoall.testkafka;

  import java.util.Date;
  import java.util.Properties;
  import java.text.SimpleDateFormat;

  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  /**
   * Message producer
   * @author 迦壹
   * @Time 2014-08-05
   */
  public class Producertest {
      public static void main(String[] args) {
          Properties props = new Properties();
          // zk.connect is a legacy (pre-0.8) property; the 0.8 producer discovers
          // brokers through metadata.broker.list below
          props.put("zk.connect", "m1:2181,m2:2181,s1:2181,s2:2181");
          // serializer.class is the serializer used for messages
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          // Configure metadata.broker.list; for availability, list more than one broker
          props.put("metadata.broker.list", "m1:9092,m2:9092,s1:9092,s2:9092");
          // Set a partitioner class to control how messages are spread over partitions
          //props.put("partitioner.class", "idoall.testkafka.Partitionertest");
          // Ack mode: the broker must acknowledge each message
          props.put("request.required.acks", "1");
          // num.partitions is a broker-side setting, not a producer config (kept from the original demo)
          props.put("num.partitions", "4");
          ProducerConfig config = new ProducerConfig(props);
          Producer<String, String> producer = new Producer<String, String>(config);
          for (int i = 0; i < 10; i++) {
              // KeyedMessage<K, V>
              //   K is the type of the partition key
              //   V is the type of the message itself
              SimpleDateFormat formatter = new SimpleDateFormat("yyyy年MM月dd日 HH:mm:ss SSS");
              Date curDate = new Date(System.currentTimeMillis()); // current time
              String str = formatter.format(curDate);
              String msg = "idoall.org" + i + "=" + str;
              String key = i + "";
              producer.send(new KeyedMessage<String, String>("idoall_testTopic", key, msg));
          }
          producer.close(); // release broker connections when done
      }
  }

    2) Message consumer: Consumertest.java

  package idoall.testkafka;

  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.Properties;

  import kafka.consumer.ConsumerConfig;
  import kafka.consumer.ConsumerIterator;
  import kafka.consumer.KafkaStream;
  import kafka.javaapi.consumer.ConsumerConnector;

  /**
   * Message consumer
   * @author 迦壹
   * @Time 2014-08-05
   */
  public class Consumertest extends Thread {
      private final ConsumerConnector consumer;
      private final String topic;

      public static void main(String[] args) {
          Consumertest consumerThread = new Consumertest("idoall_testTopic");
          consumerThread.start();
      }

      public Consumertest(String topic) {
          consumer = kafka.consumer.Consumer.createJavaConsumerConnector(createConsumerConfig());
          this.topic = topic;
      }

      private static ConsumerConfig createConsumerConfig() {
          Properties props = new Properties();
          // ZooKeeper connection string
          props.put("zookeeper.connect", "m1:2181,m2:2181,s1:2181,s2:2181");
          // Consumer group id
          props.put("group.id", "1");
          // Group offsets are stored in ZooKeeper; they are not updated in real
          // time but committed at this interval
          props.put("auto.commit.interval.ms", "1000");
          props.put("zookeeper.session.timeout.ms", "10000");
          return new ConsumerConfig(props);
      }

      public void run() {
          // Map topic => number of consumer threads, then build the streams
          Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
          topicCountMap.put(topic, 1);
          Map<String, List<KafkaStream<byte[], byte[]>>> streamMap = consumer.createMessageStreams(topicCountMap);
          KafkaStream<byte[], byte[]> stream = streamMap.get(topic).get(0);
          ConsumerIterator<byte[], byte[]> it = stream.iterator();
          System.out.println("*********Results********");
          while (it.hasNext()) {
              System.err.println("get data:" + new String(it.next().message()));
              try {
                  Thread.sleep(1000);
              } catch (InterruptedException e) {
                  e.printStackTrace();
              }
          }
      }
  }
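    The demo never releases the connector. A hedged sketch of a graceful stop that could be added to Consumertest's constructor (a hypothetical addition, not part of the original code):

  // Hypothetical addition: close streams, commit offsets, and release the
  // ZooKeeper session on JVM exit, so group rebalancing happens promptly.
  Runtime.getRuntime().addShutdownHook(new Thread() {
      public void run() {
          consumer.shutdown();
      }
  });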

    3) Check the results in Eclipse. Before running, start a console consumer on one of the machines (I used s1), then watch whether both Eclipse and the consumer on s1 receive the messages. (The original post's result screenshots are not reproduced here.)


  As you can see, exactly 10 messages arrived, with none lost. They are not globally ordered, though, because of partition balancing: Kafka guarantees ordering only within a partition, not across partitions. Per-partition ordering, combined with the ability to partition by key, is enough for most applications. (The Eclipse project provided at the end of the original article contains a Partitionertest.java demonstrating how to partition by key; a hedged sketch of such a class follows.)
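  A minimal sketch of what a key-based partitioner can look like against the 0.8.1 Partitioner interface (the original Partitionertest.java is not reproduced here, so its exact shape is an assumption; in 0.8.0 the interface is generic instead):

  import kafka.producer.Partitioner;
  import kafka.utils.VerifiableProperties;

  // Sketch: messages with equal keys always land in the same partition.
  // Kafka instantiates the class reflectively, passing VerifiableProperties.
  public class Partitionertest implements Partitioner {
      public Partitionertest(VerifiableProperties props) {
          // no extra properties needed for this simple scheme
      }

      public int partition(Object key, int numPartitions) {
          // mask the sign bit so the result is always a valid partition index
          return (key.hashCode() & 0x7fffffff) % numPartitions;
      }
  }

    Enabling it is just a matter of uncommenting the partitioner.class line in Producertest above.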

4.6. Packaging the Java demo on the command line and testing Kafka

    1) Edit pom.xml in the project directory. (Note that it pulls kafka_2.10 0.8.0-beta1 from com.sksamuel.kafka, which does not match the 0.8.1.1 brokers installed above; it works for this demo, but a client jar matching the broker version, such as org.apache.kafka:kafka_2.9.2:0.8.1.1, is the safer choice.)

  <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>idoall.testkafka</groupId>
      <artifactId>idoall.testkafka</artifactId>
      <version>0.0.1-SNAPSHOT</version>
      <packaging>jar</packaging>
      <name>idoall.testkafka</name>
      <url>http://maven.apache.org</url>
      <properties>
          <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      </properties>
      <dependencies>
          <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>3.8.1</version>
              <scope>test</scope>
          </dependency>
          <dependency>
              <groupId>log4j</groupId>
              <artifactId>log4j</artifactId>
              <version>1.2.14</version>
          </dependency>
          <dependency>
              <groupId>com.sksamuel.kafka</groupId>
              <artifactId>kafka_2.10</artifactId>
              <version>0.8.0-beta1</version>
          </dependency>
      </dependencies>
      <build>
          <finalName>idoall.testkafka</finalName>
          <plugins>
              <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-compiler-plugin</artifactId>
                  <version>2.0.2</version>
                  <configuration>
                      <source>1.5</source>
                      <target>1.5</target>
                      <encoding>UTF-8</encoding>
                  </configuration>
              </plugin>
              <plugin>
                  <artifactId>maven-assembly-plugin</artifactId>
                  <version>2.4</version>
                  <configuration>
                      <descriptors>
                          <descriptor>src/main/src.xml</descriptor>
                      </descriptors>
                      <descriptorRefs>
                          <descriptorRef>jar-with-dependencies</descriptorRef>
                      </descriptorRefs>
                  </configuration>
                  <executions>
                      <execution>
                          <id>make-assembly</id> <!-- this is used for inheritance merges -->
                          <phase>package</phase> <!-- bind to the packaging phase -->
                          <goals>
                              <goal>single</goal>
                          </goals>
                      </execution>
                  </executions>
              </plugin>
          </plugins>
      </build>
  </project>


    2) Edit src/main/src.xml in the project directory:

  <?xml version="1.0" encoding="UTF-8"?>
  <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd ">
      <id>jar-with-dependencies</id>
      <formats>
          <format>jar</format>
      </formats>
      <includeBaseDirectory>false</includeBaseDirectory>
      <dependencySets>
          <dependencySet>
              <unpack>false</unpack>
              <scope>runtime</scope>
          </dependencySet>
      </dependencySets>
      <fileSets>
          <fileSet>
              <directory>/lib</directory>
          </fileSet>
      </fileSets>
  </assembly>

    3) Build the dependency bundle: run mvn package in the project directory to produce idoall.testkafka-jar-with-dependencies.jar. Partial output:

  Running idoall.testkafka.idoall.testkafka.AppTest
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 sec
  Results :
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
  [INFO]
  [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ idoall.testkafka ---
  [INFO] Building jar: /Users/lion/Documents/_my_project/java/idoall.testkafka/target/idoall.testkafka.jar
  [INFO]
  [INFO] --- maven-assembly-plugin:2.4:single (make-assembly) @ idoall.testkafka ---
  [INFO] Reading assembly descriptor: src/main/src.xml
  [WARNING] The assembly id jar-with-dependencies is used more than once.
  [INFO] Building jar: /Users/lion/Documents/_my_project/java/idoall.testkafka/target/idoall.testkafka-jar-with-dependencies.jar
  [INFO] Building jar: /Users/lion/Documents/_my_project/java/idoall.testkafka/target/idoall.testkafka-jar-with-dependencies.jar
  [INFO] ------------------------------------------------------------------------
  [INFO] BUILD SUCCESS
  [INFO] ------------------------------------------------------------------------
  [INFO] Total time: 9.074 s
  [INFO] Finished at: 2014-08-05T12:22:47+08:00
  [INFO] Final Memory: 63M/836M
  [INFO] ------------------------------------------------------------------------

    4) Compile the classes; from the project directory run:

  liondeMacBook-Pro:idoall.testkafka lion$ pwd
  /Users/lion/Documents/_my_project/java/idoall.testkafka
  liondeMacBook-Pro:idoall.testkafka lion$ javac -classpath target/idoall.testkafka-jar-with-dependencies.jar -d . src/main/java/idoall/testkafka/*.java

    5) Run the compiled classes. Open two terminal windows, one for the consumer and one for the producer; start the consumer first so it is already subscribed when the producer sends, and the consumer window then shows the messages:

  java -classpath .:target/idoall.testkafka-jar-with-dependencies.jar idoall.testkafka.Consumertest
  java -classpath .:target/idoall.testkafka-jar-with-dependencies.jar idoall.testkafka.Producertest


5. FAQ

  5.1. If you see the error below when creating a topic, the number of running brokers is smaller than the --replication-factor you requested:

  Error while executing topic command replication factor: 3 larger than available brokers: 1
  kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 1
      at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:70)
      at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:155)
      at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:86)
      at kafka.admin.TopicCommand$.main(TopicCommand.scala:50)
      at kafka.admin.TopicCommand.main(TopicCommand.scala)

  5.2. The error below means the JVM could not allocate enough memory on the machine. One workaround is to start Kafka before memory-hungry Hadoop daemons such as zkfc (DFSZKFailoverController); alternatively, reduce the broker's heap via the KAFKA_HEAP_OPTS environment variable honored by kafka-server-start.sh (for example, export KAFKA_HEAP_OPTS="-Xms256M -Xmx512M"):

  Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0130000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
  #
  # There is insufficient memory for the Java Runtime Environment to continue.
  # Native memory allocation (malloc) failed to allocate 986513408 bytes for committing reserved memory.
  # An error report file with more information is saved as:
  # /home/hadoop/hs_err_pid13558.log