Redeploying Kafka 0.10.2 after a failed deployment

Delete the Kafka data (log) directory on every broker node, i.e. the path configured as log.dirs in server.properties, as sketched below.
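A minimal sketch of this step, to be repeated on each broker; the installation path and the data directory /usr/local/soft/kafka/logs are assumptions here, so substitute whatever log.dirs points to in your own server.properties:

[root@m1 ~]# # stop the broker first, then wipe its data directory (assumed path)
[root@m1 ~]# /usr/local/soft/kafka/bin/kafka-server-stop.sh
[root@m1 ~]# rm -rf /usr/local/soft/kafka/logs/*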

Delete the Kafka-related znodes in ZooKeeper, using zkCli.sh as shown in the session below.

[root@m1 ~]# zkCli.sh
Connecting to localhost:2181
2017-03-22 07:06:47,239 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
2017-03-22 07:06:47,242 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=m1
2017-03-22 07:06:47,242 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_121
2017-03-22 07:06:47,245 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-03-22 07:06:47,245 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/soft/jdk/jre
2017-03-22 07:06:47,245 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/soft/zookeeper-3.4.9/bin/../build/classes:/usr/local/soft/zookeeper-3.4.9/bin/../build/lib/*.jar:/usr/local/soft/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/soft/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/soft/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/usr/local/soft/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/usr/local/soft/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/usr/local/soft/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/usr/local/soft/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/usr/local/soft/zookeeper-3.4.9/bin/../conf:.:/usr/local/soft/jdk/lib/dt.jar:/usr/local/soft/jdk/lib/tools.jar
2017-03-22 07:06:47,245 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2017-03-22 07:06:47,246 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/root
2017-03-22 07:06:47,249 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@...
Welcome to ZooKeeper!
2017-03-22 07:06:47,277 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2017-03-22 07:06:47,382 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2017-03-22 07:06:47,395 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15af60b9055000a, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, config]
[zk: localhost:2181(CONNECTED) 1] rmr /brokers
[zk: localhost:2181(CONNECTED) 2] rmr /isr_change_notification
[zk: localhost:2181(CONNECTED) 3] rmr /consumers
[zk: localhost:2181(CONNECTED) 4] ls /cluster
[id]
[zk: localhost:2181(CONNECTED) 5] ls /
[cluster, controller_epoch, zookeeper, admin, config]
[zk: localhost:2181(CONNECTED) 6] ls /con

controller_epoch   config
[zk: localhost:2181(CONNECTED) 6] ls /controller_epoch
[]
[zk: localhost:2181(CONNECTED) 7] ls /config
[changes, clients, topics]
[zk: localhost:2181(CONNECTED) 8] rmr /config
[zk: localhost:2181(CONNECTED) 9] 
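The interactive cleanup above can also be scripted. This is only a sketch, assuming ZooKeeper is reachable at localhost:2181 and that the same four znodes removed above (/brokers, /isr_change_notification, /consumers, /config) are the ones you want to drop:

[root@m1 ~]# # non-interactive equivalent of the rmr commands in the session above
[root@m1 ~]# for p in /brokers /isr_change_notification /consumers /config; do zkCli.sh -server localhost:2181 rmr $p; done

Once the data directories and these znodes are gone, start the brokers again on every node (for example with bin/kafka-server-start.sh -daemon config/server.properties from the Kafka installation directory); on startup the brokers re-register themselves under /brokers and re-create the other znodes they need.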