error[No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException]

http://stackoverflow.com/questions/23228222/running-into-leadernotavailableexception-when-using-kafka-0-8-1-with-zookeeper-3

Kafka uses an external coordination framework (by default Zookeeper) to maintain configuration. It seems the configuration is now out of sync with the Kafka log data. In this case, I'd remove the affected topic data and the related Zookeeper data.
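
To confirm the symptom, you can describe the topic and check whether a partition reports no leader. A minimal sketch, assuming a local Zookeeper on port 2181 and the topic name from the error above:

  bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-1
  # A broken partition typically shows no leader (Leader: -1 or none) and an empty Isr list.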

For a Test Environment:

  1. Stop the Kafka server and the Zookeeper server.
  2. Remove the data directories of both services; by default they are /tmp/kafka-logs and /tmp/zookeeper.
  3. Start the Zookeeper server and then the Kafka server again.
  4. Create a new topic.

Now you are able to work with the topic again; see the command sketch below.
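
A command sketch of the steps above, assuming a local single-node setup, the default /tmp data directories, and that everything is run from the Kafka installation directory (the Zookeeper scripts bundled with Kafka are used here):

  # 1. Stop both services (Kafka first, then Zookeeper).
  bin/kafka-server-stop.sh
  bin/zookeeper-server-stop.sh
  # 2. Remove the default data directories. This destroys all local topic data.
  rm -rf /tmp/kafka-logs /tmp/zookeeper
  # 3. Start Zookeeper first, then Kafka.
  bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
  bin/kafka-server-start.sh -daemon config/server.properties
  # 4. Create a new topic.
  bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test-1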

For a Production Environment:

Since each Kafka topic is stored in its own directory, remove only the directories of the affected topic. You should also remove the topic's node, /brokers/topics/{broken_topic}, from Zookeeper using a Zookeeper client.
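
A Zookeeper client session for that cleanup might look roughly like this (a sketch only; {broken_topic} is a placeholder, and whether the delete command is rmr or deleteall depends on the Zookeeper version):

  bin/zkCli.sh -server localhost:2181
  # inside the client shell:
  ls /brokers/topics
  rmr /brokers/topics/{broken_topic}
  # on newer Zookeeper releases use: deleteall /brokers/topics/{broken_topic}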

Please read the Kafka documentation carefully and make sure you understand the configuration structure before you remove anything. Kafka is rolling out a delete-topic feature (KAFKA-330), which will make this problem easier to solve.
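
Once that feature is available and enabled, the cleanup reduces to a single command, as one of the related articles below also shows. A sketch with a placeholder Zookeeper address:

  bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test-1
  # This has no effect unless delete.topic.enable=true is set in server.properties.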

Date: 2024-10-08 11:00:50

Related articles for error[No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException]

Error starting the context, marking it as stopped org.apache.kafka.common.KafkaException: Failed to construct kafka consumer

ERROR [main] [org.apache.spark.streaming.StreamingContext] - Error starting the context, marking it as stopped org.apache.kafka.common.KafkaException: Failed to construct kafka consumer at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafka

ERROR: The partition with /var/lib/mysql is too full! failed!

As soon as I got to the office today I found the server was down. I tried the usual quick fixes: restarting the web server did not help, and MySQL would not restart either. The log showed: ERROR: The partition with /var/lib/mysql is too full! failed! Searching online, I found quite a few people had hit the same situation, and someone posted: cd /var, rm -rf log, i.e. delete the log files, then restart MySQL with /etc/init.d/mysql start. My MySQL still would not start. Looking at more sear

Solution for ERROR: The partition with /var/lib/mysql is too full! on Linux

I ran into this problem on Ubuntu today; my first partition is probably too small. Solution: run cd /var, then rm -rf log to delete the log files, then /etc/init.d/mysql start; after that MySQL starts normally.
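
Before deleting anything under /var, it is worth confirming which filesystem is actually full and what is using the space. A minimal sketch with standard tools, using the default paths mentioned in the excerpts above:

  df -h /var/lib/mysql          # how full is the partition holding the MySQL data?
  du -sh /var/log/* | sort -h   # which logs are taking up the space?
  # only then remove or rotate the offending logs and restart MySQL:
  /etc/init.d/mysql start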

kafka.common.KafkaException: fetching topic metadata for topics

2016-10-26 11:05:29,716  WARN [flume_dx2.zdp.ol-1477451127056-e2015d55-leader-finder-thread] kafka.consumer.ConsumerFetcherManager$LeaderFinderThread (line 89) [flume_dx2.zdp.ol-1477451127056-e2015d55-leader-finder-thread], Failed to find leader for 

Error reported: Error while fetching metadata with correlation id 67 : {alarmHis=LEADER_NOT_AVAILABLE}

Background: Kafka was installed on a single machine; the topic was created successfully, but the error appeared when starting the producer, and the log message was printed continuously while the error persisted. Error output: [2019-05-21 09:43:52,790] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 59 : {alarmHis=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkCl

Kafka reports an error when re-creating a topic with the same name after deleting it (ERROR org.apache.kafka.common.errors.TopicExistsException)

$ kafka-topics.sh --delete --zookeeper datanode1:2181 --topic firstTopic first is marked for deletion. Note: This will have no impact if delete.topic.enable is not set to true. $ kafka-topics.sh --creat
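
The note in that output is the key point: the delete request has no effect unless the broker allows topic deletion. A sketch of enabling it (assuming the usual config/server.properties location; restart the broker afterwards):

  # in config/server.properties on every broker:
  delete.topic.enable=true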

Kafka: ZK+Kafka+Spark Streaming cluster setup (part 11): build a custom Avro-format file, send it to a Kafka topic, and read the data from Kafka with Spark Streaming

Custom Avro schema: { "type": "record", "name": "userlog", "fields": [ {"name": "ip","type": "string"}, {"name": "identity","type":"str

Compiler Error Message: CS0006: Metadata file "xxxx.dll" could not be found

After updating from SVN, this error appeared when cleaning the solution and rebuilding; before cleaning, the solution built normally, so I suspected a problem with the solution file. Comparing it with a known-good copy revealed: GlobalSection(SubversionScc) = preSolution Svn-Managed = True Manager = AnkhSVN - Subversion Support for Visual Studio EndGlobalSection. Deleting this section and rebuilding fixed the problem.

使用mysql-connector-java.jar连接MySql时出现:Error while retrieving metadata for procedure columns: java.sql.SQLException: Parameter/Column name pattern can not be NULL or empty.

The error is as above: the program calls a stored procedure, but the procedure's parameters are not recognized. The cause is that the connector version is too new; version 6.0.6 was in use, and switching to 5.1.38 fixes it. The POM configuration is as follows: <!-- mysql-connector-java --> <!-- http://mvnrepository.com/artifact/mysql/mysql-connector-java --> <dependency> <groupId>mysql</groupId> <