kafka.common.KafkaException: fetching topic metadata for topics

2016-10-26 11:05:29,716  WARN [flume_dx2.zdp.ol-1477451127056-e2015d55-leader-finder-thread] kafka.consumer.ConsumerFetcherManager$LeaderFinderThread (line 89) [flume_dx2.zdp.ol-1477451127056-e2015d55-leader-finder-thread], Failed to find leader for Set([v4-avail-service-request,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(v4-avail-service-request)] from broker [ArrayBuffer(BrokerEndPoint(0,ip-192-168-110-4.cn-north-1.compute.internal,9092))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:76)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:75)
at kafka.producer.SyncProducer.send(SyncProducer.scala:120)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 3 more

Flume fails to subscribe to data from Kafka with "Failed to find leader for Set"

Flume is deployed locally, while the Kafka cluster is deployed in the AWS cloud.
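The root cause is that the broker advertises its AWS-internal DNS name (visible in the stack trace), which the local consumer cannot resolve, so the metadata connection is dropped. A minimal sketch of the check, assuming the hostname from the error message above (`can_resolve` is a hypothetical helper, not part of Flume or Kafka):

```python
import socket

def can_resolve(hostname):
    """Return True if the local resolver can map hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# The broker advertises its private DNS name; if it does not resolve
# locally, the metadata fetch fails as in the stack trace above.
broker = "ip-192-168-110-4.cn-north-1.compute.internal"
if not can_resolve(broker):
    print("cannot resolve %s -- add it to /etc/hosts" % broker)
```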

Solution

On the consumer-side server, configure the /etc/hosts file so that the hostname entry matches the Kafka broker's private DNS name (this name can be seen in the error message above):

123.123.123.123 ip-192-168-110-4.cn-north-1.compute.internal
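The fix works because /etc/hosts maps each hostname on a line to the IP in its first column, so the broker's private DNS name now resolves to its public address. A minimal sketch of that mapping logic (`parse_hosts` is a hypothetical helper for illustration; the real lookup is done by the system resolver):

```python
def parse_hosts(text):
    """Parse /etc/hosts-style text into a {hostname: ip} dict."""
    mapping = {}
    for line in text.splitlines():
        # Strip comments and blank lines, as the resolver does.
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

entry = "123.123.123.123 ip-192-168-110-4.cn-north-1.compute.internal\n"
hosts = parse_hosts(entry)
```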

Date: 2024-10-12 07:03:40
