Failed to send producer request with correlation id 2 to broker 0 with

I deployed Flume on Windows and Kafka on Linux. Sending events from Flume to Kafka kept failing with the error shown below.

After a long search online, I finally solved the problem.

In the Kafka configuration file (config/server.properties), find this item:

#advertised.host.name=<hostname routable by clients>

Uncomment it and set it to the IP address of the Linux host Kafka runs on. With the line commented out, the broker advertises its own local hostname (here dx.zdp.ol) to clients; the Windows machine running Flume cannot resolve or route to that name, so the connection the producer opens from the returned metadata is immediately closed, which surfaces as the ClosedChannelException in the log below.

advertised.host.name=192.168.10.10
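The edit can be sketched in shell. The real file lives at `$KAFKA_HOME/config/server.properties` (path depends on your install); a scratch copy stands in for it here so the transformation is explicit:

```shell
# Work on a scratch copy of server.properties; in a real install,
# edit $KAFKA_HOME/config/server.properties instead (assumed path).
cat > server.properties <<'EOF'
#advertised.host.name=<hostname routable by clients>
EOF

# Uncomment the line and point it at the broker host's routable IP.
sed -i 's|^#advertised\.host\.name=.*|advertised.host.name=192.168.10.10|' server.properties

cat server.properties
```

The same one-line change can of course be made in any text editor; `sed` is shown only to make the edit unambiguous.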

Then restart Kafka. For reference, this was the failing Flume log:

2016-04-16 16:43:34,069 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Connected to dx.zdp.ol:9092 for producing
2016-04-16 16:43:34,069 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Disconnecting from dx.zdp.ol:9092
2016-04-16 16:43:34,069 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:89)] Failed to send producer request with correlation id 2 to broker 0 with
 data for partitions [OtaAudit1,0]
java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:76)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:75)
        at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:106)
        at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:106)
        at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:106)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:105)
        at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:105)
        at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:105)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:104)
        at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:259)
        at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:110)
        at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:102)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:102)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:75)
        at kafka.producer.Producer.send(Producer.scala:77)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:135)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
2016-04-16 16:43:34,079 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Back off for 1000 ms before retrying send. Remaining retries = 3
2016-04-16 16:43:34,522 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 12
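For completeness, a sketch of the Flume side: a KafkaSink pointing at the broker. Property names follow the Flume 1.6-era KafkaSink; the agent, sink, and channel names (`a1`, `k1`, `c1`) are placeholders, the topic is taken from the log above, and the broker list should use the same routable address the broker now advertises:

```properties
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = OtaAudit1
a1.sinks.k1.brokerList = 192.168.10.10:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1
```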

Date: 2024-10-04 03:24:26
