Using Kafka as the producer to feed data into HDFS (via a Flume agent)

Key point: consult the official user guide (the Kafka source and HDFS sink settings used below are documented in the Flume User Guide).

Configuration file:

agent.sources = r1
agent.sinks = k1
agent.channels = c1

## sources config
agent.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.r1.kafka.bootstrap.servers = 192.168.80.128:9092,192.168.80.129:9092,192.168.80.130:9092
agent.sources.r1.kafka.topics = 1711
agent.sources.r1.kafka.consumer.timeout.ms = 1000
agent.sources.r1.kafka.consumer.group.id = consumer-group111
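
# Quick sanity check (a sketch; it assumes Kafka's CLI tools are available on one of
# the brokers listed above, and note that older Kafka releases take --broker-list
# where newer ones take --bootstrap-server):
# bin/kafka-console-producer.sh --broker-list 192.168.80.128:9092,192.168.80.129:9092,192.168.80.130:9092 --topic 1711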

## channels config
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
agent.channels.c1.transactionCapacity = 100
agent.channels.c1.byteCapacityBufferPercentage = 60
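# Note: byteCapacity is measured in bytes, so 1280 caps the channel at barely 1 KB of
# event bodies; fine for a demo, but real deployments set this much higher.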
agent.channels.c1.byteCapacity = 1280
agent.channels.c1.keep-alive = 60

## sinks config
agent.sinks.k1.type = hdfs
agent.sinks.k1.hdfs.path = hdfs://bcqm1711/kafkadir
agent.sinks.k1.hdfs.filePrefix = Syslog
agent.sinks.k1.hdfs.round = true
agent.sinks.k1.hdfs.roundValue = 1
agent.sinks.k1.hdfs.roundUnit = minute
agent.sinks.k1.hdfs.fileType = DataStream
agent.sinks.k1.hdfs.writeFormat = Text
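# Roll policy: rollInterval=0 and rollCount=0 disable time- and count-based rolling,
# so files roll when they reach rollSize bytes (10 KB) or sit idle for idleTimeout seconds.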
agent.sinks.k1.hdfs.rollInterval = 0
agent.sinks.k1.hdfs.rollSize = 10240
agent.sinks.k1.hdfs.rollCount = 0
agent.sinks.k1.hdfs.idleTimeout = 60
agent.sinks.k1.hdfs.callTimeout = 60000

# Bind the source and sink to the channel
agent.sources.r1.channels = c1
agent.sinks.k1.channel = c1
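
A minimal way to launch the agent (a sketch: it assumes the properties above are saved as conf/kafka-hdfs.conf, a file name chosen here for illustration, and that flume-ng is on the PATH; --name must match the "agent" prefix used in the properties):

bin/flume-ng agent --conf conf --conf-file conf/kafka-hdfs.conf --name agent -Dflume.root.logger=INFO,console

Once events flow, the output files can be listed with hdfs dfs -ls /kafkadir (the directory from hdfs.path above).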

Original post: https://www.cnblogs.com/pingzizhuanshu/p/9102596.html
