Kafka Basics in Practice, Part 2: Consumer and Producer Examples

I. The Kafka Consumer Programming Model

1. The Partition Consumption Model

Pseudocode for partition-based consumption:

main()
   get the partition count: size
   for index = 0 to size
     create thread (or process) consumer(index)

The index-th thread (or process):
consumer(index)
    open a connection to a Kafka broker: KafkaClient(host, port)
    build a consumer with the desired parameters: SimpleConsumer(topic, partitions)
    set the consumption offset: consumer.seek(offset, 0)
    while True
      fetch data from partition index of the topic
      process it
      record the offset of the current message
      commit the current offset (optional)
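The loop above can be sketched without a live broker. In this sketch the Kafka client is replaced by a hypothetical in-memory partition log and offset store, purely to show how manual offset tracking and committing fit together:

```python
# A minimal sketch of the partition-consumption loop. `partition_log`
# and `committed` are hypothetical stand-ins for a real Kafka partition
# and a durable offset store; only the offset logic is illustrated.

partition_log = ["msg-0", "msg-1", "msg-2", "msg-3"]  # one partition's data
committed = {"offset": 0}                              # durable offset store

def consume_partition(max_messages):
    processed = []
    offset = committed["offset"]          # consumer.seek(offset, 0)
    while offset < len(partition_log) and len(processed) < max_messages:
        message = partition_log[offset]   # fetch from the partition
        processed.append(message)         # process it
        offset += 1                       # record the current offset
        committed["offset"] = offset      # commit the current offset (optional)
    return processed

print(consume_partition(2))   # -> ['msg-0', 'msg-1']
print(consume_partition(10))  # resumes from the committed offset
```

Because the offset is committed after each message, a restarted consumer picks up exactly where the previous run stopped.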

2. The Group Consumption Model

Pseudocode for group-based consumption:

main()
   set the number of streams to create: N
   for index = 0 to N
     create thread consumer(index)

The index-th thread:
consumer(index)
    open a connection to a Kafka broker: KafkaClient(host, port)
    build a consumer with the desired parameters: SimpleConsumer(topic, partitions)
    choose whether to start from the beginning or from the latest offset (smallest or largest)
    while True
        fetch data from stream index of the topic
        process it
        (offsets are committed to ZooKeeper automatically; no action needed)

The consumer partition-assignment algorithm
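The original diagram for the assignment algorithm is not reproduced here. As a sketch (assuming the range assignment that Kafka 0.8's high-level consumer uses by default): partitions and consumer streams are sorted, each stream gets an equal share, and the first streams take the remainder:

```python
# Sketch of range partition assignment: each consumer stream gets
# n_partitions // n_consumers partitions, and the first
# (n_partitions % n_consumers) streams get one extra.

def range_assign(partitions, consumers):
    partitions = sorted(partitions)
    consumers = sorted(consumers)
    per, extra = divmod(len(partitions), len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment

print(range_assign(range(5), ["c0", "c1"]))
# -> {'c0': [0, 1, 2], 'c1': [3, 4]}
```

Note that with more streams than partitions, the surplus streams are assigned nothing, which is why creating more streams than partitions is wasted.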

3. Comparing the Two Consumption Models

The partition consumption model is more flexible, but:
(1) you must handle every error case yourself;
(2) you must manage offsets yourself (which is what makes the other delivery semantics possible).
The group consumption model is simpler, but less flexible:
(1) you do not handle errors or manage offsets yourself;
(2) it only provides Kafka's default at-least-once delivery semantics.

The three message-delivery semantics:

(1) at most once;

(2) at least once;

(3) exactly once.
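Which of the first two semantics you get falls out of when the offset is committed relative to processing. A minimal simulation (a hypothetical in-memory log, with a crash and restart simulated in plain Python) makes the difference visible:

```python
# Sketch: delivery semantics depend on when the offset is committed.
# We simulate a consumer that crashes while handling message index 1
# and is then restarted from the last committed offset.

def consume(log, commit_before_processing):
    committed, delivered, crashed = 0, [], False
    for _ in range(2):                        # first run, then a restart
        offset = committed
        while offset < len(log):
            if commit_before_processing:      # at-most-once: commit first
                committed = offset + 1
                if not crashed and offset == 1:
                    crashed = True            # crash after commit, before processing
                    break
            delivered.append(log[offset])     # process the message
            if not commit_before_processing:  # at-least-once: commit last
                if not crashed and offset == 1:
                    crashed = True            # crash after processing, before commit
                    break
                committed = offset + 1
            offset += 1
    return delivered

log = ["m0", "m1", "m2"]
print(consume(log, True))   # at-most-once:  ['m0', 'm2'] -- m1 is lost
print(consume(log, False))  # at-least-once: ['m0', 'm1', 'm1', 'm2'] -- m1 is duplicated
```

Exactly-once requires making processing and the offset commit atomic (e.g. storing both in one transaction), which neither model above provides by itself.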

II. Python and Java Consumer Clients

1. Python Client Example

•Required environment:

a running Kafka cluster, one Linux server, Python 2.7.6, and the kafka-python package

•The partition consumption model in Python:

main.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka partition-consumer demo.

"""

import logging, time
import partition_consumer

def main():

    threads = []
    partition = 3  # number of partitions of the topic
    for index in range(partition):
        threads.append(partition_consumer.Consumer(index))

    for t in threads:
        t.start()

    time.sleep(50000)

if __name__ == '__main__':
    #logging.basicConfig(
    #    format='%(asctime)s.%(msecs)s:%(name)s:%(thread)d:%(levelname)s:%(process)d:%(message)s',
    #    level=logging.INFO
    #    )
    main()

partition_consumer.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka partition-consumer demo.

"""

import threading
from kafka.client import KafkaClient
from kafka.consumer import SimpleConsumer

class Consumer(threading.Thread):
    daemon = True

    def __init__(self, partition_index):
        threading.Thread.__init__(self)
        self.part = [partition_index]
        self.__offset = 0

    def run(self):
        client = KafkaClient("10.206.216.13:19092,10.206.212.14:19092,10.206.209.25:19092")
        consumer = SimpleConsumer(client, "test-group", "jiketest",
                                  auto_commit=False, partitions=self.part)

        consumer.seek(0, 0)  # consume from the beginning of the partition

        while True:
            message = consumer.get_message(True, 60)
            if message is None:  # timed out without a message
                continue
            self.__offset = message.offset  # record the current offset
            print message.message.value

•The group consumption model in Python:

main.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka group-consumer demo.

"""

import logging, time
import group_consumer

def main():

    consumer_thread = group_consumer.Consumer()
    consumer_thread.start()

    time.sleep(500000)

if __name__ == '__main__':
    #logging.basicConfig(
    #    format='%(asctime)s.%(msecs)s:%(name)s:%(thread)d:%(levelname)s:%(process)d:%(message)s',
    #    level=logging.INFO
    #    )
    main()

group_consumer.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka group-consumer demo.

"""

import threading
from kafka.client import KafkaClient
from kafka.consumer import SimpleConsumer

class Consumer(threading.Thread):
    daemon = True

    def run(self):
        client = KafkaClient("10.206.216.13:19092,10.206.212.14:19092,10.206.209.25:19092")
        consumer = SimpleConsumer(client, "test-group", "jiketest")

        for message in consumer:
            print(message.message.value)

2. Tuning the Python Client

•fetch_size_bytes: maximum number of bytes fetched from the server per request;

•buffer_size: size of the client-side receive buffer;

•group: the consumer-group name when using group consumption;

•auto_commit: whether offsets are committed automatically;

3. Java Client Example

•Required environment:

a running Kafka cluster, one Linux server, Apache Maven 3.2.3, and Kafka 0.8.1

•The partition consumption model in Java:

package kafka.consumer.partition;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.ErrorMapping;
import kafka.common.TopicAndPartition;
import kafka.javaapi.*;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionConsumerTest {
    public static void main(String args[]) {
        PartitionConsumerTest example = new PartitionConsumerTest();
        long maxReads = Long.MAX_VALUE;
        String topic = "jiketest";
        if(args.length < 1){
            System.out.println("Please assign partition number.");
            return;
        }

        List<String> seeds = new ArrayList<String>();
        String hosts="10.206.216.13,10.206.212.14,10.206.209.25";
        String[] hostArr = hosts.split(",");
        for(int index = 0;index < hostArr.length;index++){
            seeds.add(hostArr[index].trim());
        }

        int port = 19092;

        int partLen = Integer.parseInt(args[0]);
        for(int index=0;index < partLen;index++){
            try {
                example.run(maxReads, topic, index/*partition*/, seeds, port);
            } catch (Exception e) {
                System.out.println("Oops:" + e);
                 e.printStackTrace();
            }
        }
    }

    private List<String> m_replicaBrokers = new ArrayList<String>();

        public PartitionConsumerTest() {
            m_replicaBrokers = new ArrayList<String>();
        }

        public void run(long a_maxReads, String a_topic, int a_partition, List<String> a_seedBrokers, int a_port) throws Exception {
            // find the meta data about the topic and partition we are interested in
            //
            PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
            if (metadata == null) {
                System.out.println("Can't find metadata for Topic and Partition. Exiting");
                return;
            }
            if (metadata.leader() == null) {
                System.out.println("Can't find Leader for Topic and Partition. Exiting");
                return;
            }
            String leadBroker = metadata.leader().host();
            String clientName = "Client_" + a_topic + "_" + a_partition;

            SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
            long readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.EarliestTime(), clientName);

            int numErrors = 0;
            while (a_maxReads > 0) {
                if (consumer == null) {
                    consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
                }
                FetchRequest req = new FetchRequestBuilder()
                        .clientId(clientName)
                        .addFetch(a_topic, a_partition, readOffset, 100000) // Note: this fetchSize of 100000 might need to be increased if large batches are written to Kafka
                        .build();
                FetchResponse fetchResponse = consumer.fetch(req);

                if (fetchResponse.hasError()) {
                    numErrors++;
                    // Something went wrong!
                    short code = fetchResponse.errorCode(a_topic, a_partition);
                    System.out.println("Error fetching data from the Broker:" + leadBroker + " Reason: " + code);
                    if (numErrors > 5) break;
                    if (code == ErrorMapping.OffsetOutOfRangeCode())  {
                        // We asked for an invalid offset. For simple case ask for the last element to reset
                        readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
                        continue;
                    }
                    consumer.close();
                    consumer = null;
                    leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
                    continue;
                }
                numErrors = 0;

                long numRead = 0;
                for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
                    long currentOffset = messageAndOffset.offset();
                    if (currentOffset < readOffset) {
                        System.out.println("Found an old offset: " + currentOffset + " Expecting: " + readOffset);
                        continue;
                    }
                    readOffset = messageAndOffset.nextOffset();
                    ByteBuffer payload = messageAndOffset.message().payload();

                    byte[] bytes = new byte[payload.limit()];
                    payload.get(bytes);
                    System.out.println(String.valueOf(messageAndOffset.offset()) + ": " + new String(bytes, "UTF-8"));
                    numRead++;
                    a_maxReads--;
                }

                if (numRead == 0) {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ie) {
                    }
                }
            }
            if (consumer != null) consumer.close();
        }

        public static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                         long whichTime, String clientName) {
            TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
            Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
            requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
            kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                    requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
            OffsetResponse response = consumer.getOffsetsBefore(request);

            if (response.hasError()) {
                System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition) );
                return 0;
            }
            long[] offsets = response.offsets(topic, partition);
            return offsets[0];
        }

        private String findNewLeader(String a_oldLeader, String a_topic, int a_partition, int a_port) throws Exception {
            for (int i = 0; i < 3; i++) {
                boolean goToSleep = false;
                PartitionMetadata metadata = findLeader(m_replicaBrokers, a_port, a_topic, a_partition);
                if (metadata == null) {
                    goToSleep = true;
                } else if (metadata.leader() == null) {
                    goToSleep = true;
                } else if (a_oldLeader.equalsIgnoreCase(metadata.leader().host()) && i == 0) {
                    // first time through if the leader hasn't changed give ZooKeeper a second to recover
                    // second time, assume the broker did recover before failover, or it was a non-Broker issue
                    //
                    goToSleep = true;
                } else {
                    return metadata.leader().host();
                }
                if (goToSleep) {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ie) {
                    }
                }
            }
            System.out.println("Unable to find new leader after Broker failure. Exiting");
            throw new Exception("Unable to find new leader after Broker failure. Exiting");
        }

        private PartitionMetadata findLeader(List<String> a_seedBrokers, int a_port, String a_topic, int a_partition) {
            PartitionMetadata returnMetaData = null;
            loop:
            for (String seed : a_seedBrokers) {
                SimpleConsumer consumer = null;
                try {
                    consumer = new SimpleConsumer(seed, a_port, 100000, 64 * 1024, "leaderLookup");
                    List<String> topics = Collections.singletonList(a_topic);
                    TopicMetadataRequest req = new TopicMetadataRequest(topics);
                    kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);

                    List<TopicMetadata> metaData = resp.topicsMetadata();
                    for (TopicMetadata item : metaData) {
                        for (PartitionMetadata part : item.partitionsMetadata()) {
                            if (part.partitionId() == a_partition) {
                                returnMetaData = part;
                                break loop;
                            }
                        }
                    }
                } catch (Exception e) {
                    System.out.println("Error communicating with Broker [" + seed + "] to find Leader for [" + a_topic
                            + ", " + a_partition + "] Reason: " + e);
                } finally {
                    if (consumer != null) consumer.close();
                }
            }
            if (returnMetaData != null) {
                m_replicaBrokers.clear();
                for (kafka.cluster.Broker replica : returnMetaData.replicas()) {
                    m_replicaBrokers.add(replica.host());
                }
            }
            return returnMetaData;
        }
}

•The group consumption model in Java:

ConsumerTest.java

package kafka.consumer.group;

import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;

public class ConsumerTest implements Runnable {
    private KafkaStream m_stream;
    private int m_threadNumber;

    public ConsumerTest(KafkaStream a_stream, int a_threadNumber) {
        m_threadNumber = a_threadNumber;
        m_stream = a_stream;
    }

    public void run() {
        ConsumerIterator<byte[], byte[]> it = m_stream.iterator();
        while (it.hasNext()){
            System.out.println("Thread " + m_threadNumber + ": " + new String(it.next().message()));

        }
        System.out.println("Shutting down Thread: " + m_threadNumber);
    }
}
GroupConsumerTest.java

package kafka.consumer.group;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GroupConsumerTest extends Thread {
    private final ConsumerConnector consumer;
    private final String topic;
    private  ExecutorService executor;

    public GroupConsumerTest(String a_zookeeper, String a_groupId, String a_topic){
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
                createConsumerConfig(a_zookeeper, a_groupId));
        this.topic = a_topic;
    }

    public void shutdown() {
        if (consumer != null) consumer.shutdown();
        if (executor != null) executor.shutdown();
        try {
            if (executor != null && !executor.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS)) {
                System.out.println("Timed out waiting for consumer threads to shut down, exiting uncleanly");
            }
        } catch (InterruptedException e) {
            System.out.println("Interrupted during shutdown, exiting uncleanly");
        }
    }

    public void run(int a_numThreads) {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(a_numThreads));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        // now launch all the threads
        //
        executor = Executors.newFixedThreadPool(a_numThreads);

        // now create an object to consume the messages
        //
        int threadNumber = 0;
        for (final KafkaStream stream : streams) {
            executor.submit(new ConsumerTest(stream, threadNumber));
            threadNumber++;
        }
    }
    private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", a_zookeeper);
        props.put("group.id", a_groupId);
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "2000");
        props.put("auto.commit.interval.ms", "1000");

        return new ConsumerConfig(props);
    }

    public static void main(String[] args) {
        if(args.length < 1){
            System.out.println("Please assign the number of consumer threads.");
            return;
        }

        String zooKeeper = "10.206.216.13:12181,10.206.212.14:12181,10.206.209.25:12181";
        String groupId = "jikegrouptest";
        String topic = "jiketest";
        int threads = Integer.parseInt(args[0]);

        GroupConsumerTest example = new GroupConsumerTest(zooKeeper, groupId, topic);
        example.run(threads);

        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException ie) {

        }
        example.shutdown();
    }
}

4. Tuning the Java Client

•fetchSize: maximum number of bytes fetched from the server per request;

•bufferSize: size of the client-side receive buffer;

•group.id: the consumer-group name when using group consumption;

III. The Kafka Producer Programming Model

1. The Synchronous Production Model

2. The Asynchronous Production Model

3. Pseudocode for Both Production Models

main()
  open a connection to a Kafka broker: KafkaClient(host, port)
  choose (or define) a producer load-balancing algorithm: partitioner
  set the producer parameters
  build a Producer from the partitioner and those parameters

  while True
    getMessage: take one message from upstream
    build a Kafka message in the format Kafka expects
    compute the target partition with the partitioning algorithm
    send the message
    handle any errors
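The loop above can be sketched end to end with an in-memory stand-in for the cluster; the `partitioner` function and the per-partition lists below are hypothetical, only to show how the load-balancing step routes each message:

```python
# Sketch of the produce loop: a hash partitioner (the "load-balancing
# algorithm" in the pseudocode) routes each keyed message to one of the
# topic's partitions. In-memory lists stand in for Kafka itself.

NUM_PARTITIONS = 3
partitions = [[] for _ in range(NUM_PARTITIONS)]

def partitioner(key):
    # hypothetical load-balancing algorithm: hash the key onto a partition
    return sum(ord(ch) for ch in key) % NUM_PARTITIONS

def send(key, value):
    part = partitioner(key)                 # compute the target partition
    partitions[part].append((key, value))   # "send" the message
    return part

for i in range(6):
    send("key-%d" % i, "message-%d" % i)    # take one message from upstream

print([len(p) for p in partitions])  # every message landed on some partition
```

Because the partition is a pure function of the key, messages with the same key always land on the same partition, which is what preserves per-key ordering.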

4. Comparing the Two Production Models

The synchronous model:

(1) low message-loss rate;

(2) high duplication rate (a network fault can lose the acknowledgement, triggering a resend);

(3) high latency.

The asynchronous model:

(1) low latency;

(2) high send throughput;

(3) high message-loss rate (no acknowledgement mechanism; messages are dropped when the send queue fills up).
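The async loss mode can be sketched with a bounded send queue (a hypothetical stand-in for the client's internal buffer): once it fills up, new messages vanish without the caller ever learning about it:

```python
# Sketch of why the async model can lose messages: sends go into a
# bounded queue and are dropped when it is full. There is no
# acknowledgement, so the caller never observes the loss.

QUEUE_LIMIT = 3
queue, dropped, delivered = [], [], []

def async_send(msg):
    if len(queue) >= QUEUE_LIMIT:
        dropped.append(msg)       # silently lost
    else:
        queue.append(msg)         # returns immediately: low latency

def flush():                      # the background sender draining the queue
    while queue:
        delivered.append(queue.pop(0))

for i in range(5):
    async_send("m%d" % i)         # m3 and m4 arrive while the queue is full
flush()

print(delivered)  # -> ['m0', 'm1', 'm2']
print(dropped)    # -> ['m3', 'm4']
```

In the real client the drain runs concurrently, so losses only occur when production outpaces the background sender for long enough to fill the buffer.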

IV. Python and Java Producer Clients

1. Python Client Example

•Required environment:

a running Kafka cluster, one Linux server, Python 2.7.6, and the kafka-python package

main.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka sync and async producer demo.

"""

import logging, time

from async.ASyncProducer import ASyncProducer
from sync.SyncProducer import SyncProducer

def main():
    threads = [
        ASyncProducer(),
        SyncProducer()
    ]

    for t in threads:
        t.start()

    time.sleep(5000)

if __name__ == "__main__":
    #logging.basicConfig(
    #    format='%(asctime)s.%(msecs)s:%(name)s:%(thread)d:%(levelname)s:%(process)d:%(message)s',
    #    level=logging.INFO
    #    )
    main()

•The synchronous production model in Python

SyncProducer.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka sync producer demo.

"""

import threading, time

from kafka.client import KafkaClient
from kafka.producer import SimpleProducer
from kafka.partitioner import HashedPartitioner

class SyncProducer(threading.Thread):
    daemon = True

    def run(self):
        client = KafkaClient("10.206.216.13:19092,10.206.212.14:19092,10.206.209.25:19092")
        producer = SimpleProducer(client)
        #producer = KeyedProducer(client, partitioner=HashedPartitioner)

        while True:
            producer.send_messages('jiketest', "test")
            producer.send_messages('jiketest', "test")

            time.sleep(1)

•The asynchronous production model in Python

ASyncProducer.py

# -*- coding: utf-8 -*-
"""
This module provides a Kafka async producer demo.

"""

import threading, time

from kafka.client import KafkaClient
from kafka.producer import SimpleProducer

class ASyncProducer(threading.Thread):
    daemon = True

    def run(self):
        client = KafkaClient("10.206.216.13:19092,10.206.212.14:19092,10.206.209.25:19092")
        producer = SimpleProducer(client, async=True)

        while True:
            producer.send_messages('jiketest', "test")
            producer.send_messages('jiketest', "test")

            time.sleep(1)

2. Tuning the Python Client

•req_acks: how many broker acknowledgements a send must receive to be considered successful;

•ack_timeout: how long to wait for acknowledgements before treating a send as failed;

•async: whether to send asynchronously;

•batch_send_every_n: when sending asynchronously, flush after this many messages have accumulated;

•batch_send_every_t: when sending asynchronously, flush after this much time has elapsed;

3. Java Client Example

•Required environment:

a running Kafka cluster, one Linux server, Apache Maven 3.2.3, and Kafka 0.8.1

SimplePartitioner.java

package kafka.producer.partiton;

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class SimplePartitioner implements Partitioner {
    public SimplePartitioner (VerifiableProperties props) {

    }

    public int partition(Object key, int a_numPartitions) {
        int partition = 0;
        String stringKey = (String) key;
        int offset = stringKey.lastIndexOf('.');
        if (offset > 0) {
            partition = Integer.parseInt(stringKey.substring(offset + 1)) % a_numPartitions;
        }
        return partition;
    }

}
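To make the partitioner's rule easy to check, here is the same logic re-expressed in Python (a sketch, not part of the Java project): take the digits after the last '.' of the key, modulo the partition count:

```python
# Python re-implementation of SimplePartitioner.partition above:
# parse the digits after the last '.' in the key and take them
# modulo the number of partitions; keys without a '.' go to partition 0.

def simple_partition(key, num_partitions):
    offset = key.rfind('.')        # like String.lastIndexOf('.')
    if offset > 0:
        return int(key[offset + 1:]) % num_partitions
    return 0

print(simple_partition("192.168.2.5", 3))   # 5 % 3  -> 2
print(simple_partition("192.168.2.25", 3))  # 25 % 3 -> 1
print(simple_partition("nodots", 3))        # no '.' -> partition 0
```

With the demo producer's keys of the form "192.168.2.N", the last octet alone decides the partition, so the key distribution directly determines how balanced the partitions are.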

•The synchronous model in Java

SyncProduce.java

package kafka.producer.sync;
import java.util.*;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SyncProduce {
    public static void main(String[] args) {
        long events = Long.MAX_VALUE;
        Random rnd = new Random();

        Properties props = new Properties();
        props.put("metadata.broker.list", "10.206.216.13:19092,10.206.212.14:19092,10.206.209.25:19092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        //kafka.serializer.DefaultEncoder
        props.put("partitioner.class", "kafka.producer.partiton.SimplePartitioner");
        //kafka.producer.DefaultPartitioner: based on the hash of the key
        props.put("request.required.acks", "1");
        //0: do not wait for any acknowledgement; 1: the leader has received the message and acknowledged; -1: all replicas of the leader have received the message and acknowledged

        ProducerConfig config = new ProducerConfig(props);

        Producer<String, String> producer = new Producer<String, String>(config);

        for (long nEvents = 0; nEvents < events; nEvents++) {
               long runtime = new Date().getTime();
               String ip = "192.168.2." + rnd.nextInt(255);
               String msg = runtime + ",www.example.com," + ip;
               //the key must be set (even if your partitioner ignores it, it must not be null or ""); otherwise your custom partitioner is never invoked
               KeyedMessage<String, String> data = new KeyedMessage<String, String>("jiketest", ip, msg);
                                                               //             eventTopic, eventKey, eventBody
               producer.send(data);
               try {
                   Thread.sleep(1000);
               } catch (InterruptedException ie) {
               }
        }
        producer.close();
    }
}

•The asynchronous model in Java

ASyncProduce.java

package kafka.producer.async;

import java.util.*;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ASyncProduce {
    public static void main(String[] args) {
        long events = Long.MAX_VALUE;
        Random rnd = new Random();

        Properties props = new Properties();
        props.put("metadata.broker.list", "10.206.216.13:19092,10.206.212.14:19092,10.206.209.25:19092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        //kafka.serializer.DefaultEncoder
        props.put("partitioner.class", "kafka.producer.partiton.SimplePartitioner");
        //kafka.producer.DefaultPartitioner: based on the hash of the key
        //props.put("request.required.acks", "1");
        props.put("producer.type", "async");
        //producer.type accepts "sync" (the default) or "async"

        ProducerConfig config = new ProducerConfig(props);

        Producer<String, String> producer = new Producer<String, String>(config);

        for (long nEvents = 0; nEvents < events; nEvents++) {
               long runtime = new Date().getTime();
               String ip = "192.168.2." + rnd.nextInt(255);
               String msg = runtime + ",www.example.com," + ip;
               KeyedMessage<String, String> data = new KeyedMessage<String, String>("jiketest", ip, msg);
               producer.send(data);
               try {
                   Thread.sleep(1000);
               } catch (InterruptedException ie) {
               }
        }
        producer.close();
    }
}

4. Tuning the Java Client

•message.send.max.retries: how many times to retry a failed send;

•retry.backoff.ms: how long to back off before retrying a failed send;

•producer.type: synchronous or asynchronous sending;

•batch.num.messages: when sending asynchronously, flush after this many messages have accumulated;

•queue.buffering.max.ms: when sending asynchronously, flush after this much time has elapsed;
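batch.num.messages and queue.buffering.max.ms act as two flush triggers: whichever fires first sends the batch. A sketch of that rule (hypothetical timestamps and names, not the real client internals):

```python
# Sketch of async batching: a batch is flushed when it reaches
# BATCH_NUM_MESSAGES, or when QUEUE_BUFFERING_MAX_MS has elapsed since
# the batch was opened, whichever comes first.

BATCH_NUM_MESSAGES = 3
QUEUE_BUFFERING_MAX_MS = 1000

def batch_events(events):
    """events: list of (timestamp_ms, message); returns the flushed batches."""
    batches, current, batch_start = [], [], None
    for ts, msg in events:
        if current and ts - batch_start >= QUEUE_BUFFERING_MAX_MS:
            batches.append(current)      # time trigger fired
            current = []
        if not current:
            batch_start = ts             # open a new batch
        current.append(msg)
        if len(current) >= BATCH_NUM_MESSAGES:
            batches.append(current)      # size trigger fired
            current = []
    if current:
        batches.append(current)          # final flush on close
    return batches

events = [(0, "a"), (10, "b"), (20, "c"), (30, "d"), (1500, "e")]
print(batch_events(events))
# -> [['a', 'b', 'c'], ['d'], ['e']]
```

Raising either limit improves throughput at the cost of latency (and of more messages at risk in the buffer if the process dies).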

Original article (in Chinese): https://www.cnblogs.com/pony1223/p/9757878.html
