Open Sourcing Kafka Monitor

https://engineering.linkedin.com/blog/2016/05/open-sourcing-kafka-monitor

https://github.com/linkedin/kafka-monitor
https://github.com/Microsoft/Availability-Monitor-for-Kafka

Design Overview

Kafka Monitor makes it easy to develop and execute long-running, Kafka-specific system tests in real clusters, and to monitor existing Kafka deployments against user-defined SLAs.

Developers can create new tests by composing reusable modules that emulate various scenarios (e.g. GC pauses, broker hard kills, rolling bounces, disk failures) and collect metrics. Users can run Kafka Monitor tests that execute these scenarios on a user-defined schedule, on a test or production cluster, and validate that Kafka still functions as expected in these scenarios. To achieve this, Kafka Monitor is modeled as a manager for a collection of tests and services.

A given Kafka Monitor instance runs in a single Java process and can spawn multiple tests and services in that process. A diagram in the original post illustrates the relationship between services, tests, and a Kafka Monitor instance, as well as how Kafka Monitor interacts with a Kafka cluster and the user.
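The service abstraction is what makes this composition work. A minimal Java sketch, assuming a lifecycle interface along the lines of the one in the repo (the method names here are approximations, not copied from the source):

    // Approximate shape of Kafka Monitor's service abstraction: every module
    // (produce, consume, broker bounce, ...) is a long-running component with
    // a uniform lifecycle, so a test can compose any set of them.
    public interface Service {
        void start();         // begin the service's workload
        void stop();          // signal the service to wind down
        boolean isRunning();  // liveness check used by the managing process
        void awaitShutdown(); // block until the service has fully stopped
    }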

What makes this platform interesting is that it goes beyond simple monitoring: it also ships a complete test framework. Arbitrary tests can be defined, and each test is composed from various services, i.e. reusable components:

  • Produce service, which produces messages to Kafka and measures metrics such as produce rate and availability.
  • Consume service, which consumes messages from Kafka and measures metrics including message loss rate, message duplicate rate, and end-to-end latency. This service depends on the produce service to provide messages that embed a sequence number and timestamp; a sketch of how those fields drive the metrics follows this list.
  • Broker bounce service, which bounces a given broker at some pre-defined schedule.
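To make the consume-side metrics concrete, here is an illustrative Java sketch (not the project's actual code) of how the embedded sequence number and timestamp yield loss, duplicate, and latency measurements:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: tracks per-partition sequence numbers the way a
    // consume service can, to derive loss/duplicate counts and latency.
    public class EndToEndMetricsTracker {
        private final Map<Integer, Long> lastSeqByPartition = new HashMap<>();
        private long received, lost, duplicated, totalLatencyMs;

        public void onRecord(int partition, long seq, long produceTimeMs) {
            received++;
            totalLatencyMs += System.currentTimeMillis() - produceTimeMs; // end-to-end latency
            Long prev = lastSeqByPartition.get(partition);
            if (prev != null) {
                if (seq <= prev) { duplicated++; return; } // re-delivered message
                lost += seq - prev - 1;                    // gap => messages never arrived
            }
            lastSeqByPartition.put(partition, seq);
        }

        public double lossRate()      { return received == 0 ? 0 : (double) lost / (lost + received); }
        public double duplicateRate() { return received == 0 ? 0 : (double) duplicated / received; }
        public double avgLatencyMs()  { return received == 0 ? 0 : (double) totalLatencyMs / received; }
    }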

Composing the three services above yields a test that exercises broker bounces, as sketched below.
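A minimal Java sketch of that composition, using the Service interface sketched earlier (the wiring here is hypothetical; in the real project a test is declared in a config file that names its service classes and properties):

    // Hypothetical wiring for a broker-bounce test.
    public class BrokerBounceTest {
        private final Service produce, consume, brokerBounce;

        public BrokerBounceTest(Service produce, Service consume, Service brokerBounce) {
            this.produce = produce;
            this.consume = consume;
            this.brokerBounce = brokerBounce;
        }

        public void start() {
            // Traffic must be flowing before brokers are bounced, so the
            // consume service can observe loss, duplicates, and latency
            // spikes during the failover window.
            produce.start();
            consume.start();
            brokerBounce.start();
        }

        public void stop() {
            brokerBounce.stop();
            consume.stop();
            produce.stop();
        }
    }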

 

Similarly, with two Kafka Monitor instances, replication between multiple datacenters can be tested, as sketched below.
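A hedged sketch of that setup as two monitor configs (the class names, property keys, and addresses below are all illustrative assumptions, not the project's actual config format):

    # Monitor instance in DC1: produce-only, writes to the source cluster.
    produce.class.name = com.linkedin.kmf.services.ProduceService   # name approximate
    produce.bootstrap.servers = kafka-dc1.example.com:9092          # hypothetical address
    produce.topic = kafka-monitor-topic

    # Monitor instance in DC2: consume-only, reads the mirrored copy of the
    # topic, so its latency and loss metrics cover the mirroring pipeline too.
    consume.class.name = com.linkedin.kmf.services.ConsumeService   # name approximate
    consume.bootstrap.servers = kafka-dc2.example.com:9092          # hypothetical address
    consume.topic = kafka-monitor-topic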

 

Kafka Monitor Usage at LinkedIn

Monitoring Kafka Cluster Deployments

In early 2016 we deployed Kafka Monitor to monitor the availability and end-to-end latency of every Kafka cluster at LinkedIn. The project wiki goes into the details of how these metrics are measured. These basic but critical metrics have been extremely useful for actively monitoring the SLAs provided by our Kafka cluster deployments.

 

Validate Client Libraries Using End-to-End Workflows

As an earlier blog post explains, we have a client library that wraps the vanilla Apache Kafka producer and consumer to provide features that are not available in Apache Kafka itself, such as Avro encoding, auditing, and support for large messages. We also have a REST client that allows non-Java applications to produce to and consume from Kafka. It is important to validate the functionality of these client libraries against each new Kafka release. Kafka Monitor allows users to plug custom client libraries into its end-to-end workflow. We have deployed Kafka Monitor instances that exercise our wrapper client and REST client in tests, to validate that their performance and functionality meet our requirements for every new release of these client libraries and of Apache Kafka.
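As a sketch of the plug-in idea (the MonitorProducer interface name below is hypothetical; the repo defines its own pluggable producer/consumer SPI that is instantiated from config), a custom client only needs to adapt its send path to whatever the monitor calls:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Hypothetical SPI: the real interface lives in the kafka-monitor repo.
    interface MonitorProducer {
        void send(String topic, String key, String value) throws Exception;
        void close();
    }

    // Adapter over the vanilla Apache Kafka producer; a wrapper client or
    // REST client plugs in the same way, adding Avro encoding, auditing, etc.
    class VanillaMonitorProducer implements MonitorProducer {
        private final KafkaProducer<String, String> producer;

        VanillaMonitorProducer(String bootstrapServers) {
            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrapServers);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            this.producer = new KafkaProducer<>(props);
        }

        @Override
        public void send(String topic, String key, String value) throws Exception {
            producer.send(new ProducerRecord<>(topic, key, value)).get(); // block for ack
        }

        @Override
        public void close() { producer.close(); }
    }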

 

Certify New Internal Releases of Apache Kafka

We generally run off Apache Kafka trunk and cut a new internal release every quarter or so to pick up new features. A significant benefit of running off trunk is that deploying Kafka to LinkedIn's production clusters often surfaces problems in Apache Kafka trunk, which can then be fixed before official Apache Kafka releases.

Given the risk of running off Apache Kafka trunk, we take extra care to certify every internal release in a test cluster that accepts traffic mirrored from production clusters, for a few weeks before deploying the new release to production. For example, we perform rolling bounces or hard-kill brokers while checking JMX metrics to verify that there is exactly one controller and no offline partitions, in order to validate Kafka's availability under failover scenarios. In the past these steps were manual, which was very time-consuming and didn't scale with the number of events and types of scenarios we want to test. We are switching to Kafka Monitor to automate this process and cover more failover scenarios on a continual basis.
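As a sketch of the kind of check being automated (the JMX endpoint address is a hypothetical example; the MBean names are Kafka's standard controller metrics), one can poll each broker and assert that the cluster has exactly one active controller and no offline partitions:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class FailoverHealthCheck {
        // Reads the two controller gauges from one broker's JMX endpoint.
        // ActiveControllerCount is 1 only on the broker currently holding
        // the controller role, so the values must sum to exactly 1 across
        // the cluster; OfflinePartitionsCount must be 0 everywhere.
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "broker1.example.com"; // hypothetical host
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                Number controllers = (Number) mbs.getAttribute(
                    new ObjectName("kafka.controller:type=KafkaController,name=ActiveControllerCount"),
                    "Value");
                Number offline = (Number) mbs.getAttribute(
                    new ObjectName("kafka.controller:type=KafkaController,name=OfflinePartitionsCount"),
                    "Value");
                System.out.println("ActiveControllerCount=" + controllers
                    + " OfflinePartitionsCount=" + offline);
            }
        }
    }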
